1 Introduction

A conic optimization problem is characterized by a constraint stipulating that the optimization variables must belong to a closed convex cone. Linear conic optimization deals with the optimization of a linear function over a feasible set given by the intersection of an affine space and a convex cone.

Conic problems form a broad and important class of optimization problems since, according to [21], any convex optimization problem can be represented as a conic one. This class includes some of the most well-known types of convex problems, such as linear and semidefinite programming problems ((LP) and (SDP), respectively). Many Semi-infinite Programming (SIP) problems, which optimize a cost function subject to an infinite number of functional constraints, can also be considered as conic problems.

Copositive Programming (CoP) problems are linear conic optimization problems where a linear function is optimized over a cone of matrices that are positive semidefinite on the non-negative orthant \(\mathbb {R}^p_+\) (copositive matrices). CoP can be thought of as a special case of SIP and a generalization of SDP.

Formally, CoP problems are rather similar to those of SDP, but CoP deals with more complex and less studied problems. Being a fairly new field of research, CoP has already gained popularity, as it has proven to be very useful in modeling particularly complex problems of nonconvex optimization, graph theory, algebra, and various applications (see, for example, [1, 8] and the references therein).

In spite of the demand from applications and the great interest in them, CoP problems still remain insufficiently studied. This is explained by the complexity of these problems, which in turn is due to the complexity of the underlying cone.

Optimality conditions and duality relations are among the most actively studied optimization topics, whose importance has long been recognized (see, e.g., [2, 7, 14, 20, 27]). Duality plays a central role in testing optimality, identifying infeasibility, establishing lower bounds for the optimal objective value, and designing and analyzing iterative algorithms.

Traditionally, for a given (primal) convex problem, based on its initial data, the Lagrangian dual problem is constructed. The difference between the optimal values of the primal and dual cost functions is called the duality gap. The primal and the dual problems are closely related. The strength of this relationship depends on the initial problem data, which specify the form of the constraints and the cost function. Roughly speaking, a pair of dual problems is said to satisfy (i) weak duality if the duality gap is non-negative, (ii) strong duality if, for a given cost function, the duality gap is zero, and (iii) uniform duality if the duality gap is zero for any cost function. It is evident that (iii) implies (ii), and (ii) implies (i).

Guaranteeing strong and uniform duality provides researchers with a deeper theoretical understanding and insights into the structure of an optimization problem, which is crucial for the development of stronger optimality conditions and more efficient solution methods. Therefore, a thorough study of the duality issues and the derivation of explicit criteria for achieving strong and uniform duality properties is an extremely important challenge.

Speaking about conic linear problems, in [22], G. Pataki says that these problems “often behave pathologically: the optimal values of the primal and dual programs may differ, and may not be attained”. Such “pathological” problems are (see [23]) “theoretically interesting and often impossible to solve”. This is one more argument explaining the importance of investigating strong and uniform duality for conic optimization problems.

It is well known that even strong duality is guaranteed unconditionally only for LP problems, while for the most important classes of conic problems, this property holds only under certain rather strong assumptions. There is an extensive literature on sufficient conditions for strong duality in conic problems (see, for example, [3, 9, 12], and the references therein).

In this paper, we study the uniform duality property which, as mentioned above, is more powerful than strong duality. The concept of uniform LP duality was considered in the work of Duffin et al. (see [7]) for linear SIP problems, and in [11], it was used for a wider class of convex SIP problems in the form of uniform convex duality.

In the literature, the uniform duality property has been studied mostly for convex SIP and SDP problems and for certain classes of linear conic problems with nice cones (see [26] for the definition).

In [22, 23], the author uses a normal (canonical) form of a semi-definite system to characterize semi-definite systems which do not satisfy the uniform LP duality. Uniform LP duality for second-order conic systems was studied in the second online version of the paper [22] (see arXiv:1709.02423v2).

It is known (see, e.g., [22], Lemma 2) that the closedness of the linear image of a closed convex cone ensures that a conic linear system yields uniform LP duality. Some closedness criteria for the linear image of a closed convex cone are studied in [25], and the results therein allow one to obtain necessary conditions for a conic linear system to satisfy uniform LP duality (Theorem 1 in [22]). If the cone defining this system is nice, then these conditions are necessary and sufficient.

In [20], for a problem introduced by the authors as a special SDP-SIP problem (SDSIP), uniform duality conditions are established in the form of a closedness condition for a certain cone. In [28], it is shown that the problem (SDSIP) is an ordinary linear semi-infinite program and, therefore, all the existing results regarding duality and uniform LP duality for linear SIP can be applied to it. Using this observation, in [28], new proofs for the results from [20] are given.

In [30], the author uses a semi-infinite optimization technique from [7] to characterize semi-definite systems that satisfy or do not satisfy uniform duality and provides a different explanation of the conditions guaranteeing uniform duality. A characterization of objective functions for SDP problems satisfying the strong duality property is given there as well.

Recent references also confirm the interest in studying uniform LP duality for conic systems.

Not much literature is available on optimality and duality conditions for CoP. Moreover, strong/uniform duality in CoP is not easy to establish due to the intrinsic complexity of copositive problems. The complexity arises from the fact that the cone of copositive matrices and the corresponding dual cone of completely positive matrices lack certain "good" properties: they are neither self-dual, nor symmetric, nor facially exposed (see [5, 13, 29]) and, consequently, not nice [26]. Therefore, the results obtained for conic systems with nice cones (such as SDP and second-order problems) cannot be applied to CoP problems. On the other hand, the results obtained for general linear SIP problems do not take into account the specific features of CoP problems and could be too intricate for them.

The aim of this paper is to establish a new criterion for uniform LP duality in linear CoP and to formulate it in different equivalent forms, thus broadening its scope. The main results are obtained on the basis of an approach developed in previous papers of the authors, built on the concept of immobile indices. This approach was first described for SIP problems (see, e.g., [15]) and then applied to various classes of convex conic problems in [14] and elsewhere.

The remainder of this paper is organized as follows. The problem statement and relevant research are overviewed in Sect. 2. The main results of the paper, new necessary and sufficient conditions for uniform LP duality for linear CoP problems, are proved in Sect. 3. Several equivalent formulations of the uniform duality conditions from Sect. 3 are deduced in Sect. 4. Section 5 contains examples confirming that the conditions obtained in the paper are essential; some comparison with known results is given there as well. In Sect. 6, we analyze the uniform duality conditions for SIP problems applied to CoP and show that the results obtained in this paper allow one to reformulate these conditions in a more explicit form. The final Sect. 7 contains some conclusions. Some technical proofs are given in the Appendix.

2 Problem Statement and Preliminary Results

Given a finite-dimensional inner product space \(\mathfrak {X}\), let us first recall some generally accepted definitions.

A set \(C\subset \mathfrak {X}\) is convex if for any \(x, y \in C\) and any \(\alpha \in [0,1]\), it holds \(\alpha x+(1-\alpha ) y \in C.\) A set \(K \subset \mathfrak {X}\) is a cone if for any \(x \in K\) and any \(\alpha > 0\), it holds \(\alpha x\in K.\) For a cone \(K \subset \mathfrak {X}\), its dual cone \(K^*\) is given by

$$\begin{aligned} K^*:=\{x\in \mathfrak {X}: \langle x,y\rangle \ge 0 \ \forall y\in K\}.\end{aligned}$$

For a set \({\mathcal {B}}\subset \mathfrak {X},\) denote by \(\textrm{conv } \, {\mathcal {B}}\) its convex hull, i.e., the minimal (by inclusion) convex set containing \(\mathcal {B}\); by \(\textrm{span}\, {\mathcal {B}}\) its span, i.e., the smallest linear subspace containing \({\mathcal {B}}\); and by \({\textrm{cone }}\, \mathcal {B}\) its conic hull, i.e., the set of all conic combinations of the points of \(\mathcal {B}\). In what follows, we will denote by \(\textrm{cl}\, \mathcal {B}\) the closure of the set \(\mathcal {B}\), by \( \textrm{int} \, \mathcal {B}\) its interior, and by \( {\textrm{relint}} \, \mathcal {B}\) its relative interior.

Given an integer \(p>1\), consider the vector space \({\mathbb {R}}^p\) with the standard orthonormal basis \(\{{\textbf{e}}_k, \ k=1,2,\dots , p\}\). Denote by \({\mathbb {R}}^{p}_+\) the set of all p-vectors with non-negative components, by \({{\mathcal {S}}}^{p}\) the space of real symmetric \(p\times p\) matrices, and by \(\mathcal {S}^p_+\) the cone of symmetric positive semi-definite \(p\times p\) matrices. The space \({{\mathcal {S}}}^{p}\) is considered here as a vector space with the trace inner product \(A\bullet B:=\textrm{trace}\, (AB).\)
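
The following small sketch (our illustration, using NumPy; it is not part of the original text) checks that for symmetric matrices the trace inner product \(A\bullet B=\textrm{trace}\,(AB)\) coincides with the sum of entrywise products \(\sum _{k,l}A_{kl}B_{kl}\):

```python
import numpy as np

# For symmetric A and B, trace(AB) = sum_k sum_l A_kl * B_lk = sum_kl A_kl * B_kl,
# since B_lk = B_kl. Check on a small example.
A = np.array([[1.0, 2.0], [2.0, 3.0]])
B = np.array([[0.0, 1.0], [1.0, 5.0]])

inner = np.trace(A @ B)
assert np.isclose(inner, (A * B).sum())
```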

In this paper, we deal with special classes of cones, the elements of which are matrices, in particular, with the cones of copositive and completely positive matrices.

Let \(\mathcal {COP}^{p}\) denote the cone of symmetric copositive \(p\times p\) matrices:

$$\begin{aligned} \mathcal {COP}^p:=\{D\in {{\mathcal {S}}}^{p}:{{\textbf{t}}}^{\top }D{{\textbf{t}}}\ge 0\ \forall {\textbf{t}} \in \mathbb R^p_+\}. \end{aligned}$$

Consider a compact subset of \({\mathbb {R}}^{p}_+\) in the form of the simplex

$$\begin{aligned} T:=\{{\textbf{t}} \in {\mathbb {R}}^p_+:\textbf{e}^{\top }{\textbf{t}}=1\} \end{aligned}$$
(1)

with \(\textbf{e}=(1,1,\dots ,1)^{\top }\in {\mathbb {R}}^p \). It is evident that the cone \(\mathcal {COP}^p\) can be equivalently described as follows:

$$\begin{aligned} \mathcal {COP}^p=\{D\in {\mathcal {S}}^{p}: \ {{\textbf{t}}}^{\top }D{\textbf{t}}\ge 0 \ \forall {\textbf{t}}\in T\}. \end{aligned}$$
(2)
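
Since T is compact, representation (2) suggests a straightforward numerical copositivity test: check that the quadratic form is non-negative on a discretization of the simplex. The sketch below is our own illustration for the case \(p=2\) only, with a hypothetical grid resolution; it is not an algorithm from the paper.

```python
import numpy as np

def min_quadratic_on_simplex(D, grid=200):
    """Approximate the minimum of t^T D t over T = {t >= 0, e^T t = 1} for p = 2."""
    # For p = 2 the simplex is the segment t = (s, 1 - s), s in [0, 1].
    ss = np.linspace(0.0, 1.0, grid + 1)
    vals = [np.array([s, 1.0 - s]) @ D @ np.array([s, 1.0 - s]) for s in ss]
    return min(vals)

def is_copositive_2x2(D, tol=1e-9):
    # Representation (2): D is copositive iff t^T D t >= 0 for all t in T.
    return min_quadratic_on_simplex(D) >= -tol

# A positive semidefinite matrix is copositive: here t^T D t = (t1 - t2)^2 >= 0.
assert is_copositive_2x2(np.array([[1.0, -1.0], [-1.0, 1.0]]))
# This matrix is not copositive: at t = (1/2, 1/2), t^T D t = -1/2 < 0.
assert not is_copositive_2x2(np.array([[0.0, -1.0], [-1.0, 0.0]]))
```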

The dual cone to \(\mathcal {COP}^p\) is the cone of completely positive matrices defined as

$$\begin{aligned} (\mathcal {COP}^p)^*=:\mathcal{C}\mathcal{P}^p = \textrm{conv}\{ {\textbf{t}}{\textbf{t}} ^{\top }: {\textbf{t}} \in {{\mathbb {R}}}^p_+\}. \end{aligned}$$

The cones of copositive and completely positive matrices are known to be proper cones, which means that they are closed, convex, pointed, and full-dimensional.
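
By definition, every \(U\in \mathcal{C}\mathcal{P}^p\) is a finite sum \(\sum _i\alpha _i{\textbf{t}}_i{\textbf{t}}_i^{\top }\) with \({\textbf{t}}_i\in {\mathbb {R}}^p_+\). The sketch below (our illustration, with arbitrarily chosen dimensions) builds such a matrix and checks the well-known necessary conditions for complete positivity, namely that U is doubly non-negative (symmetric, entrywise non-negative, and positive semidefinite):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_completely_positive(p, r, rng):
    """Build U = sum_i t_i t_i^T with t_i in R^p_+, a member of CP^p by definition."""
    T = rng.random((r, p))   # rows t_i with non-negative components
    return T.T @ T           # equals the sum of the rank-one terms t_i t_i^T

U = random_completely_positive(4, 6, rng)

# Necessary conditions ("doubly non-negative"): symmetric, entrywise >= 0, PSD.
assert np.allclose(U, U.T)
assert (U >= 0).all()
assert np.linalg.eigvalsh(U).min() >= -1e-10
```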

Consider a linear copositive programming problem in the form

$$\begin{aligned} {\mathbf{P:}}\;\;\qquad \qquad \qquad \displaystyle \min _{\textbf{x} \in {\mathbb {R}}^n } \ {\textbf{c}}^{\top } \textbf{x} \;\;\; \text{ s.t. } \mathcal {A}(\textbf{x}) \in \mathcal {COP}^p,\qquad \qquad \qquad \end{aligned}$$

where \( \textbf{x}=(x_1,\dots ,x_n)^{\top }\) is the vector of decision variables, the constraint matrix function \({\mathcal {A}}(\textbf{x})\) is defined as \( {\mathcal {A}}(\textbf{x}):=\displaystyle \sum _{m=1}^{n}A_mx_m+A_0;\) vector \( {\textbf{c}}\in {\mathbb {R}}^n \) and matrices \(A_m\in {\mathcal {S}}^{p},\) \(m=0,1,\dots ,n\) are given. Denote by X the set of feasible solutions of this problem:

$$\begin{aligned} X=\{\textbf{x} \in {\mathbb {R}}^n:\mathcal {A}(\textbf{x})\in \mathcal {COP}^p\}. \end{aligned}$$

For the problem (P), the Lagrange dual problem takes the form:

$$\begin{aligned} {\textbf{D}}:\ \ \max -U\bullet A_0, \text{ s.t. } U\bullet A_m=c_m \; \forall m=1,2,\dots ,n;\ U\in \mathcal{C}\mathcal{P}^p. \end{aligned}$$

In what follows, for an optimization problem (Q), \(Val(\textbf{Q})\) denotes the optimal value of the objective function in the problem (Q) (for short, the optimal value of the problem (Q)).

It is a known fact (see, for example, [17] and Sect. 5 below) that for CoP problems, the optimal values \(Val ({\textbf{P}} )\) and \(Val( \textbf{D} )\) of the primal problem (P) and the corresponding Lagrange dual problem \((\textbf{D})\) are not necessarily equal, even if they exist and are finite. A situation where, assuming \(Val ({\textbf{P}})>-\infty \), the problem \((\textbf{D})\) has an optimal solution and the so-called duality gap (the difference \(Val ({\textbf{P}} )-Val( \textbf{D} )\)) equals zero is called strong duality.
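
Weak duality, by contrast, always holds: for any primal-feasible \(\textbf{x}\) and dual-feasible \(U\), one has \(U\bullet \mathcal {A}(\textbf{x})\ge 0\) (since \(\mathcal{C}\mathcal{P}^p=(\mathcal {COP}^p)^*\)), which together with the equality constraints of \((\textbf{D})\) gives \({\textbf{c}}^{\top }\textbf{x}\ge -U\bullet A_0\). The sketch below checks this inequality on a tiny hypothetical instance of our own (n = 1, p = 2), not taken from the paper:

```python
import numpy as np

# Hypothetical tiny instance (n = 1, p = 2): A(x) = A1 * x1 + A0.
A0 = np.array([[0.0, 1.0], [1.0, 0.0]])
A1 = np.eye(2)
c = np.array([1.0])

# x = (0,) is primal feasible: t^T A0 t = 2*t1*t2 >= 0 on R^2_+, so A0 is copositive.
x = np.array([0.0])

# U = t t^T with t = (1, 0)^T is completely positive and satisfies U • A1 = 1 = c1,
# so U is dual feasible.
t = np.array([1.0, 0.0])
U = np.outer(t, t)
assert np.isclose((U * A1).sum(), c[0])

# Weak duality: c^T x >= -U • A0, since U • A(x) = c^T x + U • A0 >= 0.
assert c @ x >= -(U * A0).sum() - 1e-12
```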

In this paper, for linear CoP problems, we consider a slightly different duality property of their feasible sets, ensuring the vanishing of the duality gap for all cost vectors \({\textbf{c}}\). As this property is related to the constraint system \(\mathcal {A}(\textbf{x})\in \mathcal {COP}^p\) of the problem \(({\textbf{P}}) \), we will refer to it as a property of this system.

Definition 2.1

A consistent system \(\mathcal {A}(\textbf{x})\in \mathcal {COP}^p\) yields the property of uniform LP duality if for all \({\textbf{c}}\in {\mathbb {R}}^n\), such that the optimal value of the problem \((\textbf{P})\) is finite, the corresponding Lagrange dual problem (D) has an optimal solution, and it holds \(Val ({\textbf{P}})=Val(\textbf{D}).\)

It is known that under the Slater condition (the property that for some \(\textbf{x}\in {\mathbb {R}}^n\), it holds \(\mathcal {A}(\textbf{x})\in \textrm{int} (\mathcal {COP}^p)\)), the system \(\mathcal {A}(\textbf{x})\in \mathcal {COP}^p\) yields the uniform LP duality property.

Given the set X of feasible solutions of the problem (P), denote by \(T_{im}\) the set of normalized immobile indices of constraints in this problem:

$$\begin{aligned} T_{im}:=\{{\textbf{t}} \in T: {\textbf{t}}^{\top }\mathcal {A}(\textbf{x}){\textbf{t}}=0\ \forall \textbf{x} \in X\}. \end{aligned}$$

It is known (see, e.g., [14, 16]) that the set \(T_{im}\) is either empty or a union of a finite number of convex bounded polyhedra. Also, it was shown in [14] that the emptiness of the set \(T_{im}\) is equivalent to the fulfillment of the Slater condition.
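
The notion of an immobile index can be made concrete on a toy system of our own (not taken from the paper): for n = 1, p = 2 and \(\mathcal {A}(x)=\textrm{diag}(x,0)\), the feasible set is \(X=\{x\ge 0\}\), the index \({\textbf{t}}=(0,1)^{\top }\in T\) annihilates the quadratic form for every feasible x, and so the Slater condition fails:

```python
import numpy as np

# Hypothetical system (n = 1, p = 2): A(x) = diag(x, 0), feasible iff x >= 0.
def quad(x, t):
    D = np.diag([x, 0.0])
    return t @ D @ t

# t = (0, 1)^T lies in the simplex T and t^T A(x) t = 0 for ALL feasible x:
# it is an immobile index, so T_im is non-empty and Slater's condition fails.
t_im = np.array([0.0, 1.0])
for x in [0.0, 1.0, 10.0]:   # sample feasible points
    assert quad(x, t_im) == 0.0

# By contrast, t = (1, 0)^T is not immobile: t^T A(x) t = x > 0 for x > 0.
assert quad(1.0, np.array([1.0, 0.0])) > 0
```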

Suppose that the set \(T_{im}\) is not empty and denote by

$$\begin{aligned} \mathcal {T}:=\{{{{\varvec{\tau }}}}(j), j \in J\}, \ J\subset {\mathbb {N}},\ |J|<\infty , \end{aligned}$$

the set of all vertices of \(\textrm{conv}T_{im}.\) It was shown in [14] that

$$\begin{aligned} X\subset Z:=\{\textbf{x} \in {\mathbb {R}}^n: \mathcal {A}(\textbf{x}){{{\varvec{\tau }}}}(j)\ge 0,\ j \in J\}. \end{aligned}$$
(3)

Denote \(P:=\{1,\dots ,p\} \) and introduce the following sets:

$$\begin{aligned} & {\overline{M}}(j):=\{k \in P: {\textbf{e}}^{\top }_{k}\mathcal {A}(\textbf{x}){{{\varvec{\tau }}}}(j)=0 \ \forall \textbf{x} \in Z\},\ j \in J, \end{aligned}$$
(4)
$$\begin{aligned} & M(j)=\{k \in P: {\textbf{e}}^{\top }_{k}\mathcal {A}(\textbf{x}){{{\varvec{\tau }}}}(j)=0 \ \forall \textbf{x} \in X\}, \ j \in J, \end{aligned}$$
(5)
$$\begin{aligned} & N_*(j)\!:=\{k \!\in P: \exists \, \textbf{x}(k,j)\in X \text{ such } \text{ that } {\textbf{e}}^{\top }_k\mathcal {A}(\textbf{x}(k,j)){{{\varvec{\tau }}}}(j)=0\}, \, j \in J. \end{aligned}$$
(6)

It was shown in [16] (see Theorem 7.1 with \(\mathcal {V}=\mathcal {T}\) and Theorem 3.1 with \(Q=\{D=\mathcal {A}(\textbf{x}),\, \textbf{x} \in X\}\)) that

$$\begin{aligned} X=\{\textbf{x} \in {\mathbb {R}}^{n}:{\textbf{t}}^{\top } \mathcal {A}(\textbf{x}){\textbf{t}}\ge 0 \ \forall {\textbf{t}} \in \varOmega ,\ \mathcal {A}(\textbf{x}){{{\varvec{\tau }}}}(j) \ge \textbf{0 }\ \forall j \in J\} \end{aligned}$$
(7)

and there exists a feasible solution \(\textbf{x}^*\in X,\) such that

$$\begin{aligned} \begin{aligned}&{\textbf{t}}^{\top } \mathcal {A}(\textbf{x}^*){\textbf{t}}>0 \; \forall {\textbf{t}} \in T\setminus T_{im},\\ {\textbf{e}}^{\top }_k\mathcal {A}( \textbf{x}^*){{{\varvec{\tau }}}}(j)= 0\ &\forall k \in M(j);\; {\textbf{e}}^{\top }_k\mathcal {A}(\textbf{x}^*){{{\varvec{\tau }}}}(j)> 0 \, \forall k \in P\setminus M(j), \forall j \in J. \end{aligned} \end{aligned}$$
(8)

Here and in what follows, we will use the set

$$\begin{aligned} \varOmega :=\{{\textbf{t}} \in T: \rho ({\textbf{t}}, \textrm{conv} T_{im})\ge \sigma \}\subset T\setminus T_{im}, \ \end{aligned}$$
(9)

where \(\sigma :=\min \{\tau _k(j), k \in P_+({{{\varvec{\tau }}}}(j)), j \in J\}>0,\) \(P_+({\textbf{t}}):=\{k \in P:t_k>0\}\) for \({\textbf{t}}=(t_k,k \in P)^{\top } \in {\mathbb {R}}^p_+\), and \(\rho ({\textbf{t}}, \mathcal {B} )=\min \limits _{{{{\varvec{\tau }}}}\in \mathcal {B}}\sum \limits _{k\in P}|t_k-\tau _k|\) for a set \(\mathcal {B}\subset {\mathbb {R}}^p\).

Proposition 2.1

For the sets \({\overline{M}}(j) \) and M(j), \( j \in J,\) defined in (4) and (5), the following equalities hold true:

$$\begin{aligned} {\overline{M}}(j)=M(j) \ \forall j \in J. \end{aligned}$$
(10)

The proposition is proved in the Appendix.

It follows from Proposition A.1 (see the Appendix) that the problem \(({\textbf{P}})\) is equivalent to the following one:

$$\begin{aligned} & {{\textbf{P}}}_{*}: \qquad \qquad \qquad \qquad \min {\textbf{c}}^{\top } \textbf{x}\qquad \qquad \qquad \qquad \nonumber \\ & \text{ s.t. } {\textbf{t}}^{\top } \mathcal {A}(\textbf{x}){\textbf{t}}\ge 0 \ \forall {\textbf{t}} \in \varOmega ,\ {\textbf{e}}^{\top }_k\mathcal {A}(\textbf{x}){{{\varvec{\tau }}}}(j) \ge 0 \ \forall k \in N_{*}(j),\ \forall j \in J. \ \ \ \end{aligned}$$
(11)

For \( {\textbf{t}} \in T\) and \(k\in P\), denote

$$\begin{aligned} \bar{\textbf{b}}(k,{\textbf{t}})= & \left( \begin{array}{c} {\textbf{e}}^{\top }_kA_m{\textbf{t}}\\ m=0,1,\dots ,n\end{array}\right) \!\in {\mathbb {R}}^{n+1},\, \textbf{a}({\textbf{t}})\!=\!\left( \begin{array}{c} {\textbf{t}}^{\top } A_m{\textbf{t}}\\ m=0,1,\dots ,n\end{array}\right) \!\in {\mathbb {R}}^{ n+1}, \end{aligned}$$
(12)
$$\begin{aligned} \textbf{b}(k,j)= & \left( \begin{array}{c} {\textbf{e}}^{\top }_kA_m{{{\varvec{\tau }}}}(j)\\ m=0,1,\dots ,n\end{array}\right) =\bar{\textbf{b}}(k,{{{\varvec{\tau }}}}(j))\in {\mathbb {R}}^{n+1}. \end{aligned}$$
(13)

Proposition 2.2

Consider a consistent system \(\mathcal {A}(\textbf{x})\in \mathcal {COP}^p\) with the corresponding sets \( M(j),\ j \in J\), and the vectors \(\textbf{b}(k,j), k\in M(j),\ j \in J\), defined above. For any \(j_0\in J\) and \(k_0\in M(j_0),\) there exist numbers \(\alpha _{kj}= \alpha _{kj}(k_0,j_0),\) \( k\in M(j),\ j \in J,\) such that

$$\begin{aligned} -\textbf{b}(k_0,j_0)=\sum \limits _{j\in J}\sum \limits _{k\in M(j)}\alpha _{kj} \textbf{b}(k,j),\ \alpha _{kj}\ge 0\ \forall k\in M(j),\ j \in J. \end{aligned}$$
(14)

Proof

Using the notation (13), the set Z defined in (3) can be represented as

$$\begin{aligned} Z=\{\textbf{x} \in {\mathbb {R}}^n: (1, \textbf{x}^{\top })\textbf{b}(k,j)\ge 0\ \forall k\in P,\, \forall j \in J\}. \end{aligned}$$
(15)

Consider the following LP problem:

$$\begin{aligned} {{\textbf{LP}}_*}:\qquad \qquad \qquad \max (1, {\textbf{z}}^{\top })\textbf{b}(k_0,j_0)\quad \mathrm{s.t. }\ {\textbf{z}} \in Z.\qquad \qquad \end{aligned}$$

Due to (10), we have \(k_0\in M(j_0)={\overline{M}}(j_0)\) and it follows from the definition of the set \({\overline{M}}(j_0)\) that \(Val({\textbf{LP}}_*)=0.\) Hence a vector \( \textbf{x}^*\) satisfying (8) is an optimal solution of the problem (\({\textbf{LP}}_*\)). Taking into account representation (15) of the set Z and relations (8), we see that relations (14) are necessary and sufficient optimality conditions for \(\textbf{x}^*\) in the problem (\({\textbf{LP}}_*\)). \(\square \)

Proposition 2.3

There exists a partition of the set J into subsets J(s),  \( s\in S,\) \(|S|\ge 1\), such that

$$\begin{aligned} & T_{im}=\bigcup \limits _{s\in S} T_{im}(s), \text{ where } T_{im}(s):=\textrm{conv} \{{{{\varvec{\tau }}}}(j), j \in J(s)\}\ \forall s \in S, \end{aligned}$$
(16)
$$\begin{aligned} & P_*(s):=\bigcup \limits _{j \in J(s)}P_+({{{\varvec{\tau }}}}(j))\subset M(j) \ \forall j \in J(s), \ \forall s \in S. \end{aligned}$$
(17)

The proposition is proved in the Appendix.

Proposition 2.4

Consider a consistent system \(\mathcal {A}(\textbf{x})\in \mathcal {COP}^p\) with the corresponding sets \(T_{im}, \mathcal {T}, M(j),\ j \in J\), and the vectors \(\textbf{a}({\textbf{t}}), \textbf{b}(k,j), k\in M(j),\ j \in J\), defined above. The following inclusions hold true:

$$\begin{aligned} \textbf{a}({\textbf{t}})\in \textrm{cone}\{\textbf{b}(k,j),\ k\in M(j),\ j \in J\} \ \ \forall {\textbf{t}}\in T_{im}. \end{aligned}$$
(18)

Proof

Consider any \({\textbf{t}}\in T_{im}.\) It follows from (16) that \({\textbf{t}}\in T_{im}(s) \) \(=\textrm{conv}\{{{\varvec{\tau }}}(j),\) \( j \in J(s)\}\) with some \(s\in S.\) Hence, there exists \(\varDelta J(s)\subset J(s)\) such that the following relations hold true:

$$\begin{aligned} {\textbf{t}}=\sum \limits _{j\in \varDelta J(s)}\alpha _j{{\varvec{\tau }}}(j),\ \alpha _j> 0 \ \forall j \in \varDelta J(s);\ \sum \limits _{j\in \varDelta J(s)}\alpha _j=1. \end{aligned}$$
(19)

Then we obtain

$$\begin{aligned} \textbf{a}({\textbf{t}})=\sum \limits _{k\in P_+({\textbf{t}})}t_k\bar{\textbf{b}}(k,{\textbf{t}}) =\sum \limits _{j \in \varDelta J(s)}\sum \limits _{k\in M(j)}t_k\alpha _j\textbf{b}(k,j). \end{aligned}$$
(20)

Here we took into account (17) and the inclusion \(P_+({\textbf{t}})\subset P_*(s)\). Since \(t_k\alpha _j\ge 0\) for all \(k\in M(j)\) and \(j\in \varDelta J(s)\subset J,\) we conclude from (20) that inclusions (18) hold. \(\square \)

3 Necessary and Sufficient Uniform LP Duality Conditions

In this section, we will prove two statements containing new necessary and sufficient uniform LP duality conditions for linear CoP systems.

Proposition 3.1

A consistent linear CoP system \(\mathcal {A}(\textbf{x})\in \mathcal {COP}^p\) yields the uniform LP duality iff the following relations hold:

$$\begin{aligned} \textbf{b}(k,j)\in \textrm{cone}\{\textbf{a}({\textbf{t}}), {\textbf{t}}\in T\} \ \forall k\in N_{*}(j), \ \ \forall j \in J. \end{aligned}$$
(21)

Proof

Notice that if \(T_{im}=\emptyset \), then \(J=\emptyset \), and conditions (21) are considered to be trivially fulfilled.

\(\Rightarrow )\) Suppose that the consistent system \(\mathcal {A}(\textbf{x})\in \mathcal {COP}^p\) yields the uniform LP duality. Then for any \({\textbf{c}}\in {\mathbb {R}}^n\) for which \(Val({\textbf{P}})>-\infty \), there exists a matrix \(U=U({\textbf{c}})\) in the form

$$\begin{aligned} U=\sum \limits _{i\in I}\alpha _i{\textbf{t}}(i)({\textbf{t}}(i))^{\top }, \ \alpha _i>0,\ {\textbf{t}}(i)\in T,\ i\in I,\ |I| <\infty , \end{aligned}$$
(22)

such that

$$\begin{aligned} A_m\bullet U=c_m,\, m=1,\dots ,n; \ A_0\bullet U=-Val({\textbf{P}}). \end{aligned}$$
(23)

For fixed \(j\in J\) and \(k\in N_{*}(j)\), consider the problem (P) with \({\textbf{c}}^{\top }=(c_m= {\textbf{e}}^{\top }_k A_{m}{{{\varvec{\tau }}}}(j), m=1,\dots ,n).\) It follows from (3) that

$$\begin{aligned} {\textbf{c}}^{\top } \textbf{x}=\sum \limits _{m=1}^n {\textbf{e}}^{\top }_k A_m x_{m}{{{\varvec{\tau }}}}(j)\ge - {\textbf{e}}^{\top }_k A_0 {{{\varvec{\tau }}}}(j)\ \ \forall \textbf{x}\in X, \end{aligned}$$

and (6) implies the existence of \(\textbf{x}(k,j)\in X\) such that \({\textbf{c}}^{\top } \textbf{x}(k,j)=- {\textbf{e}}^{\top }_k A_0{{{\varvec{\tau }}}}(j)\). Thus we can conclude that \(Val({\textbf{P}})=-{\textbf{e}}^{\top }_k A_0 {{\varvec{\tau }}}(j) >-\infty \). Taking into account that the system \(\mathcal {A}(\textbf{x})\in \mathcal {COP}^p\) yields the uniform LP duality, we conclude that there exists a matrix U in the form (22) such that

$$\begin{aligned} & A_m\bullet U=\sum \limits _{i\in I}\alpha _i({\textbf{t}}(i))^{\top } A_m{\textbf{t}}(i)={\textbf{e}}^{\top }_k A_{m}{{{\varvec{\tau }}}}(j),\, m=1,\dots ,n;\\ & A_0\bullet U=\sum \limits _{i\in I}\alpha _i({\textbf{t}}(i))^{\top } A_0{\textbf{t}}(i)= {\textbf{e}}^{\top }_k A_0 {{{\varvec{\tau }}}}(j). \end{aligned}$$

It is easy to see that these equalities can be rewritten as

$$\begin{aligned} \textbf{b}(k,j)=\left( \begin{array}{c} A_m\bullet U\\ m=0,\dots ,n\end{array}\right) =\sum \limits _{i\in I}\alpha _i\textbf{a}({\textbf{t}}(i)) \text{ with } \alpha _i>0, {\textbf{t}}(i)\in T, i \in I. \end{aligned}$$

Thus we have shown that inclusions (21) hold true.

\(\Leftarrow )\) Now, having supposed that inclusions (21) hold true, let us show that the consistent system \(\mathcal {A}( \textbf{x})\in \mathcal {COP}^p\) yields the uniform LP duality.

Consider any \({\textbf{c}}\in {\mathbb {R}}^n\) such that \(Val({\textbf{P}})>-\infty .\) It was stated in Sect. 2 that the problem \(({\textbf{P}})\) is equivalent to the problem (\({\textbf{P}}_*\)) and there exists \({ \textbf{x}}^*\in {\mathbb {R}}^n\) satisfying (8). Notice that \(Val({\textbf{P}}_*)=Val({\textbf{P}})\) and system (11) in the problem (\({\textbf{P}}_*\)) can be rewritten as

$$\begin{aligned} (1, \textbf{x}^{ \top })\textbf{a}({\textbf{t}})\ge 0 \ \forall {\textbf{t}} \in \varOmega ,\ (1,\textbf{x}^{\top })\textbf{b}(k,j) \ge 0 \ \forall k \in N_{*}(j),\ \forall j \in J. \end{aligned}$$
(24)

Taking into account the inequalities in (8) (which can be viewed as a generalized Slater condition for the problem (\({\textbf{P}}_*\))), let us show that system (11) yields the uniform LP duality. In fact, it follows from Theorem 1 in [19] that under conditions (8), there exist vectors \({\textbf{t}}(i)\in \varOmega ,\ i \in I, |I|\le n,\) such that an LP problem

$$\begin{aligned} \begin{aligned} {{\textbf{LP}}}:&\qquad \qquad \qquad \qquad \min {\textbf{c}}^{\top } \textbf{x} \\ &\text{ s.t. } (1,\textbf{x}^{\top })\textbf{a}({\textbf{t}}(i))\ge 0 \ \forall i \in I,\ (1,\textbf{x}^{\top })\textbf{b}(k,j) \ge 0 \ \forall k \in N_{*}(j),\ \forall j \in J,\end{aligned} \end{aligned}$$

has the same optimal value as the problem (\({{\textbf{P}}_*}\)): \(Val({\textbf{P}}_*)=Val({{\textbf{LP}}})>-\infty .\) The problem (LP) is consistent since any \(\textbf{x}\in X\) is feasible in this problem. Hence the problem (LP) has an optimal solution. Consequently, there exist numbers \(\alpha _i,\ i \in I,\) and \(\lambda _k(j), \, k\in N_{*}(j),\ j \in J,\) and vectors \({\textbf{t}}(i)\in \varOmega ,\ i \in I, \) such that

$$\begin{aligned} & \alpha _i\ge 0,i \in I,\ \lambda _k(j)\ge 0, \, k\in N_{*}(j),\ j \in J,\nonumber \\ & \sum \limits _{i\in I}\alpha _i\textbf{a}({\textbf{t}}(i))\!+ \!\sum \limits _{j\in J} \sum \limits _{k\in N_*(j)}\lambda _k(j)\textbf{b}(k,j)\!=\!(-Val({\textbf{P}}), c_m,\ m\!=\!1,\dots ,n)^{\top }. \end{aligned}$$
(25)

From (21), one can conclude that for all indices \(j\in J\), \(k \in N_{*}(j)\) and any \(\lambda _k(j)>0\), the vector \(\lambda _k(j)\textbf{b}(k,j)\) admits a representation

$$\begin{aligned}\begin{aligned}&\lambda _k(j)\textbf{b}(k,j)=\sum \limits _{i\in I(k,j)}\alpha _i(k,j)\textbf{a}({{{\varvec{\tau }}}}(i,k,j)) \\ \text{ with }&\alpha _i(k,j)>0,\, {{{\varvec{\tau }}}}(i,k,j)\in T,\, i \in I(k,j),\ |I(k,j)|<\infty .\end{aligned} \end{aligned}$$

It follows from the representation above and from (25) that

$$\begin{aligned} \sum \limits _{i\in {\bar{I}}}\bar{\alpha }_i\textbf{a}(\bar{\textbf{t}}(i))=(-Val({\textbf{P}}),c_m, m=1,\dots ,n)^{\top } \end{aligned}$$
(26)

with some \( \bar{\alpha }_i>0, \) \( \bar{{\textbf{t}}}(i)\in T,\) \(|{\bar{I}}|<\infty .\)

Denote \(U:=\sum \limits _{i\in {\bar{I}}}\bar{\alpha }_i\bar{\textbf{t}}(i)(\bar{\textbf{t}}(i))^{\top }.\) It is evident that \(U\in \mathcal{C}\mathcal{P}^p\) and relations (26) can be rewritten as (23). Hence we have shown that if inclusions (21) hold true, then for any \({\textbf{c}}\in {\mathbb {R}}^n\) for which \(Val({\textbf{P}})>-\infty \), there exists a matrix \(U=U({\textbf{c}})\in \mathcal{C}\mathcal{P}^p\) such that equalities (23) hold true.

By definition, this means that the system \(\mathcal {A}(\textbf{x})\in \mathcal {COP}^p\) yields the uniform LP duality. \(\square \)

For \(j\in J\) and \(k\in N_*(j)\), consider the following sets:

$$\begin{aligned} & X(k,j):=\{\textbf{x}\in X: {\textbf{e}}^{\top }_k\mathcal {A}(\textbf{x}){{{\varvec{\tau }}}}(j)=0\},\\ & T_{im}(k,j):=\{{\textbf{t}} \in T:{\textbf{t}}^{\top }\mathcal {A}(\textbf{x}){\textbf{t}}=0\ \forall \textbf{x} \in X(k,j)\}, \end{aligned}$$

and vectors \(\textbf{x}^*(k,j)\in X(k,j)\) such that

$$\begin{aligned} {\textbf{e}}^{\top }_k\mathcal {A}(\textbf{x}^*(k,j)){{{\varvec{\tau }}}}(j)=0,\ {\textbf{t}}^{\top }\mathcal {A}(\textbf{x}^*(k,j)){\textbf{t}}>0\ \forall {\textbf{t}} \in T\setminus T_{im}(k,j). \end{aligned}$$
(27)

For \(j\in J\) and \(k\in N_*(j)\), it follows from Theorem 3.1 in [16] with \(Q=X(k,j)\) that such a vector \(\textbf{x}^*(k,j)\) exists. Notice that by construction, for all \(j\in J,\) the following inclusions and equalities are satisfied:

$$\begin{aligned} & \emptyset \not =X(k,j)\subset X\setminus \{\textbf{x}^*\}, \ T_{im}\subset T_{im}(k,j)\ \forall k\in N(j),\ \\ & X(k,j)=X, \ \ T_{im}= T_{im}(k,j) \ \ \forall k\in M(j),\ \end{aligned}$$

where \(N(j):=N_*(j){\setminus } M(j),\) \(j\in J,\) and \(\textbf{x}^*\) is a vector satisfying (8).

Theorem 3.1

A consistent linear system \(\mathcal {A}(\textbf{x})\in \mathcal {COP}^p\) with the corresponding sets \(T_{im}\), \(\mathcal {T}\), and the other sets defined above yields the uniform LP duality iff the following conditions hold:

$$\begin{aligned} & \mathbf{I)}\qquad \ \ \ \textbf{b}(k,j)\in \textrm{cone}\{\textbf{a}({\textbf{t}}), {\textbf{t}}\in T_{im}\} \ \forall k \in M(j), \ \forall j \in J,\qquad \ \ \end{aligned}$$
(28)
$$\begin{aligned} & \mathbf{II)}\qquad \textbf{b}(k,j)\in \textrm{cone}\{\textbf{a}({\textbf{t}}), {\textbf{t}}\in T_{im}(k,j)\} \ \forall k\in N(j), \ \forall j \in J.\qquad \end{aligned}$$
(29)

Proof

It follows from Proposition 3.1 that to prove the theorem, it is enough to show that relations (21) are equivalent to relations (28) and (29). Since \(T_{im}\subset T\) and \(T_{im}(k,j)\subset T\) \(\forall k\in N(j),\) \( j\in J\), and \(N_*(j)=N(j)\cup M(j)\), it is evident that the relations (28) and (29) imply the inclusions (21).

Suppose now that inclusions (21) hold. Then for any \(j\in J\) and \(k\in N_*(j),\) the equality

$$\begin{aligned} \textbf{b} (k,j)=\sum \limits _{i\in I}\alpha _i\textbf{a}({\textbf{t}}(i)) \end{aligned}$$
(30)

holds true with some \(\alpha _i=\alpha _i(k,j)>0, {\textbf{t}}(i)={\textbf{t}}(i,k,j)\in T, i \in I=I(k,j).\) Let us multiply both sides of (30) by \((1,(\textbf{x}^*(k,j))^{\top })\). As a result, we get

$$\begin{aligned} \sum \limits _{i\in I}\alpha _i({\textbf{t}}(i))^{\top } \mathcal {A}(\textbf{x}^*(k,j)) {\textbf{t}}(i)= {\textbf{e}}^{\top }_k\mathcal {A}(\textbf{x}^*(k,j)){{{\varvec{\tau }}}}(j)=0 \end{aligned}$$
(31)

with some \( \alpha _i>0,\, {\textbf{t}}(i)\in T, i \in I,\) where we took into account the equality in (27). It follows from (31) and the inequalities in (27) that the vectors \({\textbf{t}}(i), i\in I,\) in (30) must satisfy \({\textbf{t}}(i)\in T_{im}(k,j),\) \( i \in I.\)

Consequently, we have shown that inclusions (21) imply the inclusions \(\textbf{b}(k,j)\in \textrm{cone}\{\textbf{a}({\textbf{t}}), {\textbf{t}}\in T_{im}(k,j)\}, k\in N_*(j)=M(j)\cup N(j), \ j \in J.\) Taking into account that \( T_{im}(k,j)=T_{im}\) \(\forall k \in M(j),\) \(\forall j\in J,\) we conclude that inclusions (21) imply (28) and (29). \(\square \)

Notice that in (28) and (29), we have a finite number of inclusions.
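Since (28) and (29) involve only a finite number of inclusions, each of them is a membership test of a vector in a finitely generated cone (provided the generator set is finite, e.g. after replacing \(T_{im}\) by the vertices of \(\textrm{conv}\,T_{im}\)). Such a test can be carried out numerically. A minimal sketch with hypothetical generators (not taken from the paper), using SciPy's nonnegative least squares:

```python
import numpy as np
from scipy.optimize import nnls

def in_cone(generators, b, tol=1e-9):
    """Test whether b lies in cone{g : g in generators}.

    Membership in a finitely generated cone is equivalent to the existence
    of coefficients alpha >= 0 with G @ alpha = b, i.e. a zero residual in
    the nonnegative least-squares problem min ||G alpha - b||, alpha >= 0.
    """
    G = np.column_stack(generators)
    _, residual = nnls(G, b)
    return residual < tol

# Hypothetical generators in R^3, purely for illustration.
gens = [np.array([1.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.0])]
print(in_cone(gens, np.array([3.0, 1.0, 0.0])))   # True: 2*g1 + 1*g2
print(in_cone(gens, np.array([0.0, -1.0, 0.0])))  # False
```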

4 Equivalent Formulations of the Condition I)

In this section, we will present several equivalent formulations of condition I), as set forth in the previous section. This condition is one of the requirements ensuring the uniform LP duality of the copositive system. This gives us the opportunity to analyze this condition from different points of view and create a theoretical basis for comparing our results with those known in the literature.

Proposition 4.1

Given a linear system \(\mathcal {A}(\textbf{x})\in \mathcal {COP}^p\) with the corresponding sets \(T_{im}, \mathcal {T}, M(j),\ j \in J\), and the vectors \(\textbf{a}({\textbf{t}}), \textbf{b}(k,j), k\in M(j),\ j \in J,\) defined above, the following statements are equivalent:

  j) the condition \({\textbf{I}})\) is satisfied;

  jj) the cones \({\textrm{cone }}\{\textbf{b}(k,j),\, k \in M(j),\, j \in J\}\) and \({\textrm{cone }}\{\textbf{a}({\textbf{t}}), {\textbf{t}} \in T_{im}\}\) coincide;

  jjj) the equality \(\textrm{span }\{\textbf{b}(k,j),\, k \in M(j),\, j \in J\}={\textrm{cone }}\{\textbf{a}({\textbf{t}}), {\textbf{t}} \in T_{im}\}\) holds true.

In the terminology from [7], the condition jj) means that the sets \(\{\textbf{b}(k,j),\, k \in M(j),\, j \in J\}\) and \(\{\textbf{a}({\textbf{t}}), {\textbf{t}} \in T_{im}\}\) are positively equivalent.

Proof

It is evident that the condition jj) implies the condition j).

Let us show that j) implies jj). In fact, it follows from j) that \({\textrm{cone }}\{\textbf{b}(k,j),\, k \in M(j),\, j \in J\}\subset {\textrm{cone }}\{\textbf{a}({\textbf{t}}), {\textbf{t}} \in T_{im}\}\). On the other hand, it follows from Proposition 2.4 that \({{\textrm{cone }}}\{\textbf{a}({\textbf{t}}), {\textbf{t}} \in T_{im}\}\subset {\textrm{cone }}\{\textbf{b}(k,j),\, k \in M(j),\, j \in J\}.\) Hence, \({\textrm{cone }}\{\textbf{b}(k,j),\, k \in M(j),\, j \in J\} = {\textrm{cone }}\{\textbf{a}({\textbf{t}}), {\textbf{t}} \in T_{im}\} \), and we have shown that j) implies jj). Thus the equivalence of j) and jj) is proved.

To prove the equivalence of jj) and jjj), it is enough to show that

$$\begin{aligned} {\textrm{cone }}\{\textbf{b}(k,j),\, k \in M(j),\, j \in J\}=\textrm{span }\{\textbf{b}(k,j),\, k \in M(j),\, j \in J\}. \end{aligned}$$
(32)

From Proposition 2.2, it follows that relations (14) hold true for all \( k_0\in M( j_0)\) and \( j_0\in J\). This implies equality (32) and hence, the conditions jj) and jjj) are equivalent. \(\square \)

Let the set J be partitioned into subsets J(s),  \( s \in S\), such that relations (16) hold true. Introduce finite sets

$$\begin{aligned} V(s):= \{(i,j), i \in J(s),\, j\in J(s),\, i\le j\},\ s \in S, \; \ V:=\bigcup \limits _{s\in S}V(s). \end{aligned}$$
(33)

For a given vector \({\textbf{z}}=(z_0,z_1,\dots ,z_n)^{\top }\), denote

$$\begin{aligned} \mathcal {B}({\textbf{z}}):=\sum \limits _{m=0}^nA_mz_m. \end{aligned}$$
(34)

Proposition 4.2

The condition \({\textbf{I}})\) of Theorem 3.1 (see (28)) is equivalent to the following two conditions:

A1): the set \( L:=\textrm{cone}\{\textbf{a}({\textbf{t}}), {\textbf{t}}\in T_{im}\}\) is a subspace;

B1): for any \({\textbf{z}}\in {\mathbb {R}}^{n+1}\), the equalities

$$\begin{aligned} ({{{\varvec{\tau }}}}(i))^{\top }\mathcal {B}({\textbf{z}}){{{\varvec{\tau }}}}(j)=0 \ \forall (i,j)\in V, \end{aligned}$$
(35)

imply the equalities

$$\begin{aligned} {\textbf{e}}^{\top }_k\mathcal {B}({\textbf{z}}){{{\varvec{\tau }}}}(j)=0 \ \forall k\in M(j), \ \forall j\in J . \end{aligned}$$
(36)

Proof

Suppose that inclusions (28) hold true. Then it follows from Proposition 4.1 that L is a subspace. Let us prove that inclusions (28) imply the condition B1). Note that (see Proposition 9 in [18])

$$\begin{aligned} ({{{\varvec{\tau }}}}(i))^{\top }\mathcal {B}({\textbf{z}}){{{\varvec{\tau }}}}(j)=0 \ \forall (i,j)\in V\ \Longleftrightarrow \ {{\textbf{t}}}^{\top }\mathcal {B}({{\textbf{z}}}){{\textbf{t}}}=0 \ \forall {{\textbf{t}}} \in T_{im}. \end{aligned}$$
(37)

Hence equalities (35), (36) can be written as follows:

$$\begin{aligned} & {\textbf{z}}^{\top } \textbf{a}({\textbf{t}})=0 \ \forall {\textbf{t}} \in T_{im}, \end{aligned}$$
(38)
$$\begin{aligned} & {\textbf{z}}^{\top } \textbf{b}(k,j)=0 \ \forall k \in M(j),\ \forall j \in J. \end{aligned}$$
(39)

Then it is evident that under conditions (28), equalities (38) imply (39). Thus we have shown that the condition B1) follows from (28).

Now we will show that the conditions A1) and B1) imply inclusions (28). Notice that under the condition A1) a vector \({\textbf{z}}\) satisfies (38) iff \({\textbf{z}} \in L^\perp \), where \(L^\perp \) is the orthogonal complement to L in \({\mathbb {R}}^{n+1}.\) Hence it follows from the condition A1) and (37) that the condition B1) can be reformulated as \({\textbf{e}}^{\top }_k\mathcal {B}({\textbf{z}}){{{\varvec{\tau }}}}(j)=0 \ \forall k\in M(j), \ \forall j\in J, \ \forall {\textbf{z}} \in L^\bot ,\) or, equivalently,

$$\begin{aligned} {\textbf{z}}^{\top } \textbf{b}(k,j)=0 \ \forall k\in M(j), \ \forall j\in J, \ \forall {\textbf{z}} \in L^\bot .\end{aligned}$$
(40)

Given \(j\in J\) and \(k\in M(j)\), the vector \(\textbf{b}(k,j)\) admits the representation

$$\begin{aligned} \textbf{b}(k,j)=\textbf{b}^1(k,j)+\textbf{b}^2(k,j) \ \text{ with } \textbf{b}^1(k,j)\in L \text{ and } \textbf{b}^{2}(k,j)\in L^\bot , \end{aligned}$$

where due to (40) we have \(\textbf{b}^2(k,j)=0\) and \(\textbf{b}(k,j)=\textbf{b}^1(k,j)\in L\). Hence the conditions A1) and B1) imply the inclusions (28). \(\square \)

Remark 4.1

It follows from the proof of the proposition that conditions A1) and B1) can be reformulated as follows: the set L is a subspace and conditions (40) hold true. In this form, conditions A1) and B1) resemble the conditions formulated in Theorem 2.1 in [30], which provides necessary and sufficient conditions for uniform LP duality for SDP problems.

Remark 4.2

Let us introduce (\(n+1\))-dimensional vectors \(\textbf{a}(i,j):=\) \((({{\varvec{\tau }}}(i))^{\top } A_m{{\varvec{\tau }}}(j),\) \( m=0,1,\dots ,n)^{\top }, \, (i,j)\in V,\) and consider finite-dimensional matrices

$$\begin{aligned} {\mathbb {A}}:=(\textbf{a}(i,j), \, (i,j)\in V),\ {\mathbb {B}}:=(\textbf{b}(k,j),\ k \in M(j),\, j \in J). \end{aligned}$$

Then the condition B1) can be formulated as \(\ \textrm{rank}{\mathbb {A}}=\textrm{rank}({\mathbb {A}},\, {\mathbb {B}}).\)
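Once the columns of \({\mathbb {A}}\) and \({\mathbb {B}}\) are assembled, this rank reformulation of B1) is straightforward to test numerically. A minimal sketch with NumPy, using hypothetical placeholder columns in \({\mathbb {R}}^3\) (not taken from the paper's data):

```python
import numpy as np

def b1_holds(A_cols, B_cols, tol=1e-9):
    """Check rank(A) == rank([A, B]) for given column collections.

    A_cols: the columns a(i,j), (i,j) in V; B_cols: the columns b(k,j),
    k in M(j), j in J. Equality of the two ranks means every b-column
    lies in the span of the a-columns, which is the rank form of B1).
    """
    A = np.column_stack(A_cols)
    AB = np.column_stack(A_cols + B_cols)
    rank = lambda M: np.linalg.matrix_rank(M, tol=tol)
    return bool(rank(A) == rank(AB))

# Placeholder columns: here b lies in the span of the a-columns.
a_cols = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
b_cols = [np.array([2.0, -3.0, 0.0])]
print(b1_holds(a_cols, b_cols))                         # True
print(b1_holds(a_cols, [np.array([0.0, 0.0, 1.0])]))    # False
```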

Based on (19) we can prove the following statements:

  • the following equality holds true:

    $$\begin{aligned} r_{im}:=\textrm{rank } (\textbf{a}({\textbf{t}}), {\textbf{t}} \in T_{im})=\textrm{rank } (\textbf{a}({{{\varvec{\tau }}}}(l)+{{{\varvec{\tau }}}}(q)), (l,q)\in V), \end{aligned}$$
    (41)
  • for any \({\textbf{t}}\in T_{im}\), there exist numbers \(\beta _{lq} \in {\mathbb {R}},\) where \((l,q)\in V,\) such that

    $$\begin{aligned} {\textbf{t}}{\textbf{t}}^{\top }=\sum \limits _{(l,q)\in V}\beta _{lq}({{{\varvec{\tau }}}}(l)+{{{\varvec{\tau }}}}(q))({{{\varvec{\tau }}}}(l)+{{{\varvec{\tau }}}}(q))^{\top }. \end{aligned}$$
    (42)

Proposition 4.3

The condition A1) is equivalent to the following one:

A2) There exists a matrix \(U^*\in \mathcal{C}\mathcal{P}^p\) in the form

$$\begin{aligned} & U^*=\sum \limits _{i \in I}\alpha _i{\textbf{t}}(i)({\textbf{t}}(i))^{\top }+\frac{1}{4}\sum \limits _{(l,q) \in V}({{{\varvec{\tau }}}}(l)+ {{{\varvec{\tau }}}}(q))({{{\varvec{\tau }}}}(l)+{{{\varvec{\tau }}}}(q))^{\top } \nonumber \\ & \text{ with } \alpha _i>0, {\textbf{t}}(i)\in T_{im}, i \in I,\ |I|<\infty , \end{aligned}$$
(43)

such that

$$\begin{aligned} A_m\bullet U^*=0 \ \forall m=0,1,\dots ,n. \end{aligned}$$
(44)

Proof

Suppose that the condition A1) holds true. Notice that \(\frac{1}{2}({{\varvec{\tau }}}(l)+{{\varvec{\tau }}}(q))\in T_{im}\) for all \((l,q)\in V.\) Then, due to Propositions 10 and 11 in [18], there exist vectors and numbers \( {\textbf{t}}(i)\in T_{im},\, \alpha _i>0, i \in I,\) \(|I|<\infty ,\) such that

$$\begin{aligned} \sum \limits _{i \in I} \alpha _i\textbf{a}({\textbf{t}}(i))+\frac{1}{4}\sum \limits _{(l,q)\in V} \textbf{a} ( {{{\varvec{\tau }}}}(l)+{{{\varvec{\tau }}}}(q))=0. \end{aligned}$$
(45)

Let \(U^*\) be a matrix in the form (43). Then (45) can be rewritten in the form (44). Thus we have proved that the condition A1) implies the condition A2).

Now, suppose that the condition A2) holds true and let us rewrite equalities (44) in the form (45). It follows from (41) that

$$\begin{aligned} \textrm{rank} (\textbf{a}({\textbf{t}}(i)), i \in I, \ \textbf{a}({{{\varvec{\tau }}}}(l)+{{{\varvec{\tau }}}}(q)),\ (l,q)\in V)=r_{im}. \end{aligned}$$

Taking into account this equality, equality (45), and Proposition 10 in [18], we conclude that \(L:=\textrm{cone}\{\textbf{a}({\textbf{t}}), {\textbf{t}} \in T_{im}\} \) is a subspace. Thus we have shown that the condition A2) implies the condition A1). \(\square \)

To formulate the next propositions and lemmas, we need the following notation and definitions (see [22]). Given a matrix \(Y\in \mathcal {COP}^p, \) denote

$$\begin{aligned} & \textrm{dir}(Y, \mathcal {COP}^p): =\{ D\in \mathcal {S}^p: \ Y + \varepsilon D \in \mathcal {COP}^p \text{ for } \text{ some } \varepsilon >0 \},\\ & \textrm{ldir}(Y, \mathcal {COP}^p): = \textrm{dir}(Y, \mathcal {COP}^p) \cap -\textrm{dir}(Y, \mathcal {COP}^p),\\ & \textrm{tan}(Y, \mathcal {COP}^p): = \textrm{cl}\, ( \textrm{dir}(Y, \mathcal {COP}^p) ) \cap -\textrm{cl}\, ( \textrm{dir}(Y, \mathcal {COP}^p)). \end{aligned}$$

For matrices \(Y\in \mathcal {COP}^p\) and \(U\in \mathcal{C}\mathcal{P}^p\), we say that U is strictly complementary to Y if \(U\in \textrm{relint} (\mathcal{C}\mathcal{P}^p\cap Y^\perp )\).

Definition 4.1

A matrix Y is called a maximum slack in the system \(\mathcal {A}(\textbf{x})\in \mathcal {COP}^p\) if \(Y\in \textrm{relint}\, \mathcal {D}\), where \(\mathcal {D}:=\{D=\mathcal {A}(\textbf{x}), \,\textbf{x} \in X\}.\)

Denote:

$$\begin{aligned}\begin{aligned} \mathcal {R}(\mathcal {B})\!&:=\{D=\mathcal {B}({\textbf{z}}), \, {\textbf{z}} \in {\mathbb {R}}^{n+1}\}, \\ \mathcal {N}(\mathcal {B}^*):\!&=\{U\in \mathcal {S}^p: A_m\bullet U=0 \ \forall m=0,1,\dots ,n\},\end{aligned} \end{aligned}$$

where \(\mathcal {B}({\textbf{z}})\) is defined in (34).

The following lemma was proved in [18].

Lemma 4.1

A matrix \(Y^*\) is a maximum slack in the system \(\mathcal {A}(\textbf{x})\in \mathcal {COP}^p\) iff there exists \(\textbf{y}^*\in X\) such that \(Y^*=\mathcal {A}(\textbf{y}^*)\) and

$$\begin{aligned} {\textbf{t}}^{\top } \mathcal {A}(\textbf{y}^*){\textbf{t}}>0\ \forall {\textbf{t}} \in T\setminus T_{im},\ {\textbf{e}}^{\top }_k \mathcal {A}( \textbf{y}^*){{{\varvec{\tau }}}}(j)>0\ \forall k \in P\setminus M(j), j \in J. \end{aligned}$$
(46)

Proposition 4.4

The condition A1) is equivalent to the following one:

A3) For a maximum slack \(Y^*\) in the system \(\mathcal {A}(\textbf{x})\in \mathcal {COP}^p,\) there is \(U^* \in \mathcal {N}(\mathcal {B}^*) \cap \mathcal{C}\mathcal{P}^p\) strictly complementary to \(Y^*\).

Proof

Suppose that the condition A3) holds true. Since the set \(\mathcal{C}\mathcal{P}^p\cap (Y^*)^\perp \) is convex, we have

$$\begin{aligned} \begin{aligned}&\qquad \qquad \qquad \qquad U^*\in \textrm{relint} (\mathcal{C}\mathcal{P}^p\cap (Y^*)^\perp )\, \ \Longleftrightarrow \\&\forall U\!\in \mathcal{C}\mathcal{P}^p\cap (Y^*)^\perp \ \exists \, \varepsilon >0 \text{ such } \text{ that } (1+\varepsilon )U^*\!-\!\varepsilon U\!\in \mathcal{C}\mathcal{P}^p\cap (Y^*)^\perp . \end{aligned} \end{aligned}$$
(47)

It follows from Lemma 4.1 that \(U\in \mathcal{C}\mathcal{P}^p\cap (Y^*)^\perp \) iff U admits a representation

$$\begin{aligned} U=\sum \limits _{i \in {\bar{I}}}\bar{\alpha }_i\bar{\textbf{t}}(i)(\bar{\textbf{t}}(i))^{\top } \ \text{ with } \text{ some } \bar{\alpha }_i>0,\ \bar{\textbf{t}}(i)\in T_{im}, \ i \in {\bar{I}}. \end{aligned}$$
(48)

Hence \(U^*\) takes the form

$$\begin{aligned} U^*=\sum \limits _{i \in I}\alpha _i {\textbf{t}}(i)({\textbf{t}}(i))^{\top } \text{ with } \text{ some } \alpha _i>0,\ {\textbf{t}}(i)\in T_{im}, \ i \in I. \end{aligned}$$
(49)

Since \(U^* \in \mathcal {N}(\mathcal {B}^*)\), the equalities (44) hold true.

For any \( {\textbf{t}}\in T_{im}\), consider the matrix \(U={\textbf{t}}{\textbf{t}}^{\top }\in \mathcal{C}\mathcal{P}^p\cap (Y^*)^\perp .\) It follows from (47), (48) that there exist \(\varepsilon >0\) and \(\bar{\alpha }_i>0,\) \(\bar{\textbf{t}}(i)\in T_{im}\), \(i \in {\bar{I}},\) such that

$$\begin{aligned} (1+\varepsilon )U^*-\varepsilon {\textbf{t}}{\textbf{t}}^{\top }=\sum \limits _{i \in {\bar{I}}}\bar{\alpha }_i\bar{\textbf{t}}(i)(\bar{\textbf{t}}(i))^{\top } \ \Longleftrightarrow \ (1+\varepsilon )U^*=\varepsilon {\textbf{t}}{\textbf{t}}^{\top }+\sum \limits _{i \in {\bar{I}}}\bar{\alpha }_i\bar{\textbf{t}}(i)(\bar{\textbf{t}}(i))^{\top }. \end{aligned}$$

Taking into account the latter equality and (44), we obtain the equalities

$$\begin{aligned} A_m\bullet (\varepsilon {\textbf{t}}{\textbf{t}}^{\top }+\sum \limits _{i \in {\bar{I}}}\bar{\alpha }_i\bar{\textbf{t}}(i)(\bar{\textbf{t}}(i))^{\top })=0\ \forall m=0,1,\dots ,n, \end{aligned}$$

which can be rewritten in the form

$$\begin{aligned} \varepsilon \textbf{a}({\textbf{t}})+\sum \limits _{i \in {\bar{I}}}\bar{\alpha }_i\textbf{a}(\bar{\textbf{t}}(i))= 0 \text{ where } \varepsilon>0, \,\bar{\alpha }_i>0, \, \bar{\textbf{t}}(i)\in T_{im} \,\forall i \in {\bar{I}}. \end{aligned}$$

It follows from these relations that \(-\textbf{a}({\textbf{t}})\in L:=\textrm{cone}\{\textbf{a}({\textbf{t}}), {\textbf{t}}\in T_{im}\}\) for any \(\textbf{a}({\textbf{t}})\in L\) and hence, L is a subspace.

Now, suppose that the condition A1) holds true. It follows from Proposition 4.3 that the condition A1) is equivalent to the condition A2). According to this condition, there exists a matrix \(U^*\in \mathcal{C}\mathcal{P}^p\) in the form (43) satisfying equalities (44). By construction, \(U^*\in \mathcal{C}\mathcal{P}^p\cap (Y^*)^\perp \) and \(U^* \in \mathcal {N}(\mathcal {B}^*) \cap \mathcal{C}\mathcal{P}^p\).

Let us show that relations (47) hold true. Consider any matrix \(U\in \mathcal{C}\mathcal{P}^p\cap (Y^*)^\perp \). It follows from Lemma 4.1 that this matrix admits representation (48). For a fixed \(i \in {\bar{I}}\), consider the corresponding \(\bar{\textbf{t}}(i)\in T_{im}\). Then it follows from (42) that the matrix \(\bar{\textbf{t}}(i)(\bar{\textbf{t}}(i))^{\top }\) can be presented in the form

$$\begin{aligned}\bar{\textbf{t}}(i)(\bar{\textbf{t}}(i))^{\top } =\sum \limits _{(l,q) \in V}\beta _{l,q}(i)({{{\varvec{\tau }}}}(l)+{{{\varvec{\tau }}}}(q))({{{\varvec{\tau }}}}(l)+{{{\varvec{\tau }}}}(q))^{\top }.\end{aligned}$$

Consequently

$$\begin{aligned} (1+\varepsilon )U^*-\varepsilon U \!=\!(1+\varepsilon )\sum \limits _{i \in I}\alpha _i{\textbf{t}}(i)({\textbf{t}}(i))^{\top }+ \!\sum \limits _{(l,q) \in V}\bar{\beta }_{l,q}({{{\varvec{\tau }}}}(l)+{{{\varvec{\tau }}}}(q))({{{\varvec{\tau }}}}(l)+{{{\varvec{\tau }}}}(q))^{\top }, \end{aligned}$$

where \(\alpha _i>0, \, {\textbf{t}}(i)\in T_{im} \, \forall i \in I,\) \(0.5({{{\varvec{\tau }}}}(l)+\ {{{\varvec{\tau }}}}(q))\in T_{im}, \, \forall (l,q)\in V,\) and where for a sufficiently small \(\varepsilon >0\), it holds

$$\begin{aligned} \bar{\beta }_{l,q}:=(1+\varepsilon )/4-\varepsilon \sum \limits _{i\in {\bar{I}}}\bar{\alpha }_i\beta _{l,q}(i)>0 \ \forall (l,q)\in V. \end{aligned}$$

Then \((1+\varepsilon )U^*-\varepsilon U\in \mathcal{C}\mathcal{P}^p\cap (Y^*)^\perp \) for any \(U \in \mathcal{C}\mathcal{P}^p\cap (Y^*)^\perp \) and for some sufficiently small \(\varepsilon >0\). Hence \(U^*\in \textrm{relint} (\mathcal{C}\mathcal{P}^p\cap (Y^*)^\perp )\). \(\square \)

Proposition 4.5

The condition B1) is equivalent to the following one:

B2) For a maximum slack \(\ Y^*\) in the system \(\ \mathcal {A}(\textbf{x})\in \mathcal {COP}^p\), the set \( \ \ \mathcal {R}(\mathcal {B}) \cap (\textrm{tan}(Y^*, \mathcal {COP}^p) {\setminus } \textrm{ldir}(Y^*, \mathcal {COP}^p))\) is empty.

Proof

In [6] (see Theorems 6, 13), for \(A\in \mathcal {COP}^p\), it is shown that

$$\begin{aligned} & \textrm{dir}(A,\mathcal {COP}^p)\!=\!\big \{B\in \mathcal {S}^p: {\textbf{t}}^{\top } B{\textbf{t}}\ge 0 \ \forall {\textbf{t}} \in \mathcal {V}^A;\qquad \qquad \qquad \qquad \quad \ \ \ \nonumber \\ & \qquad \qquad \qquad {\textbf{e}}^{\top }_kB{\textbf{t}}\ge 0\ \forall {\textbf{t}} \in \mathcal {V}^A\cap \mathcal {V}^B,\ \forall k\in \{s \in P:{\textbf{e}}^{\top }_sA{\textbf{t}}=0\}\big \},\nonumber \\ & \textrm{tan}(A,\mathcal {COP}^p)\!=\! \big \{B\in \mathcal {S}^p: {\textbf{t}}^{\top } B{{{\varvec{\tau }}}}=0\ \forall \{{\textbf{t}},{{{\varvec{\tau }}}}\} \subset \mathcal {V}_{min}^A \ \text{ s.t. } \ {\textbf{t}}^{\top } A {{{\varvec{\tau }}}} =0\big \}, \end{aligned}$$
(50)

where \(\mathcal {V}^A:=\{{\textbf{t}} \in T: {\textbf{t}}^{\top } A {\textbf{t}}=0\}\) is the set of zeros of A and \(\mathcal {V}_{min}^A \subset \mathcal {V}^A \) is the set of minimal zeros of A; for the definitions, see [6]. Without loss of generality, we assume that the minimal zeros are normalized: \(||{\textbf{t}}||_1=1\) for \({\textbf{t}}\in \mathcal {V}^A_{min}\). Note that it was proved in [16] that for \(A\in \mathcal {COP}^p\), the set of normalized minimal zeros \(\mathcal {V}^A_{min}\) coincides with the set of vertices of the set \(\textrm{conv}T_{im}(A)\), where \(T_{im}(A)=\{{\textbf{t}} \in T: {\textbf{t}}^{\top } A {\textbf{t}}=0\}\) is the set of normalized zeros of A.

Let us present the condition B2) in another form. It follows from Lemma 4.1 that if \(Y^*\) is a maximum slack in the system \(\mathcal {A}(\textbf{x})\in \mathcal {COP}^p\), then there exists a vector \(\textbf{y}^*\in X\) such that conditions (46) hold true. Taking into account the relations (46), (50) and \(\mathcal {V}^{Y^*}_{min}= \mathcal {T}=\{{{\varvec{\tau }}}(j), j \in J\},\) it is easy to see that

$$\begin{aligned} \begin{aligned}&\textrm{tan}(\mathcal {A}(\textbf{y}^*),\mathcal {COP}^p)\!\setminus \!\textrm{ldir}(\mathcal {A}(\textbf{y}^*),\mathcal {COP}^p)\!= \!\{B\!\in \! \mathcal {S}^p\!:\!({{{\varvec{\tau }}}}(i))^{\top } B{{{\varvec{\tau }}}}(j)\!=\!0 \,\forall (i,j)\!\in \! V;\\&\ \ \ \ \ \text{ and } \exists \, \bar{\textbf{t}} \in T_{im}\cap \mathcal {V}^{B}, \ \exists \, {\bar{k}}\in P \text{ such } \text{ that } {\textbf{e}}^{\top }_{{\bar{k}}}\mathcal {A}(\textbf{y}^*)\bar{\textbf{t}}=0,\ {\textbf{e}}^{\top }_{{\bar{k}}}B\bar{\textbf{t}}\not =0\}.\end{aligned} \end{aligned}$$

Consequently, the condition B2) can be reformulated as follows:

B2*) For any \({\textbf{z}} \in {\mathbb {R}}^{n+1},\) the equalities (35) imply the equalities

$$\begin{aligned} {\textbf{e}}^{\top }_{ k}\mathcal {B}({\textbf{z}}){\textbf{t}} =0\ \forall \, k \in \mathcal {P}({\textbf{t}}), \ \forall {\textbf{t}} \in T_{im}\cap \mathcal {V}^{\mathcal {B}({\textbf{z}})}=T_{im}, \end{aligned}$$
(51)

where \(\mathcal {P}({\textbf{t}}):=\{q\in P: {\textbf{e}}^{\top }_{ q}\mathcal {A}(\textbf{y}^*){\textbf{t}} =0\}.\)

Suppose that the condition B2*) holds true. For \(j \in J\), consider the corresponding vector \({{{\varvec{\tau }}}}(j)\in T_{im}\). By construction (see (46)), we have \( \mathcal {P}({{{\varvec{\tau }}}}(j)) =M(j).\) Consequently, it follows from conditions (51) that \( {\textbf{e}}^{\top }_{ k}\mathcal {B}({\textbf{z}}){{{\varvec{\tau }}}}(j)=0\) for all \(k \in M(j).\) Hence we have shown that the condition B2*) implies the condition B1).

Now, suppose that the condition B1) holds true. Consider any \({\textbf{t}}\in T_{im}\). It follows from (16) that \({\textbf{t}} \in T_{im}(s)\) with some \(s \in S\) and consequently, \({\textbf{t}}\) admits representation (19). Hence for any \(k\in \mathcal {P}({\textbf{t}})\), we have \( 0={\textbf{e}}^{\top }_{k}\mathcal {A}(\textbf{y}^*){\textbf{t}} =\sum \limits _{j\in \varDelta J(s)}\alpha _j{\textbf{e}}^{\top }_{k}\mathcal {A}(\textbf{y}^*){{{\varvec{\tau }}}}(j).\) Combining this equality with the inequalities \(\alpha _j>0\) \(\forall \, j\in \varDelta J(s)\) and \({\textbf{e}}^{\top }_q\mathcal {A}(\textbf{y}^*){{{\varvec{\tau }}}}(j)>0\) \( \forall q \in P{\setminus } M(j)\), \(\forall j \in \varDelta J(s),\) we conclude that for \({\textbf{t}}\in T_{im}(s),\) the following implication is valid:

$$\begin{aligned} k \in \mathcal {P}({\textbf{t}})\ \Longrightarrow \ \ k \in M(j) \ \forall j \in \varDelta J(s). \end{aligned}$$
(52)

Now, for the same \({\textbf{t}} \in T_{im}(s)\) and any \({\textbf{z}}\in {\mathbb {R}}^{n+1}\) satisfying (35), calculate \({\textbf{e}}^{\top }_{k}\mathcal {B}({\textbf{z}}){\textbf{t}}\) for \(k \in \mathcal {P}({\textbf{t}})\), taking into account conditions (19), (36), and (52):

$$\begin{aligned} {\textbf{e}}^{\top }_{k}\mathcal {B}({\textbf{z}}){\textbf{t}}=\sum \limits _{j\in \varDelta J(s)}\alpha _j{\textbf{e}}^{\top }_{k}\mathcal {B}({\textbf{z}}){{{\varvec{\tau }}}}(j) =0. \end{aligned}$$

Thus we have shown that the condition B1) implies B2*). \(\square \)

The above considerations can be formulated as follows.

Lemma 4.2

For any \(k\in \{1,2,3\}\) and any \(m\in \{1,2\}\), the conditions i) and ii) below are equivalent to each other, and are necessary for the consistent system \(\mathcal {A}(\textbf{x})\in \mathcal {COP}^p\) to yield the uniform LP duality property:

  i) the condition \( \mathrm{\textbf{I}})\) holds true;

  ii) the conditions Ak) and Bm) hold true.

The condition ii) with \(k=3\) and \(m=2\) (i.e., the conditions A3) and B2)) is the necessary condition for a linear conic system to yield the uniform LP duality that was proved in [22], applied here to the copositive system \(\mathcal {A}(\textbf{x})\in \mathcal {COP}^p.\)

It was shown in [22] that when \(\mathcal {K}\) is a nice cone, the conditions A3) and B2) are necessary and sufficient for the linear consistent conic system \(\mathcal {A}(\textbf{x})\in \mathcal {K}\) to yield the uniform LP duality.

In general, the conditions formulated in Lemma 4.2 are only necessary, but not sufficient, for the system \(\mathcal {A}(\textbf{x})\in \mathcal {COP}^p\) to yield the uniform LP duality. This is illustrated by a simple example presented in the next section.

5 Examples

In this section, we consider two examples that illustrate our results and help compare them with those known in the literature.

Example 1

Consider the system \(\mathcal {A}(\textbf{x})\in \mathcal {COP}^p\) with the following data:

$$\begin{aligned} & n=2,\; p=3, \ A_0=\left( \begin{array}{ccc} a& \quad 0 & \quad 0\\ 0& \quad 0& \quad 0\\ 0& \quad 0& \quad 0\end{array}\right) , A_1=\left( \begin{array}{ccc} 0& \quad 0 & \quad 0\\ 0& \quad -1& \quad 0\\ 0& \quad 0& \quad 0\end{array}\right) ,\nonumber \\ & A_2=\left( \begin{array}{ccc} -1& \quad 0 & \quad 0\\ 0& \quad 0& \quad -1\\ 0& \quad -1& \quad 0\end{array}\right) , \ a>0. \end{aligned}$$
(53)

For \({\textbf{t}}^*=(0,0,1)^{\top }\), we have \(({\textbf{t}}^*)^{\top }\mathcal {A}(\textbf{x}){\textbf{t}}^*=0,\; {\textbf{e}}_1^{\top }\mathcal {A}(\textbf{x}){\textbf{t}}^*=0\), \( {\textbf{e}}_{3}^{\top }\mathcal {A}(\textbf{x}){\textbf{t}}^*=0\) for all \(\textbf{x}\in {\mathbb {R}}^2.\) It is easy to check that for the vector \( \textbf{x}^*=(-1,-1)^{\top }\) and the data in (53), we have \({\textbf{t}}^{\top }\mathcal {A}(\textbf{x}^*){\textbf{t}}>0\) for all \({\textbf{t}}\in T{\setminus }\{{\textbf{t}}^*\}\) and \({\textbf{e}}_2^{\top }\mathcal {A}(\textbf{x}^*){\textbf{t}}^*>0.\)

Hence, for the system under consideration, \(\mathcal {A}(\textbf{x}^*)\) is a maximum slack, \(T_{im}=\{ {{{\varvec{\tau }}}}(1)={\textbf{t}}^*\},\) \(M(1)=\{1,3\}\), \(J=\{1\},\) \(\textbf{a}({\textbf{t}}^*)=(0,\, 0,\,0)^{\top }, \) \( \textbf{b}(1,1)=\textbf{b}(3,1)=(0,\, 0,\,0)^{\top }, \) \(\textbf{b}(2,1)=(0,\, 0,\, -1)^{\top }.\) Hence \(L:=\textrm{cone}\{\textbf{a}({\textbf{t}}), {\textbf{t}} \in T_{im}\}=\{ \textbf{a}({\textbf{t}}^*)\}\) and \(\textbf{b}(k,j)\in L\) for all \(k \in M(j)\) and all \(j \in J.\)
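These computations can be verified numerically. A minimal sketch (with the illustrative choice \(a=1\); any \(a>0\) behaves the same) that confirms \({\textbf{t}}^*\) is the only zero of the maximum slack \(\mathcal {A}(\textbf{x}^*)\) on a grid of the simplex T, and that rows 1 and 3 (but not row 2) of \(\mathcal {A}(\textbf{x}^*)\) annihilate \({\textbf{t}}^*\), i.e. \(M(1)=\{1,3\}\):

```python
import numpy as np

a = 1.0  # illustrative value; the paper only requires a > 0
A0 = np.array([[a, 0, 0], [0, 0, 0], [0, 0, 0]])
A1 = np.array([[0, 0, 0], [0, -1, 0], [0, 0, 0]])
A2 = np.array([[-1, 0, 0], [0, 0, -1], [0, -1, 0]])

def A(x):
    return A0 + x[0] * A1 + x[1] * A2

M = A(np.array([-1.0, -1.0]))          # the maximum slack A(x*)
t_star = np.array([0.0, 0.0, 1.0])

# t* is a zero of A(x*); rows 1 and 3 annihilate t*, row 2 does not.
assert abs(t_star @ M @ t_star) < 1e-12
assert abs(M[0] @ t_star) < 1e-12 and abs(M[2] @ t_star) < 1e-12
assert M[1] @ t_star > 0               # hence M(1) = {1, 3}

# On a simplex grid, t^T A(x*) t > 0 at every point except t*.
n = 60
for i in range(n + 1):
    for j in range(n + 1 - i):
        t = np.array([i, j, n - i - j]) / n
        if np.linalg.norm(t - t_star) > 1e-9:
            assert t @ M @ t > 0
print("Example 1 checks passed")
```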

Thus we see that for this system, the condition I) is satisfied, and it follows from Lemma 4.2 that the conditions Ak) for \(k=1,2,3\) and the conditions Bm) for \(m=1,2\) are satisfied as well. (It is worth noting that the fulfillment of these conditions can also be checked directly.)

However, the system under consideration does not yield the uniform LP duality. In fact, it was shown in [14] that for the primal problem (P) with the cost vector \(c^{\top }=(0,\ -1)\) and the corresponding dual problem (D), there is a positive duality gap: \(Val({{\textbf{P}}})-Val({\textbf{D}})=a>0.\)

The reason the uniform LP duality fails here is that for the system with the data (53), the condition II) is not satisfied. Indeed, for the vector \(\bar{\textbf{x}}=(-1,\, 0)^{\top }\), we have \({\textbf{t}}^{\top } \mathcal {A}(\bar{\textbf{x}}){\textbf{t}}>0\) \( \forall {\textbf{t}} \in T{\setminus } \{{\textbf{t}}^*\}\) and \({\textbf{e}}^{\top } _k\mathcal {A}(\bar{\textbf{x}}){\textbf{t}}^*=0\) for \(k=1,2,3.\) Hence for the system under consideration, we have \(N(1)=\{2\},\) \(T_{im}(k=2,j=1)=T_{im}=\{{\textbf{t}}^*\}\), and the condition II), for \(j=1\in J\) and \(k=2\in N(j)\), takes the form

$$\begin{aligned} \textbf{b}(2,1)=(0,\,0,\, -1)^{\top }\in {\textrm{cone }}\{\textbf{a}({\textbf{t}}), {\textbf{t}} \in T_{im}(k,j)\}=\{\textbf{a}({\textbf{t}}^*)\}=\{(0,\, 0,\,0)^{\top }\}. \end{aligned}$$

It is evident that this condition does not hold true.

Example 1 shows that the condition II) is essential and cannot be omitted. This example also shows that for the cone \(\mathcal {COP}^p\), the conditions formulated in [22] are not sufficient, unlike the case of the cone of positive semidefinite matrices \(\mathcal {S}^p_+\), for which these conditions are both necessary and sufficient.

Let us now consider an example where the condition I) is violated.

Example 2

Let the system \(\mathcal {A}(\textbf{x})\in \mathcal {COP}^p\) be formed with the following data:

$$\begin{aligned} n=1, \ p=3, \ A_0= \left( \begin{array}{ccc} a& \quad 0 & \quad -a\\ 0 & \quad 0& \quad 0\\ -a& \quad 0 & \quad a\end{array}\right) , \ A_1= \left( \begin{array}{ccc} 1& \quad -1 & \quad 2\\ -1 & \quad 0& \quad 1\\ 2& \quad 1 & \quad -5 \end{array}\right) ,\ a>0.\end{aligned}$$
(54)

This system admits a unique feasible solution \(\textbf{x}=x_1=0.\) Hence \(X=\{0\}\), and it is easy to check that \(T_{im}=\{{\textbf{t}} \in T:t_1=t_3\}\), where \(T=\{{\textbf{t}} \in {\mathbb {R}}^3_+:t_1+t_2+t_3=1\}.\) The vertices of the set \(\textrm{conv} T_{im}\) are \({{{\varvec{\tau }}}}(1)=0.5(1,\, 0,\, 1)^{\top }\) and \({{{\varvec{\tau }}}}(2)=(0,\, 1,\, 0)^{\top },\) and the sets M(j), \(N(j)=N_*(j)\setminus M(j),\) defined in (5) and (6), take the form \(M(j)=\{1,2,3\}\), \(N(j)=\emptyset \) for \(j \in J=\{1,2\}.\)

It is easy to see that \({\textbf{t}}^{\top } A_0{\textbf{t}}=0,\) \( {\textbf{t}}^{\top } A_1{\textbf{t}}=0\) for all \({\textbf{t}} \in T_{im},\) and \({\textbf{e}}^{\top }_1A_0{{{\varvec{\tau }}}}(1)=0,\) \( {\textbf{e}}^{\top }_1 A_1{{{\varvec{\tau }}}}(1)=1.5.\) Hence, \(\textbf{a}({\textbf{t}})=\textbf{0}\ \forall {\textbf{t}} \in T_{im}\) and \(\textbf{b}(k_0,j_0)=(0,\, 1.5)^{\top }\) for \(k_0=1\), \(j_0=1\), \(k_0\in M(j_0).\) Thus we obtain \(\textbf{b} (k_0,j_0)\not \in {\textrm{cone }} \{ \textbf{a} ({\textbf{t}} ), {\textbf{t}} \in T_{im}\},\) from which we conclude that the condition I) does not hold true and, consequently, the system under consideration does not yield the uniform LP duality.
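The claims above can be confirmed numerically. A minimal sketch (with the illustrative choice \(a=1\)) checking that both quadratic forms vanish on \(T_{im}=\{{\textbf{t}}\in T: t_1=t_3\}\), so that \(\textbf{a}({\textbf{t}})=\textbf{0}\) there, while \(\textbf{b}(1,1)=(0,\,1.5)^{\top }\ne \textbf{0}\):

```python
import numpy as np

a = 1.0  # illustrative value; the paper only requires a > 0
A0 = np.array([[a, 0, -a], [0, 0, 0], [-a, 0, a]])
A1 = np.array([[1, -1, 2], [-1, 0, 1], [2, 1, -5]])

tau1 = np.array([0.5, 0.0, 0.5])   # vertices of conv(T_im)
tau2 = np.array([0.0, 1.0, 0.0])

# Both quadratic forms vanish on T_im = {t in T : t1 = t3},
# hence a(t) = 0 for every t in T_im.
for t2 in np.linspace(0.0, 1.0, 11):
    t = np.array([(1 - t2) / 2, t2, (1 - t2) / 2])   # t1 = t3, entries sum to 1
    assert abs(t @ A0 @ t) < 1e-12
    assert abs(t @ A1 @ t) < 1e-12

# b(1,1) = (e1^T A0 tau(1), e1^T A1 tau(1)) = (0, 1.5) is nonzero,
# so b(1,1) is not in cone{a(t), t in T_im} = {0}: condition I) fails.
b11 = np.array([A0[0] @ tau1, A1[0] @ tau1])
print(b11)
```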

Let us show this directly. Since the system \(\mathcal {A}(\textbf{x} )\in \mathcal {COP}^p\) with data (54) has the unique feasible solution \(\textbf{x} = x_{1}=0\), the corresponding primal problem (P) has the optimal solution \(\textbf{x}^* = x_{1}^*=0\) with \(Val({{\textbf{P}}})=0\) for any objective function \({\textbf{c}}^{\top } \textbf{x} =c_{1}x_{1},\ c_1\in {\mathbb {R}}\). The corresponding dual problem (D) takes the form

$$\begin{aligned} \max (-A_0\bullet U)\ \text{ s.t. } A_1\bullet U=c_1,\; U\in \mathcal{C}\mathcal{P}^p. \end{aligned}$$

Suppose that the system \(\mathcal {A}(\textbf{x})\in \mathcal {COP}^p\) yields the uniform LP duality. Hence the dual problem should have an optimal solution \(U^0\) such that

$$\begin{aligned} \begin{aligned}&U^0=\sum \limits _{i \in I}\alpha _i{\textbf{t}}(i)({\textbf{t}}(i))^{\top }\ \text{ with } \alpha _i>0,\, {\textbf{t}}(i)\in T, \, i \in I,\\&-A_0\bullet U^0=0,\ A_1\bullet U^0=c_1. \end{aligned} \end{aligned}$$
(55)

Since \({\textbf{t}}^{\top } A_0{\textbf{t}}=a(t_1-t_3)^2\) for all \( {\textbf{t}} \in {\mathbb {R}}^3,\) we conclude that the equality \( -A_0\bullet U^0=0\) implies the equalities \( t_1(i)=t_3(i)\) for all \(i \in I\) and hence \(({\textbf{t}}(i))^{\top } A_1{\textbf{t}}(i)=0\) for all \( i\in I\). Then \( A_1\bullet U^0=0.\) Thus we have shown that for any \(c_{1}\not =0\), the dual problem has no solution satisfying relations (55), which permits us to conclude that the system \(\mathcal {A}(\textbf{x})\in \mathcal {COP}^p\) with the data defined in (54) does not yield the uniform LP duality.
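The key step of this argument — that \(A_0\bullet U=0\) forces \(A_1\bullet U=0\) for every matrix U of the form in (55) — can also be checked numerically. A minimal sketch (again with the illustrative choice \(a=1\)), building a random such U whose generators satisfy \(t_1(i)=t_3(i)\):

```python
import numpy as np

rng = np.random.default_rng(0)
a = 1.0  # illustrative value; the paper only requires a > 0
A0 = np.array([[a, 0, -a], [0, 0, 0], [-a, 0, a]])
A1 = np.array([[1, -1, 2], [-1, 0, 1], [2, 1, -5]])

# Build a random completely positive U = sum_i alpha_i t(i) t(i)^T with
# A0 . U = 0: since t^T A0 t = a (t1 - t3)^2, this forces t1(i) = t3(i).
U = np.zeros((3, 3))
for _ in range(5):
    s, t2 = rng.random(), rng.random()
    t = np.array([s, t2, s]) / (2 * s + t2)   # normalized generator, t1 = t3
    U += rng.random() * np.outer(t, t)

# A . U here denotes the trace inner product sum_ij A_ij U_ij.
assert abs(np.sum(A0 * U)) < 1e-12    # A0 . U = 0 by construction
assert abs(np.sum(A1 * U)) < 1e-12    # ... and then A1 . U = 0 as well
print("A1 . U vanishes whenever A0 . U does")
```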

As noted in [24] (see page 3), for the SDP system \(\mathcal {A}(\textbf{x})\in \mathcal {S}^p_+\) with \(n=1\), the uniform LP duality property is always satisfied. In Example 2, however, we presented a CoP system \(\mathcal {A}(\textbf{x})\in \mathcal {COP}^p\) with \(n=1\) that does not yield the uniform LP duality. This further confirms that CoP systems are much more complex (and more pathological) than SDP systems.

6 On a Relationship of the Obtained Results with the Uniform Duality for SIP Problems

Consider a general linear SIP problem in the form

$$\begin{aligned} {\textbf{P}}^*_{SIP}: \qquad \qquad \qquad \displaystyle \min _{x \in {\mathbb {R}}^n } \ {\textbf{c}}^{\top } \textbf{x} \;\;\; \text{ s.t. } (1,\textbf{x}^{\top })\textbf{a}({\textbf{t}})\ge 0 \ \forall {\textbf{t}} \in T, \qquad \qquad \end{aligned}$$

where T is an index set and \(\textbf{a}({\textbf{t}})=(a_m({\textbf{t}}), m=0,1,\dots ,n)^{\top },\) \( {\textbf{t}} \in T.\) Denote

$$\begin{aligned} G:=\left\{ \textbf{a}({\textbf{t}}), {\textbf{t}} \in T; \ (1,\textbf{0}^{\top }_n)^{\top }\right\} . \end{aligned}$$
(56)

The following theorem is proved in [7] (Theorem 3.2, conditions (ii) and (v)).

Theorem 6.1

The consistent constraint system of the problem (\({\textbf{P}}^*_{SIP}\)) yields the uniform LP duality iff

$$\begin{aligned} {\textrm{cone }}(G)=\textrm{cone} (F\cup W) \end{aligned}$$
(57)

with some \(F\subset {\mathbb {R}}^{n+1}\) and \(W\subset {\mathbb {R}}^{n+1}\) satisfying the following conditions: F is finite, W is compact, and there exists a vector \(\bar{\textbf{x}}\in {\mathbb {R}}^n\) such that

$$\begin{aligned} \begin{aligned}&s_{0}+\textbf{s}^{\top } \bar{\textbf{x}}=0 \ \forall (s_{0}, \textbf{s}^{\top })^{\top } \in F, \ s_{0} \in {\mathbb {R}},\\&t_{0}+{\textbf{t}}^{\top } \bar{\textbf{x}} >0 \ \forall (t_{0}, {\textbf{t}}^{\top })^{\top } \in W,\ t_{0}\in {\mathbb {R}}.\end{aligned} \end{aligned}$$
(58)

It follows from the equivalent description (2) of the cone \(\mathcal {COP}^p\) that the problem (P) is equivalent to the linear SIP problem (\({\textbf{P}}^*_{SIP}\)) where the set T and the vector \(\textbf{a}({\textbf{t}})\) are defined in (1) and (12), respectively. Let us denote this special SIP problem by (\({\textbf{P}}_{SIP}\)).

Since the problem (\({\textbf{P}}_{SIP}\)) is a special case of a general linear SIP problem, the statements of Theorem 6.1 apply to it as well. Theorem 6.1, however, gives no indication of how the sets F and W can be constructed. It is clearly of interest to know how to find these sets and to characterize their properties for our CoP problem. The following theorem answers this question for the CoP problem under consideration.

Theorem 6.2

Given the problem (\({\textbf{P}}_{SIP}\)), the sets F and W, mentioned in Theorem 6.1, can be chosen as follows:

$$\begin{aligned} & F:=\{ \textbf{b}(k,j), k \in M(j),j\in J\}, \end{aligned}$$
(59)
$$\begin{aligned} & {W}:=\{\textbf{a}({\textbf{t}}),\textbf{t} \in \varOmega \}\cup \{\textbf{b}(k,j), k \in N(j),j \in J\}\cup \{(1,\textbf{0}^{\top }_n)^{\top }\}, \end{aligned}$$
(60)

where the set \(\varOmega \) is defined in (9).

Proof

In what follows, we suppose that the sets G, F, and W are defined by (56), (59), and (60), respectively. Let us prove the theorem in three steps.

\(\bullet \) First, let us show that the inclusion

$$\begin{aligned} {\textrm{cone }}(G)\subset \textrm{cone} (F\cup { W})\end{aligned}$$
(61)

holds true. To do this, let us consider the problem (\({\textbf{P}}_{*}\)) (see (11)) that can be rewritten in the form

$$\begin{aligned} {\textbf{P}}_{*}\!: \ \displaystyle \min _{\textbf{x} \in {\mathbb {R}}^n } {\textbf{c}}\!^{\top } \textbf{x} \;\; \text{ s.t. } (1,\textbf{x}^{\top }\!)\textbf{a}({\textbf{t}})\ge 0 \; \forall {\textbf{t}} \in \varOmega ,\; (1, \textbf{x}^{\top }\!)\textbf{b}(k,j)\ge 0, k \in N_*(j), j \in J. \end{aligned}$$

It follows from (7) that X is the feasible set of the problem (\({\textbf{P}}_{*}\)), and since there exists \(\textbf{x}^{*}\in X\) satisfying (8), it is easy to show that this problem yields the uniform LP duality (see the proof of Proposition 3.1).

For a fixed \({\textbf{t}}\in T\), consider the consistent problem (\({\textbf{P}}_{*}\)) with \({\textbf{c}}^{\top }=(c_{m}={\textbf{t}}^{\top } A_m{\textbf{t}},\, m=1,\dots ,n).\) Then for all \(\textbf{x}\in X\), we have

$$\begin{aligned} {\textbf{c}}^{\top } \textbf{x}=\sum \limits _{m=1}^n {\textbf{t}}^{\top } A_m {\textbf{t}}x_{m}\ge - {\textbf{t}}^{\top } A_0 {\textbf{t}}>-\infty . \end{aligned}$$
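This bound can be illustrated numerically. The sketch below uses entrywise-nonnegative matrices \(A_m\) (such matrices are copositive) and a nonnegative \(\textbf{x}\), so that \({\textbf{t}}^{\top }\mathcal {A}(\textbf{x}){\textbf{t}}\ge 0\) holds for \({\textbf{t}}\ge \textbf{0}\) by construction; all data are randomly generated toys, not from the paper.

```python
import numpy as np

# Toy numeric sanity check of the bound: if A(x) = A_0 + sum_m x_m A_m
# satisfies t^T A(x) t >= 0 for t >= 0, then
#   c^T x = sum_m (t^T A_m t) x_m >= -t^T A_0 t.
# Entrywise-nonnegative matrices are copositive, so we build A(x) that way.

rng = np.random.default_rng(0)
p, n = 4, 3
A = [rng.random((p, p)) for _ in range(n + 1)]   # A_0, ..., A_n, all >= 0
x = rng.random(n)                                # x >= 0 keeps A(x) >= 0
t = rng.random(p)                                # t in the nonneg. orthant

lhs = sum(x[m] * (t @ A[m + 1] @ t) for m in range(n))   # c^T x
rhs = -t @ A[0] @ t                                      # -t^T A_0 t
print(lhs >= rhs)                                        # True
```

Here the left side is nonnegative and the right side is nonpositive, so the inequality holds with room to spare; the interesting cases in the paper are those where \(\mathcal {A}(\textbf{x})\) is copositive without being entrywise nonnegative.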

Consequently, \(Val({\textbf{P}}_{*})=\beta - {\textbf{t}}^{\top } A_0 {\textbf{t}}\) with some \(\beta \ge 0\). Taking into account that the problem (\({\textbf{P}}_{*}\)) yields the uniform LP duality, we conclude that there exist numbers \(\alpha _i> 0\) and vectors \({\textbf{t}}(i)\in \varOmega \), \(i \in I\), \(|I|<\infty \), together with numbers \(\lambda _k(j)\ge 0\), \(k\in N_{*}(j)\), \(j \in J\), such that equality (25) holds true with \({\textbf{c}}=(c_m={\textbf{e}}^{\top }_m\textbf{a}({\textbf{t}}),\, m=1,\dots ,n)\) and \(Val({\textbf{P}}_{*})=\beta - {\textbf{t}}^{\top } A_0 {\textbf{t}}\). This implies

$$\begin{aligned} \textbf{a}({\textbf{t}})=\sum \limits _{i\in I}\alpha _i\textbf{a}({\textbf{t}}(i))+ \sum \limits _{j\in J} \sum \limits _{k\in N_{*}(j)}\lambda _k(j)\textbf{b}(k,j)+(1,\textbf{0}^{\top }_n)^{\top } \beta \in \textrm{cone} (F\cup { W}). \end{aligned}$$

The inclusion above is satisfied for any \({\textbf{t}}\in T\); hence, inclusion (61) holds true.

\(\bullet \) Now, let us show that the following conditions are equivalent:

C1: \(\ \ \ \textbf{b}(k,j)\in \textrm{cone}\{\textbf{a}({\textbf{t}}), {\textbf{t}}\in T\} \ \forall k\in N_{*}(j), \ \ \forall j \in J,\)

C2: \(\ \ \ \textrm{cone} (F\cup { W})\subset {\textrm{cone }}(G).\)

Notice that since \(\varOmega \subset T\) and \((1,\textbf{0}^{\top }_n)^{\top }\in G\), we always have

$$\begin{aligned} {\textrm{cone}}\left\{ \textbf{a}({\textbf{t}}), \, {\textbf{t}}\in \varOmega ;\ (1,\textbf{0}^{\top }_n)^{\top }\right\} \subset \textrm{cone}(G). \end{aligned}$$
(62)

Suppose that the condition C1 holds true. Then it follows from this condition and the inclusion (62) that the condition C2 holds true as well.

Now, suppose that the condition C2 holds true. Then the following inclusions hold:

$$\begin{aligned} \textbf{b}(k,j)\in \textrm{cone}\left\{ \textbf{a}({\textbf{t}}), \, {\textbf{t}}\in T;\ (1,\textbf{0}^{\top }_n)^{\top }\right\} \ \ \forall k \in N_*(j), \ \forall j \in J. \end{aligned}$$

This implies that for \(j\in J\) and \(k\in N_{*}(j),\) we have

$$\begin{aligned} \textbf{b}(k,j)=\sum \limits _{i\in I(k,j)}\alpha _i\textbf{a}({\textbf{t}}(i))+\beta (1,\textbf{0}^{\top }_n)^{\top } \end{aligned}$$
(63)

with some finite index set \(I(k,j)\subset {\mathbb {N}},\) \(\alpha _i>0,\) \({\textbf{t}}(i)\in T, \) \( i\in I(k,j),\) and \(\beta \ge 0.\) Let us multiply both sides of equality (63) by the vector \((1,(\textbf{x}^{*}(k,j))^{\top })\), where \(\textbf{x}^{*}(k,j)\) satisfies relations (27). As a result, we obtain

$$\begin{aligned} 0=\sum \limits _{i\in I(k,j)}\alpha _i({\textbf{t}}(i))^{\top } \mathcal {A}(\textbf{x}^{*}(k,j))\,{\textbf{t}}(i)+\beta . \end{aligned}$$

From this equality and the inequalities \(\alpha _i>0\) and \(({\textbf{t}}(i))^{\top } \mathcal {A}(\textbf{x}^{*}(k,j)){\textbf{t}}(i)\ge 0\) for \(i\in I(k,j)\), together with \(\beta \ge 0\), we conclude that \(\beta = 0\). Then it follows from (63) with \(\beta = 0\) that the condition C1 holds true. Thus, the equivalence of the conditions C1 and C2 is proved.

\(\bullet \) Now, let us finalize the proof of the theorem. Note that the problems (\({\textbf{P}}_{SIP}\)) and (P) are equivalent and the respective dual problems are equivalent too. Hence the problem (\({\textbf{P}}_{SIP}\)) yields the uniform LP duality simultaneously with the problem (P).

Suppose that the uniform LP duality property is satisfied for the problem (\({\textbf{P}}_{SIP}\)). Then the problem (P) also satisfies this property and it follows from Proposition 3.1 that the condition C1 is fulfilled. This implies that the condition C2 holds as well. It is evident that the equality (57) follows from the condition C2 and inclusion (61).

Suppose now that the equality (57) holds true. This implies that the condition C2 is fulfilled, and hence the condition C1 is also fulfilled. It follows from the condition C1 and Proposition 3.1 that the problem (P) yields the uniform LP duality. Consequently, the problem (\({\textbf{P}}_{SIP}\)) yields the uniform LP duality as well.

Thus we have shown that the consistent system of the problem (\({\textbf{P}}_{SIP}\)) yields the uniform LP duality iff condition (57) is satisfied.

By construction, the set F is finite and the set W is compact. Relations (58) hold true with \(\bar{\textbf{x}}=\textbf{x}^*\), where \(\textbf{x}^*\) is defined in (8). \(\square \)
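Condition (57) involves cones with infinitely many generators, but its finite-dimensional core, deciding whether a vector belongs to the cone generated by finitely many vectors, is a linear feasibility problem. The sketch below solves a toy instance with a simple projected-gradient least-squares iteration (in practice one would call an LP solver); the generator set is hypothetical and not taken from the paper.

```python
import numpy as np

# Toy membership test for a finitely generated cone (not from the paper):
# g lies in cone{v_1, ..., v_k} iff g = V @ lam for some lam >= 0.
# We minimize ||V @ lam - g||^2 over lam >= 0 by projected gradient.

def in_finite_cone(g, V, iters=5000, tol=1e-6):
    lam = np.zeros(V.shape[1])
    # Step size 1/L, where L = ||V||_2^2 bounds the gradient Lipschitz constant.
    step = 1.0 / (np.linalg.norm(V, 2) ** 2 + 1e-12)
    for _ in range(iters):
        lam = np.maximum(0.0, lam - step * V.T @ (V @ lam - g))
    return float(np.linalg.norm(V @ lam - g)) <= tol

V = np.eye(2)                                    # generators of the quadrant
print(in_finite_cone(np.array([2.0, 3.0]), V))   # True
print(in_finite_cone(np.array([-1.0, 0.0]), V))  # False
```

For the cones \(\textrm{cone}(G)\) and \(\textrm{cone}(F\cup W)\) of the theorem, the part of the generator family indexed by \({\textbf{t}}\in T\) (or \({\textbf{t}}\in \varOmega \)) is infinite, so any such computation can only work with a finite sample of the generators.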

7 Conclusion

The main result of the paper is the establishment of necessary and sufficient conditions that guarantee the uniform LP duality for linear CoP problems. These conditions are derived using the concept of immobile indices and the sets generated by them, and they are formulated in several equivalent forms, thereby expanding the scope of their application. The examples illustrate how the obtained conditions can be applied to confirm or refute the uniform LP duality of a given CoP system. Additionally, the relationship between the uniform LP duality conditions for the related CoP and SIP problems is explored.