
4.1 Introduction

Copositive matrices appear in various applications in mathematics, especially in the characterization of the solution set of constrained optimization problems and of the linear complementarity problem. Recently, copositive optimization has become an active area of research because many NP-hard combinatorial problems admit a representation in this domain. Copositive optimization deals with minimizing a linear function of a matrix variable subject to linear constraints and the constraint that the matrix lies in the convex cone of copositive matrices. In what follows, we make use of the following notation.

\(\mathcal{S}^n\): the set of symmetric matrices of order n

\(\mathcal{C}^n\): the set of copositive matrices of order n

\(\mathcal{C}^{*n}\): the set of completely positive matrices of order n

E: the matrix of all ones

conv(\(\mathcal{M}\)): the convex hull of a set \(\mathcal{M}\)

Copositive cone: \(\mathcal{C}^n=\{A\in \mathcal{S}^n\;|\;x^{T}Ax\ge 0\;\forall x\in \mathbb{R}^{n}_{+}\}\)

Completely positive cone: \(\mathcal{C}^{*n}=\{BB^T\in \mathcal{S}^n \;|\;B\ge 0\}=\{\sum _{i=1}^{k} a_i a_i^T\;|\;a_i\in \mathbb{R}_{+}^n\;\forall \;i \}\)

Dual cone of an arbitrary given cone \(\mathcal{K}\subseteq \mathcal{S}^n\): \(\mathcal{K}^*=\{A\in \mathcal{S}^n\;|\;\langle A,B\rangle \ge 0\;\forall \;B\in \mathcal{K}\}\)

Recession cone of a set \(A\subseteq \mathbb{R}^n\): \(\mathrm{rec}(A):=\{ y\in \mathbb {R}^n :\, x + \lambda y \in A\;\forall x \in A,\;\forall \lambda \ge 0\}\)

Inner product: \(\langle A,B\rangle = \mathrm{trace}(AB)=\sum _{i,j=1}^n a_{ij}b_{ij}\)

Consider the standard quadratic problem (stQP)

$$\min x^TQx \text{ s.t. } e^Tx=1,\; x\ge 0,$$

where e denotes the all-ones vector. This optimization problem asks for the minimum of a (not necessarily convex) quadratic function over the standard simplex \(\Delta =\{x\in \mathbb {R}^{n}_+\,:\;e^{T}x=1\}.\)

Note that \(x^TQx = \langle Q,xx^T\rangle .\) Moreover, E is the square matrix consisting entirely of unit entries, so that \(x^TEx=(e^{T}x)^2=1\) on \(\Delta ,\) and the constraint \(e^T x = 1\) transforms into \(\langle E,xx^T\rangle =1.\)

Hence, the problem stQP can be written as

$$\min \langle Q,X\rangle $$
$$ \text{ s. } \text{ t. } \langle E,X\rangle =1, $$
$$ X\in \mathcal{C}^{*n}.$$
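The passage from x to the rank-one matrix \(xx^T\) is the whole content of this reformulation, so it is worth checking the two identities numerically. Below is a quick sanity check in Python; Q and x are arbitrary illustrative data and the variable names are our own.

```python
import numpy as np

# Check the identities behind the lifting:
# x^T Q x = <Q, x x^T> and (e^T x)^2 = <E, x x^T>.
rng = np.random.default_rng(0)
n = 5
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2                      # symmetrize
x = rng.random(n)
x /= x.sum()                           # a point of the simplex Delta

X = np.outer(x, x)                     # the rank-one lifting x x^T
E = np.ones((n, n))

print(x @ Q @ x, np.sum(Q * X))        # equal: objective identity
print(x.sum() ** 2, np.sum(E * X))     # equal: constraint identity (= 1 on Delta)
```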

More generally, a primal-dual pair in copositive optimization (COP) is of the following form:

$$\begin{aligned}&\text {min}\langle C,X\rangle \\ \nonumber&\text {s.t.} \ \langle A_i,X \rangle \ =b_i \ (i= 1, \ldots , m)\\ \nonumber&\ \ \ \ \quad \ \ X \in \mathcal {C}^{n}, \end{aligned}$$
(4.1)

where \(\mathcal {C}^{n} =\{ A \in \mathcal {S}^{n} : x^TAx \ge 0\;\forall \; \ x \in \mathbb {R}^n_+ \}\) is the cone of copositive matrices. Bundfuss and Dür [8] developed an efficient algorithm to solve the optimization problem (4.1) over the copositive cone using iteratively refined polyhedral inner and outer approximations of the copositive cone.
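Deciding whether a matrix is copositive is co-NP-hard, which is why approximation schemes such as [8] are needed. As a hedged illustration (this is not the algorithm of [8]), the sketch below implements only the trivial sampling test: it can produce a witness proving that a matrix is not copositive, but a run without a witness is inconclusive. The Horn matrix used as test data is a standard example of a copositive matrix.

```python
import numpy as np

def copositivity_witness(A, samples=20000, seed=0):
    """Search for x >= 0 with x^T A x < 0 by sampling the simplex.

    Returns a violating x (a certificate that A is NOT copositive),
    or None if no violation was found (inconclusive)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    X = rng.dirichlet(np.ones(n), size=samples)   # points of the simplex
    vals = np.einsum('ij,jk,ik->i', X, A, X)      # x^T A x for each sample
    i = np.argmin(vals)
    return X[i] if vals[i] < 0 else None

# The Horn matrix is copositive; its negative is not.
H = np.array([[ 1, -1,  1,  1, -1],
              [-1,  1, -1,  1,  1],
              [ 1, -1,  1, -1,  1],
              [ 1,  1, -1,  1, -1],
              [-1,  1,  1, -1,  1]], dtype=float)
print(copositivity_witness(H))    # None expected (no witness found)
print(copositivity_witness(-H))   # a violating x is returned
```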

Associated with problem (4.1), there is a dual problem which involves the constraint that the dual slack matrix lies in the dual cone of \(\mathcal {C}^{n},\) that is, in the convex cone \(\mathcal{C}^{*n}\) of completely positive matrices, \(\mathcal{C}^{*n} = \text{conv}\{x x^T : x \in \mathbb {R}^n_+ \}.\)

The dual of (4.1) is

$$\begin{aligned}&\text {max} \ \sum _{i=1}^{m} b_i y_i \\ \nonumber&\text {s.t.} \ C -\sum _{i=1}^{m} y_i A_i \in \mathcal{C}^{*n}, y_i \in \mathbb {R}, \end{aligned}$$
(4.2)

where \(\mathcal{C}^{*n} = \text {conv}\{ x x^T : x \in \mathbb {R}^n_+ \}\) is the cone of completely positive matrices. Clearly, (4.1) and (4.2) are convex optimization problems since both \(\mathcal {C}^{n}\) and \(\mathcal{C}^{*n}\) are convex cones. Note that the KKT optimality conditions hold if Slater's condition is satisfied, and imposing a constraint qualification guarantees strong duality, i.e., equality of the optimal values of (4.1) and (4.2). It is well known that the most common constraint qualifications assume that both problems are feasible and that one of them is strictly feasible.

Copositive programming can be viewed as a convexification approach for nonconvex quadratic programs: in many cases, nonconvex optimization problems admit exact copositive formulations. In this chapter, we show that some nonconvex quadratic programming problems that arise in graph theory can be converted into convex quadratic problems. The first account of copositive optimization goes back to [4], which established a copositive representation of a subclass of particular interest, namely, standard quadratic optimization (stQP).

4.1.1 Quadratic Programming Problem with Binary and Continuous Variables

Burer [6] considered an extremely large class of nonconvex quadratic programs with a mixture of binary and continuous variables, and showed that they can be expressed as completely positive programs (CPPs).

We consider the following problem:

$$\begin{aligned}&\text {min} \ x^T Q x + 2c^T x \\ \nonumber&\text {s.t.} \ \ a_i^T x \ =b_i \ (i= 1, \ldots , m)\\ \nonumber&\ \ \ \ \quad \ \ x \ge 0, \\ \nonumber&\ \ \ \ \ \quad x_j \in \{0 ,1\} \ \forall \;j \in B \text{ where } B\subseteq \{1, \ldots , n \}. \nonumber \end{aligned}$$
(4.3)

Burer [6] showed that (4.3) is equivalent to the following linear problem over the cone of completely positive matrices.

$$\begin{aligned}&\text {min} \ \langle Q,X \rangle + 2c^T x \\ \nonumber&\text {s.t.} \ \ a_i^T x \ =b_i \ (i= 1, \ldots , m)\\ \nonumber&\ \ \ \ \quad \langle a_i a_i^T ,X \rangle \ =b_i^2 \ (i= 1, \ldots , m) \\ \nonumber&\ \ \ \ \ \quad x_j = X_{jj} \ (j\in B) \nonumber \\&\ \ \ \ \ \quad \begin{bmatrix} 1 & x^T \\ x & X \end{bmatrix} \in \mathcal{C}^{*n+1}. \nonumber \end{aligned}$$
(4.4)
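To see what the matrix constraints of (4.4) do, the following sketch (a toy instance; the data and names are our own illustration) builds the rank-one lifting \(Y=\begin{bmatrix}1 & x^T\\ x & xx^T\end{bmatrix}\) of a feasible point of (4.3) and checks that it satisfies every constraint of (4.4), together with two easy necessary conditions for complete positivity (Y positive semidefinite and entrywise nonnegative).

```python
import numpy as np

# Toy instance of (4.3): n = 3, one linear constraint, first variable binary.
Q = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
c = np.array([1., 0., -1.])
a = np.array([1., 1., 1.])
b = 1.0
B = [0]                                # indices of the binary variables

# A feasible point of (4.3) and its rank-one lifting used in (4.4).
x = np.array([1., 0., 0.])
X = np.outer(x, x)
Y = np.block([[np.ones((1, 1)), x[None, :]],
              [x[:, None], X]])

print(np.isclose(a @ x, b))                        # a_i^T x = b_i
print(np.isclose(a @ X @ a, b ** 2))               # <a_i a_i^T, X> = b_i^2
print(all(np.isclose(x[j], X[j, j]) for j in B))   # binary link x_j = X_jj
print(np.isclose(x @ Q @ x + 2 * c @ x,            # objectives of (4.3), (4.4) agree
                 np.sum(Q * X) + 2 * c @ x))
# Two easy necessary conditions for Y to be completely positive:
print(np.all(np.linalg.eigvalsh(Y) >= -1e-9), np.all(Y >= 0))
```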

This is a remarkable result, since a nonconvex quadratic integer problem is equivalently written as a linear problem over a convex cone. Note that the dual problem of a completely positive program is an optimization problem over the cone of copositive matrices. Clearly, both problem classes are NP-hard, since they are equivalent to an integer programming problem. Bundfuss and Dür [8] posed the open question of whether problems with general quadratic constraints can similarly be restated as completely positive problems. Bomze [2] demonstrated the diversity of copositive formulations in various domains of optimization, namely, continuous and discrete, deterministic and stochastic.

4.1.2 Fractional Quadratic Optimization Problem

Consider the fractional quadratic optimization problem

$$\begin{aligned} \min _x f(x) =\min _x \frac{x^TCx +2c^Tx + \gamma }{x^T B x +2b^Tx +\beta } : Ax = a, x \in \mathbb {R}^n_+, \end{aligned}$$
(4.5)

where B is a positive semidefinite matrix, \(C=C^{T}\in \mathbb {R}^{n\times n} \), \(\{b, c\} \subset \mathbb {R}^n\), \(A\in \mathbb {R}^{m\times n}\), \(a \in \mathbb {R}^m\), and \(\beta ,\gamma \in \mathbb {R}.\)

Now define the symmetric \((n+1) \times (n+1)\) matrices

\(\tilde{A} = \begin{bmatrix} a^Ta & -a^TA \\ -A^Ta & A^T A \end{bmatrix}\), \(\tilde{B} = \begin{bmatrix} \beta & b^T \\ b & B \end{bmatrix}\), \(\tilde{C} = \begin{bmatrix} \gamma & c^T \\ c & C \end{bmatrix}.\) We further assume that the problem in (4.5) is well defined. Amaral et al. [1] observed that the problem (4.5) can be written as the completely positive problem:

$$ \text {min} \{\langle \tilde{C}, X\rangle \, : \,\langle \tilde{B}, X\rangle =1 , \, \langle \tilde{A}, X\rangle = 0, \, X \in \mathcal{C}^{*n+1}\}.$$

The above problem occurs in many engineering applications. For further details, see [2] and references therein.
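The bordered matrices encode the three ingredients of (4.5) as quadratic forms in \(z=(1,x^T)^T\): \(z^T\tilde{A}z=\Vert Ax-a\Vert ^2\) vanishes exactly on the affine constraint, while \(z^T\tilde{B}z\) and \(z^T\tilde{C}z\) reproduce the denominator and the numerator. A small sketch with randomly generated illustrative data (names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 2, 4
A = rng.standard_normal((m, n))
C_ = rng.standard_normal((n, n)); C_ = (C_ + C_.T) / 2
L = rng.standard_normal((n, n)); Bmat = L @ L.T       # PSD denominator matrix
b = rng.standard_normal(n); c = rng.standard_normal(n)
beta, gamma = 5.0, 1.0

x = rng.random(n)          # an illustrative x >= 0
a = A @ x                  # choose a so that Ax = a holds at x

# Bordered (n+1) x (n+1) matrices of Sect. 4.1.2.
A_t = np.block([[np.array([[a @ a]]), -(a @ A)[None, :]],
                [-(A.T @ a)[:, None], A.T @ A]])
B_t = np.block([[np.array([[beta]]), b[None, :]], [b[:, None], Bmat]])
C_t = np.block([[np.array([[gamma]]), c[None, :]], [c[:, None], C_]])

z = np.concatenate(([1.0], x))      # z = (1, x); the lifting is X = z z^T
print(np.isclose(z @ A_t @ z, 0))                                # Ax = a
print(np.isclose(z @ B_t @ z, x @ Bmat @ x + 2 * b @ x + beta))  # denominator
print(np.isclose(z @ C_t @ z, x @ C_ @ x + 2 * c @ x + gamma))   # numerator
```

In the completely positive formulation, the constraint \(\langle \tilde{B},X\rangle =1\) normalizes the denominator, which is how the fraction disappears from the objective.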

4.1.3 More on Nonconvex Quadratic Programming Problems

Burer [7] generalized the sign constraints \(x \in \mathbb {R}^n_{+}\) to arbitrary cone constraints \(x\in \mathcal{K}\), where \(\mathcal{K}\) is a closed, convex cone, and studied the following (nonconvex) quadratic cone-constrained problem.

$$\begin{aligned}&\text {min} \ x^T Q x + 2c^T x \\ \nonumber&\text {s.t.} \ \ A x \ =b \\ \nonumber&\ \ \ \ \quad \ \ x\in \mathcal{K}. \\ \nonumber \end{aligned}$$
(4.6)

Note that the dimension of the problem is increased by one by passing from the cone \(\mathcal{K}\subseteq \mathbb {R}^n\) to the cone \(\hat{\mathcal{K}}=\mathbb {R}_{+}\times \mathcal{K}.\) Let \(\mathcal{C}_{\hat{\mathcal{K}}}\) denote the cone of \(\hat{\mathcal{K}}\)-copositive \((n + 1)\times (n + 1)\) matrices; its dual cone is \(\mathcal{C}^{*}_{\hat{\mathcal{K}}}=\text{conv}\{zz^T:z\in \hat{\mathcal{K}}\}\), the cone of \(\hat{\mathcal{K}}\)-completely positive \((n + 1)\times (n + 1)\) matrices.

In [2, 7, 13], it has been shown that (4.6) is equivalent to the (generalized) completely positive problem of the following form.

$$\begin{aligned}&\text {min} \ \langle \tilde{Q},Y \rangle \\ \nonumber&\text {s.t.} \ \ A x =b \\ \nonumber&\ \ \ \ \quad (AXA^T)_{ii} \ =b_i^2 \ (i= 1, \ldots , m) \\ \nonumber&\ \ \ \ \ \quad Y= \begin{bmatrix} 1 & x^T \\ x & X \end{bmatrix} \in \mathcal{C}^{*}_{\hat{\mathcal{K}}}, \nonumber \end{aligned}$$
(4.7)

where \(\tilde{Q} = \begin{bmatrix} 0 & c^T \\ c & Q \end{bmatrix}\), so that \(\langle \tilde{Q},Y\rangle = \langle Q,X\rangle + 2c^Tx.\)

4.1.4 Quadratic Optimization Problem and the Concept of Lifted Problem

Nguyen [17] presents a general concept of lifting a nonconvex quadratic optimization problem into an equivalent convex optimization problem with matrix variables, and applies this lifting concept to a class of quadratic optimization problems with linear inequality and mixed binary constraints.

Nguyen [17] considers the following quadratic optimization problem (QP)

$$\begin{aligned}&\text {min} \ x^T Q x \\ \nonumber&\text {s.t.} \ \ x \in F(QP), \\ \nonumber \end{aligned}$$
(4.8)

where \(Q \in \mathcal{S}^n \) and F(QP) is some nonempty feasible set in \(\mathbb{R}^n\).

Consider the following subsets of \(\mathcal{S}^n\).

$$\mathcal{C}:= \text {conv} \{ x x^T : x \in F(QP)\},$$
$$\mathcal {R} := \text {conv} \{ yy^T : y \in \text {rec} F(QP) \}.$$

The optimization problem

$$\begin{aligned}&\text {min} \ \langle Q, X \rangle \\ \nonumber&\text {s.t.} \ \ X \in \mathcal{C}+ \mathcal {R}, \\ \nonumber \end{aligned}$$
(4.9)

is called the lifted problem associated with the original quadratic problem (4.8).

Proposition 4.1

(Proposition 2.2, [17]) Assume that an optimal solution of (4.8) exists. Then problems (4.8) and (4.9) are equivalent in the sense that they have the same optimal value, and any optimal solution of (4.9) is a convex combination of matrices \(x^i (x^i)^T\), where the \(x^i\) are optimal solutions of (4.8).

Note that the original problem (4.8) minimizes a not necessarily convex quadratic function over a not necessarily convex set, whereas the lifted problem (4.9) is a convex optimization problem. Therefore, since every local optimal solution of (4.9) is a global one, we can obtain global optimal solutions of (4.8) by computing local optimal solutions of (4.9).

4.1.5 Quadratic Optimization Problem and the Role of Special Matrix Classes

In this section, we discuss some matrix classes that play a role in quadratic optimization problems. Consider QP\((q,A):\;\) \([\min x^{T}(Ax+q); x\ge 0,\;Ax+q\ge 0].\) We denote by \(S^{1}(q,A)\) the set of optimal solutions of QP(q, A), and the set of feasible solutions by \(F(q,A)=\{x\,:\,Ax+q\ge 0,x\ge 0\}.\) By Farkas' lemma, feasibility is equivalent to the following condition:

$$x\ge 0, A^Tx\le 0 \Rightarrow q^Tx\ge 0.$$

Let us consider the polyhedral convex cone

$$C_A= \{x\ge 0\,|\, A^Tx\le 0\}$$

and its polar cone

$$C^{*}_{A}= \{x^*\,|\, x^T x^*\le 0\;\forall x\in C_A\}.$$

Thus QP(q, A) is feasible iff \(-q\in C^{*}_{A}.\) Moreover, since the objective \(x^{T}(Ax+q)\) is nonnegative, hence bounded below, on the feasible set, the Frank–Wolfe theorem guarantees that an optimal solution exists whenever the problem is feasible, so that

$$S^{1}(q,A)\ne \emptyset \text{ if } \text{ and } \text{ only } \text{ if } -q\in C^{*}_{A}.$$

Assume that \(x^*\in S^{1}(q, A).\) Then, in view of the KKT optimality conditions, there exist \(u, v\in \mathbb {R}^n\) such that

$$\begin{aligned} (A +A^T )x^* + q-A^T u - v = 0, \end{aligned}$$
(4.10)
$$\begin{aligned} x^*, u, v,Ax^* + q\ge 0, \end{aligned}$$
(4.11)
$$\begin{aligned} {x^*}^{T} v= u^T(Ax^* + q) = 0. \end{aligned}$$
(4.12)

We denote by \(S^2(q,A)\) the set of points for which such u and v exist. \(S^2(q,A)\) is called the set of KKT-stationary points. We are interested in conditions implying that \(S^2(q,A)= S^{1}(q,A).\)

In what follows, we introduce the following matrix classes. A is said to be column sufficient if for all \(x\in \mathbb {R}^{n}\) the following implication holds:

$$\displaystyle {x_{i}(Ax)_{i}\le 0\;\forall \;i\;\,\Rightarrow \;x_{i}(Ax)_{i}=0\;\,\forall \;i.}$$

A is said to be row sufficient if \(A^{T}\) is column sufficient. A is sufficient if A and \(A^{T}\) are both column sufficient. We say that A is positive semidefinite (PSD) if \(x^{T}Ax\ge 0\;\forall \;x\in \mathbb {R}^{n}\) and A is positive definite (PD) if \(x^{T}Ax> 0\;\forall \;0\ne x\in \mathbb {R}^{n}.\) A matrix \(A\in \mathbb {R}^{n\times n}\) is a positive subdefinite (PSBD) matrix if for all \(x\in \mathbb {R}^{n}\)

$$x^{T}Ax<0\;\;\text{ implies } \text{ either }\;\;A^{T}x\le 0\;\,\text{ or }\;\,A^{T}x\ge 0.$$

A matrix \(A\in \mathbb {R}^{n\times n}\) is said to be a generalized positive subdefinite (GPSBD) matrix if there exist two nonnegative diagonal matrices S and T with \(S+T=I\) such that

$$\begin{aligned} \forall \;z\in \mathbb {R}^{n},\;\;z^{T}Az<0 \Rightarrow \left\{ \begin{array}{l}\text{ either } -Sz+TA^{T}z\ge 0 \\ \;\text{ or } \;\;\;\;\;-Sz+TA^{T}z\le 0.\end{array} \right. \end{aligned}$$
(4.13)

A GPSBD matrix A is called a merely GPSBD (MGPSBD) matrix if it is not a PSBD matrix. For details on these classes, see [9, 10, 16].
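These implication-type definitions are easy to probe numerically, even though none of them can be certified by finitely many samples. The sketch below (our own illustrative helper functions) searches randomly for witnesses refuting column sufficiency or the PSBD property; finding no witness is inconclusive.

```python
import numpy as np

def refute_column_sufficient(A, trials=20000, seed=0, tol=1e-12):
    """Random search for x violating column sufficiency, i.e. with
    x_i (Ax)_i <= 0 for all i but x_i (Ax)_i < 0 for some i.
    Returns a witness x (refuting the property) or None (inconclusive)."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x = rng.standard_normal(A.shape[1])
        h = x * (A @ x)                    # Hadamard products x_i (Ax)_i
        if np.all(h <= tol) and np.any(h < -tol):
            return x
    return None

def refute_psbd(A, trials=20000, seed=0, tol=1e-12):
    """Random search for x with x^T A x < 0 while A^T x has entries of
    both signs, which refutes positive subdefiniteness."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x = rng.standard_normal(A.shape[1])
        if x @ A @ x < -tol:
            y = A.T @ x
            if np.any(y > tol) and np.any(y < -tol):
                return x
    return None

A = np.array([[0., 1.], [1., 0.]])
print(refute_column_sufficient(A))   # finds a witness such as x = (1, -1)
print(refute_psbd(A))                # also refuted: take x = (1, -1)
```

We now state the following theorem.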

Theorem 4.1.1

Assume that any one of the following conditions holds:

(i) A is a copositive PSBD matrix with \(\mathrm{rank}(A)\ge 2.\)

(ii) A is a copositive MGPSBD matrix with \(0<t_{i}<1\) for all i, where \(t_{i}\) denotes the i-th diagonal entry of the matrix T in (4.13).

Then A is a row sufficient matrix.

The following result is an immediate consequence of the above theorem.

Lemma 4.1.1

Suppose A is a copositive PSBD matrix with \(\mathrm{rank}(A)\ge 2\) or a copositive MGPSBD matrix with \(0<t_{i}<1\) for all i. For each vector \(q\in \mathbb {R}^{n},\) if \((\tilde{x},\tilde{u})\) is a Karush–Kuhn–Tucker (KKT) pair of the quadratic program QP\((q,A):\;\) \([\min x^{T}(Ax+q); x\ge 0,\;Ax+q\ge 0],\) then \(\tilde{x}\) solves QP(q, A).

4.2 Applications of Copositive Optimization in Graph Theory

We discuss the connection between nonconvex quadratic optimization and copositive optimization that allows the reformulation of nonconvex quadratic problems as convex ones in a unified way. Copositive optimization is a new approach for analyzing the specific, difficult case of optimizing a general nonconvex quadratic function over a polyhedron \(\{x : Ax = b, x \ge 0\}\). In this section, we consider graph-theoretic problems and reformulate the stQP discussed in Sect. 4.1 as a convex quadratic optimization problem. We begin with some preliminaries on graph theory which will be used throughout the section. A graph G is a set of points V(G) called vertices along with a set of line segments E(G) called edges joining pairs of vertices. We say that two vertices are adjacent if there is an edge joining them. The set of vertices adjacent to v is the neighborhood of v, which we denote by N(v). If e is an edge joining v to one of its neighbors, we say e is incident to v. The degree of a vertex v, denoted \(\deg (v)\), is the number of vertices adjacent to v. A walk is a sequence of vertices in which consecutive vertices are adjacent; a closed walk is a walk with the same starting and ending vertex, while an open walk is a walk in which the start and end vertices differ. A path is a walk in which no vertex is repeated. A cycle is a closed walk in which no vertex is repeated (except that the starting and ending vertices are the same). A graph is connected if there exists a path between every pair of distinct vertices. The distance between two vertices v and w is the length of a shortest path between v and w. The diameter of a connected graph G, denoted diam(G), is the greatest distance between any two vertices of G. A tree is a connected graph that contains no cycles. A pendant vertex is a vertex whose degree is one. A tree on n vertices has \(n-1\) edges. Let \(G = (V, E)\) be a connected graph with vertex set V(G) and edge set E(G). Let \(c\; :\; V(G)\rightarrow \mathbb {R}^{+}\) be a nonnegative vertex weight function such that the total weight of the vertices is \(\displaystyle { N =\sum _{v\in V(G)} c(v)}.\)

Suppose \(d_{G}(u, v)\) (or simply d(u, v)) denotes the usual distance (the length of a shortest path) between u and v in G. Then the total distance of G with respect to c is defined by

$$\displaystyle {d_{c}(G) =\sum _{\{u,v\}\subseteq V(G)} c(u)c(v)d_{G}(u, v).}$$

Among all nonnegative weight functions c of given weight N, we seek to find one that maximizes \(d_{c}(G).\)

Let G be a graph with vertices \(\{1,2,\ldots ,n\}.\) The distance matrix of G is defined as \(D = [d_{ij}],\) where \(d_{ij}\) (which we also denote as d(i, j)) is the distance between vertices i and j. As an example, consider the tree on six vertices in which vertex 2 is adjacent to vertices 1, 3, and 4, and vertex 4 is adjacent to vertices 5 and 6.

The distance matrix of the tree is given by

$$ \left[ \begin{array}{cccccc} 0 & 1 & 2 & 2 & 3 & 3\\ 1 & 0 & 1 & 1 & 2 & 2\\ 2 & 1 & 0 & 2 & 3 & 3\\ 2 & 1 & 2 & 0 & 1 & 1\\ 3 & 2 & 3 & 1 & 0 & 2\\ 3 & 2 & 3 & 1 & 2 & 0 \end{array} \right] $$
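The distance matrix of a tree (or of any connected graph) is computable by breadth-first search from every vertex. The following short sketch reproduces the matrix above from the edge list of the example tree (helper names are ours):

```python
import numpy as np
from collections import deque

def tree_distance_matrix(n, edges):
    """All-pairs distances in an unweighted graph via BFS from each vertex."""
    adj = {v: [] for v in range(1, n + 1)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    D = np.zeros((n, n), dtype=int)
    for s in range(1, n + 1):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        D[s - 1] = [dist[v] for v in range(1, n + 1)]
    return D

# The example tree: vertex 2 adjacent to 1, 3, 4; vertex 4 adjacent to 5, 6.
D = tree_distance_matrix(6, [(1, 2), (2, 3), (2, 4), (4, 5), (4, 6)])
print(D)   # reproduces the matrix displayed above
```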

Let \(\mathcal{D}\) be the distance matrix of a tree with n vertices. Let \(\Delta =\{x\in \mathbb {R}^{n}_+\,:\;e^{T}x=1\}.\) We consider the problem:

Problem I    \(\max x^T\mathcal{D}x\) subject to \(x \in \Delta .\)

If T is a tree on n vertices with distance matrix \(\mathcal{D},\) then clearly, Problem I is equivalent to maximizing \(d_c(T)\) over all nonnegative weight functions with given fixed weight N.

Note that Problem I, and more general versions of it, have occurred in the literature in different contexts. Apart from the graph theory literature (see [11] and the references therein), there are at least two other areas where the problem has been considered: (i) a generalized notion of the diameter of a finite metric space, and (ii) Nash equilibria of symmetric bimatrix games whose payoff matrix is the distance matrix of a tree or a resistance distance matrix.

Theorem 4.2.2

Let T be a tree with vertex set \(\{1, \ldots , n\}\) and let \(\mathcal{D}\) be the distance matrix of T. Then, there exists \(\alpha _0\) such that for all \(\alpha > \alpha _0,\) the matrix \(\alpha \mathcal{E}- \mathcal{D}\) is positive definite, where \(\mathcal{E}\) is the \(n \times n\) matrix of all ones.

Note that \(\mathcal{D}\) is a copositive matrix and Problem I is a nonconvex quadratic programming (NQP) problem, so we may write down an equivalent convex quadratic programming (CQP) problem. By Theorem 4.2.2 there exists k such that \(\tilde{\mathcal{D}} = k\mathcal{E}- \mathcal{D}\) is positive definite. We remark that to construct \(\tilde{\mathcal{D}},\) it is sufficient to find the diameter (the length of a longest path) of the tree, which can be done in polynomial time. Note that the maximum of \(\frac{1}{2}x^T\mathcal{D}x\) over all \(x \in \Delta \) is attained at \(x^*\) if and only if the minimum of \(\frac{1}{2}x^T\tilde{\mathcal{D}}x\) over all \(x \in \Delta \) is attained at \(x^*.\) Therefore, we solve Problem II.

Problem II: \(\min \frac{1}{2} x^{T}\tilde{\mathcal{D}}x\) \(\text{ subject } \text{ to } Ax\ge b \text{ and } x\ge 0,\) where \( \displaystyle { A = \left[ \begin{array}{r} e_{n}^{T}\\ -e_{n}^{T}\end{array} \right] }\) and \( \displaystyle { b= \left[ \begin{array}{r} 1\\ -1\end{array} \right] .}\)
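For the six-vertex example tree, Problem II can be solved with any off-the-shelf convex QP method. Below is a minimal sketch using scipy's SLSQP solver, with k increased until \(k\mathcal{E}-\mathcal{D}\) is verified positive definite (Theorem 4.2.2 guarantees such a k exists); the variable names are our own.

```python
import numpy as np
from scipy.optimize import minimize

# Distance matrix of the six-vertex example tree.
D = np.array([[0, 1, 2, 2, 3, 3], [1, 0, 1, 1, 2, 2], [2, 1, 0, 2, 3, 3],
              [2, 1, 2, 0, 1, 1], [3, 2, 3, 1, 0, 2], [3, 2, 3, 1, 2, 0]],
             dtype=float)
n = D.shape[0]
E = np.ones((n, n))

# Choose k so that Dtil = k E - D is positive definite.
k = float(D.max())
while np.linalg.eigvalsh(k * E - D)[0] <= 1e-9:
    k += 1.0
Dtil = k * E - D

# Problem II: minimize (1/2) x^T Dtil x over the simplex {x >= 0, e^T x = 1}.
res = minimize(lambda x: 0.5 * x @ Dtil @ x,
               x0=np.full(n, 1.0 / n),
               jac=lambda x: Dtil @ x,
               method='SLSQP',
               bounds=[(0.0, None)] * n,
               constraints=[{'type': 'eq', 'fun': lambda x: x.sum() - 1.0}])
x_star = res.x
print(np.round(x_star, 4))     # optimal weights concentrate on pendant vertices
print(x_star @ D @ x_star)     # maximal total distance for N = 1 (here 2.0)
```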

A vertex of degree 1 in a tree T is called an end vertex (or a pendant vertex) of T. The following result is useful for the subsequent discussion.

Lemma 4.2.2

[11, Proposition 2, p. 15] Let T be a tree on at least two vertices and let \(N > 0\) be real. Let c be a nonnegative weight function on V(T) of total weight N that maximizes \(d_{c}(T)\) among all such weight functions. Then, \(c(v) > 0\) only if v is an end vertex of T.

In view of Lemma 4.2.2, we may replace \(\tilde{\mathcal{D}}\) in Problem II by the principal submatrix \(\tilde{\mathcal{D}}_{p}\) of \(\tilde{\mathcal{D}}\) corresponding to the end vertices of the tree. The matrix A is modified to \(A_{p}\) by replacing \(e_n\) by \(e_p,\) where p is the number of pendant vertices. We denote this problem as

Problem III: \(\min \frac{1}{2} y^{T}\tilde{\mathcal{D}}_{p}y\) \(\text{ subject } \text{ to } A_{p}y\ge b \text{ and } y\in \mathbb{R}^{p}_{+},\) where \( \displaystyle { A_{p} = \left[ \begin{array}{r} e_{p}^{T}\\ -e_{p}^{T}\end{array} \right] }\) and \( \displaystyle { b= \left[ \begin{array}{r} 1\\ -1\end{array} \right] .}\)

Lemma 4.2.3

Problem II has a unique solution if and only if Problem III has a unique solution.

We will write PD for positive definite and PSD for positive semidefinite. We may rewrite Problem II or III as a linear complementarity problem (denoted LCP(q, M)), which is defined as follows. Given a real square matrix \(M\in \mathbb {R}^{n\times n}\) and a vector \(\,q\,\in \,\mathbb {R}^{n},\,\) the linear complementarity problem is to find \(w, z\;\in \mathbb {R}^{n}\;\) such that \(w\,-\,M z\;=\;q,\;\;w\ge 0,\;z\ge 0\) and \(w^{T}\,z\,=\;0.\)

The Karush–Kuhn–Tucker (KKT) necessary and sufficient optimality conditions specialized to Problem III yield the linear complementarity problem LCP(q, M) with \(M=\left[ \begin{array}{cc} \tilde{\mathcal{D}}_{p} & -A_{p}^T \\ A_{p} & 0 \end{array} \right] , \; \; q=\left[ \begin{array}{c} 0 \\ -b \end{array} \right] .\) If (w, z) solves LCP(q, M), where \(w=\left[ \begin{array}{c} u \\ v \end{array} \right] \; \text{ and } z=\left[ \begin{array}{c} x \\ y \end{array} \right] ,\) then x solves Problem III. It is easy to see that M is a PSD matrix.
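To make the reduction concrete, the sketch below assembles (q, M) for the example tree (pendant vertices 1, 3, 5, 6) and confirms numerically that M is PSD: the skew blocks \(-A_p^T\) and \(A_p\) cancel in the symmetric part, so \(z^TMz\) reduces to \(u^T\tilde{\mathcal{D}}_{p}u\ge 0\) for \(z=(u,v)\).

```python
import numpy as np

# Distance matrix of the six-vertex example tree; pendants are 1, 3, 5, 6.
D = np.array([[0, 1, 2, 2, 3, 3], [1, 0, 1, 1, 2, 2], [2, 1, 0, 2, 3, 3],
              [2, 1, 2, 0, 1, 1], [3, 2, 3, 1, 0, 2], [3, 2, 3, 1, 2, 0]],
             dtype=float)
pend = [0, 2, 4, 5]                 # 0-based indices of vertices 1, 3, 5, 6
E = np.ones_like(D)

k = float(D.max())                  # pick k with Dtil_p positive definite
while np.linalg.eigvalsh((k * E - D)[np.ix_(pend, pend)])[0] <= 1e-9:
    k += 1.0
Dtil_p = (k * E - D)[np.ix_(pend, pend)]

p = len(pend)
A_p = np.vstack([np.ones(p), -np.ones(p)])      # rows e_p^T and -e_p^T
b = np.array([1.0, -1.0])

M = np.block([[Dtil_p, -A_p.T], [A_p, np.zeros((2, 2))]])
q = np.concatenate([np.zeros(p), -b])

# Symmetric part of M is block-diag(Dtil_p, 0), hence PSD.
print(np.all(np.linalg.eigvalsh((M + M.T) / 2) >= -1e-9))   # True
```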

Granot and Skorin-Kapov [14] extend Tardos' results and present a polynomial algorithm for solving strictly convex quadratic programming problems in which the number of arithmetic steps is independent of the size of the numbers on the right-hand side and of the linear cost coefficients. Under the assumption that M is positive semidefinite, Kojima et al. [15] present a polynomial-time algorithm that solves LCP(q, M) in \(O(n^{3}L)\) arithmetic operations.

Remark 4.1

Dubey and Neogy [12] consider the question of maximizing \(x^{T}\mathcal{R}x\) subject to \(x\in \Delta =\{x\in \mathbb {R}^{n}_+\,:\;e^{T}x=1\},\) where \(\mathcal{R}\) is the resistance distance matrix of a simple graph (not necessarily a tree), and observe that this problem can be solved in polynomial time by reformulating it as a strictly convex quadratic programming problem.

4.2.1 Maximum Weight Clique Problem

We consider a copositive reformulation for the maximum weight clique problem. Consider an undirected graph \(G= (V,E)\) with n nodes. A clique \(\mathcal{S}\) is a subset of the node set V which corresponds to a complete subgraph of G (i.e., any pair of nodes in \(\mathcal{S}\) is joined by an edge in E, the edge set). A clique \(\mathcal{S}\) is said to be maximal if there is no larger clique containing \(\mathcal{S}\).

Let \(A_G\) denote the adjacency matrix of the graph G, and let \(f^*\) denote the optimal value of the standard quadratic optimization problem \(\max f(x), x\in \Delta,\) where \(f(x)=x^{T}A_{G}x\). Then \(\frac{1}{1-f^*}\) is the size of a maximum clique. This approach has served as the basis of many clique-finding algorithms and of theoretical bounds on the maximum clique size.
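This is the Motzkin–Straus identity \(f^*=1-1/\omega (G),\) where \(\omega (G)\) is the clique number. Below is a hedged multi-start sketch (local maximizers only give lower bounds on \(f^*\), so this is a heuristic rather than an exact method) on a small illustrative graph whose largest clique is a triangle.

```python
import numpy as np
from scipy.optimize import minimize

def clique_number_motzkin_straus(A_G, restarts=50, seed=0):
    """Estimate the clique number via f* = max_{x in Delta} x^T A_G x and
    omega = 1 / (1 - f*), using multi-start local optimization."""
    rng = np.random.default_rng(seed)
    n = A_G.shape[0]
    best = 0.0
    for _ in range(restarts):
        x0 = rng.dirichlet(np.ones(n))
        res = minimize(lambda x: -(x @ A_G @ x), x0, method='SLSQP',
                       bounds=[(0.0, None)] * n,
                       constraints=[{'type': 'eq',
                                     'fun': lambda x: x.sum() - 1.0}])
        best = max(best, -res.fun)
    return 1.0 / (1.0 - best)

# A 5-vertex graph containing the triangle {0, 1, 2}: omega = 3, f* = 2/3.
A_G = np.zeros((5, 5))
for u, v in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]:
    A_G[u, v] = A_G[v, u] = 1.0
print(clique_number_motzkin_straus(A_G))   # approx. 3.0
```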

In [3], this problem was reformulated as a standard quadratic optimization problem, and in [4] standard quadratic optimization problems were, in turn, reformulated as copositive optimization problems. Therefore, the maximum weight clique problem is equivalent to a copositive optimization problem.

4.3 The Notion of Transfinite Diameter in a Finite Metric Space and Copositive Optimization Problem

Let \(M = (X,d)\) be a finite metric space, where \(X = \{x_1, \ldots , x_n\}.\) The distance matrix D of the metric space is the \(n \times n\) matrix \(D = [d_{ij}],\) where \(d_{ij} = d(x_i,x_j).\) The metric space is completely described by its distance matrix. The notion of transfinite diameter (the maximal average distance over a multiset of points placed in the space) is a natural generalization of the diameter. The load vectors realizing the transfinite diameter, called \(\infty \)-extenders, provide strong structural information about metric spaces. It is, therefore, natural to study conditions under which the \(\infty \)-extender is unique. The transfinite diameter of M equals the maximum of \(x^TDx\) over \(x \in \Delta,\) and a vector that attains this maximum is called an \(\infty \)-extender of M; computing one can thus be posed as a copositive optimization problem. In what follows, we need the following definition to state a result related to a unique \(\infty \)-extender. The matrix A is said to be conditionally negative definite (c.n.d.) if \(x^TAx \le 0\) for all \(x \in \mathbb {R}^n\) such that \(\displaystyle {\sum _{i=1}^n} x_i = 0.\) Furthermore, a c.n.d. matrix is said to be strictly c.n.d. if, for such x, \(x^TAx = 0\) only when \(x = 0.\) The metric space M is said to be of negative type if D is c.n.d., and of strictly negative type if D is strictly c.n.d.
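Whether a given finite metric space is of strictly negative type can be checked by restricting the quadratic form \(x^TDx\) to the hyperplane \(\sum _i x_i = 0\) and testing negative definiteness there. A small sketch (the helper name is ours; tree distance matrices are known to be of strictly negative type, which the example confirms):

```python
import numpy as np

def is_strictly_negative_type(D, tol=1e-9):
    """Check x^T D x < 0 for all nonzero x with sum(x) = 0 by restricting
    D to an orthonormal basis of the hyperplane orthogonal to e."""
    n = D.shape[0]
    # QR of [e, e_1, ..., e_{n-1}]: columns 2..n of Q span {x : e^T x = 0}.
    Qbasis, _ = np.linalg.qr(np.column_stack([np.ones(n), np.eye(n)[:, :-1]]))
    V = Qbasis[:, 1:]
    M = V.T @ D @ V                   # D restricted to the hyperplane
    return bool(np.all(np.linalg.eigvalsh((M + M.T) / 2) < -tol))

D = np.array([[0, 1, 2, 2, 3, 3], [1, 0, 1, 1, 2, 2], [2, 1, 0, 2, 3, 3],
              [2, 1, 2, 0, 1, 1], [3, 2, 3, 1, 0, 2], [3, 2, 3, 1, 2, 0]],
             dtype=float)
print(is_strictly_negative_type(D))   # True for this tree
```

Now we have the following theorem.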

Theorem 4.3.3

Let (Xd) be a finite metric space. If (Xd) is of strictly negative type, then (Xd) has a unique \(\infty \)-extender.

4.4 Symmetric Bimatrix Game as a Copositive Optimization Problem

A bimatrix game is a noncooperative two-person game described by a pair \((\mathcal{A},\mathcal{B})\) of \(m \times n\) matrices. There are two players, Player 1 and Player 2, with m and n pure strategies respectively. If Player 1 chooses the i-th strategy and Player 2 chooses the j-th strategy, then \(a_{ij}\) and \(b_{ij}\) are the payoffs to Players 1 and 2, respectively. The mixed strategy spaces of Players 1 and 2 are \(\Delta _m\) and \(\Delta _n,\) respectively. A pair of strategies \(({x}^*,{y}^*) \in \Delta _m \times \Delta _n\) is a Nash equilibrium if \(x^T\mathcal{A}{y}^* \le {x^*}^T\mathcal{A}{y^*}\) and \({x^*}^T\mathcal{B}{y} \le {x^*}^T \mathcal{B}{y}^*,\) for all \(x \in \Delta _m, y \in \Delta _n.\)

The celebrated theorem of Nash guarantees the existence of an equilibrium pair in any bimatrix game.

A bimatrix game is said to be symmetric if there is symmetry in strategies and payoffs, that is, if \(m = n\) and \(\mathcal{B}= \mathcal{A}^T.\) A symmetric bimatrix game has at least one symmetric Nash equilibrium, that is, an equilibrium of the form \(({x}^*,{x}^*) \in \Delta _n \times \Delta _n.\) It can be seen that \(({x}^*,{x}^*)\) is a symmetric Nash equilibrium of \((\mathcal{A},\mathcal{A}^T)\) if and only if \((\mathcal{A}{x^*})_i \le {x^*}^T\mathcal{A}{x}^*, i = 1, \ldots , n;\) or equivalently, \({x}^*\) maximizes \(x^T\mathcal{A}x\) over \(x \in \Delta _n.\) In what follows, we consider the symmetric bimatrix game associated with a tree.

Let T be a tree with n vertices and let \(\mathcal{D}\) be the distance matrix of T. Consider the symmetric bimatrix game \((\mathcal{D},\mathcal{D})\) studied in [5]. This game is interpreted as follows: Players 1 and 2 each choose a vertex of the tree and try to be as far away from each other as possible. In view of the preceding discussion, \(({x}^*,{x}^*) \in \Delta _n \times \Delta _n\) is a symmetric Nash equilibrium of the game \((\mathcal{D},\mathcal{D})\) if and only if \({x}^*\) is a solution of Problem I. Note that the game \((\mathcal{D},\mathcal{D})\) has a unique symmetric Nash equilibrium. The symmetric bimatrix game associated with a tree was extended by Dubey and Neogy [12] to the resistance matrix as payoff matrix.

Let G be a connected graph with vertex set \(\{1, \ldots , n\}\) and let \(\mathcal{R}\) be the resistance matrix, \(\mathcal{R}= [r_{ij}],\) whose (i, j)-entry \(r_{ij}\) equals the resistance distance between the i-th and the j-th vertices. In [12], Dubey and Neogy consider the symmetric bimatrix game \((\mathcal{R},\mathcal{R})\): \((\tilde{x},\tilde{x}) \in \Delta _n \times \Delta _n\) is a symmetric Nash equilibrium of the game \((\mathcal{R},\mathcal{R})\) if and only if \(\tilde{x}\) is a solution of Problem I with \(\mathcal{D}\) replaced by \(\mathcal{R}.\) By the same argument, it is easy to see that the game \((\mathcal{R},\mathcal{R})\) has a unique symmetric Nash equilibrium.
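The equilibrium condition \((\mathcal{A}x^*)_i \le {x^*}^T\mathcal{A}x^*\) is easy to verify numerically. For the six-vertex example tree, placing weight 1/4 on each of the pendant vertices 1, 3, 5, 6 satisfies it; this is the candidate one also recovers by solving Problem II above. A minimal check (helper name is ours):

```python
import numpy as np

def is_symmetric_nash(A, x, tol=1e-9):
    """(x, x) is a symmetric Nash equilibrium of the game (A, A^T)
    iff every pure-strategy payoff (A x)_i is at most x^T A x."""
    return bool(np.all(A @ x <= x @ A @ x + tol))

# Distance matrix of the six-vertex example tree.
D = np.array([[0, 1, 2, 2, 3, 3], [1, 0, 1, 1, 2, 2], [2, 1, 0, 2, 3, 3],
              [2, 1, 2, 0, 1, 1], [3, 2, 3, 1, 0, 2], [3, 2, 3, 1, 2, 0]],
             dtype=float)
x_star = np.array([0.25, 0.0, 0.25, 0.0, 0.25, 0.25])  # mass on pendants
print(is_symmetric_nash(D, x_star))   # True
print(x_star @ D @ x_star)            # equilibrium payoff: 2.0
```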