Abstract
In this paper we give a matrix version of Handelman’s Positivstellensatz (Handelman in Pac J Math 132:35–62, 1988), representing polynomial matrices which are positive definite on convex, compact polyhedra. Moreover, we also propose a procedure to find such a representation. As a corollary of Handelman’s theorem, we give a special case of Schmüdgen’s Positivstellensatz for polynomial matrices positive definite on convex, compact polyhedra.
1 Introduction
Let \(\mathbb R[X]:=\mathbb R[X_1,\ldots ,X_n]\) be the ring of polynomials in the variables \(X_1,\ldots ,X_n\) with real coefficients. Denote by \(\varDelta _n\) the standard n-simplex in \(\mathbb R^n\), which is defined by
$$\begin{aligned} \varDelta _n:=\left\{ x=(x_1,\ldots ,x_n)\in \mathbb R^n \mid x_i\ge 0,\ i=1,\ldots ,n,\ \sum _{i=1}^n x_i=1\right\} . \end{aligned}$$
Pólya [9] proved in 1928 that for a homogeneous polynomial \(f\in \mathbb R[X]\), if \(f(x)>0\) for every \(x\in \varDelta _n\), then there exists a sufficiently large number N such that all coefficients of the polynomial \((X_1+\cdots +X_n)^N \cdot f\) are positive.
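Pólya’s theorem can be checked symbolically on a small instance. The following sketch (assuming Python with sympy; the polynomial \(x^2-xy+y^2\), strictly positive on \(\varDelta _2\), is a hypothetical example not taken from the paper) tests for which N all coefficients of \((X_1+X_2)^N f\) become positive:

```python
import sympy as sp

x, y = sp.symbols("x y")
f = x**2 - x*y + y**2  # homogeneous, strictly positive on the standard 2-simplex

def all_coeffs_positive(poly, N):
    """Check whether every coefficient of (x + y)**N * poly is strictly positive,
    including monomials that are absent (zero coefficient)."""
    expanded = sp.Poly(sp.expand((x + y)**N * poly), x, y)
    deg = expanded.total_degree()
    coeffs = [expanded.coeff_monomial(x**k * y**(deg - k)) for k in range(deg + 1)]
    return all(c > 0 for c in coeffs)

# f itself has a negative coefficient; N = 3 is the first exponent that works here.
print([all_coeffs_positive(f, N) for N in range(4)])  # [False, False, False, True]
```

For this example \((x+y)^3(x^2-xy+y^2)=x^5+2x^4y+x^3y^2+x^2y^3+2xy^4+y^5\), so all coefficients are indeed positive at \(N=3\).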
Powers and Reznick [10] gave an explicit bound for the number N, and applied it to give a constructive version of Handelman’s Positivstellensatz. More explicitly, let \(P\subseteq \mathbb R^n\) be a convex, compact polyhedron with non-empty interior, bounded by linear polynomials \(L_1, \ldots , L_m\in \mathbb R[X]\). By choosing the sign of the \(L_i\)’s, we may assume that
$$\begin{aligned} P=\left\{ x\in \mathbb R^n \mid L_1(x)\ge 0, \ldots , L_m(x)\ge 0\right\} . \end{aligned}$$
(1)
Theorem 1
(Handelman’s Positivstellensatz [4]) For a polynomial \(f\in \mathbb R[X]\), if \(f(x)>0\) for all \(x\in P\), then f can be represented as
$$\begin{aligned} f=\sum _{|\alpha |\le M} f_\alpha L_1^{\alpha _1}\cdots L_m^{\alpha _m} \end{aligned}$$
for some \(M\in \mathbb N\) and \(f_\alpha \ge 0\) for all \(\alpha =(\alpha _1,\ldots ,\alpha _m) \in \mathbb N^m\) such that \(|\alpha |\le M\).
Krivine [6] proved Handelman’s Positivstellensatz for a special polyhedron. Moreover, one can find a generalization of this Positivstellensatz in [11, Theorem 5.4.6] (or [8, Theorem 7.1.6]). A bound for the number M was given by Powers and Reznick [10], using the bound for the number N in Pólya’s Positivstellensatz.
Theorem 1 yields the following consequence.
Corollary 1
For a polynomial \(f\in \mathbb R[X]\), if \(f(x)>0\) for all \(x\in P\), then f can be represented as
where \(f_e\in \mathbb R[X]\) and \(\deg (f_e^2) \le M\).
This corollary is a special case of Schmüdgen’s Positivstellensatz [14] for convex, compact polyhedra which includes an explicit bound on the degrees of sums of squares coefficients \(f_e^2\).
Schmüdgen’s Positivstellensatz has many important applications, especially in solving polynomial optimization problems and moment problems for compact semi-algebraic sets. Therefore, as a special case of Schmüdgen’s Positivstellensatz, Handelman’s theorem for polynomials plays an important role in applications.
A matrix version of Pólya’s Positivstellensatz was given by Scherer and Hol [13], with applications e.g. in robust polynomial semi-definite programs. Schmüdgen’s theorem for operator polynomials was established by Cimpric̆ and Zalar [3]. Positivstellensätze for polynomial matrices have been studied by several other authors, see for example [1, 2, 7, 12]. The main aim of this paper is to give a version of Handelman’s Positivstellensatz for polynomial matrices with an explicit degree bound.
We need to introduce some notation. For \(t\in \mathbb N^*\), let \(\mathcal {M}_t(R)\) denote the ring of square matrices of order t with entries from a commutative unital ring R. Denote by \(\mathcal {S}_t(R)\) the subset of \(\mathcal {M}_t(R)\) consisting of all symmetric matrices.
In this paper we consider mainly R to be the ring \(\mathbb R[X]\) of polynomials in n variables \(X_1,\ldots , X_n\) with real coefficients. Each element \(\mathbf{A } \in \mathcal {M}_t(\mathbb R[X])\) is a matrix whose entries are polynomials from \(\mathbb R[X]\), called a polynomial matrix. Each element \(\mathbf{A }\in \mathcal {M}_t(\mathbb R[X])\) is also called a matrix polynomial, because it can be viewed as a polynomial in \(X_1,\ldots , X_n\) whose coefficients are from \(\mathcal {M}_t(\mathbb R)\). Namely, we can write \(\mathbf{A }\) as
$$\begin{aligned} \mathbf{A }=\sum _{|\alpha |\le d} \mathbf{A }_\alpha X^{\alpha }, \end{aligned}$$
where \(\alpha =(\alpha _1,\ldots ,\alpha _n) \in \mathbb N^{n}\), \(|\alpha |:=\alpha _1+\cdots + \alpha _n\), \(X^{\alpha }:=X_1^{\alpha _1}\ldots X_n^{\alpha _n}\), \(\mathbf{A }_\alpha \in \mathcal {M}_t(\mathbb R)\), and d is the maximum of the degrees of the entries of \(\mathbf{A }\), called the degree of the matrix polynomial \(\mathbf{A }\). To unify notation, throughout the paper each element of \(\mathcal {M}_t(\mathbb R[X])\) is called a polynomial matrix.
For any polynomial matrix \(\mathbf{A }\in \mathcal {M}_t(\mathbb R[X])\) and for any subset \(K\subseteq \mathbb R^n\), by \(\mathbf{A }\succcurlyeq 0 \) (resp. \(\mathbf{A }\succ 0\)) on K we mean that for any \(x\in K\), the matrix \(\mathbf{A }(x)\) is positive semidefinite (resp. positive definite), i.e. all eigenvalues of the matrix \(\mathbf{A }(x)\) are non-negative (resp. positive).
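In practice, the condition \(\mathbf{A }\succ \mathbf{0 }\) on K can be probed numerically by sampling K and checking the smallest eigenvalue of the symmetric matrix at each point. A minimal sketch (assuming Python with numpy; the \(2\times 2\) matrix F below is hypothetical, not from the paper):

```python
import numpy as np

def is_pd_on_samples(F, points, tol=0.0):
    """Check that F(p) is positive definite at each sample point p,
    via the smallest eigenvalue of the symmetric matrix F(p)."""
    return all(np.linalg.eigvalsh(F(p)).min() > tol for p in points)

# Hypothetical 2x2 polynomial matrix F(x, y) = [[2 + x^2, x*y], [x*y, 2 + y^2]].
F = lambda p: np.array([[2 + p[0]**2, p[0]*p[1]],
                        [p[0]*p[1], 2 + p[1]**2]])

# Sample the square [-1, 1]^2 on a coarse grid.
grid = [(a, b) for a in np.linspace(-1, 1, 9) for b in np.linspace(-1, 1, 9)]
print(is_pd_on_samples(F, grid))  # True
```

Sampling only gives evidence, not a certificate; the representations in this paper are what certify positivity exactly.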
For any polynomial matrices \(\mathbf{A }, \mathbf{B }\in \mathcal {M}_t(\mathbb R[X])\), the notation \(\mathbf{A }\succcurlyeq \mathbf{B }\) on K means that \(\mathbf{A }-\mathbf{B }\succcurlyeq \mathbf{0 }\) on K.
Suppose that we have a convex, compact polyhedron \(P\subseteq \mathbb R^n\) with non-empty interior, bounded by linear polynomials \(L_1, \ldots , L_m\in \mathbb R[X]\), defined by (1). Let \(\mathbf{F } \in \mathcal {S}_t(\mathbb R[X])\) be a polynomial matrix of degree \(d>0\). Assume \(\mathbf{F } \succ \mathbf{0 }\) on P. The main result of this paper is presented in Theorem 3 which is a matrix version of Handelman’s Positivstellensatz, stating that there exists a number \(N_0\) such that for every integer \(N >N_0\) the polynomial matrix \(\mathbf{F }\) can be written as
$$\begin{aligned} \mathbf{F }=\sum _{|\alpha |=N+d} \mathbf{F }_\alpha L_1^{\alpha _1}\cdots L_m^{\alpha _m}, \end{aligned}$$
where \(\mathbf{F }_\alpha \in \mathcal {S}_t(\mathbb R)\) are positive definite scalar matrices with \(|\alpha |=N+d\).
The main idea in the proof of this theorem is inherited from Powers and Reznick [10], using a matrix version of Pólya’s Positivstellensatz [13] and the continuous dependence of the eigenvalue functions of the polynomial matrix \(\mathbf{F }\) on the entries of \(\mathbf{F }\) (by [16, Theorem 1]). As a corollary of this theorem, we give a special case of Schmüdgen’s Positivstellensatz for polynomial matrices positive definite on convex, compact polyhedra. Furthermore, we give a procedure to find such a representation for the polynomial matrix \(\mathbf{F }\).
2 Representation of polynomial matrices positive definite on simplices
In this section we consider a simple case where P is an n-simplex with vertices \(\{v_0,v_1,\ldots ,v_n\}\) and let \(\{L_0,L_1,\ldots ,L_n\}\) be the set of barycentric coordinates of P, that is, each \(L_i\in \mathbb R[X]\) is linear and
$$\begin{aligned} L_i(v_j)=\delta _{ij}, \quad i,j=0,1,\ldots ,n, \qquad \sum _{i=0}^n L_i=1. \end{aligned}$$
(2)
Let \(\mathbf{F }\in \mathcal {S}_t(\mathbb R[X])\) be a polynomial matrix of degree \(d>0\). We can express \(\mathbf{F }\) as
$$\begin{aligned} \mathbf{F }=\sum _{|\alpha |\le d} \mathbf{A }_\alpha X^{\alpha }, \end{aligned}$$
where \(\mathbf{A }_\alpha \in \mathcal {M}_t(\mathbb R)\).
Let us consider the Bernstein–Bézier form of \(\mathbf{F }\) with respect to P:
It is easy to see that \(\widetilde{\mathbf{F }}_d(Y)\in \mathcal {S}_t(\mathbb R[Y])\) is a homogeneous polynomial matrix of degree d. Moreover, it follows from the relations (2) that
$$\begin{aligned} \widetilde{\mathbf{F }}_d(L_0,L_1,\ldots ,L_n)=\mathbf{F }. \end{aligned}$$
Following Scherer and Hol [13], for each multi-index \(\alpha =(\alpha _1,\ldots ,\alpha _n)\in \mathbb N^n\), let us denote
With these notations, we can re-write \(\mathbf{F }\) as
With the spectral norm \(\Vert {\cdot }\Vert \), following Scherer and Hol [13], we define
Using these notations, we have the following representation of polynomial matrices which are positive on simplices.
Theorem 2
Let \(P\subseteq \mathbb R^n\) be an n-simplex given as above and \(\mathbf{F }\in \mathcal {S}_t(\mathbb R[X])\) a polynomial matrix of degree \(d>0\). Assume that \(\mathbf{F }\succcurlyeq \lambda \mathbf{I }_t\) on P for some \(\lambda >0\). Let \(C:=C(\widetilde{\mathbf{F }_d})\). Then for each \(N>\dfrac{d(d-1)}{2} \dfrac{C}{\lambda }-d\), \(\mathbf{F }\) can be represented as
$$\begin{aligned} \mathbf{F }=\sum _{\alpha \in \mathbb N^{n+1},\ |\alpha |=N+d} \mathbf{F }_\alpha L_0^{\alpha _0}L_1^{\alpha _1}\cdots L_n^{\alpha _n}, \end{aligned}$$
where each \(\mathbf{F }_\alpha \in \mathcal {S}_t(\mathbb R)\) is positive definite.
Proof
Let us denote by \(\varDelta _{n+1}\) the standard simplex in \(\mathbb R^{n+1}\), i.e.
$$\begin{aligned} \varDelta _{n+1}:=\left\{ y=(y_0,y_1,\ldots ,y_n)\in \mathbb R^{n+1} \mid y_i\ge 0,\ \sum _{i=0}^n y_i=1\right\} . \end{aligned}$$
Since \(\mathbf{F }(x)\succcurlyeq \lambda \mathbf{I }_t\) for all \(x\in P\), the Bernstein–Bézier form \(\widetilde{\mathbf{F }_d}\) of \(\mathbf{F }\) with respect to P satisfies
$$\begin{aligned} \widetilde{\mathbf{F }_d}(y)\succcurlyeq \lambda \mathbf{I }_t \quad \text{ for } \text{ all } y\in \varDelta _{n+1}. \end{aligned}$$
Then it follows from Pólya’s theorem for polynomial matrices [13, Theorem 3], that for each \(N> \dfrac{d(d-1)}{2} \dfrac{C}{\lambda }-d\),
$$\begin{aligned} \left( \sum _{i=0}^n Y_i\right) ^{N} \widetilde{\mathbf{F }_d}(Y)=\sum _{|\alpha |=N+d}\mathbf{F }_\alpha Y_0^{\alpha _0}Y_1^{\alpha _1}\cdots Y_n^{\alpha _n}, \end{aligned}$$
(5)
where each \(\mathbf{F }_\alpha \in \mathcal {S}_t(\mathbb R)\) is positive definite. Substituting \(Y_i\) by \(L_i\) on both sides of (5), noting that \(\sum _{i=0}^n L_i=1\) and \(\widetilde{\mathbf{F }_d}(L_0,\ldots ,L_n)=\mathbf{F }\),
we obtain the required representation for \(\mathbf{F }\). \(\square \)
3 Representation of polynomial matrices positive definite on convex, compact polyhedra
Throughout this section, let \(P\subseteq \mathbb R^n\) be a convex, compact polyhedron with non-empty interior, given by (1). By [15], there exist positive numbers \(c_i\in \mathbb R\) such that \(\sum _{i=1}^m c_iL_i(X)=1\). Replacing each \(L_i\) by \(c_iL_i\) we may assume that
$$\begin{aligned} \sum _{i=1}^m L_i(X)=1. \end{aligned}$$
(6)
Moreover, it is easy to check that for each \(i=1,\ldots ,n\), there exist real numbers \(b_{ij}\in \mathbb R, j=1,\ldots ,m\) such that
$$\begin{aligned} X_i=\sum _{j=1}^m b_{ij}L_j(X). \end{aligned}$$
(7)
Let us consider the \(n\times m\) matrix \(\mathbf{B }:=(b_{ij})_{i=1,\ldots ,n; j=1,\ldots ,m}\). Then for \(X=(X_1,\ldots ,X_n)\) and \(L=(L_1,\ldots ,L_m)\), we have \( X^T=\mathbf{B }\cdot L^T. \) In other words, we have
$$\begin{aligned} \begin{pmatrix} X_1\\ \vdots \\ X_n \end{pmatrix} =\mathbf{B } \begin{pmatrix} L_1\\ \vdots \\ L_m \end{pmatrix} . \end{aligned}$$
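Finding the matrix \(\mathbf{B }\) amounts to solving a small linear system by comparing coefficients. A sketch (assuming Python with sympy) for a hypothetical triangle with \(L_1=x\), \(L_2=y\), \(L_3=1-x-y\), which already satisfies \(L_1+L_2+L_3=1\):

```python
import sympy as sp

x, y = sp.symbols("x y")
# Hypothetical polyhedron: the triangle bounded by L1 = x, L2 = y, L3 = 1 - x - y.
L = sp.Matrix([x, y, 1 - x - y])
b = sp.symbols("b00 b01 b02 b10 b11 b12")
B = sp.Matrix(2, 3, b)                      # unknown 2x3 coefficient matrix

# X_i = sum_j b_ij L_j  <=>  every coefficient of each residual entry vanishes.
residual = sp.expand(sp.Matrix([x, y]) - B * L)
conditions = [c for e in residual for c in sp.Poly(e, x, y).coeffs()]
sol = sp.solve(conditions, list(b))
Bsol = B.subs(sol)
print(Bsol)  # Matrix([[1, 0, 0], [0, 1, 0]])
```

For this triangle the solution is unique; for general polyhedra with more facets the system may be under-determined and any solution serves.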
Denote \(\mathbb R[Y]:=\mathbb R[Y_1,\ldots ,Y_m]\), and consider the ring homomorphism
$$\begin{aligned} \varphi :\mathbb R[Y]\rightarrow \mathbb R[X], \quad \varphi (Y_i):=L_i(X),\ i=1,\ldots ,m. \end{aligned}$$
It follows from (6) that \(\sum _{i=1}^m Y_i-1\in \text{ Ker }(\varphi )\). Hence we may assume that the ideal \(I:=\text{ Ker }(\varphi )\) is generated by polynomials \(R_1(Y),\ldots , R_s(Y)\in \mathbb R[Y]\),
where \(\sum _{i=1}^m Y_i-1\) is one of the \(R_i\)’s.
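The generators \(R_1,\ldots ,R_s\) of \(\text{ Ker }(\varphi )\) can be computed by elimination. A sketch (assuming Python with sympy; the triangle \(L_1=x\), \(L_2=y\), \(L_3=1-x-y\) is a hypothetical example): adjoin the relations \(Y_i-L_i\), compute a lex Gröbner basis that eliminates x and y, and keep the basis elements free of x and y.

```python
import sympy as sp

x, y = sp.symbols("x y")
y1, y2, y3 = sp.symbols("y1 y2 y3")
# Hypothetical map: phi(y1) = x, phi(y2) = y, phi(y3) = 1 - x - y.
gens = [y1 - x, y2 - y, y3 - (1 - x - y)]
# Lex order with x, y first eliminates them; the remaining generators
# involving only y1, y2, y3 generate Ker(phi).
G = sp.groebner(gens, x, y, y1, y2, y3, order="lex")
kernel = [g for g in G.exprs if not g.has(x) and not g.has(y)]
print(kernel)  # [y1 + y2 + y3 - 1]
```

Here the kernel is generated by the single relation \(y_1+y_2+y_3-1\), matching the fact that \(\sum _i Y_i - 1\) always lies in \(\text{ Ker }(\varphi )\) by (6).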
Note that the homomorphism \(\varphi \) induces a ring homomorphism
$$\begin{aligned} M_\varphi :\mathcal {M}_t(\mathbb R[Y])\rightarrow \mathcal {M}_t(\mathbb R[X]), \quad (g_{ij}(Y))\mapsto (\varphi (g_{ij})). \end{aligned}$$
Lemma 1
The homomorphism \(M_\varphi \) is surjective, and
$$\begin{aligned} \text{ Ker }(M_\varphi )=\left\langle R_1\mathbf{I }_t,\ldots , R_s\mathbf{I }_t\right\rangle , \end{aligned}$$
where \(\mathbf{I }_t\) denotes the identity matrix in \(\mathcal {M}_t(\mathbb R[Y])\).
Proof
For each \(g(X)=\sum _{|\alpha |\le d}a_\alpha X^\alpha \in \mathbb R[X]\), denote
$$\begin{aligned} \widetilde{g}(Y):=\sum _{|\alpha |\le d} a_\alpha \prod _{i=1}^n \left( \sum _{j=1}^m b_{ij}Y_j\right) ^{\alpha _i}\left( \sum _{j=1}^m Y_j\right) ^{d-|\alpha |}. \end{aligned}$$
(8)
It is clear that \(\widetilde{g}\) is homogeneous of degree d. Moreover \(\varphi (\widetilde{g}(Y))=g(X)\). Hence \(\varphi \) is surjective. Then the surjectivity of \(M_\varphi \) follows from that of \(\varphi \).
On the other hand, \(\mathbf{G }=(g_{ij}(Y))\in \text{ Ker }(M_\varphi )\) if and only if \(g_{ij}\in \text{ Ker }(\varphi )\) for all \(i,j=1,\ldots ,t\). Hence for each \(i,j=1,\ldots ,t\) we have
$$\begin{aligned} g_{ij}(Y)=\sum _{k=1}^s a_{ijk}(Y)R_k(Y), \quad a_{ijk}(Y)\in \mathbb R[Y]. \end{aligned}$$
Then \(\mathbf{G }\) can be written as
$$\begin{aligned} \mathbf{G }=\sum _{k=1}^s R_k(Y)\mathbf{A }_{\mathbf{k}}, \end{aligned}$$
where \(\mathbf{A }_{\mathbf{k}}=(a_{ijk}(Y))\in \mathcal {M}_t(\mathbb R[Y])\) for each \(k=1,\ldots , s\). It is equivalent to the fact that \(\mathbf{G }\in \left\langle R_1\mathbf{I }_t,\ldots , R_s\mathbf{I }_t\right\rangle .\) The proof is complete. \(\square \)
Let \(\mathbf{F }=(f_{ij})\in \mathcal {S}_t(\mathbb R[X])\) be a polynomial matrix of degree \(d>0\). Denote \(\widetilde{\mathbf{F }}:=(\widetilde{f_{ij}})\in \mathcal {S}_t(\mathbb R[Y])\), where each \(\widetilde{f_{ij}}\) is defined by (8), which is a homogeneous polynomial of degree d.
Assume \(\lambda (\mathbf{F })\) is an eigenvalue function of \(\mathbf{F }\). It follows from [16, Theorem 1] that \(\lambda (\mathbf{F })\) depends continuously on the entries \(f_{ij}(X)\), \(i,j=1,\ldots ,t\). That is, there exists a continuous function \(\varLambda {:}\,\mathbb R^{t\times t} \rightarrow \mathbb R\) such that \(\lambda (\mathbf{F })=\varLambda (f_{ij}(X))\). Denote \(\widetilde{\lambda (\mathbf{F })}(Y):=\varLambda (\widetilde{f_{ij}}(Y))\), which is actually an eigenvalue function of the polynomial matrix \(\widetilde{\mathbf{F }}\).
Denote \(R(Y):=\sum _{i=1}^s R_i^2(Y)\). With the notations given above, we have the following useful lemma.
Lemma 2
Let \(\mathbf{F }=(f_{ij})\in \mathcal {S}_t(\mathbb R[X])\) be a polynomial matrix of degree \(d>0\), and let \(\lambda (\mathbf{F })\) be an eigenvalue function of \(\mathbf{F }\). If \(\lambda (\mathbf{F })> 0\) on P, then there exists a sufficiently large constant c such that \(\widetilde{\lambda (\mathbf{F })}+cR>0\) on the standard m-simplex \( \varDelta _m\). More explicitly, this holds for \(c>-m_1/m_2\), where \(m_1\) is the minimum of \(\widetilde{\lambda (\mathbf{F })}\) on \(\varDelta _m\) and \(m_2\) is the minimum of the polynomial R on the compact set \(\varDelta _m\cap \{y\in \mathbb R^m|\widetilde{\lambda (\mathbf{F })}(y)\le 0\}\).
Proof
The proof goes along the same lines as the proof of [10, Lemma 4], using continuity of the function \(\widetilde{\lambda (\mathbf{F })}\). \(\square \)
Applying this lemma, we have
Lemma 3
Let \(\mathbf{F }=(f_{ij})\in \mathcal {S}_t(\mathbb R[X])\) be a polynomial matrix of degree \(d>0\). Denote \(\widetilde{\mathbf{F }}:=(\widetilde{f_{ij}})\in \mathcal {S}_t(\mathbb R[Y])\). Assume \(\mathbf{F }\succ \mathbf{0 }\) on P. Then there exists a sufficiently large constant c such that \(\widetilde{\mathbf{F }}+cR\mathbf{I }_t \succ \mathbf{0 }\) on the standard m-simplex \(\varDelta _m\).
Proof
Since \(\mathbf{F }\) is positive definite on P, its eigenvalue functions \(\lambda _k(\mathbf{F }), k=1,\ldots ,t\), are positive on P. It follows from Lemma 2 that for each k, there exists a sufficiently large constant \(c_k\) such that \(\widetilde{\lambda _k(\mathbf{F })} + c_k R\) is positive on \(\varDelta _m\). Let \( c:=\max _{k=1,\ldots ,t}{c_k}\). Then \(\widetilde{\lambda _k(\mathbf{F })} + c R\) is positive on \(\varDelta _m\) for each \(k=1, \ldots , t\). Note that \(\widetilde{\lambda _k(\mathbf{F })}\), \(k=1,\ldots ,t\), are eigenvalue functions of the polynomial matrix \(\widetilde{\mathbf{F }}\). Moreover, the eigenvalues of the matrix \(\widetilde{\mathbf{F }}+cR\mathbf{I }_t\) are \(\widetilde{\lambda _k(\mathbf{F })}+cR\), \(k=1,\ldots ,t\). It follows that \(\widetilde{\mathbf{F }}+cR\mathbf{I }_t\) is positive definite on \(\varDelta _m\). The proof is complete. \(\square \)
Note that \(\overline{\mathbf{F }}:=\widetilde{\mathbf{F }}+cR\mathbf{I }_t\) need not be homogeneous. However, by homogenizing \(\overline{\mathbf{F }}\) by \(\sum _{i=1}^m Y_i\), we obtain a homogeneous polynomial matrix of the same degree as \(\overline{\mathbf{F }}\). More explicitly, if we express \(\overline{\mathbf{F }}\) as
$$\begin{aligned} \overline{\mathbf{F }}=\sum _{|\alpha |\le d} \overline{\mathbf{A }}_\alpha Y^\alpha , \quad \overline{\mathbf{A }}_\alpha \in \mathcal {M}_t(\mathbb R), \end{aligned}$$
then its homogenization by \(\sum _{i=1}^m Y_i\) is
$$\begin{aligned} \overline{\mathbf{F }}^h:=\sum _{|\alpha |\le d} \overline{\mathbf{A }}_\alpha Y^\alpha \left( \sum _{i=1}^m Y_i\right) ^{d-|\alpha |}. \end{aligned}$$
(9)
\(\overline{\mathbf{F }}^h\) is a homogeneous polynomial matrix of degree d. Moreover, \(M_\varphi (\overline{\mathbf{F }}^h)=\mathbf{F }\), and \(\overline{\mathbf{F }}^h\) is positive definite on \(\varDelta _m\).
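The homogenization step is mechanical: each term of degree k is multiplied by \((\sum _i Y_i)^{d-k}\). A sketch for a single scalar entry (assuming Python with sympy; the polynomial g below is a hypothetical non-homogeneous entry, in two variables for brevity):

```python
import sympy as sp

y1, y2 = sp.symbols("y1 y2")

def homogenize(g, vars_, s):
    """Homogenize g by the linear form s, multiplying each term of
    degree k by s**(d - k), where d is the total degree of g."""
    p = sp.Poly(g, *vars_)
    d = p.total_degree()
    return sp.expand(sum(c * sp.prod(v**e for v, e in zip(vars_, mono)) * s**(d - sum(mono))
                         for mono, c in zip(p.monoms(), p.coeffs())))

g = y1**2 + y2 + 3              # hypothetical non-homogeneous entry
gh = homogenize(g, (y1, y2), y1 + y2)
# The two polynomials agree wherever y1 + y2 = 1 (i.e. on the simplex):
print(sp.expand(gh.subs(y2, 1 - y1) - g.subs(y2, 1 - y1)))  # 0
```

This is why applying \(M_\varphi \) recovers \(\mathbf{F }\): by (6) the form \(\sum _i Y_i\) is mapped to 1, so homogenizing changes nothing after substitution.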
Now we can state and prove the following matrix version of Handelman’s Positivstellensatz.
Theorem 3
Let P, \(\mathbf{F }\), \(\overline{\mathbf{F }}\), \(\overline{\mathbf{F }}^h\) be given as above, with \(\mathbf{F }\) positive definite on P. Assume that \(\overline{\mathbf{F }}^h\succcurlyeq \lambda \mathbf{I }_t\) on \(\varDelta _m\) for some \(\lambda >0\). Let \(d:=\text{ deg }(\overline{\mathbf{F }})\) and \(C:=C(\overline{\mathbf{F }}^h)\). Then for each \(N> \dfrac{d(d-1)}{2} \dfrac{C}{\lambda }-d\), \(\mathbf{F }\) can be represented as
$$\begin{aligned} \mathbf{F }=\sum _{|\alpha |=N+d} \mathbf{F }_\alpha L_1^{\alpha _1}\cdots L_m^{\alpha _m}, \end{aligned}$$
where each \(\mathbf{F }_\alpha \in \mathcal {S}_t(\mathbb R)\) is positive definite.
Proof
First apply the matrix version of Pólya’s Positivstellensatz given in [13, Theorem 3] to \(\overline{\mathbf{F }}^h\), observing that \(d=\deg (\overline{\mathbf{F }}^h)\). Then apply \(M_\varphi \), using the facts that \(M_\varphi (\overline{\mathbf{F }}^h)=\mathbf{F }\) and \(\varphi (\sum _{i=1}^m Y_i)=1\). \(\square \)
In summary, we formulate the construction given above as a procedure to find a representation for the polynomial matrix \(\mathbf{F }=(f_{ij})\in \mathcal {S}_t(\mathbb R[X])\) positive definite on a convex, compact polyhedron \(P\subseteq \mathbb R^n\) as follows:
(1) Following [4], find positive constants \(c_i\in \mathbb R\) such that \(\sum _{i=1}^m c_iL_i(X) = 1\). Constructing the \(c_i\)’s comes down to finding a positive solution of an under-determined linear system.
(2) Solve the system of equations
$$\begin{aligned} X_i=\sum _{j=1}^m b_{ij}L_{j}(X), \quad i=1,\ldots ,n, \end{aligned}$$
to find the matrix \(\mathbf{B }=(b_{ij})_{i=1,\ldots ,n; j=1,\ldots ,m}\).
(3) Use (8) to find \(\widetilde{f_{ij}}\), \(i,j=1,\ldots ,t\).
(4) Use Gröbner bases to find a generating set \(\{R_1,\ldots ,R_s\}\) of the kernel \(\text{ Ker }(\varphi )\) of the ring homomorphism \(\varphi \).
(5) Following the proof of Lemma 3, find a sufficiently large c such that \(\widetilde{\mathbf{F }}+c R\mathbf{I }_t \succ \mathbf{0 }\) on \(\varDelta _m\).
(6) Use (9) to construct the homogenization \(\overline{\mathbf{F }}^h\) of \(\overline{\mathbf{F }}:=\widetilde{\mathbf{F }}+c R \mathbf{I }_t\).
(7) Following the proof of Lemma 4 below, find a positive number \(\lambda \) such that \(\overline{\mathbf{F }}^h(y)\succcurlyeq \lambda \mathbf{I }_t\) for all \(y\in \varDelta _m\).
Lemma 4
Let \(K\subseteq \mathbb R^m\) be a non-empty compact set, and \(\mathbf{G }\in \mathcal {S}_t(\mathbb R[Y])\). Then there exists a number \(c \in \mathbb R\) such that
$$\begin{aligned} \mathbf{G }(y)\succcurlyeq c\,\mathbf{I }_t \quad \text{ for } \text{ all } y \in K. \end{aligned}$$
In particular, if \(\mathbf{G }(y)\succ 0\) for all \(y\in K\), then we can choose a number \(c >0 \) such that \(\mathbf{G }(y)\succcurlyeq c \mathbf{I }_t, \text{ for } \text{ all } y \in K. \)
Proof
Let \(\lambda _{1}(\mathbf{G }), \ldots , \lambda _t(\mathbf{G })\) be (real-valued) eigenvalue functions of the polynomial matrix \(\mathbf{G }\in \mathcal {S}_t(\mathbb R[Y]). \) It follows from [16, Theorem 1] that \(\lambda _i(\mathbf{G })\) are continuous functions. Since K is compact, let
$$\begin{aligned} c_i:=\min _{y\in K}\lambda _i(\mathbf{G })(y), \quad i=1,\ldots ,t. \end{aligned}$$
Denote \(c:=\min _{i=1,\ldots ,t} c_i\). Since the eigenvalue functions of \(\mathbf{G }-c\mathbf{I }_t\) are \(\lambda _i(\mathbf{G })-c\), \(i=1,\ldots ,t\), it follows from the definition of c that
$$\begin{aligned} \lambda _i(\mathbf{G })(y)-c\ge 0 \end{aligned}$$
for all \(y\in K\) and for all \(i=1,\ldots ,t\). This implies that \(\mathbf{G }(y)\succcurlyeq c\mathbf{I }_t, \text{ for } \text{ all } y \in K.\) \(\square \)
(8) Use the formula (4) to find the number \(C:=C(\overline{\mathbf{F }}^h)\).
(9) Choose an integer \(N> \dfrac{d(d-1)}{2} \dfrac{C}{\lambda }-d\).
(10) Compute the matrix coefficients of the polynomial matrix \((\sum _{i=1}^m Y_i)^N \overline{\mathbf{F }}^h\in \mathcal {S}_t(\mathbb R[Y])\); substituting \(Y_i\) by \(L_i(X)\), we obtain the desired representation for \(\mathbf{F }\).
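Steps (5) and (7) both reduce to bounding extremal eigenvalues over a simplex, which can be approximated numerically on a grid before being certified exactly. A sketch (assuming Python with numpy; the symmetric matrix G below is hypothetical, over the standard 2-simplex):

```python
import numpy as np

# Hypothetical symmetric polynomial matrix G(y1, y2) on {y1, y2 >= 0, y1 + y2 = 1}.
def G(y1, y2):
    return np.array([[2 + y1, y1 * y2],
                     [y1 * y2, 2 + y2]])

# Estimate the constant of Lemma 4: the minimum over the simplex of the
# smallest eigenvalue of G(y).
ts = np.linspace(0.0, 1.0, 101)
c = min(np.linalg.eigvalsh(G(t, 1 - t)).min() for t in ts)
print(round(c, 4))  # 2.0
```

A grid minimum is only an estimate from above of the true constant; in the paper the exact value is obtained from the explicit eigenvalue functions.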
We illustrate the procedure given above with the following example, computed explicitly using MATLAB Version 7.10 (Release 2010a) and its add-on GloptiPoly 3 developed by Henrion et al. [5].
Example 1
Let us consider the unit square centered at the origin
Choosing \(c_1=c_2=c_3=c_4=\dfrac{1}{4}\), we have \(\sum _{i=1}^4c_iL'_i(x,y)=1\). Therefore, setting \(L_i:=c_iL'_i\) for \(i=1,\ldots ,4\), we have \( \sum _{i=1}^4 L_i = 1\).
It is easy to see that the matrix \(\mathbf{B }=\left[ \begin{array}{llll} 2&{}\quad -2&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 2&{}\quad -2 \end{array}\right] \) satisfies the equation
$$\begin{aligned} \begin{pmatrix} x\\ y \end{pmatrix} =\mathbf{B } \begin{pmatrix} L_1\\ L_2\\ L_3\\ L_4 \end{pmatrix} . \end{aligned}$$
Let \(\varphi {:}\,\mathbb R[y_1,y_2,y_3,y_4] \rightarrow \mathbb R[x,y]\) be the ring homomorphism defined by \(\varphi (y_i):=L_i(x,y)\), \(i=1,2,3,4\). Using any monomial ordering in \(\mathbb R[y_1,y_2,y_3,y_4]\) we can find a Gröbner basis for the kernel \(\text{ Ker }(\varphi )\) of \(\varphi \):
Consider \(R:=R_1^2+R_2^2\).
Now we consider the polynomial matrix
Eigenvalue functions of \(\mathbf{F }\) are
For any \((x,y)\in P\) we have \(\lambda _i(\mathbf{F })(x,y)\ge 2\), \(i=1,2\). Hence \(\mathbf{F }(x,y)\succcurlyeq 2\mathbf{I }_2\) for every \((x,y)\in P\).
With the matrix \(\mathbf{B }\) considered above, using the formula (8), we find \(\widetilde{f_{ij}}\), \(i,j=1,2\), and then we obtain the polynomial matrix \(\widetilde{\mathbf{F }}=(\widetilde{f}_{ij})\). We can compute exactly the eigenvalue functions \(\lambda _1(\widetilde{\mathbf{F }})\) and \(\lambda _2(\widetilde{\mathbf{F }})\) of \(\widetilde{\mathbf{F }}\) which satisfy
Moreover, \(\min _{\varDelta _4\cap \{\lambda _2(\widetilde{\mathbf{F }})\le 0\}} R(y_1,y_2,y_3,y_4) = 0.125\). Thus we can choose
for which \(\overline{\mathbf{F }}:=\widetilde{\mathbf{F }}+cR \mathbf{I }_2 \succ \mathbf{0 } \) on \(\varDelta _4\).
Homogenizing \(\overline{\mathbf{F }}\) by \(\sum \nolimits _{i=1}^4y_i\) we obtain a homogeneous polynomial matrix \(\overline{\mathbf{F }}^h=(\overline{f_{ij}}^h)\). Then we compute exactly the eigenvalue functions of the matrix \(\overline{\mathbf{F }}^h\) which satisfy
It follows that \(\overline{\mathbf{F }}^h\succcurlyeq 1.5294 ~ \mathbf{ I }_2\) on \(\varDelta _4\), and put \(\lambda :=1.5294\).
Using the formula (4), we can find the number \(C:=C(\overline{\mathbf{F }}^h)=\dfrac{1044}{24}=\dfrac{87}{2}\).
Therefore, choosing \(N=167\), the polynomial matrix \((y_1+y_2+y_3+y_4)^{167}\overline{\mathbf{F }}^h\) has positive definite coefficients.
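The choice \(N=167\) can be sanity-checked against the bound of Theorem 3. In the sketch below (Python), \(d=4\) is an assumed value for \(\deg (\overline{\mathbf{F }})\), not restated in this section but consistent with the stated N:

```python
import math

# Check of the bound N > d(d-1)/2 * C/lambda - d with the example's constants.
# d = 4 is an assumption; C = 87/2 and lambda = 1.5294 are from the example.
d = 4
C = 87 / 2
lam = 1.5294
bound = d * (d - 1) / 2 * C / lam - d
N = math.floor(bound) + 1   # smallest integer strictly above the bound
print(round(bound, 2), N)   # 166.66 167
```

So, under this assumption, 167 is exactly the smallest admissible integer.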
Computing the matrix coefficients of the polynomial matrix \((y_1+y_2+y_3+y_4)^{167}\overline{\mathbf{F }}^h\in \mathcal {S}_2(\mathbb R[y_1,y_2,y_3,y_4])\) and substituting \(y_i\) by \(L_i(x,y)\), we obtain the desired representation for \(\mathbf{F }\).
As a consequence of Theorem 3, we obtain the following matrix version of Schmüdgen’s Positivstellensatz for convex, compact polyhedra.
Corollary 2
Let P, \(\mathbf{F }\), \(\overline{\mathbf{F }}\), \(\overline{\mathbf{F }}^h\) be given as above, with \(\mathbf{F }\) positive definite on P. Assume that \(\overline{\mathbf{F }}^h\succcurlyeq \lambda \mathbf{I }_t\) on \(\varDelta _m\) for some \(\lambda >0\). Let \(d:=\text{ deg }(\overline{\mathbf{F }})\) and \(C:=C(\overline{\mathbf{F }}^h)\). Then for \(N> \dfrac{d(d-1)}{2} \dfrac{C}{\lambda }-d\), \(\mathbf{F }\) can be represented as
where \(\mathbf{F }_e\in \mathcal {M}_t(\mathbb R[X])\) and the degree of each sum of squares \(\mathbf{F }_e^T\mathbf{F }_e\) does not exceed \(N+d\).
Proof
The proof follows directly from Theorem 3, with the observation that any positive definite matrix \(\mathbf{F }_{\alpha }\in \mathcal {S}_t(\mathbb R)\) can be written as
$$\begin{aligned} \mathbf{F }_\alpha =\mathbf{G }_\alpha ^T\mathbf{G }_\alpha , \end{aligned}$$
where \( \mathbf{G }_{\alpha }\in \mathcal {M}_t(\mathbb R)\) is a non-singular matrix. \(\square \)
References
Cimpric̆, J.: Strict Positivstellensätze for matrix polynomials with scalar constraints. Linear Algebra Appl. 434(8), 1879–1883 (2011)
Cimpric̆, J.: Real algebraic geometry for matrices over commutative rings. J. Algebra 359, 89–103 (2012)
Cimpric̆, J., Zalar, A.: Moment problems for operator polynomials. J. Math. Anal. Appl. 401(1), 307–316 (2013)
Handelman, D.: Representing polynomials by positive linear functions on compact convex polyhedra. Pac. J. Math. 132, 35–62 (1988)
Henrion, D., Lasserre, J., Löfberg, J.: GloptiPoly 3: Moments, Optimization and Semidefinite Programming. http://homepages.laas.fr/henrion/software/gloptipoly3/
Krivine, J.L.: Quelques propértiés des préordres dans les anneaux commutatifs unitaires. C. R. Acad. Sci. Paris 258, 3417–3418 (1964)
Lê, C.-T.: Some Positivstellensätze for polynomial matrices. Positivity 19(3), 513–528 (2015)
Marshall, M.: Positive Polynomials and Sums of Squares, Mathematical Surveys and Monographs. American Mathematical Society, Providence (2008)
Pólya, G.: Über positive Darstellung von Polynomen. Vierteljschr. Naturforsch. Ges. Zürich 73, 141–145 (1928). In: Boas, R.P. (ed.) Collected Papers, vol. 2, pp. 309–313. MIT Press, Cambridge (1974)
Powers, V., Reznick, B.: A new bound for Pólya’s theorem with applications to polynomials positive on polyhedra. J. Pure Appl. Algebra 164, 221–229 (2001)
Prestel, A., Delzell, C.N.: Positive Polynomials—From Hilbert’s 17th Problem to Real Algebra. Springer, New York (2001)
Savchuk, Y., Schmüdgen, K.: Positivstellensätze for algebras of matrices. Linear Algebra Appl. 436, 758–788 (2012)
Scherer, C.W., Hol, C.W.J.: Matrix sum-of-squares relaxations for robust semi-definite programs. Math. Program. 107(1–2), 189–211 (2006)
Schmüdgen, K.: The K-moment problem for compact semi-algebraic sets. Math. Ann. 289(2), 203–206 (1991)
Schweighofer, M.: An algorithmic approach to Schmüdgen’s Positivstellensatz. J. Pure Appl. Algebra 166(3), 307–319 (2002)
Zedek, M.: Continuity and location of zeros of linear combinations of polynomials. Proc. Am. Math. Soc. 16, 78–84 (1965)
Acknowledgements
The authors would like to thank the anonymous referees for their useful comments and suggestions. They would also like to thank Dr. Ngo Lam Xuan Chau for his fruitful discussion to compute the kernel of the homomorphism \(M_\varphi \). Both authors are partially supported by the Vietnam National Foundation for Science and Technology Development (NAFOSTED) under Grant No. 101.01-2016.27.
Lê, CT., Dư, THB. Handelman’s Positivstellensatz for polynomial matrices positive definite on polyhedra. Positivity 22, 449–460 (2018). https://doi.org/10.1007/s11117-017-0520-y
Keywords
- Handelman’s theorem
- Pólya’s theorem
- Schmüdgen’s theorem
- Matrix polynomial
- Polynomial matrix
- Positivstellensatz
- Positive definite
- Standard simplex
- Polyhedron