1 Introduction

The question of representing real polynomials by sums of squares of polynomials is one of the main topics in real algebraic geometry. Starting with Hilbert’s question of whether every nonnegative real polynomial in several variables is a sum of squares of real rational functions, many questions have arisen, and many interesting results. For more details, we refer the reader to [2, 14, 26, 31] with the references therein.

Let K be a basic closed semialgebraic set in \(\mathbb {R}^{n}\) defined by finitely many polynomial inequalities \(\{x \in \mathbb {R}^{n} \,|\, g_{1}(x) \ge 0, \ldots , g_{m}(x) \ge 0\}\), where each \(g_{i}\) is a real polynomial. Positivstellensätze are results characterizing the polynomials that are positive on K in terms of sums of squares and the polynomials \(g_{i}\) used to describe K. Theorems about the existence of such representations have various applications, notably in problems of optimizing polynomial functions on semialgebraic sets (see, for example, [9, 12,13,14]).

In case K is compact, Schmüdgen [33] has proved that any polynomial positive on K lies in the preordering generated by the \(g_{i}\)’s, i.e., the set of finite sums of elements of the form \(\sigma _{e} g_{1}^{e_{1}} {\cdots } g_{m}^{e_{m}}\), where \(e_{i} \in \{0, 1\}\) and each \(\sigma _{e}\) is a sum of squares of polynomials. Putinar [27] has proved that, under a certain additional condition, the preordering can be replaced by the set of elements of the form \(\sigma _{0} + \sigma _{1} g_{1} + {\cdots } + \sigma _{m} g_{m}\), where each \(\sigma _{i}\) is a sum of squares of polynomials.

If K is not compact, the above characterizations do not hold in general and can depend on the choice of generators. In fact, Scheiderer [28] has shown that Schmüdgen’s Positivstellensatz does not hold if K is not compact and \(\dim K \ge 3\), or if \(\dim K = 2\) and K contains a two-dimensional cone. On the other hand, there exist non-compact semialgebraic sets K of any dimension for which Schmüdgen’s Positivstellensatz (or even Putinar’s Positivstellensatz) holds for polynomials that are positive on K and satisfy certain extra conditions (see [8, 15, 21, 24, 25, 28,29,30, 34]).

We also would like to note that both Schmüdgen’s and Putinar’s Positivstellensätze were extended from the usual real polynomials to real symmetric matrix polynomials or operator polynomials (see [3, 5, 10, 32]). Here and in the following, by a matrix polynomial, we mean a polynomial whose coefficients are matrices of the same order; equivalently, a matrix polynomial is a matrix whose entries are all polynomials. The symmetry and positivity of a matrix polynomial are defined point-wise. In particular, a positive semidefinite real matrix polynomial is necessarily symmetric.

The aim of this paper is to extend the results obtained in [8] to matrix polynomials. More precisely, via Newton polyhedra, we define and study (symmetric) matrix polynomials that are nondegenerate at infinity. From this, we construct a class of (not necessarily compact) semialgebraic sets in \(\mathbb {R}^{n}\) such that for each set K in the class, we have the following two statements: (i) the space of symmetric matrix polynomials whose eigenvalues are bounded on K is described in terms of the Newton polyhedron corresponding to the matrix polynomials used to define K and is generated by a finite set of matrix monomials; and (ii) a matrix version of Schmüdgen’s Positivstellensatz holds for matrix polynomials whose eigenvalues are “strictly” positive and bounded on K.

Notation

Throughout this paper, \(\mathbb {Z}\) denotes the set of integers, \(\mathbb {Z}_{\ge 0}\) the set of nonnegative integers, and \({\Bbb R}^{n}\) the Euclidean space of dimension n. The inner product (resp., norm) on \({\Bbb R}^{n}\) is denoted by \(\langle x, y \rangle \) for \(x, y \in {\Bbb R}^{n}\) (resp., \(\| x \| := \sqrt {\langle x, x \rangle }\) for \(x \in {\Bbb R}^{n}\)). We let \({\Bbb R}[x]\) denote the ring of real polynomials in the n indeterminates \(x = (x_{1}, \ldots , x_{n})\).

In what follows, we fix a positive integer number d. We will denote by \(\text {Mat}_{d}({\mathbb {R}[x]})\) the ring of all \(d \times d\) matrices with entries from \(\mathbb {R}[x]\) (elements in this ring will be called matrix polynomials) and by \(\text {Sym}_{d}({\mathbb {R}[x]})\) the set of all symmetric matrix polynomials from \(\text {Mat}_{d}({\mathbb {R}[x]})\). The unit of \(\text {Mat}_{d}({\mathbb {R}[x]})\) is the identity matrix \(I_{d}\).

Recall that a symmetric matrix \(A \in \mathbb {R}^{d \times d}\) is called positive semidefinite if \(\langle A v, v \rangle \ge 0\) for all vectors \(v \in \mathbb {R}^{d}\). A is positive definite if it is positive semidefinite and invertible. For symmetric matrices A and B of the same size, we write \(A \succeq B\) (resp., \(A \succ B\)) to express that \(A - B\) is positive semidefinite (resp., positive definite). Equivalently, A is positive semidefinite if and only if all of its eigenvalues are nonnegative, and A is positive definite if and only if all of its eigenvalues are positive.
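These point-wise definitions translate directly into numerics. As a small illustrative sketch (not part of the paper's development), the Loewner order \(A \succeq B\) can be checked via eigenvalues in NumPy, with a tolerance chosen here purely for the example:

```python
import numpy as np

def is_psd(A, tol=1e-10):
    # a symmetric matrix is positive semidefinite iff all eigenvalues are >= 0
    return bool(np.all(np.linalg.eigvalsh(A) >= -tol))

def succeq(A, B):
    # Loewner order: A >= B iff A - B is positive semidefinite
    return is_psd(A - B)

A = np.array([[2.0, 1.0], [1.0, 2.0]])  # eigenvalues 1 and 3
B = np.eye(2)
assert succeq(A, B)       # A - B has eigenvalues 0 and 2
assert not succeq(B, A)   # B - A has eigenvalues 0 and -2
```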

Given a symmetric matrix polynomial \(F \in \text {Sym}_{d}({\mathbb {R}[x]})\) and a set \(K \subset \mathbb {R}^{n}\), we write \(F \succeq 0\) (resp., \(F \succ 0\)) on K if for all \(x \in K\), the matrix \(F(x)\) is positive semidefinite (resp., the matrix \(F(x)\) is positive definite).

A subset \(\mathcal {M}\) of \(\text {Sym}_{d}({\mathbb {R}[x]})\) is said to be a quadratic module if \(I_{d} \in \mathcal {M}, \mathcal {M} + \mathcal {M} \subset \mathcal {M}\) and \(A^{T} \mathcal {M} A \subset \mathcal {M}\) for every \(A \in \text {Mat}_{d}({\mathbb {R}[x]})\). The smallest quadratic module which contains a given subset \(\mathcal {G}\) of \(\text {Sym}_{d}({\mathbb {R}[x]})\) will be denoted by \(\mathcal {M}_{\mathcal {G}}\). It consists of all finite sums of elements of the form \(A^{T}GA\) where \(G \in \mathcal {G} \cup \{I_{d}\}\) and \(A \in \text {Mat}_{d}({\mathbb {R}[x]})\). A subset \(\mathcal {T}\) of the set \(\text {Sym}_{d}({\mathbb {R}[x]})\) is said to be a preordering if \(\mathcal {T}\) is a quadratic module and the set \(\mathcal {T} \cap {\mathbb {R}[x]} \cdot I_{d}\) is closed under multiplication. The smallest preordering containing a given set \(\mathcal {G} \subset \text {Sym}_{d}({\mathbb {R}[x]})\) will be denoted by \(\mathcal {T}_{\mathcal {G}}\).

2 Nondegeneracy of Matrix Polynomials

Let \(\mathcal {G} := \{G_{1}, \ldots , G_{m}\} \subset \text {Sym}_{d}(\mathbb {R}[x])\). For each \(i = 1, \ldots , m\), we can write

$$G_{i}(x) = \sum\limits_{\alpha \in \mathbb{Z}_{\ge 0}^{n}} A_{i, \alpha} x^{\alpha} $$

for some symmetric matrices \(A_{i, \alpha } \in \text {Sym}_{d}(\mathbb {R})\). Then, we define

$$\text{supp}(\mathcal{G}) := \bigcup\limits_{i = 1}^{m} \left\{\alpha \in \mathbb{Z}_{\ge 0}^{n} \,|\, A_{i, \alpha} \ne 0 \right\}. $$

The Newton polyhedron (at infinity) of \(\mathcal {G}\), denoted by \({\Gamma }(\mathcal {G})\), is defined as the convex hull in \(\mathbb {R}^{n}\) of the set \(\text {supp}(\mathcal {G}) \) . The system \(\mathcal {G}\) is said to be convenient if \({\Gamma }(\mathcal {G})\) intersects each coordinate axis in a point different from the origin 0 in \(\mathbb {R}^{n}\).

Given a nonzero vector \(q \in {\Bbb R}^{n}\), we define

$$\begin{array}{@{}rcl@{}} \ell(q, {\Gamma}(\mathcal{G})) &:=& \min \{\langle q, \alpha \rangle \,:\, \alpha \in {\Gamma}(\mathcal{G})\}, \\ {\Delta}(q, {\Gamma}(\mathcal{G})) &:=& \{\alpha \in {\Gamma}(\mathcal{G}) \,:\, \ \langle q, \alpha \rangle = \ell(q, {\Gamma}(\mathcal{G})) \}. \end{array} $$

We say that a subset \({\Delta }\) of \({\Gamma }(\mathcal {G})\) is a face of \({\Gamma }(\mathcal {G})\) if there exists a nonzero vector \(q \in {\Bbb R}^{n}\) such that \({\Delta } = {\Delta }(q, {\Gamma }(\mathcal {G}))\). The dimension of a face \({\Delta }\) is defined as the minimum of the dimensions of the affine subspaces containing \({\Delta }\). The faces of \({\Gamma }(\mathcal {G})\) of dimension 0 are called the vertices of \({\Gamma }(\mathcal {G})\). The Newton boundary (at infinity) of the system \(\mathcal {G}\), denoted by \({\Gamma }_{\infty }(\mathcal {G})\), is defined as the collection of all faces \({\Delta }(q, {\Gamma }(\mathcal {G}))\) with \(q \in \mathbb {R}^{n}\) such that \(\min _{j = 1, \ldots , n}q_{j} < 0\). For \(i = 1, \ldots , m\) and \({\Delta } \in {\Gamma }_{\infty }(\mathcal {G})\), we denote by \(G_{i, {\Delta }}\) the matrix polynomial \({\sum }_{\alpha \in {\Delta }} A_{i, \alpha } x^{\alpha }\).

Let

$$\rho(x) := \sum\limits_{\alpha \in V(\mathcal{G})} |x^{\alpha}|, $$

where \(V(\mathcal {G})\) is the set of all vertices of \({\Gamma }(\mathcal {G})\).

Remark 2.1

By the Tarski–Seidenberg theorem (see, for example, [1, 2]), it is easy to check that \(\rho \) is a semialgebraic function on \(\mathbb {R}^{n}\). Furthermore, we can find a constant \(c > 0\) such that (see also [22, Remark 3.1])

$$c\left( \sum\limits_{\alpha \in {\Gamma}(\mathcal{G}) \cap \mathbb{Z}^{n}_{\ge 0}} |x^{\alpha}| \right) \le \rho(x) \ \le \ \sum\limits_{\alpha \in {\Gamma}(\mathcal{G}) \cap \mathbb{Z}^{n}_{\ge 0}} |x^{\alpha}| \quad \text{ for all} \quad x \in \mathbb{R}^{n}. $$
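As an illustrative sanity check of this two-sided estimate, the following Python sketch evaluates \(\rho\) for a hypothetical support \(\{(2,0), (1,1), (0,2)\}\), whose Newton polyhedron is the segment from (2,0) to (0,2); for this particular example, the constant \(c = 2/3\) works because \(|x_{1} x_{2}| \le (x_{1}^{2} + x_{2}^{2})/2\):

```python
import numpy as np

# Hypothetical support {(2,0), (1,1), (0,2)}: its Newton polyhedron is the
# segment from (2,0) to (0,2), with vertex set V = {(2,0), (0,2)}.
V = [(2, 0), (0, 2)]
lattice = [(2, 0), (1, 1), (0, 2)]  # lattice points of the polyhedron

def mono_abs(x, alpha):
    return abs(x[0]) ** alpha[0] * abs(x[1]) ** alpha[1]

def rho(x):
    return sum(mono_abs(x, a) for a in V)  # rho(x) = x1^2 + x2^2 here

rng = np.random.default_rng(0)
for x in rng.uniform(-5, 5, size=(100, 2)):
    full = sum(mono_abs(x, a) for a in lattice)
    # |x1*x2| <= (x1^2 + x2^2)/2, so c = 2/3 realizes the left inequality
    assert (2 / 3) * full <= rho(x) + 1e-9 <= full + 1e-9
```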

From now on, for each \(A\in \text {Sym}_{d}(\mathbb {R})\), we denote by \(\lambda (A)\) the largest eigenvalue of A. It is well known that

$$\lambda(A) = \max\{\langle Av, v \rangle \,|\, v \in \mathbb{R}^{d}, \|v\| = 1\}. $$
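This variational characterization is easy to test numerically; the sketch below (NumPy, with a toy 2 × 2 matrix) checks that no Rayleigh quotient exceeds the top eigenvalue and that the maximum is attained at an eigenvector:

```python
import numpy as np

def lam(A):
    # largest eigenvalue of a symmetric matrix (eigvalsh sorts ascending)
    return np.linalg.eigvalsh(A)[-1]

A = np.array([[0.0, 1.0], [1.0, 0.0]])  # eigenvalues -1 and 1
rng = np.random.default_rng(1)
for _ in range(500):
    v = rng.normal(size=2)
    v /= np.linalg.norm(v)
    assert v @ A @ v <= lam(A) + 1e-9  # Rayleigh quotient never exceeds lam(A)

v = np.array([1.0, 1.0]) / np.sqrt(2)  # maximizing unit vector
assert abs(v @ A @ v - lam(A)) < 1e-9
```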

The following definition is inspired by the works of Gindikin [6, 7] and Mikhalov [17, 18].

Definition 2.1

We say that the system \(\mathcal {G} := \{G_{1}, \ldots , G_{m}\}\) is nondegenerate (at infinity) if and only if for any face \({\Delta } \in {\Gamma }_{\infty }(\mathcal {G})\),

$$\max\limits_{i = 1, \ldots, m} \lambda(G_{i, {\Delta}}(x)) > 0 \quad\text{for any}\ x \in (\mathbb{R}\backslash \{0\})^{n}. $$

Example 2.1

Let \(\alpha ^{1}, \ldots , \alpha ^{m}\) be nonzero vectors in \(\mathbb {Z}_{\ge 0}^{n}\). Then, the system

$$\{x^{2\alpha^{1}} \cdot I_{d}, \ldots, x^{2\alpha^{m}} \cdot I_{d} \} \subset \text{Sym}_{d}(\mathbb{R}[x]) $$

is nondegenerate.
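As a minimal numerical illustration of this example (with a hypothetical exponent \(\alpha^{1} = (1, 2)\) and d = 2, chosen only for the sketch), every face polynomial is again an even monomial times \(I_{d}\), so its largest eigenvalue is positive off the coordinate hyperplanes:

```python
import numpy as np

d = 2
alpha1 = (1, 2)  # hypothetical exponent vector for the sketch

def G1(x):
    # x^{2*alpha1} * I_d = x1^2 * x2^4 * I_d
    return (x[0] ** 2 * x[1] ** 4) * np.eye(d)

rng = np.random.default_rng(2)
signs = rng.choice([-1.0, 1.0], size=(50, 2))
for x in rng.uniform(0.1, 3.0, size=(50, 2)) * signs:
    # even exponents make the monomial positive whenever all coordinates are nonzero
    assert np.linalg.eigvalsh(G1(x))[-1] > 0
```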

Lemma 2.1

There exists a constant \(c > 0\) such that

$$\max\limits_{i = 1, \ldots, m} \lambda(G_{i}(x)) \le c \rho(x) \quad \text{ for all} \quad x \in \mathbb{R}^{n}. $$

Proof

Take any \(x \in \mathbb {R}^{n}\) and any \(i \in \{1, \ldots , m\}\). Let \(\lambda \) be an eigenvalue of \(G_{i}(x)\). By definition, there exists a vector \(v \in \mathbb {R}^{d}\) with \(\|v\| = 1\) such that \(\lambda = \langle G_{i}(x)v, v \rangle \). Write \(G_{i}(x) = {\sum }_{\alpha }A_{i, \alpha }x^{\alpha }\). Then,

$$|\lambda| = | \langle G_{i}(x)v, v \rangle| \le \sum\limits_{\alpha} | \langle A_{i, \alpha} v, v \rangle | \cdot | x^{\alpha} | \le \sum\limits_{\alpha} \| A_{i, \alpha}\| \cdot | x^{\alpha} | \le \left( \max\limits_{\alpha} \| A_{i, \alpha}\| \right) \cdot {\sum}_{\alpha} | x^{\alpha} |. $$

Since \(\lambda \) is an arbitrary eigenvalue of \(G_{i}(x)\), we get

$$\max\limits_{i = 1, \ldots, m} \lambda(G_{i}(x)) \le \max\limits_{i = 1, \ldots, m} \left[ \left( \max\limits_{\alpha} \| A_{i, \alpha}\| \right) \cdot \sum\limits_{\alpha} | x^{\alpha} | \right]. $$

This, together with Remark 2.1, completes the proof. □
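A toy instance of the lemma, with assumed data n = d = 2 and a single rank-one matrix polynomial (not taken from the paper), can be checked numerically:

```python
import numpy as np

# Toy instance of Lemma 2.1: G(x) = [[x1^2, x1*x2], [x1*x2, x2^2]].
# Its Newton polyhedron has vertex set V(G) = {(2,0), (0,2)},
# so rho(x) = x1^2 + x2^2.
def G(x):
    return np.array([[x[0] ** 2, x[0] * x[1]],
                     [x[0] * x[1], x[1] ** 2]])

def rho(x):
    return x[0] ** 2 + x[1] ** 2

rng = np.random.default_rng(3)
for x in rng.uniform(-10, 10, size=(200, 2)):
    lam_max = np.linalg.eigvalsh(G(x))[-1]
    # here G(x) = v v^T with v = (x1, x2), so lambda(G(x)) = rho(x), and c = 1 works
    assert lam_max <= rho(x) + 1e-6
```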

We come now to the main result of this section.

Theorem 2.1

The following two statements are equivalent:

  1. (i)

    The system \(\mathcal {G} := \{G_{1}, \ldots , G_{m}\}\) is nondegenerate;

  2. (ii)

    There exist constants \(c > 0\) and \(R > 0\) such that

    $$c \rho(x) \le \max_{i = 1, \ldots, m} \lambda(G_{i}(x)) \quad \text{ for all} \quad \|x\| \ge R. $$

Proof

(i) \(\Rightarrow \) (ii) By contradiction and using the Curve Selection Lemma at infinity ([19, 20]), we can find an analytic curve \(\varphi(s) = (\varphi_{1}(s), \ldots, \varphi_{n}(s))\), \(s \in (0, \epsilon)\), such that

  1. (a)

    \(\|\varphi (s) \| \rightarrow + \infty \) as \(s \rightarrow 0^{+};\) and

  2. (b)

    \(\rho (\varphi (s)) \gg \max _{i = 1, \ldots , m} \lambda (G_{i}(\varphi (s)))\) as \(s \rightarrow 0^{+}\).

Let \(J := \{ j \, | \, \varphi _{j} \not \equiv 0 \} \subset \{ 1, \ldots , n \}\). By Condition (a), \(J \neq \emptyset \). For \(j \in J\), we can expand the coordinate function \(\varphi _{j}\) in terms of the parameter s, say

$$\varphi_{j} (s) = {x_{j}^{0}} s^{q_{j}} + \text{ higher-order terms in } s,$$

where \({x_{j}^{0}} \neq 0\) and \(q_{j} \in \mathbb {Q}\). From Condition (a), we get \(\min _{j \in J} q_{j} <0\).

Let \(\mathbb {R}^{J} := \{ \alpha = (\alpha _{1}, \ldots , \alpha _{n}) \in \mathbb {R}^{n} \, | \, \alpha _{j} = 0 \text { for}~ j \not \in J \}\). We first suppose that \({\Gamma } (\mathcal {G}) \cap \mathbb {R}^{J} = \emptyset \). Then, for each \(\alpha := (\alpha _{1}, \ldots , \alpha _{n}) \in {\Gamma }(\mathcal {G})\), there exists an index \(j \not \in J\) such that \(\alpha _{j} > 0\), and so \((\varphi _{j}(s))^{\alpha _{j}} \equiv 0\). Hence,

$$\begin{array}{@{}rcl@{}} \rho(\varphi(s)) &=& \sum\limits_{\alpha \in V(\mathcal{G})} |\varphi(s)|^{\alpha} = \sum\limits_{\alpha} \left( \prod\limits_{j \in J} |\varphi_{j}(s)|^{\alpha_{j}} \prod\limits_{j \not \in J} |\varphi_{j}(s)|^{\alpha_{j}} \right) \equiv 0, \\ G_{i}(\varphi(s)) &=& \sum\limits_{\alpha} A_{i, \alpha} \varphi(s)^{\alpha} = \sum\limits_{\alpha} A_{i, \alpha} \left( \prod\limits_{j \in J} \varphi_{j}(s)^{\alpha_{j}}\prod\limits_{j \not \in J} \varphi_{j}(s)^{\alpha_{j}} \right) \equiv 0, \end{array} $$

which contradicts (b). Therefore, \({\Gamma } (\mathcal {G}) \cap \mathbb {R}^{J} \ne \emptyset \). Let \(\ell \) be the minimal value of the linear function \({\sum }_{j \in J} q_{j} \alpha _{j}\) on \({\Gamma }(\mathcal {G}) \cap \mathbb {R}^{J}\), and let \({\Delta }\) be the (unique) maximal face of \( {\Gamma } (\mathcal {G}) \cap \mathbb {R}^{J}\) on which this function attains its minimal value. Then, \({\Delta } \in {\Gamma }_{\infty }(\mathcal {G})\) because \(\min _{j \in J} q_{j} < 0\). Moreover, we can write

$$\rho(\varphi (s)) = \rho_{{\Delta}} (x^{0}) s^{\ell} + \text{ higher-order terms in } s, $$

where \(x^{0} :=({x_{1}^{0}}, \ldots ,{x_{n}^{0}} )\) and \({x_{j}^{0}} := 1\) for \(j \not \in J\). Notice that \(\rho _{{\Delta }} (x^{0}) > 0\). Hence,

$$ \rho(\varphi (s)) \simeq s^{\ell} \quad \text{ as} \quad s \to 0^{+}. $$
(2.1)

By the nondegeneracy of \(\mathcal {G}\), since \({\Delta } \in {\Gamma }_{\infty }(\mathcal {G})\) and \(x^{0} \in (\mathbb {R}\backslash \{0\})^{n}\), there exists an index \(i^{*} \in \{1, \ldots , m\}\) such that

$$ \lambda(G_{i^{*}, {\Delta}}(x^{0})) = \max\limits_{i = 1, \ldots, m} \lambda(G_{i, {\Delta}}(x^{0})) > 0. $$
(2.2)

By definition, there exists a vector \(v \in \mathbb {R}^{d}\) with \(\|v\| = 1\) such that

$$\lambda(G_{i^{*}, {\Delta}}(x^{0})) = \langle G_{i^{*}, {\Delta}}(x^{0}) v, v \rangle. $$

Hence, if we write \(G_{i}(x) = \sum A_{i,\alpha }x^{\alpha }\), then we deduce successively

$$\begin{array}{@{}rcl@{}} \lambda(G_{i^{*}}(\varphi(s))) & \ge & \langle G_{i^{*}}(\varphi(s)) v, v \rangle = \sum\limits_{\alpha} \langle A_{i^{*}, \alpha} v, v \rangle (\varphi(s))^{\alpha} \\ & = & \left( \sum\limits_{\alpha \in {\Delta}} \langle A_{i^{*}, \alpha} v, v \rangle \right) s^{\ell} + \text{ higher-order terms in } s \\ & = & \langle G_{i^{*}, {\Delta}}(x^{0}) v, v \rangle s^{\ell} + \text{ higher-order terms in } s \\ & = & \lambda(G_{i^{*}, {\Delta}}(x^{0})) s^{\ell} + \text{ higher-order terms in } s. \end{array} $$

Combining this with (2.1) and (2.2) and Condition (b), we get a contradiction.

(ii) \(\Rightarrow \) (i) By contradiction, suppose that there exists a face \({\Delta } \in {\Gamma }_{\infty }(\mathcal {G})\) and a point \(x^{0} \in (\mathbb {R} \setminus \{0\})^{n}\) such that

$$\max\limits_{i = 1, \ldots, m} \lambda(G_{i, {\Delta}}(x^{0})) \le 0. $$

Let \(q := (q_{1}, \ldots , q_{n})\) be a nonzero vector in \(\mathbb {R}^{n}, \)with \(\min _{j = 1, \ldots , n} q_{j} < 0, \)such that \({\Delta } = {\Delta }(q, {\Gamma }(\mathcal {G}))\). We define the monomial curve \(\varphi \colon (0, 1) \rightarrow \mathbb {R}^{n}, s \mapsto (\varphi _{1}(s), \ldots , \varphi _{n}(s)), \)by setting

$$\varphi_{j}(s) := \left\{\begin{array}{lll} {x_{j}^{0}} s^{q_{j}} & \text{ if } q_{j} \ne 0, \\ {x_{j}^{0}} & \text{ otherwise.} \end{array}\right. $$

Then, \(\|\varphi (s) \| \rightarrow + \infty \) as \(s \rightarrow 0^{+}\). Moreover, we can write

$$\rho(\varphi(s)) = \rho_{{\Delta}}(x^{0}) s^{\ell} + \text{ higher-order terms in } s, $$

where \(\ell := \ell (q, {\Gamma }(\mathcal {G}))\).

On the other hand, for \(i = 1, \ldots , m\), the functions

$$(0, 1) \rightarrow \mathbb{R}, \quad s \mapsto \lambda(G_{i}(\varphi(s))), $$

are semialgebraic. By the Monotonicity Lemma (see, for example, [1, 2]), these functions are \(C^{1}\), and either constant or strictly monotone, for \(0 < s \ll 1\). Therefore, there exists an index \(i^{*} \in \{1, \ldots , m\}\) such that

$$\lambda(G_{i^{*}}(\varphi(s))) = \max\limits_{i = 1, \ldots, m} \lambda(G_{i}(\varphi(s))) \quad \text{ for all} \quad 0 < s \ll 1. $$

Furthermore, if we write \(G_{i}(x) = {\sum }_{\alpha } A_{i,\alpha }x^{\alpha }\), then we deduce successively

$$\begin{array}{@{}rcl@{}} \lambda(G_{i^{*}}(\varphi(s))) & = & \max\limits_{\|v\| = 1} \langle G_{i^{*}}(\varphi(s)) v, v \rangle = \max\limits_{\|v\| = 1} \left( \sum\limits_{\alpha} \langle A_{i^{*}, \alpha} v, v \rangle (\varphi(s))^{\alpha} \right) \\ & = & \max\limits_{\|v\| = 1} \left[\left( \sum\limits_{\alpha \in {\Delta}} \langle A_{i^{*}, \alpha} v, v \rangle (x^{0})^{\alpha} \right)s^{\ell} + \text{ higher-order terms in } s \right]\\ & \le & \lambda(G_{i^{*}, {\Delta}}(x^{0})) s^ \ell + \text{ higher-order terms in } s. \end{array} $$

Therefore, by the assumption (ii), we get

$$0 < c \rho_{{\Delta}}(x^{0}) \ \le \ \lambda(G_{i^{*}, {\Delta}}(x^{0})) \ \le \ \max\limits_{i = 1, \ldots, m} \lambda(G_{i, {\Delta}}(x^{0})) \ \le \ 0, $$

which is impossible. □

For any matrix \(A\in \mathbb {R}^{d \times d}\), denote by \(\frak {r}(A)\) the spectral radius of A, that is,

$$\frak{r}(A) = \max \{|\lambda| \ | \ \lambda \in \sigma (A)\}, $$

where \(\sigma (A)\) is the set of all eigenvalues of A. The following is an immediate application of Theorem 2.1.

Corollary 2.1

Let \(\mathcal {G} := \{G_{1}, \ldots , G_{m}\} \subset \text {Sym}_{d}({\mathbb {R}[x]})\) be nondegenerate. Then, there exist positive numbers \(c_{1}, c_{2}\) and R such that

$$c_{1} \rho(x) \le \max\limits_{i = 1, \ldots, m} \frak{r}(G_{i}(x)) \ \le \ c_{2} \rho(x) \quad \text{ for all } \quad \|x\| \ge R. $$

Proof

By Lemma 2.1 and Theorem 2.1, there exist positive numbers \(c_{1}, R\) such that

$$c_{1} \rho(x) \le \max\limits_{i = 1, \ldots, m} \lambda(G_{i}(x)) \quad \text{ for all} \quad \|x\| \ge R. $$

Since \(\lambda (G_{i}(x)) \le \frak {r}(G_{i}(x))\) for every \(i = 1,2, \ldots , m\), we get the first required inequality in the statement.

On the other hand, similarly to the proof of Lemma 2.1, we can find a constant \(c_{2} > 0\) such that

$$\max\limits_{i = 1, \ldots, m} \frak{r}(G_{i}(x)) \le c_{2} \rho(x) \quad \text{ for all} \quad x \in \mathbb{R}^{n}. $$

The proof is complete. □
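The inequality \(\lambda (G_{i}(x)) \le \frak {r}(G_{i}(x))\) used above is straightforward to verify numerically; a small NumPy sketch with an arbitrary symmetric matrix:

```python
import numpy as np

def spectral_radius(A):
    # max modulus of the eigenvalues of A
    return float(max(abs(np.linalg.eigvals(A))))

def lam(A):
    # largest eigenvalue of a symmetric matrix
    return np.linalg.eigvalsh(A)[-1]

A = np.array([[1.0, 2.0], [2.0, -3.0]])  # eigenvalues -1 +/- 2*sqrt(2)
# for symmetric A, the largest eigenvalue never exceeds the spectral radius
assert lam(A) <= spectral_radius(A) + 1e-12
assert abs(spectral_radius(A) - (1 + 2 * np.sqrt(2))) < 1e-9
```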

3 Matrix Polynomials Having All Eigenvalues Bounded on a Nondegenerate Semialgebraic Set

For a set \(K \subset \mathbb {R}^{n}\), we will denote by \(\mathcal A(K)\) the set of matrix polynomials \(F \in \text {Sym}_{d}(\mathbb {R}[x])\) such that all eigenvalues of F are bounded on \(K,\) i.e.,

$$\mathcal A(K) = \{F \in \text{Sym}_{d}(\mathbb{R}[x]) \, | \, \exists M > 0, \ M \cdot I_{d} \pm F(x) \succeq 0 \ \forall x \in K \}. $$

A natural question arises: how can one check whether a symmetric matrix polynomial belongs to \(\mathcal {A}(K)\)? The answer is trivial when K is compact, while the non-compact case remains largely open. In this section, we present a solution for semialgebraic sets defined by nondegenerate matrix polynomials.

To start with, notice that \(\mathcal A(K)\) is not closed under multiplication because products of symmetric matrices are not symmetric in general. Furthermore, the following observations are clear.

Property 3.1

Let \(K, L\) be subsets of \(\mathbb {R}^{n}\) . The following statements hold:

  1. (i)

    If \(K \subset L\) , then \(\mathcal {A}(L) \subset \mathcal {A}(K)\) ;

  2. (ii)

    \(\mathcal {A}(K) = \mathcal {A}(\overline {K})\) ;

  3. (iii)

    \(\mathcal {A}(K \cup L) = \mathcal {A}(K) \cap \mathcal {A}(L)\) ;

  4. (iv)

    \(\mathcal A(K) = \text {Sym}_{d}(\mathbb {R}[x])\) provided that K is bounded;

  5. (v)

    Every root in \(\text {Sym}_{d}(\mathbb {R}[x])\) of a monic polynomial with coefficients in \(\mathcal A(K)\) belongs to \(\mathcal A(K)\) .

Here and in the following, \(\overline {K}\) stands for the closure of K.

Proof

For (v), let \(F \in \text {Sym}_{d}(\mathbb {R}[x])\) be a root of a monic polynomial with coefficients in \(\mathcal A(K)\); that is,

$$F^{N} + \sum\limits_{i = 1}^{N} A_{i}F^{N-i} = 0 $$

for some matrices \(A_{i} \in \mathcal A(K)\). Let \(c_{i} := \sup _{x \in K}\|A_{i}(x)\| < +\infty \). For any \(x\in K\), we have

$$\begin{array}{@{}rcl@{}} \|F^{N}(x)\| & = & \left\| \sum\limits_{i = 1}^{N} A_{i}(x)F^{N-i}(x) \right\| \le \sum\limits_{i = 1}^{N} \|A_{i}(x)\| \|F(x)\|^{N-i} \\ & \le & \sum\limits_{i = 1}^{N} c_{i} (\max{\{\|F(x)\| ,1 \}})^{N-i} \le \sum\limits_{i = 1}^{N} c_{i} (\max{\{\|F(x)\| ,1 \}})^{N-1}. \end{array} $$

Hence,

$$\|F(x)\| \le \sum\limits_{i = 1}^{N} c_{i} + 1 < \infty, $$

which implies that \(F \in \mathcal A(K)\). □

The next property states that a polynomial automorphism induces an isomorphism of appropriate spaces.

Property 3.2

If \({\Phi }\) is a polynomial automorphism of \(\mathbb {R}^{n}\), then for any set \(K \subset \mathbb {R}^{n}\), we have

$$\mathcal A(K) = {\Phi}_{*} \mathcal A({\Phi}(K)), $$

where \({\Phi }_{*}\) is the isomorphism \(F \mapsto F \circ {\Phi }\) of \(\text {Sym}_{d}(\mathbb {R}[x])\).

Proof

It suffices to note that a matrix polynomial \(F \in \text {Sym}_{d}(\mathbb {R}[x])\) belongs to \(\mathcal A({\Phi }(K))\) if and only if \(F \circ {\Phi }\) belongs to \(\mathcal A(K)\). □

From now on, we let \(\mathcal {G} := \{G_{1}, \ldots , G_{m}\} \subset \text {Sym}_{d}(\mathbb {R}[x])\) and consider the set

$$K := \{x \in \mathbb{R}^{n} \,|\, {\Lambda}_{i} - G_{i}(x) \succeq 0 \quad \text{ for} \quad i = 1, \ldots, m\}, $$

where \({\Lambda }_{i} \in \text {Sym}_{d}(\mathbb {R})\) for \(i = 1, \ldots , m\). Since the condition \({\Lambda }_{i} - G_{i}(x) \succeq 0\) can be expressed as a system of polynomial inequalities, K is a basic closed semialgebraic set. A crucial role in our considerations is played by the following corollary, which follows from Theorem 2.1.

Corollary 3.1

If the system \(\mathcal {G} := \{G_{1}, \ldots , G_{m}\}\) is nondegenerate, then there exists a constant \(r > 0\) such that

$$K \subset \{x \in \mathbb{R}^{n} \,|\, |x^{\alpha}| \le r \ \text{ for } \ \alpha \in V(\mathcal{G})\}. $$

Proof

By Theorem 2.1, there exist constants \(c > 0\) and \(R > 0\) such that

$$c \rho(x) \le \max_{i = 1, \ldots, m} \lambda(G_{i}(x)) \quad \text{ for all} \quad \|x\| \ge R. $$

Hence, we have for all \(x \in K\) with \(\|x\| \ge R,\)

$$\rho(x) \le \frac{1}{c} \max\limits_{i = 1, \ldots, m} \lambda(G_{i}(x)) \le \frac{1}{c} \max\limits_{i = 1, \ldots, m} \lambda({\Lambda}_{i}). $$

Let

$$r := \max\left\{ \max_{\|x\| \le R} \rho(x), \frac{1}{c} \max\limits_{i = 1, \ldots, m} \lambda({\Lambda}_{i})\right\} > 0. $$

Then, it is clear that

$$K \subset \{x \in \mathbb{R}^{n} \,|\, \rho(x) \le r \} \subset \{x \in \mathbb{R}^{n} \,|\, |x^{\alpha}| \le r \ \text{ for } \ \alpha \in V(\mathcal{G})\}, $$

which completes the proof of the corollary. □

In the rest of the paper, we will make the following assumptions: for all \(i = 1, \ldots , m\), it holds that

  1. (H1)

    \(G_{i}(0) = 0\) (this can be achieved by replacing \(G_{i}\) by \(G_{i} - G_{i}(0)\)); and

  2. (H2)

    the matrices \({\Lambda }_{i} \in \text {Sym}_{d}(\mathbb {R})\) are positive definite.

Let \({\mathcal C}({\mathcal G})\) be the cone with vertex at the origin generated by the Newton polyhedron \({\Gamma }(\mathcal {G})\) of the system \({\mathcal G}\), i.e.,

$${\mathcal C}({\mathcal G}) := \left\{ \sum\limits_{\alpha \in V(\mathcal{G})} t_{\alpha} \alpha \,|\, t_{\alpha} \ge 0 \right\}.$$
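Membership in the cone \({\mathcal C}({\mathcal G})\) is a linear-programming feasibility question. The following sketch (SciPy, with a hypothetical vertex set chosen for illustration) tests whether a given exponent \(\beta\) admits a representation \(\beta = {\sum }_{\alpha } t_{\alpha } \alpha \) with \(t_{\alpha } \ge 0\):

```python
import numpy as np
from scipy.optimize import linprog

def in_cone(beta, generators):
    # feasibility LP: does beta = sum_a t_a * a admit a solution with t_a >= 0?
    A_eq = np.array(generators, dtype=float).T  # columns are the generators
    res = linprog(c=np.zeros(len(generators)), A_eq=A_eq,
                  b_eq=np.array(beta, dtype=float),
                  bounds=[(0, None)] * len(generators))
    return res.status == 0  # status 0: solved (feasible); 2: infeasible

V = [(2, 0), (0, 2)]           # hypothetical vertex set of the Newton polyhedron
assert in_cone((1, 1), V)      # (1,1) = 0.5*(2,0) + 0.5*(0,2)
assert not in_cone((1, -1), V)
```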

The next result says that the space of symmetric matrix polynomials whose eigenvalues are bounded on K can be described in terms of the Newton polyhedron corresponding to the matrix polynomials used to define K.

Theorem 3.1

Assume that the system \({\mathcal G} = \{G_{1}, \ldots , G_{m}\}\) is nondegenerate and let \(F \in \text {Sym}_{d}(\mathbb {R}[x])\). Then, the following statements are equivalent:

  1. (i)

    \(\text {supp}(F) \subset {\mathcal C}({\mathcal G})\) ;

  2. (ii)

    \(F \in \mathcal A(K)\) , i.e., there exists a constant \(M > 0\) such that

    $$M \cdot I_{d} \pm F(x) \succeq 0 \quad \text{ for all} \quad x \in K. $$

Proof

We first remark from Lemma 2.1 and Theorem 2.1 that there exist positive constants \(c_{1}, c_{2}, \) and R such that

$$ c_{1} \rho(x) \le \max_{i = 1, \ldots, m} \lambda(G_{i}(x)) \le c_{2} \rho(x) \quad \text{ for all} \quad \|x\|\ge R. $$
(3.1)

(i) \(\Rightarrow \) (ii) By definition, we have for all \(x \in K\) and all \(i \in \{1, \ldots , m\},\)

$$\lambda(G_{i}(x)) \le \lambda_{i},$$

where \(\lambda _{i} := \lambda ({\Lambda }_{i})\) is the largest eigenvalue of the matrix \({\Lambda }_{i}\). It follows from (3.1) that

$$\sum\limits_{\alpha \in V(\mathcal{G})} |x^{\alpha}| = \rho(x) \le \max\limits_{i = 1, \ldots, m} \lambda_{i} / c_{1} \quad \text{ for all} \quad x \in K, \|x\| \ge R.$$

Since \(\text {supp}(F) \subset {\mathcal C}({\mathcal G})\), for each \(\beta \in \text {supp}(F)\), we can find constants \(t_{\alpha } \ge 0\) such that \(\beta = {\sum }_{\alpha \in V(\mathcal {G})} t_{\alpha } \alpha \). Then,

$$|x^{\beta}| = \left|x^{{\sum}_{\alpha \in V(\mathcal{G})} t_{\alpha} \alpha}\right| = \prod\limits_{\alpha \in V(\mathcal{G})} |x^{\alpha}|^{t_{\alpha}} \le \prod\limits_{\alpha \in V(\mathcal{G})} \left( \max\limits_{i = 1, \ldots, m} \lambda_{i} / c_{1} \right)^{t_{\alpha}} \quad \text{for all} \quad x \in K, \|x\| \ge R. $$

It follows that

$$\sup_{\beta \in \text{supp}(F)} |x^{\beta} | \le c_{3} \quad \text{for all} \quad x \in K, \|x\| \ge R $$

for some \(c_{3} > 0\).

Now let us write \(F(x) = {\sum }_{\beta } F_{\beta } x^{\beta }\) for some symmetric matrices \(F_{\beta } \in \text {Sym}_{d}(\mathbb {R})\). Take any vector \(v \in \mathbb {R}^{d}\) with \(\|v\| = 1\). We have for all \(x \in K\) with \(\|x\| \ge R,\)

$$|\langle F(x) v, v\rangle | = \left| \sum\limits_{\beta} \langle F_{\beta} v, v\rangle x^{\beta} \right| \le \sum\limits_{\beta} | \langle F_{\beta} v, v\rangle| | x^{\beta} | \le \left( \sum\limits_{\beta} | \langle F_{\beta} v, v\rangle| \right) c_{3}. $$

Let

$$M := \max\limits_{\|v\| = 1} \left( \sum\limits_{\beta} | \langle F_{\beta} v, v\rangle| \right) c_{3} .$$

Then, we have

$$|\langle F(x) v, v\rangle | \le M \quad \text{ for all} \quad x \in K, \|x\| \ge R \ \text{ and} \ v \in \mathbb{R}^{n}, \|v\| = 1.$$

This, together with the compactness of the ball \(\{x \in \mathbb {R}^{n} \,|\, \|x\| \le R\}\), proves (ii).

(ii) \(\Rightarrow \) (i) Suppose on the contrary that there exists \(\beta \in \text {supp}(F) \setminus \mathcal {C} (\mathcal {G})\). By the separation theorem, there exists a nonzero vector \(q := (q_{1}, \ldots , q_{n}) \in \mathbb {R}^{n}\) such that

$$ \langle q, \alpha \rangle \ge 0 \ > \ \langle q, \beta \rangle \quad \text{ for all} \quad \alpha \in \mathcal {C} (\mathcal {G}). $$
(3.2)

For simplicity, we put

$$\begin{array}{@{}rcl@{}} {\Delta} := {\Delta}(q, {\Gamma}(F)), && \ell := \ell(q, {\Gamma}(F)), \\ {\Delta}^{\prime} := {\Delta}(q, {\Gamma}(\mathcal{G})), && \ell^{\prime} := \ell(q, {\Gamma}(\mathcal{G})). \end{array} $$

Then, by (3.2), we have

$$\ell < 0 \le \ell^{\prime} \quad \text{ and} \quad \min_{j = 1, \ldots, n} q_{j} < 0.$$

In particular, by definition, \({\Delta }^{\prime } \in {\Gamma }_{\infty }(\mathcal {G})\).

On the other hand, the assumption that the matrices \({\Lambda }_{i}, i = 1, \ldots , m\), are positive definite gives us a real number \(\lambda _{*} > 0\) such that

$${\Lambda}_{i} - \lambda_{*} \cdot I_{d} \succ 0 \quad \text{ for all} \quad i = 1, \ldots, m.$$

Furthermore, since \(G_{i}(0) = 0, i = 1, \ldots , m\), we have that \(0 \not \in {\Gamma }(\mathcal {G})\) and so \(\rho _{{\Delta }^{\prime }}(0) = 0\). Therefore, we can find a point \(x^{0} \in (\mathbb {R} \setminus \{0\})^{n}\) satisfying the following conditions:

$$ \rho_{{\Delta}^{\prime}}(x^{0}) \ll \frac{1}{c_{2}}\lambda_{*} \quad \text{ and} \quad F_{{\Delta}}(x^{0}) \ne 0. $$
(3.3)

Consider the monomial curve

$$\phi \colon (0, 1) \rightarrow \mathbb{R}^{n}, \quad s \mapsto \left( {x_{1}^{0}} s^{q_{1}}, \ldots, {x_{n}^{0}} s^{q_{n}}\right).$$

We have \(\|\phi (s) \| \rightarrow + \infty \) as \(s \rightarrow 0^{+}\) because \(\min _{j = 1, \ldots , n} q_{j} < 0\). Furthermore, a simple calculation shows that

$$\rho(\phi(s)) = \rho_{{\Delta}^{\prime}}(x^{0}) s^{\ell^{\prime}} + \text{ higher-order terms in }s.$$

Since \(\ell ^{\prime } \ge 0\), it follows from the first inequality of (3.3) that

$$\rho(\phi(s)) < \frac{1}{c_{2}}\lambda_{*} \quad \text{ for all} \quad 0 < s \ll 1.$$

Thanks to (3.1),

$$\max\limits_{i = 1, \ldots, m} \lambda(G_{i}(\phi(s))) < \lambda_{*} \quad \text{ for all} \quad 0 < s \ll 1.$$

Therefore, \(\phi (s) \in K\) for \(0 < s \ll 1\).

On the other hand, the second condition of (3.3) gives us a vector \(v \in \mathbb {R}^{d}, \|v\| = 1\), such that \(\langle F_{{\Delta }}(x^{0}) v, v \rangle \ne 0\). Then, a simple calculation shows that

$$\langle F(\phi(s)) v, v \rangle = \langle F_{{\Delta}}(x^{0}) v, v \rangle s^{\ell} + \text{ higher-order terms in } s. $$

It follows from the facts \(\ell < 0\) and \(\langle F_{{\Delta }}(x^{0}) v, v \rangle \ne 0\) that

$$\lim\limits_{s \to 0^{+}} |\langle F(\phi(s)) v, v \rangle| = +\infty,$$

which contradicts the assumption that \(F \in \mathcal {A}(K)\). □

Corollary 3.2

Assume that the system \({\mathcal G} = \{G_{1}, \ldots , G_{m}\}\) is nondegenerate. Then, the system \(\mathcal {G}\) is convenient if and only if K is a compact set.

Proof

Indeed, assume that the system \(\mathcal {G}\) is convenient, i.e., the Newton polyhedron \({\Gamma }(\mathcal {G})\) intersects each coordinate axis at one point different from the origin 0 in \(\mathbb {R}^{n}\). By definition, this is equivalent to the fact that \(\mathcal {C}(\mathcal {G}) = \mathbb {R}^{n}_{\ge 0}\). Let \(F(x) := ({\sum }_{j = 1}^{n} {x_{j}^{2}}) \cdot I_{d} \in \text {Sym}_{d}(\mathbb {R}[x])\). We have \(\text {supp}(F) \subset {\mathcal C}({\mathcal G})\). By Theorem 3.1, \(F \in \mathcal A(K)\). Consequently, K is a compact set.

Conversely, assume that the set K is compact but the system \(\mathcal {G}\) is not convenient. By definition, there exists a vector \(\beta \in \mathbb {R}^{n}_{\ge 0} \setminus \mathcal {C}(\mathcal {G})\). Then, by an argument similar to the proof of the implication (ii) \(\Rightarrow \) (i) in Theorem 3.1, we can construct a curve \(\phi \colon (0, 1) \rightarrow \mathbb {R}^{n}\) such that \(\|\phi (s) \| \rightarrow + \infty \) as \(s \rightarrow 0^{+}\) and \(\phi (s) \in K\) for all s sufficiently small, contradicting the compactness of K. □

As a direct consequence of Theorem 3.1, we get the following stability of \(\mathcal A(K)\) (see also [8, Lemma 3.1] and [11, Theorem 3.1]).

Corollary 3.3

Assume that the system \({\mathcal G} = \{G_{1}, \ldots , G_{m}\}\) is nondegenerate. Then, the space of symmetric matrix polynomials whose eigenvalues are bounded on the semialgebraic set

$$\{x \in \mathbb{R}^{n} \,|\, {\Lambda}_{i} - G_{i}(x) \succeq 0 \quad \text{ for} \quad i = 1, \ldots, m\}$$

is independent of the positive definite matrices \({\Lambda }_{i}\) for \(i = 1, \ldots , m\).

We say that the set \(\mathcal A(K) \subset \text {Sym}_{d}(\mathbb {R}[x])\) is finitely generated if one can find a finite set \(\{\zeta _{1}, \ldots , \zeta _{k}\} \subset \mathcal A(K)\) such that for any matrix polynomial \(F \in \mathcal A(K)\), there exists a matrix polynomial \(G \in \text {Sym}_{d}(\mathbb {R}[x])\) such that \(F = G(\zeta _{1}, \ldots , \zeta _{k})\).

Now, we can state a theorem on the structure of sets of matrix polynomials whose eigenvalues are bounded on nondegenerate semialgebraic sets in \(\mathbb {R}^{n}\). For related results, see [11, Theorem 2.5], [16, Theorem 2.1], and [23].

Theorem 3.2

Assume that the system \({\mathcal G} = \{G_{1}, \ldots , G_{m}\}\) is nondegenerate. Then, the space of matrix polynomials which are bounded on the semialgebraic set

$$\{x \in \mathbb{R}^{n} \,|\, {\Lambda}_{i} - G_{i}(x) \succeq 0 \quad \text{ for} \quad i = 1, \ldots, m\}$$

is generated by the matrix monomials \(x^{\alpha } \cdot I_{d}\) for \(\alpha \in \text {co}(\{0\} \cup \text {supp}(\mathcal {G}))\), and hence is finitely generated.

Proof

By Theorem 3.1, it suffices to show that there exists a finite set \(L \subset {\mathcal C}(\mathcal {G}) \cap \mathbb {Z}^{n}\) satisfying the condition: for any \(\beta \in {\mathcal C}(\mathcal {G}) \cap \mathbb {Z}^{n}\), there exist some constants \(t_{\alpha } \in \mathbb {Z}_{\ge 0}\) for \(\alpha \in L\) such that \(\beta = {\sum }_{\alpha \in L} t_{\alpha } \alpha \).

In fact, let \(k := \dim {\mathcal C}(\mathcal {G}) \le n\). Then, there exist cones \({\mathcal C}_{1}, \ldots , {\mathcal C}_{p}\) and vectors \(\omega _{ij} \in \mathbb {Z}^{n}_{\ge 0}\) for \(i = 1, \ldots , p\), \(j = 1, \ldots , k\), such that:

  • \({\mathcal C}(\mathcal {G}) = \cup _{i = 1}^{p} {\mathcal C}_{i}\), \(\dim {\mathcal C}_{i} = k\) for \(i = 1, \ldots , p;\)

  • for each index \(i\), the vectors \(\omega _{i1}, \ldots , \omega _{ik}\) are linearly independent and the greatest common divisor of the coordinates of these vectors is equal to \(1\); and

  • each cone \({\mathcal C}_{i}\) is generated by the vectors \(\omega _{i1}, \ldots , \omega _{ik}\).

Next, we define the k-dimensional parallelepiped \(P_{i}\) to be the set of all α in \(\mathbb {R}^{n}\) such that

$$\alpha = c_{1} \omega_{i1} + {\cdots} + c_{k} \omega_{ik} $$

for some scalars \(c_{j}\) with \(0 \le c_{j} \le 1\). Then, it is easy to see, by a linear isomorphism, that \({\mathcal C}_{i}\) and \(P_{i}\), respectively, are identified with the cone \(\mathbb {R}^{k}_{\ge 0}\) and the cube \([0, 1]^{k}\). Notice that

$$\mathbb{R}^{k}_{\ge 0} = \{\alpha + (t_{1}, \ldots, t_{k}) \,|\, \alpha \in [0, 1]^{k} \text{ and } t_{j} \in \mathbb{Z}_{\ge 0} \}. $$

Consequently, we have

$$\mathcal C_{i} = \{\alpha + t_{1} \omega_{i1} + {\cdots} + t_{k} \omega_{ik} \,|\, \alpha \in P_{i} \text{ and } t_{j} \in \mathbb{Z}_{\ge 0} \}.$$

Hence, if we put \(L_{i} := P_{i} \cap \mathbb {Z}^{n}\), then

$$\mathcal C_{i} \cap \mathbb{Z}^{n} = \{\alpha + t_{1} \omega_{i1} + {\cdots} + t_{k} \omega_{ik} \,|\, \alpha \in L_{i} \text{ and } t_{j} \in \mathbb{Z}_{\ge 0} \}.$$

Clearly, the set \(L := \cup _{i = 1}^{p} L_{i}\) has the desired properties. □
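The decomposition at the heart of this proof, writing an integer point of a simplicial cone as a lattice point of the fundamental parallelepiped plus a nonnegative integer combination of the generators, is effective. A minimal sketch (the function `decompose` and its sample data are ours, for illustration only):

```python
import numpy as np

def decompose(beta, omegas):
    """Write an integer point beta of the simplicial cone spanned by the
    linearly independent integer vectors `omegas` as
        beta = alpha + sum_j t_j * omega_j
    with t_j in Z_{>=0} and alpha a lattice point of the fundamental
    parallelepiped P = {sum_j c_j * omega_j : 0 <= c_j <= 1}."""
    M = np.array(omegas, dtype=float).T              # columns are the omega_j
    c = np.linalg.solve(M, np.array(beta, dtype=float))
    t = np.floor(c).astype(int)                      # integral parts, >= 0 on the cone
    alpha = np.array(beta) - np.array(omegas).T @ t  # remainder lies in P
    return alpha, t

# the point (4, 3) in the cone spanned by (1, 0) and (1, 2):
# alpha = [1, 1], t = [2, 1], and (1,1) + 2*(1,0) + 1*(1,2) = (4,3).
alpha, t = decompose((4, 3), [(1, 0), (1, 2)])
```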

Example 3.1

Let \(n = 2\) and \(\mathcal {G}= \{ \pm x \cdot I_{d}, \pm xy \cdot I_{d} \} \subset \text {Sym}_{d}(\mathbb {R}[x, y])\) and consider the corresponding semialgebraic set

$$K = \{(x, y) \in \mathbb{R}^{2} \,|\, I_{d} \pm x \cdot I_{d} \succeq 0 \quad \text { and} \quad I_{d} \pm xy \cdot I_{d} \succeq 0 \}. $$

By Theorem 3.2, it is easy to see that \(\mathcal {A}(K)\) is generated by the matrix monomials \(x \cdot I_{d}\) and \(xy \cdot I_{d}\), i.e., \(\mathcal {A}(K) = \text {Sym}_{d}(\mathbb {R}[x,xy])\). This example is inspired by [23, Example 3.10].

Example 3.2

Let \(d = 1\) and \(n = 2\). Take the Motzkin polynomial \({\frak m}(x, y) := 1 + x^{2}y^{2}(x^{2} + y^{2} - 3) \in \mathbb {R}[x, y]\) and consider the semialgebraic set

$$K_{c} := \{(x, y) \in \mathbb{R}^{2} \,|\, c - \frak m(x, y) \ge 0 \}, $$

where c is a real parameter. It is easy to check that the system \(\mathcal {G} := \{ \frak m - 1\}\) is nondegenerate and \(\frak m(0, 0) - 1 = 0\). If \(c < 1\), then \(K_{c}\) is a compact set, and hence \(\mathcal {A}(K_{c}) = \mathbb {R}[x, y]\). If \(c = 1\), then the algebra \(\mathcal {A}(K_{c})\) does not admit a finite set of generators (see [11, Example 3.5] and [16, Example 3.2]). On the other hand, if \(c > 1\), then we deduce from Theorem 3.2 that \(\mathcal {A}(K_{c}) = \mathbb {R}[xy, x^{2}y, xy^{2}]\) is finitely generated.

Example 3.3

Let \(d = n = 2\) and consider \(K_{c} = \{ (x,y) \,|\, {\Lambda }_{c}- G(x,y) \succeq 0 \}\), where c is a real parameter,

$$G(x,y) = \left( \begin{array}{ll} x^{2}y^{2}& 0 \\ 0 & x^{2} \end{array}\right) \quad \text{ and} \quad {\Lambda}_{c} :=\left( \begin{array}{ll} 1& 0 \\ 0 & c \end{array}\right). $$

By definition, the cone \(\mathcal {C}(\mathcal {G})\) (with \(\mathcal {G} :=\{G\}\)) is the convex cone generated by the vectors \((1, 0)\) and \((1, 1)\). Hence,

$$\{ F \in \text{Sym}_{2}(\mathbb{R}[x,y]) \,|\, \text{supp}(F) \subset \mathcal{C}(\mathcal{G}) \} = \text{Sym}_{2}(\mathbb{R}[x, xy]). $$

The matrix \({\Lambda }_{c}\) has eigenvalues 1 and c. If \(c < 0\), then \(K_{c} = \emptyset \) and so \(\mathcal {A}(K_{c}) = \text {Sym}_{2}(\mathbb {R} [x, y])\). If \(c > 0\), we can use Theorem 4.1 to get \(\mathcal {A}(K_{c}) = \text {Sym}_{2}(\mathbb {R}[x, xy])\). If \(c = 0\), the set \(K_{c}\) is the y-axis; hence, \(\mathcal {A}(K_{c}) =\text {Sym}_{2}(\mathbb {R} [x, y] x + \mathbb {R})\).
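Since \({\Lambda }_{c} - G(x,y)\) is diagonal here, membership in \(K_{c}\) reduces to the scalar conditions \(x^{2}y^{2} \le 1\) and \(x^{2} \le c\), which is why, for \(c > 0\), exactly the monomials in \(x\) and \(xy\) stay bounded. A small numerical check (the helper `in_Kc` is ours, for illustration):

```python
import numpy as np

def in_Kc(x, y, c):
    """Membership in K_c = {(x, y) : Lambda_c - G(x, y) >= 0} of Example 3.3,
    tested via the smallest eigenvalue of the (diagonal) matrix."""
    M = np.diag([1.0 - (x * y) ** 2, c - x ** 2])
    return float(np.linalg.eigvalsh(M).min()) >= 0.0

# for c = 1: (0.5, 1) satisfies both scalar conditions; (2, 1) violates
# x^2 <= c; the whole y-axis (x = 0) lies in K_c, so K_c is unbounded.
p_in = in_Kc(0.5, 1.0, 1.0)
p_out = in_Kc(2.0, 1.0, 1.0)
axis_in = in_Kc(0.0, 100.0, 1.0)
```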

Finally, we can see that \(\mathcal A(K)\) is “absorbing” in the following sense.

Corollary 3.4

Suppose that \({\mathcal G} = \{G_{1}, \ldots , G_{m}\}\) is nondegenerate. Denote by \(\mathcal {V}\) the linear subspace of \(\mathbb {R}^{n}\) spanned by the cone \({\mathcal C}(\mathcal {G})\). Then, for any matrix polynomial \(F \in \text {Sym}_{d}(\mathbb {R}[x])\) with \(\text {supp}(F) \subset \mathcal {V}\), there exists a vector \(\beta \in {\mathcal C}(\mathcal {G}) \cap \mathbb {Z}_{\ge 0}^{n}\) such that

$$x^{\beta} \cdot F \in \mathcal A(K).$$

Proof

Let \(F \in \text {Sym}_{d}(\mathbb {R}[x])\) be a matrix polynomial with \(\text {supp}(F) \subset \mathcal {V}\). Assume that we have proved the claim that: for each \(\alpha \in \mathcal {V} \cap \mathbb {Z}_{\ge 0}^{n}\), there exists \(\beta (\alpha ) \in {\mathcal C}(\mathcal {G}) \cap \mathbb {Z}_{\ge 0}^{n}\) such that

$$\alpha + \beta(\alpha) \in {\mathcal C}(\mathcal{G}). $$

This, of course, implies that if we let \(\beta := {\sum }_{\alpha \in \text {supp}(F)} \beta (\alpha ) \in {\mathcal C}(\mathcal {G}) \cap \mathbb {Z}_{\ge 0}^{n}\), then \(\alpha + \beta \in {\mathcal C}(\mathcal {G})\) for all \(\alpha \in \text {supp}(F)\). By Theorem 3.1, \(x^{\beta } \cdot F \in \mathcal A(K)\).

So we are left with proving the claim. To this end, let \(\alpha \) be an arbitrary vector in \(\mathcal {V} \cap \mathbb {Z}_{\ge 0}^{n}\). Then, there exist numbers \(\mu _{i} \in \mathbb {R}\) and vectors \(\alpha ^{i} \in {\mathcal C}(\mathcal {G}) \cap \mathbb {Z}_{\ge 0}^{n}\) for \(i = 1, \ldots , k\), such that

$$\alpha = \mu_{1} \alpha^{1} + {\cdots} + \mu_{k} \alpha^{k}. $$

Take integers \(\nu _{i} > 0\) for \(i = 1, \ldots , k\), and let

$$\gamma := \nu_{1}\alpha^{1} + {\cdots} + \nu_{k}\alpha^{k} \in {\mathcal C}(\mathcal{G}) \cap \mathbb{Z}_{\ge 0}^{n}. $$

Choose \(t \in \mathbb {Z}_{\ge 0}\) large enough such that \(\mu _{i} + t \nu _{i} \ge 0\) for all \(i = 1, \ldots , k\), and let \(\beta (\alpha ) := t \gamma \). Then, the vectors \(\beta (\alpha )\) and \(\alpha + \beta (\alpha )\) belong to \({\mathcal C}(\mathcal {G}) \cap \mathbb {Z}_{\ge 0}^{n}\). □
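The choice of t at the end of this proof is explicit: the smallest admissible value is \(t = \max _{i} \lceil -\mu _{i}/\nu _{i} \rceil \) (or 0 if all \(\mu _{i} \ge 0\)). A one-line sketch (the helper name is ours):

```python
import math

def absorbing_shift(mu, nu):
    """Smallest t in Z_{>=0} with mu_i + t * nu_i >= 0 for all i, given real
    coefficients mu_i and positive integers nu_i (the choice made at the end
    of the proof of Corollary 3.4)."""
    return max([0] + [math.ceil(-m_i / n_i) for m_i, n_i in zip(mu, nu)])

# mu = (-2.5, 1, -7), nu = (1, 1, 2): the binding constraint is
# -7 + 2t >= 0, i.e. t >= 3.5, so t = 4.
t = absorbing_shift([-2.5, 1.0, -7.0], [1, 1, 2])
```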

4 Positivstellensätze for Matrix Polynomials on Nondegenerate Semialgebraic Sets

In this section, we establish a matrix version of Schmüdgen’s Positivstellensätz for matrix polynomials whose eigenvalues are “strictly” positive and bounded on K. To do this, recall that

$$K := \{x \in \mathbb{R}^{n} \,|\, {\Lambda}_{i} - G_{i}(x) \succeq 0 \quad \text{ for} \quad i = 1, \ldots, m\}. $$

Definition 4.1

We say that the cone \(\mathcal {C}(\mathcal {G})\) is unimodular if there exist n vectors \(\alpha ^{1}, \ldots , \alpha ^{n} \in \mathcal {C}(\mathcal {G}) \cap \mathbb {Z}_{\ge 0}^{n}\) such that the following two conditions hold:

  1. (a)

    \(\det A = 1\), where \(A := [\alpha ^{1}, \ldots , \alpha ^{n}]\); and

  2. (b)

    \(\mathcal {C}(\mathcal {G})\) is generated by \(\alpha ^{1}, \ldots , \alpha ^{n}\), i.e.,

    $$\mathcal{C}(\mathcal{G}) = \left\{\sum\limits_{j = 1}^{n} t_{j} \alpha^{j} \,|\, t_{j} \ge 0 \right\}. $$

Suppose that the cone \(\mathcal {C}(\mathcal {G})\) is unimodular. Then, it is straightforward to show that for any \(\beta \in \mathcal {C}(\mathcal {G}) \cap \mathbb {Z}_{\ge 0}^{n}\), there are nonnegative integers \(t_{1}, \ldots , t_{n}\) such that \(\beta = {\sum }_{j = 1}^{n} t_{j} \alpha ^{j}\). As a consequence,

$$\{ F \in \text{Sym}_{d}(\mathbb{R}[x]) \,|\, \text{supp } (F) \subset \mathcal{C}(\mathcal{G}) \} = \text{Sym}_{d}(\mathbb{R}[x^{\alpha^{1}}, \ldots, x^{\alpha^{n}}]). $$

Let us make a change of variables \(u := x^{A}\), that is \(u_{j} = x^{\alpha ^{j}}, j = 1, \ldots , n\), and \(u = (u_{1}, \ldots , u_{n})\). Then for any \(\beta \in \mathcal {C}(\mathcal {G}) \cap \mathbb {Z}_{\ge 0}^{n}\), there is a representation

$$\beta = \sum\limits_{j = 1}^{n} t_{j} \alpha^{j}, $$

for some nonnegative integers \(t_{j}\), \(j = 1, \ldots , n\); and so

$$x^{\beta} = \prod\limits_{j = 1}^{n} x^{t_{j} \alpha^{j}} = \prod\limits_{j = 1}^{n} (x^{\alpha^{j}})^{t_{j}} = \prod\limits_{j = 1}^{n} (u_{j})^{t_{j}} .$$

Consequently, for each \(i = 1, \ldots , m\), we can define a matrix polynomial \(\widetilde {G}_{i} \in \text {Sym}_{d}(\mathbb {R}[u])\) with the property that \(\widetilde {G}_{i}(x^{A}) = G_{i}(x)\) for all \(x \in \mathbb {R}^{n}\). Let

$$\widetilde{K} = \{u \in \mathbb{R}^{n} \,|\, {\Lambda}_{i} - \widetilde{G}_{i}(u) \succeq 0, \ i = 1, \ldots, m \}. $$

Lemma 4.1

Define the monomial mapping \({\Phi } \colon \mathbb {R}^{n} \rightarrow \mathbb {R}^{n}\) by \({\Phi }(x) = x^{A}\) . We have

  1. (i)

    \(\overline { {\Phi }(K)} \subset \widetilde {K};\)

  2. (ii)

    The restriction \({\Phi } \colon K \cap (\mathbb {R} \setminus \{0\})^{n} \longrightarrow \widetilde {K} \cap (\mathbb {R} \setminus \{0\})^{n}\) is one-to-one and onto. In particular,

    $${\Phi}(K \cap (\mathbb{R} \setminus \{0\})^{n}) = {\Phi}(K) \cap (\mathbb{R} \setminus \{0\})^{n} = \widetilde{K} \cap (\mathbb{R} \setminus \{0\})^{n}. $$

Proof

Suppose that \(x \in K\). Then \( \widetilde {G}_{i}({\Phi }(x)) = G_{i}(x)\), so \({\Lambda }_{i} - \widetilde {G}_{i}({\Phi }(x)) = {\Lambda }_{i} - G_{i}(x) \succeq 0\); hence \({\Phi }(K) \subset \widetilde {K}\), and (i) follows since \(\widetilde {K}\) is closed.

If \(u \in (\mathbb {R} \setminus \{0\})^{n}\), then \(x = u^{A^{-1}}\) is again an element in \((\mathbb {R} \setminus \{0\})^{n}\) and \({\Phi }(x) = u\). Then (ii) follows easily. □
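The mechanics of the monomial map \({\Phi }(x) = x^{A}\) and its inverse \(u \mapsto u^{A^{-1}}\) on \((\mathbb {R} \setminus \{0\})^{n}\) are easy to mimic numerically. A sketch (the helper `monomial_map` is ours; the matrix A below encodes the generators \(\alpha ^{1} = (1,0)\), \(\alpha ^{2} = (1,1)\) used later in Example 4.1, so that \(u = x\), \(v = xy\)):

```python
import numpy as np

def monomial_map(x, A):
    """Phi(x) = x^A, with the columns of A the exponent vectors alpha^j,
    i.e. u_j = prod_k x_k ** A[k, j]."""
    x = np.asarray(x, dtype=float)
    return np.array([np.prod(x ** A[:, j]) for j in range(A.shape[1])])

A = np.array([[1, 1],
              [0, 1]])                           # det A = 1 (unimodular)
A_inv = np.round(np.linalg.inv(A)).astype(int)   # again integral: [[1,-1],[0,1]]

x0 = np.array([2.0, 3.0])
u0 = monomial_map(x0, A)          # (x, x*y) = (2, 6)
x_back = monomial_map(u0, A_inv)  # recovers (2, 3), since y = v / u
```

Unimodularity is exactly what makes \(A^{-1}\) integral, so the inverse is again a monomial map away from the coordinate hyperplanes.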

Recall that the preordering generated by the matrix polynomials \({\Lambda }_{1} - G_{1}, \ldots , {\Lambda }_{m} - G_{m}\), denoted by \(\mathcal {T}_{\{{\Lambda }_{1} - G_{1}, \ldots , {\Lambda }_{m} - G_{m}\}}\), is defined to be the smallest quadratic module in \(\text {Sym}_{d}(\mathbb {R}[x])\) which contains \({\Lambda }_{1} - G_{1}, \ldots , {\Lambda }_{m} - G_{m}\) and whose intersection with the set \({\mathbb {R}[x]} \cdot I_{d}\) is closed under multiplication. By definition, every matrix polynomial in \(\mathcal {T}_{\{{\Lambda }_{1} - G_{1}, \ldots , {\Lambda }_{m} - G_{m}\}}\) is positive semidefinite on K. The converse does not hold in general. On the other hand, we have the following statement.

Theorem 4.1

Assume that the system \({\mathcal G} := \{G_{1}, \ldots , G_{m}\}\) is nondegenerate, the cone \({\mathcal C}(\mathcal {G})\) is unimodular and that \(\overline { {\Phi }(K)} = \widetilde {K}\). Let \(F \in \mathcal {A}(K)\) be such that

$$\inf_{x\in K}\lambda_{\min}(F(x)) > 0, $$

where \(\lambda _{\min }(F(x))\) is the smallest eigenvalue of \(F(x)\). Then,

$$F \in \mathcal{T}_{\{{\Lambda}_{1}-G_{1}, \ldots, {\Lambda}_{m} - G_{m}\}}. $$

Proof

Since \(\mathcal {G}\) is nondegenerate, by Corollary 3.1, there exists a constant \(r > 0\) such that

$$K \subset \{x \in \mathbb{R}^{n} \,|\, |x^{\alpha}| \le r \ \text{ for} \ \alpha \in V(\mathcal{G})\}. $$

This, together with the assumptions that \(\mathcal {C}(\mathcal {G})\) is unimodular and \(\overline { {\Phi }(K)} = \widetilde {K}\), implies easily that

$$\widetilde{K} := \{u \in \mathbb{R}^{n} \,|\, {\Lambda}_{i} - \widetilde{G}_{i}(u) \succeq 0, \ i = 1, \ldots, m \} \subset \{u \in \mathbb{R}^{n} \ | \ |u_{j}| \le \widetilde{r}, \ j = 1, \ldots, n \} $$

for some \(\widetilde {r} > 0\). Consequently, the set \(\widetilde {K}\) is compact.

On the other hand, by Theorem 3.1, we have \(\text {supp}(F) \subset {\mathcal C}({\mathcal G})\) since \(F \in \mathcal {A}(K)\). Since \(\mathcal {C}(\mathcal {G})\) is unimodular, we can define a matrix polynomial \(\widetilde {F} \in \text {Sym}_{d}(\mathbb {R}[u])\) with the property that \(\widetilde {F} \circ {\Phi }(x) = F(x)\) for all \(x \in \mathbb {R}^{n}\). We will show that \(\widetilde {F} \succ 0\) on \(\widetilde {K}\). In fact, \(\widetilde {F} \succeq 0\) on \(\widetilde {K}\) because \(F \succ 0\) on K and \(\overline { {\Phi }(K)} = \widetilde {K}\). Assume that there exists a point \(\widetilde {a} \in \widetilde {K}\) such that the smallest eigenvalue of \(\widetilde {F}(\widetilde {a})\) is equal to zero. There are two cases to be considered.

Case 1

There exists a point \(a \in K\) such that \(\widetilde {a} = a^{A}\). By definition,

$$\lambda_{\min}(F(a)) = \lambda_{\min}(\widetilde{F}(\widetilde{a})) = 0,$$

which contradicts the assumption.

Case 2

There is no point \(a \in K\) such that \(\widetilde {a} = a^{A}\). Then, by the hypothesis and Lemma 4.1, there exists a sequence \(\{a^{k}\}_{k \ge 1} \subset K \cap (\mathbb {R} \setminus \{0\})^{n}\) such that \(\widetilde {a}^{k} := {\Phi }(a^{k}) = (a^{k})^{A} \in \widetilde {K}\) for all \(k \ge 1\) and \(\lim _{k \to \infty } \widetilde {a}^{k} = \widetilde {a}\). Hence,

$$\lim\limits_{k \to \infty} \lambda_{\min}(F(a^{k})) = \lim\limits_{k \to \infty} \lambda_{\min}(\widetilde{F}((a^{k})^{A})) = \lim\limits_{k \to \infty} \lambda_{\min}(\widetilde{F}(\widetilde{a}^{k})) = \lambda_{\min}(\widetilde{F}(\widetilde{a})) = 0,$$

which contradicts the assumption.

Therefore, \(\widetilde {F} \succ 0\) on \(\widetilde {K}\). By [5, Theorem 6], we get

$$\widetilde{F} \in \mathcal{T}_{\{{\Lambda}_{1} - \widetilde{G}_{1}, \ldots, {\Lambda}_{m} - \widetilde{G}_{m}\}}. $$

Notice from [4, Lemma 2] that

$$\mathcal{T}_{\{ {\Lambda}_{1} -\widetilde{ G}_{1}, \ldots, {\Lambda}_{m} - \widetilde{G}_{m} \}} = \mathcal{M}_{\{ {\Lambda}_{1} -\widetilde{ G}_{1}, \ldots, {\Lambda}_{m} - \widetilde{G}_{m} \} \cup (\prod (\{ {\Lambda}_{1} -\widetilde{ G}_{1}, \ldots, {\Lambda}_{m} - \widetilde{G}_{m} \})^{\prime}\cdot I_{d})}, $$

where \(\prod (\{ {\Lambda }_{1} -\widetilde { G}_{1}, \ldots , {\Lambda }_{m} - \widetilde {G}_{m} \})^{\prime }\) is the set of all finite products of elements from

$$(\{ {\Lambda}_{1} -\widetilde{ G}_{1}, \ldots, {\Lambda}_{m} - \widetilde{G}_{m} \})^{\prime} = \{ \widetilde{p}^{T} ({\Lambda}_{i} - \widetilde{G_{i}}) \widetilde{p}\,|\, i = 1, \ldots, m, \text { and}~ \widetilde{p} \in (\mathbb{R}[u])^{d} \}. $$

By definition, \(F \in \mathcal {T}_{\{ {\Lambda }_{1} - G_{1}, \ldots , {\Lambda }_{m} - G_{m} \}}\). □

Define the function \(\theta \colon \mathbb {R}^{n} \rightarrow \mathbb {R}\) by

$$\theta(u) := \min_{i = 1, \ldots, m} \lambda_{\min}({\Lambda}_{i} - \widetilde{G}_{i}(u)),$$

where \(\lambda _{\min }({\Lambda }_{i} - \widetilde {G}_{i}(u))\) is the smallest eigenvalue of the matrix \({\Lambda }_{i} - \widetilde {G}_{i}(u)\). Then, it is easy to see that the function \(\theta \) is continuous and satisfies

$$\widetilde{K} = \{ u \in \mathbb{R}^{n} \,|\, \theta(u) \ge 0 \}.$$
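The function \(\theta \) is directly computable for any concrete system. A minimal sketch (the toy constraint below is ours, chosen only to illustrate the definition, not taken from the paper):

```python
import numpy as np

def theta(u, Lambdas, G_tilde):
    """theta(u) = min_i lambda_min(Lambda_i - G_tilde_i(u)); the set K~ is
    exactly {u : theta(u) >= 0}.  `G_tilde` is a list of callables returning
    symmetric matrices, `Lambdas` the matching constant matrices."""
    return min(float(np.linalg.eigvalsh(L - G(u)).min())
               for L, G in zip(Lambdas, G_tilde))

# toy system: the single constraint I_2 - diag(u1^2, u2^2) >= 0,
# so K~ is the square |u_i| <= 1.
Lambdas = [np.eye(2)]
G_tilde = [lambda u: np.diag([u[0] ** 2, u[1] ** 2])]

inside = theta((0.5, 0.5), Lambdas, G_tilde)   # 0.75, point of K~
outside = theta((2.0, 0.0), Lambdas, G_tilde)  # -3.0, point outside K~
```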

Corollary 4.1

Assume that the system \({\mathcal G} = \{G_{1}, \ldots , G_{m}\}\) is nondegenerate, the cone \({\mathcal C}(\mathcal {G})\) is unimodular and 0 is not a local maximum value of \(\theta \). Let \(F \in \mathcal {A}(K)\) such that

$$\inf_{x\in K}\lambda_{\min}(F(x)) > 0,$$

where \(\lambda _{\min }(F(x))\) is the smallest eigenvalue of \(F(x)\). Then,

$$F \in \mathcal{T}_{\{{\Lambda}_{1}-G_{1}, \ldots, {\Lambda}_{m} - G_{m}\}}.$$

Proof

By Theorem 4.1, it suffices to show that \(\widetilde {K}\) is equal to the closure of \(\widetilde {K} \cap (\mathbb {R} \setminus \{0\})^{n}\). To this end, let u be an element in \(\widetilde {K}\) which does not belong to \((\mathbb {R} \setminus \{0\})^{n}\). Then, \(\theta (u) \ge 0\).

We first assume that \(\theta (u) > 0\). By continuity, there exists a real number \(\eta > 0\) such that

$$\theta(v) > 0 \quad \text{ for all} \quad v \in \mathbb{B}(u, \eta),$$

where \(\mathbb {B}(u, \eta )\) denotes the open ball centered at u with radius \(\eta \). In particular, \(\mathbb {B}(u, \eta ) \subset \widetilde {K}\). Note that \(\mathbb {B}(u, \epsilon ) \cap (\mathbb {R} \setminus \{0\})^{n} \ne \emptyset \) for all \(\epsilon \in (0, \eta )\). Therefore, \(u \in \overline {\widetilde {K} \cap (\mathbb {R} \setminus \{0\})^{n}}\).

We now assume that \(\theta (u) = 0\). Since 0 is not a local maximum value of \(\theta \), we can find a sequence \(\{u^{k}\} \subset \mathbb {R}^{n}\) tending to u as \(k \to \infty \) such that \(\theta (u^{k}) > 0\). By the previous argument, \(u^{k}\) belongs to the closure of \(\widetilde {K} \cap (\mathbb {R} \setminus \{0\})^{n}\), and so does u. □

We now illustrate with some examples where Corollary 4.1 can and cannot be applied.

Example 4.1

Let \(d = n = 2\) and c be a positive number, and consider the set

$$K_{c}: = \{ (x,y) \in \mathbb{R}^{2} \,|\, c I_{2} - G(x, y) \succeq 0 \}, $$

where

$$G(x, y) = \left( \begin{array}{ll} 2& 3 \\ 3 & 5 \end{array}\right)x ^{8}y^{4} + \left( \begin{array}{ll} 2& 3 \\ 3 & 5 \end{array}\right) x^{8} + \left( \begin{array}{ll} 0& -1 \\ -1 & -3 \end{array}\right)x^{4}y^{4} + \left( \begin{array}{ll} 2& 5 \\ 5 & 11 \end{array}\right) x^{2}y^{2}. $$

Then, the support of G is \(\{(8, 4), (8, 0), (4, 4), (2, 2) \}\). The cone \(\mathcal {C}(\mathcal {G})\), where \(\mathcal {G}\) is the singleton \(\{ G\}\), is the convex cone generated by \((1,0)\) and \((1,1)\), which equals \(\{(\alpha + \beta , \beta ) \,|\, \alpha \ge 0, \beta \ge 0\}\). Hence, \(\mathcal {C}(\mathcal {G})\) is unimodular. The set \(V(\mathcal {G})\) of vertices of the Newton polygon of G consists of the four points \(\{(8, 4), (8, 0), (4, 4), (2, 2)\}\). Making the change of variables \(u = x\), \(v = xy\) (i.e., \({\Phi } (x,y) = (x, xy)\)), we have

$$\widetilde{K}_{c} := \{ (u, v) \in \mathbb{R}^{2} \,|\, c I_{2} - \widetilde{G}(u, v) \succeq 0\},$$

where

$$\widetilde{G}(u,v) = \left( \begin{array}{ll} 2& 3 \\ 3 & 5 \end{array}\right)u^{4}v^{4} + \left( \begin{array}{ll} 2& 3 \\ 3 & 5 \end{array}\right)u^{8} + \left( \begin{array}{ll} 0& -1 \\ -1 & -3 \end{array}\right)v^{4} + \left( \begin{array}{ll} 2& 5 \\ 5 & 11 \end{array}\right) v^{2} . $$

A direct calculation shows that \(\widetilde {K}_{c}\) is a subset of \(\{ (u,v) \in \mathbb {R}^{2} \,|\, u^{8} \le c, v^{2} \le c \}\), and hence is compact. Furthermore,

$$\widetilde{K}_{c} = \overline{{\Phi}(K_{c})} \cup \{ (0,v)\,|\, \max \{ v^{4}-v^{2}, -v^{4}+ 3v^{2} \} \le c \}. $$

If \(c \ne 2\), then \(\widetilde {K}_{c} = \overline {{\Phi }(K_{c})}\), and in this case we can apply Corollary 4.1. However, if \(c = 2\), then the function

$$v \mapsto \max \{ v^{4}-v^{2}, -v^{4}+ 3v^{2} \}, $$

attains its local minima at the points \(v = 0, -\sqrt {2}, \sqrt {2}\), and its local minimum values are 0 and 2. Hence, 0 is a local maximum value of the function \(\theta \) in Corollary 4.1, where

$$\theta(u, v) = 2-\max \{ u^{4}v^{4} + u^{8} + v^{4} - v^{2}, u^{4}v^{4} + u^{8} - v^{4} + 3v^{2}\}. $$

In addition, \(\widetilde {K}_{2}\) strictly contains \(\overline {{\Phi }(K_{2})}\): indeed, \((0, \sqrt {2})\) belongs to \(\widetilde {K}_{2}\) but does not lie in \(\overline {{\Phi }(K_{2})}\). Hence, we cannot apply Corollary 4.1 in this case.
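The failure of the hypothesis for \(c = 2\) can be verified numerically from the explicit formula for \(\theta \) above: \(\theta (0, \sqrt {2}) = 0\), while \(\theta \le 0\) on a neighborhood of \((0, \sqrt {2})\), so 0 is a local maximum value. A spot-check (sampling only; our probe offsets are arbitrary small values):

```python
import math

def theta(u, v):
    """theta from Example 4.1 with c = 2, after the change of variables."""
    g1 = u ** 4 * v ** 4 + u ** 8 + v ** 4 - v ** 2
    g2 = u ** 4 * v ** 4 + u ** 8 - v ** 4 + 3 * v ** 2
    return 2.0 - max(g1, g2)

v0 = math.sqrt(2.0)
center = theta(0.0, v0)   # = 0 up to rounding
# theta at nearby points (excluding the center itself) stays negative:
nearby = max(theta(du, v0 + dv)
             for du in (-0.05, 0.0, 0.05) for dv in (-0.05, 0.0, 0.05)
             if (du, dv) != (0.0, 0.0))
```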

Example 4.2

Let K be a logarithmic polyhedron determined by

$$K := \{ x\in \mathbb{R}^{n} \,|\, ({r_{1}^{2}} - x^{2\alpha^{1}}) \cdot I_{d} \succeq 0, \ldots, ({r_{m}^{2}} - x^{2\alpha^{m}}) \cdot I_{d} \succeq 0\}$$

where \(r_{i} > 0\) and \(\alpha ^{i} \in \mathbb {Z}_{\ge 0}^{n}\) for \(i = 1, \ldots , m\).

It is easy to see that the system \(\mathcal {G} := \{x^{2\alpha ^{1}} \cdot I_{d}, \ldots , x^{2\alpha ^{m}} \cdot I_{d} \}\) is nondegenerate and all vertices of its Newton polyhedron are even. Suppose that \(\mathcal {C}(\mathcal {G})\) is unimodular, \(F\in \text {Sym}_{d}(\mathbb {R}[x])\) is bounded on K, and that

$$\inf_{x \in K}\lambda_{\min}(F(x)) > 0.$$

By Theorem 4.1, F belongs to the preordering generated by the matrix polynomials \(\left ({r_{1}^{2}}-x^{2\alpha ^{1}}\right ) \cdot I_{d}, \ldots , \left ({r_{m}^{2}} - x^{2\alpha ^{m}}\right ) \cdot I_{d}\) in \(\text {Sym}_{d}(\mathbb {R}[x])\). Moreover, by an argument similar to the proof of [8, Theorem 2.2], we can show that F belongs to the quadratic module generated by \(\left ({r_{1}^{2}}-x^{2\alpha ^{1}}\right ) \cdot I_{d}, \ldots , \left ({r_{m}^{2}} - x^{2\alpha ^{m}}\right ) \cdot I_{d}\) in \(\text {Sym}_{d}(\mathbb {R}[x])\).

In the rest of this paper, for simplicity, we write \(\mathcal {T}\) instead of \(\mathcal {T}_{\{ {\Lambda }_{1} - G_{1}, \ldots , {\Lambda }_{m} - G_{m} \}}\), the preordering generated by \({\Lambda }_{1} - G_{1}, \ldots , {\Lambda }_{m} - G_{m}\) in \(\text {Sym}_{d}(\mathbb {R}[x])\). Set

$$\begin{array}{@{}rcl@{}} \mathcal{T}^{\vee} & = & \{ \mathscr{L} \colon \text{Sym}_{d}({\mathbb{R}[x]}) \rightarrow \mathbb{R} \,|\, \mathscr{L} \text{ is linear, } \mathscr{L}(I_{d}) = 1, \mathscr{L}(\mathcal{T}) \ge 0 \},\\ \mathcal{T}^{\vee \vee} & = & \{ F \in \text{Sym}_{d}({\mathbb{R}[x]}) \,|\, \mathscr{L}(F) \ge 0, \ \forall \mathscr{L} \in \mathcal{T}^{\vee} \},\\ \mathcal{T}^{Sat} & = & \{ F \in \text{Sym}_{d}({\mathbb{R}[x]}) \ | \ F(x) \succeq 0, \ \forall x \in K \}. \end{array} $$

Clearly, \(\mathcal {T} \subset \mathcal {T}^{\vee \vee }\). Furthermore, we have the following statement.

Corollary 4.2

Assume that the system \(\mathcal {G} = \{G_{1}, \ldots , G_{m}\}\) is nondegenerate, the cone \({\mathcal C}(\mathcal {G})\) is unimodular, and that \(\widetilde {K} = \overline {{\Phi }(K)}\) . Then,

$$\mathcal A(K) \cap \mathcal{T}^{Sat} \subset \mathcal{T}^{\vee \vee}. $$

Proof

Let \(F \in \mathcal {A}(K) \cap \mathcal {T}^{Sat}\) and take any \(\epsilon > 0\). We have \(F + \epsilon \cdot I_{d} \in \mathcal {A}(K)\) and \(\lambda _{\min }(F(x) + \epsilon \cdot I_{d} ) \ge \epsilon \) for all \(x \in K\). By Theorem 4.1, \(F + \epsilon \cdot I_{d} \) belongs to \(\mathcal {T}\). Hence for all \(\mathscr{L} \in \mathcal {T}^{\vee },\)

$$\mathscr{L}(F) + \epsilon = \mathscr{L}(F+ \epsilon \cdot I_{d} ) \ \ge \ 0, $$

so by taking \(\epsilon \rightarrow 0\), we get \(\mathscr{L}(F) \ge 0\). Therefore, \(F \in \mathcal {T}^{\vee \vee }\). □

We conclude the paper with the following remark.

Remark 4.1

Let \({\Psi } \colon \mathbb {R}^{n} \rightarrow \mathbb {R}^{n}\) be a polynomial isomorphism. Let \(K_{1}, K_{2} \) be two semialgebraic sets in \(\mathbb {R}^{n}\) such that \(K_{1} = {\Psi }(K_{2})\). If \(K_{2}\) is determined by a nondegenerate system of matrix polynomials, then the results obtained in this paper hold also for \(K_{1}\).