Abstract
A classical inequality of Szász bounds a polynomial with no zeros in the upper half plane entirely in terms of its first few coefficients. Borcea–Brändén generalized this result to several variables as part of their characterization of linear maps on polynomials preserving stability. In this paper, we use determinantal representations to prove Szász type inequalities in two variables and then show that the two variable inequality yields an inequality in several variables.
1 Introduction
We say \(p \in \mathbb {C}[z]\) is stable if p has no zeros in \(\mathbb {C}_{+}:=\{z\in \mathbb {C}: \text {Im}z > 0\}\).
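Stability in one variable can be tested numerically from the zeros. The helper below is our own illustration (not part of the paper): a polynomial is stable exactly when no root has positive imaginary part.

```python
import numpy as np

def is_stable(coeffs):
    """coeffs = [c_0, c_1, ..., c_d] for p(z) = sum_j c_j z^j."""
    roots = np.roots(coeffs[::-1])  # np.roots expects highest degree first
    # stable <=> every zero lies in the closed lower half plane
    return bool(np.all(roots.imag <= 1e-12))

# p(z) = (z + i)(z + 2i) = z^2 + 3i z - 2 has both zeros in the lower
# half plane, hence is stable; q(z) = z - i has its zero at i and is not.
print(is_stable([-2, 3j, 1]))  # True
print(is_stable([-1j, 1]))     # False
```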
This note is about improvements and generalizations of the following classical inequality of O. Szász.
Theorem 1.1
(Szász [8]) If \(p(z) = \sum _{j=0}^{d}c_j z^j \in \mathbb {C}[z]\) is stable and \(p(0)=1\) then
The purpose of the theorem is to prove
is a normal family whose local uniform limits are entire functions of order at most 2. One can use this to give a complete characterization of the local uniform limits of stable polynomials. See Theorem 4 Chapter VIII (page 334) of [6].
Only recently have multivariable Szász type inequalities been considered. We say \(p \in \mathbb {C}[z_1,\dots , z_n]\) is stable if p has no zeros in \((\mathbb {C}_{+})^n\). In their groundbreaking characterization of linear operators on polynomials \(T:\mathbb {C}[z_1,\dots , z_n] \rightarrow \mathbb {C}[z_1,\dots , z_n]\) that preserve stability, Borcea–Brändén [1] established a Szász type inequality. Its purpose was to prove that the symbol of T
is actually an entire function. Formally, the symbol is given as \(\overline{G_T}(z,w) = T[e^{-z\cdot w}]\).
We let \(e_1,\dots , e_n\) be standard basis vectors of \(\mathbb {Z}^n\).
Theorem 1.2
(Borcea–Brändén Theorem 6.5 [1]) Suppose that \(p(z) = \sum _\beta a(\beta ) z^{\beta } \in \mathbb {C}[z_1,\dots , z_n]\) is stable with \(p(0)=1\). Let
Then,
Here \(\Vert z\Vert _{\infty } = \max \{|z_j|: 1\le j \le n\}\).
The original proof of this theorem uses an inequality of Szász (actually inequality (2.1) below) along with some linear operators that preserve stability to bound all of the coefficients \(\{a(\beta )\}\) and then reassemble p to obtain the bound above.
The goal of this paper is to present improvements to both Theorem 1.1 and Theorem 1.2 and to give a more function theoretic proof in the multivariable case. Our strategy is to use determinantal representations of two variable stable polynomials to prove a version of Theorem 1.2 in two variables and then to show that an inequality in the two variable case yields an inequality in several variables. We also show how to handle the case where \(p(0)=0\).
We think it is instructive to present and prove a sharper inequality in the one variable case. The following inequality is due to de Branges. We take this opportunity to prove that the inequality is sharp in a certain sense.
Theorem 1.3
(de Branges, Lemma 5 [2]) Suppose \(p(z) = \sum _{j=0}^{d}p_j z^j \in \mathbb {C}[z]\) is stable and \(p(0)=1\). Then,
The inequality is sharp on the imaginary axis for stable \(p \in \mathbb {R}[z]\). Specifically, given \(c_1,c_2 \in \mathbb {R}\) with \(\gamma := \frac{1}{2}(c_1^2-2c_2)> 0\) there exist stable polynomials \(p_n \in \mathbb {R}[z]\) with \(p_n=1+c_1z+c_2z^2 + \dots \) such that
We are unsure about sharpness more generally. Note that necessarily \(c_1^2-2c_2 \ge 0\) if \(p=1+c_1z+c_2z^2+\dots \in \mathbb {R}[z]\) is stable (e.g. examine \(c_1,c_2\) in terms of roots) and \(c_1^2-2c_2=0\) if and only if \(p\equiv 1\).
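In more detail, for stable \(p\in \mathbb {R}[z]\) all zeros are real (nonreal zeros of a real polynomial come in conjugate pairs, and none may lie in \(\mathbb {C}_+\)), so writing \(p(z)=\prod _j(1+\alpha _jz)\) with \(\alpha _j\in \mathbb {R}\) gives

```latex
c_1^2 - 2c_2 \;=\; \Big(\sum_j \alpha_j\Big)^{\!2} - 2\sum_{j<k}\alpha_j\alpha_k \;=\; \sum_j \alpha_j^2 \;\ge\; 0,
```

with equality exactly when every \(\alpha _j=0\), i.e. \(p\equiv 1\).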
The next inequality subsumes this one, however the elementary one variable argument in Sect. 2 is a good warm-up for what follows. Using determinantal representations for stable two variable polynomials we are able to offer the following improvement on Theorem 1.2. Below, \(p_j = \frac{\partial p}{\partial z_j}\) and \(p_{j,k} = \frac{\partial ^2 p}{\partial z_j \partial z_k}\).
Theorem 1.4
Let \(p\in \mathbb {C}[z_1,z_2]\) be stable. If \(p(0)=1\), then
For comparison, using \(a \le (1+a^2)/2\) for the first term in the exponential and writing \(p = 1+\sum _{\beta \ne 0} a(\beta ) z^{\beta }\) we get
Here we used \(p_{jj}(0)= 2 a(2e_j)\) and \(p_{jk}(0) = a(e_j+e_k)\) for \(j\ne k\).
It turns out that the two variable result can be used to prove an n-variable result.
Theorem 1.5
Suppose \(p\in \mathbb {C}[z_1,\dots ,z_n]\) is stable. If \(p(0)=1\) then
where Hp is the Hessian of p and \(\Vert \cdot \Vert \) denotes operator norm.
We also get an inequality more comparable to Theorem 1.2:
To close the introduction we discuss what happens when \(p(0)=0\). In one variable there is only a minor issue because we can factor \(p(z) = z^{k}q(z)= p_kz^k + p_{k+1}z^{k+1} + p_{k+2}z^{k+2}+\dots \) and get a bound depending on \(p_k\ne 0 ,p_{k+1},p_{k+2}\):
for stable \(p \in \mathbb {C}[z]\) with a zero of order k at 0. We used the inequality \(\log |z|^k \le k(|z|-1)\).
In several variables the case \(p(0)=0\) is a little more delicate. Borcea–Brändén covered this case as follows. Given \(p(z) =\sum _{\alpha } a(\alpha )z^{\alpha } \in \mathbb {C}[z_1,\dots ,z_n]\) let \(\text {supp}(p) = \{\alpha \in \mathbb {N}^n: a(\alpha ) \ne 0\}\) and let \(\mathcal {M}(p)\) denote the set of minimal elements of \(\text {supp}(p)\) with respect to the partial order \(\le \) on \(\mathbb {N}^n\). Also, for fixed \(\mathcal {M} \subset \mathbb {N}^n\) let
Theorem 1.6
(Borcea–Brändén Theorem 6.6 [1]) Let \(\mathcal {M} \subset \mathbb {N}^n\) be a finite nonempty set and \(p(z) = \sum _{\alpha } a(\alpha ) z^{\alpha } \in \mathbb {C}[z_1,\dots , z_n]\) be stable with \(\mathcal {M}(p)= \mathcal {M}\). Then, there are constants B and C depending only on the coefficients \(a(\alpha )\) with \(\alpha \in \mathcal {M}_2\) such that
Moreover, B and C can be chosen so that they depend continuously on the aforementioned set of coefficients.
With our approach we are able to get a more explicit estimate in two and several variables. Set \(\vec {1} = (1,\dots , 1) \in \mathbb {C}^n\).
Theorem 1.7
Let \(p\in \mathbb {C}[z_1,z_2]\) be stable and assume p vanishes to order r at 0. Write out the homogeneous expansion of p:
where \(P_j\) is homogeneous of degree j. Then,
where
This is proved in Sect. 6.
Finally, we present a multivariable Szász inequality for the case \(p(0)=0\).
Theorem 1.8
Suppose \(p\in \mathbb {C}[z_1,\dots ,z_n]\) is stable and vanishes to order r at 0. If we write out the homogeneous expansion of p
then
where \(C_0, C_1,C_2\) are constants depending on \(r, P_r(\vec {1}), P_{r+1}(\vec {1}), P_{r+2}(\vec {1}), \nabla P_{r}(\vec {1}), \nabla P_{r+1}(\vec {1})\).
The constants \(C_0,C_1,C_2\) along with the proof of this theorem are explicitly given in Sect. 8.
2 One Variable Inequality
In this section we prove Theorem 1.3, whose main inequality is due to de Branges [2].
Lemma 2.1
Suppose \(\alpha _1,\dots , \alpha _d\in \mathbb {C}\) with \(\text {Im}\alpha _j \le 0\). Then,
Equality holds if and only if \(\alpha _j \in \mathbb {R}\) for all but at most one j.
Proof
Note
So, our inequality reduces to showing the following is non-negative:
The last quantity is evidently non-negative and equals zero exactly when \(\text {Im}\alpha _j \text {Im}\alpha _k = 0\) for all \(j\ne k\), which means \(\text {Im}\alpha _j = 0\) for all j or there is one j such that \(\text {Im}\alpha _j \ne 0\) while \(\text {Im}\alpha _k=0\) for all \(k\ne j\). \(\square \)
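In the notation of the proof of Theorem 1.3 below, the inequality of Lemma 2.1 should read \(\sum _j|\alpha _j|^2 \le |\sum _j\alpha _j|^2 - 2\,\text {Re}\sum _{j<k}\alpha _j\alpha _k\); this reading is consistent with the end of the proof, since the difference of the two sides equals \(4\sum _{j<k}\text {Im}\,\alpha _j\,\text {Im}\,\alpha _k\). A randomized numerical check of that reading:

```python
import numpy as np

rng = np.random.default_rng(0)
violations = 0
for _ in range(1000):
    d = int(rng.integers(1, 6))
    # alpha_j with Im alpha_j <= 0
    alpha = rng.normal(size=d) - 1j * rng.uniform(0.0, 2.0, size=d)
    lhs = np.sum(np.abs(alpha) ** 2)
    cross = sum(alpha[j] * alpha[k] for j in range(d) for k in range(j + 1, d))
    rhs = np.abs(np.sum(alpha)) ** 2 - 2 * np.real(cross)
    if lhs > rhs + 1e-9:
        violations += 1
print(violations)  # 0
```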
Szász uses the inequality \(|(1+z)e^{-z}|\le e^{|z|^2}\) instead of the stronger inequality:
Lemma 2.2
For \(z \in \mathbb {C}\), \(z\ne -1\)
Proof
Since \(\log (1+x) \le x\) we have
\(\square \)
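The stronger inequality is presumably \(|(1+z)e^{-z}|\le e^{|z|^2/2}\): indeed \(\log |1+z|^2=\log (1+2\text {Re}\,z+|z|^2)\le 2\text {Re}\,z+|z|^2\) by \(\log (1+x)\le x\). A quick numerical confirmation of this reading:

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.normal(size=2000) + 1j * rng.normal(size=2000)
lhs = np.abs((1 + z) * np.exp(-z))
rhs = np.exp(np.abs(z) ** 2 / 2)
ok = bool(np.all(lhs <= rhs * (1 + 1e-9)))
print(ok)  # True
```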
Proof
(Proof of Theorem 1.3) Write \(p(z) = \prod _{j=1}^{d}(1+\alpha _j z)\) where \(\text {Im}\alpha _j \le 0\). Note \(\sum _{j} \alpha _j = p_1\) and \(\sum _{j<k} \alpha _j \alpha _k = p_2\). By Lemmas 2.2 and 2.1
Regarding sharpness define \(\gamma = (c_1^2-2c_2)/2 >0\). Choose n large enough that \(d_n = \gamma - c_1^2/(2n)\ge 0.\) Then, the polynomial
is stable, belongs to \(\mathbb {R}[z]\), and has the correct normalizations. Since \(p_n(z) \rightarrow \exp {(c_1z-\gamma z^2)}\) locally uniformly, we have
which is exactly what was claimed. \(\square \)
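The family in the sharpness argument is consistent with the choice \(p_n(z)=(1+c_1z/n)^n(1-d_nz^2/n)^n\), whose zeros are all real once \(d_n\ge 0\) and which converges locally uniformly to \(\exp (c_1z-\gamma z^2)\). Treating that form as an assumption on our part, the normalization of the first coefficients and the limit can be checked numerically:

```python
import numpy as np
from numpy.polynomial import polynomial as P

c1, c2 = 1.0, -0.5
gamma = (c1 ** 2 - 2 * c2) / 2        # = 1 > 0
n = 50
d_n = gamma - c1 ** 2 / (2 * n)
assert d_n >= 0

# hypothetical candidate family (all zeros real, hence stable and in R[z]):
pn = P.polymul(P.polypow([1.0, c1 / n], n), P.polypow([1.0, 0.0, -d_n / n], n))

coeffs_ok = bool(np.allclose(pn[:3], [1.0, c1, c2]))   # p_n = 1 + c1 z + c2 z^2 + ...
x = 0.3
limit_gap = abs(P.polyval(x, pn) - np.exp(c1 * x - gamma * x ** 2))
print(coeffs_ok, limit_gap < 1e-2)  # True True
```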
It is worth pointing out that Szász proves
and then converts \(\sum _j \alpha _j^2 = p_1^2-2p_2\) to get the estimate
By sidestepping the former inequality and estimating \(\sum _j |\alpha _j|^2\) directly in terms of polynomial coefficients we get a better bound. The inequality (2.1) is used in [1] to prove multivariable Szász inequalities. So, using Lemma 2.1 in their proof would already improve Theorem 1.2.
3 Two Variable Szász Inequality
Using determinantal formulas it is possible to establish a Szász inequality for two variable polynomials.
Definition 3.1
We shall say a stable polynomial \(p\in \mathbb {C}[z_1,\dots ,z_n]\) of total degree d has a determinantal representation if there exist \(d\times d\) matrices \(A,B_1,\dots , B_n\) and a constant \(c\in \mathbb {C}\) such that
1. \(\text {Im}A := \frac{1}{2i}(A-A^*) \ge 0\);

2. for all j, \(B_j \ge 0\);

3. \(\sum _{j=1}^{n} B_j=I\);

4. \(p(z) = c \det (A+\sum _{j=1}^{n} z_j B_j)\).
Theorem 1.4 will be broken into two theorems.
Theorem 3.2
If \(p\in \mathbb {C}[z_1,z_2]\) is stable, then p has a determinantal representation.
Several different determinantal representations are closely related to this one but not quite equivalent. There are determinantal representations for three variable hyperbolic polynomials, two variable real-zero polynomials, and two variable real-stable polynomials (see [3, 4, 9]). It turns out this formula can be derived from a determinantal representation for polynomials with no zeros on the bidisk \(\mathbb {D}^2 = \{(z_1,z_2): |z_1|,|z_2|<1\}\) from [3]. We show how to convert from the bidisk formula to Theorem 3.2 in Sect. 4. The method of conversion is a very slight modification of what is done in the paper [5]. We include the argument for the reader’s convenience; the essence of Sect. 4 is not new.
In Sect. 5 we prove the following Szász inequality for stable polynomials with determinantal representations.
Theorem 3.3
Suppose \(p\in \mathbb {C}[z_1,\dots , z_n]\) has a determinantal representation as above. If \(p(0)=1\), then
4 Determinantal Representations
In this section we prove Theorem 3.2. We begin by recalling the following.
Theorem 4.1
(See [3] Theorem 2.1) If \(q\in \mathbb {C}[z_1,z_2]\) has no zeros in \(\mathbb {D}^2\) and bidegree (n, m), then there exists a constant c and an \((n+m)\times (n+m)\) contractive matrix D such that
where \(\Delta (z) = z_1P_1+z_2P_2\) and
Let \(p \in \mathbb {C}[z_1,z_2]\) be stable and have bidegree (n, m). Define \(\phi (\zeta ) = i\frac{1+\zeta }{1-\zeta }\) and
One can calculate that \(\phi ^{-1}(\zeta ) = \frac{\zeta - i}{\zeta + i}\) and
Then, q has no zeros in \(\mathbb {D}^2\) and so the conclusion of Theorem 4.1 holds. Then, converting (4.1) to a formula for p yields
Since D is a contraction, the eigenspace corresponding to eigenvalue 1 is reducing (if nontrivial). Thus, there exists a unitary U such that
where K is a contractive \(k\times k\) matrix for which 1 is not an eigenvalue. Here k is the codimension of the eigenspace of D corresponding to eigenvalue 1.
Then,
where \(A = i(I+K)(I-K)^{-1}\).
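The map \(K\mapsto A=i(I+K)(I-K)^{-1}\) is a matrix Cayley transform: it carries contractions without eigenvalue 1 to matrices with \(\text {Im}\,A\ge 0\). A randomized sanity check (illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)
min_eig = np.inf
for _ in range(100):
    k = int(rng.integers(1, 6))
    M = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))
    K = M / (np.linalg.norm(M, 2) + 0.1)   # strict contraction: 1 is not an eigenvalue
    A = 1j * (np.eye(k) + K) @ np.linalg.inv(np.eye(k) - K)
    ImA = (A - A.conj().T) / 2j            # Im A = (A - A*)/(2i)
    min_eig = min(min_eig, np.linalg.eigvalsh(ImA).min())
print(min_eig >= -1e-10)  # True
```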
Let \(B_j\) equal the bottom right \(k\times k\) block of \(U^*P_jU\). Then,
where \(c_0\) is a new constant (the \(*\) denotes a block we are unconcerned with). Since \(P_1+P_2 = I\), \(B_1+B_2= I\). Also note that \(p(t,t) = c_0 \det (A+tI)\) has degree k so that \(k \le \deg p\). On the other hand, the determinantal formula for p has total degree at most k, so that \(\deg p \le k\). Therefore the matrices in our formula have size matching the total degree of p. Finally,
This proves Theorem 3.2.
5 Szász Inequality for Determinantal Polynomials
In this section we prove Theorem 3.3.
Suppose \(p(z) = c \det (A+\sum _{j=1}^{n} z_j B_j)\) where \(\sum _{j=1}^{n} B_j = I\), \(B_j \ge 0\), \(\text {Im}A \ge 0\), and \(p(0)=1\). By the last normalization A is invertible with \(c\det A = 1\) so that
where \(X_j = B_j A^{-1}\). As with complex numbers, \(\text {Im}(A^{-1}) \le 0\).
It helps to make note of a few formulas for the derivatives of p. Recall that if A(t) is a differentiable matrix function then
whenever A(t) is invertible. Here \(\text {tr}\) is the trace of a matrix.
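The formula recalled here is presumably Jacobi's, \(\frac{d}{dt}\det A(t)=\det A(t)\,\text {tr}(A(t)^{-1}A'(t))\), equivalently \(\frac{d}{dt}\log \det A(t)=\text {tr}(A(t)^{-1}A'(t))\). A finite-difference check under that reading:

```python
import numpy as np

rng = np.random.default_rng(4)
A0 = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A1 = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = lambda t: A0 + t * A1                   # so A'(t) = A1

t, h = 0.3, 1e-6
numeric = (np.linalg.det(A(t + h)) - np.linalg.det(A(t - h))) / (2 * h)
exact = np.linalg.det(A(t)) * np.trace(np.linalg.inv(A(t)) @ A1)
gap = abs(numeric - exact) / (1 + abs(exact))
print(gap < 1e-4)  # True
```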
Letting \(X(z) = I+\sum _{j=1}^n z_j X_j\), whenever \(p(z) \ne 0\) we have
so that
For a positive definite matrix P we have \(\log P \le P-I\) simply because the same inequality holds for the eigenvalues of P. Therefore,
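The matrix inequality \(\log P \le P - I\) can be checked by functional calculus: both sides diagonalize in the same basis, and \(\log x \le x - 1\) on the spectrum. Numerically:

```python
import numpy as np

rng = np.random.default_rng(5)
min_eig = np.inf
for _ in range(100):
    M = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
    P = M @ M.conj().T + 0.1 * np.eye(5)    # positive definite
    w, V = np.linalg.eigh(P)
    logP = (V * np.log(w)) @ V.conj().T     # log P via functional calculus
    gap = P - np.eye(5) - logP              # should be positive semidefinite
    min_eig = min(min_eig, np.linalg.eigvalsh(gap).min())
print(min_eig >= -1e-8)  # True
```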
Now,
By Lemmas 5.1, 5.2 and 5.3 below we have
Finally, since \(\sum _j B_j = I\) we have
and
Thus,
which proves Theorem 3.3 modulo the following three lemmas.
Lemma 5.1
Let P, M be \(n\times n\) matrices. If \(P \ge 0\), then
Proof
Since \(P\ge 0\), we can decompose \(P = \sum _j v_jv_j^*\) where \(v_j \in \mathbb {C}^n\). Then,
\(\square \)
The following is a standard result (the finite dimensional version of the Naimark dilation theorem—see [7]).
Lemma 5.2
Suppose \(B_1,\dots , B_n\) are \(N\times N\) matrices. Assume for all j, \(B_j \ge 0\) and \(\sum _j B_j = I\). Then, there exist pairwise orthogonal projection matrices \(P_1,\dots , P_n\) of size \(m\times m\) where \(m= nN\) such that
In particular, for \(z=(z_1,\dots , z_n) \in \mathbb {C}^n\)
Proof
We can factor \(B_j = A_j^* A_j\) with \(N\times N\) matrix \(A_j\). The \(nN\times N\) matrix
is an isometry from \(\mathbb {C}^N\) to \(\mathbb {C}^{nN}\) since \(T^*T = \sum _j B_j = I\). We can extend T to an \(m\times m\) unitary U. Let \(Q_j\) be the orthogonal projection onto the j-th block of \(\mathbb {C}^{m} = \mathbb {C}^N\oplus \cdots \oplus \mathbb {C}^N\). Set \(P_j = U^*Q_j U\). Then, (5.4) holds and
\(\square \)
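The construction in the proof can be carried out numerically: factor \(B_j=A_j^*A_j\), stack the \(A_j\) into an isometry T, complete T to a unitary U, and set \(P_j=U^*Q_jU\). We read (5.4) as saying that the compression of \(P_j\) to \(\mathbb {C}^N\) is \(B_j\) (an assumption on our part, since the displayed equation is not reproduced here); the sketch below verifies that reading.

```python
import numpy as np

def psd_sqrt(B):
    w, V = np.linalg.eigh(B)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

rng = np.random.default_rng(6)
n, N = 3, 4
m = n * N

# random B_j >= 0 with sum B_j = I (normalize random PSD matrices)
C = []
for _ in range(n):
    g = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    C.append(g @ g.conj().T)
w, V = np.linalg.eigh(sum(C))
S_inv_half = (V * (1.0 / np.sqrt(w))) @ V.conj().T
B = [S_inv_half @ Cj @ S_inv_half for Cj in C]

# stack the factors A_j = B_j^{1/2} into an isometry and complete to a unitary
T = np.vstack([psd_sqrt(Bj) for Bj in B])             # m x N, T* T = I
Q, _ = np.linalg.qr(T, mode='complete')
U = np.hstack([T, Q[:, N:]])                          # unitary, first N columns = T

P = []
for j in range(n):
    Qj = np.zeros((m, m))
    Qj[j * N:(j + 1) * N, j * N:(j + 1) * N] = np.eye(N)
    P.append(U.conj().T @ Qj @ U)

proj_ok = all(np.allclose(Pj @ Pj, Pj) for Pj in P)
orth_ok = all(np.allclose(P[i] @ P[j], 0) for i in range(n) for j in range(n) if i != j)
compress_ok = all(np.allclose(Pj[:N, :N], Bj) for Pj, Bj in zip(P, B))
print(proj_ok, orth_ok, compress_ok)  # True True True
```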
The following lemma is an adaptation of our one variable argument.
Lemma 5.3
If M is a square matrix with \(\text {Im}M \ge 0\), then
Proof
Write \(A = \text {Re}M, B = \text {Im}M\). Then,
Then,
If B has eigenvalues \(\beta _j\ge 0\) then
This proves the claimed inequality. \(\square \)
6 Szász Inequality for Determinants with \(p(0)=0\)
As with Theorem 1.4 we will prove a Szász inequality for polynomials with determinantal representations and Theorem 1.7 will follow via Theorem 3.2.
Theorem 6.1
Suppose \(p\in \mathbb {C}[z_1,\dots ,z_n]\) has a determinantal representation as in Definition 3.1. Assume p vanishes to order r at 0. Write out the homogeneous expansion of p:
where \(P_j\) is homogeneous of degree j. Then,
where
and \(\vec {1} = (1,\dots ,1) \in \mathbb {C}^n\).
Proof
Write \(p(z) = c\det (A+\sum _{j=1}^{d} z_j B_j)\) as in Definition 3.1. Since \(p(0)=0\), \(\det A = 0\). Since \(\text {Im}A \ge 0\), the eigenspace corresponding to eigenvalue 0 is reducing for A; see Lemma 6.1 below. Let s equal the dimension of the kernel of A.
So, after conjugating by a unitary we can rewrite p in the form
where C is an invertible \((d-s)\times (d-s)\) matrix with \(\text {Im}C\ge 0\) and the \(B_j\) are relabelled after conjugating (they satisfy all of the same properties as before). Define \(X_j = B_j \begin{pmatrix} I &{} 0 \\ 0 &{} C^{-1} \end{pmatrix}\) and \(J = \begin{pmatrix} O_s &{} 0 \\ 0 &{} I_{d-s} \end{pmatrix}\). Then,
where \(c_0 = c \det C\). Let \(X(z) = J +\sum _{j=1}^{d} z_j X_j\). Let \(X_s(z)\) be the top left \(s\times s\) block of X(z). Evaluating \(\det X\) starting with the top left \(s\times s\) block gives
Note that \(\det X_s(z)\) is homogeneous of degree s and \(X_s(\vec {1}) = I_s\) since \(\sum _j B_j=I\). This proves \(s=r\).
We can follow some of the argument in Sect. 5. Equations (5.1), (5.2), (5.3) hold when \(p(z)\ne 0\) but (5.3) rearranges into
As before using Lemmas 5.1, 5.2, 5.3 we have
Now we must relate these quantities to intrinsic quantities of p.
First, \(p(t\vec {1}) = c_0 \det \begin{pmatrix} tI_r &{} 0 \\ 0 &{} I+tC^{-1} \end{pmatrix} = c_0 t^r \det (I+tC^{-1}).\) So, using this formula and the homogeneous expansion of p we get
It is more difficult to calculate \(\text {tr}X_j\). Define
Note
Then,
Thus, we can do the following computation with matrices and also with the homogeneous expansion of p
Therefore,
which implies
Therefore,
If we reassemble we get
where
and this concludes the proof. \(\square \)
Lemma 6.1
Suppose A is a matrix with \(\text {Im}A \ge 0\). If 0 is an eigenvalue of A with eigenspace of dimension s, then there exists a unitary U such that
where \(\text {Im}C \ge 0\) and C is invertible.
Proof
If we write A using an orthonormal basis for its kernel followed by an orthonormal basis for the orthogonal complement of its kernel, we can put A into the form
by conjugating by a unitary. This matrix will still have positive semi-definite imaginary part:
This implies \(B=0\). Note C is invertible because it cannot have 0 as an eigenvalue. \(\square \)
7 Multivariable Szász Inequalities
Using the two variable Szász inequality we can establish the multivariable inequality Theorem 1.5.
We will frequently use the component-wise partial order on \(\mathbb {R}^n\): \(x \ge y\) if and only if for all \(j=1,\dots , n\), \(x_j\ge y_j\).
Proof (Proof of Theorem 1.5)
For \(z=x+iy \in \mathbb {C}_+^{n}\) we have
for all independent choices of ± by Lemma 7.1 below (more precisely, we can hold fixed any variables with a “\(+\)” and apply Lemma 7.1 to the remaining variables). So, it is enough to prove Theorem 1.5 for \(z \in \mathbb {C}_+^n\).
By Lemma 7.2 below, if \(0\le y\le \tilde{y}\) then
Define \(\tilde{y}\) as the vector with j-th component
Then, \(\tilde{y} \ge \pm x\) and \(\tilde{y}\ge y\).
Define
which has no zeros in \(\mathbb {C}_+^2\) and \(q(0)=1\). We will now apply Theorem 1.4 using all of the following computations.
Thus, by Theorem 1.4
where we have used \(|x+i\tilde{y}| \le \sqrt{2}|z|\) and \(|\tilde{y}|\le |z|\). \(\square \)
Since Theorem 1.2 is an estimate on polydisks it is worth pointing out that (7.1) yields
where \(p = \sum a(\beta ) z^{\beta }\) and in the last line we used the inequality \(a\le (1+a^2)/2\). This gives
The following is a standard result. See Lemma 2.8 of [1] for instance.
Lemma 7.1
If \(p\in \mathbb {C}[z_1,\dots , z_n]\) has no zeros in \(\mathbb {C}_+^n\) then for \(z=x+iy\in \mathbb {C}_{+}^n\)
Proof
The one variable polynomial \(q(\zeta ) = p(x+\zeta y)\) has no zeros in \(\mathbb {C}_+\). Then, q can be factored as a product of terms of the form \((1+\alpha \zeta )\) where \(\text {Im}\alpha \le 0\). We can then check directly that
which implies \(|q(i)|\ge |q(-i)|\). \(\square \)
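The direct check at the end of the proof amounts to the identity \(|1+i\alpha |^2-|1-i\alpha |^2=-4\,\text {Im}\,\alpha \ge 0\) when \(\text {Im}\,\alpha \le 0\):

```python
import numpy as np

rng = np.random.default_rng(7)
a = rng.normal(size=1000) - 1j * rng.uniform(0.0, 3.0, size=1000)  # Im a <= 0
diff = np.abs(1 + 1j * a) ** 2 - np.abs(1 - 1j * a) ** 2
identity_ok = bool(np.allclose(diff, -4 * a.imag))
nonneg_ok = bool(np.all(diff >= -1e-12))
print(identity_ok, nonneg_ok)  # True True
```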
Lemma 7.2
If \(p\in \mathbb {C}[z_1,\dots , z_n]\) has no zeros in \(\mathbb {C}_{+}^n\) and if \(0\le y \le \tilde{y}\) then for any \(x\in \mathbb {R}^n\)
Proof
The one variable polynomial \(q(\zeta ) = p(x+iy+\zeta (\tilde{y}-y))\) has no zeros in \(\mathbb {C}_{+}\). Factors of q are of the form \((1+\alpha \zeta )\) with \(\text {Im}\alpha \le 0\). Since \(|1+i\alpha | \ge 1\) we have \(|q(0)| \le |q(i)|\). \(\square \)
We can get a slightly better bound on \(\mathbb {R}^n\) by modifying the argument of Theorem 1.5.
Theorem 7.1
Suppose \(p\in \mathbb {C}[z_1,\dots , z_n]\) is stable. If \(p(0)=1\) then for \(x\in \mathbb {R}^n\)
where Hp is the Hessian matrix of p.
Proof
We can write \(x \in \mathbb {R}^n\) as \(x= x_{+} - x_{-}\) where \((x_+)_j = {\left\{ \begin{array}{ll} x_j &{} \text { if } x_j \ge 0 \\ 0 &{} \text { if } x_j<0 \end{array}\right. }\). Define
which has no zeros in \(\mathbb {C}_{+}^2\) and \(P(0)=1\). Set \(S_+ = \{j: x_j \ge 0\}\), \(S_{-}=\{j:x_j<0\}\).
Note that
Now, since \(P(1,-1) = p(x)\) we have
by Theorem 1.4. \(\square \)
8 Multivariable Inequalities When \(p(0)=0\)
In this section we prove Theorem 1.8. Write the homogeneous expansion of p
Notice that \(P_r(z)\) is stable itself by Hurwitz’s theorem because
exhibits \(P_r\) as a limit of polynomials with no zeros in \(\mathbb {C}_+^{n}\).
We can make some of the reductions as in the previous section. We may assume \(z= x+iy \in \mathbb {C}_{+}^n\) by Lemma 7.1. Define \(m =\max \{|x_j|,y_j: 1\le j \le n\}\) and \(\tilde{y} = m\vec {1}\). Then, \(\tilde{y}\ge \pm x\), \(\tilde{y}\ge y\) and \(|p(z)| \le |p(x+i\tilde{y})|\). Define
which is stable and has homogeneous expansion
All of the terms above are homogeneous of the correct degree but it is conceivable that the first term vanishes. Setting \(w_1=w_2=1\) we see the first term evaluates to \(P_r(2\tilde{y}) = (2m)^r P_r(\vec {1})\) which is non-zero.
The data we need for Theorem 1.7 is:
and so (omitting some details)
where
Note \(m\le \Vert z\Vert _{\infty }\) and \(\Vert x+i\tilde{y}\Vert _{\infty } \le \sqrt{2} m\). We can crudely estimate A:
and
Here we use \(\Vert \cdot \Vert _1\) for \(\ell ^1\) norm of a vector. Putting everything together
where
References
Borcea, J., Brändén, P.: The Lee–Yang and Pólya-Schur programs. I. Linear operators preserving stability. Invent. Math. 177(3), 541–569 (2009). https://doi.org/10.1007/s00222-009-0189-3
de Branges, L.: Some Hilbert spaces of entire functions II. Trans. Am. Math. Soc. 99, 118–152 (1961). https://doi.org/10.2307/1993448
Grinshpan, A., Kaliuzhnyi-Verbovetskyi, D.S., Vinnikov, V., Woerdeman, H.J.: Stable and real-zero polynomials in two variables. Multidimens. Syst. Signal Process. 27(1), 1–26 (2016). https://doi.org/10.1007/s11045-014-0286-3
Helton, J.W., Vinnikov, V.: Linear matrix inequality representation of sets. Commun. Pure Appl. Math. 60(5), 654–674 (2007). https://doi.org/10.1002/cpa.20155
Knese, G.: Determinantal representations of semihyperbolic polynomials. Michigan Math. J. 65(3), 473–487 (2016). https://doi.org/10.1307/mmj/1472066143
Levin, B.J.: Distribution of Zeros of Entire Functions. Translations of Mathematical Monographs, vol. 5, revised edn. Translated from the Russian by R.P. Boas, J.M. Danskin, F.M. Goodspeed, J. Korevaar, A.L. Shields, H.P. Thielman. American Mathematical Society, Providence, RI (1980)
Paulsen, V.: Completely Bounded Maps and Operator Algebras. Cambridge Studies in Advanced Mathematics, vol. 78. Cambridge University Press, Cambridge (2002)
Szász, O.: On sequences of polynomials and the distribution of their zeros. Bull. Am. Math. Soc. 49, 377–380 (1943). https://doi.org/10.1090/S0002-9904-1943-07919-0
Vinnikov, V.: LMI representations of convex semialgebraic sets and determinantal representations of algebraic hypersurfaces: past, present, and future. In: Mathematical Methods in Systems, Optimization, and Control. Oper. Theory Adv. Appl., vol. 222, pp. 325–349. Birkhäuser/Springer Basel AG, Basel (2012). https://doi.org/10.1007/978-3-0348-0411-0_23
Communicated by Mihai Putinar.
This research was supported by NSF Grant DMS-1363239.
Knese, G. Global Bounds on Stable Polynomials. Complex Anal. Oper. Theory 13, 1895–1915 (2019). https://doi.org/10.1007/s11785-018-0873-7