1 Introduction

We say \(p \in \mathbb {C}[z]\) is stable if p has no zeros in \(\mathbb {C}_{+}:=\{z\in \mathbb {C}: \text {Im}z > 0\}\).

This note is about improvements and generalizations of the following classical inequality of O. Szász.

Theorem 1.1

(Szász [8]) If \(p(z) = \sum _{j=0}^{d}c_j z^j \in \mathbb {C}[z]\) is stable and \(p(0)=1\) then

$$\begin{aligned} |p(z)| \le \exp (|z||c_1| + 3|z|^2(|c_1|^2 + |c_2|)). \end{aligned}$$

The purpose of the theorem is to prove that

$$\begin{aligned} \mathcal {F}_C = \{p\in \mathbb {C}[z]: p \text { is stable}, p(0)=1, |p'(0)|, |p''(0)| \le C\} \end{aligned}$$

is a normal family whose local uniform limits are entire functions of order at most 2. One can use this to give a complete characterization of the local uniform limits of stable polynomials. See Theorem 4 Chapter VIII (page 334) of [6].

Only recently have multivariable Szász type inequalities been considered. We say \(p \in \mathbb {C}[z_1,\dots , z_n]\) is stable if p has no zeros in \((\mathbb {C}_{+})^n\). In their groundbreaking characterization of linear operators on polynomials \(T:\mathbb {C}[z_1,\dots , z_n] \rightarrow \mathbb {C}[z_1,\dots , z_n]\) that preserve stability, Borcea–Brändén [1] established a Szász type inequality. Its purpose was to prove that the symbol of T

$$\begin{aligned} \overline{G_T}(z,w) = \sum _{\alpha \in \mathbb {N}^n} (-1)^{|\alpha |} T(z^\alpha ) \frac{w^\alpha }{\alpha !} \end{aligned}$$

is actually an entire function. Formally, the symbol is given as \(\overline{G_T}(z,w) = T[e^{-z\cdot w}]\).

We let \(e_1,\dots , e_n\) be standard basis vectors of \(\mathbb {Z}^n\).

Theorem 1.2

(Borcea–Brändén Theorem 6.5 [1]) Suppose that \(p(z) = \sum _\beta a(\beta ) z^{\beta } \in \mathbb {C}[z_1,\dots , z_n]\) is stable with \(p(0)=1\). Let

$$\begin{aligned} B&= 2^{n-1} \frac{\sqrt{2e^2-e}}{e-1} = 2^{n-1}\cdot 2.0210\dots ,\\ C&= 6e^2\left( \sum _{i=1}^{n}|a(e_i)|\right) ^2 + 4e^2\sum _{i,j=1}^{n} |a(e_i+e_j)|. \end{aligned}$$

Then,

$$\begin{aligned} |p(z)| \le B\exp (C\Vert z\Vert _{\infty }^2). \end{aligned}$$

Here \(\Vert z\Vert _{\infty } = \max \{|z_j|: 1\le j \le n\}\).

The original proof of this theorem uses an inequality of Szász (actually inequality (2.1) below) along with some linear operators that preserve stability to bound all of the coefficients \(\{a(\beta )\}\) and then reassemble p to obtain the bound above.

The goal of this paper is to present improvements to both Theorem 1.1 and Theorem 1.2 and to give a more function theoretic proof in the multivariable case. Our strategy is to use determinantal representations of two variable stable polynomials to prove a version of Theorem 1.2 in two variables and then to show that an inequality in the two variable case yields an inequality in several variables. We also show how to handle the case where \(p(0)=0\).

We think it is instructive to present and prove a sharper inequality in the one variable case. The following inequality is due to de Branges. We take this opportunity to prove that the inequality is sharp in a certain sense.

Theorem 1.3

(de Branges, Lemma 5 [2]) Suppose \(p(z) = \sum _{j=0}^{d}p_j z^j \in \mathbb {C}[z]\) is stable and \(p(0)=1\). Then,

$$\begin{aligned} |p(z)| \le \exp ( \text {Re}(p_1 z) + \frac{1}{2}(|p_1|^2 - 2\text {Re}(p_2))|z|^2). \end{aligned}$$

The inequality is sharp on the imaginary axis for stable \(p \in \mathbb {R}[z]\). Specifically, given \(c_1,c_2 \in \mathbb {R}\) with \(\gamma := \frac{1}{2}(c_1^2-2c_2)> 0\) there exist stable polynomials \(p_n \in \mathbb {R}[z]\) with \(p_n=1+c_1z+c_2z^2 + \dots \) such that

$$\begin{aligned} \lim _{n\rightarrow \infty } |p_n(iy)| = \exp (\gamma y^2). \end{aligned}$$

We are unsure about sharpness more generally. Note that necessarily \(c_1^2-2c_2 \ge 0\) if \(p=1+c_1z+c_2z^2+\dots \in \mathbb {R}[z]\) is stable (e.g. examine \(c_1,c_2\) in terms of roots) and \(c_1^2-2c_2=0\) if and only if \(p\equiv 1\).
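Although it plays no role in the proofs, Theorem 1.3 is easy to probe numerically. The snippet below is an illustration only (it assumes numpy; the sampling family, seed, and tolerance are our choices): it draws stable polynomials in the factored form \(\prod _j (1+\alpha _j z)\) with \(\text {Im}\alpha _j \le 0\) and tests the bound at random points.

```python
import numpy as np

rng = np.random.default_rng(0)

def check_theorem_1_3(d=5, trials=2000):
    """Sample stable p(z) = prod_j (1 + alpha_j z), Im alpha_j <= 0, and test
    |p(z)| <= exp(Re(p_1 z) + 0.5*(|p_1|^2 - 2 Re p_2)*|z|^2)."""
    for _ in range(trials):
        alpha = rng.normal(size=d) - 1j * np.abs(rng.normal(size=d))
        p1 = alpha.sum()                      # coefficient of z
        p2 = (p1**2 - (alpha**2).sum()) / 2   # coefficient of z^2
        z = rng.normal() + 1j * rng.normal()
        lhs = np.abs(np.prod(1 + alpha * z))
        rhs = np.exp((p1 * z).real + 0.5 * (abs(p1)**2 - 2 * p2.real) * abs(z)**2)
        assert lhs <= rhs * (1 + 1e-9), (lhs, rhs)

check_theorem_1_3()
```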

The next inequality subsumes this one; however, the elementary one variable argument in Sect. 2 is a good warm-up for what follows. Using determinantal representations for stable two variable polynomials we are able to offer the following improvement on Theorem 1.2. Below, \(p_j = \frac{\partial p}{\partial z_j}\) and \(p_{j,k} = \frac{\partial ^2 p}{\partial z_j \partial z_k}\).

Theorem 1.4

Let \(p\in \mathbb {C}[z_1,z_2]\) be stable. If \(p(0)=1\), then

$$\begin{aligned} |p(z)|\le \exp \left( \text {Re}\left( \sum _{j=1}^{2} z_j p_j(0)\right) + \frac{1}{2}\Vert z\Vert _{\infty }^2 \left( |\sum _{j=1}^{2} p_j(0)|^2-\text {Re}\left( \sum _{j,k=1}^{2} p_{j,k}(0)\right) \right) \right) . \end{aligned}$$
(1.1)

For comparison, using \(a \le (1+a^2)/2\) for the first term in the exponential and writing \(p = 1+\sum _{\beta \ne 0} a(\beta ) z^{\beta }\) we get

$$\begin{aligned}&|p(z)| \le \sqrt{e} \exp (C\Vert z\Vert _{\infty }^2)\\&C=\left( \sum _{j=1}^{2} |a(e_j)|\right) ^2+\sum _{j,k=1}^{2} |\text {Re}[a(e_j+e_k)]|. \end{aligned}$$

Here we used \(p_{jj}(0)= 2 a(2e_j)\) and \(p_{jk}(0) = a(e_j+e_k)\) for \(j\ne k\).
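As a numerical sanity check of Theorem 1.4 (an illustration only, assuming numpy; the test family of products of stable affine factors and the tolerance are our choices, not part of the paper), the derivatives at 0 of \(p = (1+a_{1}z_1+b_{1}z_2)(1+a_{2}z_1+b_{2}z_2)\) with \(a_k, b_k \ge 0\) are available in closed form, so (1.1) can be tested directly:

```python
import numpy as np

rng = np.random.default_rng(1)

def rhs_1_1(grad, hess_sum, z):
    """Right-hand side of (1.1); grad = (p_1(0), p_2(0)), hess_sum = sum_{j,k} p_{j,k}(0)."""
    lin = (z[0] * grad[0] + z[1] * grad[1]).real
    quad = 0.5 * max(abs(z[0]), abs(z[1]))**2 * (abs(grad[0] + grad[1])**2 - hess_sum.real)
    return np.exp(lin + quad)

for _ in range(2000):
    a = rng.uniform(0, 1, size=(2, 2))  # p = prod_k (1 + a[k,0] z1 + a[k,1] z2) is stable
    z = rng.normal(size=2) + 1j * rng.normal(size=2)
    p = (1 + a[0] @ z) * (1 + a[1] @ z)
    grad = a[0] + a[1]                  # (p_1(0), p_2(0))
    # p_{11}(0) + 2 p_{12}(0) + p_{22}(0) for this product:
    hess_sum = 2 * (a[0, 0] + a[0, 1]) * (a[1, 0] + a[1, 1])
    assert abs(p) <= rhs_1_1(grad, hess_sum, z) * (1 + 1e-9)
```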

It turns out that the two variable result can be used to prove an n-variable result.

Theorem 1.5

Suppose \(p\in \mathbb {C}[z_1,\dots ,z_n]\) is stable. If \(p(0)=1\) then

$$\begin{aligned} |p(z)| \le \exp ( \sqrt{2}|\nabla p(0)||z| +(|\nabla p(0)|^2+\Vert \text {Re}Hp(0)\Vert )|z|^2) \end{aligned}$$

where Hp is the Hessian of p and \(\Vert \cdot \Vert \) denotes operator norm.

We also get an inequality more comparable to Theorem 1.2:

$$\begin{aligned}&|p(z)| \le \sqrt{e}\exp (C \Vert z\Vert _{\infty }^2)\\&C = 2\left( \sum _{j=1}^{n} |a(e_j)|\right) ^2 +2 \sum _{j,k=1}^{n} |\text {Re}[a(e_j+e_k)]|. \end{aligned}$$

To close the introduction we discuss what happens when \(p(0)=0\). In one variable there is only a minor issue because we can factor \(p(z) = z^{k}q(z)= p_kz^k + p_{k+1}z^{k+1} + p_{k+2}z^{k+2}+\dots \) and get a bound depending on \(p_k\ne 0 ,p_{k+1},p_{k+2}\):

$$\begin{aligned} |p(z)| \le |p_k| \exp \left[ k (|z|-1) + \text {Re}\left( \frac{p_{k+1}}{p_{k}}z\right) + \frac{1}{2}|z|^2\left( \left| \frac{p_{k+1}}{p_{k}}\right| ^2 - 2\text {Re}\left( \frac{p_{k+2}}{p_k} \right) \right) \right] \end{aligned}$$

for stable \(p \in \mathbb {C}[z]\) with a zero of order k at 0. We used the inequality \(\log |z|^k \le k(|z|-1)\).
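This bound too can be spot-checked numerically (an illustration only, assuming numpy; parameters are our choices): take \(p(z) = z^k \prod _j (1+\alpha _j z)\) with \(\text {Im}\alpha _j \le 0\), so that \(p_k = 1\), \(p_{k+1}\) is the sum of the \(\alpha _j\), and \(p_{k+2}\) is their second elementary symmetric function.

```python
import numpy as np

rng = np.random.default_rng(2)

k, d = 3, 4
for _ in range(2000):
    alpha = rng.normal(size=d) - 1j * np.abs(rng.normal(size=d))
    e1 = alpha.sum()                       # p_{k+1}/p_k
    e2 = (e1**2 - (alpha**2).sum()) / 2    # p_{k+2}/p_k
    z = rng.normal() + 1j * rng.normal()
    p = z**k * np.prod(1 + alpha * z)      # stable with a zero of order k at 0
    rhs = np.exp(k * (abs(z) - 1) + (e1 * z).real
                 + 0.5 * abs(z)**2 * (abs(e1)**2 - 2 * e2.real))
    assert abs(p) <= rhs * (1 + 1e-9)
```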

In several variables the case \(p(0)=0\) is a little more delicate. Borcea–Brändén covered this case as follows. Given \(p(z) =\sum _{\alpha } a(\alpha )z^{\alpha } \in \mathbb {C}[z_1,\dots ,z_n]\) let \(\text {supp}(p) = \{\alpha \in \mathbb {N}^n: a(\alpha ) \ne 0\}\) and let \(\mathcal {M}(p)\) denote the set of minimal elements of \(\text {supp}(p)\) with respect to the partial order \(\le \) on \(\mathbb {N}^n\). Also, for fixed \(\mathcal {M} \subset \mathbb {N}^n\) let

$$\begin{aligned} \mathcal {M}_2 = \{\alpha + \beta : \alpha \in \mathcal {M}, \beta \in \mathbb {N}^n, |\beta | \le 2\} \end{aligned}$$

Theorem 1.6

(Borcea–Brändén Theorem 6.6 [1]) Let \(\mathcal {M} \subset \mathbb {N}^n\) be a finite nonempty set and \(p(z) = \sum _{\alpha } a(\alpha ) z^{\alpha } \in \mathbb {C}[z_1,\dots , z_n]\) be stable with \(\mathcal {M}(p)= \mathcal {M}\). Then, there are constants B and C depending only on the coefficients \(a(\alpha )\) with \(\alpha \in \mathcal {M}_2\) such that

$$\begin{aligned} |p(z)| \le B\exp (C\Vert z\Vert _{\infty }^2). \end{aligned}$$

Moreover, B and C can be chosen so that they depend continuously on the aforementioned set of coefficients.

With our approach we are able to get a more explicit estimate in two and several variables. Set \(\vec {1} = (1,\dots , 1) \in \mathbb {C}^n\).

Theorem 1.7

Let \(p\in \mathbb {C}[z_1,z_2]\) be stable and assume p vanishes to order r at 0. Write out the homogeneous expansion of p:

$$\begin{aligned} p(z) = \sum _{j=r}^{d} P_j(z) \end{aligned}$$

where \(P_j\) is homogeneous of degree j. Then,

$$\begin{aligned} |p(z)| \le |P_r(\vec {1})|e^{-r/2}\exp \left[ \text {Re}\left( \sum _{j=1}^{2} c_j z_j\right) +B \Vert z\Vert _{\infty }^2 \right] \end{aligned}$$

where

$$\begin{aligned}&c_j = \frac{1}{P_{r}(\vec {1})} \left[ \frac{\partial P_r}{\partial z_j}(\vec {1})\left( 1-\frac{P_{r+1}(\vec {1})}{P_r(\vec {1})}\right) + \frac{\partial P_{r+1}}{\partial z_j}(\vec {1})\right] \\&B=\frac{1}{2}\left( \left| \frac{P_{r+1}(\vec {1})}{P_{r}(\vec {1})}\right| ^2 -2\text {Re}\left( \frac{P_{r+2}(\vec {1})}{P_r(\vec {1})}\right) + r\right) . \end{aligned}$$

This is proved in Sect. 6.

Finally, we present a multivariable Szász inequality for the case \(p(0)=0\).

Theorem 1.8

Suppose \(p\in \mathbb {C}[z_1,\dots ,z_n]\) is stable and vanishes to order r at 0. If we write out the homogeneous expansion of p

$$\begin{aligned} p(z) = \sum _{j=r}^{d} P_j(z) \end{aligned}$$

then

$$\begin{aligned} |p(z)| \le \Vert z\Vert _{\infty }^{r} |P_r(\vec {1})| \exp (C_0+ C_1 \Vert z\Vert _{\infty } + C_2 \Vert z\Vert _{\infty }^2) \end{aligned}$$

where \(C_0, C_1,C_2\) are constants depending on \(r, P_r(\vec {1}), P_{r+1}(\vec {1}), P_{r+2}(\vec {1}), \nabla P_{r}(\vec {1}), \nabla P_{r+1}(\vec {1})\).

The constants \(C_0,C_1,C_2\) along with the proof of this theorem are explicitly given in Sect. 8.

2 One Variable Inequality

In this section we prove Theorem 1.3, whose main inequality is due to de Branges [2].

Lemma 2.1

Suppose \(\alpha _1,\dots , \alpha _d\in \mathbb {C}\) with \(\text {Im}\alpha _j \le 0\). Then,

$$\begin{aligned} \sum _{j=1}^{d} |\alpha _j|^2 \le |\sum _{j=1}^{d} \alpha _j|^2 - 2 \text {Re}\sum _{j<k} \alpha _j \alpha _k. \end{aligned}$$

Equality holds if and only if \(\alpha _j \in \mathbb {R}\) for all but at most one j.

Proof

Note

$$\begin{aligned} |\sum _{j=1}^{d} \alpha _j|^2 = \sum _{j=1}^{d} |\alpha _j|^2 + 2 \text {Re}\sum _{j<k} \alpha _j\overline{\alpha _k} \end{aligned}$$

So, our inequality reduces to showing the following is non-negative:

$$\begin{aligned} 2\text {Re}\sum _{j<k} (\alpha _j \overline{\alpha _k} - \alpha _j \alpha _k) = 2\text {Re}\sum _{j<k} \alpha _j(-2i)\text {Im}\alpha _k = 4\sum _{j<k} \text {Im}\alpha _j \text {Im}\alpha _k. \end{aligned}$$

The last quantity is evidently non-negative and equals zero exactly when \(\text {Im}\alpha _j \text {Im}\alpha _k = 0\) for all \(j\ne k\), which means \(\text {Im}\alpha _j = 0\) for all j or there is one j such that \(\text {Im}\alpha _j \ne 0\) while \(\text {Im}\alpha _k=0\) for all \(k\ne j\). \(\square \)
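The equality condition is easy to observe numerically (an illustration only, assuming numpy; the sample points are our choices): with at most one non-real \(\alpha _j\) the two sides of Lemma 2.1 agree, while moving every \(\alpha _j\) off the real axis produces strict inequality.

```python
import numpy as np

alpha = np.array([1.5, -0.7, 2.0, 0.3 - 2.2j])   # Im <= 0; only one non-real entry
s = alpha.sum()
e2 = (s**2 - np.sum(alpha**2)) / 2               # sum_{j<k} alpha_j alpha_k
print(np.sum(np.abs(alpha)**2), abs(s)**2 - 2 * e2.real)     # equal: equality case

alpha2 = alpha - 0.5j                            # now every entry is non-real
s2 = alpha2.sum()
e22 = (s2**2 - np.sum(alpha2**2)) / 2
print(np.sum(np.abs(alpha2)**2), abs(s2)**2 - 2 * e22.real)  # strict inequality
```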

Szász uses the inequality \(|(1+z)e^{-z}|\le e^{|z|^2}\) instead of the stronger inequality:

Lemma 2.2

For \(z \in \mathbb {C}\), \(z\ne -1\)

$$\begin{aligned} \log |1+z|\le \text {Re}z + \frac{1}{2}|z|^2. \end{aligned}$$

Proof

Since \(\log (1+x) \le x\) we have

$$\begin{aligned} \log |1+z|&= \frac{1}{2}\log |1+z|^2 \\&= \frac{1}{2}\log (1+2\text {Re}z + |z|^2)\\&\le \frac{1}{2}(2\text {Re}z + |z|^2). \end{aligned}$$

\(\square \)

Proof

(Proof of Theorem 1.3) Write \(p(z) = \prod _{j=1}^{d}(1+\alpha _j z)\) where \(\text {Im}\alpha _j \le 0\). Note \(\sum _{j} \alpha _j = p_1\) and \(\sum _{j<k} \alpha _j \alpha _k = p_2\). By Lemmas 2.2 and 2.1

$$\begin{aligned} \log |p(z)|&\le \sum _{j} (\text {Re}(\alpha _j z) + \frac{1}{2}|\alpha _j|^2|z|^2) \\&= \text {Re}(p_1 z) + \frac{1}{2}(\sum _{j} |\alpha _j|^2) |z|^2 \\&\le \text {Re}(p_1 z) + \frac{1}{2}(|p_1|^2 - 2\text {Re}p_2)|z|^2. \end{aligned}$$

Regarding sharpness define \(\gamma = (c_1^2-2c_2)/2 >0\). Choose n large enough that \(d_n = \gamma - c_1^2/(2n)\ge 0.\) Then, the polynomial

$$\begin{aligned} p_n(z) = \left( 1+\frac{c_1z}{n}\right) ^n\left( 1-\frac{d_n z^2}{n}\right) ^n \end{aligned}$$

is stable, belongs to \(\mathbb {R}[z]\), and has the correct normalizations. Since \(p_n(z) \rightarrow \exp {(c_1z-\gamma z^2)}\) locally uniformly, we have

$$\begin{aligned} \lim _{n\rightarrow \infty } |p_n(iy)| = \exp (\gamma y^2) \end{aligned}$$

which is exactly what was claimed. \(\square \)
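The convergence in the sharpness construction can be watched numerically (an illustration only, assuming numpy; the choices \(c_1 = 1\), \(c_2 = -1\), \(y = 2\) are ours):

```python
import numpy as np

c1, c2, y = 1.0, -1.0, 2.0
gamma = (c1**2 - 2 * c2) / 2                  # = 1.5 > 0
for n in [10, 100, 1000, 10000]:
    dn = gamma - c1**2 / (2 * n)
    # p_n(iy); note (iy)^2 = -y^2, so the second factor is (1 + dn*y^2/n)^n
    pn = (1 + 1j * c1 * y / n)**n * (1 + dn * y**2 / n)**n
    print(n, abs(pn))
print("limit", np.exp(gamma * y**2))          # e^{gamma y^2}
```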

It is worth pointing out that Szász proves

$$\begin{aligned} \sum _j |\alpha _j|^2 \le 2|\sum _j \alpha _j|^2+|\sum _j \alpha _j^2| \end{aligned}$$

and then converts \(\sum _j \alpha _j^2 = p_1^2-2p_2\) to get the estimate

$$\begin{aligned} \sum _j |\alpha _j|^2 \le 3|p_1|^2+2|p_2|. \end{aligned}$$
(2.1)

By sidestepping the former inequality and estimating \(\sum _j |\alpha _j|^2\) directly in terms of polynomial coefficients we get a better bound. The inequality (2.1) is used in [1] to prove multivariable Szász inequalities. So, using Lemma 2.1 in their proof would already improve Theorem 1.2.
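For random samples the three quantities can be compared directly (an illustration only, assuming numpy): the exact value \(\sum _j |\alpha _j|^2\) sits below the Lemma 2.1 bound, which in turn never exceeds (2.1), since \(-2\text {Re}\,p_2 \le 2|p_2|\) and \(|p_1|^2 \le 3|p_1|^2\).

```python
import numpy as np

rng = np.random.default_rng(3)

for _ in range(2000):
    alpha = rng.normal(size=6) - 1j * np.abs(rng.normal(size=6))
    p1 = alpha.sum()
    p2 = (p1**2 - (alpha**2).sum()) / 2
    exact = np.sum(np.abs(alpha)**2)
    via_lemma = abs(p1)**2 - 2 * p2.real        # Lemma 2.1
    via_szasz = 3 * abs(p1)**2 + 2 * abs(p2)    # inequality (2.1)
    assert exact <= via_lemma + 1e-9 and via_lemma <= via_szasz + 1e-9
```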

3 Two Variable Szász Inequality

Using determinantal formulas it is possible to establish a Szász inequality for two variable polynomials.

Definition 3.1

We shall say a stable polynomial \(p\in \mathbb {C}[z_1,\dots ,z_n]\) of total degree d has a determinantal representation if there exist \(d\times d\) matrices \(A,B_1,\dots , B_n\) and a constant \(c\in \mathbb {C}\) such that

  1. \(\text {Im}A := \frac{1}{2i}(A-A^*) \ge 0\)

  2. for all j, \(B_j \ge 0\)

  3. \(\sum _{j=1}^{n} B_j=I\).

  4. \(p(z) = c \det (A+\sum _{j=1}^{n} z_j B_j)\).
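To make Definition 3.1 concrete, here is a toy instance in two variables (an illustration only, assuming numpy; the polynomial is our choice): the stable polynomial \(p(z_1,z_2) = 1 - i(z_1+z_2) - z_1z_2 = -(z_1+i)(z_2+i)\), whose zero set misses \((\mathbb {C}_{+})^2\).

```python
import numpy as np

A = 1j * np.eye(2)          # Im A = I >= 0
B1 = np.diag([1.0, 0.0])    # B1, B2 >= 0 and B1 + B2 = I
B2 = np.diag([0.0, 1.0])
c = -1.0                    # chosen so that p(0) = c det(A) = 1

def p(z1, z2):
    return c * np.linalg.det(A + z1 * B1 + z2 * B2)

z1, z2 = 1 + 2j, -3 + 1j
print(p(0, 0))                                  # 1
print(p(z1, z2), -(z1 + 1j) * (z2 + 1j))        # the two values agree
```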

Theorem 1.4 will be broken into two theorems.

Theorem 3.2

If \(p\in \mathbb {C}[z_1,z_2]\) is stable, then p has a determinantal representation.

Several different determinantal representations are closely related to this one but not quite equivalent. There are determinantal representations for three variable hyperbolic polynomials, two variable real-zero polynomials, and two variable real-stable polynomials (see [3, 4, 9]). It turns out this formula can be derived from a determinantal representation for polynomials with no zeros on the bidisk \(\mathbb {D}^2 = \{(z_1,z_2): |z_1|,|z_2|<1\}\) from [3]. We show how to convert from the bidisk formula to Theorem 3.2 in Sect. 4. The method of conversion is a very slight modification of what is done in the paper [5]. We include the argument for the reader’s convenience; the essence of Sect. 4 is not new.

In Sect. 5 we prove the following Szász inequality for stable polynomials with determinantal representations.

Theorem 3.3

Suppose \(p\in \mathbb {C}[z_1,\dots , z_n]\) has a determinantal representation as above. If \(p(0)=1\), then

$$\begin{aligned} |p(z)|\le \exp (\text {Re}(\sum _{j=1}^{n} z_j p_j(0)) + \frac{1}{2}\Vert z\Vert _{\infty }^2 (|\sum _{j=1}^{n} p_j(0)|^2-\text {Re}(\sum _{j,k=1}^{n} p_{j,k}(0)))). \end{aligned}$$

Theorems 3.2 and 3.3 combine to give Theorem 1.4.

4 Determinantal Representations

In this section we prove Theorem 3.2. We begin by recalling the following.

Theorem 4.1

(See [3] Theorem 2.1) If \(q\in \mathbb {C}[z_1,z_2]\) has no zeros in \(\mathbb {D}^2\) and bidegree (n, m), then there exists a constant c and an \((n+m)\times (n+m)\) contractive matrix D such that

$$\begin{aligned} q(z) = c \det (I-D \Delta (z)) \end{aligned}$$
(4.1)

where \(\Delta (z) = z_1P_1+z_2P_2\) and

$$\begin{aligned} P_1 = \begin{pmatrix} I_n &{} 0 \\ 0 &{} O_m \end{pmatrix} \qquad P_2 = \begin{pmatrix} O_n &{} 0 \\ 0 &{} I_m \end{pmatrix}. \end{aligned}$$

Let \(p \in \mathbb {C}[z_1,z_2]\) be stable and have bidegree (n, m). Define \(\phi (\zeta ) = i\frac{1+\zeta }{1-\zeta }\) and

$$\begin{aligned} q(z_1,z_2) = p(\phi (z_1),\phi (z_2))\left( \frac{1-z_1}{2i}\right) ^n \left( \frac{1-z_2}{2i}\right) ^m. \end{aligned}$$

One can calculate that \(\phi ^{-1}(\zeta ) = \frac{\zeta - i}{\zeta + i}\) and

$$\begin{aligned} p(z_1,z_2) = q(\phi ^{-1}(z_1),\phi ^{-1}(z_2)) (z_1+i)^n(z_2+i)^m. \end{aligned}$$

Then, q has no zeros in \(\mathbb {D}^2\) and so the conclusion of Theorem 4.1 holds. Then, converting (4.1) to a formula for p yields

$$\begin{aligned} p(z)&= c\det ( (z_1+i)P_1 + (z_2+i)P_2 - D((z_1-i)P_1+(z_2-i)P_2))\\&= c \det ( (I-D)\Delta (z) +i (I+D)) \end{aligned}$$

Since D is a contraction, the eigenspace corresponding to eigenvalue 1 is reducing (if nontrivial). Thus, there exists a unitary U such that

$$\begin{aligned} D = U\begin{pmatrix} I &{} 0 \\ 0 &{} K \end{pmatrix} U^* \end{aligned}$$

where K is a contractive \(k\times k\) matrix for which 1 is not an eigenvalue. Here k is the codimension of the eigenspace of D corresponding to eigenvalue 1.

Then,

$$\begin{aligned} p(z)&= c \det \left( \begin{pmatrix} 0 &{} 0 \\ 0 &{} I-K \end{pmatrix} U^* \Delta (z) U + i\begin{pmatrix} 2I &{} 0 \\ 0 &{} I+K \end{pmatrix}\right) \\&= c \det (I-K) \det \left( \begin{pmatrix} 0 &{} 0 \\ 0 &{} I \end{pmatrix} U^*\Delta (z) U + \begin{pmatrix} 2iI &{} 0 \\ 0 &{} A \end{pmatrix}\right) \end{aligned}$$

where \(A = i(I+K)(I-K)^{-1}\).

Let \(B_j\) equal the bottom right \(k\times k\) block of \(U^*P_jU\). Then,

$$\begin{aligned} p(z) = c\det (I-K) \det \begin{pmatrix} 2iI &{} 0 \\ * &{} A + \sum _j z_jB_j \end{pmatrix} = c_0 \det (A+\sum _j z_j B_j) \end{aligned}$$

where \(c_0\) is a new constant (the \(*\) denotes a block we are unconcerned with). Since \(P_1+P_2 = I\), \(B_1+B_2= I\). Also note that \(p(t,t) = c_0 \det (A+tI)\) has degree k so that \(k \le \deg p\). On the other hand, the determinantal formula for p has total degree at most k, so that \(\deg p \le k\). Therefore the matrices in our formula have size matching the total degree of p. Finally,

$$\begin{aligned} \text {Im}A = (I-K)^{-1}(I-KK^*)(I-K^*)^{-1} \ge 0. \end{aligned}$$

This proves Theorem 3.2.

5 Szász Inequality for Determinantal Polynomials

In this section we prove Theorem 3.3.

Suppose \(p(z) = c \det (A+\sum _{j=1}^{n} z_j B_j)\) where \(\sum _{j=1}^{n} B_j = I\), \(B_j \ge 0\), \(\text {Im}A \ge 0\), and \(p(0)=1\). By the last normalization A is invertible with \(c\det A = 1\) so that

$$\begin{aligned} p(z) = \det \left( I + \sum _{j=1}^{n} z_j X_j\right) \end{aligned}$$

where \(X_j = B_j A^{-1}\). As with complex numbers, \(\text {Im}(A^{-1}) \le 0\); indeed, \(\text {Im}(A^{-1}) = -A^{-1}(\text {Im}A)(A^{-1})^*\).

It helps to make note of a few formulas for the derivatives of p. Recall that if A(t) is a differentiable matrix function then

$$\begin{aligned} \frac{d}{dt} \det A(t) = \text {tr}( A'(t) A(t)^{-1}) \det A(t) \end{aligned}$$

whenever A(t) is invertible. Here \(\text {tr}\) is the trace of a matrix.

Letting \(X(z) = I+\sum _{j=1}^n z_j X_j\), whenever \(p(z) \ne 0\) we have

$$\begin{aligned} p_j(z)&= \text {tr}(X_j(X(z))^{-1}) p(z)\\ p_{jk}(z)&= -\text {tr}(X_j (X(z))^{-1}X_k(X(z))^{-1})p(z) + \text {tr}(X_j(X(z))^{-1})\text {tr}(X_k(X(z))^{-1})p(z) \end{aligned}$$

so that

$$\begin{aligned} p_j(0) = \text {tr} X_j \quad p_{jk}(0)=-\text {tr}(X_jX_k) + \text {tr}(X_j)\text {tr}(X_k). \end{aligned}$$
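These formulas can be confirmed by finite differences (an illustration only, assuming numpy; the random matrices, step size, and indices are our choices):

```python
import numpy as np

rng = np.random.default_rng(4)

n, d, h = 3, 4, 1e-4
X = rng.normal(size=(n, d, d))

def p(z):
    return np.linalg.det(np.eye(d) + sum(z[j] * X[j] for j in range(n)))

j, k = 0, 2
ej, ek = np.eye(n)[j], np.eye(n)[k]
pj = (p(h * ej) - p(-h * ej)) / (2 * h)          # central difference for p_j(0)
pjk = (p(h * (ej + ek)) - p(h * (ej - ek))       # mixed difference for p_{jk}(0)
       - p(h * (ek - ej)) + p(-h * (ej + ek))) / (4 * h * h)
print(pj, np.trace(X[j]))                                             # agree
print(pjk, -np.trace(X[j] @ X[k]) + np.trace(X[j]) * np.trace(X[k]))  # agree
```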

For a positive definite matrix P we have \(\log P \le P-I\) simply because the same inequality holds for the eigenvalues of P. Therefore,

$$\begin{aligned} \log |p(z)|&= \frac{1}{2} \log \det (X(z)^*X(z)) \end{aligned}$$
(5.1)
$$\begin{aligned}&= (1/2) \text {tr}\log X(z)^*X(z) \end{aligned}$$
(5.2)
$$\begin{aligned}&\le (1/2) \text {tr}(X(z)^*X(z) - I) \\&= (1/2) \text {tr}\left( 2 \text {Re}\left( \sum _{j=1}^{n} z_j X_j\right) + \left( \sum _{j=1}^{n} z_j X_j\right) ^*\left( \sum _{k=1}^{n} z_k X_k\right) \right) . \end{aligned}$$
(5.3)

Now,

$$\begin{aligned} \text {tr}\left( \sum _{j}z_j X_j\right) ^*\left( \sum _{k} z_k X_k\right)&= \text {tr}\left[ (A^*)^{-1} \left( \sum _{j}z_j B_j\right) ^*\left( \sum _{k} z_k B_k\right) A^{-1}\right] \\&= \text {tr}\left[ \left( \sum _{j}z_j B_j\right) ^*\left( \sum _{k} z_k B_k\right) A^{-1}(A^*)^{-1}\right] . \end{aligned}$$

By Lemmas 5.1, 5.2 and 5.3 below we have

$$\begin{aligned} \text {tr}\left( \sum _{j}z_j X_j\right) ^*\left( \sum _{k} z_k X_k\right)&\le \Vert \sum _j z_j B_j\Vert ^{2} \text {tr}\left[ (A^*)^{-1} A^{-1}\right] \\&\le \Vert z\Vert _{\infty }^2\left[ |\text {tr}(A^{-1})|^2 - \text {Re}( (\text {tr}A^{-1})^2 - \text {tr}A^{-2})\right] . \end{aligned}$$

Finally, since \(\sum _j B_j = I\) we have

$$\begin{aligned} \sum _j p_j(0) = \text {tr}A^{-1} \end{aligned}$$

and

$$\begin{aligned} \sum _{j,k} p_{jk}(0) = (\text {tr}A^{-1})^2 - \text {tr}A^{-2}. \end{aligned}$$

Thus,

$$\begin{aligned} \log |p(z)| \le \text {Re}\left( \sum _{j=1}^{n} p_j(0)z_j\right) + \frac{1}{2} \Vert z\Vert _{\infty }^2 \left( |\sum _{j=1}^{n} p_j(0)|^2 - \text {Re}\left( \sum _{j,k=1}^{n} p_{jk}(0)\right) \right) \end{aligned}$$

which proves Theorem 3.3 modulo the following three lemmas.

Lemma 5.1

Let P, M be \(n\times n\) matrices. If \(P \ge 0\), then

$$\begin{aligned} |\text {tr}(MP)| \le \Vert M\Vert \text {tr}P \end{aligned}$$

Proof

Since \(P\ge 0\), we can decompose \(P = \sum _j v_jv_j^*\) where \(v_j \in \mathbb {C}^n\). Then,

$$\begin{aligned} |\text {tr}{MP}| \le \sum _j |\text {tr}{Mv_jv_j^*}| = \sum _j |\langle Mv_j, v_j \rangle | \le \Vert M\Vert \sum _j \Vert v_j\Vert ^2 = \Vert M\Vert \text {tr}{P}. \end{aligned}$$

\(\square \)

The following is a standard result (the finite dimensional version of the Naimark dilation theorem—see [7]).

Lemma 5.2

Suppose \(B_1,\dots , B_n\) are \(N\times N\) matrices. Assume for all j, \(B_j \ge 0\) and \(\sum _j B_j = I\). Then, there exist pairwise orthogonal projection matrices \(P_1,\dots , P_n\) of size \(m\times m\) where \(m= nN\) such that

$$\begin{aligned} B_j = (I_N,0,\dots , 0) P_j (I_N,0,\dots , 0)^t. \end{aligned}$$
(5.4)

In particular, for \(z=(z_1,\dots , z_n) \in \mathbb {C}^n\)

$$\begin{aligned} \Vert \sum _{j} z_j B_j\Vert \le \Vert z\Vert _{\infty }. \end{aligned}$$

Proof

We can factor \(B_j = A_j^* A_j\) with \(N\times N\) matrix \(A_j\). The \(nN\times N\) matrix

$$\begin{aligned} T = \begin{pmatrix} A_1 \\ \vdots \\ A_n \end{pmatrix} \end{aligned}$$

is an isometry from \(\mathbb {C}^N\) to \(\mathbb {C}^{nN}\) since \(T^*T = \sum _j B_j = I\). We can extend T to an \(m\times m\) unitary U. Let \(Q_j\) be the orthogonal projection onto the j-th block of \(\mathbb {C}^{m} = \mathbb {C}^N\oplus \cdots \oplus \mathbb {C}^N\). Set \(P_j = U^*Q_j U\). Then, (5.4) holds and

$$\begin{aligned} \Vert \sum _{j} z_j B_j\Vert \le \Vert \sum _j z_j P_j \Vert \le \Vert z\Vert _{\infty }. \end{aligned}$$

\(\square \)
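The proof translates directly into a computation (a sketch, assuming numpy; the random \(B_j\) and the SVD-based unitary completion are our choices, not prescribed by the paper). It builds the dilation, verifies (5.4), and verifies the norm bound:

```python
import numpy as np

rng = np.random.default_rng(5)

def psd_power(M, power):
    """Spectral power of a positive semi-definite matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.clip(w, 0, None)**power) @ V.conj().T

n, N = 3, 4
C = [(G := rng.normal(size=(N, N))) @ G.T for _ in range(n)]
S = psd_power(sum(C), -0.5)
B = [S @ Cj @ S for Cj in C]                      # B_j >= 0 with sum_j B_j = I

T = np.vstack([psd_power(Bj, 0.5) for Bj in B])   # isometry: T^* T = sum_j B_j = I
U_full, _, _ = np.linalg.svd(T)
U = np.hstack([T, U_full[:, N:]])                 # unitary completion of T
P = []
for j in range(n):
    Q = np.zeros((n * N, n * N))
    Q[j*N:(j+1)*N, j*N:(j+1)*N] = np.eye(N)       # projection onto the j-th block
    P.append(U.conj().T @ Q @ U)

# (5.4): B_j is the top-left N x N corner of P_j
print(max(np.abs(Pj[:N, :N] - Bj).max() for Pj, Bj in zip(P, B)))   # ~ 1e-15
z = rng.normal(size=n) + 1j * rng.normal(size=n)
print(np.linalg.norm(sum(zj * Bj for zj, Bj in zip(z, B)), 2) <= np.abs(z).max() + 1e-12)
```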

The following lemma is an adaptation of our one variable argument.

Lemma 5.3

If M is a square matrix with \(\text {Im}M \ge 0\), then

$$\begin{aligned} \text {tr}M^*M \le |\text {tr}M|^2 - \text {Re}( (\text {tr} M)^2 - \text {tr} M^2). \end{aligned}$$

Proof

Write \(A = \text {Re}M, B = \text {Im}M\). Then,

$$\begin{aligned} |\text {tr}M|^2&= (\text {tr}A)^2+(\text {tr}B)^2,\\ \text {tr}M^*M&= \text {tr}(A^2+B^2 + i(AB-BA)) = \text {tr}A^2 + \text {tr}B^2,\\ \text {Re}(\text {tr}M)^2&= (\text {tr}A)^2 - (\text {tr}B)^2,\\ \text {Re}[\text {tr}M^2]&= \text {Re}[\text {tr}( A^2-B^2 + i(AB+BA))] = \text {tr}A^2-\text {tr}B^2. \end{aligned}$$

Then,

$$\begin{aligned} |\text {tr}M|^2-\text {tr}M^*M - \text {Re}( (\text {tr} M)^2 - \text {tr} M^2) = 2((\text {tr}B)^2 - \text {tr}B^2). \end{aligned}$$

If B has eigenvalues \(\beta _j\ge 0\) then

$$\begin{aligned} (\text {tr}B)^2 - \text {tr}B^2 = \left( \sum \beta _j\right) ^2 - \sum \beta _j^2 = \sum _{j\ne k} \beta _j\beta _k \ge 0. \end{aligned}$$

This proves the claimed inequality. \(\square \)
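Lemma 5.3 can be stress-tested numerically as well (an illustration only, assuming numpy; the way we sample matrices with \(\text {Im}M \ge 0\) is our choice):

```python
import numpy as np

rng = np.random.default_rng(6)

d = 4
for _ in range(2000):
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    H = G + G.conj().T               # Hermitian: the Re M part
    W = rng.normal(size=(d, d))
    M = H + 1j * (W @ W.T)           # Im M = W W^T >= 0
    t = np.trace(M)
    lhs = np.trace(M.conj().T @ M).real
    rhs = abs(t)**2 - (t**2 - np.trace(M @ M)).real
    assert lhs <= rhs + 1e-8 * max(1.0, abs(rhs))
```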

6 Szász Inequality for Determinants with \(p(0)=0\)

As with Theorem 1.4 we will prove a Szász inequality for polynomials with determinantal representations and Theorem 1.7 will follow via Theorem 3.2.

Theorem 6.1

Suppose \(p\in \mathbb {C}[z_1,\dots ,z_n]\) has a determinantal representation as in Definition 3.1. Assume p vanishes to order r at 0. Write out the homogeneous expansion of p:

$$\begin{aligned} p(z) = \sum _{j=r}^{d} P_j(z) \end{aligned}$$

where \(P_j\) is homogeneous of degree j. Then,

$$\begin{aligned} |p(z)| \le |P_r(\vec {1})|e^{-r/2}\exp \left[ \text {Re}\left( \sum _{j=1}^{n} c_j z_j\right) +B \Vert z\Vert _{\infty }^2 \right] \end{aligned}$$

where

$$\begin{aligned} c_j&= \frac{1}{P_{r}(\vec {1})} \left[ \frac{\partial P_r}{\partial z_j}(\vec {1})\left( 1-\frac{P_{r+1}(\vec {1})}{P_r(\vec {1})}\right) + \frac{\partial P_{r+1}}{\partial z_j}(\vec {1})\right] \\ B&= \frac{1}{2}\left( \left| \frac{P_{r+1}(\vec {1})}{P_{r}(\vec {1})}\right| ^2 -2\text {Re}\left( \frac{P_{r+2}(\vec {1})}{P_r(\vec {1})}\right) + r\right) \end{aligned}$$

and \(\vec {1} = (1,\dots ,1) \in \mathbb {C}^n\).

Proof

Write \(p(z) = c\det (A+\sum _{j=1}^{n} z_j B_j)\) as in Definition 3.1. Since \(p(0)=0\), \(\det A = 0\). Since \(\text {Im}A \ge 0\), the eigenspace corresponding to eigenvalue 0 is reducing for A; see Lemma 6.1 below. Let s equal the dimension of the kernel of A.

So, after conjugating by a unitary we can rewrite p in the form

$$\begin{aligned} p(z) = c\det \left( \begin{pmatrix} 0 &{} 0 \\ 0 &{} C \end{pmatrix} + \sum _{j=1}^{n} z_j B_j \right) \end{aligned}$$

where C is an invertible \((d-s)\times (d-s)\) matrix with \(\text {Im}C\ge 0\) and the \(B_j\) are relabelled after conjugating (they satisfy all of the same properties as before). Define \(X_j = B_j \begin{pmatrix} I &{} 0 \\ 0 &{} C^{-1} \end{pmatrix}\) and \(J = \begin{pmatrix} O_s &{} 0 \\ 0 &{} I_{d-s} \end{pmatrix}\). Then,

$$\begin{aligned} p(z) = c_0 \det \left( J +\sum _{j=1}^{n} z_j X_j\right) \end{aligned}$$

where \(c_0 = c \det C\). Let \(X(z) = J +\sum _{j=1}^{n} z_j X_j\). Let \(X_s(z)\) be the top left \(s\times s\) block of X(z). Evaluating \(\det X\) starting with the top left \(s\times s\) block gives

$$\begin{aligned} \det X(z) = \det X_s(z) + \text { higher order terms}. \end{aligned}$$

Note that \(\det X_s(z)\) is homogeneous of degree s and \(X_s(\vec {1}) = I_s\) since \(\sum _j B_j=I\). This proves \(s=r\).

We can follow some of the argument in Sect. 5. Equations (5.1), (5.2), (5.3) hold when \(p(z)\ne 0\) but (5.3) rearranges into

$$\begin{aligned} \log |p(z)/c_0|&\le (1/2) \text {tr}\left( \begin{pmatrix} -I_r &{} 0 \\ 0 &{} 0 \end{pmatrix} + 2\text {Re}\left( \sum _{j=1}^{n} z_j X_j\right) + \left( \sum _j z_j X_j\right) ^*\left( \sum _k z_k X_k\right) \right) \\&\le -r/2 + \text {Re}\left( \sum _{j=1}^{n} z_j \text {tr}(X_j)\right) + (1/2)\text {tr}\left( \sum _{j}z_j X_j\right) ^*\left( \sum _k z_k X_k\right) . \end{aligned}$$

As before using Lemmas 5.1, 5.2, 5.3 we have

$$\begin{aligned} \text {tr}\left( \sum _{j}z_j X_j\right) ^*\left( \sum _k z_k X_k\right)&\le \Vert \sum _{j} z_j B_j\Vert ^2 \text {tr}\left( \begin{pmatrix} I_r &{} 0 \\ 0 &{} (C^*)^{-1}C^{-1} \end{pmatrix}\right) \\&\le \Vert z\Vert _{\infty }^2( r + |\text {tr}(C^{-1})|^2 - \text {Re}( (\text {tr}C^{-1})^2 - \text {tr}C^{-2})). \end{aligned}$$

Now we must relate these quantities to intrinsic quantities of p.

First, \(p(t\vec {1}) = c_0 \det \begin{pmatrix} tI_r &{} 0 \\ 0 &{} I+tC^{-1} \end{pmatrix} = c_0 t^r \det (I+tC^{-1}).\) So, using this formula and the homogeneous expansion of p we get

$$\begin{aligned} t^{-r}p(t\vec {1}) \big |_{t=0}&= c_0 = P_r(\vec {1}) \\ \frac{d}{dt}\big |_{t=0} t^{-r}p(t\vec {1})&= c_0 \text {tr}C^{-1} = P_{r+1}(\vec {1})\\ \frac{d^2}{dt^2}\big |_{t=0} t^{-r} p(t\vec {1})&= c_0( (\text {tr}C^{-1})^2 - \text {tr}C^{-2}) = 2P_{r+2}(\vec {1}). \end{aligned}$$

It is more difficult to calculate \(\text {tr}X_j\). Define

$$\begin{aligned} q(s,t) = p(s e_j + t\vec {1}) = c_0 \det \left( J + s X_j + t \begin{pmatrix} I &{} 0 \\ 0 &{} C^{-1} \end{pmatrix}\right) . \end{aligned}$$

Note

$$\begin{aligned} \frac{\partial q}{\partial s}(0,t) = \frac{\partial p}{\partial z_j}(t\vec {1}). \end{aligned}$$

Then,

$$\begin{aligned} \frac{\partial q}{\partial s}(0,t)&= \text {tr}\left( X_j \begin{pmatrix} t^{-1}I_r &{} 0 \\ 0 &{} (I+tC^{-1})^{-1} \end{pmatrix}\right) p(t\vec {1}) \\&= \text {tr}\left( X_j \begin{pmatrix} I_r &{} 0 \\ 0 &{} t(I+tC^{-1})^{-1} \end{pmatrix}\right) c_0 t^{r-1} \det (I+tC^{-1}). \end{aligned}$$

Thus, we can do the following computation with matrices and also with the homogeneous expansion of p:

$$\begin{aligned} t^{-r+1}\frac{\partial q}{\partial s}(0,t)\Big |_{t=0} = c_0 \text {tr}\left( X_j \begin{pmatrix} I_r &{} 0 \\ 0 &{} 0 \end{pmatrix}\right) = \frac{\partial P_r}{\partial z_j}(\vec {1}). \end{aligned}$$

Therefore,

$$\begin{aligned} \frac{\partial q}{\partial s}(0,t) - t^{-1}\frac{p(t\vec {1})}{P_r(\vec {1})} \frac{\partial P_r}{\partial z_j}(\vec {1}) = \text {tr}\left( X_j \begin{pmatrix} 0 &{} 0 \\ 0 &{} t(I+tC^{-1})^{-1} \end{pmatrix}\right) c_0 t^{r-1} \det (I+tC^{-1}) \end{aligned}$$

which implies

$$\begin{aligned} t^{-r}\left( \frac{\partial q}{\partial s}(0,t) - t^{-1}\frac{p(t\vec {1})}{P_r(\vec {1})} \frac{\partial P_r}{\partial z_j}(\vec {1})\right) \Big |_{t=0}&= \text {tr}\left( X_j \begin{pmatrix} 0 &{} 0 \\ 0 &{} I\end{pmatrix}\right) c_0 \\&= \frac{\partial P_{r+1}}{\partial z_j}(\vec {1}) - \frac{P_{r+1}(\vec {1})}{P_r(\vec {1})} \frac{\partial P_r}{\partial z_j}(\vec {1}) \end{aligned}$$

Therefore,

$$\begin{aligned} c_0 \text {tr}X_j = \left( 1-\frac{P_{r+1}(\vec {1})}{P_r(\vec {1})}\right) \frac{\partial P_r}{\partial z_j}(\vec {1}) + \frac{\partial P_{r+1}}{\partial z_j}(\vec {1}). \end{aligned}$$

If we reassemble we get

$$\begin{aligned} \log |p(z)/P_r(\vec {1})| \le -r/2 + \text {Re}(\sum _{j=1}^{n} c_j z_j ) + B \Vert z\Vert _\infty ^2 \end{aligned}$$

where

$$\begin{aligned} c_j&= \frac{1}{P_r(\vec {1})} \left[ \left( 1-\frac{P_{r+1}(\vec {1})}{P_r(\vec {1})}\right) \frac{\partial P_r}{\partial z_j}(\vec {1}) + \frac{\partial P_{r+1}}{\partial z_j}(\vec {1})\right] \\ B&= \frac{1}{2}\left( r+ \left| \frac{P_{r+1}(\vec {1})}{P_r(\vec {1})}\right| ^2 -2\text {Re}\left( \frac{P_{r+2}(\vec {1})}{P_r(\vec {1})}\right) \right) \end{aligned}$$

and this concludes the proof. \(\square \)

Lemma 6.1

Suppose A is a matrix with \(\text {Im}A \ge 0\). If 0 is an eigenvalue of A with eigenspace of dimension s, then there exists a unitary U such that

$$\begin{aligned} U^* A U = \begin{pmatrix} O_{s} &{} 0 \\ 0 &{} C\end{pmatrix} \end{aligned}$$

where \(\text {Im}C \ge 0\) and C is invertible.

Proof

If we write A using an orthonormal basis for its kernel followed by an orthonormal basis for the orthogonal complement of its kernel, we can put A into the form

$$\begin{aligned} \begin{pmatrix} O_s &{} B \\ 0 &{} C\end{pmatrix} \end{aligned}$$

by conjugating by a unitary. This matrix will still have positive semi-definite imaginary part:

$$\begin{aligned} \begin{pmatrix} O_s &{} \frac{1}{2i} B \\ -\frac{1}{2i} B^* &{} \text {Im}C\end{pmatrix} \ge 0. \end{aligned}$$

This implies \(B=0\). Note C is invertible because it cannot have 0 as an eigenvalue. \(\square \)

7 Multivariable Szász Inequalities

Using the two variable Szász inequality we can establish the multivariable inequality Theorem 1.5.

We will frequently use the component-wise partial order on \(\mathbb {R}^n\): \(x \ge y\) if and only if for all \(j=1,\dots , n\), \(x_j\ge y_j\).

Proof (Proof of Theorem 1.5)

For \(z=x+iy \in \mathbb {C}_+^{n}\) we have

$$\begin{aligned} |p(x_1+iy_1,\dots , x_n+iy_n)| \ge |p(x_1\pm i y_1, \dots , x_n\pm iy_n)| \end{aligned}$$

for all independent choices of ± by Lemma 7.1 below (more precisely, we can hold fixed any variables with a “\(+\)” and apply Lemma 7.1 to the remaining variables). So, it is enough to prove Theorem 1.5 for \(z \in \mathbb {C}_+^n\).

By Lemma 7.2 below, if \(0\le y\le \tilde{y}\) then

$$\begin{aligned} |p(x+iy)| \le |p(x+i\tilde{y})|. \end{aligned}$$

Define \(\tilde{y}\) as the vector with j-th component

$$\begin{aligned} \tilde{y}_j = \max (|x_j|, y_j). \end{aligned}$$

Then, \(\tilde{y} \ge \pm x\) and \(\tilde{y}\ge y\).

Define

$$\begin{aligned} q(w_1,w_2) = p(w_1(\tilde{y}+x)+w_2(\tilde{y}-x)) \end{aligned}$$

which has no zeros in \(\mathbb {C}_+^2\) and \(q(0)=1\). We will now apply Theorem 1.4 using all of the following computations.

$$\begin{aligned}&q\left( \frac{i+1}{2}, \frac{i-1}{2}\right) = p(x+i\tilde{y})\\&q_1(w) = \sum _j p_j(w_1(\tilde{y}+x)+w_2(\tilde{y}-x))(\tilde{y}_j+x_j)\\&q_2(w) = \sum _j p_j(w_1(\tilde{y}+x)+w_2(\tilde{y}-x))(\tilde{y}_j-x_j)\\&q_{11}(0) = \sum _{j,k} p_{jk}(0)(\tilde{y}_j+x_j)(\tilde{y}_k+x_k)\\&q_{12}(0) = \sum _{j,k} p_{jk}(0)(\tilde{y}_j+x_j)(\tilde{y}_k-x_k)\\&q_{22}(0) = \sum _{j,k} p_{jk}(0)(\tilde{y}_j-x_j)(\tilde{y}_k-x_k)\\&q_1(0)(i+1)/2 + q_2(0)(i-1)/2 = \nabla p(0)\cdot (x+i\tilde{y})\\&q_1(0)+q_2(0) = 2\nabla p(0)\cdot \tilde{y}\\&q_{11}(0)+2q_{12}(0)+q_{22}(0) = 4\sum _{j,k}p_{jk}(0)\tilde{y}_j\tilde{y}_k. \end{aligned}$$

Thus, by Theorem 1.4

$$\begin{aligned} \log \left| q\left( \frac{i+1}{2},\frac{i-1}{2}\right) \right|&\le \text {Re}(\nabla p(0)\cdot (x+i\tilde{y})) +\frac{1}{4}\left( |2\nabla p(0)\cdot \tilde{y}|^2 -4\text {Re}\left( \sum _{j,k} p_{jk}(0)\tilde{y}_j\tilde{y}_k\right) \right) \\&\le \sqrt{2}|\nabla p(0)||z| +(|\nabla p(0)|^2+\Vert \text {Re}Hp(0)\Vert )|z|^2 \end{aligned}$$
(7.1)

where we have used \(|x+i\tilde{y}| \le \sqrt{2}|z|\) and \(|\tilde{y}|\le |z|\). \(\square \)
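A numerical check of Theorem 1.5 (an illustration only, assuming numpy; the test family is our choice) can use products \(\prod _k (1 + a_k\cdot z)\) with entrywise non-negative vectors \(a_k\): such products are stable, and \(\nabla p(0)\) and \(Hp(0)\) are explicit.

```python
import numpy as np

rng = np.random.default_rng(7)

n, m = 3, 4
for _ in range(1000):
    a = rng.uniform(0, 1, size=(m, n))   # p(z) = prod_k (1 + a[k] . z) is stable
    z = rng.normal(size=n) + 1j * rng.normal(size=n)
    p = np.prod([1 + a[k] @ z for k in range(m)])
    grad = a.sum(axis=0)                 # nabla p(0)
    # Hessian at 0 of the product (real here, so it equals Re Hp(0)):
    H = np.outer(grad, grad) - sum(np.outer(a[k], a[k]) for k in range(m))
    g, r = np.linalg.norm(grad), np.linalg.norm(z)
    rhs = np.sqrt(2) * g * r + (g**2 + np.linalg.norm(H, 2)) * r**2
    assert np.log(abs(p)) <= rhs + 1e-9
```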

Since Theorem 1.2 is an estimate on polydisks it is worth pointing out that (7.1) yields

$$\begin{aligned} \log |p(z)|&\le \Vert z\Vert _{\infty } \sqrt{2} \sum _j |p_j(0)| + \Vert z\Vert _{\infty }^2\left( \left( \sum _j |p_j(0)|\right) ^2 + \sum |\text {Re}[p_{jk}(0)]|\right) \\&\le \Vert z\Vert _{\infty } \sqrt{2} \sum _j |a(e_j)| + \Vert z\Vert _{\infty }^2\left( \left( \sum _j |a(e_j)|\right) ^2 +2 \sum |\text {Re}[a(e_j+e_k)]|\right) \\&\le \frac{1}{2} + \Vert z\Vert _{\infty }^2\left( 2\left( \sum _j |a(e_j)|\right) ^2 +2 \sum |\text {Re}[a(e_j+e_k)]|\right) \end{aligned}$$

where \(p = \sum a(\beta ) z^{\beta }\) and in the last line we used the inequality \(a\le (1+a^2)/2\). This gives

$$\begin{aligned}&|p(z)| \le \sqrt{e}\cdot \exp (C\Vert z\Vert _{\infty }^2)\\&C = 2\left( \sum _j |a(e_j)|\right) ^2 +2 \sum |\text {Re}[a(e_j+e_k)]|. \end{aligned}$$

The following is a standard result. See Lemma 2.8 of [1] for instance.

Lemma 7.1

If \(p\in \mathbb {C}[z_1,\dots , z_n]\) has no zeros in \(\mathbb {C}_+^n\) then for \(z=x+iy\in \mathbb {C}_{+}^n\)

$$\begin{aligned} |p(x+iy)| \ge |p(x-iy)|. \end{aligned}$$

Proof

The one variable polynomial \(q(\zeta ) = p(x+\zeta y)\) has no zeros in \(\mathbb {C}_+\). Then, q can be factored as a product of terms of the form \((1+\alpha \zeta )\) where \(\text {Im}\alpha \le 0\). We can then check directly that

$$\begin{aligned} |1+i \alpha |^2 = 1 - 2\text {Im}\alpha + |\alpha |^2 \ge 1 + 2\text {Im}\alpha + |\alpha |^2 = |1-i\alpha |^2 \end{aligned}$$

which implies \(|q(i)|\ge |q(-i)|\). \(\square \)

Lemma 7.2

If \(p\in \mathbb {C}[z_1,\dots , z_n]\) has no zeros in \(\mathbb {C}_{+}^n\) and if \(0\le y \le \tilde{y}\) then for any \(x\in \mathbb {R}^n\)

$$\begin{aligned} |p(x+iy)| \le |p(x+i\tilde{y})|. \end{aligned}$$

Proof

The one variable polynomial \(q(\zeta ) = p(x+iy+\zeta (\tilde{y}-y))\) has no zeros in \(\mathbb {C}_{+}\). Factors of q are of the form \((1+\alpha \zeta )\) with \(\text {Im}\alpha \le 0\). Since \(|1+i\alpha | \ge 1\) we have \(|q(0)| \le |q(i)|\). \(\square \)

We can get a slightly better bound on \(\mathbb {R}^n\) by modifying the argument of Theorem 1.5.

Theorem 7.1

Suppose \(p\in \mathbb {C}[z_1,\dots , z_n]\) is stable. If \(p(0)=1\) then for \(x\in \mathbb {R}^n\)

$$\begin{aligned} \log |p(x)| \le \text {Re}(\nabla p(0)\cdot x) + \frac{1}{2}(|\nabla p(0)|^2+ \Vert \text {Re}(Hp)(0)\Vert ) |x|^2 \end{aligned}$$

where Hp is the Hessian matrix of p.

Proof

We can write \(x \in \mathbb {R}^n\) as \(x= x_{+} - x_{-}\) where \((x_+)_j = {\left\{ \begin{array}{ll} x_j &{} \text { if } x_j \ge 0 \\ 0 &{} \text { if } x_j<0 \end{array}\right. }\). Define

$$\begin{aligned} P(z_1,z_2) = p(z_1 x_+ + z_2 x_{-}) \end{aligned}$$

which has no zeros in \(\mathbb {C}_{+}^2\) and \(P(0)=1\). Set \(S_+ = \{j: x_j \ge 0\}\), \(S_{-}=\{j:x_j<0\}\).

Note that

$$\begin{aligned}&P_1(z) = \sum _{j\in S_{+}} p_j(z_1x_+ + z_2 x_{-}) |x_j| \qquad P_2(z) = \sum _{j\in S_{-}} p_j(z_1x_+ + z_2 x_{-}) |x_j|\\&P_{11}(0)= \sum _{j,k\in S_{+}} p_{jk}(0)|x_j||x_k|, \quad P_{12}(0) = \sum _{j\in S_{+},k\in S_{-}} p_{jk}(0)|x_j||x_k|, \\&P_{22}(0) = \sum _{j,k\in S_{-}} p_{jk}(0) |x_j||x_k|. \end{aligned}$$

Now, since \(P(1,-1) = p(x)\) we have

$$\begin{aligned} \log |p(x)| \le \text {Re}(\nabla p(0) \cdot x) + (1/2)(|\nabla p(0)|^2|x|^2 - \text {Re}( \sum _{jk} p_{jk}(0) |x_j||x_k|)) \end{aligned}$$

by Theorem 1.4. \(\square \)

8 Multivariable Inequalities When \(p(0)=0\)

In this section we prove Theorem 1.8. Write the homogeneous expansion of p

$$\begin{aligned} p(z) = \sum _{j=r}^{d} P_j(z). \end{aligned}$$

Notice that \(P_r(z)\) is stable itself by Hurwitz’s theorem because

$$\begin{aligned} P_r(z) = \lim _{t\searrow 0} t^{-r} p(t z) \end{aligned}$$

exhibits \(P_r\) as a limit of polynomials with no zeros in \(\mathbb {C}_+^{n}\).

We can make the same reductions as in the previous section. We may assume \(z= x+iy \in \mathbb {C}_{+}^n\) by Lemma 7.1. Define \(m =\max \{|x_j|,y_j: 1\le j \le n\}\) and \(\tilde{y} = m\vec {1}\). Then, \(\tilde{y}\ge \pm x\), \(\tilde{y}\ge y\) and \(|p(z)| \le |p(x+i\tilde{y})|\). Define

$$\begin{aligned} q(w_1,w_2) = p(w_1 (\tilde{y} +x) + w_2(\tilde{y}-x)) \end{aligned}$$

which is stable and has homogeneous expansion

$$\begin{aligned} \sum _{j=r}^{d} Q_j(w) = \sum _{j=r}^{d} P_j(w_1 (\tilde{y} +x) + w_2(\tilde{y}-x)). \end{aligned}$$

All of the terms above are homogeneous of the correct degree but it is conceivable that the first term vanishes. Setting \(w_1=w_2=1\) we see the first term evaluates to \(P_r(2\tilde{y}) = (2m)^r P_r(\vec {1})\) which is non-zero.

The data we need for Theorem 1.7 is:

$$\begin{aligned} Q_{j}(\vec {1})&= (2m)^{j}P_{j}(\vec {1}) \\ \frac{\partial Q_j}{\partial w_1}(\vec {1})&= (2m)^{j-1}\sum _{k=1}^{n} \frac{\partial P_j}{\partial z_k}(\vec {1}) (m+x_k) \\ \frac{\partial Q_j}{\partial w_2}(\vec {1})&= (2m)^{j-1}\sum _{k=1}^{n} \frac{\partial P_j}{\partial z_k}(\vec {1}) (m-x_k) \end{aligned}$$

and so (omitting some details)

$$\begin{aligned} \left| q\left( \frac{i+1}{2},\frac{i-1}{2}\right) \right| \le (2m)^{r}|P_r(\vec {1})| e^{-r/2} \exp (\text {Re}( A ) + \frac{1}{2} B ) \end{aligned}$$

where

$$\begin{aligned} A = \frac{1}{P_r(\vec {1})} \left( \nabla P_r(\vec {1})\cdot (x+i\tilde{y})\left( \frac{1}{2m}- \frac{P_{r+1}(\vec {1})}{P_r(\vec {1})}\right) + \nabla P_{r+1}(\vec {1})\cdot (x+i\tilde{y})\right) \end{aligned}$$
$$\begin{aligned} B = \frac{1}{2}\left( (2m)^2 \left| \frac{P_{r+1}(\vec {1})}{P_r(\vec {1})}\right| ^2 - 2(2m)^2 \text {Re}\left( \frac{P_{r+2}(\vec {1})}{P_r(\vec {1})}\right) +r\right) . \end{aligned}$$

Note \(m\le \Vert z\Vert _{\infty }\) and \(\Vert x+i\tilde{y}\Vert _{\infty } \le \sqrt{2} m\). We can crudely estimate A:

$$\begin{aligned} |A|&\le \frac{1}{|P_r(\vec {1})|} \left( \Vert \nabla P_r(\vec {1})\Vert _{1}(\sqrt{2}m)\left( \frac{1}{2m} + \left| \frac{P_{r+1}(\vec {1})}{P_r(\vec {1})}\right| \right) + \Vert \nabla P_{r+1}(\vec {1})\Vert _{1} \sqrt{2}m\right) \\&\le \frac{\Vert \nabla P_r(\vec {1})\Vert _1}{\sqrt{2} |P_r(\vec {1})|} + \frac{\sqrt{2}}{|P_r(\vec {1})|}\left( \Vert \nabla P_r(\vec {1})\Vert _1 \left| \frac{P_{r+1}(\vec {1})}{P_r(\vec {1})}\right| + \Vert \nabla P_{r+1}(\vec {1})\Vert _{1}\right) \Vert z\Vert _{\infty } \end{aligned}$$

and

$$\begin{aligned} |B| \le 2\Vert z\Vert ^2_{\infty }\left( \left| \frac{P_{r+1}(\vec {1})}{P_r(\vec {1})}\right| ^2 - 2 \text {Re}\left( \frac{P_{r+2}(\vec {1})}{P_r(\vec {1})}\right) \right) + r/2. \end{aligned}$$

Here we use \(\Vert \cdot \Vert _1\) for the \(\ell ^1\) norm of a vector. Putting everything together,

$$\begin{aligned} |p(z)| \le \Vert z\Vert _{\infty }^{r} |P_r(\vec {1})| \exp (C_0+C_1\Vert z\Vert _{\infty } + C_2\Vert z\Vert _{\infty }^{2}) \end{aligned}$$

where

$$\begin{aligned} C_0 = r(\log (2)-1/4) + \frac{\Vert \nabla P_r(\vec {1})\Vert _1}{\sqrt{2} |P_r(\vec {1})|} \end{aligned}$$
$$\begin{aligned} C_1 = \frac{\sqrt{2}}{|P_r(\vec {1})|}\left( \Vert \nabla P_r(\vec {1})\Vert _1 \left| \frac{P_{r+1}(\vec {1})}{P_r(\vec {1})}\right| + \Vert \nabla P_{r+1}(\vec {1})\Vert _{1}\right) \end{aligned}$$
$$\begin{aligned} C_2 = \left( \left| \frac{P_{r+1}(\vec {1})}{P_r(\vec {1})}\right| ^2 - 2 \text {Re}\left( \frac{P_{r+2}(\vec {1})}{P_r(\vec {1})}\right) \right) . \end{aligned}$$