Abstract
We study a family of random Taylor series \(F(z) = \sum _{n\geqslant 0} a_n \zeta _n z^n\)
with radius of convergence almost surely 1, with independent, identically distributed standard complex Gaussian coefficients \((\zeta _n)\) and non-random, non-negative coefficients \((a_n)\); these Taylor series are distinguished by the invariance of their zero sets with respect to isometries of the unit disk. We find reasonably tight upper and lower bounds on the probability that F does not vanish in the disk \(\{|z|\leqslant r\}\) as \(r\uparrow 1\). Our bounds take different forms according to whether the coefficients \((a_n)\) grow, decay or remain of the same order. The results apply more generally to a class of Gaussian Taylor series whose coefficients display power-law behavior.
1 Introduction
Random analytic functions are a classical topic of study, attracting the attention of analysts and probabilists [8]. One of the most natural instances of such functions is provided by Gaussian analytic functions (GAFs, for short). The zero sets of Gaussian analytic functions are point processes exhibiting many interesting features [7, 12]. These features depend on the geometry of the domain of analyticity of the function and, sometimes, pique the interest of mathematical physicists, see, for instance, [1, 3,4,5, 15].
In this work we consider Gaussian analytic functions in the unit disk represented by a Gaussian Taylor series
$$\begin{aligned} F(z) = \sum _{n\geqslant 0} a_n \zeta _n z^n , \end{aligned}$$
where \((\zeta _n)\) is a sequence of independent, identically distributed standard complex normal random variables (i.e., the probability density of each \(\zeta _n\) is \(\frac{1}{\pi }e^{-|z|^2}\) with respect to Lebesgue measure on the complex plane), and \((a_n)\) is a non-random sequence of non-negative numbers satisfying \(\limsup _{n\rightarrow \infty } \root n \of {a_n} = 1\). Properties of zero sets of Gaussian Taylor series with infinite radius of convergence have been studied quite intensively in recent years, see, e.g., [7, 11, 14] and the references therein. When the radius of convergence is finite, the hyperbolic geometry often leads to some peculiarities and complications. In particular, this is the case in the study of the hole probability, that is, the probability of the event
$$\begin{aligned} \mathrm{Hole}(r) = \left\{ F(z) \ne 0 \ \text {for all } |z|\leqslant r \right\} , \end{aligned}$$
an important characteristic both from the point of view of analytic function theory and the theory of point processes. For arbitrary Gaussian Taylor series with radius of convergence one, understanding the asymptotic behaviour of the hole probability as \(r\uparrow 1\) seems difficult. Here, we focus on a natural family \(F_L\) of hyperbolic Gaussian analytic functions (hyperbolic GAFs, for short),
$$\begin{aligned} F_L(z) = \sum _{n\geqslant 0} \zeta _n \sqrt{\frac{\Gamma (n+L)}{n!\, \Gamma (L)}}\; z^n , \qquad L>0, \end{aligned}$$
distinguished by the invariance of the distribution of their zero set under Möbius transformations of the unit disk [7, Ch. 2], [19]. Note that the parameter L equals the mean number of zeros of \(F_L\) per unit hyperbolic area.
The main result of our work provides reasonably tight asymptotic bounds on the logarithm of the hole probability as r increases to 1. These bounds show a transition in the asymptotics of the hole probability according to whether \(0<L<1\), \(L=1\) or \(L>1\). Curiously, a transition taking place at a different point, \(L=\tfrac{1}{2}\), was observed previously in the study of the asymptotics of the variance of the number of zeroes of \(F_L\) in disks of increasing radii [2].
1.1 The main result
Theorem 1
Suppose that \(F_L\) is a hyperbolic GAF, and that \(r\uparrow 1\). Then,
-
(i) for \(0<L<1\),
$$\begin{aligned} \frac{1-L-o(1)}{2^{L+1}}\, \frac{1}{(1-r)^L}\, \log \frac{1}{1-r} &\leqslant - \log \mathbb {P}[\mathrm{Hole}(r)] \\ &\leqslant \frac{1-L+o(1)}{2^{L}}\, \frac{1}{(1-r)^L}\, \log \frac{1}{1-r}; \end{aligned}$$
-
(ii) for \(L=1\),
$$\begin{aligned} -\log \mathbb {P}[\mathrm{Hole}(r)] = \frac{\pi ^2+o(1)}{12}\, \frac{1}{1-r}; \end{aligned}$$
-
(iii) for \(L>1\),
$$\begin{aligned} -\log \mathbb {P}[\mathrm{Hole}(r)] = \frac{(L-1)^2+o(1)}{4}\, \frac{1}{1-r}\, \log ^2 \frac{1}{1-r}. \end{aligned}$$
The case \(L=1\) in this theorem is due to Peres and Virág. We’ll briefly discuss their work in Sect. 1.2.2. The rest is apparently new.
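For intuition (this is an illustration, not part of the proof), the hole probability can be estimated by Monte Carlo: truncate the series defining \(F_L\) at degree N, sample the Gaussian coefficients, and check whether the truncated polynomial has a root in \(\{|z|\leqslant r\}\). The sketch below assumes the standard coefficients \(a_n^2 = \Gamma (n+L)/(n!\,\Gamma (L))\) of the hyperbolic GAF; the function name and parameters are ours.

```python
import math
import numpy as np

def estimate_hole_probability(L, r, trials=500, N=40, seed=1):
    """Monte Carlo estimate of P[Hole(r)] for the degree-N truncation of F_L."""
    rng = np.random.default_rng(seed)
    # a_n = sqrt(Gamma(n+L) / (n! Gamma(L))), via log-Gamma for numerical stability
    a = np.array([math.exp(0.5 * (math.lgamma(n + L) - math.lgamma(n + 1) - math.lgamma(L)))
                  for n in range(N + 1)])
    holes = 0
    for _ in range(trials):
        # standard complex Gaussians: density pi^{-1} exp(-|z|^2)
        zeta = (rng.standard_normal(N + 1) + 1j * rng.standard_normal(N + 1)) / math.sqrt(2)
        coeffs = a * zeta
        roots = np.roots(coeffs[::-1])  # np.roots expects the leading coefficient first
        if np.all(np.abs(roots) > r):
            holes += 1
    return holes / trials
```

For r bounded away from 1 the truncation is harmless, since the neglected terms are of size \(O(r^N)\) up to polynomial factors; as \(r\uparrow 1\), both N and the number of trials must grow quickly, which is exactly why the asymptotics of Theorem 1 are nontrivial.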
1.2 Previous work
1.2.1 Gaussian Taylor series with an infinite radius of convergence
For an arbitrary Gaussian Taylor series with an infinite radius of convergence, the logarithmic asymptotics of the hole probability were obtained in [13]. The main result [13, Theorem 1] says that when \(r\rightarrow \infty \) outside an exceptional set of finite logarithmic length,
where
In this generality, the appearance of an exceptional set of values of r is unavoidable due to possible irregularities in the behaviour of the coefficients \((a_n)\) (see [14, Section 17]).
For a Gaussian Taylor series with a finite radius of convergence the asymptotic rate of decay of the hole probability has been described only in several rather special cases.
1.2.2 The determinantal case: \(L=1\)
Peres and Virág [16] (see also [7, Section 5.1]) discovered that the zero set of the Gaussian Taylor series
(that corresponds to \(L=1\) in (2)) is a determinantal point process [16, Theorem 1] and, therefore, many of its characteristics can be explicitly computed. In particular, they found that [16, Corollary 3(i)]
$$\begin{aligned} \mathbb {P}[\mathrm{Hole}(r)] = \exp \left( -\frac{\pi ^2+o(1)}{12}\, \frac{1}{1-r} \right) , \quad r\uparrow 1. \end{aligned}$$
For \(L\ne 1\), the zero set of \(F_L\) is not a determinantal point process [7, p. 83], so other techniques are required.
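For orientation, recall from [16] that the number of zeros of this series in \(\{|z|\leqslant r\}\) is distributed as a sum of independent Bernoulli random variables with parameters \(r^{2k}\), \(k\geqslant 1\), so that \(\mathbb {P}[\mathrm{Hole}(r)] = \prod _{k\geqslant 1}(1-r^{2k})\). The following sketch (an illustration only) checks numerically that the logarithm of this product matches the asymptotics \(\frac{\pi ^2}{12}\,\frac{1}{1-r}\) appearing in Theorem 1(ii).

```python
import math

def neg_log_hole_determinantal(r, K=100000):
    """-log P[Hole(r)] = -sum_{k>=1} log(1 - r^{2k}), the Peres-Virag product formula."""
    return -sum(math.log1p(-r ** (2 * k)) for k in range(1, K + 1))

r = 0.999
exact = neg_log_hole_determinantal(r)       # truncation error is negligible for this K
predicted = math.pi ** 2 / (12 * (1 - r))   # leading-order asymptotics as r -> 1
```

The agreement improves as \(r\uparrow 1\); the relative discrepancy at \(r=0.999\) is already below one percent, of order \((1-r)\log \frac{1}{1-r}\) relative to the main term.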
1.2.3 Fast growing coefficients
Skaskiv and Kuryliak [18] showed that the technique developed in [13] can be applied to Gaussian Taylor series in the disk that grow very fast near the boundary. Put
$$\begin{aligned} \sigma _F(r)^2 = \mathbb {E}\left[ |F(re^{\mathrm{i}\theta })|^2 \right] = \sum _{n\geqslant 0} a_n^2 r^{2n} . \end{aligned}$$
They proved [18, Theorem 4] that if
and the sequence \((a_n)\) is logarithmically concave, then the same logarithmic asymptotics (3) hold when \(r\rightarrow 1\) outside a small exceptional subset of \([\tfrac{1}{2}, 1)\). Note that in our case, \(\sigma _{F_L}(r) = (1-r^2)^{-L/2}\).
1.2.4 The case when F is bounded on \(\mathbb {D}\)
At the opposite extreme, we may consider the case when, almost surely, the random series F is bounded on \(\mathbb {D}\). Then \(\mathbb {P}[\text {Hole}(r)]\) has a positive limit as \(r\rightarrow 1\). Indeed, put \(F=F(0)+G\). If F is bounded on \(\mathbb {D}\), then G is bounded on \(\mathbb {D}\) as well. Take M so that \(\mathbb {P}[\sup _{\mathbb {D}}|G| \leqslant M]\ge \frac{1}{2}\). Then
$$\begin{aligned} \left\{ |F(0)|>M \right\} \cap \left\{ \sup _{\mathbb {D}}|G| \leqslant M \right\} \subset \bigcap _{0<r<1} \mathrm{Hole}(r) . \end{aligned}$$
Since F(0) and G are independent, we get
$$\begin{aligned} \mathbb {P}[\mathrm{Hole}(r)] \geqslant \tfrac{1}{2}\, \mathbb {P}\left[ |F(0)| > M \right] > 0 . \end{aligned}$$
In view of this observation, we recall a classical result that goes back to Paley and Zygmund and gives a sufficient condition for continuity (and hence, boundedness) of F on \(\bar{\mathbb {D}}\). Introduce the sequence
$$\begin{aligned} s_j = \Bigl( \sum _{2^j \leqslant n < 2^{j+1}} a_n^2 \Bigr)^{1/2} , \quad j\geqslant 0 . \end{aligned}$$
If the sequence \((s_j)\) decreases and \(\sum _j s_j < \infty \), then, almost surely, F is a continuous function on \(\bar{\mathbb {D}}\) [8, Section 7.1]. On the other hand, under a mild additional regularity condition, the divergence \(\sum _j s_j = +\infty \) guarantees that, almost surely, F is unbounded in \(\mathbb {D}\) [8, Section 8.4].
1.3 Several comments on Theorem 1
1.3.1
The proof of Theorem 1 combines the tools introduced in [13, 20] with several new ingredients. Unfortunately, in the case \(0< L <1\), our techniques are insufficient for finding the main term in the logarithmic asymptotics of \(\mathbb {P}[\text {Hole}(r)]\). Also we cannot completely recover the aforementioned result of Peres and Virág. On the other hand, our arguments do not use the hyperbolic invariance of the zero distribution of \(F_L\). We make use of the fact that
$$\begin{aligned} a_n^2 = \frac{\Gamma (n+L)}{\Gamma (n+1)\, \Gamma (L)} = \frac{1+o(1)}{\Gamma (L)}\, n^{L-1} \quad \text {as } n\rightarrow \infty , \end{aligned}$$
where \(\Gamma \) is Euler’s Gamma function, and our techniques apply more generally to a class of Gaussian Taylor series whose coefficients \((a_n)\) display power-law behavior. We will return to this in the concluding Sect. 8.
1.3.2
In the case \(L>1\), using (7), it is easy to see that
That is, in this case the hyperbolic geometry becomes less relevant and the main term in the logarithmic asymptotic of the hole probability is governed by the same function as in the planar case, discussed in Sect. 1.2.1.
1.3.3
For \(0<L<1\), the gap between the upper and lower bounds in Theorem 1 remains unsettled. It would be also interesting to accurately explore the behaviour of the logarithm of the hole probability near the transition points \(L=0\) and \(L=1\); for instance, to consider the cases \(a_n = n^{-1/2}\ell (n)\) and \(a_n = \ell (n)\), where \(\ell \) is a slowly varying function.
Of course, the ultimate goal would be to treat the case of arbitrary Gaussian Taylor series with finite radii of convergence.
1.4 Notation
-
Gaussian analytic functions will be called GAFs. The Gaussian analytic functions \(F_L\) defined in (2) will be called hyperbolic GAFs.
-
We suppress all of the dependence on L unless it is absolutely necessary. In particular, from here on,
-
\(F=F_L\),
-
c and C denote positive constants that may depend only on L. The values of these constants are irrelevant for our purposes and may vary from line to line. By A, \(\alpha \), \(\alpha _i\), etc. we denote positive constants whose values we keep fixed throughout the proof in which they appear.
-
-
The notation \(X\simeq Y\) means that \(cX \leqslant Y \leqslant CX\).
-
We set
$$\begin{aligned} \delta = 1-r\quad \text {and}\quad r_0=1-\kappa \delta \quad \text {with }1<\kappa \leqslant 2. \end{aligned}$$
Everywhere, except Sect. 6, we set \(\kappa =2\), that is, \(r_0=1-2\delta \). The value of r is assumed to be sufficiently close to 1. Correspondingly, the value of \(\delta \) is assumed to be sufficiently small.
-
The variance \(\sigma _F^2 = \sigma _F(r)^2\) is defined by
$$\begin{aligned} \sigma _F^2 = \mathbb {E}\left[ |F(re^{\mathrm{i}\theta })|^2\right] = (1-r^2)^{-L} = (1+o(1)) (2\delta )^{-L}, \quad \mathrm{as}\ \delta \rightarrow 0. \end{aligned}$$
Usually, we suppress the dependence on r and write \(\sigma _F\) instead of \(\sigma _F(r)\). Notice that \(\log \frac{1}{\delta } \simeq \log \sigma _F\).
-
An event E depending on r will be called negligible if \(-\log \mathbb {P}[\mathrm{Hole}(r)] = o\left( -\log \mathbb {P}[E] \right) \) as \(r\rightarrow 1\). Notice that this may depend on the value of L.
-
If f takes real values, then we define \(f_+ = \max \{0,f\}\) and \(f_- = \max \{0,-f\}\).
-
\(e(t)=e^{2\pi \mathrm{i}t}\).
-
[x] denotes the integer part of x.
-
\(n \equiv k\, (N)\) means that \(n \equiv k\) modulo N.
-
\(\mathbb {D}\) denotes the open unit disk, \(\mathbb {T}\) denotes the unit circle.
-
The planar Lebesgue measure is denoted by m, and the (normalized) Lebesgue measure on \(\mathbb {T}\) is denoted by \(\mu \).
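As a sanity check on the variance identity above, \(\sum _{n\geqslant 0} a_n^2 r^{2n} = (1-r^2)^{-L}\) is simply the binomial series for \(a_n^2 = \Gamma (n+L)/(n!\,\Gamma (L))\). A short numerical verification (illustrative only, with arbitrarily chosen L and r):

```python
import math

L, r = 0.7, 0.9
# a_n^2 = Gamma(n+L) / (n! Gamma(L)); the binomial series gives sum a_n^2 x^n = (1-x)^{-L}
a2 = [math.exp(math.lgamma(n + L) - math.lgamma(n + 1) - math.lgamma(L)) for n in range(2000)]
sigma2 = sum(c * r ** (2 * n) for n, c in enumerate(a2))
```

The truncation at 2000 terms is far beyond what is needed here, since the terms decay geometrically like \(r^{2n}\).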
2 Idea of the proof
We give a brief description of the proof of Theorem 1 in the cases \(0<L<1\) and \(L>1\). In the case \(L=1\) our arguments suffice to estimate the logarithm of the hole probability up to a constant, as discussed in Sect. 8, and we briefly sketch the argument for this case as well. Our proof of the upper bound on the hole probability in the case \(0<L<1\) is more involved than in the other cases.
2.1 Upper bounds on the hole probability when \(L>1\) and \(L=1\)
Our starting point for proving upper bounds is the mean-value property. On the hole event,
$$\begin{aligned} \log |F(0)| = \int _{\mathbb {T}} \log |F(rt)| \, \mathrm{d}\mu (t) . \end{aligned}$$
Off an event of negligible probability, the integral may be discretized, yielding the inequality
in the slightly smaller radius \(r_0<r\), for a random \(\tau \in \mathbb {T}\), taken out of a small set of possibilities, suitable N and \(\omega = e(1/N)\) (see Lemmas 8 and 9). Thus it suffices to bound from above, for each fixed \(\tau \in \mathbb {T}\), the probability that (8) holds. We may further simplify by fixing a threshold \(T>0\), noting that \(\mathbb {P}\left[ |F(0)|\ge T\right] = \exp (-T^2)\) and writing
$$\begin{aligned} \mathbb {P}\left[ \text {(8) holds} \right] \leqslant \mathbb {P}\left[ \text {(8) holds and } |F(0)|\leqslant T \right] + \mathbb {P}\left[ |F(0)|\geqslant T \right] . \end{aligned}$$
We focus on the first summand, setting T sufficiently large so that the second summand is negligible. Taking \(0<\theta <2\) and applying Chebyshev’s inequality,
$$\begin{aligned} \mathbb {P}\left[ \text {(8) holds and } |F(0)|\leqslant T \right] \leqslant T^{\theta N}\, \mathbb {E}\left[ \prod _{j=1}^{N} \left| F(\tau \omega ^j r_0) \right| ^{-\theta } \right] . \end{aligned}$$
It remains to estimate the expectation in the last expression. Our bounds for it make use of the fact that the covariance matrix \(\Sigma \) of the Gaussian vector \((F(\tau \omega ^j r_0))\), \(1\leqslant j\leqslant N\), has a circulant structure, allowing it to be explicitly diagonalized. In particular, its eigenvalues are (see Lemma 10)
$$\begin{aligned} \lambda _m = N \sum _{n \equiv m\, (N)} a_n^2\, r_0^{2n} , \quad m = 0, \ldots , N-1 . \end{aligned}$$
This is used together with the following, somewhat rough, bound (see Lemma 15)
where \(\Lambda \) is the maximal eigenvalue of \(\Sigma \) and \(\Gamma \) is Euler’s Gamma-function.
2.1.1 The case \(L>1\)
We set the parameters to be \(T = \delta ^{-\frac{1}{2}}\exp \left( -\sqrt{\log \frac{1}{\delta }}\right) \), so that the factor \(e^{-T^2}\) in (9) is indeed negligible, \(N = \left[ \,\frac{L-1}{2\delta } \log \frac{1}{\delta }\,\right] \) and \(\theta = 2 - \left( \log \frac{1}{\delta }\right) ^{-1}\). With this choice, the dominant term in the combination of the bounds (10) and (12) is the factor \(T^{\theta N} / \det \Sigma \). Its logarithmic asymptotics are calculated using (11) and yield the required upper bound.
We mention that choosing \(\theta \) close to its maximal value of 2 corresponds, in some sense, to the fact that the event (8) constitutes a very large deviation for the random sum \(\sum _{j=1}^N \log | F(\tau \omega ^j r_0)|\).
2.1.2 The case \(L=1\)
The same approach may be applied with the parameters \(T = b\,\delta ^{-1/2}\), for a small parameter \(b>0\), \(N = \left[ \,\delta ^{-1}\,\right] \) and \(\theta = 1\). For variety, Sect. 8.1 presents a slightly different alternative.
In the same sense as before, the choice \(\theta =1\) indicates that we are now considering a large deviation event.
2.2 Upper bound on the hole probability when \(0<L<1\)
Our goal here is to show that the intersection of the hole event with the event \(\left\{ |F(0)|\leqslant A\sigma _F\sqrt{\log \tfrac{1}{(1-r)\sigma _F^2}}\,\right\} \) is negligible when \(A^2 < \frac{1}{2}\). The upper bound then follows from the estimate
The starting point is again the inequality (8) in which we choose the parameter
with \(\alpha \) eventually chosen close to 1. However, a more refined analysis is required here. First, we separate the constant term from the function F, writing
$$\begin{aligned} F = F(0) + G, \qquad G(z) = \sum _{n\geqslant 1} a_n \zeta _n z^n . \end{aligned}$$
Second, to have better control of the Gaussian vector
$$\begin{aligned} \left( G(\tau \omega ^j r_0) \right) , \quad 1\leqslant j\leqslant N, \end{aligned}$$
we couple G with two independent GAFs \(G_1\) and \(G_2\) so that \(G = G_1 + G_2\) almost surely, the vector \((G_1(\tau \omega ^j r_0))\), \(1\leqslant j\leqslant N\), is composed of independent, identically distributed variables, and \(G_2\) is a polynomial of degree N with relatively small variance. In essence, we are treating the variables in (13) as independent, identically distributed up to a small error captured by \(G_2\).
We proceed by conditioning on F(0) and \(G_2\), using the convenient notation
$$\begin{aligned} \mathbb {P}^{F(0), G_2}\left[ \, \cdot \, \right] = \mathbb {P}\left[ \, \cdot \mid F(0), G_2 \right] . \end{aligned}$$
Applying Chebyshev’s inequality we may use the above independence to exchange expectation and product, writing, for \(0<\theta <2\),
Thus we need to estimate expectations of the form
in which F(0) and \(G_2\) are given. Two bounds are used to this end. Given a standard complex Gaussian random variable \(\zeta \), real \(t>0\) and \(0<\theta \leqslant 1\) we have the simple estimate,
and, for \(0\leqslant \theta \leqslant \frac{1}{2}\), the more refined
see Lemmas 11 and 14. The most important feature of the second bound is that it is less than 1 (though only slightly) when \(\theta \) is very close to 0, satisfying \(\theta \leqslant c(1+t^2)^{-1}e^{-t^2}\).
Combining (14) and (15) with the simple estimate (16) (with \(\theta =1\)) already suffices to prove that the intersection of the hole event with the event \(\{|F(0)|\leqslant a\sigma _F\}\) is negligible when a is a sufficiently small constant. However, on the event
the error term \(G_2\) becomes more relevant and we consider two cases according to its magnitude. Taking small constants \(\varepsilon ,\alpha _0>0\) and \(\eta = \delta ^{\alpha _0}\) we let
2.2.1 The case \(|J|>(1-2\eta )N\)
Here, after discarding (a priori, before conditioning on \(G_2\)) a negligible event to handle the rotation \(\tau \), the terms in (15) satisfy \(\left| 1 + \frac{G_2(\omega ^j \tau r_0 )}{F(0)}\right| \ge 1+\varepsilon \) for many values of j. This fact, together with the bound (17) (in simplified form, with right-hand side \(1 + C\theta ^2\)) and with \(\theta \) tending to 0 as a small power of \(\delta \), suffices to show that the probability in (14) is negligible.
2.2.2 The case \(|J|\leqslant (1-2\eta )N\)
In this case we change our starting point. By the mean-value inequality,
$$\begin{aligned} \log |F(0)| \leqslant \int _{\mathbb {T}} \log |F(rt)| \, \mathrm{d}\mu (t) . \end{aligned}$$
Off an event of negligible probability, the integral may again be discretized, yielding
in the slightly smaller radius \(r_0<r\), for a random \(\tau \), taken out of a small set of possibilities, see Lemma 8. As before, for each fixed \(\tau \), Chebyshev’s inequality and independence show that,
and we are left with the task of estimating terms of the form
After again discarding (a priori) a negligible event to handle the rotation \(\tau \), the terms in (19) satisfy \(\left| 1 + \frac{G_2(\omega ^j \tau r_0 )}{F(0)}\right| \leqslant 1+\varepsilon \) for many values of j. These terms are estimated by using (17) with \(t\leqslant (1+4\varepsilon )A\,\sqrt{\log \tfrac{1}{(1-r)\sigma _F^2}}\). Correspondingly we set \(\theta = c\eta ((1-r)\sigma _F^2)^{(1+10\varepsilon )A^2}\) and obtain that the probability in (18) satisfies
Recalling that \(\sigma _F^2 = (1-r^2)^{-L}\), \(\eta = \delta ^{\alpha _0}\) and \(N = [\delta ^{-\alpha }]\), and choosing \(\varepsilon \) and \(\alpha _0\) close to 0 and \(\alpha \) close to 1 shows that this probability is negligible provided that \(A^2 < \frac{1}{2}\).
In contrast to the cases \(L>1\) and \(L=1\), the fact that we take \(\theta \) tending to 0 can be viewed as saying that we are now considering a moderate deviation event.
2.3 Lower bounds on the hole probability
The proofs of our lower bounds on the hole probability are less involved than the proofs of the upper bounds and the reader is referred to the relevant sections for details. We mention here that in all cases we rely on the same basic strategy: Fix a threshold \(M>0\) and observe that
2.3.1 The case \(0<L<1\)
Here we take
To estimate the right-hand side of (20) we discretize the circle \(r\mathbb {T}\) into \(N = \left[ (1-r)^{-(1+\varepsilon )}\right] \) equally-spaced points. We then use Hargé’s version of the Gaussian correlation inequality to estimate, by bounding \(F'\), the probability that the maximum attained on the circle \(r\mathbb {T}\) is not much bigger than the maximum attained on these points and that the value F attains at each of the points is not too large.
2.3.2 The case \(L>1\)
Here we take \(M = \frac{1}{\sqrt{\delta }}\left( \log \frac{1}{\delta }\right) ^{\alpha }\) for some \(\frac{1}{2}<\alpha <1\). We also set \(N=\left[ \frac{2 L}{\delta }\log \frac{1}{\delta }\right] \). We prove that the event on the right-hand side of (20) becomes typical after conditioning that the first N coefficients in the Taylor series of F are suitably small, and estimate the probability of the conditioning event.
2.3.3 The case \(L=1\)
This case seems the most delicate of the lower bounds. We take \(M = B\sqrt{\frac{1}{1-r}}\) for a large constant B. To estimate the probability on the right-hand side of (20) we write the Taylor series of F as an infinite sum of polynomials of degree \(N = \left\lceil \frac{1}{1-r}\right\rceil \) and use an argument based on Bernstein’s inequality and Hargé’s version of the Gaussian correlation inequality.
3 Preliminaries
Here, we collect several lemmas, which will be used in the proof of Theorem 1.
3.1 GAFs
Lemma 1
([7], Lemma 2.4.4) Let g be a GAF on \(\mathbb {D}\), and let \(\sup _{\mathbb {D}} \mathbb {E}[|g|^2] \leqslant \sigma ^2\). Then, for every \(\lambda >0\),
Lemma 2
Let f be a GAF on \(\mathbb {D}\), and \(s \in (0,\delta )\). Put
Then, for every \(\lambda >0\),
In particular, for every \(\lambda >0\),
Proof of Lemma 2
Take an integer \(N \simeq \frac{1}{s}\) and consider the scaled functions
Since \(\max _{r\bar{\mathbb {D}}} |f| = \max _{r\mathbb {T}} |f|\), the first statement follows by applying Lemma 1 to each \(g_j\) and using the union bound. The second statement follows from the first by taking \(s=\tfrac{1}{2} \delta = \tfrac{1}{2} (1-r)\). \(\square \)
3.2 A priori bounds for hyperbolic GAFs
Lemma 3
Suppose that F is a hyperbolic GAF. Then, for \(p>1\),
Proof of Lemma 3
Since \(\sigma _F\left( 1-\tfrac{1}{4} \delta \right) \leqslant C \sigma _F\), this follows from Lemma 2. \(\square \)
Lemma 4
Suppose that F is a hyperbolic GAF. Then
Proof of Lemma 4
Suppose that \( \max _{(1-2\delta )\mathbb {D}} |F| \leqslant e^{-\log ^2 \sigma _F} \). Then, by Cauchy’s inequalities,
whence, for \(n>0\), using the fact that \(a_n \ge c n^{\frac{1}{2} (L-1)}\) (see (7)), we get
In the range \(n\simeq \tfrac{1}{\delta }\log ^2\sigma _F\), we get
provided that \(\delta \) is sufficiently small. The probability that (21) holds simultaneously for all such n, does not exceed
completing the proof. \(\square \)
Next, we define “the good event” \(\Omega _\mathtt{g} = \Omega _\mathtt{g} (r)\) by
and note that by Lemmas 3 and 4 the event \(\Omega _\mathtt{g}^c\) is negligible.
Lemma 5
Suppose that F is a hyperbolic GAF. If \(\gamma > 1\) then
Proof of Lemma 5
Suppose that we are on the event \( \mathrm{Hole}(r) \bigcap \Omega _\mathtt{g}(r) \). Since F does not vanish in \(r\mathbb {D}\), the function \(\log |F|\) is harmonic therein, and therefore,
Furthermore, on \(\Omega _\mathtt{g}\), let \(w\in (1-2\delta )\mathbb {T}\) be a point where \(|F(w)|\ge e^{-\log ^2\sigma _F}\). Then
whence,
Then,
proving the lemma.\(\square \)
3.3 Averaging \(\log |F|\) over roots of unity
We start with a polynomial version.
Lemma 6
Let S be a polynomial of degree \(n \ge 1\), let \(k\ge 4\) be an integer, and let \(\omega =e(1/k)\). Then there exists \(\tau \) with \(\tau ^{k^2 n}=1\) so that
Proof of Lemma 6
Assume that \(S(0)=1\) (otherwise, replace S by S / S(0)). Then \( S(z) = \prod _{\ell =1}^n (1-s_\ell z) \) and
where \( S_1(w) \mathop {=}\limits ^\mathrm{def} \prod _{\ell =1}^n (1-s_\ell ^k w) \) is a polynomial of degree n.
Let \(M=\max _{\mathbb {T}} |S_1|\). By the maximum principle, \(M \ge |S_1(0)|=1\). By Bernstein’s inequality, \(\max _{\mathbb {T}} |S_1'|\leqslant nM\). Now, let \(t_0=e^{\mathrm{i}\varphi _0}\in \mathbb {T}\) be a point where \(|S_1(t_0)|=M\). Then, for any \(t=e^{\mathrm{i}\varphi }\in \mathbb {T}\) with \(|\varphi -\varphi _0|< \tfrac{\varepsilon }{n}\) (with \(0\leqslant \varepsilon <1\)), we have
The roots of unity of order kn form a \(\frac{\pi }{k n}\)-net on \(\mathbb {T}\). Thus, there exists t such that \(t^{k n}=1\) and \(\log |S_1(t)| \ge - \tfrac{C}{k}\). Taking \(\tau \) so that \(\tau ^k=t\), we get
while \(\tau ^{k^2 n}=1\).\(\square \)
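The key algebraic identity in this proof is \(\prod _{j=0}^{k-1}(1-s_\ell \tau \omega ^j) = 1-(s_\ell \tau )^k\), which gives \(\prod _{j=0}^{k-1} S(\tau \omega ^j) = S_1(\tau ^k)\), so the average of \(\log |S|\) over the rotated roots of unity equals \(\tfrac{1}{k}\log |S_1(\tau ^k)|\). A quick numerical confirmation (illustrative only, with randomly chosen \(s_\ell \) and \(\tau \)):

```python
import numpy as np

rng = np.random.default_rng(2)
k, n = 8, 5
s = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # the parameters s_l of S
tau = np.exp(2j * np.pi * rng.random())                    # a point on the unit circle
omega = np.exp(2j * np.pi / k)                             # primitive k-th root of unity

def S(z):
    return np.prod(1 - s * z)        # S(z) = prod_l (1 - s_l z), so S(0) = 1

def S1(w):
    return np.prod(1 - s ** k * w)   # S_1(w) = prod_l (1 - s_l^k w)

avg_log = np.mean([np.log(abs(S(tau * omega ** j))) for j in range(k)])
target = np.log(abs(S1(tau ** k))) / k
```

The identity is exact, so the two quantities agree up to floating-point rounding.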
Lemma 7
Let f be an analytic function on \(\mathbb {D}\) such that \(M\mathop {=}\limits ^\mathrm{def}\sup _{\mathbb {D}} |f|<+\infty \). Let \(0<\rho <1\), and denote by q the Taylor polynomial of f around 0 of degree N. If
then \(\max _{\rho \bar{\mathbb {D}}} |f-q|<1\).
Proof of Lemma 7
Let
Then, by Cauchy’s inequalities, \(|c_n|\leqslant M\), whence
$$\begin{aligned} \max _{\rho \bar{\mathbb {D}}} |f-q| \leqslant \sum _{n>N} |c_n|\, \rho ^n \leqslant \frac{M \rho ^{N+1}}{1-\rho } < 1 . \end{aligned}$$
\(\square \)
Lemma 8
Let \(0< m < 1\), \(0< \rho < 1\), \(k\ge 4\) be an integer, and \(\omega = e(1/k)\). Suppose that F is an analytic function on \(\mathbb {D}\) with \(m\leqslant \inf _{\mathbb {D}}|F| \leqslant \sup _{\mathbb {D}} |F| \leqslant m^{-1}\) and that P is a polynomial with \(P(0) \ne 0\). There exist a positive integer
and \(\tau \in \mathbb {T}\) satisfying \(\tau ^{K}=1\) such that
Proof of Lemma 8
Let \(0< \varepsilon < m\) be a small parameter. Applying Lemma 7 to the function \(f=(\varepsilon F)^{-1}\), we get a polynomial Q with
such that
Then, assuming that \(\varepsilon <\frac{1}{2} m\), we get
Applying Lemma 6 to the polynomial \(S=P\cdot Q\) and taking into account (24), we see that there exists \(\tau \) so that \(\tau ^{k^2\deg S}=1\) and
It remains to let \(\varepsilon = \tfrac{1}{2} m k^{-2} \) and \( K = k^2\deg S = k^2 (\deg P + \deg Q)\). \(\square \)
We will only need the full strength of this lemma in Sect. 4.3.3. Elsewhere, the following lemma will be sufficient.
Lemma 9
Let F be a hyperbolic GAF. Let
Let \(k\ge 4\) be an integer and \(\omega =e(1/k)\). Then
Proof of Lemma 9
Outside the negligible event \( \mathrm{Hole}(r){\setminus }\Omega _\mathtt{g}\) we have the bound
According to Lemma 5 applied with \(\gamma =1+\tfrac{1}{2} (\kappa -1)\), we have
Therefore, letting \(m= \exp \left[ -C (\kappa - 1)^{-1} \delta ^{-3} \right] \), we get
Applying Lemma 8 to the function \(F\left( (1-\gamma \delta ) z \right) \) with \(P \equiv 1\) and
we find that
with some
completing the proof. \(\square \)
Note that everywhere except for Sect. 6 we use this lemma with \(\kappa =2\).
3.4 The covariance matrix
We will constantly exploit the fact that the covariance matrix of the random variables \( F(\omega ^j r)\), \(\omega = e(1/N)\), \(1\leqslant j \leqslant N\), has a simple structure. This is valid for general Gaussian Taylor series, as the following lemma details.
Lemma 10
Let F be any Gaussian Taylor series of the form (1) with radius of convergence R and let \(z_j = re(j/N)\) for \(r<R\) and \(j=0, \ldots , N-1\). Consider the covariance matrix \(\Sigma = \Sigma (r, N)\) of the random variables \(F(z_0)\), ..., \(F(z_{N-1})\), that is,
Then, the eigenvalues of \(\Sigma \) are
$$\begin{aligned} \lambda _m = N \sum _{n \equiv m\, (N)} a_n^2\, r^{2n} , \quad m = 0, \ldots , N-1 , \end{aligned}$$
where \(n\equiv m\, (N)\) denotes that n is equivalent to m modulo N.
Proof of Lemma 10
Observe that
Define the \(N \times N\) matrix U by
and the vector \(\Upsilon \) by
Then U is a unitary matrix (it is the discrete Fourier transform matrix) and the components of \(\Upsilon \) are independent complex Gaussian random variables with
Finally, note that
Hence, the covariance matrices of \( ( F(z_0)\), ..., \(F(z_{N-1}))\) and of \(\Upsilon \) have the same set of eigenvalues. \(\square \)
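Lemma 10 is easy to check numerically in a small case: assemble \(\Sigma \) from the covariances and compare its spectrum with the explicit eigenvalues \(\lambda _m = N\sum _{n\equiv m\,(N)} a_n^2 r^{2n}\). The sketch below (illustrative only) truncates the series and takes \(a_n\equiv 1\), the case \(L=1\):

```python
import numpy as np

def lemma10_check(a2, r, N, n_terms=300):
    """Compare the spectrum of Sigma(r, N) with the explicit eigenvalues of Lemma 10."""
    n = np.arange(n_terms)
    w = a2[:n_terms] * r ** (2.0 * n)          # the weights a_n^2 r^{2n}
    p = np.arange(N)
    diff = p[:, None] - p[None, :]
    # Sigma_{pq} = E[F(z_p) conj(F(z_q))] = sum_n a_n^2 r^{2n} e(n(p - q)/N)
    Sigma = (w[:, None, None] * np.exp(2j * np.pi * n[:, None, None] * diff / N)).sum(axis=0)
    explicit = np.array([N * w[m::N].sum() for m in range(N)])
    return np.linalg.eigvalsh(Sigma), np.sort(explicit)

eigs, explicit = lemma10_check(np.ones(300), 0.7, 5)
```

Both computations use the same truncation, so the spectra agree to machine precision; `eigvalsh` returns the eigenvalues of the Hermitian matrix \(\Sigma \) in ascending order.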
3.5 Negative moments of Gaussian random variables
In the following lemmas \(\zeta \) is a standard complex Gaussian random variable. Recall that, for \(\theta <2\),
$$\begin{aligned} \mathbb {E}\left[ |\zeta |^{-\theta } \right] = \Gamma \left( 1 - \tfrac{\theta }{2} \right) , \end{aligned}$$
where \(\Gamma \) is Euler’s Gamma-function.
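Since \(|\zeta |^2\) is a standard exponential random variable, the identity \(\mathbb {E}[|\zeta |^{-\theta }] = \Gamma (1-\theta /2)\) can also be checked by simulation (illustrative only; note that \(|\zeta |^{-\theta }\) has finite variance only for \(\theta <1\), so we test small values of \(\theta \)):

```python
import math
import numpy as np

rng = np.random.default_rng(0)
e = rng.exponential(size=10 ** 6)    # |zeta|^2 ~ Exp(1) for a standard complex Gaussian zeta

def mc_negative_moment(theta):
    """Monte Carlo estimate of E|zeta|^{-theta} = E[X^{-theta/2}] for X ~ Exp(1)."""
    return float(np.mean(e ** (-theta / 2)))
```

For \(\theta \) close to 2 the simulation degrades badly, reflecting the blow-up of \(\Gamma (1-\theta /2)\) that drives the estimates below.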
Lemma 11
There exists a numerical constant \(C>0\) such that, for every \(t>0\) and \(0<\theta \leqslant 1\),
Proof of Lemma 11
Write
where m is the planar Lebesgue measure, and use the Hardy–Littlewood rearrangement inequality, noting that the symmetric decreasing rearrangement of \( \left| w+\tfrac{z}{t} \right| ^{-\theta } \) is \( \left| \frac{z}{t}\right| ^{-\theta } \), and that \(e^{-|z|^2}\) is already symmetric and decreasing.\(\square \)
Lemma 12
For each \(\tau >0\) there exists a \(C(\tau )>0\) such that for every integer \(n\ge 1\),
Proof of Lemma 12
By the symmetry of \(\zeta \) we may assume that \(t > 0\). Put \(X=\left| 1+\tfrac{\zeta }{t} \right| \) and write
We have
In addition,
Finally, for \(t\ge \tau \),
completing the proof. \(\square \)
Lemma 13
For \(t>0\),
Proof of Lemma 13
We have
Integrating by parts once again, we see that
whence,
completing the proof.\(\square \)
Lemma 14
There exist numerical constants \(c, C > 0\) such that, for every \(t>0\) and \(0\leqslant \theta \leqslant \tfrac{1}{2}\),
Proof of Lemma 14
Lemma 11 yields that there exist \(c, \tau >0\) such that, for every \(0<t\leqslant \tau \) and \(0\leqslant \theta \leqslant \tfrac{1}{2}\), we have
Thus, we need to consider the case \(t\ge \tau \). Write
Using Lemma 12, we get
Applying Lemma 13, we get the result. \(\square \)
Lemma 15
Suppose that \((\eta _j)_{1\leqslant j \leqslant N}\) are complex Gaussian random variables with covariance matrix \(\Sigma \). Then, for \(0\leqslant \theta < 2\),
where \(\Lambda \) is the maximal eigenvalue of \(\Sigma \).
Proof of Lemma 15
We have
proving the lemma. \(\square \)
4 Upper bound on the hole probability for \(0< L < 1\)
Now we are ready to prove the upper bound part of Theorem 1 in the case \(0< L < 1\). Throughout the proof, we use the parameters
where
In many instances, we assume that r is sufficiently close to 1, that is, that \(\delta \) is sufficiently small.
It will be convenient to separate the constant term from the function F, letting
$$\begin{aligned} F = F(0) + G, \qquad G(z) = \sum _{n\geqslant 1} a_n \zeta _n z^n . \end{aligned}$$
4.1 Splitting the function G
We define two independent GAFs \(G_1\) and \(G_2\) so that \(G=G_1+G_2\) and
-
\(G_1(\omega ^j r_0)\), \(1\leqslant j \leqslant N\), are independent, identically distributed Gaussian random variables with variance close to \(\sigma _F^2(r_0)\);
-
\(G_2\) is a polynomial of degree \(N-1\) and, for \(|z|=r_0\), the variance of \(G_2(z)\) is much smaller than \(\sigma _F^2(r_0)\).
Let \((\zeta _n')_{n\ge 1}\) and \((\zeta _n'')_{1\leqslant n \leqslant N-1}\) be two independent sequences of independent standard complex Gaussian random variables, and let
$$\begin{aligned} G_1(z) = \sum _{n\geqslant 1} b_n \zeta _n' z^n , \qquad G_2(z) = \sum _{n=1}^{N-1} d_n \zeta _n'' z^n , \end{aligned}$$
where the non-negative coefficients \(b_n\) are defined by
Since the sequence \((a_n)\) does not increase, the expression in the brackets is positive. For the same reason, for \(1\leqslant n \leqslant N-1\), we have
whence, for these values of n we have \(b_n\leqslant a_n\). The coefficients \(d_n \ge 0\) are defined by \( a_n^2 = b_n^2 + d_n^2\). This definition implies that the random Gaussian functions G and \( G_1 + G_2 \) have the same distribution, and we couple G, \(G_1\) and \(G_2\) (that is, we couple the sequences \((\zeta _n)\), \((\zeta _n')\) and \((\zeta _n'')\)) so that \(G = G_1+G_2\) almost surely.
Lemma 16
For any \(\tau \in \mathbb {T}\), the random variables \(( G_1(\omega ^j \tau r_0) )\), \(1\leqslant j \leqslant N\), are independent, identically distributed \(\mathcal N_{\mathbb {C}}(0, \sigma _{G_1}^2(r_0))\) with
$$\begin{aligned} \sigma _{G_1}^2(r_0) = N \sum _{k\geqslant 1} a_{kN}^2\, r_0^{2kN} . \end{aligned}$$
In addition, we have
Proof of Lemma 16
Applying Lemma 10 to the function \(G_1\) evaluated at \(\tau z\) we see that the eigenvalues of the covariance matrix of the random variables \((G_1(\omega ^j\tau r_0))_{1\leqslant j \leqslant N}\) are all equal to \(N\, \sum _{k\ge 1} a_{kN}^2 r_0^{2kN}\). Hence, the covariance matrix of these Gaussian random variables is diagonal, that is, they are independent, and the relation (26) holds.
To prove estimate (27), observe that
and since the sequence \((a_n)\) does not increase, we have
Recalling (see (7)) that
we see that
proving the lemma. \(\square \)
We henceforth condition on F(0) and \(G_2\) (that is, on \(\zeta _0\) and \((\zeta _n'')_{1\leqslant n \leqslant N-1}\)), and write
$$\begin{aligned} \mathbb {P}^{F(0), G_2}\left[ \, \cdot \, \right] = \mathbb {P}\left[ \, \cdot \mid F(0), G_2 \right] , \qquad \mathbb {E}^{F(0), G_2}\left[ \, \cdot \, \right] = \mathbb {E}\left[ \, \cdot \mid F(0), G_2 \right] . \end{aligned}$$
In the following section, we consider the case when \(|F(0)|/\sigma _F\) is sufficiently small.
4.2 \(|F(0)|\leqslant a \sigma _F\)
We show that the intersection of the hole event with the event \(\{|F(0)|\leqslant a \sigma _F\}\) is negligible for a sufficiently small.
By Lemma 9 (with \(\kappa =2\) and \(k=N\)), it suffices to estimate the probability
with some fixed \(\tau \in \mathbb {T}\). By Chebyshev’s inequality, the right-hand side is bounded by
where in the last equality we used the independence of \(\left( G_1(\omega ^j \tau r_0) \right) _{1\leqslant j \leqslant N}\) proven in Lemma 16, and the independence of \(G_1\) and F(0), \(G_2\). Then, applying Lemma 11 with \(t=|F(0)|/\sigma _{G_1}(r_0)\), we get using (27), for each \(1\leqslant j \leqslant N\),
provided that the constant a is sufficiently small.
Thus,
Since \(\alpha >L\), this case gives a negligible contribution to the probability of the hole event.
4.3 \(a\sigma _F \leqslant |F(0)| \leqslant A\sigma _F\sqrt{\log \tfrac{1}{(1-r)\sigma _F^2}}\)
Our strategy is to make the constant A as large as possible, while keeping the hole event negligible. Then, up to negligible terms, we will bound \(\mathbb {P}[\text {Hole}(r)]\) by
We fix a small positive parameter \(\varepsilon \), put
and introduce the event \(\left\{ |J|\leqslant (1-2\eta ) N \right\} \), where \(\eta =\delta ^{\alpha _0}\), \(\alpha _0\) is a sufficiently small positive constant, and |J| denotes the size of the set J. We will estimate separately the probabilities of the events
and
Given \(\tau \in \mathbb {T}\), put
Our first goal is to show that, outside a negligible event, the random sets \(J_\pm (\tau )\) are similar in size to the set J.
4.3.1 Controlling \(G_2\) at the points \(\left( \omega ^j\tau r_0 \right) _{1\leqslant j \leqslant N}\)
Lemma 17
Given \(\alpha _0>0\), there is an event \(\widetilde{\Omega }=\widetilde{\Omega }(r)\) with \(\mathbb {P}^{F(0)} \left[ \widetilde{\Omega } \right] \leqslant e^{-\delta ^{-(L+c)}}\) on \(\{|F(0)|\ge a\sigma _F\}\), for r sufficiently close to 1, such that, for any \(\tau \in \mathbb {T}\), on the complement \(\widetilde{\Omega }^c\) we have
with \(\eta =\delta ^{\alpha _0}\).
Proof of Lemma 17
We take small positive constants \(\alpha _1\) and \(\alpha _2\) satisfying
let \(M=[\delta ^{-\alpha _2}]\), write \(G_2~=~G_3 + G_4\) as follows
and define the events
First, we show that (under our running assumption \(|F(0)|\ge a\sigma _F\)) the events \(\Omega _3\) and \(\Omega _4\) are negligible. Then, outside these events, we estimate the functions \(G_3\) and \(G_4\) at the points \(\left( \omega ^j\tau r_0 \right) _{1\leqslant j \leqslant N}\). We may assume without loss of generality that \(|\arg (\tau )|\leqslant \pi /N\), as rotating \(\tau \) by \(2\pi k/N\) leaves \(|J_\pm (\tau )|\) unchanged. \(\square \)
Lemma 18
For r sufficiently close to 1, we have \(\mathbb {P}^{F(0)}\left[ \Omega _3 \right] \leqslant e^{-\delta ^{-(L+c)}}\) on \(\{|F(0)|\ge a\sigma _F\}\).
Proof of Lemma 18
Using the union bound, we get
\(\square \)
Lemma 19
For r sufficiently close to 1, we have \(\mathbb {P}^{F(0)}\left[ \Omega _4 \right] \leqslant e^{-\delta ^{-(L+c)}}\) on \(\{|F(0)|\ge a\sigma _F\}\).
Proof of Lemma 19
Put
Let \(\lambda >0\) satisfy \(\lambda d_n^2<1\) for \(M+1\leqslant n\leqslant N-1\). Then,
We take \(\lambda = 1/(2 a_M^2)\), recalling that \(a_n^2 = b_n^2 + d_n^2\) with \((a_n)\) non-increasing. Then,
Thus,
provided that
We use estimate (30) with \(t=\delta ^{2 \alpha _1}|F(0)|^2\). Note that \( \delta ^{2 \alpha _1}|F(0)|^2 \ge c\delta ^{-(L-2 \alpha _1)} \) and that
which satisfies (31) by (29). Finally, using again (29), we get
\(\square \)
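The exponential moment computed in the proof above rests on the identity \(\mathbb {E}[e^{s|\zeta |^2}] = (1-s)^{-1}\) for \(s<1\) and a standard complex Gaussian \(\zeta \) (equivalently, \(|\zeta |^2\) is a standard exponential variable). A minimal numerical sanity check, with the illustrative value \(s=0.3\):

```python
import math
import random

# For a standard complex Gaussian zeta, |zeta|^2 is a standard exponential
# variable, so E[exp(s * |zeta|^2)] = 1/(1 - s) for s < 1.  This is the
# identity behind the exponential Chebyshev bound used above.
random.seed(4)
n, s = 200_000, 0.3
est = sum(math.exp(s * random.expovariate(1.0)) for _ in range(n)) / n
assert abs(est - 1.0 / (1.0 - s)) < 0.02
```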
Lemma 20
Suppose that r is sufficiently close to 1. Then on the event \(\Omega _3^c\) we have
Proof of Lemma 20
For each \(1\leqslant j \leqslant N\), since \(0 \leqslant d_n \leqslant 1\), we have
provided that \(2 \alpha _2 + \alpha _1 < \alpha \), and that r is sufficiently close to 1. \(\square \)
Lemma 21
Suppose that r is sufficiently close to 1. Then, on the event \(\Omega _4^c\), for any \(\tau \in \mathbb {T}\), the cardinality of the set
does not exceed \(\eta N\).
Proof of Lemma 21
We have
and similarly,
Hence, on \(\Omega _4^c\), the cardinality of the set we are interested in does not exceed
provided that \(2 \alpha _1<\alpha _0\) and that r is sufficiently close to 1. \(\square \)
Now, Lemma 17 is a straightforward consequence of Lemmas 18, 19, 20, and 21. \(\square \)
4.3.2 \(|J|> (1-2\eta )N\), \(\eta =\delta ^{\alpha _0}\)
In this section we show that the intersection of the hole event with the event
is negligible. Taking into account the fact that \(J, J_-(\tau )\) and \(J_+(\tau )\) are measurable with respect to F(0) and \(G_2\), Lemma 9 (with \(\kappa =2\) and \(k=N\)) and Lemma 17 show that it suffices to estimate uniformly in \(\tau \in \mathbb {T}\), on the intersection of the events (32) and (28), the probability
Taking some positive \(\theta _1=\theta _1(r)\) tending to 0 as \(r\rightarrow 1\) (the function \(\theta _1(r)\) will be chosen later) and applying Chebyshev’s inequality, the last probability does not exceed
Once again, we used the independence of \(\left( G_1(\omega ^j \tau r_0) \right) _{1\leqslant j \leqslant N}\) and the independence of \(G_1\) from F(0) and \(G_2\).
For \(j\in J_-(\tau )\), we have
Hence, by Lemma 14, for such j we obtain
provided that r is so close to 1 that \( \theta _1 (r)\) is much smaller than \(\varepsilon \) (recall that \(\varepsilon \) is small but fixed).
For \(j\notin J_-(\tau )\), using Lemma 11 (with \(t = \frac{|F(0)|}{\sigma _G} \leqslant CA\,\sqrt{\log \tfrac{1}{(1-r)\sigma _F^2}}\)), we get
Thus, (33) does not exceed
As we are on the intersection of the events (32) and (28) we have \(|J_-(\tau )| \ge (1-3\eta )N\). Therefore, the expression in the last displayed formula does not exceed
Then, letting \(\theta _1 = \delta ^c\) with \(c<\min \{\alpha _0, \alpha - L\}\), and using that
we see that \(E \leqslant \exp \left[ -c\varepsilon \theta _1 N \right] \) (recall that \(\eta = \delta ^{\alpha _0}\)) and conclude that the event
is negligible.
4.3.3 \(|J| \leqslant (1-2\eta )N\), \(\eta =\delta ^{\alpha _0}\)
Here we show that the intersection of the hole event with the event
is negligible, provided that \( A^2 < \frac{1}{2}\).
In this case, our starting point is Lemma 8 which we apply with the polynomial \(P=F(0)+G_2\). Combined with Lemmas 5 and 17, it tells us that it suffices to estimate uniformly in \(\tau \in \mathbb {T}\), on the intersection of the events (34) and (28), the probability
Noting that
we rewrite this expression as
The positive parameter \(\theta _2=\theta _2(r)\) (again tending to 0 as \(r\rightarrow 1\)) will be chosen later. By the independence of \(\left( G_1(\omega ^j \tau r_0) \right) _{1\leqslant j \leqslant N}\) proven in Lemma 16, and the independence of \(G_1\) from F(0) and \(G_2\), it suffices to estimate the product
First we consider the terms with \(j\in J_+(\tau )\). In this case, by Lemma 14, we have
For the terms with \(j\notin J_+(\tau )\), we apply Lemma 14 with
provided that r is sufficiently close to 1. Then, by Lemma 14 (using that \((1-r)\sigma _F^2\) is sufficiently small)
Assuming that
we continue our estimate as follows
Multiplying the bounds (35) and (37), we get
As we are on the intersection of the events (34) and (28), we have \(|J_+(\tau )| \leqslant |J| + \eta N \leqslant (1-\eta )N\). Then, the expression in the exponent on the right-hand side of the previous displayed equation does not exceed
Letting \( \theta _2 = c \eta \left( (1-r)\sigma _F^2\right) ^{(1+10\varepsilon ) A^2} \) (which satisfies our previous requirement (36) since \(\eta \rightarrow 0\) as \(r\rightarrow 1\)), we estimate the previous expression by
For the event, whose probability is bounded by \(\exp \left[ -c \delta ^{2 \alpha _0+2(1+10\varepsilon ) A^2 (1-L) -\alpha } \right] \), to be negligible, we need \(\alpha - 2(1+10\varepsilon ) A^2(1-L) - 2 \alpha _0 > L\). Since the constants \(\varepsilon \) and \(\alpha _0\) can be made arbitrarily small, while \(\alpha \) can be made arbitrarily close to 1, we conclude that the event
is negligible, provided that \( A^2 < \frac{1}{2}\).
We conclude that the event
is negligible whenever \( A^2 < \frac{1}{2}\), and therefore, combined with the bound of Sect. 4.2, for any such A,
whence, letting \(r\uparrow 1\),
completing the proof of the upper bound in the case \(0<L<1\). \(\square \)
5 Lower bound on the hole probability for \(0<L<1\)
As before, let \(F=F(0)+G\). Fix \(\varepsilon >0\), set
define the events
and observe that \(\text {Hole}(r) \supset \Omega _1 \bigcap \Omega _2\) and that \(\Omega _1\) and \(\Omega _2\) are independent.
Put
with a sufficiently large positive constant A that will be chosen later, and define the events
Then, \( \Omega _2 \supset \Omega _3 \bigcap \Omega _4\) when r is sufficiently close to 1 (how close depending on \(\varepsilon \)). Hence,
For \(K \in \mathbb {N}\), we put \(\lambda = e\left( 1/2^K\right) \), \(w_k = \lambda ^k r\), \(1 \leqslant k \leqslant 2^K\) and define
Notice that \(\Omega _4^K \downarrow \Omega _4\) as \(K\rightarrow \infty \). In order to bound \(\mathbb {P}\left[ \, \Omega _3 \bigcap \Omega _4 \, \right] \) from below we use Hargé’s version [6] of the Gaussian correlation inequality:
Theorem 2
(G. Hargé) Let \(\gamma \) be a Gaussian measure on \(\mathbb {R}^n\), let \(A\subset \mathbb {R}^n\) be a convex symmetric set, and let \(B\subset \mathbb {R}^n\) be an ellipsoid (that is, a set of the form \(\{X\in \mathbb {R}^n:\langle CX, X \rangle \leqslant 1\}\), where C is a non-negative symmetric matrix). Then \( \gamma (A\bigcap B) \ge \gamma (A) \gamma (B)\).
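Although the proof uses the inequality only abstractly, it is easy to probe numerically. The sketch below estimates the three Gaussian probabilities by Monte Carlo in \(\mathbb {R}^2\), with an illustrative symmetric square A and ellipse B (these particular sets are not from the paper; any convex symmetric A and ellipsoid B would do):

```python
import random

# Monte Carlo check of gamma(A ∩ B) >= gamma(A) * gamma(B) for the standard
# Gaussian measure on R^2, with A a symmetric square and B an ellipse
# (illustrative choices of sets, not taken from the paper).
random.seed(0)
n = 200_000
in_a = in_b = in_ab = 0
for _ in range(n):
    x, y = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    a = abs(x) <= 1.0 and abs(y) <= 1.0        # square [-1,1]^2
    b = x * x / 4.0 + y * y <= 1.0             # ellipse {x^2/4 + y^2 <= 1}
    in_a += a
    in_b += b
    in_ab += a and b
p_a, p_b, p_ab = in_a / n, in_b / n, in_ab / n
assert p_ab >= p_a * p_b  # Hargé's inequality, with a clear margin here
```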
We apply this inequality N times to the Gaussian measure on \(\mathbb {R}^{2\left( N - j + 1 + 2^K\right) }\), \(1 \leqslant j \leqslant N\), generated by
and
and the sets
Thus, we get
Thus, by the monotone convergence of \(\Omega _4^K\) to \(\Omega _4\),
For each \(1\leqslant j \leqslant N\), \(G(z_j)\) is a complex Gaussian random variable with variance at most \((1-r^2)^{-L}\), so that
with r sufficiently close to 1.
Next note that
and therefore we have
Thus, noting that
and applying Lemma 2, we see that
provided that the constant A in the definition of \(M''\) is chosen sufficiently large. Thus, \( \mathbb {P}\left[ \max _{r\bar{\mathbb {D}}} |G'| \leqslant M'' \right] \ge \tfrac{1}{2}\). Then, piecing everything together, we get
Finally, the theorem follows from the fact that \( \mathbb {P}\left[ \Omega _1 \right] = e^{-M^2} \). \(\square \)
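The final identity is an instance of the complex Gaussian tail formula: for a standard complex Gaussian \(\zeta \), \(|\zeta |^2\) is a standard exponential variable, so \(\mathbb {P}[|\zeta | > t] = e^{-t^2}\). A minimal simulation sketch (the value \(t=1.5\) is an arbitrary illustrative choice):

```python
import math
import random

# For a standard complex Gaussian zeta (density e^{-|z|^2}/pi), the real and
# imaginary parts are independent N(0, 1/2), and |zeta|^2 is Exp(1), so
# P[|zeta| > t] = exp(-t^2).  We check this tail formula by simulation.
random.seed(1)
n, t = 200_000, 1.5
s = 1.0 / math.sqrt(2.0)  # each coordinate has variance 1/2
tail = sum(
    random.gauss(0.0, s) ** 2 + random.gauss(0.0, s) ** 2 > t * t
    for _ in range(n)
) / n
assert abs(tail - math.exp(-t * t)) < 0.01
```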
5.1 Remark
Having in mind the gap between the upper and lower bounds on the hole probability for \(0<L<1\), as given in Theorem 1, we note here that a different method would be required in order to improve the lower bound. Precisely, setting \(F = F(0) + G\) as before, we will show that
Our proof above shows that this holds with ‘\(\geqslant \)’ in place of equality, so it remains to establish the opposite inequality. It is clear that
and hence the opposite inequality need only be verified in the regime where
Now let \(0<\varepsilon <\tfrac{1}{2} (1-L)\) and suppose that
Set also \(N=\left[ (1-r)^{-1+\varepsilon }\right] \) and \(\omega = e(1/N)\). Let \(G_1\) and \(G_2\) be as in Sect. 4.1, so that \(G = G_1 + G_2\) and the random variables \((G_1(\omega ^j r))\), \(1\leqslant j\leqslant N\), are independent and identically distributed by Lemma 16. (The decomposition in Sect. 4.1 yields independent random variables at radius \(r_0 = 1 - 2(1-r)\) but may easily be modified to radius r (or indeed any radius).)
We condition on \(G_2\) and write \(\mathbb {P}^{G_2}[\,\cdot \,]=\mathbb {P}[\,\cdot \mid G_2]\). Recalling that the \((G_1(\omega ^j r))\) are independent and applying the Hardy–Littlewood rearrangement inequality, we get
Since the right-hand side does not depend on \(G_2\), we can drop the conditioning on the left-hand side and finally get
It remains to note that with our choice of N and using the upper bound (39) on M and the relation of \(\sigma _F^2(r)\) and \(\sigma _{G_1}^2(r)\) given in Lemma 16 we have
We conclude that
for all M satisfying the upper bound (39). As \(\varepsilon \) can be taken arbitrarily small, this completes the proof of (38).
6 Upper bound on the hole probability for \(L>1\)
6.1 Beginning the proof
Put \(\delta =1-r\), \(1 < \kappa \leqslant 2\) and \(r_0=1-\kappa \delta \). We take
and put \(\omega =e(1/N)\). By Lemma 9, it suffices to estimate the probability
Set
Discarding a negligible event, we assume that \( |F(0)| \leqslant \delta ^{-(\frac{1}{2} + a)} \). Let \(0<\theta <2\) be a parameter depending on \(\delta \) which we will choose later. Then,
Using Lemma 15, we can bound the expectation on the right by
where \(\Sigma \) is the covariance matrix of \(\left( F(\omega ^j \tau r_0) \right) _{1\leqslant j \leqslant N} \) and \(\Lambda \) is the maximal eigenvalue of \(\Sigma \). Note that, since the distribution of F(z) is rotation invariant, the covariance matrix \(\Sigma \) does not depend on \(\tau \).
6.2 Estimating the eigenvalues of \(\Sigma \)
By Lemma 10, the eigenvalues of \(\Sigma \) are
Now, for small \(\delta \) and \(j\ge 1\), we have
Take \(\delta \) sufficiently small so that
which yields
Put
and note that the sequence
increases for \(m\leqslant [M]\) and decreases for \(m\ge [M]+1\). Thus, the maximal term of this sequence does not exceed
whence
Also
as \(\delta \rightarrow 0\).
6.3 Completing the proof
Finally,
and then,
We set \(\theta = 2 - a^2\), \(\kappa = 1 + \delta \), and continue to bound \(P_N\) [using the choices (40) and (41)]:
Thus,
From Lemma 9 we have
by our choice of \(\kappa \), completing the proof of the upper bound in the case \(L>1\). \(\square \)
7 Lower bound on the hole probability for \(L>1\)
As before, let \(\delta =1-r\) and assume that r is sufficiently close to 1. Introduce the parameter
and put
We now introduce the events
Then
We have
In order to give a lower bound for \(\mathbb {P}\left[ \mathcal {E}_2 \right] \), we rely on a comparison principle between Gaussian analytic functions which might be of use in other contexts. To introduce this principle let us say that a random analytic function G has the \(\mathrm {GAF}(b_n)\) distribution, for some sequence of complex numbers \((b_n)_{n\ge 0}\), if G has the same distribution as
where, as usual, \((\zeta _n)\) is a sequence of independent standard complex Gaussian random variables.
Lemma 22
Let \((b_n)\), \((c_n)\), \(n\ge 0\), be two sequences of complex numbers, such that \(|c_n| \leqslant |b_n|\) for all n, and put
where we take the ratio \(c_n / b_n\) to be 1 if both \(b_n\) and \(c_n\) are zero. If \(Q > 0\), then there exists a probability space supporting a random analytic function G with the \(\mathrm {GAF}(b_n)\) distribution, and an event E satisfying \(\mathbb {P}[E] = Q^2\), such that, conditioned on the event E, the function G has the \(\mathrm {GAF}(c_n)\) distribution.
The proof of Lemma 22 uses the following simple property of Gaussian random variables.
Lemma 23
Let \(0 < \sigma \leqslant 1\). There exists a probability space supporting a standard complex Gaussian random variable \(\zeta \), and an event E satisfying \(\mathbb {P}[E] = \sigma ^2\), such that, conditioned on the event E, the random variable \(\zeta \) has the complex Gaussian distribution with variance \(\mathbb {E}[|\zeta |^2\mid E] = \sigma ^2\).
Proof of Lemma 23
We may assume that \(\sigma < 1\). Write
for the density of a standard complex Gaussian, and the density of a complex Gaussian with variance \(\sigma ^2\), respectively. Observe that, since \(\sigma < 1\),
for some non-negative function \(g_\sigma \) with integral 1. Now, suppose that our probability space supports a complex Gaussian \(\zeta _\sigma \) with \(\mathbb {E}|\zeta _\sigma |^2 = \sigma ^2\), a random variable \(Y_\sigma \) with density function \(g_\sigma \), and a Bernoulli random variable \(I_\sigma \), satisfying \(\mathbb {P}[I_\sigma = 1] = 1 - \mathbb {P}[I_\sigma ~=~0] = \sigma ^2\), which is independent of both \(\zeta _\sigma \) and \(Y_\sigma \). Then (42) implies that the random variable
has the standard complex Gaussian distribution. In this probability space, after conditioning on the event \(\{I_\sigma = 1\}\), the distribution of \(\zeta \) is that of a complex Gaussian with variance \(\mathbb {E}[|\zeta |^2 \mid I_\sigma = 1] = \sigma ^2\), as required. \(\square \)
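The construction in the proof can be simulated directly. In the sketch below, samples from \(g_\sigma \) are produced using the observation that \(g_\sigma \) is radially symmetric with \(|z|^2\) distributed as \(E_1 + \sigma ^2 E_2\) for independent standard exponentials \(E_1, E_2\) (this explicit sampler is our own computation, not part of the paper); the mixture should then reproduce a standard complex Gaussian:

```python
import cmath
import math
import random

# Sketch of the coupling in Lemma 23: zeta is a complex Gaussian of variance
# sigma^2 with probability sigma^2 (the event E = {I = 1}) and a sample from
# the residual density g_sigma otherwise.  Under g_sigma, the angle is
# uniform and |z|^2 is distributed as E1 + sigma^2 * E2 for independent
# standard exponentials (our own computation, used only to sample g_sigma).
random.seed(2)
sigma2 = 0.25  # illustrative value of sigma^2
n = 200_000
samples, conditioned = [], []
for _ in range(n):
    phase = cmath.exp(2j * math.pi * random.random())
    if random.random() < sigma2:              # the event E
        z = phase * math.sqrt(sigma2 * random.expovariate(1.0))
        conditioned.append(abs(z) ** 2)       # |z|^2 ~ sigma^2 * Exp(1)
    else:                                     # a sample from g_sigma
        z = phase * math.sqrt(
            random.expovariate(1.0) + sigma2 * random.expovariate(1.0))
    samples.append(abs(z) ** 2)

# Unconditionally |zeta|^2 should be Exp(1); conditioned on E its mean
# should be sigma^2.
assert abs(sum(samples) / n - 1.0) < 0.02
assert abs(sum(conditioned) / len(conditioned) - sigma2) < 0.02
assert abs(sum(s > 1.0 for s in samples) / n - math.exp(-1.0)) < 0.01
```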
Proof of Lemma 22
Let \(\sigma _n = \left| c_n/b_n\right| \) (where again, the ratio is defined to be 1 if \(b_n\) and \(c_n\) are both zero). Lemma 23 yields, for each n, a probability space supporting a standard complex Gaussian random variable \(\zeta _n\) and an event \(E_n\) with \(\mathbb {P}[E_n] = \sigma _n^2\). Clearly we may assume that the random variables \(\zeta _n\) are mutually independent; we take the probability space to be the product of these probability spaces, extend each \(E_n\) to this space in the obvious way, and define
The claim follows with the event \(E=\bigcap _{n\ge 0} E_n\). \(\square \)
7.1 Estimating \( \mathbb {P}\left[ \mathcal {E}_2 \right] \)
Put
and let \((q_n)_{n=1}^N\) be a sequence of numbers in [0, 1] to be specified below. According to Lemma 22, there is an event E with probability \(Q^2 = \prod _{n=1}^N q_n^2\), such that on the event E, the function G has the same distribution as
Notice also that
If we now set
then applying first Lemma 22, and then Lemma 2 to the function \(G_Q\), with \(\lambda \) as above, we obtain that for \(\delta \) sufficiently small
For this estimate to be useful in our context we have to ensure, by choosing the sequence \(q_n\) appropriately, that
First, it is straightforward to verify that
Since \(c n^{L-1} \leqslant a_n^2 \leqslant C n^{L-1}\), this implies that for \(n \in \{1, \ldots , N \}\)
Putting
and noting that the function \(x \mapsto (L-1)\log x - 2 x \delta \) attains its maximum at \(x = \frac{L-1}{2 \delta }\) (where its derivative \(\frac{L-1}{x} - 2\delta \) vanishes) we see that \(a_{n}^2 r_2^{2 n} \ge c\) for \(n \leqslant N_1\), while for \(n \in \{N_1, \ldots , N\}\) we have \(a_{n}^2 r_2^{2 n} \leqslant C \left( \log \frac{1}{\delta }\right) ^{L-1}\).
We therefore choose
where we choose the constant \(\alpha _1>0\) sufficiently small to ensure that \(q_n \leqslant 1\) for all \(n \in \{1, \ldots , N\}\). With this choice we have
and further choosing \(\alpha _1\) small if necessary, Condition (43) is satisfied.
It remains to estimate the probability of the event E. Notice that
Recalling that \(N \leqslant C N_1\), and using that \(N_1 = \left[ \, \tfrac{L - 1}{2 \delta }\, \log \tfrac{1}{\delta }\, \right] \), we obtain
7.2 Estimating \( \mathbb {P}\left[ \mathcal {E}_3 \right] \)
Put
Then
The choice of the parameter N guarantees that the ‘tail’ H will be small.
Lemma 24
Put \(r_1=\tfrac{1}{2} (1+r)\). There exists a constant \(C > 0\) such that, for \(\delta \) sufficiently small,
Proof of Lemma 24
Since the function \(r \mapsto 2 \log (1+r) - r\), \(0\leqslant r \leqslant 1\) attains its maximum at \(r=1\) we have \(r_1^2 \leqslant \exp (-\delta )\) for \(\delta = 1 - r \in [0,1]\). Recalling that \(N = \left[ \, \tfrac{2L}{\delta }\, \log \tfrac{1}{\delta }\, \right] \), we observe that for \(n > N\) we have
and therefore \(n^{L-1} \leqslant \exp (\tfrac{1}{2} \delta n)\). Then, using that \(a_n^2 \leqslant C n^{L-1}\), we have
Since \(L>1\) this is a stronger result than we claimed. \(\square \)
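The elementary bound used at the start of the proof, \(r_1^2 = \left( \tfrac{1+r}{2}\right) ^2 \leqslant e^{-\delta }\) for \(\delta = 1-r \in [0,1]\), is easy to verify by brute force over a grid:

```python
import math

# Check that ((1+r)/2)^2 <= exp(-(1-r)) for all r in [0,1], i.e. that
# r_1^2 <= e^{-delta} with r_1 = (1+r)/2 and delta = 1 - r.  Equality holds
# at r = 1; elsewhere the inequality is strict.
for k in range(1001):
    r = k / 1000.0
    assert ((1.0 + r) / 2.0) ** 2 <= math.exp(-(1.0 - r)) + 1e-12
```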
By Lemma 24,
if \(c>0\) is sufficiently small. By Lemma 2, the right-hand side is at most \(C\delta ^{-1}\, \exp \left[ -c M^2 \right] \rightarrow 0 \) as \( \delta \rightarrow 0 \). Thus, \(\mathbb {P}\left[ \mathcal {E}_3 \right] \ge \tfrac{1}{2} \) for \(\delta \) sufficiently small.
7.3 Putting the estimates together
Finally,
completing the proof of the lower bound in the case \(L>1\), and hence of Theorem 1. \(\square \)
8 The hole probability for non-invariant GAFs with regularly distributed coefficients
The proofs we gave do not use the hyperbolic invariance of the zero distribution of \(F=F_L\). In the case \(0<L<1\) we could assume that the sequence of coefficients \((a_n)\) in (1) does not increase, that \(a_0=1\) and that
Then, setting as before
the same proof yields the bounds
In the case \(L>1\), assuming that \(a_0=1\) and (44), we also get the same answer as in Theorem 1:
In the case \(L=1\) (that is, \(a_n=1\), \(n\ge 0\)) the result of Peres and Virág [16] relies on their proof that the zero set of \(F_1\) is a determinantal point process. This ceases to hold under a slight perturbation of the coefficients \(a_n\), while our techniques still work, though yielding less precise bounds:
Theorem 3
Suppose that F is a Gaussian Taylor series of the form (1) with \(a_n \simeq 1\) for \(n\ge 0\). Then
For the reader’s convenience, we supply the proof of Theorem 3, which is based on arguments similar to those we have used above. As before, we put \(\delta =1-r\), \(\sigma _F=\sigma _F(r)\), and note that under the assumptions of Theorem 3, \( \sigma _F(r)^2 \simeq \delta ^{-1}\). Also put
and \(\omega = e(1/N)\).
8.1 Upper bound on the hole probability in Theorem 3
8.1.1 Beginning the proof
The starting point is the same as in the proofs of the upper bound in the cases \(L\ne 1\). Put \(r_0 = 1-2\delta \). Then, by Lemma 9 (with \(\kappa =2\) and \(k=N\)),
For fixed \(\tau \in \mathbb {T}\) and for a small positive parameter b, we have
where \(w_j = F(\omega ^j \tau r_0)/\sqrt{N}\) are complex Gaussian random variables. Next,
Thus, up to negligible terms, the hole probability is bounded from above by
What remains is to show that the expectation of the product of \(|w_j|^{-1}\) grows at most exponentially with N. Then, choosing the constant b so small that the prefactor \( (Cb)^N \) overcomes this growth of the expectation, we will get the result.
8.1.2 Estimating \( \mathbb {E}\left[ \prod _{j=1}^N |w_j|^{-1} \right] \)
One can use Lemma 15 to bound the expectation above; here we give an alternative argument. Put \(z_j = \omega ^j \tau r_0\), \(1\leqslant j \leqslant N\), and consider the covariance matrix
For each non-empty subset \(I\subset \{1, 2, \ldots , N\}\), we put \( \Gamma _I = \left( \Gamma _{ij} \right) _{i, j\in I}\).
Lemma 25
For each \(I\subset \{1, 2, \ldots , N\}\), we have \( \det \Gamma _I \ge c^{|I|} \).
Proof of Lemma 25
By Lemma 10, the eigenvalues of the matrix \(\Gamma \) are
that is, the minimal eigenvalue of \(\Gamma \) is separated from zero. It remains to recall that the \(N-1\) eigenvalues of any principal minor of order \(N-1\) of a Hermitian matrix of order N interlace with the N eigenvalues of the original matrix. Applying this principle several times, we conclude that the minimal eigenvalue of the matrix \(\Gamma _I\) cannot be smaller than the minimal eigenvalue of the full matrix \(\Gamma \). \(\square \)
Now, we write
By Lemma 25, the expectations on the right-hand side do not exceed
(here, \(\lambda _I\) is the maximal eigenvalue of \(\Gamma _I\)). Since the number of subsets I of the set \( \left\{ 1, 2, \ldots , N \right\} \) is \(2^N\), we finally get
This completes the proof of the upper bound on the hole probability in Theorem 3. \(\square \)
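The product of reciprocals above can be controlled at all because a complex Gaussian has finite negative moments of every order below 2; for a single standard complex Gaussian \(\zeta \) one has \(\mathbb {E}[|\zeta |^{-1}] = \Gamma (1/2) = \sqrt{\pi }\) (a standard computation, not taken from the paper, which handles the correlated product via Lemma 25). A quick simulation sketch:

```python
import math
import random

# For a standard complex Gaussian zeta, |zeta|^2 ~ Exp(1), hence
# E[1/|zeta|] = E[E1^{-1/2}] = Gamma(1/2) = sqrt(pi): the negative moment is
# finite, which is what makes bounds on E[prod_j |w_j|^{-1}] possible.
# (Single-variable illustration only; the paper treats the joint case.)
random.seed(3)
n = 200_000
est = sum(1.0 / math.sqrt(random.expovariate(1.0)) for _ in range(n)) / n
assert abs(est - math.sqrt(math.pi)) < 0.15
```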
8.2 Lower bound on the hole probability in Theorem 3
We write
where the \(S_k (z)\) are independent random Gaussian polynomials,
We have
We fix \(1<A<e\) and consider the independent events \( \mathcal {E}_k = \left\{ \max _{r\mathbb {T}} |S_k| < A^k \sqrt{N} \right\} \). If these events occur together, then
with some positive numerical constant B. Then, \(F(z) \ne 0\) on \(r\bar{\mathbb {D}}\), provided that \(|F(0)| > B\sqrt{N}\), and that all the events \(\mathcal {E}_k\) occur together, that is,
It remains to estimate from below the probability of each event \(\mathcal {E}_k\) and to multiply the estimates.
8.2.1 Estimating \( \mathbb {P}\left[ \mathcal {E}_k \right] \)
Take 4N points \(z_j = r e\left( \frac{j}{4N}\right) \), \(1\leqslant j \leqslant 4N\). By the Bernstein inequality,
Therefore,
whence,
and
Then, applying Hargé’s version of the Gaussian correlation inequality as in Sect. 5, we get
(In fact, passing to real and imaginary parts we could use here the simpler Khatri–Sidak version of the Gaussian correlation inequality [10, Section 2.4].) Each value \(S_k(z_j)\) is a complex Gaussian random variable with variance
Therefore,
whence,
and then,
completing the proof. \(\square \)
References
Bogomolny, E., Bohigas, O., Leboeuf, P.: Quantum chaotic dynamics and random polynomials. J. Stat. Phys. 85, 639–679 (1996)
Buckley, J.: Fluctuations in the zero set of the hyperbolic Gaussian analytic function. Int. Math. Res. Not. IMRN 6, 1666–1687 (2015)
Forrester, P.J., Honner, G.: Exact statistical properties of the zeros of complex random polynomials. J. Phys. A 32, 2961–2981 (1999)
Hannay, J.H.: Chaotic analytic zero points: exact statistics for those of a random spin state. J. Phys. A 29, L101–L105 (1996)
Hannay, J.H.: The chaotic analytic function. J. Phys. A 31, L755–L761 (1998)
Hargé, G.: A particular case of correlation inequality for the Gaussian measure. Ann. Probab. 27, 1939–1951 (1999)
Hough, B., Krishnapur, M., Peres, Y., Virág, B.: Zeros of Gaussian Analytic Functions and Determinantal Point Processes. American Mathematical Society, Providence, RI (2009)
Kahane, J.-P.: Some Random Series of Functions, 2nd edn. Cambridge University Press, Cambridge (1985)
Latała, R., Matlak, D.: Royen’s Proof of the Gaussian Correlation Inequality. arXiv:1512.08776
Li, W.V., Shao, Q.-M.: Gaussian processes: inequalities, small ball probabilities and applications. In: Shanbhag, D.N., Rao, C.R. (eds.) Stochastic Processes: Theory and Methods, Handbook of Statistics, vol. 19, pp. 533–597. North-Holland, Amsterdam (2001)
Nazarov, F., Sodin, M.: Random complex zeroes and random nodal lines. In: Proceedings of the International Congress of Mathematicians, vol. III, pp. 1450–1484. Hindustan Book Agency, New Delhi (2010). arXiv:1003.4237
Nazarov, F., Sodin, M.: What is … a Gaussian entire function? Not. Am. Math. Soc. 57, 375–377 (2010)
Nishry, A.: Hole probability for entire functions represented by Gaussian Taylor series. J. Anal. Math. 118, 493–507 (2012)
Nishry, A.: Topics in the Value Distribution of Random Analytic Functions. Ph.D. Thesis - Tel Aviv University (2013). arXiv:1310.7542
Nonnenmacher, S., Voros, A.: Chaotic eigenfunctions in phase space. J. Stat. Phys. 92, 431–518 (1998)
Peres, Y., Virág, B.: Zeros of the i.i.d. Gaussian power series: a conformally invariant determinantal process. Acta Math. 194, 1–35 (2005)
Royen, T.: A simple proof of the Gaussian correlation conjecture extended to multivariate gamma distributions. Far East J. Theor. Stat. 48, 139–145 (2014)
Skaskiv, O., Kuryliak, A.: The probability of absence of zeros in the disc for some random analytic functions. Math. Bull. Shevchenko Sci. Soc. 8, 335–352 (2011)
Sodin, M., Tsirelson, B.: Random complex zeroes. I. Asymptotic normality. Isr. J. Math. 144, 125–149 (2004)
Sodin, M., Tsirelson, B.: Random complex zeroes. III. Decay of the hole probability. Isr. J. Math. 147, 371–379 (2005)
Acknowledgements
The authors thank Alexander Borichev, Fedor Nazarov, and the referee for several useful suggestions.
Jeremiah Buckley: Supported by ISF Grants 1048/11 and 166/11, by ERC Grant 335141 and by the Raymond and Beverly Sackler Post-Doctoral Scholarship 2013–14.
Ron Peled: Supported by ISF Grants 1048/11 and 861/15 and by IRG Grant SPTRF.
Mikhail Sodin: Supported by ISF Grants 166/11 and 382/15.
Buckley, J., Nishry, A., Peled, R. et al. Hole probability for zeroes of Gaussian Taylor series with finite radii of convergence. Probab. Theory Relat. Fields 171, 377–430 (2018). https://doi.org/10.1007/s00440-017-0782-0