1 Introduction

Random analytic functions are a classical topic of study, attracting the attention of analysts and probabilists [8]. One of the most natural instances of such functions is provided by Gaussian analytic functions (GAFs, for short). The zero sets of Gaussian analytic functions are point processes exhibiting many interesting features [7, 12]. These features depend on the geometry of the domain of analyticity of the function and, sometimes, pique the interest of mathematical physicists, see, for instance, [1, 3,4,5, 15].

In this work we consider Gaussian analytic functions in the unit disk represented by a Gaussian Taylor series

$$\begin{aligned} F(z) = \sum _{n\ge 0} \zeta _n a_n z^n, \end{aligned}$$
(1)

where \((\zeta _n)\) is a sequence of independent, identically distributed standard complex normal random variables (i.e., the probability density of each \(\zeta _n\) is \(\frac{1}{\pi }e^{-|z|^2}\) with respect to Lebesgue measure on the complex plane), and \((a_n)\) is a non-random sequence of non-negative numbers satisfying \(\limsup \root n \of {a_n} = 1\). Properties of zero sets of Gaussian Taylor series with infinite radius of convergence have been studied quite intensively in recent years, see, e.g., [7, 11, 14] and the references therein. When the radius of convergence is finite, the hyperbolic geometry often leads to some peculiarities and complications. In particular, this is the case in the study of the hole probability, that is, the probability of the event

$$\begin{aligned} \text {Hole}(r) = \left\{ F\ne 0 \ \text {on}\ r\bar{\mathbb {D}} \right\} , \end{aligned}$$

an important characteristic both from the point of view of analytic function theory and the theory of point processes. For arbitrary Gaussian Taylor series with radius of convergence one, understanding the asymptotic behaviour of the hole probability as \(r\uparrow 1\) seems difficult. Here, we focus on a natural family \(F_L\) of hyperbolic Gaussian analytic functions (hyperbolic GAFs, for short),

$$\begin{aligned} F_L(z) = \sum _{n\ge 0} \zeta _n \sqrt{\frac{L(L+1)\,\ldots \, (L+n-1)}{n!}}\, z^n, \qquad 0<L<\infty , \end{aligned}$$
(2)

distinguished by the invariance of the distribution of their zero set under Möbius transformations of the unit disk [7, Ch. 2], [19]. Note that the parameter L used to parameterize this family equals the mean number of zeros of \(F_L\) per unit hyperbolic area.
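A minimal numerical sketch (the truncation level, grid size and sample count below are ad hoc choices): sample a truncated \(F_L\) and detect zeros in \(r\bar{\mathbb {D}}\) through the winding number of \(F_L\) along \(r\mathbb {T}\) (the argument principle). For \(L=1\) the resulting estimate can be compared with the exact value \(\prod _{k\ge 1}(1-r^{2k})\) implied by the determinantal structure recalled in Sect. 1.2.2; the method is only practical for moderate r, since the hole probability decays rapidly as \(r\uparrow 1\).

```python
import numpy as np

rng = np.random.default_rng(1)
L, r, n_terms, n_grid, trials = 1.0, 0.5, 800, 1024, 1000

# a_n^2 = Gamma(n+L)/(Gamma(L) Gamma(n+1)), via the recursion a_n^2 = a_{n-1}^2 (n-1+L)/n
a2 = np.ones(n_terms)
for n in range(1, n_terms):
    a2[n] = a2[n - 1] * (n - 1 + L) / n
a = np.sqrt(a2)

z = r * np.exp(2j * np.pi * np.arange(n_grid) / n_grid)  # grid on the circle r*T
Z = z ** np.arange(n_terms)[:, None]                     # matrix of powers z^n

def has_hole():
    # standard complex Gaussians: density e^{-|w|^2}/pi, so E|zeta_n|^2 = 1
    zeta = (rng.standard_normal(n_terms) + 1j * rng.standard_normal(n_terms)) / np.sqrt(2)
    vals = (zeta * a) @ Z                                # truncated F_L on r*T
    # argument principle: winding number of F_L along r*T = number of zeros in r*D
    winding = np.angle(np.roll(vals, -1) / vals).sum() / (2 * np.pi)
    return abs(winding) < 0.5

print(np.mean([has_hole() for _ in range(trials)]))
# for L = 1, r = 1/2 this lands near prod_{k>=1}(1 - r^{2k}) ~ 0.69; cf. Sect. 1.2.2
```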

The main result of our work provides reasonably tight asymptotic bounds on the logarithm of the hole probability as r increases to 1. These bounds show a transition in the asymptotics of the hole probability according to whether \(0<L<1\), \(L=1\) or \(L>1\). Curiously, a transition taking place at a different point, \(L=\tfrac{1}{2}\), was observed previously in the study of the asymptotics of the variance of the number of zeros of \(F_L\) in disks of increasing radii [2].

1.1 The main result

Theorem 1

Suppose that \(F_L\) is a hyperbolic GAF, and that \(r\uparrow 1\). Then,

  (i)

    for \(0<L<1\),

    $$\begin{aligned} \frac{1-L-o(1)}{2^{L+1}}\, \frac{1}{(1-r)^L}\, \log \frac{1}{1-r} \leqslant -\log \mathbb {P}[\mathrm{Hole}(r)] \leqslant \frac{1-L+o(1)}{2^{L}}\, \frac{1}{(1-r)^L}\, \log \frac{1}{1-r}; \end{aligned}$$
  (ii)

    for \(L=1\),

    $$\begin{aligned} -\log \mathbb {P}[\mathrm{Hole}(r)] = \frac{\pi ^2+o(1)}{12}\, \frac{1}{1-r}; \end{aligned}$$
  (iii)

    for \(L>1\),

    $$\begin{aligned} -\log \mathbb {P}[\mathrm{Hole}(r)] = \frac{(L-1)^2+o(1)}{4}\, \frac{1}{1-r}\, \log ^2 \frac{1}{1-r}. \end{aligned}$$

The case \(L=1\) in this theorem is due to Peres and Virág; we briefly discuss their work in Sect. 1.2.2. The remaining cases appear to be new.

1.2 Previous work

1.2.1 Gaussian Taylor series with an infinite radius of convergence

For an arbitrary Gaussian Taylor series with an infinite radius of convergence, the logarithmic asymptotics of the hole probability was obtained in [13]. The main result [13, Theorem 1] says that when \(r\rightarrow \infty \) outside an exceptional set of finite logarithmic length,

$$\begin{aligned} -\log \mathbb {P}[\mathrm{Hole}(r)] = S_F(r) + o(S_F(r)), \end{aligned}$$
(3)

where

$$\begin{aligned} S_F(r) = \sum _{n\ge 0} \log _+ (a_n^2 r^{2n}). \end{aligned}$$
(4)

In this generality, the appearance of an exceptional set of values of r is unavoidable due to possible irregularities in the behaviour of the coefficients \((a_n)\) (see [14, Section 17]).

For a Gaussian Taylor series with a finite radius of convergence the asymptotic rate of decay of the hole probability has been described only in several rather special cases.

1.2.2 The determinantal case: \(L=1\)

Peres and Virág [16] (see also [7, Section 5.1]) discovered that the zero set of the Gaussian Taylor series

$$\begin{aligned} F(z) = \sum _{n\ge 0} \zeta _n z^n \end{aligned}$$
(5)

(that corresponds to \(L=1\) in (2)) is a determinantal point process [16, Theorem 1] and, therefore, many of its characteristics can be explicitly computed. In particular, they found that [16, Corollary 3(i)]

$$\begin{aligned} -\log \mathbb {P}[\mathrm{Hole}(r)] = \frac{\pi ^2+o(1)}{12}\, \frac{1}{1-r}\,\qquad \text {as }r\rightarrow 1. \end{aligned}$$
(6)

For \(L\ne 1\), the zero set of \(F_L\) is not a determinantal point process [7, p. 83], requiring the use of other techniques.

1.2.3 Fast growing coefficients

Skaskiv and Kuryliak [18] showed that the technique developed in [13] can be applied to Gaussian Taylor series in the disk that grow very fast near the boundary. Put

$$\begin{aligned} \sigma _F(r)^2 = \mathbb {E}\left[ |F(re^{\mathrm{i}\theta })|^2\right] = \sum _{n\ge 0} a_n^2 r^{2n}. \end{aligned}$$

They proved [18, Theorem 4] that if

$$\begin{aligned} \lim _{r\rightarrow 1} (1-r) \log \log \sigma _F(r) = +\infty , \end{aligned}$$

and the sequence \((a_n)\) is logarithmically concave, then the same logarithmic asymptotic (3) holds when \(r\rightarrow 1\) outside a small exceptional subset of \([\tfrac{1}{2}, 1)\). Note that in our case, \(\sigma _{F_L}(r) = (1-r^2)^{-L/2}\).
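As a quick numerical sanity check of this formula (a minimal sketch; the truncation level is an arbitrary choice), the partial sums of \(\sum _{n} a_n^2 r^{2n}\) can be compared with \((1-r^2)^{-L}\):

```python
import numpy as np

L, r, n_terms = 0.7, 0.95, 20000
a2 = np.ones(n_terms)
for n in range(1, n_terms):
    a2[n] = a2[n - 1] * (n - 1 + L) / n  # a_n^2 = Gamma(n+L)/(Gamma(L) Gamma(n+1))
print((a2 * r ** (2 * np.arange(n_terms))).sum(), (1 - r * r) ** (-L))  # both ~ 5.10
```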

1.2.4 The case when F is bounded on \(\mathbb {D}\)

At the opposite extreme, we may consider the case when, almost surely, the random series F is bounded on \(\mathbb {D}\). Then \(\mathbb {P}[\text {Hole}(r)]\) has a positive limit as \(r\rightarrow 1\). Indeed, put \(F=F(0)+G\). If F is bounded on \(\mathbb {D}\), then G is bounded on \(\mathbb {D}\) as well. Take M so that \(\mathbb {P}[\sup _{\mathbb {D}}|G| \leqslant M]\ge \frac{1}{2}\). Then

$$\begin{aligned} \left\{ F\ne 0 \ \text {on}\ \mathbb {D}\right\} \supset \left\{ \sup _{\mathbb {D}}|G| \leqslant M, \ |F(0)|>M \right\} . \end{aligned}$$

Since F(0) and G are independent, we get

$$\begin{aligned} \mathbb {P}[\text {Hole}(r)] \ge \mathbb {P}[\sup _{\mathbb {D}}|G| \leqslant M] \cdot \mathbb {P}[|F(0)|>M] \ge \frac{1}{2} e^{-M^2}. \end{aligned}$$

In view of this observation, we recall a classical result that goes back to Paley and Zygmund and gives a sufficient condition for continuity (and hence, boundedness) of F on \(\bar{\mathbb {D}}\). Introduce the sequence

$$\begin{aligned} s_j = \left( \, \sum _{2^j \leqslant n < 2^{j+1}} a_n^2 \right) ^{1/2}. \end{aligned}$$

If the sequence \(s_j\) decreases and \(\sum _j s_j < \infty \), then, almost surely, F is a continuous function on \(\bar{\mathbb {D}}\) [8, Section 7.1]. On the other hand, under a mild additional regularity condition, the divergence \(\sum _j s_j = +\infty \) guarantees that, almost surely, F is unbounded in \(\mathbb {D}\) [8, Section 8.4].

1.3 Several comments on Theorem 1

1.3.1

The proof of Theorem 1 combines the tools introduced in [13, 20] with several new ingredients. Unfortunately, in the case \(0< L <1\), our techniques are insufficient for finding the main term in the logarithmic asymptotics of \(\mathbb {P}[\text {Hole}(r)]\). Also, we cannot completely recover the aforementioned result of Peres and Virág. On the other hand, our arguments do not use the hyperbolic invariance of the zero distribution of \(F_L\). We make use of the fact that

$$\begin{aligned} a_n^2 = \frac{\Gamma (n+L)}{\Gamma (L) \Gamma (n+1)}=(1+o(1))\frac{n^{L-1}}{\Gamma (L)},\quad n\rightarrow \infty , \end{aligned}$$
(7)

where \(\Gamma \) is Euler’s Gamma function, and our techniques apply more generally to a class of Gaussian Taylor series whose coefficients \((a_n)\) display power-law behavior. We will return to this in the concluding Sect. 8.
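The quality of the approximation in (7) is easy to inspect numerically (a small sketch; log-Gamma is used to avoid overflow):

```python
from math import exp, lgamma

L = 0.5
for n in (10, 100, 1000, 10000):
    a2 = exp(lgamma(n + L) - lgamma(L) - lgamma(n + 1))
    print(n, a2 / (n ** (L - 1) / exp(lgamma(L))))  # ratio tends to 1 as n grows
```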

1.3.2

In the case \(L>1\), using (7), it is easy to see that

$$\begin{aligned} S_F(r) = \sum _{n\ge 0} \log _+ (a_n^2 r^{2n}) = \frac{(L-1)^2+o(1)}{4}\, \frac{1}{1-r}\, \log ^2\frac{1}{1-r}, \qquad r\rightarrow 1. \end{aligned}$$

That is, in this case the hyperbolic geometry becomes less relevant and the main term in the logarithmic asymptotic of the hole probability is governed by the same function as in the planar case, discussed in Sect. 1.2.1.
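A direct numerical evaluation of \(S_F(r)\) illustrates this, though the convergence is very slow: the \(o(1)\) in the exponent decays only at a \(\log \log \) rate, so at accessible values of \(\delta = 1-r\) the ratio computed below still hovers around 1.2 (a minimal sketch; the truncation level is ad hoc):

```python
import numpy as np
from math import lgamma, log

def S_F(L, r, n_terms=200000):
    log_a2 = np.array([lgamma(n + L) - lgamma(L) - lgamma(n + 1) for n in range(n_terms)])
    terms = log_a2 + 2 * np.arange(n_terms) * np.log(r)
    return terms[terms > 0].sum()  # the sum (4), truncated

L = 3.0
for delta in (1e-2, 1e-3, 1e-4):
    main = (L - 1) ** 2 / (4 * delta) * log(1 / delta) ** 2
    print(delta, S_F(L, 1 - delta) / main)  # ~1.2 here; tends to 1 only as delta -> 0
```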

1.3.3

For \(0<L<1\), the gap between the upper and lower bounds in Theorem 1 remains unsettled. It would also be interesting to accurately explore the behaviour of the logarithm of the hole probability near the transition points \(L=0\) and \(L=1\); for instance, to consider the cases \(a_n = n^{-1/2}\ell (n)\) and \(a_n = \ell (n)\), where \(\ell \) is a slowly varying function.

Of course, the ultimate goal would be to treat the case of arbitrary Gaussian Taylor series with finite radii of convergence.

1.4 Notation

  • Gaussian analytic functions will be called GAFs. The Gaussian analytic functions \(F_L\) defined in (2) will be called hyperbolic GAFs.

  • We suppress all of the dependence on L unless it is absolutely necessary. In particular, from here on,

    • \(F=F_L\),

    • c and C denote positive constants that might only depend on L. The values of these constants are irrelevant for our purposes and may vary from line to line. By A, \(\alpha \), \(\alpha _i\), etc. we denote positive constants whose values we keep fixed throughout the proof in which they appear.

  • The notation \(X\simeq Y\) means that \(cX \leqslant Y \leqslant CX\).

  • We set

    $$\begin{aligned} \delta = 1-r\quad \text {and}\quad r_0=1-\kappa \delta \quad \text {with }1<\kappa \leqslant 2. \end{aligned}$$

    Everywhere, except Sect. 6, we set \(\kappa =2\), that is, \(r_0=1-2\delta \). The value of r is assumed to be sufficiently close to 1. Correspondingly, the value of \(\delta \) is assumed to be sufficiently small.

  • The variance \(\sigma _F^2 = \sigma _F(r)^2\) is defined by

    $$\begin{aligned} \sigma _F^2 = \mathbb {E}\left[ |F(re^{\mathrm{i}\theta })|^2\right] = (1-r^2)^{-L} = (1+o(1)) (2\delta )^{-L}, \quad \mathrm{as}\ \delta \rightarrow 0. \end{aligned}$$

    Usually, we suppress the dependence on r and write \(\sigma _F\) instead of \(\sigma _F(r)\). Notice that \(\log \frac{1}{\delta } \simeq \log \sigma _F\).

  • An event E depending on r will be called negligible if \(-\log \mathbb {P}[\mathrm{Hole}(r)] = o\left( -\log \mathbb {P}[E] \right) \) as \(r\rightarrow 1\). Notice that this may depend on the value of L.

  • If f takes real values, then we define \(f_+ = \max \{0,f\}\) and \(f_- = \max \{0,-f\}\).

  • \(e(t)=e^{2\pi \mathrm{i}t}\).

  • [x] denotes the integer part of x.

  • \(n \equiv k\, (N)\) means that \(n \equiv k\) modulo N.

  • \(\mathbb {D}\) denotes the open unit disk, \(\mathbb {T}\) denotes the unit circle.

  • The planar Lebesgue measure is denoted by m, and the (normalized) Lebesgue measure on \(\mathbb {T}\) is denoted by \(\mu \).

2 Idea of the proof

We give a brief description of the proof of Theorem 1 in the cases \(0<L<1\) and \(L>1\). In the case \(L=1\) our arguments suffice to estimate the logarithm of the hole probability up to a constant factor, as discussed in Sect. 8, and we briefly sketch the argument for this case as well. Our proof of the upper bound on the hole probability in the case \(0<L<1\) is more involved than in the other cases.

2.1 Upper bounds on the hole probability when \(L>1\) and \(L=1\)

Our starting point for proving upper bounds is the mean-value property. On the hole event,

$$\begin{aligned} \int _\mathbb {T}\log |F(tr)| \,\mathrm{d}\mu (t) = \log |F(0)|. \end{aligned}$$

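As a toy illustration of this identity, with a deterministic zero-free function in place of F: for \(f(z)=e^z+3\), whose zeros all satisfy \(|z|>3\), a discretized circle average reproduces \(\log |f(0)| = \log 4\).

```python
import numpy as np

th = 2 * np.pi * np.arange(4096) / 4096
for r in (0.5, 0.9, 0.99):
    vals = np.exp(r * np.exp(1j * th)) + 3  # f on the circle r*T
    print(r, np.log(np.abs(vals)).mean(), np.log(4.0))  # circle average vs log|f(0)|
```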
Off an event of negligible probability, the integral may be discretized, yielding the inequality

$$\begin{aligned} \sum _{j=1}^N \log | F(\tau \omega ^j r_0)| \leqslant N \log |F(0)| + 1 \end{aligned}$$
(8)

at the slightly smaller radius \(r_0<r\), for a random \(\tau \in \mathbb {T}\) taken from a small set of possibilities, a suitable N, and \(\omega = e(1/N)\) (see Lemmas 8 and 9). Thus it suffices to bound from above, for each fixed \(\tau \in \mathbb {T}\), the probability that (8) holds. We may further simplify by fixing a threshold \(T>0\), noting that \(\mathbb {P}\left[ |F(0)|\ge T\right] = \exp (-T^2)\) and writing

$$\begin{aligned}&\mathbb {P}\left[ \sum _{j=1}^N \log | F(\tau \omega ^j r_0)| \leqslant N \log |F(0)| + 1\right] \nonumber \\&\quad \leqslant \mathbb {P}\left[ \sum _{j=1}^N \log | F(\tau \omega ^j r_0)| \leqslant N \log T + 1\right] + e^{-T^2}. \end{aligned}$$
(9)

We focus on the first summand, setting T sufficiently large so that the second summand is negligible. Taking \(0<\theta <2\) and applying Chebyshev’s inequality,

$$\begin{aligned} \mathbb {P}\left[ \sum _{j=1}^N \log | F(\tau \omega ^j r_0)| \leqslant N \log T + 1 \right] &\leqslant \mathbb {P}\left[ \prod _{j=1}^N \left| F(\omega ^j \tau r_0 ) \right| ^{-\theta } \ge c T^{-\theta N} \right] \nonumber \\ &\leqslant C T^{\theta N}\, \mathbb {E}\left[ \prod _{j=1}^N \left| F(\omega ^j \tau r_0 ) \right| ^{-\theta }\right] . \end{aligned}$$
(10)

It remains to estimate the expectation in the last expression. Our bounds for it make use of the fact that the covariance matrix \(\Sigma \) of the Gaussian vector \((F(\tau \omega ^j r_0))\), \(1\leqslant j\leqslant N\), has a circulant structure, allowing it to be explicitly diagonalized. In particular, its eigenvalues are (see Lemma 10)

$$\begin{aligned} \lambda _m = N\, \sum _{n \equiv m\, (N)} a_n^2 r^{2n}, \qquad m=0, \ldots , N-1. \end{aligned}$$
(11)

This is used together with the following, somewhat rough, bound (see Lemma 15)

$$\begin{aligned} \mathbb {E}\left[ \prod _{j=1}^N \left| F(\omega ^j \tau r_0 ) \right| ^{-\theta }\right] \leqslant \frac{1}{\det \Sigma }\, \left( \Lambda ^{\left( 1-\tfrac{1}{2} \theta \right) } \cdot \Gamma \left( 1-\tfrac{1}{2}\,\theta \right) \right) ^N , \end{aligned}$$
(12)

where \(\Lambda \) is the maximal eigenvalue of \(\Sigma \) and \(\Gamma \) is Euler’s Gamma-function.

2.1.1 The case \(L>1\)

We set the parameters to be \(T = \delta ^{-\frac{1}{2}}\exp \left( \sqrt{\log \frac{1}{\delta }}\right) \), so that the factor \(e^{-T^2}\) in (9) is indeed negligible, \(N = \left[ \,\frac{L-1}{2\delta } \log \frac{1}{\delta }\,\right] \) and \(\theta = 2 - \left( \log \frac{1}{\delta }\right) ^{-1}\). With this choice, the dominant term in the combination of the bounds (10) and (12) is the factor \(T^{\theta N} / \det \Sigma \). Its logarithmic asymptotics are calculated using (11) and yield the required upper bound.

We mention that choosing \(\theta \) close to its maximal value of 2 corresponds, in some sense, to the fact that the event (8) constitutes a very large deviation for the random sum \(\sum _{j=1}^N \log | F(\tau \omega ^j r_0)|\).

2.1.2 The case \(L=1\)

The same approach may be applied with the parameters \(T = b\,\delta ^{-1/2}\), for a small parameter \(b>0\), \(N = \left[ \,\delta ^{-1}\,\right] \) and \(\theta = 1\). For variety, Sect. 8.1 presents a slightly different alternative.

In the same sense as before, the choice \(\theta =1\) indicates that we are now considering a large deviation event.

2.2 Upper bound on the hole probability when \(0<L<1\)

Our goal here is to show that the intersection of the hole event with the event \(\left\{ |F(0)|\leqslant A\sigma _F\sqrt{\log \tfrac{1}{(1-r)\sigma _F^2}}\,\right\} \) is negligible when \(A^2 < \frac{1}{2}\). The upper bound then follows from the estimate

$$\begin{aligned} \mathbb {P}\left[ |F(0)|> A\sigma _F\sqrt{\log \tfrac{1}{(1-r)\sigma _F^2}}\,\right] = \exp \left( -A^2\sigma _F^2\log \tfrac{1}{(1-r)\sigma _F^2}\right) . \end{aligned}$$

The starting point is again the inequality (8) in which we choose the parameter

$$\begin{aligned} N = \left[ \delta ^{-\alpha }\right] ,\quad L<\alpha <1, \end{aligned}$$

with \(\alpha \) eventually chosen close to 1. However, a more refined analysis is required here. First, we separate the constant term from the function F, writing

$$\begin{aligned} F(z) = F(0) + G(z). \end{aligned}$$

Second, to have better control of the Gaussian vector

$$\begin{aligned} (G(\tau \omega ^j r_0)),\quad 1\leqslant j\leqslant N, \end{aligned}$$
(13)

we couple G with two independent GAFs \(G_1\) and \(G_2\) so that \(G = G_1 + G_2\) almost surely, the vector \((G_1(\tau \omega ^j r_0))\), \(1\leqslant j\leqslant N\), is composed of independent, identically distributed variables, and \(G_2\) is a polynomial of degree \(N-1\) with relatively small variance. In essence, we are treating the variables in (13) as independent, identically distributed up to a small error captured by \(G_2\).

We proceed by conditioning on F(0) and \(G_2\), using the convenient notation

$$\begin{aligned} \mathbb {E}^{F(0), G_2} \left[ \ . \ \right]&= \mathbb {E}\left[ \ . \ \big |\, F(0), G_2 \right] , \\ \mathbb {P}^{F(0), G_2} \left[ \ . \ \right]&= \mathbb {P}\left[ \ . \ \big |\, F(0), G_2 \right] . \end{aligned}$$

Applying Chebyshev’s inequality we may use the above independence to exchange expectation and product, writing, for \(0<\theta <2\),

$$\begin{aligned}&\mathbb {P}^{F(0), G_2}\left[ \sum _{j=1}^N \log | F(\tau \omega ^j r_0)| \leqslant N \log |F(0)| + 1\right] \nonumber \\&\quad \leqslant C |F(0)|^{\theta N}\, \mathbb {E}^{F(0), G_2} \left[ \prod _{j=1}^N \left| F(\omega ^j \tau r_0 ) \right| ^{-\theta }\right] \nonumber \\&\quad = C |F(0)|^{\theta N}\, \prod _{j=1}^N \mathbb {E}^{F(0), G_2} \left[ \left| F(\omega ^j \tau r_0 ) \right| ^{-\theta }\right] . \end{aligned}$$
(14)

Thus we need to estimate expectations of the form

$$\begin{aligned} \mathbb {E}^{F(0), G_2} \left[ \left| \frac{F(\omega ^j \tau r_0 )}{F(0) } \right| ^{-\theta }\right] = \mathbb {E}^{F(0), G_2} \left[ \left| 1 + \frac{G_1(\omega ^j \tau r_0 )+G_2(\omega ^j \tau r_0 )}{F(0)} \right| ^{-\theta }\right] ,\nonumber \\ \end{aligned}$$
(15)

in which F(0) and \(G_2\) are given. Two bounds are used to this end. Given a standard complex Gaussian random variable \(\zeta \), real \(t>0\) and \(0<\theta \leqslant 1\) we have the simple estimate,

$$\begin{aligned} \sup _{w\in \mathbb {C}} \mathbb {E}\left[ \, \left| w+\frac{\zeta }{t} \right| ^{-\theta } \, \right] \leqslant t^\theta (1+C\theta )\, \end{aligned}$$
(16)

and, for \(0\leqslant \theta \leqslant \frac{1}{2}\), the more refined

$$\begin{aligned} \mathbb {E}\left[ \left| 1+ \frac{\zeta }{t} \right| ^{-\theta } \right] \leqslant 1 - c\theta \, \frac{e^{-t^2}}{1+t^2} + C\theta ^2, \end{aligned}$$
(17)

see Lemmas 11 and 14. The most important feature of the second bound is that it is less than 1 (though only slightly) when \(\theta \) is very close to 0, satisfying \(\theta \leqslant c(1+t^2)^{-1}e^{-t^2}\).

Combining (14) and (15) with the simple estimate (16) (with \(\theta =1\)) already suffices to prove that the intersection of the hole event with the event \(\{|F(0)|\leqslant a\sigma _F\}\) is negligible when a is a sufficiently small constant. However, on the event

$$\begin{aligned} \left\{ a\sigma _F\leqslant |F(0)|\leqslant A\sigma _F\sqrt{\log \tfrac{1}{(1-r)\sigma _F^2}}\, \right\} \end{aligned}$$

the error term \(G_2\) becomes more relevant and we consider two cases according to its magnitude. Taking small constants \(\varepsilon ,\alpha _0>0\) and \(\eta = \delta ^{\alpha _0}\) we let

$$\begin{aligned} J = \left\{ 1\leqslant j \leqslant N:\left| 1+\frac{G_2(\omega ^j r_0)}{F(0)} \right| \ge 1+2\varepsilon \right\} . \end{aligned}$$

2.2.1 The case \(|J|>(1-2\eta )N\)

Here, after discarding (a priori, before conditioning on \(G_2\)) a negligible event in order to handle the rotation \(\tau \), for many values of j the terms in (15) satisfy \(\left| 1 + \frac{G_2(\omega ^j \tau r_0 )}{F(0)}\right| \ge 1+\varepsilon \). This fact together with the bound (17) (in simplified form, with right-hand side \(1 + C\theta ^2\)), taking \(\theta \) tending to 0 as a small power of \(\delta \), suffices to show that the probability in (14) is negligible.

2.2.2 The case \(|J|\leqslant (1-2\eta )N\)

In this case we change our starting point. By the mean-value inequality,

$$\begin{aligned} \int _\mathbb {T}\log \left| \frac{F(tr)}{F(0) + G_2(tr)}\right| \,\mathrm{d}\mu (t) \leqslant \log \left| \frac{F(0)}{F(0) + G_2(0)}\right| = 0. \end{aligned}$$

Off an event of negligible probability, the integral may again be discretized, yielding

$$\begin{aligned} \sum _{j=1}^N \log \left| \frac{F(\tau \omega ^j r_0)}{F(0) + G_2(\tau \omega ^j r_0)}\right| \leqslant 1 \end{aligned}$$

at the slightly smaller radius \(r_0<r\), for a random \(\tau \) taken from a small set of possibilities, see Lemma 8. As before, for each fixed \(\tau \), Chebyshev's inequality and independence show that,

$$\begin{aligned}&\mathbb {P}^{F(0), G_2}\left[ \sum _{j=1}^N \log \left| \frac{F(\tau \omega ^j r_0)}{F(0) + G_2(\tau \omega ^j r_0)}\right| \leqslant 1\right] \nonumber \\&\quad \leqslant C\prod _{j=1}^N \mathbb {E}^{F(0), G_2} \left[ \left| \frac{F(\tau \omega ^j r_0)}{F(0) + G_2(\tau \omega ^j r_0)} \right| ^{-\theta }\right] \end{aligned}$$
(18)

and we are left with the task of estimating terms of the form

$$\begin{aligned} \mathbb {E}^{F(0), G_2} \left[ \left| \frac{F(\tau \omega ^j r_0)}{F(0) + G_2(\tau \omega ^j r_0)} \right| ^{-\theta }\right] = \mathbb {E}^{F(0), G_2} \Biggl [ \Biggl | 1+\frac{G_1(\tau \omega ^j r_0)}{F(0)\left( 1+ \frac{G_2(\tau \omega ^j r_0)}{F(0)}\right) } \Biggr |^{-\theta }\Biggr ].\nonumber \\ \end{aligned}$$
(19)

Again discarding (a priori) a negligible event in order to handle the rotation \(\tau \), for many values of j the terms in (19) satisfy \(\left| 1 + \frac{G_2(\omega ^j \tau r_0 )}{F(0)}\right| \leqslant 1+\varepsilon \). These terms are estimated by using (17) with \(t\leqslant (1+4\varepsilon )A\,\sqrt{\log \tfrac{1}{(1-r)\sigma _F^2}}\). Correspondingly we set \(\theta = c\eta ((1-r)\sigma _F^2)^{(1+10\varepsilon )A^2}\) and obtain that the probability in (18) satisfies

$$\begin{aligned} \mathbb {P}^{F(0), G_2}\left[ \sum _{j=1}^N \log \left| \frac{F(\tau \omega ^j r_0)}{F(0) + G_2(\tau \omega ^j r_0)}\right| \leqslant 1\right] \leqslant \exp (-c\eta ^2((1-r)\sigma _F^2)^{2(1+10\varepsilon )A^2} N). \end{aligned}$$

Recalling that \(\sigma _F^2 = (1-r^2)^{-L}\), \(\eta = \delta ^{\alpha _0}\) and \(N = [\delta ^{-\alpha }]\), and choosing \(\varepsilon \) and \(\alpha _0\) close to 0 and \(\alpha \) close to 1 shows that this probability is negligible provided that \(A^2 < \frac{1}{2}\).

In contrast to the cases \(L>1\) and \(L=1\), the fact that we take \(\theta \) tending to 0 can be viewed as saying that we are now considering a moderate deviation event.

2.3 Lower bounds on the hole probability

The proofs of our lower bounds on the hole probability are less involved than the proofs of the upper bounds and the reader is referred to the relevant sections for details. We mention here that in all cases we rely on the same basic strategy: Fix a threshold \(M>0\) and observe that

$$\begin{aligned} \mathbb {P}\left[ \text {Hole}(r)\right]&\ge \mathbb {P}\left[ |F(0)|>M,\;\; \max _{r\bar{\mathbb {D}}} |F - F(0)|\leqslant M\right] \nonumber \\&= e^{-M^2}\cdot \mathbb {P}\left[ \max _{r\mathbb {T}} |F - F(0)|\leqslant M\right] . \end{aligned}$$
(20)

2.3.1 The case \(0<L<1\)

Here we take

$$\begin{aligned} M = \sqrt{1-L + 2\varepsilon } \cdot \sigma _F\, \sqrt{\log \tfrac{1}{1-r}}. \end{aligned}$$

To estimate the right-hand side of (20) we discretize the circle \(r\mathbb {T}\) into \(N = \left[ (1-r)^{-(1+\varepsilon )}\right] \) equally-spaced points. We then use Hargé’s version of the Gaussian correlation inequality to estimate, by bounding \(F'\), the probability that the maximum attained on the circle \(r\mathbb {T}\) is not much bigger than the maximum attained on these points and that the value F attains at each of the points is not too large.

2.3.2 The case \(L>1\)

Here we take \(M = \frac{1}{\sqrt{\delta }}\left( \log \frac{1}{\delta }\right) ^{\alpha }\) for some \(\frac{1}{2}<\alpha <1\). We also set \(N=\left[ \frac{2 L}{\delta }\log \frac{1}{\delta }\right] \). We prove that the event on the right-hand side of (20) becomes typical after conditioning that the first N coefficients in the Taylor series of F are suitably small, and estimate the probability of the conditioning event.

2.3.3 The case \(L=1\)

This case seems the most delicate of the lower bounds. We take \(M = B\sqrt{\frac{1}{1-r}}\) for a large constant B. To estimate the probability on the right-hand side of (20) we write the Taylor series of F as an infinite sum of polynomials of degree \(N = \left\lceil \frac{1}{1-r}\right\rceil \) and use an argument based on Bernstein’s inequality and Hargé’s version of the Gaussian correlation inequality.

3 Preliminaries

Here, we collect several lemmas, which will be used in the proof of Theorem 1.

3.1 GAFs

Lemma 1

([7], Lemma 2.4.4) Let g be a GAF on \(\mathbb {D}\), and suppose that \(\sup _{\mathbb {D}} \mathbb {E}[|g|^2] \leqslant \sigma ^2\). Then, for every \(\lambda >0\),

$$\begin{aligned} \mathbb {P}\Big [\max _{\frac{1}{2}\bar{\mathbb {D}}} |g| > \lambda \sigma \Big ] \leqslant Ce^{-c\lambda ^2}. \end{aligned}$$

Lemma 2

Let f be a GAF on \(\mathbb {D}\), and \(s \in (0,\delta )\). Put

$$\begin{aligned} \sigma _f ^2(r) = \max _{r\bar{\mathbb {D}}} \mathbb {E}[|f|^2]. \end{aligned}$$

Then, for every \(\lambda >0\),

$$\begin{aligned} \mathbb {P}\left[ \max _{r\bar{\mathbb {D}}} |f| > \lambda \sigma _f\left( r + s \right) \right] \leqslant \frac{C}{s} \cdot e^{-c\lambda ^2}. \end{aligned}$$

In particular, for every \(\lambda >0\),

$$\begin{aligned} \mathbb {P}\left[ \max _{r\bar{\mathbb {D}}} |f| > \lambda \sigma _f\left( \tfrac{1}{2} (1+r) \right) \right] \leqslant C \delta ^{-1} \, e^{-c\lambda ^2}. \end{aligned}$$

Proof of Lemma 2

Take an integer \(N \simeq \frac{1}{s}\) and consider the scaled functions

$$\begin{aligned} g_j(w) = f(z_j+ s w), \qquad z_j = r e(j/N), \ j=1\, \ldots \, N. \end{aligned}$$

Since \(\max _{r\bar{\mathbb {D}}} |f| = \max _{r\mathbb {T}} |f|\), the first statement follows by applying Lemma 1 to each \(g_j\) and using the union bound. The second statement follows from the first by taking \(s=\tfrac{1}{2} \delta = \tfrac{1}{2} (1-r)\). \(\square \)

3.2 A priori bounds for hyperbolic GAFs

Lemma 3

Suppose that F is a hyperbolic GAF. Then, for \(p>1\),

$$\begin{aligned} \mathbb {P}\left[ \max _{(1-\frac{1}{2} \delta )\bar{\mathbb {D}}} |F| \ge \sigma _F^p \right] \leqslant C \delta ^{-1} \, e^{-c \sigma _F^{2p-2}}. \end{aligned}$$

Proof of Lemma 3

Since \(\sigma _F\left( 1-\tfrac{1}{4} \delta \right) \leqslant C \sigma _F\), this follows from Lemma 2. \(\square \)

Lemma 4

Suppose that F is a hyperbolic GAF. Then

$$\begin{aligned} \mathbb {P}\left[ \, \max _{(1-2\delta )\bar{\mathbb {D}}} |F| \leqslant e^{-\log ^2 \sigma _F} \,\right] \leqslant C e^{-c\delta ^{-1}\log ^4\sigma _F}. \end{aligned}$$

Proof of Lemma 4

Suppose that \( \max _{(1-2\delta )\mathbb {D}} |F| \leqslant e^{-\log ^2 \sigma _F} \). Then, by Cauchy’s inequalities,

$$\begin{aligned} |\zeta _n| a_n \leqslant (1-2\delta )^{-n} e^{-\log ^2 \sigma _F}, \end{aligned}$$

whence, for \(n>0\), using the fact that \(a_n \ge c n^{\frac{1}{2} (L-1)}\) (see (7)), we get

$$\begin{aligned} |\zeta _n| \leqslant C n^{\frac{1}{2} (1-L)} (1-2\delta )^{-n} e^{-\log ^2 \sigma _F} < C n^{\frac{1}{2}} e^{c\delta n -\log ^2 \sigma _F}. \end{aligned}$$

In the range \(n\simeq \tfrac{1}{\delta }\log ^2\sigma _F\), we get

$$\begin{aligned} |\zeta _n| \leqslant C \left( \tfrac{1}{\delta } \log ^2\sigma _F \right) ^{\frac{1}{2}} e^{-c\log ^2 \sigma _F} \leqslant e^{-c\log ^2\sigma _F} \end{aligned}$$
(21)

provided that \(\delta \) is sufficiently small. The probability that (21) holds simultaneously for all such n does not exceed

$$\begin{aligned} \left( e^{-c\log ^2\sigma _F} \right) ^{c\delta ^{-1}\log ^2\sigma _F} = e^{-c\delta ^{-1}\log ^4\sigma _F}, \end{aligned}$$

completing the proof. \(\square \)

Next, we define “the good event” \(\Omega _\mathtt{g} = \Omega _\mathtt{g} (r)\) by

$$\begin{aligned} \Omega _\mathtt{g} = \left\{ \max _{(1-\frac{1}{2} \delta )\bar{\mathbb {D}}} |F| \leqslant \sigma _F^3 \right\} \bigcap \left\{ \max _{(1-2\delta )\bar{\mathbb {D}}} |F| \ge e^{-\log ^2 \sigma _F} \right\} \end{aligned}$$
(22)

and note that by Lemmas 3 and 4 the event \(\Omega _\mathtt{g}^c\) is negligible.

Lemma 5

Suppose that F is a hyperbolic GAF. If \(\gamma > 1\) then

$$\begin{aligned} \mathrm{Hole}(r) \bigcap \Omega _\mathtt{g}(r) \subset \left\{ \, \min _{(1-\gamma \delta )\bar{\mathbb {D}}}\, |F| \ge \exp \left[ - C (\gamma - 1)^{-1} \delta ^{-3} \right] \, \right\} . \end{aligned}$$

Proof of Lemma 5

Suppose that we are on the event \( \mathrm{Hole}(r) \bigcap \Omega _\mathtt{g}(r) \). Since F does not vanish in \(r\mathbb {D}\), the function \(\log |F|\) is harmonic therein, and therefore,

$$\begin{aligned} \max _{(1-\gamma \delta )\bar{\mathbb {D}}}\, \log \frac{1}{|F(z)|}&= \max _{(1-\gamma \delta )\bar{\mathbb {D}}}\, \int _\mathbb {T}\frac{r^2 - |z|^2}{|rt - z|^2} \log \frac{1}{|F(rt)|} \,\mathrm{d}\mu (t) \\&\leqslant \max _{(1-\gamma \delta )\bar{\mathbb {D}}}\, \left( \frac{r +|z|}{r-|z|} \right) \int _\mathbb {T}\left| \log |F(rt)| \right| \,\mathrm{d}\mu (t) \\&\leqslant \frac{2}{(\gamma - 1) \delta } \int _\mathbb {T}\left| \log |F(rt)| \right| \,\mathrm{d}\mu (t). \\ \end{aligned}$$

Furthermore, on \(\Omega _\mathtt{g}\), let \(w\in (1-2\delta )\mathbb {T}\) be a point where \(|F(w)|\ge e^{-\log ^2\sigma _F}\). Then

$$\begin{aligned} -\left( \log ^2\sigma _F \right) \leqslant \log |F(w)|&= \int _\mathbb {T}\frac{r^2 - |w|^2}{|rt-w|^2}\, \log |F(rt)|\, \mathrm{d}\mu (t) \\&= \int _\mathbb {T}\frac{r^2 - |w|^2}{|rt-w|^2}\, \left[ \log _+ |F(rt)| - \log _- |F(rt)|\right] \, \mathrm{d}\mu (t), \end{aligned}$$

whence,

$$\begin{aligned} \int _{\mathbb {T}} \log _-|F(rt)|\, \mathrm{d}\mu (t)&\leqslant \frac{r+|w|}{r-|w|}\, \int _{\mathbb {T}} \frac{r^2-|w|^2}{|rt-w|^2}\, \log _- |F(rt)|\, \mathrm{d}\mu (t) \\&\leqslant \frac{r+|w|}{r-|w|}\, \left[ \int _{\mathbb {T}} \frac{r^2-|w|^2}{|rt-w|^2}\, \log _+ |F(rt)|\, \mathrm{d}\mu (t) + \log ^2\sigma _F \right] \\&\leqslant \frac{r+|w|}{r-|w|} \, \left[ \max _{r\mathbb {T}} \log |F| + \log ^2\sigma _F \right] \\&\leqslant \frac{2}{\delta } \left[ \, 3 \log \sigma _F + \log ^2\sigma _F\, \right] \leqslant \frac{C}{\delta } \cdot \log ^2 \sigma _F. \end{aligned}$$

Then,

$$\begin{aligned}&\max _{(1-\gamma \delta )\bar{\mathbb {D}}}\, \log \frac{1}{|F|} \leqslant \frac{2}{(\gamma - 1) \delta } \int _{\mathbb {T}} \left| \log |F(rt)| \right| \, \mathrm{d}\mu (t)\\&\qquad \leqslant \frac{C}{(\gamma - 1) \delta ^2} \cdot \log ^2 \sigma _F < \frac{C}{(\gamma - 1) \delta ^3}, \end{aligned}$$

proving the lemma.\(\square \)

3.3 Averaging \(\log |F|\) over roots of unity

We start with a polynomial version.

Lemma 6

Let S be a polynomial of degree \(n \ge 1\), let \(k\ge 4\) be an integer, and let \(\omega =e(1/k)\). Then there exists \(\tau \) with \(\tau ^{k^2 n}=1\) so that

$$\begin{aligned} \frac{1}{k}\, \sum _{j=1}^k \log |S(\tau \omega ^j)| \ge \log |S(0)| - \frac{C}{k^2}. \end{aligned}$$

Proof of Lemma 6

Assume that \(S(0)=1\) (otherwise, replace S by S / S(0)). Then \( S(z) = \prod _{\ell =1}^n (1-s_\ell z) \) and

$$\begin{aligned} \prod _{j=1}^k S(\omega ^j z) = \prod _{\ell =1}^n (1-s_\ell ^k z^k) = S_1(z^k), \end{aligned}$$

where \( S_1(w) \mathop {=}\limits ^\mathrm{def} \prod _{\ell =1}^n (1-s_\ell ^k w) \) is a polynomial of degree n.

Let \(M=\max _{\mathbb {T}} |S_1|\). By the maximum principle, \(M \ge |S_1(0)|=1\). By Bernstein’s inequality, \(\max _{\mathbb {T}} |S_1'|\leqslant nM\). Now, let \(t_0=e^{\mathrm{i}\varphi _0}\in \mathbb {T}\) be a point where \(|S_1(t_0)|=M\). Then, for any \(t=e^{\mathrm{i}\varphi }\in \mathbb {T}\) with \(|\varphi -\varphi _0|< \tfrac{\varepsilon }{n}\) (with \(0\leqslant \varepsilon <1\)), we have

$$\begin{aligned} |S_1(t)| \ge M - \frac{\varepsilon }{n} \cdot nM = (1-\varepsilon )M \ge 1-\varepsilon . \end{aligned}$$

The roots of unity of order kn form a \(\frac{\pi }{k n}\)-net on \(\mathbb {T}\). Thus, there exists t such that \(t^{k n}=1\) and \(\log |S_1(t)| \ge - \tfrac{C}{k}\). Taking \(\tau \) so that \(\tau ^k=t\), we get

$$\begin{aligned} \frac{1}{k}\, \sum _{j=1}^k \log |S(\tau \omega ^j)| = \frac{1}{k}\, \log |S_1(\tau ^k)| \ge - \frac{C}{k^2}, \end{aligned}$$

while \(\tau ^{k^2 n}=1\).\(\square \)
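The identity \(\prod _{j=1}^k S(\omega ^j z) = S_1(z^k)\) at the heart of this proof is easily tested numerically (a sketch; the values of k, n and z are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, z = 5, 4, 0.3 + 0.4j
omega = np.exp(2j * np.pi / k)
s = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # the s_l in S(z) = prod(1 - s_l z)
lhs = np.prod([np.prod(1 - s * omega ** j * z) for j in range(1, k + 1)])
rhs = np.prod(1 - s ** k * z ** k)                        # S_1(z^k)
print(np.isclose(lhs, rhs))                               # True
```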

Lemma 7

Let f be an analytic function on \(\mathbb {D}\) such that \(M\mathop {=}\limits ^\mathrm{def}\sup _{\mathbb {D}} |f|<+\infty \). Let \(0<\rho <1\), and denote by q the Taylor polynomial of f around 0 of degree N. If

$$\begin{aligned} N \ge \frac{1}{1-\rho }\, \log \frac{M}{1-\rho }, \end{aligned}$$
(23)

then \(\max _{\rho \bar{\mathbb {D}}} |f-q|<1\).

Proof of Lemma 7

Let

$$\begin{aligned} f(z) = \sum _{n\ge 0} c_n z^n. \end{aligned}$$

Then, by Cauchy’s inequalities, \(|c_n|\leqslant M\), whence

$$\begin{aligned} \max _{\rho \bar{\mathbb {D}}} |f-q|&\leqslant \sum _{n\ge N+1} |c_n| \rho ^n \leqslant M\, \frac{\rho ^{N+1}}{1-\rho }\\&=\frac{M}{1-\rho }\,(1-(1-\rho ))^{N+1} <\frac{M}{1-\rho }\,e^{-N(1-\rho )}\leqslant 1. \end{aligned}$$

\(\square \)

Lemma 8

Let \(0< m < 1\), \(0< \rho < 1\), \(k\ge 4\) be an integer, and \(\omega = e(1/k)\). Suppose that F is an analytic function on \(\mathbb {D}\) with \(m\leqslant \inf _{\mathbb {D}}|F| \leqslant \sup _{\mathbb {D}} |F| \leqslant m^{-1}\) and that P is a polynomial with \(P(0) \ne 0\). There exist a positive integer

$$\begin{aligned} K \leqslant k^2 \left[ \deg P + \frac{C}{1-\rho }\, \log \frac{k}{m(1-\rho )} \right] , \end{aligned}$$

and \(\tau \in \mathbb {T}\) satisfying \(\tau ^{K}=1\) such that

$$\begin{aligned} \frac{1}{k}\, \sum _{j=1}^k \log \left| \frac{F}{P}(\tau \omega ^j \rho ) \right| \leqslant \log \left| \frac{F}{P}(0) \right| + \frac{C}{k^2}. \end{aligned}$$

Proof of Lemma 8

Let \(0< \varepsilon < m\) be a small parameter. Applying Lemma 7 to the function \(f=(\varepsilon F)^{-1}\), we get a polynomial Q with

$$\begin{aligned} \deg Q \leqslant \frac{C}{1-\rho }\, \log \frac{1}{\varepsilon m (1-\rho )}, \end{aligned}$$

such that

$$\begin{aligned} \max _{\rho \bar{\mathbb {D}}} \left| \frac{1}{F} - Q \right| < \varepsilon . \end{aligned}$$

Then, assuming that \(\varepsilon <\frac{1}{2} m\), we get

$$\begin{aligned} \max _{\rho \bar{\mathbb {D}}} \left| \log |Q| + \log |F| \right| = \max _{\rho \bar{\mathbb {D}}} \left| \log | 1 + F(Q-\tfrac{1}{F})|\, \right| \leqslant C\varepsilon m^{-1}. \end{aligned}$$
(24)

Applying Lemma 6 to the polynomial \(S=P\cdot Q\) and taking into account (24), we see that there exists \(\tau \) so that \(\tau ^{k^2\deg S}=1\) and

$$\begin{aligned} \frac{1}{k}\, \sum _{j=1}^k \log \left| \frac{F}{P}(\tau \omega ^j \rho ) \right|&\leqslant - \frac{1}{k}\, \sum _{j=1}^k \log \left| S(\tau \omega ^j \rho ) \right| + C\varepsilon m^{-1} \\&\leqslant - \log |S(0)| + C\left[ k^{-2} + \varepsilon m^{-1} \right] \\&\leqslant \log \left| \frac{F}{P}(0) \right| + C\left[ k^{-2} + \varepsilon m^{-1} \right] . \end{aligned}$$

It remains to let \(\varepsilon = \tfrac{1}{2} m k^{-2} \) and \( K = k^2\deg S = k^2 (\deg P + \deg Q)\). \(\square \)

We will only need the full strength of this lemma in Sect. 4.3.3. Elsewhere, the following lemma will be sufficient.

Lemma 9

Let F be a hyperbolic GAF. Let

$$\begin{aligned} 1 < \kappa \leqslant 2, \quad r_0 = 1-\kappa \delta . \end{aligned}$$

Let \(k\ge 4\) be an integer and \(\omega =e(1/k)\). Then

$$\begin{aligned} \mathbb {P}\left[ \mathrm{Hole}(r) \right] \leqslant \frac{ck^2\log k}{(\kappa - 1)^2 \delta ^{4}} \cdot \sup _{\tau \in \mathbb {T}} \mathbb {P}\left[ \sum _{j=1}^k \log | F(\tau \omega ^j r_0)| \leqslant k \log |F(0)| + C \right] + \langle \mathrm{negligible\ terms} \rangle . \end{aligned}$$

Proof of Lemma 9

Outside the negligible event \( \mathrm{Hole}(r){\setminus }\Omega _\mathtt{g}\) we have the bound

$$\begin{aligned} \max _{(1-\frac{1}{2} \delta )\bar{\mathbb {D}}} |F| \leqslant \sigma _F^3 \leqslant \frac{C}{\delta ^{3L/2}}. \end{aligned}$$

According to Lemma 5 applied with \(\gamma =1+\tfrac{1}{2} (\kappa -1)\), we have

$$\begin{aligned} \mathrm{Hole}(r)\bigcap \Omega _\mathtt{g} \subset \left\{ \min _{(1-\gamma \delta )\bar{\mathbb {D}}}\, |F| \ge \exp \left[ -C (\kappa - 1)^{-1} \delta ^{-3} \right] \right\} . \end{aligned}$$

Therefore, letting \(m= \exp \left[ -C (\kappa - 1)^{-1} \delta ^{-3} \right] \), we get

$$\begin{aligned} \mathbb {P}\left[ \mathrm{Hole}(r) \right] \leqslant \mathbb {P}\left[ m \leqslant \min _{(1-\gamma \delta )\bar{\mathbb {D}}}\, |F| \leqslant \max _{(1-\gamma \delta )\bar{\mathbb {D}}}\, |F| \leqslant m^{-1} \right] + \langle \mathrm{negligible\ terms} \rangle . \end{aligned}$$

Applying Lemma 8 to the function \(F\left( (1-\gamma \delta ) z \right) \) with \(P \equiv 1\) and

$$\begin{aligned} \rho = \frac{r_0}{(1-\gamma \delta )} = 1 - \frac{(\kappa -\gamma )\delta }{1-\gamma \delta }, \end{aligned}$$

we find that

$$\begin{aligned}&\left\{ m \leqslant \min _{(1-\gamma \delta )\bar{\mathbb {D}}}\, |F| \leqslant \max _{(1-\gamma \delta )\bar{\mathbb {D}}}\, |F| \leqslant m^{-1} \right\} \\&\qquad \subset \bigcup _{\tau :\tau ^K=1} \left\{ \sum _{j=1}^k \log \left| F(\tau \omega ^j r_0) \right| \leqslant k \log |F(0)| + \frac{C}{k}\right\} \end{aligned}$$

with some

$$\begin{aligned} K \leqslant \frac{Ck^2}{(\kappa -\gamma )\delta }\, \log \frac{k}{m(\kappa -\gamma )\delta } \leqslant \frac{Ck^2\log k}{(\kappa -1)^2\delta ^4}, \end{aligned}$$

completing the proof. \(\square \)

Note that everywhere except for Sect. 6 we use this lemma with \(\kappa =2\).

3.4 The covariance matrix

We will constantly exploit the fact that the covariance matrix of the random variables \( F(\omega ^j r)\), \(\omega = e(1/N)\), \(1\leqslant j \leqslant N\), has a simple structure. This is valid for general Gaussian Taylor series, as the following lemma details.

Lemma 10

Let F be any Gaussian Taylor series of the form (1) with radius of convergence R and let \(z_j = re(j/N)\) for \(r<R\) and \(j=0, \ldots , N-1\). Consider the covariance matrix \(\Sigma = \Sigma (r, N)\) of the random variables \(F(z_0)\), ..., \(F(z_{N-1})\), that is,

$$\begin{aligned} \Sigma _{jk} = \mathbb {E}\left[ F(z_j) \overline{F(z_k) }\right] = \sum _{n\ge 0} a_n^2 r^{2n} e((j-k)n/N). \end{aligned}$$

Then, the eigenvalues of \(\Sigma \) are

$$\begin{aligned} \lambda _m = N\, \sum _{n \equiv m\, (N)} a_n^2 r^{2n}, \qquad m=0, \ldots , N-1, \end{aligned}$$

where \(n\equiv m\, (N)\) denotes that n is congruent to m modulo N.

Proof of Lemma 10

Observe that

$$\begin{aligned} F(z_j) = \sum _{m=0}^{N-1}\, \sum _{n \equiv m\, (N)}\, \zeta _n a_n r^n e(jn/N) =\sum _{m=0}^{N-1}\, e(jm/N)\, \sum _{n\equiv m\, (N)}\, \zeta _n a_n r^n. \end{aligned}$$

Define the \(N \times N\) matrix U by

$$\begin{aligned} U_{jm} = \frac{1}{\sqrt{N}}\, e(jm/N), \qquad 0 \leqslant j, m \leqslant N-1, \end{aligned}$$

and the vector \(\Upsilon \) by

$$\begin{aligned} \Upsilon _m = \sqrt{N}\, \sum _{n\equiv m\, (N)}\, \zeta _n a_n r^n. \end{aligned}$$

Then U is a unitary matrix (it is the discrete Fourier transform matrix) and the components of \(\Upsilon \) are independent complex Gaussian random variables with

$$\begin{aligned} \mathbb {E}\left[ \, |\Upsilon _m|^2\, \right] = N \sum _{n\equiv m\, (N)}\, a_n^2 r^{2n}. \end{aligned}$$

Finally, note that

$$\begin{aligned} \begin{pmatrix} F(z_0) \\ F(z_1) \\ \vdots \\ F(z_{N-1}) \end{pmatrix} = U \Upsilon . \end{aligned}$$

Hence, the covariance matrices of \( ( F(z_0)\), ..., \(F(z_{N-1}))\) and of \(\Upsilon \) have the same set of eigenvalues. \(\square \)
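A numerical illustration of the lemma for the hyperbolic coefficients (a sketch; N, r, L and the truncation level are arbitrary choices): build \(\Sigma \) from its definition and compare its spectrum with the formula for \(\lambda _m\).

```python
import numpy as np

L, r, N, n_terms = 1.5, 0.9, 8, 4000
a2 = np.ones(n_terms)
for k in range(1, n_terms):
    a2[k] = a2[k - 1] * (k - 1 + L) / k        # a_n^2 for the hyperbolic GAF
w = a2 * r ** (2 * np.arange(n_terms))         # a_n^2 r^{2n}; the truncated tail is tiny

# Sigma_{jk} = sum_n w_n e((j-k)n/N)
d = np.subtract.outer(np.arange(N), np.arange(N)).ravel()
Sigma = (w * np.exp(2j * np.pi * np.outer(d, np.arange(n_terms)) / N)).sum(axis=1).reshape(N, N)

lam = np.array([N * w[m::N].sum() for m in range(N)])  # eigenvalues from Lemma 10
print(np.allclose(np.sort(np.linalg.eigvalsh(Sigma)), np.sort(lam)))  # True
```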

3.5 Negative moments of Gaussian random variables

In the following lemmas \(\zeta \) is a standard complex Gaussian random variable. Recall that, for \(\theta <2\),

$$\begin{aligned} \mathbb {E}\left[ \, |\zeta |^{-\theta }\, \right] = \Gamma \left( 1-\tfrac{1}{2}\, \theta \right) , \end{aligned}$$
(25)

where \(\Gamma \) is Euler’s Gamma-function.
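Since \(|\zeta |^2\) has the standard exponential distribution, (25) is simply the Gamma integral \(\int _0^\infty s^{-\theta /2}e^{-s}\,\mathrm{d}s\). A quick Monte Carlo check (a sketch; reliable only for \(\theta <1\), where \(|\zeta |^{-\theta }\) has finite variance):

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(0)
zeta = (rng.standard_normal(10 ** 6) + 1j * rng.standard_normal(10 ** 6)) / np.sqrt(2)
for theta in (0.25, 0.5, 0.75):
    print(theta, np.mean(np.abs(zeta) ** (-theta)), gamma(1 - theta / 2))
```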

Lemma 11

There exists a numerical constant \(C>0\) such that, for every \(t>0\) and \(0<\theta \leqslant 1\),

$$\begin{aligned} \sup _{w\in \mathbb {C}} \mathbb {E}\left[ \, \left| w+\frac{\zeta }{t} \right| ^{-\theta } \, \right] \leqslant t^\theta (1+C\theta ). \end{aligned}$$

Proof of Lemma 11

Write

$$\begin{aligned} \mathbb {E}\left[ \, \left| w+\frac{\zeta }{t} \right| ^{-\theta } \, \right] = \frac{1}{\pi }\, \int _{\mathbb {C}} \left| w+\frac{z}{t} \right| ^{-\theta } e^{-|z|^2}\, \mathrm{d}m(z), \end{aligned}$$

where m is the planar Lebesgue measure, and use the Hardy–Littlewood rearrangement inequality, noting that the symmetric decreasing rearrangement of \( \left| w+\tfrac{z}{t} \right| ^{-\theta } \) is \( \left| \frac{z}{t}\right| ^{-\theta } \), and that \(e^{-|z|^2}\) is already symmetric and decreasing. This yields \(\mathbb {E}\left[ \, \left| w+\frac{\zeta }{t} \right| ^{-\theta } \, \right] \leqslant \mathbb {E}\left[ \, |\zeta /t|^{-\theta }\, \right] = t^\theta \, \Gamma \left( 1-\tfrac{1}{2}\theta \right) \leqslant t^\theta (1+C\theta )\) for \(0<\theta \leqslant 1\), by (25).\(\square \)

Lemma 12

For each \(\tau >0\) there exists a \(C(\tau )>0\) such that for every integer \(n\ge 1\),

$$\begin{aligned} \sup _{|t|\ge \tau } \mathbb {E}\left[ \, \left| \log \left| 1+ \frac{\zeta }{t} \right| \, \right| ^n \, \right] \leqslant C(\tau ) n!. \end{aligned}$$

Proof of Lemma 12

By the symmetry of \(\zeta \) we may assume that \(t > 0\). Put \(X=\left| 1+\tfrac{\zeta }{t} \right| \) and write

$$\begin{aligned} \mathbb {E}\left[ \left| \log X \right| ^n \right] = \mathbb {E}\left[ \left( \log \tfrac{1}{X} \right) ^n {1\mathrm{l}}_{\{X\leqslant \frac{1}{2}\} }\right] + \mathbb {E}\left[ \left( \log \tfrac{1}{X} \right) ^n {1\mathrm{l}}_{\{\frac{1}{2}< X\leqslant 1\} } \right] + \mathbb {E}\left[ \left( \log X \right) ^n {1\mathrm{l}}_{\{X > 1\}} \right] . \end{aligned}$$

We have

$$\begin{aligned} \mathbb {E}\left[ \left( \log \tfrac{1}{X} \right) ^n {1\mathrm{l}}_{\{X\leqslant \frac{1}{2}\} }\right]&= \frac{1}{\pi }\, \int _{\{ | 1+\frac{z}{t}|\leqslant \frac{1}{2}\}} \left( \log \left| 1+\frac{z}{t} \right| ^{-1} \right) ^n e^{-|z|^2}\, \mathrm{d}m(z) \\&\leqslant \frac{e^{-t^2/4}}{\pi }\, \int _{\{| 1+\frac{z}{t}|\leqslant \frac{1}{2}\}} \left( \log \left| 1+\frac{z}{t} \right| ^{-1} \right) ^n\, \mathrm{d}m(z) \\&= \frac{e^{-t^2/4}t^2}{\pi }\, \int _{\{|w|\leqslant \frac{1}{2}\}} \left( \log \frac{1}{|w|}\right) ^n\, \mathrm{d}m(w) \\&\leqslant C\, \int _0^{1/2} \left( \log \frac{1}{s} \right) ^n s\,\mathrm{d}s \\&= C \int _{\log 2}^\infty x^n e^{-2x}\, \mathrm{d}x \leqslant C n!. \end{aligned}$$

In addition,

$$\begin{aligned} \mathbb {E}\left[ \left( \log \tfrac{1}{X} \right) ^n {1\mathrm{l}}_{\{\frac{1}{2}< X\leqslant 1\} } \right] \leqslant \left( \log 2 \right) ^n < 1. \end{aligned}$$

Finally, for \(t\ge \tau \),

$$\begin{aligned}&\mathbb {E}\left[ \left( \log X \right) ^n {1\mathrm{l}}_{\{X > 1\}} \right] \leqslant \mathbb {E}\left[ (X-1)_+^n \right] \leqslant \mathbb {E}\left[ |X-1|^n \right] \\&\quad \leqslant \mathbb {E}\left[ \frac{|\zeta |^n}{t^n} \right] \mathop {=}\limits ^{(25)} \frac{\Gamma (\frac{1}{2} n + 1)}{t^n} \leqslant \tau ^{-n}n!, \end{aligned}$$

completing the proof. \(\square \)

Lemma 13

For \(t>0\),

$$\begin{aligned} \mathbb {E}\left[ \log \left| 1+ \frac{\zeta }{t} \right| \right] > \frac{e^{-t^2}}{2(t^2+1)}. \end{aligned}$$

Proof of Lemma 13

We have

$$\begin{aligned} \mathbb {E}\left[ \log \left| 1+ \frac{\zeta }{t} \right| \right]&= \frac{1}{\pi }\, \int _{\mathbb {C}} \log \left| 1+ \frac{z}{t} \right| \, e^{-|z|^2}\, \mathrm{d}m(z) \\&= 2 \int _0^\infty e^{-r^2} r\, \int _{-\pi }^\pi \log \left| 1+\frac{re^{\mathrm{i}\theta }}{t} \right| \, \frac{\mathrm{d}\theta }{2\pi } \, \mathrm{d}r \\&= 2 \int _0^\infty \log _+ \left( \frac{r}{t} \right) \, e^{-r^2} r\, \mathrm{d}r \\&= \frac{1}{2}\, \int _{t^2}^\infty \frac{e^{-u}}{u}\, \mathrm{d}u = I. \end{aligned}$$

Integrating by parts once again, we see that

$$\begin{aligned} I = \frac{e^{-t^2}}{2t^2} - \int _{t^2}^\infty \frac{e^{-u}}{2u^2}\, \mathrm{d}u > \frac{e^{-t^2}}{2t^2} - \frac{1}{t^2}\, I, \end{aligned}$$

whence,

$$\begin{aligned} I > \frac{e^{-t^2}}{2(t^2+1)}, \end{aligned}$$

completing the proof.\(\square \)
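In fact, the proof computes the expectation in closed form, \(\mathbb {E}\left[ \log \left| 1+\zeta /t \right| \right] = \tfrac{1}{2}\int _{t^2}^\infty u^{-1}e^{-u}\,\mathrm{d}u\), which is \(\tfrac{1}{2}E_1(t^2)\) in terms of the exponential integral. A Monte Carlo check (a sketch; the sample size is arbitrary, and agreement is only up to a statistical error of order \(10^{-4}\)):

```python
import numpy as np
from scipy.special import exp1  # E_1(x) = int_x^infty e^{-u}/u du

rng = np.random.default_rng(0)
zeta = (rng.standard_normal(10 ** 6) + 1j * rng.standard_normal(10 ** 6)) / np.sqrt(2)
for t in (0.5, 1.0, 2.0):
    print(t, np.log(np.abs(1 + zeta / t)).mean(), exp1(t * t) / 2)
```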

Lemma 14

There exist numerical constants \(c, C > 0\) such that, for every \(t>0\) and \(0\leqslant \theta \leqslant \tfrac{1}{2}\),

$$\begin{aligned} \mathbb {E}\left[ \left| 1+ \frac{\zeta }{t} \right| ^{-\theta } \right] \leqslant 1 - c\theta \, \frac{e^{-t^2}}{1+t^2} + C\theta ^2. \end{aligned}$$

Proof of Lemma 14

Lemma 11 yields that there exist \(c, \tau >0\) such that, for every \(0<t\leqslant \tau \) and \(0\leqslant \theta \leqslant \tfrac{1}{2}\), we have

$$\begin{aligned} \mathbb {E}\left[ \left| 1+ \frac{\zeta }{t} \right| ^{-\theta } \right] \leqslant 1 - c\theta . \end{aligned}$$

Thus, we need to consider the case \(t\ge \tau \). Write

$$\begin{aligned} \left| 1+ \frac{\zeta }{t} \right| ^{-\theta } = \exp \left( -\theta \log \left| 1+ \frac{\zeta }{t} \right| \right) = 1 + \sum _{n\ge 1} \frac{\left( -\theta \log |1+\frac{\zeta }{t} |\right) ^n}{n!}. \end{aligned}$$

Using Lemma 12, we get

$$\begin{aligned}&\mathbb {E}\left[ \left| 1+ \frac{\zeta }{t} \right| ^{-\theta } \right] \leqslant 1 - \theta \, \mathbb {E}\left[ \log \left| 1+ \frac{\zeta }{t} \right| \right] + \sum _{n\ge 2} \frac{\theta ^n}{n!}\, \mathbb {E}\left[ \left| \log \left| 1+ \frac{\zeta }{t} \right| \right| ^n \right] \\&\quad \mathop {\leqslant }\limits ^{0\leqslant \theta \leqslant \frac{1}{2}}1 - \theta \, \mathbb {E}\left[ \log \left| 1+ \frac{\zeta }{t} \right| \right] + C(\tau )\,\frac{\theta ^2}{1-\theta } \leqslant 1 - \theta \, \mathbb {E}\left[ \log \left| 1+ \frac{\zeta }{t} \right| \right] + 2C(\tau )\theta ^2. \end{aligned}$$

Applying Lemma 13, we get the result. \(\square \)

Lemma 15

Suppose that \((\eta _j)_{1\leqslant j \leqslant N}\) are complex Gaussian random variables with covariance matrix \(\Sigma \). Then, for \(0\leqslant \theta < 2\),

$$\begin{aligned} \mathbb {E}\Bigg [ \prod _{j=1}^N \frac{1}{|\eta _j|^\theta }\,\Bigg ] \leqslant \frac{1}{\det \Sigma }\, \left( \Lambda ^{\left( 1-\tfrac{1}{2} \theta \right) } \cdot \Gamma \left( 1-\tfrac{1}{2}\,\theta \right) \right) ^N , \end{aligned}$$

where \(\Lambda \) is the maximal eigenvalue of \(\Sigma \).

Proof of Lemma 15

We have

$$\begin{aligned} \mathbb {E}\left[ \prod _{j=1}^N \frac{1}{|\eta _j|^\theta }\,\right]&= \frac{1}{\pi ^N \det \Sigma }\, \int _{\mathbb {C}^N} \prod _{j=1}^N \frac{1}{|Z_j|^\theta }\, e^{-\langle \Sigma ^{-1}Z, Z\rangle }\, \mathrm{d}m(Z_1) \ldots \mathrm{d}m(Z_N) \\&\leqslant \frac{1}{\pi ^N \det \Sigma }\, \int _{\mathbb {C}^N} \prod _{j=1}^N \frac{1}{|Z_j|^\theta }\, e^{-\Lambda ^{-1} |Z|^2}\, \mathrm{d}m(Z_1) \ldots \mathrm{d}m(Z_N) \\&= \frac{1}{\det \Sigma }\, \left( \frac{1}{\pi }\int _{\mathbb {C}} |z|^{-\theta }\, e^{-\Lambda ^{-1}|z|^2}\, \mathrm{d}m(z) \right) ^N \\&\mathop {=}\limits ^{(25)} \frac{1}{\det \Sigma }\, \left( \Lambda ^{\left( 1-\tfrac{1}{2} \theta \right) } \cdot \Gamma \left( 1-\tfrac{1}{2}\,\theta \right) \right) ^N, \end{aligned}$$

proving the lemma. \(\square \)

4 Upper bound on the hole probability for \(0< L < 1\)

Now we are ready to prove the upper bound on the hole probability in Theorem 1 in the case \(0< L < 1\), that is, the lower bound on \(-\log \mathbb {P}[\mathrm{Hole}(r)]\) in part (i). Throughout the proof, we use the parameters

$$\begin{aligned} L<\alpha <1, \quad r_0 = 1-2\delta , \quad N=\left[ \delta ^{-\alpha } \right] , \quad \omega = e(1/N), \end{aligned}$$

where

$$\begin{aligned} \frac{1}{2} \leqslant r < 1, \quad \delta = 1 - r. \end{aligned}$$

In many instances, we assume that r is sufficiently close to 1, that is, that \(\delta \) is sufficiently small.

It will be convenient to separate the constant term from the function F, letting

$$\begin{aligned} F=F(0)+G. \end{aligned}$$

4.1 Splitting the function G

We define two independent GAFs \(G_1\) and \(G_2\) so that \(G=G_1+G_2\) and

  • \(G_1(\omega ^j r_0)\), \(1\leqslant j \leqslant N\), are independent, identically distributed Gaussian random variables with variance close to \(\sigma _F^2(r_0)\);

  • \(G_2\) is a polynomial of degree \(N-1\) and, for \(|z|=r_0\), the variance of \(G_2(z)\) is much smaller than \(\sigma _F^2(r_0)\).

Let \((\zeta _n')_{n\ge 1}\) and \((\zeta _n'')_{1\leqslant n \leqslant N-1}\) be two independent sequences of independent standard complex Gaussian random variables, and let

$$\begin{aligned} G_1(z)&= \sum _{n=1}^\infty \zeta _n' b_n z^n, \\ G_2(z)&= \sum _{n=1}^{N-1} \zeta _n'' d_n z^n, \end{aligned}$$

where the non-negative coefficients \(b_n\) are defined by

$$\begin{aligned} {\left\{ \begin{array}{ll} b_{n}^2 r_{0}^{2n}=\displaystyle {\sum _{k\ge 1}}\left[ a_{kN}^2 r_0^{2kN}-a_{kN+n}^{2}r_0^{2(kN+n)}\right] ,\quad &{} 1\leqslant n\leqslant N-1,\\ b_{n}=a_{n}, &{} n\ge N. \end{array}\right. } \end{aligned}$$

Since the sequence \((a_n)\) does not increase, the expression in the brackets is positive. For the same reason, for \(1\leqslant n \leqslant N-1\), we have

$$\begin{aligned} a_n^2 r_0^{2n} + \sum _{k\ge 1} a_{kN+n}^2 r_0^{2(kN+n)} = \sum _{k\ge 0} a_{kN+n}^2 r_0^{2(kN+n)} \ge \sum _{k\ge 1} a_{kN}^2 r_0^{2kN}, \end{aligned}$$

whence, for these values of n we have \(b_n\leqslant a_n\). The coefficients \(d_n \ge 0\) are defined by \( a_n^2 = b_n^2 + d_n^2\). This definition implies that the random Gaussian functions G and \( G_1 + G_2 \) have the same distribution, and we couple G, \(G_1\) and \(G_2\) (that is, we couple the sequences \((\zeta _n)\), \((\zeta _n')\) and \((\zeta _n'')\)) so that \(G = G_1+G_2\) almost surely.
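The construction is straightforward to implement (a minimal sketch; the parameter values and the truncation of the tail sums are ad hoc choices). Since \(w_n = a_n^2 r_0^{2n}\) is non-increasing for \(0<L<1\), the computed \(b_n^2\) are indeed non-negative; the script also compares \(\sigma _{G_1}^2(r_0)\) with \(\sigma _F^2(r_0)\), anticipating (26) and (27) in Lemma 16 below. Note that the variance deficit decays only like a small power of \(\delta \), so it is still visible at the value of \(\delta \) used here.

```python
import numpy as np

L, delta, alpha = 0.5, 1e-6, 0.55             # 0 < L < 1 and L < alpha < 1
r0, N = 1 - 2 * delta, int(delta ** (-alpha))
n_terms = 2_500_000                           # truncation: r0^(2*n_terms) is negligible

k = np.arange(1, n_terms)
a2 = np.concatenate(([1.0], np.cumprod((k - 1 + L) / k)))  # a_n^2, non-increasing (L < 1)
w = a2 * r0 ** (2 * np.arange(n_terms))                    # w_n = a_n^2 r0^{2n}

tail = w[N::N].sum()                          # sum_{k>=1} a_{kN}^2 r0^{2kN}
b2w = np.array([tail - w[n::N][1:].sum() for n in range(1, N)])  # b_n^2 r0^{2n}
print((b2w >= 0).all())                       # True: the b_n are well defined
print(N * tail, w.sum())                      # sigma_{G_1}^2(r0) vs sigma_F^2(r0) = (1-r0^2)^{-L}
```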

Lemma 16

For any \(\tau \in \mathbb {T}\), the random variables \(( G_1(\omega ^j \tau r_0) )\), \(1\leqslant j \leqslant N\), are independent, identically distributed \(\mathcal N_{\mathbb {C}}(0, \sigma _{G_1}^2(r_0))\) with

$$\begin{aligned} \sigma _{G_1}^2(r_0) = N\, \sum _{k\ge 1} a_{kN}^2 r_0^{2kN}. \end{aligned}$$
(26)

In addition, we have

$$\begin{aligned} 0 \leqslant \sigma _F^2(r_0) - \sigma _{G_1}^2(r_0) \leqslant C \sigma _F^{2-c}(r_0). \end{aligned}$$
(27)

Proof of Lemma 16

Applying Lemma 10 to the function \(G_1\) evaluated at \(\tau z\) we see that the eigenvalues of the covariance matrix of the random variables \((G_1(\omega ^j\tau r_0))_{1\leqslant j \leqslant N}\) are all equal to \(N\, \sum _{k\ge 1} a_{kN}^2 r_0^{2kN}\). Hence, the covariance matrix of these Gaussian random variables is diagonal, that is, they are independent, and the relation (26) holds.

To prove estimate (27), observe that

$$\begin{aligned} \sigma _{G_1}^2(r_0) \leqslant \sum _{n\ge 0} a_n^2 r_0^{2n} = \sigma _{F}^2(r_0), \end{aligned}$$

and since the sequence \((a_n)\) does not increase, we have

$$\begin{aligned} \sigma _{G_1}^2(r_0) \ge \sum _{n\ge N} a_n^2 r_0^{2n} = \sigma _F^2(r_0) - \sum _{n=0}^{N-1} a_n^2 r_0^{2n}. \end{aligned}$$

Recalling (see (7)) that

$$\begin{aligned} a_n^2 = \frac{\Gamma (n+L)}{\Gamma (L)\Gamma (n+1)} \sim \frac{n^{L-1}}{\Gamma (L)}, \qquad n\rightarrow \infty , \end{aligned}$$

we see that

$$\begin{aligned} \sum _{n=0}^{N-1} a_n^2 r_0^{2n} \leqslant C\, \sum _{n=0}^{N-1} (1+n)^{L-1} \leqslant C N^L \leqslant C \sigma _F^{2-c}(r_0), \end{aligned}$$

proving the lemma. \(\square \)

We henceforth condition on F(0) and \(G_2\) (that is, on \(\zeta _0\) and \((\zeta _n'')_{1\leqslant n \leqslant N-1}\)), and write

$$\begin{aligned} \mathbb {E}^{F(0), G_2} \left[ \ . \ \right]&= \mathbb {E}\left[ \ . \ \big |\, F(0), G_2 \right] , \\ \mathbb {P}^{F(0), G_2} \left[ \ . \ \right]&= \mathbb {P}\left[ \ . \ \big |\, F(0), G_2 \right] . \end{aligned}$$

In the following section, we consider the case when \(|F(0)|/\sigma _F\) is sufficiently small.

4.2 \(|F(0)|\leqslant a \sigma _F\)

We show that the intersection of the hole event with the event \(\{|F(0)|\leqslant a \sigma _F\}\) is negligible for a sufficiently small.

By Lemma 9 (with \(\kappa =2\) and \(k=N\)), it suffices to estimate the probability

$$\begin{aligned} \mathbb {P}^{F(0), G_2} \left[ \sum _{j=1}^N \log \left| 1+\frac{G(\omega ^j\tau r_0)}{F(0)} \right| \leqslant C \right] = \mathbb {P}^{F(0), G_2} \left[ \prod _{j=1}^N \left| 1+\frac{G(\omega ^j\tau r_0)}{F(0)} \right| ^{-1} \ge e^{-C} \right] \end{aligned}$$

with some fixed \(\tau \in \mathbb {T}\). By Chebyshev’s inequality, the right-hand side is bounded by

$$\begin{aligned} C\,\mathbb {E}^{F(0), G_2} \left[ \prod _{j=1}^N \left| 1+\frac{G(\omega ^j\tau r_0)}{F(0)} \right| ^{-1} \right] = C\, \prod _{j=1}^N \mathbb {E}^{F(0), G_2} \left[ \left| 1+\frac{G(\omega ^j\tau r_0)}{F(0)} \right| ^{-1} \right] \end{aligned}$$

where in the last equality we used the independence of \(\left( G_1(\omega ^j \tau r_0) \right) _{1\leqslant j \leqslant N}\) proven in Lemma 16, and the independence of \(G_1\) from F(0) and \(G_2\). Then, applying Lemma 11 with \(t=|F(0)|/\sigma _{G_1}(r_0)\) and using (27), we get, for each \(1\leqslant j \leqslant N\),

$$\begin{aligned}&\mathbb {E}^{F(0), G_2} \left[ \left| 1+\frac{G(\omega ^j\tau r_0)}{F(0)} \right| ^{-1} \right] \\&\qquad = \mathbb {E}^{F(0), G_2} \left[ \left| 1+\frac{G_2(\omega ^j\tau r_0)}{F(0)} + \frac{G_1(\omega ^j\tau r_0)}{F(0)} \right| ^{-1} \right] \leqslant C\, \frac{|F(0)|}{\sigma _{G_1}} \leqslant C \cdot a < \frac{1}{2}, \end{aligned}$$

provided that the constant a is sufficiently small.

Thus,

$$\begin{aligned} \mathbb {P}^{F(0), G_2} \left[ \prod _{j=1}^N \left| 1+\frac{G(\omega ^j\tau r_0)}{F(0)} \right| ^{-1} \ge e^{-C} \right] \leqslant C 2^{-N} \leqslant Ce^{-c\delta ^{-\alpha }}. \end{aligned}$$

Since \(\alpha >L\), this case gives a negligible contribution to the probability of the hole event.

4.3 \(a\sigma _F \leqslant |F(0)| \leqslant A\sigma _F\sqrt{\log \tfrac{1}{(1-r)\sigma _F^2}}\)

Our strategy is to make the constant A as large as possible while keeping the intersection of the hole event with the event \(\left\{ |F(0)|\leqslant A\sigma _F\sqrt{\log \tfrac{1}{(1-r)\sigma _F^2}}\,\right\} \) negligible. Then, up to negligible terms, we will bound \(\mathbb {P}[\text {Hole}(r)]\) by

$$\begin{aligned} \mathbb {P}\Big [ |F(0)|>A\sigma _F\sqrt{\log \tfrac{1}{(1-r)\sigma _F^2}} \Big ] = \exp \Big [-A^2\sigma _F^2 \log \tfrac{1}{(1-r)\sigma _F^2} \Big ]. \end{aligned}$$

We fix a small positive parameter \(\varepsilon \), put

$$\begin{aligned} J = \left\{ 1\leqslant j \leqslant N:\left| 1+\frac{G_2(\omega ^j r_0)}{F(0)} \right| \ge 1+2\varepsilon \right\} , \end{aligned}$$

and introduce the event \(\left\{ |J|\leqslant (1-2\eta ) N \right\} \), where \(\eta =\delta ^{\alpha _0}\), \(\alpha _0\) is a sufficiently small positive constant, and |J| denotes the size of the set J. We will estimate separately the probabilities of the events

$$\begin{aligned} \text {Hole}(r)\bigcap \left\{ a\sigma _F \leqslant |F(0)| \leqslant A\sigma _F\sqrt{\log \tfrac{1}{(1-r)\sigma _F^2}} \right\} \bigcap \left\{ |J|\leqslant (1-2\eta ) N \right\} \end{aligned}$$

and

$$\begin{aligned} \text {Hole}(r)\bigcap \left\{ a\sigma _F \leqslant |F(0)| \leqslant A\sigma _F\sqrt{\log \tfrac{1}{(1-r)\sigma _F^2}} \right\} \bigcap \{ |J|> (1-2\eta ) N \}. \end{aligned}$$

Given \(\tau \in \mathbb {T}\), put

$$\begin{aligned} J_-(\tau )&= \left\{ 1\leqslant j \leqslant N:\left| 1+\frac{G_2(\omega ^j \tau r_0)}{F(0)} \right| \ge 1+\varepsilon \right\} , \\ J_+(\tau )&= \left\{ 1\leqslant j \leqslant N:\left| 1+\frac{G_2(\omega ^j \tau r_0)}{F(0)} \right| \ge 1+3\varepsilon \right\} . \end{aligned}$$

Our first goal is to show that, outside a negligible event, the random sets \(J_\pm (\tau )\) are similar in size to the set J.

4.3.1 Controlling \(G_2\) at the points \(\left( \omega ^j\tau r_0 \right) _{1\leqslant j \leqslant N}\)

Lemma 17

Given \(\alpha _0>0\), there is an event \(\widetilde{\Omega }=\widetilde{\Omega }(r)\) with \(\mathbb {P}^{F(0)} \left[ \widetilde{\Omega } \right] \leqslant e^{-\delta ^{-(L+c)}}\) on \(\{|F(0)|\ge a\sigma _F\}\), for r sufficiently close to 1, such that, for any \(\tau \in \mathbb {T}\), on the complement \(\widetilde{\Omega }^c\) we have

$$\begin{aligned} \begin{aligned} |J_-(\tau )|\ge |J| - \eta N,\\ |J_+(\tau )|\leqslant |J| + \eta N \end{aligned} \end{aligned}$$
(28)

with \(\eta =\delta ^{\alpha _0}\).

Proof of Lemma 17

We take small positive constants \(\alpha _1\) and \(\alpha _2\) satisfying

$$\begin{aligned} \begin{aligned}&\alpha _1<\min \left\{ \frac{(1-\alpha )L}{2}, \frac{(1-L)\alpha _2}{2}\right\} \quad \text {and}\\&\alpha _1 + 2\alpha _2 < \alpha , \end{aligned} \end{aligned}$$
(29)

let \(M=[\delta ^{-\alpha _2}]\), write \(G_2~=~G_3 + G_4\) as follows

$$\begin{aligned} G_3(z)&=\sum _{n=1}^M \zeta _n'' d_n z^n, \\ G_4(z)&=\sum _{n=M+1}^{N-1} \zeta _n'' d_n z^n, \end{aligned}$$

and define the events

$$\begin{aligned} \Omega _3&= \bigcup _{n=1}^M \left\{ |\zeta _n''| \ge \delta ^{-\alpha _1} |F(0)| \right\} , \\ \Omega _4&= \left\{ \sum _{n=M+1}^{N-1} |\zeta _n''|^2 d_n^2 \ge \delta ^{2 \alpha _1} |F(0)|^2\right\} . \end{aligned}$$

First, we show that (under our running assumption \(|F(0)|\ge a\sigma _F\)) the events \(\Omega _3\) and \(\Omega _4\) are negligible. Then, outside these events, we estimate the functions \(G_3\) and \(G_4\) at the points \(\left( \omega ^j\tau r_0 \right) _{1\leqslant j \leqslant N}\). We may assume without loss of generality that \(|\arg (\tau )|\leqslant \pi /N\), since rotating \(\tau \) by \(2\pi k/N\) leaves \(|J_\pm (\tau )|\) unchanged. This is carried out in Lemmas 18–21 below.

Lemma 18

For r sufficiently close to 1, we have \(\mathbb {P}^{F(0)}\left[ \Omega _3 \right] \leqslant e^{-\delta ^{-(L+c)}}\) on \(\{|F(0)|\ge a\sigma _F\}\).

Proof of Lemma 18

Using the union bound, we get

$$\begin{aligned} \mathbb {P}^{F(0)}\left[ \Omega _3 \right]\leqslant & {} \sum _{n=1}^M \mathbb {P}^{F(0)}\left[ |\zeta _n''|\ge \delta ^{-\alpha _1} |F(0)| \right] \leqslant \sum _{n=1}^M \mathbb {P}^{F(0)}\left[ |\zeta _n''|\ge \delta ^{-\alpha _1} a \sigma _F \right] \\\leqslant & {} Me^{-c a^2\delta ^{-(L+2 \alpha _1)}}. \end{aligned}$$

Since \(M\leqslant \delta ^{-\alpha _2}\), the right-hand side is at most \(e^{-\delta ^{-(L+c)}}\) once r is sufficiently close to 1. \(\square \)

Lemma 19

For r sufficiently close to 1, we have \(\mathbb {P}^{F(0)}\left[ \Omega _4 \right] \leqslant e^{-\delta ^{-(L+c)}}\) on \(\{|F(0)|\ge a\sigma _F\}\).

Proof of Lemma 19

Put

$$\begin{aligned} X=\sum _{n=M+1}^{N-1} |\zeta _n''|^2 d_n^2. \end{aligned}$$

Let \(\lambda >0\) satisfy \(\lambda d_n^2<1\) for \(M+1\leqslant n\leqslant N-1\). Then,

$$\begin{aligned} \mathbb {P}\left[ X\ge t \right]= & {} \mathbb {P}\left[ e^{\lambda X} \ge e^{\lambda t} \right] \leqslant e^{-\lambda t} \mathbb {E}\left[ \prod _{n=M+1}^{N-1} e^{\lambda |\zeta _n''|^2 d_n^2} \right] \\= & {} e^{-\lambda t} \prod _{n=M+1}^{N-1} \frac{1}{1-\lambda d_n^2} = \exp \left[ -\lambda t + \sum _{n=M+1}^{N-1} \log \frac{1}{1-\lambda d_n^2}\right] . \end{aligned}$$

We take \(\lambda = 1/(2 a_M^2)\), recalling that \(a_n^2 = b_n^2 + d_n^2\) with \((a_n)\) non-increasing, so that \(\lambda d_n^2 \leqslant a_n^2/(2a_M^2) \leqslant \tfrac{1}{2}\) for \(n\ge M\). Then,

$$\begin{aligned} \log \frac{1}{1-\lambda d_n^2} \leqslant 2\lambda d_n^2. \end{aligned}$$

Thus,

$$\begin{aligned} \mathbb {P}\left[ X\ge t \right] \leqslant \exp \left[ -\lambda \left( t - 2\sum _{n=M+1}^{N-1} d_n^2 \right) \right] \leqslant \exp \left[ - \tfrac{1}{2} \lambda t\right] , \end{aligned}$$
(30)

provided that

$$\begin{aligned} t\ge 4 \sum _{n=M+1}^{N-1} d_n^2. \end{aligned}$$
(31)

We use estimate (30) with \(t=\delta ^{2 \alpha _1}|F(0)|^2\). Note that \( \delta ^{2 \alpha _1}|F(0)|^2 \ge c\delta ^{-(L-2 \alpha _1)} \) and that

$$\begin{aligned} 4 \sum _{n=M+1}^{N-1} d_n^2 \leqslant 4 \sum _{n=M+1}^{N-1} a_n^2 \leqslant C N^L < C \delta ^{-\alpha L}, \end{aligned}$$

which satisfies (31) by (29). Finally, using again (29), we get

$$\begin{aligned}&\mathbb {P}\left[ X \ge \delta ^{2 \alpha _1} |F(0)|^2 \right] \leqslant \exp \left[ -\tfrac{1}{(2a_M)^2} \cdot \delta ^{2 \alpha _1} |F(0)|^2 \right] \\&\quad \leqslant \exp \left[ -c M^{1-L} \delta ^{2 \alpha _1} |F(0)|^2 \right] \leqslant \exp \left[ -ca^2 \delta ^{-L-(1-L) \alpha _2 +2 \alpha _1} \right] \leqslant \exp \left[ -\delta ^{-(L+c)}\right] . \end{aligned}$$

\(\square \)
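The only probabilistic input in the proof of Lemma 19 is the exponential moment \(\mathbb {E}\left[ e^{\lambda |\zeta |^2}\right] = (1-\lambda )^{-1}\), \(\lambda <1\), for a standard complex Gaussian \(\zeta \), applied with \(\lambda d_n^2\) in place of \(\lambda \). A minimal numerical check (ours, for illustration only):

```python
# Numerical check (ours) of the exponential-moment identity used above:
# |zeta''_n|^2 is exponentially distributed with mean 1, hence
# E[exp(lambda * |zeta''_n|^2)] = 1/(1 - lambda) for lambda < 1.
import numpy as np

rng = np.random.default_rng(1)
absq = rng.exponential(size=10**6)      # distribution of |zeta''_n|^2
for lam in (0.1, 0.25, 0.4):            # kept below 1/2 so the Monte Carlo
    print(lam, np.exp(lam * absq).mean(), 1.0 / (1.0 - lam))  # variance is finite
```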

Lemma 20

Suppose that r is sufficiently close to 1. Then on the event \(\Omega _3^c\) we have

$$\begin{aligned} \sup _{|\arg \tau |<\pi N^{-1}}\, \sup _{1\leqslant j \leqslant N} \left| G_3(\omega ^j\tau r_0) - G_3(\omega ^jr_0) \right| < \frac{1}{2}\, \varepsilon \,|F(0)|. \end{aligned}$$

Proof of Lemma 20

For each \(1\leqslant j \leqslant N\), since \(0 \leqslant d_n \leqslant 1\), we have

$$\begin{aligned} \left| G_3(\omega ^j\tau r_0)- G_3(\omega ^jr_0) \right|\leqslant & {} \max _{\bar{\mathbb {D}}} |G_3'| \cdot \frac{\pi }{N} \leqslant M\, \sum _{n=1}^M |\zeta _n''|\cdot |d_n| \cdot \frac{\pi }{N} \\\leqslant & {} \frac{CM^2}{N}\cdot \delta ^{- \alpha _1} |F(0)| \leqslant C\delta ^{-2 \alpha _2+\alpha - \alpha _1} |F(0)| <\tfrac{1}{2} \varepsilon |F(0)|, \end{aligned}$$

provided that \(2 \alpha _2 + \alpha _1 < \alpha \), and that r is sufficiently close to 1. \(\square \)

Lemma 21

Suppose that r is sufficiently close to 1. Then, on the event \(\Omega _4^c\), for any \(\tau \in \mathbb {T}\), the cardinality of the set

$$\begin{aligned} \left\{ 1\leqslant j \leqslant N:\max \left( |G_4(\omega ^jr_0)|, |G_4(\omega ^j\tau r_0)| \right) \ge \tfrac{1}{4}\, \varepsilon |F(0)| \right\} \end{aligned}$$

does not exceed \(\eta N\).

Proof of Lemma 21

We have

$$\begin{aligned} \frac{1}{N}\, \sum _{j=1}^{N}&|G_4(\omega ^j r_0 )|^2 = \frac{1}{N}\, \sum _{j=1}^{N} \left| \sum _{n=M+1}^{N-1} \zeta _n'' d_n (\omega ^j r_0)^n \right| ^2 \\&=\sum _{n_1, n_2 = M+1}^{N-1} \zeta _{n_1}'' \overline{\zeta _{n_2}''} d_{n_1} d_{n_2} r_0^{n_1+n_2}\, \frac{1}{N}\,\sum _{j=1}^N \omega ^{j(n_1 - n_2)} \\&= \sum _{n=M+1}^{N-1} |\zeta _n''|^2 d_n^2 r_0^{2n} \\&\leqslant \sum _{n=M+1}^{N-1} |\zeta _n''|^2 d_n^2 \leqslant \delta ^{2 \alpha _1} |F(0)|^2\qquad \text {on }\Omega _4^c, \end{aligned}$$

and similarly,

$$\begin{aligned} \frac{1}{N}\, \sum _{j=1}^{N} |G_4(\omega ^j \tau r_0 )|^2 \leqslant \delta ^{2 \alpha _1} |F(0)|^2\qquad \text {on }\Omega _4^c. \end{aligned}$$

Hence, on \(\Omega _4^c\), the cardinality of the set we are interested in does not exceed

$$\begin{aligned} \frac{32\,\delta ^{2 \alpha _1}N}{\varepsilon ^2} < \delta ^{\alpha _0}N = \eta N, \end{aligned}$$

provided that \(2 \alpha _1<\alpha _0\) and that r is sufficiently close to 1. \(\square \)
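The computation in the proof of Lemma 21 rests on the discrete orthogonality relation \(\frac{1}{N}\sum _{j=1}^N \omega ^{j(n_1-n_2)} = {1\mathrm{l}}_{\{n_1 = n_2\}}\), valid whenever \(|n_1-n_2|<N\); this is exactly why it matters that \(G_4\) involves only the degrees \(M+1, \ldots , N-1\). A short numerical sketch (ours; the parameter values are arbitrary, and the random coefficients \(c_n\) stand in for \(\zeta _n'' d_n\)):

```python
# Numerical sketch (ours) of the discrete orthogonality underlying the
# computation above: averaging |G_4|^2 over the N-th roots of unity
# (scaled by r0) recovers sum_n |c_n|^2 r0^{2n} exactly, because all
# degrees of G_4 lie strictly between M and N.
import numpy as np

rng = np.random.default_rng(2)
N, M, r0 = 64, 5, 0.9
n = np.arange(M + 1, N)                              # degrees of G_4
c = rng.standard_normal(n.size) + 1j * rng.standard_normal(n.size)
pts = r0 * np.exp(2j * np.pi * np.arange(1, N + 1) / N)
vals = np.array([np.sum(c * z**n) for z in pts])
# the two printed numbers agree up to floating-point error
print(np.mean(np.abs(vals) ** 2), np.sum(np.abs(c) ** 2 * r0 ** (2 * n)))
```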

Now, Lemma 17 is a straightforward consequence of Lemmas 18, 19, 20, and 21. \(\square \)

4.3.2 \(|J|> (1-2\eta )N\), \(\eta =\delta ^{\alpha _0}\)

In this section we show that the intersection of the hole event with the event

$$\begin{aligned} \Big \{a\sigma _F \leqslant |F(0)| \leqslant A\sigma _F\sqrt{\log \tfrac{1}{(1-r)\sigma _F^2}}\Big \}\cap \Big \{|J|> (1-2\eta )N\Big \} \end{aligned}$$
(32)

is negligible. Taking into account the fact that \(J, J_-(\tau )\) and \(J_+(\tau )\) are measurable with respect to F(0) and \(G_2\), Lemma 9 (with \(\kappa =2\) and \(k=N\)) and Lemma 17 show that it suffices to estimate uniformly in \(\tau \in \mathbb {T}\), on the intersection of the events (32) and (28), the probability

$$\begin{aligned} \mathbb {P}^{F(0), G_2} \left[ \sum _{j=1}^N \log \left| 1+\frac{G(\omega ^j\tau r_0)}{F(0)} \right| \leqslant C \right] . \end{aligned}$$

Take a positive \(\theta _1=\theta _1(r)\), tending to 0 as \(r\rightarrow 1\), which will be chosen later. Applying Chebyshev’s inequality, we see that the last probability does not exceed

$$\begin{aligned}&C\,\mathbb {E}^{F(0), G_2} \left[ \prod _{j=1}^N \left| 1+\frac{G(\omega ^j\tau r_0)}{F(0)} \right| ^{-\theta _1} \right] \nonumber \\&\quad = C\,\mathbb {E}^{F(0), G_2} \left[ \prod _{j=1}^N \left| 1+\frac{G_2(\omega ^j\tau r_0)}{F(0)} +\frac{G_1(\omega ^j\tau r_0)}{F(0)}\right| ^{-\theta _1} \right] \nonumber \\&\quad = C\, \prod _{j=1}^N \mathbb {E}^{F(0), G_2} \left[ \left| 1+\frac{G_2(\omega ^j\tau r_0)}{F(0)} +\frac{G_1(\omega ^j\tau r_0)}{F(0)} \right| ^{-\theta _1} \right] . \end{aligned}$$
(33)

Once again, we used the independence of \(\left( G_1(\omega ^j \tau r_0) \right) _{1\leqslant j \leqslant N}\) and the independence of \(G_1\) from \(F(0)\) and \(G_2\).

For \(j\in J_-(\tau )\), we have

$$\begin{aligned} \left| 1+\frac{G_2(\omega ^j\tau r_0)}{F(0)} +\frac{G_1(\omega ^j\tau r_0)}{F(0)} \right|= & {} \left| 1+\frac{G_2(\omega ^j\tau r_0)}{F(0)} \right| \cdot \left| 1+\frac{G_1(\omega ^j\tau r_0)}{F(0)+G_2(\omega ^j \tau r_0)} \right| \\&\quad \ge (1+\varepsilon ) \left| 1+\frac{G_1(\omega ^j\tau r_0)}{F(0)+G_2(\omega ^j \tau r_0)} \right| . \end{aligned}$$

Hence, by Lemma 14, for such j we obtain

$$\begin{aligned}&\mathbb {E}^{F(0), G_2} \left[ \left| 1+\frac{G_2(\omega ^j\tau r_0)}{F(0)} +\frac{G_1(\omega ^j\tau r_0)}{F(0)} \right| ^{-\theta _1} \right] \\&\quad \leqslant (1+\varepsilon )^{-\theta _1} (1+C\theta _1^2) \leqslant e^{-c\varepsilon \theta _1 +C\theta _1^2} < e^{-c\varepsilon \theta _1}, \end{aligned}$$

provided that r is so close to 1 that \( \theta _1 (r)\) is much smaller than \(\varepsilon \) (recall that \(\varepsilon \) is small but fixed).

For \(j\notin J_-(\tau )\), using Lemma 11 (with \(t = \frac{|F(0)|}{\sigma _G} \leqslant CA\,\sqrt{\log \tfrac{1}{(1-r)\sigma _F^2}}\)), we get

$$\begin{aligned}&\mathbb {E}^{F(0), G_2} \left[ \left| 1+\frac{G(\omega ^j\tau r_0)}{F(0)} \right| ^{-\theta _1} \right] \\&\quad \leqslant (CA\,\sqrt{\log \tfrac{1}{(1-r)\sigma _F^2}})^{\theta _1} (1+C\theta _1) < C( \log \tfrac{1}{(1-r)\sigma _F^2} )^{\frac{1}{2} \theta _1}. \end{aligned}$$

Thus, (33) does not exceed

$$\begin{aligned} \exp \left[ -c\varepsilon \theta _1 |J_-(\tau )| + \Big (C + \frac{1}{2} \theta _1 \log \log \tfrac{1}{(1-r)\sigma _F^2} \Big ) (N-|J_-(\tau )|) \right] . \end{aligned}$$

As we are on the intersection of the events (32) and (28) we have \(|J_-(\tau )| \ge (1-3\eta )N\). Therefore, the expression in the last displayed formula does not exceed

$$\begin{aligned} \exp \left[ -c\varepsilon \theta _1 N +C \eta N (1 + \theta _1 \log \log \tfrac{1}{(1-r)\sigma _F^2}) \right] \mathop {=}\limits ^\mathrm{def} E. \end{aligned}$$

Then, letting \(\theta _1 = \delta ^c\) with \(c<\min \{\alpha _0, \alpha - L\}\), and using that

$$\begin{aligned} \theta _1 \log \log \tfrac{1}{(1-r)\sigma _F^2} \leqslant \delta ^c \left( \log \log \frac{1}{\delta } + C \right) \rightarrow 0 \quad \text {as}\ \delta \rightarrow 0, \end{aligned}$$

we see that \(E \leqslant \exp \left[ -c\varepsilon \theta _1 N \right] \) (recall that \(\eta = \delta ^{\alpha _0}\)) and conclude that the event

$$\begin{aligned} \text {Hole}(r) \bigcap \left\{ a\sigma _F \leqslant |F(0)| \leqslant A\sigma _F\sqrt{\log \tfrac{1}{(1-r)\sigma _F^2}}\right\} \bigcap \left\{ |J|> (1-2\eta ) N \right\} \end{aligned}$$

is negligible.

4.3.3 \(|J| \leqslant (1-2\eta )N\), \(\eta =\delta ^{\alpha _0}\)

Here we show that the intersection of the hole event with the event

$$\begin{aligned} \Big \{a\sigma _F \leqslant |F(0)| \leqslant A\sigma _F\sqrt{\log \tfrac{1}{(1-r)\sigma _F^2}} \Big \}\cap \Big \{|J|\leqslant (1-2\eta )N\Big \} \end{aligned}$$
(34)

is negligible, provided that \( A^2 < \frac{1}{2}\).

In this case, our starting point is Lemma 8 which we apply with the polynomial \(P=F(0)+G_2\). Combined with Lemmas 5 and 17, it tells us that it suffices to estimate uniformly in \(\tau \in \mathbb {T}\), on the intersection of the events (34) and (28), the probability

$$\begin{aligned} \mathbb {P}^{F(0), G_2} \left[ \sum _{j=1}^N \log \left| \frac{F}{P} (\omega ^j\tau r_0)\right| \leqslant C \right] . \end{aligned}$$

Noting that

$$\begin{aligned} \frac{F}{P} = 1+ \frac{G_1}{F(0)+G_2}, \end{aligned}$$

we rewrite this expression as

$$\begin{aligned}&\mathbb {P}^{F(0), G_2} \left[ \prod _{j=1}^N \big | 1 + \frac{G_1(\omega ^j\tau r_0)}{F(0)+G_2(\omega ^j\tau r_0)} \big | \leqslant e^{C} \right] \\&\quad = \mathbb {P}^{F(0), G_2} \left[ \prod _{j=1}^N \big | 1+ \frac{G_1(\omega ^j\tau r_0)}{F(0)+G_2(\omega ^j\tau r_0)} \big |^{-\theta _2} \ge e^{-C\theta _2} \right] \\&\quad \leqslant e^{C\theta _2}\, \mathbb {E}^{F(0), G_2} \left[ \prod _{j=1}^N \big | 1+ \frac{G_1(\omega ^j\tau r_0)}{F(0)+G_2(\omega ^j\tau r_0)} \big |^{-\theta _2} \right] . \end{aligned}$$

The positive \(\theta _2=\theta _2(r)\) (again tending to 0 as \(r\rightarrow 1\)) will be chosen later. By the independence of \(\left( G_1(\omega ^j \tau r_0) \right) _{1\leqslant j \leqslant N}\) proven in Lemma 16, and the independence of \(G_1\) from \(F(0)\) and \(G_2\), it suffices to estimate the product

$$\begin{aligned} \prod _{j=1}^N \mathbb {E}^{F(0), G_2} \left[ \left| 1+ \frac{G_1(\omega ^j\tau r_0)}{F(0)+G_2(\omega ^j\tau r_0)} \right| ^{-\theta _2} \right] . \end{aligned}$$

First we consider the terms with \(j\in J_+(\tau )\). In this case, by Lemma 14, we have

$$\begin{aligned} \mathbb {E}^{F(0), G_2} \left[ \left| 1+ \frac{G_1(\omega ^j\tau r_0)}{F(0)+G_2(\omega ^j\tau r_0)} \right| ^{-\theta _2} \right] \leqslant 1+ C\theta _2^2 < e^{C\theta _2^2}. \end{aligned}$$
(35)

For the terms with \(j\notin J_+(\tau )\), we apply Lemma 14 with

$$\begin{aligned} t= & {} \frac{|F(0)+G_2(\omega ^j\tau r_0)|}{\sigma _{G_1}} \leqslant (1+3\varepsilon ) \frac{|F(0)|}{\sigma _{G_1}} \\= & {} (1+3\varepsilon ) \frac{\sigma _F}{\sigma _{G_1}}\, \frac{|F(0)|}{\sigma _F} \mathop {\leqslant }\limits ^{(27)}(1+4\varepsilon ) A \sqrt{\log \tfrac{1}{(1-r)\sigma _F^2}}, \qquad \end{aligned}$$

provided that r is sufficiently close to 1. Then, by Lemma 14 (using that \((1-r)\sigma _F^2\) is sufficiently small)

$$\begin{aligned} \mathbb {E}^{F(0), G_2} \left[ \left| 1+ \frac{G_1(\omega ^j\tau r_0)}{F(0)+G_2(\omega ^j\tau r_0)} \right| ^{-\theta _2} \right] \leqslant 1 - c\theta _2 \left( (1-r)\sigma _F^2\right) ^{(1+10\varepsilon ) A^2} + C\theta _2^2. \end{aligned}$$

Assuming that

$$\begin{aligned} \theta _2 =o(1) \left( (1-r)\sigma _F^2\right) ^{(1+10\varepsilon ) A^2}, \end{aligned}$$
(36)

we continue our estimate as follows

$$\begin{aligned} \leqslant 1 - c\theta _2 \left( (1-r)\sigma _F^2\right) ^{(1+10\varepsilon ) A^2} \leqslant \exp \left[ -c\theta _2 \left( (1-r)\sigma _F^2\right) ^{(1+10\varepsilon ) A^2} \right] . \end{aligned}$$
(37)

Multiplying the bounds (35) and (37), we get

$$\begin{aligned}&\prod _{j=1}^N \mathbb {E}^{F(0), G_2} \left[ \left| 1+ \frac{G_1(\omega ^j\tau r_0)}{F(0)+G_2(\omega ^j\tau r_0)} \right| ^{-\theta _2} \right] \\&\quad \leqslant \exp \left[ C\theta _2^2 |J_+(\tau )| - c\theta _2 \left( (1-r)\sigma _F^2\right) ^{(1+10\varepsilon ) A^2} (N-|J_+(\tau )|) \right] . \end{aligned}$$

As we are on the intersection of the events (34) and (28), we have \(|J_+(\tau )| \leqslant |J| + \eta N \leqslant (1-\eta )N\). Then, the expression in the exponent on the right-hand side of the previous displayed equation does not exceed

$$\begin{aligned} \left( C\theta _2^2 - c\eta \theta _2 \left( (1-r)\sigma _F^2\right) ^{(1+10\varepsilon ) A^2} \right) N. \end{aligned}$$

Letting \( \theta _2 = c \eta \left( (1-r)\sigma _F^2\right) ^{(1+10\varepsilon ) A^2} \) (which satisfies our previous requirement (36) since \(\eta \rightarrow 0\) as \(r\rightarrow 1\)), we estimate the previous expression by

$$\begin{aligned} -c \eta ^2 \left( (1-r)\sigma _F^2\right) ^{2(1+10\varepsilon ) A^2} N = -c \delta ^{2 \alpha _0+2(1+10\varepsilon ) A^2 (1-L) -\alpha }. \end{aligned}$$

To make the event with probability bounded by \(\exp \left[ -c \delta ^{2 \alpha _0+2(1+10\varepsilon ) A^2 (1-L) -\alpha } \right] \) negligible, we need to be sure that \(\alpha - 2(1+10\varepsilon ) A^2(1-L) - 2 \alpha _0 > L\). Since the constants \(\varepsilon \) and \(\alpha _0\) can be made arbitrarily small, while \(\alpha \) can be made arbitrarily close to 1, we conclude that the event

$$\begin{aligned} \text {Hole}(r)\bigcap \left\{ a\sigma _F \leqslant |F(0)| \leqslant A\sigma _F\sqrt{\log \tfrac{1}{(1-r)\sigma _F^2}}\right\} \bigcap \{|J| \leqslant (1-2\eta )N\} \end{aligned}$$

is negligible, provided that \( A^2 < \frac{1}{2}\).

We conclude that the event

$$\begin{aligned} \text {Hole}(r)\bigcap \left\{ a \sigma _F<|F(0)|<A\sigma _F\sqrt{\log \tfrac{1}{(1-r)\sigma _F^2}}\right\} \end{aligned}$$

is negligible whenever \( A^2 < \frac{1}{2}\), and therefore, combined with the bound of Sect. 4.2, for any such A,

$$\begin{aligned} \mathbb {P}\left[ \text {Hole}(r) \right]&\leqslant \mathbb {P}\left[ |F(0)|\ge A\sigma _F\sqrt{\log \tfrac{1}{(1-r)\sigma _F^2}}\, \right] + \langle \text {negligible terms} \rangle \\&= \exp \left( -A^2\sigma _F^2 \log \tfrac{1}{(1-r)\sigma _F^2}\right) + \langle \text {negligible terms} \rangle , \end{aligned}$$

whence, letting \(r\uparrow 1\),

$$\begin{aligned} \mathbb {P}\left[ \text {Hole}(r) \right]&\leqslant \exp \left[ - \frac{1-o(1)}{2}\, \sigma _F^2 \log \tfrac{1}{(1-r)\sigma _F^2} \right] \\&= \exp \left[ - \frac{1-o(1)}{2} \cdot \frac{1}{(2\delta )^{L}} \cdot (1-L) \log \frac{1}{\delta } \right] \\&= \exp \left[ - \frac{1-L-o(1)}{2^{L+1}} \cdot \frac{1}{\delta ^{L}} \, \log \frac{1}{\delta } \right] \end{aligned}$$

completing the proof of the upper bound in the case \(0<L<1\). \(\square \)

5 Lower bound on the hole probability for \(0<L<1\)

As before, let \(F=F(0)+G\). Fix \(\varepsilon >0\), set

$$\begin{aligned} M = \frac{\sqrt{1-L+2\varepsilon }}{(1-r^2)^{L/2}} \cdot \sqrt{\log \frac{1}{1-r}}, \end{aligned}$$

define the events

$$\begin{aligned} \Omega _1 = \left\{ |F(0)| > M \right\} , \qquad \Omega _2 = \left\{ \max _{r\bar{\mathbb {D}}} |G| \leqslant M \right\} = \left\{ \max _{r \mathbb {T}} |G| \leqslant M \right\} , \end{aligned}$$

and observe that \(\text {Hole}(r) \supset \Omega _1 \bigcap \Omega _2\) and that \(\Omega _1\) and \(\Omega _2\) are independent.

Put

$$\begin{aligned} N= & {} \left[ (1-r)^{-1-\varepsilon } \right] , \quad \omega = e(1/N), \quad z_j = r\omega ^j, \ 1\leqslant j \leqslant N,\\ M'= & {} \frac{\sqrt{1-L+\varepsilon }}{(1-r^2)^{L/2}}\, \sqrt{\log \frac{1}{1-r}}, \quad M'' = \frac{A}{(1-r^2)^{(L+2)/2}}\, \sqrt{\log \frac{1}{1-r}}, \end{aligned}$$

with a sufficiently large positive constant A that will be chosen later, and define the events

$$\begin{aligned} \Omega _3 = \left\{ \max _{1\leqslant j \leqslant N } |G(z_j)| \leqslant M' \right\} , \quad \Omega _4 = \left\{ \max _{r\mathbb {T}} |G'| \leqslant M'' \right\} . \end{aligned}$$

Then \( \Omega _2 \supset \Omega _3 \bigcap \Omega _4\), provided that r is sufficiently close to 1 (depending on \(\varepsilon \)). Hence,

$$\begin{aligned} \mathbb {P}\left[ \text {Hole}(r) \right] \ge \mathbb {P}\left[ \Omega _1 \right] \cdot \mathbb {P}\left[ \Omega _3 \bigcap \Omega _4\right] . \end{aligned}$$

For \(K \in \mathbb {N}\), we put \(\lambda = e\left( 1/2^K\right) \), \(w_k = \lambda ^k r\), \(1 \leqslant k \leqslant 2^K\) and define

$$\begin{aligned} \Omega _4^K = \left\{ \max _{1\leqslant k \leqslant 2^K } |G'(w_k)| \leqslant M'' \right\} . \end{aligned}$$

Notice that \(\Omega _4^K \downarrow \Omega _4\) as \(K\rightarrow \infty \). In order to bound \(\mathbb {P}\left[ \, \Omega _3 \bigcap \Omega _4 \, \right] \) from below we use Hargé’s version [6] of the Gaussian correlation inequality:

Theorem 2

(G. Hargé) Let \(\gamma \) be a Gaussian measure on \(\mathbb {R}^n\), let \(A\subset \mathbb {R}^n\) be a convex symmetric set, and let \(B\subset \mathbb {R}^n\) be an ellipsoid (that is, a set of the form \(\{X\in \mathbb {R}^n:\langle CX, X \rangle \leqslant 1\}\), where C is a non-negative symmetric matrix). Then \( \gamma (A\bigcap B) \ge \gamma (A) \gamma (B)\).
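Theorem 2 is easy to probe numerically. The following Monte Carlo sketch (ours, and of course not a proof) illustrates the inequality in a toy two-dimensional case, with A a symmetric strip and B a centred disk:

```python
# Monte Carlo illustration (ours) of Theorem 2: for a correlated
# Gaussian measure gamma on R^2, a symmetric strip A (convex, symmetric)
# and a centred disk B (an ellipsoid), one observes
# gamma(A & B) >= gamma(A) * gamma(B).
import numpy as np

rng = np.random.default_rng(3)
cov = [[1.0, 0.7], [0.7, 1.0]]
x = rng.multivariate_normal([0.0, 0.0], cov, size=10**6)
in_A = np.abs(x[:, 0]) <= 1.0            # the strip {|x_1| <= 1}
in_B = (x**2).sum(axis=1) <= 1.5         # the disk {|x|^2 <= 1.5}
print((in_A & in_B).mean(), in_A.mean() * in_B.mean())
```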

We apply this inequality N times to the Gaussian measure on \(\mathbb {R}^{2\left( N - j + 1 + 2^K\right) }\), \(1 \leqslant j \leqslant N\), generated by

$$\begin{aligned} X_j^r=\text {Re}\,G\left( z_j\right) ,&\ldots , X_N^r=\text {Re}\,G\left( z_N\right) ,\\ X_j^i=\text {Im}\,G\left( z_j\right) ,&\ldots , X_N^i=\text {Im}\,G\left( z_N\right) , \end{aligned}$$

and

$$\begin{aligned} X_{N+1}^r=\text {Re}\,G'(w_1),&\ldots , X_{N+2^{K}}^r=\text {Re}\,G'\left( w_{2^{K}}\right) ,\\ X_{N+1}^i=\text {Im}\,G'(w_1),&\ldots , X_{N+2^{K}}^i=\text {Im}\,G'\left( w_{2^{K}}\right) , \end{aligned}$$

and the sets

$$\begin{aligned} A_j&= \left\{ \max _{j + 1\leqslant k \leqslant N} |G(z_k)| \leqslant M^\prime \right\} \cap \left\{ \max _{1\leqslant k \leqslant 2^K} |G^\prime (w_k)| \leqslant M^{\prime \prime } \right\} ,\\ B_j&= \left\{ |G(z_j)|^2 = (X_j^r)^2 + (X_j^i)^2 \leqslant (M^\prime )^2 \right\} . \end{aligned}$$

Thus, we get

$$\begin{aligned} \mathbb {P}[\Omega _3 \cap \Omega _4^K] \ge \prod _{j=1}^N \mathbb {P}\left[ |G(z_j)| \leqslant M'\, \right] \cdot \mathbb {P}[\Omega _4^K]. \end{aligned}$$

Hence, by the monotone convergence of \(\Omega _4^K\) to \(\Omega _4\),

$$\begin{aligned} \mathbb {P}\left[ \Omega _3 \bigcap \Omega _4\right] \ge \mathbb {P}\left[ \max _{r\bar{\mathbb {D}}} |G'| \leqslant M'' \right] \cdot \prod _{j=1}^N \mathbb {P}\left[ |G(z_j)| \leqslant M' \right] . \end{aligned}$$

For each \(1\leqslant j \leqslant N\), \(G(z_j)\) is a complex Gaussian random variable with variance at most \((1-r^2)^{-L}\), so that

$$\begin{aligned} \prod _{j=1}^N \mathbb {P}\left[ |G(z_j)| \leqslant M' \right]\ge & {} \left( 1-e^{-M'^2(1-r^2)^L} \right) ^N = \left( 1-e^{(1-L+\varepsilon )\,\log (1-r)} \right) ^N\\= & {} \left( 1-(1-r)^{1-L+\varepsilon } \right) ^N \ge \exp \left[ -c N (1-r)^{1-L+\varepsilon } \right] \\= & {} \exp \left[ -c (1-r)^{-1-\varepsilon +1-L+\varepsilon } \right] = \exp \left[ -c (1-r)^{-L} \right] \end{aligned}$$

with r sufficiently close to 1.

Next note that

$$\begin{aligned} G'(z) = \sum _{n\ge 0} \zeta _{n+1} (n+1)a_{n+1} z^n, \end{aligned}$$

and therefore we have

$$\begin{aligned} \max _{s\bar{\mathbb {D}}} \mathbb {E}\left[ |G'|^2 \right] \leqslant C (1-s^2)^{-(L+2)}, \qquad 0<s<1. \end{aligned}$$

Thus, noting that

$$\begin{aligned} M'' \ge cA \sqrt{\log (1-r)^{-1}} \cdot \sigma _{G'}(\tfrac{1}{2} (1+r)) \end{aligned}$$

and applying Lemma 2, we see that

$$\begin{aligned} \mathbb {P}\left[ \max _{r\bar{\mathbb {D}}} |G'| > M'' \right] \leqslant \frac{C}{1-r} \exp \left[ c A^2 \log (1-r) \right] < \tfrac{1}{2}, \end{aligned}$$

provided that the constant A in the definition of \(M''\) is chosen sufficiently large. Thus, \( \mathbb {P}\left[ \max _{r\bar{\mathbb {D}}} |G'| \leqslant M'' \right] \ge \tfrac{1}{2}\). Then, piecing everything together, we get

$$\begin{aligned} \mathbb {P}\left[ \text {Hole}(r) \right]&\ge \mathbb {P}\left[ \Omega _1 \right] \cdot \mathbb {P}\left[ \Omega _3 \bigcap \Omega _4\right] \\&\ge \mathbb {P}\left[ \Omega _1 \right] \cdot \mathbb {P}\left[ \max _{r\bar{\mathbb {D}}} |G'| \leqslant M'' \right] \cdot \prod _{j=1}^N \mathbb {P}\left[ |G(z_j)| \leqslant M' \right] \\&\ge \mathbb {P}\left[ \Omega _1 \right] \cdot \tfrac{1}{2}\, \exp \left[ -c (1-r)^{-L}\right] . \end{aligned}$$

Finally, the theorem follows from the fact that \( \mathbb {P}\left[ \Omega _1 \right] = e^{-M^2} \). \(\square \)

5.1 Remark

Having in mind the gap between the upper and lower bounds on the hole probability for \(0<L<1\), as given in Theorem 1, we note here that a different method would be required in order to improve the lower bound. Precisely, setting \(F = F(0) + G\) as before, we will show that

$$\begin{aligned}&\sup _M \mathbb {P}\left[ |F(0)|>M,\;\; \max _{r\bar{\mathbb {D}}} |G|\leqslant M\right] \nonumber \\&\qquad = \exp \left( -\frac{1-L+o(1)}{2^{L}}\, \frac{1}{(1-r)^L}\, \log \frac{1}{1-r}\right) , \quad \text {as } r\uparrow 1. \end{aligned}$$
(38)

Our proof above shows that this holds with \(\ge \) in place of the equality sign, and it remains to establish the opposite inequality. It is clear that

$$\begin{aligned} \mathbb {P}\left[ |F(0)|>M,\;\; \max _{r\bar{\mathbb {D}}} |G|\leqslant M\right] \leqslant \mathbb {P}\left[ |F(0)|>M\right] = e^{-M^2} \end{aligned}$$

and hence the opposite inequality need only be verified in the regime where

$$\begin{aligned} M\leqslant \frac{\sqrt{1-L-o(1)}}{(1-r^2)^{L/2}} \cdot \sqrt{\log \frac{1}{1-r}}, \quad \text {as } r\uparrow 1. \end{aligned}$$

Now let \(0<\varepsilon <\tfrac{1}{2} (1-L)\) and suppose that

$$\begin{aligned} M\leqslant \frac{\sqrt{1-L-2\varepsilon }}{(1-r^2)^{L/2}} \cdot \sqrt{\log \frac{1}{1-r}}. \end{aligned}$$
(39)

Set also \(N=\left[ (1-r)^{-1+\varepsilon }\right] \) and \(\omega = e(1/N)\). Let \(G_1\) and \(G_2\) be as in Sect. 4.1, so that \(G = G_1 + G_2\) and the random variables \((G_1(\omega ^j r))\), \(1\leqslant j\leqslant N\), are independent and identically distributed by Lemma 16. (The decomposition in Sect. 4.1 yields independent random variables at radius \(r_0 = 1 - 2(1-r)\) but may easily be modified to radius r (or indeed any radius).) Then,

$$\begin{aligned} \mathbb {P}\left[ |F(0)|>M,\;\; \max _{r\bar{\mathbb {D}}} |G|\leqslant M\right] \leqslant \mathbb {P}\left[ \max _{1\leqslant j\leqslant N} |G_1(\omega ^j r) +G_2(\omega ^j r)|\leqslant M\right] . \end{aligned}$$

We condition on \(G_2\) and write \(\mathbb {P}^{G_2}[\,\cdot \,]=\mathbb {P}[\,\cdot \mid G_2]\). Recalling that the \((G_1(\omega ^j r))\) are independent and applying the Hardy–Littlewood rearrangement inequality, we get

$$\begin{aligned}&\mathbb {P}^{G_2}\left[ \max _{1\leqslant j\leqslant N} |G_1(\omega ^j r) +G_2(\omega ^j r)|\leqslant M\right] \\&\quad = \prod _{j=1}^N \mathbb {P}^{G_2}\left[ |G_1(\omega ^j r) + G_2(\omega ^j r)|\leqslant M\, \right] \\&\quad \leqslant \prod _{j=1}^N \mathbb {P}^{G_2}\left[ |G_1(\omega ^j r)|\leqslant M\right] \\&\quad = \left( 1 - e^{-M^2 / \sigma _{G_1}^2(r)}\right) ^N\leqslant e^{-N\exp (-M^2 / \sigma _{G_1}^2(r))}. \end{aligned}$$

Since the right-hand side does not depend on \(G_2\), we can drop the conditioning on the left-hand side and finally get

$$\begin{aligned} \mathbb {P}\left[ |F(0)|>M,\;\; \max _{r\bar{\mathbb {D}}} |G|\leqslant M\right] \leqslant e^{-N\exp (-M^2 / \sigma _{G_1}^2(r))}. \end{aligned}$$

It remains to note that with our choice of N and using the upper bound (39) on M and the relation of \(\sigma _F^2(r)\) and \(\sigma _{G_1}^2(r)\) given in Lemma 16 we have

$$\begin{aligned} N\exp (-M^2 / \sigma _{G_1}^2(r))\ge \left( \frac{1}{1-r}\right) ^{L+\varepsilon -o(1)}, \quad \text {as } r\uparrow 1. \end{aligned}$$

We conclude that

$$\begin{aligned} \mathbb {P}\left[ |F(0)|>M,\;\; \max _{r\bar{\mathbb {D}}} |G|\leqslant M\right] \leqslant \exp \left[ -\left( \frac{1}{1-r}\right) ^{L+\varepsilon -o(1)} \right] , \quad \text {as } r\uparrow 1, \end{aligned}$$

for all M satisfying the upper bound (39). As \(\varepsilon \) can be taken arbitrarily small, this completes the proof of (38).

6 Upper bound on the hole probability for \(L>1\)

6.1 Beginning the proof

Put \(\delta =1-r\), \(1 < \kappa \leqslant 2\) and \(r_0=1-\kappa \delta \). We take

$$\begin{aligned} N = \left[ \,\frac{L-1}{2\delta }\, \log \frac{1}{\delta }\,\right] \end{aligned}$$
(40)

and put \(\omega =e(1/N)\). By Lemma 9, it suffices to estimate the probability

$$\begin{aligned} \sup _{\tau \in \mathbb {T}}\, \mathbb {P}\left[ \sum _{j=1}^N \log |F(\omega ^j \tau r_0 )| \leqslant N \log |F(0)| + C \right] . \end{aligned}$$

Set

$$\begin{aligned} a = \left( \log \frac{1}{\delta }\right) ^{-\frac{1}{2}}. \end{aligned}$$
(41)

Discarding a negligible event, we assume that \( |F(0)| \leqslant \delta ^{-(\frac{1}{2} + a)} \). Let \(0<\theta <2\) be a parameter depending on \(\delta \) which we will choose later. Then,

$$\begin{aligned}&\mathbb {P}\left[ \sum _{j=1}^N \log |F(\omega ^j \tau r_0 )| \leqslant N \log |F(0)| + C \right] \\&\quad \leqslant \mathbb {P}\left[ \prod _{j=1}^N \left| F(\omega ^j \tau r_0 ) \right| ^{-\theta } \ge C \delta ^{N\theta (\frac{1}{2}+a)} \right] \\&\quad \leqslant C \delta ^{-N\theta (\frac{1}{2}+a)}\, \mathbb {E}\left[ \prod _{j=1}^N \left| F(\omega ^j \tau r_0 ) \right| ^{-\theta }\right] . \end{aligned}$$

Using Lemma 15, we can bound the expectation on the right by

$$\begin{aligned} \frac{1}{\det \Sigma }\, \Big ( \Lambda ^{\left( 1-\frac{1}{2}\theta \right) } \Gamma \left( 1-\tfrac{1}{2}\theta \right) \Big )^N , \end{aligned}$$

where \(\Sigma \) is the covariance matrix of \(\left( F(\omega ^j \tau r_0) \right) _{1\leqslant j \leqslant N} \) and \(\Lambda \) is the maximal eigenvalue of \(\Sigma \). Note that, since the distribution of F(z) is rotation invariant, the covariance matrix \(\Sigma \) does not depend on \(\tau \).

6.2 Estimating the eigenvalues of \(\Sigma \)

By Lemma 10, the eigenvalues of \(\Sigma \) are

$$\begin{aligned} \lambda _m(\Sigma ) = N\, \sum _{n\equiv m\, (N)} a_n^2 r_0^{2n} = N \left( a_m^2 r_0^{2m} + \sum _{j\ge 1} a_{m+jN}^2 r_0^{2(m+jN)} \right) , \qquad m=0, 1, \ldots , N-1. \end{aligned}$$

Now, for small \(\delta \) and \(j\ge 1\), we have

$$\begin{aligned}&\frac{a_{m+(j+1)N}^2 r_0^{2(m+(j+1)N)}}{a_{m+jN}^2 r_0^{2(m+jN)}} \leqslant C \left( \frac{m+(j+1)N}{m+jN} \right) ^{L-1}\, r_0^{2N} \\&\qquad = C \left( 1 + \frac{N}{m+jN} \right) ^{L-1}\, \left( 1-\kappa \delta \right) ^{2N} \leqslant C e^{-2\kappa \delta N}. \end{aligned}$$

Take \(\delta \) sufficiently small so that

$$\begin{aligned} \frac{a_{m+(j+1)N}^2 r_0^{2(m+(j+1)N)}}{a_{m+jN}^2 r_0^{2(m+jN)}} \leqslant \frac{1}{2}, \end{aligned}$$

which yields

$$\begin{aligned} \lambda _m(\Sigma )&\leqslant N \left( a_m^2 (1-\kappa \delta )^{2m} + a_{m+N}^2(1-\kappa \delta )^{2(m+N)} \sum _{j\ge 1} 2^{-(j-1)}\right) \\&= N \left( a_m^2 (1-\kappa \delta )^{2m} + 2\, a_{m+N}^2(1-\kappa \delta )^{2(m+N)} \right) . \end{aligned}$$

Put

$$\begin{aligned} M \mathop {=}\limits ^\mathrm{def} \frac{Lr_0^2 -1}{1-r_0^2} = (1+o(1))\, \frac{L-1}{2\kappa }\, \frac{1}{\delta } \leqslant \frac{C}{\delta }\qquad \text {as}\ \delta \rightarrow 0, \end{aligned}$$

and note that the sequence

$$\begin{aligned} m \mapsto a_m^2 r_0^{2m} = \frac{L(L+1)\, \ldots \, (L+m-1)}{m!}\, r_0^{2m} \end{aligned}$$

increases for \(m\leqslant [M]\) and decreases for \(m\ge [M]+1\). Thus, the maximal term of this sequence does not exceed

$$\begin{aligned} C M^{L-1} (1-\kappa \delta )^{2M} \leqslant \frac{C}{\delta ^{L-1}}, \end{aligned}$$

whence

$$\begin{aligned} \lambda _m(\Sigma ) \leqslant \frac{C N}{\delta ^{L-1}}, \qquad m=0, 1, \ldots , N-1. \end{aligned}$$

Also

$$\begin{aligned} \det (\Sigma )= & {} \prod _{m=0}^{N-1} \lambda _m(\Sigma ) \ge \prod _{m=0}^{N-1} Na_m^2 (1-\kappa \delta )^{2m} \ge N^N (1-\kappa \delta )^{N(N-1)} \prod _{m=0}^{N-1} c(m+1)^{L-1} \\\ge & {} (cN)^N (N!)^{L-1} (1-\kappa \delta )^{N(N-1)} = \exp \left[ (1+o(1)) (LN\log N-\kappa \delta N^2) \right] \end{aligned}$$

as \(\delta \rightarrow 0\).
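As a sanity check on Lemma 10 and on the estimates of this subsection, one can compare the numerically computed eigenvalues of the covariance matrix of \(\left( F(\omega ^j r_0)\right) _{1\leqslant j\leqslant N}\) with the formula \(\lambda _m(\Sigma ) = N\sum _{n\equiv m\,(N)} a_n^2 r_0^{2n}\). The sketch below is ours; the parameter values and the truncation cutoff n_max are hypothetical:

```python
# Sanity check (ours) of the eigenvalue formula of Lemma 10: at the
# equispaced points z_j = omega^j * r0 the covariance matrix of F_L is
# circulant, with eigenvalues N * sum_{n = m (mod N)} a_n^2 * r0^{2n}.
import numpy as np

L, r0, N, n_max = 1.5, 0.9, 16, 2000
k = np.arange(n_max)
a2 = np.ones(n_max)                                   # a_0^2 = 1
a2[1:] = np.cumprod((L + k[:-1]) / (k[:-1] + 1.0))    # a_n^2 = L(L+1)...(L+n-1)/n!
w = a2 * r0 ** (2 * k)                                # truncated Taylor weights
omega = np.exp(2j * np.pi / N)
Sigma = np.array([[np.sum(w * omega ** (k * (i - j))) for j in range(N)]
                  for i in range(N)])
numeric = np.sort(np.linalg.eigvalsh(Sigma))
formula = np.sort([N * w[m::N].sum() for m in range(N)])
print(np.max(np.abs(numeric - formula)))   # ~ 0 up to floating-point error
```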

6.3 Completing the proof

Finally,

$$\begin{aligned} \log \mathbb {E}\left[ \prod _{j=1}^N |F(\omega ^j\tau r_0)|^{-\theta }\right]&\leqslant -\log \det \Sigma + N\left( 1-\tfrac{1}{2}\theta \right) \log \Lambda + N \log \Gamma \left( 1-\tfrac{1}{2}\theta \right) \\&\leqslant -(1+o(1)) \left( LN\log N - \kappa \delta N^2 \right) \\&\qquad + N\left( 1-\tfrac{1}{2}\theta \right) \, \left( \log N + (L-1)\log \tfrac{1}{\delta } + O(1)\right) \\&\qquad + N \log \Gamma \left( 1-\tfrac{1}{2}\theta \right) , \end{aligned}$$

and then,

$$\begin{aligned}&\log \mathbb {P}\left[ \sum _{j=1}^N \log |F(\omega ^j \tau r_0 )| \leqslant N \log |F(0)| + C \right] \\&\quad \leqslant N\theta \left( \tfrac{1}{2}+a \right) \log \tfrac{1}{\delta } + O(1) + \log \mathbb {E}\left[ \prod _{j=1}^N |F(\omega ^j\tau r_0)|^{-\theta }\right] \\&\quad \leqslant N\theta \left( \tfrac{1}{2}+a \right) \log \tfrac{1}{\delta } + O(1) -(1+o(1)) \left( LN\log N - \kappa \delta N^2 \right) \\&\qquad + N\left( 1-\tfrac{1}{2}\theta \right) \, \left( \log N + (L-1)\log \tfrac{1}{\delta } + O(1)\right) + N \log \Gamma \left( 1-\tfrac{1}{2}\theta \right) \\&\quad \mathop {=}\limits ^\mathrm{def} (1 + o(1)) N \cdot P_N. \end{aligned}$$

We set \(\theta = 2 - a^2\), \(\kappa = 1 + \delta \), and continue to bound \(P_N\) [using the choices (40) and (41)]:

$$\begin{aligned} P_N&= (2-a^2)(\tfrac{1}{2} + a) \log \tfrac{1}{\delta }- L \log N + (1+ \delta )\delta N \\&\quad + \tfrac{a^2}{2} (\log N + (L-1)\log \tfrac{1}{\delta }) + \log \Gamma (\tfrac{a^2}{2})\\&= \log \tfrac{1}{\delta }+ O(\sqrt{\log \tfrac{1}{\delta }}) - L \log \tfrac{1}{\delta }\\&\quad + \tfrac{L-1}{2} \log \tfrac{1}{\delta }+ O(\delta \log \tfrac{1}{\delta }) + O(\log \log \tfrac{1}{\delta }) + O(1) + O(\log \log \tfrac{1}{\delta }) \\&= -\tfrac{L-1}{2} \log \tfrac{1}{\delta }(1 + o(1)), \quad \delta \rightarrow 0. \end{aligned}$$

Thus,

$$\begin{aligned}&\log \mathbb {P}\left[ \sum _{j=1}^N \log |F(\omega ^j \tau r_0 )| \leqslant N \log |F(0)| + C \right] \\&\quad \leqslant - (1+o(1))\, \frac{(L-1)^2}{4}\, \frac{1}{\delta }\, \left( \log \frac{1}{\delta } \right) ^2. \end{aligned}$$

From Lemma 9 we have

$$\begin{aligned} \log \mathbb {P}\left[ \mathrm{Hole}(r) \right]&\leqslant C\log \frac{1}{\delta } + 2\log \frac{1}{\kappa - 1} - (1+o(1))\, \frac{(L-1)^2}{4}\, \frac{1}{\delta }\, \left( \log \frac{1}{\delta } \right) ^2\\&=- (1+o(1))\, \frac{(L-1)^2}{4}\, \frac{1}{\delta }\, \left( \log \frac{1}{\delta } \right) ^2, \end{aligned}$$

by our choice of \(\kappa \), completing the proof of the upper bound in the case \(L>1\). \(\square \)

7 Lower bound on the hole probability for \(L>1\)

As before, let \(\delta =1-r\) and assume that r is sufficiently close to 1. Introduce the parameter

$$\begin{aligned} \tfrac{1}{2}<\alpha <1, \end{aligned}$$

and put

$$\begin{aligned} N = \left[ \, \frac{2L}{\delta }\, \log \frac{1}{\delta }\, \right] , \quad M = \frac{1}{\sqrt{\delta }}\, \left( \log \frac{1}{\delta }\right) ^\alpha . \end{aligned}$$

We now introduce the events

$$\begin{aligned}&\mathcal {E}_1 = \left\{ |\zeta _0|>M \right\} , \ \mathcal {E}_2 = \left\{ \max _{r\mathbb {T}} \left| \sum _{1\leqslant n \leqslant N } \zeta _n a_n z^n \right| \leqslant \frac{M}{2} \right\} , \\&\mathcal {E}_3 = \left\{ \max _{r\mathbb {T}} \left| \sum _{n>N} \zeta _n a_n z^n \right| \leqslant \frac{M}{2} \right\} . \end{aligned}$$

Then

$$\begin{aligned} \mathbb {P}\left[ \text {Hole}(r)\right] \ge \mathbb {P}\left[ \mathcal {E}_1 \right] \cdot \mathbb {P}\left[ \mathcal {E}_2 \right] \cdot \mathbb {P}\left[ \mathcal {E}_3 \right] . \end{aligned}$$

We have

$$\begin{aligned} \mathbb {P}\left[ \mathcal {E}_1 \right] = \exp \left[ -M^2 \right] = \exp \left[ - \frac{1}{\delta }\, \left( \log \frac{1}{\delta }\right) ^{2\alpha } \right] . \end{aligned}$$

In order to give a lower bound for \(\mathbb {P}\left[ \mathcal {E}_2 \right] \), we rely on a comparison principle between Gaussian analytic functions which might be of use in other contexts. To introduce this principle, let us say that a random analytic function G has the \(\mathrm {GAF}(b_n)\) distribution, for some sequence of complex numbers \((b_n)_{n\ge 0}\), if G has the same distribution as

$$\begin{aligned} z\mapsto \sum _{n\ge 0} \zeta _n b_n z^n,\quad z\in \mathbb {C}, \end{aligned}$$

where, as usual, \((\zeta _n)\) is a sequence of independent standard complex Gaussian random variables.

Lemma 22

Let \((b_n)\), \((c_n)\), \(n\ge 0\), be two sequences of complex numbers, such that \(|c_n| \leqslant |b_n|\) for all n, and put

$$\begin{aligned} Q = \prod _{n\ge 0} \left| \frac{c_n}{b_n}\right| , \end{aligned}$$

where we take the ratio \(c_n / b_n\) to be 1 if both \(b_n\) and \(c_n\) are zero. If \(Q > 0\), then there exists a probability space supporting a random analytic function G with the \(\mathrm {GAF}(b_n)\) distribution, and an event E satisfying \(\mathbb {P}[E] = Q^2\), such that, conditioned on the event E, the function G has the \(\mathrm {GAF}(c_n)\) distribution.

The proof of Lemma 22 uses the following simple property of Gaussian random variables.

Lemma 23

Let \(0 < \sigma \leqslant 1\). There exists a probability space supporting a standard complex Gaussian random variable \(\zeta \), and an event E satisfying \(\mathbb {P}[E] = \sigma ^2\), such that, conditioned on the event E, the random variable \(\zeta \) has the complex Gaussian distribution with variance \(\mathbb {E}[|\zeta |^2\mid E] = \sigma ^2\).

Proof of Lemma 23

We may assume that \(\sigma < 1\). Write

$$\begin{aligned} f(z) = \frac{1}{\pi }\exp (-|z|^2), \quad f_\sigma (z) = \frac{1}{\pi \sigma ^2}\exp \left( -\tfrac{1}{\sigma ^2} |z|^2 \right) , \quad z \in \mathbb {C}, \end{aligned}$$

for the density of a standard complex Gaussian, and the density of a complex Gaussian with variance \(\sigma ^2\), respectively. Observe that, since \(\sigma < 1\),

$$\begin{aligned} f = \sigma ^2 \cdot f_\sigma + (1-\sigma ^2) g_\sigma \end{aligned}$$
(42)

for some non-negative function \(g_\sigma \) with integral 1. Now, suppose that our probability space supports a complex Gaussian \(\zeta _\sigma \) with \(\mathbb {E}|\zeta _\sigma |^2 = \sigma ^2\), a random variable \(Y_\sigma \) with density function \(g_\sigma \), and a Bernoulli random variable \(I_\sigma \), satisfying \(\mathbb {P}[I_\sigma = 1] = 1 - \mathbb {P}[I_\sigma ~=~0] = \sigma ^2\), which is independent of both \(\zeta _\sigma \) and \(Y_\sigma \). Then (42) implies that the random variable

$$\begin{aligned} \zeta = I_\sigma \cdot \zeta _\sigma + (1-I_\sigma ) Y_\sigma \end{aligned}$$

has the standard complex Gaussian distribution. In this probability space, after conditioning on the event \(\{I_\sigma = 1\}\), the distribution of \(\zeta \) is that of a complex Gaussian with variance \(\mathbb {E}[|\zeta |^2 \mid I_\sigma = 1] = \sigma ^2\), as required. \(\square \)
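The decomposition (42) is also easy to verify numerically: reducing to the radial variable \(s=|z|^2\), the densities f and \(\sigma ^2 f_\sigma \) become \(e^{-s}\) and \(e^{-s/\sigma ^2}\), and one checks that \(g_\sigma \) is non-negative with total mass 1. A minimal sketch (ours):

```python
# Numerical check (ours) of the decomposition (42).  In the radial
# variable s = |z|^2, g_sigma has density
# (exp(-s) - exp(-s/sigma^2)) / (1 - sigma^2); we verify that it is
# non-negative and integrates to 1, i.e. a genuine probability density.
import numpy as np

sigma = 0.6
s = np.linspace(0.0, 80.0, 800001)
g = (np.exp(-s) - np.exp(-s / sigma**2)) / (1.0 - sigma**2)
ds = s[1] - s[0]
print(g.min() >= 0.0)        # True: g_sigma is non-negative
print(g.sum() * ds)          # ~ 1.0: g_sigma has total mass 1
```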

Proof of Lemma 22

Let \(\sigma _n = \left| c_n/b_n\right| \) (where again, the ratio is defined to be 1 if \(b_n\) and \(c_n\) are both zero). Lemma 23 yields, for each n, a probability space supporting a standard complex Gaussian random variable \(\zeta _n\) and an event \(E_n\) with \(\mathbb {P}[E_n] = \sigma _n^2\), such that, conditioned on \(E_n\), the variable \(\zeta _n\) is a complex Gaussian with variance \(\sigma _n^2\). Clearly, we may assume that the random variables \(\zeta _n\) are mutually independent: we take the probability space to be the product of these probability spaces, extend each \(E_n\) to the product in the obvious way, and define

$$\begin{aligned} G(z) = \sum _{n\ge 0} \zeta _n b_n z^n. \end{aligned}$$

The claim follows with the event \(E=\bigcap _{n\ge 0} E_n\). \(\square \)

7.1 Estimating \( \mathbb {P}\left[ \mathcal {E}_2 \right] \)

Put

$$\begin{aligned} G(z) = \sum _{1\leqslant n \leqslant N } \zeta _n a_n z^n, \end{aligned}$$

and let \((q_n)_{n=1}^N\) be a sequence of numbers in [0, 1] to be specified below. According to Lemma 22, there is an event E with probability \(Q^2 = \prod _{n=1}^N q_n^2\), such that on the event E, the function G has the same distribution as

$$\begin{aligned} G_Q(z) \mathop {=}\limits ^\mathrm{def} \sum _{1\leqslant n \leqslant N } \zeta _n q_n a_n z^n. \end{aligned}$$

Notice also that

$$\begin{aligned} \sigma _Q(\rho )^2 \mathop {=}\limits ^\mathrm{def} \mathbb {E}\left[ \left| \sum _{n=1}^N \zeta _n q_n a_n (\rho e^{\mathrm{i}\theta })^n \right| ^2 \right] = \sum _{n=1}^N q_n^2 a_n^2 \rho ^{2n}. \end{aligned}$$

If we now set

$$\begin{aligned} r_2 = r + \delta ^2, \quad \lambda = \Big (\log \frac{1}{\delta }\Big )^\alpha , \end{aligned}$$

then applying first Lemma 22, and then Lemma 2 to the function \(G_Q\), with \(\lambda \) as above, we obtain that for \(\delta \) sufficiently small

$$\begin{aligned}&\mathbb {P}\left[ \max _{r \bar{\mathbb {D}}} \left| \sum _{n=1}^N \zeta _n a_n z^n \right| \leqslant \lambda \, \sigma _Q(r_2) \right] \\&\quad \ge Q^2 \cdot \mathbb {P}\left[ \max _{r \bar{\mathbb {D}}} \left| \sum _{n=1}^N \zeta _n a_n z^n \right| \leqslant \lambda \, \sigma _Q(r_2) \big |E \right] \\&\quad = Q^2 \cdot \mathbb {P}\left[ \max _{r \bar{\mathbb {D}}} \left| \sum _{n=1}^N \zeta _n q_n a_n z^n \right| \leqslant \lambda \, \sigma _Q(r_2) \right] \\&\quad \ge Q^2 \left( 1 - C \delta ^{-1} \exp \left( -c (\log \tfrac{1}{\delta })^{2\alpha } \right) \right) \\&\quad {\mathop {\ge }\limits ^{\alpha > \tfrac{1}{2}}} \tfrac{1}{2} Q^2. \end{aligned}$$

For this estimate to be useful in our context we have to ensure, by choosing the sequence \(q_n\) appropriately, that

$$\begin{aligned} \sigma _Q(r_2) \leqslant \frac{M}{2 \lambda } = \frac{1}{2 \sqrt{\delta }}. \end{aligned}$$
(43)

First, it is straightforward to verify that

$$\begin{aligned} \exp (-\delta ) \leqslant r_2 \leqslant \exp (-\delta +\delta ^2), \quad \forall r \in (0,1). \end{aligned}$$

Since \(c n^{L-1} \leqslant a_n^2 \leqslant C n^{L-1}\), this implies that for \(n \in \{1, \ldots , N \}\)

$$\begin{aligned} a_n^2 r_2^{2 n} = \exp \left( (L-1)\log n - 2 n \delta + O(1) \right) . \end{aligned}$$

Putting

$$\begin{aligned} N_1 = \left[ \, \frac{L - 1}{2 \delta }\, \log \frac{1}{\delta }\, \right] \end{aligned}$$

and noting that the function \(x \mapsto (L-1)\log x - 2 x \delta \) attains its maximum at \(x = \frac{L-1}{2 \delta }\) we see that \(a_{n}^2 r_2^{2 n} \ge c\) for \(n \leqslant N_1\), while for \(n \in \{N_1, \ldots , N\}\) we have \(a_{n}^2 r_2^{2 n} \leqslant C \left( \log \frac{1}{\delta }\right) ^{L-1}\).

We therefore choose

$$\begin{aligned} q^2_n = {\left\{ \begin{array}{ll} \alpha _1 \left( a^2_n r_2^{2n} \log \tfrac{1}{\delta }\right) ^{-1}, &{} n\in \{1, \ldots , N_1\},\\ \alpha _1 \left( \log \tfrac{1}{\delta }\right) ^{-L}, &{} n\in \{N_1 + 1, \ldots , N\}, \end{array}\right. } \end{aligned}$$

where we choose the constant \(\alpha _1>0\) sufficiently small to ensure that \(q_n \leqslant 1\) for all \(n \in \{1, \ldots N\}\). With this choice we have

$$\begin{aligned} \sigma ^2_Q(r_2) = \sum _{n=1}^N q^2_n a^2_n r_2^{2n} \leqslant \alpha _1 \Big (\log \frac{1}{\delta }\Big )^{-1} N_1 + \alpha _1 \cdot C \Big (\log \frac{1}{\delta }\Big )^{-1} (N - N_1) \leqslant C \cdot \frac{\alpha _1}{\delta } \end{aligned}$$

and further choosing \(\alpha _1\) small if necessary, Condition (43) is satisfied.

It remains to estimate the probability of the event E. Notice that

$$\begin{aligned} \mathbb {P}[E] = Q^2&= \prod _{n=1}^N q^2_n = \alpha _1^N \left( \log \tfrac{1}{\delta }\right) ^{-L(N - N_1) - N_1} \prod _{n=1}^{N_1} \left( a^2_n r_2^{2n}\right) ^{-1}\\&\ge \exp \left( -C N \log \log \tfrac{1}{\delta }\right) \cdot \prod _{n=1}^{N_1} \left( C n^{L-1} \exp [-2n(\delta - \delta ^2)] \right) ^{-1} \\&\ge \exp \left( -C N \log \log \tfrac{1}{\delta }\right) \cdot \frac{e^{(\delta - \delta ^2 )N_1(N_1+1)}}{(N_1!)^{L-1}}. \end{aligned}$$

Recalling that \(N \leqslant C N_1\), and using that \(N_1 = \left[ \, \tfrac{L - 1}{2 \delta }\, \log \tfrac{1}{\delta }\, \right] \), we obtain

$$\begin{aligned} \mathbb {P}\left[ \mathcal {E}_2 \right]&\ge \tfrac{1}{2} Q^2 \ge \exp \left( - N_1 \left[ \, (L-1) \log N_1 - \delta N_1 + C \log \log \tfrac{1}{\delta }\,\right] \right) \\&= \exp \left( - \tfrac{1}{4} [(L-1)^2 + o(1)]\, \tfrac{1}{\delta }\, \log ^2\tfrac{1}{\delta }\right) . \end{aligned}$$

7.2 Estimating \( \mathbb {P}\left[ \mathcal {E}_3 \right] \)

Put

$$\begin{aligned} H(z) = \sum _{n>N} \zeta _n a_n z^n. \end{aligned}$$

Then

$$\begin{aligned} \sigma _H(\rho )^2 = \mathbb {E}\left[ \left| \sum _{n>N} \zeta _n a_n (\rho e^{\mathrm{i}\theta })^n \right| ^2 \right] = \sum _{n>N} a_n^2 \rho ^{2n}. \end{aligned}$$

The choice of the parameter N guarantees that the ‘tail’ H will be small.

Lemma 24

Put \(r_1=\tfrac{1}{2} (1+r)\). There exists a constant \(C > 0\) such that, for \(\delta \) sufficiently small,

$$\begin{aligned} \sigma _H(r_1)^2 = \sum _{n>N} a_n^2 r_1^{2n} \leqslant C. \end{aligned}$$

Proof of Lemma 24

Since the function \(r \mapsto 2 \log (1+r) - r\), \(0\leqslant r \leqslant 1\), attains its maximum at \(r=1\), we have \(r_1^2 \leqslant \exp (-\delta )\) for \(\delta = 1 - r \in [0,1]\). Recalling that \(N = \left[ \, \tfrac{2L}{\delta }\, \log \tfrac{1}{\delta }\, \right] \), we observe that for \(n > N\) we have

$$\begin{aligned} \frac{n}{\log n} \ge \frac{2(L-1)}{\delta }, \end{aligned}$$

and therefore \(n^{L-1} \leqslant \exp (\tfrac{1}{2} \delta n)\). Then, using that \(a_n^2 \leqslant C n^{L-1}\), we have

$$\begin{aligned} \sigma _H(r_1)^2\leqslant & {} C \sum _{n>N} n^{L-1} \exp (-\delta n) \leqslant C \sum _{n>N} \exp (-\tfrac{1}{2} \delta n) \\\leqslant & {} C \delta ^{-1} \exp (-\tfrac{1}{2} \delta N) \leqslant C \delta ^{L - 1}. \end{aligned}$$

Since \(L>1\) this is a stronger result than we claimed. \(\square \)

By Lemma 24,

$$\begin{aligned} \mathbb {P}\left[ \mathcal {E}_3^c \right] = \mathbb {P}\left[ \max _{r\bar{\mathbb {D}}} |H|> \tfrac{1}{2} M \right] \leqslant \mathbb {P}\left[ \max _{r\bar{\mathbb {D}}} |H| > c M \cdot \sigma _{H}\left( \tfrac{1}{2} (1+r) \right) \right] , \end{aligned}$$

if \(c>0\) is sufficiently small. By Lemma 2, the right-hand side is at most \(C\delta ^{-1}\, \exp \left[ -c M^2 \right] \rightarrow 0 \) as \( \delta \rightarrow 0 \). Thus, \(\mathbb {P}\left[ \mathcal {E}_3 \right] \ge \tfrac{1}{2} \) for \(\delta \) sufficiently small.

7.3 Putting the estimates together

Finally,

$$\begin{aligned} \mathbb {P}\left[ \text {Hole}(r) \right]&\ge \mathbb {P}\left[ \mathcal {E}_1 \right] \cdot \mathbb {P}\left[ \mathcal {E}_2 \right] \cdot \mathbb {P}\left[ \mathcal {E}_3 \right] \\&\ge \frac{1}{2}\, \exp \left[ -\frac{1}{\delta }\, \left( \log \frac{1}{\delta }\right) ^{2\alpha } - \frac{(L-1)^2+o(1)}{4}\, \frac{1}{\delta }\, \log ^2\frac{1}{\delta } \right] \\&\mathop {\ge }\limits ^{\alpha <1}\exp \left[ - \frac{(L-1)^2+o(1)}{4}\, \frac{1}{\delta }\, \log ^2\frac{1}{\delta } \right] , \end{aligned}$$

completing the proof of the lower bound in the case \(L>1\), and hence of Theorem 1. \(\square \)

8 The hole probability for non-invariant GAFs with regularly distributed coefficients

The proofs we gave do not use the hyperbolic invariance of the zero distribution of \(F=F_L\). In the case \(0<L<1\) it suffices to assume that the sequence of coefficients \((a_n)\) in (1) is non-increasing, that \(a_0=1\), and that

$$\begin{aligned} a_n \simeq n^{\frac{1}{2} (L-1)},\quad n\ge 1. \end{aligned}$$
(44)

Then, setting as before

$$\begin{aligned} \sigma _F(r)^2 = \mathbb {E}\left[ |F(re^{\mathrm{i}\theta })|^2\right] = \sum _{n\ge 0} a_n^2 r^{2n} \end{aligned}$$

the same proof yields the bounds

$$\begin{aligned} \frac{1 - L + o(1)}{2}\, \sigma _F(r)^2 \log \frac{1}{1-r}&\leqslant - \log \mathbb {P}\left[ \text {Hole}(r) \right] \\&\leqslant (1 - L + o(1))\, \sigma _F(r)^2 \log \frac{1}{1-r}, \qquad r\rightarrow 1. \end{aligned}$$

In the case \(L>1\), assuming that \(a_0=1\) and (44), we also get the same answer as in Theorem 1:

$$\begin{aligned} - \log \mathbb {P}\left[ \text {Hole}(r) \right] = \frac{(L-1)^2 +o(1)}{4}\, \frac{1}{1-r}\, \log ^2\frac{1}{1-r}, \qquad r\rightarrow 1. \end{aligned}$$

In the case \(L=1\) (that is, \(a_n=1\), \(n\ge 0\)) the result of Peres and Virág relies on their proof that the zero set of \(F_1\) is a determinantal point process. This ceases to hold under a slight perturbation of the coefficients \(a_n\), while our techniques still work, though yielding less precise bounds:

Theorem 3

Suppose that F is a Gaussian Taylor series of the form (1) with \(a_n \simeq 1\) for \(n\ge 0\). Then

$$\begin{aligned} - \log \mathbb {P}\left[ \mathrm{Hole}(r) \right] \simeq \frac{1}{1-r}, \qquad \frac{1}{2} \leqslant r < 1. \end{aligned}$$

For the reader’s convenience, we supply the proof of Theorem 3, which is based on arguments similar to those we have used above. As before, we put \(\delta =1-r\), \(\sigma _F=\sigma _F(r)\), and note that under the assumptions of Theorem 3, \( \sigma _F(r)^2 \simeq \delta ^{-1}\). Also put

$$\begin{aligned} N=\left\lceil \frac{1}{1-r}\right\rceil \end{aligned}$$

and \(\omega = e(1/N)\).

8.1 Upper bound on the hole probability in Theorem 3

8.1.1 Beginning the proof

The starting point is the same as in the proofs of the upper bound in the cases \(L\ne 1\). Put \(r_0 = 1-2\delta \). Then, by Lemma 9 (with \(\kappa =2\) and \(k=N\)),

$$\begin{aligned} \mathbb {P}\left[ \text {Hole}(r) \right]\leqslant & {} \delta ^{-c}\, \sup _{\tau \in \mathbb {T}}\, \mathbb {P}\left[ \sum _{j=1}^N \log |F(\omega ^j \tau r_0 )| \leqslant N\log |F(0)| + C \right] \\&+\, \langle \mathrm{negligible\ terms} \rangle . \end{aligned}$$

For fixed \(\tau \in \mathbb {T}\) and for a small positive parameter b, we have

$$\begin{aligned}&\mathbb {P}\left[ \sum _{j=1}^N \log |F(\omega ^j \tau r_0 )| \leqslant N\log |F(0)| + C \right] \\&\quad \leqslant \mathbb {P}\left[ \sum _{j=1}^N \log |F(\omega ^j \tau r_0 )| \leqslant N\log \left( b\sigma _F \right) + C \right] + \mathbb {P}\left[ |F(0)|>b\sigma _F \right] \\&\quad \leqslant \mathbb {P}\left[ \sum _{j=1}^N \log |w_j| \leqslant N\log \left( C b \right) \right] + \mathbb {P}\left[ |F(0)|>b\sigma _F \right] , \end{aligned}$$

where \(w_j = F(\omega ^j \tau r_0)/\sqrt{N}\) are complex Gaussian random variables. Next,

$$\begin{aligned}&\mathbb {P}\left[ \sum _{j=1}^N \log |w_j| \leqslant N\log \left( C b \right) \right] = \mathbb {P}\left[ \prod _{j=1}^N |w_j|^{-1} \ge (Cb)^{-N} \right] \\&\quad \leqslant (Cb)^N\, \mathbb {E}\left[ \prod _{j=1}^N |w_j|^{-1} \right] . \end{aligned}$$

Thus, up to negligible terms, the hole probability is bounded from above by

$$\begin{aligned} (Cb)^N\, \mathbb {E}\left[ \prod _{j=1}^N |w_j|^{-1} \right] + e^{-b^2 \sigma _F^2}. \end{aligned}$$

What remains is to show that the expectation of the product of \(|w_j|^{-1}\) grows at most exponentially with N. Then, choosing the constant b so small that the prefactor \( (Cb)^N \) overcomes this growth of the expectation, we will get the result.

8.1.2 Estimating \( \mathbb {E}\left[ \prod _{j=1}^N |w_j|^{-1} \right] \)

One can use Lemma 15 to bound the expectation above; below we give an alternative argument. Put \(z_j = \omega ^j \tau r_0\), \(1\leqslant j \leqslant N\), and consider the covariance matrix

$$\begin{aligned} \Gamma _{ij} = \mathbb {E}\left[ w_i \bar{w}_j\right] = N^{-1} \mathbb {E}\left[ F(z_i) \overline{F(z_j)}\right] , \qquad 1 \leqslant i, j \leqslant N. \end{aligned}$$

For each non-empty subset \(I\subset \{1, 2, \ldots , N\}\), we put \( \Gamma _I = \left( \Gamma _{ij} \right) _{i, j\in I}\).

Lemma 25

For each \(I\subset \{1, 2, \ldots , N\}\), we have \( \det \Gamma _I \ge c^{|I|} \).

Proof of Lemma 25

By Lemma 10, the eigenvalues of the matrix \(\Gamma \) are

$$\begin{aligned} \lambda _m = \sum _{n\equiv m\, (N)}\, a_n^2 r_0^{2n} \ge c \sum _{n\equiv m\, (N)} (1-2\delta )^{2n} \ge c \sum _{k\ge 0} e^{-CNk\delta } \ge c \sum _{k\ge 0} e^{-Ck} \ge c >0, \end{aligned}$$

that is, the minimal eigenvalue of \(\Gamma \) is separated from zero. It remains to recall that the \(N-1\) eigenvalues of any principal submatrix of order \(N-1\) of a Hermitian matrix of order N interlace with the N eigenvalues of the original matrix. Applying this principle several times, we conclude that the minimal eigenvalue of the matrix \(\Gamma _I\) cannot be less than the minimal eigenvalue of the full matrix \(\Gamma \). \(\square \)
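A quick numerical confirmation of the interlacing step (our sketch; the Hermitian matrix and the index set are arbitrary):

```python
# Numerical check (ours) of the interlacing argument: for a Hermitian
# matrix, the smallest eigenvalue of any principal submatrix is at
# least the smallest eigenvalue of the full matrix.
import numpy as np

rng = np.random.default_rng(4)
N = 8
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = A @ A.conj().T + np.eye(N)           # Hermitian with eigenvalues >= 1
idx = [0, 2, 3, 6]                       # an arbitrary index set I
H_I = H[np.ix_(idx, idx)]
print(np.linalg.eigvalsh(H_I).min() >= np.linalg.eigvalsh(H).min())  # True
```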

Now, we write

$$\begin{aligned} \mathbb {E}\left[ \prod _{j=1}^N |w_j|^{-1} \right] \leqslant \sum _{I\subset \{1, 2, \ldots , N\}}\, \mathbb {E}\left[ \prod _{i \in I} |w_i|^{-1} {1\mathrm{l}}_{\{|w_i|\leqslant 1\}} \right] . \end{aligned}$$

By Lemma 25, the expectations on the right-hand side do not exceed

$$\begin{aligned} \frac{1}{\pi ^{|I|} \det \Gamma _I}\, \left( \int _{|w|\leqslant 1} \frac{1}{|w|}\, e^{-\lambda _I^{-1}|w|^2 }\, \mathrm{d}m(w) \right) ^{|I|} \leqslant C^{|I|} \end{aligned}$$

(here, \(\lambda _I\) is the maximal eigenvalue of \(\Gamma _I\)). Since the number of subsets I of the set \( \left\{ 1, 2, \ldots , N \right\} \) is \(2^N\), we finally get

$$\begin{aligned} \mathbb {E}\left[ \prod _{j=1}^N |w_j|^{-1} \right] \leqslant (2C)^N. \end{aligned}$$

This completes the proof of the upper bound on the hole probability in Theorem 3. \(\square \)

8.2 Lower bound on the hole probability in Theorem 3

We write

$$\begin{aligned} F(z) = F(0) + G(z) = F(0) + \sum _{k \ge 0} z^{k N} S_k(z), \end{aligned}$$

where the \(S_k (z)\) are independent random Gaussian polynomials,

$$\begin{aligned} S_k(z) = \sum _{j=1}^N \zeta _{j+kN} a_{j+kN} z^j. \end{aligned}$$

We have

$$\begin{aligned} \max _{r\mathbb {T}} |G| \leqslant \sum _{k\ge 0} r^{kN} \max _{r\mathbb {T}} |S_k| \, \mathop {\leqslant }\limits ^{N\log r < -1}\, \sum _{k\ge 0} e^{-k} \max _{r\mathbb {T}} |S_k|. \end{aligned}$$

We fix \(1<A<e\) and consider the independent events \( \mathcal {E}_k = \left\{ \max _{r\mathbb {T}} |S_k| < A^k \sqrt{N} \right\} \). If these events occur together, then

$$\begin{aligned} \max _{r\mathbb {T}} |G|< \sqrt{N}\, \sum _{k\ge 0} \left( Ae^{-1} \right) ^k < B\sqrt{N} \end{aligned}$$

with some positive numerical constant B. Then, \(F(z) \ne 0\) on \(r\bar{\mathbb {D}}\), provided that \(|F(0)| > B\sqrt{N}\), and that all the events \(\mathcal {E}_k\) occur together, that is,

$$\begin{aligned} \mathbb {P}\left[ \text {Hole} (r) \right] \ge e^{-B^2 N}\, \prod _{k\ge 0} \mathbb {P}\left[ \mathcal {E}_k \right] . \end{aligned}$$

It remains to estimate from below the probability of each event \(\mathcal {E}_k\) and to multiply the estimates.

8.2.1 Estimating \( \mathbb {P}\left[ \mathcal {E}_k \right] \)

Take 4N points \(z_j = r e\left( \frac{j}{4N}\right) \), \(1\leqslant j \leqslant 4N\). By the Bernstein inequality,

$$\begin{aligned} \max _{\theta \in [0, 2\pi ]} \left| \frac{\partial S_k}{\partial \theta } \left( re^{\mathrm{i}\theta } \right) \right| \leqslant N\, \max _{r\mathbb {T}} |S_k|. \end{aligned}$$

Therefore,

$$\begin{aligned} \max _{r\mathbb {T}} |S_k| \leqslant \max _{1\leqslant j \leqslant 4N}\, |S_k(z_j)| + \frac{\pi }{4N}\cdot N \max _{r\mathbb {T}} |S_k|, \end{aligned}$$

whence,

$$\begin{aligned} \max _{r\mathbb {T}} |S_k| \leqslant C \max _{1\leqslant j \leqslant 4N}\, |S_k(z_j)|, \end{aligned}$$

and

$$\begin{aligned} \mathbb {P}\left[ \mathcal {E}_k \right] \ge \mathbb {P}\left[ \, \max _{1\leqslant j \leqslant 4N}\, |S_k(z_j)| < C A^k \sqrt{N} \, \right] . \end{aligned}$$
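The passage from the circle to the 4N sample points thus costs only a constant factor. A small numerical illustration (ours; unit coefficients stand in for \(a_{j+kN} \simeq 1\)):

```python
# Numerical illustration (ours) of the discretization step: the maximum
# of |S_k| over the circle of radius r is comparable to the maximum over
# 4N equally spaced points, in line with the Bernstein-inequality argument.
import numpy as np

rng = np.random.default_rng(5)
N, r = 32, 0.97
zeta = rng.standard_normal(N) + 1j * rng.standard_normal(N)
deg = np.arange(1, N + 1)

def max_abs(thetas):
    z = r * np.exp(1j * thetas)                       # points on the circle r*T
    vals = (zeta * z[:, None] ** deg).sum(axis=1)     # S(z) at each point
    return np.abs(vals).max()

fine = max_abs(np.linspace(0.0, 2.0 * np.pi, 20001))  # dense reference grid
coarse = max_abs(2.0 * np.pi * np.arange(1, 4 * N + 1) / (4 * N))
print(fine, coarse, fine / coarse)    # the ratio stays close to 1
```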

Then, applying as in Sect. 5, Hargé’s version of the Gaussian correlation inequality, we get

$$\begin{aligned} \mathbb {P}\left[ \mathcal {E}_k \right] \ge \prod _{j=1}^{4N} \mathbb {P}\left[ |S_k(z_j)| < C A^k \sqrt{N} \right] . \end{aligned}$$

(In fact, passing to real and imaginary parts, we could use here the simpler Khatri–Sidak version of the Gaussian correlation inequality [10, Section 2.4].) Each value \(S_k(z_j)\) is a complex Gaussian random variable with variance

$$\begin{aligned} \sigma _{S_k}^2 = \sum _{j=1}^N a_{j+kN}^2 r^{2j} \simeq \, \sum _{j=1}^N r^{2j} \simeq \frac{1}{\delta } \simeq N. \end{aligned}$$

Therefore,

$$\begin{aligned} \mathbb {P}\left[ \, |S_k(z_j)| < C A^k \sqrt{N}\, \right] = 1 - \mathbb {P}\left[ \, |S_k(z_j)| \ge C A^k \sqrt{N} \, \right] \ge 1 - e^{-cA^{2k}}, \end{aligned}$$

whence,

$$\begin{aligned} \mathbb {P}\left[ \mathcal {E}_k \right] \ge \left( 1 - e^{-cA^{2k}} \right) ^{4N}, \end{aligned}$$

and then,

$$\begin{aligned} \prod _{k\ge 0} \mathbb {P}\left[ \mathcal {E}_k \right] \ge \prod _{k\ge 0} \left( 1 - e^{-cA^{2k}} \right) ^{4N} \ge e^{-cN}, \end{aligned}$$

completing the proof. \(\square \)