1 Introduction: Problem and Results

Let \(\{\xi _{k}\}_{k=0}^{\infty }\) be a sequence of independent identically distributed real- or complex-valued random variables. It is always supposed that \(\mathbf{P}\,(\xi _{0} = 0) < 1\).

Consider the sequence of random polynomials

$$G_{n}(z) = \xi _{0} + \xi _{1}z + \cdots + \xi _{n-1}{z}^{n-1} + \xi _{ n}{z}^{n}.$$

By \(z_{1n},\ldots ,z_{nn}\) denote the zeros of \(G_{n}\). It is not hard to show (see [1]) that there exists an indexing of the zeros such that for each \(k=1,\ldots ,n\) the k-th zero \(z_{kn}\) is a single-valued random variable. For any measurable subset A of the complex plane \(\mathbb{C}\) put \(N_{n}(A)=\#\{z_{kn}\, :\, z_{kn} \in A\}\). Then \(N_{n}(A)/n\) is a probability measure on the plane (the empirical distribution of the zeros of \(G_{n}\)). For any a, b such that \(0 \leq a < b \leq \infty \) put \(R_{n}(a,b) = N_{n}(\{z\, :\, a \leq \vert z\vert \leq b\})\) and for any \(\alpha ,\beta \) such that \(0 \leq \alpha < \beta \leq 2\pi \) put \(S_{n}(\alpha ,\beta ) = N_{n}(\{z\, :\, \alpha \leq \arg z \leq \beta \})\). Thus \(R_{n}/n\) and \(S_{n}/n\) define the empirical distributions of \(\vert z_{kn}\vert \) and \(\arg z_{kn}\).

In this paper we study the limit distributions of \(N_{n},R_{n},S_{n}\) as \(n \rightarrow \infty \).
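As a quick numerical illustration (our sketch, not part of the original text; it assumes numpy and, for concreteness, standard normal coefficients), one can sample a random polynomial, compute its zeros, and form the ratios \(R_{n}/n\) and \(S_{n}/n\):

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_ratios(n, delta=0.1, alpha=0.0, beta=np.pi / 2):
    xi = rng.standard_normal(n + 1)          # coefficients xi_0, ..., xi_n
    roots = np.roots(xi[::-1])               # np.roots expects the highest degree first
    radii = np.abs(roots)
    args = np.mod(np.angle(roots), 2 * np.pi)
    R = np.sum((radii >= 1 - delta) & (radii <= 1 + delta))
    S = np.sum((args >= alpha) & (args <= beta))
    return R / n, S / n                      # R_n(1-delta,1+delta)/n and S_n(alpha,beta)/n

print(empirical_ratios(500))   # close to (1, (beta - alpha)/(2*pi)) = (1, 0.25)
```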

The question of the distribution of the complex roots of \(G_{n}\) was raised by Hammersley in [1]. The asymptotic study of \(R_{n},S_{n}\) was initiated by Shparo and Shur in [16]. To describe their results let us introduce the function

$$f(t)=\left[\underbrace{{\log }^{+}{\log }^{+}\cdots {\log }^{+}t}_{m+1}\right]^{1+\epsilon }\prod\limits_{i=1}^{m}\underbrace{{\log }^{+}{\log }^{+}\cdots {\log }^{+}t}_{i},$$

where \(\log^{+}s =\max (1,\log s)\). Here \(\epsilon > 0\) and \(m \in {\mathbb{Z}}^{+}\); for m = 0 we set \(f(t) = {({\log }^{+}t)}^{1+\epsilon }\).
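For concreteness, here is a small Python sketch of f (the parameters m and eps correspond to m and \(\epsilon \) above; the helper names are ours):

```python
import math

def log_plus(s):
    return max(1.0, math.log(s)) if s > 0 else 1.0

def iter_log(t, i):
    # i-fold iteration of log^+
    for _ in range(i):
        t = log_plus(t)
    return t

def f(t, m, eps):
    prod = 1.0
    for i in range(1, m + 1):
        prod *= iter_log(t, i)
    return iter_log(t, m + 1) ** (1 + eps) * prod

print(f(1e6, m=0, eps=0.5), f(1e6, m=1, eps=0.5))
```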

Shparo and Shur have proved in [16] that if

$$\mathbf{E}\,f(\vert \xi _{0}\vert ) < \infty $$

for some \(\epsilon > 0,m \in {\mathbb{Z}}^{+}\), then for any \(\delta \in (0,1)\) and \(\alpha ,\beta \) such that 0 \(\leq \alpha < \beta \leq 2\pi \)

$$\frac{1} {n}R_{n}(1 - \delta ,1 + \delta ) \rightarrow ^{\mathbf{P}\,}1,\quad n \rightarrow \infty ,$$
$$\frac{1} {n}S_{n}(\alpha ,\beta ) \rightarrow ^{\mathbf{P}\,}\frac{\beta - \alpha } {2\pi } ,\quad n \rightarrow \infty.$$

The first relation means that under quite weak constraints imposed on the coefficients of a random polynomial, almost all its roots “concentrate uniformly” near the unit circle with high probability; the second relation means that the arguments of the roots are asymptotically uniformly distributed.

Later Shepp and Vanderbei [15] and Ibragimov and Zeitouni [5], under additional conditions imposed on the coefficients of \(G_{n}\), obtained more precise asymptotic formulas for \(R_{n}\).

What kind of further results could be expected? First let us note that if, e.g., \(\mathbf{E}\,\vert \xi _{0}\vert < \infty \), then for \(\vert z\vert < 1\)

$$G_{n}(z) \rightarrow G(z) = \sum\limits_{k=0}^{\infty }\xi _{ k}{z}^{k}$$

as \(n\rightarrow \infty \) a.s. The function G(z) is analytic inside the unit disk \(\{\vert z\vert <1\}\). Therefore for any δ > 0 it has only a finite number of zeros in the disk \(\{\vert z\vert < 1 - \delta \}\). On the other hand, the number of zeros in the domain \(\{\vert z\vert > 1/(1 - \delta )\}\) behaves in the same way (this can be seen by considering the reversed polynomial \({z}^{n}G_{n}(1/z)\)). Thus one could expect that under sufficiently weak constraints imposed on the coefficients of a random polynomial the zeros concentrate near the unit circle \(\Gamma =\{ z\, :\, \vert z\vert = 1\}\) and the measure \(R_{n}/n\) converges to the delta measure at the point one. By symmetry we may also expect that the arguments \(\arg z_{kn}\) are asymptotically uniformly distributed. Below we give the conditions for these hypotheses to hold. We shall prove the following three theorems about the behavior of \(N_{n}/n,R_{n}/n,S_{n}/n\).

For the sake of simplicity, we assume that \(\mathbf{P}\,\{\xi _{0} = 0\} = 0\). To treat the general case it is enough to study in the same way the behavior of the roots on the sets \(\{\theta _{n}^{\prime} = k,\,\theta _{n}^{\prime\prime} = l\}\), where

$$\theta _{n}^{\prime} =\max \{ i = 0,\ldots ,n\mid \xi _{i}\neq 0\},\qquad \theta _{n}^{\prime\prime} =\min \{ j = 0,\ldots ,n\mid \xi _{j}\neq 0\}.$$

Theorem A.

The sequence of the empirical distributions \(R_{n}/n\) converges to the delta measure at the point one almost surely if and only if

$$\mathbf{E}\,\log (1 + \vert \xi _{0}\vert ) < \infty.$$
(1)

In other words, (1) is a necessary and sufficient condition for

$$\mathbf{P}\,\left \{\frac{1} {n}R_{n}(1 - \delta ,1 + \delta ) \rightarrow _{n \rightarrow \infty }1\right \} = 1$$
(2)

to hold for any \(\delta \in (0,1)\).

We shall also prove that if (1) does not hold, then no limit distribution for \(\{z_{kn}\}\) exists.
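A hedged Monte Carlo sketch of this dichotomy (our illustration): standard normal coefficients satisfy (1), while coefficients with \(\log \vert \xi _{k}\vert \) proportional to a |Cauchy| variable violate it; the exponent cap below is purely for floating-point safety.

```python
import numpy as np

rng = np.random.default_rng(1)
n, delta = 200, 0.1

def fraction_near_circle(xi):
    radii = np.abs(np.roots(xi[::-1]))
    return np.mean((radii >= 1 - delta) & (radii <= 1 + delta))

normal_xi = rng.standard_normal(n + 1)          # satisfies (1)
# log|xi_k| is a multiple of |Cauchy|, so E log(1 + |xi_0|) = infinity;
# the exponent is capped at 100 only to stay inside floating-point range
signs = rng.choice([-1.0, 1.0], n + 1)
heavy_xi = signs * np.exp(np.minimum(20 * np.abs(rng.standard_cauchy(n + 1)), 100.0))

print(fraction_near_circle(normal_xi))   # close to 1
print(fraction_near_circle(heavy_xi))    # markedly smaller
```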

Theorem B.

Suppose the condition (1) holds. Then the empirical distribution \(N_{n}/n\) almost surely converges to the probability measure \(N(\cdot ) = \mu (\cdot \cap \Gamma )/(2\pi )\), where \(\Gamma =\{ z\, :\, \vert z\vert = 1\}\) and μ is the Lebesgue measure.

Theorem C.

The empirical distribution \(S_{n}/n\) almost surely converges to the uniform distribution, i.e.,

$$\mathbf{P}\,\left \{\frac{1} {n}S_{n}(\alpha ,\beta ) \rightarrow _{n \rightarrow \infty }\frac{\beta - \alpha } {2\pi } \right \} = 1$$

for any \(\alpha ,\beta \) such that 0 \(\leq \alpha < \beta \leq 2\pi \) .

Let us remark here that Theorem C does not require any additional conditions on the sequence \(\{\xi _{k}\}\).

The next result is of crucial importance in the proof of Theorem C.

Theorem D.

Let \(\{\eta _{k}\}_{k=0}^{\infty }\) be a sequence of independent identically distributed real-valued random variables. Put \(g_{n}(x) = \sum\limits_{k=0}^{n}\eta _{k}{x}^{k}\) and by \(M_{n}\) denote the number of real roots of the polynomial \(g_{n}(x)\). Then

$$\mathbf{P}\,\left \{\frac{M_{n}} {n} \rightarrow _{n \rightarrow \infty }0\right \} = 1,\quad \mathbf{E}\,M_{n} = o(n),\quad n \rightarrow \infty.$$

Theorem D is also of independent interest. In a number of papers it was shown that under weak conditions on the distribution of \(\eta _{0}\) one has \(\mathbf{E}\,M_{n} \sim c\log n\), \(n \rightarrow \infty \) (see [2–4, 6, 9, 10]). L. Shepp proposed the following conjecture: for any distribution of \(\eta _{0}\) there exist positive numbers \(c_{1},c_{2}\) such that \(c_{1}\log n \leq \mathbf{E}\,M_{n} \leq c_{2}\log n\) for all n. The first statement was disproved in [17, 18], where a random polynomial \(g_{n}(x)\) with \(\mathbf{E}\,M_{n} < 1 + \epsilon \) was constructed. It is still unknown whether the second statement is true. However, Theorem D shows that an arbitrary random polynomial cannot have too many real roots (see also [14]).
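A small simulation consistent with Theorem D (our sketch; standard normal coefficients, for which in fact \(\mathbf{E}\,M_{n} \sim c\log n\); detecting real roots by a negligible imaginary part is numerically rough but adequate at these degrees):

```python
import numpy as np

rng = np.random.default_rng(2)

def real_root_count(n, tol=1e-8):
    xi = rng.standard_normal(n + 1)
    roots = np.roots(xi[::-1])
    # treat a computed root as real if its imaginary part is negligible
    return int(np.sum(np.abs(roots.imag) <= tol * np.maximum(1.0, np.abs(roots.real))))

for n in (50, 200, 800):
    print(n, real_root_count(n) / n)   # the ratio M_n/n shrinks as n grows
```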

In fact, in the proof of Theorem C we shall use a slightly generalized version of Theorem D:

Theorem E.

For some integer r, consider a set of r non-degenerate probability distributions. Let \(\{\eta _{k}\}_{k=0}^{\infty }\) be a sequence of independent real-valued random variables with distributions from this set. As above, put \(g_{n}(x) = \sum\limits_{k=0}^{n}\eta _{k}{x}^{k}\) and by \(M_{n}\) denote the number of real roots of the polynomial \(g_{n}(x)\). Then

$$\mathbf{P}\,\left \{\frac{M_{n}} {n} \rightarrow _{n \rightarrow \infty }0\right \} = 1,\quad \mathbf{E}\,M_{n} = o(n),n \rightarrow \infty.$$
(3)

2 Proof of Theorem A

Let us establish the sufficiency of (1). Suppose that (1) holds and fix \(\delta \in (0,1)\). We first prove that the radius of convergence of the series

$$G(z) = \sum\limits_{k=0}^{\infty }\xi _{ k}{z}^{k}$$
(4)

is equal to one with probability one.

Consider ρ > 0 such that \(\mathbf{P}\,\{\vert \xi _{0}\vert > \rho \} > 0\). Using the Borel-Cantelli lemma we obtain that with probability one the sequence \(\{\xi _{k}\}\) contains infinitely many \(\xi _{k}\) such that \(\vert \xi _{k}\vert > \rho \). Therefore the radius of convergence of the series (4) does not exceed 1 almost surely.

On the other hand, for any non-negative random variable ζ

$$\sum\limits_{k=1}^{\infty }\mathbf{P}\,(\zeta \geq k) \leq \mathbf{E}\,\zeta \leq 1 + \sum\limits_{k=1}^{\infty }\mathbf{P}\,(\zeta \geq k).$$
(5)

Therefore, it follows from (1) that

$$\sum\limits_{k=1}^{\infty }\mathbf{P}\,(\vert \xi _{ k}\vert \geq {e}^{\gamma k}) < \infty $$

for any positive constant γ. It follows from the Borel-Cantelli lemma that with probability one \(\vert \xi _{k}\vert < {e}^{\gamma k}\) for all sufficiently large k. Thus, according to the Cauchy-Hadamard formula (see, e.g., [11]), the radius of convergence of the series (4) is at least 1 almost surely.

Hence with probability one G(z) is an analytic function inside the unit disk \(\{\vert z\vert < 1\}\). Therefore if \(0 \leq a < b < 1\), then \(R(a,b) < \infty \), where R(a,b) denotes the number of the zeros of G inside the domain \(\{z\, :\, a \leq \vert z\vert \leq b\}\). It follows from the Hurwitz theorem (see, e.g., [11]) that \(R_{n}\left (0,1 - \delta \right ) \leq R\left (0,1 - \delta /2\right )\) with probability one for all sufficiently large n. This implies

$$\mathbf{P}\,\left \{\frac{1} {n}R_{n}(0,1 - \delta ) \rightarrow _{n \rightarrow \infty }0\right \} = 1.$$

In order to conclude the proof of (2) it remains to show that

$$\mathbf{P}\,\left \{\frac{1} {n}R_{n}(1 + \delta ,\infty ) \rightarrow _{n \rightarrow \infty }0\right \} = 1.$$

In other words, we need to prove that \(\mathbf{P}\,\{A\}\,=\,0\), where A denotes the event that there exists \(\epsilon > 0\) such that

$$R_{n}\left (1 + \delta ,\infty \right ) \geq \epsilon n$$

holds for infinitely many values of n.

By B denote the event that G(z) is an analytical function inside the unit disk \(\{\vert z\vert < 1\}\). For \(m \in \mathbb{N}\) put

$$\zeta _{m} =\sup _{k\in {\mathbb{Z}}^{+}}\vert \xi _{k}{e}^{-k/m}\vert.$$

By \(C_{m}\) denote the event that \(\zeta _{m} < \infty \). It was shown above that \(\mathbf{P}\,\{B\} = \mathbf{P}\,\{C_{m}\} = 1\) for \(m \in \mathbb{N}\). Therefore, to get \(\mathbf{P}\,\{A\} = 0\), it is sufficient to show that \(\mathbf{P}\,\{ABC_{m}\} = 0\) for some m.

Let us fix m; its exact value will be chosen later. Suppose the event \(ABC_{m}\) occurred. Index the roots of the polynomial \(G_{n}(z)\) according to the order of magnitude of their absolute values:

$$\vert z_{1}\vert \leq \vert z_{2}\vert \leq \cdots \leq \vert z_{n}\vert.$$

Fix an arbitrary number C > 1 (an exact value will be chosen later). Consider indices i, j such that

$$\begin{array}{rcl} \vert z_{i}\vert < 1 - \delta /C,& \quad \vert z_{i+1}\vert \geq 1 - \delta /C,& \\ \vert z_{j}\vert \leq 1 + \delta ,& \quad \vert z_{j+1}\vert > 1 + \delta. & \\ \end{array}$$

If \(\vert z_{1}\vert \geq 1 - \delta /C\), then i = 0; if \(\vert z_{n}\vert \leq 1 + \delta \) then j = n.

It is easily shown that if

$$\vert z\vert <\min \left (1, \frac{\vert \xi _{0}\vert } {n \times \max _{k=1,\ldots ,n}\vert \xi _{k}\vert }\right ),$$

then

$$\vert \xi _{0}\vert > \vert \xi _{1}z\vert + \vert \xi _{2}{z}^{2}\vert + \cdots + \vert \xi _{ n}{z}^{n}\vert.$$

Therefore such z cannot be a zero of the polynomial \(G_{n}\). Taking into account that the event \(C_{m}\) occurred, we obtain a lower bound for the absolute values of the zeros for all sufficiently large n:

$$\vert z_{1}\vert \geq \min \left (1, \frac{\vert \xi _{0}\vert } {n \times \max _{k=1,\ldots ,n}\vert \xi _{k}\vert }\right ) \geq \frac{\vert \xi _{0}\vert } {n\zeta _{m}{e}^{n/m}} \geq \vert \xi _{0}\vert \zeta _{m}^{-1}{e}^{-2n/m}.$$

Therefore for any integer l satisfying \(j + 1 \leq l \leq n\) and all sufficiently large n

$$\begin{array}{rcl} \vert z_{1}\ldots z_{l}\vert = \vert z_{1}\ldots z_{i}\vert \vert z_{i+1}\ldots z_{j}\vert \vert z_{j+1}\ldots z_{l}\vert & & \\ \geq \vert \xi _{0}{\vert }^{i}\zeta _{ m}^{-i}{e}^{-2ni/m}{\left (1 - \frac{\delta } {C}\right )}^{j-i}{(1 + \delta )}^{l-j}.& & \\ \end{array}$$

Since A occurred, \(n - j \geq n\epsilon \) for infinitely many values of n. Therefore if l satisfies \(n -\sqrt{n} \leq l \leq n\), then the inequalities \(j + 1 \leq l \leq n\) and \(l - j \geq n\epsilon /2\) hold for infinitely many values of n. According to the Hurwitz theorem for all sufficiently large n we have \(i \leq R_{n}(0,1 - \delta /C) \leq R(0,1 - \delta /(2C))\). Therefore for infinitely many values of n

$$\vert z_{1}\ldots z_{l}\vert \geq {\left (\frac{\vert \xi _{0}\vert } {\zeta _{m}} \right )}^{R(0,1-\delta /(2C))}{e}^{-2nR(0,1-\delta /(2C))/m}{\left (1 - \frac{\delta } {C}\right )}^{n}{(1 + \delta )}^{n\epsilon /2}.$$

Choose now C large enough to yield

$$\left (1 - \frac{\delta } {C}\right ){(1 + \delta )}^{\epsilon /2} > 1.$$

Furthermore, with C held fixed, choose m such that

$$b = {e}^{-2R(0,1-\delta /(2C))/m}\left (1 - \frac{\delta } {C}\right ){(1 + \delta )}^{\frac{\epsilon } {2} } > 1.$$

Since

$${\left (\frac{\vert \xi _{0}\vert } {\zeta _{m}} \right )}^{R(0,1-\delta /(2C))/n} \rightarrow _{n \rightarrow \infty }1,$$

there exists a random variable a > 1 such that for infinitely many values of n

$$\vert z_{1}\ldots z_{l}\vert \geq {\left (\frac{\vert \xi _{0}\vert } {\zeta _{m}} \right )}^{R(0,1-\delta /(2C))}{b}^{n} ={ \left (b{\left (\frac{\vert \xi _{0}\vert } {\zeta _{m}} \right )}^{R(0,1-\delta /(2C))/n}\right )}^{n} \geq {a}^{n}.$$

On the other hand, it follows from \(n -\sqrt{n} \leq l\) and Vieta's formulas that

$$\vert z_{l+1}\ldots z_{n}\vert \geq {\left ({ n \atop n -\sqrt{n}} \right )}^{-1}\vert \sum\limits_{i_{1}<\cdots <i_{n-l}}z_{i_{1}}\ldots z_{i_{n-l}}\vert ={ \left ({ n \atop n -\sqrt{n}} \right )}^{-1} \frac{\vert \xi _{l}\vert } {\vert \xi _{n}\vert }.$$

We combine these two inequalities to obtain for infinitely many values of n

$$\begin{array}{rcl} \frac{\vert \xi _{0}\vert } {\vert \xi _{n}\vert } = \vert z_{1}\ldots z_{n}\vert \geq {a}^{n}{\left ({ n \atop n -\sqrt{n}} \right )}^{-1} \frac{\vert \xi _{l}\vert } {\vert \xi _{n}\vert }& & \\ \geq c_{1}{a}^{n}\frac{{(\sqrt{n})}^{\sqrt{n}+\frac{1} {2} }{(n -\sqrt{n})}^{n-\sqrt{n}+\frac{1} {2} }} {{n}^{n+\frac{1} {2} }} \frac{\vert \xi _{l}\vert } {\vert \xi _{n}\vert } \geq c_{2}{a}^{n}{(\sqrt{n})}^{-\sqrt{n}}{\left (1 - \frac{1} {\sqrt{n}}\right )}^{n} \frac{\vert \xi _{l}\vert } {\vert \xi _{n}\vert }& & \\ \geq c_{3}\exp \left (n\log a -\frac{\sqrt{n}\log n} {2} -\sqrt{n}\right ) \frac{\vert \xi _{l}\vert } {\vert \xi _{n}\vert } \geq {e}^{\alpha n} \frac{\vert \xi _{l}\vert } {\vert \xi _{n}\vert },& & \\ \end{array}$$

where α is a positive random variable. Multiplying both sides by \(\vert \xi _{n}\vert \), we get

$$\mathit{ABC_{m}} \subset \bigcup\limits_{i=1}^{\infty }D_{ i},$$

where \(D_{i}\) denotes the event that \(\vert \xi _{0}\vert > {e}^{n/i}\max _{n-\sqrt{n}\leq l\leq n}\vert \xi _{l}\vert \) for infinitely many values of n.

To complete the proof it is sufficient to show that \(\mathbf{P}\,\{D_{i}\} = 0\) for all \(i \in \mathbb{N}\). Having in mind to apply the Borel-Cantelli lemma, let us introduce the following events:

$$H_{in} = \left \{\vert \xi _{0}\vert > {e}^{n/i}\max _{ n-\sqrt{n}\leq l\leq n}\vert \xi _{l}\vert \right \}.$$

Considering θ > 0 such that \(\mathbf{P}\,\{\vert \xi _{0}\vert \leq \theta \} = F(\theta ) < 1\), we have

$$H_{in} \subset \left \{\vert \xi _{0}\vert > \theta {e}^{n/i}\right \} \cup \left \{\max _{ n-\sqrt{n}\leq l\leq n}\vert \xi _{l}\vert \leq \theta \right \},$$

consequently,

$$\sum\limits_{n=1}^{\infty }\mathbf{P}\,\{H_{ in}\} \leq \sum\limits_{n=1}^{\infty }\mathbf{P}\,\{\vert \xi _{ 0}\vert > \theta {e}^{n/i}\} + \sum\limits_{n=1}^{\infty }{(F(\theta ))}^{\sqrt{n}} < \infty $$

and, according to the Borel-Cantelli lemma, \(\mathbf{P}\,\{D_{i}\} = 0\).

We prove the implication (2) \(\Rightarrow \) (1) arguing by contradiction. Suppose (1) does not hold, i.e.,

$$\mathbf{E}\,\log (1 + \vert \xi _{0}\vert ) = \infty.$$

It follows from (5) that

$$\sum\limits_{n=1}^{\infty }\mathbf{P}\,(\vert \xi _{ n}\vert \geq {e}^{\gamma n}) = \infty $$
(6)

for an arbitrary positive γ. For \(k \in \mathbb{N}\) introduce the event \(F_{k}\) that \(\vert \xi _{n}\vert \geq {e}^{kn}\) holds for infinitely many values of n. It follows from (6) and the Borel-Cantelli lemma that \(\mathbf{P}\,\{F_{k}\} = 1\) and, consequently, \(\mathbf{P}\,\{ \cap _{k=1}^{\infty }F_{k}\} = 1\). This yields

$$\mathbf{P}\,\left \{\limsup _{n\rightarrow \infty }\vert \xi _{n}{\vert }^{1/n} = \infty \right \} = 1.$$

Therefore with probability one for infinitely many values of n

$$\vert \xi _{n}{\vert }^{1/n} >\max _{ i=0,\ldots ,n-1}\vert \xi _{i}{\vert }^{1/i},\quad \vert \xi _{ n}{\vert }^{1/n} > \frac{3} {\epsilon },\quad \vert \xi _{0}\vert < {2}^{n-1},$$

where \(\epsilon > 0\) is an arbitrary fixed value. Fix one such n and suppose \(\vert z\vert \geq \epsilon \). Then

$$\begin{array}{rcl} \vert \xi _{0} + \xi _{1}z + \cdots + \xi _{n-1}{z}^{n-1}\vert & & \\ \leq {2}^{n-1} + \vert \xi _{ n}{z}^{n}{\vert }^{1/n} + \vert \xi _{ n}{z}^{n}{\vert }^{2/n} + \cdots + \vert \xi _{ n}{z}^{n}{\vert }^{(n-1)/n}& & \\ = \frac{{2}^{n}} {2} - 1 + \frac{\vert \xi _{n}{z}^{n}\vert - 1} {\vert \xi _{n}^{1/n}z\vert - 1} \leq \frac{\vert \xi _{n}^{1/n}z{\vert }^{n}} {2} - 1 + \frac{\vert \xi _{n}{z}^{n}\vert - 1} {(3/\epsilon ) \times \epsilon - 1} < \vert \xi _{n}{z}^{n}\vert.& & \\ \end{array}$$

Thus with probability one, for infinitely many values of n, all the roots of the polynomial \(G_{n}\) are located inside the disk \(\{z\, :\, \vert z\vert < \epsilon \}\), where \(\epsilon \) is an arbitrary positive constant. This means that (2) does not hold for any \(\delta \in (0,1)\).

3 Proof of Theorem B

Theorem B follows immediately from Theorems A and C. However, the additional assumption (1) allows a significantly simpler direct proof of the equidistribution of the arguments, which we present here.

Consider a set of sequences of reals

$$\{a_{11}\},\{a_{12},a_{22}\},\ldots ,\{a_{1n},a_{2n},\ldots a_{nn}\},\ldots ,$$

where all \(a_{jn} \in [0,1]\). We say that \(\{a_{jn}\}\) are uniformly distributed in [0, 1] if for any \(0 \leq a < b \leq 1\)

$$\lim _{n\rightarrow \infty }\frac{\#\{j \in \{ 1,2,\cdots \,,n\}\, :\, a_{jn} \in [a,b]\}} {n} = b - a.$$

This definition is an insignificant generalization of the notion of uniformly distributed sequences (see, e.g., [7]). It is easy to see that the Weyl criterion (see ibid.) remains valid in this case:

The set of sequences \(\{a_{jn},j = 1,\ldots ,n\},\,n = 1,2,\ldots ,\) is uniformly distributed if and only if for all \(l = 1,2,\ldots \)

$$\frac{1} {n}\sum\limits_{j=1}^{n}{e}^{2\pi ila_{jn} } \rightarrow 0,\quad n \rightarrow \infty.$$

Let \(z_{jn}\,=\,r_{jn}{e}^{i\theta _{jn}}\) be a zero of \(G_{n}(z),\,r_{jn}\,=\,\vert z_{jn}\vert ,\,\theta _{jn}\,=\,\arg z_{jn},\,0\,\leq \,\theta _{jn}\,<\,2\pi.\) The asymptotic uniform distribution of the arguments is equivalent to the statement that the set of sequences \(\{\theta _{jn}/(2\pi )\}\) is uniformly distributed. Thus, according to Weyl’s criterion, it is enough to show that for any \(l = 1,2,\ldots \)

$$\lim _{n} \frac{1} {n}\sum\limits_{j=1}^{n}{e}^{il\theta _{jn} } = 0$$

with probability 1.
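As a numerical aside (our sketch; the standard normal coefficients, degree, and seed are arbitrary choices), the Weyl sums for the root arguments are indeed small:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 600
roots = np.roots(rng.standard_normal(n + 1)[::-1])   # zeros z_jn of G_n
theta = np.angle(roots)                              # arguments theta_jn

for l in (1, 2, 3):
    print(l, abs(np.mean(np.exp(1j * l * theta))))   # Weyl sums, close to 0
```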

For simplicity we assume that \(\xi _{0}\neq 0\). Consider the random polynomial

$$\tilde{G}_{n}(z) = \xi _{n} + \xi _{n-1}z + \cdots + \xi _{1}{z}^{n-1} + \xi _{ 0}{z}^{n}.$$

Its roots are \(z_{kn}^{-1}\). According to Newton’s formulas (see, e.g., [8]),

$$\sum\limits_{j=1}^{n} \frac{1} {z_{jn}^{l}} = \varphi _{l}\left (\frac{\xi _{1}} {\xi _{0}},\ldots ,\frac{\xi _{l}} {\xi _{0}}\right ),$$

where \(\varphi _{l}(x_{1},\ldots ,x_{l})\) are polynomials which do not depend on n (for example, \(\varphi _{1}(x) = -x\)). It follows that

$$\frac{1} {n}\sum\limits_{j=1}^{n}{e}^{-il\theta _{jn} } = \frac{1} {n}\sum\limits_{j=1}^{n}{e}^{-il\theta _{jn} }\left (1 - \frac{1} {r_{jn}^{l}}\right ) + \frac{\varphi _{l}} {n}.$$
(7)
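As a sanity check of the Newton-formula identity above in the simplest case l = 1, where \(\varphi _{1}(x) = -x\) (our illustration with standard normal coefficients; the tiny imaginary part of the computed sum is rounding noise):

```python
import numpy as np

rng = np.random.default_rng(4)
xi = rng.standard_normal(21)          # coefficients xi_0..xi_20 of G_20
roots = np.roots(xi[::-1])            # zeros z_jn

lhs = np.sum(1.0 / roots)             # sum of z_jn^{-1}; imaginary part ~ 0
rhs = -xi[1] / xi[0]                  # phi_1(xi_1/xi_0) = -xi_1/xi_0
print(lhs.real, rhs)                  # the two values agree up to rounding
```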

As was shown in the proof of Theorem A, for \(\vert z\vert < 1\) the polynomials \(G_{n}(z)\) converge to the analytic function \(G(z) = \sum\limits_{k=0}^{\infty }\xi _{k}{z}^{k}\) with probability 1. Since \(\xi _{0}\neq 0\), there exists a random variable ρ with \(\mathbf{P}\{\rho > 0\} = 1\) such that G(z) has no zeros inside the disk \(\{z : \vert z\vert \leq \rho \}\). Hence there exists a random variable N with \(\mathbf{P}\{N < \infty \} = 1\) such that for \(n \geq N\) the polynomials \(G_{n}(z)\) have no zeros inside \(\{z :\, \vert z\vert \leq \rho \}\). Let \(\gamma \in (0,1)\). It follows from (7) that

$$\Big{\vert }\frac{1} {n}\sum\limits_{j=1}^{n}{e}^{-il\theta _{jn} }\Big{\vert }\leq (l + 1) \frac{\gamma } {{(1 - \gamma )}^{l}} + \frac{1} {n}\left (1 + \frac{1} {{\rho }^{l}}\right )\#\{j\, :\, \vert r_{jn} - 1\vert > \gamma ,\,j = 1,\ldots ,n\} + \frac{\vert \varphi _{l}\vert } {n}.$$

Theorem A implies that the second term on the right-hand side goes to zero as \(n \rightarrow \infty \) with probability 1. Hence

$$\frac{1} {n}\sum\limits_{j=1}^{n}{e}^{-il\theta _{jn} } \rightarrow 0,\quad n \rightarrow \infty ,$$

with probability 1 and the theorem follows.

4 Proof of Theorem C

Consider integer numbers \(p,q_{1},q_{2}\) such that \(0\leq q_{1}< q_{2}< p - 1\). Put \(\varphi _{j}=q_{j}/p\), j = 1, 2, and let us estimate \(S_{n} = S_{n}(2\pi \varphi _{1},2\pi \varphi _{2})\). Evidently \(S_{n} =\lim _{R\rightarrow \infty }S_{nR}\), where \(S_{nR}\) is the number of zeros of \(G_{n}(z)\) inside the domain \(A_{R} =\{ z\, :\, \vert z\vert \leq R,\,2\pi \varphi _{1} \leq \arg z \leq 2\pi \varphi _{2}\}\). It follows from the argument principle (see, e.g., [11]) that \(S_{nR}\) is equal to the change of the argument of \(G_{n}(z)\) divided by 2π as z traverses the boundary of \(A_{R}\). The boundary consists of the arc \(\Gamma _{R} =\{ z\, :\, \vert z\vert = R,\,2\pi \varphi _{1} \leq \arg z \leq 2\pi \varphi _{2}\}\) and two intervals \(L_{j} =\{ z\, :\, 0 \leq \vert z\vert \leq R,\,\arg z = 2\pi \varphi _{j}\},\,j = 1,2\). It can easily be checked that if R is sufficiently large, then the change of the argument divided by 2π as z traverses \(\Gamma _{R}\) is equal to \(n(\varphi _{2} - \varphi _{1}) + o(1)\) as \(n \rightarrow \infty \). If z traverses a subinterval of \(L_{j}\) on which the change of the argument of \(G_{n}(z)\) is at least π, then the function \(\vert G_{n}(z)\vert \cos (\arg G_{n}(z))\), i.e., the real part of \(G_{n}(z)\), has at least one root in this subinterval. It follows from Theorem E that with probability one the number of real roots of the polynomial

$$g_{n,j}(x) = \sum\limits_{k=0}^{n}{x}^{k}\mathfrak{R}(\xi _{ k}{e}^{2\pi ik\varphi _{j} }) = \sum\limits_{k=0}^{n}{x}^{k}\eta _{ k,j}$$

is o(n) as \(n \rightarrow \infty \). Thus the change of the argument of \(G_{n}(z)\) as z traverses \(L_{j}\) is o(n) as \(n \rightarrow \infty \) and

$$\mathbf{P}\,\left \{\frac{1} {n}S_{n}(2\pi \varphi _{1},2\pi \varphi _{2}) = (\varphi _{2} - \varphi _{1}) + o(1),\quad n \rightarrow \infty \right \} = 1.$$

The set of points of the form \(\exp \{2\pi iq/p\}\) is dense in the unit circle \(\{z\, :\, \vert z\vert = 1\}\). Therefore

$$\mathbf{P}\,\left \{\frac{1} {n}S_{n}(\alpha ,\beta ) \rightarrow _{n \rightarrow \infty }\frac{\beta - \alpha } {2\pi } \right \} = 1$$

for any \(\alpha ,\beta \) such that 0 \(\leq \alpha < \beta \leq 2\pi \).
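The argument-principle computation at the heart of this proof can be sketched numerically (our illustration, assuming numpy; we track the winding of \(G_{n}\) along the boundary of \(A_{R}\) and compare it with a direct root count; the dense contour sampling may still miscount if a zero lies extremely close to the contour):

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(5)
n, R = 120, 3.0
phi1, phi2 = 0.05, 0.30                      # sector 2*pi*phi1 <= arg z <= 2*pi*phi2
xi = rng.standard_normal(n + 1)
G = Polynomial(xi)

# boundary of A_R: ray out at angle 2*pi*phi1, arc of radius R, ray back at 2*pi*phi2
t = np.linspace(0.0, R, 4000)
s = np.linspace(2 * np.pi * phi1, 2 * np.pi * phi2, 4000)
path = np.concatenate([t * np.exp(2j * np.pi * phi1),
                       R * np.exp(1j * s),
                       t[::-1] * np.exp(2j * np.pi * phi2)])
phase = np.unwrap(np.angle(G(path)))
winding = (phase[-1] - phase[0]) / (2 * np.pi)

roots = np.roots(xi[::-1])
args = np.mod(np.angle(roots), 2 * np.pi)
inside = (np.abs(roots) <= R) & (args >= 2 * np.pi * phi1) & (args <= 2 * np.pi * phi2)
print(round(winding), int(np.sum(inside)))   # the two counts should agree
```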

5 Proof of Theorem E

First we reduce the problem of counting the real zeros of \(g_{n}(x)\) to that of counting sign changes in the sequence of derivatives \(\{g_{n}^{(j)}(1)\}_{j=0}^{n}\).

Let \(\{a_{j}\}_{j=0}^{n}\) be a sequence of real numbers. By \(Z(\{a_{j}\})\) denote the number of sign changes in the sequence \(\{a_{j}\}\), defined as follows: first exclude all zero members from the sequence, then count the number of pairs of neighboring members with different signs.

For any polynomial p(x) of degree n put \(\mathit{Z_{p}}(x) = Z(\{{p}^{(j)}(x)\})\), i.e., the number of sign changes in the sequence \(p(x),{p}^{{^\prime}}(x),\ldots ,{p}^{(n)}(x)\).

Lemma 1 (Budan-Fourier Theorem). 

Suppose p(x) is a polynomial such that p(a),p(b)≠0 for some a < b. Then the number of the roots of p(x) inside (a,b) does not exceed \(\mathit{Z_{p}}(a) -\mathit{Z_{p}}(b)\) . Moreover, the difference between \(\mathit{Z_{p}}(a) -\mathit{Z_{p}}(b)\) and the number of the roots is an even number.

Proof.

See, e.g., [8]. □ 

Corollary 1.

The number of the roots of p(x) inside [1,∞) does not exceed \(Z_{p}(1)\).

Proof.

For all sufficiently large x the sign of \({p}^{(j)}(x)\) coincides with the sign of the leading coefficient. □

Corollary 2.

The function \(Z_{p}(x)\) does not increase.
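A small brute-force sketch of the quantities in Lemma 1 and its corollaries (our illustration using numpy's Polynomial class; the example polynomial is ours):

```python
import numpy as np
from numpy.polynomial import Polynomial

def sign_changes(seq):
    signs = [v for v in np.sign(seq) if v != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def Z(coefs, x):
    # sign changes in the sequence p(x), p'(x), ..., p^{(n)}(x)
    q, values = Polynomial(coefs), []
    while True:
        values.append(q(x))
        if q.degree() == 0:
            break
        q = q.deriv()
    return sign_changes(values)

p = [-2.0, 0.0, 1.0]          # p(x) = x^2 - 2, roots +-sqrt(2)
print(Z(p, 0) - Z(p, 2))      # bounds the number of roots in (0, 2): here 1
print(Z(p, 1))                # bounds the number of roots in [1, inf): here 1
```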

Let us turn back to the random polynomial \(g_{n}(x)\). Here and elsewhere we shall omit the index n when this causes no ambiguity. By \(M_{n}(a,b)\) denote the number of zeros of g(x) inside the interval [a,b].

First let us prove that

$$\mathbf{E}\,Z_{g}(1) = o(n),\quad n \rightarrow \infty.$$
(8)

Fix some \(\epsilon > 0\) and \(\lambda \in (0,1/2)\). Since the distributions of \(\{\eta _{j}\}\) belong to a finite set, there exists \(K = K(\epsilon )\) such that

$$\sup _{j\in {\mathbb{Z}}^{+}}\mathbf{P}\,\{\vert \eta _{j}\vert \geq K\} \leq \epsilon.$$
(9)

Let I be a subset of \(\{0,1,\ldots ,n\}\) consisting of indices j such that \(\vert \eta _{j}\vert < K\) and \([\lambda n] \leq j \leq [(1 - \lambda )n]\). Put

$$g_{1}(x) = \sum\limits_{j\in I}\eta _{j}{x}^{j},\quad g_{ 2}(x) = g(x) - g_{1}(x).$$

Let \(\tau _{k}\) be the indicator of \(\{\vert g_{1}^{(k)}(1)\vert \geq \vert g_{2}^{(k)}(1)\vert \}\) and \(\chi _{j}\) the indicator of \(\{\vert \eta _{j}\vert \geq K\}\).

Lemma 2.

Let \(a_{1},a_{2},b_{1},b_{2}\) be real numbers. If \((a_{1} + a_{2})(b_{1} + b_{2})< 0\) and \(a_{2}b_{2}\geq 0\), then either \(\vert a_{1}\vert \geq \vert a_{2}\vert \) or \(\vert b_{1}\vert \geq \vert b_{2}\vert \).

Proof.

The proof is trivial. □ 

It follows from Lemma 2 that

$$Z_{g}(1) = Z_{g_{1}+g_{2}}(1) \leq Z_{g_{2}}(1)+2\sum\limits_{j=0}^{n}\tau _{ j} \leq Z_{g_{2}}(1)+2\lambda n+2+2\sum\limits_{j=[\lambda n]}^{[(1-\lambda )n]}\tau _{ j}.$$

Owing to the monotonicity of the function \(Z_{g_{2}}(x)\), one has

$$Z_{g_{2}}(1) \leq Z_{g_{2}}(0) \leq \sum\limits_{j=0}^{n}\chi _{ j}.$$

Hence,

$$Z_{g}(1) \leq 2\lambda n + 2 + \sum\limits_{j=0}^{n}\chi _{ j} + 2\sum\limits_{j=[\lambda n]}^{[(1-\lambda )n]}\tau _{ j}.$$
(10)

Using (9) we have \(\mathbf{E}\,\chi _{j} = \mathbf{P}\,\{\vert \eta _{j}\vert \geq K\} \leq \epsilon \), therefore,

$$\mathbf{E}\,Z_{g}(1) \leq 2\lambda n + 2 + \epsilon (n + 1) + 2\mathbf{E}\,\sum\limits_{j=[\lambda n]}^{[(1-\lambda )n]}\tau _{ j}.$$
(11)

Let us now estimate \(\mathbf{E}\,\tau _{k}\). Note that \({g}^{(k)}(x) = \sum\limits_{l=k}^{n}\eta _{l}A_{k,l}{x}^{l-k}\), where \(A_{k,l} = l(l - 1)\cdots (l - k + 1)\). Fix some integer k such that \(\lambda n \leq k \leq (1 - \lambda )n\). If \(n - 1 \geq j \geq k\), then

$$A_{k,j} \leq (1 - \lambda )A_{k,j+1},$$

which implies

$$A_{k,j} \leq A_{k,[(1-\lambda )n]}{(1 - \lambda )}^{[(1-\lambda )n]-j}$$

for \(\lambda n \leq k \leq j \leq (1 - \lambda )n\). Consequently,

$$\begin{array}{rcl} \vert g_{1}^{(k)}(1)\vert = \Big{\vert }\sum\limits_{j\in I,j\geq k}\eta _{j}A_{k,j}\Big{\vert }& & \\ \leq KA_{k,[(1-\lambda )n]} \sum\limits_{j=0}^{[(1-\lambda )n]}{(1 - \lambda )}^{j} \leq \frac{K} {\lambda } A_{k,[(1-\lambda )n]}.& & \\ \end{array}$$

This yields that

$$\begin{array}{rcl} \mathbf{E}\,\tau _{k} = \mathbf{P}\,\left \{\vert g_{1}^{(k)}(1)\vert \geq \vert g_{ 2}^{(k)}(1)\vert \right \}& & \\ \leq \mathbf{P}\,\left \{\vert g_{1}^{(k)}(1)\vert \geq \vert g_{ 1}^{(k)}(1) + g_{ 2}^{(k)}(1)\vert -\vert g_{ 1}^{(k)}(1)\vert \right \}& & \\ = \mathbf{P}\,\left \{\vert {g}^{(k)}(1)\vert \leq 2\vert g_{ 1}^{(k)}(1)\vert \right \}\leq \mathbf{P}\,\left \{\vert {g}^{(k)}(1)\vert \leq \frac{2K} {\lambda } A_{k,[(1-\lambda )n]}\right \}.& & \\ \end{array}$$

For an arbitrary random variable X define the concentration function Q(h; X) as follows:

$$Q(h;X) =\sup _{a\in {\mathbb{R}}^{1}}\mathbf{P}\,\{a \leq X \leq a + h\}.$$

If X, Y are independent random variables, then (see, e.g., [12])

$$Q(h;X + Y ) \leq \min \left (Q(h;X),Q(h;Y )\right ).$$

Therefore,

$$\begin{array}{rcl} \mathbf{E}\,\tau _{k} \leq \mathbf{P}\,\left \{ \frac{\vert {g}^{(k)}(1)\vert } {A_{k,[(1-\lambda )n]}} \leq \frac{2K} {\lambda } \right \}& & \\ \leq \mathbf{P}\,\left \{ \frac{{g}^{(k)}(1)} {A_{k,[(1-\lambda )n]}} \leq \frac{2K} {\lambda } \right \} \leq Q\left (\frac{2K} {\lambda } ; \frac{{g}^{(k)}(1)} {A_{k,[(1-\lambda )n]}}\right )& & \\ = Q\left (\frac{2K} {\lambda } ;\sum\limits_{j=k}^{n} \frac{A_{k,j}} {A_{k,[(1-\lambda )n]}}\eta _{j}\right ) \leq Q\left (\frac{2K} {\lambda } ;\sum\limits_{j=[(1-\lambda )n]}^{n} \frac{A_{k,j}} {A_{k,[(1-\lambda )n]}}\eta _{j}\right ).& &\end{array}$$
(12)
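Before estimating this further, here is a small empirical sketch of the concentration function and of the displayed min-inequality (our illustration; the sliding-window estimator over sorted samples is our choice):

```python
import numpy as np

rng = np.random.default_rng(6)

def Q_hat(h, sample):
    s = np.sort(sample)
    # for the empirical measure, the sup over [a, a+h] is attained with a at a sample point
    return np.max(np.searchsorted(s, s + h, side='right') - np.arange(s.size)) / s.size

X = rng.standard_normal(20000)
Y = rng.uniform(-1.0, 1.0, 20000)
h = 0.5
print(Q_hat(h, X), Q_hat(h, Y), Q_hat(h, X + Y))  # the last is <= the first two (up to noise)
```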

To estimate the right-hand side of (12) we use the following result.

Lemma 3 (the Kolmogorov-Rogozin inequality). 

Let \(X_{1},X_{2},\ldots ,X_{n}\) be independent random variables. Then for any \(0 < h_{j} \leq h,\,j = 1,\ldots ,n,\)

$$Q(h;X_{1} + \cdots + X_{n}) \leq \frac{Ch} {\sqrt{\sum\limits_{j=1}^{n}h_{j}^{2}(1 - Q(h_{j};X_{j}))}},$$
(13)

where C is an absolute constant.

Proof.

See [13]. □ 
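A sketch of the \({n}^{-1/2}\) decay that (13) gives for i.i.d. summands, using Rademacher variables (our illustration; a lattice case, where Q(1; ·) reduces to the largest point mass):

```python
import numpy as np

rng = np.random.default_rng(7)

def Q_sum(n, trials=20000):
    s = rng.choice([-1.0, 1.0], size=(trials, n)).sum(axis=1)
    _, counts = np.unique(s, return_counts=True)
    # the support lattice has spacing 2, so a window of width h = 1
    # contains at most one support point: Q(1; S_n) is the largest point mass
    return counts.max() / trials

for n in (10, 40, 160):
    print(n, Q_sum(n), 1 / np.sqrt(n))   # the estimate decays at the C/sqrt(n) rate
```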

Since the distributions of \(\{\eta _{j}\}\) belong to a finite set, we get

$$\delta = \delta (\epsilon ,\lambda ) =\inf _{j\in {\mathbb{Z}}^{+}}\left \{1 - Q\left (\frac{2K} {\lambda } ;\eta _{j}\right )\right \} > 0.$$

Putting \(h = h_{j} = 2K/\lambda \) in (13) and using (12), we obtain

$$\begin{array}{rcl} \mathbf{E}\,\tau _{k} \leq C{\left [\sum\limits_{j=[(1-\lambda )n]}^{n}\left \{1 - Q\left (\frac{2K} {\lambda } ; \frac{A_{k,j}} {A_{k,[(1-\lambda )n]}}\eta _{j}\right )\right \}\right ]}^{-1/2}& & \\ \leq C{\left [\sum\limits_{j=[(1-\lambda )n]}^{n}\left \{1 - Q\left (\frac{2K} {\lambda } ;\eta _{j}\right )\right \}\right ]}^{-1/2} \leq \frac{C} {\sqrt{\delta \lambda n}}.& & \\ \end{array}$$

Combining this with (11), we have

$$\mathbf{E}\,Z_{g}(1) \leq 2\lambda n + 2 + \epsilon (n + 1) + \frac{2C} {\sqrt{\delta (\epsilon , \lambda )\lambda }}{n}^{1/2}.$$

Since \(\lambda ,\epsilon \) are arbitrary positive numbers, we obtain (8), which together with Corollary 1 implies

$$\mathbf{E}\,M_{n}(1,\infty ) = o(n),\quad n \rightarrow \infty.$$

Considering the random polynomials g(1∕x) and g(−x), one can obtain similar estimates for \(M_{n}(0,1)\) and \(M_{n}(-\infty ,0)\). Thus the second part of (3) holds. To prove the first one, we estimate the probabilities of large deviations for the sums \(\sum \chi _{j}\) and \(\sum \tau _{j}\). Elementary considerations or an application of Bernstein's inequality (see, e.g., [12]) lead to

$$\mathbf{P}\,\left \{\Big{\vert }\sum\limits_{j=0}^{n}\chi _{ j}\Big{\vert } > 2(n + 1)\epsilon \right \} \leq 2{e}^{-n\epsilon /8}.$$
(14)

The analysis of the behavior of \(\sum \tau _{j}\) is slightly more difficult.

Henceforth we shall use the following notation: for any positive functions \(f_{1},f_{2}\) we write \(f_{1} \ll f_{2}\), if there exists an absolute constant C such that \(f_{1} \leq Cf_{2}\) in the domain of these functions.

Lemma 4.

There exists a constant c depending only on \(\lambda ,\epsilon \) and the distributions of \(\{\eta _{j}\}\) such that

$$\mathbf{E}\,\tau _{k} \leq c{n}^{-2}$$

for \(\lambda n \leq k \leq (1 - \lambda )n\) .

Proof.

As was shown in (12),

$$\mathbf{E}\,\tau _{k} \leq Q\left (\frac{2K} {\lambda } ;\sum\limits_{j=[(1-\lambda )n]}^{n} \frac{A_{k,j}} {A_{k,[(1-\lambda )n]}}\eta _{j}\right )\;.$$
(15)

To estimate the concentration function in the right-hand side we use the result of Esseen (see, e.g., [12]). Let X be a random variable with a characteristic function f(t). Then

$$Q(h;X) \ll \max \left (h, \frac{1} {T}\right )\int\limits_{-T}^{T}\vert f(t)\vert \,dt$$

uniformly for all T > 0.

Putting \(T = \lambda /(KA_{k,[(1-\lambda )n]})\) and applying (15), we obtain

$$\mathbf{E}\,\tau _{k} \ll \frac{1} {T}\int\limits_{-T}^{T} \prod\limits_{j=[(1-\lambda )n]}^{n}\vert f_{ j}(A_{kj}t)\vert \,dt,$$

where \(f_{j}(t)\) is the characteristic function of \(\eta _{j}\). Further,

$$\begin{array}{rl} \mathbf{E}\,\tau _{k}& \ll \frac{1} {T} \int\limits_{-T}^{T}{\left [\prod\limits_{j=[(1-\lambda )n]}^{n}\vert f_{j}(A_{kj}t){\vert }^{2}\right ]}^{\frac{1} {2} }\,dt \\ & \ll \frac{1} {T} \int\limits_{-T}^{T}\exp \left \{-\frac{1} {2} \sum\limits_{j=[(1-\lambda )n]}^{n}\left (1 -\vert f_{j}(A_{kj}t){\vert }^{2}\right )\right \}\,dt \\ & = \frac{1} {T} \int\limits_{-T}^{T}\exp \left \{-\frac{1} {2} \sum\limits_{j=[(1-\lambda )n]}^{n} \int\limits_{-\infty }^{\infty }\left [1 -\cos (A_{kj}tx)\right ]\,\mathcal{P}_{j}(dx)\right \}\,dt, \end{array}$$

where \(\mathcal{P}_{j}\) is the distribution of the symmetrized \(\eta _{j}\), i.e., the distribution of \(\eta _{j} - \eta _{j}^{\prime}\), where \(\eta _{j}^{\prime}\) is an independent copy of \(\eta _{j}\).

There are at most r different distributions among \(\{\mathcal{P}_{j}\}_{(1-\lambda )n\leq j\leq n}\). Therefore there exist a distribution \(\mathcal{P}\) and a subset \(J \subset \{ j\, :\, (1 - \lambda )n \leq j \leq n\}\) such that \(\vert J\vert \geq n\lambda /r\) and \(\mathcal{P}_{j} = \mathcal{P}\) for all \(j \in J\). By \({\sum }{^\prime}\) denote the summation taken over all indices j ∈ J. Thus,

$$\mathbf{E}\,\tau _{k} \ll \frac{1} {T}\int\limits_{-T}^{T}\exp \left \{-\frac{1} {2}{{}{\sum }{^\prime}}_{j=[(1-\lambda )n]}^{n} \int\limits_{-\infty }^{\infty }\left [1 -\cos (A_{ kj}tx)\right ]\,\mathcal{P}(dx)\right \}\,dt.$$

Choose δ > 0 such that \(\gamma \,=\,\mathcal{P}\{x\, :\, \vert x\vert > \delta \}\,>\,0\). Since the integrands are non-negative, we get

$$\begin{array}{rl} \mathbf{E}\,\tau _{k}& \ll \frac{1} {T}\int\limits_{-T}^{T}\exp \left \{-\frac{1} {2}{{}{\sum }{^\prime}}_{j=[(1-\lambda _{r})n]}^{n} \int\limits_{\vert x\vert >\delta }\left [1 -\cos (A_{kj}tx)\right ]\,\mathcal{P}(dx)\right \} \\ & = \frac{1} {T}\int\limits_{-T}^{T}{e}^{-\beta n+s(t)}\,dt, \end{array}$$

where \(\lambda _{r} = \lambda (2r - 1)/(2r),\,\beta = \gamma \vert J \cap \{ j\, :\, (1 - \lambda _{r})n \leq j \leq n\}\vert /(2n)\) and

$$s(t) = \frac{1} {2}\int\limits_{\vert x\vert >\delta }{{}{ \sum }{^\prime}}_{j=[(1-\lambda _{r})n]}^{n}\cos (A_{ kj}tx)\,\mathcal{P}(dx).$$

Put \(\alpha = \lambda \gamma /(4r)\) and consider \(\Lambda _{1} =\{ t \in [-T,T]\, :\, \vert s(t)\vert < \alpha n/2\}\) and \(\Lambda _{2} = [-T,T] \setminus \Lambda _{1}\). Since \(\vert J\vert \geq n\lambda /r\) and by the definition of β, we have \(\beta \geq \alpha \). Therefore,

$$\mathbf{E}\,\tau _{k} \ll {e}^{-\alpha n/2} + \frac{\mu (\Lambda _{2})} {T} ,$$
(16)

where μ denotes the Lebesgue measure.

Let us estimate \(\mu (\Lambda _{2})\). It follows from Chebyshev’s and Hölder’s inequalities that

$$\mu (\Lambda _{2}) \leq \frac{16} {{\alpha }^{4}{n}^{4}} \int\limits_{-T}^{T}\vert s(t){\vert }^{4}\,dt \leq \frac{1} {{\alpha }^{4}{n}^{4}} \int\limits_{\vert x\vert >\delta }\,d\mathcal{P}\int\limits_{-T}^{T}\Big{\vert }{{}{\sum }{^\prime}}_{j=[(1-\lambda _{r})n]}^{n}\cos (A_{ kj}tx){\Big{\vert }}^{4}\,dt.$$
(17)

Put

$$S(x) = \int\limits_{-T}^{T}\Big{\vert }{{}{\sum }{^\prime}}_{j=[(1-\lambda _{r})n]}^{n}\cos (A_{ kj}tx){\Big{\vert }}^{4}\,dt$$

and assume, for simplicity, that r = 1, i.e., \(\lambda _{r} = \lambda /2,\,\sum ={ \sum }^{{^\prime}}\) and the summation is taken over all j. The general case is considered in a similar way.

We have

$$\begin{array}{rcl} S(x) = \int\limits_{-T}^{T}\bigg{(}\sum {_{j_{1}}\cos }^{4}(A_{ kj_{1}}tx) + \sum {_{j_{1}\neq j_{2}}\cos }^{3}(A_{ kj_{1}}tx)\cos (A_{kj_{2}}tx)& & \\ +\sum {_{j_{1}\neq j_{2}}\cos }^{2}{(A_{ kj_{1}}tx)\cos }^{2}(A_{ kj_{2}}tx)& & \\ +\sum {_{j_{1}\neq j_{2}\neq j_{3}}\cos }^{2}(A_{ kj_{1}}tx)\cos (A_{kj_{2}}tx)\cos (A_{kj_{3}}tx)& & \\ +\sum\limits_{j_{1}\neq j_{2}\neq j_{3}\neq j_{4}}\cos (A_{kj_{1}}tx)\cos (A_{kj_{2}}tx)\cos (A_{kj_{3}}tx)\cos (A_{kj_{4}}tx)\bigg{)}\,dt.& &\end{array}$$
(18)

The first three summands in (18) are easily estimated as follows:

$$\begin{array}{rcl} \Big{\vert }\int\limits_{-T}^{T}\bigg{(}\sum {_{j_{1}}\cos }^{4}(A_{ kj_{1}}tx) + \sum {_{j_{1}\neq j_{2}}\cos }^{3}(A_{ kj_{1}}tx)\cos (A_{kj_{2}}tx)& & \\ +\sum {_{j_{1}\neq j_{2}}\cos }^{2}{(A_{ kj_{1}}tx)\cos }^{2}(A_{ kj_{2}}tx)\bigg{)}\,dt\Big{\vert }\ll T{n}^{2}.& &\end{array}$$
(19)

The next two summands are estimated by a common method; we consider only the last one. Using the formula \(\cos y = ({e}^{iy} + {e}^{-iy})/2\), it is easily shown that

$$\begin{array}{rcl} \Big{\vert }\int\limits_{-T}^{T} \sum\limits_{j_{1}\neq j_{2}\neq j_{3}\neq j_{4}}\cos (A_{kj_{1}}tx)\cos (A_{kj_{2}}tx)\cos (A_{kj_{3}}tx)\cos (A_{kj_{4}}tx)\,dt\Big{\vert }& & \\ \ll \sum\limits_{j_{1}\neq j_{2}\neq j_{3}\neq j_{4}}\min \left (T,\vert x{\vert }^{-1}\vert \pm A_{ kj_{1}} \pm A_{kj_{2}} \pm A_{kj_{3}} \pm A_{kj_{4}}{\vert }^{-1}\right )& & \\ \ll \sum\limits_{j_{1}>j_{2}>j_{3}>j_{4}}\min \left (T,\vert x{\vert }^{-1}A_{ kj_{1}}^{-1}\Big{\vert }1 -\frac{A_{kj_{2}}} {A_{kj_{1}}} -\frac{A_{kj_{3}}} {A_{kj_{1}}} -\frac{A_{kj_{4}}} {A_{kj_{1}}}{ \Big{\vert }}^{-1}\right ),& &\end{array}$$
(20)

where the summation in the middle term is taken over all possible combinations of signs.

Consider the partition of the index set

$$\{j = (j_{1},j_{2},j_{3},j_{4})\, :\, j_{1} > j_{2} > j_{3} > j_{4}\} = K_{1} \cup K_{2},$$

where

$$K_{1} = \left \{j\, :\, j_{1} - j_{2} \leq \frac{10} {\lambda } ,\,j_{1} - j_{3} \leq \frac{10} {\lambda } \vert \ln \lambda \vert \right \}$$

and \(K_{2}\) is the complement of \(K_{1}\). Clearly, \(\vert K_{1}\vert \ll {n}^{2}\vert \ln \lambda \vert /{\lambda }^{2}\). Therefore,

$$\sum\limits_{j\in K_{1}}\min \left (T,\vert x{\vert }^{-1}A_{ kj_{1}}^{-1}\Big{\vert }1 -\frac{A_{kj_{2}}} {A_{kj_{1}}} -\frac{A_{kj_{3}}} {A_{kj_{1}}} -\frac{A_{kj_{4}}} {A_{kj_{1}}}{ \Big{\vert }}^{-1}\right ) \ll \frac{T{n}^{2}\vert \ln \lambda \vert } {{\lambda }^{2}}.$$
(21)

Consider now

$$\sum\limits_{j\in K_{2}}A_{kj_{1}}^{-1}\Big{\vert }1 -\frac{A_{kj_{2}}} {A_{kj_{1}}} -\frac{A_{kj_{3}}} {A_{kj_{1}}} -\frac{A_{kj_{4}}} {A_{kj_{1}}}{ \Big{\vert }}^{-1}.$$

Putting \(p = j_{1} - j_{2}\), we have

$$\begin{array}{rcl} \frac{A_{kj_{2}}} {A_{kj_{1}}} = \frac{(j_{1} - p)\cdots (j_{1} - p - k + 1)} {j_{1}\cdots (j_{1} - k + 1)} & & \\ = \left (1 - \frac{p} {j_{1}}\right )\cdots \left (1 - \frac{p} {j_{1} - k + 1}\right ) \leq \exp \left \{-p\sum\limits_{l=j_{1}-k+1}^{j_{1} } \frac{1} {l} \right \}.& & \\ \end{array}$$

Since for any natural l

$$\frac{1} {l} >\ln \left (1 + \frac{1} {l} \right ) =\ln (l + 1) -\ln l,$$

we get

$$\sum\limits_{l=j_{1}-k+1}^{j_{1} } \frac{1} {l} >\ln (j_{1} + 1) -\ln (j_{1} - k + 1) = -\ln \left (1 - \frac{k} {j_{1} + 1}\right ).$$

Taking into account \(\lambda n \leq k \leq (1 - \lambda )n\) and \((1 - \lambda /2)n \leq j_{1} \leq n\) and using the inequality

$$-\ln (1 - t) \geq t,\quad t \in [0,1],$$

we get

$$\sum\limits_{l=j_{1}-k+1}^{j_{1} } \frac{1} {l} \geq \frac{\lambda n} {n + 1} \geq \frac{1} {2}\lambda.$$

Therefore,

$$\frac{A_{kj_{2}}} {A_{kj_{1}}} \leq \exp \left \{-\frac{\lambda } {2}p\right \} =\exp \left \{-\frac{\lambda } {2}(j_{1} - j_{2})\right \}.$$
(22)

If \(j \in K_{2}\) and \(j_{1} - j_{2} > 10/\lambda \), then

$$\frac{A_{kj_{4}}} {A_{kj_{1}}} \leq \frac{A_{kj_{3}}} {A_{kj_{1}}} \leq \frac{A_{kj_{2}}} {A_{kj_{1}}} \leq {e}^{-5} < \frac{1} {4},$$

which implies

$$1 -\frac{A_{kj_{2}}} {A_{kj_{1}}} -\frac{A_{kj_{3}}} {A_{kj_{1}}} -\frac{A_{kj_{4}}} {A_{kj_{1}}} \geq \frac{1} {4}.$$
(23)

Suppose now \(j \in K_{2}\) and \(j_{1} - j_{3} > 10\vert \ln \lambda \vert /\lambda \). Using (22) and \(\lambda \in (0,1/2)\), we get

$$1 -\frac{A_{kj_{2}}} {A_{kj_{1}}} \geq 1 - {e}^{-\lambda /2} \geq \frac{\lambda } {2}\left (1 -\frac{\lambda } {4}\right ) \geq \frac{7} {16}\lambda.$$

Further, (22) also holds with \(j_{2}\) replaced by \(j_{3}\). Therefore,

$$\frac{A_{kj_{4}}} {A_{kj_{1}}} \leq \frac{A_{kj_{3}}} {A_{kj_{1}}} \leq \exp \left \{-\frac{\lambda } {2}(j_{1} - j_{3})\right \} \leq \exp \left \{-\frac{10} {2} \vert \ln \lambda \vert \right \}\leq {\lambda }^{5} \leq \frac{1} {16}\lambda.$$

Thus,

$$1 -\frac{A_{kj_{2}}} {A_{kj_{1}}} -\frac{A_{kj_{3}}} {A_{kj_{1}}} -\frac{A_{kj_{4}}} {A_{kj_{1}}} \geq \frac{5} {16}\lambda.$$
(24)

It follows from (23) and (24) that

$$\sum\limits_{j\in K_{2}}A_{kj_{1}}^{-1}\Big{\vert }1 -\frac{A_{kj_{2}}} {A_{kj_{1}}} -\frac{A_{kj_{3}}} {A_{kj_{1}}} -\frac{A_{kj_{4}}} {A_{kj_{1}}}{ \Big{\vert }}^{-1} \ll \frac{1} {\lambda }\sum\limits_{j}A_{kj_{1}}^{-1}.$$

Taking into account the structure of the index set {j}, we have

$$\sum\limits_{j}A_{kj_{1}}^{-1} \leq \frac{{(\lambda n)}^{4}} {A_{k,[(1-\lambda /2)n]}},$$

consequently,

$$\sum\limits_{j\in K_{2}}A_{kj_{1}}^{-1}\Big{\vert }1 -\frac{A_{kj_{2}}} {A_{kj_{1}}} -\frac{A_{kj_{3}}} {A_{kj_{1}}} -\frac{A_{kj_{4}}} {A_{kj_{1}}}{ \Big{\vert }}^{-1} \ll \frac{{\lambda }^{3}{n}^{4}} {A_{k,[(1-\lambda /2)n]}}.$$
(25)

Combining (18)–(21) and (25), we obtain

$$S(x) \ll T{n}^{2} + \frac{T{n}^{2}\vert \ln \lambda \vert } {{\lambda }^{2}} + \frac{{\lambda }^{3}{n}^{4}} {\vert x\vert A_{k,[(1-\lambda /2)n]}}.$$

Applying this to (17), we get

$$\mu (\Lambda _{2}) \ll \frac{T} {{\alpha }^{4}{n}^{2}} + \frac{T\vert \ln \lambda \vert } {{\lambda }^{2}{\alpha }^{4}{n}^{2}} + \frac{{\lambda }^{3}} {{\alpha }^{4}\delta A_{k,[(1-\lambda /2)n]}}.$$

By (16),

$$\mathbf{E}\,\tau _{k} \ll {e}^{-\alpha n/2} + \frac{1} {{\alpha }^{4}{n}^{2}} + \frac{\vert \ln \lambda \vert } {{\lambda }^{2}{\alpha }^{4}{n}^{2}} + \frac{{\lambda }^{3}} {T{\alpha }^{4}\delta A_{k,[(1-\lambda /2)n]}}.$$

Recalling that \(T = \lambda /(KA_{k,[(1-\lambda )n]})\), we obtain

$$\mathbf{E}\,\tau _{k} \ll {e}^{-\alpha n/2} + \frac{1} {{\alpha }^{4}{n}^{2}} + \frac{\vert \ln \lambda \vert } {{\lambda }^{2}{\alpha }^{4}{n}^{2}} + \frac{{\lambda }^{2}KA_{k,[(1-\lambda )n]}} {{\alpha }^{4}\delta A_{k,[(1-\lambda /2)n]}}.$$

It follows from (22) that

$$\frac{A_{k,[(1-\lambda )n]}} {A_{k,[(1-\lambda /2)n]}} \leq {e}^{-{\lambda }^{2}n/4 }.$$

Thus,

$$\mathbf{E}\,\tau _{k} \ll {e}^{-\alpha n/2} + \frac{1} {{\alpha }^{4}{n}^{2}} + \frac{\vert \ln \lambda \vert } {{\lambda }^{2}{\alpha }^{4}{n}^{2}} + \frac{{\lambda }^{2}K} {{\alpha }^{4}\delta } {e}^{-{\lambda }^{2}n/4 }.$$

Recalling that \(\alpha = \gamma \lambda /4\), we obtain

$$\mathbf{E}\,\tau _{k} \ll {e}^{-\gamma \lambda n/8} + \frac{1} {{\gamma }^{4}{\lambda }^{4}{n}^{2}} + \frac{\vert \ln \lambda \vert } {{\gamma }^{4}{\lambda }^{6}{n}^{2}} + \frac{K} {{\gamma }^{4}{\lambda }^{2}\delta }{e}^{-{\lambda }^{2}n/4 }.$$

Since K is determined by \(\epsilon \), and γ, δ are determined by the distributions of \(\{\eta _{j}\}\), Lemma 4 is proved. □

Now we are ready to complete the proof of Theorem E. It follows from (10) that

$$M_{n}(1,\infty ) \leq 2\lambda n + 2 + \sum\limits_{j=0}^{n}\chi _{ j} + 2\sum\limits_{j=[\lambda n]}^{[(1-\lambda )n]}\tau _{ j}.$$
(26)

By Lemma 4 and Chebyshev’s inequality,

$$\mathbf{P}\,\left \{\sum\limits_{k=[\lambda n]}^{[(1-\lambda )n]}\tau _{k} > {n}^{3/4}\right \} \leq \frac{\sum\limits_{k=[\lambda n]}^{[(1-\lambda )n]}\mathbf{E}\,\tau _{k}} {{n}^{3/4}} \leq c_{1}{n}^{-5/4}.$$
(27)

Further, it follows from (14) that there exists a constant \(c_{2} > 0\) depending only on \(\epsilon \) such that

$$\mathbf{P}\,\left \{\sum\limits_{j=0}^{n}\chi _{ j} > 2\epsilon n\right \} \leq c_{2}{n}^{-2}.$$
(28)

Combining (26)–(28), we get

$$\mathbf{P}\,\left \{M_{n}(1,\infty ) > 2\lambda n + 2 + 2{n}^{3/4} + 2\epsilon n\right \} \leq c_{ 1}{n}^{-5/4} + c_{ 2}{n}^{-2}.$$

Considering the random polynomials g(1∕x) and g(−x), one can obtain similar estimates for \(M_{n}(0,1)\) and \(M_{n}(-\infty ,0)\). Thus there exist positive constants \(c_{1}^{\prime},c_{2}^{\prime}\) such that

$$\mathbf{P}\,\left \{M_{n} > 2\lambda n + 2 + 2{n}^{3/4} + 2\epsilon n\right \} \leq c{^\prime}_{ 1}{n}^{-5/4} + c{^\prime}_{ 2}{n}^{-2}.$$

According to the Borel-Cantelli lemma, with probability one there are only finitely many n such that \(M_{n} > 2\lambda n + 2 + 2{n}^{3/4} + 2\epsilon n\). Since \(\lambda ,\epsilon \) are arbitrarily small,

$$\mathbf{P}\,\left \{\frac{M_{n}} {n} \rightarrow _{n \rightarrow \infty }0\right \} = 1.$$

Theorem E is proved.