Abstract
Let \(G_{n}(z) = \xi _{0} + \xi _{1}z + \cdots + \xi _{n}{z}^{n}\) be a random polynomial with i.i.d. coefficients (real or complex). We show that the arguments of the roots of \(G_n(z)\) are asymptotically uniformly distributed in [0, 2π] as \(n \rightarrow \infty\). We also prove that the condition \(\mathbf{E}\,\ln (1 + \vert \xi _{0}\vert ) < \infty \) is necessary and sufficient for the roots to concentrate asymptotically near the unit circumference.
Mathematics Subject Classification (2010): 60-XX, 30C15
1 Introduction: Problem and Results
Let \(\{\xi _{k}\}_{k=0}^{\infty }\) be a sequence of independent identically distributed real- or complex-valued random variables. It is always supposed that \(\mathbf{P}\,(\xi _{0} = 0) < 1\).
Consider the sequence of random polynomials
\[
G_{n}(z) = \xi _{0} + \xi _{1}z + \cdots + \xi _{n}{z}^{n},\qquad n = 1,2,\ldots
\]
By \(z_{1n},\ldots ,z_{nn}\) denote the zeros of \(G_n\). It is not hard to show (see [1]) that there exists an indexing of the zeros such that for each \(k=1,\ldots ,n\) the k-th zero \(z_{kn}\) is a single-valued random variable. For any measurable subset A of the complex plane \(\mathbb{C}\) put \(N_{n}(A) =\#\{z_{kn} : z_{kn} \in A\}\). Then \(N_n(A)/n\) is a probability measure on the plane (the empirical distribution of the zeros of \(G_n\)). For any a, b such that \(0 \leq a < b \leq \infty\) put \(R_{n}(a,b) = N_{n}(\{z : a \leq \vert z\vert \leq b\})\), and for any α, β such that \(0 \leq \alpha < \beta \leq 2\pi\) put \(S_{n}(\alpha ,\beta ) = N_{n}(\{z : \alpha \leq \arg z \leq \beta \})\). Thus \(R_n/n\) and \(S_n/n\) define the empirical distributions of \(\vert z_{kn}\vert\) and \(\arg z_{kn}\).
In this paper we study the limit distributions of \(N_{n},R_{n},S_{n}\) as n → ∞.
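As a numerical illustration (ours, not part of the paper), one can sample a random polynomial with standard normal coefficients and inspect the empirical measures \(R_n/n\) and \(S_n/n\) directly; all variable names below are ours:

```python
import numpy as np

# Illustrative simulation: with i.i.d. standard normal coefficients, most
# roots of G_n lie near the unit circle and their arguments spread out
# evenly -- exactly what R_n/n and S_n/n measure.
rng = np.random.default_rng(0)
n = 300
xi = rng.standard_normal(n + 1)            # coefficients xi_0, ..., xi_n
roots = np.roots(xi[::-1])                 # np.roots wants the leading coefficient first
r = np.abs(roots)
delta = 0.2
# fraction of roots with modulus in [1-delta, 1/(1-delta)]
near_circle = np.mean((1 - delta <= r) & (r <= 1 / (1 - delta)))
args = np.angle(roots) % (2 * np.pi)
half = np.mean(args <= np.pi)              # ~ S_n(0, pi)/n, close to 1/2
print(near_circle, half)
```

Already for moderate n the first fraction is close to one and the second close to one half.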
The question of the distribution of the complex roots of \(G_n\) originated with Hammersley [1]. The asymptotic study of \(R_{n},S_{n}\) was initiated by Shparo and Shur in [16]. To describe their results let us introduce the function
where \(\log^{+}s =\max (1,\log s)\). We assume that \(\epsilon \,>\,0,m \in {\mathbb{Z}}^{+}\) and \(f(t) = {{(\log }^{+}t)}^{1+\epsilon }\) for m = 0.
Shparo and Shur have proved in [16] that if
for some \(\epsilon > 0,m \in {\mathbb{Z}}^{+}\), then for any \(\delta \in (0,1)\) and \(\alpha ,\beta \) such that 0 \(\leq \alpha < \beta \leq 2\pi \)
The first relation means that under quite weak constraints imposed on the coefficients of a random polynomial, almost all its roots “concentrate uniformly” near the unit circumference with high probability; the second relation means that the arguments of the roots are asymptotically uniformly distributed.
Later Shepp and Vanderbei [15], and Ibragimov and Zeitouni [5], obtained more precise asymptotic formulas for \(R_n\) under additional conditions on the coefficients of \(G_n\).
What kind of further results could be expected? First let us note that if, e.g., \(\mathbf{E}\,\vert \xi _{0}\vert < \infty \), then for \(\vert z\vert < 1\)
\[
G_{n}(z) \rightarrow G(z) = \sum_{k=0}^{\infty}\xi_{k}{z}^{k}
\]
as \(n \rightarrow \infty\) a.s. The function G(z) is analytic inside the unit disk \(\{\vert z\vert < 1\}\). Therefore for any δ > 0 it has only a finite number of zeros in the disk \(\{\vert z\vert < 1 - \delta \}\). On the other hand, the average number of zeros in the domain \(\vert z\vert > 1/(1 - \delta )\) is the same (this can be seen by considering the random polynomial \({z}^{n}G_{n}(1/z)\)). Thus one could expect that under sufficiently weak constraints on the coefficients of a random polynomial the zeros concentrate near the unit circle \(\Gamma =\{ z : \vert z\vert = 1\}\) and the measure \(R_n/n\) converges to the delta measure at the point one. By symmetry we may also expect that the arguments \(\arg z_{kn}\) are asymptotically uniformly distributed. Below we give conditions for these hypotheses to hold. We shall prove the following three theorems about the behavior of \(N_{n}/n,R_{n}/n,S_{n}/n\).
For the sake of simplicity, we assume that \(\mathbf{P}\,\{\xi _{0} = 0\} = 0\). To treat the general case it is enough to study in the same way the behavior of the roots on the sets \(\{\theta^{\prime}_{n} = k,\ \theta^{\prime\prime}_{n} = l\}\), where
Theorem A.
The sequence of the empirical distributions \(R_n/n\) converges to the delta measure at the point one almost surely if and only if
\[
\mathbf{E}\,\ln (1 + \vert \xi _{0}\vert ) < \infty. \qquad (1)
\]
In other words, (1) is a necessary and sufficient condition for
\[
\lim_{n\rightarrow\infty}\frac{R_{n}(1-\delta,\,1+\delta)}{n} = 1 \quad\text{a.s.} \qquad (2)
\]
to hold for any \(\delta \in (0,1)\).
We shall also prove that if (1) does not hold, then no limit distribution for \(\{z_{kn}\}\) exists.
Theorem B.
Suppose condition (1) holds. Then the empirical distribution \(N_n/n\) almost surely converges to the probability measure \(N(\cdot ) = \mu (\cdot \cap \Gamma )/(2\pi )\), where \(\Gamma =\{ z : \vert z\vert = 1\}\) and μ is the Lebesgue measure.
Theorem C.
The empirical distribution \(S_n/n\) almost surely converges to the uniform distribution, i.e.,
\[
\lim_{n\rightarrow\infty}\frac{S_{n}(\alpha,\beta)}{n} = \frac{\beta-\alpha}{2\pi}\quad\text{a.s.}
\]
for any \(\alpha ,\beta \) such that \(0 \leq \alpha < \beta \leq 2\pi\).
Let us remark here that Theorem C does not require any additional conditions on the sequence \(\{\xi _{k}\}\).
The next result is of crucial importance in the proof of Theorem C.
Theorem D.
Let \(\{\eta _{k}\}_{k=0}^{\infty }\) be a sequence of independent identically distributed real-valued random variables. Put \(g_{n}(x) = \sum_{k=0}^{n}\eta _{k}{x}^{k}\) and by \(M_n\) denote the number of real roots of the polynomial \(g_n(x)\). Then
\[
\frac{M_{n}}{n}\rightarrow 0 \quad\text{a.s.}\quad\text{and}\quad \mathbf{E}\,M_{n} = o(n),\qquad n\rightarrow\infty. \qquad (3)
\]
Theorem D is also of independent interest. In a number of papers it was shown that under weak conditions on the distribution of \(\eta_0\) one has \(\mathbf{E}\,M_{n} \sim c\log n\), \(n \rightarrow \infty \) (see [2–4, 6, 9, 10]). L. Shepp proposed the following conjecture: for any distribution of \(\eta_0\) there exist positive numbers \(c_{1},c_{2}\) such that \(c_{1}\log n \leq \mathbf{E}\,M_{n} \leq c_{2}\log n\) for all n. The first inequality was disproved in [17, 18], where a random polynomial \(g_n(x)\) with \(\mathbf{E}\,M_{n} < 1 + \epsilon \) was constructed. It is still unknown whether the second inequality is true. However, Theorem D shows that an arbitrary random polynomial cannot have too many real roots (see also [14]).
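A small Monte Carlo experiment (ours; it assumes standard normal coefficients, for which the classical Kac asymptotics \(\mathbf{E}\,M_n \sim (2/\pi)\log n\) apply) illustrates how few real roots a random polynomial typically has:

```python
import numpy as np

# Monte Carlo illustration: for standard normal coefficients the mean number
# of real roots grows only logarithmically, far below any linear growth, in
# line with Theorem D.
rng = np.random.default_rng(1)
n, trials = 100, 40
counts = []
for _ in range(trials):
    eta = rng.standard_normal(n + 1)
    roots = np.roots(eta[::-1])
    # for a real companion matrix, real eigenvalues are returned with zero
    # imaginary part, so this count is not sensitive to the tolerance
    counts.append(int(np.sum(np.abs(roots.imag) < 1e-9)))
mean_real = float(np.mean(counts))
print(mean_real)   # near (2/pi) * log(100), i.e. roughly 3
```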
In fact, in the proof of Theorem C we shall use a slightly generalized version of Theorem D:
Theorem E.
For some integer r consider a set of r non-degenerate probability distributions. Let \(\{\eta _{k}\}_{k=0}^{\infty }\) be a sequence of independent real-valued random variables with distributions from this set. As above, put \(g_{n}(x) = \sum_{k=0}^{n}\eta _{k}{x}^{k}\) and by \(M_n\) denote the number of real roots of the polynomial \(g_n(x)\). Then the conclusion (3) remains valid.
2 Proof of Theorem A
Let us establish the sufficiency of (1). Suppose it holds and fix \(\delta \in (0,1)\). We first prove that the radius of convergence of the series
\[
G(z) = \sum_{k=0}^{\infty}\xi_{k}{z}^{k} \qquad (4)
\]
is equal to one with probability one.
Consider ρ > 0 such that \(\mathbf{P}\,\{\vert \xi _{0}\vert > \rho \} > 0\). Using the Borel-Cantelli lemma we obtain that with probability one the sequence {ξ k } contains infinitely many ξ k such that \(\vert \xi _{k}\vert > \rho \). Therefore the radius of convergence of the series (4) does not exceed 1 almost surely.
On the other hand, for any non-negative random variable ζ
\[
\sum_{k=1}^{\infty}\mathbf{P}\,\{\zeta \geq k\} \leq \mathbf{E}\,\zeta.
\]
Therefore, it follows from (1) that
\[
\sum_{k=1}^{\infty}\mathbf{P}\,\{\vert\xi_{k}\vert \geq {e}^{\gamma k}\} < \infty
\]
for any positive constant γ. It follows from the Borel-Cantelli lemma that with probability one \(\vert \xi _{k}\vert < {e}^{\gamma k}\) for all sufficiently large k. Thus, according to the Cauchy-Hadamard formula (see, e.g., [11]), the radius of convergence of the series (4) is at least 1 almost surely.
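The Cauchy-Hadamard step can be spelled out as a short derivation, using the almost-sure bound \(\vert\xi_k\vert < e^{\gamma k}\) just obtained:

```latex
% Almost surely |\xi_k| < e^{\gamma k} for all large k, hence
\limsup_{k\to\infty}\vert\xi_k\vert^{1/k}\;\le\; e^{\gamma},
\qquad\text{so the radius of convergence satisfies}\qquad
R \;=\;\Bigl(\limsup_{k\to\infty}\vert\xi_k\vert^{1/k}\Bigr)^{-1}\;\ge\; e^{-\gamma}.
% Since \gamma > 0 is arbitrary, R \ge 1 a.s.; combined with the
% Borel--Cantelli argument above, R = 1 a.s.
```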
Hence with probability one G(z) is an analytic function inside the unit disk \(\{\vert z\vert < 1\}\). Therefore if \(0 \leq a < b < 1\), then \(R(a,b) < \infty \), where R(a,b) denotes the number of zeros of G inside the domain \(\{z : a \leq \vert z\vert \leq b\}\). It follows from the Hurwitz theorem (see, e.g., [11]) that \(R_{n}\left (0,1 - \delta \right ) \leq R\left (0,1 - \delta /2\right )\) with probability one for all sufficiently large n. This implies
In order to conclude the proof of (2) it remains to show that
\[
\lim_{n\rightarrow\infty}\frac{R_{n}(1+\delta,\infty)}{n} = 0 \quad\text{a.s.}
\]
In other words, we need to prove that \(\mathbf{P}\,\{A\} = 0\), where A denotes the event that there exists \(\epsilon > 0\) such that
\[
R_{n}(1+\delta,\infty) \geq \epsilon n
\]
holds for infinitely many values of n.
By B denote the event that G(z) is an analytical function inside the unit disk \(\{\vert z\vert < 1\}\). For \(m \in \mathbb{N}\) put
By C m denote the event that \(\zeta _{m} < \infty \). It was shown above that \(\mathbf{P}\,\{B\} = \mathbf{P}\,\{C_{m}\} = 1\) for \(m \in \mathbb{N}\). Therefore, to get \(\mathbf{P}\,\{A\} = 0\), it is sufficient to show that \(\mathbf{P}\,\{ABC_{m}\} = 0\) for some m.
Let us fix m; its exact value will be chosen later. Suppose the event \(ABC_m\) occurred. Index the roots of the polynomial \(G_n(z)\) in nondecreasing order of their absolute values:
Fix an arbitrary number C > 1 (an exact value will be chosen later). Consider indices i, j such that
If \(\vert z_{1}\vert \geq 1 - \delta /C\), then i = 0; if \(\vert z_{n}\vert \leq 1 + \delta \) then j = n.
It is easily shown that if
then
Therefore such z can not be a zero of the polynomial G n . Taking into account that the event C m occurred, we obtain a lower bound for the absolute values of the zeros for all sufficiently large n:
Therefore for any integer l satisfying \(j + 1 \leq l \leq n\) and all sufficiently large n
Since A occurred, \(n - j \geq n\epsilon \) for infinitely many values of n. Therefore if l satisfies \(n -\sqrt{n} \leq l \leq n\), then the inequalities \(j + 1 \leq l \leq n\) and \(l - j \geq n\epsilon /2\) hold for infinitely many values of n. According to the Hurwitz theorem for all sufficiently large n we have \(i \leq R_{n}(0,1 - \delta /C) \leq R(0,1 - \delta /(2C))\). Therefore for infinitely many values of n
Choose now C large enough to yield
Furthermore, holding C constant choose m such that
Since
there exists a random variable a > 1 such that for infinitely many values of n
On the other hand, it follows from \(n -\sqrt{n} \leq l\) and Vieta's formulas that
We combine these two inequalities to obtain for infinitely many values of n
where α is a positive random variable. Multiplying both sides by \(\vert\xi_n\vert\), we get
where D i denotes the event that \(\vert \xi _{0}\vert > {e}^{n/i}\max _{n-\sqrt{n}\leq l\leq n}\vert \xi _{l}\vert \) for infinitely many values of n.
To complete the proof it is sufficient to show that \(\mathbf{P}\,\{D_{i}\} = 0\) for all \(i \in \mathbb{N}\). With a view to applying the Borel-Cantelli lemma, we introduce the following events:
Considering θ > 0 such that \(\mathbf{P}\,\{\vert \xi _{0}\vert \leq \theta \} = F(\theta ) < 1\), we have
consequently,
and, according to the Borel-Cantelli lemma, \(\mathbf{P}\,\{D_{i}\} = 0\).
We prove the implication (2)\(\Rightarrow \)(1) arguing by contradiction. Suppose (1) does not hold, i.e.,
\[
\mathbf{E}\,\ln (1 + \vert \xi _{0}\vert ) = \infty. \qquad (5)
\]
It follows from (5) that
\[
\sum_{n=1}^{\infty}\mathbf{P}\,\{\vert\xi_{n}\vert \geq {e}^{\gamma n}\} = \infty \qquad (6)
\]
for an arbitrary positive γ. For \(k \in \mathbb{N}\) introduce the event \(F_k\) that \(\vert \xi _{n}\vert \geq {e}^{kn}\) holds for infinitely many values of n. It follows from (6) and the Borel-Cantelli lemma that \(\mathbf{P}\,\{F_{k}\} = 1\) and, consequently, \(\mathbf{P}\,\{ \cap _{k=1}^{\infty }F_{k}\} = 1\). This yields
Therefore with probability one for infinitely many values of n
where \(\epsilon > 0\) is an arbitrary fixed value. Fix one of these n and suppose \(\vert z\vert \geq \epsilon \). Then
Thus with probability one, for infinitely many values of n, all the roots of the polynomial \(G_n\) lie inside the disk \(\{z : \vert z\vert < \epsilon \}\), where \(\epsilon \) is an arbitrary positive constant. This means that (2) does not hold for any \(\delta \in (0,1)\).
3 Proof of Theorem B
Theorem B follows immediately from Theorems A and C. However, the additional assumption (1) makes possible a significantly simpler direct proof, which we present here.
Consider a set of sequences of real numbers
\[
\{a_{jn},\ j = 1,\ldots,n\},\qquad n = 1,2,\ldots,
\]
where all \(a_{jn} \in [0,1]\). We say that \(\{a_{jn}\}\) are uniformly distributed in [0,1] if for any \(0 \leq a < b \leq 1\)
\[
\lim_{n\rightarrow\infty}\frac{\#\{j : a_{jn} \in [a,b]\}}{n} = b - a.
\]
The definition is an insignificant generalization of the notion of uniformly distributed sequences (see, e.g., [7]). It is easy to see that the Weyl criterion (see Ibid.) continues to be valid in this case:
The set of sequences \(\{a_{jn},j = 1,\ldots ,n\},\,n = 1,2,\ldots ,\) is uniformly distributed if and only if for all \(l = 1,2,\ldots \)
\[
\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{j=1}^{n}{e}^{2\pi ila_{jn}} = 0.
\]
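As a quick numerical sanity check (ours, not from the paper), Weyl's criterion can be verified on the array \(a_{jn} = j/n\), which is uniformly distributed:

```python
import numpy as np

# For a_{jn} = j/n the Weyl exponential sums vanish -- here even exactly,
# since each is a full sum over the n-th roots of unity.
def weyl_sum(a, l):
    return abs(np.mean(np.exp(2j * np.pi * l * np.asarray(a))))

n = 1000
a = np.arange(1, n + 1) / n
sums = [weyl_sum(a, l) for l in (1, 2, 3)]
print(sums)   # all of order 1e-16 (floating-point zero)
```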
Let \(z_{jn} = r_{jn}{e}^{i\theta _{jn}}\) be a zero of \(G_{n}(z)\), \(r_{jn} = \vert z_{jn}\vert\), \(\theta _{jn} = \arg z_{jn}\), \(0 \leq \theta _{jn} < 2\pi\). The asymptotic uniform distribution of the arguments is equivalent to the statement that the set of sequences \(\{\theta _{jn}/(2\pi )\}\) is uniformly distributed. Thus, according to Weyl's criterion, it is enough to show that for any \(l = 1,2,\ldots \)
\[
\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{j=1}^{n}{e}^{il\theta_{jn}} = 0
\]
with probability 1.
For simplicity we assume that \(\xi _{0}\neq 0\). Consider the random polynomial
\[
{z}^{n}G_{n}(1/z) = \xi _{n} + \xi _{n-1}z + \cdots + \xi _{0}{z}^{n}.
\]
Its roots are \(z_{kn}^{-1}\). According to Newton’s formulas (see, e.g., [8]),
where \(\varphi _{l}(x_{1},\ldots ,x_{l})\) are polynomials which do not depend on n (for example, \(\varphi _{1}(x) = -x\)). It follows that
As was shown in the proof of Theorem A, for \(\vert z\vert < 1\) the polynomials \(G_n(z)\) converge to the analytic function \(G(z) = \sum_{k=0}^{\infty }\xi _{k}{z}^{k}\) with probability 1. Since \(\xi _{0}\neq 0\), the function G(z) has no zeros inside a disk \(\{z : \vert z\vert \leq \rho \}\), where \(\mathbf{P}\{\rho > 0\} = 1\). Hence for \(n \geq N\), where \(\mathbf{P}\{N < \infty \} = 1\), the polynomials \(G_n(z)\) have no zeros inside \(\{z : \vert z\vert \leq \rho \}\). Let γ be a positive number. It follows from (7) that
Theorem A implies that the second term on the right-hand side goes to zero as \(n \rightarrow \infty \) with probability 1. Hence
with probability 1 and the theorem follows.
4 Proof of Theorem C
Consider integers \(p,q_{1},q_{2}\) such that \(0 \leq q_{1} < q_{2} < p - 1\). Put \(\varphi _{j} = q_{j}/p\), j = 1, 2, and let us estimate \(S_{n} = S_{n}(2\pi \varphi _{1},2\pi \varphi _{2})\). Evidently \(S_{n} =\lim _{R\rightarrow \infty }S_{nR}\), where \(S_{nR}\) is the number of zeros of \(G_n(z)\) inside the domain \(A_{R} =\{ z : \vert z\vert \leq R,\ 2\pi \varphi _{1} \leq \arg z \leq 2\pi \varphi _{2}\}\). It follows from the argument principle (see, e.g., [11]) that \(S_{nR}\) is equal to the change of the argument of \(G_n(z)\) divided by 2π as z traverses the boundary of \(A_R\). The boundary consists of the arc \(\Gamma _{R} =\{ z : \vert z\vert = R,\ 2\pi \varphi _{1} \leq \arg z \leq 2\pi \varphi _{2}\}\) and two intervals \(L_{j} =\{ z : 0 \leq \vert z\vert \leq R,\ \arg z = 2\pi \varphi _{j}\},\ j = 1,2\). It can easily be checked that if R is sufficiently large, then the change of the argument as z traverses \(\Gamma_R\) is equal to \(2\pi n(\varphi _{2} - \varphi _{1}) + o(1)\) as \(n \rightarrow \infty \). If z traverses a subinterval of \(L_j\) and the change of the argument of \(G_n(z)\) is at least π, then the function \(\vert G_{n}(z)\vert \cos (\arg G_{n}(z))\), i.e., the real part of \(G_n(z)\), has at least one root in this subinterval. It follows from Theorem E that with probability one the number of real roots of the polynomial
\[
\sum_{k=0}^{n}\mathrm{Re}\bigl(\xi_{k}{e}^{2\pi ik\varphi_{j}}\bigr){x}^{k}
\]
is o(n) as n → ∞. Thus the change of the argument of \(G_{n}(z)\) as z traverses \(L_j\) is o(n) as \(n \rightarrow \infty \) and
\[
\lim_{n\rightarrow\infty}\frac{S_{n}(2\pi\varphi_{1},2\pi\varphi_{2})}{n} = \varphi_{2} - \varphi_{1} \quad\text{a.s.}
\]
The set of points of the form \(\exp \{2\pi iq/p\}\) is dense in the unit circle \(\{z : \vert z\vert = 1\}\). Therefore
\[
\lim_{n\rightarrow\infty}\frac{S_{n}(\alpha,\beta)}{n} = \frac{\beta-\alpha}{2\pi} \quad\text{a.s.}
\]
for any \(\alpha ,\beta \) such that \(0 \leq \alpha < \beta \leq 2\pi \).
5 Proof of Theorem E
First we reduce the problem of counting the real zeros of \(g_n(x)\) to the problem of counting sign changes in the sequence of derivatives \(\{g_{n}^{(j)}(1)\}_{j=0}^{n}\).
Let \(\{a_{j}\}_{j=0}^{n}\) be a sequence of real numbers. By \(Z(\{a_{j}\})\) denote the number of sign changes in the sequence \(\{a_j\}\), defined as follows: first exclude all zero members from the sequence, then count the number of neighboring members of different signs.
For any polynomial p(x) of degree n put \(\mathit{Z_{p}}(x) = Z(\{{p}^{(j)}(x)\})\), i.e., the number of sign changes in the sequence \(p(x),{p}^{{^\prime}}(x),\ldots ,{p}^{(n)}(x)\).
Lemma 1 (Budan-Fourier Theorem).
Suppose p(x) is a polynomial such that \(p(a),p(b)\neq 0\) for some a < b. Then the number of roots of p(x) inside (a,b) does not exceed \(\mathit{Z_{p}}(a) -\mathit{Z_{p}}(b)\). Moreover, the difference between \(\mathit{Z_{p}}(a) -\mathit{Z_{p}}(b)\) and the number of roots is an even number.
Proof.
See, e.g., [8]. □
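The sign-change count and the Budan-Fourier bound are easy to check computationally; in this illustrative snippet the helpers `sign_changes` and `Z_p` are our names, not the paper's:

```python
import numpy as np

# The sign-change count Z({a_j}) and Z_p(x), and the Budan-Fourier bound.
def sign_changes(a):
    s = [v for v in a if v != 0]                  # drop zero members first
    return sum(1 for u, v in zip(s, s[1:]) if u * v < 0)

def Z_p(coeffs, x):
    # coeffs in increasing-degree order; evaluate p, p', ..., p^{(n)} at x
    p = np.polynomial.Polynomial(coeffs)
    vals = []
    for _ in range(len(coeffs)):
        vals.append(p(x))
        p = p.deriv()
    return sign_changes(vals)

# p(t) = (t-1)(t-2)(t-3): exactly 3 roots in (0.5, 3.5); the bound is tight here
coeffs = [-6.0, 11.0, -6.0, 1.0]
print(Z_p(coeffs, 0.5), Z_p(coeffs, 3.5))   # 3 0
```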
Corollary 1.
The number of roots of p(x) inside \([1,\infty)\) does not exceed \(Z_{p}(1)\).
Proof.
For all sufficiently large x the sign of p (j)(x) coincides with the sign of the leading coefficient. □
Corollary 2.
The function \(Z_{p}(x)\) is non-increasing.
Let us turn back to the random polynomial \(g_n(x)\). Here and in what follows we omit the index n when this causes no ambiguity. By \(M_n(a,b)\) denote the number of zeros of g(x) inside the interval [a,b].
First let us prove that
\[
\mathbf{E}\,Z_{g}(1) = o(n),\qquad n\rightarrow\infty. \qquad (8)
\]
Fix some \(\epsilon > 0\) and \(\lambda \in (0,1/2)\). Since the distributions of \(\{\eta_j\}\) belong to a finite set, there exists \(K = K(\epsilon )\) such that
\[
\mathbf{P}\,\{\vert\eta_{j}\vert \geq K\} \leq \epsilon\qquad\text{for all } j. \qquad (9)
\]
Let I be the subset of \(\{0,1,\ldots ,n\}\) consisting of indices j such that \(\vert \eta _{j}\vert < K\) and \([\lambda n] \leq j \leq [(1 - \lambda )n]\). Put
\[
g_{1}(x) = \sum_{j\in I}\eta_{j}{x}^{j},\qquad g_{2}(x) = g(x) - g_{1}(x).
\]
Let τ k be the indicator of \(\{\vert g_{1}^{(k)}(1)\vert \geq \vert g_{2}^{(k)}(1)\vert \}\) and χ j be the indicator of \(\{\vert \eta _{j}\vert \geq K\}\).
Lemma 2.
Let \(a_{1},a_{2},b_{1},b_{2}\) be real numbers. If \((a_{1} + a_{2})(b_{1} + b_{2}) < 0\) and \(a_{2}b_{2} \geq 0\), then either \(\vert a_{1}\vert \geq \vert a_{2}\vert \) or \(\vert b_{1}\vert \geq \vert b_{2}\vert \).
Proof.
The proof is trivial. □
It follows from Lemma 2 that
Owing to the monotonicity of the function \(Z_{g_{2}}(x)\), one has
Hence,
Using (9) we have \(\mathbf{E}\,\chi _{j} = \mathbf{P}\,\{\vert \eta _{j}\vert \geq K\} \leq \epsilon \), therefore,
Let us now estimate the value \(\mathbf{E}\,\tau _{j}\). Note that \({g}^{(k)}(x) = \sum\limits_{l=k}^{n}\eta _{l}A_{k,l}{x}^{l-k}\), where \(A_{k,l} = l(l - 1)\cdots (l - k + 1)\). Fix some integer k such that \(\lambda n \leq k \leq (1 - \lambda )n\). If \(n - 1 \geq j \geq k\), then
which implies
for \(\lambda n \leq k \leq j \leq (1 - \lambda )n\). Consequently,
This yields that
For an arbitrary random variable X define the concentration function Q(h; X) as follows:
\[
Q(h;X) = \sup_{x\in\mathbb{R}}\mathbf{P}\,\{x \leq X \leq x + h\},\qquad h \geq 0.
\]
If X, Y are independent random variables, then (see, e.g., [12])
\[
Q(h;X + Y) \leq \min\bigl(Q(h;X),\,Q(h;Y)\bigr).
\]
Therefore,
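The subadditivity property above can be checked exactly on a small discrete law; in this illustrative snippet (all names ours) X is uniform on {0, 1, 2} and Y is an independent copy:

```python
from fractions import Fraction
from itertools import product

# Exact check of Q(h; X+Y) <= min(Q(h;X), Q(h;Y)) for discrete distributions.
def Q(h, dist):
    # dist maps atoms to probabilities; for a discrete law the sup is
    # attained with the window's left edge at an atom
    return max(sum(p for v, p in dist.items() if a <= v <= a + h)
               for a in dist)

X = {0: Fraction(1, 3), 1: Fraction(1, 3), 2: Fraction(1, 3)}
S = {}
for (u, p), (v, q) in product(X.items(), X.items()):
    S[u + v] = S.get(u + v, Fraction(0)) + p * q      # law of X + Y
print(Q(1, S), Q(1, X))   # 5/9 and 2/3
```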
To estimate the right-hand side of (12) we use the following result.
Lemma 3 (the Kolmogorov-Rogozin inequality).
Let \(X_{1},X_{2},\ldots ,X_{n}\) be independent random variables. Then for any \(0 < h_{j} \leq h\), \(j = 1,\ldots ,n\),
\[
Q\Bigl(h;\ \sum_{j=1}^{n}X_{j}\Bigr) \leq Ch{\Bigl(\sum_{j=1}^{n}h_{j}^{2}\bigl(1 - Q(h_{j};X_{j})\bigr)\Bigr)}^{-1/2},
\]
where C is an absolute constant.
Proof.
See [13]. □
Since the distributions of {η j } belong to a finite set, we get
Putting \(h = h_{j} = 2K/\lambda \) in (13) and using (12), we obtain
Combining this with (11), we have
Since \(\lambda ,\epsilon \) are arbitrary positive numbers, we obtain (8), which together with the corollary from Lemma 1 implies
Considering the random polynomials \({x}^{n}g(1/x)\) and \(g(-x)\), it is possible to obtain similar estimates for \(M_n(0,1)\) and \(M_{n}(-\infty ,0)\). Thus the second part of (3) holds. To prove the first one, we estimate the probabilities of large deviations for the sums \(\sum \chi _{j}\) and \(\sum \tau _{j}\). Elementary considerations or an application of Bernstein's inequality (see, e.g., [12]) lead to
The analysis of the behavior of ∑τ j is slightly more difficult.
Henceforth we shall use the following notation: for any positive functions \(f_{1},f_{2}\) we write \(f_{1} \ll f_{2}\), if there exists an absolute constant C such that \(f_{1} \leq Cf_{2}\) in the domain of these functions.
Lemma 4.
There exists a constant c depending only on \(\lambda ,\epsilon \) and the distributions of {η j } such that
for \(\lambda n \leq k \leq (1 - \lambda )n\) .
Proof.
As was shown in (12),
To estimate the concentration function on the right-hand side we use a result of Esseen (see, e.g., [12]). Let X be a random variable with characteristic function f(t). Then
\[
Q\Bigl(\frac{1}{T};X\Bigr) \ll \frac{1}{T}\int_{-T}^{T}\vert f(t)\vert \,dt
\]
uniformly for all T > 0.
Putting \(T = \lambda /(KA_{k,[(1-\lambda )n]})\) and applying (15) , we obtain
where f j (t) is a characteristic function of η j . Further,
where \(\mathcal{P}_{j}\) is the distribution of the symmetrized \(\eta_j\), i.e., the distribution of \(\eta _{j} - \eta _{j}^{\prime}\), where \(\eta _{j}^{\prime}\) is an independent copy of \(\eta_j\).
There are at most r different distributions among \(\{\mathcal{P}_{j}\}_{(1-\lambda )n\leq j\leq n}\). Therefore there exist a distribution \(\mathcal{P}\) and a subset \(J \subset \{ j : (1 - \lambda )n \leq j \leq n\}\) such that \(\vert J\vert \geq n\lambda /r\) and \(\mathcal{P}_{j} = \mathcal{P}\) for all \(j \in J\). By \(\sum^{\prime}\) denote summation taken over all indices \(j \in J\). Thus,
Choose δ > 0 such that \(\gamma \,=\,\mathcal{P}\{x\, :\, \vert x\vert > \delta \}\,>\,0\). Since the integrands are non-negative, we get
where \(\lambda _{r} = \lambda (2r - 1)/(2r),\,\beta = \vert J \cap \{ j\, :\, (1 - \lambda _{r})n \leq j \leq n\}\vert /(2n)\) and
Put \(\alpha = \lambda \gamma /(4r)\) and consider \(\Lambda _{1} =\{ t \in [-T,T]\, :\, \vert s(t)\vert < \alpha n/2\}\) and \(\Lambda _{2} = [-T,T] \setminus \Lambda _{1}\). Since \(\vert J\vert \geq n\lambda /r\) and by the definition of β, we have \(\beta \geq \alpha \). Therefore,
where μ denotes the Lebesgue measure.
Let us estimate \(\mu (\Lambda _{2})\). It follows from Chebyshev’s and Hölder’s inequalities that
Put
and assume, for simplicity, that r = 1, i.e., \(\lambda _{r} = \lambda /2\) and \(\sum = {\sum }^{\prime}\), the summation being taken over all j. The general case is considered in a similar way.
We have
The first three summands in (18) are easily estimated as follows:
The next two summands are estimated by a common method; we consider only the last one. From the formula \(\cos y = ({e}^{iy} + {e}^{-iy})/2\) it is easily shown that
The summation in the middle term is taken over all possible combinations of signs.
Consider the partition of the index set
where
and K 2 is the complement of K 1. Clearly, \(\vert K_{1}\vert \ll {n}^{2}\vert \ln \lambda \vert /{\lambda }^{2}\). Therefore,
Consider now
Putting \(p = j_{1} - j_{2}\), we have
Since for any natural l
we get
Taking into account \(\lambda n \leq k \leq (1 - \lambda )n\) and \((1 - \lambda /2)n \leq j_{1} \leq n\) and using the inequality
we get
Therefore,
If \(j \in K_{2}\) and \(j_{1} - j_{2} > 10/\lambda \), then
which implies
Suppose now \(j \in K_{2}\) and \(j_{1} - j_{3} > 10\vert \ln \lambda \vert /\lambda \). Using (22) and \(\lambda \in (0,1/2)\), we get
Further, (22) also holds for j 3. Therefore,
Thus,
It follows from (23) and (24) that
Taking into account the structure of the index set {j}, we have
consequently,
Combining (18)–(21) and (25), we obtain
Applying this to (17), we get
By (16),
Recalling that \(T = \lambda /(KA_{k,[(1-\lambda )n]})\), we obtain
It follows from (22) that
Thus,
Recalling that \(\alpha = \gamma \lambda /4\), we obtain
Since K is determined by \(\epsilon \), and γ, δ are determined by the distributions of \(\{\eta_j\}\), Lemma 4 is proved. □
Now we are ready to complete the proof of Theorem E. It follows from (10) that
By Lemma 4 and Chebyshev’s inequality,
Further, it follows from (14) that there exists a constant c 2 > 0 depending only on \(\epsilon \) such that
Considering the random polynomials \({x}^{n}g(1/x)\) and \(g(-x)\), it is possible to obtain similar estimates for \(M_n(0,1)\) and \(M_{n}(-\infty ,0)\). Thus there exist positive constants \(c_{1}^{\prime},c_{2}^{\prime}\) such that
According to the Borel-Cantelli lemma, with probability one there are only finitely many n such that \(M_{n} > 2\lambda n + 2 + 2{n}^{3/4} + 2\epsilon n\). Since \(\lambda ,\epsilon \) are arbitrarily small,
Theorem E is proved.
References
Hammersley, J.M.: The zeros of a random polynomial. In: Neyman, J. (ed.) Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, pp. 89–111. University of California Press, Berkeley (1956)
Ibragimov, I.A., Maslova, N.B.: The mean number of real zeros of random polynomials I. Coefficients with zero mean. Teor. Veroyatnost. i Primenen. 16, 229–248 (1971)
Ibragimov, I.A., Maslova, N.B.: The mean number of real zeros of random polynomials II. Coefficients with a nonzero mean. Teor. Veroyatnost. i Primenen. 16, 495–503 (1971)
Ibragimov, I.A., Maslova, N.B.: On the average number of real roots of random polynomials. Dokl. Akad. Nauk SSSR 199, 1004–1008 (1971)
Ibragimov, I., Zeitouni, O.: On roots of random polynomials. Trans. Am. Math. Soc. 349, 2427–2441 (1997)
Kac, M.: On the average number of real roots of a random algebraic equation. Bull. Am. Math. Soc. 49, 314–320 (1943)
Kuipers, L., Niederreiter, H.: Uniform Distribution of Sequences. Wiley, New York (1974)
Kurosh, A.: Higher Algebra. Mir, Moscow (1988)
Logan, B.F., Shepp, L.A.: Real zeros of random polynomials. Proc. Lond. Math. Soc. 18, 29–35 (1968)
Logan, B.F., Shepp, L.A.: Real zeros of random polynomials II. Proc. Lond. Math. Soc. 18, 308–314 (1968)
Markushevich, A.I.: The Theory of Analytic Functions: A Brief Course. Mir, Moscow (1983)
Petrov, V.V.: Limit Theorems of Probability Theory: Sequences of Independent Random Variables. Clarendon, Oxford (1995)
Rogozin, B.A.: On the increase of dispersion of sums of independent random variables. Teor. Veroyatnost. i Primenen. 6, 106–108 (1961)
Shepp, L., Farahmand, K.: Expected number of real zeros of a random polynomial with independent identically distributed symmetric long-tailed coefficients. Teor. Veroyatnost. i Primenen. 55, 196–204 (2010)
Shepp, L., Vanderbei, R.J.: The complex zeros of random polynomials. Trans. Am. Math. Soc. 347, 4365–4383 (1995)
Shparo, D. I., Shur, M. G.: On distribution of zeros of random polynomials. Vestnik Moskov. Univ. Ser. I Mat. Mekh. 3, 40–43 (1962)
Zaporozhets, D.N.: An example of a random polynomial with unusual behavior of roots. Teor. Veroyatnost. i Primenen. 50, 549–555 (2005)
Zaporozhets, D.N., Nazarov, A.I.: What is the least expected number of real roots of a random polynomial? Teor. Veroyatnost. i Primenen. 53, 40–58 (2008)
Acknowledgements
A part of this work was done at the University of Bielefeld. The authors thank F. Götze for the opportunity to participate in the work of CRC 701 “Spectral Structures and Topological Methods in Mathematics”. They are also grateful to A. Cole for her hospitality.
This work was partially supported by RFBR (08-01-00692, 10-01-00242), RFBR-DFG (09-0191331), NSh-4472.2010.1, and CRC 701 “Spectral Structures and Topological Methods in Mathematics”.
© 2013 Springer-Verlag Berlin Heidelberg
Ibragimov, I., Zaporozhets, D. (2013). On Distribution of Zeros of Random Polynomials in Complex Plane. In: Shiryaev, A., Varadhan, S., Presman, E. (eds) Prokhorov and Contemporary Probability Theory. Springer Proceedings in Mathematics & Statistics, vol 33. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-33549-5_18