1 Introduction

Consider a fixed Hamiltonian H (a complex self-adjoint operator) acting on a complex Hilbert space \(\mathcal {H}\) of dimension D, where \(D\ge 3\). Then, \(\mathcal {H}\) can be written as

$$\begin{aligned} \mathcal {H}\,=\, \mathcal {V}_1\, \oplus ...\oplus \mathcal {V}_K, \end{aligned}$$

where each \(\mathcal {V}_a\), \(a=1,2,...,K\), is the subspace of eigenvectors associated with the eigenvalue \(\lambda _a\), and \(\lambda _1< \lambda _2<...<\lambda _K.\)

We fix an initial condition \(\psi _0\) for the Schrödinger evolution. We consider the time evolution \(\psi _t = e^{-i\,t\,H} (\psi _0)\), \(t \ge 0\), and we are interested in properties that hold for most of the large times (not for all large times).

Now, we consider another decomposition \(\mathcal {D}\) of \(\mathcal {H}\) (which has nothing to do with the previous one)

$$\begin{aligned} \mathcal {H}\,=\, \mathcal {H}_1\, \oplus ...\oplus \mathcal {H}_N,\,\,\,N\ge 2. \end{aligned}$$

We can consider a natural probability on the set \(\Delta \) of possible decompositions \(\mathcal {D}\), and we are interested here in properties valid for most of the decompositions \(\mathcal {D}\). For small \(\delta >0\), this leads to the concept of a \((1-\delta )\)-generic decomposition \(\mathcal {D}\) (in the probabilistic sense).

For a given fixed subspace \(\mathcal {H}_\nu \) of \(\mathcal {H}\), \(\nu =1,...,N\), the observable \(P_{\mathcal {H}_\nu }\) (the orthogonal projection on \(\mathcal {H}_\nu \)) is such that its mean value in the state \(\psi _t\), \(t \ge 0\), is given by \(E_{\psi _t} (P_{\mathcal {H}_\nu }) =<P_{\mathcal {H}_\nu }(\psi _t), \psi _t>= |P_{\mathcal {H}_\nu }(\psi _t)\,|^2.\)

In the first part of the paper, following the basic guidelines of the original work by von Neumann, we present lower bound conditions (in terms of \(\delta \), etc.) on the dimensions \(d_\nu \), \(\nu =1,2,..,N\), of the different subspaces \(\mathcal {H}_\nu \) of a \((1-\delta )\)-generic orthogonal decomposition \(\mathcal {D}\) of the form \(\mathcal {H}\,=\, \mathcal {H}_1\, \oplus ...\oplus \mathcal {H}_N\), in such a way that the time evolution \(\psi _t\), \(t \ge 0\), of a given \(\psi _0\) has, for most of the large times t, the property that the expected value \(E_{\psi _t} (P_{\mathcal {H}_\nu }) \) is almost \(\frac{d_\nu }{D}\). In this way, there is an approximately uniform spreading of \(\psi _t\) among the different subspaces \(\mathcal {H}_\nu \) of a generic decomposition \(\mathcal {D}\). In this part, the main result is Theorem 15. We point out that these estimates are for a fixed initial condition \(\psi _0\).

Von Neumann's Quantum Ergodic Theorem provides uniform estimates valid for all \(\psi _0\). This result is presented in Theorem 19. This will be done in the second part of the paper, which begins in Sect. 4. To obtain this theorem, it will be necessary to assume a hypothesis on the eigenvalues of the Hamiltonian H (see hypothesis \(\mathfrak {N\,\,R}\) just after Lemma 16).

Suppose, for instance, that \(A: \mathcal {H} \rightarrow \mathcal {H}\) is an observable and this self-adjoint operator has spectral decomposition

$$\begin{aligned} \mathcal {H}\,=\, \mathcal {H}_1\, \oplus ...\oplus \mathcal {H}_N, \end{aligned}$$

where \( \mathcal {H}_p\), \(p=1,...,N\), is the subspace of eigenvectors associated with the eigenvalue \(\beta _p\) and \(\beta _1< \beta _2<...<\beta _N.\) The probability that a measurement of A on the state \(\psi _t\) yields \(\beta _p\) is given by \(<P_{\mathcal {H}_p }(\psi _t), \psi _t>\). This shows the relevance of the result. The point of view here is not to look for generic observables but for generic decompositions.

We stress a point raised in [3]. What is proved is a property of the kind: for most \(\mathcal {D}\), something is true for all \(\psi _0\); and not a property of the kind: for all \(\psi _0\), something is true for most \(\mathcal {D}.\)

Of course, the main result can also be stated in terms of limits, when \(T\rightarrow \infty \), of the time means \( \frac{1}{T} \int _0 ^T E_{\psi _t} (P_{\mathcal {H}_\nu }) \mathrm{d}t\), an expression closer to the one appearing in the classical Ergodic Theorem.

We present here a simplified proof (with fewer hypotheses in some parts), in the case where dim \(\mathcal {H}\) is finite, of this important result, which was initially published in German by von Neumann in 1929 (see [6]). The paper [5] presents a translation from German to English of this work of von Neumann. The 1929 paper also considers the concept of Entropy in this setting. We will not consider this topic in our note.

Several papers with interesting discussions about this work have appeared recently (see, for instance, [1,2,3, 5] and other papers which mention these four).

Consider a general connected compact Riemannian manifold X and its volume form. When properly normalized, this procedure defines a natural probability \(w_X\) over X.

Given a (real) compact Lie group G, one can consider the associated bi-invariant Riemannian metric. If H is a closed subgroup of G, this metric can be carried over to the quotient space \(X= \frac{G}{H}\), and in this way, we get a probability on the manifold X. We will denote by \(\pi \) the projection.

When we consider expected values of a function f, these will be taken with respect to the above-mentioned probability.

Lemma 1

Given a continuous function \(f:X \rightarrow \mathbb {C}\) and \(\pi : G\rightarrow X\) the canonical projection, then

$$\begin{aligned} \mathrm{(a)} \,\,\text {vol}\, (S) = \frac{\text {vol}\, (\pi ^{-1} (S))}{\text {vol}\, (H) } \end{aligned}$$

for every Borel set \(S\subset X\), and

$$\begin{aligned} \mathrm{(b)}\,\, E_X(f)= E_G (f \circ \pi ). \end{aligned}$$

The first integral is taken with respect to the volume form \(w_X\) and the second with respect to the volume form \(w_G\).

Note that vol \((G)=\) vol \((X)\,\) vol (H).

The proof is left for the reader.

Suppose \(\mathcal {H}\) is a complex Hilbert space of finite dimension D with an inner product \(<,\,>\) and a norm \(|\,\,\,|\).

Suppose we fix a decomposition \( \mathcal {D}\), that is

$$\begin{aligned} \mathcal {D}\,:\, \mathcal {H}\,=\, \mathcal {H}_1\, \oplus ...\oplus \mathcal {H}_N \end{aligned}$$

\( N>1\), is an orthogonal direct sum, where dim \(\mathcal {H}_\nu =d_\nu >0\) for all \(\nu =1,2,...,N.\)

Denote by \(P_\nu \) the orthogonal projection of \( \mathcal {H}\,\) onto \(\mathcal {H}_\nu \).

Moreover, \(S=\{ \psi \in \mathcal {H}\,|\, | \psi |=1\} \) denotes the unit sphere. S has a Riemannian structure with the metric induced by the norm in \(\mathcal {H}.\) In the same way as before, there is an associated probability \(w_S\) on S.

Lemma 2

For any \(\nu =1,2...,N\),

$$\begin{aligned} \, E_S(\left| P_\nu \, (\,.\,)\right| ^2)= \int _S\, | P_\nu \, (\phi )\,|^2\, d\, w_S (\phi )\,=\frac{d_\nu }{D} . \end{aligned}$$

Proof

Suppose \(\nu \) is fixed, and take an orthonormal basis \(\psi _1,\psi _2,...,\psi _D\) of \(\mathcal {H}\), such that \(\psi _1,\psi _2,...,\psi _{d_\nu }\) is an orthonormal basis of \(\mathcal {H}_\nu .\)

Given \(\phi = \sum _{j=1}^D x_j\, \psi _j \in S,\) where \(\sum _{j=1}^D |x_j|^2=1\), then

$$\begin{aligned} \int _S\, | P_\nu \, (\phi )\,|^2\, d\, w_S (\phi )= \int _S \,\sum _{j=1}^{d_\nu } |x_j|^2 d\, w_S (x). \end{aligned}$$

Note that the integral \(\int _S \, |x_j|^2 d\, w_S (x)\) is independent of j and

$$\begin{aligned} \int _S \,\sum _{j=1}^{D} |x_j|^2 d\, w_S (x) = \,\text {vol}\,\,(S)=1. \end{aligned}$$

Therefore, for any j

$$\begin{aligned} \int _S \, |x_j|^2 d\, w_S (x)=\,\frac{1}{D}\, . \end{aligned}$$

Therefore, it follows that

$$\begin{aligned} \int _S \,\sum _{j=1}^{d_\nu } |x_j|^2 d\, w_S (x)=\,\frac{d_\nu }{D}\, . \end{aligned}$$

\(\square \)

Lemma 3

For any \(\nu =1,2...,N\),

$$\begin{aligned} \, \text {Var}_S(| P_\nu \, (\,.\,)\,|^2)= \int _S\, \left( | P_\nu \, (\phi )|^2\,- \frac{d_\nu }{D} \right) ^2\, d\, w_S (\phi )\,=\frac{d_\nu \, (D- d_\nu )}{D^2 \, ( D+1 )} . \end{aligned}$$

Proof

To simplify the notation we take \(\nu =1\). Then, we denote \(d=d_1\) and \(P=P_1\).

Take an orthonormal basis \(\psi _1,\psi _2,...,\psi _D\) of \(\mathcal {H}\), such that \(\psi _1,\psi _2,...,\psi _{d}\) is an orthonormal basis of \(\mathcal {H}_1.\)

By the last Lemma, we have

$$\begin{aligned} \int _S\,\left( | P\, (\phi )|^2\,- \frac{d}{D}\right) ^2\, d\, w_S (\phi )= & {} \int _S\, | P\, (\phi )|^4\, d\, w_S (\phi )\,- 2\,\frac{d}{D} \, \int _S\, | P\, (\phi )|^2\, d\, w_S (\phi )+ \left( \frac{d}{D}\right) ^2\\= & {} \int _S\, | P\, (\phi )|^4\, d\, w_S (\phi )\,- \left( \frac{d}{D}\right) ^2. \end{aligned}$$

If \(\phi = \sum _{j=1}^D x_j\, \psi _j \in S,\) then \(P(\phi ) =\sum _{j=1}^d x_j\, \psi _j .\)

Therefore

$$\begin{aligned} \int _S\, | P\, (\phi )|^4\, d\, w_S (\phi )\,=\,\frac{1}{\text {vol (S)} } \int _S \, (\sum _{j=1}^d |x_j|^2)^2\, \mathrm{d}S(x)= \, \frac{d^2 + d}{D\, (D+1)}. \end{aligned}$$

The last equality follows from a standard computation (see “Appendix 1”).

From this follows the claim. \(\square \)
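The two moments computed in Lemmas 2 and 3 are easy to check numerically. The following Monte Carlo sketch (in Python with NumPy; the dimensions and sample size are illustrative choices of ours, not part of the argument) samples points uniformly on the unit sphere of \(\mathbb {C}^D\) and compares the empirical mean and variance of \(| P_\nu (\phi )|^2\) with \(\frac{d_\nu }{D}\) and \(\frac{d_\nu (D-d_\nu )}{D^2(D+1)}\).

```python
import numpy as np

# Monte Carlo check of Lemmas 2 and 3 (illustrative sketch, not part of the proof).
rng = np.random.default_rng(0)
D, d = 8, 3                  # dim(H) and dim(H_nu), illustrative values
n_samples = 200_000

# Uniform points on the unit sphere of C^D: normalize complex Gaussian vectors.
z = rng.normal(size=(n_samples, D)) + 1j * rng.normal(size=(n_samples, D))
phi = z / np.linalg.norm(z, axis=1, keepdims=True)

# Take H_nu spanned by the first d basis vectors, so |P_nu(phi)|^2 is the
# squared norm of the first d coordinates of phi.
p2 = np.sum(np.abs(phi[:, :d]) ** 2, axis=1)

print("empirical mean     :", p2.mean(), "   d/D =", d / D)
print("empirical variance :", p2.var(),
      "   d(D-d)/(D^2(D+1)) =", d * (D - d) / (D**2 * (D + 1)))
```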

2 Changing the decomposition

\(\mathcal {H}\) is fixed for the rest of the paper.

Now, we change our point of view. We fix \(\phi \in \mathcal {H}\) and we consider different decompositions of \(\mathcal {H}\) in direct sum. More precisely, we fix \(D=\) dim \(\mathcal {H}\) and N and we consider fixed natural positive numbers \(d_\nu \), \(\nu =1,2,...,N\), such that \(d_1+d_2+...+d_N=D\), and then, all possible choices of orthogonal decompositions with this data.

We denote by \(\Delta (d_1,d_2,...,d_N, \mathcal {H} ) = \Delta \) the set of all possible \( \mathcal {D}\), that is, all possible orthogonal direct sum decompositions:

$$\begin{aligned} \mathcal {D}\,:\, \mathcal {H}\,=\, \mathcal {H}_1\, \oplus ...\oplus \mathcal {H}_N. \end{aligned}$$

For fixed \(\nu =1,2,...,N\), then \( P_\nu (\mathcal {D})\) denotes the projection on \(\mathcal {H}_\nu \) associated with the decomposition \(\mathcal {D}\).

Each choice of orthonormal basis \(\psi _1,\psi _2,...,\psi _D\) of \(\mathcal {H}\) defines a possible choice of orthogonal direct sum decomposition:

$$\begin{aligned} \mathcal {H}_1 \, \,\text { is generated by}\,\,\{\psi _1,...,\psi _{d_1}\,\},\,\,\mathcal {H}_2 \, \,\text { is generated by}\,\,\{\psi _{d_1+1},...,\psi _{d_1+ d_2}\,\}, \end{aligned}$$

and so on.

The set of all orthonormal bases is identified with the set U(D) of unitary operators, which is a compact Lie group and therefore carries a Haar probability measure.

In this way,

$$\begin{aligned} \Delta = \frac{U(D)}{U(d_1) \times U(d_2)\times ...\times U(d_N)}. \end{aligned}$$

In the same way as before, we get a probability \(w_\Delta \) over \(\Delta \). Therefore, the probability \(w_\Delta (B)\) of a Borel set \(B\subset \Delta \) of decompositions is well defined.
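In practice, a \(w_\Delta \)-distributed decomposition can be sampled by drawing a Haar-random element of U(D) and grouping its columns as above. The sketch below is only an illustration (the helper names and the dimensions are ours): it uses the standard QR-based recipe for Haar-random unitaries and returns the list of projections \(P_\nu (\mathcal {D})\).

```python
import numpy as np

def haar_unitary(D, rng):
    """Haar-random element of U(D): QR of a complex Gaussian matrix, with a phase fix."""
    z = (rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def random_decomposition(dims, rng):
    """Projections P_1,...,P_N of a w_Delta-distributed decomposition of C^D."""
    D = sum(dims)
    U = haar_unitary(D, rng)
    projections, start = [], 0
    for d in dims:
        block = U[:, start:start + d]          # orthonormal basis of H_nu
        projections.append(block @ block.conj().T)
        start += d
    return projections

rng = np.random.default_rng(1)
Ps = random_decomposition([2, 3, 3], rng)       # example with D = 8, N = 3
print(np.allclose(sum(Ps), np.eye(8)))          # the projections sum to the identity
```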

Lemma 4

Consider a continuous function \(f: \mathbb {R} \rightarrow \mathbb {R}\). Then, for fixed \(\nu =1,2...,N\), and fixed \(\tilde{\phi }\) and \(\tilde{\mathcal {D}}\)

$$\begin{aligned} \, \int _S\, f(| P_\nu (\tilde{\mathcal {D}}) \, \phi \,|)\,d\, w_S (\phi )\,= \int _\Delta \, f(| P_\nu (\mathcal {D}) \, \tilde{\phi }\,|)\,d\, w_\Delta (\mathcal {D}). \end{aligned}$$

This constant value is independent of \(\tilde{\phi }\) and \(\tilde{\mathcal {D}}\).

Proof

If \(U: \mathcal {H} \rightarrow \mathcal {H}, \) is unitary, then \(U\, \mathcal {D}\) denotes

$$\begin{aligned} U(\mathcal {H}_1)\, \oplus ...\oplus U(\mathcal {H}_N). \end{aligned}$$

Then, for fixed \(\phi \) and \(\mathcal {D}\), we have

$$\begin{aligned} P_\nu (U\, \mathcal {D})\, U \,(\phi )= \, U \,P_\nu ( \mathcal {D}) \phi . \end{aligned}$$

We prove the claim for \(P_1\). Suppose \(\psi _1,\psi _2,...,\psi _D\) is an orthonormal basis of \(\mathcal {H}\), such that \(\psi _1,\psi _2,...,\psi _{d_1}\) is an orthonormal basis of \(\mathcal {H}_1.\)

We can express \(\phi = \sum _{j=1}^D x_j \, \psi _j\), and moreover, \(U(\phi )= \sum _{j=1}^D x_j \, U( \psi _j)\).

\(U(\psi _1),U(\psi _2),...,U(\psi _D)\) is an orthonormal basis of \(\mathcal {H}\) associated with \(U\, \mathcal {D}\), and \(U(\psi _1),U(\psi _2),...,U(\psi _{d_1})\) is an orthonormal basis of \(U(\mathcal {H}_1)\).

Then,

$$\begin{aligned} P_1 (U\, \mathcal {D})\, U \,(\phi )= P_1 (U\, \mathcal {D})\, \left( \sum _{j=1}^D x_j \, U( \psi _j)\right) =\sum _{j=1}^{d_1} x_j \, U( \psi _j). \end{aligned}$$

On the other hand,

$$\begin{aligned} U \,P_1 ( \mathcal {D}) \phi =U \,P_1 ( \mathcal {D}) \left( \sum _{j=1}^D x_j \, \psi _j\right) = U \left( \sum _{j=1}^{d_1} x_j \, \psi _j\right) =\sum _{j=1}^{d_1} x_j \, U( \psi _j), \end{aligned}$$

and this shows the claim.

Therefore, we get

$$\begin{aligned} |\,P_\nu (U\, \mathcal {D})\, U \,(\phi )\,|\, = |\,U^{-1}\,P_\nu (U\, \mathcal {D})\, U \,(\phi )\,|\,= |\,U^{-1} U \,P_\nu ( \mathcal {D}) \phi | = |\,P_\nu ( \mathcal {D}) \phi |. \end{aligned}$$

Finally, for a fixed \(\mathcal {D}\) and a variable U

$$\begin{aligned} \int _S\, f(| P_\nu (\mathcal {D}) \, \phi \,|)\,d\, w_S (\phi )\,= \int _S\, f(|\,P_\nu (U\, \mathcal {D})\, U \,(\phi )\,|)\,d\, w_S (\phi )= \int _S\, f( |\,P_\nu (U\, \mathcal {D})\, \,(\phi )\,|)\,d\, w_S (\phi ), \end{aligned}$$

because \(w_S\) is invariant by the action of U.

Thus, the above integral in the variable \(\phi \) is invariant under the action of U on a given decomposition \(\mathcal {D}\).

Now, consider a fixed \(\phi _1\) and another general \(\phi _2 = U ( \phi _1)\), where U is unitary.

As \(w_\Delta \) is invariant by the action of U, the integral

$$\begin{aligned} \int _\Delta \, f(| P_\nu (\mathcal {D}) \, \phi _2\,|)\,d\, w_\Delta (\mathcal {D})= & {} \int _\Delta \, f(| P_\nu ( U\, \mathcal {D}) \, U ( \phi _1) \,|)\,d\, w_\Delta ( \mathcal {D}) = \int _\Delta \, f(| U \, \,P_\nu (\mathcal {D}) \, \phi _1\,|)\,d\, w_\Delta (\mathcal {D})\\ {}= & {} \int _\Delta \, f(| \, \,P_\nu (\mathcal {D}) \, \phi _1\,|)\,d\, w_\Delta (\mathcal {D}) \end{aligned}$$

is independent of \(\phi _2\), that is, it is the same for every unit vector.

Remember that \( w_S \times w_\Delta \) is a probability.

Consider now

$$\begin{aligned} \int \int f(| P_\nu (\mathcal {D}) \, \phi \,|)\,d\, w_S (\phi ) d\, w_\Delta (\mathcal {D})= & {} \int \, \left[ \,\, \int f(| P_\nu (\mathcal {D}) \, \phi \,|)\,d\, w_S (\phi )\,\,\right] \, \,d\, w_\Delta (\mathcal {D})\\= & {} \int \,\left[ \,\,\int f(| P_\nu (\mathcal {D}) \, \phi \,|)\,d\,w_\Delta (\mathcal {D})\,\,\right] \,\,d\, w_S (\phi ) , \end{aligned}$$

then by Fubini, we get the claim of the Lemma (since the unitary group acts transitively on S and on \(\Delta \)). \(\square \)

Corollary 5

Consider a fixed \(\phi \in \mathcal {H}\), such that \(|\phi |=1\).

Then, for \(\nu =1,2...,N\), we get that

$$\begin{aligned} \, E_\Delta (| P_\nu \,( \,.\,) (\phi )\,|^2)= \frac{d_\nu }{D} , \end{aligned}$$

and

$$\begin{aligned} \text {Var}_\Delta (| P_\nu \,( \,.\,) (\phi )\,|^2)= \frac{d_\nu \, (D- d_\nu )}{D^2 \, ( D+1 )} , \end{aligned}$$

where the dot indicates that the expectation and the variance are taken with respect to \(\mathcal {D}.\)

Proof

This is a consequence of Lemmas 2, 3, and 4. \(\square \)

Definition 6

Given \(\delta >0\), a Hilbert space \(\mathcal {H}\) and natural positive numbers \(d_j\), \(j=1,2,...,N\), such that \(d_1 + d_2 +...+ d_N=D=\) dim \(\mathcal {H}\), we say that a property is true for \(\mathcal {D}\in \Delta (d_1,..,d_N, \mathcal {H})\), in the \((1-\delta )\) sense, if the property fails only for decompositions \(\mathcal {D}\) in a set of \(w_\Delta \)-probability smaller than \(\delta \).

Corollary 7

Suppose \(\epsilon >0\) and \(\delta >0\) are given. Consider natural positive numbers \(d_\nu , \nu =1,2,...,N\), such that \(d_1 + d_2 +...+ d_N=D=\) dim \(\mathcal {H}\), and moreover, assume that for all \(\nu =1,2...,N\)

$$\begin{aligned} d_\nu > D - \frac{\epsilon ^2\, \delta D\, (D+1) }{N^2}. \end{aligned}$$

Consider a fixed \( \phi \) such that \(|\phi |=1\). Then, for decompositions, \(\mathcal {D}\in \Delta (d_1,..,d_N, \mathcal {H})\) in the \((1-\delta )\) sense, and \(\nu =1,2...,N\), we have

$$\begin{aligned} |\,\,| P_\nu \,( \,\mathcal {D}) (\phi )\,|^2\,-\,\frac{d_\nu }{D}\,\,\,| < \epsilon \sqrt{\frac{d_\nu }{D\,N} } . \end{aligned}$$
(1)

Proof

By Corollary 5 and the Markov inequality, we have

$$\begin{aligned} w_\Delta \left( \,\left[ \,| P_\nu \,( \,\mathcal {D}) (\phi )\,|^2\,-\,\frac{d_\nu }{D}\,\,\right] ^2\,\ge \epsilon ^2 \frac{d_\nu }{D\,N} \,\,\right) \le \frac{d_\nu \, (D- d_\nu )}{D^2 \, ( D+1 )} \, \frac{D\,N}{\epsilon ^2\,d_\nu }= \frac{N\, (D- d_\nu )}{\epsilon ^2\, D \, ( D+1 )}. \end{aligned}$$

Then, the probability that the N inequalities in (1) hold simultaneously is at least

$$\begin{aligned} 1- \sum _{\nu =1}^{N}\, \frac{N\, (D- d_\nu )}{\epsilon ^2\, D \, ( D+1 )}>1- \delta \end{aligned}$$

by hypothesis. \(\square \)

The corollary above means that, for a fixed \(\phi \), if none of the \(d_\nu \) is too small, then, for a large part of the decompositions \( \mathcal {D}\), we have that

$$\begin{aligned} \,\,| P_\nu \,( \,\mathcal {D}) (\phi )\,|^2\, \end{aligned}$$

is close to the mean value \(\,\frac{d_\nu }{D}\).
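As an illustration of Corollary 7 (with values of \(\epsilon \), D and the \(d_\nu \) chosen by us), one can estimate by sampling the \(w_\Delta \)-probability that some inequality (1) fails for a fixed \(\phi \); the bound used in the proof of Corollary 7 is quite loose, so the observed fraction is typically far below \(\delta \).

```python
import numpy as np

rng = np.random.default_rng(2)
dims = [40, 40, 40]                  # illustrative d_nu
D, N, eps = sum(dims), len(dims), 0.5

phi = rng.normal(size=D) + 1j * rng.normal(size=D)
phi /= np.linalg.norm(phi)           # a fixed unit vector

def haar_unitary(D, rng):
    z = (rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

failures, n_trials = 0, 300
for _ in range(n_trials):
    U = haar_unitary(D, rng)
    start, bad = 0, False
    for d in dims:
        B = U[:, start:start + d]                    # orthonormal basis of H_nu
        p2 = np.linalg.norm(B.conj().T @ phi) ** 2   # |P_nu(D)(phi)|^2
        if abs(p2 - d / D) >= eps * np.sqrt(d / (D * N)):
            bad = True
        start += d
    failures += bad
print("fraction of sampled decompositions violating (1):", failures / n_trials)
```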

Definition 8

Given a Hilbert space \(\mathcal {H}\) and a fixed decomposition \(\mathcal {D}\) (associated with natural positive numbers \(d_j\), \(j=1,2,...,N\), such that \(d_1 + d_2 +...+ d_N=D=\) dim \(\mathcal {H}\)), we define a semi-norm on the linear operators \(\rho :\mathcal {H} \rightarrow \mathcal {H} \) by

$$\begin{aligned} |\,\rho \,|_\infty = |\,\rho \,|_\infty ^\mathcal {D}= \sup _{1 \le \nu \le N} \, |\, \text {Tr} \, (\rho \,\, P_\nu (\mathcal {D} ))\,|. \end{aligned}$$

The above means that, if \(|\,\rho \,|_\infty \) is small, then all the expected values \(E_{P_\nu } (\rho )= \text {Tr} \, (\rho \,\, P_\nu (\mathcal {D} )) \), \(\nu =1,2,...,N\), are small.

\(|\,\phi >\,<\phi \,|\) will denote the orthogonal projection onto the unit vector \(\phi \) in the Hilbert space \(\mathcal {H}\).

Lemma 9

Consider a \(\phi \in \mathcal {H}=\mathcal {H}_1\, \oplus ...\oplus \mathcal {H}_N\), such that \( |\phi |=1\). Denote \(\rho _{mc} = \frac{1}{D} I_{\mathcal {H}}.\)

Then

$$\begin{aligned} |\, \, |\,\phi >\,<\phi \,|\, -\, \rho _{mc}\,|_\infty \,\,= \sup _{ 1 \le \nu \le N }\, |\,\,| P_\nu (\mathcal {D})\,(\phi ) \,|^2 - \frac{d_\nu }{D}\,\,|. \end{aligned}$$

Proof

Suppose \(\psi _1,\psi _2,...,\psi _D\) is an orthonormal basis of \(\mathcal {H}\), such that \(\psi _1,\psi _2,...,\psi _{d_1}\) is an orthonormal basis of \(\mathcal {H}_1.\)

If \(\phi = \sum _{j=1}^D x_j \psi _j\), then for \(i=1,2,...,d_1\)

$$\begin{aligned} |\,\phi>\,<\phi \,|\,| P_1 (\psi _i)>= |\,\phi>\,<\phi \,|\,|\psi _i>\,= \sum _{j=1}^D \,\overline{x_i}\,x_j\, \psi _j \end{aligned}$$

and

$$\begin{aligned} |\,\phi>\,<\phi \,|\,| P_1 (\psi _i)>=0 \end{aligned}$$

for \(i> d_1\).

Therefore

$$\begin{aligned} \text {Tr}\,\,[\, |\,\phi>\,<\phi \,|\, P_1\,] = \sum _{j=1}^{d_1} \,\,|x_j|^2= \, | P_1 (\phi )|^2. \end{aligned}$$

In an analogous way, we have that for any \(\nu \)

$$\begin{aligned} \text {Tr}\,\,[\, |\,\phi>\,<\phi \,|\, P_\nu \,]= \, | P_\nu (\phi )|^2. \end{aligned}$$

From this follows the claim. \(\square \)
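The trace identity established in this proof, \(\text {Tr}\,[\, |\,\phi>\,<\phi \,|\, P_\nu \,]= | P_\nu (\phi )|^2\), can also be confirmed numerically; the sketch below uses random illustrative data.

```python
import numpy as np

# Numerical check of the identity Tr(|phi><phi| P) = |P(phi)|^2 used in Lemma 9.
rng = np.random.default_rng(3)
D, d = 7, 3
phi = rng.normal(size=D) + 1j * rng.normal(size=D)
phi /= np.linalg.norm(phi)

P = np.zeros((D, D), dtype=complex)
P[:d, :d] = np.eye(d)                   # projection onto the first d basis vectors

rho = np.outer(phi, phi.conj())         # the rank-one operator |phi><phi|
print(np.trace(rho @ P).real)           # Tr(|phi><phi| P)
print(np.linalg.norm(P @ phi) ** 2)     # |P(phi)|^2 -- the two numbers coincide
```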

From the above, it follows:

Corollary 10

Under the hypothesis of Corollary 7, we get that for decompositions \(\mathcal {D}\in \Delta (d_1,..,d_N, \mathcal {H})\) in the \((1-\delta )\) sense

$$\begin{aligned} |\, \, |\,\phi >\,<\phi \,|\, -\, \rho _{mc}\,|_\infty \,\,\le \sup _{ 1 \le \nu \le N }\, \epsilon \, \sqrt{\frac{d_\nu }{N\, D}}. \end{aligned}$$

\(\square \)

3 Estimates in time

Definition 11

Given \(\delta >0\), we say that a property of the parameter \(t\in \mathbb {R}\) is true for \((1-\delta )\)-most of the large times, if

$$\begin{aligned} \liminf _{T \rightarrow \infty } \, \frac{1}{T} \mu (A_T)\,\,> \,\,1- \delta , \end{aligned}$$

where \(A_T\) is the set of \(t\in [0,T]\) where the property holds and \(\mu \) is the Lebesgue measure on \(\mathbb {R}.\)

Lemma 12

Suppose \(f: \mathbb {R} \rightarrow \mathbb {R}\) is continuous and non-negative. Consider a certain \(\gamma >0\).

Suppose \(\rho \) is such that

$$\begin{aligned} \limsup _{T \rightarrow \infty } \, \frac{1}{T} \int _0^T f(t)\,\, \mathrm{d}t\,<\, \rho . \end{aligned}$$

Then, \(f(t)< \gamma \) for \((1-\frac{\rho }{\gamma })\)-most of the large times.

Proof

$$\begin{aligned} \int _0^T f(t)\, \mathrm{d}t \ge \int _{\{ t\in [0,T] \,|\, f(t)\ge \gamma \}} f(t) \, \mathrm{d}t \ge \gamma \, \mu ( \{ t\in [0,T] \,|\, f(t)\ge \gamma \}). \end{aligned}$$

Therefore

$$\begin{aligned} \limsup _{T \rightarrow \infty } \, \frac{1}{T} \mu ( \, \{\, t \in [0,T] \,|\, f(t) \ge \gamma \,\}\,) \,<\, \frac{\rho }{\gamma },\end{aligned}$$

and finally

$$\begin{aligned} \liminf _{T \rightarrow \infty } \, \frac{1}{T} \mu ( \, \{\, t \in [0,T] \,|\, f(t)< \gamma \,\}\,) \,>\, 1-\frac{\rho }{\gamma }. \end{aligned}$$

\(\square \)

Suppose \(\mathcal {H}\) is a Hilbert space, \(d_j\), \(j=1,2,...,N\), are such that \(d_1 + d_2 +...+ d_N=D=\) dim \(\mathcal {H}\), and \(H : \mathcal {H} \rightarrow \mathcal {H}\) is a self-adjoint operator. Consider a fixed \(\phi _0 \in \mathcal {H}\), with \(|\phi _0|=1\), and let \(\psi _t = e^{-\,i\, t \, H}\, \phi _0\), \(t\ge 0\), be the corresponding solution of the Schrödinger equation.

Lemma 13

For fixed T and \(\nu =1,2,...,N\), consider the function

$$\begin{aligned} f_{\nu ,T} : \Delta (d_1,d_2,...,d_N, \mathcal {H} ) \times \, S\rightarrow \mathbb {R}, \end{aligned}$$

given by

$$\begin{aligned} f_{\nu ,T} (\mathcal {D}, \phi )\,\,=\,\, \frac{1}{T}\, \int _0 ^T \, \left( | P_\nu (\mathcal {D})\, \psi _t\,|^2 -\frac{d_\nu }{D}\right) ^2 \, \mathrm{d}t, \end{aligned}$$

where \(\psi _t = e^{-\,i\, t \, H}\, \phi \), \(t\ge 0\).

Then, \(f_{\nu ,T}\) converges uniformly on \((\mathcal {D},\phi ) \in \Delta (d_1,d_2,...,d_N, \mathcal {H} )\times S\) when \(T\rightarrow \infty \), for any \(\nu =1,2,...,N\).

Proof

Suppose \(\phi _1,\phi _2,...,\phi _D\) is a set of eigenvectors of H which is an orthonormal basis of \(\mathcal {H}\).

Assume that \(\phi = \sum _{j=1}^D x_j \phi _j\). Then

$$\begin{aligned} \psi _t = \sum _{j=1}^D \, x_j \, e^{ -i\,t\, E_j}\, \phi _j, \end{aligned}$$

where \(E_j\), \(j=1,2,..,D\) are the corresponding eigenvalues.

Then, for a given \(\nu \)

$$\begin{aligned} | P_\nu (\mathcal {D})\, \psi _t\,|^2=< \psi _t, P_\nu (\mathcal {D})\,(\psi _t)> = \sum _{\alpha ,\beta } x_\alpha \overline{x_{\beta }} e^{ -i\,t\, (E_\alpha - E_\beta )}< \phi _\alpha , P_\nu (\mathcal {D})\,(\phi _\beta )> . \end{aligned}$$

Therefore

$$\begin{aligned} \left( | P_\nu (\mathcal {D})\, \psi _t\,|^2 -\frac{d_\nu }{D}\right) ^2 \,=\, \sum _{w=1}^M\, L_{w,\nu } ( \mathcal {D},\phi ) \,e^{i\, u_w t}, \end{aligned}$$

where \(M \in \mathbb {N}\), \(u_1,..,u_M\) are real constants and \( |\,L_{w,\nu }(\mathcal {D},\phi ) \,|\le 2\).

Then

$$\begin{aligned} f_{\nu ,T} (\mathcal {D},\phi )=\sum _{w\,:\, u_w=0}\ L_{w,\nu }(\mathcal {D},\phi )+ \frac{1}{T} \sum _{w\,:\, u_w\ne 0}\ L_{w,\nu }(\mathcal {D},\phi )\left( \frac{e^{i\,u_w\, T}}{i\, u_w} - \frac{1}{i \,u_w}\right) . \end{aligned}$$

Finally, we get

$$\begin{aligned} |\, f_{\nu ,T} (\mathcal {D},\phi ) - \sum _{w\,:\, u_w=0}\ L_{w,\nu }(\mathcal {D},\phi )\,| \le \frac{1}{T}\, \frac{4\,M}{\inf _{u_w\ne 0} \, |u_w|} . \end{aligned}$$

As M is fixed, the claim follows from this. \(\square \)

Corollary 14

$$\begin{aligned} \int _\Delta \,\left( \lim _{T \rightarrow \infty } \frac{1}{T}\, \int _0 ^T \, \left( | P_\nu (\mathcal {D})\, \psi _t\,|^2 -\frac{d_\nu }{D}\right) ^2 \, \mathrm{d}t\right) \,\, d\mathrm{w}_\Delta (\mathcal D)\,= \frac{d_\nu \, (D- d_\nu )}{D^2 \, ( D+1 )} , \end{aligned}$$

for any \(\nu =1,2,..,N\).

Proof

By Lemma 13 and Corollary 5, we have that

$$\begin{aligned}&\int _\Delta \,\left[ \lim _{T \rightarrow \infty } \frac{1}{T}\, \int _0 ^T \, \left( | P_\nu (\mathcal {D})\, \psi _t\,|^2 -\frac{d_\nu }{D}\right) ^2 \, \mathrm{d}t\right] \,\, d\mathrm{w}_\Delta (\mathcal D)\\&\quad =\lim _{T \rightarrow \infty } \frac{1}{T}\,\int _\Delta \, d\mathrm{w}_\Delta (\mathcal D)\,\,\left( \int _0 ^T \, \left( | P_\nu (\mathcal {D})\, \psi _t\,|^2 -\frac{d_\nu }{D}\right) ^2 \, \mathrm{d}t\right) \\&\quad =\lim _{T \rightarrow \infty } \frac{1}{T}\,\int _0^T\, \mathrm{d}t\, \int _\Delta \,\, \left( | P_\nu (\mathcal {D})\, \psi _t\,|^2 -\frac{d_\nu }{D}\right) ^2 \,d w_\Delta (\mathcal D)\, = \frac{d_\nu \, (D- d_\nu )}{D^2 \, ( D+1 )}. \end{aligned}$$

\(\square \)

Theorem 15

Suppose \(\epsilon >0\), \(\delta >0\) and \(\delta \,'>0\) are given. Consider natural positive numbers \(d_\nu , \nu =1,2,...,N\), such that \(d_1 + d_2 +...+ d_N=D=\) dim \(\mathcal {H}\), and, moreover, assume that, for all \(\nu =1,2...,N\),

$$\begin{aligned} d_\nu > D - \frac{\epsilon ^2\, \delta \, \delta '\, D\, (D+1) }{N^3}. \end{aligned}$$

Suppose \(H: \mathcal {H} \rightarrow \mathcal {H}\) is self-adjoint, the unitary vector \(\psi _0\in \mathcal {H}\) is fixed, and \(\psi _t = e^{-\,i\,t\,H} (\psi _0),\) \(t \ge 0.\)

Then, for \((1-\delta )\)-most of the decompositions \(\mathcal {D} \in \Delta (d_1,d_2,...,d_N, \mathcal {H} )\), the inequalities

$$\begin{aligned} |\, E_{\psi _t} (P_{\mathcal {H}_\nu }) -\frac{d_\nu }{D}\,\,| = |\, \,| P_\nu (\mathcal {D})\, \psi _t\,|^2 -\frac{d_\nu }{D}\,\,|< \epsilon \, \sqrt{\frac{d_\nu }{N\, D}} \,\,\,\,\,\,\,\,\,\,\,\,( \nu =1,2,...,N)\,\, \end{aligned}$$

are true for \((1- \delta \,')\)-most of the large times.

The estimates depend on the initial condition \(\psi _0\).

Proof

We denote

$$\begin{aligned} f_{\nu } (\mathcal {D})\,=\,\lim _{T \rightarrow \infty } \,\, \frac{1}{T}\, \int _0 ^T \, \left( | P_\nu (\mathcal {D})\, \psi _t\,|^2 -\frac{d_\nu }{D}\right) ^2 \, \mathrm{d}t. \end{aligned}$$

From Corollary 14, for each \(\nu \)

$$\begin{aligned} w_\Delta \left( \left\{ \mathcal {D} \in \Delta \,:\, f_\nu (\mathcal {D} ) \ge \,\frac{\epsilon ^2\, \delta \, '\, d_\nu }{D\, N^2} \right\} \right) \le \frac{d_\nu \, (D- d_\nu )}{D^2 \, ( D+1 )}\,\frac{D\, N^2}{\epsilon ^2 \delta \,'\,d_\nu } =\frac{N^2\, (D- d_\nu )}{D \, ( D+1 )\, \epsilon ^2\, \delta \, '} . \end{aligned}$$

Therefore, there exists a set \(S\subset \Delta \), such that

$$\begin{aligned} w_\Delta (S)\ge 1- \sum _{\nu =1}^{N}\frac{N^2\, (D- d_\nu )}{D \, ( D+1 )\, \epsilon ^2\, \delta \, '}> 1 - \delta , \end{aligned}$$

and, at the same time \( f_\nu (\mathcal {D} )< \frac{\epsilon ^2\, \delta \, '\, d_\nu }{D\, N^2}, \) for all \(\mathcal {D}\in S\) and all \(\nu =1,2...,N.\)

Now, taking in Lemma 12 \(\rho =\frac{\epsilon ^2\, \delta \, '\, d_\nu }{D\, N^2},\) and \(\gamma =\frac{\epsilon ^2\, d_\nu }{D\, N},\) we get for all \(\mathcal {D}\in S\) and all \(\nu =1,2,...,N\)

$$\begin{aligned} |\, \,| P_\nu (\mathcal {D})\, \psi _t\,|^2 -\frac{d_\nu }{D}\,\,|< \epsilon \, \sqrt{\frac{d_\nu }{N\, D}} \,\,\,\,\,\,\,\,\,\,\,\,( \nu =1,2,...,N), \end{aligned}$$

for \((1- \frac{\delta \,' }{N})\)-most of the large times.

Therefore, the above inequalities for all \(\nu =1,2,..,N\) are true for \((1- \delta \,' )\) most of the large times. \(\square \)

Note that the mean value \(f_{\nu } (\mathcal {D})\,\) depends on the Hamiltonian H, but the bounds of the last theorem do not depend on H.
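The following sketch illustrates Theorem 15 numerically (all sizes, the random Hamiltonian, the random decomposition and the time sampling are our own illustrative choices): for one random Hamiltonian, one random decomposition and one fixed \(\psi _0\), it estimates the fraction of sampled large times at which all N inequalities hold.

```python
import numpy as np

rng = np.random.default_rng(4)
dims = [30, 30, 30]
D, N, eps = sum(dims), len(dims), 0.5

# Random non-degenerate Hamiltonian: eigenvector matrix V and distinct eigenvalues E.
V, _ = np.linalg.qr(rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D)))
E = np.sort(rng.uniform(0.0, 10.0, size=D))

# One random decomposition: columns of another random unitary, grouped by the d_nu.
W, _ = np.linalg.qr(rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D)))
blocks = np.split(np.arange(D), np.cumsum(dims)[:-1])

psi0 = rng.normal(size=D) + 1j * rng.normal(size=D)
psi0 /= np.linalg.norm(psi0)
c = V.conj().T @ psi0                    # coefficients of psi_0 in the eigenbasis of H

ts = rng.uniform(100.0, 10_000.0, size=2000)       # a sample of "large" times
good = 0
for t in ts:
    psi_t = V @ (np.exp(-1j * E * t) * c)          # psi_t = exp(-itH) psi_0
    good += all(abs(np.linalg.norm(W[:, idx].conj().T @ psi_t) ** 2 - d / D)
                < eps * np.sqrt(d / (N * D))
                for d, idx in zip(dims, blocks))
print("fraction of sampled times satisfying all N inequalities:", good / len(ts))
```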

4 Uniform estimates

In this section, we will refine the last result considering uniform estimates which are independent of the initial condition \(\psi _0\) (for the time evolution associated with the fixed Hamiltonian \(H: \mathcal {H} \rightarrow \mathcal {H}\)).

Suppose \(\epsilon >0\), \(\delta >0\), and \(\delta \,'>0\) are given. Consider natural positive numbers \(d_\nu , \nu =1,2,...,N\), such that \(d_1 + d_2 +...+ d_N=D=\) dim \(\mathcal {H}\)

We denote for each \(\psi _0 \in \mathcal {H}\), where \(|\psi _0|=1\), and \( \mathcal {D}\in \Delta =\Delta (d_1,...,d_N; \mathcal {H})\)

$$\begin{aligned} f_{\nu } (\psi _0,\mathcal {D})\,=\,\lim _{T \rightarrow \infty } \,\, \frac{1}{T}\, \int _0 ^T \, \left( | P_\nu (\mathcal {D})\, \psi _t\,|^2 -\frac{d_\nu }{D}\right) ^2 \, \mathrm{d}t, \end{aligned}$$

where \(\psi _t= e^{-i\,t\, H} (\psi _0)\) (see Lemma 13).

Lemma 16

Suppose \(\epsilon >0\) and \(\delta '>0\) are given. Assume that there exist non-negative continuous functions \(g_\nu : \Delta \rightarrow \mathbb {R}\), \(\nu =1,2,..,N\), and \(K>0\), such that

$$\begin{aligned} \mathrm{(a)} f_\nu ( \psi _0 , \mathcal {D})\le g_\nu (\mathcal {D}),\, \,\text {for all}\,\,\, \mathcal {D} \in \Delta \, \,\text { and for all}\, \,\,\psi _0 \in \mathcal {H} \text { with}\,\,|\psi _0|=1, \end{aligned}$$
(2)
$$\begin{aligned} \mathrm{(b)} \int _\Delta g_\nu (\mathcal {D})\, d\mathrm{w}_\Delta (\mathcal D)\, <K . \end{aligned}$$
(3)

Suppose \(\delta \) is such that

$$\begin{aligned} 1\,>\, \delta \ge \frac{K\,D\, N^3}{\epsilon ^2\, \delta '\, d_\nu } , \nu =1,2,..,N. \end{aligned}$$
(4)

Then, for \((1-\delta )\)-most of the \(\mathcal {D} \in \Delta \), we have

$$\begin{aligned} |\,\,| P_\nu (\mathcal {D})\, \psi _t\,|^2 -\frac{d_\nu }{D} \,| \le \epsilon \, \sqrt{\frac{d_\nu }{N\, D}},\,\,\nu =1,2,..,N , \end{aligned}$$
(5)

for \((1- \delta ')\)-most of the large times and for any \(\psi _0 \in \mathcal {H}\) with \(|\psi _0|=1\).

Proof

Note that

$$\begin{aligned} w_\Delta (\{\mathcal {D}\in \Delta \,:\, g_\nu (\mathcal {D} )\ge \delta ' \, \epsilon ^2\, \frac{d_\nu }{N^2 \, D}\}\,)< K\, \frac{N^2 \, D}{\delta '\, \epsilon ^2 d_\nu } \,\le \, \frac{\delta }{N}, \,\nu =1,2,..,N . \end{aligned}$$

Therefore, there exists a subset \(E \subset \Delta \), such that \(w_\Delta (E) \ge 1 - \delta \) and \(g_\nu (\mathcal {D} )< \delta ' \, \epsilon ^2\, \frac{d_\nu }{N^2 \, D}\), for all \(\mathcal {D} \in E\) and all \(\nu =1,2...,N.\)

The conclusion is: if \(\mathcal {D} \in E\), then \( f_\nu (\psi _0,\mathcal {D})\, < \delta ' \, \epsilon ^2\, \frac{d_\nu }{N^2 \, D}\), for all \(\nu =1,2...,N,\) and all \(\psi _0 \) with norm 1.

The proof of the claim now follows from the reasoning of Theorem 15 and Lemma 12. \(\square \)

Note that to have \(\delta \) in expression (4) small, it is necessary that all \(d_\nu \) are large.

We now assume several hypotheses on H. Consider an orthonormal basis of eigenvectors \(\phi _1,\phi _2,...,\phi _D\) of H. We denote by \(E_j\), \(j=1,2,..,D\), the corresponding eigenvalues.

We assume hypothesis \(\mathfrak {N\,\,R}\) which says

  1. a)

    H is not degenerate, that is, \(E_\alpha \ne E_\beta \), for \(\alpha \ne \beta \), and

  2. b)

    H has no resonances, that is, \(E_\alpha -E_\beta \ne E_{\alpha '} - E_{\beta '}\), unless \(\alpha = \alpha '\) and \(\beta =\beta '\), or, \(\alpha = \beta \) and \(\alpha '=\beta '\).
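For a concrete finite spectrum, hypothesis \(\mathfrak {N\,\,R}\) can be tested directly: (a) is the distinctness of the \(E_\alpha \), and, given (a), (b) is equivalent to the pairwise distinctness of the absolute differences \(|E_\alpha -E_\beta |\), \(\alpha <\beta \). The sketch below is an illustrative numerical test (with a tolerance, since exact equality is meaningless in floating point).

```python
import numpy as np
from itertools import combinations

def satisfies_NR(E, tol=1e-9):
    """Numerical test (up to tol) of hypothesis NR for eigenvalues E_1,...,E_D."""
    E = np.asarray(E, dtype=float)
    # (a) non-degenerate: all eigenvalues are distinct.
    if np.min(np.diff(np.sort(E))) <= tol:
        return False
    # (b) no resonances: given (a), this is equivalent to the absolute differences
    # |E_a - E_b|, a < b, being pairwise distinct.
    diffs = np.sort([abs(E[a] - E[b]) for a, b in combinations(range(len(E)), 2)])
    return bool(np.min(np.diff(diffs)) > tol)

print(satisfies_NR([0.0, 1.0, 2.0]))            # False: 1 - 0 = 2 - 1 is a resonance
print(satisfies_NR([0.0, 1.0, np.sqrt(5.0)]))   # True for this spectrum
```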

Lemma 17

$$\begin{aligned} f_{\nu } (\psi _0,\mathcal {D})\le \max _{1\le \alpha \ne \beta \le D} |< \phi _\alpha ,P_\nu ( \mathcal {D}) \phi _\beta>|^2 + \max _{1\le \alpha \le D} \left( < \phi _\alpha , P_\nu ( \mathcal {D}) \, \phi _\alpha \,>- \frac{d_\nu }{D}\right) ^2, \end{aligned}$$

for all \(\psi _0\in \mathcal {H}\), such that \(|\psi _0|=1\), and for all \(\mathcal {D} \in \Delta (d_1,...,d_N; \mathcal {H})\) and all \(\nu =1,2...,N.\)

Proof

Suppose \(\psi _0= \sum _{\alpha =1}^D \,c_\alpha \, \phi _\alpha \). Then

$$\begin{aligned} \psi _t= \sum _{\alpha =1}^D \,c_\alpha \, e^{ -i\, t\,E_\alpha }\phi _\alpha , t \ge 0, \end{aligned}$$

and

$$\begin{aligned} |\,P_\nu ( \mathcal {D}) \psi _t|^2=< \psi _t ,P_\nu ( \mathcal {D}) \psi _t> = \sum _{1\le \alpha , \beta \le D} c_\alpha \,\overline{c}_ \beta e^{ -i t\,(E_\alpha - E_\beta )}\,\,\, \,\,\,\,\,< \phi _\alpha ,P_\nu ( \mathcal {D}) \phi _\beta >. \end{aligned}$$

Therefore

$$\begin{aligned} \left( |\,P_\nu ( \mathcal {D}) \psi _t|^2\,-\frac{d_\nu }{D}\right) ^2= & {} \sum _{1\le \alpha , \beta , \gamma ,\delta \le D }\, c_\alpha \,\overline{c}_\beta \,c_\gamma \overline{c}_\delta \, e^{ -i t\,[\,(E_\alpha - E_\beta )\,- (E_\delta - E_\gamma )\,]}\,< \phi _\alpha ,P_\nu ( \mathcal {D}) \phi _\beta>\,< \phi _\gamma ,P_\nu ( \mathcal {D}) \phi _\delta>\\&-2\,\frac{d_\nu }{D}\,\sum _{1\le \alpha , \beta \le D} c_\alpha \,\overline{c}_ \beta e^{ -i t\,(E_\alpha - E_\beta )}\,\,< \phi _\alpha ,P_\nu ( \mathcal {D}) \phi _\beta >\,+\, \frac{d_\nu ^2}{D^2}. \end{aligned}$$

When the above expression is used in the computation of the time average defining \(f_{\nu } (\psi _0,\mathcal {D})\), only the terms in which the coefficient of t is zero survive. By hypothesis \(\mathfrak {N\,\,R}\), this happens just when \(\alpha = \delta \) and \(\beta =\gamma \), or \(\alpha =\beta \) and \(\gamma =\delta \).

Note that the case \(\alpha =\beta =\gamma =\delta \) is counted twice in this way, which explains the third (subtracted) term below.

Therefore

$$\begin{aligned} f_{\nu } (\psi _0,\mathcal {D})= & {} \sum _{1\le \alpha , \beta \le D}\, |c_\alpha |^2\,|c_\beta |^2 \,\,\,\,|\,< \phi _\alpha ,P_\nu ( \mathcal {D}) \phi _\beta>\,|^2\\&+ \sum _{1\le \alpha , \gamma \le D}\, |c_\alpha |^2\,|c_\gamma |^2 \,\,< \phi _\alpha ,P_\nu ( \mathcal {D}) \phi _\alpha>\,< \phi _\gamma ,P_\nu ( \mathcal {D}) \phi _\gamma> \\&-\sum _{1\le \alpha \le D}\, |c_\alpha |^4\,|\,< \phi _\alpha ,P_\nu ( \mathcal {D}) \phi _\alpha>\,|^2\,- 2 \, \frac{d_\nu }{D}\,\sum _{1\le \alpha \le D}\, |c_\alpha |^2\,< \phi _\alpha ,P_\nu ( \mathcal {D}) \phi _\alpha >\,\,+\frac{d_\nu ^2}{D^2}, \end{aligned}$$

because \(\,< \phi _\gamma ,P_\nu ( \mathcal {D}) \phi _\delta>\,= \overline{< \phi _\delta ,P_\nu ( \mathcal {D}) \phi _\gamma >} \).

Finally, putting together the first and third terms:

$$\begin{aligned} f_{\nu } (\psi _0,\mathcal {D}) =\sum _{1\le \alpha \ne \beta \le D}\, |c_\alpha |^2\,|c_\beta |^2 \,\,\,\,|\,< \phi _\alpha ,P_\nu ( \mathcal {D}) \phi _\beta>\,|^2+ \left( \sum _{1\le \alpha \le D}\, |c_\alpha |^2\,\,< \phi _\alpha ,P_\nu ( \mathcal {D}) \phi _\alpha >\,\,- \, \frac{d_\nu }{D}\right) ^2. \end{aligned}$$

On the other hand,

$$\begin{aligned} \sum _{1\le \alpha \ne \beta \le D}\, |c_\alpha |^2\,|c_\beta |^2 \,\,\,\,|\,< \phi _\alpha ,P_\nu ( \mathcal {D}) \phi _\beta>\,|^2\le & {} \max _{1\le \alpha \ne \beta \le D}\,|\,< \phi _\alpha ,P_\nu ( \mathcal {D}) \phi _\beta>\,|^2\, \sum _{1\le \alpha ,\beta \le D}\, |c_\alpha |^2\,|c_\beta |^2\\= & {} \max _{1\le \alpha \ne \beta \le D}\,|\,< \phi _\alpha ,P_\nu ( \mathcal {D}) \phi _\beta>\,|^2\, \left( \sum _{1\le \alpha \le D}\, \,|c_\alpha |^2\right) ^2\\= & {} \max _{1\le \alpha \ne \beta \le D}\,|\,< \phi _\alpha ,P_\nu ( \mathcal {D}) \phi _\beta >\,|^2, \end{aligned}$$

because \(|\psi _0|=1\).

For the same reason,

$$\begin{aligned} \,|\,\sum _{1\le \alpha \le D}\, |c_\alpha |^2\,\,< \phi _\alpha ,P_\nu ( \mathcal {D}) \phi _\alpha>\,\,- \, \frac{d_\nu }{D}\,|= & {} \left| \,\sum _{1\le \alpha \le D}\, |c_\alpha |^2\,\left(< \phi _\alpha ,P_\nu ( \mathcal {D}) \phi _\alpha>\,\,- \, \frac{d_\nu }{D}\right) \,\right| \\\le & {} \max _{1\le \alpha \le D}\,\left| \,\,< \phi _\alpha ,P_\nu ( \mathcal {D}) \phi _\alpha >\,\,- \, \frac{d_\nu }{D}\,\right| . \end{aligned}$$

\(\square \)

Now, we define, for each \(\nu =1,2,...,N\), the continuous function \(g_\nu : \Delta (d_1,...,d_N; \mathcal {H}) = \Delta \rightarrow \mathbb {R} \) given by

$$\begin{aligned} g_\nu (\mathcal {D}) = \max _{1\le \alpha \ne \beta \le D}\,|\,< \phi _\alpha ,P_\nu ( \mathcal {D}) \phi _\beta>\,|^2\,+\max _{1\le \alpha \le D}\,\left| \,\,< \phi _\alpha ,P_\nu ( \mathcal {D}) \phi _\alpha >\,\,- \, \frac{d_\nu }{D}\,\right| ^2. \end{aligned}$$
(6)

We point out that, for each \(\mathcal {D}\), the expression \(g_\nu (\mathcal {D})\) depends only on H, because, as the \(E_\alpha \) are all distinct, the eigenvector basis is unique up to reordering and multiplication of each vector by a scalar of modulus one.
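For a concrete decomposition, \(g_\nu (\mathcal {D})\) can be evaluated directly from the matrix \((e_{\alpha ,\beta })=(< \phi _\alpha ,P_\nu ( \mathcal {D}) \phi _\beta>)\) written in the eigenbasis of H. The sketch below (with illustrative sizes, working directly in the eigenbasis, so that \(\phi _\alpha \) is the \(\alpha \)-th canonical vector of \(\mathbb {C}^D\)) computes \(g_\nu \) for one random decomposition and prints \(10 \log D/D\), the quantity that will control the \(\Delta \)-average of \(g_\nu \) in Lemma 18 below.

```python
import numpy as np

# Evaluate g_nu(D) of (6) for one random decomposition, in the eigenbasis of H.
rng = np.random.default_rng(5)
dims = [40, 40, 40]
D = sum(dims)

U, _ = np.linalg.qr(rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D)))
start = 0
for d_nu in dims:
    B = U[:, start:start + d_nu]      # orthonormal basis of H_nu
    P = B @ B.conj().T                # matrix (e_{alpha,beta}) of P_nu in the eigenbasis
    off_diag = P - np.diag(np.diag(P))
    g_nu = np.max(np.abs(off_diag)) ** 2 + np.max(np.abs(np.diag(P) - d_nu / D)) ** 2
    print("g_nu =", round(float(g_nu), 6), "   10*log(D)/D =", round(float(10 * np.log(D) / D), 6))
    start += d_nu
```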

Now, we need a fundamental technical Lemma.

Lemma 18

There exists a constant \(C_1>0\), such that

$$\begin{aligned} \int _\Delta g_\nu (\mathcal {D}) w_\Delta (\mathcal {D}) < \frac{10 \log D}{D}, \,\,\nu =1,2,...,N, \end{aligned}$$

if \(C_1 \log D \,< \, d_\nu \,<\, \frac{D}{C_1}.\)

Note that, if D is large, there is a lot of room for the values \(d_\nu \) to satisfy the last inequality. We will prove this fundamental lemma in the next sections.

If we assume the Lemma is true, then:

Theorem 19

Given \( \epsilon >0\), \(\delta >0\) and \(\delta '>0\), take \(d_1,d_2,...,d_N\), \(N\ge 2\), such that, with \(D=d_1+...+d_N\), the following inequalities are true

$$\begin{aligned} \max \, \left( C_1 , \frac{10 N^3}{\epsilon ^2\, \delta \,\delta '}\right) \, \log D \,< d_\nu \,<\, \frac{D}{ C_1}, \,\,\nu =1,2,..,N, \end{aligned}$$

where \(C_1\) comes from Lemma 18.

Assume that \(\mathcal {H}\) is a Hilbert space of dimension D and \(H: \mathcal {H} \rightarrow \mathcal {H}\) is a self-adjoint Hamiltonian without degeneracies and resonances (hypothesis \(\mathfrak {N\,\,R}\)). Then, for \((1-\delta )\)-most of the decompositions \(\mathcal {D} \in \Delta (d_1,...,d_N;\mathcal {H})\), the system of inequalities

$$\begin{aligned} |\,\,\,|\,P_\nu ( \mathcal {D}) \psi _t\,|^2\,-\frac{d_\nu }{D}\,|\, < \, \epsilon \, \sqrt{\frac{d_\nu }{N\, D}},\,\, \nu =1,2,...,N \end{aligned}$$

are true for \((1-\delta ')\)-most of the large times and for any initial condition \(\psi _0\in \mathcal {H}\), \(|\psi _0|=1\).

Proof

By hypothesis and Lemma 18, we get

$$\begin{aligned} \int _\Delta g_\nu (\mathcal {D}) w_\Delta (\mathcal {D}) < \frac{10 \log D}{D}, \,\,\nu =1,2,...,N. \end{aligned}$$

The claim follows from Lemma 16 by taking \(K = \frac{10 \log D}{D}.\) \(\square \)

Main conclusion:

As we said before, for a given fixed subspace \(\mathcal {H}_\nu \) of \(\mathcal {H}\), the observable \(P_{\mathcal {H}_\nu }\) (the orthogonal projection on \(\mathcal {H}_\nu \)) is such that the mean value \(E_{\psi _t} (P_{\mathcal {H}_\nu })\) of the state \(\psi _t\) is \(<P_{\mathcal {H}_\nu }(\psi _t), \psi _t>= |P_{\mathcal {H}_\nu }(\psi _t)\,|^2.\)

For a fixed Hamiltonian H acting on a Hilbert space \(\mathcal {H}\) of dimension D, the main theorem gives lower bound conditions on the dimensions \(d_\nu \), \(\nu =1,2,..,N\), of the different subspaces \(\mathcal {H}_\nu \) of a \((1-\delta )\)-generic orthogonal decomposition \(\mathcal {D}\) of the form \(\mathcal {H}\,=\, \mathcal {H}_1\, \oplus ...\oplus \mathcal {H}_N\), in such a way that the time evolution \(\psi _t\), obtained from any fixed initial condition \(\psi _0\), has, for most of the large times t, the property that the projected component \(P_\nu (\mathcal {D})\, (\psi _t)\,=\,P_{\mathcal {H}_\nu }(\psi _t)\) is almost uniformly distributed (in terms of expected value) with respect to the relative dimension \(\frac{d_\nu }{D}\) of \(\mathcal {H}_\nu .\) In this way, there is an approximately uniform spreading of \(\psi _t\) among the different subspaces \(\mathcal {H}_\nu \) of the decomposition \(\mathcal {D}\).

5 Proof of Lemma 18

Lemmas 22 and 23 will allow us to reduce the integration problem on the unitary group to a problem on the real line.

We will first need an auxiliary lemma. We denote by \(S^k\) the unit sphere in \(\mathbb {R}^{k+1}\) and by \(S^k_r\) the sphere of radius \(r>0\) in \(\mathbb {R}^{k+1}.\) We consider the usual metric on them.

The next lemma is a classical result on Integral Geometry (see [4]). We will provide a simple proof in “Appendix 2”.

Lemma 20

Suppose X is a Riemannian compact manifold, \(f:X \rightarrow \mathbb {R}\) a \(C^\infty \)-function and \(g: \mathbb {R} \rightarrow \mathbb {R}\) a continuous function. We define

$$\begin{aligned} G(v)=\, \int _{f\le v} (g \circ f)\, \lambda , \end{aligned}$$

where \(\lambda \) is the volume form on X. Suppose that \(a\in \mathbb {R}\) is a regular value of f. Then, G is differentiable at \(v=a\) and

$$\begin{aligned} \frac{\mathrm{d} G}{\mathrm{d}v} (a)\,=\,g(a)\, \int _{X_a} \frac{ \lambda _a}{|\, \,\text { grad}\, f\,|}, \end{aligned}$$

where \(X_a\) is the level manifold \(f=a\) and \(\lambda _a\) is the induced volume form in \(X_a\).

Corollary 21

Given positive integers d, D, where \(1<d<D-1\), denote by S the unit sphere in \(\mathbb {R}^{2\, D}\) with the usual metric. Define

$$\begin{aligned} f(x)= x_1^2 +...+x^2_{2\,d }, \,\text {where}\,x \in S \,\,\text {and}\,\,\,g:\mathbb {R} \rightarrow \mathbb {R}\,\,\,\text {is a continuous function}. \end{aligned}$$

Suppose

$$\begin{aligned} G(v)=\, \int _{f\le v} (g \circ f)\, d \lambda , \end{aligned}$$

then G is of class \(C^1\) and

$$\begin{aligned} \frac{\mathrm{d} G}{\mathrm{d}v} (v)\,=\,\frac{2\, \pi ^D}{(d-1)\,!\,\,(D -d-1)\,!}\, g(v)\, v^{d-1} \, (1-v)^{D-d-1},\,\,\,\text {if}\,\,0\le v\le 1, \end{aligned}$$

and \( \frac{\mathrm{d} G}{\mathrm{d}v} (v)\,=0\), if \(v<0\) or \(v>1\).

Proof

For \(x_1^2 +...+x^2_{2\,d }=v\), we have

$$\begin{aligned} \text {grad}\,f(x)\,=\, 2\,( \,(1-v) x_1,...,(1-v) x_{2d}, -v\, x_{2d+1},...,-v\, x_{2\,D}). \end{aligned}$$

Then, \(|\text {grad}\,f(x)|=2\, \sqrt{v\, (1-v)}\), which is constant over \(S_v=\{f=v\}\). Note that

$$\begin{aligned} S_v = S_{\sqrt{v}}^{2 d-1}\,\times S_{\sqrt{1-v}}^{2\,(D -d)\,-1},\,\,\,0<v<1. \end{aligned}$$

From the last Lemma and from the above expression, it follows that (remember that vol \((S_r^{2n-1})\,= \,\frac{\,2 \, \pi ^n}{(n-1)\, !} \,r^{2n-1}\))

$$\begin{aligned}&\frac{\mathrm{d} G}{\mathrm{d}v} (v)\,=\,g(v)\,\frac{1}{\,2\, \sqrt{v\,(1-v)}\,}\, \,\frac{2\,\pi ^d\,(\sqrt{v})^{2\,d-1}}{\,(d-1)\,!}\,\, \,\frac{2\, \pi ^{D-d} (\sqrt{(1-v)})^{2\,(D-d)\,-1}}{\,(D-d-1)\,!\, }\\&\quad =\frac{2\,\pi ^D\,v^{d-1}\,(1-v)^{D-d-1}\,}{\,(d-1)\,!\,\,\,\,\, (D-d-1)\,!},\,\,\,\,\,\,\,\,0<v<1. \end{aligned}$$

In the case \(v<0\) or \(v>1\), we have that G is constant. Finally, as \(S_0\) and \(S_1\) are submanifolds of S, we have that G is continuous for \(v=0\) and \(v=1\). \(\square \)
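In probabilistic language, Corollary 21 with \(g=1\) says that \(\sigma (x)= x_1^2 +...+x^2_{2\,d }\), for x uniform on the unit sphere of \(\mathbb {R}^{2\, D}\), has the Beta\((d, D-d)\) distribution. The following sketch (illustrative; it uses SciPy for the Beta quantiles) compares empirical quantiles of \(\sigma \) with those of Beta\((d, D-d)\).

```python
import numpy as np
from scipy import stats

# Corollary 21 with g = 1: sigma = x_1^2 + ... + x_{2d}^2, for x uniform on the unit
# sphere of R^{2D}, is Beta(d, D-d) distributed.  Illustrative Monte Carlo comparison.
rng = np.random.default_rng(6)
D, d = 12, 4
x = rng.normal(size=(100_000, 2 * D))
x /= np.linalg.norm(x, axis=1, keepdims=True)     # uniform points on the unit sphere
sigma = np.sum(x[:, :2 * d] ** 2, axis=1)

for q in (0.1, 0.5, 0.9):
    print(q, np.quantile(sigma, q), stats.beta(d, D - d).ppf(q))
```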

From now on, we fix \(\nu \), where \(1\le \nu \le N\), and we define

$$\begin{aligned} e_{\alpha ,\beta }( \mathcal {D})= <\, \phi _\alpha , P_\nu (\mathcal {D})\, \phi _\beta \,>, \,\,\,\mathcal {D} \in \Delta , \,\,1\le \alpha ,\,\beta \,\le D,\,\,\,e_{\alpha ,\beta }:\Delta \rightarrow \mathbb {C}, \end{aligned}$$

where \(\phi _1,...,\phi _D\) is the orthonormal basis for \(\mathcal {H}\) which was fixed in Sect. 4.

Lemma 22

Suppose \(1< d_\nu <D-1\). Let \(a \ge 0\) be such that \(\sqrt{a} < \frac{d_\nu }{D}\) and \(\sqrt{a}+ \frac{d_\nu }{D}<1.\) Then, the probability that \((e_{\alpha ,\alpha }\,-\frac{d_\nu }{D} )^2\, \ge a\) is

$$\begin{aligned} \frac{(D-1)\,!\,}{\,(d_\nu -1)\,!\, (D-d_\nu -1)\,!}\,\int _{[0,\,\frac{d_\nu }{D}-\sqrt{a}]\cup [\frac{d_\nu }{D}+\sqrt{a},\,1]} u^{d _\nu -1}\,(1-u)^{D-d_\nu -1}\,\mathrm{d}u.\end{aligned}$$

Lemma 23

Suppose \(1< d_\nu <D-1\). Let \(\alpha \ne \beta \) and \(0\le a\le 1/4\). Then, the probability that \(|\,e_{\alpha ,\beta }\,|^2\, \ge a\) is

$$\begin{aligned} \frac{(D-1)\,!\,}{\,(d_\nu -1)\,!\, (D-d_\nu -1)\,!}\,\int _{1/2\,-\, \sqrt{1/4-a}}^{1/2\,+\, \sqrt{1/4-a}}\,\frac{(w\,(1-w)-a)^{D-2}}{w^{D-d_\nu -1}\,(1-w)^{d_\nu -1}} d\mathrm{w}. \end{aligned}$$

Proof of Lemma 22

We just have to consider the case \(\nu =1\). We write \(d=d_1\) and denote by P the orthogonal projection of \(\mathcal {H}\) over \(\mathbb {C} \phi _1+...+ \mathbb {C} \phi _d.\)

We denote by \(p:\mathbb {U} \rightarrow \Delta \) the projection defined in the beginning of Sect. 2, where \(\mathbb {U}\) denotes the group of unitary transformations of \(\mathcal {H}.\)

If \(U\in \mathbb {U}\), then

\( e_{\alpha ,\alpha }(p(U))=<\phi _\alpha ,\) orthogonal projection of \(\phi _\alpha \) in \(\mathbb {C} U(\phi _1)+...+ \mathbb {C}\,U( \phi _d)>\,=\,<U^{-1} (\phi _\alpha ), P (U^{-1} \phi _\alpha )>.\)

Denote \(q:\mathbb {U} \rightarrow S\),   where \(q(U)= U(\phi _\alpha )\), \(U \in \mathbb {U}\), and \(\sigma :S \rightarrow \mathbb {R}\),  where \(\sigma (\phi )= <\phi , P(\phi )>\),   \(\phi \in S\), and where S is the unit sphere of \(\mathcal {H}.\)

Then, we get the following commutative diagram:

$$\begin{aligned}&\,\,\,\,\,\text {inverse} \,\,\,\,\,\,\,\,\,&\\&\mathbb {U} \,\,\,\,\,\rightarrow \,\,\,\,\,\,\,\,\,\mathbb {U}&\\&p \downarrow \,\,\,\,\,\,\, \,\,\,\,\,\, \,\,\,\,\, \,\,\,\,\downarrow q&\\&\Delta \,\,\,\,\,\,\, \,\,\,\,\,\, \,\,\,\,\, \,\,\,\,\,\,\,\,\,S&\\&e_{\alpha ,\alpha } \searrow \,\,\,\,\,\,\,\, \,\,\swarrow \sigma&\\&\,\,\,\,\,\, \,\,\mathbb {R}&\end{aligned}$$

As the inverse preserves the metric, it follows from Lemma 1 a) that the probability of \( e_{\alpha ,\alpha } \le b\) is equal to the probability that \(\sigma \le b\). Note that the metric on S, as a quotient of \(\mathbb {U}\), is the same as the one induced by \(\mathcal {H}\), because \(\mathbb {U}\) acts transitively on S.

It is easier to carry out the computations via the right-hand side of the diagram.

We identify \(\mathcal {H}\) with \(\mathbb {C}^D= \mathbb {R}^{2\, D}\), via \(\phi _1,\phi _2,...,\phi _D\). Then, S is identified with the unit sphere in \(\mathbb {R}^{2\, D}\), also denoted by S, and

$$\begin{aligned} \sigma :S \rightarrow \mathbb {R},\,\, \,\,\sigma (x) = x_1^2+...+ x_{2 \,d}^2 , \,\,\, x \in S. \end{aligned}$$

Therefore, by Corollary 21 with \(g=1\), we get

$$\begin{aligned} \frac{d\, (\text {Vol}\, (\sigma \le v))}{d\,v} \,=\,\frac{2\, \pi ^D}{(d-1)\,!\,\,(D -d-1)\,!}\, v^{d-1} \, (1-v)^{D-d-1},\,\,\,\text {if}\,\,0\le v\le 1, \end{aligned}$$

and

$$\begin{aligned} \frac{d\, (\text {Vol}\, (\sigma \le v))}{d\,v} \,=0, \end{aligned}$$

if \(v<0\) or \(v>1\).

Now, we normalize dividing by vol \(S= \frac{\, 2\,\pi ^D}{(D-1)\,!}\) and we get

$$\begin{aligned} \frac{d\, (\text {prob}\, (\sigma \le v))}{d\,v} \,=\,\frac{(D-1)\,!}{(d-1)\,!\,\,(D -d-1)\,!}\, v^{d-1} \, (1-v)^{D-d-1},\,\,\,\text {if}\,\,0\le v\le 1. \end{aligned}$$

As \((e_{\alpha ,\alpha }\,-\frac{d}{D} )^2\, \ge a\) is equivalent to

$$\begin{aligned} e_{\alpha ,\alpha }\,\ge \frac{d}{D} \, + \sqrt{a},\,\,\,\text {or}\,\,\, e_{\alpha ,\alpha }\,\le \frac{d}{D} \, - \sqrt{a} , \end{aligned}$$

we get that the probability of \((e_{\alpha ,\alpha }\,-\frac{d}{D} )^2\, \ge a\) is equal to the probability that \(\sigma \ge \frac{d}{D} \, + \sqrt{a}\) or \(\sigma \le \frac{d}{D} \, - \sqrt{a}\). It follows that this probability is equal to

$$\begin{aligned} \frac{(D-1)\,!}{(d-1)\,!\,\,(D -d-1)\,!}\,\,\left[ \,\int _{\frac{d}{D} +\sqrt{a} }^1v^{d-1} \, (1-v)^{D-d-1}\mathrm{d}v+ \int ^{\frac{d}{D} -\sqrt{a} }_0 v^{d-1} \, (1-v)^{D-d-1}\mathrm{d}v\right] . \end{aligned}$$

Observe that each level set \(\{\sigma = \text {constant}\}\) is a proper analytic subset of S, and therefore the associated probability is zero. The case \(a=0\) is trivial. \(\square \)

Proof of Lemma 23

We just have to consider the case \(\nu =1\). Take \(d=d_1\) and as before, we denote by P the orthogonal projection of \(\mathcal {H}\) over \(\mathbb {C} \phi _1+...+ \mathbb {C} \phi _d.\) Once more we denote by \(p:\mathbb {U} \rightarrow \Delta \) the projection defined in the beginning of Sect. 2.

If \(U\in \mathbb {U}\), then

\( e_{\alpha ,\beta }(p(U))=<\phi _\alpha ,\) orthogonal projection of \(\phi _\beta \) in \(\mathbb {C} U(\phi _1)+...+ \mathbb {C}\,U( \phi _d)>\,=\,<U^{-1} (\phi _\alpha ), P (U^{-1} \phi _\beta )>.\)

Denote \(q_{\alpha ,\beta } :\mathbb {U} \rightarrow S\times S\),   where \(q_{\alpha ,\beta }(U)= (U(\phi _\alpha ), U(\phi _\beta ))\), \(U \in \mathbb {U}\), and S is the unit sphere of \(\mathcal {H}.\)

Denote by \(M= q_{\alpha ,\beta }(\mathbb {U})= \{(\phi ,\psi )\in S \times S\,| \,\phi \) is orthogonal to \( \psi \,\}\).

Let \(H_{\alpha ,\beta } \subset \mathbb {U}\) be the closed subgroup of those U such that \(U(\phi _\alpha )= \phi _\alpha \) and \(U(\phi _\beta )= \phi _\beta \).

Then, \(M = \mathbb {U}/H_{\alpha ,\beta }\) and \(q_{\alpha ,\beta }: \mathbb {U} \rightarrow M\) is the canonical projection.

The quotient metric on M is the one induced by \(S\times S\), because \(\mathbb {U}\) acts transitively on M.

Let \(f:M \rightarrow \mathbb {C}\) be given by \(f(\phi , \psi ) = <\,\phi , P( \psi )\,>.\) Then, we get the following commutative diagram:

$$\begin{aligned}&\,\,\,\,\,\text {inverse} \,\,\,\,\,\,\,\,\,&\\&\mathbb {U} \,\,\,\,\,\rightarrow \,\,\,\,\,\,\,\,\,\mathbb {U}&\\&p \downarrow \,\,\,\,\,\,\, \,\,\,\,\,\, \,\,\,\,\, \,\,\,\,\downarrow q_{\alpha , \beta }&\\&\Delta \,\,\,\,\,\,\, \,\,\,\,\,\, \,\,\,\,\, \,\,\,\,\,\,\,\,\,M&\\&e_{\alpha ,\beta } \searrow \,\,\,\,\,\,\,\, \,\,\swarrow f&\\&\,\,\,\,\,\, \,\,\mathbb {C} \end{aligned}$$

As the inverse preserves the metric of \(\mathbb {U}\), it follows from Lemma 1 a) that the probability of \( |e_{\alpha ,\beta }|^2 \le a\) is equal to the probability that \(|f|^2\le a\).

Now, consider \(\varphi :M \rightarrow S\), such that \(\varphi (\phi ,\psi )=\psi \). This defines a \(C^\infty \) locally trivial fiber bundle with fiber \(S^{2 \, D-3}\). Indeed, \(E_\psi =\varphi ^{-1} (\psi )\) is the unit sphere of the subspace \(\mathcal {H}_\psi \), the orthogonal complement of \(\psi \) in \(\mathcal {H}\).

Given \(u\in \mathbb {R}\) denote:

$$\begin{aligned} F_u(\psi )= E_\psi \,\cap \, \{|f|^2 \le u \},\,\,\,\psi \in S. \end{aligned}$$

Then

$$\begin{aligned} \text {Vol}\, (\{|f|^2 \le u \})\,=\, \int _S \text {vol}_{E_\psi } (F_u (\psi ))\,\,\, \mathrm{d} S \,(\psi ). \end{aligned}$$

For each \(\psi \), we get \(\psi '\in \mathcal {H}\) via

$$\begin{aligned} P(\psi )=\, c \psi + \psi ', \,\,\text {where}\,\,c \in \mathbb {C}\,\,\, \text {and}\,\,\,\psi ' \, \,\text {is orthogonal to}\,\,\psi . \end{aligned}$$

Note that \(\psi ' \in \mathcal {H}_\psi \). Then

$$\begin{aligned} f(\phi ,\psi )=<\phi ,P(\psi )>=<\phi ,\psi '>, \end{aligned}$$

and it follows that

$$\begin{aligned} F_u (\psi ) = \{\phi \in E_\psi \,:\, \,| \,<\phi ,\psi '>\,|^2\le u\},\,\,u\in \mathbb {R},\,\,\psi \in S. \end{aligned}$$

There exists an isomorphism of Hilbert spaces identifying \(\mathcal {H}_\psi \) with \(\mathbb {C}^{D-1}= \mathbb {R}^{2\, D -2}\) which transforms \(\psi '\) into \((|\psi '|,0,...,0)\). This isomorphism identifies \(E_\psi \) with the unit sphere E in \(\mathbb {R}^{2\, D-2}\) and \(F_u (\psi )\) with the set:

$$\begin{aligned} \{ x \in E \,:\, |\psi '|^2 (x_1^2 + x_2^2) \le u\}. \end{aligned}$$

Now, applying Corollary 21 with \(D-1\) instead of D, \(d=1\), \(g=1\), and \(v= \frac{u}{|\psi '|^2}\), we get

$$\begin{aligned} \frac{d\,\text {Vol}_{E_\psi } F_u(\psi )}{\mathrm{d}u} = \frac{2\, \pi ^{D-1}}{(D-3)\,!} (1- \frac{u}{|\psi \,'|^2})^{D-3} \frac{1}{|\psi \,'|^2} =\frac{2\, \pi ^{D-1}}{(D-3)\,!} \frac{(|\,\psi \,'|^2- u)^{D-3}}{|\psi \,'|^{2\,(D-2)}} , \end{aligned}$$

for all \(\psi \in S\) and \(0<u\le |\psi \,'|^2\), and

$$\begin{aligned} \frac{d\,\text {Vol}_{E_\psi } F_u(\psi )}{\mathrm{d}u} = 0 \end{aligned}$$

if \(|\psi \, '|^2 \le u\le 1\), for any \(\psi \in S\).

Then, we get that \( \frac{d\,\text {Vol}_{E_\psi } F_u(\psi )}{\mathrm{d}u}\) is a continuous function of \((u,\psi )\) for \(0<u\le 1\) and \(\psi \in S\). As S is compact, we can differentiate under the integral sign and we get

$$\begin{aligned} \frac{d\,\text {Vol} (|f|^2 \le u)}{\mathrm{d}u}= \int _S \, \frac{d\,\text {Vol}_{E_\psi } F_u(\psi )}{\mathrm{d}u} \, \mathrm{d} S(\psi ) \end{aligned}$$

for any \(0<u\le 1.\)

By the definition of \(\psi \,'\), it is easy to see that \(|\psi \,'|^2 = |P(\psi )|^2 \,(1- |P(\psi )|^2)\).

Now, we consider \(g_u: \mathbb {R} \rightarrow \mathbb {R} \), where

$$\begin{aligned} g_u(w) = \frac{(w\, (1-w) - u)^{D-3}}{(w\, (1-w))^{D-2}} \end{aligned}$$

if \(u \le w\, (1-w),\) and \(g_u(w)=0\) in the other case.

\(g_u(w)\) is a continuous function of u and w when \(0<u\le 1\), \(0\le w \le 1\).

From this follows that

$$\begin{aligned} \frac{d\,\text {Vol} (|f|^2 \le u)}{\mathrm{d}u}= \frac{2\, \pi ^{D-1}}{(D-3)\,! } \int _S \, (g_u \circ |P(\psi )|^2 ) \, \mathrm{d} S(\psi ) \end{aligned}$$

for any \(0<u\le 1.\)

Now, we normalize dividing by Vol \((M)= \frac{ 2\, \pi ^{D-1 }}{(D-2)\, !}\,\frac{ 2\, \pi ^{D }}{(D-1)\, !}\) and we get

$$\begin{aligned} \frac{d\,\text {Prob} (|f|^2 \le u)}{\mathrm{d}u}= \frac{(D-1)\,!\, (D-2)}{(2\,\pi ^D)\, } \int _S \, (g_u \circ |P(\psi )|^2) \, \mathrm{d} S(\psi ) \end{aligned}$$
(7)

for any \(0<u\le 1.\)

Denote

$$\begin{aligned} A(u,w) = \int _{|P(\psi )|^2 \le w} (g_u \circ |P(\psi )|^2) \, \mathrm{d} S(\psi ), \end{aligned}$$

for any \(0<u\le 1\), \(0\le w\le 1.\)

By Corollary 21, we get

$$\begin{aligned} \int _S\,( \,g_u \circ |P(\psi )|^2) \, \mathrm{d} S(\psi )\, = A(u,1) = A(u,1) - A(u,0) = \int _0^1 \frac{\partial A}{\partial w}(u,w)\, \mathrm{d}w, \end{aligned}$$

for any \(0<u\le 1.\)

Computing \(\frac{\partial A}{\partial w}\) by Corollary 21 and substituting in (7), we finally get

$$\begin{aligned} \frac{d\,\text {Prob} (|f|^2 \le u)}{\mathrm{d}u}= \frac{(D-1)\,!\, (D-2)}{(d-1)\,!\, (D-d-1)\, !\, } \int _{u \le w\,(1-w)} \, \frac{(w\, (1-w)- u)^{D-3} }{w^{D-d-1} \, (1-w)^{d-1}} \, d\mathrm{w} \end{aligned}$$

for any \(0<u\le 1.\)

If \(u>1/4\), \(w\,(1-w)<u\) for all w and the integral is zero.

If \(0<u\le 1/4\), \(u\le w (1-w)\) is equivalent to

$$\begin{aligned} 1/2 - \sqrt{1/4-u} \le w\le 1/2 + \sqrt{1/4-u}. \end{aligned}$$

Then

$$\begin{aligned} \frac{d\,\text {Prob} (|f|^2 \le u)}{\mathrm{d}u}= \frac{(D-1)\,!\, (D-2)}{(d-1)\,!\, (D-d-1)\, !\, } \int _{1/2 - \sqrt{1/4-u} }^{1/2 + \sqrt{1/4-u}} \, \frac{(w\, (1-w)- u)^{D-3} }{w^{D-d-1} \, (1-w)^{d-1}} \, d\mathrm{w}, \end{aligned}$$

if \(0<u\le 1/4\), and

$$\begin{aligned} \frac{d\,\text {Prob} (|f|^2 \le u)}{\mathrm{d}u}=0 \end{aligned}$$

if \(1/4\le u\le 1.\)

Finally, for \(0< a\le 1/4\)

$$\begin{aligned} \text {Prob} (|f|^2 \ge a)= \frac{(D-1)\,!\, (D-2)}{(d-1)\,!\, (D-d-1)\, !\, } \int _a^{1/4}\,\mathrm{d}u\,\int _{1/2 - \sqrt{1/4-u} }^{1/2 + \sqrt{1/4-u}} \, \frac{(w\, (1-w)- u)^{D-3} }{w^{D-d-1} \, (1-w)^{d-1}} \, d\mathrm{w}. \end{aligned}$$

Considering the double integral in the region \(a\le u\le w\, (1-w)\), we get

$$\begin{aligned} \text {Prob} (|f|^2 \ge a)= & {} \frac{(D-1)\,!\, (D-2)}{(d-1)\,!\, (D-d-1)\, !\, }\int _{1/2 - \sqrt{1/4-a} }^{1/2 + \sqrt{1/4-a}} \,\mathrm{d}w\,\int _a^{w\,(1-w)} \frac{(w\, (1-w)- u)^{D-3} }{w^{D-d-1} \, (1-w)^{d-1}} \, \mathrm{d} u\\= & {} \frac{(D-1)\,!\,}{(d-1)\,!\, (D-d-1)\, ! } \int _{1/2 - \sqrt{1/4-a} }^{1/2 + \sqrt{1/4-a}} \frac{(w\, (1-w)- a)^{D-2} }{w^{D-d-1} \, (1-w)^{d-1}} \, \mathrm{d}w. \end{aligned}$$

The case \(a=0\) is trivial. \(\square \)

Remark

Note that if \(g:\Delta \rightarrow \mathbb {R}\) is a continuous function such that \(0\le g( \mathcal {D})\le r\), for all \(\mathcal {D} \in \Delta ,\) then we get the estimate

$$\begin{aligned} \int _\Delta g( \mathcal {D})\, w_\Delta (\mathcal {D})\,= \int _{g \ge a} g( \mathcal {D})\, w_\Delta (\mathcal {D})\,+ \int _{g < a} g( \mathcal {D})\, w_\Delta (\mathcal {D})\,\le r\, \,\text {Prob}\,\,(g\ge a)+ \, a, \end{aligned}$$

for \(0\le a\le 1\).

Given positive integers d, D and \(a \in \mathbb {R}\), such that

$$\begin{aligned} 1<d<D-1,\,\,\,\,0\le a \le \frac{d^2}{D^2}\,\,\,\text {and}\,\,\,\frac{d}{D} + \sqrt{a} \le 1 \end{aligned}$$

we define

$$\begin{aligned} I(d,D,a) = \frac{(D-1)\,!}{(d-1)\,!\, (D-d-1)\, ! } \int _{[0,\,\frac{d}{D} - \sqrt{a}] \cup [\frac{d}{D}+ \sqrt{a} , \,1] }\, u^{d-1}\,(1-u)^{D-d-1}\, \mathrm{d}u.\end{aligned}$$

In the following, we will use the constant \(\theta =11/12.\)

Lemma 24

There exists a constant \(C>4\), such that if \(a \ge 0\), \(d\ge 1\), \(C\, \log D< d <\frac{D}{C}\), and \(\frac{1}{D}< \sqrt{a} < \frac{d}{8\, D} \), then

$$\begin{aligned} I(d,D,a) < \frac{D}{\sqrt{d}}\, e^{- \frac{\theta \,\, a\, \,D^2}{2\,d} }. \end{aligned}$$

Proof

Note that our hypothesis implies that \(1<d<D-1\), \(a < \frac{d^2}{D^2}\) and \(\frac{d}{D} + \sqrt{a} <1\).

  1. a)

    By Stirling formula, when \(D\rightarrow \infty \), \(d \rightarrow \infty \), \(D/d\rightarrow \infty \), we get that

    $$\begin{aligned} \frac{(D-1)\,!}{(d-1)\,!\, (D-d-1)\, ! }\sim \frac{1}{e} \, \sqrt{\frac{d}{2 \pi } } \left( \frac{d}{D}\right) ^{-d} \left( 1- \frac{d}{D}\right) ^{d-D}. \end{aligned}$$

    As \(\sqrt{\frac{1}{2 \pi }}\,<\,1\), there exists a constant A such that if \(D>A\), \(d>A\) and \(D/d>A\), we get

    $$\begin{aligned} \frac{(D-1)\,!}{(d-1)\,!\, (D-d-1)\, ! }\,< \, \frac{\sqrt{d}}{2 } \left( \frac{d}{D}\right) ^{-d} \left( 1- \frac{d}{D}\right) ^{d-D}. \end{aligned}$$

    If we take \(C>A+1\), it follows from the hypothesis of the Lemma that \(D>d\,C >d\,A\), \(d>C \log D>C>A\) and \(D-d>d\, C-d=d (C-1)> d\, A>A\).

  2. b)

    The derivative of \(u^{d-1}\, (1-u)^{D-d-1} \) with respect to u in (0, 1) vanishes only at the point \(u= \frac{d-1}{D-2}\), which is smaller than \(d/D\) because \(D>2\,d\). Moreover

    $$\begin{aligned} \frac{d}{D}- \sqrt{a}<\frac{d}{D}- \frac{1}{D}=\frac{d-1}{D}< \frac{d-1}{D-2}. \end{aligned}$$

    Then, \(\frac{d-1}{D-2}\in ( \frac{d}{D}- \sqrt{a}, \frac{d}{D} )\subset ( \frac{d}{D}- \sqrt{a}, \frac{d}{D} + \sqrt{a}).\) From this it follows that \(u^{d-1}\, (1-u)^{D-d-1} \) attains its maximum on the set \([0, \frac{d}{D}- \sqrt{a}]\,\cup \, [ \frac{d}{D}+ \sqrt{a},1] \) at the point \(\frac{d}{D}- \sqrt{a}\) or at the point \(\frac{d}{D}+ \sqrt{a}.\) Under our hypothesis, if \(C>A+1\), we get that for \(\epsilon =1\) or \(-1\):

    $$\begin{aligned} I(d,D,a)< & {} \frac{\sqrt{d}}{2 } \left( \frac{d}{D}\right) ^{-d} \left( 1- \frac{d}{D}\right) ^{d-D} \left( \frac{d}{D} + \epsilon \sqrt{a}\right) ^{d-1} (1-\frac{d}{D} - \epsilon \sqrt{a})^{D-d-1}\\= & {} \frac{\sqrt{d}}{2 }\, \frac{(1 + \epsilon \frac{D}{d} \sqrt{a})^{d} \,(1 - \epsilon \frac{D}{D-d}\sqrt{a})^{D-d} }{ (\frac{d}{D} + \epsilon \sqrt{a})\, \left( 1-\,\frac{d}{D} - \epsilon \sqrt{a}\right) }. \end{aligned}$$
  3. c)

    If \(\epsilon =1\) with \(C>4\), \(C>A+1\), we get

    $$\begin{aligned} (\frac{d}{D} + \epsilon \sqrt{a})\, \left( 1-\,\frac{d}{D} - \epsilon \sqrt{a}\right) = \frac{d}{D} + \sqrt{a}\, - \,\frac{d^2}{D^2} - 2\frac{d}{D} \sqrt{a} - a>\end{aligned}$$
    $$\begin{aligned} \frac{d}{D} - \, \frac{d^2}{D^2} - 2 \frac{d}{D} \sqrt{a}> \frac{d}{D} - \, \frac{d^2}{D^2} - 2 \frac{d^2}{8\, D^2} =\frac{d}{D} - \, \frac{5\,d^2}{4\,D^2}> \frac{d}{D} \left( 1 -\frac{5\,d}{4\,D}\right) >\frac{d}{2\,D}. \end{aligned}$$

    If \(\epsilon =-1\), with \(C>4\), \(C>A+1\), one can show in the same way that

    $$\begin{aligned} \left( \frac{d}{D} + \epsilon \sqrt{a}\right) \, \left( 1-\,\frac{d}{D} - \epsilon \sqrt{a}\right) > \frac{d}{2\, D}. \end{aligned}$$

    In this way, we finally get that for \(\epsilon =1\) or \(\epsilon =-1\)

    $$\begin{aligned} I(d,D,a)< & {} \frac{\sqrt{d}}{2 } \frac{2 \, D}{d} \left( 1+ \epsilon \,\frac{D}{d} \sqrt{a}\right) ^{d} (1- \epsilon \frac{D}{D-d} \sqrt{a})^{D-d}\\= & {} \frac{ D}{\sqrt{d}} \left( 1+ \epsilon \,\frac{D}{d} \sqrt{a}\right) ^{d} \left( 1- \epsilon \frac{D}{D-d} \sqrt{a}\right) ^{D-d} . \end{aligned}$$

Note that

$$\begin{aligned}&\frac{ D}{\sqrt{d}} (1+ \epsilon \,\frac{D}{d} \sqrt{a})^{d} \left( 1- \epsilon \frac{D}{D-d} \sqrt{a}\right) ^{D-d}\\&\quad = \frac{ D}{\sqrt{d}}\, \exp \,\left[ d\, \log ( 1+ \epsilon \,\frac{D}{d} \sqrt{a}) + (D-d) \log \left( 1- \epsilon \frac{D}{D-d} \sqrt{a} \right) \right] \\&\quad < \frac{ D}{\sqrt{d}}\,\exp \, \left[ d\, \left( \epsilon \,\frac{D}{d} \sqrt{a} - \frac{1}{2} \frac{D^2}{d^2} \, a +\frac{\epsilon }{3} \frac{D^3}{d^3} \, a^{3/2} \right) + (D-d) \left( - \epsilon \frac{D}{D-d} \sqrt{a}\right) \right] . \end{aligned}$$

This is so because \(\log (1+x)=x- \frac{x^2}{2} + \frac{x^3}{3}-\cdots \), for \(|x|<1\), and here \(\frac{D}{d} \sqrt{a}<1/8\) and \(\frac{D}{D-d} \sqrt{a}<\frac{1}{24}.\)

Therefore, if \(C>4\) and \(C>A+1\), then

$$\begin{aligned} I(d,D,a) < \frac{ D}{\sqrt{d}} \exp \left[ -\frac{1}{2} \,\frac{D^2}{d} \,a+ \frac{\epsilon }{3} \,\frac{D^3}{d^2} \,a^{3/2} \right] , \end{aligned}$$

for \(\epsilon =1\) or \(\epsilon =-1\).

Note that

$$\begin{aligned} \frac{|\frac{\epsilon }{3} \,\frac{D^3}{d^2} \,a^{3/2}|}{|-\frac{1}{2} \,\frac{D^2}{d} \,a |} = \frac{2}{3} \,\frac{D}{d} \,a^{1/2}< \frac{2}{3} \,\frac{D}{d} \,\frac{d}{8\, D} =\frac{1}{12}. \end{aligned}$$

Therefore, if \(C>4\) and \(C>A+1\), we finally get

$$\begin{aligned} I(d,D,a) < \frac{ D}{\sqrt{d}} e^{- \frac{\theta }{2}\, \frac{D^2}{d} \,a }. \end{aligned}$$

\(\square \)
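
For completeness, here is a sketch of the Stirling computation of item a) above; the derivation is ours and uses only \(n\,!\sim \sqrt{2\pi n}\,(n/e)^n\) and \((n-1)^{n-1}= n^{n-1}\,(1-1/n)^{n-1}\sim n^{n-1}\, e^{-1}\):

$$\begin{aligned} \frac{(D-1)\,!}{(d-1)\,!\, (D-d-1)\, !}&\sim \sqrt{\frac{D-1}{2 \pi \, (d-1)\,(D-d-1)}}\,\, \frac{(D-1)^{D-1}}{(d-1)^{d-1}\, (D-d-1)^{D-d-1}}\,\, e^{-1}\\&\sim \sqrt{\frac{1}{2 \pi \,d}}\,\, \frac{D^{D-1}\, e^{-1}}{ d^{d-1}\, e^{-1}\,(D-d)^{D-d-1}\, e^{-1}}\,\, e^{-1} \,=\, \sqrt{\frac{1}{2 \pi \, d}}\,\, \frac{d\,(D-d)}{D}\,\, \frac{D^{D}}{ d^{d}\,(D-d)^{D-d}}\\&\sim \sqrt{\frac{d}{2 \pi } } \left( \frac{d}{D}\right) ^{-d} \left( 1- \frac{d}{D}\right) ^{d-D}, \end{aligned}$$

where in the last step we used that \((D-d)/D\rightarrow 1\).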

Motivated by the Remark before Lemma 24, we now make a convenient choice of a.

Corollary 25

There exists \(C_0>4\), such that if d and D are such that \( C_0 \log D< d < \frac{D}{C_0}\), then

$$\begin{aligned} I(d,D,a)< \frac{1}{D^3 \sqrt{d}}, \end{aligned}$$

where \(a=\frac{8\, d\, \log D}{\theta \, D^2}.\)

Proof

Take \(C_0>C\) (of Lemma 24) and \(C_0> 24^{2}\). Then

$$\begin{aligned} \sqrt{a} = \sqrt{\frac{8}{\theta }} \, \frac{ \sqrt{d\, \log D}}{ D}< 3\, \frac{ \sqrt{d\, \frac{d}{C_0}}}{ D}= \frac{3\,d}{ D\,\sqrt{C_0}}< \frac{3\,d}{ D\,24}= \frac{d}{ 8\, D\,}, \end{aligned}$$

because \(\frac{8}{\theta }<9.\)

Moreover, \(\sqrt{a}> \frac{ \sqrt{d\, \log D}}{ D}> \frac{1}{ D}.\)

By Lemma 24, we get that

$$\begin{aligned} I(d,D,a) < \frac{D}{\sqrt{d}}\, e^{ -\, \frac{\theta \, D^2}{2\,d} \,\frac{8}{\theta }\, \frac{d\, \log D}{D^2} }=\frac{D}{\sqrt{d}} \, e^{ -\,4\, \log D } = \frac{1}{ D^3 \sqrt{d}}. \end{aligned}$$

\(\square \)

Lemma 26

Suppose \(C_0\) is the constant of Corollary 25. Given \(1\le \nu \le N\), suppose that \(C_0\log D< d_\nu < \frac{D}{C_0}.\) Then

$$\begin{aligned} \int _\Delta \, \max _{1\le \alpha \le D} \,\left(<\,\phi _\alpha , P_\nu (\mathcal {D})\, \phi _\alpha \,>- \frac{d_\nu }{D}\right) ^2 w_\Delta (\mathcal {D})\,<\, \frac{9\, d_\nu \, \log D}{D^2}. \end{aligned}$$

Proof

Set \(a= \frac{8\, d_\nu \, \log D}{\theta \, D^2}\).

By Corollary 25 and Lemma 22 (see also the beginning of the proof of Lemma 24), we get that the probability that the above integrand is greater than or equal to a is smaller than \( D\, \frac{1}{D^3\, \sqrt{d_\nu }} = \frac{1}{D^2 \sqrt{d_\nu } }\).

As we pointed out in the Remark before Lemma 24 (applied with \(r=1\), since the integrand is bounded by 1), the integral is smaller than

$$\begin{aligned} \frac{1}{D^2 \sqrt{d_\nu } }\,+\, \frac{8}{\theta }\, \frac{d_\nu \, \log D}{ D^2}. \end{aligned}$$

Note that

$$\begin{aligned} \frac{\frac{1}{D^2 \, \sqrt{d_\nu } }}{ \frac{d_\nu \, \log D}{ D^2}}\,=\, \frac{1}{d_\nu ^{3/2} \,\log D}<9- \frac{8}{\theta }=\frac{3}{11}, \end{aligned}$$

because \(d_\nu ^{3/2} \,\log D> C_0^{3/2}\, (\log D)^{5/2}> C_0^{3/2}>8> \frac{11}{3}.\)

Therefore

$$\begin{aligned} \frac{1}{D^2 \sqrt{d_\nu } }\,+\, \frac{8}{\theta }\, \frac{d_\nu \, \log D}{ D^2}< (9-\frac{8}{\theta } )\,\frac{d_\nu \, \log D}{ D^2}\,+\frac{8}{\theta } \, \frac{d_\nu \, \log D}{ D^2}\,= \frac{9\,d_\nu \, \log D}{ D^2} . \end{aligned}$$

\(\square \)

In Lemma 18, the function \(g_\nu \) is defined as the sum of two terms (see expression (6)). Lemma 26 takes care of the upper bound of the integral of the second term. Now, we will estimate the upper bound for the first term (using the Remark before Lemma 24). First, we need two lemmas.

Lemma 27

Suppose \(\phi \) and \(\psi \) are orthonormal and \(E\subset \mathcal {H}\) is a subspace. Denote by P the orthogonal projection of \(\mathcal {H}\) onto E.

Then, \( |\,<\phi ,\,P(\psi )\,>\,|^2\le 1/4\).

Proof

If \(\psi \) is orthogonal to E or \(\psi \in E\), we have that \(<\phi ,P(\psi )>=0.\)

Suppose \(\psi \) is not in E and is also not orthogonal to E. Write \(\psi =\psi _1+\psi _2\), where \(\psi _1 \) is orthogonal to E and \(\psi _2 \in E\).

Let \(\lambda =|\psi _1| \) and \(\mu =|\psi _2|\), then \(\psi _1=\lambda e_1\), \(\psi _2= \mu \, e_2\), where \(e_1\) and \(e_2\) are orthonormal.

Denote by \(\theta \) the orthogonal projection of \(\phi \) over \(\mathbb {C}\, e_1 + \mathbb {C}\, e_2\). Then

$$\begin{aligned} |\theta |\le 1 \,\,\,\text {and}\,\, \,\,\alpha \,=\,<\phi , P(\psi )>=< \phi ,\psi _2>= <\theta ,\psi _2>. \end{aligned}$$

Now, \(<\phi ,\,\psi >=0\) implies that

$$\begin{aligned} 0 =<\phi ,\psi _1>+<\phi ,\psi _2>=<\theta ,\psi _1>+<\theta ,\psi _2>. \end{aligned}$$

Suppose \(\theta = a \, e_1 + b\, e_2\), then \(|a|^2 + |b|^2\le 1.\) On the other hand, \(1=|\psi |^2=|\psi _1 + \psi _2|^2= |\lambda |^2 + |\mu |^2\) and

$$\begin{aligned} \alpha =<\theta , \psi _2>\,=\,b\, \overline{\mu } ,\,\,\, \,\, <\theta , \psi _1>\,=\,a\, \overline{\lambda },\,\,\, \,\,a\, \overline{\lambda }=-\,b\, \overline{\mu }=-\alpha . \end{aligned}$$

From this, it follows that \(|\frac{-\alpha }{a}|^2 + |\frac{\alpha }{b}|^2=1\), that is, \(|\alpha |^2= \frac{|a|^2\, |b|^2}{|a|^2 +|b|^2}\le \frac{|a|^2 +|b|^2}{4}\le \frac{1}{4},\) because \(|a|^2\,|b|^2\le \left( \frac{|a|^2 +|b|^2}{2}\right) ^2\) and \(|a|^2 +|b|^2\le 1.\)

Note that if \(a\,b=0\), then \(\alpha =0\). \(\square \)
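
A simple example (it is not part of the original argument) shows that the constant 1/4 in Lemma 27 cannot be improved: take orthonormal vectors \(e_1, e_2\), the subspace \(E=\mathbb {C}\, e_2\), and the orthonormal pair

$$\begin{aligned} \phi = \frac{1}{\sqrt{2}}\,(e_1 - e_2),\,\,\,\,\,\psi = \frac{1}{\sqrt{2}}\,(e_1 + e_2),\,\,\,\,\,\text {for which}\,\,\,<\phi ,\,P(\psi )\,>\,=\,<\phi ,\, \frac{1}{\sqrt{2}}\, e_2>\,=\,-\frac{1}{2}\,\,\,\,\text {and}\,\,\,\, |\,<\phi ,\,P(\psi )\,>\,|^2=\frac{1}{4}. \end{aligned}$$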

Lemma 28

Given positive integers \(d\), \(D\), where \(1<d\) and \(D> 2 d +2\), denote

$$\begin{aligned} f(t) = (1-t)^{d+1-D}\,(1+t)^{1-d}+ (1+t)^{d+1-D}\,(1-t)^{1-d}. \end{aligned}$$

Then, f(t) is increasing on the interval (0, 1).

Proof

For any \(t\in (0,1)\), we have

$$\begin{aligned} f'(t)\,= (1-t)^{d+1-D}\,(1+t)^{1-d}\,\left[ \frac{1-d}{1+t}- \frac{d+1-D}{1-t}\right] + (1+t)^{d+1-D}\,(1-t)^{1-d}\,\left[ \frac{d+1-D}{1+t}-\frac{1-d}{1-t}\right] . \end{aligned}$$

Taking \(z=\frac{1+t}{1-t}>1\), we get

$$\begin{aligned} (1+t)^{D-1}\, f'(t)= z^{D-d-1} \,\left[ (D-d-1)\,z - (d-1)\right] + z^{d-1}\,\left[ (d-1)\,z-(D-d-1)\right] > \end{aligned}$$
$$\begin{aligned} z^{ D-d-1} \left[ (D-d-1)-\,(d-1)\right] + z^{d-1} \left[ \,(d-1) - (D-d-1)\right] \,= \end{aligned}$$
$$\begin{aligned} (z^{D-d-1} \,-z^{d-1})\, (D- 2\,d)>0, \end{aligned}$$

because \(z>1\) and \(D>2\,d\). \(\square \)

Suppose \(0\le a < 1/4\) and \(d\), \(D\) are positive integers, such that \(1<d<D-1\). Define

$$\begin{aligned} J(d,D,a)\,=\frac{(D-1)\,!\, }{(d-1)\,!\, (D-d-1)\, ! } \int _{1/2 - \sqrt{1/4-a} }^{1/2 + \sqrt{1/4-a}} \frac{(w\, (1-w)- a)^{D-2} }{w^{D-d-1} \, (1-w)^{d-1}} \, \mathrm{d}w. \end{aligned}$$
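
Note that \(J(d,D,a)\) is exactly the expression obtained at the beginning of this section for \(\text {Prob} (|f|^2 \ge a)\) (see the proof preceding the Remark above), so

$$\begin{aligned} J(d,D,a)\,=\, \text {Prob} (|f|^2 \ge a),\,\,\,\,\,0\le a< 1/4, \end{aligned}$$

and, in particular, \(0\le J(d,D,a)\le 1\).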

Lemma 29

Suppose \(d\), \(D\) are positive integers with \(1<d\) and \( 2\,d+2 < D\). Then

$$\begin{aligned} 0\le J(d,D,a)< e^{- 4 \,a (D-3/2)},\,\,\,\,\text {where}\,\,\,0< a <1/4. \end{aligned}$$

Proof

Note that \(J(d,D,a)\) is positive.

In the integration, we divide the integral into two parts: \([1/2 - \sqrt{1/4-a},\,1/2]\) and \([1/2,\,1/2 + \sqrt{1/4-a}]\).

We make a change of variable \(w= 1/2\,-\sqrt{1/4-x}\) on the first interval and \(w= 1/2\,+\sqrt{1/4-x}\) on the second interval. In both cases, we get \(x=w\,(1-w)\) and \(a\le x\le 1/4.\)

From this, it follows

$$\begin{aligned} J(d,D,a)= & {} \frac{(D-1)\,!\, }{(d-1)\,!\, (D-d-1)\, ! } \int _{a}^{1/4} (x-a)^{D-2} \left[ \left( \frac{1}{2}- \sqrt{\frac{1}{4}-x}\right) ^{d+1-D}\,\left( \frac{1}{2}+ \sqrt{\frac{1}{4}-x}\right) ^{1-d}\right. \\&+\left( \left. \frac{1}{2}+ \sqrt{\frac{1}{4}-x}\right) ^{d+1-D}\left( \frac{1}{2}- \sqrt{\frac{1}{4}-x}\right) ^{1-d}\right] \frac{1}{2\, \sqrt{1/4-x}}\, \mathrm{d}x\\= & {} \frac{2^{D-2}\,(D-1)\,!\, }{(d-1)\,!\, (D-d-1)\, ! } \int _{a}^{1/4} (x-a)^{D-2} [(1- \sqrt{1-4\,x})^{d+1-D}\,(1+ \sqrt{1-4\, x})^{1-d}\\&+ (1+ \sqrt{1-4\,x})^{d+1-D}\,(1- \sqrt{1-4\, x})^{1-d}\, ]\,\frac{1}{\sqrt{1-4\,x}\,}\,\mathrm{d}x. \end{aligned}$$

Now, we consider \(y=\frac{x-a}{ 1/4 -a}\). In this case \((1-4x)=(1-4a)(1-y)\).

Then

$$\begin{aligned} J(d,D,a) = & {} \frac{(1-4a)^{D-3/2}\,(D-1)\,!\, }{2^D\,(d-1)\,!\, (D-d-1)\, ! }\,\int _0^1\,y^{D-2}\, [\,(1-\sqrt{1-4a}\,\sqrt{1-y})^{d+1-D}\, (1+\sqrt{1-4a}\,\sqrt{1-y})^{1-d}\\&+ (1+\sqrt{1-4a}\,\sqrt{1-y})^{d+1-D}\, (1-\sqrt{1-4a}\,\sqrt{1-y})^{1-d} \,\,]\, \frac{1}{\sqrt{1-y}}\, \mathrm{d}y. \end{aligned}$$

Note that, apart from the factor \((1-4a)^{D-3/2}\), only the expression inside the brackets \([\,\,\) \(\,\,]\) depends on a. For each \(y\in (0,1)\), we have that \(\sqrt{1-4a}\,\sqrt{1-y}\in (0,1) \) is a decreasing function of a. It follows from Lemma 28 that, for each \(y\in (0,1)\), the expression inside the brackets is a decreasing function of a.

Therefore, \(\frac{J(d,D,a)}{(1-4a)^{D-3/2}}\) is a decreasing function of a. As \(J(d,D,0)=1\) (see Lemma 23), it follows that

$$\begin{aligned} J(d,D,a)\le (1- 4\, a)^{D-3/2},\,\,\,\,0 \le a < 1/4. \end{aligned}$$

Finally, note that \((1- 4\, a)^{D-3/2}< e^{ -4\, a\, (D-3/2) }\) for \(a>0\), because \(1-x< e^{-x}\) for \(x\ne 0\). \(\square \)

Corollary 30

If \(1<d, \,D>2\, d +2\) and \(\frac{\log D}{D}<\frac{1}{3}\), then

$$\begin{aligned} J(d,D,a) < D^{-3} e^{ \frac{9\, \log D}{2\,D}},\,\,\,\,\text {where}\,\, a=\frac{3}{4}\, \frac{\log D}{D}. \end{aligned}$$

Proof

It follows from Lemma 29, because \( 0<\frac{3}{4} \frac{\log D}{D} < \frac{1}{4}.\) \(\square \)
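
Explicitly (the computation is ours), with \(a=\frac{3}{4}\, \frac{\log D}{D}\) the bound of Lemma 29 gives

$$\begin{aligned} J(d,D,a)< e^{- 4 \,a\, (D-3/2)} = e^{- 3\, \log D + \frac{9\, \log D}{2\,D}}= D^{-3}\, e^{ \frac{9\, \log D}{2\,D}}. \end{aligned}$$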

Lemma 31

Suppose \(1\le \nu \le N \), \(3<d_\nu \), \(D>2 \, d_\nu +2\), and \( \frac{\log D}{D}<\frac{1}{5}.\)

Then

$$\begin{aligned} \int _\Delta \max _{1\le \alpha \ne \beta \le D}\,|\,< \phi _\alpha ,P_\nu ( \mathcal {D}) \phi _\beta >\,|^2\, w_\Delta (\mathcal {D})< \frac{\log D}{D} \end{aligned}$$
(8)

where \(\phi _1,..,\phi _D\) is an orthonormal basis of eigenvectors for H (without resonances).

Proof

By Lemma 23 and Corollary 30, the probability that the integrand is greater than or equal to a is smaller than

$$\begin{aligned} \frac{D\, (D-1)}{2}\, D^{-3} e^{\frac{9\, \log D}{2\,D}},\,\,\,\,\,a=\frac{3\, \log D}{4\, D}, \end{aligned}$$

because, as \(e_{\alpha ,\beta } = \overline{e_{\beta ,\alpha }}\), we just have to consider the pairs with \(\alpha <\beta \).

By the Remark before Lemma 24, the integral is smaller than

$$\begin{aligned} \frac{3}{4}\, \frac{\log D}{D} \, + \frac{D\,(D-1)}{8} D^{-3}\, e^{\frac{9\,\log D}{2\, D}}, \end{aligned}$$

because, by Lemma 27, \(|e_{\alpha ,\beta } |^2\le 1/4 \), so we can take \(r=1/4\) in the Remark.

As \(D-1<D\), we have

$$\begin{aligned} \frac{\frac{D\,(D-1)}{8} D^{-3}\, e^{\frac{9\,\log D}{2\, D}}}{\frac{\log D}{D} } < \,\frac{1}{8\, D} e^{\frac{9\,\log D}{2\, D}}\, \frac{D}{\log D}= e^{\frac{9\,\log D}{2\, D}}\, \frac{1}{8\,\log D}. \end{aligned}$$

Now, as \(D> 2\,d_\nu +2\ge 10\), we have \(\log D\ge 2\), and we get

$$\begin{aligned} \frac{1}{8\, \log D} e^{\frac{9\,\log D}{2\, D}}< \frac{1}{16} e^{9/10}< \frac{e}{16}<1/4. \end{aligned}$$

Putting the two estimates together, the integral is smaller than \(\frac{3}{4}\, \frac{\log D}{D} + \frac{1}{4}\, \frac{\log D}{D}= \frac{\log D}{D}\), and we get the claim of the Lemma. \(\square \)

Lemma 18 follows from Lemmas 26 and 31. In this way, we get the claim of the Quantum Ergodic Theorem of von Neumann.