
4.1 Introduction

In this paper, we consider the so-called Information-Plus-Noise type model

$$\displaystyle \begin{aligned} M_N= \varSigma_N \varSigma_N^* \mbox{ ~where ~} \varSigma_N = \sigma \frac{X_N}{\sqrt{N}}+A_N, \end{aligned} $$

defined as follows.

  • n = n(N), n ≤ N, \(c_N = \frac{n}{N} \mathop{\longrightarrow}_{N \rightarrow +\infty} c \in\, ]0;1]\).

  • \(\sigma \in\, ]0;+\infty[\).

  • \(X_N = [X_{ij}]_{1\leq i\leq n; 1\leq j\leq N}\) where \(\{X_{ij}, i\in \mathbb {N}, j \in \mathbb {N}\}\) is an infinite set of complex random variables such that \(\{\Re (X_{ij}), \Im (X_{ij}), i\in \mathbb {N}, j \in \mathbb {N}\}\) are independent centered random variables with variance 1∕2 and satisfy

    1.

      There exists K > 0 and a random variable Z with finite fourth moment for which there exist \(x_0 > 0\) and an integer \(n_0 > 0\) such that, for any \(x > x_0\) and any integers \(n_1, n_2 > n_0\), we have

      $$\displaystyle \begin{aligned}\frac{1}{n_1n_2} \sum_{i\leq n_1,j\leq n_2}P\left( \vert X_{ij}\vert >x\right) \leq KP\left(\vert Z \vert>x\right).\end{aligned} $$
      (4.1)
    2.
      $$\displaystyle \begin{aligned}\sup_{(i,j)\in \mathbb{N}^2}\mathbb{E}(\vert X_{ij}\vert^3)<+\infty. \end{aligned} $$
      (4.2)
  • Let ν be a compactly supported probability measure on \(\mathbb {R}\) whose support has a finite number of connected components. Let \(\varTheta = \{\theta_1;\ldots;\theta_J\}\) where \(\theta_1 > \ldots > \theta_J \geq 0\) are J fixed real numbers independent of N which are outside the support of ν. Let \(k_1, \ldots, k_J\) be fixed integers independent of N and \(r=\sum _{j=1}^J k_j\). Let \(\beta_j(N) \geq 0\), r + 1 ≤ j ≤ n, be such that \(\frac {1}{n} \sum _{j=r+1}^{n} \delta _{\beta _j(N)}\) weakly converges to ν and

    $$\displaystyle \begin{aligned} \max _{r+1\leq j\leq n} \mathrm{dist}(\beta _j(N),\mathrm{supp}(\nu ))\mathop{\longrightarrow } _{N \rightarrow \infty } 0\end{aligned} $$
    (4.3)

    where supp(ν) denotes the support of ν.

    Let \(\alpha_j(N)\), j = 1, …, J, be real nonnegative numbers such that

    $$\displaystyle \begin{aligned}\lim_{N \rightarrow +\infty} \alpha_j(N)=\theta_j.\end{aligned}$$

    Let \(A_N\) be an n × N deterministic matrix such that, for each j = 1, …, J, \(\alpha_j(N)\) is an eigenvalue of \(A_N A_N^*\) with multiplicity \(k_j\), and the other eigenvalues of \(A_N A_N^*\) are the \(\beta_j(N)\), r + 1 ≤ j ≤ n. Note that the empirical spectral measure of \({A_N A_N^*} \) weakly converges to ν.
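As a concrete illustration of this model, here is a minimal numerical sketch (an aside, not part of the paper's argument), assuming for simplicity ν = δ_0 (i.e. all β_j(N) = 0), a single spike θ_1 with k_1 = 1, and Gaussian entries; all variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 500, 1000                    # c_N = n/N = 1/2
sigma, theta1 = 1.0, 4.0            # one spike theta_1 outside supp(nu) = {0}

# A_N: n x N deterministic matrix with A_N A_N^* = diag(theta_1, 0, ..., 0)
A = np.zeros((n, N))
A[0, 0] = np.sqrt(theta1)

# X_N: independent standardized complex entries (real/imaginary parts of variance 1/2)
X = (rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))) / np.sqrt(2)

Sigma = sigma * X / np.sqrt(N) + A
M = Sigma @ Sigma.conj().T          # the Information-Plus-Noise matrix M_N
print(np.linalg.eigvalsh(M)[-3:])   # the top eigenvalue separates from the bulk
```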

Remark 4.1

Note that an assumption such as (4.1) appears in [14]. It obviously holds if the \(X_{ij}\)'s are identically distributed with finite fourth moment.

For any Hermitian n × n matrix Y , denote by spect(Y ) its spectrum, by

$$\displaystyle \begin{aligned} \lambda_1(Y) \geq \ldots \geq \lambda_n(Y) \end{aligned}$$

the ordered eigenvalues of Y and by \(\mu_Y\) the empirical spectral measure of Y :

$$\displaystyle \begin{aligned}\mu _{Y} := \frac{1}{n} \sum_{i=1}^n \delta_{\lambda _{i}(Y)}.\end{aligned}$$

For a probability measure τ on \(\mathbb {R}\), denote by g τ its Stieltjes transform defined for \(z \in \mathbb {C}\setminus \mathbb {R}\) by

$$\displaystyle \begin{aligned}g_\tau (z) = \int_{\mathbb{R}} \frac{d\tau (x)}{z-x}.\end{aligned}$$
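For instance, if τ = δ_a is a Dirac mass at \(a \in \mathbb{R}\), then \(g_\tau(z) = \frac{1}{z-a}\); in particular \(g_{\delta_0}(z)=\frac{1}{z}\), a case used for illustration below.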

When the \(X_{ij}\)'s are identically distributed, Dozier and Silverstein established in [15] that almost surely the empirical spectral measure \(\mu _{M_N}\) of M_N converges weakly towards a nonrandom distribution \(\mu_{\sigma,\nu,c}\), characterized in terms of its Stieltjes transform, which satisfies the following equation: for any \(z \in \mathbb {C}^+\),

$$\displaystyle \begin{aligned} g_{\mu_{\sigma,\nu,c}}(z)=\int \frac{1}{(1-\sigma^2cg_{ \mu_{\sigma,\nu,c}}(z))z- \frac{ t}{1- \sigma^2 cg_{ \mu_{\sigma,\nu,c}}(z)} -\sigma^2 (1-c)}d\nu(t).\end{aligned} $$
(4.4)
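Equation (4.4) is a fixed-point equation for \(g_{\mu_{\sigma,\nu,c}}\) and lends itself to numerical solution by damped iteration. The following sketch (an illustration under the simplifying assumption ν = δ_0, so that the integral reduces to a single term; the parameters are arbitrary) approximates the density of \(\mu_{\sigma,\nu,c}\) by Stieltjes inversion:

```python
import numpy as np

def g_mu(z, sigma=1.0, c=0.5, n_iter=5000):
    """Solve the fixed-point equation (4.4) for nu = delta_0 by damped iteration."""
    g = 1.0 / z                           # initial guess: g_{delta_0}(z) = 1/z
    for _ in range(n_iter):
        denom = (1 - sigma**2 * c * g) * z - sigma**2 * (1 - c)
        g = 0.5 * g + 0.5 / denom         # damping stabilizes the iteration
    return g

# approximate density of mu_{sigma,nu,c} via the Stieltjes inversion formula:
# density(x) ~ -Im g_mu(x + iy) / pi for small y > 0
xs = np.linspace(0.05, 4.0, 200)
density = [-g_mu(x + 1e-3j).imag / np.pi for x in xs]
```

For ν = δ_0 this recovers a Marchenko-Pastur type law, which can serve as a sanity check of the iteration.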

This result of convergence was extended to independent but non identically distributed random variables by Xie in [30]. (Note that, in [21], the authors investigated the case where σ is replaced by a bounded sequence of real numbers.) In [11], the author carries on with the study of the support of the limiting spectral measure previously investigated in [16] and later in [25, 28] and obtains that there is a one-to-one relationship between the complement of the limiting support and some subset of the complement of the support of ν which is defined in (4.6) below.

Proposition 4.1

Define differentiable functions \(\omega_{\sigma,\nu,c}\) and \(\varPhi_{\sigma,\nu,c}\) on respectively \( \mathbb {R}\setminus \mathit{\mbox{supp}}(\mu _{\sigma ,\nu ,c})\) and \( \mathbb {R}\setminus \mathit{\mbox{supp}}(\nu )\) by setting

$$\displaystyle \begin{aligned}\omega_{\sigma,\nu,c} :\begin{array}{ll} \mathbb{R}\setminus \mathit{\mbox{supp}}(\mu_{\sigma,\nu,c}) \rightarrow \mathbb{R}\\ x \mapsto x (1- \sigma^2 c g_{ \mu_{\sigma,\nu,c}}(x))^2 -\sigma^2 (1-c)(1-\sigma^2 c g_{\mu_{\sigma,\nu,c}}(x))\end{array}\end{aligned} $$
(4.5)

and

$$\displaystyle \begin{aligned}\varPhi_{\sigma,\nu,c} :\begin{array}{ll} \mathbb{R}\setminus \mathit{\mbox{supp}}(\nu) \rightarrow \mathbb{R}\\ x \mapsto x (1+c \sigma^2g_{ \nu}(x))^2 + \sigma^2 (1-c) (1+ c \sigma^2 g_\nu(x))\end{array}.\end{aligned}$$

Set

$$\displaystyle \begin{aligned} \mathbb{E}_{\sigma,\nu,c}:=\left\{ x \in \mathbb{R}\setminus \mathit{\mbox{supp}}(\nu), \varPhi_{\sigma,\nu,c}^{\prime}(x) >0, g_\nu(x) >-\frac{1}{\sigma^2c}\right\}.\end{aligned} $$
(4.6)

\(\omega_{\sigma,\nu,c}\) is an increasing analytic diffeomorphism with positive derivative from \(\mathbb {R}\setminus \mathit{\mbox{supp}}(\mu _{\sigma ,\nu ,c})\) to \(\mathbb {E}_{\sigma ,\nu ,c}\) , with inverse \(\varPhi_{\sigma,\nu,c}\).
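As an elementary illustration of Proposition 4.1 (a sanity check, not a new statement), take ν = δ_0, so that \(g_\nu(x)=\frac{1}{x}\) for x ≠ 0 and

$$\displaystyle \begin{aligned}\varPhi_{\sigma,\delta_0,c}(x)= x\left(1+\frac{c\sigma^2}{x}\right)^2+\sigma^2(1-c)\left(1+\frac{c\sigma^2}{x}\right)=\frac{(x+c\sigma^2)(x+\sigma^2)}{x},\end{aligned}$$

which is the outlier location map familiar from the finite rank information-plus-noise setting.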

Moreover, extending previous results in [25] and [8] involving the Gaussian case and finite rank perturbations, [11] establishes a one-to-one correspondence between the \(\theta_i\)'s that belong to the set \(\mathbb {E}_{\sigma ,\nu ,c}\) (counting multiplicity) and the outliers in the spectrum of M_N. More precisely, setting

$$\displaystyle \begin{aligned}\varTheta_{\sigma,\nu,c} = \left\{\theta \in \varTheta, \varPhi_{\sigma,\nu,c}^{\prime}(\theta) >0, g_\nu(\theta) >-\frac{1}{\sigma^2c}\right\},\end{aligned} $$
(4.7)

and

$$\displaystyle \begin{aligned}\mathbb{S}=\mbox{ supp } (\mu_{\sigma,\nu,c}) \cup \left\{ \varPhi_{\sigma,\nu,c}({\theta}), \theta \in \varTheta_{\sigma,\nu,c} \right\},\end{aligned} $$
(4.8)

we have the following results.

Theorem 4.1 ([11])

For any 𝜖 > 0,

$$\displaystyle \begin{aligned}\mathbb P[\,\mathit{\mbox{for all large N}}, \mathrm{spect}(M_N) \subset \{x \in \mathbb{R} , \mathit{\mbox{dist}}(x,\mathbb{S})\leq \epsilon \}]=1.\end{aligned}$$

Theorem 4.2 ([11])

Let \(\theta_j\) be in \(\varTheta_{\sigma,\nu,c}\) and denote by \(n_{j-1}+1, \ldots, n_{j-1}+k_j\) the descending ranks of \(\alpha_j(N)\) among the eigenvalues of \(A_NA_N^*\) . Then the \(k_j\) eigenvalues \((\lambda _{n_{j-1}+i}(M_N), \, 1 \leq i \leq k_j)\) converge almost surely outside the support of \(\mu_{\sigma,\nu,c}\) towards \(\rho _{\theta _j}:=\varPhi _{\sigma ,\nu ,c}(\theta _j)\) . Moreover, these eigenvalues asymptotically separate from the rest of the spectrum since (with the conventions that \(\lambda_0(M_N) = +\infty\) and \(\lambda_{n+1}(M_N) = -\infty\)) there exists \(\delta_0 > 0\) such that almost surely for all large N,

$$\displaystyle \begin{aligned}\lambda_{n_{j-1}}(M_N) > \rho _{\theta _j} + \delta_0 \, \mathit{\mbox{ and }} \, \lambda_{n_{j-1}+k_j +1}(M_N) < \rho _{\theta _j} - \delta_0 .\end{aligned} $$
(4.9)
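As a numerical illustration of Theorem 4.2 (an aside continuing the sketch of Sect. 4.1, with ν = δ_0 so that \(\varPhi_{\sigma,\nu,c}(x)=\frac{(x+c\sigma^2)(x+\sigma^2)}{x}\), as computed after Proposition 4.1):

```python
import numpy as np

rng = np.random.default_rng(1)
n, N, sigma, theta = 500, 1000, 1.0, 4.0
c = n / N

A = np.zeros((n, N)); A[0, 0] = np.sqrt(theta)
X = (rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))) / np.sqrt(2)
M = (sigma * X / np.sqrt(N) + A) @ (sigma * X / np.sqrt(N) + A).conj().T

phi = lambda x: (x + c * sigma**2) * (x + sigma**2) / x   # Phi for nu = delta_0
print(np.linalg.eigvalsh(M)[-1], phi(theta))  # outlier vs predicted rho_theta
```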

Remark 4.2

Note that Theorems 4.1 and 4.2 were established in [11] for A_N as in (4.14) below and with \(\mathbb {S}\cup \{0\}\) instead of \( \mathbb {S}\), but they hold true as stated above and in the more general framework of this paper. Indeed, these extensions can be obtained by sticking to the proof of the corresponding results in [11] but using the new versions of [3] and of the exact separation phenomenon of [11] which are presented in Appendix 1 of the present paper.

The aim of this paper is to study how the eigenvectors corresponding to the outliers of M_N project onto those corresponding to the spikes \(\theta_i\). Note that there are some pioneering results investigating the eigenvectors corresponding to the outliers of finite rank perturbations of classical random matrix models: [27] in the real Gaussian sample covariance matrix setting, and [7, 8] dealing with finite rank additive or multiplicative perturbations of unitarily invariant matrices. For a general perturbation of sample covariance matrices, Ledoit and Péché [23] introduced a tool to study the average behaviour of the eigenvectors, but it seems that this did not allow them to focus on the eigenvectors associated with the eigenvalues that separate from the bulk. It turns out that further studies [6, 10] point out that the angle between the eigenvectors of the outliers of the deformed model and the eigenvectors associated with the corresponding original spikes is determined by Biane-Voiculescu's subordination function. For the model investigated in this paper, such a free probabilistic interpretation holds, but we choose not to develop this point of view here and refer the reader to [13]. Here is the main result of the paper.

Theorem 4.3

Let \(\theta_j\) be in \(\varTheta_{\sigma,\nu,c}\) (defined in (4.7)) and denote by \(n_{j-1}+1, \ldots, n_{j-1}+k_j\) the descending ranks of \(\alpha_j(N)\) among the eigenvalues of \(A_NA_N^*\) . Let ξ(j) be a normalized eigenvector of M_N relative to one of the eigenvalues \((\lambda _{n_{j-1}+q}(M_N)\) , \(1 \leq q \leq k_j\)). Denote by ∥⋅∥_2 the Euclidean norm on \(\mathbb {C}^n\) . Then, almost surely

  1. (i)

    \(\displaystyle {\lim _{N\rightarrow +\infty }\left \| P_{\mathit{\mbox{Ker }}(\alpha _j(N) I_N-A_NA_N^*)}\xi (j)\right \|{ }^2_2 = \tau (\theta _j)}\)

    where

    $$\displaystyle \begin{aligned}\tau(\theta_j)= \frac{1-\sigma^2c g_{\mu_{\sigma,\nu,c}}(\rho_{\theta_j})}{\omega_{\sigma,\nu,c}^{\prime}(\rho_{\theta_j})}=\frac{ \varPhi_{\sigma,\nu,c}^{\prime}({\theta_j})}{1+ \sigma^2 cg_\nu(\theta_j)} \end{aligned} $$
    (4.10)
  2. (ii)

    for any \(\theta_i\) in \(\varTheta_{\sigma,\nu,c} \setminus \{\theta_j\}\),

    $$\displaystyle \begin{aligned}\displaystyle{\lim_{N\rightarrow +\infty}\left\| P_{ \mathit{\mbox{Ker }}(\alpha_i(N) I_N-A_NA_N^*)}\xi(j)\right\|{}_2 = 0.}\end{aligned}$$
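As a numerical illustration of Theorem 4.3(i) (again with ν = δ_0 and a rank-one spike, as in the previous sketches, so that by (4.10) \(\tau(\theta)=\frac{1-c\sigma^4/\theta^2}{1+c\sigma^2/\theta}\)):

```python
import numpy as np

rng = np.random.default_rng(2)
n, N, sigma, theta = 1000, 2000, 1.0, 4.0
c = n / N

A = np.zeros((n, N)); A[0, 0] = np.sqrt(theta)   # Ker(theta I - A A^*) = C e_1
X = (rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))) / np.sqrt(2)
M = (sigma * X / np.sqrt(N) + A) @ (sigma * X / np.sqrt(N) + A).conj().T

xi = np.linalg.eigh(M)[1][:, -1]                 # unit eigenvector of the outlier
tau = (1 - c * sigma**4 / theta**2) / (1 + c * sigma**2 / theta)
print(abs(xi[0])**2, tau)                        # ||P_Ker xi||_2^2 vs tau(theta)
```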

The sketch of the proof of Theorem 4.3 follows the analysis of [10], as explained in Sect. 4.2. In Sect. 4.3, we prove a universality result allowing us to reduce the study to estimating expectations of Gaussian resolvent entries, which is carried out in Sect. 4.4. In Sect. 4.5, we explain how to deduce Theorem 4.3 from the previous sections. In Appendix 1, we present alternative versions on the one hand of the result in [3] about the lack of eigenvalues outside the support of the deterministic equivalent measure, and, on the other hand, of the result in [11] about the exact separation phenomenon. These new versions deal with random variables whose imaginary and real parts are independent but remove the technical assumptions ((1.10) and "b_1 > 0" in Theorem 1.1 in [3] and "\(\omega_{\sigma,\nu,c}(b) > 0\)" in Theorem 1.2 in [11]). This allows us to claim that Theorem 4.2 holds in our context (see Remark 4.2). Finally, we present, in Appendix 2, some technical lemmas that are used throughout the paper.

4.2 Sketch of the Proof

Throughout the paper, for any m × p matrix B, \((m,p)\in {\mathbb {N}}^2\), we will denote by ∥B∥ the largest singular value of B, and by \(\Vert B\Vert _2=\{Tr (BB^*)\}^{\frac {1}{2}}\) its Hilbert-Schmidt norm.

The proof of Theorem 4.3 follows the analysis in two steps of [10].

Step A

First, we shall prove that, for any orthonormal system \((\xi _1,\cdots ,\xi _{k_j})\) of eigenvectors associated to the k j eigenvalues \(\lambda _{n_{j-1}+q}(M_N)\), 1 ≤ q ≤ k j, the following convergence holds almost surely: ∀l = 1, …, J,

$$\displaystyle \begin{aligned} \sum_{p=1}^{k_j}\left\| P_{\ker(\alpha_l (N) I_N-A_NA_N^*)}\xi_p \right\|{}^2_2 \rightarrow_{N \rightarrow +\infty} \frac{k_j\delta_{jl} (1-\sigma^2 c g_{\mu_{\sigma,\nu,c}}(\rho_{\theta_j}))}{\omega_{\sigma,\nu,c}^{\prime}(\rho_{\theta_j})}. \end{aligned} $$
(4.11)

Note that for any smooth functions h and f on \(\mathbb {R}\), if \(v_1, \ldots, v_n\) are eigenvectors associated to \(\lambda _1(A_NA_N^*), \ldots ,\lambda _n(A_NA_N^*)\) and \(w_1, \ldots, w_n\) are eigenvectors associated to \(\lambda_1(M_N), \ldots, \lambda_n(M_N)\), one can easily check that

$$\displaystyle \begin{aligned} \mathrm{Tr} \left[h(M_N) f(A_NA_N^*)\right] =\sum_{m,p=1}^n h(\lambda_p(M_N)) f(\lambda_m(A_NA_N^*)) \vert \langle v_m,w_p \rangle \vert^2. \end{aligned} $$
(4.12)
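Indeed, writing the spectral decompositions \(h(M_N)=\sum_{p=1}^n h(\lambda_p(M_N))\, w_p w_p^*\) and \(f(A_NA_N^*)=\sum_{m=1}^n f(\lambda_m(A_NA_N^*))\, v_m v_m^*\), (4.12) follows from \(\mathrm{Tr}\,( w_p w_p^* v_m v_m^*)=\vert \langle v_m,w_p \rangle \vert^2\).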

Thus, since \(\alpha_l(N)\) on the one hand and the \(k_j\) eigenvalues of M_N in \((\rho _{\theta _j} -\varepsilon ,\rho _{\theta _j}+\varepsilon )\) (for ε small enough) on the other hand asymptotically separate from the rest of the spectrum of respectively \(A_NA_N^*\) and M_N, a suitable choice of h and f allows the study of the restricted sum \(\sum _{p=1}^{k_j}\left \| P_{\ker (\alpha _l(N) I_N-A_NA_N^*)} \xi _p \right \|{ }^2_2\). Therefore proving (4.11) reduces to the study of the asymptotic behaviour of \(\mathrm {Tr}\left [h(M_N)f(A_NA_N^*)\right ]\) for some functions f and h concentrated on neighborhoods of \(\theta_l\) and \(\rho _{\theta _j}\) respectively.

Step B

In the second, and final, step, we shall use a perturbation argument identical to the one used in [10] to reduce the problem to the case of a spike with multiplicity one, a case that follows trivially from Step A.

Step B closely follows the lines of [10] whereas Step A requires substantial work. We first reduce the investigations to the mean Gaussian case by proving the following.

Proposition 4.2

Let \(X_N\) be as defined in Sect. 4.1 . Let \(\mathbb {G}_N = [\mathbb {G}_{ij}]_{1\leq i\leq n, 1\leq j\leq N}\) be an n × N random matrix with i.i.d. standard complex normal entries. Let h be a function in \(\mathbb {C}^\infty (\mathbb {R}, \mathbb {R})\) with compact support, and \(\varGamma_N\) be an n × n Hermitian matrix such that

$$\displaystyle \begin{aligned} \sup_{n,N} \Vert \varGamma_N \Vert<\infty \mathit{\text{ and }} \sup_{n,N} \mathrm{rank} (\varGamma_N) <\infty.\end{aligned} $$
(4.13)

Then almost surely,

\(\mathrm {Tr} \left (h\left (\left (\sigma \frac {X_N}{\sqrt {N}}+A_N\right )\left (\sigma \frac {X_N}{\sqrt {N}}+A_N\right )^*\right ) \varGamma _N\right )\)

$$\displaystyle \begin{aligned}-\mathbb{E}\left(\mathrm{Tr} \left[h\left(\left(\sigma \frac{\mathbb{G}_N}{\sqrt{N}}+A_N\right)\left(\sigma \frac{\mathbb{G}_N}{\sqrt{N}}+A_N\right)^*\right) \varGamma_N\right] \right)\rightarrow_{N \rightarrow +\infty} 0.\end{aligned}$$

The asymptotic behaviour of \(\mathbb {E}\Big (\mathrm {Tr} \Big [h\Big (\Big (\sigma \frac {\mathbb {G}_N}{\sqrt {N}}+A_N\Big )\Big (\sigma \frac {\mathbb {G}_N}{\sqrt {N}}+A_N\Big )^*\Big ) f(A_NA_N^*)\Big ] \Big )\) can be deduced, by using the bi-unitarily invariance of the distribution of \( \mathbb {G}_N\), from the following Proposition 4.3 and Lemma 4.18.

Proposition 4.3

Let \(\mathbb {G}_N = [\mathbb {G}_{ij}]_{1\leq i \leq n, 1\leq j\leq N}\) be an n × N random matrix with i.i.d. complex standard normal entries. Assume that \(A_N\) is such that

$$\displaystyle \begin{aligned} A_N=\begin{pmatrix} d_1(N) & & (0) & (0)\\ & \ddots & & \vdots\\ (0) & & d_n(N) & (0) \end{pmatrix} \end{aligned} $$
(4.14)

where n = n(N), n ≤ N, \(c_N = \frac{n}{N} \mathop{\longrightarrow}_{N \rightarrow +\infty} c \in\, ]0;1]\), for i = 1, …, n, \(d_i(N) \in \mathbb {C}\) , \(\sup_N \max_{i=1,\ldots,n}\vert d_i(N)\vert < +\infty\) and \(\frac {1}{n} \sum _{i=1}^n \delta _{\vert d_i(N)\vert ^2}\) weakly converges to a compactly supported probability measure ν on \(\mathbb {R}\) when N goes to infinity. Define for all \(z\in \mathbb {C}\setminus \mathbb {R}\) ,

$$\displaystyle \begin{aligned}G^{\mathbb{G}}_N(z) =\left (zI - \left(\sigma \frac{\mathbb{ G}_N}{\sqrt{N}}+A_N\right)\left(\sigma \frac{\mathbb{G}_N}{\sqrt{N}}+A_N\right)^*\right)^{-1}.\end{aligned}$$

Define for any q = 1, …, n,

$$\displaystyle \begin{aligned}\gamma_q(N) =(A_NA_N^*)_{qq} =\vert d_q(N)\vert^2. \end{aligned} $$
(4.15)

There is a polynomial P with nonnegative coefficients, a sequence \((u_N)_N\) of nonnegative real numbers converging to zero when N goes to infinity and some nonnegative real number l, such that for any (p, q) in {1, …, n}², for all \(z\in \mathbb {C}\setminus \mathbb {R}\) ,

$$\displaystyle \begin{aligned} \mathbb{E} \left(\left( G^{\mathbb{G}}_N(z)\right)_{pq}\right) = \frac{1- \sigma^2 cg_{\mu_{\sigma,\nu,c}}(z)}{\omega_{\sigma, \nu, c}(z) -\gamma_q(N)} \delta_{pq} +\varDelta_{p,q,N}(z), \end{aligned} $$
(4.16)

with

$$\displaystyle \begin{aligned}\left| \varDelta_{p,q,N} (z)\right| \leq (1+\vert z\vert)^l P(\vert \Im z \vert^{-1})u_N.\end{aligned}$$

4.3 Proof of Proposition 4.2

In the following, we will denote by \(o_C(1)\) any deterministic sequence of positive real numbers depending on the parameter C and converging, for each fixed C, to zero when N goes to infinity. The aim of this section is to prove Proposition 4.2.

Define for any C > 0, \(Y^C=[Y^C_{ij}]_{1\leq i \leq n, 1\leq j \leq N}\), where for any \((i,j)\in \mathbb{N}^2\),

$$\displaystyle \begin{aligned}Y_{ij}^C = X_{ij}\mathbb{1}_{\vert X_{ij}\vert \leq C}-\mathbb{E}\left(X_{ij}\mathbb{1}_{\vert X_{ij}\vert \leq C}\right).\end{aligned} $$
(4.17)

Set

$$\displaystyle \begin{aligned}\theta^*=\sup_{(i,j)\in \mathbb{N}^2}\mathbb{E}(\vert X_{ij}\vert^3)<+\infty.\end{aligned}$$

We have, using the fact that the \(X_{ij}\)'s are centered and (4.2),

$$\displaystyle \begin{aligned}\mathbb{E} \left( \vert X_{ij}-Y_{ij}^C\vert^2 \right) \leq 2\, \mathbb{E} \left( \vert X_{ij}\vert^2 \mathbb{1}_{\vert X_{ij}\vert > C} \right) \leq \frac{2\,\mathbb{E}\left(\vert X_{ij}\vert^3\right)}{C},\end{aligned}$$

so that

$$\displaystyle \begin{aligned}\sup_{i\geq 1,j\geq 1}\mathbb{E} \left( \vert X_{ij}-Y_{ij}^C\vert^2 \right) \leq \frac{2\theta^*}{C}.\end{aligned}$$

Note that, since \(\mathbb{E}(\vert \Re X_{ij}\vert^2)=1/2\) and \(\mathbb{E}(\Re X_{ij})=0\),

$$\displaystyle \begin{aligned}1 - 2\mathbb{E} \left( \vert \Re Y_{ij}^C\vert^2 \right) = 2\,\mathbb{E} \left( \vert \Re X_{ij}\vert^2 \mathbb{1}_{\vert X_{ij}\vert > C} \right) + 2\left(\mathbb{E} \left( \Re X_{ij}\, \mathbb{1}_{\vert X_{ij}\vert > C} \right)\right)^2,\end{aligned}$$

so that

$$\displaystyle \begin{aligned}\sup_{i\geq 1, j\geq 1}\vert 1 - 2\mathbb{E} \left( \vert \Re Y_{ij}^C\vert^2 \right) \vert \leq \frac{4\theta^*}{C}.\end{aligned}$$

Similarly

$$\displaystyle \begin{aligned}\sup_{i\geq 1,j\geq 1}\vert 1 - 2\mathbb{E} \left( \vert \Im Y_{ij}^C\vert^2 \right) \vert \leq \frac{4\theta^*}{C}.\end{aligned}$$

Let us assume that \(C > 8\theta^*\). Then, we have

$$\displaystyle \begin{aligned}\mathbb{E} \left( \vert \Re Y_{ij}^C\vert^2 \right)> \frac{1}{4} \; \mbox{and}\; \mathbb{E} \left( \vert \Im Y_{ij}^C\vert^2 \right)> \frac{1}{4}.\end{aligned}$$

Define for any \(C > 8\theta^*\), \(X^C=(X^C_{ij})_{1\leq i \leq n; 1\leq j \leq N},\) where for any 1 ≤ i ≤ n, 1 ≤ j ≤ N,

$$\displaystyle \begin{aligned}{X}_{ij}^C =\frac{\Re Y_{ij}^C}{\sqrt{2\mathbb{E} \left( \vert \Re Y_{ij}^C\vert^2 \right)}} +\mathrm{i} \frac{\Im Y_{ij}^C}{\sqrt{2\mathbb{E} \left( \vert \Im Y_{ij}^C\vert^2 \right)}}.\end{aligned} $$
(4.18)

Let \(\mathbb {G} = [\mathbb {G}_{ij}]_{1\leq i\leq n, 1\leq j \leq N}\) be an n × N random matrix with i.i.d. standard complex normal entries, independent of \(X_N\), and define for any α > 0,

$$\displaystyle \begin{aligned}X^{\alpha,C}= \frac{ X^C +\alpha \mathbb{G}}{\sqrt{1+\alpha^2}}.\end{aligned}$$

Now, for any n × N matrix B, let us introduce the (N + n) × (N + n) matrix

$$\displaystyle \begin{aligned}\mathbb{M}_{N+n}(B) =\left( \begin{array}{cc} 0_{n\times n} & B+A_N\\ B^* +A_N^* & 0_{N\times N} \end{array} \right).\end{aligned}$$

Define for any \(z\in \mathbb {C}\setminus \mathbb {R}\),

$$\displaystyle \begin{aligned}\tilde G(z) = \left( z I_{N+n} - \mathbb{M}_{N+n}\left(\sigma \frac{X_N}{\sqrt{N}}\right)\right)^{-1},\end{aligned}$$

and

$$\displaystyle \begin{aligned}\tilde G^{\alpha,C}(z) = \left( z I_{N+n} - \mathbb{M}_{N+n}\left(\sigma \frac{X^{\alpha,C}}{\sqrt{N}}\right)\right)^{-1} .\end{aligned}$$

Denote by \(\mathbb {U}(n+N)\) the set of unitary (n + N) × (n + N) matrices. We first establish the following approximation result.

Lemma 4.1

There exist some positive deterministic functions u and v on [0, +∞[ such that \(\lim_{C\rightarrow +\infty} u(C) = 0\) and \(\lim_{\alpha\rightarrow 0} v(\alpha) = 0\), and a polynomial P with nonnegative coefficients, such that for any α > 0 and \(C > 8\theta^*\), we have that

  • almost surely, for all large N,

    $$\displaystyle \begin{aligned} &\displaystyle{ \sup_{U\in \mathbb{U}(n+N)}\sup_{(i,j)\in \{1,\ldots,n+N\}^2}\sup_{z\in \mathbb{C}\setminus \mathbb{R} } |\Im z |{}^{2} \left| (U^*\tilde{G}^{\alpha,C}(z)U)_{ij}- (U^*\tilde{{G}}(z)U)_{ij}\right|} \\ & \quad \leq u(C)+v(\alpha),{} \end{aligned} $$
    (4.19)
  • for all large N,

    $$\displaystyle \begin{aligned} & \displaystyle{ \sup_{U\in \mathbb{U}(n+N)} \sup_{(i,j)\in \{1,\ldots,n+N\}^2} \sup_{z\in \mathbb{C}\setminus \mathbb{R} } \frac{1}{ P(|\Im z |{}^{-1}) }} \\ &\quad \displaystyle{\times\left| \mathbb{E} \left( (U^*\tilde{G}^{\alpha,C}(z)U)_{ij}- (U^*\tilde{{G}}(z)U)_{ij}\right)\right|} \\ & \qquad \leq u(C)+v(\alpha)+ o_{C}(1). {} \end{aligned} $$
    (4.20)

Proof

Note that

$$\displaystyle \begin{aligned} \begin{array}{rcl} {X}_{ij}^C-Y_{ij}^C&\displaystyle =&\displaystyle \Re X_{ij}^C \left( 1-\sqrt{2} \mathbb{E} \left( \vert \Re Y_{ij}^C\vert^2 \right)^{1/2}\right) +\mathrm{i} \Im X_{ij}^C \left( 1-\sqrt{2} \mathbb{E} \left( \vert \Im Y_{ij}^C\vert^2 \right)^{1/2}\right) \\&\displaystyle =&\displaystyle \Re X_{ij}^C\frac{1 - 2\mathbb{E} \left( \vert \Re Y_{ij}^C\vert^2 \right) }{1 + \sqrt{2}\mathbb{E} \left( \vert \Re Y_{ij}^C\vert^2 \right)^{1/2}}+ \mathrm{i} \Im X_{ij}^C\frac{1 - 2\mathbb{E} \left( \vert \Im Y_{ij}^C\vert^2 \right) }{1 + \sqrt{2}\mathbb{E} \left( \vert \Im Y_{ij}^C\vert^2 \right)^{1/2}}.\end{array} \end{aligned} $$

Then,

$$\displaystyle \begin{aligned}\left\{\sup_{(i,j)\in \mathbb{N}^2}\mathbb{E} \left( \vert X^C_{ij}-Y_{ij}^C\vert^2 \right)\right\}^{1/2} \leq \frac{4\theta^*}{C}, \mbox{ and }\sup_{(i,j)\in \mathbb{N}^2}\mathbb{E} \left( \vert X_{ij}^C-Y_{ij}^C\vert^3 \right) <\infty.\end{aligned}$$

It is straightforward to see, using Lemma 4.17, that for any unitary (n + N) × (n + N) matrix U,

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \left| (U^*\tilde{G}^{\alpha,C}(z)U)_{ij}- (U^*\tilde{{G}}(z)U)_{ij}\right| \\ &\displaystyle &\displaystyle \quad \leq \frac{\sigma}{\vert \Im z \vert^2} \left\| \frac{X_N-{X}^{\alpha,C}}{\sqrt{N}} \right\| \\ &\displaystyle &\displaystyle \quad \leq \frac{\sigma}{\vert \Im z \vert^2} \left\{ \left\| \frac{X_N-Y^C}{\sqrt{N}} \right\| + \left\| \frac{{X}^C-Y^C}{\sqrt{N}} \right\|\right. \\&\displaystyle &\displaystyle \qquad \left. + \left(1- \frac{1}{\sqrt{1+\alpha^2}}\right) \left\| \frac{ X^C}{\sqrt{N}} \right\| +\alpha \left\| \frac{\mathbb{G}}{\sqrt{N}} \right\| \right\} {}. \end{array} \end{aligned} $$
(4.21)

From Bai-Yin's theorem (Theorem 5.8 in [2]), we have almost surely

$$\displaystyle \begin{aligned}\limsup_{N\rightarrow +\infty}\left\| \frac{\mathbb{G}}{\sqrt{N}} \right\|\leq 2.\end{aligned}$$

Applying Remark 4.3 to the (n + N) × (n + N) matrix \(\tilde B= \left ( \begin {array}{cc} 0_{n\times n} & B\\ B^* & 0_{N\times N} \end {array} \right )\) for B ∈ {X_N − Y^C, X^C − Y^C, X^C} (see also Appendix B of [14]), we have that almost surely

$$\displaystyle \begin{aligned}\limsup_{N\rightarrow +\infty}\left\| \frac{{X^C}}{\sqrt{N}} \right\|\leq 2 \sqrt{2} ,~\limsup_{N\rightarrow +\infty}\left\| \frac{{X}^C-Y^C}{\sqrt{N}} \right\|\leq \frac{8\sqrt{2}\theta^*}{C},\;\end{aligned}$$

and

$$\displaystyle \begin{aligned}\limsup_{N\rightarrow +\infty}\left\| \frac{{X_N}-Y^C}{\sqrt{N}} \right\|\leq 4 \sqrt{ \frac{\theta^*}{C}}. \end{aligned}$$

Then, (4.19) readily follows.

Let us introduce

$$\displaystyle \begin{aligned}\varOmega_{N,C}=\left\{ \left\| \frac{\mathbb{G}}{\sqrt{N}} \right\| \leq 4, \left\| \frac{X^C}{\sqrt{N}} \right\| \leq 4, \left\| \frac{X_N-Y^C}{\sqrt{N}} \right\| \leq 8 \sqrt{ \frac{\theta^*}{C}}, \left\| \frac{{X}^C-Y^C}{\sqrt{N}} \right\|\leq \frac{16\theta^*}{C} \right\}. \end{aligned}$$

Using (4.21), we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \left| \mathbb{E} \left( (U^*\tilde{{G}}^{\alpha,C}(z)U)_{ij}-(U^*\tilde{G}(z)U)_{ij}\right)\right| \\ &\displaystyle &\displaystyle \quad \leq \frac{4\sigma }{\vert \Im z \vert^2} \left[ 2 \sqrt{ \frac{\theta^*}{C}}+ \frac{4\theta^*}{C}+ \alpha +\left( 1-\frac{1}{\sqrt{1+\alpha^2}}\right) \right] \\ &\displaystyle &\displaystyle \qquad + \frac{2}{\vert \Im z \vert} \mathbb{P}(\varOmega_{N,C}^c). \end{array} \end{aligned} $$

Thus (4.20) follows.

Now, Lemmas 4.18, 4.1 and 4.19 readily yield the following approximation lemma.

Lemma 4.2

Let h be in \(\mathbb {C}^\infty (\mathbb {R}, \mathbb {R})\) with compact support and \(\tilde \varGamma _N\) be a (n + N) × (n + N) Hermitian matrix such that

$$\displaystyle \begin{aligned} \sup_{n,N} \Vert \tilde \varGamma_N \Vert<\infty \mathit{\text{ and }} \sup_{n,N} \mathrm{rank} (\tilde \varGamma_N) <\infty.\end{aligned} $$
(4.22)

Then, there exist some deterministic functions u and v on [0, +∞[, with \(\lim_{C\rightarrow +\infty} u(C) = 0\) and \(\lim_{\alpha\rightarrow 0} v(\alpha) = 0\), such that for all C > 0, α > 0, we have almost surely for all large N,

$$\displaystyle \begin{aligned} \left|\mathrm{Tr} \left[ h\left(\mathbb{M}_{N+n}\left(\sigma\frac{X^{\alpha,C}}{\sqrt{N}}\right)\right) \tilde \varGamma_N\right] - \mathrm{Tr} \left[ h\left(\mathbb{M}_{N+n}\left(\sigma\frac{X_N}{\sqrt{N}}\right)\right) \tilde \varGamma_N\right]\right| \leq a^{(1)}_{C,\alpha},\end{aligned} $$
(4.23)

and for all large N,

$$\displaystyle \begin{aligned} \left|\mathbb{E}\mathrm{Tr} \left[ h\left(\mathbb{M}_{N+n}\left(\sigma\frac{X^{\alpha,C}}{\sqrt{N}}\right)\right) \tilde \varGamma_N\right] - \mathbb{E}\mathrm{Tr} \left[ h\left(\mathbb{M}_{N+n}\left(\sigma\frac{X_N}{\sqrt{N}}\right)\right) \tilde \varGamma_N\right]\right| \leq a^{(2)}_{C,\alpha,N}, \end{aligned} $$
(4.24)

where

$$\displaystyle \begin{aligned}a^{(1)}_{C,\alpha}= u(C)+v(\alpha),\; a^{(2)}_{C,\alpha,N}= u(C)+v(\alpha) + o_{C}(1).\end{aligned}$$

Note that the distributions of the independent random variables \(\Re (X_{ij}^{\alpha ,C})\), \(\Im (X_{ij}^{\alpha ,C})\) are all a convolution of a centred Gaussian distribution with some variance \(v_\alpha\) with some law with bounded support in a ball of some radius \(R_{C,\alpha}\); thus, according to Lemma 4.20, they satisfy a Poincaré inequality with some common constant \(C_{PI}(C, \alpha)\), and therefore so does their product measure (see Appendix 2). An important consequence of the Poincaré inequality is the following concentration result.

Lemma 4.3

(Lemma 4.4.3 and Exercise 4.4.5 in [1], or Chapter 3 in [24].) There exist \(K_1 > 0\) and \(K_2 > 0\) such that for any probability measure \(\mathbb {P}\) on \(\mathbb {R}^M\) which satisfies a Poincaré inequality with constant \(C_{PI}\), and for any Lipschitz function F on \(\mathbb {R}^M\) with Lipschitz constant \(|F|_{Lip}\), we have

$$\displaystyle \begin{aligned}\forall \epsilon> 0, \, \mathbb{P}\left( \vert F-\mathbb{E}_{\mathbb{P}}(F) \vert > \epsilon \right) \leq K_1 \exp\left(-\frac{\epsilon}{K_2 \sqrt{C_{PI}} \vert F \vert_{Lip}}\right).\end{aligned}$$

In order to apply Lemma 4.3, we need the following preliminary lemmas.

Lemma 4.4 (See Lemma 8.2 [10])

Let f be a real \(C_{\mathbb {L}}\) -Lipschitz function on \(\mathbb {R}\) . Then its extension to the N × N Hermitian matrices is \(C_{\mathbb {L}}\) -Lipschitz with respect to the Hilbert-Schmidt norm.

Lemma 4.5

Let \(\tilde \varGamma _N \) be a (n + N) × (n + N) matrix and h be a real Lipschitz function on \(\mathbb {R}\) . For any n × N matrix B,

$$\displaystyle \begin{aligned} \left\{\left(\Re B(i,j),~ \Im B(i,j)\right)_{1\leq i \leq n, 1 \leq j \leq N}\right\} \mapsto \mathrm{Tr} \left[ h\left(\mathbb{M}_{N+n}\left(B\right)\right) \tilde \varGamma_N\right] \end{aligned}$$

is Lipschitz with constant bounded by \(\sqrt {2} \left \| \tilde \varGamma _N \right \|{ }_2 \Vert h \Vert _{Lip}\).

Proof

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \left| \mathrm{Tr} \left[h (\mathbb{M}_{N+n}(B)) \tilde \varGamma_N\right]- \mathrm{Tr} \left[ h (\mathbb{M}_{N+n}(B^{\prime}))\tilde \varGamma_N\right]\right| \\ &\displaystyle &\displaystyle \quad \leq \left\| \tilde \varGamma_N \right\|{}_2 \left\| h (\mathbb{M}_{N+n}(B))- h (\mathbb{M}_{N+n}(B^{\prime}))\right\|{}_2 \\ &\displaystyle &\displaystyle \quad \leq \left\| \tilde \varGamma_N \right\|{}_2 \left\|h \right\|{}_{Lip} \left\| \mathbb{M}_{N+n}(B)-\mathbb{M}_{N+n}(B^{\prime}) \right\|{}_2{} \end{array} \end{aligned} $$
(4.25)

where we used Lemma 4.4 in the last line. Now,

$$\displaystyle \begin{aligned}\left\| \mathbb{M}_{N+n}(B)-\mathbb{M}_{N+n}(B^{\prime}) \right\|{}^2_2= 2 \left\| B-B^{\prime}\right\|{}^2_2.{} \end{aligned} $$
(4.26)

Lemma 4.5 readily follows from (4.25) and (4.26).

Lemma 4.6

Let \(\tilde \varGamma _N \) be a (n + N) × (n + N) matrix such that \( \sup _{N,n} \left \| \tilde \varGamma _N \right \|{ }_2 \leq K\) . Let h be a real Lipschitz function on \(\mathbb {R}\). Then \(F_N=\mathrm {Tr} \left [ h\left ( \mathbb {M}_{N+n}\left ( \sigma \frac {X^{\alpha ,C}}{\sqrt {N}}\right ) \right ) \tilde \varGamma _N\right ]\) satisfies the following concentration inequality:

$$\displaystyle \begin{aligned}\forall \epsilon> 0, \, \mathbb{P}\left( \vert F_N-\mathbb{E}(F_N) \vert > \epsilon \right) \leq K_1 \exp\left(-\frac{\epsilon \sqrt{ N}}{K_2(\alpha,C) K \Vert h \Vert_{Lip}}\right),\end{aligned}$$

for some positive real numbers K 1 and K 2(α, C).

Proof

Lemma 4.6 follows from Lemmas 4.5 and 4.3 and basic facts on Poincaré inequality recalled at the end of Appendix 2.

By the Borel-Cantelli lemma, we readily deduce from the above lemma the following.

Lemma 4.7

Let \(\tilde \varGamma _N \) be a (n + N) × (n + N) matrix such that \( \sup _{N,n} \left \| \tilde \varGamma _N \right \|{ }_2 \leq K\) . Let h be a real \(\mathbb {C}^1\) - function with compact support on \(\mathbb {R}\).

$$\displaystyle \begin{aligned} &\mathrm{Tr} \left[ h\left( \mathbb{M}_{N+n}\left( \sigma \frac{X^{\alpha,C}}{\sqrt{N}}\right) \right)\tilde \varGamma_N\right]- \mathbb{E}\left[ \mathrm{Tr} \left[ h\left( \mathbb{M}_{N+n}\left( \sigma \frac{X^{\alpha,C}}{\sqrt{N}}\right) \right) \tilde \varGamma_N\right]\right] \\ & \quad \stackrel{a.s}{\longrightarrow}_{N\rightarrow +\infty}0.{} \end{aligned} $$
(4.27)

Now, we will establish a comparison result with the Gaussian case for the mean values by using the following lemma (which is an extension of Lemma 4.10 below to the non-Gaussian case), whose use in Random Matrix Theory was initiated by Khorunzhy et al. [22].

Lemma 4.8

Let ξ be a real-valued random variable such that \(\mathbb {E}(\vert \xi \vert ^{p+2}) < \infty \) . Let ϕ be a function from \(\mathbb {R}\) to \(\mathbb {C}\) whose first p + 1 derivatives are continuous and bounded. Then,

$$\displaystyle \begin{aligned} \mathbb{E} (\xi \phi (\xi )) = \sum_{a=0}^p \frac{\kappa _{a+1}}{a!}\mathbb{E} (\phi ^{(a)}(\xi )) + \epsilon , \end{aligned} $$
(4.28)

where the \(\kappa_a\)'s are the cumulants of ξ, \(\vert \epsilon \vert \leq K \sup _t \vert \phi ^{(p+1)}(t)\vert \mathbb {E} (\vert \xi \vert ^{p+2})\) , and K only depends on p.
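For instance, if ξ is a standard real Gaussian variable, then \(\kappa_1 = 0\), \(\kappa_2 = 1\) and \(\kappa_a = 0\) for every a ≥ 3, so that (4.28) reduces, up to the error term, to \(\mathbb{E}(\xi \phi(\xi))=\mathbb{E}(\phi'(\xi))\), which is the Gaussian integration by parts formula of Lemma 4.10 below.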

Lemma 4.9

Let \(\mathbb {G}_N = [\mathbb {G}_{ij}]_{1\leq i\leq n, 1\leq j\leq N}\) be an n × N random matrix with i.i.d. complex N(0, 1) Gaussian entries. Define

$$\displaystyle \begin{aligned}\tilde G^{{ \mathbb{G}}}(z)= \left( zI_{N+n}- \mathbb{M}_{N+n}\left(\sigma \frac{\mathbb{G}_N}{\sqrt{N}}\right) \right)^{-1}\end{aligned}$$

for any \(z\in \mathbb {C}\setminus \mathbb {R}.\) There exists a polynomial P with nonnegative coefficients such that for all large N, for any (i, j) ∈ {1, …, n + N}², for any \(z\in \mathbb {C}\setminus \mathbb {R}\) , for any unitary (n + N) × (n + N) matrix U,

$$\displaystyle \begin{aligned} \left| \mathbb{E} \left[(U^*\tilde G^{{ \mathbb{G}}}(z)U)_{ij}\right]- \mathbb{E}\left[(U^*\tilde G(z)U)_{ij}\right]\right| \leq \frac{1}{\sqrt{N}}P(\left|\Im z \right|{}^{-1}).\end{aligned} $$
(4.29)

Moreover, for any (N + n) × (N + n) matrix \(\tilde \varGamma _N\) such that

$$\displaystyle \begin{aligned} \sup_{n,N} \Vert \tilde \varGamma_N \Vert<\infty \mathit{\text{ and }} \sup_{n,N} \mathrm{rank} (\tilde \varGamma_N) <\infty,\end{aligned} $$
(4.30)

and any function h in \(\mathbb {C}^\infty (\mathbb {R}, \mathbb {R})\) with compact support, there exists some constant K > 0 such that, for any large N,

$$\displaystyle \begin{aligned} & \left| \mathbb{E}\left[ \mathrm{Tr} \left[ h\left( \mathbb{M}_{N+n}\left( \sigma \frac{X_N}{\sqrt{N}}\right) \right)\tilde \varGamma_N\right]\right]- \mathbb{E}\left[ \mathrm{Tr} \left[ h\left( \mathbb{M}_{N+n}\left( \sigma \frac{\mathbb{G}_N}{\sqrt{N}}\right) \right)\tilde \varGamma_N\right]\right] \right| \\ & \quad \leq \frac{K}{\sqrt{N}}.{} \end{aligned} $$
(4.31)

Proof

We follow the approach of Chapters 18 and 19 of [26], which consists in introducing the interpolation matrix \(X_N(\alpha )= \cos \alpha \, X_N + \sin \alpha \, \mathbb {G}_N\) for any α in \([0;\frac {\pi }{2}]\) and the corresponding resolvent matrix \(\tilde G(\alpha ,z)= \left ( zI_{N+n}- \mathbb {M}_{N+n}\left (\sigma \frac {X_N(\alpha )}{\sqrt {N}}\right ) \right )^{-1}\) for any \(z\in \mathbb {C}\setminus \mathbb {R}.\) We have, for any (s, t) ∈ {1, …, n + N}²,

$$\displaystyle \begin{aligned}\mathbb{E}\tilde G^{{ \mathbb{G}}}_{st}(z)- \mathbb{E} \tilde G_{st}(z)=\int_0^{\frac{\pi}{2}} \mathbb{E} \left( \frac{ \partial }{\partial \alpha} \tilde G_{st}(\alpha,z)\right) d\alpha\end{aligned}$$

with

$$\displaystyle \begin{aligned} \begin{array}{rcl} \frac{ \partial }{\partial \alpha} \tilde G_{st}(\alpha,z)&\displaystyle = &\displaystyle \frac{\sigma}{2\sqrt{N}}\sum_{l=1}^n \sum_{k=n+1}^{n+N }\left\{ \left[ \tilde G_{sl} (\alpha,z) \tilde G_{kt} (\alpha,z)+ \tilde G_{sk}(\alpha,z) \tilde G_{l t}(\alpha,z) \right]\right.\\ &\displaystyle &\displaystyle \times \left[ -\sin \alpha \Re X_{l(k-n)}+\cos \alpha \Re \mathbb{G}_{l(k-n)}\right] \\ &\displaystyle &\displaystyle \left.+i \left[ \tilde G_{sl}(\alpha,z) \tilde G_{kt} (\alpha,z)- \tilde G_{sk} (\alpha,z)\tilde G_{l t}(\alpha,z) \right]\right.\\ &\displaystyle &\displaystyle \left.\times \left[-\sin \alpha \Im X_{l(k-n)} +\cos \alpha \Im \mathbb{G}_{l(k-n)} \right]\right\}. \end{array} \end{aligned} $$

Now, for any l = 1, …, n and k = n + 1, …, n + N, using Lemma 4.8 for p = 1 and for each random variable ξ in the set \(\left \{\Re X_{l(k-n)}, \Re \mathbb {G}_{l(k-n)},\Im X_{l(k-n)}, \Im \mathbb {G}_{l(k-n)} \right \} \), and for each ϕ in the set

$$\displaystyle \begin{aligned}\left\{ (U^*\tilde G (\alpha ,z))_{ip}(\tilde G(\alpha,z)U)_{q j} ; (p,q)=(l,k) \mbox{ or }(k,l), (i,j)\in \{1,\ldots,n+N\}^2\right\},\end{aligned}$$

one can easily see that there exists some constant K > 0 such that

$$\displaystyle \begin{aligned}\left| \mathbb{E} (U^* \tilde G^{{ \mathbb{G}}}(z)U)_{ij}- \mathbb{E}(U^*\tilde G(z)U)_{ij} \right| \leq \frac{ K}{N^{3/2}}\sup_{Y \in \mathbb{H}_{n+N}(\mathbb{C})} \sup_{V\in \mathbb{U}(n+N)} S_V(Y)\end{aligned}$$

where \(\mathbb {H}_{n+N}(\mathbb {C})\) denotes the set of (n + N) × (n + N) Hermitian matrices and \(S_V(Y)\) is a sum of a finite number, independent of N and n, of terms of the form

$$\displaystyle \begin{aligned}\sum_{l=1}^n \sum_{k=n+1}^{n+N } \left|\left(U^*R(Y)\right)_{ip_1}\left(R(Y)\right)_{p_2p_3}\left(R(Y)\right)_{p_4p_5}\left(R(Y)U\right)_{p_6j} \right|\end{aligned} $$
(4.32)

with \(R(Y)=\left (zI_{N+n}- Y\right )^{-1}\), where \(\{p_1, \ldots, p_6\}\) contains exactly three indices equal to k and three equal to l.

When \((p_1, p_6) = (k, l)\) or (l, k), then, using Lemma 4.17,

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \sum_{l=1}^n \sum_{k=n+1}^{n+N }\left| \left(U^*R(Y)\right)_{ip_1}\left(R(Y)\right)_{p_2p_3}\left(R(Y)\right)_{p_4p_5}\left(R(Y)U\right)_{p_6j}\right| \\ &\displaystyle &\displaystyle \quad \leq \frac{1}{\vert \Im z \vert^2} \sum_{k,l =1}^{n+N} \left|\left(U^*R(Y)\right)_{il}\left(R(Y)U\right)_{kj}\right| \\ &\displaystyle &\displaystyle \quad \leq \frac{(N+n)}{\vert \Im z \vert^2} \left(\sum_{l =1}^{n+N} \left|\left(U^*R(Y)\right)_{il}\right|{}^2 \right)^{1/2}\left(\sum_{k =1}^{n+N} \left|\left(R(Y)U\right)_{kj}\right|{}^2 \right)^{1/2}\\ &\displaystyle &\displaystyle \quad =\frac{(N+n)}{\vert \Im z \vert^2} \left( \left(U^*R(Y)R(Y)^*U\right)_{ii} \right)^{1/2}\left( \left(U^*R(Y)^*R(Y)U\right)_{jj} \right)^{1/2}\\ &\displaystyle &\displaystyle \quad \leq \frac{(N+n)}{\vert \Im z \vert^4} \end{array} \end{aligned} $$

When \(p_1 = p_6 = k\) or l, then, using Lemma 4.17,

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \sum_{l=1}^n \sum_{k=n+1}^{n+N }\left| \left(U^*R(Y)\right)_{ip_1}\left(R(Y)\right)_{p_2p_3}\left(R(Y)\right)_{p_4p_5}\left(R(Y)U\right)_{p_6j}\right|\\ &\displaystyle &\displaystyle \quad \leq \frac{N+n}{\vert \Im z \vert^2} \sum_{l =1}^{n+N} \left|\left(U^*R(Y)\right)_{il}\left(R(Y)U\right)_{lj}\right| \\ &\displaystyle &\displaystyle \quad \leq \frac{(N+n)}{\vert \Im z \vert^2} \left(\sum_{l =1}^{n+N} \left|\left(U^*R(Y)\right)_{il}\right|{}^2 \right)^{1/2}\left(\sum_{l =1}^{n+N} \left|\left(R(Y)U\right)_{lj}\right|{}^2 \right)^{1/2}\\ &\displaystyle &\displaystyle \quad = \frac{(N+n)}{\vert \Im z \vert^2} \left( \left(U^*R(Y)R(Y)^*U\right)_{ii} \right)^{1/2}\left( \left(U^*R(Y)^*R(Y)U\right)_{jj} \right)^{1/2}\\ &\displaystyle &\displaystyle \quad \leq \frac{(N+n)}{\vert \Im z \vert^4} \end{array} \end{aligned} $$

(4.29) readily follows.

Then by Lemma 4.19, there exists some constant K > 0 such that, for any N and n, for any (i, j) ∈ {1, …, n + N}², any unitary (n + N) × (n + N) matrix U,

$$\displaystyle \begin{aligned}\limsup_{y \rightarrow 0^+} \left| \int \left[ \mathbb{E}(U^*\tilde G(t+\mathrm{i} y)U)_{ij}- \mathbb{E}(U^* {\tilde G^{{ \mathbb{G}}}}(t+\mathrm{i} y)U)_{ij}\right] h(t) dt \right| \leq \frac{ K}{\sqrt{N}}.\end{aligned} $$
(4.33)

Thus, using (4.97) and (4.30), we can deduce (4.31) from (4.33).

The above comparison lemmas allow us to establish the following convergence result.

Proposition 4.4

Let h be a function in \(\mathbb {C}^\infty (\mathbb {R}, \mathbb {R})\) with compact support and let \(\tilde \varGamma _N \) be a (n + N) × (n + N) matrix such that \( \sup _{n,N} \mathrm {rank} (\tilde \varGamma _N) <\infty \) and \( \sup _{n,N} \Vert \tilde \varGamma _N \Vert <\infty \) . Then we have that almost surely

$$\displaystyle \begin{aligned} &\mathrm{Tr} \left[ h\left(\mathbb{M}_{N+n}\left(\sigma \frac{X_N}{\sqrt{N}}\right)\right) \tilde \varGamma_N\right]- \mathbb{E}\left[\mathrm{Tr} \left[ h\left(\mathbb{M}_{N+n}\left(\sigma \frac{\mathbb{G}_N}{\sqrt{N}}\right)\right) \tilde \varGamma_N\right] \right] \\ & \quad {\longrightarrow}_{N\rightarrow +\infty}0. \end{aligned} $$
(4.34)

Proof

Lemmas 4.2, 4.7 and 4.9 readily yield that there exist some positive deterministic functions u and v on [0, +∞[ with \(\lim_{C\rightarrow +\infty} u(C) = 0\) and \(\lim_{\alpha\rightarrow 0} v(\alpha) = 0\), such that for any C > 0 and any α > 0, almost surely

$$\displaystyle \begin{aligned} &\limsup_{N \rightarrow +\infty} \left| \mathrm{Tr} \left[ h\left(\mathbb{M}_{N+n}\left(\sigma \frac{X_N}{\sqrt{N}}\right)\right) \tilde \varGamma_N\right]{-}\, \mathbb{E}\left[\mathrm{Tr} \left[ h\left(\mathbb{M}_{N+n}\left(\sigma \frac{\mathbb{G}_N}{\sqrt{N}}\right)\right) \tilde \varGamma_N\right] \right] \right| \\ & \quad \leq u(C) +v(\alpha). \end{aligned} $$

The result follows by letting α go to zero and C go to infinity.

Now, note that, for any n × N matrix B, for any continuous real function h on \(\mathbb {R}\), and any n × n Hermitian matrix \(\varGamma_N\), we have

$$\displaystyle \begin{aligned}Tr \left(h\left((B+A_N)(B+A_N)^*\right) \varGamma_N\right)=Tr \left[\tilde h\left(\mathbb{M}_{N+n}\left(B\right)\right) \tilde \varGamma_N\right]\end{aligned}$$

where \(\tilde h(x)=h(x^2)\) and \( \tilde \varGamma _N= \begin {pmatrix} \varGamma _N & (0)\\ (0) & (0) \end {pmatrix}\). Thus, Proposition 4.4 readily yields Proposition 4.2.
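The following small numerical sketch (an aside; random instances, illustrative sizes) checks this hermitization identity:

```python
import numpy as np

rng = np.random.default_rng(4)
n, N = 3, 5
B = rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))
A = rng.standard_normal((n, N))
Gamma = np.diag([1.0, 0.0, 0.0])                 # a rank-one Hermitian Gamma_N

def mat_fun(H, fun):
    """fun(H) for a Hermitian matrix H via its spectral decomposition."""
    w, v = np.linalg.eigh(H)
    return (v * fun(w)) @ v.conj().T

h = np.cos
lhs = np.trace(mat_fun((B + A) @ (B + A).conj().T, h) @ Gamma)

M_tilde = np.block([[np.zeros((n, n)), B + A],
                    [(B + A).conj().T, np.zeros((N, N))]])
Gamma_tilde = np.block([[Gamma, np.zeros((n, N))],
                        [np.zeros((N, n)), np.zeros((N, N))]])
rhs = np.trace(mat_fun(M_tilde, lambda x: h(x**2)) @ Gamma_tilde)
print(lhs.real, rhs.real)                        # the two traces agree
```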

4.4 Proof of Proposition 4.3

The aim of this section is to prove Proposition 4.3, which deals with Gaussian random variables. Therefore we assume here that \(A_N\) is as in (4.14) and set \(\gamma _q(N)=(A_N A_N^*)_{qq}\). In this section, we let X stand for \(\mathbb {G}_N\) and A stand for \(A_N\); G denotes the resolvent of \(M_N = \varSigma \varSigma^*\), where \(\varSigma =\sigma \frac {\mathbb {G}_N}{\sqrt {N}}+A_N\), and \(g_N\) denotes the mean of the Stieltjes transform of the spectral measure of \(M_N\), that is

$$\displaystyle \begin{aligned}g_N(z) = \mathbb{E}\left(\frac{1}{n} Tr G(z)\right), \, z \in \mathbb{C}\setminus \mathbb{R}.\end{aligned}$$

4.4.1 Matricial Master Equation

To obtain Eq. (4.35) below, we will use many ideas from [17]. The following Gaussian integration by part formula is the key tool in our approach.

Lemma 4.10 (Lemma 2.4.5 [1])

Let ξ be a real centered Gaussian random variable with variance 1. Let Φ be a differentiable function such that Φ and Φ′ have polynomial growth. Then,

$$\displaystyle \begin{aligned}\mathbb{E} \left( \xi \varPhi(\xi) \right) = \mathbb{E} \left( \varPhi^{\prime}(\xi) \right).\end{aligned}$$
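As a quick Monte Carlo sanity check of this formula (illustrative only), take Φ(x) = x³, for which \(\mathbb{E}(\xi\varPhi(\xi))=\mathbb{E}(\xi^4)=3\) and \(\mathbb{E}(\varPhi'(\xi))=3\,\mathbb{E}(\xi^2)=3\):

```python
import numpy as np

xi = np.random.default_rng(3).standard_normal(10**6)
print(np.mean(xi * xi**3), np.mean(3 * xi**2))   # both close to E(xi^4) = 3
```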

Proposition 4.5

Let z be in \(\mathbb {C}\setminus \mathbb {R}\) . We have for any (p, q) in {1, …, n}²,

$$\displaystyle \begin{aligned} \mathbb{E}\left(G_{pq}\right) \left\{ z (1-\sigma^2 c_N g_N(z)) -\frac{\gamma_q(N)}{1-\sigma^2 c_N g_N(z)} -\sigma^2(1-c_N) + \frac{\sigma^2}{N} \sum_{p=1}^n \nabla_{pp}\right\} =\delta_{pq} + \nabla_{pq}, \end{aligned} $$
(4.35)

where

$$\displaystyle \begin{aligned} \nabla_{pq} = \frac{1}{1- \sigma^2 c_N g_N}\left\{ \frac{\sigma^2}{N} \frac{ \mathbb{E}\left( G_{pq}\right) }{1-\sigma^2 c_N g_N} \varDelta_3 + \varDelta_2(p,q) + \varDelta_1(p,q)\right\},\end{aligned} $$
(4.36)
$$\displaystyle \begin{aligned}\varDelta_{1}(p,q) = {\sigma^2}\mathbb{E}\left\{ \left[\frac{1}{N} Tr G - \mathbb{E}\left(\frac{1}{N} Tr G \right)\right] (G\varSigma \varSigma^*)_{pq} \right\} ,\end{aligned} $$
(4.37)
$$\displaystyle \begin{aligned}\varDelta_{2}(p,q) = \frac{\sigma^2}{N}\mathbb{E}\left\{ Tr(GA\varSigma^* ) \left[G_{pq} - \mathbb{E}\left(G_{pq}\right) \right] \right\} ,\end{aligned} $$
(4.38)
$$\displaystyle \begin{aligned}\varDelta_{3} = {\sigma^2}\mathbb{E}\left\{ \left[\frac{1}{N} Tr G - \mathbb{E}\left(\frac{1}{N} Tr G \right)\right] Tr (\varSigma^*GA ) \right\} .\end{aligned} $$
(4.39)

Proof

Using Lemma 4.10 with \(\xi =\Re X_{ij} \) or ξ = ℑX ij and \(\varPhi = G_{pi} \overline {\varSigma _{qj}}\), we obtain that for any j, q, p,

$$\displaystyle \begin{aligned} \begin{array}{rcl} \mathbb{E}\left[ \left( G \frac{\sigma X}{\sqrt{N}}\right)_{pj} \overline{\varSigma_{qj}} \right] &\displaystyle =&\displaystyle \sum_{i=1}^n \mathbb{E}\left[ G_{pi} \frac{\sigma X_{ij} }{\sqrt{N}}\overline{\varSigma_{qj}} \right] \end{array} \end{aligned} $$
(4.40)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle =&\displaystyle \frac{\sigma^2}{N} \sum_{i=1}^n \mathbb{E}\left[ \left( G \varSigma\right)_{pj} G_{ii} \overline{\varSigma_{qj}} \right] + \frac{\sigma^2}{N} \mathbb{E}(G_{pq}) \end{array} \end{aligned} $$
(4.41)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle =&\displaystyle \frac{\sigma^2}{N} \mathbb{E}\left[ \left( Tr G\right) \left(G \varSigma\right)_{pj} \overline{\varSigma_{qj}} \right] + \frac{\sigma^2}{N} \mathbb{E}(G_{pq}). {} \end{array} \end{aligned} $$
(4.42)

On the other hand, we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \mathbb{E}\left[ \left( G A \right)_{pj} \overline{\varSigma_{qj}} \right] &\displaystyle =&\displaystyle \mathbb{E}\left[ \left( G A \right)_{pj} \overline{A_{qj}} \right] +\sum_{i=1}^n \mathbb{E}\left[ G_{pi} A_{ij} \frac{ \sigma \overline{X_{qj}}}{\sqrt{N}} \right] \end{array} \end{aligned} $$
(4.43)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle =&\displaystyle \mathbb{E}\left[ \left( G A \right)_{pj} \overline{A_{qj}} \right] +\frac{\sigma^2}{N} \mathbb{E}\left[ G_{pq} \left( \varSigma^* G A \right)_{jj} \right] {} \end{array} \end{aligned} $$
(4.44)

where we applied Lemma 4.10 with \(\xi =\Re X_{qj} \) or \(\xi = \Im X_{qj}\) and \(\varPhi = G_{pi}A_{ij}\). Summing (4.42) and (4.44) yields

$$\displaystyle \begin{aligned} \begin{array}{rcl} \mathbb{E}\left[ \left( G \varSigma \right)_{pj} \overline{\varSigma_{qj}} \right] &\displaystyle =&\displaystyle \frac{\sigma^2}{N} \mathbb{E}(G_{pq})+ \frac{\sigma^2}{N} \mathbb{E}\left[ \left(Tr G\right) \left(G \varSigma\right)_{pj} \overline{\varSigma_{qj}} \right] \end{array} \end{aligned} $$
(4.45)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle +\frac{\sigma^2}{N} \mathbb{E}\left[ G_{pq} \left( \varSigma^* G A \right)_{jj} \right] + \mathbb{E}\left[ \left( G A \right)_{pj} \overline{A_{qj}} \right]. {}\end{array} \end{aligned} $$
(4.46)

Define

$$\displaystyle \begin{aligned}\varDelta_1(j)= \frac{\sigma^2}{N} \mathbb{E}\left[ \left(Tr G\right) \left(G \varSigma\right)_{pj} \overline{\varSigma_{qj}} \right]- \frac{\sigma^2}{N} \mathbb{E}\left[ Tr G \right] \mathbb{E}\left[ \left(G \varSigma\right)_{pj} \overline{\varSigma_{qj}} \right].\end{aligned}$$

From (4.46), we can deduce that

$$\displaystyle \begin{aligned} \begin{array}{rcl} \mathbb{E}\left[ \left( G \varSigma \right)_{pj} \overline{\varSigma_{qj}} \right] &\displaystyle =&\displaystyle \frac{1}{1-\sigma^2 c_N g_N} \left\{ \frac{\sigma^2}{N} \mathbb{E}(G_{pq})+ \frac{\sigma^2}{N} \mathbb{E}\left[ G_{pq} \left( \varSigma^* G A \right)_{jj} \right] \right.\\ &\displaystyle &\displaystyle \left.+ \mathbb{E}\left[ \left( G A \right)_{pj} \overline{A_{qj}} \right] + \varDelta_1(j)\right\}.\end{array} \end{aligned} $$

Then, summing over j, we obtain that

$$\displaystyle \begin{aligned} \mathbb{E}\left[ \left( G \varSigma \varSigma^*\right)_{pq} \right] &=\frac{1}{1-\sigma^2 c_N g_N} \left\{ {\sigma^2} \mathbb{E}(G_{pq})+ \frac{\sigma^2}{N} \mathbb{E}\left[ G_{pq} Tr\left( \varSigma^* G A \right) \right] \right. \\ & \quad \left. + \mathbb{E}\left[ \left( G A A^*\right)_{pq} \right] + \varDelta_1 (p,q)\right\},{} \end{aligned} $$
(4.47)

where \(\varDelta_1 (p,q)\) is defined by (4.37). Applying Lemma 4.10 with \(\xi =\Re X_{ij} \) or \(\Im X_{ij}\) and \(\varPhi = (GA)_{ij}\), we obtain that

$$\displaystyle \begin{aligned}\mathbb{E} \left[Tr\left( \frac{\sigma X^*}{\sqrt{N}} G A \right) \right] =\frac{\sigma^2}{N} \mathbb{E}\left[ Tr G ~Tr\left( \varSigma^* G A \right) \right].\end{aligned}$$

Thus,

$$\displaystyle \begin{aligned}\mathbb{E}\left[ Tr\left( \varSigma^* G A \right) \right]=\mathbb{E}\left[ Tr\left( A^* G A \right) \right]+ \sigma^2 c_N g_N \mathbb{E}\left[ Tr\left( \varSigma^* G A \right) \right] + \varDelta_3,\end{aligned}$$

where \(\varDelta_3\) is defined by (4.39), and then

$$\displaystyle \begin{aligned} \mathbb{E}\left[ Tr\left( \varSigma^* G A \right) \right]=\frac{1}{1-\sigma^2 c_Ng_N} \left\{ \mathbb{E}\left[ Tr\left( G A A^*\right) \right] + \varDelta_3 \right\}. \end{aligned} $$
(4.48)

(4.48) and (4.38) imply that

$$\displaystyle \begin{aligned}\frac{\sigma^2}{N} \mathbb{E}\left[ G_{pq}Tr\left( \varSigma^* G A \right) \right]= \frac{\sigma^2}{N}\frac{\mathbb{E}(G_{pq})}{1-\sigma^2 c_N g_N} \left\{ \mathbb{E}\left[ Tr\left( G A A^*\right) \right] + \varDelta_3 \right\} + \varDelta_2(p,q), \end{aligned} $$
(4.49)

where \(\varDelta_2(p,q)\) is defined by (4.38). We can deduce from (4.47) and (4.49) that

$$\displaystyle \begin{aligned} & \mathbb{E}\left[ \left( G \varSigma \varSigma^*\right)_{pq} \right] \\ & \quad =\frac{1}{1-\sigma^2 c_Ng_N} \left\{ {\sigma^2} \mathbb{E}(G_{pq})+ \mathbb{E}\left[ \left( G A A^*\right)_{pq} \right] \right.\\ & \qquad \left. +\frac{\sigma^2}{N} \frac{\mathbb{E}\left[ G_{pq}\right] }{1-\sigma^2 c_N g_N}\mathbb{E}\left[ Tr\left( G AA^* \right) \right]\right. \\ & \qquad \left.+\frac{\sigma^2}{N}\frac{\mathbb{E}(G_{pq})}{1-\sigma^2 c_N g_N} \varDelta_3 + \varDelta_1 (p,q)+ \varDelta_2 (p,q) \right\}. {} \end{aligned} $$
(4.50)

Using the resolvent identity and (4.50), we obtain that

$$\displaystyle \begin{aligned} \begin{array}{rcl} z\mathbb{E}\left( G_{pq} \right)&\displaystyle =&\displaystyle \frac{1}{1-\sigma^2 c_Ng_N} \left\{ {\sigma^2} \mathbb{E}(G_{pq})+ \mathbb{E}\left[ \left( G A A^*\right)_{pq} \right] \right. \\ &\displaystyle &\displaystyle \left. +\frac{\sigma^2}{N} \frac{\mathbb{E}\left[ G_{pq}\right] }{1-\sigma^2 c_Ng_N} \mathbb{E}\left[ Tr\left( G AA^* \right)\right] \right\}+ \delta_{pq}+\nabla_{pq} {}\end{array} \end{aligned} $$
(4.51)

where ∇pq is defined by (4.36). Taking p = q in (4.51), summing over p and dividing by n, we obtain that

$$\displaystyle \begin{aligned} \begin{array}{rcl} z g_N &\displaystyle =&\displaystyle \frac{\sigma^2 g_N}{1-\sigma^2 c_N g_N} + \frac{ Tr \left[ \mathbb{E} (G) AA^*\right]}{n(1-\sigma^2 c_N g_N)} \end{array} \end{aligned} $$
(4.52)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle + \frac{\sigma^2 g_N Tr \left[ \mathbb{E} (G) AA^*\right]}{N(1-\sigma^2 c_N g_N)^2} +1 + \frac{1}{n} \sum_{p=1}^n \nabla_{pp}. \end{array} \end{aligned} $$
(4.53)

It readily follows that

$$\displaystyle \begin{aligned} \frac{ Tr \left[ \mathbb{E} (G) AA^*\right]}{n(1-\sigma^2 c_Ng_N)} \left( \frac{ \sigma^2 c_N g_N}{(1-\sigma^2 c_N g_N)} +1 \right) =\left( z - \frac{ \sigma^2 }{(1-\sigma^2 c_N g_N)} \right) g_N -1 - \frac{1}{n} \sum_{p=1}^n \nabla_{pp}. \end{aligned}$$

Therefore

$$\displaystyle \begin{aligned} \frac{ Tr \left[ \mathbb{E} (G) AA^*\right]}{n(1-\sigma^2 c_N g_N)}=zg_N (1-\sigma^2 c_N g_N) - (1-\sigma^2 c_N g_N) -\sigma^2 g_N - (1-\sigma^2 c_N g_N) \frac{1}{n} \sum_{p=1}^n \nabla_{pp}. \end{aligned} $$
(4.54)

(4.54) and (4.51) yield

$$\displaystyle \begin{aligned} &\mathbb{E}(G_{pq}) \times \left\{ z (1-\sigma^2 c_N g_N) -\frac{\gamma_q}{1-\sigma^2 c_N g_N} -\sigma^2(1-c_N) + \frac{\sigma^2}{N} \sum_{p=1}^n \nabla_{pp}\right\} \\ &\quad =\delta_{pq} + \nabla_{pq}.\end{aligned} $$

Proposition 4.5 follows.

4.4.2 Variance Estimates

In this section, when we state that some quantity \(\varDelta_N(z)\), \(z \in \mathbb {C}\setminus \mathbb {R}\), is equal to \(O(\frac {1}{N^p})\), this means precisely that there exist some polynomial P with nonnegative coefficients and some positive real number l, both independent of N, such that for any \(z \in \mathbb {C}\setminus \mathbb {R}\),

$$\displaystyle \begin{aligned}\vert \varDelta _N(z)\vert \leq \frac{(\vert z\vert+1)^l P( | \Im z |{}^{-1}) }{N^p}.\end{aligned}$$

We present now the different estimates on the variance. They rely on the following Gaussian Poincaré inequality (see Appendix 2). Let \(Z_1, \ldots, Z_q\) be q real independent centered Gaussian variables with variance σ². For any \(\mathbb {C}^1\) function \(f: \mathbb {R}^q \rightarrow \mathbb {C}\) such that f and \(\mathrm{grad} f\) are in \(L^2(\mathbb {N}(0, \sigma ^2I_q))\), we have

$$\displaystyle \begin{aligned} \mathbf{V}\left\{f(Z_1,\ldots ,Z_q)\right\}\leq \sigma^2 \mathbb{E} \left(\Vert (\mathrm{grad}f) (Z_1,\ldots,Z_q)\Vert_2^2\right) , \end{aligned} $$
(4.55)

where, for any random variable a, V(a) denotes its variance \( \mathbb {E}(\vert a-\mathbb {E}(a)\vert ^2)\). Thus, \((Z_1, \ldots, Z_q)\) satisfies a Poincaré inequality with constant \(C_{PI} = \sigma^2\).
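For instance, for a linear function \(f(Z_1,\ldots,Z_q)=\sum_{i=1}^q a_i Z_i\) with real coefficients, both sides of (4.55) equal \(\sigma^2 \sum_{i=1}^q a_i^2\), so the constant in (4.55) cannot be improved.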

The following preliminary result will be useful to these estimates.

Lemma 4.11

There exists K > 0 such that for all N,

$$\displaystyle \begin{aligned}\mathbb{E} \left( \lambda_1\left( \frac{XX^*}{N} \right) \right) \leq K.\end{aligned}$$

Proof

According to Lemma 7.2 in [19], we have for any t ∈ ]0;N∕2],

$$\displaystyle \begin{aligned}\mathbb{E} \left[\mathrm{Tr} \left(\exp t \frac{XX^*}{N} \right) \right]\leq n \exp \left( (\sqrt{c_N} +1)^2 t +\frac{1}{N} (c_N +1) t^2 \right).\end{aligned}$$

By Chebyshev's inequality, we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} \exp \left(t \mathbb{E} \left( \lambda_1\left( \frac{XX^*}{N} \right) \right) \right)&\displaystyle \leq&\displaystyle \mathbb{E} \left( \exp t \lambda_1\left( \frac{XX^*}{N} \right)\right)\\ &\displaystyle \leq &\displaystyle \mathbb{E} \left[\mathrm{Tr} \left(\exp t \frac{XX^*}{N} \right) \right]\\&\displaystyle \leq&\displaystyle n \exp \left( (\sqrt{c_N} +1)^2 t +\frac{1}{N} (c_N +1) t^2 \right). \end{array} \end{aligned} $$

It follows that

$$\displaystyle \begin{aligned}\mathbb{E}\left(\lambda_1\left( \frac{XX^*}{N}\right)\right) \leq \frac{1}{t} \log n+ (\sqrt{c_N} +1)^2 + \frac{1}{N} (c_N +1) t.\end{aligned}$$

The result follows by optimizing in t: taking, for instance, \(t=\sqrt{N\log n}\) (which satisfies t ≤ N∕2 for all large N) yields a bound \((\sqrt{c_N}+1)^2 + O(\sqrt{\log n/N})\), which is bounded in N.

Lemma 4.12

There exists C > 0 such that for all large N, for all \(z \in \mathbb {C}\setminus \mathbb {R}\) ,

$$\displaystyle \begin{aligned} \mathbb{E}\left( \left| \frac{1}{n}\mathrm{Tr} G - \mathbb{E}(\frac{1}{n}\mathrm{Tr} G)\right|{}^2 \right)\leq \frac{C}{N^2 \vert \Im z \vert^4},\end{aligned} $$
(4.56)
$$\displaystyle \begin{aligned}\forall (p,q)\in \{1,\ldots,n\}^2, \;\mathbb{E}\left( \vert G_{pq} - \mathbb{E}(G_{pq})\vert^2 \right)\leq \frac{C}{N \vert \Im z \vert^4},\end{aligned} $$
(4.57)
$$\displaystyle \begin{aligned}\mathbb{E}\left( \vert \mathrm{Tr} \varSigma^*GA - \mathbb{E}(\mathrm{Tr} \varSigma^* GA)\vert^2 \right)\leq \frac{C(1+\vert z \vert)^2}{ \vert \Im z \vert^4}.\end{aligned} $$
(4.58)

Proof

Let us define \(\varPsi : \mathbb {R}^{2(n\times N)} \rightarrow { M }_{n\times N}(\mathbb {C})\) by

$$\displaystyle \begin{aligned}\varPsi:~~\{x_{ij},y_{ij},i=1,\ldots,n,j=1,\ldots,N\}\rightarrow \sum_{i=1,\ldots,n}\sum_{j=1,\ldots,N} \left( x_{ij} +\mathrm{i} y_{ij} \right) e_{ij},\end{aligned}$$

where \(e_{ij}\) stands for the n × N matrix such that for any (p, q) in {1, …, n}×{1, …, N}, \((e_{ij})_{pq} = \delta_{ip}\delta_{jq}\). Let F be a smooth complex function on \({ M }_{n\times N}(\mathbb {C})\) and define the complex function f on \(\mathbb {R}^{2(n\times N)}\) by setting f = F ∘ Ψ. Then,

$$\displaystyle \begin{aligned}\Vert \mathrm{grad} f(u)\Vert_2 = \sup_{V\in { M }_{n\times N}(\mathbb{C}), Tr VV^*=1} \left| \frac{d}{dt} F(\varPsi(u)+tV)_{\vert_{t=0}}\right|.\end{aligned}$$

Now, \(X=\varPsi (\Re (X_{ij}), \Im (X_{ij}),1\leq i\leq n,1\leq j\leq N)\) where the distribution of the random variable \((\Re (X_{ij}), \Im (X_{ij}),1\leq i\leq n,1\leq j\leq N)\) is \(\mathbb {N}(0, \frac {1}{2}I_{2nN})\).

Hence consider \(F:~H \rightarrow \frac {1}{n} \mathrm {Tr} \left (zI_n -\left (\sigma \frac { H}{\sqrt {N}} +A \right )\left (\sigma \frac {H}{\sqrt {N}} +A \right )^*\right )^{-1}\).

Let \( V\in {M }_{n\times N}(\mathbb {C})\) be such that \(\mathrm{Tr}\, VV^* = 1\).

$$\displaystyle \begin{aligned} &\frac{d}{dt} F(X+tV)\vert_{t=0}\\ & \quad =\frac{1}{n} \left\{\mathrm{Tr} \left(G\sigma\frac{V}{\sqrt{N}} \left(\sigma\frac{X}{\sqrt{N}} +A \right)^* G\right) + \mathrm{Tr}\left(G \left(\sigma\frac{X}{\sqrt{N}} +A \right)\sigma \frac{V^*}{\sqrt{N}} G\right)\right\}.\end{aligned} $$

Moreover, using the Cauchy-Schwarz inequality and Lemma 4.17, we have

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \left| \frac{1}{n} \mathrm{Tr} \left(G\sigma\frac{V}{\sqrt{N}} \left(\sigma\frac{X}{\sqrt{N}} +A \right)^* G\right)\right| \\ &\displaystyle &\displaystyle \quad \leq \frac{\sigma}{n} (TrVV^* )^{\frac{1}{2}}\left[\frac{1}{N}Tr (\left(\sigma\frac{X}{\sqrt{N}} +A \right)\left(\sigma\frac{X}{\sqrt{N}} +A \right)^*G^2(G^*)^2)\right]^{\frac{1}{2}}\\ &\displaystyle &\displaystyle \quad \leq \frac{\sigma}{\sqrt{N}\sqrt{n}\vert \Im z\vert^2}\left[\lambda_1\left(\left(\sigma\frac{X}{\sqrt{N}} +A \right)\left(\sigma\frac{X}{\sqrt{N}} +A \right)^*\right)\right]^{\frac{1}{2}}. \end{array} \end{aligned} $$

We obviously get the same bound for \(\vert \frac {1}{n} \mathrm {Tr}\left (G \left (\sigma \frac {X}{\sqrt {N}} +A \right ) \sigma \frac {V^*}{\sqrt {N}} G\right )\vert \). Thus

$$\displaystyle \begin{aligned} &\mathbb{E}\left(\Vert \mathrm{grad}f\left(\Re(X_{ij}), \Im(X_{ij}),1\leq i\leq n,1\leq j\leq N\right)\Vert_2^2 \right)\\ &\quad \leq \frac{4 \sigma^2}{\vert \Im z\vert^4 Nn}\mathbb{E}\left[\lambda_1\left(\left(\sigma\frac{X}{\sqrt{N}} +A \right)\left(\sigma\frac{X}{\sqrt{N}} +A \right)^*\right)\right]. {} \end{aligned} $$
(4.59)

(4.56) readily follows from (4.55), (4.59), Theorem A.8 in [2], Lemma 4.11 and the fact that \(\Vert A_N \Vert\) is uniformly bounded. Similarly, considering

$$\displaystyle \begin{aligned}F:~H \rightarrow \mathrm{Tr} \left[\left(zI_N -\left(\sigma \frac{H}{\sqrt{N}} +A \right)\left(\sigma \frac{H}{\sqrt{N}} +A \right)^*\right)^{-1}E_{qp}\right],\end{aligned}$$

where \(E_{qp}\) is the n × n matrix such that \((E_{qp})_{ij} = \delta_{qi}\delta_{pj}\), we can obtain that, for any \(V\in { M }_{n\times N}(\mathbb {C})\) such that \(\mathrm{Tr}\, VV^* = 1\),

$$\displaystyle \begin{aligned} &\left| \frac{d}{dt} F(X+tV)_{\vert_{t=0}} \right| \\ &\quad \leq \frac{\sigma}{\sqrt{N}} \left\{\left(\left(GG^*\right)_{pp} \left(G^*\varSigma \varSigma^*G\right)_{qq}\right)^{1/2} + \left( \left(G^*G\right)_{qq} \left(G\varSigma \varSigma^*G^*\right)_{pp}\right)^{1/2}\right\}.\end{aligned} $$

Thus, one can get (4.57) in the same way. Finally, considering

$$\displaystyle \begin{aligned}F:~H \rightarrow \mathrm{Tr} \left[\left(\sigma \frac{H}{\sqrt{N}} +A \right)^*\left(zI_N -\left(\sigma \frac{H}{\sqrt{N}} +A \right)\left(\sigma \frac{H}{\sqrt{N}} +A \right)^*\right)^{-1}A\right],\end{aligned}$$

we can obtain that, for any \(V\in { M }_{n\times N}(\mathbb {C})\) such that \(\mathrm{Tr}\, VV^* = 1\),

$$\displaystyle \begin{aligned} \begin{array}{rcl}\left| \frac{d}{dt} F(X+tV)_{\vert_{t=0} }\right|&\displaystyle \leq&\displaystyle \sigma\left\{ \left(\frac{1}{N} \mathrm{Tr} \varSigma^* G A \varSigma^* GG^* \varSigma A^* G^*\varSigma \right)^{1/2} \right. \\&\displaystyle &\displaystyle \left.+ \left(\frac{1}{N} \mathrm{Tr} GA \varSigma^* G \varSigma \varSigma^* G^* \varSigma A^* G^* \right)^{1/2} \right. \\&\displaystyle &\displaystyle \left.+ \left(\frac{1}{N} \mathrm{Tr} GA A^* G^* \right)^{1/2}\right\} \end{array} \end{aligned} $$

Using Lemma 4.17 (i), Theorem A.8 in [2], Lemma 4.11, the identity \(\varSigma \varSigma^* G = G \varSigma \varSigma^* = -I + zG\), and the fact that \(\Vert A_N \Vert\) is uniformly bounded, the same analysis allows us to prove (4.58).

Corollary 4.1

Let Δ 1(p, q), Δ 2(p, q), (p, q) ∈{1, …, n}2, and Δ 3 be as defined in Proposition 4.5 . Then there exist a polynomial P with nonnegative coefficients and a nonnegative real number l such that, for all large N, for any \(z\in \mathbb {C}\setminus \mathbb {R}\) ,

$$\displaystyle \begin{aligned} \varDelta_3(z)\leq \frac{P(\vert \Im z\vert^{-1}) (1+\vert z \vert )^l}{N},\end{aligned} $$
(4.60)

and for all (p, q) ∈ {1, …, n}^2,

$$\displaystyle \begin{aligned} \varDelta_1(p,q)(z)\leq \frac{P(\vert \Im z\vert^{-1}) (1+\vert z \vert )^l}{N},\end{aligned} $$
(4.61)
$$\displaystyle \begin{aligned} \varDelta_2(p,q)(z)\leq \frac{P(\vert \Im z\vert^{-1}) (1+\vert z \vert )^l}{N\sqrt{N}}.\end{aligned} $$
(4.62)

Proof

Using the identity

$$\displaystyle \begin{aligned}GM_N = -I + z G,\end{aligned}$$

(4.61) readily follows from the Cauchy-Schwarz inequality, Lemma 4.17 and (4.56). (4.60) and (4.62) readily follow from the Cauchy-Schwarz inequality and Lemma 4.12.

4.4.3 Estimates of Resolvent Entries

In order to deduce Proposition 4.3 from Proposition 4.5 and Corollary 4.1, we need the following two lemmas, Lemmas 4.13 and 4.14.

Lemma 4.13

For all \(z\in \mathbb {C}\setminus \mathbb {R}\) ,

$$\displaystyle \begin{aligned}\frac{1}{\left|1- \sigma^2 c_N g_N(z)\right|} \leq \frac{\vert z\vert }{\vert \Im z \vert},\end{aligned} $$
(4.63)
$$\displaystyle \begin{aligned}\frac{1}{\left|1- \sigma^2 c g_{\mu_{\sigma,\nu,c}}(z)\right|} \leq \frac{\vert z\vert }{\vert \Im z \vert}.\end{aligned} $$
(4.64)

Proof

Since \(\mu _{M_N}\) is supported by [0, +∞[, (4.63) readily follows from

$$\displaystyle \begin{aligned} \frac{1}{\left|1- \sigma^2 c_N g_N(z)\right|} &=\frac{\vert z\vert }{\left|z- \sigma^2 c_N zg_N(z)\right|} \\ &\leq \frac{\vert z\vert }{\big|\Im (z- \sigma^2 c_N zg_N(z))\big|}\,{=}\, \frac{\vert z\vert }{|\Im z |\big( 1+\sigma^2 c_N \mathbb{E} \int \frac{t}{|z-t|{}^2} d\mu_{M_N}(t)\big)}.\end{aligned} $$

(4.64) may be proved similarly.
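For completeness, since \(\mu_{\sigma,\nu,c}\) is also supported by [0, +∞[, the same computation gives, for any \(z\in \mathbb {C}\setminus \mathbb {R}\),

$$\displaystyle \begin{aligned}\frac{1}{\left|1- \sigma^2 c g_{\mu_{\sigma,\nu,c}}(z)\right|} =\frac{\vert z\vert }{\left|z- \sigma^2 c zg_{\mu_{\sigma,\nu,c}}(z)\right|} \leq \frac{\vert z\vert }{\vert\Im z \vert\left( 1+\sigma^2 c \int \frac{t}{|z-t|^{2}}\, d\mu_{\sigma,\nu,c}(t)\right)} \leq \frac{\vert z\vert }{\vert \Im z \vert}.\end{aligned}$$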

Corollary 4.1 and Lemma 4.13 yield that there exist a polynomial Q with nonnegative coefficients, a sequence b N of nonnegative real numbers converging to zero when N goes to infinity, and some nonnegative integer number l, such that for any p, q in {1, …, n}, for all \(z\in \mathbb {C}\setminus \mathbb {R}\),

$$\displaystyle \begin{aligned}\nabla_{pq} \leq (1+\vert z\vert)^l Q(\vert \Im z \vert^{-1})b_N,\end{aligned} $$
(4.65)

where ∇pq was defined by (4.36).

Lemma 4.14

There is a sequence v N of nonnegative real numbers converging to zero when N goes to infinity such that for all \(z\in \mathbb {C}\setminus \mathbb {R}\) ,

$$\displaystyle \begin{aligned} \left| g_N(z)-g_{\mu_{\sigma,\nu,c}}(z)\right| \leq \left\{\frac{\vert z\vert^2 +2}{\vert \Im z \vert^{2}}+ \frac{1}{\vert \Im z \vert}\right\}v_N.\end{aligned} $$
(4.66)

Proof

First note that it is sufficient to prove (4.66) for \(z\in \mathbb {C}^+:=\{z \in \mathbb {C}; \Im z >0\}\) since \( g_N(\bar z)-g_{\mu _{\sigma ,\nu ,c}} (\bar z)= \overline {g_N(z)-g_{\mu _{\sigma ,\nu ,c}}(z)}\). Fix ε > 0. According to Theorem A.8 and Theorem 5.11 in [2], and the assumption on A N, we can choose \(K> \max \{ 2/\varepsilon ; x, x \in \mathrm {supp}( \mu _{\sigma ,\nu ,c})\}\) such that \(\mathbb {P}\left ( \left \|M_N\right \| >K\right )\) goes to zero as N goes to infinity. Let us write

$$\displaystyle \begin{aligned} g_N(z)-g_{\mu_{\sigma,\nu,c}}(z) &= \mathbb{E}\left[\left(g_{\mu_{M_N}}(z)-g_{\mu_{\sigma,\nu,c}}(z)\right)\mathbf{1}_{\{\left\|M_N\right\| \leq K\}}\right]\\ &\quad + \mathbb{E}\left[\left(g_{\mu_{M_N}}(z)-g_{\mu_{\sigma,\nu,c}}(z)\right)\mathbf{1}_{\{\left\|M_N\right\| > K\}}\right].\end{aligned} $$
(4.67)

For any \(z \in \mathbb {C}^+\) such that |z| > 2K, the supports of \(\mu_{M_N}\) (on the event {∥M N∥≤ K}) and of \(\mu_{\sigma,\nu,c}\) are contained in [0, K], so that

$$\displaystyle \begin{aligned}\left|\left(g_{\mu_{M_N}}(z)-g_{\mu_{\sigma,\nu,c}}(z)\right)\mathbf{1}_{\{\left\|M_N\right\| \leq K\}}\right| \leq \frac{2}{\vert z\vert -K}\leq \frac{4}{\vert z\vert}.\end{aligned}$$

Thus, for any \(z \in \mathbb {C}^+\) such that |z| > 2K, we can deduce that

$$\displaystyle \begin{aligned}\left|\mathbb{E}\left[\left(g_{\mu_{M_N}}(z)-g_{\mu_{\sigma,\nu,c}}(z)\right)\mathbf{1}_{\{\left\|M_N\right\| \leq K\}}\right]\right| \leq \frac{4}{\vert z\vert} \leq \frac{2}{K}\leq \varepsilon.\end{aligned} $$
(4.68)

Now, it is clear that \(z \mapsto \mathbb{E}\left[g_{\mu_{M_N}}(z)\mathbf{1}_{\{\left\|M_N\right\| \leq K\}}\right]\) is a sequence of locally bounded holomorphic functions on \(\mathbb {C}^+\) which converges towards \(g_{\mu _{\sigma ,\nu ,c}}\). Hence, by Vitali’s Theorem, this sequence converges uniformly towards \(g_{\mu _{\sigma ,\nu ,c}}\) on each compact subset of \(\mathbb {C}^+\). Thus, there exists N(ε) > 0 such that for any N ≥ N(ε), for any \(z\in \mathbb {C}^+\) such that |z|≤ 2K and ℑz ≥ ε,

$$\displaystyle \begin{aligned}\left|\mathbb{E}\left[\left(g_{\mu_{M_N}}(z)-g_{\mu_{\sigma,\nu,c}}(z)\right)\mathbf{1}_{\{\left\|M_N\right\| \leq K\}}\right]\right| \leq \varepsilon.\end{aligned} $$
(4.69)

Finally, for any \(z\in \mathbb {C}^+\) such that ℑz ∈ ]0;ε[, we have

$$\displaystyle \begin{aligned}\left|\mathbb{E}\left[\left(g_{\mu_{M_N}}(z)-g_{\mu_{\sigma,\nu,c}}(z)\right)\mathbf{1}_{\{\left\|M_N\right\| \leq K\}}\right]\right| \leq \frac{2}{\Im z} \leq \frac{2\varepsilon}{(\Im z)^{2}}.\end{aligned} $$
(4.70)

It readily follows from (4.68), (4.69) and (4.70) that, for N ≥ N(ε) and any \(z \in \mathbb{C}^+\),

$$\displaystyle \begin{aligned}\left|\mathbb{E}\left[\left(g_{\mu_{M_N}}(z)-g_{\mu_{\sigma,\nu,c}}(z)\right)\mathbf{1}_{\{\left\|M_N\right\| \leq K\}}\right]\right| \leq \varepsilon\,\frac{\vert z\vert^{2} +2}{\vert \Im z \vert^{2}}.\end{aligned}$$

Moreover, for N ≥ N′(ε) ≥ N(ε), \(\mathbb {P}\left ( \left \|M_N\right \| >K\right ) \leq \varepsilon ,\) so that \(\left|\mathbb{E}\left[\left(g_{\mu_{M_N}}(z)-g_{\mu_{\sigma,\nu,c}}(z)\right)\mathbf{1}_{\{\left\|M_N\right\| > K\}}\right]\right| \leq \frac{2\varepsilon}{\Im z}\). Therefore, for N ≥ N′(ε), we have for any \(z \in \mathbb {C}^+\),

$$\displaystyle \begin{aligned}\left| g_N(z)-g_{\mu_{\sigma,\nu,c}}(z)\right| \leq 2\varepsilon \left\{\frac{\vert z\vert^{2} +2}{\vert \Im z \vert^{2}}+ \frac{1}{\Im z}\right\}.\end{aligned} $$
(4.71)

Thus, setting

$$\displaystyle \begin{aligned}v_N= \sup_{z\in \mathbb{C}^+} \left\{\left|g_N(z)-g_{\mu_{\sigma,\nu,c}}(z) \right| \left(\frac{\vert z\vert^2 +2}{\vert \Im z \vert^{2}}+ \frac{1}{\Im z}\right)^{-1}\right\},\end{aligned}$$

we deduce from (4.71) that v N ≤ 2ε for any N ≥ N′(ε), so that v N converges to zero when N goes to infinity and the proof is complete.

Now set

$$\displaystyle \begin{aligned}\tau_N= {(1-\sigma^2c_Ng_{ N}(z))z- \frac{ \gamma_q(N)}{1- \sigma^2 c_Ng_{N}(z)} -\sigma^2 (1-c_N)}\end{aligned}$$

and

$$\displaystyle \begin{aligned} \tilde \tau_N={(1-\sigma^2cg_{ \mu_{\sigma,\nu,c}}(z))z- \frac{ \gamma_q(N)}{1- \sigma^2 cg_{ \mu_{\sigma,\nu,c}}(z)} -\sigma^2 (1-c)}.\end{aligned} $$
(4.72)

Lemmas 4.13 and 4.14 yield that there exist a polynomial R with nonnegative coefficients, a sequence w N of nonnegative real numbers converging to zero when N goes to infinity, and some nonnegative real number l, such that for all \(z\in \mathbb {C}\setminus \mathbb {R}\),

$$\displaystyle \begin{aligned}\left|\tau_N - \tilde \tau_N\right| \leq (1+\vert z\vert)^l R(\vert \Im z \vert^{-1})w_N.\end{aligned} $$
(4.73)
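A minimal sketch of this estimate (the decomposition below is ours): by the triangle inequality,

$$\displaystyle \begin{aligned}\left|\tau_N - \tilde \tau_N\right| \leq \sigma^2 \vert z\vert \left| c_Ng_{N}(z)-cg_{ \mu_{\sigma,\nu,c}}(z)\right| + \gamma_q(N)\, \frac{\sigma^2 \left| c_Ng_{N}(z)-cg_{ \mu_{\sigma,\nu,c}}(z)\right|}{\left|1- \sigma^2 c_Ng_{N}(z)\right| \left|1- \sigma^2 cg_{ \mu_{\sigma,\nu,c}}(z)\right|} + \sigma^2 \vert c_N-c\vert,\end{aligned}$$

where \(\left| c_Ng_{N}(z)-cg_{ \mu_{\sigma,\nu,c}}(z)\right| \leq c_N\left| g_{N}(z)-g_{ \mu_{\sigma,\nu,c}}(z)\right| + \vert c_N-c\vert \,\vert g_{ \mu_{\sigma,\nu,c}}(z)\vert\). Lemma 4.14, the bound \(\vert g_{ \mu_{\sigma,\nu,c}}(z)\vert \leq \vert \Im z\vert^{-1}\), Lemma 4.13 for the denominators, and the boundedness of γ q(N) and of c N then give (4.73), with w N of order \(\max (v_N, \vert c_N -c\vert)\).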

Now, one can easily see that

$$\displaystyle \begin{aligned} \left|\Im \left\{(1-\sigma^2cg_{ \mu_{\sigma,\nu,c}}(z))z- \frac{ \gamma_q(N)}{1- \sigma^2 cg_{ \mu_{\sigma,\nu,c}}(z)} -\sigma^2 (1-c)\right\}\right| \geq \vert \Im z \vert,\end{aligned} $$
(4.74)
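Indeed, writing g for \(g_{ \mu_{\sigma,\nu,c}}(z)\) and using that \(\mu_{\sigma,\nu,c}\) is supported by [0, +∞[ and that γ q(N) ≥ 0, a direct computation gives

$$\displaystyle \begin{aligned}\Im \left\{(1-\sigma^2cg)z- \frac{ \gamma_q(N)}{1- \sigma^2 cg} -\sigma^2 (1-c)\right\} = \Im z \left\{ 1+\sigma^2 c\int \frac{t}{\vert z-t\vert^{2}}\, d\mu_{\sigma,\nu,c}(t) + \frac{\gamma_q(N)\,\sigma^2 c\int \frac{d\mu_{\sigma,\nu,c}(t)}{\vert z-t\vert^{2}}}{\vert 1-\sigma^2 cg\vert^{2}}\right\},\end{aligned}$$

in which the bracket is at least 1, whence (4.74).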

so that

$$\displaystyle \begin{aligned} \left| \frac{1}{\tilde \tau_N}\right| \leq \frac{1}{\vert\Im z \vert}. \end{aligned} $$
(4.75)

Note that

$$\displaystyle \begin{aligned}\frac{1}{ \tilde \tau_N} =\frac{( {1- \sigma^2c g_{\mu_{\sigma,\nu,c}}(z)})}{\omega_{\sigma, \nu, c}(z) -\gamma_q(N)}.\end{aligned} $$
(4.76)
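Indeed, comparing (4.76) with (4.72), the subordination function is given explicitly by

$$\displaystyle \begin{aligned}\omega_{\sigma, \nu, c}(z) = z\left(1-\sigma^2cg_{ \mu_{\sigma,\nu,c}}(z)\right)^{2} - \sigma^2 (1-c)\left(1-\sigma^2cg_{ \mu_{\sigma,\nu,c}}(z)\right),\end{aligned}$$

as one checks by multiplying (4.72) by \(1- \sigma^2 cg_{ \mu_{\sigma,\nu,c}}(z)\).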

Then, (4.16) readily follows from Proposition 4.5, (4.65), (4.73), (4.75), (4.76), and Lemma 4.17 (ii). The proof of Proposition 4.3 is complete.

4.5 Proof of Theorem 4.3

We follow the two steps presented in Sect. 4.2.

Step A

We first prove (4.11).

Let η > 0 be small enough and N large enough that, for any l = 1, …, J, α l(N) ∈ [θ l − η, θ l + η] and [θ l − 2η, θ l + 2η] contains no element of the spectrum of \(A_NA_N^*\) other than α l(N). For any l = 1, …, J, choose f η,l in \(\mathcal {C}^\infty (\mathbb {R}, \mathbb {R})\) with support in [θ l − 2η, θ l + 2η] such that f η,l(x) = 1 for any x ∈ [θ l − η, θ l + η] and 0 ≤ f η,l ≤ 1. Let 0 < ε < δ 0 where δ 0 is introduced in Theorem 4.2. Choose h ε,j in \(\mathcal {C}^\infty (\mathbb {R}, \mathbb {R})\) with support in \([\rho _{\theta _j} -\varepsilon ,\rho _{\theta _j}+\varepsilon ]\) such that h ε,j ≡ 1 on \([\rho _{\theta _j} -\varepsilon /2 ,\rho _{\theta _j}+\varepsilon /2 ]\) and 0 ≤ h ε,j ≤ 1.

According to Theorem 4.2, almost surely for all large N, M N has k j eigenvalues in \(]\rho _{\theta _j} -\varepsilon /2 ,\rho _{\theta _j}+\varepsilon /2[\). Denoting by \((\xi _1,\ldots ,\xi _{k_j})\) an orthonormal system of eigenvectors associated with these k j eigenvalues, it readily follows from (4.12) that almost surely for all large N,

$$\displaystyle \begin{aligned}\sum_{n=1}^{k_j}\left\| P_{\ker(\alpha_l(N) I_n-A_NA_N^*)}\xi_n \right\|{}^2= \mathrm{Tr} \left[ h_{\varepsilon ,j}(M_N) f_{\eta,l}(A_NA_N^*)\right].\end{aligned}$$
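Indeed, almost surely for all large N, the k j eigenvalues above are the only eigenvalues of M N in the support of h ε,j, on which h ε,j ≡ 1, so that \(h_{\varepsilon ,j}(M_N)=\sum _{n=1}^{k_j}\xi _n\xi _n^*\), while \(f_{\eta ,l}(A_NA_N^*)= P_{\ker(\alpha_l(N) I_n-A_NA_N^*)}\) by the choice of f η,l; hence

$$\displaystyle \begin{aligned}\mathrm{Tr} \left[ h_{\varepsilon ,j}(M_N) f_{\eta,l}(A_NA_N^*)\right] = \sum_{n=1}^{k_j} \left\langle \xi_n, P_{\ker(\alpha_l(N) I_n-A_NA_N^*)}\,\xi_n\right\rangle = \sum_{n=1}^{k_j}\left\| P_{\ker(\alpha_l(N) I_n-A_NA_N^*)}\xi_n \right\|^{2}.\end{aligned}$$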

Applying Proposition 4.2 with \(\varGamma _N= f_{\eta ,l}(A_NA_N^*)\) and K = k l, the problem of establishing (4.11) is reduced to proving that

$$\displaystyle \begin{aligned} & \mathbb{E}\left(\mathrm{Tr} \left[h_{\varepsilon,j} \left(\left(\sigma\frac{\mathbb{G}_N}{\sqrt{N}}+A_N\right)\left(\sigma\frac{\mathbb{ G}_N}{\sqrt{N}}+A_N\right)^*\right) f_{\eta,l}(A_NA_N^*)\right] \right)\\ & \quad \rightarrow_{N \rightarrow +\infty} \frac{k_j\delta_{jl} (1-\sigma^2 c g_{\mu_{\sigma,\nu,c}}(\rho_{\theta_j}))}{\omega_{\sigma,\nu,c}^{\prime}(\rho_{\theta_j})}. \end{aligned} $$
(4.77)

Using a Singular Value Decomposition of A N and the biunitary invariance of the distribution of \(\mathbb {G}_N\), we can assume that A N is as in (4.14) and such that, for any j = 1, …, J,

$$\displaystyle \begin{aligned}(A_NA_N^*)_{ii}=\alpha_j(N) \mbox{ ~ for }i=k_1+\ldots+k_{j-1}+l, l=1,\ldots,k_j.\end{aligned}$$

Now, according to Lemma 4.18,

$$\displaystyle \begin{aligned} & \mathbb{E}\left(\mathrm{Tr} \left[h_{\varepsilon,j} \left(\left(\sigma \frac{\mathbb{G}_N}{\sqrt{N}}+A_N\right)\left(\sigma \frac{\mathbb{ G}_N}{\sqrt{N}}+A_N\right)^*\right) f_{\eta,l}(A_NA_N^*)\right] \right)\\ &\quad = - \lim_{y\rightarrow 0^{+}}\frac{1}{\pi} \int \Im \mathbb{E}\mathrm{Tr} \left[G^{\mathbb{G}}_N(t+\mathrm{i} y) f_{\eta,l}(A_NA_N^*)\right] h_{\varepsilon,j}(t) dt,\end{aligned} $$

with, for all large N,

$$\displaystyle \begin{aligned} \mathbb{E}\mathrm{Tr} \left[G^{\mathbb{G}}_N(t+\mathrm{i} y) f_{\eta,l}(A_NA_N^*)\right] &= \sum_{k=k_1+\cdots+k_{l-1}+1}^{k_1+\cdots+k_{l}} f_{\eta,l} (\alpha_l(N))\,\mathbb{E}[G^{\mathbb{G}}_N(t+\mathrm{i} y)]_{kk} \\ &= \sum_{k=k_1+\cdots+k_{l-1}+1}^{k_1+\cdots+k_{l}} \mathbb{E}[G^{\mathbb{G}}_N(t+\mathrm{i} y)]_{kk}. \end{aligned} $$

Now, by considering

$$\displaystyle \begin{aligned}\tau'={(1-\sigma^2cg_{ \mu_{\sigma,\nu,c}}(z))z- \frac{ \theta_l}{1- \sigma^2 cg_{ \mu_{\sigma,\nu,c}}(z)} -\sigma^2 (1-c)}\end{aligned}$$

instead of dealing with \(\tilde \tau _N\) defined in (4.72) at the end of the proof of Proposition 4.3, one can prove that there exist a polynomial P with nonnegative coefficients, a sequence (u N)N of nonnegative real numbers converging to zero when N goes to infinity, and some nonnegative real number s, such that for any k in {k 1 + … + k l−1 + 1, …, k 1 + … + k l}, for all \(z\in \mathbb {C}\setminus \mathbb {R}\),

$$\displaystyle \begin{aligned} \mathbb{E} \left(\left( G^{\mathbb{G}}_N(z)\right)_{kk}\right) = \frac{1- \sigma^2 cg_{\mu_{\sigma,\nu,c}}(z)}{\omega_{\sigma, \nu, c}(z) -\theta_l} +\varDelta_{k,N}(z), \end{aligned} $$
(4.78)

with

$$\displaystyle \begin{aligned}\left| \varDelta_{k,N} (z)\right| \leq (1+\vert z\vert)^s P(\vert \Im z \vert^{-1})u_N.\end{aligned}$$

Thus,

$$\displaystyle \begin{aligned}\mathbb{E}\mathrm{Tr} \left[G^{\mathbb{G}}_N(t+\mathrm{i} y) f_{\eta,l}(A_NA_N^*)\right] = k_l\, \frac{1- \sigma^2 cg_{\mu_{\sigma,\nu,c}}(t+\mathrm{i} y)}{\omega_{\sigma, \nu, c}(t+\mathrm{i} y) -\theta_l} + \varDelta_N(t+\mathrm{i} y),\end{aligned}$$

where for all \(z \in \mathbb {C} \setminus \mathbb {R}\), \(\varDelta _N(z)= \sum _{k=k_1+\cdots +k_{l-1}+1}^{k_1+\cdots +k_{l}} \varDelta _{k,N}(z),\) and \(\left | \varDelta _{N} (z)\right | \leq k_l (1+\vert z\vert )^s P(\vert \Im z \vert ^{-1})u_N.\)

First let us compute

$$\displaystyle \begin{aligned}\lim_{y\downarrow0}\frac{k_l}{\pi}\int_{\rho_{\theta_j}-\varepsilon}^{\rho_{\theta_j}+\varepsilon} \Im\frac{h_{\varepsilon,j}(t) (1-\sigma^2 c g_{\mu_{\sigma,\nu,c}}(t+\mathrm{i} y))}{\theta_l-\omega_{\sigma,\nu,c}(t+\mathrm{i} y)}\,dt. \end{aligned}$$

The function ω σ,ν,c satisfies \(\omega _{\sigma ,\nu ,c}(\overline {z})=\overline {\omega _{\sigma ,\nu ,c}(z)}\) and \(g_{\mu _{\sigma ,\nu ,c}}(\overline {z})=\overline {g_{\mu _{\sigma ,\nu ,c}}(z)}\), so that \(\Im \frac { (1-\sigma ^2 c g_{\mu _{\sigma ,\nu ,c}}(t+\mathrm{i} y))}{\theta _l-\omega _{\sigma ,\nu ,c}(t+\mathrm{i} y)}=\frac {1}{2i}[\frac { (1-\sigma ^2 c g_{\mu _{\sigma ,\nu ,c}}(t+\mathrm{i} y))}{\theta _l-\omega _{\sigma ,\nu ,c}(t+\mathrm{i} y)}- \frac { (1-\sigma ^2 c g_{\mu _{\sigma ,\nu ,c}}(t-iy))}{\theta _l-\omega _{\sigma ,\nu ,c}(t-iy)}]\). As in [10], the above integral is split into three pieces, namely \(\int _{\rho _{\theta _j}-\varepsilon }^{\rho _{\theta _j}-\varepsilon /2}+ \int _{\rho _{\theta _j}-\varepsilon /2}^{\rho _{\theta _j}+\varepsilon /2}+\int _{\rho _{\theta _j}+\varepsilon /2}^{\rho _{\theta _j}+\varepsilon }\). The first and third integrals are easily seen to go to zero as y ↓ 0 by a direct application of the definition of the functions involved and of the (Riemann) integral. As h ε,j is constantly equal to one on \([\rho _{\theta _j}-\varepsilon /2; \rho _{\theta _j}+\varepsilon /2]\), the second (middle) term is simply the integral

$$\displaystyle \begin{aligned}\frac{k_l}{2\pi i}\int_{\rho_{\theta_j}-\varepsilon/2}^{\rho_{\theta_j}+\varepsilon/2}\frac{1-\sigma^2cg_{\mu_{\sigma,\nu,c}}(t+\mathrm{i} y)}{\theta_l-\omega_{\sigma,\nu,c}(t+\mathrm{i} y)}- \frac{1-\sigma^2cg_{\mu_{\sigma,\nu,c}}(t-\mathrm{i} y)}{\theta_l-\omega_{\sigma,\nu,c}(t-\mathrm{i} y)}\,dt. \end{aligned}$$

Completing this to a contour integral over the rectangle with corners \(\rho _{\theta _j}\pm \varepsilon /2\pm \mathrm{i} y\) and noting that the integrals along the vertical sides tend to zero as y ↓ 0 allows a direct application of the residue theorem; the final result, when l = j, is

$$\displaystyle \begin{aligned}\frac{k_j (1-\sigma^2 c g_{\mu_{\sigma,\nu,c}}(\rho_{\theta_j}))}{\omega_{\sigma,\nu,c}^{\prime}(\rho_{\theta_j})}. \end{aligned}$$
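In more detail (a sketch of the residue computation): set \(f(z)=(1-\sigma^2 cg_{\mu_{\sigma,\nu,c}}(z))(\theta_l-\omega_{\sigma,\nu,c}(z))^{-1}\). When l = j, \(\omega_{\sigma,\nu,c}(\rho_{\theta_j})=\theta_j\), so f has a simple pole at \(\rho_{\theta_j}\) with

$$\displaystyle \begin{aligned}\mathrm{Res}_{z=\rho_{\theta_j}} f(z) = -\,\frac{1-\sigma^2 c g_{\mu_{\sigma,\nu,c}}(\rho_{\theta_j})}{\omega_{\sigma,\nu,c}^{\prime}(\rho_{\theta_j})};\end{aligned}$$

the counterclockwise rectangle contributes \(\int f(t-\mathrm{i} y)\,dt-\int f(t+\mathrm{i} y)\,dt\) along its horizontal sides, so the middle term converges to \(-k_l\, \mathrm{Res}_{z=\rho_{\theta_j}} f(z)\), which is the stated value.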

If we consider θ l for some l ≠ j, then \(z\mapsto (1-\sigma ^2 cg_{\mu _{\sigma ,\nu ,c}}(z))(\theta _l- \omega _{\sigma ,\nu ,c}(z))^{-1}\) is analytic around \(\rho _{\theta _j}\), so its residue at \(\rho _{\theta _j}\) is zero, and the above argument yields zero as answer.

Now, according to Lemma 4.19, we have

$$\displaystyle \begin{aligned}\limsup_{y\rightarrow 0^+}~(u_N)^{-1}\left|\int h_{\varepsilon,j} (t)\varDelta_N(t+\mathrm{i} y)dt\right| <+\infty \end{aligned}$$

so that

$$\displaystyle \begin{aligned} \lim_{N\rightarrow + \infty}\limsup_{y \rightarrow 0^+} \left| \int h_{\varepsilon,j} (t) \varDelta_N(t+\mathrm{i} y)dt \right| =0. \end{aligned} $$
(4.79)

This concludes the proof of (4.11).

Step B

In the second, and final, step, we shall use a perturbation argument identical to the one used in [10] to reduce the problem to the case of a spike with multiplicity one, a case that follows directly from Step A. A further property of eigenvectors of Hermitian matrices that are close to each other in norm will be important in the analysis of the behaviour of the eigenvectors of our matrix models. Given a Hermitian matrix \(M\in \ M_N(\mathbb C)\) and a Borel set \(S\subseteq \mathbb R\), we denote by E M(S) the spectral projection of M associated with S. In other words, the range of E M(S) is the vector space generated by the eigenvectors of M corresponding to eigenvalues in S. The following lemma can be found in [6].

Lemma 4.15

Let M and M 0 be N × N Hermitian matrices. Assume that \(\alpha ,\beta ,\delta \in \mathbb R\) are such that α < β and δ > 0, and that M and M 0 have no eigenvalues in [α − δ, α] ∪ [β, β + δ]. Then,

$$\displaystyle \begin{aligned}\|E_{M}((\alpha,\beta))-E_{M_0}((\alpha,\beta))\|<\frac{4(\beta-\alpha+2\delta)}{\pi\delta^2}\|M-M_0\|. \end{aligned}$$

In particular, for any unit vector \(\xi \in E_{M_0}((\alpha ,\beta ))(\mathbb C^N)\) ,

$$\displaystyle \begin{aligned}\|(I_N-E_{M}((\alpha,\beta)))\xi\|{}_2<\frac{4(\beta-\alpha+2\delta)}{\pi\delta^2}\|M-M_0\|. \end{aligned}$$

Assume that θ i is in Θ σ,ν,c defined in (4.7) and that k i ≠ 1. Let us denote by \(V_1(i),\ldots , V_{k_i}(i)\) an orthonormal system of eigenvectors of \(A_NA_N^*\) associated with α i(N). Consider a Singular Value Decomposition A N = U ND NV N where V N is an N × N unitary matrix, U N is an n × n unitary matrix whose first k i columns are \( V_1(i),\ldots , V_{k_i}(i)\), and D N is as in (4.14) with the first k i diagonal elements equal to \(\sqrt {\alpha _i(N)}\).

Let δ 0 be as in Theorem 4.2. Almost surely, for all N large enough, there are k i eigenvalues of M N in \((\rho _{\theta _i}- \frac {\delta _0}{4}, \rho _{\theta _i}+ \frac {\delta _0}{4})\), namely \(\lambda _{n_{i-1}+q}(M_N)\), q = 1, …, k i (where n i−1 + 1, …, n i−1 + k i are the descending ranks of α i(N) among the eigenvalues of \(A_NA_N^*\)), which are moreover the only eigenvalues of M N in \((\rho _{\theta _i}-\delta _0,\rho _{\theta _i}+\delta _0)\). Thus, the spectrum of M N is split into three pieces:

$$\displaystyle \begin{aligned}\{\lambda_1(M_N),\dots,\lambda_{n_{i-1}}(M_N)\}\subset (\rho_{\theta_i}+\delta_0,+\infty[,\end{aligned}$$
$$\displaystyle \begin{aligned}\{\lambda_{n_{i-1}+1}(M_N),\dots,\lambda_{n_{i-1}+k_i}(M_N)\} \subset(\rho_{\theta_i}- \frac{\delta_0}{4},\rho_{\theta_i}+ \frac{\delta_0}{4}),\end{aligned}$$
$$\displaystyle \begin{aligned}\{\lambda_{n_{i-1}+k_i+1}(M_N),\dots, \lambda_{n}(M_N)\}\subset [0, \rho_{\theta_i}-\delta_0). \end{aligned}$$

The distance between any two of these components is at least 3δ 0∕4. Let us fix 𝜖 0 > 0 such that \(0\leq \theta _i ( 2 \epsilon _0 k_i +\epsilon _0^2k_i^2) < \mathrm{dist}(\theta _i, \mathrm{supp}\,\nu \cup \{\theta _s, s\neq i\} )\) and such that \([\theta _i; \theta _i+\theta _i ( 2 \epsilon _0 k_i +\epsilon _0^2k_i^2)] \subset \mathbb {E}_{\sigma , \nu , c}\) defined by (4.6). For any 0 < 𝜖 < 𝜖 0, define the matrix A N(𝜖) by A N(𝜖) = U ND N(𝜖)V N where

$$\displaystyle \begin{aligned}\left(D_N(\epsilon)\right)_{m, m}=\sqrt{\alpha_{i}(N)} [1 + \epsilon (k_i-m+1)],\text{ ~for } m\in\{1,\ldots,k_i\},\end{aligned}$$

and \(\left (D_N(\epsilon )\right )_{pq}=\left (D_N\right )_{pq}\) for any (p, q)∉{(m, m), m ∈{1, …, k i}}.

Set

$$\displaystyle \begin{aligned}M_N(\epsilon)=\left(\sigma \frac{X_N}{\sqrt{N}} +A_N(\epsilon)\right)\left( \sigma \frac{X_N}{\sqrt{N}} +A_N(\epsilon)\right)^*.\end{aligned}$$

For N large enough, for each m ∈ {1, …, k i}, α i(N)[1 + 𝜖(k i − m + 1)]^2 is an eigenvalue of \(A_N(\epsilon )A_N(\epsilon )^*\) with multiplicity one. Note that, since supN∥A N∥ < +∞, it is easy to see that there exists some constant C such that, for any N and any 0 < 𝜖 < 𝜖 0,

$$\displaystyle \begin{aligned}\left\|M_N(\epsilon)-M_N\right\|\leq C\epsilon \left( \left\| \frac{X_N}{\sqrt{N}}\right\| +1\right) . \end{aligned}$$
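Indeed, writing \(\varDelta_N:=A_N(\epsilon)-A_N = U_N(D_N(\epsilon)-D_N)V_N\), only the first k i diagonal entries of D N are modified, each by at most \(\epsilon k_i \sqrt{\alpha_i(N)}\), so that \(\Vert \varDelta_N\Vert \leq \epsilon\, k_i \sup_N \Vert A_N\Vert\); with \(\varSigma_N=\sigma \frac{X_N}{\sqrt{N}}+A_N\),

$$\displaystyle \begin{aligned}M_N(\epsilon)-M_N = \varDelta_N \varSigma_N^* + \varSigma_N \varDelta_N^* + \varDelta_N \varDelta_N^*, \quad \mbox{so that~} \left\|M_N(\epsilon)-M_N\right\| \leq 2\Vert \varDelta_N \Vert \left( \sigma \left\| \frac{X_N}{\sqrt{N}}\right\| + \Vert A_N \Vert \right) + \Vert \varDelta_N \Vert^{2},\end{aligned}$$

which yields the stated bound for any 0 < 𝜖 < 𝜖 0.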

Applying Remark 4.3 to the (n + N) × (n + N) matrix \(\tilde X_N= \begin{pmatrix} 0_{n\times n} & X_N\\ X_N^* & 0_{N\times N} \end{pmatrix}\) (see also Appendix B of [14]), it readily follows that there exists some constant C′ such that, a.s. for all large N, for any 0 < 𝜖 < 𝜖 0,

$$\displaystyle \begin{aligned} \left\| M_N(\epsilon)-M_N\right\| \leq C' \epsilon.\end{aligned} $$
(4.80)

Therefore, for 𝜖 sufficiently small such that C′𝜖 < δ 0∕4, by Theorem A.46 in [2], there are precisely n i−1 eigenvalues of M N(𝜖) in \((\rho _{\theta _i}+3\delta _0/4,+ \infty [\), precisely k i in \((\rho _{\theta _i}-\delta _0/2,\rho _{\theta _i}+\delta _0/2)\), and precisely n − (n i−1 + k i) in \([0,\rho _{\theta _i}-3\delta _0/4)\). All these intervals are again at strictly positive distance from each other, in this case δ 0∕4.

Let ξ be a normalized eigenvector of M N relative to \(\lambda _{n_{i-1}+q}(M_N)\) for some q ∈ {1, …, k i}. If E(𝜖) denotes the subspace of \(\mathbb C^n\) spanned by the eigenvectors associated with \(\{\lambda _{n_{i-1}+1}(M_N(\epsilon )),\dots ,\lambda _{n_{i-1}+k_i} (M_N(\epsilon ))\}\), then Lemma 4.15 yields some constant C (which depends on δ 0) such that, for 𝜖 small enough, almost surely for all large N,

$$\displaystyle \begin{aligned} \left\|P_{ E(\epsilon)^{\bot}}\xi\right\|{}_2\leq C \epsilon .\end{aligned} $$
(4.81)
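For instance (the specific choice of parameters here is ours), one may apply Lemma 4.15 with M = M N(𝜖), M 0 = M N, \((\alpha,\beta)=(\rho_{\theta_i}-\delta_0/2,\rho_{\theta_i}+\delta_0/2)\) and δ = δ 0∕4: by the above localization of the spectra, neither matrix has eigenvalues in [α − δ, α] ∪ [β, β + δ], so that Lemma 4.15 and (4.80) give

$$\displaystyle \begin{aligned}\left\|P_{ E(\epsilon)^{\bot}}\xi\right\|_2 \leq \frac{4(\beta-\alpha+2\delta)}{\pi\delta^{2}}\left\|M_N(\epsilon)-M_N\right\| \leq \frac{96\, C'}{\pi \delta_0}\,\epsilon,\end{aligned}$$

i.e. (4.81) with \(C=96C'/(\pi\delta_0)\).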

According to Theorem 4.2, for any j in {1, …, k i}, for large enough N, \(\lambda _{n_{i-1}+j}(M_N(\epsilon ))\) separates from the rest of the spectrum and belongs to a neighborhood of \(\varPhi _{\sigma ,\nu ,c} (\theta _i^{(j)}(\epsilon ))\) where

$$\displaystyle \begin{aligned}\theta_i^{(j)}(\epsilon)=\theta_i\left( 1+\epsilon (k_i -j+1) \right)^2.\end{aligned}$$

If ξ j(𝜖, i) denotes a normalized eigenvector associated to \(\lambda _{n_{i-1}+j}(M_N(\epsilon ))\), Step A above implies that almost surely for any p ∈{1, …, k i}, for any γ > 0, for all large N,

$$\displaystyle \begin{aligned} \left|\left| \langle V_{p}(i),\xi_{j}(\epsilon ,i)\rangle \right|{}^2 - \frac{\delta_{jp}\left(1-\sigma^2 c g_{\mu_{\sigma,\nu,c}}(\varPhi_{\sigma,\nu,c} (\theta_i^{(j)}(\epsilon)))\right)}{ \omega_{\sigma,\nu,c}^{\prime}\left(\varPhi_{\sigma,\nu,c} (\theta_i^{(j)}(\epsilon)) \right)}\right|<\gamma. \end{aligned} $$
(4.82)

The eigenvector ξ decomposes uniquely in the orthonormal basis of eigenvectors of M N(𝜖) as \(\xi =\sum _{j=1}^{k_i}c_j(\epsilon )\xi _j(\epsilon ,i)+\xi (\epsilon )^\perp \), where c j(𝜖) = 〈ξ|ξ j(𝜖, i)〉 and \(\xi (\epsilon )^\perp =P_{ E(\epsilon )^{\bot }}\xi \); necessarily \(\sum _{j=1}^{k_i}|c_j(\epsilon )|^2+\|\xi (\epsilon )^\perp \|_2^2=1\). Moreover, as indicated in relation (4.81), \(\|\xi (\epsilon )^\perp \|_2\leq C\epsilon \). We have

$$\displaystyle \begin{aligned} P_{\ker(\alpha_i(N)I_n-A_NA_N^*)}\xi &= \sum_{j=1}^{k_i}c_j(\epsilon) P_{\ker(\alpha_i(N)I_n-A_NA_N^*)}\xi_j(\epsilon,i) +P_{\ker(\alpha_i(N)I_n-A_NA_N^*)}\xi(\epsilon)^\perp\\ &= \sum_{j=1}^{k_i}c_j(\epsilon) \sum_{l=1}^{k_i}\langle \xi_j(\epsilon,i) | V_{l}(i)\rangle V_{l}(i) +P_{\ker(\alpha_i(N)I_n-A_NA_N^*)}\xi(\epsilon)^\perp. \end{aligned} $$

Take in the above the scalar product with \(\xi =\sum _{j=1}^{k_i}c_j(\epsilon )\xi _j(\epsilon ,i)+\xi (\epsilon )^\perp \) to get

$$\displaystyle \begin{aligned} \langle P_{\ker(\alpha_i(N)I_n-A_NA_N^*)}\xi|\xi\rangle &=\sum_{j,l,s=1}^{k_i}c_j(\epsilon)\langle \xi_j(\epsilon, i) |V_{l}(i) \rangle\overline{c_s(\epsilon)}\langle V_{l}(i)|\xi_s(\epsilon, i)\rangle\\ &\quad +\sum_{j=1}^{k_i}c_j(\epsilon) \sum_{l=1}^{k_i}\langle \xi_j(\epsilon,i)| V_{l}(i) \rangle \langle V_{l}(i)|\xi(\epsilon)^\perp\rangle\\ &\quad +\langle P_{\ker(\alpha_i(N)I_n-A_NA_N^*)}\xi(\epsilon)^\perp|\xi\rangle. \end{aligned} $$

Relation (4.82) indicates that

$$\displaystyle \begin{aligned} \sum_{j,l,s=1}^{k_i}c_j(\epsilon)\langle \xi_j(\epsilon, i) | V_{l}(i) \rangle\overline{c_s(\epsilon)}\langle V_{l}(i)|\xi_s(\epsilon, i)\rangle &= \sum_{j=1}^{k_i}|c_j(\epsilon)|^2|\langle V_{j}(i)|\xi_j(\epsilon, i)\rangle|^2+\varDelta_1\\ &= \sum_{j=1}^{k_i}|c_j(\epsilon)|^2 \frac{\left(1-\sigma^2 c g_{\mu_{\sigma,\nu,c}}(\varPhi_{\sigma,\nu,c} (\theta_i^{(j)}(\epsilon)))\right)}{ \omega_{\sigma,\nu,c}^{\prime}\left(\varPhi_{\sigma,\nu,c} (\theta_i^{(j)}(\epsilon)) \right)}+\varDelta_1 + \varDelta_2, \end{aligned} $$

where, for all large N, \(\vert \varDelta _1\vert \leq \sqrt { \gamma } k_i^3\) and |Δ 2| ≤ γ. Since \(\|\xi (\epsilon )^\perp \|_2\leq C\epsilon \),

$$\displaystyle \begin{aligned} &\left|\sum_{j=1}^{k_i}c_j(\epsilon) \sum_{l=1}^{k_i}\langle \xi_j(\epsilon,i) | V_{l}(i) \rangle \langle V_{l}(i)|\xi(\epsilon)^\perp\rangle \right. \\ &\quad \left. +\langle P_{\ker(\alpha_i(N)I_N-A_NA_N^*)}\xi(\epsilon)^\perp|\xi\rangle\right| \leq\left(k_i^2+1\right){C\epsilon}. \end{aligned} $$

Thus, we conclude that almost surely for any γ > 0, for all large N,

$$\displaystyle \begin{aligned}&\left|\langle P_{\ker(\alpha_i(N)I_n-A_NA_N^*)}\xi|\xi\rangle- \sum_{j=1}^{k_i}\frac{|c_j(\epsilon)|^2 \left(1-\sigma^2 c g_{\mu_{\sigma,\nu,c}}(\varPhi_{\sigma,\nu,c} (\theta_i^{(j)}(\epsilon)))\right)}{ \omega_{\sigma,\nu,c}^{\prime}\left(\varPhi_{\sigma,\nu,c} (\theta_i^{(j)}(\epsilon)) \right)} \right| \\ & \quad \leq(k_i^2+1)C\epsilon+ \sqrt{\gamma}k_i^3+\gamma .{} \end{aligned} $$
(4.83)

Since we have the identity

$$\displaystyle \begin{aligned}\langle P_{\ker(\alpha_i(N)I_n-A_NA_N^*)}\xi|\xi\rangle=\|P_{\ker( \alpha_i(N)I_n-A_NA_N^*)}\xi\|_2^2\end{aligned}$$

and the three obvious convergences \(\lim _{\epsilon \to 0}\omega _{\sigma ,\nu ,c}^{\prime }\left (\varPhi _{\sigma ,\nu ,c} (\theta _i^{(j)}(\epsilon )) \right )=\omega _{\sigma ,\nu ,c}^{\prime }(\rho _{\theta _i})\), \(\lim _{\epsilon \to 0}g_{\mu _{\sigma ,\nu ,c}}\left (\varPhi _{\sigma ,\nu ,c} (\theta _i^{(j)}(\epsilon )) \right )=g_{\mu _{\sigma ,\nu ,c}}(\rho _{\theta _i})\) and \(\lim _{\epsilon \to 0}\sum _{j=1}^{k_i}|c_j(\epsilon )|^2=1\), relation (4.83) concludes Step B and the proof of Theorem 4.3. (Note that we use (2.9) of [11], which holds for any \(x\in \mathbb {C}\setminus \mathbb {R}\), to deduce that \(1-\sigma ^2 c g_{\mu _{\sigma ,\nu ,c}}(\varPhi _{\sigma ,\nu ,c}(\theta _i))= \frac { 1}{1+ \sigma ^2 cg_\nu (\theta _i)}\) by letting x go to Φ σ,ν,c(θ i).)