1 Introduction

In [1], the authors consider the invertibility of d × d-matrices of the form D + R, with D an arbitrary symmetric deterministic matrix and R a symmetric random matrix whose independent entries have continuous distributions with bounded densities. In this setting, a uniform estimate

$$\displaystyle{ \Vert (D + R)^{-1}\Vert = O(d^{2}) }$$
(1)

is shown to hold with high probability. The authors conjecture that (1) may be improved to \(O(\sqrt{d})\). The purpose of this short Note is to prove this in the case where R is Gaussian. Thus we have (stated in the \(d^{-1/2}\)-normalized setting):

Proposition

Let T be an arbitrary matrix in Sym(d). Then, for A a normalized GOE matrix, there is a uniform estimate

$$\displaystyle{ \Vert (A + T)^{-1}\Vert = O(d) }$$
(2)

with large probability.
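
Though not part of the argument, a quick numerical sanity check of (2) is easy to set up. The sketch below is a minimal illustration, assuming a standard GOE sampler; the helper sample_goe, the generator rng, and the particular diagonal T are illustrative choices of ours, not taken from [1] or [2].

```python
import numpy as np

def sample_goe(d, rng):
    """Normalized GOE: symmetric, off-diagonal entry variance 1/d,
    diagonal variance 2/d, so that ||A|| is about 2 for large d."""
    G = rng.standard_normal((d, d))
    return (G + G.T) / np.sqrt(2 * d)

rng = np.random.default_rng(0)
d = 200
T = np.diag(rng.uniform(-10.0, 10.0, size=d))  # an arbitrary diagonal test matrix

ratios = [np.linalg.norm(np.linalg.inv(sample_goe(d, rng) + T), 2) / d
          for _ in range(50)]
print(np.median(ratios))  # the Proposition predicts this ratio stays O(1)
```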

2 Proof of the Proposition

By invariance of GOE under orthogonal transformations, we may assume T diagonal. Let K be a suitable constant and partition

$$\displaystyle{\{1,\ldots,d\} = \Omega _{1} \cup \Omega _{2}}$$

with

$$\displaystyle{\Omega _{1} =\{\, j = 1,\ldots,d;\ \vert T_{jj}\vert > K\,\}.}$$

Denote \(T^{(i)} =\pi _{\Omega _{i}}T\pi _{\Omega _{i}}\) \((i = 1,2)\) and \(A^{(i,j)} =\pi _{\Omega _{i}}A\pi _{\Omega _{j}}\) \((i,j = 1,2)\), where \(\pi_\Omega\) denotes the coordinate projection onto \(\Omega\). Since

$$\displaystyle{(A^{(1,1)} + T^{(1)})^{-1} =\big (I + (T^{(1)})^{-1}A^{(1,1)}\big)^{-1}(T^{(1)})^{-1}}$$

and

$$\displaystyle{\Vert (T^{(1)})^{-1}A^{(1,1)}\Vert \leq \frac{1} {K}\Vert A^{(1,1)}\Vert < \frac{1} {2}}$$

with large probability (since \(\Vert A\Vert = O(1)\) with overwhelming probability and K may be taken sufficiently large), a Neumann series expansion ensures that

$$\displaystyle{ \Vert (A^{(1,1)} + T^{(1)})^{-1}\Vert < 1. }$$
(3)
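
The partition step and the bound (3) can be traced numerically. A minimal sketch, continuing with the assumed sample_goe, T, and rng from the snippet in Sect. 1, with the illustrative choice K = 6 (so that \(\Vert A^{(1,1)}\Vert /K < 1/2\) holds comfortably):

```python
K = 6.0
t = np.diag(T)
omega1 = np.where(np.abs(t) > K)[0]   # Omega_1: large diagonal entries
omega2 = np.where(np.abs(t) <= K)[0]  # Omega_2: the rest

A = sample_goe(d, rng)
A11 = A[np.ix_(omega1, omega1)]       # A^{(1,1)}
T1blk = np.diag(t[omega1])            # T^{(1)}
# Neumann series: ||(T^{(1)})^{-1} A^{(1,1)}|| <= ||A^{(1,1)}||/K < 1/2 w.h.p.,
# hence ||(A^{(1,1)} + T^{(1)})^{-1}|| <= 2/K < 1, i.e. (3).
print(np.linalg.norm(np.linalg.inv(A11 + T1blk), 2))  # should be < 1
```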

Next, write by the Schur complement formula

$$\displaystyle{ (A + T)^{-1} = \left (\begin{array}{cc} (A^{(1,1)} + T^{(1)})^{-1} + (A^{(1,1)} + T^{(1)})^{-1}A^{(1,2)}S^{-1}A^{(2,1)}(A^{(1,1)} + T^{(1)})^{-1} & \ -(A^{(1,1)} + T^{(1)})^{-1}A^{(1,2)}S^{-1} \\ -S^{-1}A^{(2,1)}(A^{(1,1)} + T^{(1)})^{-1} & \ S^{-1} \end{array}\right ) }$$
(4)

defining

$$\displaystyle{ S = A^{(2,2)} + T^{(2)} - A^{(2,1)}(A^{(1,1)} + T^{(1)})^{-1}A^{(1,2)}. }$$
(5)
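
The identity (4) with (5) is easy to check numerically. The following sketch, still using the variables assumed above, reassembles the inverse from the Schur complement and compares it with a direct inversion in the basis ordered as \(\Omega_1\) followed by \(\Omega_2\):

```python
A12 = A[np.ix_(omega1, omega2)]       # A^{(1,2)}
A21 = A[np.ix_(omega2, omega1)]       # A^{(2,1)}
A22 = A[np.ix_(omega2, omega2)]       # A^{(2,2)}
T2blk = np.diag(t[omega2])            # T^{(2)}

B = np.linalg.inv(A11 + T1blk)
S = A22 + T2blk - A21 @ B @ A12       # Schur complement, cf. (5)
Sinv = np.linalg.inv(S)
M = np.block([[B + B @ A12 @ Sinv @ A21 @ B, -B @ A12 @ Sinv],
              [-Sinv @ A21 @ B,              Sinv]])
perm = np.concatenate([omega1, omega2])
direct = np.linalg.inv((A + T)[np.ix_(perm, perm)])
print(np.abs(M - direct).max())       # agreement up to machine precision
```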

Hence, by (4), using (3) together with \(\Vert A\Vert = O(1)\) with large probability,

$$\displaystyle{ \begin{array}{rcl} \Vert (A + T)^{-1}\Vert & \leq &C(1 +\Vert (A^{(1,1)} + T^{(1)})^{-1}\Vert ^{2})(1 +\Vert A\Vert ^{2})\Vert S^{-1}\Vert \\ & \leq &C_{1}\Vert S^{-1}\Vert. \end{array} }$$
(6)

Note that \(A^{(2,2)}\) and \(A^{(2,1)}(A^{(1,1)} + T^{(1)})^{-1}A^{(1,2)}\) are independent in the A-randomness. Thus S may be written in the form

$$\displaystyle{ S = A^{(2,2)} + S_{ 0} }$$
(7)

with \(S_0 \in \mathrm{Sym}(d_1)\), where \(d_1 = \vert \Omega_2\vert\), satisfying \(\Vert S_0\Vert = O(1)\) (by construction \(\Vert T^{(2)}\Vert \leq K\), and the last term in (5) is O(1) by (3)), and with \(A^{(2,2)}\) and \(S_0\) independent.

Fixing \(S_0\), we may again exploit the invariance to put \(S_0\) in diagonal form, obtaining

$$\displaystyle{ A^{(2,2)} + S_{ 0}^{{\prime}}\ \text{ with }\ S_{ 0}^{{\prime}}\ \text{ diagonal }. }$$
(8)

Hence we have reduced the original problem to the case where T is diagonal and \(\Vert T\Vert < K + 1\).

Note however that (8) is a \((d_1 \times d_1)\)-matrix and, since \(d_1\) may be significantly smaller than d, \(A^{(2,2)}\) is no longer correctly normalized. Thus, after renormalization of \(A^{(2,2)}\), setting

$$\displaystyle{ A_{1} =\Big ( \frac{d} {d_{1}}\Big)^{\frac{1} {2} }A^{(2,2)} }$$
(9)

and denoting

$$\displaystyle{ T_{1} =\Big ( \frac{d} {d_{1}}\Big)^{\frac{1} {2} }S_{0}^{{\prime}} }$$
(10)

we have

$$\displaystyle{ \Vert T_{1}\Vert <\Big ( \frac{d} {d_{1}}\Big)^{\frac{1} {2} }(K + 1) }$$
(11)

while the condition [cf. (6)]

$$\displaystyle{ \Vert (A^{(2,2)} + S_{ 0}^{{\prime}})^{-1}\Vert = O(d) }$$
(12)

becomes

$$\displaystyle{ \Vert (A_{1} + T_{1})^{-1}\Vert = O(\sqrt{dd_{ 1}}). }$$
(13)
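
The rescaling (9)-(10) can likewise be illustrated: \(A^{(2,2)}\) is a principal submatrix of a normalized GOE matrix, so its entries have variance 1/d, and the factor \((d/d_1)^{1/2}\) restores the variance \(1/d_1\) of a normalized \((d_1 \times d_1)\) GOE matrix. A sketch with the variables assumed above:

```python
d1 = len(omega2)
S0 = T2blk - A21 @ B @ A12            # S = A^{(2,2)} + S_0, cf. (7)
evals, Q = np.linalg.eigh(S0)         # orthogonal Q diagonalizes S_0, cf. (8)
# Q depends only on blocks independent of A^{(2,2)}, so conjugating by Q
# preserves the (conditional) GOE law of A^{(2,2)}.
A1 = np.sqrt(d / d1) * (Q.T @ A22 @ Q)      # A_1, cf. (9)
T1n = np.sqrt(d / d1) * np.diag(evals)      # T_1, cf. (10)
print(np.linalg.norm(np.linalg.inv(A1 + T1n), 2) / np.sqrt(d * d1))  # cf. (13)
```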

At this point, we invoke Theorem 1.2 from [2]. As Vershynin kindly pointed out to the author, the argument in [2] simplifies considerably in the Gaussian case. Examination of the proof shows that the statement of Theorem 1.2 in [2] can in fact be improved in this case as follows.

Claim

Let A be a d × d normalized GOE matrix and T a deterministic, diagonal (d × d)-matrix. Then

$$\displaystyle{ \mathbb{P}[\Vert (A + T)^{-1}\Vert >\lambda d] \leq C(1 +\Vert T\Vert )\lambda ^{-\frac{1} {9} }. }$$
(14)
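
A crude Monte Carlo experiment (reusing the illustrative sample_goe and rng from the first sketch) gives a feel for the tail bound (14); of course the exponent \(\frac{1}{9}\) is what the proof yields, and small-d simulation cannot confirm the precise power.

```python
def tail_prob(d, Tmat, lam, trials, rng):
    """Empirical frequency of ||(A + T)^{-1}|| > lam * d over GOE samples."""
    hits = sum(np.linalg.norm(np.linalg.inv(sample_goe(d, rng) + Tmat), 2) > lam * d
               for _ in range(trials))
    return hits / trials

Td = np.diag(rng.uniform(-3.0, 3.0, size=100))
for lam in (0.5, 1.0, 2.0, 4.0):
    print(lam, tail_prob(100, Td, lam, 200, rng))  # should decay in lam, cf. (14)
```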

We distinguish two cases. If \(d_{1} \geq \frac{1}{C_{2}}d\), where \(C_{2} > C_{1}^{3}\) is a constant to be specified below, we immediately apply the above Claim with d replaced by \(d_1\), A by \(A_1\), and T by \(T_1\). Thus by (11)

$$\displaystyle{ \mathbb{P}[\Vert (A_{1} + T_{1})^{-1}\Vert >\lambda \sqrt{dd_{1}}] \leq C(1 +\Vert T_{1}\Vert )\Big(\frac{d_{1}}{d}\Big)^{\frac{1}{18}}\lambda ^{-\frac{1}{9}} < C\big(1 + \sqrt{C_{2}}(K + 1)\big)\lambda ^{-\frac{1}{9}} }$$
(15)

and (12) follows. If \(d_{1} < \frac{1}{C_{2}}d\), repeat the preceding argument, replacing A by \(A_1\) and T by \(T_1\). In the definition of \(\Omega _{1}\), replace K by \(K_1 = 2K\), so that (3) will hold with probability at least

$$\displaystyle{ 1 - e^{-cK_{1}^{2} } = 1 - e^{-4cK^{2} } }$$
(16)

the point being that the measure bounds \(e^{-c4^{s}K^{2}}\), \(s = 0,1,2,\ldots\), obtained in the iteration sum up to at most \(e^{-c_{1}K^{2}} = o(1)\).

Note that in (13) we only seek an estimate

$$\displaystyle{ \Vert (A_{1} + T_{1})^{-1}\Vert < O\Big(\frac{\sqrt{C_{2}}} {C_{1}} d_{1}\Big) }$$
(17)

hence, cf. (12)

$$\displaystyle{ \Vert (A_{1}^{(2,2)} + S_{1,0}^{{\prime}})^{-1}\Vert < O\Big(\frac{\sqrt{C_{2}}}{C_{1}} d_{1}\Big) }$$
(18)

where \(A_{1}^{(2,2)}\) and \(S_{1,0}\) are defined as before, considering now \(A_1\) and \(T_1\). Hence (13) gets replaced by

$$\displaystyle{ \Vert (A_{2} + T_{2})^{-1}\Vert = O\Big(\frac{\sqrt{C_{2}}} {C_{1}} \sqrt{d_{1 } d_{2}}\Big) }$$
(19)

where \(A_2\), \(T_2\) are \((d_2 \times d_2)\)-matrices,

$$\displaystyle{ \Vert T_{2}\Vert <\Big (\frac{d_{1}} {d_{2}}\Big)^{\frac{1} {2} }(2K + 1). }$$
(20)

Assuming \(d_{2} \geq \frac{1} {C_{2}} d_{1}\), we obtain instead of (15)

$$\displaystyle{ \begin{array}{rcl} \mathbb{P}[\Vert (A_{2} + T_{2})^{-1}\Vert >\lambda \frac{\sqrt{C_{2}}} {C_{1}} \sqrt{d_{1 } d_{2}}]& \leq &C\big(1 + \sqrt{C_{2}}(K_{1} + 1)\big)\Big(\frac{\sqrt{C_{2}}} {C_{1}} \lambda \Big)^{-\frac{1} {9} } \\ & <&C\big(1 + \sqrt{C_{2}}(K + 1)\big)\ (2C_{1}^{\frac{1} {9} }C_{2}^{-\frac{1} {18} })\lambda ^{-\frac{1} {9} } \end{array} }$$
(21)

and we take \(C_{2}\) large enough to ensure that \(2C_{1}^{\frac{1}{9}}C_{2}^{-\frac{1}{18}} < \frac{1}{2}\).

The continuation of the process is now clear and terminates in at most \(2\log d\) steps. At step s, if \(d_{s+1} \geq \frac{1}{C_{2}}d_{s}\), we obtain

$$\displaystyle{ \mathbb{P}\Big[\Vert (A_{s+1} + T_{s+1})^{-1}\Vert >\lambda \Big (\frac{\sqrt{C_{2}}} {C_{1}} \Big)^{s}\sqrt{d_{ s}d_{s+1}}\Big] < C\big(1 + \sqrt{C_{2}}(K + 1)\big)2^{-s}\lambda ^{-\frac{1} {9} }. }$$
(22)

Summation over s gives a measure estimate \(O(\lambda ^{-\frac{1} {9} }) = o(1)\).
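
For concreteness, the whole iteration can be rendered schematically as follows. This is an illustration under the assumptions of the earlier snippets, not the proof itself: the probabilistic events and the dichotomy \(d_{s+1} \geq \frac{1}{C_{2}}d_{s}\) are not enforced here.

```python
import numpy as np

def reduce_to_bounded(A, tdiag, K):
    """Schematic form of the iteration (illustration only): repeatedly split
    off diagonal entries exceeding the current threshold, pass to the Schur
    complement (5), rediagonalize (8), rescale as in (9)-(10), double K."""
    d = len(tdiag)
    while True:
        big = np.abs(tdiag) > K
        if not big.any():
            return A, tdiag, K          # reduced case ||T|| <= K: apply the Claim
        small = ~big
        d1 = int(small.sum())
        if d1 == 0:                     # all entries large: (3) alone suffices
            return None, None, K
        B = np.linalg.inv(A[np.ix_(big, big)] + np.diag(tdiag[big]))
        A12 = A[np.ix_(big, small)]
        S0 = np.diag(tdiag[small]) - A12.T @ B @ A12    # cf. (5), (7)
        evals, Q = np.linalg.eigh(S0)                   # cf. (8)
        scale = np.sqrt(d / d1)
        A = scale * (Q.T @ A[np.ix_(small, small)] @ Q) # A_{s+1}, cf. (9)
        tdiag = scale * evals                           # T_{s+1}, cf. (10)
        d, K = d1, 2 * K                                # K_{s+1} = 2 K_s
```

Since the active dimension strictly decreases at each pass, the loop terminates, in line with the \(2\log d\) bound on the number of steps noted above.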

This concludes the proof of the Proposition. From a quantitative point of view, the previous argument shows the following.

Proposition’

Let T and A be as in the Proposition. Then

$$\displaystyle{ \mathbb{P}[\Vert (A + T)^{-1}\Vert >\lambda d] < O(\lambda ^{-\frac{1} {10} }). }$$
(23)

Note

The author’s interest in this issue arose in the study (joint with I. Goldsheid) of quantitative localization of eigenfunctions of random band matrices. The purpose of this Note is to justify some estimates in this forthcoming work.