Abstract
The Brown measure is a generalization of the eigenvalue distribution for a general (not necessarily normal) operator in a finite von Neumann algebra (i.e. a von Neumann algebra which possesses a trace). It was introduced by Larry Brown in [46], but fell into obscurity soon after. It was revived by Haagerup and Larsen [85] and played an important role in Haagerup’s investigations around the invariant subspace problem [87].
The Brown measure is a generalization of the eigenvalue distribution for a general (not necessarily normal) operator in a finite von Neumann algebra (i.e. a von Neumann algebra which possesses a trace). It was introduced by Larry Brown in [46], but fell into obscurity soon after. It was revived by Haagerup and Larsen [85] and played an important role in Haagerup’s investigations around the invariant subspace problem [87]. By using a “hermitization” idea, one can actually calculate the Brown measure by \(M_{2}(\mathbb{C})\)-valued free probability tools. This leads to an extension of the algorithm from the last chapter to the calculation of arbitrary polynomials in free variables. For generic non-self-adjoint random matrix models, their asymptotic complex eigenvalue distribution is expected to converge to the Brown measure of the limit operator (in ∗-distribution). However, because the Brown measure is not continuous with respect to convergence in ∗-moments, this is an open problem in the general case.
11.1 Brown measure for normal operators
Let (M, τ) be a W ∗-probability space and consider an operator a ∈ M. The relevant information about a is contained in its ∗-distribution which is by definition the collection of all ∗-moments of a with respect to τ. In the case of self-adjoint or normal a, we can identify this distribution with an analytic object, a probability measure μ a on the spectrum of a. Let us first recall these facts.
If a = a ∗ is self-adjoint, there exists a uniquely determined probability measure μ a on \(\mathbb{R}\) such that for all \(n \in \mathbb{N}\)
$$\displaystyle{\tau (a^{n}) =\int _{\mathbb{R}}t^{n}\,d\mu _{a}(t),}$$
and the support of μ a is the spectrum of a; see also the discussion after equation (2.2) in Chapter 2.
More generally, if a ∈ M is normal (i.e. aa ∗ = a ∗ a), then the spectral theorem provides us with a projection-valued spectral measure E a , and the Brown measure is just the spectral measure μ a = τ ∘ E a . Note that in the normal case μ a may not be determined by the moments of a. Indeed, if a = u is a Haar unitary, then the moments of u are the same as the moments of the zero operator. Of course, their ∗-moments are different. For a normal operator a, its spectral measure μ a is uniquely determined by
$$\displaystyle{ \tau (a^{n}(a^{{\ast}})^{m}) =\int _{\mathbb{C}}z^{n}\bar{z}^{m}\,d\mu _{a}(z) }$$(11.1)
for all \(m,n \in \mathbb{N}\). The support of μ a is again the spectrum of a.
We will now try to assign to any operator a ∈ M a probability measure μ a on its spectrum, which contains relevant information about the ∗-distribution of a. This μ a will be called the Brown measure of a. One should note that for non-normal operators, there are many more ∗-moments of a than those appearing in (11.1). There is no possibility to capture all the ∗-moments of a by the ∗-moments of a probability measure. Hence, we will necessarily lose some information about the ∗-distribution of a when we go over to the Brown measure of a. It will also turn out that we need our state τ to be a trace in order to define μ a . Hence, in the following, we will only work in tracial W ∗-probability spaces (M, τ). Recall that this means that τ is a faithful and normal trace. Von Neumann algebras which admit such faithful and normal traces are usually referred to as finite von Neumann algebras. If M is a finite factor, then a tracial state \(\tau: M \rightarrow \mathbb{C}\) is unique on M and is automatically normal and faithful.
11.2 Brown measure for matrices
In the finite-dimensional case \(M = M_{n}(\mathbb{C})\), the Brown measure μ T for a normal matrix \(T \in M_{n}(\mathbb{C})\), determined by (11.1), really is the eigenvalue distribution of the matrix. It is clear that in the case of matrices, we can extend this definition to the general, non-normal case. For a general matrix \(T \in M_{n}(\mathbb{C})\), the spectrum σ(T) is given by the roots of the characteristic polynomial
$$\displaystyle{P(\lambda ) =\det (\lambda -T) = (\lambda -\lambda _{1})\cdots (\lambda -\lambda _{n}),}$$
where λ 1, …, λ n are the roots repeated according to algebraic multiplicity. In this case, we have as eigenvalue distribution (and thus as Brown measure)
$$\displaystyle{\mu _{T} = \frac{1}{n}\left (\delta _{\lambda _{1}} + \cdots +\delta _{\lambda _{n}}\right ).}$$
We want to extend this definition of μ T to an infinite-dimensional situation. Since the characteristic polynomial does not make sense in such a situation, we have to find an analytic way of determining the roots of P(λ) which survives also in an infinite-dimensional setting.
Consider
$$\displaystyle{\log \vert P(\lambda )\vert =\sum _{i=1}^{n}\log \vert \lambda -\lambda _{i}\vert.}$$
We claim that the function λ ↦ log | λ | is harmonic in \(\mathbb{C}\setminus \{0\}\) and that in general it has Laplacian
$$\displaystyle{ \nabla ^{2}\log \vert \lambda \vert = 2\pi \delta _{0} }$$(11.2)
in the distributional sense. Here, the Laplacian is given by
$$\displaystyle{\nabla ^{2} = \frac{\partial ^{2}}{\partial \lambda _{r}^{2}} + \frac{\partial ^{2}}{\partial \lambda _{i}^{2}},}$$
where λ r and λ i are the real and imaginary part of \(\lambda \in \mathbb{C}\). (Note that we use the symbol ∇2 for the Laplacian, since we reserve the symbol Δ for the Fuglede-Kadison determinant of the next section.)
Let us prove this claim on the behaviour of log | λ |. For λ ≠ 0, we write ∇2 in terms of polar coordinates,
$$\displaystyle{\nabla ^{2} = \frac{\partial ^{2}}{\partial r^{2}} + \frac{1}{r} \frac{\partial }{\partial r} + \frac{1}{r^{2}} \frac{\partial ^{2}}{\partial \theta ^{2}},}$$
and have
$$\displaystyle{\nabla ^{2}\log \vert \lambda \vert = \left ( \frac{\partial ^{2}}{\partial r^{2}} + \frac{1}{r} \frac{\partial }{\partial r}\right )\log r = -\frac{1}{r^{2}} + \frac{1}{r^{2}} = 0.}$$
Ignoring the singularity at 0, we can write formally, by Green's theorem,
$$\displaystyle{\int _{B(0,r)}\nabla ^{2}\log \vert \lambda \vert \,d\lambda =\oint _{\partial B(0,r)} \frac{\partial }{\partial r}\log r\,ds = \frac{1}{r} \cdot 2\pi r = 2\pi.}$$
That is,
$$\displaystyle{\int _{B(0,r)}\nabla ^{2}\log \vert \lambda \vert \,d\lambda = 2\pi,}$$
independent of r > 0. Hence, ∇2log | λ | must be 2πδ 0.
Exercise 1.
By integrating against a test function show rigorously that ∇2log | λ | = 2πδ 0 as distributions.
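As a quick numerical sanity check of the claim in Exercise 1 (a sketch, not part of the original text), one can pair log | λ | with the Laplacian of a concrete test function, here the Gaussian φ(λ) = e^{−|λ|²}, on a grid of cell centres that avoids the singularity at 0; the pairing should come out as 2πφ(0) = 2π:

```python
import numpy as np

# Pair log|lambda| with the Laplacian of phi(lambda) = exp(-|lambda|^2).
# If nabla^2 log|lambda| = 2*pi*delta_0, the integral equals 2*pi*phi(0) = 2*pi.
h = 0.01
xs = np.arange(-6 + h / 2, 6, h)        # cell centres, never hitting 0
X, Y = np.meshgrid(xs, xs)
R2 = X**2 + Y**2
log_abs = 0.5 * np.log(R2)              # log|lambda|
lap_phi = (4 * R2 - 4) * np.exp(-R2)    # 2D Laplacian of exp(-r^2)
integral = np.sum(log_abs * lap_phi) * h * h
print(integral, 2 * np.pi)              # both close to 6.283...
```

The midpoint rule handles the integrable log-singularity at the origin well; the truncation to [−6, 6]² is harmless because of the Gaussian decay.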
Given the fact (11.2), we can now rewrite the eigenvalue distribution μ T in the form
$$\displaystyle{ \mu _{T} = \frac{1}{2\pi n}\nabla ^{2}\log \vert \det (T-\lambda )\vert = \frac{1}{2\pi }\nabla ^{2}\log \vert \det (T-\lambda )\vert ^{1/n}. }$$(11.3)
As there exists a version of the determinant in an infinite-dimensional setting, we can use this formula to generalize the definition of μ T .
11.3 Fuglede-Kadison determinant in finite von Neumann algebras
In order to use (11.3) in infinite dimensions, we need a generalization of the determinant. Such a generalization was provided by Fuglede and Kadison [75] in 1952 for operators in a finite factor M; the case of a general finite von Neumann algebra is a straightforward extension.
Definition 1.
Let (M, τ) be a tracial W ∗-probability space and consider a ∈ M. Its Fuglede-Kadison determinant Δ(a) is defined as follows. If a is invertible, one can put
$$\displaystyle{\varDelta (a) =\exp \left [\tau (\log \vert a\vert )\right ] \in (0,\infty ),}$$
where | a | = (a ∗ a)1∕2. More generally, we define
$$\displaystyle{\varDelta (a) =\lim _{\varepsilon \searrow 0}\exp \left [\tau (\log (\vert a\vert +\varepsilon ))\right ] \in [0,\infty ).}$$
By functional calculus and the monotone convergence theorem, the limit always exists.
This determinant Δ has the following properties:
∘ Δ(ab) = Δ(a)Δ(b) for all a, b ∈ M.
∘ Δ(a) = Δ(a ∗) = Δ( | a | ) for all a ∈ M.
∘ Δ(u) = 1 when u is unitary.
∘ Δ(λa) = | λ | Δ(a) for all \(\lambda \in \mathbb{C}\) and a ∈ M.
∘ a ↦ Δ(a) is upper semicontinuous in norm-topology and in ∥⋅ ∥ p -norm for all p > 0.
Let us check what this definition gives in the case of matrices, \(M = M_{n}(\mathbb{C})\), τ = tr. If T is invertible, then we can write
$$\displaystyle{\vert T\vert = U\,\mathrm{diag}(t_{1},\ldots,t_{n})\,U^{{\ast}}\qquad \text{for a unitary }U,}$$
with t i > 0. Then we have
$$\displaystyle{\mathrm{tr}(\log \vert T\vert ) = \frac{1}{n}\sum _{i=1}^{n}\log t_{i} =\log (t_{1}\cdots t_{n})^{1/n}}$$
and
$$\displaystyle{\varDelta (T) = (t_{1}\cdots t_{n})^{1/n} = (\det \vert T\vert )^{1/n}.}$$
Note that det | T | = | detT |, because we have the polar decomposition T = V | T |, where V is unitary and hence | detV | = 1.
Thus, we have in finite dimensions
$$\displaystyle{ \varDelta (T) = \vert \det T\vert ^{1/n}. }$$(11.4)
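The finite-dimensional identity Δ(T) = exp(tr log | T | ) = | det T | ^{1∕n} is easy to check numerically; the following sketch (an illustration, not from the text) compares both sides for a random complex matrix, using the singular values of T as the eigenvalues t_i of | T |:

```python
import numpy as np

# For M = M_n(C) with tau = tr (normalized trace), the Fuglede-Kadison
# determinant Delta(T) = exp(tr(log|T|)) should equal |det T|^{1/n}.
rng = np.random.default_rng(0)
n = 5
T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
s = np.linalg.svd(T, compute_uv=False)        # singular values t_i of T
delta = np.exp(np.mean(np.log(s)))            # exp(tr(log|T|)), tr normalized
delta2 = abs(np.linalg.det(T)) ** (1.0 / n)   # |det T|^{1/n}
print(delta, delta2)                          # agree up to rounding
```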
So we are facing the question whether it is possible to make sense out of
$$\displaystyle{ \mu _{a} = \frac{1}{2\pi }\nabla ^{2}\log \varDelta (a-\lambda ) }$$(11.5)
for operators a in general finite von Neumann algebras, where Δ denotes the Fuglede-Kadison determinant. (Here and in the following, we will write a −λ for a −λ1.)
11.4 Subharmonic functions and their Riesz measures
Definition 2.
A function \(f: \mathbb{R}^{2} \rightarrow [-\infty,\infty )\) is called subharmonic if
(i) f is upper semicontinuous, i.e.
$$\displaystyle{f(z) \geq \limsup _{n\rightarrow \infty }f(z_{n}),\qquad \text{whenever}\qquad z_{n} \rightarrow z;}$$
(ii) f satisfies the submean inequality: for every circle, the value of f at the centre is less than or equal to the mean value of f over the circle, i.e.
$$\displaystyle{f(z) \leq \frac{1} {2\pi }\int _{0}^{2\pi }f(z + re^{i\theta })d\theta;}$$
(iii) f is not constantly equal to −∞.
If f is subharmonic then f is Borel measurable, f(z) > −∞ almost everywhere with respect to Lebesgue measure and \(f \in L_{loc}^{1}(\mathbb{R}^{2})\). One has the following classical theorem for subharmonic functions; e.g. see [13, 92].
Theorem 3.
If f is subharmonic on \(\mathbb{R}^{2} \equiv \mathbb{C}\) , then ∇2 f exists in the distributional sense, and it is a positive Radon measure ν f ; i.e. ν f is uniquely determined by
$$\displaystyle{\int _{\mathbb{R}^{2}}f(\lambda )\,\nabla ^{2}\varphi (\lambda )\,d\lambda =\int _{\mathbb{R}^{2}}\varphi (\lambda )\,d\nu _{f}(\lambda )\qquad \text{for all }\varphi \in C_{c}^{\infty }(\mathbb{R}^{2}).}$$
If ν f has compact support, then
$$\displaystyle{f(\lambda ) = h(\lambda ) +\int _{\mathbb{C}}\log \vert \lambda -z\vert \,d\nu _{f}(z),}$$
where h is a harmonic function on \(\mathbb{C}\).
Definition 4.
The measure ν f = ∇2 f is called the Riesz measure of the subharmonic function f.
11.5 Definition of the Brown measure
If we apply this construction to our question about (11.5), we get the construction of the Brown measure as follows. This was defined by L. Brown in [46] (for the case of factors); for more information, see also [85].
Theorem 5.
Let (M, τ) be a tracial W ∗ -probability space. Then we have:
(i) The function λ ↦ logΔ(a −λ) is subharmonic.
(ii) The corresponding Riesz measure
$$\displaystyle{ \mu _{a}:= \frac{1} {2\pi }\nabla ^{2}\log \varDelta (a-\lambda ) }$$(11.6)
is a probability measure on \(\mathbb{C}\) with support contained in the spectrum of a.
(iii) Moreover, one has for all \(\lambda \in \mathbb{C}\)
$$\displaystyle{ \int _{\mathbb{C}}\log \vert \lambda - z\vert d\mu _{a}(z) =\log \varDelta (a-\lambda ) }$$(11.7)
and this characterizes μ a among all probability measures on \(\mathbb{C}\).
Definition 6.
The measure μ a from Theorem 5 is called the Brown measure of a.
Proof (Sketch of Proof of Theorem 5(i)): Suppose a ∈ M. We want to show that f(λ): = logΔ(a −λ) is subharmonic. We have
$$\displaystyle{\varDelta (a-\lambda ) =\exp \left [\tau (\log \vert a-\lambda \vert )\right ],\qquad \text{hence}\qquad f(\lambda ) = \frac{1}{2}\tau \left [\log \left ((a-\lambda )^{{\ast}}(a-\lambda )\right )\right ].}$$
Thus
$$\displaystyle{f(\lambda ) =\lim _{\varepsilon \searrow 0}\frac{1}{2}\tau \left [\log \left ((a-\lambda )^{{\ast}}(a-\lambda )+\varepsilon \right )\right ]}$$
as a decreasing limit as ɛ ↘ 0. So, with the notations
$$\displaystyle{a_{\lambda }:= a-\lambda \qquad \text{and}\qquad f_{\varepsilon }(\lambda ):= \frac{1}{2}\tau \left [\log \left (a_{\lambda }^{{\ast}}a_{\lambda }+\varepsilon \right )\right ],}$$
we have
$$\displaystyle{f(\lambda ) =\lim _{\varepsilon \searrow 0}f_{\varepsilon }(\lambda ).}$$
For ɛ > 0, the function f ɛ is a C 2-function, and therefore f ɛ being subharmonic is equivalent to ∇2 f ɛ ≥ 0 as a function. But ∇2 f ɛ can be computed explicitly:
$$\displaystyle{ \nabla ^{2}f_{\varepsilon }(\lambda ) = 2\varepsilon \,\tau \left [\left (a_{\lambda }^{{\ast}}a_{\lambda }+\varepsilon \right )^{-1}\left (a_{\lambda }a_{\lambda }^{{\ast}}+\varepsilon \right )^{-1}\right ]. }$$(11.8)
Since we have for general positive operators x and y that τ(xy) = τ(x 1∕2 yx 1∕2) ≥ 0, we see that ∇2 f ɛ (λ) ≥ 0 for all \(\lambda \in \mathbb{C}\) and thus f ɛ is subharmonic.
The fact that f ɛ ↘ f implies then that f is upper semicontinuous and satisfies the submean inequality. Furthermore, if λ ∉ σ(a), then a −λ is invertible; hence, Δ(a −λ) > 0, and thus f(λ) ≠ −∞. Hence, f is subharmonic. □
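The identity of Exercise 2 below, ∇² f_ε(λ) = 2ε τ[(a_λ*a_λ+ε)^{−1}(a_λa_λ*+ε)^{−1}], can be checked numerically in matrices (a sketch of my own, not from the text): compare a finite-difference Laplacian of f_ε with the closed-form right-hand side, which is manifestly nonnegative.

```python
import numpy as np

# f_eps(lam) = (1/2) tr log(b^* b + eps), b = a - lam, tr the normalized trace.
# Check that the 5-point-stencil Laplacian matches
# 2*eps*tr[(b^* b + eps)^{-1} (b b^* + eps)^{-1}].
rng = np.random.default_rng(1)
n, eps, lam, h = 4, 0.3, 0.7 + 0.2j, 1e-4
a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
I = np.eye(n)

def f_eps(l):
    b = a - l * I
    ev = np.linalg.eigvalsh(b.conj().T @ b + eps * I)
    return 0.5 * np.mean(np.log(ev))

lap_fd = (f_eps(lam + h) + f_eps(lam - h) + f_eps(lam + 1j * h)
          + f_eps(lam - 1j * h) - 4 * f_eps(lam)) / h**2

b = a - lam * I
inv1 = np.linalg.inv(b.conj().T @ b + eps * I)
inv2 = np.linalg.inv(b @ b.conj().T + eps * I)
lap_exact = 2 * eps * np.trace(inv1 @ inv2).real / n
print(lap_fd, lap_exact)    # agree up to finite-difference error
```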
Exercise 2.
We want to prove here (11.8). We consider f ɛ (λ) as a function in λ and \(\bar{\lambda }\); hence, the Laplacian is given by (where as usual λ = λ r + iλ i)
$$\displaystyle{\nabla ^{2} = 4 \frac{\partial }{\partial \lambda } \frac{\partial }{\partial \bar{\lambda }},}$$
where
$$\displaystyle{ \frac{\partial }{\partial \lambda } = \frac{1}{2}\left ( \frac{\partial }{\partial \lambda _{r}} - i \frac{\partial }{\partial \lambda _{i}}\right ),\qquad \frac{\partial }{\partial \bar{\lambda }} = \frac{1}{2}\left ( \frac{\partial }{\partial \lambda _{r}} + i \frac{\partial }{\partial \lambda _{i}}\right ).}$$
(i) Show that we have for each \(n \in \mathbb{N}\) (by relying heavily on the fact that τ is a trace)
$$\displaystyle{\frac{\partial } {\partial \lambda }\tau [(a_{\lambda }^{{\ast}}a_{\lambda })^{n}] = -n\tau [(a_{\lambda }^{{\ast}}a_{\lambda })^{n-1}a_{\lambda }^{{\ast}}]}$$
and
$$\displaystyle{\frac{\partial } {\partial \bar{\lambda }}\tau [(a_{\lambda }^{{\ast}}a_{\lambda })^{n}a_{\lambda }^{{\ast}}] = -\sum _{ j=0}^{n}\tau [(a_{\lambda }a_{\lambda }^{{\ast}})^{j}(a_{\lambda }^{{\ast}}a_{\lambda })^{n-j}].}$$
(ii) Prove (11.8) by using the power series expansion of
$$\displaystyle{\log (a_{\lambda }^{{\ast}}a_{\lambda }+\varepsilon ) =\log \varepsilon +\log \left (1 + \frac{a_{\lambda }^{{\ast}}a_{\lambda }} {\varepsilon } \right ).}$$
In the case of a normal operator, the Brown measure is just the spectral measure τ ∘ E a , where E a is the projection-valued spectral measure according to the spectral theorem. In that case, μ a is determined by the equality of the ∗-moments of μ a and of a, i.e. by
$$\displaystyle{\int _{\mathbb{C}}z^{n}\bar{z}^{m}\,d\mu _{a}(z) =\tau (a^{n}(a^{{\ast}})^{m})}$$
for all \(m,n \in \mathbb{N}\). If a is not normal, then this equality does not hold anymore. Only the equality of the moments is always true, i.e. for all \(n \in \mathbb{N}\)
$$\displaystyle{\int _{\mathbb{C}}z^{n}\,d\mu _{a}(z) =\tau (a^{n}).}$$
One should note, however, that the Brown measure of a is in general actually determined by the ∗-moments of a. This is the case, since τ is faithful and the Brown measure depends only on τ restricted to the von Neumann algebra generated by a; the latter is uniquely determined by the ∗-moments of a; see also Chapter 6, Theorem 6.2.
What one can say in general about the relation between the ∗-moments of μ a and of a is the following generalized Weyl Inequality of Brown [46]. For any a ∈ M and 0 < p < ∞, we have
$$\displaystyle{\int _{\mathbb{C}}\vert \lambda \vert ^{p}\,d\mu _{a}(\lambda ) \leq \Vert a\Vert _{p}^{p} =\tau (\vert a\vert ^{p}).}$$
This was strengthened by Haagerup and Schultz [87] in the following way: If M inv denotes the invertible elements in M, then we actually have for all a ∈ M and every p > 0 that
$$\displaystyle{\int _{\mathbb{C}}\vert \lambda \vert ^{p}\,d\mu _{a}(\lambda ) =\inf _{b\in M_{\mathit{inv}}}\Vert bab^{-1}\Vert _{p}^{p}.}$$
Note here that because of Δ(bab −1) = Δ(a), we have \(\mu _{bab^{-1}} =\mu _{a}\) for b ∈ M inv .
Exercise 3.
Let (M, τ) be a tracial W ∗-probability space and a ∈ M. Let p(z) be a polynomial in the variable z (not involving \(\bar{z}\)), hence p(a) ∈ M. Show that the Brown measure of p(a) is the push-forward of the Brown measure of a, i.e. μ p(a) = p ∗(μ a ), where the push-forward p ∗(ν) of a measure ν is defined by p ∗(ν)(E) = ν(p −1(E)) for any measurable set E.
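In matrices, the statement of Exercise 3 is just the spectral mapping theorem: the eigenvalues of p(T) are the p-images of the eigenvalues of T. The following sketch (my illustration, with an arbitrarily chosen polynomial) checks this numerically:

```python
import numpy as np

# Spectral mapping: eigenvalues of p(T) = p(eigenvalues of T),
# so the eigenvalue distribution of p(T) is the push-forward under p.
rng = np.random.default_rng(2)
n = 6
T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# p(z) = z^3 - 2z + 1, a polynomial in z only (no conjugates)
pT = T @ T @ T - 2 * T + np.eye(n)            # p evaluated at the matrix
ev = np.linalg.eigvals(T)
eig_pT = np.sort_complex(np.linalg.eigvals(pT))
p_eig = np.sort_complex(ev**3 - 2 * ev + 1)   # p evaluated at the eigenvalues
print(np.max(np.abs(eig_pT - p_eig)))         # close to 0
```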
The calculation of the Brown measure of concrete non-normal operators is usually quite hard, and there are not too many situations where one has explicit solutions. We will in the following present some of the main concrete results.
11.6 Brown measure of R-diagonal operators
R-diagonal operators were introduced by Nica and Speicher [136]. They provide a class of, in general non-normal, operators which are usually accessible to concrete calculations. In particular, one is able to determine their Brown measure quite explicitly.
R-diagonal operators can be considered in general ∗-probability spaces, but we will restrict here to the tracial W ∗-probability space situation; only there does the notion of Brown measure make sense.
Definition 7.
An operator a in a tracial W ∗-probability space (M, τ) is called R-diagonal if its only non-vanishing ∗-cumulants (i.e. cumulants where each argument is either a or a ∗) are alternating, i.e. of the form κ 2n (a, a ∗, a, a ∗, …, a, a ∗) = κ 2n (a ∗, a, a ∗, a…, a ∗, a) for some \(n \in \mathbb{N}\).
Main examples for R-diagonal operators are Haar unitaries and Voiculescu’s circular operator. With the exception of multiples of Haar unitaries, R-diagonal operators are not normal. One main characterization [136] of R-diagonal operators is the following: a is R-diagonal if and only if a has the same ∗-distribution as up where u is a Haar unitary, p ≥ 0, and u and p are ∗-free. If ker(a) = {0}, then this can be refined to the characterization that R-diagonal operators have a polar decomposition of the form a = u | a |, where u is Haar unitary and | a | is ∗-free from u.
The Brown measure of R-diagonal operators was calculated by Haagerup and Larsen [85]. The following theorem contains their main statements on this.
Theorem 8.
Let (M, τ) be a tracial W ∗ -probability space and a ∈ M be R-diagonal. Assume that ker(a) = {0} and that a ∗ a is not a constant operator. Then we have the following:
(i) The support of the Brown measure μ a is given by
$$\displaystyle{ \mathit{\text{supp}}(\mu _{a}) =\{ z \in \mathbb{C}\mid \Vert a^{-1}\Vert _{ 2}^{-1} \leq \vert z\vert \leq \Vert a\Vert _{ 2}\}, }$$(11.9)
where we put ∥a −1∥2 −1 = 0 if a −1 ∉ L 2(M, τ).
(ii) μ a is invariant under rotations about \(0 \in \mathbb{C}\).
(iii) For 0 < t < 1, we have
$$\displaystyle{ \mu _{a}(B(0,r)) = t\qquad \mathit{\text{for}}\qquad r = \frac{1} {\sqrt{S_{a^{{\ast} } a } (t - 1)}}, }$$(11.10)
where \(S_{a^{{\ast}}a}\) is the S-transform of the operator a ∗ a and B(0, r) is the open disk with radius r.
(iv) The conditions (i), (ii), and (iii) determine μ a uniquely.
(v) The spectrum of an R-diagonal operator a coincides with supp(μ a ) unless a −1 ∈ L 2(M, τ)∖M, in which case supp(μ a ) is the annulus (11.9), while the spectrum of a is the full closed disk with radius ∥a∥2.
For the third part, one has to note that
$$\displaystyle{t\mapsto \frac{1}{\sqrt{S_{a^{{\ast} } a } (t - 1)}}}$$
maps (0, 1) onto (∥a −1∥2 −1, ∥a∥2).
11.6.1 A little about the proof
We give some key ideas of the proof from [85]; for another proof, see [158].
Consider \(\lambda \in \mathbb{C}\) and put α: = | λ |. A key point is to find a relation between μ | a | and μ | a−λ |. For a probability measure σ, we denote its symmetrized version by \(\tilde{\sigma }\), i.e. for any measurable set E, we have \(\tilde{\sigma }(E) = (\sigma (E) +\sigma (-E))/2\). Then one has the relation
$$\displaystyle{ \tilde{\mu }_{\vert a-\lambda \vert } =\tilde{\mu }_{\vert a\vert }\boxplus \frac{1}{2}(\delta _{-\alpha } +\delta _{\alpha }), }$$(11.11)
or in terms of the R-transforms:
$$\displaystyle{R_{\tilde{\mu }_{\vert a-\lambda \vert }}(z) = R_{\tilde{\mu }_{\vert a\vert }}(z) + R_{\frac{1}{2}(\delta _{-\alpha }+\delta _{\alpha })}(z).}$$
Hence, μ | a | determines μ | a−λ |, which determines
$$\displaystyle{\log \varDelta (a-\lambda ) =\int _{0}^{\infty }\log (t)\,d\mu _{\vert a-\lambda \vert }(t),}$$
and hence, via (11.6), the Brown measure μ a .
Exercise 4.
Prove (11.11) by showing that if a is R-diagonal, then the matrices
$$\displaystyle{\left (\begin{array}{cc} 0 & a\\ a^{{\ast}} & 0 \end{array} \right )\qquad \text{and}\qquad \left (\begin{array}{cc} 0 & \lambda \\ \bar{\lambda } & 0 \end{array} \right )}$$
are free in \((M_{2}(\mathbb{C}) \otimes M,\mathrm{tr}\otimes \tau )\).
11.6.2 Example: circular operator
Let us consider, as a concrete example, the circular operator \(c = (s_{1} + is_{2})/\sqrt{2}\), where s 1 and s 2 are free standard semi-circular elements.
The distribution of c ∗ c is free Poisson with rate 1, given by the density \(\sqrt{4 - t}/(2\pi \sqrt{t})\) on [0, 4], and thus the distribution μ | c | of the absolute value | c | is the quarter-circular distribution with density \(\sqrt{ 4 - t^{2}}/\pi\) on [0, 2]. We have ∥c∥2 = 1 and ∥c −1∥2 = ∞, and hence the support of the Brown measure of c is the closed unit disk, \(\text{supp}(\mu _{c}) = \overline{B(0,1)}\). This coincides with the spectrum of c.
In order to apply Theorem 8, we need to calculate the S-transform of c ∗ c. We have \(R_{c^{{\ast}}c}(z) = 1/(1 - z)\), and thus \(S_{c^{{\ast}}c}(z) = 1/(1 + z)\) (because z ↦ zR(z) and w ↦ wS(w) are inverses of each other; see [137, Remark 16.18] and also the discussion around [137, Eq. (16.8)]). So, for 0 < t < 1, we have \(S_{c^{{\ast}}c}(t - 1) = 1/t\). Thus, \(\mu _{c}(B(0,\sqrt{t})) = t\), or, for 0 < r < 1, μ c (B(0, r)) = r 2. Together with the rotation invariance, this shows that μ c is the uniform measure on the unit disk \(\overline{B(0,1)}\).
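The conclusion μ_c(B(0, r)) = r² can also be seen numerically via the circular law of the next subsection (a sketch of my own, not from the text): the eigenvalues of a large Ginibre matrix approximate the uniform law on the unit disk, so the fraction inside a disk of radius r should be close to r².

```python
import numpy as np

# Ginibre matrix: iid complex Gaussian entries with variance 1/N.
rng = np.random.default_rng(3)
N = 1000
G = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
ev = np.linalg.eigvals(G)

# For the uniform law on the unit disk, P(|lambda| < 1/2) = (1/2)^2 = 0.25.
frac_half = np.mean(np.abs(ev) < 0.5)
print(frac_half)    # close to 0.25
```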
11.6.3 The circular law
The circular law is the non-self-adjoint version of Wigner’s semi-circle law. Consider an N × N matrix where all entries are independent and identically distributed. If the distribution of the entries is Gaussian, then this ensemble is also called Ginibre ensemble. It is very easy to check that the ∗-moments of the Ginibre random matrices converge to the corresponding ∗-moments of the circular operator. So it is quite plausible to expect that the Brown measure (i.e. the eigenvalue distribution) of the Ginibre random matrices converges to the Brown measure of the circular operator, i.e. to the uniform distribution on the disk. This statement is known as the circular law. However, one has to note that the above is not a proof for the circular law, because the Brown measure is not continuous with respect to our notion of convergence in ∗-distribution. One can easily construct examples where this fails.
Exercise 5.
Consider the sequence (T N ) N ≥ 2 of nilpotent N × N matrices
$$\displaystyle{T_{N} = \left (\begin{array}{ccccc} 0&1&0&\cdots &0\\ 0&0&1&\ddots &\vdots \\ \vdots & &\ddots &\ddots &0\\ 0&0&\cdots &0&1\\ 0&0&\cdots &0&0 \end{array} \right ).}$$
Show that:
∘ with respect to tr, T N converges in ∗-moments to a Haar unitary element,
∘ the Brown measure of a Haar unitary element is the uniform distribution on the circle of radius 1,
∘ but the asymptotic eigenvalue distribution of T N is given by δ 0.
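The discontinuity in Exercise 5 is easy to see numerically (a sketch of my own, not from the text): the nilpotent shift has tr(T_N^k) = 0 and tr(T_N^k (T_N^*)^k) = 1 − k∕N, matching the Haar-unitary ∗-moments 0 and 1 as N → ∞, yet every eigenvalue of T_N is 0.

```python
import numpy as np

N, k = 200, 3
T = np.diag(np.ones(N - 1), 1)            # ones on the superdiagonal
Tk = np.linalg.matrix_power(T, k)

m_k = np.trace(Tk) / N                    # tr(T^k) = 0, as for a Haar unitary
mm_k = np.trace(Tk @ Tk.conj().T) / N     # tr(T^k T*^k) = 1 - k/N -> 1
max_abs_ev = np.max(np.abs(np.linalg.eigvals(T)))   # all eigenvalues are 0
print(m_k, mm_k, max_abs_ev)
```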
However, for nice random matrix ensembles, the philosophy of convergence of the eigenvalue distribution to the Brown measure of the limit operator seems to be correct. For the Ginibre ensemble, one can write down quite explicitly its eigenvalue distribution, and then it is easy to check the convergence to the circular law. If the distribution of the entries is not Gaussian, then one still has convergence to the circular law under very general assumptions (only the second moment of the distribution has to exist), but proving this in full generality has only been achieved recently. For a survey on this, see [42, 171].
11.6.4 The single ring theorem
There are also canonical random matrix models for R-diagonal operators. If one considers on (non-self-adjoint) N × N matrices a density of the form
$$\displaystyle{P(A) = \text{const} \cdot e^{-N\,\mathrm{Tr}\,f(AA^{{\ast}})},}$$
then one can check, under suitable assumptions on the function f, that the ∗-distribution of the corresponding random matrix A converges to an R-diagonal operator (whose concrete form is of course determined in terms of f). So again one expects that the eigenvalue distribution of those random matrices converges to the Brown measure of the limit R-diagonal operator, whose form is given in Theorem 8. (In particular, this limiting eigenvalue distribution lives on a, possibly degenerate, annulus, i.e. a single ring, even if f has several minima.) This has been proved recently by Guionnet, Krishnapur, and Zeitouni [82].
11.7 Brown measure of elliptic operators
An elliptic operator is of the form a = αs 1 + iβs 2, where α, β > 0 and s 1 and s 2 are free standard semi-circular operators. An elliptic operator is not R-diagonal, unless α = β (in which case it is a circular operator). The following theorem was proved by Larsen [116] and by Biane and Lehner [38].
Theorem 9.
Consider the elliptic operator
$$\displaystyle{a = (\cos \theta )s_{1} + i(\sin \theta )s_{2},\qquad 0 <\theta < \frac{\pi }{2}.}$$
Put γ: = cos(2θ) and λ = λ r + iλ i . Then the spectrum of a is the ellipse
$$\displaystyle{\sigma (a) = \left \{\lambda \in \mathbb{C}\,\Big\vert \, \frac{\lambda _{r}^{2}}{(1+\gamma )^{2}} + \frac{\lambda _{i}^{2}}{(1-\gamma )^{2}} \leq 1\right \},}$$
and the Brown measure μ a is the measure with constant density on σ(a):
$$\displaystyle{d\mu _{a}(\lambda ) = \frac{1}{\pi (1 -\gamma ^{2})}\,1_{\sigma (a)}(\lambda )\,d\lambda _{r}\,d\lambda _{i}.}$$
11.8 Brown measure for unbounded operators
The Brown measure can also be extended to unbounded operators which are affiliated to a tracial W ∗-probability space; for the notion of “affiliated operators”, see our discussion before Definition 8.15 in Chapter 8. This extension of the Brown measure was done by Haagerup and Schultz in [86].
Δ and μ a can be defined for unbounded a provided ∫ 1 ∞log(t)dμ | a |(t) < ∞, in which case
$$\displaystyle{\varDelta (a) =\exp \left (\int _{0}^{\infty }\log (t)\,d\mu _{\vert a\vert }(t)\right ) \in [0,\infty ),}$$
and the Brown measure μ a is still determined by (11.7).
Example 10.
Let c 1 and c 2 be two ∗-free circular elements and consider a: = c 1 c 2 −1. If c 1, c 2 live in the tracial W ∗-probability space (M, τ), then a ∈ L p(M, τ) for 0 < p < 1. In this case, Δ(a −λ) and μ a are well defined. In order to calculate μ a , one has to extend the class of R-diagonal operators and the formulas for their Brown measure to unbounded operators. This was done in [86]. Since the product of an R-diagonal element with a ∗-free element is R-diagonal, too, we have that a is R-diagonal. So to use (the unbounded version of) Theorem 8, we need to calculate the S-transform of a ∗ a. Since with c 2 also its inverse c 2 −1 is R-diagonal, we have \(S_{\vert a\vert ^{2}} = S_{\vert c_{1}\vert ^{2}}S_{\vert c_{2}^{-1}\vert ^{2}}\). The S-transform of the first factor is \(S_{\vert c_{1}\vert ^{2}}(z) = 1/(1 + z)\); compare Section 11.6.2. Furthermore, the S-transforms of x and x −1 are, for positive x, in general related by \(S_{x}(z) = 1/S_{x^{-1}}(-1 - z)\). Since | c 2 −1 |2 = | c 2 ∗ |−2 and since c 2 ∗ has the same distribution as c 2, we have that \(S_{\vert c_{2}^{-1}\vert ^{2}} = S_{\vert c_{2}\vert ^{-2}}\) and thus
$$\displaystyle{S_{\vert c_{2}^{-1}\vert ^{2}}(z) = \frac{1}{S_{\vert c_{2}\vert ^{2}}(-1 - z)} = 1 + (-1 - z) = -z.}$$
This gives then \(S_{\vert a\vert ^{2}}(z) = -z/(1 + z)\), for − 1 < z < 0, or \(S_{\vert a\vert ^{2}}(t - 1) = (1 - t)/t\) for 0 < t < 1. So our main formula (11.10) from Theorem 8 gives \(\mu _{a}(B(0,\sqrt{t/(1 - t)})) = t\) or μ a (B(0, r)) = r 2∕(1 + r 2). We have ∥a∥2 = ∞ = ∥a −1∥2, and thus \(\text{supp}(\mu _{a}) = \mathbb{C}\). The above formula for the measure of balls gives then the density
$$\displaystyle{d\mu _{a}(\lambda ) = \frac{1}{\pi }\, \frac{1}{(1 + \vert \lambda \vert ^{2})^{2}}\,d\lambda _{r}\,d\lambda _{i}.}$$
For more details and, in particular, proofs of the above used facts about R-diagonal elements and the relation between S x and \(S_{x^{-1}}\), one should see the original paper of Haagerup and Schultz [86].
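The radial law μ_a(B(0, r)) = r²∕(1 + r²) of Example 10 can be checked against finite matrices (a sketch of my own, not from the text): for two independent Ginibre matrices G₁, G₂, the eigenvalues of G₁G₂⁻¹ should approximately follow this law for large N.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 800
G1 = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
G2 = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
ev = np.linalg.eigvals(G1 @ np.linalg.inv(G2))

# Limiting radial law: P(|lambda| < r) = r^2/(1+r^2); at r = 1 this is 1/2.
frac = np.mean(np.abs(ev) < 1.0)
print(frac)    # close to 0.5
```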
11.9 Hermitization method: using operator-valued free probability for calculating the Brown measure
Note that formula (11.7) for determining the Brown measure can also be written as
$$\displaystyle{\log \varDelta (a-\lambda ) =\int _{0}^{\infty }\log (t)\,d\mu _{\vert a-\lambda \vert }(t).}$$
This tells us that we can understand the Brown measure of a non-normal operator a if we understand the distributions of all Hermitian operators | a −λ | for all \(\lambda \in \mathbb{C}\) sufficiently well. In the random matrix literature, this idea goes back at least to Girko [77] and is usually addressed as hermitization method. A contact of this idea with the world of free probability was made on a formal level in the works of Janik, Nowak, Papp, and Zahed [103] and of Feinberg and Zee [71]. In [24], it was shown that operator-valued free probability is the right framework to deal with this rigorously. (Examples for explicit operator-valued calculations were also done before in [1].) Combining this hermitization idea with the subordination formulation of operator-valued free convolution then allows one to calculate the Brown measure of any (not just self-adjoint) polynomial in free variables.
In order to make this connection between Brown measure and operator-valued quantities more precise, we first have to rewrite our description of the Brown measure. In Section 11.5, we have seen that we get the Brown measure of a as the limit for ɛ → 0 of
$$\displaystyle{\frac{1}{2\pi }\nabla ^{2}\,\frac{1}{2}\tau \left [\log \left ((a-\lambda )^{{\ast}}(a-\lambda )+\varepsilon ^{2}\right )\right ].}$$
This can also be reformulated in the following form (compare [116], or Lemma 4.2 in [1]): Let us define
$$\displaystyle{ G_{\varepsilon,a}(\lambda ):=\tau \left [(\lambda -a)^{{\ast}}\left ((\lambda -a)(\lambda -a)^{{\ast}}+\varepsilon ^{2}\right )^{-1}\right ]. }$$(11.14)
Then
$$\displaystyle{\mu _{\varepsilon,a}:= \frac{1}{\pi } \frac{\partial }{\partial \bar{\lambda }}G_{\varepsilon,a}(\lambda )}$$
is a probability measure on the complex plane (whose density is given by ∇2 f ɛ ), which converges weakly for ɛ → 0 to the Brown measure of a.
In order to calculate the Brown measure, we need G ɛ, a (λ) as defined in (11.14). Let now
$$\displaystyle{A = \left (\begin{array}{cc} 0 & a\\ a^{{\ast}} & 0 \end{array} \right ) \in M_{2}(M).}$$
Note that A is self-adjoint. Consider A in the \(M_{2}(\mathbb{C})\)-valued probability space with respect to \(E = \mathit{id}\otimes \tau: M_{2}(M) \rightarrow M_{2}(\mathbb{C})\) given by
$$\displaystyle{E\left [\left (\begin{array}{cc} a_{11} & a_{12}\\ a_{21} & a_{22} \end{array} \right )\right ] = \left (\begin{array}{cc} \tau (a_{11})&\tau (a_{12})\\ \tau (a_{21})&\tau (a_{22}) \end{array} \right ).}$$
For the argument
$$\displaystyle{\varLambda _{\varepsilon } = \left (\begin{array}{cc} i\varepsilon &\lambda \\ \bar{\lambda }&i\varepsilon \end{array} \right ) \in M_{2}(\mathbb{C}),}$$
consider now the \(M_{2}(\mathbb{C})\)-valued Cauchy transform of A
$$\displaystyle{G_{A}(\varLambda _{\varepsilon }) = E\left [(\varLambda _{\varepsilon } - A)^{-1}\right ] = \left (\begin{array}{cc} g_{11}(\varepsilon,\lambda )&g_{12}(\varepsilon,\lambda )\\ g_{21}(\varepsilon,\lambda )&g_{22}(\varepsilon,\lambda ) \end{array} \right ).}$$
One can easily check that (Λ ɛ − A)−1 is actually given by
$$\displaystyle{(\varLambda _{\varepsilon } - A)^{-1} = \left (\begin{array}{cc} -i\varepsilon \left ((\lambda -a)(\lambda -a)^{{\ast}}+\varepsilon ^{2}\right )^{-1} & \left ((\lambda -a)(\lambda -a)^{{\ast}}+\varepsilon ^{2}\right )^{-1}(\lambda -a)\\ (\lambda -a)^{{\ast}}\left ((\lambda -a)(\lambda -a)^{{\ast}}+\varepsilon ^{2}\right )^{-1} & -i\varepsilon \left ((\lambda -a)^{{\ast}}(\lambda -a)+\varepsilon ^{2}\right )^{-1} \end{array} \right ),}$$
and thus we are again in the situation that our quantity of interest is actually one entry of an operator-valued Cauchy transform: G ɛ, a (λ) = g 21(ɛ, λ) = [G A (Λ ɛ )]21.
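This hermitization identity is exact in every matrix dimension, which makes it easy to test (a sketch of my own, not from the text): the normalized trace of the (2,1) block of (Λ_ɛ − A)⁻¹ must agree with G_{ɛ,a}(λ) computed directly from its defining formula.

```python
import numpy as np

rng = np.random.default_rng(5)
n, eps, lam = 4, 0.2, 0.3 - 0.1j
a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
I = np.eye(n)
b = lam * I - a

# Direct formula: tr[(lam-a)^* ((lam-a)(lam-a)^* + eps^2)^{-1}], tr normalized.
G_direct = np.trace(b.conj().T @ np.linalg.inv(b @ b.conj().T + eps**2 * I)) / n

# Hermitization: A = [[0, a], [a^*, 0]], Lambda_eps = [[i*eps, lam], [conj(lam), i*eps]].
A = np.block([[np.zeros((n, n)), a], [a.conj().T, np.zeros((n, n))]])
Lam = np.block([[1j * eps * I, lam * I], [np.conj(lam) * I, 1j * eps * I]])
R = np.linalg.inv(Lam - A)
g21 = np.trace(R[n:, :n]) / n     # normalized trace of the (2,1) block
print(G_direct, g21)              # identical up to rounding
```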
11.10 Brown measure of arbitrary polynomials in free variables
So in order to calculate the Brown measure of some polynomial p in self-adjoint free variables, we should first hermitize the problem by going over to self-adjoint 2 × 2 matrices over our underlying space, and then we should linearize the problem on this level and use finally our subordination description of operator-valued free convolution to deal with this linear problem. It might not be so clear whether hermitization and linearization go together well, but this is indeed the case. Essentially we do here a linearization of an operator-valued model instead of a scalar-valued one: we have to linearize a polynomial in matrices. But the linearization algorithm works in this case as well. As the end is near, let us illustrate this just with an example. For more details, see [95].
Example 11.
Consider the polynomial a = xy in the free self-adjoint variables x = x ∗ and y = y ∗. For the Brown measure of this a, we have to calculate the operator-valued Cauchy transform of
$$\displaystyle{A = \left (\begin{array}{cc} 0 & xy\\ yx & 0 \end{array} \right ).}$$
In order to linearize this, we should first write it as a polynomial in matrices of x and matrices of y. This can be achieved as follows:
$$\displaystyle{\left (\begin{array}{cc} 0 & xy\\ yx & 0 \end{array} \right ) = \left (\begin{array}{cc} x&0\\ 0&1 \end{array} \right )\left (\begin{array}{cc} 0&y\\ y&0 \end{array} \right )\left (\begin{array}{cc} x&0\\ 0&1 \end{array} \right ) = XY X,}$$
which is a self-adjoint polynomial in the self-adjoint variables
$$\displaystyle{X = \left (\begin{array}{cc} x&0\\ 0&1 \end{array} \right )\qquad \text{and}\qquad Y = \left (\begin{array}{cc} 0&y\\ y&0 \end{array} \right ).}$$
This self-adjoint polynomial XY X has a self-adjoint linearization
$$\displaystyle{\left (\begin{array}{ccc} 0 & 0 & X\\ 0 & Y & -1\\ X& -1& 0 \end{array} \right ).}$$
Plugging in back the 2 × 2 matrices for X and Y, we get finally the self-adjoint linearization of A as
$$\displaystyle{\hat{A} = \left (\begin{array}{ccc} 0 & 0 & X\\ 0 & 0 & -1\\ X& -1& 0 \end{array} \right ) + \left (\begin{array}{ccc} 0& 0 &0\\ 0&Y &0\\ 0& 0 &0 \end{array} \right ),}$$
now viewed as a 6 × 6 matrix over M.
We have written this as the sum of two \(M_{6}(\mathbb{C})\)-free matrices, both of them being self-adjoint. For calculating the Cauchy transform of this sum, we can then use again the subordination algorithm for the operator-valued free convolution from Theorem 10.5. Putting all the steps together gives an algorithm for calculating the Brown measure of a = xy. One might note that in the case where both x and y are even elements (i.e. all odd moments vanish), the product is actually R-diagonal; see [137, Theorem 15.17]. Hence, in this case, we even have an explicit formula for the Brown measure of xy, given by Theorem 8 and the fact that we can calculate the S-transform of a ∗ a in terms of the S-transforms of x and of y.
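The defining property of the self-adjoint linearization of XYX, namely that XYX is recovered as the Schur complement −BD⁻¹C of the block decomposition of the linearizing matrix, can be verified numerically for matrices (a sketch of my own, not from the text):

```python
import numpy as np

# Linearization L = [[0, 0, X], [0, Y, -1], [X, -1, 0]] (blocks of size n),
# written as L = [[0, B], [C, D]]; check that -B D^{-1} C = XYX.
rng = np.random.default_rng(6)
n = 3
X0 = rng.standard_normal((n, n)); X = X0 + X0.T   # self-adjoint
Y0 = rng.standard_normal((n, n)); Y = Y0 + Y0.T   # self-adjoint
I, Z = np.eye(n), np.zeros((n, n))

B = np.hstack([Z, X])                 # top row blocks (0, X)
C = np.vstack([Z, X])                 # left column blocks (0, X)^T
D = np.block([[Y, -I], [-I, Z]])      # lower-right 2x2 block of L
schur = -B @ np.linalg.inv(D) @ C
print(np.max(np.abs(schur - X @ Y @ X)))   # close to 0
```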
Of course, we expect that the eigenvalue distribution of our polynomial evaluated in asymptotically free matrices (like independent Wigner or Wishart matrices) should converge to the Brown measure of the polynomial in the corresponding free variables. However, as was already pointed out before (see the discussion around Exercise 5), this is not automatic from the convergence of all ∗-moments, and one actually has to control probabilities of small eigenvalues during all of the calculations. Such controls have been achieved in the special cases of the circular law or the single ring theorem. However, for an arbitrary polynomial in asymptotically free matrices, this is an open problem at the moment.
In Figs. 11.1, 11.2 and 11.3, we give for some polynomials the Brown measure calculated according to the algorithm outlined above, and we also compare this with histograms of the complex eigenvalues of the corresponding polynomials in independent random matrices.
References
L. Aagaard, U. Haagerup, Moment formulas for the quasi-nilpotent DT-operator. Int. J. Math. 15(6), 581–628 (2004)
D.H. Armitage, S.J. Gardiner, Classical Potential Theory (Springer, London, 2001)
S.T. Belinschi, P. Śniady, R. Speicher, Eigenvalues of non-hermitian random matrices and Brown measure of non-normal operators: Hermitian reduction and linearization method. arXiv:1506.02017 (2015)
P. Biane, F. Lehner, Computation of some examples of Brown’s spectral measure in free probability. Colloq. Math. 90(2), 181–211 (2001)
C. Bordenave, D. Chafaï, Around the circular law. Probab. Surv. 9, 1–89 (2012)
L.G. Brown, Lidskiĭ’s theorem in the type II case, in Geometric Methods in Operator Algebras (Kyoto, 1983). Pitman Research Notes in Mathematics Series, vol. 123 (Longman Science Technology, Harlow, 1986), pp. 1–35
J. Feinberg, A. Zee, Non-Hermitian random matrix theory: method of Hermitian reduction. Nuclear Phys. B 504(3), 579–608 (1997)
B. Fuglede, R.V. Kadison, Determinant theory in finite factors. Ann. Math. (2) 55, 520–530 (1952)
V.L. Girko, The circular law. Teor. Veroyatnost. i Primenen. 29(4), 669–679 (1984)
A. Guionnet, M. Krishnapur, O. Zeitouni, The single ring theorem. Ann. Math. (2) 174(2), 1189–1217 (2011)
U. Haagerup, F. Larsen, Brown’s spectral distribution measure for R-diagonal elements in finite von Neumann algebras. J. Funct. Anal. 176(2), 331–367 (2000)
U. Haagerup, H. Schultz, Brown measures of unbounded operators affiliated with a finite von Neumann algebra. Math. Scand. 100(2), 209–263 (2007)
U. Haagerup, H. Schultz, Invariant subspaces for operators in a general II1-factor. Publ. Math. Inst. Hautes Études Sci. 109(1), 19–111 (2009)
W.K. Hayman, P.B. Kennedy, Subharmonic Functions. Vol. I. London Mathematical Society Monographs, vol. 9 (Academic, London/New York, 1976)
J.W. Helton, T. Mai, R. Speicher, Applications of realizations (aka linearizations) to free probability. arXiv:1511.05330 (2015)
R.A. Janik, M.A. Nowak, G. Papp, I. Zahed, Non-Hermitian random matrix models. Nuclear Phys. B 501(3), 603–642 (1997)
F. Larsen, Brown measures and R-diagonal elements in finite von Neumann algebras. Ph.D. thesis, University of Southern Denmark, 1999
A. Nica, R. Speicher, R-diagonal pairs—a common approach to Haar unitaries and circular elements, in Free Probability Theory (Waterloo, ON, 1995). Fields Institute Communications, vol. 12 (American Mathematical Society, Providence, RI, 1997), pp. 149–188
A. Nica, R. Speicher, Lectures on the Combinatorics of Free Probability. London Mathematical Society Lecture Note Series, vol. 335 (Cambridge University Press, Cambridge, 2006)
P. Śniady, R. Speicher, Continuous family of invariant subspaces for R–diagonal operators. Invent. Math. 146(2), 329–363 (2001)
T. Tao, Topics in Random Matrix Theory, vol. 132 (American Mathematical Society, Providence, RI, 2012)
© 2017 Springer Science+Business Media LLC
Mingo, J.A., Speicher, R. (2017). Brown Measure. In: Free Probability and Random Matrices. Fields Institute Monographs, vol 35. Springer, New York, NY. https://doi.org/10.1007/978-1-4939-6942-5_11