Abstract
We address the question of a Berry-Esseen type theorem for the speed of convergence in a multivariate free central limit theorem. For this, we estimate the difference between the operator-valued Cauchy transforms of the normalized partial sums in an operator-valued free central limit theorem and the Cauchy transform of the limiting operator-valued semicircular element. Since we have to deal with in general non-self-adjoint operators, we introduce the notion of matrix-valued resolvent sets and study the behavior of Cauchy transforms on them.
2010 Mathematics Subject Classification. 46L54, 60F05, 47A10.
1 Introduction
In classical probability theory the famous Berry-Esseen theorem gives a quantitative statement about the order of convergence in the central limit theorem. In its simplest version it states: If \((X_{i})_{i\in \mathbb{N}}\) is a sequence of independent and identically distributed random variables with mean 0 and variance 1, then the distance between \(S_{n} := \frac{1} {\sqrt{n}}(X_{1} +\ldots +X_{n})\) and a normal variable γ of mean 0 and variance 1 can be estimated in terms of the Kolmogorov distance \(\Delta \) by
$$\Delta (S_{n},\gamma ) \leq C \frac{\rho } {\sqrt{n}},$$
where C is a constant and ρ is the absolute third moment of the variables \(X_{i}\). The question for a free analogue of the Berry-Esseen estimate in the case of one random variable was answered by Chistyakov and Götze in [2] (and independently, under the more restrictive assumption of compact support of the \(X_{i}\), by Kargin [10]): If \((X_{i})_{i\in \mathbb{N}}\) is a sequence of free and identically distributed variables with mean 0 and variance 1, then the distance between \(S_{n} := \frac{1} {\sqrt{n}}(X_{1} +\ldots +X_{n})\) and a semicircular variable s of mean 0 and variance 1 can be estimated as
$$\Delta (S_{n},s) \leq c\frac{\vert m_{3}\vert + \sqrt{m_{4}}} {\sqrt{n}},$$
where c > 0 is an absolute constant and \(m_{3}\) and \(m_{4}\) are the third and fourth moment, respectively, of the \(X_{i}\).
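The \(n^{-1/2}\) rate in the classical estimate can be observed concretely. For Rademacher variables (\(P(X_{i} = \pm 1) = \frac{1} {2}\), so ρ = 1) the distribution of \(S_{n}\) is a rescaled binomial, and the Kolmogorov distance to the standard normal law can be computed exactly. The following small Python sketch is our own illustration, not part of the original argument:

```python
import math

def normal_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def kolmogorov_distance(n):
    """Exact Kolmogorov distance between S_n = (X_1+...+X_n)/sqrt(n)
    for Rademacher X_i and the standard normal law."""
    dist = 0.0
    cdf = 0.0
    for k in range(n + 1):
        p = math.comb(n, k) / 2.0 ** n    # P(number of +1's equals k)
        x = (2 * k - n) / math.sqrt(n)    # corresponding value of S_n
        phi = normal_cdf(x)
        # compare Phi with the CDF of S_n just before and at the jump
        dist = max(dist, abs(cdf - phi), abs(cdf + p - phi))
        cdf += p
    return dist

d16, d256 = kolmogorov_distance(16), kolmogorov_distance(256)
print(d16, d256)
```

Increasing n by a factor of 16 shrinks the computed distance roughly by a factor of 4, in line with the \(n^{-1/2}\) prediction.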
In this paper we want to present an approach to a multivariate version of a free Berry-Esseen theorem. The general idea is the following: Since there is up to now no suitable replacement of the Kolmogorov metric in the multivariate case, we will, in order to describe the speed of convergence of a d-tuple \((S_{n}^{(1)},\ldots ,S_{n}^{(d)})\) of partial sums to the limiting semicircular family \((s_{1},\ldots ,s_{d})\), consider the speed of convergence of \(p(S_{n}^{(1)},\ldots ,S_{n}^{(d)})\) to \(p(s_{1},\ldots ,s_{d})\) for any self-adjoint polynomial p in d non-commuting variables. By using the linearization trick of Haagerup and Thorbjørnsen [5, 6], we can reformulate this in an operator-valued setting, where we will state an operator-valued free Berry-Esseen theorem. Because estimates for the difference between scalar-valued Cauchy transforms translate by results of Bai [1] into estimates with respect to the Kolmogorov distance, it is convenient to describe the speed of convergence in terms of Cauchy transforms. On the level of deriving equations for the (operator-valued) Cauchy transforms we can follow ideas which are used for dealing with speed of convergence questions for random matrices; here we are inspired in particular by the work of Götze and Tikhomirov [4], but see also [1].
Since the transition from the multivariate to the operator-valued setting leads to operators which are, even if we start from self-adjoint polynomials p, in general not self-adjoint, we have to deal with (operator-valued) Cauchy transforms defined on domains different from the usual ones. Since most of the analytic tools fail in this generality, we have to develop them along the way.
As a first step in this direction, the present paper (which is based on the unpublished preprint [13]) leads finally to the proof of the following theorem:
Theorem 1.1.
Let \((\mathcal{C},\tau )\) be a non-commutative C ∗ -probability space with τ faithful and put \(\mathcal{A} := \mathrm{M}_{m}(\mathbb{C}) \otimes \mathcal{C}\) and E := id ⊗ τ. Let \((X_{i})_{i\in \mathbb{N}}\) be a sequence of non-zero elements in the operator-valued probability space \((\mathcal{A},E)\). We assume:
-
All X i ’s have the same ∗-distribution with respect to E and their first moments vanish, i.e. E[X i ] = 0.
-
The X i are ∗-free with amalgamation over \(\mathrm{M}_{m}(\mathbb{C})\) (which means that the ∗-algebras \(\mathcal{X}_{i}\) , generated by \(\mathrm{M}_{m}(\mathbb{C})\) and X i , are free with respect to E).
-
We have \(\sup _{i\in \mathbb{N}}\|X_{i}\| < \infty \).
Then the sequence \((S_{n})_{n\in \mathbb{N}}\) defined by
$$S_{n} := \frac{1} {\sqrt{n}}(X_{1} +\ldots +X_{n})$$
converges to an operator-valued semicircular element s. Moreover, we can find κ > 0, c > 1, C > 0 and \(N \in \mathbb{N}\) such that
$$\sup _{b\in \Omega }\big\|G_{s}(b) - G_{S_{n}}(b)\big\| \leq \frac{C} {\sqrt{n}}\qquad \mbox{ for all }n \geq N,$$
where
$$\Omega :=\big\{ b \in \mathrm{GL}_{m}(\mathbb{C})\ \big\vert \ \|{b}^{-1}\| <\kappa ,\ \|b\|\,\|{b}^{-1}\| < c\big\}$$
and where \(G_{s}\) and \(G_{S_{n}}\) denote the operator-valued Cauchy transforms of s and of \(S_{n}\), respectively.
Applying this operator-valued statement to our multivariate problem gives the following main result on a multivariate free Berry-Esseen theorem.
Theorem 1.2.
Let \((x_{i}^{(k)})_{k=1}^{d}\), \(i \in \mathbb{N}\), be free and identically distributed sets of d self-adjoint non-zero random variables in some non-commutative C ∗ -probability space \((\mathcal{C},\tau )\), with τ faithful, such that the conditions
$$\tau \big(x_{i}^{(k)}\big) = 0\qquad \mbox{ for }k = 1,\ldots ,d$$
and
$$\sup _{i\in \mathbb{N}}\max _{k=1,\ldots ,d}\|x_{i}^{(k)}\| < \infty $$
are fulfilled.
are fulfilled. We denote by \(\Sigma = (\sigma _{k,l})_{k,l=1}^{d}\), where \(\sigma _{k,l} :=\tau (x_{i}^{(k)}x_{i}^{(l)})\), their joint covariance matrix. Moreover, we put
Then \((S_{n}^{(1)},\ldots ,S_{n}^{(d)})\) converges in distribution to a semicircular family \((s_{1},\ldots ,s_{d})\) of covariance \(\Sigma \). We can quantify the speed of convergence in the following way. Let p be a (not necessarily self-adjoint) polynomial in d non-commuting variables and put
$$P_{n} := p(S_{n}^{(1)},\ldots ,S_{n}^{(d)})\qquad \mbox{ and}\qquad P := p(s_{1},\ldots ,s_{d}).$$
Then, there are constants C > 0, R > 0 and \(N \in \mathbb{N}\) (depending on the polynomial) such that
where \(G_{P}\) and \(G_{P_{n}}\) denote the scalar-valued Cauchy transforms of P and of \(P_{n}\), respectively.
In the case of a self-adjoint polynomial p, we can consider the distributions \(\mu _{n}\) and μ of the operators \(P_{n}\) and P from above, which are probability measures on \(\mathbb{R}\). Moreover, let \(\mathcal{F}_{\mu _{n}}\) and \(\mathcal{F}_{\mu }\) be their cumulative distribution functions. In order to deduce estimates for the Kolmogorov distance
$$\Delta (\mu _{n},\mu ) :=\sup _{x\in \mathbb{R}}\big\vert \mathcal{F}_{\mu _{n}}(x) -\mathcal{F}_{\mu }(x)\big\vert ,$$
one has to transfer the estimate for the difference of the scalar-valued Cauchy transforms of \(P_{n}\) and P from near infinity to a neighborhood of the real axis. A partial solution to this problem was given in the appendix of [14], which we will recall in Sect. 4. But this leads to the still unsolved question of whether \(p(s_{1},\ldots ,s_{d})\) has a continuous density. We conjecture that the latter is true for any self-adjoint polynomial in free semicirculars, but at present we are not aware of a proof of that statement.
The paper is organized as follows. In Sect. 2 we recall some basic facts about holomorphic functions on domains in Banach spaces. The tools to deal with matrix-valued Cauchy transforms will be presented in Sect. 3. Section 4 is devoted to the proofs of Theorems 1.1 and 1.2.
2 Holomorphic Functions on Domains in Banach Spaces
For the reader’s convenience, we briefly recall the definition of holomorphic functions on domains in Banach spaces and state the theorem of Earle-Hamilton, which will play a major role in the subsequent sections.
Definition 2.1.
Let \((X,\|\cdot \|_{X})\), \((Y,\|\cdot \|_{Y })\) be two complex Banach spaces and let D ⊆ X be an open subset of X. A function f : D → Y is called
-
Strongly holomorphic if for each x ∈ D there exists a bounded linear mapping Df(x) : X → Y such that
$$\lim _{y\rightarrow 0}\frac{\|f(x + y) - f(x) - Df(x)y\|_{Y }} {\|y\|_{X}} = 0.$$ -
Weakly holomorphic if it is locally bounded and the mapping
$$\lambda \mapsto \phi (f(x +\lambda y))$$is holomorphic at λ = 0 for each x ∈ D, y ∈ X and all continuous linear functionals \(\phi : Y \rightarrow \mathbb{C}\).
An important theorem due to Dunford says that a function on a domain (i.e. an open and connected subset) of a Banach space is strongly holomorphic if and only if it is weakly holomorphic. Hence, we do not have to distinguish between the two definitions.
Definition 2.2.
Let D be a nonempty domain in a complex Banach space \((X,\|\cdot \|)\) and let f : D → D be a holomorphic function. We say that f(D) lies strictly inside D if there is some ε > 0 such that
$$B_{\epsilon }(f(x)) \subseteq D\qquad \mbox{ for all }x \in D$$
holds, where we denote by \(B_{r}(y)\) the open ball of radius r around y.
The remarkable fact that holomorphic mappings whose range lies strictly inside the domain are strict contractions in the so-called Carathéodory-Riffen-Finsler metric leads to the following theorem of Earle-Hamilton (cf. [3]), which can be seen as a holomorphic version of Banach’s contraction mapping theorem. For a proof of this theorem and variations of the statement we refer to [7].
Theorem 2.3 (Earle-Hamilton, 1970).
Let ∅≠D ⊆ X be a domain in a Banach space \((X,\|\cdot \|)\) and let f : D → D be a bounded holomorphic function. If f(D) lies strictly inside D, then f has a unique fixed point in D.
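A scalar toy illustration (our own, with an arbitrarily chosen map) may be helpful: on the open unit disk \(D \subseteq \mathbb{C}\), the holomorphic function \(f(z) = \frac{1} {2(z+2)}\) satisfies \(\vert f(z)\vert < \frac{1} {2}\), so f(D) lies strictly inside D with ε = 1∕2, and iterating f from any starting point converges to the unique fixed point promised by the theorem:

```python
# Earle-Hamilton on the open unit disk D in the complex plane, seen
# as a one-dimensional Banach space: f(z) = 1/(2(z+2)) maps D into
# the disk of radius 1/2, hence strictly inside D, so iteration
# converges to the unique fixed point from any starting value.

def f(z):
    return 1.0 / (2.0 * (z + 2.0))

def iterate(z, steps=60):
    for _ in range(steps):
        z = f(z)
    return z

z1 = iterate(0.0)     # start at the origin
z2 = iterate(0.5j)    # start somewhere else in D
# both orbits end at the root of 2z^2 + 4z - 1 = 0 lying in D
print(z1, z2)
```

The fixed point is the solution of \(2{z}^{2} + 4z - 1 = 0\) inside D, independent of the starting value, exactly as uniqueness demands.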
3 Matrix-Valued Spectra and Cauchy Transforms
The following lemma is well known and quite simple, but since it turns out to be extremely helpful, it is convenient to recall it here.
Lemma 3.1.
Let \((A,\|\cdot \|)\) be a complex Banach algebra with unit 1. If x ∈ A is invertible and y ∈ A satisfies \(\|x - y\| <\sigma \frac{1} {\|{x}^{-1}\|}\) for some 0 < σ < 1, then y is invertible as well and we have
$$\|{y}^{-1}\| \leq \frac{1} {1-\sigma }\|{x}^{-1}\|.$$
Proof.
We can easily check that the series
$$\displaystyle\sum _{k=0}^{\infty }{\big({x}^{-1}(x - y)\big)}^{k}{x}^{-1}$$
is absolutely convergent in A and gives the inverse element of y. Moreover, we get
$$\|{y}^{-1}\| \leq \displaystyle\sum _{k=0}^{\infty }{\big\|{x}^{-1}(x - y)\big\|}^{k}\|{x}^{-1}\| \leq \displaystyle\sum _{k=0}^{\infty }{\sigma }^{k}\|{x}^{-1}\| = \frac{1} {1-\sigma }\|{x}^{-1}\|,$$
which proves the stated estimate. □
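The Neumann series argument can be tested numerically. The following sketch is our own illustration, with ad hoc 2 × 2 matrices and the max-row-sum operator norm; it checks both the convergence of the series \(\sum _{k}{({x}^{-1}(x - y))}^{k}{x}^{-1}\) and the resulting norm bound \(\|{y}^{-1}\| \leq \frac{1} {1-\sigma }\|{x}^{-1}\|\):

```python
# Numerical check of the Neumann-series argument behind Lemma 3.1
# in the Banach algebra of 2x2 complex matrices, with the norm
# induced by the sup-norm on C^2 (max row sum).

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def mat_sub(a, b):
    return [[a[i][j] - b[i][j] for j in range(2)] for i in range(2)]

def mat_inv(a):
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [[a[1][1] / det, -a[0][1] / det],
            [-a[1][0] / det, a[0][0] / det]]

def op_norm(a):  # induced by the sup-norm on C^2: max row sum
    return max(sum(abs(x) for x in row) for row in a)

x = [[2.0, 0.5], [0.0, 3.0]]       # invertible reference point
y = [[2.1, 0.45], [0.05, 2.9]]     # a small perturbation of x

x_inv = mat_inv(x)
sigma = op_norm(mat_sub(x, y)) * op_norm(x_inv)
assert sigma < 1                   # hypothesis of Lemma 3.1

# Neumann series: y^{-1} = sum_k (x^{-1}(x - y))^k x^{-1}
h = mat_mul(x_inv, mat_sub(x, y))
series, term = x_inv, x_inv
for _ in range(200):
    term = mat_mul(h, term)
    series = mat_add(series, term)

residual = op_norm(mat_sub(mat_mul(y, series),
                           [[1.0, 0.0], [0.0, 1.0]]))
bound_ok = op_norm(series) <= op_norm(x_inv) / (1.0 - sigma)
print(residual, bound_ok)
```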
Let \((\mathcal{C},\tau )\) be a non-commutative C ∗ -probability space, i.e., \(\mathcal{C}\) is a unital C ∗ -algebra and τ is a unital state (positive linear functional) on \(\mathcal{C}\); we will always assume that τ is faithful. For fixed \(m \in \mathbb{N}\) we define the operator-valued C ∗ -probability space \(\mathcal{A} := \mathrm{M}_{m}(\mathbb{C}) \otimes \mathcal{C}\) with conditional expectation
where we denote by \(\mathrm{M}_{m}(\mathbb{C})\) the C ∗ -algebra of all m ×m matrices over the complex numbers \(\mathbb{C}\). Under the canonical identification of \(\mathrm{M}_{m}(\mathbb{C}) \otimes \mathcal{C}\) with \(\mathrm{M}_{m}(\mathcal{C})\) (matrices with entries in \(\mathcal{C}\)), the expectation E corresponds to applying the state τ entrywise in a matrix. We will also identify \(b \in \mathrm{M}_{m}(\mathbb{C})\) with \(b \otimes 1 \in \mathcal{A}\).
Definition 3.2.
For \(a \in \mathcal{A} = \mathrm{M}_{m}(\mathcal{C})\) we define the matrix-valued resolvent set
$$\rho _{m}(a) :=\big\{ b \in \mathrm{M}_{m}(\mathbb{C})\ \big\vert \ b - a\mbox{ is invertible in }\mathcal{A}\big\}$$
and the matrix-valued spectrum
$$\sigma _{m}(a) := \mathrm{M}_{m}(\mathbb{C})\setminus \rho _{m}(a).$$
Since the set \(\mathrm{GL}(\mathcal{A})\) of all invertible elements in \(\mathcal{A}\) is an open subset of \(\mathcal{A}\) (cf. Lemma 3.1), the continuity of the mapping
$$f_{a} : \mathrm{M}_{m}(\mathbb{C}) \rightarrow \mathcal{A},\ b\mapsto b - a$$
implies that the matrix-valued resolvent set \(\rho _{m}(a) = f_{a}^{-1}(\mathrm{GL}(\mathcal{A}))\) of an element \(a \in \mathcal{A}\) is an open subset of \(\mathrm{M}_{m}(\mathbb{C})\). Hence, the matrix-valued spectrum \(\sigma _{m}(a)\) is always closed.
Although the behavior of these matrix-valued generalizations of the classical resolvent set and spectrum seems to be quite similar to the classical case (which is of course included in our definition for m = 1), the matrix-valued spectrum is in general not bounded and hence not a compact subset of \(\mathrm{M}_{m}(\mathbb{C})\). For example, we have for all \(\lambda \in \mathbb{C}\) that
$$\sigma _{m}(\lambda 1) =\big\{ b \in \mathrm{M}_{m}(\mathbb{C})\ \big\vert \ \lambda \in \sigma _{\mathrm{M}_{m}(\mathbb{C})}(b)\big\},$$
i.e. \(\sigma _{m}(\lambda 1)\) consists of all matrices \(b \in \mathrm{M}_{m}(\mathbb{C})\) for which λ belongs to the spectrum \(\sigma _{\mathrm{M}_{m}(\mathbb{C})}(b)\). In particular, \(\sigma _{m}(\lambda 1)\) is unbounded for m ≥ 2.
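For m = 2 this unboundedness is easy to verify by hand: the upper-triangular matrices \(b_{t}\) with diagonal entries λ and upper-right entry t all have λ as an eigenvalue, hence lie in \(\sigma _{2}(\lambda 1)\), while their norms grow with t. A quick numerical confirmation (our own illustration; λ = 2 is an arbitrary choice):

```python
# sigma_2(lam * 1) is unbounded: every b_t = [[lam, t], [0, lam]]
# has lam as an eigenvalue, so b_t - lam*1 is singular for
# arbitrarily large t; we check this via the 2x2 determinant.

lam = 2.0

def det2(a):
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

norms = []
for t in (1.0, 10.0, 1000.0):
    b = [[lam, t], [0.0, lam]]
    diff = [[b[0][0] - lam, b[0][1]],
            [b[1][0], b[1][1] - lam]]
    assert det2(diff) == 0.0      # b - lam*1 is not invertible
    norms.append(max(sum(abs(x) for x in row) for row in b))

print(norms)  # the norms grow without bound
```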
In the following, we denote by \(\mathrm{GL}_{m}(\mathbb{C}) := \mathrm{GL}(\mathrm{M}_{m}(\mathbb{C}))\) the set of all invertible matrices in \(\mathrm{M}_{m}(\mathbb{C})\).
Lemma 3.3.
Let \(a \in \mathcal{A}\) be given. Then for all \(b \in \mathrm{GL}_{m}(\mathbb{C})\) the following inclusion holds:
$$\big\{\lambda b\ \big\vert \ \lambda \in \rho _{\mathcal{A}}({b}^{-1}a)\big\} \subseteq \rho _{m}(a).$$
Proof.
Let \(\lambda \in \rho _{\mathcal{A}}({b}^{-1}a)\) be given. By definition of the usual resolvent set this means that \(\lambda 1 - {b}^{-1}a\) is invertible in \(\mathcal{A}\). It follows that
$$\lambda b - a = b\big(\lambda 1 - {b}^{-1}a\big)$$
is invertible as well, and we get, as desired, \(\lambda b \in \rho _{m}(a)\). □
Lemma 3.4.
For all \(0\neq a \in \mathcal{A}\) we have
$$\big\{ b \in \mathrm{GL}_{m}(\mathbb{C})\ \big\vert \ \|{b}^{-1}\| < \frac{1} {\|a\|}\big\} \subseteq \rho _{m}(a)$$
and
$$\sigma _{m}(a) \cap \mathrm{GL}_{m}(\mathbb{C}) \subseteq \big\{ b \in \mathrm{GL}_{m}(\mathbb{C})\ \big\vert \ \|{b}^{-1}\| \geq \frac{1} {\|a\|}\big\}.$$
Proof.
Obviously, the second inclusion is a direct consequence of the first. Hence, it suffices to show the first statement.
Let \(b \in \mathrm{GL}_{m}(\mathbb{C})\) with \(\|{b}^{-1}\| < \frac{1} {\|a\|}\) be given. It follows that \(h := 1 - {b}^{-1}a\) is invertible, because
$$\|1 - h\| =\| {b}^{-1}a\| \leq \| {b}^{-1}\|\,\|a\| < 1.$$
Therefore, we can deduce that also
$$b - a = b\big(1 - {b}^{-1}a\big) = bh$$
is invertible, i.e. \(b \in \rho _{m}(a)\). This proves the assertion. □
The main reason to consider matrix-valued resolvent sets is that they are the natural domains of the matrix-valued Cauchy transforms, which we will define now.
Definition 3.5.
For \(a \in \mathcal{A}\) we call
$$G_{a} :\rho _{m}(a) \rightarrow \mathrm{M}_{m}(\mathbb{C}),\ b\mapsto E\big[{(b - a)}^{-1}\big]$$
the matrix-valued Cauchy transform of a.
Note that \(G_{a}\) is a continuous function (and hence locally bounded) and induces for all \(b_{0} \in \rho _{m}(a)\), \(b \in \mathrm{M}_{m}(\mathbb{C})\) and bounded linear functionals \(\phi : \mathcal{A}\rightarrow \mathbb{C}\) a function
$$\lambda \mapsto \phi \big(G_{a}(b_{0} +\lambda b)\big),$$
which is holomorphic in a neighborhood of λ = 0. Hence, \(G_{a}\) is weakly holomorphic and therefore (as we have seen in the previous section) strongly holomorphic as well.
Because the structure of \(\rho _{m}(a)\), and therefore the behavior of \(G_{a}\), might in general be quite complicated, we restrict our attention to suitable restrictions of \(G_{a}\). In this way, we will obtain some additional properties of \(G_{a}\).
The first restriction enables us to control the norm of the matrix-valued Cauchy transform on a sufficiently nice subset of the matrix-valued resolvent set.
Lemma 3.6.
Let \(0\neq a \in \mathcal{A}\) be given. For 0 < θ < 1 the matrix-valued Cauchy transform \(G_{a}\) induces a mapping
$$G_{a} :\big\{ b \in \mathrm{GL}_{m}(\mathbb{C})\ \big\vert \ \|{b}^{-1}\| < \frac{\theta } {\|a\|}\big\}\rightarrow \big\{ w \in \mathrm{M}_{m}(\mathbb{C})\ \big\vert \ \|w\| \leq \frac{\theta } {1-\theta }\cdot \frac{1} {\|a\|}\big\}.$$
Proof.
Lemma 3.4 tells us that the open set
$$U :=\big\{ b \in \mathrm{GL}_{m}(\mathbb{C})\ \big\vert \ \|{b}^{-1}\| < \frac{\theta } {\|a\|}\big\}$$
is contained in \(\rho _{m}(a)\), i.e. \(G_{a}\) is well-defined on U. Moreover, we get from (1)
$$\|{(b - a)}^{-1}\| \leq \frac{\|{b}^{-1}\|} {1 -\| {b}^{-1}\|\,\|a\|} \leq \frac{1} {1-\theta }\|{b}^{-1}\|$$
and hence
$$\|G_{a}(b)\| \leq \|{(b - a)}^{-1}\| \leq \frac{\theta } {1-\theta }\cdot \frac{1} {\|a\|}$$
for all b ∈ U. This proves the claim. □
To ensure that the range of \(G_{a}\) is contained in \(\mathrm{GL}_{m}(\mathbb{C})\), we have to shrink the domain again.
Lemma 3.7.
Let \(0\neq a \in \mathcal{A}\) be given. For 0 < θ < 1 and c > 1 we define
and
If the condition
is satisfied for some 0 < σ < 1, then the matrix-valued Cauchy transform \(G_{a}\) induces a mapping \(G_{a} : \Omega \rightarrow \Omega ^\prime \) and we have the estimates
$$\|G_{a}(b)\| \leq \frac{\theta } {1-\theta }\cdot \frac{1} {\|a\|}\qquad \mbox{(3)}$$
and
$$\|G_{a}{(b)}^{-1}\| \leq \frac{1} {1-\sigma }\|b\|\qquad \mbox{(4)}$$
for all \(b \in \Omega \).
Proof.
For all \(b \in \Omega \) we have
which enables us to deduce
Using Lemma 3.1, this implies \(G_{a}(b) \in \mathrm{GL}_{m}(\mathbb{C})\) and (4). Since we already know from (2) in Lemma 3.6 that (3) holds, it follows that \(G_{a}(b) \in \Omega ^\prime \), and the proof is complete. □
Remark 3.8.
Since the domains of our holomorphic functions should be connected, it is necessary to note that for κ > 0 and c > 1
and for r > 0
are pathwise connected subsets of \(\mathrm{M}_{m}(\mathbb{C})\). Indeed, if \(b_{1},b_{2} \in \mathrm{GL}_{m}(\mathbb{C})\) are given, we consider their polar decomposition \(b_{1} = U_{1}P_{1}\) and \(b_{2} = U_{2}P_{2}\) with unitary matrices \(U_{1},U_{2} \in \mathrm{GL}_{m}(\mathbb{C})\) and positive-definite Hermitian matrices \(P_{1},P_{2} \in \mathrm{GL}_{m}(\mathbb{C})\) and define (using functional calculus for normal elements in the C ∗ -algebra \(\mathrm{M}_{m}(\mathbb{C})\))
Then γ fulfills γ(0) = b 1 and γ(1) = b 2, and γ([0, 1]) is contained in \(\Omega \) and \(\Omega ^\prime \) if b 1, b 2 are elements of \(\Omega \) and \(\Omega ^\prime \), respectively.
Since the matrix-valued Cauchy transform is a solution of a special equation (cf. [8, 12]), we will be interested in the following situation:
Corollary 3.9.
Let \(\eta : \mathrm{GL}_{m}(\mathbb{C}) \rightarrow \mathrm{M}_{m}(\mathbb{C})\) be a holomorphic function satisfying
$$\|\eta (w)\| \leq M\|w\|\qquad \mbox{ for all }w \in \mathrm{GL}_{m}(\mathbb{C})$$
for some M > 0. Moreover, we assume that
holds. Let 0 < θ,σ < 1 and c > 1 be given with
and let \(\Omega \) and \(\Omega ^\prime \) be as in Lemma 3.7.
Then, for fixed \(b \in \Omega \), the equation
$$bw = 1 +\eta (w)w\qquad \mbox{(5)}$$
has a unique solution in \(\Omega ^\prime \), which is given by \(w = G_{a}(b)\).
Proof.
Let \(b \in \Omega \) be given. For all \(w \in \Omega ^\prime \) we get
and therefore
This means, that \(1 - {b}^{-1}\eta (w)\) and hence b − η(w) is invertible with
and shows, that we have a well-defined and holomorphic mapping
with
and therefore \(\mathcal{F}(w) \in \Omega ^\prime \).
Now, we want to show that \(\mathcal{F}(\Omega ^\prime )\) lies strictly inside \(\Omega ^\prime \). We put
and consider \(w \in \Omega ^\prime \) and \(u \in \mathrm{M}_{m}(\mathbb{C})\) with \(\|u -\mathcal{F}(w)\| <\epsilon\). At first, we get
and thus
which shows \(u \in \mathrm{GL}_{m}(\mathbb{C})\), and secondly
which shows \(u \in \Omega ^\prime \).
Let now \(w \in \Omega ^\prime \) be a solution of (5). This implies that
and hence \(\mathcal{F}(w) = w\). Since \(\mathcal{F} : \Omega ^\prime \rightarrow \Omega ^\prime \) is holomorphic on the domain \(\Omega ^\prime \) and \(\mathcal{F}(\Omega ^\prime )\) lies strictly inside \(\Omega ^\prime \), it follows by the Theorem of Earle-Hamilton, Theorem 2.3, that \(\mathcal{F}\) has exactly one fixed point. Because G a (b) (which is an element of \(\Omega ^\prime \) by Lemma 3.7) solves (5) by assumption and hence is already a fixed point of \(\mathcal{F}\), it follows w = G a (b) and we are done. □
Remark 3.10.
Let \((\mathcal{A}^\prime ,E^\prime )\) be an arbitrary operator-valued C ∗ -probability space with conditional expectation \(E^\prime : \mathcal{A}^\prime \rightarrow \mathrm{M}_{m}(\mathbb{C})\). This provides us with a unital (and continuous) ∗ -embedding \(\iota : \mathrm{M}_{m}(\mathbb{C}) \rightarrow \mathcal{A}^\prime \). In this section, we only considered the special embedding
which is given by the special structure \(\mathcal{A} = \mathrm{M}_{m}(\mathbb{C}) \otimes \mathcal{C}\). But we can define matrix-valued resolvent sets, spectra and Cauchy transforms also in this more general framework. To be more precise, we put for all \(a \in \mathcal{A}^\prime \)
and \(\sigma _{m}(a) := \mathrm{M}_{m}(\mathbb{C})\setminus \rho _{m}(a)\) and
We note that all the results of this section remain valid in this more general situation.
4 Multivariate Free Central Limit Theorem
4.1 Setting and First Observations
Let \((X_{i})_{i\in \mathbb{N}}\) be a sequence in the operator-valued probability space \((\mathcal{A},E)\) with \(\mathcal{A} = \mathrm{M}_{m}(\mathcal{C}) = \mathrm{M}_{m}(\mathbb{C}) \otimes \mathcal{C}\) and E = id ⊗ τ, as defined in the previous section. We assume:
-
All X i ’s have the same ∗ -distribution with respect to E and their first moments vanish, i.e. E[X i ] = 0.
-
The X i are ∗ -free with amalgamation over \(\mathrm{M}_{m}(\mathbb{C})\) (which means that the ∗ -algebras \(\mathcal{X}_{i}\), generated by \(\mathrm{M}_{m}(\mathbb{C})\) and X i , are free with respect to E).
-
We have \(\sup _{i\in \mathbb{N}}\|X_{i}\| < \infty \).
If we define the linear (and hence holomorphic) mapping
$$\eta : \mathrm{M}_{m}(\mathbb{C}) \rightarrow \mathrm{M}_{m}(\mathbb{C}),\ b\mapsto E[X_{i}bX_{i}]$$
(which, by the identical distribution of the \(X_{i}\), does not depend on i), we easily get from the continuity of E that
$$\|\eta (b)\| \leq \| X_{i}{\|}^{2}\|b\|\qquad \mbox{ for all }b \in \mathrm{M}_{m}(\mathbb{C})$$
holds. Hence we can find M > 0 such that \(\|\eta (b)\| < M\|b\|\) holds for all \(b \in \mathrm{M}_{m}(\mathbb{C})\). Moreover, we have for all \(k \in \mathbb{N}\) and all \(b_{1},\ldots ,b_{k} \in \mathrm{M}_{m}(\mathbb{C})\)
Since \((X_{i})_{i\in \mathbb{N}}\) is a sequence of centered free non-commutative random variables, Theorem 8.4 in [15] tells us that the sequence \((S_{n})_{n\in \mathbb{N}}\) defined by
$$S_{n} := \frac{1} {\sqrt{n}}(X_{1} +\ldots +X_{n})$$
converges to an operator-valued semicircular element s. Moreover, we know from Theorem 4.2.4 in [12] that the operator-valued Cauchy transform \(G_{s}\) satisfies
$$G_{s}(b) ={\big( b -\eta (G_{s}(b))\big)}^{-1}\qquad \mbox{ for all }b \in U_{r},$$
where we put \(U_{r} :=\{ b \in \mathrm{GL}_{m}(\mathbb{C})\mid \|{b}^{-1}\| < r\} \subseteq \rho _{m}(s)\) for all suitably small r > 0.
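In the scalar case m = 1 with covariance mapping \(\eta (w) = {\sigma }^{2}w\), this fixed point equation for \(G_{s}\) is the familiar quadratic equation for the Cauchy transform of the semicircle law, and naive iteration already converges for arguments away from the spectrum. A small Python sketch (our own illustration; the values σ = 1 and z = 3 are arbitrary choices) compares the iteration with the closed form:

```python
import math

def semicircle_G(z, var=1.0, steps=200):
    """Iterate the fixed point equation g = 1/(z - var*g), the
    scalar (m = 1) instance of G_s(b) = (b - eta(G_s(b)))^{-1}."""
    g = 0.0
    for _ in range(steps):
        g = 1.0 / (z - var * g)
    return g

z = 3.0
g = semicircle_G(z)
# closed form for variance 1: G(z) = (z - sqrt(z^2 - 4)) / 2
g_exact = (z - math.sqrt(z * z - 4.0)) / 2.0
print(g, g_exact)
```

The iteration is a contraction here, so the fixed point is unique, mirroring the role the Earle-Hamilton theorem plays in the operator-valued setting.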
By Proposition 7.1 in [9], the boundedness of the sequence \((X_{i})_{i\in \mathbb{N}}\) guarantees boundedness of \((S_{n})_{n\in \mathbb{N}}\) as well. In order to get estimates for the difference between the Cauchy transforms \(G_{s}\) and \(G_{S_{n}}\) we will also need the fact that \((S_{n})_{n\in \mathbb{N}}\) is bounded away from 0. The precise statement is part of the following lemma, which also includes a similar statement for
$$S_{n}^{[i]} := S_{n} - \frac{1} {\sqrt{n}}X_{i} = \frac{1} {\sqrt{n}}\displaystyle\sum _{j=1,j\neq i}^{n}X_{j}.$$
Lemma 4.1.
In the situation described above, we have for all \(n \in \mathbb{N}\) and all 1 ≤ i ≤ n
$$\|S_{n}\| \geq \|\alpha {\|}^{\frac{1} {2} }\qquad \mbox{ and}\qquad \|S_{n}^{[i]}\| \geq \sqrt{1 - \frac{1} {n}}\,\|\alpha {\|}^{\frac{1} {2} },$$
where \(\alpha := E[X_{i}^{{\ast}}X_{i}] \in \mathrm{M}_{m}(\mathbb{C})\) (which does not depend on i).
Proof.
By the ∗-freeness of \(X_{1},X_{2},\ldots\), we have
$$E[S_{n}^{{\ast}}S_{n}] = \frac{1} {n}\displaystyle\sum _{i=1}^{n}E[X_{i}^{{\ast}}X_{i}] =\alpha$$
and thus
$$\|S_{n}{\|}^{2} =\| S_{n}^{{\ast}}S_{n}\| \geq \| E[S_{n}^{{\ast}}S_{n}]\| =\|\alpha \|.$$
Similarly
$$\|S_{n}^{[i]}{\|}^{2} \geq \big\| E\big[{(S_{n}^{[i]})}^{{\ast}}S_{n}^{[i]}\big]\big\| = \Big(1 - \frac{1} {n}\Big)\|\alpha \|,$$
which proves the statement. □
We define for \(n \in \mathbb{N}\)
$$R_{n} :\rho _{m}(S_{n}) \rightarrow \mathcal{A},\ b\mapsto {(b - S_{n})}^{-1},\qquad G_{n} := E \circ R_{n}$$
and for \(n \in \mathbb{N}\) and 1 ≤ i ≤ n
$$R_{n}^{[i]} :\rho _{m}(S_{n}^{[i]}) \rightarrow \mathcal{A},\ b\mapsto {(b - S_{n}^{[i]})}^{-1},\qquad G_{n}^{[i]} := E \circ R_{n}^{[i]}.$$
Lemma 4.2.
For all \(n \in \mathbb{N}\) and 1 ≤ i ≤ n we have
$$R_{n}(b) = R_{n}^{[i]}(b) + \frac{1} {\sqrt{n}}R_{n}(b)X_{i}R_{n}^{[i]}(b)\qquad \mbox{(6)}$$
and
$$R_{n}(b) = R_{n}^{[i]}(b) + \frac{1} {\sqrt{n}}R_{n}^{[i]}(b)X_{i}R_{n}(b)\qquad \mbox{(7)}$$
for all \(b \in \rho _{m}(S_{n}) \cap \rho _{m}(S_{n}^{[i]})\).
Proof.
We have
$$\big(b - S_{n}^{[i]}\big) -\big(b - S_{n}\big) = S_{n} - S_{n}^{[i]} = \frac{1} {\sqrt{n}}X_{i},$$
which leads, by multiplication with \(R_{n}(b) = {(b - S_{n})}^{-1}\) from the left and with \(R_{n}^{[i]}(b) = {(b - S_{n}^{[i]})}^{-1}\) from the right, to (6).
Moreover, multiplication of the same identity with \(R_{n}(b)\) from the right and with \(R_{n}^{[i]}(b)\) from the left leads to equation (7). □
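The resolvent identity used here — in the reconstructed form \(R_{n}(b) = R_{n}^{[i]}(b) + \frac{1} {\sqrt{n}}R_{n}(b)X_{i}R_{n}^{[i]}(b)\) — can be sanity-checked numerically; the following sketch is our own, with ad hoc 2 × 2 matrices standing in for the operators:

```python
# Numerical check of the resolvent identity
#   R = R' + (1/sqrt(n)) * R * X * R'
# with R = (b - S)^{-1}, R' = (b - S')^{-1} and S = S' + X/sqrt(n),
# for ad hoc 2x2 matrices (not data from the paper).

import math

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(a, b):
    return [[a[i][j] - b[i][j] for j in range(2)] for i in range(2)]

def inv(a):
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [[a[1][1] / det, -a[0][1] / det],
            [-a[1][0] / det, a[0][0] / det]]

n = 4
b = [[5.0, 0.0], [0.0, 5.0]]
x = [[0.0, 1.0], [1.0, 0.0]]        # plays the role of X_i
s_i = [[1.0, 0.5], [0.5, -1.0]]     # plays the role of S_n^{[i]}
s = [[s_i[i][j] + x[i][j] / math.sqrt(n) for j in range(2)]
     for i in range(2)]             # S_n = S_n^{[i]} + X_i/sqrt(n)

r = inv(sub(b, s))
r_i = inv(sub(b, s_i))
rhs = [[r_i[i][j] + (1.0 / math.sqrt(n)) * mul(mul(r, x), r_i)[i][j]
        for j in range(2)] for i in range(2)]

err = max(abs(r[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
print(err)  # agreement up to rounding
```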
Obviously, we have
4.2 Proof of the Main Theorem
Throughout this subsection, let 0 < θ, σ < 1 and c > 1 be given such that
holds. For all \(n \in \mathbb{N}\) we define
and
Lemma 3.4 shows that \(\Omega _{n}\) is a subset of \(\rho _{m}(S_{n})\).
Theorem 4.3.
For all \(2 \leq n \in \mathbb{N}\) the function \(G_{n}\) satisfies on \(\Omega _{n}\) the equation
$$\Lambda _{n}(b)G_{n}(b) = 1 +\eta (G_{n}(b))G_{n}(b),$$
where
$$\Lambda _{n}(b) := b - \Theta _{n}(b)G_{n}{(b)}^{-1}$$
with a holomorphic function
$$\Theta _{n} : \Omega _{n} \rightarrow \mathrm{M}_{m}(\mathbb{C})$$
satisfying
$$\sup _{b\in \Omega _{n}}\|\Theta _{n}(b)\| \leq \frac{C} {\sqrt{n}}$$
with a constant C > 0, independent of n.
Proof.
-
(i)
Let \(n \in \mathbb{N}\) and b ∈ ρ m (S n ) be given. Then we have
$$S_{n}R_{n}(b) = bR_{n}(b) - (b - S_{n})R_{n}(b) = bR_{n}(b) - 1$$and hence
$$E[S_{n}R_{n}(b)] = E\big[bR_{n}(b) - 1\big] = bG_{n}(b) - 1.$$ -
(ii)
Let \(n \in \mathbb{N}\) be given. For all
$$b \in \rho _{m,n} :=\rho _{m}(S_{n}) \cap \displaystyle\bigcap _{i=1}^{n}\rho _{ m}(S_{n}^{[i]})$$we deduce from the formula in (6), that
$$\displaystyle\begin{array}{rcl} & & E[S_{n}R_{n}(b)] = \frac{1} {\sqrt{n}}\displaystyle\sum _{i=1}^{n}E[X_{ i}R_{n}(b)] \\ & & \quad = \frac{1} {\sqrt{n}}\displaystyle\sum _{i=1}^{n}\bigg(E\big[X_{ i}R_{n}^{[i]}(b)\big] + \frac{1} {\sqrt{n}}E\big[X_{i}R_{n}^{[i]}(b)X_{ i}R_{n}^{[i]}(b)\big] \\ & & \qquad + \frac{1} {n}E\big[X_{i}R_{n}(b)X_{i}R_{n}^{[i]}(b)X_{ i}R_{n}^{[i]}(b)\big]\bigg) \\ & & \quad = \frac{1} {n}\displaystyle\sum _{i=1}^{n}\bigg(E\big[X_{ i}R_{n}^{[i]}(b)X_{ i}R_{n}^{[i]}(b)\big] + \frac{1} {\sqrt{n}}E\big[X_{i}R_{n}(b)X_{i}R_{n}^{[i]}(b)X_{ i}R_{n}^{[i]}(b)\big]\bigg) \\ & & \quad = \frac{1} {n}\displaystyle\sum _{i=1}^{n}\bigg(E\big[X_{ i}G_{n}^{[i]}(b)X_{ i}\big]G_{n}^{[i]}(b) + \frac{1} {\sqrt{n}}E\big[X_{i}R_{n}(b)X_{i}R_{n}^{[i]}(b)X_{ i}R_{n}^{[i]}(b)\big]\bigg) \\ & & \quad = \frac{1} {n}\displaystyle\sum _{i=1}^{n}\Big(\eta (G_{ n}^{[i]}(b))G_{ n}^{[i]}(b) + r_{ n,1}^{[i]}(b)\Big), \\ \end{array}$$where
$$r_{n,1}^{[i]} :\rho _{m}(S_{n}) \cap \rho _{m}(S_{n}^{[i]})\,\rightarrow \,\mathrm{M}_{m}(\mathbb{C}),\ b\,\mapsto \, \frac{1} {\sqrt{n}}E\big[X_{i}R_{n}(b)X_{i}R_{n}^{[i]}(b)X_{i}R_{n}^{[i]}(b)\big].$$Here we used the fact that, since the \((X_{j})_{j\in \mathbb{N}}\) are free with respect to E, \(X_{i}\) is also free from \(R_{n}^{[i]}(b)\), and thus we have
$$E\big[X_{i}R_{n}^{[i]}(b)\big] = E[X_{ i}]E\big[R_{n}^{[i]}(b)\big] = 0$$and
$$E\big[X_{i}R_{n}^{[i]}(b)X_{ i}R_{n}^{[i]}(b)\big] = E\big[X_{ i}E\big[R_{n}^{[i]}(b)\big]X_{ i}\big]E\big[R_{n}^{[i]}(b)\big].$$ -
(iii)
Taking (7) into account, we get for all \(n \in \mathbb{N}\) and 1 ≤ i ≤ n
$$G_{n}(b) = E\big[R_{n}(b)\big] = E\big[R_{n}^{[i]}(b)\big] + \frac{1} {\sqrt{n}}E\big[R_{n}^{[i]}(b)X_{ i}R_{n}(b)\big] = G_{n}^{[i]}(b) - r_{ n,2}^{[i]}(b)$$and therefore
$$G_{n}^{[i]}(b) = G_{ n}(b) + r_{n,2}^{[i]}(b)$$for all \(b \in \rho _{m}(S_{n}) \cap \rho _{m}(S_{n}^{[i]})\), where we put
$$r_{n,2}^{[i]} :\ \rho _{ m}(S_{n}) \cap \rho _{m}(S_{n}^{[i]}) \rightarrow \mathrm{M}_{ m}(\mathbb{C}),\ b\mapsto - \frac{1} {\sqrt{n}}E\big[R_{n}^{[i]}(b)X_{ i}R_{n}(b)\big].$$ -
(iv)
The formula in (iii) enables us to replace G n [i] in (ii) by G n . Indeed, we get
$$\displaystyle\begin{array}{rcl} E[S_{n}R_{n}(b)]& =& \frac{1} {n}\displaystyle\sum _{i=1}^{n}\Big(\eta (G_{ n}^{[i]}(b))G_{ n}^{[i]}(b) + r_{ n,1}^{[i]}(b)\Big) \\ & =& \frac{1} {n}\displaystyle\sum _{i=1}^{n}\Big(\eta \big(G_{ n}(b) + r_{n,2}^{[i]}(b)\big)\big(G_{ n}(b) + r_{n,2}^{[i]}(b)\big) + r_{ n,1}^{[i]}(b)\Big) \\ & =& \eta (G_{n}(b))G_{n}(b) + \frac{1} {n}\displaystyle\sum _{i=1}^{n}r_{ n,3}^{[i]}(b) \\ \end{array}$$for all b ∈ ρ m, n , where the function
$$r_{n,3}^{[i]} :\ \rho _{ m}(S_{n}) \cap \rho _{m}(S_{n}^{[i]}) \rightarrow \mathrm{M}_{ m}(\mathbb{C})$$is defined by
$$r_{n,3}^{[i]}(b) :=\eta (G_{ n}(b))r_{n,2}^{[i]}(b)+\eta (r_{ n,2}^{[i]}(b))G_{ n}(b)+\eta (r_{n,2}^{[i]}(b))r_{ n,2}^{[i]}(b)+r_{ n,1}^{[i]}(b).$$ -
(v)
Combining the results from (i) and (iv), it follows
$$bG_{n}(b) - 1 = E[S_{n}R_{n}(b)] =\eta (G_{n}(b))G_{n}(b) + \Theta _{n}(b),$$where we define
$$\Theta _{n} :\ \rho _{m,n} \rightarrow \mathrm{M}_{m}(\mathbb{C}),\ b\mapsto \frac{1} {n}\displaystyle\sum _{i=1}^{n}r_{ n,3}^{[i]}(b).$$Due to (8), Lemmas 3.4 and 3.7 show that \(\Omega _{n} \subseteq \rho _{m,n}\) and \(G_{n}(b) \in \mathrm{GL}_{m}(\mathbb{C})\) for \(b \in \Omega _{n}\). This gives
$$\big(b - \Theta _{n}(b)G_{n}{(b)}^{-1}\big)G_{ n}(b) = 1 +\eta (G_{n}(b))G_{n}(b)$$and hence, as desired, for all \(b \in \Omega _{n}\)
$$\Lambda _{n}(b)G_{n}(b) = 1 +\eta (G_{n}(b))G_{n}(b).$$ -
(vi)
The definition of \(\Omega _{n}\) gives, by Lemma 3.6 and by Lemma 4.1, the following estimates
$$\|G_{n}(b)\| \leq \| R_{n}(b)\| \leq \frac{\theta } {1-\theta }\cdot \frac{1} {\|S_{n}\|} \leq \frac{\theta } {1-\theta }\cdot \frac{1} {\|\alpha {\|}^{\frac{1} {2} }},\qquad b \in \Omega _{n}$$and
$$\|G_{n}^{[i]}(b)\| \leq \| R_{n}^{[i]}(b)\| \leq \frac{\theta } {1-\theta }\cdot \frac{1} {\|S_{n}^{[i]}\|} \leq \frac{\theta } {1-\theta }\cdot \frac{1} {\sqrt{1 - \frac{1} {n}}\,\|\alpha {\|}^{\frac{1} {2} }},\qquad b \in \Omega _{n}.$$Therefore, we have for all \(b \in \Omega _{n}\) by (ii)
$$\|r_{n,1}^{[i]}(b)\| \leq \frac{1} {\sqrt{n}}\|X_{i}{\|}^{3}\|R_{n}(b)\|\,\|R_{n}^{[i]}(b){\|}^{2} \leq \frac{1} {\sqrt{n}}\cdot \frac{n} {n - 1}{\Big( \frac{\theta } {1-\theta }\cdot \frac{1} {\|\alpha {\|}^{\frac{1} {2} }}\Big)}^{3}\|X_{i}{\|}^{3}$$and by (iii)
$$\|r_{n,2}^{[i]}(b)\| \leq \frac{1} {\sqrt{n}}\|X_{i}\|\,\|R_{n}(b)\|\,\|R_{n}^{[i]}(b)\| \leq \frac{1} {\sqrt{n - 1}}{\Big( \frac{\theta } {1-\theta }\cdot \frac{1} {\|\alpha {\|}^{\frac{1} {2} }}\Big)}^{2}\|X_{i}\|$$and finally by (iv)
$$\displaystyle\begin{array}{rcl} \|r_{n,3}^{[i]}(b)\|& \leq & 2M\|G_{n}(b)\|\,\|r_{n,2}^{[i]}(b)\| + M\|r_{n,2}^{[i]}(b){\|}^{2} +\| r_{n,1}^{[i]}(b)\| \\ & \leq & \frac{1} {\sqrt{n - 1}}{\Big( \frac{\theta } {1-\theta }\cdot \frac{1} {\|\alpha {\|}^{\frac{1} {2} }}\Big)}^{3}\|X_{i}\|\cdot \\ & &\bigg(2M + \frac{1} {\sqrt{n - 1}}M\Big( \frac{\theta } {1-\theta }\cdot \frac{1} {\|\alpha {\|}^{\frac{1} {2} }}\Big)\|X_{i}\| + \sqrt{ \frac{n} {n - 1}}\|X_{i}{\|}^{2}\bigg) \\ & \leq & \frac{C} {\sqrt{n}} \\ \end{array}$$for all \(b \in \Omega _{n}\), where C > 0 is a constant which is independent of n. Hence, it follows from (v) that
$$\sup _{b\in \Omega _{n}}\|\Theta _{n}(b)\| \leq \frac{C} {\sqrt{n}}.$$□
The definition of \(\Omega _{n}\) ensures that
satisfies
where
We choose
(note that 0 < γ < 1) and we put \({c}^{{\ast}} := c - (1 + c)\gamma\), which clearly fulfills \(1 < {c}^{{\ast}} < c\). Since we have \({\theta }^{{\ast}} <\theta \) and \({c}^{{\ast}} < c\), we see
and hence
Finally, we define
and
Corollary 4.4.
There exists \(N \in \mathbb{N}\) such that
$$\Lambda _{n}\big(\Omega _{n}^{{\ast}}\big) \subseteq \Omega _{n}\qquad \mbox{ for all }n \geq N.$$
Proof.
Since we have by Theorem 4.3
$$\sup _{b\in \Omega _{n}}\|\Theta _{n}(b)\| \leq \frac{C} {\sqrt{n}}$$
for all \(2 \leq n \in \mathbb{N}\), we can choose an \(N \in \mathbb{N}\) such that
holds for all n ≥ N. Now, we get for all \(b \in \Omega _{n}^{{\ast}}\):
-
(i)
\(\Lambda _{n}(b)\) is invertible: Since (4) gives
$$\|G_{n}{(b)}^{-1}\| \leq \frac{1} {1-\sigma }\|b\|\qquad \mbox{ for all $b \in \Omega _{n}$},$$we immediately get
$$\|\Lambda _{n}(b) - b\| \leq \| \Theta _{n}(b)\|\|G_{n}{(b)}^{-1}\| <\gamma \frac{\|b\|} {{c}^{{\ast}}} <\gamma \frac{1} {\|{b}^{-1}\|} < \frac{1} {\|{b}^{-1}\|}$$ -
(ii)
We have \(\|\Lambda _{n}{(b)}^{-1}\| <\kappa _{n}\): Using Lemma 3.1, we get from (i) that
$$\|\Lambda _{n}{(b)}^{-1}\| \leq \frac{1} {1-\gamma }\|{b}^{-1}\| < \frac{\kappa _{n}^{{\ast}}} {1-\gamma } <\kappa _{n}.$$ -
(iii)
We have \(\|\Lambda _{n}(b)\|\|\Lambda _{n}{(b)}^{-1}\| < c\): Using
$$\|\Lambda _{n}(b) - b\| <\gamma \frac{\|b\|} {{c}^{{\ast}}}$$from (i) and
$$\|\Lambda _{n}{(b)}^{-1}\| < \frac{1} {1-\gamma }\|{b}^{-1}\|$$from (ii), we get
$$\displaystyle\begin{array}{rcl} \|\Lambda _{n}(b)\|\|\Lambda _{n}{(b)}^{-1}\|& \leq &\big(\|b\| +\| \Lambda _{ n}(b) - b\|\big)\|\Lambda _{n}{(b)}^{-1}\| \\ & <& \Big(1 + \frac{\gamma } {{c}^{{\ast}}}\Big) \frac{1} {1-\gamma }\cdot \| b\|\|{b}^{-1}\| \\ & <& \frac{{c}^{{\ast}}+\gamma } {1-\gamma } < c.\end{array}$$
Finally, this shows \(\Lambda _{n}(b) \in \Omega _{n}\). □
Corollary 4.5.
For all n ≥ N we have
$$G_{n}(b) = G_{s}\big(\Lambda _{n}(b)\big)\qquad \mbox{ for all }b \in \Omega _{n}^{{\ast}}.$$
Proof.
For all \(n \in \mathbb{N}\) we define
Let n ≥ N and \(b \in \Omega _{n}^{{\ast}}\) be given. We know that
holds, i.e. \(w = G(\Lambda _{n}(b)) \in \Omega _{n}^\prime \) is a solution of the equation
Combining (8) with Lemma 4.1, we get
Hence, the equation above has, by Corollary 3.9, the unique solution \(w = G_{n}(b) \in \Omega _{n}^\prime \). This implies, as desired, \(G_{n}(b) = G(\Lambda _{n}(b))\). □
Corollary 4.6.
For all n ≥ N we have
$$\sup _{b\in \Omega _{n}^{{\ast}}}\|G_{n}(b) - G_{s}(b)\| \leq \frac{C^\prime } {\sqrt{n}},$$
where C′ > 0 is a constant independent of n.
Proof.
For all \(b \in \Omega _{n}^{{\ast}}\subseteq \Omega _{n} \subseteq \Omega \) we have
and therefore by (4), which gives
and (since \(\Lambda _{n}(b) \in \Omega _{n} \subseteq \Omega \)) by (3)
where
This proves the corollary. □
We recall that the sequence \((X_{i})_{i\in \mathbb{N}}\) is bounded, which implies boundedness of the sequence \((S_{n})_{n\in \mathbb{N}}\) as well. This has the important consequence that
for some κ ∗ > 0. If we define
we easily see \({\Omega }^{{\ast}}\subseteq \Omega _{n}^{{\ast}}\) for all \(n \in \mathbb{N}\). Hence, by renaming \({\Omega }^{{\ast}}\) to \(\Omega \) etc., we have shown our main Theorem 1.1.
We conclude this section with the following remark about the geometric structure of subsets of \(\mathrm{M}_{m}(\mathbb{C})\) like \(\Omega \).
Lemma 4.7.
For κ > 0 and c > 1 we consider
For \(\lambda ,\mu \in \mathbb{C}\setminus \{0\}\) we define
If \(\frac{1} {\kappa } < \vert \mu \vert \) holds, we have \(\Lambda (\lambda ,\mu ) \in \Omega \) for all
In particular, we have \(\lambda 1 \in \Omega \) for all \(\vert \lambda \vert > \frac{1} {\kappa }\).
Proof.
Let \(\mu \in \mathbb{C}\setminus \{0\}\) with \(\frac{1} {\kappa } < \vert \mu \vert \) be given. For all \(\lambda \in \mathbb{C}\setminus \{0\}\), which satisfy (11), we get
and
which implies \(\Lambda (\lambda ,\mu ) \in \Omega \). In particular, for \(\lambda \in \mathbb{C}\setminus \{0\}\) with \(\vert \lambda \vert > \frac{1} {\kappa }\) we see that μ = λ fulfills (11) and it follows \(\lambda 1 = \Lambda (\lambda ,\lambda ) \in \Omega \). □
4.3 Application to Multivariate Situation
4.3.1 Multivariate Free Central Limit Theorem
Let \((x_{i}^{(k)})_{k=1}^{d}\), \(i \in \mathbb{N}\), be free and identically distributed sets of d self-adjoint non-zero random variables in some non-commutative C ∗ -probability space \((\mathcal{C},\tau )\), with τ faithful, such that
$$\tau \big(x_{i}^{(k)}\big) = 0\qquad \mbox{ for }k = 1,\ldots ,d$$
and
$$\sup _{i\in \mathbb{N}}\max _{k=1,\ldots ,d}\|x_{i}^{(k)}\| < \infty .\qquad \mbox{(12)}$$
We denote by \(\Sigma = (\sigma _{k,l})_{k,l=1}^{d}\), where \(\sigma _{k,l} :=\tau (x_{i}^{(k)}x_{i}^{(l)})\), their joint covariance matrix. Moreover, we put
$$S_{n}^{(k)} := \frac{1} {\sqrt{n}}\big(x_{1}^{(k)} +\ldots +x_{n}^{(k)}\big),\qquad k = 1,\ldots ,d.$$
We know (cf. [11]) that \((S_{n}^{(1)},\ldots ,S_{n}^{(d)})\) converges in distribution as n → ∞ to a semicircular family \((s_{1},\ldots ,s_{d})\) of covariance \(\Sigma \). For notational convenience we will assume that \(s_{1},\ldots ,s_{d}\) also live in \((\mathcal{C},\tau )\); this can always be achieved by enlarging \((\mathcal{C},\tau )\).
Using Proposition 2.1 and Proposition 2.3 in [6], for each polynomial p of degree g in d non-commuting variables vanishing in 0, we can find \(m \in \mathbb{N}\) and \(a_{1},\ldots ,a_{d} \in \mathrm{M}_{m}(\mathbb{C})\) such that
are invertible in \(\mathcal{C}\) if and only if
respectively, are invertible in \(\mathcal{A} = \mathrm{M}_{m}(\mathcal{C})\). The matrices \(\Lambda (\lambda ,1) \in \mathrm{M}_{m}(\mathbb{C})\) were defined in Lemma 4.7, and S n and s are defined as follows:
and
If we also put
then we have
We note that the sequence \((X_{i})_{i\in \mathbb{N}}\) is ∗-free with respect to the conditional expectation \(E : \mathcal{A} = \mathrm{M}_{m}(\mathcal{C}) \rightarrow \mathrm{M}_{m}(\mathbb{C})\), that all the \(X_{i}\) have the same ∗-distribution with respect to E, and that \(E[X_{i}] = 0\). In addition, (12) implies \(\sup _{i\in \mathbb{N}}\|X_{i}\| < \infty \). Hence the conditions of Theorem 1.1 are fulfilled. Before we apply it, we note that \((S_{n})_{n\in \mathbb{N}}\) converges in distribution (with respect to E) to s, which is an \(\mathrm{M}_{m}(\mathbb{C})\)-valued semicircular element with covariance mapping
which is given by
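The operator-valued setup can be made concrete in a matrix model. The following sketch is our own illustration: the coefficient matrices \(a_1, a_2\) and the test matrix b are hypothetical choices, and we assume the standard form \(\eta (b) =\sum _{k,l}\sigma _{k,l}\,a_{k}\,b\,a_{l}\) of the covariance mapping for \(s =\sum _{k}a_{k} \otimes s_{k}\) (the display is not reproduced here). The conditional expectation \(E = \mathrm{id}_{\mathrm{M}_{m}} \otimes \tau \) is simply τ applied blockwise.

```python
import numpy as np

rng = np.random.default_rng(1)
m, N = 2, 400

def gue(N):
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    return (A + A.conj().T) / (2 * np.sqrt(N))

tau = lambda Y: np.trace(Y) / N   # normalized trace, modelling tau

def E(Y):
    """E = id_{M_m} (x) tau : M_m(M_N) -> M_m, i.e. tau applied to each block."""
    return np.array([[tau(Y[i*N:(i+1)*N, j*N:(j+1)*N]) for j in range(m)]
                     for i in range(m)])

# Hypothetical self-adjoint coefficient matrices a_1, a_2
a1 = np.array([[0., 1.], [1., 0.]])
a2 = np.array([[1., 0.], [0., -1.]])
x1, x2 = gue(N), gue(N)                # approx. free, covariance Sigma = I
X = np.kron(a1, x1) + np.kron(a2, x2)  # X = a_1 (x) x^(1) + a_2 (x) x^(2)

b = np.array([[1., 2.], [0., 1.]])
eta_emp = E(X @ np.kron(b, np.eye(N)) @ X)  # E[X (b (x) 1) X]
eta_th = a1 @ b @ a1 + a2 @ b @ a2          # sum_{k,l} sigma_{k,l} a_k b a_l

print(np.abs(E(X)).max())              # ≈ 0: E[X] = 0
print(np.abs(eta_emp - eta_th).max())  # small: matches the covariance mapping
```

Since the \(x^{(k)}\) here have covariance \(\Sigma = I\), only the diagonal terms \(a_{k}\,b\,a_{k}\) survive in the limit.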
Now, Theorem 1.1 gives us constants \(\kappa ^{{\ast}} > 0\), \(c^{{\ast}} > 0\), \(C^\prime > 0\) and \(N \in \mathbb{N}\) such that we have, for the difference of the operator-valued Cauchy transforms
the estimate
where we put
Moreover, Proposition 2.3 in [6] tells us
and
where \(\pi : \mathrm{M}_{m}(\mathbb{C}) \rightarrow \mathbb{C}\) is the mapping given by \(\pi ((a_{i,j})_{i,j=1,\ldots ,m}) := a_{1,1}\). Since \(\tau \circ (\pi \otimes \mathrm{id}_{\mathcal{C}}) =\pi \circ E\), this implies a direct connection between the operator-valued Cauchy transforms of S n and s and the scalar-valued Cauchy transforms of \(P_{n} := p(S_{n}^{(1)},\ldots ,S_{n}^{(d)})\) and \(P := p(s_{1},\ldots ,s_{d})\), respectively. To be more precise, we get
and
for all \(\lambda \in \rho _{\mathcal{C}}(P_{n})\) and \(\lambda \in \rho _{\mathcal{C}}(P)\), respectively.
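The mechanism behind this connection between scalar- and operator-valued Cauchy transforms can be checked in a toy case. For the concrete polynomial p(x) = x² (a minimal sketch, not the general construction of [6]), the block matrix \(\begin{pmatrix}\lambda & x\\ x & 1\end{pmatrix}\) is a linearization: its Schur complement is λ − x², so the (1, 1) block of its inverse is the resolvent of x². Numerically, with a Hermitian matrix standing in for a self-adjoint operator:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200
A = rng.normal(size=(N, N))
X = (A + A.T) / (2 * np.sqrt(N))  # a self-adjoint "random variable"
I = np.eye(N)
lam = 2.0 + 1.0j                  # Im(lam) != 0, so lam - X^2 is invertible

# Linearization of p(x) = x^2:  L = [[lam*1, x], [x, 1]].
# Its Schur complement is lam - x^2, hence (L^{-1})_{11} = (lam - x^2)^{-1}.
L = np.block([[lam * I, X], [X, I]])
top_left = np.linalg.inv(L)[:N, :N]
resolvent = np.linalg.inv(lam * I - X @ X)

print(np.allclose(top_left, resolvent))  # True
```

Taking the normalized trace of the (1, 1) block then recovers the scalar Cauchy transform of X², exactly as \(\tau \circ (\pi \otimes \mathrm{id}_{\mathcal{C}}) =\pi \circ E\) does in the text.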
If we choose \(\mu \in \mathbb{C}\) such that \(\vert \mu \vert > \frac{1} {\kappa ^{{\ast}}}\) holds, it follows from Lemma 4.7 that \(\Lambda (\lambda ,\mu ) \in {\Omega }^{{\ast}}\) for all λ ∈ A(μ), where \(A(\mu ) \subseteq \mathbb{C}\) denotes the open set of all \(\lambda \in \mathbb{C}\) satisfying (11), i.e.
If we apply Propositions 2.1 and 2.2 in [6] to the polynomial \(\frac{1} {\mu ^{g}}p\) (which corresponds to the operators \(\frac{1} {\mu } S_{n}\) and \(\frac{1} {\mu } s\)), we easily deduce that
are invertible in \(\mathcal{C}\) if and only if
respectively, are invertible in \(\mathcal{A}\). Moreover, we have
and
for all \(\lambda \in \rho _{\mathcal{C}}(\frac{1} {\mu ^{g-1}} P_{n})\) and \(\lambda \in \rho _{\mathcal{C}}(\frac{1} {\mu ^{g-1}} P)\), respectively.
In particular, for all λ ∈ A(μ) we get \(\Lambda (\lambda ,\mu ) \in {\Omega }^{{\ast}}\) and hence \(\lambda \in \rho _{\mathcal{C}}(\frac{1} {\mu ^{g-1}} P_{n}) \cap \rho _{\mathcal{C}}(\frac{1} {\mu ^{g-1}} P)\) for all n ≥ N. Therefore, Theorem 1.1 implies
and hence
This means that
holds for all \(z \in \mathbb{C}\) with \(\frac{z} {\mu ^{g-1}} \in A(\mu )\) and all n ≥ N. By definition of A(μ), we get in particular
where we put \(C := C^\prime {({c}^{{\ast}})}^{2}\vert \mu {\vert }^{g} > 0\). Since \(z\mapsto G_{P}(z) - G_{P_{n}}(z)\) is holomorphic on \(\{z \in \mathbb{C}\mid \vert z\vert > R\}\) for \(R := \frac{1} {{c}^{{\ast}}}\vert \mu {\vert }^{g} > 0\) and extends holomorphically to ∞, the maximum modulus principle gives
This shows Theorem 1.2 in the case of a polynomial p vanishing at 0. For a general polynomial p, we consider the polynomial \(\tilde{p} = p - p_{0}\) with \(p_{0} := p(0,\ldots ,0)\), which leads to the operators \(\tilde{P} = P - p_{0}1\) and \(\tilde{P}_{n} = P_{n} - p_{0}1\). Since we can apply the result above to \(\tilde{p}\), and since the Cauchy transforms \(G_{P}\) and \(G_{P_{n}}\) are just translations of \(G_{\tilde{P}}\) and \(G_{\tilde{P}_{n}}\), respectively, the general statement follows easily.
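The translation step used here is just the identity \(G_{X+c1}(z) = G_{X}(z - c)\) for the scalar Cauchy transform. A quick numerical confirmation in a Hermitian matrix model (the shift c and spectral parameter z below are arbitrary hypothetical values):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 300
A = rng.normal(size=(N, N))
X = (A + A.T) / (2 * np.sqrt(N))
I = np.eye(N)

# Scalar Cauchy transform G_Y(z) = tau((z - Y)^{-1}) in a matrix model
G = lambda Y, z: np.trace(np.linalg.inv(z * I - Y)) / N

c, z = 1.5, 0.5 + 1.0j  # arbitrary shift and spectral parameter, Im(z) > 0
print(abs(G(X + c * I, z) - G(X, z - c)))  # ≈ 0: translation identity
```

Hence any estimate for \(G_{\tilde{P}} - G_{\tilde{P}_{n}}\) transfers verbatim to \(G_{P} - G_{P_{n}}\).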
4.3.2 Estimates in Terms of the Kolmogorov Distance
In the classical case, estimates between scalar-valued Cauchy transforms can be established (for self-adjoint operators) in all of the upper complex plane and then lead to estimates in terms of the Kolmogorov distance. In the case treated above, we have a statement about the behavior of the difference of two Cauchy transforms only near infinity. Even in the case where our operators are self-adjoint, we still have to transport estimates from infinity to the real line, and hence we cannot apply the results of Bai [1] directly. A partial solution to this problem was given in the appendix of [14] with the following theorem, formulated in terms of probability measures instead of operators. There we use the notation G μ for the Cauchy transform of the measure μ, and put
Theorem 4.8.
Let μ be a probability measure with compact support contained in an interval [−A,A] such that the cumulative distribution function \(\mathcal{F}_{\mu }\) satisfies
for some constant ρ > 0. Then for all R > 0 and β ∈ (0,1) we can find \(\Theta > 0\) and m0 > 0 such that for any probability measure ν with compact support contained in [−A,A], which satisfies
for some \(m > m_{0}\), the Kolmogorov distance \(\Delta (\mu ,\nu ) :=\sup _{x\in \mathbb{R}}\vert \mathcal{F}_{\mu }(x) -\mathcal{F}_{\nu }(x)\vert \) fulfills
This leads to two open questions. First, the stated estimate for the speed of convergence in terms of the Kolmogorov distance is far from the expected one; we hope to improve this result in future work. Second, in order to apply this theorem, we have to ensure that \(p(s_{1},\ldots ,s_{d})\) has a continuous density. As mentioned in the introduction, it is still an unsolved problem whether this holds for every self-adjoint polynomial p.
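For illustration, the Kolmogorov distance appearing in Theorem 4.8 can be evaluated in a toy case of our own choosing: the semicircle law versus the empirical eigenvalue distribution of a single GUE matrix. Since the empirical CDF is a step function, the supremum is attained at one of its jumps, so it suffices to compare left and right limits there.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 400
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
X = (A + A.conj().T) / (2 * np.sqrt(N))
eigs = np.sort(np.linalg.eigvalsh(X))

def F_sc(x):
    """CDF of the standard semicircle law (density sqrt(4 - x^2)/(2 pi) on [-2, 2])."""
    x = np.clip(x, -2.0, 2.0)
    return 0.5 + (x * np.sqrt(4.0 - x**2) + 4.0 * np.arcsin(x / 2.0)) / (4.0 * np.pi)

# Delta = sup_x |F_mu(x) - F_nu(x)|, evaluated at the jumps of the empirical CDF
delta = max(np.abs(F_sc(eigs) - np.arange(1, N + 1) / N).max(),
            np.abs(F_sc(eigs) - np.arange(0, N) / N).max())
print(delta)  # small, and shrinking as N grows
```

The point of Theorem 4.8 is precisely to deduce such a bound on Δ from estimates on the Cauchy transforms valid only away from the real line.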
References
Z.D. Bai, Convergence rate of expected spectral distributions of large-dimensional random matrices: Part I. Wigner matrices. Ann. Probab. 21, 625–648 (1993)
G.P. Chistyakov, F. Götze, Limit theorems in free probability theory I. Ann. Probab. 36(1), 54–90 (2008)
C.J. Earle, R.S. Hamilton, A Fixed Point Theorem for Holomorphic Mappings. Global Analysis (Proc. Sympos. Pure Math., Vol. XVI, Berkeley, CA, 1968) (American Mathematical Society, Providence, 1970), pp. 61–65
F. Götze, A. Tikhomirov, Limit Theorems for Spectra of Random Matrices with Martingale Structure. Stein’s Method and Applications, Lect. Notes Ser. Inst. Math. Sci. Natl. Univ. Singap., vol. 5 (Singapore University Press, Singapore, 2005), pp. 181–193
U. Haagerup, S. Thorbjørnsen, A new application of random matrices: \(\mathrm{Ext}(C_{\mathrm{red}}^{{\ast}}(F_{2}))\) is not a group. Ann. Math. 162, 711–775 (2005)
U. Haagerup, H. Schultz, S. Thorbjørnsen, A random matrix approach to the lack of projections in \(C_{\mathrm{red}}^{{\ast}}(\mathbb{F}_{2})\). Adv. Math. 204, 1–83 (2006)
L.A. Harris, Fixed points of holomorphic mappings for domains in Banach spaces. Abstr. Appl. Anal. 2003(5), 261–274 (2003)
J.W. Helton, R.R. Far, R. Speicher, Operator-valued semicircular elements: solving a quadratic matrix equation with positivity constraints. Int. Math. Res. Not. (22), Article ID rnm086, 15 (2007)
M. Junge, Embedding of the operator space OH and the logarithmic ‘little Grothendieck inequality’. Invent. math. 161, 225–286 (2005)
V. Kargin, Berry-Esseen for free random variables. J. Theor. Probab. 20, 381–395 (2007)
R. Speicher, A new example of independence and white noise. Probab. Theor. Relat. Fields 84, 141–159 (1990)
R. Speicher, Combinatorial theory of the free product with amalgamation and operator-valued free probability theory. Mem. Am. Math. Soc. 132(627), x + 88 (1998)
R. Speicher, On the rate of convergence and Berry-Esseen type theorems for a multivariate free central limit theorem. SFB 701. Preprint (2007)
R. Speicher, C. Vargas, Free deterministic equivalents, rectangular random matrix models and operator-valued free probability theory. Random Matrices: Theor. Appl. 1(1), 1150008, 26 (2012)
D. Voiculescu, Operations on certain non-commutative operator-valued random variables. Astérisque 232, 243–275 (1995). Recent advances in operator algebras (Orléans, 1992)
Acknowledgements
This project was initiated by discussions with Friedrich Götze during the visit of the second author at the University of Bielefeld in November 2006. He thanks the Department of Mathematics and in particular the SFB 701 for its generous hospitality and Friedrich Götze for the invitation and many interesting discussions. A preliminary version of this paper appeared as preprint [13] of SFB 701. Research of T. Mai supported by funds of Roland Speicher from the Alfried Krupp von Bohlen und Halbach-Stiftung; research of R. Speicher partially supported by a Discovery grant from NSERC (Canada) and by a Killam Fellowship from the Canada Council for the Arts. The second author also thanks Uffe Haagerup for pointing out how ideas from [6] can be used to improve the results from an earlier version of this paper.
Dedicated to Professor Friedrich Götze on the occasion of his 60th birthday
© 2013 Springer-Verlag Berlin Heidelberg
Mai, T., Speicher, R. (2013). Operator-Valued and Multivariate Free Berry-Esseen Theorems. In: Eichelsbacher, P., Elsner, G., Kösters, H., Löwe, M., Merkl, F., Rolles, S. (eds) Limit Theorems in Probability, Statistics and Number Theory. Springer Proceedings in Mathematics & Statistics, vol 42. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-36068-8_7
Print ISBN: 978-3-642-36067-1
Online ISBN: 978-3-642-36068-8