Abstract
In classical probability the law of large numbers for the multiplicative convolution follows directly from the law for the additive convolution. In free probability this is not the case. The free additive law was proved by D. Voiculescu in 1986 for probability measures with bounded support and extended to all probability measures with first moment by J.M. Lindsay and V. Pata in 1997, while the free multiplicative law was proved only recently by G. Tucci in 2010. In this paper we extend Tucci’s result to measures with unbounded support while at the same time giving a more elementary proof for the case of bounded support. In contrast to the classical multiplicative convolution case, the limit measure for the free multiplicative law of large numbers is not a Dirac measure, unless the original measure is a Dirac measure. We also show that the mean value of \(\ln x\) is additive with respect to the free multiplicative convolution while the variance of \(\ln x\) is not in general additive. Furthermore we study the two parameter family \((\mu _{\alpha,\beta })_{\alpha,\beta \geq 0}\) of measures on (0, ∞) for which the S-transform is given by \(S_{\mu _{\alpha,\beta }}(z) = {(-z)}^{\beta }{(1 + z)}^{-\alpha }\), 0 < z < 1.
Mathematics Subject Classification (2010): 46L54, 60F05.
8.1 Introduction
In classical probability the weak law of large numbers is well known (see for instance [14, Corollary 5.4.11]), both for additive and multiplicative convolution of Borel measures on \(\mathbb{R}\), respectively, [0, ∞).
Going from classical probability to free probability, one could ask if similar results exist for the additive and multiplicative free convolutions \(\boxplus \) and \(\boxtimes \) as defined by D. Voiculescu in [16] and [17] and extended to unbounded probability measures by H. Bercovici and D. Voiculescu in [4]. The law of large numbers for the free additive convolution of measures with bounded support is an immediate consequence of D. Voiculescu’s work in [16] and J. M. Lindsay and V. Pata proved it for measures with first moment in [11, Corollary 5.2].
Theorem 1 ([11, Corollary 5.2]).
Let μ be a probability measure on \(\mathbb{R}\) whose mean value α exists, and let \(\psi _{n}: \mathbb{R} \rightarrow \mathbb{R}\) be the map \(\psi _{n}(x) = \frac{1} {n}x\) . Then
$$\dot{\psi }_{n}\left ({\mu }^{\boxplus n}\right ) \rightarrow \delta _{\alpha }\quad \mbox{ for }n \rightarrow \infty,$$
where convergence is weak and δ x denotes the Dirac measure at \(x \in \mathbb{R}\) .
Here \(\dot{\phi }(\mu )\) denotes the image measure of μ under ϕ for a Borel measurable function \(\phi: \mathbb{R} \rightarrow \mathbb{R}\), respectively, [0, ∞) → [0, ∞).
In classical probability the multiplicative law follows directly from the additive law. This is not the case in free probability, where the multiplicative law requires a separate proof. It was proved by G.H. Tucci in [15, Theorem 3.2] for measures with bounded support, using results on operator algebras from [6] and [8]. In this paper we give an elementary proof of Tucci’s theorem which also shows that the theorem holds for measures with unbounded support.
Theorem 2.
Let μ be a probability measure on [0,∞) and let ϕ n : [0,∞) → [0,∞) be the map \(\phi _{n}(x) = {x}^{\frac{1} {n} }\) . Set δ = μ({0}). If we denote
$$\nu _{n} =\dot{\phi }_{n}\left ({\mu }^{\boxtimes n}\right ),$$
then ν n converges weakly to a probability measure ν on [0,∞). If μ is a Dirac measure on [0,∞) then ν = μ. Otherwise ν is the unique measure on [0,∞) characterised by \(\nu \left (\left [0, \frac{1} {S_{\mu }(t-1)}\right ]\right ) = t\) for all t ∈ (δ,1) and ν({0}) = δ. The support of the measure ν is the closure of the interval
$$(a,b),\qquad a ={ \left (\int _{0}^{\infty }{x}^{-1}\mathrm{d}\mu (x)\right )}^{-1},\quad b =\int _{ 0}^{\infty }x\,\mathrm{d}\mu (x),$$
where 0 ≤ a < b ≤ ∞.
Note that unlike the additive case, the multiplicative limit distribution is only a Dirac measure if μ is a Dirac measure. Furthermore S μ and hence (by [17, Theorem 2.6]) μ can be reconstructed from the limit measure.
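Theorem 2 can be illustrated numerically via random matrices (a model assumption of ours, not part of the theorem): the squared singular values of a product of n independent Ginibre matrices approximate \({\mu }^{\boxtimes n}\) for μ the free Poisson law, whose S-transform is known to be \(S_{\mu }(z) = 1/(1+z)\). Then \(1/S_{\mu }(t - 1) = t\), so the limit measure ν should be the uniform distribution on (0,1).

```python
import numpy as np

# Hypothetical numerical illustration of Theorem 2 (not from the paper):
# for mu = free Poisson law, S_mu(z) = 1/(1+z), so 1/S_mu(t-1) = t and
# the limit measure nu is uniform on (0, 1).
rng = np.random.default_rng(0)
d, n = 250, 24

# Product of n independent d x d Ginibre matrices with entries N(0, 1/d);
# its squared singular values model mu^{boxtimes n}.
P = np.eye(d)
for _ in range(n):
    P = P @ (rng.standard_normal((d, d)) / np.sqrt(d))

s = np.linalg.svd(P, compute_uv=False)
nth_roots = (s ** 2) ** (1.0 / n)   # a sample from (approximately) nu_n

# Empirical CDF of nu_n at t should be close to t for t in (0, 1).
for t in (0.25, 0.5, 0.75):
    print(t, float(np.mean(nth_roots <= t)))
```

The matrix size d and the number of factors n are arbitrary illustration parameters; the approximation improves as both grow.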
We start by recalling some definitions and proving some preliminary results in Sect. 8.2, which then in Sect. 8.3 are used to prove Theorem 2. In Sect. 8.4 we prove some further formulas in connection with the limit law, which we in Sect. 8.5 apply to the two parameter family \((\mu _{\alpha,\beta })_{\alpha,\beta \geq 0}\) of measures on (0, ∞) for which the S-transform is given by \(S_{\mu _{\alpha,\beta }}(z) = \frac{{(-z)}^{\beta }} {{(1+z)}^{\alpha }}\), 0 < z < 1.
8.2 Preliminaries
We start with recalling some results we will use and proving some technical tools necessary for the proof of Theorem 2. At first we recall the definition and some properties of Voiculescu’s S-transform for measures on [0, ∞) with unbounded support as defined by H. Bercovici and D. Voiculescu in [4].
Definition 1 ([4, Sect. 6]).
Let μ be a probability measure on [0, ∞) and assume that δ = μ({0}) < 1. We define \(\psi _{\mu }(u) =\int _{ 0}^{\infty } \frac{tu} {1-tu}\mathrm{d}\mu (t)\) and denote its inverse in a neighbourhood of (δ − 1, 0) by χ μ . Now we define the S-transform of μ by \(S_{\mu }(z) = \frac{z+1} {z} \chi _{\mu }(z)\) for z ∈ (δ − 1, 0).
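Definition 1 can be checked numerically. The sketch below is our own illustration (not from the paper); it takes as external input the standard fact that the free Poisson (Marchenko–Pastur) law of rate 1 has \(S_{\mu }(z) = 1/(1+z)\), computes \(\psi _{\mu }\) by quadrature, inverts it to get \(\chi _{\mu }\), and assembles \(S_{\mu }\).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# psi_mu for the free Poisson (Marchenko-Pastur) law of rate 1.
# Substituting t = 4 sin^2(theta) turns dmu = (2 pi)^{-1} sqrt((4-t)/t) dt
# into the smooth density (4/pi) cos^2(theta) dtheta on (0, pi/2).
def psi(u):
    integrand = lambda th: (4 * np.sin(th) ** 2 * u
                            / (1.0 - 4 * np.sin(th) ** 2 * u)
                            ) * (4.0 / np.pi) * np.cos(th) ** 2
    return quad(integrand, 0.0, np.pi / 2)[0]

# chi_mu is the inverse of psi_mu on (-1, 0); S_mu(z) = (z+1)/z * chi_mu(z).
def S(z):
    chi = brentq(lambda u: psi(u) - z, -1e5, -1e-9)
    return (z + 1.0) / z * chi

for z in (-0.75, -0.5, -0.25):
    print(z, S(z), 1.0 / (1.0 + z))   # the last two columns should agree
```

The bracket for `brentq` reflects that \(\psi _{\mu }\) maps (−∞, 0) onto (−1, 0), as shown in the proof of Lemma 2 below.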
Lemma 1 ([4, Proposition 6.8]).
Let μ be a probability measure on [0,∞) with δ = μ({0}) < 1. Then \(S_{\mu }\) is positive and decreasing on (δ − 1,0). Moreover, if δ > 0 we have \(S_{\mu }(z) \rightarrow \infty \) as z → δ − 1.
Lemma 2.
Let μ be a probability measure on [0,∞) with δ = μ({0}) < 1. Assume that μ is not a Dirac measure; then \(S_{\mu }^{\prime}(z) < 0\) for z ∈ (δ − 1,0). In particular \(S_{\mu }\) is strictly decreasing on (δ − 1,0).
Proof.
For u ∈ (−∞, 0),
$$\psi _{\mu }^{\prime}(u) =\int _{ 0}^{\infty } \frac{t} {{(1 - \mathit{tu})}^{2}}\mathrm{d}\mu (t) > 0.\qquad (8.1)$$
Moreover \(\lim _{u\rightarrow 0-}\psi _{\mu }(u) = 0\) and \(\lim _{u\rightarrow -\infty }\psi _{\mu }(u) =\delta -1\). Hence ψ μ is a strictly increasing homeomorphism of (−∞, 0) onto (δ − 1, 0). For u ∈ (−∞, 0), we have
Hence
where the denominator is positive and the numerator is equal to
where we have used that
Since μ is not a Dirac measure,
and thus
which shows that the right hand side of (8.2) is strictly positive. Hence
$$S_{\mu }^{\prime}(z) < 0$$
for z ∈ (δ − 1, 0), which proves the lemma. □
Remark 1.
Furthermore, by [4, Proposition 6.1] and [4, Proposition 6.3] ψ μ and χ μ are analytic in a neighbourhood of (−∞, 0), respectively, (−1, 0), hence S μ is analytic in a neighbourhood of (δ − 1, 0).
Lemma 3 ([4, Corollary 6.6]).
Let μ and ν be probability measures on [0,∞), neither of them being \(\delta _{0}\); then we have \(S_{\mu \boxtimes \nu } = S_{\mu }S_{\nu }\) .
Next we have to determine the image of S μ . Here we closely follow the argument given for measures with compact support by F. Larsen and the first author in [6, Theorem 4.4].
Lemma 4.
Let μ be a probability measure on [0,∞) which is not a Dirac measure. Then \(S_{\mu }((\delta - 1,0)) = ({b}^{-1},{a}^{-1})\), where a, b and δ are defined as in Theorem 2.
Proof.
First assume δ = 0. Observe that for u → ∞ we have
Hence
Similarly, for u → 0 we have
Hence
As χ μ is the inverse of ψ μ we have
By (8.1) and Lemma 2, \(\psi _{\mu }\) is strictly increasing and continuous and \(S_{\mu }\) is strictly decreasing and continuous, so \(S_{\mu }(\psi _{\mu }((-\infty,0))) = S_{\mu }((-1,0)) = ({b}^{-1},{a}^{-1})\).
If now δ > 0, we have by Lemma 1 that \(S_{\mu }(z) \rightarrow \infty \) for z → δ − 1, so in this case continuity gives \(S_{\mu }((\delta - 1,0)) = ({b}^{-1},\infty )\), which is as desired since a = 0 in this case. □
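To make Lemma 4 concrete, consider the free Poisson (Marchenko–Pastur) law of rate 1, whose S-transform \(S_{\mu }(z) = 1/(1+z)\) is a standard fact we take as given (it is not derived in this paper):

```latex
% mu = free Poisson law of rate 1, density (2\pi)^{-1}\sqrt{(4-x)/x} on (0,4).
\begin{aligned}
b &= \int_0^\infty x \,\mathrm{d}\mu(x) = 1, \\
a &= \Bigl(\int_0^\infty x^{-1}\,\mathrm{d}\mu(x)\Bigr)^{-1} = 0,
  \qquad\text{since the density is $\sim \pi^{-1}x^{-1/2}$ at $0$,} \\
S_\mu\bigl((\delta-1,0)\bigr) &= S_\mu\bigl((-1,0)\bigr)
  = (1,\infty) = (b^{-1},a^{-1}), \qquad \delta = \mu(\{0\}) = 0.
\end{aligned}
```

So the range of the S-transform is exactly the interval \(({b}^{-1},{a}^{-1})\) predicted by the lemma.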
8.3 Proof of the Main Result
Let μ be a probability measure on [0, ∞) and let ν be as defined in Theorem 2. If μ is a Dirac measure, then ν n = μ for all n and hence ν n → ν = μ weakly, so the theorem holds in this case. In the following we can therefore assume that μ is not a Dirac measure. We start by assuming further that μ({0}) = 0, and will deal with the case μ({0}) > 0 in Remark 2.
Lemma 5.
For all t ∈ (0,1) and all n ≥ 1 we have
$$\int _{0}^{\infty } \frac{1} {1 + \frac{1-t} {t} S_{\mu }{(t - 1)}^{n}{x}^{n}}\mathrm{d}\nu _{n}(x) = t.$$
Proof.
Let t ∈ (0, 1) and set z = t − 1. By Definition 1 we have
In the last equality we use multiplicativity of the S-transform from Lemma 3.
Now substitute t = z + 1 and afterwards y n = x and use the definition of ν n to get
□
Now, using this lemma, we can prove the following characterisation of the weak limit of ν n .
Lemma 6.
For all t ∈ (0,1) we have \(t =\lim _{n\rightarrow \infty }\nu _{n}\left (\left [0, \frac{1} {S_{\mu }(t-1)}\right ]\right )\) .
Proof.
Fix t ∈ (0, 1) and let t′ ∈ (0, t). Then
Here the first inequality holds as t′ ≤ t while \(S_{\mu }{(t^{\prime} - 1)}^{n}{x}^{n} > 0\), the second holds as \(1 + \frac{1-t} {t} S_{\mu }{(t^{\prime} - 1)}^{n}{x}^{n} \geq 0\), and the last because \(\nu _{n}\) is a probability measure.
By Lemma 2, \(S_{\mu }\) is strictly decreasing, and hence \(\frac{S_{\mu }(t^{\prime}-1)} {S_{\mu }(t-1)} > 1\). This implies
And hence
As this holds for all t′ ∈ (0, t) we have
On the other hand, if t′′ ∈ (t, 1) we get
Here the first inequality holds as t′′ > t while \(S_{\mu }{({t}^{{\prime\prime}} - 1)}^{n}{x}^{n} \geq 0\), and the second to last inequality holds as \(S_{\mu }\) is decreasing.
Again, as \(S_{\mu }\) is strictly decreasing we have \(\frac{S_{\mu }({t}^{{\prime\prime}}-1)} {S_{\mu }(t-1)} < 1\), hence
This implies
As this holds for all t′′ ∈ (t, 1) we have
Combining (8.3) and (8.4) we get
as desired. □
To prove weak convergence of \(\nu _{n}\) to ν, it remains to show that in the limit \(\nu _{n}\) puts no mass below the support of ν and full mass above it.
Lemma 7.
For all x ≤ a and y ≥ b we have ν n ([0,x]) → 0, respectively, ν n ([0,y]) → 1.
Proof.
To prove the first convergence, let t ≤ a and s ∈ (0, 1). Now we have that \(t \leq \frac{1} {S_{\mu }(s-1)}\) from Lemma 4 and hence
Here the inequality holds because \(\nu _{n}\) is a positive measure and the equality comes from Lemma 6. As this holds for all s ∈ (0, 1) we have \(\limsup _{n\rightarrow \infty }\nu _{n}([0,t]) \leq 0\) and hence \(\lim _{n\rightarrow \infty }\nu _{n}([0,t]) = 0\) by positivity of the measure.
For the second convergence we proceed in the same manner, by letting t ≥ b and s ∈ (0, 1). Now we have that \(t \geq \frac{1} {S_{\mu }(s-1)}\) from Lemma 4 and hence
Again the inequality holds because \(\nu _{n}\) is a positive measure and the equality comes from Lemma 6. As this holds for all s ∈ (0, 1) we have \(\liminf _{n\rightarrow \infty }\nu _{n}([0,t]) \geq 1\) and hence \(\lim _{n\rightarrow \infty }\nu _{n}([0,t]) = 1\) as \(\nu _{n}\) is a probability measure. □
Lemmas 6 and 7 now prove Theorem 2 without any assumption of bounded support, as weak convergence of measures is equivalent to pointwise convergence of the distribution functions at all but countably many x ∈ [0, ∞).
Remark 2.
In the case δ = μ({0}) > 0, S μ is only defined on (δ − 1, 0) and S μ (z) → ∞ when z → δ − 1. This implies that Lemma 5 only holds for t ∈ (δ, 1), with a similar proof. Similarly, Lemma 6 only holds for t ∈ (δ, 1), and in the proof we have to assume t′ ∈ (δ, t). Similarly, in the proof of Lemma 7 we have to assume s ∈ (δ, 1). Moreover, in Lemma 7 the statement, 0 ≤ x ≤ a implies ν n ([0, x]) → 0 for n → ∞, should be changed to a = 0 and ν n ({0}) = δ = ν({0}) for all \(n \in \mathbb{N}\).
Using our result we can prove the following corollary, generalizing a theorem ([8, Theorem 2.2]) by H. Schultz and the first author.
Let \((\mathcal{M},\tau )\) be a finite von Neumann algebra \(\mathcal{M}\) with a normal faithful tracial state τ. In [7, Proposition 3.9] the definition of Brown’s spectral distribution measure μ T was extended to all operators \(T \in {\mathcal{M}}^{\varDelta }\), where \({\mathcal{M}}^{\varDelta }\) is the set of unbounded operators affiliated with \(\mathcal{M}\) for which τ(ln+( | T | )) < ∞.
Corollary 1.
If T is an R-diagonal operator in \({\mathcal{M}}^{\varDelta }\) then \(\dot{\phi }_{n}(\mu _{{({T}^{{\ast}})}^{n}{T}^{n}}) \rightarrow \dot{\psi } (\mu _{T})\) weakly, where \(\psi (z) = \vert z{\vert }^{2}\), \(z \in \mathbb{C}\) , and \(\phi _{n}(x) = {x}^{1/n}\) for x ≥ 0.
Proof.
By [7, Proposition 3.9] we have \(\mu _{{T}^{{\ast}}T}^{\boxtimes n} =\mu _{{({T}^{{\ast}})}^{n}{T}^{n}}\) and by Theorem 2 we have \(\dot{\phi }(\mu _{{T}^{{\ast}}T}^{\boxtimes n}) \rightarrow \nu\) weakly. On the other hand observe that \(\nu =\dot{\psi } (\mu _{T})\) by [7, Theorem 4.17] which gives the result. □
Remark 3.
In [8, Theorem 1.5] it was shown that \(\dot{\phi }_{n}(\mu _{{({T}^{{\ast}})}^{n}{T}^{n}}) \rightarrow \dot{\psi } (\mu _{T})\) weakly for all bounded operators \(T \in \mathcal{M}\). It would be interesting to know, whether this limit law can be extended to all \(T \in {\mathcal{M}}^{\varDelta }\).
8.4 Further Formulas for the S-Transform
In this section we present some further formulas for the S-transform of measures on [0, ∞), obtained by similar means as in the preceding sections and use those to investigate the difference between the laws of large numbers for classical and free probability. From now on we assume μ({0}) = 0. Therefore μ can be considered as a probability measure on (0, ∞).
We start with a technical lemma which will be useful later.
Lemma 8.
We have the following identities
Proof.
For the first identity we start with the substitution \(x = \frac{t} {1-t}\) which gives us \(t = \frac{x} {1+x}\) and \(\mathrm{d}t = \frac{\mathrm{d}x} {{(1+x)}^{2}}\) and hence
where B(⋅ , ⋅ ) denotes the Beta function. The second and the third identity follow from the substitution t↦exp(−x), respectively, 1 − t↦exp(−x).
Finally, the last identity follows by observing
which gives the desired result. □
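Since the identities of Lemma 8 are used repeatedly below (for instance \(\int _{0}^{1}{\ln }^{2}\left (\frac{1-t} {t} \right )\mathrm{d}t = \frac{{\pi }^{2}} {3}\) in the proof of Lemma 9), a quick numerical cross-check may be helpful; this snippet is our own illustration, not part of the paper.

```python
from math import log, pi

from scipy.integrate import quad

# Check int_0^1 ln((1-t)/t)^2 dt = pi^2/3, used in the proof of Lemma 9.
val, _ = quad(lambda t: log((1.0 - t) / t) ** 2, 0.0, 1.0)
print(val, pi ** 2 / 3.0)

# ln((1-t)/t) is antisymmetric about t = 1/2, so its integral vanishes.
val0, _ = quad(lambda t: log((1.0 - t) / t), 0.0, 1.0)
print(val0)
```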
Now we prove two propositions calculating the expectations of \(\ln x\) and \({\ln }^{2}x\), both for μ and ν, expressed in terms of the S-transform of μ.
Proposition 1.
Let μ be a probability measure on (0,∞) and let ν be as defined in Theorem 2 . Then \(\int _{0}^{\infty }\left \vert \ln x\right \vert \mathrm{d}\mu (x) < \infty \) if and only if \(\int _{0}^{1}\left \vert \ln S_{\mu }(t - 1)\right \vert \mathrm{d}t < \infty \) and if and only if \(\int _{0}^{\infty }\left \vert \ln x\right \vert \mathrm{d}\nu (x) < \infty \) . If these integrals are finite, then
Proof.
For x > 0, put \({\ln }^{+}x =\max (\ln x,0)\) and \({\ln }^{-}x =\max (-\ln x,0)\). Then one easily checks that
and by replacing x by \(\frac{1} {x}\) it follows that
Hence
and
We prove next that
and
Recall from (8.1) that
$$\psi _{\mu }^{\prime}(-u) =\int _{ 0}^{\infty } \frac{x} {{(1 + \mathit{ux})}^{2}}\mathrm{d}\mu (x).$$
Hence by Tonelli’s theorem
and similarly,
By partial integration, we have
and similarly,
which proves (8.5) and (8.6). Therefore
and substituting x = ψ μ (−u) + 1 we get
Since \(\int _{0}^{1}\left \vert \ln \left ( \frac{t} {1-t}\right )\right \vert \mathrm{d}t < \infty \) it follows that
If μ is not a Dirac measure, the substitution x = S μ (t − 1)−1, 0 < t < 1 gives t = ν((0, x]) for a < x < b, where as before \(a ={ \left (\int _{0}^{\infty }{x}^{-1}\mathrm{d}\mu (x)\right )}^{-1}\) and \(b =\int _{ 0}^{\infty }x\mathrm{d}\mu (x)\). The measure ν is concentrated on the interval (a, b). Hence
This proves the first statement in Proposition 1. If all three integrals in that statement are finite, we get
By the substitution t = ψ μ (−u) + 1 we get
Hence \(\int _{0}^{\infty }\ln x\mathrm{d}\mu (x) = -\int _{0}^{1}\ln S_{\mu }(t - 1)\mathrm{d}t\). Moreover, by the substitution x = S μ (t − 1)−1, 0 < t < 1 we get
Finally, if μ = δ x , x ∈ (0, ∞), this identity holds trivially, because ν = δ x and \(S_{\nu }(z) = \frac{1} {x},0 < z < 1\). □
Corollary 2.
Let μ 1 and μ 2 be probability measures on (0,∞). If \(\mathbb{E}_{\mu _{1}}(\ln x)\) and \(\mathbb{E}_{\mu _{2}}(\ln x)\) exist then \(\mathbb{E}_{\mu _{1}\boxtimes \mu _{2}}(\ln x)\) also exists and
where \(\mathbb{E}_{\mu }(f) =\int _{ 0}^{\infty }f(x)\mathrm{d}\mu (x)\) .
Proof.
The statement follows directly from Proposition 1 and multiplicativity of the S-transform. □
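Corollary 2 has an exact finite-dimensional analogue worth noting: for the empirical squared-singular-value distribution of a matrix T, the mean of \(\ln x\) is \(\frac{1} {d}\ln \det ({T}^{{\ast}}T)\), and multiplicativity of the determinant makes it exactly additive under matrix products. A small numerical illustration (ours, not the paper’s):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 100
A = rng.standard_normal((d, d)) / np.sqrt(d)
B = rng.standard_normal((d, d)) / np.sqrt(d)

def mean_log(M):
    # Mean of ln x under the empirical eigenvalue distribution of M*M,
    # i.e. (1/d) ln det(M*M) = (2/d) ln |det M|.
    return 2.0 * np.linalg.slogdet(M)[1] / d

# Exact additivity: the matrix shadow of
# E_{mu1 boxtimes mu2}(ln x) = E_{mu1}(ln x) + E_{mu2}(ln x).
print(mean_log(A @ B), mean_log(A) + mean_log(B))
```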
For further use, we define the map ρ for a probability measure μ on (0, ∞) by
Note that ρ(μ) is well-defined and non-negative for all probability measures on (0, ∞) because
where the first term on the right hand side is non-negative for all t ∈ (0, 1) and the second term is integrable with integral 0.
Lemma 9.
Let μ be a probability measure on (0,∞), then
Furthermore, ρ(μ) = 0 if and only if μ is a Dirac measure. Moreover, equality holds in the right inequality if and only if \(S_{\mu }(z) ={ \left ( \frac{z} {1+z}\right )}^{\gamma }\) for some γ > 0 and in this case \(\rho (\mu ) =\gamma \frac{{\pi }^{2}} {3}\) . Additionally, if μ 1 ,μ 2 are probability measures on (0,∞) we have \(\rho (\mu _{1} \boxtimes \mu _{2}) =\rho (\mu _{1}) +\rho (\mu _{2})\) .
Proof.
We already have observed ρ ≥ 0. For the second inequality observe that
by the Cauchy-Schwarz-inequality, where the first term equals \(\frac{{\pi }^{2}} {3}\) by Lemma 8.
If μ = δ a for some a > 0 we have \(S_{\mu }(z) = \frac{1} {a}\), hence \(\ln S_{\mu }(t - 1)\) is constant, so the antisymmetry of \(\ln (\frac{1-t} {t} )\) about \(t = \frac{1} {2}\) gives ρ(μ) = 0. On the other hand, if ρ(μ) = 0, the first term in (8.7) has to integrate to 0, but by the antisymmetry of \(\ln \left (\frac{1-t} {t} \right )\) and the fact that \(S_{\mu }\) is decreasing, this implies that \(S_{\mu }\) must be constant, hence μ is a Dirac measure.
By the Cauchy–Schwarz inequality, equality in the second inequality happens precisely if \(\ln S_{\mu }(t - 1) =\gamma \ln (\frac{1-t} {t} )\) for some γ > 0, which is the case if and only if \(S_{\mu }(t - 1) ={ \left (\frac{1-t} {t} \right )}^{\gamma }\), and in this case \(\rho (\mu ) =\gamma \frac{{\pi }^{2}} {3}\) by Lemma 8.
For the last formula we use multiplicativity of the S-transform to get
□
Proposition 2.
Let μ be a probability measure on (0,∞), and let ν be defined as in Theorem 2 . Then
as equalities of numbers in [0,∞], where \(\mathbb{V}_{\sigma }(\ln x)\) denotes the variance of ln x with respect to a probability measure σ on (0,∞). Moreover,
Proof.
We first prove the following identity
Since \(\psi _{\mu }^{\prime}(-u) =\int _{ 0}^{\infty } \frac{x} {{(1+\mathit{ux})}^{2}} \mathrm{d}\mu (x)\), we get by Tonelli’s theorem that
Note next that
where \(c_{0} =\int _{ 0}^{\infty } \frac{{\ln }^{2}v} {{(1+v)}^{2}} \mathrm{d}v\), \(c_{1} = -2\int _{0}^{\infty } \frac{\ln v} {{(1+v)}^{2}} \mathrm{d}v\), and \(c_{2} =\int _{ 0}^{\infty } \frac{1} {{(1+v)}^{2}} \mathrm{d}v = 1\). Moreover, by the substitution \(v = \frac{1} {w}\) one gets c 1 = −c 1 and hence c 1 = 0. Finally, by the substitution \(v = \frac{t} {1-t},0 < t < 1\) and Lemma 8,
Hence
which proves (8.8). Next by the substitution t = ψ μ (−u) + 1, we have
Since \(t\mapsto \ln \left (\frac{1-t} {t} \right )\) is square integrable on (0, 1) the right hand side of (8.9) is finite if and only if
Hence by (8.8) and (8.9) this condition is equivalent to
so to prove the first equation in Proposition 2 it suffices to consider the case where the two above integrals are finite. In that case ρ(μ) < ∞ by Lemma 9. Thus by Lemma 8 and the definition of ρ(μ),
The second equality in Proposition 2
follows from the substitution x = S μ (t − 1)−1 in case μ is not a Dirac measure, and it is trivially true for Dirac measures. By the first two equalities in Proposition 2, we have
If both sides of this equality are finite, then by Proposition 1,
where both integrals are well-defined. Combined with (8.10) we get
and if \(\int _{0}^{\infty }{\ln }^{2}x\mathrm{d}\mu (x) = +\infty \), both sides of (8.11) must be infinite by (8.10).
The S-transform scales with the distribution: the image measure \(\mu _{c}\) of μ under x↦cx for c > 0 satisfies \(S_{\mu _{c}}(z) = {c}^{-1}S_{\mu }(z)\). Hence for ρ we have
by anti-symmetry of the second term around \(t = \frac{1} {2}\). Using this for \(c =\exp \left (\mathbb{E}_{\nu }(\ln x)\right )\), we get
□
Now we can use the preceding lemmas to investigate the different behavior of the multiplicative law of large numbers in classical and free probability. Note that in classical probability, for a family \((X_{i})_{i=1}^{\infty }\) of independent identically distributed random variables we have the identity \(\mathbb{V}(\ln (\prod _{i=1}^{n}X_{i})) = n\mathbb{V}(\ln X_{1})\). In free probability, by Propositions 1 and 2 we have instead
Hence \(\mathbb{V}_{{\mu }^{\boxtimes n}}(\ln t) = n\mathbb{V}_{\mu }(\ln t) + n(n - 1)\mathbb{V}_{\nu }(\ln t) > n\mathbb{V}_{\mu }(\ln t)\) for n ≥ 2 if μ is not a Dirac measure and \(\mathbb{V}_{\nu }(\ln t) < \infty \), which shows that the variance of lnt is not in general additive.
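The displayed identity can be traced as follows; this sketch assumes the decomposition \(\mathbb{V}_{\mu }(\ln x) = \mathbb{V}_{\nu }(\ln x) +\rho (\mu )\) from Proposition 2 together with the additivity of ρ from Lemma 9. Writing \({\nu }^{(n)}\) for the limit measure associated with \({\mu }^{\boxtimes n}\), Lemma 3 gives \(\ln S_{{\mu }^{\boxtimes n}}(t - 1) = n\ln S_{\mu }(t - 1)\), hence \(\mathbb{V}_{{\nu }^{(n)}}(\ln x) = {n}^{2}\mathbb{V}_{\nu }(\ln x)\), and therefore

```latex
\begin{aligned}
\mathbb{V}_{\mu^{\boxtimes n}}(\ln x)
  &= \mathbb{V}_{\nu^{(n)}}(\ln x) + \rho(\mu^{\boxtimes n})
   = n^{2}\,\mathbb{V}_{\nu}(\ln x) + n\,\rho(\mu) \\
  &= n^{2}\,\mathbb{V}_{\nu}(\ln x)
     + n\bigl(\mathbb{V}_{\mu}(\ln x) - \mathbb{V}_{\nu}(\ln x)\bigr)
   = n\,\mathbb{V}_{\mu}(\ln x) + n(n-1)\,\mathbb{V}_{\nu}(\ln x).
\end{aligned}
```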
Lemma 10.
Let μ be a probability measure on (0,∞) and let ν be defined as in Theorem 2. Then
for − 1 < γ < 1 and
for \(\gamma \in \mathbb{R}\) as equalities of numbers in [0,∞].
Proof.
By Tonelli’s theorem followed by the substitution u = yx we get
where \(B(s,t) =\int _{ 0}^{\infty } \frac{{u}^{s-1}} {{(1+u)}^{s+t}}\mathrm{d}u\) is the Beta function. But \(B(1-\gamma,1+\gamma ) = \frac{\pi \gamma } {\sin (\pi \gamma )}\) by well-known properties of B. Substitute now x = −χ μ (−z) and z = 1 − t to get
which gives the first identity. The second identity follows from the substitution x = S μ (t − 1)−1 and the properties of ν from Theorem 2. □
8.5 Examples
In this section we investigate a two parameter family of distributions for which explicit calculations can be made.
Proposition 3.
Let α,β ≥ 0. There exists a probability measure μ α,β on (0,∞) whose S-transform is given by
Furthermore, these measures form a two-parameter semigroup under \(\boxtimes \): \(\mu _{\alpha,\beta } \boxtimes \mu _{\alpha ^{\prime},\beta ^{\prime}} =\mu _{\alpha +\alpha ^{\prime},\beta +\beta ^{\prime}}\) for all α,α′,β,β′ ≥ 0.
Proof.
Note first that α = β = 0 gives \(S_{\mu _{0,0}} = 1\), which by uniqueness of the S-transform results in μ 0, 0 = δ 1, hence we can in the following assume (α, β) ≠ (0, 0).
Define the function \(v_{\alpha,\beta }: \mathbb{C}\setminus [0,1] \rightarrow \mathbb{C}\) by
for all \(z \in \mathbb{C}\setminus [0,1]\).
In the following, for \(z \in \mathbb{C}\) we denote by \(\arg z \in [-\pi,\pi ]\) its argument. Assume z = x + iy with y > 0; then
where arg(−x −iy) < 0, which implies that \(\ln (-{\mathbb{C}}^{+}) \subseteq {\mathbb{C}}^{-}\). Similarly, if z = x + iy with y > 0, then
where \(\arg ((x + 1) + \mathrm{i}y) > 0\), which implies that \(-\ln (1 + {\mathbb{C}}^{+}) \subseteq {\mathbb{C}}^{-}\) and hence \(v_{\alpha,\beta }({\mathbb{C}}^{+}) \subseteq {\mathbb{C}}^{-}\). Furthermore, we observe that for all \(z \in \mathbb{C}\), \(v_{\alpha,\beta }(\bar{z}) = \overline{v_{\alpha,\beta }(z)}\). By [4, Theorem 6.13 (ii)] these results imply that there exists a unique \(\boxtimes \)-infinitely divisible measure μ α, β with the S-transform
The semigroup property follows from multiplicativity of the S-transform. □
The existence of μ α, 0 was previously proved by T. Banica, S.T. Belinschi, M. Capitaine and B. Collins in [2] as a special case of free Bessel laws. The case μ α, α is known as a Boolean stable law, cf. O. Arizmendi and T. Hasebe [1].
Furthermore, there is a clear relationship between the measures μ α, β and μ β, α .
Lemma 11.
Let α,β ≥ 0, (α,β) ≠ (0,0) and let ζ: (0,∞) → (0,∞) be the map ζ(t) = t −1 . Then we have \(\mu _{\beta,\alpha } =\dot{\zeta } (\mu _{\alpha,\beta })\) , where \(\dot{\zeta }\) denotes the image measure under the map ζ.
Proof.
Put \(\sigma =\dot{\zeta } (\mu _{\alpha,\beta })\). Then by the proof of [7, Proposition 3.13],
for 0 < z < 1. Hence σ = μ β, α . □
Lemma 12.
Let (α,β) ≠ (0,0). Denote the limit measure corresponding to μ α,β by ν α,β . Then ν α,β is uniquely determined by the formula
for 0 < t < 1, where F α,β (x) = ν α,β ((0,x]) is the distribution function of ν α,β .
Proof.
The lemma follows directly from Lemma 3 and Theorem 2. □
For β = 0 and α > 0,
Similarly, for α = 0 and β > 0
Hence ν 0, β is the Pareto distribution with scale parameter 1 and shape parameter \(\frac{1} {\beta }\).
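The Pareto identification can be verified in two lines from Lemma 12 and the S-transform of Proposition 3 with α = 0:

```latex
S_{\mu_{0,\beta}}(t-1) = \bigl(-(t-1)\bigr)^{\beta} = (1-t)^{\beta},
\qquad\text{so Lemma 12 gives}\qquad
F_{0,\beta}\bigl((1-t)^{-\beta}\bigr) = t.
% Writing x = (1-t)^{-\beta}, i.e. t = 1 - x^{-1/\beta}:
F_{0,\beta}(x) = 1 - x^{-1/\beta}, \qquad x \geq 1,
```

which is precisely the Pareto distribution function with scale parameter 1 and shape parameter \(\frac{1} {\beta }\).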
Moreover, if α = β > 0 we get F α, α (x) = (1 + x −1∕α)−1 for x ∈ (0, ∞), which we recognize as the image measure of the Burr distribution with parameters (1, α −1) (or equivalently the Fisk or log-logistic distribution (cf. [9, p. 54]) with scale parameter 1 and shape parameter α −1) under the map x↦x −1.
On the other hand, we can make some observations about the distribution μ α, β , too. For the cases (α, β) = (1, 0) and (α, β) = (0, 1) we can recognize the measures μ 1, 0 and μ 0, 1 from their S-transform, as \(S_{\mu _{1,0}}(z) = {(1 + z)}^{-1}\) is the S-transform of the free Poisson distribution with shape parameter 1 (cf. [18, p. 34]), which is given by
while \(S_{\mu _{0,1}}(z) = -z\) according to Lemma 11 is the S-transform of the image of the above free Poisson distribution under the map t↦t −1,
which is the same as the free stable distribution with parameters α = 1∕2 and ρ = 1 as described by H. Bercovici, V. Pata and P. Biane in [3, Appendix A1]. More generally, μ 0, β is the same as the free stable distribution v α, ρ with \(\alpha = \frac{1} {\beta +1}\) and ρ = 1, because by [3, Appendix A4] v α, 1 is characterized by \(\varSigma _{v_{\alpha,1}}(y) ={ \left ( \frac{-y} {1-y}\right )}^{\frac{1} {\alpha } -1}\), y ∈ (−∞, 0), and it is easy to check that
From the above observations, we can now describe a construction of the measures μ m,n .
Proposition 4.
Let m,n be nonnegative integers. Then the measure μ m,n is given by
Proof.
By multiplicativity of the S-transform we have that
which by uniqueness of the S-transform gives the desired result. □
Proposition 5.
For all α,β ≥ 0
Proof.
These formulas follow easily from Propositions 1 and 2 and Lemma 8. □
Furthermore, we can also calculate explicitly all fractional moments of μ α, β by the following theorem.
Theorem 3.
Let α,β > 0 and \(\gamma \in \mathbb{R}\). Then we have
Proof.
First let − 1 < γ < 1. Then (8.12)–(8.14) follow from Lemma 10 together with the formula \(\varGamma (1+\gamma )\varGamma (1-\gamma ) = \frac{\pi \gamma } {\sin (\pi \gamma )}\). Since \(S_{\mu _{\alpha,0}}(z) = \frac{1} {{(z+1)}^{\alpha }}\) is analytic in a neighborhood of 0, μ α, 0 has finite moments of all orders. Therefore the functions
are both analytic in the half-plane ℜ s > 0 and they coincide for s ∈ (0, 1). Hence they are equal for all \(s \in \mathbb{C}\) with ℜ s > 0, which proves (8.13). By Lemma 11, (8.14) follows from (8.13). □
Remark 4.
By Theorem 3 (8.12) we have
1. If β > 0, then \(\int _{0}^{\infty }x\mathrm{d}\mu _{\alpha,\beta }(x) = \infty \). Hence \(\sup (\text{supp}(\mu _{\alpha,\beta })) = \infty \). Similarly, if α > 0 then \(\int _{0}^{\infty }{x}^{-1}\mathrm{d}\mu _{\alpha,\beta }(x) = \infty \). Hence \(\inf (\text{supp}(\mu _{\alpha,\beta })) = 0\).
2. If β = 0, then by Stirling’s formula
$$\displaystyle{ \sup (\text{supp}(\mu _{\alpha,0})) =\lim _{n\rightarrow \infty }{\left (\int _{0}^{\infty }{t}^{n}\mathrm{d}\mu _{\alpha,0}(t)\right )}^{ \frac{1} {n} } = \frac{{(\alpha +1)}^{\alpha +1}} {{\alpha }^{\alpha }}. }$$
Hence by Lemma 11, we have for α = 0
$$\displaystyle{ \inf (\text{supp}(\mu _{0,\beta })) = \frac{{\beta }^{\beta }} {{(\beta +1)}^{\beta +1}}. }$$
Note that \(\sup (\text{supp}(\mu _{n,0})) = \frac{{(n+1)}^{n+1}} {{n}^{n}}\), \(n \in \mathbb{N}\), was already proved by F. Larsen in [10, Proposition 4.1], and it was proved by T. Banica, S.T. Belinschi, M. Capitaine and B. Collins in [2] that \(\text{supp}(\mu _{\alpha,0}) = \left [0, \frac{{(\alpha +1)}^{\alpha +1}} {{\alpha }^{\alpha }} \right ]\). Note that this also follows from our Corollary 3.
If α = β it is also possible to calculate explicitly the density of μ α, α . To do this we require an additional lemma.
Lemma 13.
For − 1 < γ < 1 and −π < θ < π we have
Proof.
Note first that by the substitution \(t = {e}^{x}\) we have
The function
is meromorphic with simple poles at \(x = \pm \mathrm{i}(\pi -\theta ) + 2\pi p\), \(p \in \mathbb{Z}\). Now apply the residue theorem to this function on the boundary of
and let R → ∞. The result follows. □
The density of μ α, α was computed by P. Biane [5, Sect. 5.4]. For completeness we include a different proof based on Theorem 3 and Lemma 13.
Theorem 4 ([5]).
Let α > 0 then μ α,α has the density \(f_{\alpha,\alpha }(t)\mathrm{d}t\) , where
for t ∈ (0,∞). In particular μ 1,1 has the density \({(\pi \sqrt{t}(1 + t))}^{-1}\mathrm{d}t\) and μ 2,2 has the density
Proof.
To prove this note that for \(\vert \gamma \vert < \frac{1} {1+\alpha }\)
using the substitution \(y = {x}^{ \frac{1} {\alpha +1} }\). Now by Lemma 13 and Theorem 3 (8.12) we have
This implies by unique analytic continuation that the same formula holds for all \(\gamma \in \mathbb{C}\) with \(\vert \mathfrak{R}\gamma \vert < \frac{1} {\alpha +1}\). In particular
for all \(s \in \mathbb{R}\), which shows that the image measures under x↦lnx of f α, α (x)dx and μ α, α have the same characteristic function. Hence \(\mu _{\alpha,\alpha } = f_{\alpha,\alpha }(x)\mathrm{d}x\). □
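As a sanity check on the density of μ 1,1 (our own numerics, not part of the paper): it integrates to 1, and it is invariant under t↦1/t, consistent with Lemma 11, which gives \(\dot{\zeta }(\mu _{1,1}) =\mu _{1,1}\).

```python
import numpy as np
from scipy.integrate import quad

# Density of mu_{1,1} from Theorem 4.
f = lambda t: 1.0 / (np.pi * np.sqrt(t) * (1.0 + t))

# Normalization: with t = u^2 this becomes int_0^inf 2/(pi (1+u^2)) du = 1.
total, _ = quad(f, 0.0, np.inf)
print(total)

# Invariance under t -> 1/t (Lemma 11 with alpha = beta): the mass of
# [0, 1] must equal the mass of [1, inf), i.e. both are 1/2.
half, _ = quad(f, 0.0, 1.0)
print(half)
```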
Proposition 6.
For all α,β ≥ 0, (α,β) ≠ (0,0), the measure μ α,β has a continuous density f α,β (x), (x > 0), with respect to the Lebesgue measure on \(\mathbb{R}\) and
Proof.
By the method of proof of Theorem 4, the integral
can be obtained by replacing γ by \(\mathrm{i}s\) in the formulas (8.12)–(8.14). Moreover,
where σ α, β is the image measure of μ α, β by the map x↦logx, (x > 0). Hence by standard Fourier analysis, we know that if \(h_{\alpha,\beta } \in {L}^{1}(\mathbb{R})\) then σ α, β has a density \(g_{\alpha,\beta } \in C_{0}(\mathbb{R})\) with respect to the Lebesgue measure on \(\mathbb{R}\) and hence μ α, β has density \(f_{\alpha,\beta }(x) = \frac{1} {x}g_{\alpha,\beta }(\log x)\) for x > 0, which satisfies the condition (8.15). To prove that \(h_{\alpha,\beta } \in {L}^{1}(\mathbb{R})\) for all α, β ≥ 0, (α, β) ≠ (0, 0), we observe first that
and hence by the functional equation of Γ
In particular, we have
Applying these formulas to (8.12)–(8.14) with γ replaced by \(\mathrm{i}s\), we get
for all choices of α, β ≥ 0, (α, β) ≠ (0, 0). Thus by the continuity of h α, β it follows that \(h_{\alpha,\beta } \in {L}^{1}(\mathbb{R})\), which proves the proposition. □
Note that by Remark 4 it follows that f α, 0 (x) can only be non-zero if \(x \in \left (0, \frac{{(\alpha +1)}^{\alpha +1}} {{\alpha }^{\alpha }} \right )\) and f 0, β (x) can only be non-zero if \(x \in \left ( \frac{{\beta }^{\beta }} {{(\beta +1)}^{\beta +1}},\infty \right )\). Since we have seen that μ 0, β coincides with the stable distribution v α, ρ with \(\alpha = \frac{1} {\beta +1}\) and ρ = 1, we have from [3, Appendix 4] that
Theorem 5 ([3]).
The map
is a bijection of the interval \(\left (0, \frac{\pi } {\beta +1}\right )\) onto \(\left ( \frac{{\beta }^{\beta }} {{(\beta +1)}^{\beta +1}},\infty \right )\) and
Proof.
We know that \(\mu _{0,\beta } = v_{ \frac{1} {\beta +1},1}\), the stable distribution with parameters \(\alpha = \frac{1} {\beta +1}\) and ρ = 1. Moreover, we have from [3, Proposition A1.4], that v α, 1 has density ψ α, 1 on the interval \(\left (\alpha {(1-\alpha )}^{1/\alpha -1},\infty \right )\) given by
where θ ∈ (0, π) is the only solution to the equation
It is now easy to check that \(f_{0,\beta }(x) =\psi _{ \frac{1} {\beta +1},1}(x)\) has the form (8.16) by using the substitution \(\phi = \frac{\theta } {\beta +1}\). □
Corollary 3.
The map
is a bijection of the interval \(\left (0, \frac{\pi } {\alpha +1}\right )\) onto \(\left (0, \frac{{(\alpha +1)}^{\alpha +1}} {{\alpha }^{\alpha }} \right )\) and
Proof.
Since μ α, 0 is the image measure of μ 0, α by the map \(t\mapsto \frac{1} {t}\), (t > 0), we have
The corollary now follows from Theorem 5 by elementary calculations. □
We next use Biane’s method to compute the density f α, β for all α, β > 0.
Theorem 6.
Let α,β > 0. Then for each x > 0 there are unique real numbers ϕ 1 ,ϕ 2 > 0 for which
Moreover
Proof.
As μ α, β has the S-transform \(S_{\mu _{\alpha,\beta }}(z) = \frac{{(-z)}^{\beta }} {{(1+z)}^{\alpha }}\), we observe by Definition 1 that
for z in some complex neighborhood of (−1, 0). Now it is known that
for every probability measure on (0, ∞). Hence
for z in a complex neighborhood of (−1, 0).
Let H denote the upper half plane in \(\mathbb{C}\):
For z ∈ H, put
Basic trigonometry applied to the triangle with vertices − 1, 0 and z shows that ϕ 1 +ϕ 2 < π and
Hence
from which
It follows that Φ: z↦(ϕ 1(z), ϕ 2(z)) is a diffeomorphism of H onto the triangle \(T =\{ (\phi _{1},\phi _{2}) \in {\mathbb{R}}^{2}:\phi _{1},\phi _{2} > 0,\phi _{1} +\phi _{2} <\pi \}\) with inverse
Put \(H_{\alpha,\beta } =\{ z \in H: (\alpha +1)\phi _{1}(z) + (\beta +1)\phi _{2}(z) <\pi \}\). Then \(H_{\alpha,\beta } {=\varPhi }^{-1}\left (T_{\alpha,\beta }\right )\) where \(T_{\alpha,\beta } =\{ (\phi _{1},\phi _{2}) \in T: (\alpha +1)\phi _{1} + (\beta +1)\phi _{2} <\pi \}.\)
In particular H α, β is an open connected subset of H. Put
Then
so for \(z \in H_{\alpha,\beta }\), \(\mathfrak{I}F(z) < 0\). Therefore \(G_{\mu _{\alpha,\beta }}(F(z))\) is a well-defined analytic function on \(H_{\alpha,\beta }\), and since (−1, 0) is contained in the closure of \(H_{\alpha,\beta }\), it follows from (8.20) that
for z in some open subset of H α, β and thus by analyticity it holds for all z ∈ H α, β .
Let x > 0 and assume that ϕ 1, ϕ 2 > 0 satisfy (8.17) and (8.18). Put
Then by (8.21)
Since μ α, β has a continuous density f α, β on (0, ∞) by Proposition 6, the inverse Stieltjes transform gives
For 0 < t < 1, put z t = Φ −1(t ϕ 1, t ϕ 2). Then
Thus \(\mathfrak{I}F(z_{t}) < 0\). Moreover, \(z_{t} \to z\) and \(F(z_{t}) \to F(z) = x\) as t → 1−. Hence by (8.22),
which proves (8.19). To complete the proof of Theorem 6, it only remains to prove the existence and uniqueness of \(\phi _{1},\phi _{2} > 0\). Assume that \(\phi _{1},\phi _{2}\) satisfy (8.17). Then
for a unique θ ∈ (0, π). Moreover,
Hence, expressing \(u = \frac{{\sin }^{\alpha +1}\phi _{ 2}} {{\sin }^{\beta +1}\phi _{1}} {\sin }^{\beta -\alpha }(\phi _{ 1} +\phi _{2})\) as a function u(θ) of θ, we get
where
For \(\alpha \neq \beta \), \(A(\phi _{1},\phi _{2}) \geq {(\alpha -\beta )}^{2}{\sin }^{2}\phi _{1}{\sin }^{2}\phi _{2} > 0\), and for \(\alpha =\beta \), \(A(\phi _{1},\phi _{2}) = {(\alpha +1)}^{2}\sin (\phi _{1} +\phi _{2}) > 0\). Hence u(θ) is a differentiable, strictly increasing function of θ, and it is easy to check that
Hence u(θ) is a bijection of (0, π) onto (0, ∞), which completes the proof of Theorem 6. □
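The displayed equations (8.17)–(8.19) are missing from this version of the text, but the proof pins down their content: the boundary of \(H_{\alpha,\beta }\) gives the constraint (α+1)ϕ₁ + (β+1)ϕ₂ = π, equation (8.18) identifies x with the quantity u appearing in the proof, and Stieltjes inversion yields \(f_{\alpha,\beta }(x) = \frac{\sin \phi _{1}\sin \phi _{2}}{\pi x\sin (\phi _{1}+\phi _{2})}\). Taking this reconstruction as an assumption, the sketch below evaluates \(f_{\alpha,\beta }\) numerically, using the strict monotonicity of u(θ) established in the proof (with ϕ₁ = (π−θ)∕(α+1), ϕ₂ = θ∕(β+1)) to solve for (ϕ₁, ϕ₂) by bisection.

```python
import math

def density(alpha, beta, x, tol=1e-12):
    """Evaluate f_{alpha,beta}(x) via the parametrization reconstructed
    from the proof of Theorem 6 (the equations (8.17)-(8.19) are assumed
    to read as in the lead-in above):
        (alpha+1)*phi1 + (beta+1)*phi2 = pi,
        x = sin(phi2)**(alpha+1) * sin(phi1+phi2)**(beta-alpha) / sin(phi1)**(beta+1),
        f = sin(phi1)*sin(phi2) / (pi * x * sin(phi1+phi2)).
    x must lie in the support of mu_{alpha,beta}."""
    def phis(theta):
        return (math.pi - theta) / (alpha + 1), theta / (beta + 1)

    def u(theta):  # strictly increasing on (0, pi), as shown in the proof
        p1, p2 = phis(theta)
        return (math.sin(p2) ** (alpha + 1)
                * math.sin(p1 + p2) ** (beta - alpha)
                / math.sin(p1) ** (beta + 1))

    lo, hi = tol, math.pi - tol
    while hi - lo > tol:           # bisection: solve u(theta) = x
        mid = (lo + hi) / 2
        if u(mid) < x:
            lo = mid
        else:
            hi = mid
    p1, p2 = phis((lo + hi) / 2)
    return math.sin(p1) * math.sin(p2) / (math.pi * x * math.sin(p1 + p2))

# Cross-check: mu_{1,0} is the free Poisson (Marchenko-Pastur) law with
# density sqrt(x*(4-x))/(2*pi*x) on (0, 4).
mp = lambda x: math.sqrt(x * (4 - x)) / (2 * math.pi * x)
print(density(1, 0, 1.0), mp(1.0))  # the two values agree
```

As a further consistency check, the same parametrization degenerates correctly at the boundary of the parameter range: `density(0, 1, x)` reproduces the density of the image of \(\mu _{1,0}\) under \(t\mapsto \frac{1}{t}\), namely \(\frac{\sqrt{4x-1}}{2\pi {x}^{2}}\) on (1∕4, ∞).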
Remark 5.
It is much more complicated to express the densities f α, β (x) directly as functions of x. This has been done for β = 0, \(\alpha \in \mathbb{N}\) by K. Penson and K. Życzkowski in [13] and extended to the case \(\alpha \in {\mathbb{Q}}^{+}\) by W. Młotkowski, K. Penson and K. Życzkowski in [12, Theorem 3.1].
References
Arizmendi, O., Hasebe, T.: Classical and free infinite divisibility for Boolean stable laws (2012), arXiv:1205.1575
Banica, T., Belinschi, S.T., Capitaine, M., Collins, B.: Free Bessel laws. Canad. J. Math. 63(1), 3–37 (2011). DOI 10.4153/CJM-2010-060-6. URL http://dx.doi.org/10.4153/CJM-2010-060-6
Bercovici, H., Pata, V.: Stable laws and domains of attraction in free probability theory. Ann. of Math. (2) 149(3), 1023–1060 (1999). DOI 10.2307/121080. URL http://dx.doi.org/10.2307/121080. With an appendix by Philippe Biane
Bercovici, H., Voiculescu, D.: Free convolution of measures with unbounded support. Indiana Univ. Math. J. 42(3), 733–773 (1993). DOI 10.1512/iumj.1993.42.42033. URL http://dx.doi.org/10.1512/iumj.1993.42.42033
Biane, P.: Processes with free increments. Math. Z. 227(1), 143–174 (1998). DOI 10.1007/PL00004363. URL http://dx.doi.org/10.1007/PL00004363
Haagerup, U., Larsen, F.: Brown’s spectral distribution measure for R-diagonal elements in finite von Neumann algebras. J. Funct. Anal. 176(2), 331–367 (2000). DOI 10.1006/jfan.2000.3610. URL http://dx.doi.org/10.1006/jfan.2000.3610
Haagerup, U., Schultz, H.: Brown measures of unbounded operators affiliated with a finite von Neumann algebra. Math. Scand. 100(2), 209–263 (2007)
Haagerup, U., Schultz, H.: Invariant subspaces for operators in a general II1-factor. Publ. Math. Inst. Hautes Études Sci. (109), 19–111 (2009). DOI 10.1007/s10240-009-0018-7. URL http://dx.doi.org/10.1007/s10240-009-0018-7
Johnson, N.L., Kotz, S., Balakrishnan, N.: Continuous univariate distributions. Vol. 1, second edn. Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics. John Wiley & Sons Inc., New York (1994). A Wiley-Interscience Publication
Larsen, F.: Powers of R-diagonal elements. J. Operator Theory 47(1), 197–212 (2002)
Lindsay, J.M., Pata, V.: Some weak laws of large numbers in noncommutative probability. Math. Z. 226(4), 533–543 (1997). DOI 10.1007/PL00004356. URL http://dx.doi.org/10.1007/PL00004356
Młotkowski, W., Penson, K.A., Życzkowski, K.: Densities of the Raney distributions (2012), arXiv:1211.7259
Penson, K.A., Życzkowski, K.: Product of Ginibre matrices: Fuss-Catalan and Raney distributions. Phys. Rev. E 83, 061118 (2011), arXiv:1103.3453
Rosenthal, J.S.: A first look at rigorous probability theory, second edn. World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ (2006)
Tucci, G.H.: Limits laws for geometric means of free random variables. Indiana Univ. Math. J. 59(1), 1–13 (2010). DOI 10.1512/iumj.2010.59.3775. URL http://dx.doi.org/10.1512/iumj.2010.59.3775
Voiculescu, D.: Addition of certain noncommuting random variables. J. Funct. Anal. 66(3), 323–346 (1986). DOI 10.1016/0022-1236(86)90062-5. URL http://dx.doi.org/10.1016/0022-1236(86)90062-5
Voiculescu, D.: Multiplication of certain noncommuting random variables. J. Operator Theory 18(2), 223–235 (1987)
Voiculescu, D.V., Dykema, K.J., Nica, A.: Free random variables, CRM Monograph Series, vol. 1. American Mathematical Society, Providence, RI (1992). A noncommutative probability approach to free products with applications to random matrices, operator algebras and harmonic analysis on free groups
Acknowledgements
The first author is supported by ERC Advanced Grant No. OAFPG 27731 and the Danish National Research Foundation through the Center for Symmetry and Deformation (DNRF92).
© 2013 Springer-Verlag Berlin Heidelberg
Haagerup, U., Möller, S. (2013). The Law of Large Numbers for the Free Multiplicative Convolution. In: Carlsen, T., Eilers, S., Restorff, G., Silvestrov, S. (eds) Operator Algebra and Dynamics. Springer Proceedings in Mathematics & Statistics, vol 58. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-39459-1_8