8.1 Introduction

In classical probability the weak law of large numbers is well known (see for instance [14, Corollary 5.4.11]), both for the additive convolution of Borel measures on \(\mathbb{R}\) and for the multiplicative convolution of Borel measures on \([0,\infty)\).

Going from classical probability to free probability, one may ask whether similar results hold for the additive and multiplicative free convolutions \(\boxplus \) and \(\boxtimes \) as defined by D. Voiculescu in [16] and [17] and extended to probability measures with unbounded support by H. Bercovici and D. Voiculescu in [4]. The law of large numbers for the free additive convolution of measures with bounded support is an immediate consequence of D. Voiculescu's work in [16], and J. M. Lindsay and V. Pata proved it for measures with finite first moment in [11, Corollary 5.2].

Theorem 1 ([11, Corollary 5.2]).

Let μ be a probability measure on \(\mathbb{R}\) with finite mean value α, and let \(\psi _{n}: \mathbb{R} \rightarrow \mathbb{R}\) be the map \(\psi _{n}(x) = \frac{1} {n}x\) . Then

$$\displaystyle{ \dot{\psi }_{n}(\mathop{\underbrace{\mu \boxplus \ldots \boxplus \mu }}\limits _{\text{n times}}) \rightarrow \delta _{\alpha } }$$

where the convergence is weak and \(\delta _{x}\) denotes the Dirac measure at \(x \in \mathbb{R}\) .

Here \(\dot{\phi }(\mu )\) denotes the image measure of μ under ϕ for a Borel measurable function \(\phi: \mathbb{R} \rightarrow \mathbb{R}\), respectively, \([0,\infty ) \rightarrow [0,\infty )\).

In classical probability the multiplicative law follows directly from the additive law. This is not the case in free probability, where a multiplicative law requires a separate proof. Such a proof was given by G. H. Tucci in [15, Theorem 3.2] for measures with bounded support, using results on operator algebras from [6] and [8]. In this paper we give an elementary proof of Tucci's theorem which also shows that the theorem holds for measures with unbounded support.

Theorem 2.

Let μ be a probability measure on [0,∞) and let \(\phi _{n}: [0,\infty ) \rightarrow [0,\infty )\) be the map \(\phi _{n}(x) = {x}^{\frac{1} {n} }\) . Set δ = μ({0}). If we denote

$$\displaystyle{ \nu _{n} =\dot{\phi } _{n}(\mu _{n}) =\dot{\phi } _{n}(\mathop{\underbrace{\mu \boxtimes \ldots \boxtimes \mu }}\limits _{\text{n times}}) }$$

then \(\nu _{n}\) converges weakly to a probability measure ν on [0,∞). If μ is a Dirac measure on [0,∞) then ν = μ. Otherwise ν is the unique measure on [0,∞) characterised by \(\nu \left (\left [0, \frac{1} {S_{\mu }(t-1)}\right ]\right ) = t\) for all t ∈ (δ,1) and ν({0}) = δ. The support of the measure ν is the closure of the interval

$$\displaystyle{ (a,b) = \left ({\left (\int _{0}^{\infty }{x}^{-1}\mathrm{d}\mu (x)\right )}^{-1},\int _{ 0}^{\infty }x\mathrm{d}\mu (x)\right ), }$$

where 0 ≤ a < b ≤∞.

Note that, unlike in the additive case, the multiplicative limit distribution is a Dirac measure only if μ itself is a Dirac measure. Furthermore, \(S_{\mu }\), and hence (by [17, Theorem 2.6]) μ, can be reconstructed from the limit measure.
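For μ not a Dirac measure this reconstruction is explicit: if \(F_{\nu }(x) =\nu ([0,x])\) denotes the distribution function of ν, the characterisation in Theorem 2 inverts to

$$\displaystyle{ S_{\mu }(t - 1) = \frac{1} {F_{\nu }^{-1}(t)},\qquad t \in (\delta,1), }$$

where \(F_{\nu }^{-1}\) denotes the (generalised) inverse of \(F_{\nu }\); by [17, Theorem 2.6] this determines μ.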

We start by recalling some definitions and proving some preliminary results in Sect. 8.2, which are then used in Sect. 8.3 to prove Theorem 2. In Sect. 8.4 we prove some further formulas in connection with the limit law, which we apply in Sect. 8.5 to the two-parameter family \((\mu _{\alpha,\beta })_{\alpha,\beta \geq 0}\) of measures on (0,∞) for which the S-transform is given by \(S_{\mu _{\alpha,\beta }}(z) = \frac{{(-z)}^{\beta }} {{(1+z)}^{\alpha }}\), −1 < z < 0.

8.2 Preliminaries

We start by recalling some results we will use and by proving some technical tools necessary for the proof of Theorem 2. First we recall the definition and some properties of Voiculescu's S-transform for measures on [0,∞) with unbounded support, as defined by H. Bercovici and D. Voiculescu in [4].

Definition 1 ([4, Sect. 6]).

Let μ be a probability measure on [0,∞) and assume that δ = μ({0}) < 1. We define \(\psi _{\mu }(u) =\int _{ 0}^{\infty } \frac{tu} {1-tu}\,\mathrm{d}\mu (t)\) for u ∈ (−∞, 0) and denote its inverse in a neighbourhood of (δ − 1, 0) by \(\chi _{\mu }\). We then define the S-transform of μ by \(S_{\mu }(z) = \frac{z+1} {z} \chi _{\mu }(z)\) for z ∈ (δ − 1, 0).
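To illustrate Definition 1, consider a Dirac measure μ = δ c with c > 0: a short computation gives

$$\displaystyle{ \psi _{\delta _{c}}(u) = \frac{cu} {1 - cu},\qquad \chi _{\delta _{c}}(z) = \frac{z} {c(1 + z)},\qquad S_{\delta _{c}}(z) = \frac{z + 1} {z} \cdot \frac{z} {c(1 + z)} = \frac{1} {c}, }$$

so the S-transform of a Dirac measure is constant, a fact used several times below.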

Lemma 1 ([4, Proposition 6.8]).

Let μ be a probability measure on [0,∞) with δ = μ({0}) < 1. Then \(S_{\mu }\) is positive and decreasing on (δ − 1,0). Moreover, if δ > 0 we have \(S_{\mu }(z) \rightarrow \infty \) as z →δ − 1.

Lemma 2.

Let μ be a probability measure on [0,∞) with δ = μ({0}) < 1 and assume that μ is not a Dirac measure. Then \(S_{\mu }^{\prime}(z) < 0\) for z ∈ (δ − 1,0); in particular \(S_{\mu }\) is strictly decreasing on (δ − 1,0).

Proof.

For u ∈ (−∞, 0),

$$\displaystyle{ \psi _{\mu }^{\prime}(u) =\int _{ 0}^{\infty } \frac{t} {{(1 -\mathit{ut})}^{2}}\mathrm{d}\mu (t) > 0. }$$
(8.1)

Moreover \(\lim _{u\rightarrow 0-}\psi _{\mu }(u) = 0\) and \(\lim _{u\rightarrow -\infty }\psi _{\mu }(u) =\delta -1\). Hence \(\psi _{\mu }\) is a strictly increasing homeomorphism of (−∞, 0) onto (δ − 1, 0). For u ∈ (−∞, 0), we have

$$\displaystyle{ S_{\mu }(\psi _{\mu }(u)) = \frac{\psi _{\mu }(u) + 1} {\psi _{\mu }(u)} \cdot u. }$$

Hence

$$\displaystyle{ \frac{\mathrm{d}} {\mathrm{d}u}\left (\ln S_{\mu }(\psi _{\mu }(u))\right ) = - \frac{\psi _{\mu }^{\prime}(u)} {\psi _{\mu }(u)(\psi _{\mu }(u) + 1)} + \frac{1} {u} = \frac{\psi _{\mu }(u)(\psi _{\mu }(u) + 1) - u\psi _{\mu }^{\prime}(u)} {u\psi _{\mu }(u)(\psi _{\mu }(u) + 1)} }$$
(8.2)

where the denominator is positive and the numerator is equal to

$$\displaystyle\begin{array}{rcl} & & \left (\int _{0}^{\infty } \frac{\mathit{ut}} {1 -\mathit{ut}}\mathrm{d}\mu (t)\right ) \cdot \left (\int _{0}^{\infty } \frac{1} {1 -\mathit{ut}}\mathrm{d}\mu (t)\right ) -\int _{0}^{\infty } \frac{\mathit{ut}} {{(1 -\mathit{ut})}^{2}}\mathrm{d}\mu (t) {}\\ & & = \frac{u} {2}\int _{0}^{\infty }\int _{ 0}^{\infty } \frac{s + t} {(1 -\mathit{us})(1 -\mathit{ut})}\mathrm{d}\mu (s)\mathrm{d}\mu (t) {}\\ & & \qquad -\frac{u} {2}\int _{0}^{\infty }\int _{ 0}^{\infty }\left ( \frac{s} {{(1 -\mathit{us})}^{2}} + \frac{t} {{(1 -\mathit{ut})}^{2}}\right )\mathrm{d}\mu (s)\mathrm{d}\mu (t) {}\\ & & = -\frac{{u}^{2}} {2} \int _{0}^{\infty }\int _{ 0}^{\infty } \frac{{(s - t)}^{2}} {{(1 -\mathit{us})}^{2}{(1 -\mathit{ut})}^{2}}\mathrm{d}\mu (s)\mathrm{d}\mu (t) {}\\ \end{array}$$

where we have used that

$$\displaystyle{ (s + t)(1 -\mathit{us})(1 -\mathit{ut}) - s{(1 -\mathit{ut})}^{2} - t{(1 -\mathit{us})}^{2} = -u{(s - t)}^{2}. }$$

Since μ is not a Dirac measure,

$$\displaystyle{ (\mu \times \mu )\left (\left \{(s,t) \in [0,\infty {)}^{2}: s\neq t\right \}\right ) > 0 }$$

and thus

$$\displaystyle{ \int _{0}^{\infty }\int _{ 0}^{\infty } \frac{{(s - t)}^{2}} {{(1 -\mathit{us})}^{2}{(1 -\mathit{ut})}^{2}}\mathrm{d}\mu (s)\mathrm{d}\mu (t) > 0 }$$

which shows that the right hand side of (8.2) is strictly positive. Hence

$$\displaystyle{ \frac{\mathrm{d}} {\mathrm{d}z}\left (\ln S_{\mu }(z)\right ) < 0 }$$

for z ∈ (δ − 1, 0), which proves the lemma. □ 

Remark 1.

Furthermore, by [4, Proposition 6.1] and [4, Proposition 6.3], \(\psi _{\mu }\) and \(\chi _{\mu }\) are analytic in a neighbourhood of (−∞, 0), respectively, (δ − 1, 0); hence \(S_{\mu }\) is analytic in a neighbourhood of (δ − 1, 0).

Lemma 3 ([4, Corollary 6.6]).

Let μ and ν be probability measures on [0,∞), none of them being δ 0 , then we have \(S_{\mu \boxtimes \nu } = S_{\mu }S_{\nu }\) .
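Combined with the example after Definition 1, Lemma 3 shows in particular that the image measure \(\mu _{c}\) of μ under \(x\mapsto cx\), c > 0, which equals \(\delta _{c} \boxtimes \mu \), satisfies

$$\displaystyle{ S_{\mu _{c}}(z) = S_{\delta _{c}}(z)S_{\mu }(z) = {c}^{-1}S_{\mu }(z), }$$

a scaling rule we will use in Sect. 8.4.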

Next we have to determine the image of S μ . Here we closely follow the argument given for measures with compact support by F. Larsen and the first author in [6, Theorem 4.4].

Lemma 4.

Let μ be a probability measure on [0,∞) which is not a Dirac measure. Then \(S_{\mu }((\delta -1,0)) = ({b}^{-1},{a}^{-1})\), where a, b and δ are defined as in Theorem 2.

Proof.

First assume δ = 0. Observe that as u → ∞ we have

$$\displaystyle{ \int _{0}^{\infty } \frac{u} {1 + \mathit{ut}}\mathrm{d}\mu (t) \rightarrow \int _{0}^{\infty }\frac{1} {t}\mathrm{d}\mu (t) = {a}^{-1}\quad \text{ and }\quad \int _{ 0}^{\infty } \frac{\mathit{ut}} {1 + \mathit{ut}}\mathrm{d}\mu (t) \rightarrow 1. }$$

Hence

$$\displaystyle{ \frac{-\psi _{\mu }(-u)} {u(\psi _{\mu }(-u) + 1)} = \left (\int _{0}^{\infty } \frac{\mathit{ut}} {1 + \mathit{ut}}\mathrm{d}\mu (t)\right ){\left (\int _{0}^{\infty } \frac{u} {1 + \mathit{ut}}\mathrm{d}\mu (t)\right )}^{-1} \rightarrow a\quad \text{ for }u \rightarrow \infty. }$$

Similarly, for u → 0 we have

$$\displaystyle{ \int _{0}^{\infty } \frac{t} {1 + \mathit{ut}}\mathrm{d}\mu (t) \rightarrow \int _{0}^{\infty }t\mathrm{d}\mu (t) = b\quad \text{ and }\quad \int _{ 0}^{\infty } \frac{1} {1 + \mathit{ut}}\mathrm{d}\mu (t) \rightarrow 1. }$$

Hence

$$\displaystyle{ \frac{-\psi _{\mu }(-u)} {u(\psi _{\mu }(-u) + 1)} = \frac{\int _{0}^{\infty } \frac{t} {1+\mathit{ut}}\mathrm{d}\mu (t)} {\int _{0}^{\infty } \frac{1} {1+\mathit{ut}}\mathrm{d}\mu (t)} \rightarrow b\quad \text{ for }u \rightarrow 0. }$$

As χ μ is the inverse of ψ μ we have

$$\displaystyle{ S_{\mu }(\psi _{\mu }(-u)) = \frac{\psi _{\mu }(-u) + 1} {\psi _{\mu }(-u)} \chi _{\mu }(\psi _{\mu }(-u)) = \frac{u(\psi _{\mu }(-u) + 1)} {-\psi _{\mu }(-u)}. }$$

By (8.1) and Lemma 2, \(\psi _{\mu }\) is strictly increasing and continuous and \(S_{\mu }\) is strictly decreasing and continuous, so \(S_{\mu }(\psi _{\mu }((-\infty,0))) = S_{\mu }((-1,0)) = ({b}^{-1},{a}^{-1})\).

If now δ > 0, we have by Lemma 1 that \(S_{\mu }(z) \rightarrow \infty \) for z → δ − 1, so in this case continuity gives us \(S_{\mu }((\delta -1,0)) = ({b}^{-1},\infty )\), which is as desired, as a = 0 in this case. □ 

8.3 Proof of the Main Result

Let μ be a probability measure on [0,∞) and let ν be as defined in Theorem 2. If μ is a Dirac measure, then \(\nu _{n} =\mu \) for all n and hence \(\nu _{n} \rightarrow \nu =\mu \) weakly, so the theorem holds in this case. In the following we can therefore assume that μ is not a Dirac measure. We start by assuming further that μ({0}) = 0, and will deal with the case μ({0}) > 0 in Remark 2.

Lemma 5.

For all t ∈ (0,1) and all n ≥ 1 we have

$$\displaystyle{ \int _{0}^{\infty }{\left (1 + \frac{1 - t} {t} S_{\mu }{(t - 1)}^{n}{x}^{n}\right )}^{-1}\mathrm{d}\nu _{ n}(x) = t. }$$

Proof.

Let t ∈ (0, 1) and set z = t − 1. By Definition 1 we have

$$\displaystyle\begin{array}{rcl} z + 1& =& \psi _{\mu _{n}}(\chi _{\mu _{n}}(z)) + 1 {}\\ & =& \int _{0}^{\infty } \frac{\chi _{\mu _{n}}(z)x} {1 -\chi _{\mu _{n}}(z)x}\mathrm{d}\mu _{n}(x) + 1 {}\\ & =& \int _{0}^{\infty } \frac{1} {1 -\chi _{\mu _{n}}(z)x}\mathrm{d}\mu _{n}(x) {}\\ & =& \int _{0}^{\infty }{\left (1 - \frac{z} {z + 1}S_{\mu _{n}}(z)x\right )}^{-1}\mathrm{d}\mu _{ n}(x) {}\\ & =& \int _{0}^{\infty }{\left (1 - \frac{z} {z + 1}S_{\mu }{(z)}^{n}x\right )}^{-1}\mathrm{d}\mu _{ n}(x). {}\\ \end{array}$$

In the last equality we use multiplicativity of the S-transform from Lemma 3.

Now substitute t = z + 1 and afterwards \(x = {y}^{n}\), and use the definition of \(\nu _{n}\) to get

$$\displaystyle\begin{array}{rcl} t& =& \int _{0}^{\infty }{\left (1 + \frac{1 - t} {t} S_{\mu }{(t - 1)}^{n}x\right )}^{-1}\mathrm{d}\mu _{ n}(x) {}\\ & =& \int _{0}^{\infty }{\left (1 + \frac{1 - t} {t} S_{\mu }{(t - 1)}^{n}{y}^{n}\right )}^{-1}\mathrm{d}\nu _{ n}(y). {}\\ \end{array}$$

 □ 
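Although Dirac measures are excluded by our standing assumption, it is instructive to check the identity of Lemma 5 in that degenerate case: for μ = δ c , c > 0, we have \(\mu _{n} =\delta _{{c}^{n}}\), \(\nu _{n} =\delta _{c}\) and \(S_{\mu } \equiv \frac{1} {c}\), so the left hand side becomes

$$\displaystyle{ {\left (1 + \frac{1 - t} {t} {c}^{-n}{c}^{n}\right )}^{-1} = {\left (1 + \frac{1 - t} {t} \right )}^{-1} = t, }$$

as claimed.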

Now, using this lemma, we can prove the following characterisation of the weak limit of ν n .

Lemma 6.

For all t ∈ (0,1) we have \(t =\lim _{n\rightarrow \infty }\nu _{n}\left (\left [0, \frac{1} {S_{\mu }(t-1)}\right ]\right )\) .

Proof.

Fix t ∈ (0, 1) and let t′ ∈ (0, t). Then

$$\displaystyle\begin{array}{rcl} t^{\prime}& =& \int _{0}^{\infty }{\left (1 + \frac{1 - t^{\prime}} {t^{\prime}} S_{\mu }{(t^{\prime} - 1)}^{n}{x}^{n}\right )}^{-1}\mathrm{d}\nu _{ n}(x) {}\\ & \leq & \int _{0}^{\infty }{\left (1 + \frac{1 - t} {t} S_{\mu }{(t^{\prime} - 1)}^{n}{x}^{n}\right )}^{-1}\mathrm{d}\nu _{ n}(x) {}\\ & \leq & \int _{0}^{ \frac{1} {S_{\mu }(t-1)} }1\mathrm{d}\nu _{n}(x) +\int _{ \frac{1} {S_{\mu }(t-1)} }^{\infty }{\left (1 + \frac{1 - t} {t} S_{\mu }{(t^{\prime} - 1)}^{n}{x}^{n}\right )}^{-1}\mathrm{d}\nu _{ n}(x) {}\\ & \leq & \int _{0}^{ \frac{1} {S_{\mu }(t-1)} }1\mathrm{d}\nu _{n}(x) +\int _{ \frac{1} {S_{\mu }(t-1)} }^{\infty }{\left (1 + \frac{1 - t} {t}{ \left (\frac{S_{\mu }(t^{\prime} - 1)} {S_{\mu }(t - 1)}\right )}^{n}\right )}^{-1}\mathrm{d}\nu _{ n}(x) {}\\ & \leq & \nu _{n}\left (\left [0, \frac{1} {S_{\mu }(t - 1)}\right ]\right ) +{ \left (1 + \frac{1 - t} {t}{ \left (\frac{S_{\mu }(t^{\prime} - 1)} {S_{\mu }(t - 1)}\right )}^{n}\right )}^{-1}. {}\\ \end{array}$$

Here the first inequality holds as t′ ≤ t while \(S_{\mu }{(t^{\prime} - 1)}^{n}{x}^{n} > 0\), the second holds as the integrand is at most 1, the third holds as \(x \geq \frac{1} {S_{\mu }(t-1)}\) on the second domain of integration, and the last because \(\nu _{n}\) is a probability measure.

By Lemma 2, \(S_{\mu }\) is strictly decreasing, and hence \(\frac{S_{\mu }(t^{\prime}-1)} {S_{\mu }(t-1)} > 1\). This implies

$$\displaystyle\begin{array}{rcl} \lim _{n\rightarrow \infty }{\left (1 + \frac{1 - t} {t}{ \left (\frac{S_{\mu }(t^{\prime} - 1)} {S_{\mu }(t - 1)}\right )}^{n}\right )}^{-1} = 0.& & {}\\ \end{array}$$

Hence

$$\displaystyle{ t^{\prime} \leq \liminf _{n\rightarrow \infty }\nu _{n}\left (\left [0, \frac{1} {S_{\mu }(t - 1)}\right ]\right ). }$$

As this holds for all t′ ∈ (0, t) we have

$$\displaystyle{ t \leq \liminf _{n\rightarrow \infty }\nu _{n}\left (\left [0, \frac{1} {S_{\mu }(t - 1)}\right ]\right ). }$$
(8.3)

On the other hand, if \(t^{\prime\prime} \in (t,1)\) we get

$$\displaystyle\begin{array}{rcl}{ t}^{{\prime\prime}}& =& \int _{ 0}^{\infty }{\left (1 + \frac{1 - {t}^{{\prime\prime}}} {{t}^{{\prime\prime}}} S_{\mu }{({t}^{{\prime\prime}}- 1)}^{n}{x}^{n}\right )}^{-1}\mathrm{d}\nu _{ n}(x) {}\\ & \geq & \int _{0}^{\infty }{\left (1 + \frac{1 - t} {t} S_{\mu }{({t}^{{\prime\prime}}- 1)}^{n}{x}^{n}\right )}^{-1}\mathrm{d}\nu _{ n}(x) {}\\ & \geq & \int _{0}^{ \frac{1} {S_{\mu }(t-1)} }{\left (1 + \frac{1 - t} {t} S_{\mu }{({t}^{{\prime\prime}}- 1)}^{n}{x}^{n}\right )}^{-1}\mathrm{d}\nu _{ n}(x) {}\\ & \geq & \int _{0}^{ \frac{1} {S_{\mu }(t-1)} }{\left (1 + \frac{1 - t} {t} \frac{S_{\mu }{({t}^{{\prime\prime}}- 1)}^{n}} {S_{\mu }{(t - 1)}^{n}} \right )}^{-1}\mathrm{d}\nu _{ n}(x) {}\\ & \geq & \nu _{n}\left (\left [0, \frac{1} {S_{\mu }(t - 1)}\right ]\right ) \cdot {\left (1 + \frac{1 - t} {t}{ \left (\frac{S_{\mu }({t}^{{\prime\prime}}- 1)} {S_{\mu }(t - 1)} \right )}^{n}\right )}^{-1}. {}\\ \end{array}$$

Here the first inequality holds as t′′ > t while \(S_{\mu }{({t}^{{\prime\prime}} - 1)}^{n}{x}^{n} \geq 0\), the second holds as the omitted integrand is non-negative, and the third holds as \(x \leq \frac{1} {S_{\mu }(t-1)}\) on the domain of integration.

Again, as \(S_{\mu }\) is strictly decreasing, we have \(\frac{S_{\mu }({t}^{{\prime\prime}}-1)} {S_{\mu }(t-1)} < 1\), and hence

$$\displaystyle{ \lim _{n\rightarrow \infty }{\left (1 + \frac{1 - t} {t}{ \left (\frac{S_{\mu }({t}^{{\prime\prime}}- 1)} {S_{\mu }(t - 1)} \right )}^{n}\right )}^{-1} = 1. }$$

This implies

$$\displaystyle{ {t}^{{\prime\prime}}\geq \limsup _{ n\rightarrow \infty }\nu _{n}\left (\left [0, \frac{1} {S_{\mu }(t - 1)}\right ]\right ). }$$

As this holds for all \(t^{\prime\prime} \in (t,1)\) we have

$$\displaystyle{ t \geq \limsup _{n\rightarrow \infty }\nu _{n}\left (\left [0, \frac{1} {S_{\mu }(t - 1)}\right ]\right ). }$$
(8.4)

Combining (8.3) and (8.4) we get

$$\displaystyle{ t =\lim _{n\rightarrow \infty }\nu _{n}\left (\left [0, \frac{1} {S_{\mu }(t - 1)}\right ]\right ) }$$

as desired. □ 

To prove weak convergence of \(\nu _{n}\) to ν it remains to show that, in the limit, \(\nu _{n}\) puts no mass outside the support of ν.

Lemma 7.

For all x ≤ a and y ≥ b we have \(\nu _{n}([0,x]) \rightarrow 0\) and \(\nu _{n}([0,y]) \rightarrow 1\), respectively.

Proof.

To prove the first convergence, let t ≤ a and s ∈ (0, 1). Now we have that \(t \leq \frac{1} {S_{\mu }(s-1)}\) from Lemma 4 and hence

$$\displaystyle{ \limsup _{n\rightarrow \infty }\nu _{n}([0,t]) \leq \limsup _{n\rightarrow \infty }\nu _{n}\left (\left [0, \frac{1} {S_{\mu }(s - 1)}\right ]\right ) = s. }$$

Here the inequality holds because \(\nu _{n}\) is a positive measure and the equality comes from Lemma 6. As this holds for all s ∈ (0, 1) we have \(\limsup _{n\rightarrow \infty }\nu _{n}([0,t]) \leq 0\) and hence \(\lim _{n\rightarrow \infty }\nu _{n}([0,t]) = 0\) by positivity of the measures.

For the second convergence we proceed in the same manner, by letting t ≥ b and s ∈ (0, 1). Now we have that \(t \geq \frac{1} {S_{\mu }(s-1)}\) from Lemma 4 and hence

$$\displaystyle{ \liminf _{n\rightarrow \infty }\nu _{n}([0,t]) \geq \liminf _{n\rightarrow \infty }\nu _{n}\left (\left [0, \frac{1} {S_{\mu }(s - 1)}\right ]\right ) = s. }$$

Again the inequality holds because \(\nu _{n}\) is a positive measure and the equality comes from Lemma 6. As this holds for all s ∈ (0, 1) we have \(\liminf _{n\rightarrow \infty }\nu _{n}([0,t]) \geq 1\) and hence \(\lim _{n\rightarrow \infty }\nu _{n}([0,t]) = 1\), as \(\nu _{n}\) is a probability measure. □ 

Lemmas 6 and 7 now prove Theorem 2 without any assumption of bounded support, as weak convergence of probability measures is equivalent to pointwise convergence of the distribution functions at all but countably many x ∈ [0,∞).

Remark 2.

In the case δ = μ({0}) > 0, \(S_{\mu }\) is only defined on (δ − 1, 0) and \(S_{\mu }(z) \rightarrow \infty \) as z → δ − 1. This implies that Lemma 5 only holds for t ∈ (δ, 1), with a similar proof. Similarly, Lemma 6 only holds for t ∈ (δ, 1), and in the proof we have to assume t′ ∈ (δ, t). Similarly, in the proof of Lemma 7 we have to assume s ∈ (δ, 1). Moreover, in Lemma 7 the statement that 0 ≤ x ≤ a implies \(\nu _{n}([0,x]) \rightarrow 0\) for n → ∞ should be replaced by: a = 0 and \(\nu _{n}(\{0\}) =\delta =\nu (\{0\})\) for all \(n \in \mathbb{N}\).

Using our result we can prove the following corollary, generalizing a theorem of H. Schultz and the first author [8, Theorem 2.2].

Let \((\mathcal{M},\tau )\) be a finite von Neumann algebra \(\mathcal{M}\) with a normal faithful tracial state τ. In [7, Proposition 3.9] the definition of Brown's spectral distribution measure μ T was extended to all operators \(T \in {\mathcal{M}}^{\varDelta }\), where \({\mathcal{M}}^{\varDelta }\) is the set of unbounded operators affiliated with \(\mathcal{M}\) for which \(\tau ({\ln }^{+}(\vert T\vert )) < \infty \).

Corollary 1.

If T is an R-diagonal operator in \({\mathcal{M}}^{\varDelta }\) then \(\dot{\phi }_{n}(\mu _{{({T}^{{\ast}})}^{n}{T}^{n}}) \rightarrow \dot{\psi } (\mu _{T})\) weakly, where ψ(z) = |z| 2, \(z \in \mathbb{C}\) , and ϕ n (x) = x 1∕n for x ≥ 0.

Proof.

By [7, Proposition 3.9] we have \(\mu _{{T}^{{\ast}}T}^{\boxtimes n} =\mu _{{({T}^{{\ast}})}^{n}{T}^{n}}\) and by Theorem 2 we have \(\dot{\phi }_{n}(\mu _{{T}^{{\ast}}T}^{\boxtimes n}) \rightarrow \nu\) weakly. On the other hand, observe that \(\nu =\dot{\psi } (\mu _{T})\) by [7, Theorem 4.17], which gives the result. □ 

Remark 3.

In [8, Theorem 1.5] it was shown that \(\dot{\phi }_{n}(\mu _{{({T}^{{\ast}})}^{n}{T}^{n}}) \rightarrow \dot{\psi } (\mu _{T})\) weakly for all bounded operators \(T \in \mathcal{M}\). It would be interesting to know whether this limit law can be extended to all \(T \in {\mathcal{M}}^{\varDelta }\).

8.4 Further Formulas for the S-Transform

In this section we present some further formulas for the S-transform of measures on [0,∞), obtained by similar means as in the preceding sections, and use them to investigate the difference between the laws of large numbers in classical and free probability. From now on we assume μ({0}) = 0, so that μ can be considered as a probability measure on (0,∞).

We start with a technical lemma which will be useful later.

Lemma 8.

We have the following identities

$$\displaystyle\begin{array}{rcl} \int _{0}^{1}{\ln }^{2}\left ( \frac{t} {1 - t}\right )\mathrm{d}t& =& \frac{{\pi }^{2}} {3} {}\\ \int _{0}^{1}{\ln }^{2}t\mathrm{d}t& =& 2 {}\\ \int _{0}^{1}{\ln }^{2}(1 - t)\mathrm{d}t& =& 2 {}\\ \int _{0}^{1}\ln t\ln (1 - t)\mathrm{d}t& =& 2 -\frac{{\pi }^{2}} {6}. {}\\ \end{array}$$

Proof.

For the first identity we start with the substitution \(x = \frac{t} {1-t}\) which gives us \(t = \frac{x} {1+x}\) and \(\mathrm{d}t = \frac{\mathrm{d}x} {{(1+x)}^{2}}\) and hence

$$\displaystyle\begin{array}{rcl} \int _{0}^{1}{\ln }^{2}\left ( \frac{t} {1 - t}\right )\mathrm{d}t& =& \int _{0}^{\infty } \frac{{\ln }^{2}x} {{(1 + x)}^{2}}\mathrm{d}x {}\\ & =& \left.\frac{\mathrm{{d}}^{2}} {\mathrm{d}{\alpha }^{2}}\int _{0}^{\infty } \frac{{x}^{\alpha }} {{(1 + x)}^{2}}\mathrm{d}x\right \vert _{\alpha =0} {}\\ & =& \left.\frac{\mathrm{{d}}^{2}} {\mathrm{{d}\alpha }^{2}}B(1+\alpha,1-\alpha )\right \vert _{\alpha =0} {}\\ & =& \left.\frac{\mathrm{{d}}^{2}} {\mathrm{{d}\alpha }^{2}} \frac{\pi \alpha } {\sin (\pi \alpha )}\right \vert _{\alpha =0} {}\\ & =& \left.\frac{\mathrm{{d}}^{2}} {\mathrm{{d}\alpha }^{2}}{\left (1 -\frac{{(\pi \alpha )}^{2}} {3!} + \cdots \,\right )}^{-1}\right \vert _{\alpha =0} {}\\ & =& \left.\frac{\mathrm{{d}}^{2}} {\mathrm{{d}\alpha }^{2}}\left (1 + \frac{{\pi }^{2}} {6}{\alpha }^{2} + \cdots \,\right )\right \vert _{\alpha =0} = \frac{{\pi }^{2}} {3} {}\\ \end{array}$$

where B(⋅ , ⋅ ) denotes the Beta function. The second and the third identity follow from the substitution t↦exp(−x), respectively, 1 − t↦exp(−x).

Finally, the last identity follows by observing

$$ \displaystyle\begin{array}{rcl} \frac{{\pi }^{2}} {3}& =& \int _{0}^{1}{\ln }^{2}\left ( \frac{t} {1 - t}\right )\mathrm{d}t {}\\ & =& \int _{0}^{1}{\ln }^{2}t {+\ln }^{2}(1 - t) - 2\ln t\ln (1 - t)\mathrm{d}t {}\\ & =& 4 - 2\int _{0}^{1}\ln t\ln (1 - t)\mathrm{d}t {}\\ \end{array}$$

which gives the desired result. □ 

Now we prove two propositions calculating the expectations of \(\ln x\) and \({\ln }^{2}x\), both for μ and for ν, expressed in terms of the S-transform of μ.

Proposition 1.

Let μ be a probability measure on (0,∞) and let ν be as defined in Theorem  2 . Then \(\int _{0}^{\infty }\left \vert \ln x\right \vert \mathrm{d}\mu (x) < \infty \) if and only if \(\int _{0}^{1}\left \vert \ln S_{\mu }(t - 1)\right \vert \mathrm{d}t < \infty \) and if and only if \(\int _{0}^{\infty }\left \vert \ln x\right \vert \mathrm{d}\nu (x) < \infty \) . If these integrals are finite, then

$$\displaystyle{ \int _{0}^{\infty }\ln x\mathrm{d}\mu (x) = -\int _{ 0}^{1}\ln S_{\mu }(t - 1)\mathrm{d}t =\int _{ 0}^{\infty }\ln x\mathrm{d}\nu (x). }$$

Proof.

For x > 0, put \({\ln }^{+}x =\max (\ln x,0)\) and \({\ln }^{-}x =\max (-\ln x,0)\). Then one easily checks that

$$\displaystyle{{ \ln }^{+}x \leq \ln (x + 1) {\leq \ln }^{+}x +\ln 2 }$$

and by replacing x by \(\frac{1} {x}\) it follows that

$$\displaystyle{{ \ln }^{-}x \leq \ln \left (\frac{x + 1} {x} \right ) {\leq \ln }^{-}x +\ln 2. }$$

Hence

$$\displaystyle{ \int _{0}^{\infty }{\ln }^{+}x\mathrm{d}\mu (x) < \infty \Leftrightarrow \int _{ 0}^{\infty }\ln (x + 1)\mathrm{d}\mu (x) < \infty }$$

and

$$\displaystyle{ \int _{0}^{\infty }{\ln }^{-}x\mathrm{d}\mu (x) < \infty \Leftrightarrow \int _{ 0}^{\infty }\ln \left (\frac{x + 1} {x} \right )\mathrm{d}\mu (x) < \infty. }$$

We prove next that

$$\displaystyle{ \int _{0}^{\infty }\ln (x + 1)\mathrm{d}\mu (x) =\int _{ 0}^{\infty }{\ln }^{-}u\psi _{\mu }^{\prime}(-u)\mathrm{d}u }$$
(8.5)

and

$$\displaystyle{ \int _{0}^{\infty }\ln \left (\frac{x + 1} {x} \right )\mathrm{d}\mu (x) =\int _{ 0}^{\infty }{\ln }^{+}u\psi _{\mu }^{\prime}(-u)\mathrm{d}u. }$$
(8.6)

Recall from (8.1), that

$$\displaystyle{ \psi _{\mu }^{\prime}(-u) =\int _{ 0}^{\infty } \frac{t} {{(1 + \mathit{ut})}^{2}}\mathrm{d}\mu (t),\quad u > 0. }$$

Hence by Tonelli’s theorem

$$\displaystyle{ \int _{0}^{\infty }{\ln }^{+}u\psi _{\mu }^{\prime}(-u)\mathrm{d}u =\int _{ 1}^{\infty }\ln u\psi _{\mu }^{\prime}(-u)\mathrm{d}u =\int _{ 0}^{\infty }\int _{ 1}^{\infty } \frac{x} {{(1 + \mathit{ux})}^{2}}\ln u\mathrm{d}u\mathrm{d}\mu (x) }$$

and similarly,

$$\displaystyle{ \int _{0}^{\infty }{\ln }^{-}u\psi _{\mu }^{\prime}(-u)\mathrm{d}u =\int _{ 0}^{\infty }\int _{ 0}^{1} \frac{x} {{(1 + \mathit{ux})}^{2}}\ln \left (\frac{1} {u}\right )\mathrm{d}u\mathrm{d}\mu (x). }$$

By partial integration, we have

$$\displaystyle{ \int _{1}^{\infty } \frac{x} {{(1 + \mathit{ux})}^{2}}\ln u\mathrm{d}u = \left [- \frac{\ln u} {1 + \mathit{ux}} +\ln \left ( \frac{u} {1 + \mathit{ux}}\right )\right ]_{u=1}^{u=\infty } =\ln \left (\frac{x + 1} {x} \right ) }$$

and similarly,

$$\displaystyle\begin{array}{rcl} \int _{0}^{1} \frac{x} {{(1 + \mathit{ux})}^{2}}\ln \left (\frac{1} {u}\right )\mathrm{d}u& =& \left [ \frac{\ln u} {1 + \mathit{ux}} -\ln \left ( \frac{u} {1 + \mathit{ux}}\right )\right ]_{u=0}^{u=1} {}\\ & =& \left [ \frac{\mathit{ux}} {1 + \mathit{ux}}\ln u +\ln (1 + \mathit{ux})\right ]_{u=0}^{u=1} =\ln (x + 1) {}\\ \end{array}$$

which proves (8.5) and (8.6). Therefore

$$\displaystyle{ \int _{0}^{\infty }\left \vert \ln x\right \vert \mathrm{d}\mu (x) < \infty \Leftrightarrow \int _{ 0}^{\infty }\left \vert \ln u\right \vert \psi _{\mu }^{\prime}(-u)\mathrm{d}u < \infty }$$

and substituting t = ψ μ (−u) + 1 we get

$$\displaystyle{ \int _{0}^{\infty }\left \vert \ln u\right \vert \psi _{\mu }^{\prime}(-u)\mathrm{d}u\,=\,\int _{ 0}^{1}\left \vert \ln \left (-\chi _{\mu }(t - 1)\right )\right \vert \mathrm{d}t\,=\,\int _{ 0}^{1}\left \vert \ln \left ( \frac{1 - t} {t}\right )\,+\,\ln S_{\mu }(t - 1)\right \vert \mathrm{d}t. }$$

Since \(\int _{0}^{1}\left \vert \ln \left ( \frac{t} {1-t}\right )\right \vert \mathrm{d}t < \infty \) it follows that

$$\displaystyle{ \int _{0}^{\infty }\left \vert \ln u\right \vert \psi _{\mu }^{\prime}(-u)\mathrm{d}u < \infty \Leftrightarrow \int _{ 0}^{1}\left \vert \ln S_{\mu }(t - 1)\right \vert \mathrm{d}t < \infty. }$$

If μ is not a Dirac measure, the substitution \(x = S_{\mu }{(t - 1)}^{-1}\), 0 < t < 1, gives t = ν((0, x]) for a < x < b, where as before \(a ={ \left (\int _{0}^{\infty }{x}^{-1}\mathrm{d}\mu (x)\right )}^{-1}\) and \(b =\int _{ 0}^{\infty }x\mathrm{d}\mu (x)\). The measure ν is concentrated on the interval (a, b). Hence

$$\displaystyle{ \int _{0}^{\infty }\left \vert \ln x\right \vert \mathrm{d}\nu (x) =\int _{ a}^{b}\left \vert \ln x\right \vert \mathrm{d}\nu (x) =\int _{ 0}^{1}\left \vert \ln \left ( \frac{1} {S_{\mu }(t - 1)}\right )\right \vert \mathrm{d}t =\int _{ 0}^{1}\left \vert \ln S_{\mu }(t - 1)\right \vert \mathrm{d}t. }$$

This proves the first statement in Proposition 1. If all three integrals in that statement are finite, we get

$$\displaystyle\begin{array}{rcl} \int _{0}^{\infty }\ln x\mathrm{d}\mu (x)& =& \int _{ 0}^{\infty }\ln (x + 1)\mathrm{d}\mu (x) -\int _{ 0}^{\infty }\ln \left (\frac{x + 1} {x} \right )\mathrm{d}\mu (x) {}\\ & =& \int _{0}^{\infty }\left ({\ln }^{-}u {-\ln }^{+}u\right )\psi _{\mu }^{\prime}(-u)\mathrm{d}u = -\int _{ 0}^{\infty }\ln u\psi _{\mu }^{\prime}(-u)\mathrm{d}u. {}\\ \end{array}$$

By the substitution t = ψ μ (−u) + 1 we get

$$\displaystyle{ \int _{0}^{1}\ln \left (-\chi _{\mu }(t - 1)\right )\mathrm{d}t =\int _{ 0}^{1}\left (\ln \left (\frac{1 - t} {t} \right ) +\ln S_{\mu }(t - 1)\right )\mathrm{d}t =\int _{ 0}^{1}\ln S_{\mu }(t - 1)\mathrm{d}t. }$$

Hence \(\int _{0}^{\infty }\ln x\mathrm{d}\mu (x) = -\int _{0}^{1}\ln S_{\mu }(t - 1)\mathrm{d}t\). Moreover, by the substitution \(x = S_{\mu }{(t - 1)}^{-1}\), 0 < t < 1, we get

$$\displaystyle{ \int _{0}^{\infty }\ln x\mathrm{d}\mu (x) =\int _{ 0}^{1}\ln \left ( \frac{1} {S_{\mu }(t - 1)}\right )\mathrm{d}t =\int _{ 0}^{\infty }\ln x\mathrm{d}\nu (x). }$$

Finally, if μ = δ x , x ∈ (0,∞), this identity holds trivially, because ν = δ x and \(S_{\mu }(z) = \frac{1} {x}\), −1 < z < 0. □ 

Corollary 2.

Let μ 1 and μ 2 be probability measures on (0,∞). If \(\mathbb{E}_{\mu _{1}}(\ln x)\) and \(\mathbb{E}_{\mu _{2}}(\ln x)\) exist then \(\mathbb{E}_{\mu _{1}\boxtimes \mu _{2}}(\ln x)\) also exists and

$$\displaystyle{ \mathbb{E}_{\mu _{1}\boxtimes \mu _{2}}(\ln x) = \mathbb{E}_{\mu _{1}}(\ln x) + \mathbb{E}_{\mu _{2}}(\ln x) }$$

where \(\mathbb{E}_{\mu }(f) =\int _{ 0}^{\infty }f(x)\mathrm{d}\mu (x)\) .

Proof.

The statement follows directly from Proposition 1 and multiplicativity of the S-transform. □ 

For further use, we define the map ρ for a probability measure μ on (0,∞) by

$$\displaystyle{ \rho (\mu ) =\int _{ 0}^{1}\ln \left (\frac{1 - t} {t} \right )\ln S_{\mu }(t - 1)\mathrm{d}t. }$$

Note that ρ(μ) is well-defined and non-negative for all probability measures on (0,∞) because

$$\displaystyle{ \ln \left (\tfrac{1-t} {t} \right )\ln S_{\mu }(t - 1) =\ln \left (\tfrac{1-t} {t} \right )\ln \left (\frac{S_{\mu }(t - 1)} {S_{\mu }(-\tfrac{1} {2})} \right ) +\ln \left (\tfrac{1-t} {t} \right )\ln S_{\mu }\left (-\tfrac{1} {2}\right ), }$$
(8.7)

where the first term on the right hand side is non-negative for all t ∈ (0, 1), since both factors change sign together at \(t = \frac{1} {2}\) (\(S_{\mu }\) being decreasing), and the second term is integrable with integral 0.

Lemma 9.

Let μ be a probability measure on (0,∞), then

$$\displaystyle{ 0 \leq \rho (\mu ) \leq \frac{\pi } {\sqrt{3}}{\left (\int _{0}^{1}{\ln }^{2}S_{\mu }(t - 1)\mathrm{d}t\right )}^{1/2}. }$$

Furthermore, ρ(μ) = 0 if and only if μ is a Dirac measure. Moreover, equality holds in the right inequality if and only if \(S_{\mu }(z) ={ \left ( \frac{-z} {1+z}\right )}^{\gamma }\) for some γ > 0, and in this case \(\rho (\mu ) =\gamma \frac{{\pi }^{2}} {3}\) . Additionally, if \(\mu _{1},\mu _{2}\) are probability measures on (0,∞) we have \(\rho (\mu _{1} \boxtimes \mu _{2}) =\rho (\mu _{1}) +\rho (\mu _{2})\) .

Proof.

We have already observed that ρ(μ) ≥ 0. For the second inequality observe that

$$\displaystyle{ \rho {(\mu )}^{2} \leq \left (\int _{ 0}^{1}{\ln }^{2}\left (\frac{1 - t} {t} \right )\mathrm{d}t\right )\left (\int _{0}^{1}{\ln }^{2}S_{\mu }(t - 1)\mathrm{d}t\right ) }$$

by the Cauchy–Schwarz inequality, where the first factor equals \(\frac{{\pi }^{2}} {3}\) by Lemma 8.

If μ = δ a for some a > 0 we have \(S_{\mu }(z) = \frac{1} {a}\), hence \(\ln S_{\mu }(t-1)\) is constant, so the antisymmetry of \(\ln (\frac{1-t} {t} )\) about \(t = \frac{1} {2}\) gives us ρ(μ) = 0. On the other hand, if ρ(μ) = 0, the first term in (8.7) has to integrate to 0, but by the symmetry of \(\ln \left (\frac{1-t} {t} \right )\) and the fact that \(S_{\mu }\) is decreasing, this implies that \(S_{\mu }\) must be constant, hence μ is a Dirac measure.

By the Cauchy–Schwarz inequality, equality holds in the second inequality precisely if \(\ln S_{\mu }(t - 1) =\gamma \ln (\frac{1-t} {t} )\) for some γ > 0, which is the case if and only if \(S_{\mu }(t - 1) ={ \left (\frac{1-t} {t} \right )}^{\gamma }\), and in this case \(\rho (\mu ) =\gamma \frac{{\pi }^{2}} {3}\) by Lemma 8.

For the last formula we use multiplicativity of the S-transform to get

$$\displaystyle\begin{array}{rcl} \rho (\mu _{1} \boxtimes \mu _{2})& =& \int _{0}^{1}\ln \left (\frac{1 - t} {t} \right )\ln S_{\mu _{1}\boxtimes \mu _{2}}(t - 1)\mathrm{d}t {}\\ & =& \int _{0}^{1}\ln \left (\frac{1 - t} {t} \right )\left (\ln S_{\mu _{1}}(t - 1) +\ln S_{\mu _{2}}(t - 1)\right )\mathrm{d}t {}\\ & =& \rho (\mu _{1}) +\rho (\mu _{2}). {}\\ \end{array}$$

 □ 

Proposition 2.

Let μ be a probability measure on (0,∞), and let ν be defined as in Theorem  2 . Then

$$\displaystyle\begin{array}{rcl} \int _{0}^{\infty }{\ln }^{2}x\,\mathrm{d}\mu (x)& =& \int _{ 0}^{1}{\ln }^{2}S_{\mu }(t - 1)\,\mathrm{d}t + 2\rho (\mu ) {}\\ \int _{0}^{\infty }{\ln }^{2}x\,\mathrm{d}\nu (x)& =& \int _{ 0}^{1}{\ln }^{2}S_{\mu }(t - 1)\,\mathrm{d}t {}\\ \mathbb{V}_{\mu }(\ln x)& =& \mathbb{V}_{\nu }(\ln x) + 2\rho (\mu ) {}\\ \end{array}$$

as equalities of numbers in [0,∞], where \(\mathbb{V}_{\sigma }(\ln x)\) denotes the variance of ln x with respect to a probability measure σ on (0,∞). Moreover,

$$\displaystyle{ 0 \leq \rho (\mu ) \leq \frac{\pi } {\sqrt{3}}\mathbb{V}_{\nu }{(\ln x)}^{\frac{1} {2} }. }$$

Proof.

We first prove the following identity

$$\displaystyle{ \int _{0}^{\infty }{\ln }^{2}u\psi _{\mu }^{\prime}(-u)\mathrm{d}u =\int _{ 0}^{\infty }{\ln }^{2}x\mathrm{d}\mu (x) + \frac{{\pi }^{2}} {3}. }$$
(8.8)

Since \(\psi _{\mu }^{\prime}(-u) =\int _{ 0}^{\infty } \frac{x} {{(1+\mathit{ux})}^{2}} \,\mathrm{d}\mu (x)\), we get by Tonelli's theorem that

$$\displaystyle\begin{array}{rcl} \int _{0}^{\infty }{\ln }^{2}u\psi _{\mu }^{\prime}(-u)\mathrm{d}u& =& \int _{ 0}^{\infty }\left (\int _{ 0}^{\infty }{\ln }^{2}u \frac{x} {{(1 + \mathit{ux})}^{2}}\mathrm{d}u\right )\mathrm{d}\mu (x) {}\\ & =& \int _{0}^{\infty }\left (\int _{ 0}^{\infty }{\ln }^{2}\left (\frac{v} {x}\right ) \frac{\mathrm{d}v} {{(1 + v)}^{2}}\right )\mathrm{d}\mu (x). {}\\ \end{array}$$

Note next that

$$\displaystyle{ \int _{0}^{\infty }{\ln }^{2}\left (\frac{v} {x}\right ) \frac{\mathrm{d}v} {{(1 + v)}^{2}} = c_{0} + c_{1}\ln x + c{_{2}\ln }^{2}x }$$

where \(c_{0} =\int _{ 0}^{\infty } \frac{{\ln }^{2}v} {{(1+v)}^{2}} \mathrm{d}v\), \(c_{1} = -2\int _{0}^{\infty } \frac{\ln v} {{(1+v)}^{2}} \mathrm{d}v\), and \(c_{2} =\int _{ 0}^{\infty } \frac{1} {{(1+v)}^{2}} \mathrm{d}v = 1\). Moreover, by the substitution \(v = \frac{1} {w}\) one gets c 1 = −c 1 and hence c 1 = 0. Finally, by the substitution \(v = \frac{t} {1-t},0 < t < 1\) and Lemma 8,

$$\displaystyle{ c_{0} =\int _{ 0}^{1}{\ln }^{2}\left ( \frac{t} {1 - t}\right )\mathrm{d}t = \frac{{\pi }^{2}} {3}. }$$

Hence

$$\displaystyle{ \int _{0}^{\infty }{\ln }^{2}u\,\psi _{\mu }^{\prime}(-u)\mathrm{d}u =\int _{ 0}^{\infty }\left ({\ln }^{2}x + \frac{{\pi }^{2}} {3}\right )\mathrm{d}\mu (x) }$$

which proves (8.8). Next by the substitution t = ψ μ (−u) + 1, we have

$$\displaystyle\begin{array}{rcl} & \int _{0}^{\infty }{\ln }^{2}u\psi _{\mu }^{\prime}(-u)\mathrm{d}u =\int _{ 0}^{1}{\ln }^{2}\left (-\chi _{\mu }(t - 1)\right )\mathrm{d}t =& \\ & \int _{0}^{1}{\left (\ln \tfrac{1-t} {t} +\ln S_{\mu }(t - 1)\right )}^{2}\mathrm{d}t. &{}\end{array}$$
(8.9)

Since \(t\mapsto \ln \left (\frac{1-t} {t} \right )\) is square integrable on (0, 1) the right hand side of (8.9) is finite if and only if

$$\displaystyle{ \int _{0}^{1}{\ln }^{2}S_{\mu }(t - 1)\,\mathrm{d}t < \infty. }$$

Hence by (8.8) and (8.9) this condition is equivalent to

$$\displaystyle{ \int _{0}^{\infty }{\ln }^{2}x\mathrm{d}\mu (x) < \infty, }$$

so to prove the first equation in Proposition 2 it suffices to consider the case where the two integrals above are finite. In that case ρ(μ) < ∞ by Lemma 9. Thus by Lemma 8 and the definition of ρ(μ),

$$\displaystyle{ \int _{0}^{1}{\left (\ln \left (\frac{1 - t} {t} \right ) +\ln S_{\mu }(t - 1)\right )}^{2}\mathrm{d}t =\int _{ 0}^{1}{\ln }^{2}\left (S_{\mu }(t - 1)\right )\mathrm{d}t + 2\rho (\mu ) + \frac{{\pi }^{2}} {3}. }$$

Hence by (8.8) and (8.9)

$$\displaystyle{ \int _{0}^{\infty }{\ln }^{2}x\mathrm{d}\mu (x) =\int _{ 0}^{1}{\ln }^{2}\left (S_{\mu }(t - 1)\right )\mathrm{d}t + 2\rho (\mu ). }$$

The second equality in Proposition 2

$$\displaystyle{ \int _{0}^{\infty }{\ln }^{2}x\mathrm{d}\nu (x) =\int _{ 0}^{1}{\ln }^{2}S_{\mu }(t - 1)\mathrm{d}t }$$

follows from the substitution x = S μ (t − 1)−1 in case μ is not a Dirac measure, and it is trivially true for Dirac measures. By the first two equalities in Proposition 2, we have

$$\displaystyle{ \int _{0}^{\infty }{\ln }^{2}x\mathrm{d}\mu (x) =\int _{ 0}^{\infty }{\ln }^{2}x\mathrm{d}\nu (x) + 2\rho (\mu ). }$$
(8.10)

If both sides of this equality are finite, then by Proposition 1,

$$\displaystyle{ \int _{0}^{\infty }\ln x\mathrm{d}\mu (x) =\int _{ 0}^{\infty }\ln x\mathrm{d}\nu (x) }$$

where both integrals are well-defined. Combined with (8.10) we get

$$\displaystyle{ \mathbb{V}_{\mu }(\ln x) = \mathbb{V}_{\nu }(\ln x) + 2\rho (\mu ) }$$
(8.11)

and if \(\int _{0}^{\infty }{\ln }^{2}x\mathrm{d}\mu (x) = +\infty \), both sides of (8.11) must be infinite by (8.10).

As the S-transform scales with the distribution, in the sense that the image measure μ c of μ under \(x\mapsto cx\), c > 0, satisfies \(S_{\mu _{c}}(z) = {c}^{-1}S_{\mu }(z)\) (cf. the remark after Lemma 3), we have for ρ that

$$\displaystyle\begin{array}{rcl} \rho (\mu _{c})& =& \int _{0}^{1}\ln \left (\frac{1 - t} {t} \right )\ln ({c}^{-1}S_{\mu }(t - 1))\mathrm{d}t {}\\ & =& \int _{0}^{1}\ln \left (\frac{1 - t} {t} \right )\ln S_{\mu }(t - 1)\mathrm{d}t +\ln ({c}^{-1})\int _{ 0}^{1}\ln \left (\frac{1 - t} {t} \right )\mathrm{d}t =\rho (\mu ) + 0 {}\\ \end{array}$$

by antisymmetry of the second integrand around \(t = \frac{1} {2}\). Using this for \(c =\exp \left (-\mathbb{E}_{\nu }(\ln x)\right )\), together with \(\int _{0}^{1}\ln S_{\mu }(t - 1)\,\mathrm{d}t = -\mathbb{E}_{\nu }(\ln x)\) from Proposition 1, we get

$$\displaystyle\begin{array}{rcl} \rho (\mu ) =\rho (\mu _{c})& \leq & \frac{\pi } {\sqrt{3}}{\left (\int _{0}^{1}{\left (\ln S_{\mu }(t - 1) + \mathbb{E}_{\nu }\left (\ln x\right )\right )}^{2}\mathrm{d}t\right )}^{\frac{1} {2} } {}\\ & =& \frac{\pi } {\sqrt{3}}{\left (\int _{0}^{1}{\ln }^{2}S_{\mu }(t - 1)\,\mathrm{d}t - \mathbb{E}_{\nu }{\left (\ln x\right )}^{2}\right )}^{\frac{1} {2} } {}\\ & =& \frac{\pi } {\sqrt{3}}{\left (\mathbb{V}_{\nu }(\ln x)\right )}^{\frac{1} {2} }. {}\\ \end{array}$$

 □ 

Now we can use the preceding lemmas to investigate the different behavior of the multiplicative law of large numbers in classical and free probability. Note that in classical probability, for a sequence \((X_{i})_{i=1}^{\infty }\) of independent identically distributed positive random variables, we have the identity \(\mathbb{V}(\ln (\prod _{i=1}^{n}X_{i})) = n\mathbb{V}(\ln X_{1})\). In free probability, by Propositions 1 and 2, we have instead

$$\displaystyle\begin{array}{rcl} & & \mathbb{V}_{{\mu }^{\boxtimes n}}(\ln t) {}\\ & & =\int _{ 0}^{\infty }{\ln }^{2}t\,\mathrm{d}{\mu }^{\boxtimes n}(t) -{\left (\int _{ 0}^{\infty }\ln t\,\mathrm{d}{\mu }^{\boxtimes n}(t)\right )}^{2} {}\\ & & =\int _{ 0}^{1}{\ln }^{2}S_{{\mu }^{ \boxtimes n}}(t - 1)\,\mathrm{d}t + 2\rho ({\mu }^{\boxtimes n}) -{\left (\int _{ 0}^{1}\ln S_{{\mu }^{ \boxtimes n}}(t - 1)\,\mathrm{d}t\right )}^{2} {}\\ & & = {n}^{2}\int _{ 0}^{1}{\ln }^{2}S_{\mu }(t - 1)\,\mathrm{d}t + 2n\rho (\mu ) - {n}^{2}{\left (\int _{ 0}^{1}\ln S_{\mu }(t - 1)\,\mathrm{d}t\right )}^{2} {}\\ & & = {n}^{2}\mathbb{V}_{\nu }(\ln x) + 2n\rho (\mu ). {}\\ \end{array}$$

Hence \(\mathbb{V}_{{\mu }^{\boxtimes n}}(\ln t) = n\mathbb{V}_{\mu }(\ln t) + n(n - 1)\mathbb{V}_{\nu }(\ln t) > n\mathbb{V}_{\mu }(\ln t)\) for n ≥ 2 if μ is not a Dirac measure and \(\mathbb{V}_{\nu }(\ln t) < \infty \), which shows that the variance of \(\ln t\) is in general not additive.
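A concrete illustration, using the measure \(\mu _{1,0}\) from Sect. 8.5 (the free Poisson distribution with shape parameter 1): here \(S_{\mu }(t - 1) = \frac{1} {t}\), so by Lemma 8

$$\displaystyle{ \mathbb{V}_{\nu }(\ln t) =\int _{ 0}^{1}{\ln }^{2}t\,\mathrm{d}t -{\left (\int _{0}^{1}\ln t\,\mathrm{d}t\right )}^{2} = 2 - 1 = 1,\qquad \rho (\mu ) = \frac{{\pi }^{2}} {6}, }$$

and hence \(\mathbb{V}_{{\mu }^{\boxtimes n}}(\ln t) = {n}^{2} + \frac{{\pi }^{2}} {3}n\), which grows quadratically in n rather than linearly as in the classical case.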

Lemma 10.

Let μ be a probability measure on (0,∞) and let ν be defined as in Theorem 2. Then

$$\displaystyle{ \int _{0}^{\infty }{x}^{\gamma }\mathrm{d}\mu (x) = \frac{\sin (\pi \gamma )} {\pi \gamma } \int _{0}^{1}{\left (\frac{1 - t} {t} S_{\mu }(t - 1)\right )}^{-\gamma }\mathrm{d}t }$$

for − 1 < γ < 1 and

$$\displaystyle{ \int _{0}^{\infty }{x}^{\gamma }\mathrm{d}\nu (x) =\int _{ 0}^{1}S_{\mu }{(t - 1)}^{-\gamma }\mathrm{d}t }$$

for \(\gamma \in \mathbb{R}\) as equalities of numbers in [0,∞].

Proof.

By Tonelli’s theorem followed by the substitution u = yx we get

$$\displaystyle\begin{array}{rcl} \int _{0}^{\infty }{y}^{-\gamma }\psi _{ \mu }^{\prime}(-y)\mathrm{d}y& =& \int _{0}^{\infty }\int _{ 0}^{\infty } \frac{{y}^{-\gamma }x} {{(1 + yx)}^{2}}\mathrm{d}y\mathrm{d}\mu (x) {}\\ & =& \int _{0}^{\infty }{x}^{\gamma }\int _{ 0}^{\infty } \frac{{u}^{-\gamma }} {{(1 + u)}^{2}}\mathrm{d}u\mathrm{d}\mu (x) {}\\ & =& B(1-\gamma,1+\gamma )\int _{0}^{\infty }{x}^{\gamma }\mathrm{d}\mu (x), {}\\ \end{array}$$

where \(B(s,t) =\int _{ 0}^{\infty } \frac{{u}^{s-1}} {{(1+u)}^{s+t}}\mathrm{d}u\) is the Beta function. But \(B(1-\gamma,1+\gamma ) = \frac{\pi \gamma } {\sin (\pi \gamma )}\) by well-known properties of B. Substitute now \(x = -\chi _{\mu }(-z)\) and z = 1 − t to get

$$\displaystyle{ \int _{0}^{\infty }{x}^{-\gamma }\psi _{ \mu }^{\prime}(-x)\mathrm{d}x =\int _{ 0}^{1}{\left (-\chi _{\mu }(-z)\right )}^{-\gamma }\mathrm{d}z =\int _{ 0}^{1}{\left (\frac{1 - t} {t} S_{\mu }(t - 1)\right )}^{-\gamma }\mathrm{d}t, }$$

which gives the first identity. The second identity follows from the substitution x = S μ (t − 1)−1 and the properties of ν from Theorem 2. □ 
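For the two-parameter family of Sect. 8.5 below, where \(S_{\mu _{\alpha,\beta }}(t - 1) = \frac{{(1-t)}^{\beta }} {{t}^{\alpha }}\), the second identity of Lemma 10 becomes an explicit Beta integral: for the corresponding limit measure \(\nu _{\alpha,\beta }\),

$$\displaystyle{ \int _{0}^{\infty }{x}^{\gamma }\mathrm{d}\nu _{\alpha,\beta }(x) =\int _{ 0}^{1}{t}^{\alpha \gamma }{(1 - t)}^{-\beta \gamma }\mathrm{d}t = \frac{\varGamma (1 +\alpha \gamma )\varGamma (1 -\beta \gamma )} {\varGamma (2 +\alpha \gamma -\beta \gamma )}, }$$

the integral being finite precisely when αγ > −1 and βγ < 1, and equal to +∞ otherwise.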

8.5 Examples

In this section we investigate a two-parameter family of distributions for which explicit calculations can be made.

Proposition 3.

Let α,β ≥ 0. There exists a probability measure μ α,β on (0,∞) whose S-transform is given by

$$\displaystyle{ S_{\mu _{\alpha,\beta }}(z) = \frac{{(-z)}^{\beta }} {{(1 + z)}^{\alpha }}. }$$

Furthermore, these measures form a two-parameter semigroup under \(\boxtimes \), induced by addition on [0,∞) × [0,∞): \(\mu _{\alpha,\beta } \boxtimes \mu _{\alpha ^{\prime},\beta ^{\prime}} =\mu _{\alpha +\alpha ^{\prime},\beta +\beta ^{\prime}}\).

Proof.

Note first that α = β = 0 gives \(S_{\mu _{0,0}} = 1\), which by uniqueness of the S-transform results in μ 0, 0 = δ 1, hence we can in the following assume (α, β) ≠ (0, 0).

Define the function \(v_{\alpha,\beta }: \mathbb{C}\setminus ((-\infty,-1] \cup [0,\infty )) \rightarrow \mathbb{C}\) by

$$\displaystyle{ v_{\alpha,\beta }(z) =\beta \ln (-z) -\alpha \ln (1 + z) }$$

for all \(z \in \mathbb{C}\setminus ((-\infty,-1] \cup [0,\infty ))\).

In the following we denote, for \(z \in \mathbb{C}\), by \(\arg z \in [-\pi,\pi ]\) its argument. Assume z = x + iy with y > 0; then

$$\displaystyle{ \ln (-z) = \frac{1} {2}\ln \left ({x}^{2} + {y}^{2}\right ) + \mathrm{i}\arg (-x -\mathrm{i}y) }$$

where arg(−x −iy) < 0, which implies that \(z\mapsto \ln (-z)\) maps \({\mathbb{C}}^{+}\) into \({\mathbb{C}}^{-}\). Similarly, if we assume \(z = x + \mathrm{i}y\) and y > 0 then

$$\displaystyle{ \ln (1 + z) = \frac{1} {2}\ln \left ({(x + 1)}^{2} + {y}^{2}\right ) + \mathrm{i}\arg ((x + 1) + \mathrm{i}y) }$$

where \(\arg ((x + 1) + \mathrm{i}y) > 0\), which implies that \(z\mapsto -\ln (1 + z)\) also maps \({\mathbb{C}}^{+}\) into \({\mathbb{C}}^{-}\), and hence \(v_{\alpha,\beta }({\mathbb{C}}^{+}) \subseteq {\mathbb{C}}^{-}\). Furthermore, we observe that \(v_{\alpha,\beta }(\bar{z}) = \overline{v_{\alpha,\beta }(z)}\) for all z in the domain of \(v_{\alpha,\beta }\). By [4, Theorem 6.13 (ii)] these results imply that there exists a unique \(\boxtimes \)-infinitely divisible measure μ α, β with the S-transform

$$\displaystyle{ S_{\mu _{\alpha,\beta }}(z) =\exp (v_{\alpha,\beta }(z)) =\exp (\beta \ln (-z) -\alpha \ln (1 + z)) = \frac{{(-z)}^{\beta }} {{(1 + z)}^{\alpha }}. }$$

The semigroup property follows from multiplicativity of the S-transform. □ 

The existence of μ α, 0 was previously proven by T. Banica, S. T. Belinschi, M. Capitaine and B. Collins in [2], as a special case of the free Bessel laws. The measure μ α, α is known as a Boolean stable law, cf. O. Arizmendi and T. Hasebe [1].

Furthermore, there is a clear relationship between the measures μ α, β and μ β, α .

Lemma 11.

Let α,β ≥ 0, (α,β) ≠ (0,0), and let ζ: (0,∞) → (0,∞) be the map \(\zeta (t) = {t}^{-1}\). Then we have \(\mu _{\beta,\alpha } =\dot{\zeta } (\mu _{\alpha,\beta })\) , where \(\dot{\zeta }\) denotes the image measure under the map ζ.

Proof.

Put \(\sigma =\dot{\zeta } (\mu _{\alpha,\beta })\). Then by the proof of [7, Proposition 3.13],

$$\displaystyle{ S_{\sigma }(z) = \frac{1} {S_{\mu _{\alpha,\beta }}(-1 - z)} = \frac{{(-z)}^{\alpha }} {{(1 + z)}^{\beta }} = S_{\mu _{\beta,\alpha }}(z) }$$

for −1 < z < 0. Hence σ = μ β, α . □ 

Lemma 12.

Let (α,β) ≠ (0,0). Denote the limit measure corresponding to μ α,β by ν α,β . Then ν α,β is uniquely determined by the formula

$$\displaystyle{ F_{\alpha,\beta }\left ( \frac{{t}^{\alpha }} {{(1 - t)}^{\beta }}\right ) = t }$$

for 0 < t < 1, where F α,β (x) = ν α,β ((0,x]) is the distribution function of ν α,β .

Proof.

The lemma follows directly from Theorem 2 and Proposition 3. □ 
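For example, for α = β = 1 the formula reads \(F_{1,1}\left (\frac{t} {1-t}\right ) = t\), i.e.

$$\displaystyle{ F_{1,1}(x) = \frac{x} {1 + x},\qquad x \in (0,\infty ), }$$

in accordance with the expression for F α, α given below.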

For β = 0 and α > 0,

$$\displaystyle{ F_{\alpha,0}(x) = \left \{\begin{array}{ll} {x}^{\frac{1} {\alpha } },&\qquad 0 < x < 1 \\ 1, &\qquad x \geq 1. \end{array} \right. }$$

Similarly, for α = 0 and β > 0

$$\displaystyle{ F_{0,\beta }(x) = \left \{\begin{array}{ll} 0, &\qquad 0 < x < 1 \\ 1 - {x}^{-\frac{1} {\beta } },&\qquad x \geq 1. \end{array} \right. }$$

Hence ν 0, β is the Pareto distribution with scale parameter 1 and shape parameter \(\frac{1} {\beta }\).

Moreover, if α = β > 0 we get \(F_{\alpha,\alpha }(x) = {(1 + {x}^{-1/\alpha })}^{-1}\) for x ∈ (0,∞), which we recognize as the image measure of the Burr distribution with parameters (1, α −1) (or equivalently the Fisk or log-logistic distribution (cf. [9, p. 54]) with scale parameter 1 and shape parameter α −1) under the map \(x\mapsto {x}^{-1}\).

On the other hand, we can make some observations about the distributions μ α, β themselves, too. For the cases (α, β) = (1, 0) and (α, β) = (0, 1) we can recognize the measures μ 1, 0 and μ 0, 1 from their S-transforms, as \(S_{\mu _{1,0}}(z) = {(1 + z)}^{-1}\) is the S-transform of the free Poisson distribution with shape parameter 1 (cf. [18, p. 34]), which is given by

$$\displaystyle{ \mu _{1,0} = \frac{1} {2\pi }\sqrt{\frac{4 - x} {x}} 1_{(0,4)}(x)\mathrm{d}x, }$$

while \(S_{\mu _{0,1}}(z) = -z\), which according to Lemma 11 is the S-transform of the image of the above free Poisson distribution under the map \(t\mapsto {t}^{-1}\),

$$\displaystyle{ \mu _{0,1} = \frac{1} {2\pi } \frac{\sqrt{4x - 1}} {{x}^{2}} 1_{(\frac{1} {4},\infty )}(x)\mathrm{d}x, }$$

which is the same as the free stable distribution with parameters α = 1∕2 and ρ = 1 as described by H. Bercovici, V. Pata and P. Biane in [3, Appendix A1]. More generally, μ 0, β is the same as the free stable distribution v α, ρ with \(\alpha = \frac{1} {\beta +1}\) and ρ = 1, because by [3, Appendix A4] v α, 1 is characterized by \(\varSigma _{v_{\alpha,1}}(y) ={ \left ( \frac{-y} {1-y}\right )}^{\frac{1} {\alpha } -1}\), y ∈ (−∞, 0), and it is easy to check that

$$\displaystyle{ S_{v_{\alpha,1}}(z) =\varSigma _{v_{\alpha,1}}\left ( \frac{z} {1 + z}\right ) = {(-z)}^{\frac{1} {\alpha } -1} = S_{\mu _{ 0,\frac{1} {\alpha } -1}}(z),\quad -1 < z < 0,\ 0 <\alpha < 1. }$$

From the above observations, we can now describe a construction of the measures μ m, n .

Proposition 4.

Let m,n be nonnegative integers. Then the measure μ m,n is given by

$$\displaystyle{ \mu _{m,n} =\mu _{ 1,0}^{\boxtimes m} \boxtimes \mu _{ 0,1}^{\boxtimes n}. }$$

Proof.

By multiplicativity of the S-transform we have that

$$\displaystyle{ S_{\mu _{1,0}^{\boxtimes m}\boxtimes \mu _{0,1}^{\boxtimes n}}(z) = S_{\mu _{1,0}}{(z)}^{m}S_{\mu _{ 0,1}}{(z)}^{n} = \frac{{(-z)}^{n}} {{(1 + z)}^{m}} = S_{\mu _{m,n}}(z), }$$

which by uniqueness of the S-transform gives the desired result. □ 

Proposition 5.

For all α,β ≥ 0

$$\displaystyle\begin{array}{rcl} \mathbb{E}_{\mu _{\alpha,\beta }}(\ln x)& =& \beta -\alpha {}\\ \rho (\mu _{\alpha,\beta })& =& \frac{{\pi }^{2}} {6}(\alpha +\beta ) {}\\ \mathbb{V}_{\mu _{\alpha,\beta }}(\ln x)& =& {(\alpha -\beta )}^{2} + \frac{{\pi }^{2}} {3}(\alpha \beta +\alpha +\beta ). {}\\ \end{array}$$

Proof.

These formulas follow easily from Propositions 1 and 2 and Lemma 8. □ 
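For instance, the first formula is obtained from Proposition 1 as follows: since \(S_{\mu _{\alpha,\beta }}(t - 1) = \frac{{(1-t)}^{\beta }} {{t}^{\alpha }}\) and \(\int _{0}^{1}\ln t\,\mathrm{d}t =\int _{ 0}^{1}\ln (1 - t)\,\mathrm{d}t = -1\),

$$\displaystyle{ \mathbb{E}_{\mu _{\alpha,\beta }}(\ln x) = -\int _{0}^{1}\left (\beta \ln (1 - t) -\alpha \ln t\right )\mathrm{d}t =\beta -\alpha. }$$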

Furthermore, we can also calculate all fractional moments of μ α, β explicitly, by the following theorem.

Theorem 3.

Let α,β > 0 and \(\gamma \in \mathbb{R}\). Then we have

$$\displaystyle{ \int _{0}^{\infty }{x}^{\gamma }\mathrm{d}\mu _{\alpha,\beta }(x) = \left \{\begin{array}{ll} \frac{\sin (\pi \gamma )} {\pi \gamma } \frac{\varGamma (1+\gamma +\gamma \alpha )\varGamma (1-\gamma -\gamma \beta )} {\varGamma (2+\gamma \alpha -\gamma \beta )} & \quad - \frac{1} {1+\alpha } <\gamma < \frac{1} {1+\beta } \\ \infty &\quad \text{otherwise} \end{array} \right. }$$
(8.12)
$$\displaystyle{ \int _{0}^{\infty }{x}^{\gamma }\mathrm{d}\mu _{\alpha,0}(x) = \left \{\begin{array}{ll} \frac{\varGamma (1+\gamma +\gamma \alpha )} {\varGamma (1+\gamma )\varGamma (2+\gamma \alpha )} & \quad \gamma > - \frac{1} {1+\alpha } \\ \infty &\quad \text{otherwise} \end{array} \right. }$$
(8.13)
$$\displaystyle{ \int _{0}^{\infty }{x}^{\gamma }\mathrm{d}\mu _{ 0,\beta }(x) = \left \{\begin{array}{ll} \frac{\varGamma (1-\gamma -\gamma \beta )} {\varGamma (1-\gamma )\varGamma (2-\gamma \beta )} & \quad \gamma < \frac{1} {1+\beta } \\ \infty &\quad \text{otherwise.} \end{array} \right. }$$
(8.14)

Proof.

First let − 1 < γ < 1. Then (8.12)–(8.14) follow from Lemma 10 together with the formula \(\varGamma (1+\gamma )\varGamma (1-\gamma ) = \frac{\pi \gamma } {\sin (\pi \gamma )}\). Since \(S_{\mu _{\alpha,0}}(z) = \frac{1} {{(z+1)}^{\alpha }}\) is analytic in a neighborhood of 0, μ α, 0 has finite moments of all orders. Therefore the functions

$$\displaystyle\begin{array}{rcl} s& \mapsto & \int _{0}^{\infty }{x}^{s}\mathrm{d}\mu _{\alpha,0}(x) {}\\ s& \mapsto & \frac{\varGamma (1 + s + s\alpha )} {\varGamma (1 + s)\varGamma (2 + s\alpha )} {}\\ \end{array}$$

are both analytic in the half-plane ℜ s > 0 and they coincide for s ∈ (0, 1). Hence they are equal for all \(s \in \mathbb{C}\) with ℜ s > 0 which proves (8.13). By Lemma 11 (8.14) follows from (8.13). □ 

Remark 4.

By Theorem 3 (8.12) we have

  1.

    If β > 0, then \(\int _{0}^{\infty }x\mathrm{d}\mu _{\alpha,\beta }(x) = \infty \). Hence \(\sup (\text{supp}(\mu _{\alpha,\beta })) = \infty \). Similarly, if α > 0 then \(\int _{0}^{\infty }{x}^{-1}\mathrm{d}\mu _{\alpha,\beta }(x) = \infty \). Hence \(\inf (\text{supp}(\mu _{\alpha,\beta })) = 0\).

  2.

    If β = 0, then by Stirling's formula (a sketch of the computation follows after this remark)

    $$\displaystyle{ \sup (\text{supp}(\mu _{\alpha,0})) =\lim _{n\rightarrow \infty }{\left (\int _{0}^{\infty }{t}^{n}\mathrm{d}\mu _{\alpha,0}(t)\right )}^{ \frac{1} {n} } = \frac{{(\alpha +1)}^{\alpha +1}} {{\alpha }^{\alpha }}. }$$

    Hence by Lemma 11, we have for α = 0

    $$\displaystyle{ \inf (\text{supp}(\mu _{0,\beta })) = \frac{{\beta }^{\beta }} {{(\beta +1)}^{\beta +1}}. }$$

    Note that \(\sup (\text{supp}(\mu _{n,0})) = \frac{{(n+1)}^{n+1}} {{n}^{n}}\), \(n \in \mathbb{N}\), was already proven by F. Larsen in [10, Proposition 4.1], and it was proven by T. Banica, S. T. Belinschi, M. Capitaine and B. Collins in [2] that \(\text{supp}(\mu _{\alpha,0}) = \left [0, \frac{{(\alpha +1)}^{\alpha +1}} {{\alpha }^{\alpha }} \right ]\). Note that this also follows from our Corollary 3.
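A sketch of the Stirling step in 2.: by (8.13) the n-th moment of μ α, 0 equals \(\frac{\varGamma (1+n+n\alpha )} {\varGamma (1+n)\varGamma (2+n\alpha )}\), and the asymptotics \(\ln \varGamma (1 + x) = x\ln x - x + O(\ln x)\) for x → ∞ give

$$\displaystyle{ \ln \frac{\varGamma (1 + n + n\alpha )} {\varGamma (1 + n)\varGamma (2 + n\alpha )} = n\left ((\alpha +1)\ln (\alpha +1) -\alpha \ln \alpha \right ) + O(\ln n), }$$

so the n-th root of the moments converges to \(\frac{{(\alpha +1)}^{\alpha +1}} {{\alpha }^{\alpha }}\).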

If α = β it is also possible to calculate explicitly the density of μ α, α . To do this we require an additional lemma.

Lemma 13.

For − 1 < γ < 1 and −π < θ < π we have

$$\displaystyle{ \frac{\sin \theta } {\pi }\int _{0}^{\infty } \frac{{t}^{\gamma }} {{t}^{2} + 2\cos (\theta )t + 1}\mathrm{d}t = \frac{\sin (\theta \gamma )} {\sin (\pi \gamma )}. }$$

Proof.

Note first that by the substitution \(t = {e}^{x}\) we have

$$\displaystyle{ \int _{0}^{\infty } \frac{{t}^{\gamma }} {{t}^{2} + 2\cos (\theta )t + 1}\mathrm{d}t = \frac{1} {2}\int _{-\infty }^{\infty } \frac{{e}^{\gamma x}} {\cosh x+\cos \theta }\mathrm{d}x. }$$

The function

$$\displaystyle{ z\mapsto \frac{{e}^{\gamma z}} {\cosh z+\cos \theta } }$$

is meromorphic with simple poles at \(z = \pm \mathrm{i}(\pi -\theta ) + 2\pi \mathrm{i}p\), \(p \in \mathbb{Z}\). Now apply the residue theorem to this function on the boundary of

$$\displaystyle{ \{z \in \mathbb{C}: -R \leq \mathfrak{R}z \leq R,0 \leq \mathfrak{I}z \leq 2\pi \} }$$

and let R → ∞. The result follows. □ 
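As a quick sanity check of Lemma 13, take θ = π∕2: the formula then gives

$$\displaystyle{ \frac{1} {\pi }\int _{0}^{\infty } \frac{{t}^{\gamma }} {{t}^{2} + 1}\mathrm{d}t = \frac{\sin (\pi \gamma /2)} {\sin (\pi \gamma )} = \frac{1} {2\cos (\pi \gamma /2)}, }$$

in agreement with the classical evaluation \(\int _{0}^{\infty } \frac{{t}^{\gamma }} {1+{t}^{2}} \mathrm{d}t = \frac{\pi } {2}\sec (\frac{\pi \gamma } {2} )\) for |γ| < 1.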

The density of μ α, α was computed by P. Biane [5, Sect. 5.4]. For completeness we include a different proof based on Theorem 3 and Lemma 13.

Theorem 4 ([5]).

Let α > 0. Then μ α,α has the density \(f_{\alpha,\alpha }(t)\mathrm{d}t\) , where

$$\displaystyle{ f_{\alpha,\alpha }(t) = \frac{\sin \left ( \frac{\pi }{\alpha +1}\right )} {\pi t\left ({t}^{ \frac{1} {\alpha +1} } + 2\cos \left ( \frac{\pi }{\alpha +1}\right ) + {t}^{- \frac{1} {\alpha +1} }\right )} }$$

for t ∈ (0,∞). In particular μ 1,1 has the density \({(\pi \sqrt{t}(1 + t))}^{-1}\mathrm{d}t\) and μ 2,2 has the density

$$\displaystyle{ \frac{\sqrt{3}} {2\pi (1 + {t}^{\frac{2} {3} } + {t}^{\frac{4} {3} })}\mathrm{d}t. }$$

Proof.

To prove this note that for \(\vert \gamma \vert < \frac{1} {1+\alpha }\)

$$\displaystyle\begin{array}{rcl} \int _{0}^{\infty }{x}^{\gamma }f_{\alpha,\alpha }(x)\mathrm{d}x& =& \int _{0}^{\infty } \frac{\sin \left ( \frac{\pi }{\alpha +1}\right )(\alpha +1){y}^{\gamma (\alpha +1)}} {\pi \left (y + 2\cos \left ( \frac{\pi }{\alpha +1}\right ) + {y}^{-1}\right )} \frac{\mathrm{d}y} {y} {}\\ & =& \frac{(\alpha +1)\sin \left ( \frac{\pi }{\alpha +1}\right )} {\pi } \int _{0}^{\infty } \frac{{y}^{\gamma (\alpha +1)}} {{y}^{2} + 2\cos \left ( \frac{\pi }{\alpha +1}\right )y + 1}\mathrm{d}y {}\\ \end{array}$$

using the substitution \(y = {x}^{ \frac{1} {\alpha +1} }\). Now by Lemma 13 and Theorem 3 (8.12) we have

$$\displaystyle{ \int _{0}^{\infty }{x}^{\gamma }f_{\alpha,\alpha }(x)\mathrm{d}x =\int _{ 0}^{\infty }{x}^{\gamma }\mathrm{d}\mu _{\alpha,\alpha }(x) < \infty. }$$

This implies by unique analytic continuation that the same formula holds for all \(\gamma \in \mathbb{C}\) with \(\vert \mathfrak{R}\gamma \vert < \frac{1} {\alpha +1}\). In particular

$$\displaystyle{ \int _{0}^{\infty }{x}^{\mathrm{i}s}f_{\alpha,\alpha }(x)\mathrm{d}x =\int _{ 0}^{\infty }{x}^{\mathrm{i}s}\mathrm{d}\mu _{\alpha,\alpha }(x) }$$

for all \(s \in \mathbb{R}\), which shows that the image measures of \(f_{\alpha,\alpha }(x)\mathrm{d}x\) and \(\mu _{\alpha,\alpha }\) under \(x\mapsto \ln x\) have the same characteristic function. Hence \(\mu _{\alpha,\alpha } = f_{\alpha,\alpha }(x)\mathrm{d}x\). □ 
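As a check, the density stated for μ 1,1 indeed integrates to 1: substituting \(t = {u}^{2}\),

$$\displaystyle{ \int _{0}^{\infty } \frac{\mathrm{d}t} {\pi \sqrt{t}(1 + t)} = \frac{2} {\pi }\int _{0}^{\infty } \frac{\mathrm{d}u} {1 + {u}^{2}} = 1. }$$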

Proposition 6.

For all α,β ≥ 0, (α,β) ≠ (0,0), the measure μ α,β has a continuous density f α,β (x), (x > 0), with respect to the Lebesgue measure on  \(\mathbb{R}\)  and

$$\displaystyle{ \lim _{x\rightarrow {0}^{+}}xf_{\alpha,\beta }(x) =\lim _{x\rightarrow \infty }xf_{\alpha,\beta }(x) = 0. }$$
(8.15)

Proof.

By the method of proof of Theorem 4, the integral

$$\displaystyle{ h_{\alpha,\beta }(s) =\int _{ 0}^{\infty }{x}^{\mathrm{i}s}\mathrm{d}\mu _{\alpha,\beta }(x),\quad s \in \mathbb{R} }$$

can be obtained by replacing γ by \(\mathrm{i}s\) in the formulas (8.12)–(8.14). Moreover,

$$\displaystyle{ h_{\alpha,\beta }(s) =\int _{ 0}^{\infty }\exp (\mathrm{i}st)\mathrm{d}\sigma _{\alpha,\beta }(t) }$$

where σ α, β is the image measure of μ α, β by the map x↦logx, (x > 0). Hence by standard Fourier analysis, we know that if \(h_{\alpha,\beta } \in {L}^{1}(\mathbb{R})\) then σ α, β has a density \(g_{\alpha,\beta } \in C_{0}(\mathbb{R})\) with respect to the Lebesgue measure on \(\mathbb{R}\) and hence μ α, β has density \(f_{\alpha,\beta }(x) = \frac{1} {x}g_{\alpha,\beta }(\log x)\) for x > 0, which satisfies the condition (8.15). To prove that \(h_{\alpha,\beta } \in {L}^{1}(\mathbb{R})\) for all α, β ≥ 0, (α, β) ≠ (0, 0), we observe first that

$$\displaystyle{ \varGamma (1 - z)\varGamma (1 + z) = \frac{\pi z} {\sin \pi z},\quad z \in \mathbb{C}\setminus \mathbb{Z} }$$

and hence by the functional equation of Γ

$$\displaystyle{ \varGamma (2 - z)\varGamma (2 + z) = \frac{\pi z(1 - {z}^{2})} {\sin \pi z},\quad z \in \mathbb{C}\setminus \mathbb{Z}. }$$

In particular, we have

$$\displaystyle\begin{array}{rcl} \vert \varGamma (1 + \mathrm{i}s){\vert }^{2}& =& \frac{\pi s} {\sinh \pi s},\quad s \in \mathbb{R} {}\\ \vert \varGamma (2 + \mathrm{i}s){\vert }^{2}& =& \frac{\pi s(1 + {s}^{2})} {\sinh \pi s},\quad s \in \mathbb{R}. {}\\ \end{array}$$

Applying these formulas to (8.12)–(8.14) with γ replaced by \(\mathrm{i}s\), we get

$$\displaystyle{ h_{\alpha,\beta }(s) = O\left (\vert s{\vert }^{-3/2}\right ),\quad \text{for }s \rightarrow \pm \infty }$$

for all choices of α, β ≥ 0, (α, β) ≠ (0, 0). Thus by the continuity of h α, β it follows that \(h_{\alpha,\beta } \in {L}^{1}(\mathbb{R})\), which proves the proposition. □ 

Note that by Remark 4 it follows that f α, 0(x) can only be non-zero if \(x \in \left (0, \frac{{(\alpha +1)}^{\alpha +1}} {{\alpha }^{\alpha }} \right )\) and f 0, β (x) can only be non-zero if \(x \in \left ( \frac{{\beta }^{\beta }} {{(\beta +1)}^{\beta +1}},\infty \right )\). Since we have seen that μ 0, β coincides with the free stable distribution v α, ρ with \(\alpha = \frac{1} {\beta +1}\) and ρ = 1, we have from [3, Appendix A4] the following.

Theorem 5 ([3]).

The map

$$\displaystyle{ \phi \mapsto \frac{\sin \phi {\sin }^{\beta }(\beta \phi )} {{\sin }^{\beta +1}((\beta +1)\phi )},\quad 0 <\phi < \frac{\pi } {\beta +1} }$$

is a bijection of the interval \(\left (0, \frac{\pi } {\beta +1}\right )\) onto \(\left ( \frac{{\beta }^{\beta }} {{(\beta +1)}^{\beta +1}},\infty \right )\) and

$$\displaystyle{ f_{\mu _{0,\beta }}\left ( \frac{\sin \phi {\sin }^{\beta }(\beta \phi )} {{\sin }^{\beta +1}((\beta +1)\phi )}\right ) = \frac{{\sin }^{\beta +2}((\beta +1)\phi )} {\pi {\sin }^{\beta +1}(\beta \phi )},\quad 0 <\phi < \frac{\pi } {\beta +1}. }$$
(8.16)

Proof.

We know that \(\mu _{0,\beta } = v_{ \frac{1} {\beta +1},1}\), the stable distribution with parameters \(\alpha = \frac{1} {\beta +1}\) and ρ = 1. Moreover, we have from [3, Proposition A1.4], that v α, 1 has density ψ α, 1 on the interval \(\left (\alpha {(1-\alpha )}^{1/\alpha -1},\infty \right )\) given by

$$\displaystyle{ \psi _{\alpha,1}(x) = \frac{1} {\pi }{\sin }^{1+\frac{1} {\alpha } }(\theta )\,{\sin }^{-\frac{1} {\alpha } }((1-\alpha )\theta ), }$$

where θ ∈ (0, π) is the only solution to the equation

$$\displaystyle{ x ={\sin }^{-\frac{1} {\alpha } }(\theta )\,{\sin }^{ \frac{1} {\alpha } -1}((1-\alpha )\theta )\,\sin (\alpha \theta ). }$$

It is now easy to check that \(f_{0,\beta }(x) =\psi _{ \frac{1} {\beta +1},1}(x)\) has the form (8.16) by using the substitution \(\phi = \frac{\theta } {\beta +1}\). □ 

Corollary 3.

The map

$$\displaystyle{ \phi \mapsto \frac{{\sin }^{\alpha +1}((\alpha +1)\phi )} {\sin \phi {\sin }^{\alpha }(\alpha \phi )},\quad 0 <\phi < \frac{\pi } {\alpha +1} }$$

is a bijection of the interval \(\left (0, \frac{\pi } {\alpha +1}\right )\) onto \(\left (0,{ \frac{{(\alpha +1)}^{\alpha +1}} {\alpha }^{\alpha }} \right )\) and

$$\displaystyle{ f_{\mu _{\alpha,0}}\left (\frac{{\sin }^{\alpha +1}((\alpha +1)\phi )} {\sin \phi {\sin }^{\alpha }(\alpha \phi )} \right ) = \frac{{\sin }^{2}\phi {\sin }^{\alpha -1}(\alpha \phi )} {\pi {\sin }^{\alpha }((\alpha +1)\phi )},\quad 0 <\phi < \frac{\pi } {\alpha +1}. }$$

Proof.

Since μ α, 0 is the image measure of μ 0, α by the map \(t\mapsto \frac{1} {t}\), (t > 0), we have

$$\displaystyle{ f_{\alpha,0}(x) = \frac{1} {{x}^{2}}f_{0,\alpha }\left (\frac{1} {x}\right ),\quad x > 0. }$$

The corollary now follows from Theorem 5 by elementary calculations. □ 
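
These elementary calculations can be spot-checked numerically: the map in Corollary 3 is the reciprocal of the map in Theorem 5 (with β = α), so the two closed forms can be compared at the same φ. A sketch (an editor's illustration; α = 2 is an arbitrary choice):

    import numpy as np

    alpha = 2.0  # arbitrary test value
    phi = np.linspace(1e-3, np.pi / (alpha + 1) - 1e-3, 1000)
    # Theorem 5 with beta = alpha: the point x and the density f_{0,alpha}(x)
    x = np.sin(phi) * np.sin(alpha * phi) ** alpha \
        / np.sin((alpha + 1) * phi) ** (alpha + 1)
    f0a = np.sin((alpha + 1) * phi) ** (alpha + 2) \
        / (np.pi * np.sin(alpha * phi) ** (alpha + 1))
    # Corollary 3 evaluates f_{alpha,0} at 1/x for the same phi
    fa0 = np.sin(phi) ** 2 * np.sin(alpha * phi) ** (alpha - 1) \
        / (np.pi * np.sin((alpha + 1) * phi) ** alpha)
    # image-measure relation: f_{alpha,0}(1/x) = x^2 f_{0,alpha}(x)
    assert np.allclose(fa0, x ** 2 * f0a)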

We next use Biane’s method to compute the density f α, β for all α, β > 0.

Theorem 6.

Let α,β > 0. Then for each x > 0 there are unique real numbers ϕ 1 ,ϕ 2 > 0 for which

$$\displaystyle{ \pi = (\alpha +1)\phi _{1} + (\beta +1)\phi _{2} }$$
(8.17)
$$\displaystyle{ x = \frac{{\sin }^{\alpha +1}\phi _{2}} {{\sin }^{\beta +1}\phi _{1}}{\sin }^{\beta -\alpha }(\phi _{ 1} +\phi _{2}). }$$
(8.18)

Moreover

$$\displaystyle{ f_{\mu _{\alpha,\beta }}\left (x\right ) = \frac{{\sin }^{\beta +2}\phi _{1}} {\pi {\sin }^{\alpha }\phi _{2}} {\sin }^{\alpha -\beta -1}(\phi _{ 1} +\phi _{2}). }$$
(8.19)

Proof.

As μ α, β has the S-transform \(S_{\mu _{\alpha,\beta }}(z) = \frac{{(-z)}^{\beta }} {{(1+z)}^{\alpha }}\), we observe by Definition 1 that

$$\displaystyle{ \chi _{\mu _{\alpha,\beta }}(z) = \frac{-{(-z)}^{\beta +1}} {{(1 + z)}^{\alpha +1}}\quad \text{whence}\quad \psi _{\mu _{\alpha,\beta }}\left (- \frac{{(-z)}^{\beta +1}} {{(1 + z)}^{\alpha +1}}\right ) = z }$$

for z in some complex neighborhood of (−1, 0). Now it is known that

$$\displaystyle{ G_{\mu }\left (\frac{1} {t}\right ) = t\left (1 +\psi _{\mu }(t)\right ) }$$

for every probability measure on (0, ∞); for instance, for μ = δ a with a > 0, both sides equal \( \frac{t} {1-ta}\). Hence

$$\displaystyle{ G_{\mu _{\alpha,\beta }}\left (-\frac{{(1 + z)}^{\alpha +1}} {{(-z)}^{\beta +1}} \right ) = -\frac{{(-z)}^{\beta +1}} {{(1 + z)}^{\alpha }} }$$
(8.20)

for z in a complex neighborhood of (−1, 0).

Let H denote the upper half plane in \(\mathbb{C}\):

$$\displaystyle{ H =\{ z \in \mathbb{C}: \mathfrak{I}z > 0\}. }$$

For z ∈ H, put

$$\displaystyle\begin{array}{rcl} \phi _{1}& =& \phi _{1}(z) =\arg (1 + z) \in (0,\pi ) {}\\ \phi _{2}& =& \phi _{2}(z) =\pi -\arg (z) \in (0,\pi ). {}\\ \end{array}$$

Basic trigonometry applied to the triangle with vertices −1, 0 and z shows that ϕ 1 +ϕ 2 < π and

$$\displaystyle{ \frac{\sin \phi _{1}} {\vert z\vert } = \frac{\sin \phi _{2}} {\vert 1 + z\vert } = \frac{\sin (\pi -\phi _{1} -\phi _{2})} {1}. }$$

Hence

$$\displaystyle{ \vert z\vert = \frac{\sin \phi _{1}} {\sin (\phi _{1} +\phi _{2})}\quad \text{and}\quad \vert 1 + z\vert = \frac{\sin \phi _{2}} {\sin (\phi _{1} +\phi _{2})} }$$

from which

$$\displaystyle{ z = - \frac{\sin \phi _{1}} {\sin (\phi _{1} +\phi _{2})}{e}^{-\mathrm{i}\phi _{2} }\quad \text{and}\quad \mathfrak{I}z = \frac{\sin \phi _{1}\sin \phi _{2}} {\sin (\phi _{1} +\phi _{2})}. }$$

It follows that Φ: z↦(ϕ 1(z), ϕ 2(z)) is a diffeomorphism of H onto the triangle \(T =\{ (\phi _{1},\phi _{2}) \in {\mathbb{R}}^{2}:\phi _{1},\phi _{2} > 0,\phi _{1} +\phi _{2} <\pi \}\) with inverse

$$\displaystyle{{ \varPhi }^{-1}(\phi _{ 1},\phi _{2}) = - \frac{\sin \phi _{1}} {\sin (\phi _{1} +\phi _{2})}{e}^{-\mathrm{i}\phi _{2} },\quad (\phi _{1},\phi _{2}) \in T. }$$
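
The coordinates (ϕ 1, ϕ 2) and the inverse map are straightforward to implement; the following sketch (an editor's illustration only) verifies the round trip on random points of H:

    import numpy as np

    def Phi(z):
        # (phi_1, phi_2) = (arg(1 + z), pi - arg(z)) for Im z > 0
        return np.angle(1 + z), np.pi - np.angle(z)

    def Phi_inv(p1, p2):
        return -np.sin(p1) / np.sin(p1 + p2) * np.exp(-1j * p2)

    rng = np.random.default_rng(0)
    z = rng.uniform(-3, 3, 100) + 1j * rng.uniform(0.01, 3.0, 100)
    p1, p2 = Phi(z)
    assert np.all(p1 + p2 < np.pi)          # (phi_1, phi_2) lands in the triangle T
    assert np.allclose(Phi_inv(p1, p2), z)  # the round trip recovers z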

Put \(H_{\alpha,\beta } =\{ z \in H: (\alpha +1)\phi _{1}(z) + (\beta +1)\phi _{2}(z) <\pi \}\). Then \(H_{\alpha,\beta } {=\varPhi }^{-1}\left (T_{\alpha,\beta }\right )\) where \(T_{\alpha,\beta } =\{ (\phi _{1},\phi _{2}) \in T: (\alpha +1)\phi _{1} + (\beta +1)\phi _{2} <\pi \}.\)

In particular H α, β is an open connected subset of H. Put

$$\displaystyle{ F(z) = -\frac{{(1 + z)}^{\alpha +1}} {{(-z)}^{\beta +1}},\quad \mathfrak{I}z > 0. }$$

Then

$$\displaystyle{ F(z) = \frac{\vert 1 + z{\vert }^{\alpha +1}} {\vert z{\vert }^{\beta +1}} {e}^{\mathrm{i}((\alpha +1)\phi _{1}(z)+(\beta +1)\phi _{2}(z)-\pi )} }$$
(8.21)

so for z ∈ H α, β , ℑ F(z) < 0. Therefore \(G_{\mu _{\alpha,\beta }}(F(z))\) is a well-defined analytic function on H α, β , and since (−1, 0) is contained in the closure of H α, β it follows from (8.20) that

$$\displaystyle{ G_{\mu _{\alpha,\beta }}(F(z)) = \frac{1 + z} {F(z)} }$$
(8.22)

for z in some open subset of H α, β and thus by analyticity it holds for all z ∈ H α, β .
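
Formula (8.21), and the fact that F maps H α, β into the lower half plane, can likewise be confirmed numerically. A sketch (an editor's illustration; α = 1.5, β = 0.5 are arbitrary choices):

    import numpy as np

    alpha, beta = 1.5, 0.5  # arbitrary test values
    rng = np.random.default_rng(1)
    # sample (phi_1, phi_2) in T_{alpha,beta} and map into H_{alpha,beta}
    p1 = rng.uniform(0.01, np.pi / (alpha + 1) - 0.02, 200)
    p2 = rng.uniform(0.01, (np.pi - (alpha + 1) * p1) / (beta + 1) - 0.01, 200)
    z = -np.sin(p1) / np.sin(p1 + p2) * np.exp(-1j * p2)
    w = -(1 + z) ** (alpha + 1) / (-z) ** (beta + 1)  # F(z)
    # polar form (8.21) of F(z)
    w_polar = (np.abs(1 + z) ** (alpha + 1) / np.abs(z) ** (beta + 1)
               * np.exp(1j * ((alpha + 1) * p1 + (beta + 1) * p2 - np.pi)))
    assert np.allclose(w, w_polar) and np.all(w.imag < 0)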

Let x > 0 and assume that ϕ 1, ϕ 2 > 0 satisfy (8.17) and (8.18). Put

$$\displaystyle{ z {=\varPhi }^{-1}(\phi _{ 1},\phi _{2}) = - \frac{\sin \phi _{1}} {\sin (\phi _{1} +\phi _{2})}{e}^{-\mathrm{i}\phi _{2} }. }$$

Then by (8.21)

$$\displaystyle{ F(z) = \frac{\vert 1 + z{\vert }^{\alpha +1}} {\vert z{\vert }^{\beta +1}} ={ \left ( \frac{\sin \phi _{2}} {\sin (\phi _{1} +\phi _{2})}\right )}^{\alpha +1}{\left (\frac{\sin (\phi _{1} +\phi _{2})} {\sin \phi _{1}} \right )}^{\beta +1} = x. }$$

Since μ α, β has a continuous density f α, β on (0, ∞) by Proposition 6, the inverse Stieltjes transform gives

$$\displaystyle{ f_{\alpha,\beta }(x) = -\frac{1} {\pi } \lim _{w\rightarrow x,\mathfrak{I}w>0}\mathfrak{I}G_{\mu _{\alpha,\beta }}(w) = \frac{1} {\pi } \lim _{w\rightarrow x,\mathfrak{I}w<0}\mathfrak{I}G_{\mu _{\alpha,\beta }}(w). }$$

For 0 < t < 1, put z t  = Φ −1(t ϕ 1, t ϕ 2). Then

$$\displaystyle{ z_{t} {\in \varPhi }^{-1}\left (T_{\alpha,\beta }\right ) = H_{\alpha,\beta }. }$$

Thus \(\mathfrak{I}F(z_{t}) < 0\). Moreover, z t  → z and F(z t ) → F(z) = x for t → 1. Hence by (8.22),

$$\displaystyle{ f_{\alpha,\beta }(x) = \frac{1} {\pi } \lim _{t\rightarrow {1}^{-}}\mathfrak{I}G_{\mu _{\alpha,\beta }}(F(z_{t})) = \frac{1} {\pi } \lim _{t\rightarrow {1}^{-}}\mathfrak{I}\left (\frac{z_{t} + 1} {F(z_{t})} \right ) = \frac{\mathfrak{I}z} {\pi x} = \frac{\sin \phi _{1}\sin \phi _{2}} {\pi x\sin (\phi _{1} +\phi _{2})} }$$

which proves (8.19). To complete the proof of Theorem 6, we only need to prove the existence and uniqueness of ϕ 1 ,ϕ 2 > 0 satisfying (8.17) and (8.18). Assume that ϕ 1 ,ϕ 2 satisfy (8.17); then

$$\displaystyle{ \phi _{1} = \frac{\pi -\theta } {\alpha +1}\quad \text{and}\quad \phi _{2} = \frac{\theta } {\beta +1} }$$

for a unique θ ∈ (0, π). Moreover,

$$\displaystyle{ \frac{\mathrm{d}\phi _{1}} {\mathrm{d}\theta } = - \frac{1} {\alpha +1}\quad \text{and}\quad \frac{\mathrm{d}\phi _{2}} {\mathrm{d}\theta } = \frac{1} {\beta +1}. }$$

Hence, expressing \(u = \frac{{\sin }^{\alpha +1}\phi _{ 2}} {{\sin }^{\beta +1}\phi _{1}} {\sin }^{\beta -\alpha }(\phi _{ 1} +\phi _{2})\) as a function u(θ) of θ, we get

$$\displaystyle\begin{array}{rcl} (\alpha +1)(\beta +1)\frac{u'(\theta )} {u(\theta )} & =& {(\beta +1)}^{2}\cot \phi _{1} + {(\alpha +1)}^{2}\cot \phi _{2} - {(\alpha -\beta )}^{2}\cot (\phi _{1} +\phi _{2}) {}\\ & =& \frac{A(\phi _{1},\phi _{2})} {\sin \phi _{1}\sin \phi _{2}\sin (\phi _{1} +\phi _{2})} {}\\ \end{array}$$

where

$$\displaystyle{ A(\phi _{1},\phi _{2}) ={ \left ((\alpha +1)\sin \phi _{1}\cos \phi _{2} + (\beta +1)\cos \phi _{1}\sin \phi _{2}\right )}^{2} + {(\alpha -\beta )}^{2}{\sin }^{2}\phi _{1}{\sin }^{2}\phi _{2}. }$$

For α ≠ β we have \(A(\phi _{1},\phi _{2}) \geq {(\alpha -\beta )}^{2}{\sin }^{2}\phi _{1}{\sin }^{2}\phi _{2} > 0\), and for α = β we have \(A(\phi _{1},\phi _{2}) = {(\alpha +1)}^{2}{\sin }^{2}(\phi _{1} +\phi _{2}) > 0\). Hence u(θ) is a differentiable, strictly increasing function of θ, and it is easy to check that

$$\displaystyle{ \lim _{\theta \rightarrow {0}^{+}}u(\theta ) = 0\quad \text{and}\quad \lim _{\theta {\rightarrow \pi }^{-}}u(\theta ) = \infty. }$$

Thus u(θ) is a bijection of (0, π) onto (0, ∞), which completes the proof of Theorem 6. □ 
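
Theorem 6 translates directly into a numerical recipe: since u(θ) is strictly increasing, (8.17) and (8.18) can be solved for θ by bisection, after which (8.19) gives the density. A sketch (an editor's illustration; the parameters α = 1, β = 2 and the tolerances are arbitrary):

    import numpy as np

    alpha, beta = 1.0, 2.0  # arbitrary test parameters

    def u_and_f(theta):
        # phi_1, phi_2 parametrized by theta as in the proof of Theorem 6
        p1 = (np.pi - theta) / (alpha + 1)
        p2 = theta / (beta + 1)
        u = np.sin(p2) ** (alpha + 1) / np.sin(p1) ** (beta + 1) \
            * np.sin(p1 + p2) ** (beta - alpha)
        f = np.sin(p1) ** (beta + 2) / (np.pi * np.sin(p2) ** alpha) \
            * np.sin(p1 + p2) ** (alpha - beta - 1)
        return u, f

    def density(x, tol=1e-12):
        # u increases strictly from 0 to infinity, so bisect u(theta) = x
        lo, hi = tol, np.pi - tol
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if u_and_f(mid)[0] < x:
                lo = mid
            else:
                hi = mid
        return u_and_f(0.5 * (lo + hi))[1]

    print(density(1.0))  # f_{1,2}(1)
    # sanity check: integrating f du along the parametrization gives mass ~1
    theta = np.linspace(1e-3, np.pi - 1e-3, 100001)
    u, f = u_and_f(theta)
    print(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(u)))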

Remark 5.

It is much more complicated to express the densities f α, β (x) directly as functions of x. This has been done for β = 0, \(\alpha \in \mathbb{N}\) by K. Penson and K. Życzkowski in [13] and extended to the case \(\alpha \in {\mathbb{Q}}^{+}\) by W. Młotkowski, K. Penson and K. Życzkowski in [12, Theorem 3.1].