Abstract
The concentration of measure phenomenon may be summarized as follows: a function of many weakly dependent random variables that is not too sensitive to any of its individual arguments will tend to take values very close to its expectation. This phenomenon is most completely understood when the arguments are mutually independent random variables, and there exist several powerful complementary methods for proving concentration inequalities, such as the martingale method, the entropy method, and the method of transportation inequalities. The setting of dependent arguments is much less well understood. This chapter focuses on the martingale method for deriving concentration inequalities without independence assumptions. In particular, we use the machinery of so-called Wasserstein matrices to show that the Azuma-Hoeffding concentration inequality for martingales with almost surely bounded differences, when applied in a sufficiently abstract setting, is powerful enough to recover and sharpen several known concentration results for nonproduct measures. Wasserstein matrices provide a natural formalism for capturing the interplay between the metric and the probabilistic structures, which is fundamental to the concentration phenomenon.
1 Introduction
At its most abstract, the concentration of measure phenomenon may be summarized as follows: a function of several weakly dependent random variables that is not too sensitive to any of the individual arguments will tend to take values very close to its expectation. This phenomenon is most completely understood in the case of independent arguments, and the recent book [2] provides an excellent survey (see also [30] for an exposition from the viewpoint of, and with applications to, information theory).
The case of dependent arguments has yet to mature into such a unified, overarching theory. The earliest concentration results for nonproduct measures were established for Haar measures on various groups, and relied strongly on the highly symmetric nature of the Haar measure in question. These results include Lévy’s classic isoperimetric inequality on the sphere [20] and Maurey’s concentration inequality on the permutation group [28]. To the best of our knowledge, the first concentration result for a nonproduct, non-Haar measure is due to Marton [22], where she proved a McDiarmid-type bound for contracting Markov chains. A flurry of activity followed. Besides Marton’s own follow-up work [23, 24, 25], the transportation method she pioneered was extended by Samson [33], and martingale techniques [32, 6, 17], as well as methods relying on the Dobrushin interdependence matrix [19, 4, 36], have been employed in obtaining concentration results for nonproduct measures. The underlying theme is that the independence assumption may be relaxed to one of weak dependence, the latter being quantified by various mixing coefficients.
This chapter is an attempt at providing an abstract unifying framework that generalizes and sharpens some of the above results. This framework combines classical martingale techniques with the method of Wasserstein matrices [10]. In particular, we rely on Wasserstein matrices to obtain general-purpose quantitative estimates of the local variability of a function of many dependent random variables after taking a conditional expectation with respect to a subset of the variables. A concentration inequality in a metric space must necessarily capture the interplay between the metric and the distribution, and, in our setting, Wasserstein matrices provide the ideal analytical tool for this task. As an illustration, we recover (and, in certain cases, sharpen) some results of [19, 6, 17] by demonstrating all of these to be special cases of the Wasserstein matrix method.
The remainder of the chapter is organized as follows. Section 2 is devoted to setting up the basic notation and preliminary definitions. A brief discussion of the concentration of measure phenomenon in high-dimensional spaces is presented in Section 3, together with a summary of key methods to establish concentration under the independence assumption. Next, in Section 4, we present our abstract martingale technique and then demonstrate its wide scope in Section 5 by deriving many of the previously published concentration inequalities as special cases. We conclude in Section 6 by listing some open questions.
2 Preliminaries and Notation
2.1 Metric Probability Spaces
A metric probability space is a triple (Ω, μ, d), where Ω is a Polish space equipped with its Borel σ-field, μ is a Borel probability measure on Ω, and d is a metric on Ω, assumed to be a measurable function on the product space Ω ×Ω. We do not assume that d is the same metric that metrizes the Polish topology on Ω.
2.2 Product Spaces
Since concentration of measure is a high-dimensional phenomenon, a natural setting for studying it is that of a product space. Let T be a finite index set, which we identify with the set [n] ≜ { 1, …, n}, where n = | T | (this amounts to fixing some linear ordering of the elements of T). We will use the following notation for subintervals of T: [i] ≜ { 1, …, i}; [i, j] ≜ { i, i + 1, …, j} for i ≠ j; (i, j] ≜ { i + 1, …, j} for i < j; (i, j) ≜ { i + 1, …, j − 1} for i < j − 1; etc.
With each i ∈ T, we associate a measurable space \((\mathsf{X}_{i},\mathcal{B}_{i})\), where X i is a Polish space and \(\mathcal{B}_{i}\) is its Borel σ-field. For each I ⊆ T, we will equip the product space X I ≜ ∏ i ∈ I X i with the product σ-field \(\mathcal{B}^{I} \triangleq \bigotimes _{i\in I}\mathcal{B}_{i}\). When I = T, we will simply write X and \(\mathcal{B}\). We will write x I and x for a generic element of X I and X, respectively. Given two sets I, J ⊂ T with \(I \cap J = \varnothing\), the concatenation of x I ∈ X I and z J ∈ X J is defined as y = x I z J ∈ X I∪J by setting

$$y_{i} = \left\{\begin{array}{ll} x_{i},&i \in I\\ z_{i},&i \in J.\end{array}\right.$$
Given a random object X = (X i ) i ∈ T taking values in X according to a probability law μ, we will denote by \(\mathbb{P}_{\mu }[\cdot ]\) and \(\mathbb{E}_{\mu }[\cdot ]\) the probability and expectation with respect to μ, by μ I(dx I | x J) the regular conditional probability law of X I given X J = x J, and by μ I(dx I) the marginal probability law of X I. When I = { i}, we will write μ i (⋅ ) and μ i (⋅ | x J).
For each i ∈ T, we fix a metric ρ i on X i , which is assumed to be measurable with respect to the product σ-field \(\mathcal{B}_{i} \otimes \mathcal{B}_{i}\). For each I ⊆ T, equip X I with the product metric ρ I, where

$$\rho ^{I}(x^{I},z^{I}) \triangleq \sum _{i\in I}\rho _{i}(x_{i},z_{i}).$$
When I ≡ T, we will simply write ρ instead of ρ T. In this way, for any Borel probability measure μ on X, we can introduce a “global” metric probability space (X, μ, ρ), as well as “local” metric probability spaces (X I, μ I, ρ I) and (X I, μ I(⋅ | x J), ρ I ) for all I, J ⊂ T and all x J ∈ X J.
2.3 Couplings and Transportation Distances
Let Ω be a Polish space. A coupling of two Borel probability measures μ and ν on Ω is a Borel probability measure P on the product space Ω ×Ω, such that P(⋅ ×Ω) = μ and P(Ω ×⋅ ) = ν. We denote the set of all couplings of μ and ν by \(\mathcal{C}(\mu,\nu )\). Let d be a lower-semicontinuous metric on Ω. We denote by Lip(Ω, d) the space of all functions \(\varOmega \rightarrow \mathbb{R}\) that are Lipschitz with respect to d, and by Lip c (Ω, d) the subset of Lip(Ω, d) consisting of c-Lipschitz functions. The L 1 Wasserstein (or transportation) distance between μ and ν is defined as

$$W_{d}(\mu,\nu ) \triangleq \inf _{\mathbf{P}\in \mathcal{C}(\mu,\nu )}\mathbb{E}_{\mathbf{P}}[d(X,Y )],$$
where (X, Y ) is a random element of Ω ×Ω with law P. The transportation distance admits a dual (Kantorovich–Rubinstein) representation

$$W_{d}(\mu,\nu ) =\sup _{f\in \mathrm{Lip}_{1}(\varOmega,d)}\left\vert \int _{\varOmega }f\,\mathrm{d}\mu -\int _{\varOmega }f\,\mathrm{d}\nu \right\vert. \qquad (2)$$
For example, when we equip Ω with the trivial metric d(ω, ω′) = 1{ω ≠ ω′}, the corresponding Wasserstein distance coincides with the total variation distance:

$$W_{d}(\mu,\nu ) = \left\Vert \mu -\nu \right\Vert _{\mathrm{TV}} \triangleq \sup _{A}\vert \mu (A) -\nu (A)\vert,$$

where the supremum is over all Borel subsets of Ω.
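On a finite space, the coincidence of the subset-supremum formula with the familiar half-ℓ¹ formula can be checked by brute force. The following Python sketch uses two made-up four-point distributions:

```python
from itertools import chain, combinations

# Two made-up probability vectors on a four-point space.
mu = [0.1, 0.4, 0.2, 0.3]
nu = [0.3, 0.1, 0.2, 0.4]

# sup_A |mu(A) - nu(A)| over all subsets A of the (finite) space.
points = range(4)
subsets = chain.from_iterable(combinations(points, r) for r in range(5))
tv_sup = max(abs(sum(mu[i] for i in A) - sum(nu[i] for i in A)) for A in subsets)

# The same quantity via the familiar formula: one half of the l1 distance.
tv_l1 = 0.5 * sum(abs(a - b) for a, b in zip(mu, nu))
```

The supremum is attained by the set on which μ dominates ν (or its complement), which is why the two expressions agree.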
In the context of the product space (X, ρ) defined earlier, we will use the shorthand W i for \(W_{\rho _{i}}\), W I for \(W_{\rho ^{I}}\), and W for W ρ .
2.4 Markov Kernels and Wasserstein Matrices
A Markov kernel on X is a mapping \(K: \mathsf{X} \times \mathcal{B}\rightarrow [0,1]\), such that x ↦ K(x, A) is measurable for each \(A \in \mathcal{B}\), and K(x, ⋅ ) is a Borel probability measure on X for each x ∈ X. Given a Markov kernel K and a bounded measurable function \(f: \mathsf{X} \rightarrow \mathbb{R}\), we denote by Kf the bounded measurable function

$$Kf(x) \triangleq \int _{\mathsf{X}}K(x,\mathrm{d}z)\,f(z).$$
Likewise, given a Borel probability measure μ on X, we denote by μ K the Borel probability measure

$$\mu K(A) \triangleq \int _{\mathsf{X}}\mu (\mathrm{d}x)\,K(x,A),\qquad A \in \mathcal{B}.$$
It is not hard to see that ∫ X fd(μ K) = ∫ X (Kf)dμ.
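This duality between the action of K on functions and on measures is immediate to verify on a finite state space; the kernel, measure, and function below are made up for illustration:

```python
# Finite-state sanity check of the duality int f d(mu K) = int (Kf) d(mu).
mu = [0.2, 0.5, 0.3]
K = [[0.7, 0.2, 0.1],
     [0.1, 0.8, 0.1],
     [0.3, 0.3, 0.4]]
f = [1.0, -2.0, 0.5]

n = len(mu)
Kf = [sum(K[x][z] * f[z] for z in range(n)) for x in range(n)]      # (Kf)(x)
muK = [sum(mu[x] * K[x][z] for x in range(n)) for z in range(n)]    # (mu K)(z)

lhs = sum(muK[z] * f[z] for z in range(n))   # integral of f against mu K
rhs = sum(mu[x] * Kf[x] for x in range(n))   # integral of Kf against mu
```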
Given a measurable function \(f: \mathsf{X} \rightarrow \mathbb{R}\), we define the local oscillation of f at i ∈ T as

$$\delta _{i}(f) \triangleq \sup _{x,z\in \mathsf{X}:\,x^{T\setminus \{i\}}=z^{T\setminus \{i\}}}\frac{\vert f(x) - f(z)\vert }{\rho _{i}(x_{i},z_{i})},$$
where we follow the convention 0∕0 = 0. This quantity measures the variability of f in its ith argument when all other arguments are held fixed. As will become evident later on, our martingale technique for establishing concentration inequalities for a given function \(f: \mathsf{X} \rightarrow \mathbb{R}\) requires controlling the local oscillations δ i (Kf) in terms of the local oscillations δ i (f) for appropriately chosen Markov kernels K.
To get an idea of what is involved, let us consider the simple case when each X i is endowed with the scaled trivial metric ρ i (x i , z i ) ≜ α i 1{x i ≠ z i }, where α i > 0 is some fixed constant. Then

$$\delta _{i}(f) = \frac{1} {\alpha _{i}}\sup _{x,z\in \mathsf{X}:\,x^{T\setminus \{i\}}=z^{T\setminus \{i\}},\,x_{i}\neq z_{i}}\vert f(x) - f(z)\vert.$$
The corresponding metric ρ on X is the weighted Hamming metric

$$\rho _{\boldsymbol{\alpha }}(x,z) \triangleq \sum _{i\in T}\alpha _{i}\mathbf{1}\{x_{i}\neq z_{i}\}. \qquad (3)$$
Fix a Markov kernel K on X. The Dobrushin contraction coefficient of K (also associated in the literature with Doeblin’s name) is the smallest θ ≥ 0 for which \(\left \Vert K(x,\cdot ) - K(z,\cdot )\right \Vert _{\mbox{ TV}} \leq \theta\) holds for all x, z ∈ X. The term contraction is justified by the well-known inequality (apparently going back to Markov himself [21, §5])

$$\left\Vert \mu K -\nu K\right\Vert _{\mathrm{TV}} \leq \theta \left\Vert \mu -\nu \right\Vert _{\mathrm{TV}},$$
which holds for all probability measures μ, ν on X. Then we have the following estimate:
Proposition 2.1
If K is a Markov kernel on X with Dobrushin coefficient θ, then for every i ∈ T and for every \(f \in \mathrm{Lip}(\mathsf{X},\rho _{\boldsymbol{\alpha }})\), we have

$$\delta _{i}(Kf) \leq \frac{\theta } {\alpha _{i}}\sum _{j\in T}\alpha _{j}\delta _{j}(f). \qquad (5)$$
Proof
Fix an index i ∈ T and any two x, z ∈ X that differ only in the ith coordinate: x T∖{i} = z T∖{i} and x i ≠ z i . Pick an arbitrary coupling \(\mathbf{P}_{x,z} \in \mathcal{C}(K(x,\cdot ),K(z,\cdot ))\). Then, with (U, Y ) ∼ P x, z ,

$$\vert Kf(x) - Kf(z)\vert \leq \sum _{j\in T}\alpha _{j}\delta _{j}(f)\,\mathbf{P}_{x,z}\{U_{j}\neq Y _{j}\} \leq \Big(\sum _{j\in T}\alpha _{j}\delta _{j}(f)\Big)\mathbf{P}_{x,z}\{U\neq Y \},$$
where the first inequality is by the definition of δ j (f), while the second one follows from the obvious implication u j ≠ y j ⇒ u ≠ y. Taking the infimum of both sides over all couplings \(\mathbf{P}_{x,z} \in \mathcal{C}(K(x,\cdot ),K(z,\cdot ))\) yields

$$\vert Kf(x) - Kf(z)\vert \leq \Big(\sum _{j\in T}\alpha _{j}\delta _{j}(f)\Big)\left\Vert K(x,\cdot ) - K(z,\cdot )\right\Vert _{\mathrm{TV}} \leq \theta \sum _{j\in T}\alpha _{j}\delta _{j}(f).$$
Finally, dividing both sides of the above inequality by α i and taking the supremum over all choices of x, z that differ only in the ith coordinate, we obtain (5). □
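As a numerical sanity check of the contraction property ∥μK − νK∥ TV ≤ θ∥μ − ν∥ TV underlying this proof, the following Python sketch computes the Dobrushin coefficient of a made-up three-state kernel as the worst-case total variation distance between its rows and tests the inequality on seeded random pairs of measures:

```python
import random

def tv(p, q):
    # total variation distance: sup_A |p(A) - q(A)| = half the l1 distance
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# A made-up three-state Markov kernel (rows are probability vectors).
K = [[0.6, 0.3, 0.1],
     [0.4, 0.4, 0.2],
     [0.5, 0.2, 0.3]]

# Dobrushin contraction coefficient: worst-case TV distance between rows.
theta = max(tv(K[x], K[z]) for x in range(3) for z in range(3))

def push(mu):
    # mu K: the pushforward of mu through the kernel
    return [sum(mu[x] * K[x][z] for x in range(3)) for z in range(3)]

rng = random.Random(0)
ok = True
for _ in range(100):
    w = [rng.random() for _ in range(3)]
    v = [rng.random() for _ in range(3)]
    mu = [a / sum(w) for a in w]
    nu = [a / sum(v) for a in v]
    ok = ok and tv(push(mu), push(nu)) <= theta * tv(mu, nu) + 1e-12
```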
One shortcoming of the above result (which is nontrivial only under the rather strong condition

$$\theta <\frac{\alpha _{i}} {\alpha _{j}} \qquad (6)$$
for all i, j ∈ T) is that it gives only a very rough idea of the influence of δ j (f) for j ∈ T on δ i (Kf). For example, if α 1 = … = α n = 1, then the condition (6) reduces to the Dobrushin contraction condition θ < 1, and the inequality (5) becomes

$$\delta _{i}(Kf) \leq \theta \sum _{j\in T}\delta _{j}(f),$$
suggesting that all of the δ j (f)’s influence δ i (Kf) equally. However, this picture can be refined. To that end, we introduce the notion of a Wasserstein matrix following Föllmer [10]. Let us denote by \(\boldsymbol{\delta }(f)\) the vector (δ i (f)) i ∈ T . We say that a nonnegative matrix V = (V ij ) i, j ∈ T is a Wasserstein matrix for K if, for every f ∈ Lip(X, ρ) and for every i ∈ T,

$$\delta _{i}(Kf) \leq \sum _{j\in T}V _{ij}\,\delta _{j}(f), \qquad (7)$$
or, in vector form, if \(\boldsymbol{\delta }(Kf)\preceq V \boldsymbol{\delta }(f)\).
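A concrete Wasserstein matrix can be exhibited numerically. In the sketch below, X = {0, 1}² with trivial metrics, and for a made-up kernel K we take V with constant rows V ij = θ i , where θ i is the worst-case total variation distance between K(x, ⋅ ) and K(z, ⋅ ) over pairs differing only in coordinate i; the argument of Proposition 2.1, applied coordinate by coordinate, shows that this V satisfies δ(Kf) ⪯ Vδ(f), and the code checks the inequality on random functions f:

```python
import itertools, random

# X = {0,1} x {0,1} with trivial metrics rho_i(x_i, z_i) = 1{x_i != z_i}.
states = list(itertools.product([0, 1], repeat=2))
idx = {s: k for k, s in enumerate(states)}
rng = random.Random(1)

# A made-up Markov kernel on X: each row K[x] is a probability vector.
K = []
for _ in states:
    w = [rng.random() for _ in states]
    K.append([v / sum(w) for v in w])

def tv(p, q):
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def osc(f, i):
    # local oscillation delta_i(f): pairs differing only in coordinate i
    return max(abs(f[idx[x]] - f[idx[z]]) for x in states for z in states
               if x[1 - i] == z[1 - i] and x[i] != z[i])

# theta_i: worst-case TV distance between K(x,.) and K(z,.) over such pairs;
# V with constant rows V[i][j] = theta_i is then a valid Wasserstein matrix.
theta = [max(tv(K[idx[x]], K[idx[z]]) for x in states for z in states
             if x[1 - i] == z[1 - i] and x[i] != z[i]) for i in (0, 1)]
V = [[theta[i]] * 2 for i in (0, 1)]

ok = True
for _ in range(200):
    f = [rng.uniform(-1, 1) for _ in states]
    Kf = [sum(K[k][l] * f[l] for l in range(4)) for k in range(4)]
    for i in (0, 1):
        ok = ok and osc(Kf, i) <= sum(V[i][j] * osc(f, j) for j in (0, 1)) + 1e-12
```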
One of our main objectives will be to show that concentration inequalities for functions f of X ∼ μ can be obtained using Wasserstein matrices for certain Markov kernels K related to μ. In order to motivate the introduction of Wasserstein matrices, we record a couple of contraction estimates for Markov kernels that may be of independent interest. To that end, we introduce another coupling-based distance between probability measures [2, Chap. 8]: for two Borel probability measures on X, define

$$\bar{W}(\mu,\nu ) \triangleq \inf _{\mathbf{P}\in \mathcal{C}(\mu,\nu )}\sqrt{\sum _{i\in T}\big(\mathbf{P}\{X_{i}\neq Y _{i}\}\big)^{2}},$$
where (X, Y ) is a random element of X ×X. Even though \(\bar{W}\) is not a Wasserstein distance, we can use the inequality \(\sqrt{ a + b} \leq \sqrt{a} + \sqrt{b}\) for a, b ≥ 0 to show that \(\bar{W}(\mu,\nu ) \leq W(\mu,\nu )\).
Proposition 2.2
Let V be a Wasserstein matrix for a Markov kernel K on X. Then for any Lipschitz function \(f: \mathsf{X} \rightarrow \mathbb{R}\),

$$\left\vert \int _{\mathsf{X}}Kf\,\mathrm{d}\mu -\int _{\mathsf{X}}Kf\,\mathrm{d}\nu \right\vert \leq \big\|V \boldsymbol{\delta }(f)\big\|_{\ell ^{2}(T)}\,\bar{W}(\mu,\nu ). \qquad (9)$$
Proof
Fix an arbitrary coupling \(\mathbf{P} \in \mathcal{C}(\mu,\nu )\) and let (X, Y ) be a random element of X ×X with law P. Then
where in the last step we have used the definition of the Wasserstein matrix. Using the Cauchy–Schwarz inequality, we obtain
Taking the infimum of both sides over all \(\mathbf{P} \in \mathcal{C}(\mu,\nu )\), we obtain (9). □
Corollary 2.3
Let V be a Wasserstein matrix for a Markov kernel K on X. Then, for any two Borel probability measures μ and ν on X,
where \(\mathbf{1} \in \mathbb{R}^{T}\) is the vector of all ones, and therefore
Proof
A function \(f: \mathsf{X} \rightarrow \mathbb{R}\) belongs to Lip1(X, ρ) if and only if \(\boldsymbol{\delta }(f) \in [0,1]^{T}\). Using the dual representation (2) of W and applying Proposition 2.2, we can write
Since V is a nonnegative matrix, the supremum is achieved by ξ = 1. □
2.5 Relative Entropy
Finally, we will need some key notions from information theory. The relative entropy (or information divergence) between two probability measures μ, ν on a space Ω is defined as

$$D(\nu \|\mu ) \triangleq \left\{\begin{array}{ll} \int _{\varOmega }\log \frac{\mathrm{d}\nu }{\mathrm{d}\mu }\,\mathrm{d}\nu, &\nu \ll \mu \\ +\infty, &\text{otherwise.}\end{array}\right.$$
We use natural logarithms throughout the chapter. The relative entropy is related to the total variation distance via Pinsker’s inequality

$$\left\Vert \mu -\nu \right\Vert _{\mathrm{TV}} \leq \sqrt{\tfrac{1} {2}D(\nu \|\mu )}. \qquad (11)$$
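Pinsker’s inequality is easy to test numerically on the simplex. The following sketch draws seeded random pairs of distributions (with a small floor to keep all masses positive) and checks that the total variation distance never exceeds the entropy bound:

```python
import math, random

def tv(p, q):
    # total variation distance: sup_A |p(A) - q(A)| = half the l1 distance
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def kl(p, q):
    # relative entropy D(p || q) in nats; here q > 0 everywhere
    return sum(a * math.log(a / b) for a, b in zip(p, q) if a > 0)

rng = random.Random(2)
ok = True
for _ in range(200):
    w = [rng.random() + 1e-3 for _ in range(5)]
    v = [rng.random() + 1e-3 for _ in range(5)]
    p = [x / sum(w) for x in w]
    q = [x / sum(v) for x in v]
    # Pinsker: ||p - q||_TV <= sqrt(D(p || q) / 2)
    ok = ok and tv(p, q) <= math.sqrt(kl(p, q) / 2) + 1e-12
```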
3 Concentration of Measure and Sufficient Conditions
In this section, we give a precise definition of the concentration of measure phenomenon, review several sufficient conditions for it to hold, and briefly discuss how it can be established under the independence assumption via tensorization. For more details and further references, the reader can consult [2] or [30].
We say that the metric probability space (X, μ, ρ) has the concentration of measure property if there exists a constant c > 0, such that, for every Lipschitz function \(f: \mathsf{X} \rightarrow \mathbb{R}\) and every t ≥ 0,

$$\mathbb{P}_{\mu }\big\{\vert f(X) - \mathbb{E}_{\mu }[f(X)]\vert \geq t\big\} \leq 2\,\mathrm{e}^{-t^{2}/2c\|f\|_{\mathrm{Lip}}^{2} }, \qquad (12)$$
where

$$\|f\|_{\mathrm{Lip}} \triangleq \sup _{x\neq z}\frac{\vert f(x) - f(z)\vert }{\rho (x,z)}$$
is the Lipschitz constant of f. A sufficient (and, up to constants, necessary) condition for (12) is that, for every f ∈ Lip1(X, ρ), the random variable f(X) with X ∼ μ is c-subgaussian, i.e.,

$$\mathbb{E}_{\mu }\big[\mathrm{e}^{\lambda (f(X)-\mathbb{E}_{\mu }[f(X)])}\big] \leq \mathrm{e}^{c\lambda ^{2}/2 }\qquad \text{for all }\lambda \in \mathbb{R}. \qquad (13)$$
A fundamental result of Bobkov and Götze [1] states that the subgaussian estimate (13) holds for all f ∈ Lip1(X, ρ) if and only if μ satisfies the so-called transportation-information inequality

$$W(\mu,\nu ) \leq \sqrt{2c\,D(\nu \|\mu )}, \qquad (14)$$
where ν ranges over all Borel probability measures on X. We will use the shorthand μ ∈ T ρ (c) to denote the fact that the inequality (14) holds for all ν. The key role of transportation-information inequalities in characterizing the concentration of measure phenomenon was first recognized by Marton in a breakthrough paper [22], with further developments in [23, 24, 25].
The entropy method (see, e.g., [2, Chap. 6] and [30, Chap. 3]) provides another route to establishing (13). Its underlying idea can be briefly described as follows. Given a measurable function \(f: \mathsf{X} \rightarrow \mathbb{R}\), consider the logarithmic moment-generating function

$$\psi _{f}(\lambda ) \triangleq \log \mathbb{E}_{\mu }\big[\mathrm{e}^{\lambda (f(X)-\mathbb{E}_{\mu }[f(X)])}\big]$$
of the centered random variable \(f(X) - \mathbb{E}_{\mu }[f(X)]\). For any λ ≠ 0, introduce the tilted probability measure μ (λ f) with

$$\frac{\mathrm{d}\mu ^{(\lambda f)}} {\mathrm{d}\mu } = \frac{\mathrm{e}^{\lambda f}} {\mathbb{E}_{\mu }[\mathrm{e}^{\lambda f}]}.$$
Then a simple calculation shows that the relative entropy D(μ (λ f) ∥ μ) can be expressed as

$$D(\mu ^{(\lambda f)}\|\mu ) =\lambda \psi _{f}^{{\prime}}(\lambda ) -\psi _{f}(\lambda ),$$
where the prime denotes differentiation with respect to λ. Using the fact that ψ f (0) = 0 and integrating, we obtain the following formula for ψ f (λ):

$$\psi _{f}(\lambda ) =\lambda \int _{0}^{\lambda }\frac{D(\mu ^{(tf)}\|\mu )} {t^{2}} \,\mathrm{d}t.$$
This representation is at the basis of the so-called Herbst argument, which for our purposes can be summarized as follows:
Lemma 3.1 (Herbst)
The metric probability space (X,μ,ρ) has the concentration property with constant c if, for any f ∈Lip1(X,ρ),

$$D(\mu ^{(tf)}\|\mu ) \leq \frac{c\,t^{2}} {2} \qquad \text{for all } t> 0. \qquad (16)$$
Remark 3.2
Up to a constant, the converse is also true [34, Prob. 3.12]: if the subgaussian estimate (13) holds for every f ∈ Lip1(X, ρ), then
for every f ∈ Lip1(X, ρ).
In this way, the problem of establishing the concentration phenomenon reduces to showing that (16) holds for every f ∈ Lip1(X, ρ), typically via logarithmic Sobolev inequalities or other functional inequalities.
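The integral representation of ψ f derived above can be verified numerically for a discrete toy measure. The sketch below (all numbers made up) computes D(μ (tf) ∥ μ) directly from the tilted weights and integrates D∕t² by the midpoint rule, recovering ψ f (λ):

```python
import math

# Toy discrete measure mu and function f (all numbers made up).
mu = [0.2, 0.3, 0.5]
f = [0.0, 1.0, -1.5]
mean = sum(m * x for m, x in zip(mu, f))

def psi(lam):
    # logarithmic moment-generating function of f(X) - E[f(X)]
    return math.log(sum(m * math.exp(lam * (x - mean)) for m, x in zip(mu, f)))

def kl_tilted(t):
    # relative entropy D(mu^{(tf)} || mu) of the tilted measure
    z = sum(m * math.exp(t * x) for m, x in zip(mu, f))
    nu = [m * math.exp(t * x) / z for m, x in zip(mu, f)]
    return sum(a * math.log(a / m) for a, m in zip(nu, mu))

# psi(lam) = lam * int_0^lam D(mu^{(tf)} || mu) / t^2 dt  (midpoint rule;
# the integrand extends continuously to t = 0, so no singularity arises)
lam, N = 0.8, 20000
h = lam / N
integral = h * sum(kl_tilted((k + 0.5) * h) / ((k + 0.5) * h) ** 2 for k in range(N))
approx = lam * integral
```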
3.1 Concentration of Measure Under the Independence Assumption
To set the stage for the general treatment of the concentration phenomenon in high dimensions, we first consider the independent case, i.e., when coordinates X i , i ∈ T, of the random object X ∼ μ are mutually independent. In other words, the probability measure μ is equal to the product of its marginals: μ = μ 1 ⊗ … ⊗μ n . The key to establishing the concentration property in such a setting is tensorization, which is an umbrella term for any result that allows one to derive the “global” concentration property of the high-dimensional product space (X 1 ⊗ … ⊗X n , ρ, μ 1 ⊗ … ⊗μ n ) from “local” concentration properties of the coordinate spaces (X i , ρ i , μ i ), i ∈ T.
Below, we list two such tensorization results, one for the transportation-information inequalities and one for the relative entropy. Both of these results are deep consequences of the interplay between the independence structure of μ and the metric structure of ρ. Indeed, a function \(f: \mathsf{X} \rightarrow \mathbb{R}\) belongs to Lip1(X, ρ) if and only if δ i (f) ≤ 1 for all i ∈ T, i.e., if and only if, for every i ∈ T and every x T∖{i} ∈ X T∖{i}, the function f i : X i → R given by f i (y i ) ≜ f(y i x T∖{i}) is 1-Lipschitz with respect to the metric ρ i on X i . With this in mind, it is reasonable to expect that if one can establish a concentration property for all 1-Lipschitz functions on the coordinate spaces X i , then one can deduce a concentration property for functions on the product space X that are 1-Lipschitz in each coordinate.
Lemma 3.3 (Tensorization of transportation-information inequalities)
Suppose that there exist constants c 1 ,…,c n ≥ 0, such that

$$\mu _{i} \in T_{\rho _{i}}(c_{i})\qquad \text{for every } i \in T.$$
Then μ = μ 1 ⊗ … ⊗μ n ∈ T ρ (c) with c = ∑ i=1 n c i .
For example, by an appropriate rescaling of Pinsker’s inequality (11), we see that, if each coordinate space X i is endowed with the scaled trivial metric ρ i (x i , z i ) = α i 1{x i ≠ z i } for some α i > 0, then any Borel probability measure μ i on X i satisfies the transportation-information inequality with c i = α i 2∕4. By the above tensorization lemma, any product measure μ 1 ⊗ … ⊗μ n on the product space X 1 × … × X n equipped with the weighted Hamming metric \(\rho _{\boldsymbol{\alpha }}\) defined in (3) satisfies \(T_{\rho _{\boldsymbol{\alpha }}}(c)\) with \(c = \frac{1} {4}\sum _{i\in T}\alpha _{i}^{2}\). Consequently, by the Bobkov–Götze theorem, the subgaussian estimate (13) holds for any function \(f \in \mathrm{Lip}_{1}(\mathsf{X},\rho _{\boldsymbol{\alpha }})\), which in turn implies, via the Chernoff bound, that

$$\mathbb{P}_{\mu }\big\{\vert f(X) - \mathbb{E}_{\mu }[f(X)]\vert \geq t\big\} \leq 2\exp \Big(-\frac{2t^{2}} {\sum _{i\in T}\alpha _{i}^{2}}\Big)\qquad \text{for all } t \geq 0.$$
This provides an alternative derivation of McDiarmid’s inequality (with the sharp constant in the exponent), which was originally proved using the martingale method.
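As an empirical illustration of this bound, the following sketch takes f(x) = x 1 + … + x n with independent Bernoulli(1∕2) coordinates (so each α i = 1) and compares the Monte Carlo tail frequency with the McDiarmid bound 2exp(−2t²∕n); the sample sizes and threshold are made up:

```python
import math, random

# f(x) = x_1 + ... + x_n, X_i i.i.d. Bernoulli(1/2); each alpha_i = 1.
n, trials, t = 50, 20000, 8.0
rng = random.Random(3)
exceed = 0
for _ in range(trials):
    s = sum(rng.randint(0, 1) for _ in range(n))
    exceed += abs(s - n / 2) >= t
empirical = exceed / trials
bound = 2 * math.exp(-2 * t * t / n)   # McDiarmid: 2 exp(-2 t^2 / sum_i alpha_i^2)
```

The empirical frequency sits well below the bound, as expected: McDiarmid’s inequality is sharp over the whole class of 1-Lipschitz functions, not for any particular one.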
Lemma 3.4 (Tensorization of relative entropy)
Consider a product measure μ = μ 1 ⊗ … ⊗μ n . Then for any other probability measure ν on X we have

$$D(\nu \|\mu ) \leq \sum _{i\in T}\mathbb{E}_{\nu }\Big[D\big(\nu _{i}(\cdot \,\vert \,X^{T\setminus \{i\}})\,\big\|\,\mu _{i}\big)\Big].$$
The idea is to apply this lemma to ν = μ (tf) for some t ≥ 0 and an arbitrary f ∈ Lip1(X, ρ). In that case, a simple calculation shows that the conditional probability measure ν i (dx i | x T∖{i}) = μ i (tf)(dx i | x T∖{i}) is equal to the tilted distribution \(\mu _{i}^{(tf_{i})}\) with f i (x i ) = f(x i x T∖{i}), and therefore

$$D(\mu ^{(tf)}\|\mu ) \leq \sum _{i\in T}\mathbb{E}_{\mu ^{(tf)}}\Big[D\big(\mu _{i}^{(tf_{i})}\,\big\|\,\mu _{i}\big)\Big].$$
If f ∈ Lip1(X, ρ), then f i ∈ Lip1(X i , ρ i ). Thus, if we can show that, for any g ∈ Lip1(X i , ρ i ),

$$D(\mu _{i}^{(tg)}\|\mu _{i}) \leq \frac{c_{i}t^{2}} {2} \qquad \text{for all } t> 0,$$
then the estimate

$$D(\mu ^{(tf)}\|\mu ) \leq \frac{c\,t^{2}} {2} \qquad \text{for all } t> 0$$
holds with c = ∑ i = 1 n c i for all f ∈ Lip(X, ρ) by the tensorization lemma. Invoking the Herbst argument, we conclude that (X, μ, ρ) has the concentration property with the same c.
4 The Abstract Martingale Method
In this section, we present a general martingale-based scheme for deriving concentration inequalities for functions of many dependent random variables. Let \(f: \mathsf{X} \rightarrow \mathbb{R}\) be the function of interest, and let X = (X i ) i ∈ T be a random element of the product space X with probability law μ. Let \(\mathcal{F}_{0} \subset \mathcal{F}_{1} \subset \ldots \subset \mathcal{F}_{m}\) be a filtration (i.e., an increasing sequence of σ-fields) on X, such that \(\mathcal{F}_{0}\) is trivial and \(\mathcal{F}_{m} = \mathcal{B}\). The idea is to decompose the centered random variable \(f(X) - \mathbb{E}_{\mu }[f(X)]\) as a sum of martingale differences

$$f(X) - \mathbb{E}_{\mu }[f(X)] =\sum _{j=1}^{m}M^{(j)},\qquad M^{(j)} \triangleq \mathbb{E}_{\mu }[f(X)\,\vert \,\mathcal{F}_{j}] - \mathbb{E}_{\mu }[f(X)\,\vert \,\mathcal{F}_{j-1}].$$
By construction, \(\mathbb{E}_{\mu }[f(X)\vert \mathcal{F}_{m}] = f(X)\) and \(\mathbb{E}_{\mu }[f(X)\vert \mathcal{F}_{0}] = \mathbb{E}_{\mu }[f(X)]\), so the problem of bounding the probability \(\mathbb{P}_{\mu }\big\{\vert f - \mathbb{E}_{\mu }f\vert \geq t\big\}\) for a given t ≥ 0 reduces to bounding the probability

$$\mathbb{P}_{\mu }\Big\{\Big\vert \sum _{j=1}^{m}M^{(j)}\Big\vert \geq t\Big\}.$$
The latter problem hinges on being able to control the martingale differences M (j). In particular, if each M (j) is a.s. bounded, we have the following:
Theorem 4.1 (Azuma–Hoeffding inequality)
Let {M (j) } j=1 m be a martingale difference sequence with respect to a filtration \(\{\mathcal{F}_{j}\}_{j=0}^{m}\) . Suppose that, for each j, there exist \(\mathcal{F}_{j-1}\) -measurable random variables A (j) and B (j), such that A (j) ≤ M (j) ≤ B (j) a.s. Then, for all \(\lambda \in \mathbb{R}\),

$$\mathbb{E}\big[\mathrm{e}^{\lambda M^{(j)} }\,\big\vert \,\mathcal{F}_{j-1}\big] \leq \mathrm{e}^{\lambda ^{2}(B^{(j)}-A^{(j)})^{2}/8 }\qquad \text{a.s.}$$
Consequently, for any t ≥ 0,

$$\mathbb{P}\Big\{\Big\vert \sum _{j=1}^{m}M^{(j)}\Big\vert \geq t\Big\} \leq 2\exp \Big(- \frac{2t^{2}} {\sum _{j=1}^{m}\|B^{(j)} - A^{(j)}\|_{\infty }^{2}}\Big). \qquad (19)$$
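The Azuma–Hoeffding bound can be illustrated by simulation. Below, the martingale differences are independent, mean-zero, and uniform on [−a j , a j ] (the bounds a j are made up), so that A (j) = −a j and B (j) = a j , and the empirical tail is compared with the bound 2exp(−2t²∕Σ j (B (j) − A (j))²):

```python
import math, random

# Martingale with independent, mean-zero increments uniform on [-a_j, a_j],
# so A^(j) = -a_j and B^(j) = a_j (the bounds a_j are made up).
a = [0.5 + 0.01 * j for j in range(40)]
S2 = sum((2 * x) ** 2 for x in a)         # sum of (B^(j) - A^(j))^2
rng = random.Random(4)
t, trials = 6.0, 20000
exceed = 0
for _ in range(trials):
    s = sum(rng.uniform(-x, x) for x in a)
    exceed += abs(s) >= t
empirical = exceed / trials
bound = 2 * math.exp(-2 * t * t / S2)     # Azuma-Hoeffding tail bound
```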
The most straightforward choice of the filtration is also the most natural one: take m = | T | = n, and for each i ∈ T take \(\mathcal{F}_{i} =\sigma (X^{[i]})\). For i ∈ T, define a Markov kernel K (i) on X by

$$K^{(i)}(x,\mathrm{d}y) \triangleq \delta _{x^{[i-1]}}(\mathrm{d}y^{[i-1]}) \otimes \mu ^{[i,n]}(\mathrm{d}y^{[i,n]}\,\vert \,x^{[i-1]}). \qquad (20)$$
Then, for any f ∈ L 1(μ) we have

$$K^{(i)}f(x) =\int _{\mathsf{X}^{[i,n]}}f(x^{[i-1]}z^{[i,n]})\,\mu ^{[i,n]}(\mathrm{d}z^{[i,n]}\,\vert \,x^{[i-1]}) = \mathbb{E}_{\mu }[f(X)\,\vert \,X^{[i-1]} = x^{[i-1]}];$$
in particular, \(K^{(1)}f = \mathbb{E}_{\mu }f\). We extend this definition to i = n + 1 in the obvious way:

$$K^{(n+1)}(x,\mathrm{d}y) \triangleq \delta _{x}(\mathrm{d}y),$$
so that K (n+1) f = f. Then, for each i ∈ T, we can write M (i) = K (i+1) f − K (i) f. With this construction, we can state the following theorem that applies to the case when each coordinate space X i is endowed with a bounded measurable metric ρ i :
Theorem 4.2
Assume that, for all i,

$$\|\rho _{i}\| \triangleq \sup _{x_{i},z_{i}\in \mathsf{X}_{i}}\rho _{i}(x_{i},z_{i}) <\infty.$$
For each i ∈{ 1,…,n + 1}, let V (i) be a Wasserstein matrix for the Markov kernel K (i) defined in (20) , in the sense that \(\boldsymbol{\delta }(K^{(i)}f)\preceq V ^{(i)}\boldsymbol{\delta }(f)\) holds for each f ∈Lip (X,ρ) as in ( 7 ). Define the matrix Γ = (Γ ij ) i,j∈T with entries

$$\varGamma _{ij} \triangleq \|\rho _{i}\|\,V _{ij}^{(i+1)}.$$
Then, for any f ∈Lip (X,ρ) and for any t ≥ 0, we have

$$\mathbb{P}_{\mu }\big\{\vert f(X) - \mathbb{E}_{\mu }[f(X)]\vert \geq t\big\} \leq 2\exp \Big(- \frac{2t^{2}} {\|\varGamma \boldsymbol{\delta }(f)\|_{\ell ^{2}(T)}^{2}}\Big). \qquad (21)$$
Proof
For each i ∈ T, using the tower property of conditional expectations, we can write
From this, it follows that A (i) ≤ M (i) ≤ B (i) a.s., where
and
By definition of the Wasserstein matrix, we have
Substituting this estimate into (22), we get
The probability estimate (21) then follows from the Azuma–Hoeffding inequality (19). □
We can also use the martingale method to obtain a tensorization result for transportation inequalities without independence assumptions. This result, which generalizes a theorem of Djellout, Guillin, and Wu [8, Thm. 2.11], can be used even when the metrics ρ i are not necessarily bounded.
Theorem 4.3
Suppose that there exist constants c 1 ,…,c n ≥ 0, such that

$$\mu _{i}(\cdot \,\vert \,x^{[i-1]}) \in T_{\rho _{i}}(c_{i})\qquad \text{for all } i \in T \text{ and all } x^{[i-1]} \in \mathsf{X}^{[i-1]}. \qquad (24)$$
For each i ∈{ 1,…,n + 1}, let V (i) be a Wasserstein matrix for K (i) . Then μ ∈ T ρ (c) with

$$c =\sum _{i\in T}c_{i}\Big(\sum _{j\in T}V _{ij}^{(i+1)}\Big)^{2}. \qquad (25)$$
Proof
By the Bobkov–Götze theorem [1], it suffices to show that, for every \(f: \mathsf{X} \rightarrow \mathbb{R}\) with ∥ f ∥ Lip ≤ 1, the random variable f(X) with X ∼ μ is c-subgaussian, with c given by (25). To that end, we again consider the martingale decomposition
with M (i) = K (i+1) f − K (i) f. We will show that, for every i,

$$\mathbb{E}_{\mu }\big[\mathrm{e}^{\lambda M^{(i)} }\,\big\vert \,\mathcal{F}_{i-1}\big] \leq \exp \Big(\frac{\lambda ^{2}c_{i}} {2}\Big(\sum _{j\in T}V _{ij}^{(i+1)}\Big)^{2}\Big)\qquad \text{a.s., for all }\lambda \in \mathbb{R}. \qquad (26)$$
This, in turn, will yield the desired subgaussian estimate

$$\mathbb{E}_{\mu }\big[\mathrm{e}^{\lambda (f(X)-\mathbb{E}_{\mu }[f(X)])}\big] \leq \mathrm{e}^{c\lambda ^{2}/2 }$$
for every \(\lambda \in \mathbb{R}\).
To proceed, note that, for a fixed realization x [i−1] of X [i−1], M (i) = K (i+1) f − K (i) f is σ(X i )-measurable, and
where we have used the definition of the Wasserstein matrix, as well as the fact that ∥ f ∥ Lip ≤ 1 is equivalent to δ j (f) ≤ 1 for all j ∈ T. Since \(\mu _{i}(\cdot \vert x^{[i-1]}) \in T_{\rho _{i}}(c)\) by hypothesis, we obtain the estimate (26) by the Bobkov–Götze theorem. □
As a sanity check, let us confirm that, in the case when μ is a product measure and the product space X is endowed with the weighted Hamming metric \(\rho _{\boldsymbol{\alpha }}\) defined in (3), Theorems 4.2 and 4.3 both reduce to McDiarmid’s inequality. To see this, we first note that, when the X i ’s are independent, we can write
for each i ∈ T, f ∈ L 1(μ), and x ∈ X. This, in turn, implies that
where we have used the fact that, with ρ i (x i , z i ) = α i 1{x i ≠ z i }, ∥ ρ i ∥ = α i for every i ∈ T. Therefore, for each i ∈ T, we can always choose a Wasserstein matrix V (i+1) for K (i+1) in such a way that its ith row has zeroes everywhere except for the ith column, where it has a 1. Now, for any function \(f: \mathsf{X} \rightarrow \mathbb{R}\) which is 1-Lipschitz with respect to \(\rho _{\boldsymbol{\alpha }}\), we can take \(\boldsymbol{\delta }(f) = \mathbf{1}\). Therefore, for any such f, Theorem 4.2 gives

$$\mathbb{P}_{\mu }\big\{\vert f(X) - \mathbb{E}_{\mu }[f(X)]\vert \geq t\big\} \leq 2\exp \Big(-\frac{2t^{2}} {\sum _{i\in T}\alpha _{i}^{2}}\Big),$$
which is precisely McDiarmid’s inequality. Since the constant 2 in McDiarmid’s inequality is known to be sharp, this shows that the coefficient 2 in the exponent in (21) is likewise optimal. Moreover, with our choice of ρ i , condition (24) of Theorem 4.3 holds with c i = α i 2∕4, and, in light of the discussion above, we can arrange ∑ j V ij (i+1) = 1. Therefore, by Theorem 4.3, any function \(f: \mathsf{X} \rightarrow \mathbb{R}\) which is 1-Lipschitz with respect to \(\rho _{\boldsymbol{\alpha }}\) is c-subgaussian with constant

$$c = \frac{1} {4}\sum _{i\in T}\alpha _{i}^{2},$$
which is just another equivalent statement of McDiarmid’s inequality.
It is also possible to consider alternative choices of the filtration \(\{\mathcal{F}_{j}\}_{j=0}^{m}\). For example, if we partition the index set T into m disjoint subsets (blocks) T 1, …, T m , we can take

$$\mathcal{F}_{j} =\sigma \big(X^{\varLambda _{j}}\big),\qquad j \in \{ 0,1,\ldots,m\},$$
where Λ j ≜ T 1 ∪ … ∪ T j . Defining for each j ∈ [m] the Markov kernel
we can write
for every j ∈ [m]. As before, we take \(K^{(1)}f = \mathbb{E}_{\mu }[f]\) and K (m+1) f = f. Given a measurable function \(f: \mathsf{X} \rightarrow \mathbb{R}\), we can define the oscillation of f in the jth block T j , j ∈ [m], by
The definition of a Wasserstein matrix is modified accordingly: we say that a nonnegative matrix \(\tilde{V } = (\tilde{V }_{jk})_{j,k\in [m]}\) is a Wasserstein matrix for a Markov kernel K on X with respect to the partition {T j } j = 1 m if, for any Lipschitz function \(f: \mathsf{X} \rightarrow \mathbb{R}\),
for all j ∈ [m]. With these definitions at hand, the following theorem, which generalizes a result of Paulin [29, Thm. 2.1], can be proved in the same way as Theorem 4.2:
Theorem 4.4
For each j ∈ [m + 1], let \(\tilde{V }^{(j)} = (\tilde{V }_{k\ell}^{(j)})_{k,\ell\in [m]}\) be a Wasserstein matrix for \(\tilde{K}^{(j)}\) with respect to the partition {T j }. Define the matrix \(\tilde{\varGamma }= (\tilde{\varGamma }_{k\ell})_{k,\ell\in [m]}\) with entries
where
is the diameter of the metric space \((\mathsf{X}^{T_{k}},\rho ^{T_{k}})\) . Then, for any f ∈Lip (X,ρ) and for any t ≥ 0,
5 The Martingale Method in Action
We now show that several previously published concentration inequalities for functions of dependent random variables arise as special cases of Theorem 4.2 by exploiting the freedom to choose the Wasserstein matrices V (i). In fact, careful examination of the statement of Theorem 4.2 shows that, for each i ∈ T, we only need to extract the ith row of V (i+1).
5.1 Concentration Inequalities Under the Dobrushin Uniqueness Condition
One particularly clean way of constructing the desired Wasserstein matrices is via the classical comparison theorem of Dobrushin for Gibbs measures [9]. For our purposes, we give its formulation due to Föllmer [11]:
Lemma 5.1
Let ν and \(\tilde{\nu }\) be two Borel probability measures on X. Define the matrix Cν = (Cij ν)i,j∈T and the vector \(b^{\nu,\tilde{\nu }} = (b_{i}^{\nu,\tilde{\nu }})_{i\in T}\) by
and
Suppose that the spectral radius of C ν is strictly smaller than unity. Then, for any f ∈ L 1 (μ),
where D ν ≜ ∑ m=0 ∞ (C ν ) m .
Remark 5.2
The matrix C ν is called the Dobrushin interdependence matrix of ν. When the spectral radius of C ν is strictly smaller than unity, we say that ν satisfies the Dobrushin uniqueness condition. This condition is used in statistical physics to establish the absence of phase transitions, which is equivalent to uniqueness of a global Gibbs measure consistent with a given local specification (see the book of Georgii [12] for details).
Given an index i ∈ T, we will extract the ith row of a Wasserstein matrix for K (i+1) by applying the Dobrushin comparison theorem to a particular pair of probability measures on X. Let x, z ∈ X be two configurations that differ only in the ith coordinate: x T∖{i} = z T∖{i} and x i ≠ z i . Thus, we can write z = x [i−1] z i x (i, n], and
By definition of the local oscillation, the first integral in (30) is bounded by δ i (f)ρ i (x i , z i ). To handle the remaining terms, define two probability measures \(\nu,\tilde{\nu }\) on X by
Using this definition and Lemma 5.1, we can write
It remains to obtain explicit upper bounds on the entries of D ν and \(b^{\nu,\tilde{\nu }}\). To that end, we first note that, for a given j ∈ T and for any u, y ∈ X,
Therefore, C jk ν ≤ C jk μ. Likewise, for a given k ∈ T and for any y ∈ X,
Therefore, \(b_{k}^{\nu,\tilde{\nu }} \leq C_{ki}^{\mu }\rho _{i}(x_{i},z_{i})\). Since the matrices C ν and C μ are nonnegative, D jk ν ≤ D jk μ. Consequently, we can write
Therefore, from (31) and (32), we have
We have thus proved the following:
Corollary 5.3
Suppose that the probability measure μ satisfies the Dobrushin uniqueness condition, i.e., the spectral radius of its Dobrushin interdependence matrix C μ is strictly smaller than unity. Then, for any t ≥ 0, the concentration inequality (21) holds with
For example, when each X i is equipped with the trivial metric ρ i (x i , z i ) = 1{x i ≠ z i }, we have ∥ ρ i ∥ = 1 for all i, and consequently obtain the concentration inequality
The same inequality, but with a worse constant in the exponent, was obtained by Külske [19, p. 45].
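The Dobrushin interdependence matrix can be computed exactly for small models. The sketch below does this for a three-site Ising chain μ(x) ∝ exp(β(x 1 x 2 + x 2 x 3)) with trivial metrics (the value of β is made up): each entry C ij is the worst-case total variation distance between conditional laws at site i as the spin at site j is flipped, and the maximum row sum (an upper bound on the spectral radius of the nonnegative matrix C) stays below one at this temperature:

```python
import itertools, math

# Three-site Ising chain: mu(x) ∝ exp(beta * (x1*x2 + x2*x3)), x_i in {-1, +1}.
beta = 0.3
sites = range(3)
configs = list(itertools.product([-1, 1], repeat=3))

def weight(x):
    return math.exp(beta * (x[0] * x[1] + x[1] * x[2]))

def cond(i, x):
    # conditional law mu_i(. | x_{T \ {i}}) of the spin at site i given the rest
    w = {}
    for s in (-1, 1):
        y = list(x)
        y[i] = s
        w[s] = weight(y)
    z = w[-1] + w[1]
    return {s: w[s] / z for s in (-1, 1)}

def tv(p, q):
    return 0.5 * sum(abs(p[s] - q[s]) for s in (-1, 1))

# C[i][j]: worst-case TV distance between conditionals at i when site j flips.
C = [[0.0] * 3 for _ in sites]
for i in sites:
    for j in sites:
        if i != j:
            C[i][j] = max(tv(cond(i, x), cond(i, z))
                          for x in configs for z in configs
                          if x[j] != z[j]
                          and all(x[k] == z[k] for k in sites if k != j))

# Max row sum bounds the spectral radius of the nonnegative matrix C from above.
row_sums = [sum(row) for row in C]
```

For an end site, the dependence on its single neighbor works out to tanh β, and the conditional law at an end site does not depend on the far site at all, so the corresponding entry of C vanishes.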
5.2 Concentration Inequalities Via Couplings
Another method for constructing Wasserstein matrices for the Markov kernels K (i) is via couplings. One notable advantage of this method is that it does not explicitly rely on the Dobrushin uniqueness condition; however, some such condition is typically necessary in order to obtain good bounds for the norm \(\|\varGamma \boldsymbol{\delta }(f)\|_{\ell^{2}(T)}\).
Fix an index i ∈ T and any two x, z ∈ X that differ only in the ith coordinate: x T∖{i} = z T∖{i} and x i ≠ z i . Let P x, z [i] be any coupling of the conditional laws μ (i, n](⋅ | x [i]) and μ (i, n](⋅ | z [i]). Then for any f ∈ L 1(μ) we can write
Therefore,
Remembering that we only need the ith row of a Wasserstein matrix for K (i+1), we may take
We have thus proved the following:
Corollary 5.4
For each index i ∈ T and for each pair x,z ∈ X of configurations with xT∖{i} = zT∖{i}, pick an arbitrary coupling P x,z [i] of the conditional laws μ {i} (⋅|x [i] ) and μ {i} (⋅|z [i] ). Then, for any t ≥ 0, the concentration inequality (21) holds with
where the entries V ij (i+1) are given by (36).
In the case when each X i is equipped with the trivial metric ρ i (x i , z i ) = 1{x i ≠ z i }, the entries Γ ij for j > i take the form
where \((Y ^{(0)},Y ^{(1)}) =\big ((Y _{i+1}^{(0)},\ldots,Y _{n}^{(0)}),(Y _{i+1}^{(1)},\ldots,Y _{n}^{(1)})\big)\) is a random object taking values in X (i, n] ×X (i, n]. A special case of this construction, under the name of coupling matrix, was used by Chazottes et al. [6]. In that work, each P x, z [i] was chosen to minimize
over all couplings P of μ (i, n](⋅ | x [i]) and μ (i, n](⋅ | z [i]), in which case we have
However, it is not clear how to relate the quantities \(\mathbf{P}_{x,z}^{[i]}\left \{Y _{j}^{(0)}\neq Y _{j}^{(1)}\right \}\) and \(\mathbf{P}_{x,z}^{[i]}\left \{Y ^{(0)}\neq Y ^{(1)}\right \}\), apart from the obvious bound
which gives
An alternative choice of coupling is the so-called maximal coupling due to Goldstein [13], which for our purposes can be described as follows: let U = (U ℓ ) ℓ = 1 m and Y = (Y ℓ ) ℓ = 1 m be two random m-tuples taking values in a product space E = E 1 × … ×E m , where each E ℓ is Polish. Then there exists a coupling P of the probability laws \(\mathcal{L}(U)\) and \(\mathcal{L}(Y )\), such that
Thus, for each \(i \in T\) and for every pair \(x,z \in \mathsf{X}\) with \(x^{T\setminus \{i\}} = z^{T\setminus \{i\}}\), let \(\mathbf{P}_{x,z}^{[i]}\) be the Goldstein coupling of \(\mu^{(i,n]}(\cdot \,\vert \,x^{[i]})\) and \(\mu^{(i,n]}(\cdot \,\vert \,z^{[i]})\). Then for each \(j \in \{ i + 1,\ldots,n\}\), using (39) we have
This choice of coupling gives rise to the upper-triangular matrix \(\varGamma = (\varGamma _{ij})_{i,j\in T}\) with
Substituting this matrix into (21), we recover the concentration inequality of Kontorovich and Ramanan [17], but with an improved constant in the exponent.
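Although the chapter treats couplings abstractly, the maximal-coupling construction is easy to realize computationally. The following Python sketch is illustrative only: it implements the standard maximal coupling of two finite distributions (the single-coordinate building block, not Goldstein's full multivariate construction), sampling a pair \((U,Y)\) whose agreement probability is exactly \(1 -\left \Vert p - q\right \Vert _{\mbox{ TV}}\):

```python
import numpy as np

def maximal_coupling(p, q, rng):
    """Draw one sample (u, y) from a maximal coupling of two finite
    distributions p and q, so that P{u == y} = 1 - TV(p, q)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    common = np.minimum(p, q)          # overlapping mass of p and q
    alpha = common.sum()               # = 1 - TV(p, q)
    if rng.random() < alpha:
        # with probability alpha, draw u = y from the normalized overlap
        u = rng.choice(len(p), p=common / alpha)
        return u, u
    # otherwise draw u and y independently from the normalized residuals
    u = rng.choice(len(p), p=(p - common) / (1 - alpha))
    y = rng.choice(len(q), p=(q - common) / (1 - alpha))
    return u, y

# empirical check: agreement frequency should be close to 1 - TV = 0.7
rng = np.random.default_rng(0)
p, q = [0.5, 0.3, 0.2], [0.2, 0.3, 0.5]
agree = sum(u == y for u, y in
            (maximal_coupling(p, q, rng) for _ in range(20000))) / 20000
```

On the event \(\{U\neq Y\}\), the two draws come from the normalized residual measures, which always have disjoint supports; this is precisely what makes the coupling maximal.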
Remark 5.5
It was erroneously claimed in [16, 14, 15] that the basic concentration inequalities of Chazottes et al. [6] and Kontorovich and Ramanan [17] are essentially the same, only derived using different methods. As the discussion above elucidates, the two methods use different couplings (the former explicitly, the latter implicitly), which yield quantitatively different and, in general, incomparable mixing coefficients.
Remark 5.6
Kontorovich and Ramanan obtained the matrix (40) using analytic methods without constructing an explicit coupling. In 2012, S. Shlosman posed the following question: could this matrix have been derived using a suitable coupling? We can now answer his question in the affirmative: the coupling is precisely Goldstein’s maximal coupling.
As an illustration, let us consider two specific classes of probability law \(\mu\): a directed Markov model (i.e., a Markov chain) and an undirected Markov model (i.e., a Gibbsian Markov random field). In the directed case, suppose that the elements of \(T\) are ordered in such a way that \(\mu\) can be disintegrated in the form
where \(\mu _{0}\) is a Borel probability measure on \((\mathsf{X}_{1},\mathcal{B}_{1})\), and, for each \(i \in [1,n - 1]\), \(K_{i}\) is a Markov kernel from \(\mathsf{X}_{i}\) to \(\mathsf{X}_{i+1}\). For each \(i \in [1,n)\), let
be the Dobrushin contraction coefficient of \(K_{i}\). Fix \(1 \leq i <j \leq n\), \(x^{[i-1]} \in \mathsf{X}^{[i-1]}\), and \(y_{i},y_{i}' \in \mathsf{X}_{i}\). An easy calculation [14] shows that, defining the signed measures \(\eta\) on \(\mathsf{X}_{i+1}\) by \(\eta (\mathrm{d}x_{i+1}) = K_{i}(y_{i},\mathrm{d}x_{i+1}) - K_{i}(y_{i}',\mathrm{d}x_{i+1})\) and \(\zeta _{j}\) on \(\mathsf{X}_{j}\) by
we have
where (4) was repeatedly invoked to obtain the last inequality. The above yields an upper bound on the \(\varGamma _{ij}\) in (40) and hence in the corresponding concentration inequality in (21). When more delicate (e.g., spectral [15, 29]) estimates on \(\left \Vert \zeta _{j}\right \Vert _{\mbox{ TV}}\) are available, these translate directly into tighter concentration bounds.
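For concreteness, the contraction property underlying the last inequality can be checked numerically. A minimal Python sketch (the kernel below is an arbitrary illustrative example, not taken from the chapter) computes the Dobrushin contraction coefficient of a finite-state Markov kernel and verifies \(\left \Vert \mu K -\nu K\right \Vert _{\mbox{ TV}} \leq \theta (K)\left \Vert \mu -\nu \right \Vert _{\mbox{ TV}}\):

```python
import numpy as np

def dobrushin_coefficient(K):
    """theta(K): the maximum total variation distance between rows of K."""
    n = K.shape[0]
    return max(0.5 * np.abs(K[a] - K[b]).sum()
               for a in range(n) for b in range(n))

def tv(p, q):
    """Total variation distance between two finite distributions."""
    return 0.5 * np.abs(p - q).sum()

# a small 3-state kernel (illustrative numbers)
K = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
theta = dobrushin_coefficient(K)

# contraction check for two point masses
mu = np.array([1.0, 0.0, 0.0])
nu = np.array([0.0, 0.0, 1.0])
assert tv(mu @ K, nu @ K) <= theta * tv(mu, nu) + 1e-12
```

For this kernel the coefficient is attained by the first two rows, \(\theta (K)=0.4\), while the images of the two point masses are only \(0.3\) apart in total variation.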
In the undirected case, \(\mu\) is a Gibbsian Markov random field induced by pair potentials [12]. To keep things simple, we assume that the local spaces \(\mathsf{X}_{i}\) are all finite. Define an undirected graph with vertex set \(T = [n]\) and edge set \(E = \left \{(i,i + 1): 1 \leq i <n\right \}\) (i.e., a chain graph on \(T\)). Associate with each edge \((i,j) \in E\) a potential function \(\psi _{ij}: \mathsf{X}_{i} \times \mathsf{X}_{j} \rightarrow [0,\infty )\). Together, these define a probability measure \(\mu\) on \(\mathsf{X}\) via
Since \(\mu\) is a Markov measure on \(\mathsf{X}\), there is a sequence of Markov kernels \(K_{1},\ldots,K_{n-1}\) generating \(\mu\) in the sense of (41). It is shown in [14] that the contraction coefficient \(\theta _{i}\) of the kernel \(K_{i}\) is bounded by
where
The estimate above implies a concentration result, either via (42) or via (35). To apply the latter, recall that \(D^{\mu } =\sum _{k=1}^{\infty }(C^{\mu })^{k}\), where \(C^{\mu }\) is the Dobrushin interdependence matrix defined in (27). Assuming that \(\rho\) is the unweighted Hamming metric (i.e., \(\rho _{i}(x_{i},z_{i}) = \mathbf{1}\{x_{i}\neq z_{i}\}\) for all \(i\)) and that the \(\theta _{i}\)'s are all majorized by some \(\theta <1\), it is easy to see that \((C^{\mu })_{ij} \leq \theta ^{\vert i-j\vert }\).
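Under the bound \((C^{\mu })_{ij} \leq \theta ^{\vert i-j\vert }\), the series defining \(D^{\mu }\) converges whenever the spectral radius of \(C^{\mu }\) is below one, in which case it can be summed in closed form as \((I - C^{\mu })^{-1} - I\). A short numerical sketch in Python (the chain length and the value of \(\theta\) are illustrative):

```python
import numpy as np

n, theta = 6, 0.3

# upper bound on the Dobrushin interdependence matrix of the chain:
# entries theta^{|i-j|} off the diagonal, zero on the diagonal
i, j = np.indices((n, n))
C = theta ** np.abs(i - j)
np.fill_diagonal(C, 0.0)

# the series D = sum_{k>=1} C^k converges iff the spectral radius is < 1,
# and then equals (I - C)^{-1} - I
assert np.max(np.abs(np.linalg.eigvals(C))) < 1
D = np.linalg.inv(np.eye(n) - C) - np.eye(n)

# sanity check against a truncated partial sum of the series
S, P = np.zeros((n, n)), np.eye(n)
for _ in range(200):
    P = P @ C
    S += P
assert np.allclose(S, D, atol=1e-10)
```

Here the row sums of \(C\) are bounded by \(2\theta /(1-\theta ) <1\), so convergence is immediate; for larger \(\theta\) one must check the spectral radius directly.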
6 Open Questions
Our focus in this chapter has been on the martingale method for establishing concentration inequalities. In the case of product measures, other techniques, such as the entropy method or transportation-information inequalities, often lead to sharper bounds. However, these alternative techniques are less developed in the dependent setting, and there appears to be a gap between what is achievable using the martingale method and what is achievable using other means. We close the chapter by listing some open questions that are aimed at closing this gap:
-
(Approximate) tensorization of entropy. In the independent case, it is possible to derive the same concentration inequality (e.g., McDiarmid's inequality) using either the martingale method or the entropy method, often with the same sharp constants. However, once the independence assumption is dropped, the situation is no longer so simple. Consider, for example, tensorization of entropy. Several authors (see, e.g., [26, 3, 27]) have obtained so-called approximate tensorization inequalities for the relative entropy in the case of weakly dependent random variables: under certain regularity conditions on \(\mu\), there exists a constant \(A_{\mu } \geq 1\), such that, for any other probability measure \(\nu\),
$$\displaystyle\begin{array}{rcl} D(\nu \|\mu ) \leq A_{\mu } \cdot \sum _{i\in T}\mathbb{E}_{\nu }D\big(\nu _{i}(\cdot \vert X^{T\setminus \left \{i\right \}})\big\|\mu _{ i}(\cdot \vert X^{T\setminus \left \{i\right \}})\big).& & {}\end{array}$$(43)Having such an inequality in hand, one can proceed to prove concentration for Lipschitz functions in exactly the same way as in the independent case. However, it seems that the constants A μ in (43) are not sharp in the sense that the resulting concentration inequalities are typically worse than what one can obtain using Theorems 4.2 or 4.3 under the same assumptions on μ and f. This motivates the following avenue for further investigation: Derive sharp inequalities of the form (43) by relating the constant A μ to appropriately chosen Wasserstein matrices.
-
General Wasserstein-type matrices. Using the techniques pioneered by Marton, Samson proved the following concentration of measure result: Consider a function \(f: \mathsf{X} \rightarrow \mathbb{R}\) satisfying an “asymmetric” Lipschitz condition of the form
$$\displaystyle{ f(x) - f(y) \leq \sum _{i\in T}\alpha _{i}(x)\mathbf{1}\{x_{i}\neq y_{i}\},\qquad \forall x,y \in \mathsf{X} }$$for some functions \(\alpha _{i}: \mathsf{X} \rightarrow \mathbb{R}\), such that \(\sum _{i\in T}\alpha _{i}^{2}(x) \leq 1\) for all \(x \in \mathsf{X}\). Then, for any Borel probability measure \(\mu\) on \(\mathsf{X}\), we have
$$\displaystyle\begin{array}{rcl} \mathbb{P}_{\mu }\Big\{f(X) - \mathbb{E}_{\mu }[f(X)] \geq t\Big\} \leq \exp \left (- \frac{t^{2}} {2\|\varDelta \|_{2}^{2}}\right ),& & {}\end{array}$$(44)where the matrix \(\varDelta\) has entries \(\varDelta _{ij} = \sqrt{\varGamma _{ij}}\) with \(\varGamma _{ij}\) given by (40), and
$$\displaystyle{ \|\varDelta \|_{2} \triangleq \sup _{v\in \mathbb{R}^{T}\setminus \{0\}}\frac{\|\varDelta v\|_{\ell^{2}(T)}} {\|v\|_{\ell^{2}(T)}} }$$is the operator norm of \(\varDelta\). A more general result in this vein was derived by Marton [24], who showed that an inequality of the form (44) holds with \(\varDelta\) computed in terms of any matrix \(\varGamma\) of the form (36), where each \(\rho _{i}\) is the trivial metric. Samson's proof relies on a fairly intricate recursive coupling argument. It would be interesting to develop analogs of (44) for arbitrary choices of the metrics \(\rho _{i}\) and with full freedom to choose the Wasserstein matrices \(V ^{(i)}\) for each \(i \in T\). A recent paper by Wintenberger [35] pursues this line of work.
-
The method of exchangeable pairs and Wasserstein matrices. An alternative route towards concentration inequalities in the dependent setting is via Stein's method of exchangeable pairs [4, 5]. Using this method, Chatterjee obtained the following result [4, Chap. 4]: Let \(f: \mathsf{X} \rightarrow \mathbb{R}\) be a function which is 1-Lipschitz with respect to the weighted Hamming metric \(\rho _{\boldsymbol{\alpha }}\) defined in (3). Let \(\mu\) be a Borel probability measure on \(\mathsf{X}\), whose Dobrushin interdependence matrix \(C^{\mu }\) satisfies the condition \(\|C^{\mu }\|_{2} <1\). Then, for any \(t \geq 0\),
$$\displaystyle\begin{array}{rcl} \mathbb{P}_{\mu }\Big\{\vert f(X) - \mathbb{E}_{\mu }[f(X)]\vert \geq t\Big\}\leq 2\exp \left (-\frac{(1 -\| C^{\mu }\|_{2})t^{2}} {\sum _{i\in T}\alpha _{i}^{2}} \right ).& & {}\end{array}$$(45)The key ingredient in the proof of (45) is the so-called Gibbs sampler, i.e., the Markov kernel \(\bar{K}\) on \(\mathsf{X}\) given by
$$\displaystyle{ \bar{K}(x,\mathrm{d}y) \triangleq \frac{1} {\vert T\vert }\sum _{i\in T}\delta _{x^{T\setminus \left \{i\right \}}}(\mathrm{d}y^{T\setminus \left \{i\right \}}) \otimes \mu _{ i}(\mathrm{d}y_{i}\vert x^{T\setminus \left \{i\right \}}). }$$This kernel leaves μ invariant, i.e., \(\mu =\mu \bar{ K}\), and it is easy to show (see, e.g., [27]) that it contracts the \(\bar{W}\) distance: for any other probability measure ν on X,
$$\displaystyle\begin{array}{rcl} \bar{W}(\mu \bar{K},\nu \bar{K}) \leq \left (1 -\frac{1 -\| C^{\mu }\|_{2}} {\vert T\vert } \right )\bar{W}(\mu,\nu ).& & {}\\ \end{array}$$Since one can obtain contraction estimates for Markov kernels using Wasserstein matrices, it is natural to ask whether Chatterjee’s result can be derived as a special case of a more general method, which would let us freely choose an arbitrary Markov kernel K that leaves μ invariant and control the constants in the resulting concentration inequality by means of a judicious choice of a Wasserstein matrix for K. Such a method would most likely rely on general comparison theorems for Gibbs measures [31].
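The invariance \(\mu =\mu \bar{K}\) is straightforward to verify on a toy example. The following Python sketch (a two-coordinate binary space with illustrative weights, not taken from the chapter) builds the Gibbs sampler \(\bar{K}\) explicitly as a stochastic matrix and checks that \(\mu\) is invariant:

```python
import numpy as np
from itertools import product

# a strictly positive probability measure mu on {0,1}^2 (illustrative weights)
states = list(product([0, 1], repeat=2))
mu = dict(zip(states, [0.4, 0.1, 0.2, 0.3]))

def gibbs_kernel(mu, states, n=2):
    """Build the Gibbs sampler Kbar as a stochastic matrix: pick a
    coordinate i uniformly at random, resample x_i from mu_i(.|x_{-i}),
    and keep the remaining coordinates fixed."""
    idx = {s: k for k, s in enumerate(states)}
    K = np.zeros((len(states), len(states)))
    for x in states:
        for i in range(n):
            # conditional weights of coordinate i given the others
            w = {v: mu[tuple(v if k == i else x[k] for k in range(n))]
                 for v in (0, 1)}
            total = w[0] + w[1]
            for v in (0, 1):
                y = tuple(v if k == i else x[k] for k in range(n))
                K[idx[x], idx[y]] += (1.0 / n) * w[v] / total
    return K

K = gibbs_kernel(mu, states)
mu_vec = np.array([mu[s] for s in states])
assert np.allclose(K.sum(axis=1), 1.0)   # Kbar is a Markov kernel
assert np.allclose(mu_vec @ K, mu_vec)   # mu is Kbar-invariant
```

Invariance holds because each single-coordinate update is reversible with respect to \(\mu\), and \(\bar{K}\) is a uniform mixture of such updates.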
References
Sergey G. Bobkov and Friedrich Götze. Exponential integrability and transportation cost related to logarithmic Sobolev inequalities. J. Funct. Anal., 163:1–28, 1999.
Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013.
Pietro Caputo, Georg Menz, and Prasad Tetali. Approximate tensorization of entropy at high temperature. Annales de la Faculté des Sciences de Toulouse Sér. 6, 24(4):691–716, 2015.
Sourav Chatterjee. Concentration inequalities with exchangeable pairs. PhD thesis, Stanford University, 2005.
Sourav Chatterjee. Stein’s method for concentration inequalities. Probability Theory and Related Fields, 138:305–321, 2007.
Jean-René Chazottes, Pierre Collet, Christof Külske, and Frank Redig. Concentration inequalities for random fields via coupling. Probability Theory and Related Fields, 137 (1-2):201–225, 2007.
Imre Csiszár. Information-type measures of difference of probability distributions and indirect observations. Studia Sci. Math. Hungar., 2:299–318, 1967.
H. Djellout, A. Guillin, and L. Wu. Transportation cost-information inequalities and applications to random dynamical systems and diffusions. Ann. Probab., 32(3B):2702–2732, 2004.
Roland L. Dobrushin. Prescribing a system of random variables by conditional distributions. Theory of Probability and Its Applications, 15(3):458–486, 1970. Translated from Russian.
Hans Föllmer. Tail structure of Markov chains on infinite product spaces. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 50:273–285, 1979.
Hans Föllmer. A covariance estimate for Gibbs measures. Journal of Functional Analysis, 46:387–395, 1982.
Hans-Otto Georgii. Gibbs Measures and Phase Transitions, volume 9 of de Gruyter Studies in Mathematics. Walter de Gruyter & Co., 2nd edition, 2011.
Sheldon Goldstein. Maximal coupling. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 46:193–204, 1979.
Aryeh Kontorovich. Obtaining measure concentration from Markov contraction. Markov Processes and Related Fields, 4:613–638, 2012.
Aryeh Kontorovich and Roi Weiss. Uniform Chernoff and Dvoretzky-Kiefer-Wolfowitz-type inequalities for Markov chains and related processes. Journal of Applied Probability, 51:1–14, 2014.
Aryeh (Leonid) Kontorovich. Measure Concentration of Strongly Mixing Processes with Applications. PhD thesis, Carnegie Mellon University, 2007.
Leonid (Aryeh) Kontorovich and Kavita Ramanan. Concentration Inequalities for Dependent Random Variables via the Martingale Method. Ann. Probab., 36(6):2126–2158, 2008.
Solomon Kullback. A lower bound for discrimination information in terms of variation. IEEE Trans. Inform. Theory, 13:126–127, 1967. Correction, volume 16, p. 652, 1970.
Christof Külske. Concentration inequalities for functions of Gibbs fields with application to diffraction and random Gibbs measures. Commun. Math. Phys., 239:29–51, 2003.
Paul Lévy. Problèmes concrets d’analyse fonctionnelle. Gauthier-Villars, Paris, 1951. 2d ed.
Andrei A. Markov. Extension of the law of large numbers to dependent quantities. Izvestiia Fiz.-Matem. Obsch. Kazan Univ., 15:135–156, 1906.
Katalin Marton. Bounding \(\bar{d}\)-distance by informational divergence: a method to prove measure concentration. Ann. Probab., 24(2):857–866, 1996.
Katalin Marton. Measure concentration for a class of random processes. Probability Theory and Related Fields, 110(3):427–439, 1998.
Katalin Marton. Measure concentration and strong mixing. Studia Scientiarum Mathematicarum Hungarica, 19(1-2):95–113, 2003.
Katalin Marton. Measure concentration for Euclidean distance in the case of dependent random variables. Ann. Probab., 32(3):2526–2544, 2004.
Katalin Marton. An inequality for relative entropy and logarithmic Sobolev inequalities in Euclidean spaces. Journal of Functional Analysis, 264:34–61, 2013.
Katalin Marton. Logarithmic Sobolev inequalities in discrete product spaces: a proof by a transportation cost distance. arXiv preprint arXiv:1507.02803, July 2015.
Bernard Maurey. Construction de suites symétriques. C. R. Acad. Sci. Paris Sér. A-B 288, (14):A679–A681, 1979.
Daniel Paulin. Concentration inequalities for Markov chains by Marton couplings and spectral methods. Electronic Journal of Probability, 20:1–32, 2015.
Maxim Raginsky and Igal Sason. Concentration of Measure Inequalities in Information Theory, Communications, and Coding. Foundations and Trends in Communications and Information Theory. Now Publishers, 2nd edition, 2014.
Patrick Rebeschini and Ramon van Handel. Comparison theorems for Gibbs measures. J. Stat. Phys., 157:234–281, 2014.
Emmanuel Rio. Inégalités de Hoeffding pour les fonctions lipschitziennes de suites dépendantes. C. R. Acad. Sci. Paris Sér. I Math., 330(10):905–908, 2000.
Paul-Marie Samson. Concentration of measure inequalities for Markov chains and Φ-mixing processes. Ann. Probab., 28(1):416–461, 2000.
Ramon van Handel. Probability in high dimension. ORF 570 Lecture Notes, Princeton University, June 2014.
Olivier Wintenberger. Weak transport inequalities and applications to exponential and oracle inequalities. Electronic Journal of Probability, 20(114):1–27, 2015.
Liming Wu. Poincaré and transportation inequalities for Gibbs measures under the Dobrushin uniqueness condition. Ann. Probab., 34(5):1960–1989, 2006.
Acknowledgements
The second author would like to thank IMA for an invitation to speak at the workshop on Information Theory and Concentration Phenomena in Spring 2015, which was part of the annual program “Discrete Structures: Analysis and Applications.” The authors are grateful to the anonymous referee for several constructive suggestions, and to Dr. Naci Saldi for spotting an error in an earlier version of the manuscript. A. Kontorovich was partially supported by the Israel Science Foundation (grant No. 1141/12) and a Yahoo Faculty award. M. Raginsky would like to acknowledge the support of the U.S. National Science Foundation via CAREER award CCF–1254041.
© 2017 Springer Science+Business Media LLC
Kontorovich, A., Raginsky, M. (2017). Concentration of Measure Without Independence: A Unified Approach Via the Martingale Method. In: Carlen, E., Madiman, M., Werner, E. (eds) Convexity and Concentration. The IMA Volumes in Mathematics and its Applications, vol 161. Springer, New York, NY. https://doi.org/10.1007/978-1-4939-7005-6_6
Print ISBN: 978-1-4939-7004-9
Online ISBN: 978-1-4939-7005-6