1 Introduction

At its most abstract, the concentration of measure phenomenon may be summarized as follows: a function of several weakly dependent random variables that is not too sensitive to any of the individual arguments will tend to take values very close to its expectation. This phenomenon is most completely understood in the case of independent arguments, and the recent book [2] provides an excellent survey (see also [30] for an exposition from the viewpoint of, and with applications to, information theory).

The case of dependent arguments has yet to mature into such a unified, overarching theory. The earliest concentration results for nonproduct measures were established for Haar measures on various groups, and relied strongly on the highly symmetric nature of the Haar measure in question. These results include Lévy’s classic isoperimetric inequality on the sphere [20] and Maurey’s concentration inequality on the permutation group [28]. To the best of our knowledge, the first concentration result for a nonproduct, non-Haar measure is due to Marton [22], where she proved a McDiarmid-type bound for contracting Markov chains. A flurry of activity followed. Besides Marton’s own follow-up work [23, 24, 25], the transportation method she pioneered was extended by Samson [33], and martingale techniques [32, 6, 17], as well as methods relying on the Dobrushin interdependence matrix [19, 4, 36], have been employed in obtaining concentration results for nonproduct measures. The underlying theme is that the independence assumption may be relaxed to one of weak dependence, the latter being quantified by various mixing coefficients.

This chapter is an attempt at providing an abstract unifying framework that generalizes and sharpens some of the above results. This framework combines classical martingale techniques with the method of Wasserstein matrices [10]. In particular, we rely on Wasserstein matrices to obtain general-purpose quantitative estimates of the local variability of a function of many dependent random variables after taking a conditional expectation with respect to a subset of the variables. A concentration inequality in a metric space must necessarily capture the interplay between the metric and the distribution, and, in our setting, Wasserstein matrices provide the ideal analytical tool for this task. As an illustration, we recover (and, in certain cases, sharpen) some results of [19, 6, 17] by demonstrating all of these to be special cases of the Wasserstein matrix method.

The remainder of the chapter is organized as follows. Section 2 is devoted to setting up the basic notation and preliminary definitions. A brief discussion of the concentration of measure phenomenon in high-dimensional spaces is presented in Section 3, together with a summary of key methods to establish concentration under the independence assumption. Next, in Section 4, we present our abstract martingale technique and then demonstrate its wide scope in Section 5 by deriving many of the previously published concentration inequalities as special cases. We conclude in Section 6 by listing some open questions.

2 Preliminaries and Notation

2.1 Metric Probability Spaces

A metric probability space is a triple (Ω, μ, d), where Ω is a Polish space equipped with its Borel σ-field, μ is a Borel probability measure on Ω, and d is a metric on Ω, assumed to be a measurable function on the product space Ω ×Ω. We do not assume that d is the same metric that metrizes the Polish topology on Ω.

2.2 Product Spaces

Since concentration of measure is a high-dimensional phenomenon, a natural setting for studying it is that of a product space. Let T be a finite index set, which we identify with the set [n] ≜ {1, …, n}, where n = | T | (this amounts to fixing some linear ordering of the elements of T). We will use the following notation for subintervals of T: [i] ≜ {1, …, i}; [i, j] ≜ {i, i + 1, …, j} for i ≤ j; (i, j] ≜ {i + 1, …, j} for i < j; (i, j) ≜ {i + 1, …, j − 1} for i < j − 1; etc.

With each i ∈ T, we associate a measurable space \((\mathsf{X}_{i},\mathcal{B}_{i})\), where X i is a Polish space and \(\mathcal{B}_{i}\) is its Borel σ-field. For each I ⊆ T, we will equip the product space X I ≜ ∏ i ∈ I X i with the product σ-field \(\mathcal{B}^{I} \triangleq \bigotimes _{i\in I}\mathcal{B}_{i}\). When I = T, we will simply write X and \(\mathcal{B}\). We will write x I and x for a generic element of X I and X, respectively. Given two sets I, J ⊂ T with \(I \cap J = \varnothing\), the concatenation of x I ∈ X I and z J ∈ X J is defined as y = x I z J ∈ X I∪J by setting

$$\displaystyle\begin{array}{rcl} y_{i} = \left \{\begin{array}{@{}l@{\quad }l@{}} x_{i},\quad &i \in I \\ z_{i},\quad &i \in J \end{array} \right..& & {}\\ \end{array}$$

Given a random object X = (X i ) i ∈ T taking values in X according to a probability law μ, we will denote by \(\mathbb{P}_{\mu }[\cdot ]\) and \(\mathbb{E}_{\mu }[\cdot ]\) the probability and expectation with respect to μ, by μ I(dx I | x J) the regular conditional probability law of X I given X J = x J, and by μ I(dx I) the marginal probability law of X I. When I = { i}, we will write μ i (⋅ ) and μ i (⋅ | x J).

For each i ∈ T, we fix a metric ρ i on X i , which is assumed to be measurable with respect to the product σ-field \(\mathcal{B}_{i} \otimes \mathcal{B}_{i}\). For each I ⊆ T, equip X I with the product metric ρ I, where

$$\displaystyle\begin{array}{rcl} \rho ^{I}(x^{I},z^{I}) \triangleq \sum _{ i\in I}\rho _{i}(x_{i},z_{i}),\qquad \forall x^{I},z^{I} \in \mathsf{X}^{I}.& & {}\\ \end{array}$$

When I ≡ T, we will simply write ρ instead of ρ T. In this way, for any Borel probability measure μ on X, we can introduce a “global” metric probability space (X, μ, ρ), as well as “local” metric probability spaces (X I, μ I, ρ I) and (X I, μ I(⋅ | x J), ρ I ) for all I, J ⊂ T and all x J ∈ X J.

2.3 Couplings and Transportation Distances

Let Ω be a Polish space. A coupling of two Borel probability measures μ and ν on Ω is a Borel probability measure P on the product space Ω ×Ω, such that P(⋅ ×Ω) = μ and P(Ω ×⋅ ) = ν. We denote the set of all couplings of μ and ν by \(\mathcal{C}(\mu,\nu )\). Let d be a lower-semicontinuous metric on Ω. We denote by Lip(Ω, d) the space of all functions \(\varOmega \rightarrow \mathbb{R}\) that are Lipschitz with respect to d, and by Lip c (Ω, d) the subset of Lip(Ω, d) consisting of c-Lipschitz functions. The L 1 Wasserstein (or transportation) distance between μ and ν is defined as

$$\displaystyle\begin{array}{rcl} W_{d}(\mu,\nu ) \triangleq \inf _{\mathbf{P}\in \mathcal{C}(\mu,\nu )}\mathbb{E}_{\mathbf{P}}[d(X,Y )],& &{}\end{array}$$
(1)

where (X, Y ) is a random element of Ω ×Ω with law P. The transportation distance admits a dual (Kantorovich–Rubinstein) representation

$$\displaystyle\begin{array}{rcl} W_{d}(\mu,\nu ) =\sup _{f\in \mathrm{Lip}_{1}(\varOmega,d)}\left \vert \int _{\varOmega }f\mathrm{d}\mu -\int _{\varOmega }f\mathrm{d}\nu \right \vert.& &{}\end{array}$$
(2)

For example, when we equip Ω with the trivial metric d(ω, ω′) = 1{ω ≠ ω′}, the corresponding Wasserstein distance coincides with the total variation distance:

$$\displaystyle\begin{array}{rcl} W_{d}(\mu,\nu ) = \left \Vert \mu -\nu \right \Vert _{\mbox{ TV}} =\sup _{A}\vert \mu (A) -\nu (A)\vert,& & {}\\ \end{array}$$

where the supremum is over all Borel subsets of Ω.
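For a concrete feel for this identity, here is a small numerical sketch of ours (not from the chapter): on a five-point space with two randomly generated distributions, it computes the total variation distance in two equivalent ways and the optimal-coupling cost for the trivial metric, and the three numbers coincide.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = rng.random(5); mu /= mu.sum()
nu = rng.random(5); nu /= nu.sum()

tv_half_l1 = 0.5 * np.abs(mu - nu).sum()       # ||mu - nu||_TV as half the L1 distance
tv_sup_a = np.clip(mu - nu, 0, None).sum()     # sup_A |mu(A) - nu(A)|
# for the trivial metric, the coupling that keeps mass min(mu_k, nu_k) on the
# diagonal is optimal, so inf_P P{X != Y} = 1 - sum_k min(mu_k, nu_k)
w_trivial = 1.0 - np.minimum(mu, nu).sum()

print(tv_half_l1, tv_sup_a, w_trivial)         # all three values agree
```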

In the context of the product space (X, ρ) defined earlier, we will use the shorthand W i for \(W_{\rho _{i}}\), W I for \(W_{\rho ^{I}}\), and W for W ρ .

2.4 Markov Kernels and Wasserstein Matrices

A Markov kernel on X is a mapping \(K: \mathsf{X} \times \mathcal{B}\rightarrow [0,1]\), such that xK(x, A) is measurable for each \(A \in \mathcal{B}\), and K(x, ⋅ ) is a Borel probability measure on X for each x ∈ X. Given a Markov kernel K and a bounded measurable function \(f: \mathsf{X} \rightarrow \mathbb{R}\), we denote by Kf the bounded measurable function

$$\displaystyle\begin{array}{rcl} Kf(x) \triangleq \int _{\mathsf{X}}f(y)K(x,\mathrm{d}y),\qquad x \in \mathsf{X}.& & {}\\ \end{array}$$

Likewise, given a Borel probability measure μ on X, we denote by μ K the Borel probability measure

$$\displaystyle\begin{array}{rcl} \mu K(A) \triangleq \int _{\mathsf{X}}K(x,A)\mu (\mathrm{d}x),\qquad A \in \mathcal{B}.& & {}\\ \end{array}$$

It is not hard to see that \(\int _{\mathsf{X}}f\,\mathrm{d}(\mu K) =\int _{\mathsf{X}}(Kf)\,\mathrm{d}\mu \).

Given a measurable function \(f: \mathsf{X} \rightarrow \mathbb{R}\), we define the local oscillation of f at i ∈ T as

$$\displaystyle\begin{array}{rcl} \delta _{i}(f) \triangleq \sup _{{ x,z\in \mathsf{X} \atop x^{T\setminus \{i\}}=z^{T\setminus \{i\}}} }\frac{\vert f(x) - f(z)\vert } {\rho _{i}(x_{i},z_{i})},& & {}\\ \end{array}$$

where we follow the convention 0∕0 = 0. This quantity measures the variability of f in its ith argument when all other arguments are held fixed. As will become evident later on, our martingale technique for establishing concentration inequalities for a given function \(f: \mathsf{X} \rightarrow \mathbb{R}\) requires controlling the local oscillations δ i (Kf) in terms of the local oscillations δ i (f) for appropriately chosen Markov kernels K.

To get an idea of what is involved, let us consider the simple case when each X i is endowed with the scaled trivial metric ρ i (x i , z i ) ≜ α i 1{x i  ≠ z i }, where α i  > 0 is some fixed constant. Then

$$\displaystyle\begin{array}{rcl} \delta _{i}(f) = \frac{1} {\alpha _{i}} \sup \Big\{\vert f(x) - f(z)\vert: x,z \in \mathsf{X},\,x^{T\setminus \left \{i\right \}} = z^{T\setminus \left \{i\right \}}\Big\}.& & {}\\ \end{array}$$

The corresponding metric ρ on X is the weighted Hamming metric

$$\displaystyle\begin{array}{rcl} \rho _{\boldsymbol{\alpha }}(x,z) \triangleq \sum _{i\in T}\alpha _{i}\mathbf{1}\{x_{i}\neq z_{i}\}.& &{}\end{array}$$
(3)

Fix a Markov kernel K on X. The Dobrushin contraction coefficient of K (also associated in the literature with Doeblin’s name) is the smallest θ ≥ 0 for which \(\left \Vert K(x,\cdot ) - K(z,\cdot )\right \Vert _{\mbox{ TV}} \leq \theta\) holds for all x, z ∈ X. The term contraction is justified by the well-known inequality (apparently going back to Markov himself [21, §5])

$$\displaystyle\begin{array}{rcl} \left \Vert \mu K -\nu K\right \Vert _{\mbox{ TV}} \leq \theta \left \Vert \mu -\nu \right \Vert _{\mbox{ TV}},& &{}\end{array}$$
(4)

which holds for all probability measures μ, ν on X. Then we have the following estimate:

Proposition 2.1

If K is a Markov kernel on X with Dobrushin coefficient θ, then for every i ∈ T and for every \(f \in \mathrm{Lip}(\mathsf{X},\rho _{\boldsymbol{\alpha }})\), we have

$$\displaystyle\begin{array}{rcl} \delta _{i}(Kf) \leq \frac{\theta } {\alpha _{i}}\sum _{j\in T}\alpha _{j}\delta _{j}(f).& &{}\end{array}$$
(5)

Proof

Fix an index i ∈ T and any two x, z ∈ X that differ only in the ith coordinate: x T∖{i} = z T∖{i} and x i  ≠ z i . Pick an arbitrary coupling \(\mathbf{P}_{x,z} \in \mathcal{C}(K(x,\cdot ),K(z,\cdot ))\). Then

$$\displaystyle\begin{array}{rcl} \vert Kf(x) - Kf(z)\vert & =& \left \vert \int _{\mathsf{X}}K(x,\mathrm{d}u)f(u) -\int _{\mathsf{X}}K(z,\mathrm{d}y)f(y)\right \vert {}\\ & =& \left \vert \int _{\mathsf{X}\times \mathsf{X}}\mathbf{P}_{x,z}(\mathrm{d}u,\mathrm{d}y)\big(f(u) - f(y)\big)\right \vert {}\\ &\leq & \sum _{j\in T}\delta _{j}(f)\int _{\mathsf{X}\times \mathsf{X}}\mathbf{P}_{x,z}(\mathrm{d}u,\mathrm{d}y)\rho _{j}(u_{j},y_{j}) {}\\ & =& \sum _{j\in T}\alpha _{j}\delta _{j}(f)\int _{\mathsf{X}\times \mathsf{X}}\mathbf{P}_{x,z}(\mathrm{d}u,\mathrm{d}y)\mathbf{1}\{u_{j}\neq y_{j}\} {}\\ & \leq & \sum _{j\in T}\alpha _{j}\delta _{j}(f) \cdot \int _{\mathsf{X}\times \mathsf{X}}\mathbf{P}_{x,z}(\mathrm{d}u,\mathrm{d}y)\mathbf{1}\{u\neq y\}, {}\\ \end{array}$$

where the first inequality is by the definition of the local oscillations δ j (f), while the second one follows from the obvious implication u j  ≠ y j  ⇒ u ≠ y. Taking the infimum of both sides over all couplings \(\mathbf{P}_{x,z} \in \mathcal{C}(K(x,\cdot ),K(z,\cdot ))\) yields

$$\displaystyle\begin{array}{rcl} \vert Kf(x) - Kf(z)\vert & \leq & \sum _{j\in T}\alpha _{j}\delta _{j}(f) \cdot \left \Vert K(x,\cdot ) - K(z,\cdot )\right \Vert _{\mbox{ TV}} {}\\ & \leq & \theta \sum _{j\in T}\alpha _{j}\delta _{j}(f). {}\\ \end{array}$$

Finally, dividing both sides of the above inequality by α i and taking the supremum over all choices of x, z that differ only in the ith coordinate, we obtain (5). □ 
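As a quick numerical illustration of Proposition 2.1, here is a toy example of ours (not from the chapter): X = {0, 1}² with the trivial metric on each coordinate (so α i  = 1), a randomly generated kernel K on the four configurations, and a random f; the sketch checks inequality (5) directly.

```python
import itertools
import numpy as np

states = list(itertools.product([0, 1], repeat=2))          # the four configurations
rng = np.random.default_rng(1)
K = rng.random((4, 4)); K /= K.sum(axis=1, keepdims=True)   # a Markov kernel on X
f = rng.random(4)

theta = max(0.5 * np.abs(K[a] - K[b]).sum()                 # Dobrushin coefficient of K
            for a in range(4) for b in range(4))

def local_osc(g, i):
    # delta_i(g) when rho_i(x_i, z_i) = 1{x_i != z_i}
    return max(abs(g[a] - g[b])
               for a in range(4) for b in range(4)
               if states[a][i] != states[b][i]
               and all(states[a][j] == states[b][j] for j in range(2) if j != i))

Kf = K @ f
for i in range(2):
    lhs = local_osc(Kf, i)
    rhs = theta * (local_osc(f, 0) + local_osc(f, 1))
    print(i + 1, lhs, rhs, lhs <= rhs + 1e-12)              # inequality (5) holds
```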

One shortcoming of the above result (which is nontrivial only under the rather strong condition

$$\displaystyle\begin{array}{rcl} \theta <\frac{\alpha _{i}} {\alpha _{j}} <\theta ^{-1}& &{}\end{array}$$
(6)

for all i, j ∈ T) is that it gives only a very rough idea of the influence of δ j (f) for j ∈ T on δ i (Kf). For example, if α 1 = ⋯ = α n  = 1, then the condition (6) reduces to the Dobrushin contraction condition θ < 1, and the inequality (5) becomes

$$\displaystyle{ \delta _{i}(Kf) \leq \theta \sum _{j\in T}\delta _{j}(f), }$$

suggesting that all of the δ j (f)’s influence δ i (Kf) equally. However, this picture can be refined. To that end, we introduce the notion of a Wasserstein matrix following Föllmer [10]. Let us denote by \(\boldsymbol{\delta }(f)\) the vector (δ i (f)) i ∈ T . We say that a nonnegative matrix V = (V ij ) i, j ∈ T is a Wasserstein matrix for K if, for every f ∈ Lip(X, ρ) and for every i ∈ T,

$$\displaystyle\begin{array}{rcl} \delta _{i}(Kf) \leq \sum _{j\in T}V _{ij}\delta _{j}(f),& &{}\end{array}$$
(7)

or, in vector form, if \(\boldsymbol{\delta }(Kf)\preceq V \boldsymbol{\delta }(f)\).

One of our main objectives will be to show that concentration inequalities for functions f of X ∼ μ can be obtained using Wasserstein matrices for certain Markov kernels K related to μ. In order to motivate the introduction of Wasserstein matrices, we record a couple of contraction estimates for Markov kernels that may be of independent interest. To that end, we introduce another coupling-based distance between probability measures [2, Chap. 8]: for two Borel probability measures μ and ν on X, define

$$\displaystyle\begin{array}{rcl} \bar{W}(\mu,\nu ) \triangleq \inf _{\mathbf{P}\in \mathcal{C}(\mu,\nu )}\sqrt{\sum _{i\in T } \left (\mathbb{E}_{\mathbf{P} } [\rho _{i } (X_{i }, Y _{i } )] \right ) ^{2}},& &{}\end{array}$$
(8)

where (X, Y ) is a random element of X ×X. Even though \(\bar{W}\) is not a Wasserstein distance, we can use the inequality \(\sqrt{ a + b} \leq \sqrt{a} + \sqrt{b}\) for a, b ≥ 0 to show that \(\bar{W}(\mu,\nu ) \leq W(\mu,\nu )\).

Proposition 2.2

Let V be a Wasserstein matrix for a Markov kernel K on X. Then for any Lipschitz function \(f: \mathsf{X} \rightarrow \mathbb{R}\),

$$\displaystyle\begin{array}{rcl} \left \vert \mathbb{E}_{\mu K}[f(X)] - \mathbb{E}_{\nu K}[f(X)]\right \vert \leq \left \|V \boldsymbol{\delta }(f)\right \|_{\ell^{2}(T)}\bar{W}(\mu,\nu ).& &{}\end{array}$$
(9)

Proof

Fix an arbitrary coupling \(\mathbf{P} \in \mathcal{C}(\mu,\nu )\) and let (X, Y ) be a random element of X ×X with law P. Then

$$\displaystyle\begin{array}{rcl} \left \vert \mathbb{E}_{\mu K}[f(X)] - \mathbb{E}_{\nu K}[f(X)]\right \vert & =& \left \vert \mathbb{E}_{\mu }[Kf(X)] - \mathbb{E}_{\nu }[Kf(X)]\right \vert {}\\ & =& \left \vert \mathbb{E}_{\mathbf{P}}\left [Kf(X) - Kf(Y )\right ]\right \vert {}\\ &\leq & \sum _{i\in T}\delta _{i}(Kf) \cdot \mathbb{E}_{\mathbf{P}}[\rho _{i}(X_{i},Y _{i})] {}\\ & \leq & \sum _{i\in T}\sum _{j\in T}V _{ij}\delta _{j}(f) \cdot \mathbb{E}_{\mathbf{P}}[\rho _{i}(X_{i},Y _{i})]. {}\\ \end{array}$$

where in the last step we have used the definition of the Wasserstein matrix. Using the Cauchy–Schwarz inequality, we obtain

$$\displaystyle\begin{array}{rcl} \left \vert \mathbb{E}_{\mu K}[f(X)] - \mathbb{E}_{\nu K}[f(X)]\right \vert & \leq & \sqrt{\sum _{i\in T } \Big\vert \sum _{j\in T } V _{ij } \delta _{j } (f)\Big\vert ^{2 } \cdot \sum _{i\in T } \left (\mathbb{E}_{\mathbf{P} } [\rho _{i } (X_{i }, Y _{i } )] \right ) ^{2}} {}\\ & =& \left \|V \boldsymbol{\delta }(f)\right \|_{\ell^{2}(T)} \cdot \sqrt{\sum _{i\in T } \left (\mathbb{E}_{\mathbf{P} } [\rho _{i } (X_{i }, Y _{i } )] \right ) ^{2}}. {}\\ \end{array}$$

Taking the infimum of both sides over all \(\mathbf{P} \in \mathcal{C}(\mu,\nu )\), we obtain (9). □ 

Corollary 2.3

Let V be a Wasserstein matrix for a Markov kernel K on X. Then, for any two Borel probability measures μ and ν on X,

$$\displaystyle\begin{array}{rcl} W(\mu K,\nu K) \leq \| V \mathbf{1}\|_{\ell^{2}(T)}\bar{W}(\mu,\nu ),& &{}\end{array}$$
(10)

where \(\mathbf{1} \in \mathbb{R}^{T}\) is the vector of all ones, and therefore

$$\displaystyle\begin{array}{rcl} W(\mu K,\nu K) \leq \| V \mathbf{1}\|_{\ell^{2}(T)}W(\mu,\nu ).& & {}\\ \end{array}$$

Proof

A function \(f: \mathsf{X} \rightarrow \mathbb{R}\) belongs to Lip1(X, ρ) if and only if \(\boldsymbol{\delta }(f) \in [0,1]^{T}\). Using the dual representation (2) of W and applying Proposition 2.2, we can write

$$\displaystyle\begin{array}{rcl} W(\mu K,\nu K)& =& \sup _{f\in \mathrm{Lip}_{1}(\mathsf{X},\rho )}\left \vert \mathbb{E}_{\mu K}[f(X)] - \mathbb{E}_{\nu K}[f(X)]\right \vert {}\\ &\leq & \sup _{\xi \in [0,1]^{T}}\|V \xi \|_{\ell^{2}(T)}\bar{W}(\mu,\nu ). {}\\ \end{array}$$

Since V is a nonnegative matrix, the supremum is achieved by ξ = 1. □ 

2.5 Relative Entropy

Finally, we will need some key notions from information theory. The relative entropy (or information divergence) between two probability measures μ, ν on a space Ω is defined as

$$\displaystyle\begin{array}{rcl} D(\nu \|\mu ) \triangleq \left \{\begin{array}{@{}l@{\quad }l@{}} \int _{\varOmega }\mathrm{d}\mu \,f\log f,\quad &\mbox{ if }\nu \ll \mu \text{ with }f = \mathrm{d}\nu /\mathrm{d}\mu \\ +\infty,\quad &\text{otherwise} \end{array} \right..& & {}\\ \end{array}$$

We use natural logarithms throughout the chapter. The relative entropy is related to the total variation distance via Pinsker’s inequality

$$\displaystyle\begin{array}{rcl} \left \Vert \mu -\nu \right \Vert _{\mbox{ TV}} \leq \sqrt{\frac{1} {2}D(\mu \|\nu )}.& &{}\end{array}$$
(11)
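As a quick sanity check of (11), the following sketch of ours verifies Pinsker’s inequality numerically for Bernoulli distributions over a made-up grid of parameters.

```python
import numpy as np

def kl(p, q):
    # D(Bernoulli(p) || Bernoulli(q)) in nats
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

worst = -np.inf
for p in np.linspace(0.05, 0.95, 19):
    for q in np.linspace(0.05, 0.95, 19):
        tv = abs(p - q)                                        # TV between the two laws
        worst = max(worst, tv - np.sqrt(0.5 * kl(p, q)))
print("largest value of TV minus the Pinsker bound:", worst)   # nonpositive, as (11) requires
```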

3 Concentration of Measure and Sufficient Conditions

In this section, we give a precise definition of the concentration of measure phenomenon, review several sufficient conditions for it to hold, and briefly discuss how it can be established under the independence assumption via tensorization. For more details and further references, the reader can consult [2] or [30].

We say that the metric probability space (X, μ, ρ) has the concentration of measure property if there exists a positive constant c > 0, such that, for every Lipschitz function \(f: \mathsf{X} \rightarrow \mathbb{R}\),

$$\displaystyle\begin{array}{rcl} \mathbb{P}_{\mu }\left \{f(X) - \mathbb{E}_{\mu }[f(X)] \geq t\right \} \leq e^{-t^{2}/2c\|f\|_{\mathrm{ Lip}}^{2} },\qquad \forall t> 0& &{}\end{array}$$
(12)

where

$$\displaystyle\begin{array}{rcl} \|f\|_{\mathrm{Lip}} \triangleq \sup _{{ x,y\in \mathsf{X} \atop x\neq y} }\frac{\vert f(x) - f(y)\vert } {\rho (x,y)} & & {}\\ \end{array}$$

is the Lipschitz constant of f. A sufficient (and, up to constants, necessary) condition for (12) is that, for every f ∈ Lip1(X, ρ), the random variable f(X) with X ∼ μ is c-subgaussian, i.e.,

$$\displaystyle\begin{array}{rcl} \log \mathbb{E}_{\mu }\left [e^{\lambda (f(X)-\mathbb{E}_{\mu }[f(X)])}\right ] \leq \frac{c\lambda ^{2}} {2},\qquad \forall \lambda \in \mathbb{R}.& &{}\end{array}$$
(13)

A fundamental result of Bobkov and Götze [1] states that the subgaussian estimate (13) holds for all f ∈ Lip1(X, ρ) if and only if μ satisfies the so-called transportation-information inequality

$$\displaystyle\begin{array}{rcl} W(\mu,\nu ) \leq \sqrt{2c\,D(\nu \|\mu )},& &{}\end{array}$$
(14)

where ν ranges over all Borel probability measures on X. We will use the shorthand μ ∈ T ρ (c) to denote the fact that the inequality (14) holds for all ν. The key role of transportation-information inequalities in characterizing the concentration of measure phenomenon was first recognized by Marton in a breakthrough paper [22], with further developments in [23, 24, 25].

The entropy method (see, e.g., [2, Chap. 6] and [30, Chap. 3]) provides another route to establishing (13). Its underlying idea can be briefly described as follows. Given a measurable function \(f: \mathsf{X} \rightarrow \mathbb{R}\), consider the logarithmic moment-generating function

$$\displaystyle{ \psi _{f}(\lambda ) \triangleq \log \mathbb{E}_{\mu }\left [e^{\lambda (f(X)-\mathbb{E}_{\mu }[f(X)])}\right ] }$$

of the centered random variable \(f(X) - \mathbb{E}_{\mu }[f(X)]\). For any λ ≠ 0, introduce the tilted probability measure μ (λ f) with

$$\displaystyle{ \frac{\mathrm{d}\mu ^{(\lambda f)}} {\mathrm{d}\mu } = \frac{e^{\lambda f}} {\mathbb{E}_{\mu }[e^{\lambda f}]} = \frac{e^{\lambda (f-\mathbb{E}_{\mu }f)}} {e^{\psi _{f}(\lambda )}}. }$$

Then a simple calculation shows that the relative entropy D(μ (λ f) ∥ μ) can be expressed as

$$\displaystyle\begin{array}{rcl} D(\mu ^{(\lambda f)}\|\mu ) =\lambda \psi _{ f}^{{\prime}}(\lambda ) -\psi _{ f}(\lambda ) \equiv \lambda ^{2}\left (\frac{\psi _{f}(\lambda )} {\lambda } \right )'& & {}\\ \end{array}$$

where the prime denotes differentiation with respect to λ. Using the fact that ψ f (0) = 0 and integrating, we obtain the following formula for ψ f (λ):

$$\displaystyle\begin{array}{rcl} \psi _{f}(\lambda ) =\lambda \int _{ 0}^{\lambda }\frac{D(\mu ^{(tf)}\|\mu )} {t^{2}} \mathrm{d}t.& &{}\end{array}$$
(15)

This representation is at the basis of the so-called Herbst argument, which for our purposes can be summarized as follows:

Lemma 3.1 (Herbst)

The metric probability space (X,μ,ρ) has the concentration property with constant c if, for any f ∈Lip1(X,ρ),

$$\displaystyle\begin{array}{rcl} D(\mu ^{(tf)}\|\mu ) \leq \frac{ct^{2}} {2},\qquad \forall t> 0.& &{}\end{array}$$
(16)

Remark 3.2

Up to a constant, the converse is also true [34, Prob. 3.12]: if the subgaussian estimate (13) holds for every f ∈ Lip1(X, ρ), then

$$\displaystyle\begin{array}{rcl} D(\mu ^{(tf)}\|\mu ) \leq 2ct^{2},\qquad \forall t> 0& & {}\\ \end{array}$$

for every f ∈ Lip1(X, ρ).

In this way, the problem of establishing the concentration phenomenon reduces to showing that (16) holds for every f ∈ Lip1(X, ρ), typically via logarithmic Sobolev inequalities or other functional inequalities.
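Before turning to the independent case, here is a small numerical check of the representation (15) on a made-up three-point distribution (a toy example of ours): it compares ψ f (λ) computed directly with the integral on the right-hand side of (15).

```python
import numpy as np

p = np.array([0.2, 0.5, 0.3])                 # a made-up mu on a 3-point space
f = np.array([0.0, 1.0, 3.0])                 # a made-up function f
lam = 2.0

def psi(l):
    # logarithmic moment-generating function of f(X) - E_mu[f(X)]
    return np.log(np.sum(p * np.exp(l * (f - np.dot(p, f)))))

def d_tilted(t):
    # D(mu^{(tf)} || mu) for the tilted measure d mu^{(tf)} / d mu = e^{tf} / E[e^{tf}]
    q = p * np.exp(t * f)
    q = q / q.sum()
    return np.sum(q * np.log(q / p))

ts = np.linspace(1e-6, lam, 20001)
integrand = np.array([d_tilted(t) for t in ts]) / ts**2    # tends to Var(f)/2 as t -> 0
dt = ts[1] - ts[0]
integral = (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * dt   # trapezoid rule
print(psi(lam), lam * integral)               # the two sides of (15) agree numerically
```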

3.1 Concentration of Measure Under the Independence Assumption

To set the stage for the general treatment of the concentration phenomenon in high dimensions, we first consider the independent case, i.e., when the coordinates X i , i ∈ T, of the random object X ∼ μ are mutually independent. In other words, the probability measure μ is equal to the product of its marginals: μ = μ 1 ⊗ … ⊗ μ n . The key to establishing the concentration property in such a setting is tensorization, which is an umbrella term for any result that allows one to derive the “global” concentration property of the high-dimensional product space (X 1 × … × X n , ρ, μ 1 ⊗ … ⊗ μ n ) from “local” concentration properties of the coordinate spaces (X i , ρ i , μ i ), i ∈ T.

Below, we list two such tensorization results, one for the transportation-information inequalities and one for the relative entropy. Both of these results are deep consequences of the interplay between the independence structure of μ and the metric structure of ρ. Indeed, a function \(f: \mathsf{X} \rightarrow \mathbb{R}\) belongs to Lip1(X, ρ) if and only if δ i (f) ≤ 1 for all i ∈ T, i.e., if and only if, for every i ∈ T and every x T∖{i} ∈ X T∖{i}, the function f i : X i  → R given by f i (y i ) ≜ f(y i x T∖{i}) is 1-Lipschitz with respect to the metric ρ i on X i . With this in mind, it is reasonable to expect that if one can establish a concentration property for all 1-Lipschitz functions on the coordinate spaces X i , then one can deduce a concentration property for functions on the product space X that are 1-Lipschitz in each coordinate.

Lemma 3.3 (Tensorization of transportation-information inequalities)

Suppose that there exist constants c 1 ,…,c n ≥ 0, such that

$$\displaystyle{ \mu _{i} \in T_{\rho _{i}}(c_{i}),\qquad \forall i \in T. }$$

Then μ = μ 1 ⊗ … ⊗μ n ∈ T ρ (c) with c = ∑ i=1 n c i .

For example, by an appropriate rescaling of Pinsker’s inequality (11), we see that, if each coordinate space X i is endowed with the scaled trivial metric ρ i (x i , z i ) = α i 1{x i  ≠ z i } for some α i  > 0, then any Borel probability measure μ i on X i satisfies the transportation-information inequality with c i  = α i 2∕4. By the above tensorization lemma, any product measure μ 1 ⊗ … ⊗ μ n on the product space X 1 × … × X n equipped with the weighted Hamming metric \(\rho _{\boldsymbol{\alpha }}\) defined in (3) satisfies \(T_{\rho _{\boldsymbol{\alpha }}}(c)\) with \(c = \frac{1} {4}\sum _{i\in T}\alpha _{i}^{2}\). Consequently, by the Bobkov–Götze theorem, the subgaussian estimate (13) holds for any function \(f \in \mathrm{Lip}_{1}(\mathsf{X},\rho _{\boldsymbol{\alpha }})\), which in turn implies, via the Chernoff bound, that

$$\displaystyle{ \mathbb{P}_{\mu }\Big\{f - \mathbb{E}_{\mu }f \geq t\Big\} \leq \exp \left (- \frac{2t^{2}} {\sum _{i\in T}\alpha _{i}^{2}}\right ),\qquad \forall t \geq 0. }$$

This provides an alternative derivation of McDiarmid’s inequality (with the sharp constant in the exponent), which was originally proved using the martingale method.
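As a simple numerical illustration of ours (not from the chapter), take f to be the empirical mean of n independent Bernoulli(1∕2) variables, so that changing one coordinate changes f by at most α i  = 1∕n; the simulation below compares the empirical two-sided tail with the corresponding two-sided version of the bound above.

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials, t = 50, 100_000, 0.1
X = rng.integers(0, 2, size=(trials, n))
dev = np.abs(X.mean(axis=1) - 0.5)                # |f(X) - E f(X)| for the empirical mean

alpha = np.full(n, 1.0 / n)                       # alpha_i = 1/n in the metric (3)
bound = 2 * np.exp(-2 * t**2 / np.sum(alpha**2))  # two-sided McDiarmid-type bound
print((dev >= t).mean(), bound)                   # empirical tail vs. bound
```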

Lemma 3.4 (Tensorization of relative entropy)

Consider a product measure μ = μ 1 ⊗ … ⊗μ n . Then for any other probability measure ν on X we have

$$\displaystyle\begin{array}{rcl} D(\nu \|\mu ) \leq \sum _{i\in T}\mathbb{E}_{\nu }D\Big(\nu _{i}(\cdot \vert X^{T\setminus \{i\}})\|\mu _{ i}\Big).& &{}\end{array}$$
(17)

The idea is to apply this lemma to ν = μ (tf) for some t ≥ 0 and an arbitrary f ∈ Lip1(X, ρ). In that case, a simple calculation shows that the conditional probability measure ν i (dx i  | x T∖{i}) = μ i (tf)(dx i  | x T∖{i}) is equal to the tilted distribution \(\mu _{i}^{(tf_{i})}\) with f i (x i ) = f(x i x T∖{i}), and therefore

$$\displaystyle{ D(\mu ^{(tf)}\|\mu ) \leq \sum _{ i=1}^{n}\mathbb{E}_{\mu ^{ (tf)}}D\big(\mu _{i}^{(tf_{i})}\big\|\mu _{ i}\big). }$$

If f ∈ Lip1(X, ρ), then f i  ∈ Lip1(X i , ρ i ). Thus, if we can show that, for any g ∈ Lip1(X i , ρ i ),

$$\displaystyle{ D(\mu _{i}^{(tg)}\|\mu _{ i}) \leq \frac{c_{i}t^{2}} {2},\qquad \forall t \geq 0, }$$

then the estimate

$$\displaystyle{ D(\mu ^{(tf)}\|\mu ) \leq \frac{ct^{2}} {2},\qquad \forall t \geq 0 }$$

holds with c = ∑ i = 1 n c i for all f ∈ Lip 1 (X, ρ) by the tensorization lemma. Invoking the Herbst argument, we conclude that (X, μ, ρ) has the concentration property with the same c.

4 The Abstract Martingale Method

In this section, we present a general martingale-based scheme for deriving concentration inequalities for functions of many dependent random variables. Let \(f: \mathsf{X} \rightarrow \mathbb{R}\) be the function of interest, and let X = (X i ) i ∈ T be a random element of the product space X with probability law μ. Let \(\mathcal{F}_{0} \subset \mathcal{F}_{1} \subset \ldots \subset \mathcal{F}_{m}\) be a filtration (i.e., an increasing sequence of σ-fields) on X, such that \(\mathcal{F}_{0}\) is trivial and \(\mathcal{F}_{m} = \mathcal{B}\). The idea is to decompose the centered random variable \(f(X) - \mathbb{E}_{\mu }[f(X)]\) as a sum of martingale differences

$$\displaystyle{ M^{(j)} \triangleq \mathbb{E}_{\mu }[f(X)\vert \mathcal{F}_{ j}] - \mathbb{E}_{\mu }[f(X)\vert \mathcal{F}_{j-1}],\qquad j = 1,\ldots,m. }$$

By construction, \(\mathbb{E}_{\mu }[f(X)\vert \mathcal{F}_{m}] = f(X)\) and \(\mathbb{E}_{\mu }[f(X)\vert \mathcal{F}_{0}] = \mathbb{E}_{\mu }[f(X)]\), so the problem of bounding the probability \(\mathbb{P}_{\mu }\big\{\vert f - \mathbb{E}_{\mu }f\vert \geq t\big\}\) for a given t ≥ 0 reduces to bounding the probability

$$\displaystyle{ \mathbb{P}_{\mu }\Bigg\{\Big\vert \sum _{j=1}^{m}M^{(j)}\Big\vert \geq t\Bigg\}. }$$

The latter problem hinges on being able to control the martingale differences M (j). In particular, if each M (j) is a.s. bounded, we have the following:

Theorem 4.1 (Azuma–Hoeffding inequality)

Let {M (j) } j=1 m be a martingale difference sequence with respect to a filtration \(\{\mathcal{F}_{j}\}_{j=0}^{m}\) . Suppose that, for each j, there exist \(\mathcal{F}_{j-1}\) -measurable random variables A (j) and B (j), such that A (j) ≤ M (j) ≤ B (j) a.s. Then

$$\displaystyle\begin{array}{rcl} \mathbb{E}\left [\exp \left (\lambda \sum _{j=1}^{m}M^{(j)}\right )\right ] \leq \exp \left (\frac{\lambda ^{2}\sum _{ j=1}^{m}\|B^{(j)} - A^{(j)}\|_{ \infty }^{2}} {8} \right ),\quad \forall \lambda \in \mathbb{R}.& &{}\end{array}$$
(18)

Consequently, for any t ≥ 0,

$$\displaystyle\begin{array}{rcl} \mathbb{P}\Bigg\{\Big\vert \sum _{j=1}^{m}M^{(j)}\Big\vert \geq t\Bigg\} \leq 2\exp \left (- \frac{2t^{2}} {\sum _{j=1}^{m}\|B^{(j)} - A^{(j)}\|_{\infty }^{2}}\right ).& &{}\end{array}$$
(19)
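As a quick sanity check of (19), here is a simulation of ours (not from the chapter) for the simple ±1 random walk, for which each martingale difference satisfies A (j) = −1 and B (j) = +1.

```python
import numpy as np

rng = np.random.default_rng(3)
m, trials, t = 100, 50_000, 25
steps = rng.choice([-1, 1], size=(trials, m))      # i.i.d. +/-1 increments
S = steps.sum(axis=1)                              # value of the martingale at time m

bound = 2 * np.exp(-2 * t**2 / (m * 2.0**2))       # (19) with ||B^(j) - A^(j)||_inf = 2
print((np.abs(S) >= t).mean(), bound)              # empirical tail vs. the Azuma bound
```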

The most straightforward choice of the filtration is also the most natural one: take m = | T | = n, and for each i ∈ T take \(\mathcal{F}_{i} =\sigma (X^{[i]})\). For i ∈ T, define a Markov kernel K (i) on X by

$$\displaystyle\begin{array}{rcl} K^{(i)}(x,\mathrm{d}y) \triangleq \delta _{ x^{[i-1]}}(\mathrm{d}y^{[i-1]}) \otimes \mu ^{[i,n]}(\mathrm{d}y^{[i,n]}\vert x^{[i-1]}).& &{}\end{array}$$
(20)

Then, for any f ∈ L 1(μ) we have

$$\displaystyle\begin{array}{rcl} K^{(i)}f(x)& =& \int _{\mathsf{ X}}f(y)K^{(i)}(x,\mathrm{d}y) {}\\ & =& \int _{\mathsf{X}^{[i,n]}}f(x^{[i-1]}y^{[i,n]})\mu ^{[i,n]}(\mathrm{d}y^{[i,n]}\vert x^{[i-1]}) {}\\ & =& \mathbb{E}_{\mu }[f(X)\vert X^{[i-1]} = x^{[i-1]}]; {}\\ \end{array}$$

in particular, \(K^{(1)}f = \mathbb{E}_{\mu }f\). We extend this definition to i = n + 1 in the obvious way:

$$\displaystyle\begin{array}{rcl} K^{(n+1)}(x,\mathrm{d}y) =\delta _{ x}(\mathrm{d}y),& & {}\\ \end{array}$$

so that K (n+1) f = f. Then, for each i ∈ T, we can write M (i) = K (i+1) f − K (i) f. With this construction, we can state the following theorem that applies to the case when each coordinate space X i is endowed with a bounded measurable metric ρ i :

Theorem 4.2

Assume that, for all i,

$$\displaystyle\begin{array}{rcl} \|\rho _{i}\| \triangleq \sup _{x_{i},z_{i}\in \mathsf{X}_{i}}\rho _{i}(x_{i},z_{i}) <\infty.& & {}\\ \end{array}$$

For each i ∈{ 1,…,n + 1}, let V (i) be a Wasserstein matrix for the Markov kernel K (i) defined in  (20) , in the sense that \(\boldsymbol{\delta }(K^{(i)}f)\preceq V ^{(i)}\boldsymbol{\delta }(f)\) holds for each f ∈Lip (X,ρ) as in ( 7 ). Define the matrix Γ = (Γ ij ) i,j∈T with entries

$$\displaystyle\begin{array}{rcl} \varGamma _{ij} \triangleq \|\rho _{i}\|V _{ij}^{(i+1)}.& & {}\\ \end{array}$$

Then, for any f ∈Lip (X,ρ) and for any t ≥ 0, we have

$$\displaystyle\begin{array}{rcl} \mathbb{P}_{\mu }\Big\{\vert f(X) - \mathbb{E}_{\mu }[f(X)]\vert \geq t\Big\} \leq 2\exp \left (- \frac{2t^{2}} {\|\varGamma \boldsymbol{\delta }(f)\|_{\ell^{2}(T)}^{2}}\right ).& &{}\end{array}$$
(21)

Proof

For each i ∈ T, using the tower property of conditional expectations, we can write

$$\displaystyle\begin{array}{rcl} M^{(i)}& =& \mathbb{E}_{\mu }[f(X)\vert X^{[i]} = x^{[i]}] - \mathbb{E}_{\mu }[f(X)\vert X^{[i-1]} = x^{[i-1]}] {}\\ & =& \mathbb{E}_{\mu }[f(X)\vert X^{[i]} = x^{[i]}] - \mathbb{E}_{\mu }\big[\mathbb{E}_{\mu }[f(X)\vert X^{[i-1]} = x^{[i-1]},X_{ i}]\big\vert X^{[i-1]} = x^{[i-1]}\big] {}\\ & =& \int _{\mathsf{X}^{[i,n]}}\mu ^{[i,n]}(\mathrm{d}y^{[i,n]}\vert x^{[i-1]})\Big(\int _{\mathsf{ X}^{(i,n]}}\mu ^{(i,n]}(\mathrm{d}z^{(i,n]}\vert x^{[i]})f(x^{[i-1]}x_{ i}z^{(i,n]}) {}\\ & & -\int _{\mathsf{X}^{(i,n]}}\mu ^{(i,n]}(\mathrm{d}z^{(i,n]}\vert x^{[i-1]}y_{ i})f(x^{[i-1]}y_{ i}z^{(i,n]})\Big) {}\\ & =& \int _{\mathsf{X}^{[i,n]}}\mu ^{[i,n]}(\mathrm{d}y^{[i,n]}\vert x^{[i-1]})\left (K^{(i+1)}f(x^{[i-1]}x_{ i}y^{(i,n]}) - K^{(i+1)}f(x^{[i-1]}y_{ i}y^{(i,n]})\right ).{}\\ \end{array}$$

From this, it follows that A (i) ≤ M (i) ≤ B (i) a.s., where

$$\displaystyle\begin{array}{rcl} A^{(i)}& \triangleq & \int _{\mathsf{ X}^{[i,n]}}\mu ^{[i,n]}(\mathrm{d}y^{[i,n]}\vert x^{[i-1]})\inf _{ x_{i}\in \mathsf{X}_{i}}\left (K^{(i+1)}f(x^{[i-1]}x_{ i}y^{(i,n]}) - K^{(i+1)}f(x^{[i-1]}y_{ i}y^{(i,n]})\right ) {}\\ B^{(i)}& \triangleq & \int _{\mathsf{ X}^{[i,n]}}\mu ^{[i,n]}(\mathrm{d}y^{[i,n]}\vert x^{[i-1]})\sup _{ x_{i}\in \mathsf{X}_{i}}\left (K^{(i+1)}f(x^{[i-1]}x_{ i}y^{(i,n]}) - K^{(i+1)}f(x^{[i-1]}y_{ i}y^{(i,n]})\right ),{}\\ \end{array}$$

and

$$\displaystyle\begin{array}{rcl} \|B^{(i)} - A^{(i)}\|_{ \infty }\leq \|\rho _{i}\|\delta _{i}\big(K^{(i+1)}f\big).& &{}\end{array}$$
(22)

By definition of the Wasserstein matrix, we have

$$\displaystyle\begin{array}{rcl} \delta _{i}\left (K^{(i+1)}f\right ) \leq \sum _{ j\in T}V _{ij}^{(i+1)}\delta _{ j}(f).& & {}\\ \end{array}$$

Substituting this estimate into (22), we get

$$\displaystyle\begin{array}{rcl} \sum _{i=1}^{n}\|B^{(i)} - A^{(i)}\|_{ \infty }^{2} \leq \sum _{ i=1}^{n}\left \vert \left (\varGamma \boldsymbol{\delta }(f)\right )_{ i}\right \vert ^{2} \equiv \left \|\varGamma \boldsymbol{\delta }(f)\right \|_{\ell^{ 2}(T)}^{2}.& &{}\end{array}$$
(23)

The probability estimate (21) then follows from the Azuma–Hoeffding inequality (19). □ 
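For a concrete feel for the construction used in this proof, the following brute-force sketch (with a made-up distribution μ on {0, 1}³, ours and purely for illustration) computes K (i) f(x) = E μ [f(X) | X [i−1] = x [i−1]] for all configurations x and verifies that K (1) f = E μ f, K (n+1) f = f, and that the differences M (i) telescope to f − E μ f.

```python
import itertools
import numpy as np

states = list(itertools.product([0, 1], repeat=3))
rng = np.random.default_rng(4)
mu = rng.random(len(states)); mu /= mu.sum()       # a made-up law on {0,1}^3
f = rng.random(len(states))

def cond_exp(i, x):
    # E_mu[f(X) | X^[i-1] = x^[i-1]]  (i = 1 gives the unconditional mean)
    mask = np.array([s[:i - 1] == x[:i - 1] for s in states])
    return np.dot(mu[mask], f[mask]) / mu[mask].sum()

n = 3
for k, x in enumerate(states):
    Kf = [cond_exp(i, x) for i in range(1, n + 2)]     # K^(1)f, ..., K^(n+1)f at x
    M = np.diff(Kf)                                    # M^(1), ..., M^(n) at x
    assert np.isclose(Kf[0], np.dot(mu, f))            # K^(1)f = E_mu f
    assert np.isclose(Kf[-1], f[k])                    # K^(n+1)f = f
    assert np.isclose(M.sum(), f[k] - np.dot(mu, f))   # telescoping decomposition
print("martingale decomposition verified on all 8 configurations")
```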

We can also use the martingale method to obtain a tensorization result for transportation inequalities without independence assumptions. This result, which generalizes a theorem of Djellout, Guillin, and Wu [8, Thm. 2.11], can be used even when the metrics ρ i are not necessarily bounded.

Theorem 4.3

Suppose that there exist constants c 1 ,…,c n ≥ 0, such that

$$\displaystyle\begin{array}{rcl} \mu _{i}(\cdot \vert x^{[i-1]}) \in T_{\rho _{ i}}(c_{i}),\qquad \forall i \in T,\,x^{[i-1]} \in \mathsf{X}^{[i-1]}.& &{}\end{array}$$
(24)

For each i ∈{ 1,…,n + 1}, let V (i) be a Wasserstein matrix for K (i) . Then μ ∈ T ρ (c) with

$$\displaystyle\begin{array}{rcl} c =\sum _{i\in T}c_{i}\Big(\sum _{j\in T}V _{ij}^{(i+1)}\Big)^{2}.& &{}\end{array}$$
(25)

Proof

By the Bobkov–Götze theorem [1], it suffices to show that, for every \(f: \mathsf{X} \rightarrow \mathbb{R}\) with ∥ f ∥ Lip ≤ 1, the random variable f(X) with X ∼ μ is c-subgaussian, with c given by (25). To that end, we again consider the martingale decomposition

$$\displaystyle{ f - \mathbb{E}_{\mu }[f] =\sum _{i\in T}M^{(i)} }$$

with M (i) = K (i+1) f − K (i) f. We will show that, for every i,

$$\displaystyle\begin{array}{rcl} \log \mathbb{E}_{\mu }\Big[e^{\lambda M^{(i)} }\Big\vert X^{[i-1]}\Big] \leq \frac{c_{i}\Big(\sum _{j\in T}V _{ij}^{(i+1)}\Big)^{2}\lambda ^{2}} {2},\qquad \forall \lambda \in \mathbb{R}.& &{}\end{array}$$
(26)

This, in turn, will yield the desired subgaussian estimate

$$\displaystyle\begin{array}{rcl} \mathbb{E}_{\mu }\left [e^{\lambda (f-\mathbb{E}_{\mu }[f])}\right ]& =& \mathbb{E}_{\mu }\left [\exp \left (\lambda \sum _{ i\in T}M^{(i)}\right )\right ] {}\\ & \leq & \exp \left (\frac{c\lambda ^{2}} {2} \right ) {}\\ \end{array}$$

for every \(\lambda \in \mathbb{R}\).

To proceed, note that, for a fixed realization x [i−1] of X [i−1], M (i) = K (i+1) f − K (i) f is σ(X i )-measurable, and

$$\displaystyle\begin{array}{rcl} \|M^{(i)}\|_{ \mathrm{Lip}}& \leq & \sup _{{ x,y\in \mathsf{X} \atop x^{T\setminus \{i\}}=y^{T\setminus \{i\}}} }\frac{\left \vert K^{(i+1)}f(x) - K^{(i+1)}f(y)\right \vert } {\rho _{i}(x_{i},y_{i})} {}\\ & \equiv & \delta _{i}\big(K^{(i+1)}f\big) {}\\ & \leq & \sum _{j\in T}V _{ij}^{(i+1)}\delta _{ j}(f) {}\\ & \leq & \sum _{j\in T}V _{ij}^{(i+1)}, {}\\ \end{array}$$

where we have used the definition of the Wasserstein matrix, as well as the fact that ∥ f ∥ Lip ≤ 1 is equivalent to δ j (f) ≤ 1 for all j ∈ T. Since \(\mu _{i}(\cdot \vert x^{[i-1]}) \in T_{\rho _{i}}(c_{i})\) by hypothesis, we obtain the estimate (26) by the Bobkov–Götze theorem. □ 

As a sanity check, let us confirm that, in the case when μ is a product measure and the product space X is endowed with the weighted Hamming metric \(\rho _{\boldsymbol{\alpha }}\) defined in (3), Theorems 4.2 and 4.3 both reduce to McDiarmid’s inequality. To see this, we first note that, when the X i ’s are independent, we can write

$$\displaystyle{ K^{(i)}f(x) =\int _{\mathsf{ X}^{[i,n]}}f(x^{[i-1]}y^{[i,n]})\mu _{ i}(\mathrm{d}y_{i})\mu _{i+1}(\mathrm{d}y_{i+1})\ldots \mu _{n}(\mathrm{d}y_{n}) }$$

for each i ∈ T, f ∈ L 1(μ), and x ∈ X. This, in turn, implies that

$$\displaystyle\begin{array}{rcl} \delta _{i}(K^{(i+1)}f)& =& \alpha _{ i}^{-1}\sup _{{ x,z\in \mathsf{X} \atop x^{T\setminus \{i\}}=z^{T\setminus \{i\}}} }\left \vert K^{(i+1)}f(x) - K^{(i+1)}f(z)\right \vert {}\\ & =& \alpha _{i}^{-1}\sup _{{ x,z\in \mathsf{X} \atop x^{T\setminus \{i\}}=z^{T\setminus \{i\}}} }\Big\vert \int _{\mathsf{X}^{(i,n]}}f(x^{[i]}y^{(i,n]})\mu _{ i+1}(\mathrm{d}y_{i+1})\ldots \mu _{n}(\mathrm{d}y_{n}) {}\\ & & \qquad \qquad \qquad -\int _{\mathsf{X}^{(i,n]}}f(z^{[i]}y^{(i,n]})\mu _{ i+1}(\mathrm{d}y_{i+1})\ldots \mu _{n}(\mathrm{d}y_{n})\Big\vert {}\\ &\leq & \alpha _{i}^{-1}\sup _{{ x,z\in \mathsf{X} \atop x^{T\setminus \{i\}}=z^{T\setminus \{i\}}} }\left \vert f(x) - f(z)\right \vert {}\\ & =& \delta _{i}(f), {}\\ \end{array}$$

where we have used the fact that, with ρ i (x i , z i ) = α i 1{x i  ≠ z i }, ∥ ρ i  ∥ = α i for every i ∈ T. Therefore, for each i ∈ T, we can always choose a Wasserstein matrix V (i+1) for K (i+1) in such a way that its ith row has zeroes everywhere except for the ith column, where it has a 1. Now, for any function \(f: \mathsf{X} \rightarrow \mathbb{R}\) which is 1-Lipschitz with respect to \(\rho _{\boldsymbol{\alpha }}\), we can take \(\boldsymbol{\delta }(f) = \mathbf{1}\). Therefore, for any such f, Theorem 4.2 gives

$$\displaystyle\begin{array}{rcl} \mathbb{P}_{\mu }\Bigg\{\vert f(X) - \mathbb{E}_{\mu }[f(X)]\vert \geq t\Bigg\} \leq 2\exp \left (- \frac{2t^{2}} {\sum _{i=1}^{n}\alpha _{i}^{2}}\right ),\qquad \forall t \geq 0& & {}\\ \end{array}$$

which is precisely McDiarmid’s inequality. Since the constant 2 in McDiarmid’s inequality is known to be sharp, this shows that the coefficient 2 in the exponent in (21) is likewise optimal. Moreover, with our choice of ρ i , condition (24) of Theorem 4.3 holds with c i  = α i 2∕4, and, in light of the discussion above, we can arrange ∑ j V ij (i+1) = 1. Therefore, by Theorem 4.3, any function \(f: \mathsf{X} \rightarrow \mathbb{R}\) which is 1-Lipschitz with respect to \(\rho _{\boldsymbol{\alpha }}\) is c-subgaussian with constant

$$\displaystyle{ c =\sum _{i\in T}c_{i}\Bigg(\sum _{j\in T}V _{ij}^{(i+1)}\Bigg)^{2} = \frac{1} {4}\sum _{i\in T}\alpha _{i}^{2}, }$$

which is just another equivalent statement of McDiarmid’s inequality.

It is also possible to consider alternative choices of the filtration \(\{\mathcal{F}_{j}\}_{j=0}^{m}\). For example, if we partition the index set T into m disjoint subsets (blocks) T 1 , …, T m , we can take

$$\displaystyle{ \mathcal{F}_{j} \triangleq \sigma \left (X_{i}: i \in \varLambda _{j}\right ),\qquad \forall j \in [m] }$$

where Λ j  ≜ T 1 ∪ … ∪ T j . Defining for each j ∈ [m] the Markov kernel

$$\displaystyle{ \tilde{K}^{(j)}(x,\mathrm{d}y) \triangleq \delta _{x^{\varLambda _{j-1}}}(\mathrm{d}y^{\varLambda _{j-1}}) \otimes \mu ^{T\setminus \varLambda _{j-1}}(\mathrm{d}y^{T\setminus \varLambda _{j-1}}\vert x^{\varLambda _{j-1}}), }$$

we can write

$$\displaystyle{ M^{(j)} = \mathbb{E}_{\mu }[f(X)\vert \mathcal{F}_{ j}] - \mathbb{E}_{\mu }[f(X)\vert \mathcal{F}_{j-1}] = \tilde{K}^{(j+1)}f -\tilde{K}^{(j)}f }$$

for every j ∈ [m]. As before, we take \(\tilde{K}^{(1)}f = \mathbb{E}_{\mu }[f]\) and \(\tilde{K}^{(m+1)}f = f\). Given a measurable function \(f: \mathsf{X} \rightarrow \mathbb{R}\), we can define the oscillation of f in the jth block T j , j ∈ [m], by

$$\displaystyle\begin{array}{rcl} \tilde{\delta }_{j}(f) \triangleq \sup _{{ x,z\in \mathsf{X} \atop x^{T\setminus T_{j}}=z^{T\setminus T_{j}}} }\frac{\vert f(x) - f(z)\vert } {\rho ^{T_{j}}(x^{T_{j}},z^{T_{j}})}.& & {}\\ \end{array}$$

The definition of a Wasserstein matrix is modified accordingly: we say that a nonnegative matrix \(\tilde{V } = (\tilde{V }_{jk})_{j,k\in [m]}\) is a Wasserstein matrix for a Markov kernel K on X with respect to the partition {T j } j = 1 m if, for any Lipschitz function \(f: \mathsf{X} \rightarrow \mathbb{R}\),

$$\displaystyle\begin{array}{rcl} \tilde{\delta }_{j}(Kf) \leq \sum _{k=1}^{m}\tilde{V }_{ jk}\tilde{\delta }_{k}(f)& & {}\\ \end{array}$$

for all j ∈ [m]. With these definitions at hand, the following theorem, which generalizes a result of Paulin [29, Thm. 2.1], can be proved in the same way as Theorem 4.2:

Theorem 4.4

For each j ∈ [m + 1], let \(\tilde{V }^{(j)} = (\tilde{V }_{k\ell}^{(j)})_{k,\ell\in [m]}\) be a Wasserstein matrix for \(\tilde{K}^{(j)}\) with respect to the partition {T j }. Define the matrix \(\tilde{\varGamma }= (\tilde{\varGamma }_{k\ell})_{k,\ell\in [m]}\) with entries

$$\displaystyle\begin{array}{rcl} \tilde{\varGamma }_{k\ell} \triangleq \|\rho ^{T_{k} }\|\tilde{V }_{k\ell}^{(k+1)},& & {}\\ \end{array}$$

where

$$\displaystyle\begin{array}{rcl} \|\rho ^{T_{k} }\| \triangleq \sup _{x^{T_{k}},z^{T_{k}}}\rho ^{T_{k} }(x^{T_{k} },z^{T_{k} })& & {}\\ \end{array}$$

is the diameter of the metric space \((\mathsf{X}^{T_{k}},\rho ^{T_{k}})\) . Then, for any f ∈Lip (X,ρ) and for any t ≥ 0,

$$\displaystyle\begin{array}{rcl} \mathbb{P}_{\mu }\Big\{\vert f(X) - \mathbb{E}_{\mu }[f(X)]\vert \geq t\Big\} \leq 2\exp \left (- \frac{2t^{2}} {\big\|\tilde{\varGamma }\tilde{\boldsymbol{\delta }}(f)\big\|_{\ell^{2}(m)}^{2}}\right ).& & {}\\ \end{array}$$

5 The Martingale Method in Action

We now show that several previously published concentration inequalities for functions of dependent random variables arise as special cases of Theorem 4.2 by exploiting the freedom to choose the Wasserstein matrices V (i). In fact, careful examination of the statement of Theorem 4.2 shows that, for each i ∈ T, we only need to extract the ith row of V (i+1).

5.1 Concentration Inequalities Under the Dobrushin Uniqueness Condition

One particularly clean way of constructing the desired Wasserstein matrices is via the classical comparison theorem of Dobrushin for Gibbs measures [9]. For our purposes, we give its formulation due to Föllmer [11]:

Lemma 5.1

Let ν and \(\tilde{\nu }\) be two Borel probability measures on X. Define the matrix \(C^{\nu } = (C_{ij}^{\nu })_{i,j\in T}\) and the vector \(b^{\nu,\tilde{\nu }} = (b_{i}^{\nu,\tilde{\nu }})_{i\in T}\) by

$$\displaystyle\begin{array}{rcl} C_{ij}^{\nu } =\sup _{{ x,z\in \mathsf{X} \atop x^{T\setminus \{j\}}=z^{T\setminus \{j\}}} }\frac{W_{i}\big(\nu _{i}(\cdot \vert x^{T\setminus \{i\}}),\nu _{i}(\cdot \vert z^{T\setminus \{i\}})\big)} {\rho _{j}(x_{j},z_{j})} & &{}\end{array}$$
(27)

and

$$\displaystyle\begin{array}{rcl} b_{i}^{\nu,\tilde{\nu }} =\int _{\mathsf{ X}^{T\setminus \{i\}}}\tilde{\nu }^{T\setminus \{i\}}(\mathrm{d}x^{T\setminus \{i\}})W_{ i}\big(\nu _{i}(\cdot \vert x^{T\setminus \{i\}}),\tilde{\nu }_{ i}(\cdot \vert x^{T\setminus \{i\}})\big).& &{}\end{array}$$
(28)

Suppose that the spectral radius of C ν is strictly smaller than unity. Then, for any f ∈ L 1 (μ),

$$\displaystyle\begin{array}{rcl} \left \vert \mathbb{E}_{\nu }f - \mathbb{E}_{\tilde{\nu }}f\right \vert \leq \sum _{j,k\in T}\delta _{j}(f)D_{jk}^{\nu }b_{ k}^{\nu,\tilde{\nu }},& &{}\end{array}$$
(29)

where D ν ≜ ∑ m=0 (C ν ) m .

Remark 5.2

The matrix C ν is called the Dobrushin interdependence matrix of ν. When the spectral radius of C ν is strictly smaller than unity, we say that ν satisfies the Dobrushin uniqueness condition. This condition is used in statistical physics to establish the absence of phase transitions, which is equivalent to uniqueness of a global Gibbs measure consistent with a given local specification (see the book of Georgii [12] for details).

Given an index i ∈ T, we will extract the ith row of a Wasserstein matrix for K (i+1) by applying the Dobrushin comparison theorem to a particular pair of probability measures on X. Let x, z ∈ X be two configurations that differ only in the ith coordinate: x T∖{i} = z T∖{i} and x i  ≠ z i . Thus, we can write z = x [i−1] z i x (i, n], and

$$\displaystyle\begin{array}{rcl} & & K^{(i+1)}f(x) - K^{(i+1)}f(z) \\ & & = \mathbb{E}_{\mu }[f(X)\vert X^{[i]} = x^{[i-1]}x_{ i}] - \mathbb{E}_{\mu }[f(X)\vert X^{[i]} = x^{[i-1]}z_{ i}] \\ & & =\int _{\mathsf{X}^{(i,n]}}f(x^{[i-1]}x_{ i}y^{(i,n]})\mu ^{(i,n]}(\mathrm{d}y^{(i,n]}\vert x^{[i-1]}x_{ i}) -\int _{\mathsf{X}^{(i,n]}}f(x^{[i-1]}z_{ i}y^{(i,n]})\mu ^{(i,n]}(\mathrm{d}y^{(i,n]}\vert x^{[i-1]}z_{ i}) \\ & & =\int _{\mathsf{X}^{(i,n]}}\left (f(x^{[i-1]}x_{ i}y^{(i,n]}) - f(x^{[i-1]}z_{ i}y^{(i,n]})\right )\mu ^{(i,n]}(\mathrm{d}y^{(i,n]}\vert x^{[i-1]}x_{ i}) \\ & & \qquad \qquad +\int _{\mathsf{X}^{(i,n]}}f(x^{[i-1]}z_{ i}y^{(i,n]})\mu ^{(i,n]}(\mathrm{d}y^{(i,n]}\vert x^{[i-1]}x_{ i}) \\ & & \qquad \qquad -\int _{\mathsf{X}^{(i,n]}}f(x^{[i-1]}z_{ i}y^{(i,n]})\mu ^{(i,n]}(\mathrm{d}y^{(i,n]}\vert x^{[i-1]}z_{ i}). {}\end{array}$$
(30)

By definition of the local oscillation, the first integral in (30) is bounded by δ i (f)ρ i (x i , z i ). To handle the remaining terms, define two probability measures \(\nu,\tilde{\nu }\) on X by

$$\displaystyle\begin{array}{rcl} \nu (\mathrm{d}y)& \triangleq & \delta _{x^{[i-1]}z_{i}}(\mathrm{d}y^{[i]}) \otimes \mu ^{(i,n]}(\mathrm{d}y^{(i,n]}\vert x^{[i-1]}x_{ i}) {}\\ \tilde{\nu }(\mathrm{d}y)& \triangleq & \delta _{x^{[i-1]}z_{i}}(\mathrm{d}y^{[i]}) \otimes \mu ^{(i,n]}(\mathrm{d}y^{(i,n]}\vert x^{[i-1]}z_{ i}). {}\\ \end{array}$$

Using this definition and Lemma 5.1, we can write

$$\displaystyle\begin{array}{rcl} & \int & _{\mathsf{X}^{(i,n]}}f(x^{[i-1]}z_{ i}y^{(i,n]})\mu ^{(i,n]}(\mathrm{d}y^{(i,n]}\vert x^{[i-1]}x_{ i}) -\int _{\mathsf{X}^{(i,n]}}f(x^{[i-1]}z_{ i}y^{(i,n]})\mu ^{(i,n]}(\mathrm{d}y^{(i,n]}\vert x^{[i-1]}z_{ i}) \\ & & \qquad =\int f\mathrm{d}\nu -\int f\mathrm{d}\tilde{\nu } \\ & & \qquad \leq \sum _{j,k\in T}\delta _{j}(f)D_{jk}^{\nu }b_{ k}^{\nu,\tilde{\nu }}. {}\end{array}$$
(31)

It remains to obtain explicit upper bounds on the entries of D ν and \(b^{\nu,\tilde{\nu }}\). To that end, we first note that, for a given j ∈ T and for any u, y ∈ X,

$$\displaystyle\begin{array}{rcl} & & W_{j}\big(\nu _{j}(\cdot \vert u^{T\setminus \{j\}}),\nu _{ j}(\cdot \vert y^{T\setminus \{j\}})\big) {}\\ & & = \left \{\begin{array}{@{}l@{\quad }l@{}} 0, \quad &j \leq i \\ W_{j}\big(\mu _{j}(\cdot \vert x^{[i-1]}x_{i}u^{(i,n]\setminus \{j\}}),\mu _{j}(\cdot \vert x^{[i-1]}z_{i}u^{(i,n]\setminus \{j\}})\big),\quad &j> i \end{array} \right..{}\\ \end{array}$$

Therefore, C jk ν ≤ C jk μ. Likewise, for a given k ∈ T and for any y ∈ X,

$$\displaystyle\begin{array}{rcl} & & W_{k}\big(\nu _{k}(\cdot \vert y^{T\setminus \{k\}}),\tilde{\nu }_{ k}(\cdot \vert y^{T\setminus \{k\}})\big) {}\\ & & = \left \{\begin{array}{@{}l@{\quad }l@{}} 0, \quad &k \leq i \\ W_{k}\big(\mu _{k}(\cdot \vert x^{[i-1]}x_{i}y^{(i,n]\setminus \{k\}}),\mu _{k}(\cdot \vert x^{[i-1]}z_{i}y^{(i,n]\setminus \{k\}})\big),\quad &k> i \end{array} \right.{}\\ \end{array}$$

Therefore, \(b_{k}^{\nu,\tilde{\nu }} \leq C_{ki}^{\mu }\rho _{i}(x_{i},z_{i})\). Since the matrices C ν and C μ are nonnegative, D jk ν ≤ D jk μ. Consequently, we can write

$$\displaystyle\begin{array}{rcl} \int f\mathrm{d}\nu -\int f\mathrm{d}\tilde{\nu }& \leq & \sum _{j,k\in T}\delta _{j}(f)D_{jk}^{\mu }C_{ ki}^{\mu }\rho _{ i}(x_{i},z_{i}) \\ & =& \sum _{j\in T}\delta _{j}(f)(D^{\mu }C^{\mu })_{ji}\rho _{i}(x_{i},z_{i}) \\ & =& \sum _{j\in T}\delta _{j}(f)(D^{\mu } -\mathrm{id})_{ji}\rho _{i}(x_{i},z_{i}).{}\end{array}$$
(32)

Therefore, from (31) and (32), we have

$$\displaystyle\begin{array}{rcl} \frac{K^{(i+1)}f(x) - K^{(i+1)}f(z)} {\rho _{i}(x_{i},z_{i})} & \leq & \delta _{i}(f) +\sum _{j\in T}(D^{\mu } -\mathbf{1})_{ij}^{\text{T}}\delta _{ j}(f) \\ & =& \sum _{j\in T}(D^{\mu })_{ij}^{\text{T}}\delta _{ j}(f). {}\end{array}$$
(33)

We have thus proved the following:

Corollary 5.3

Suppose that the probability measure μ satisfies the Dobrushin uniqueness condition, i.e., the spectral radius of its Dobrushin interdependence matrix C μ is strictly smaller than unity. Then, for any t ≥ 0, the concentration inequality  (21) holds with

$$\displaystyle\begin{array}{rcl} \varGamma _{ij} =\|\rho _{i}\|(D^{\mu })_{ij}^{\text{T}},\qquad i,j \in T.& &{}\end{array}$$
(34)

For example, when each X i is equipped with the trivial metric ρ i (x i , z i ) = 1{x i  ≠ z i }, we have ∥ ρ i  ∥ = 1 for all i, and consequently obtain the concentration inequality

$$\displaystyle\begin{array}{rcl} \mathbb{P}_{\mu }\Big\{\vert f(X) - \mathbb{E}_{\mu }[f(X)]\vert \geq t\Big\} \leq 2\exp \left (- \frac{2t^{2}} {\|(D^{\mu })^{\text{T}}\boldsymbol{\delta }(f)\|_{\ell^{2}(T)}^{2}}\right ).& &{}\end{array}$$
(35)

The same inequality, but with a worse constant in the exponent, was obtained by Külske [19, p. 45].
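To make the bound (35) concrete, here is a short sketch of ours in which C is a hypothetical Dobrushin interdependence matrix (not derived from any particular μ) with spectral radius strictly below one, so that D μ  = ∑ m ≥ 0 (C μ ) m  = (I − C μ ) −1 can be evaluated in closed form.

```python
import numpy as np

n = 5
C = 0.15 * (np.ones((n, n)) - np.eye(n))          # made-up C^mu; spectral radius 0.6
assert np.max(np.abs(np.linalg.eigvals(C))) < 1   # Dobrushin uniqueness condition
D = np.linalg.inv(np.eye(n) - C)                  # Neumann series summed in closed form

delta = np.ones(n)                                 # delta_i(f) <= 1 for f in Lip_1
t = 2.0
denom = np.linalg.norm(D.T @ delta) ** 2           # ||(D^mu)^T delta(f)||^2 in the exponent
print(2 * np.exp(-2 * t**2 / denom))               # right-hand side of (35)
```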

5.2 Concentration Inequalities Via Couplings

Another method for constructing Wasserstein matrices for the Markov kernels K (i) is via couplings. One notable advantage of this method is that it does not explicitly rely on the Dobrushin uniqueness condition; however, some such condition is typically necessary in order to obtain good bounds for the norm \(\|\varGamma \boldsymbol{\delta }(f)\|_{\ell^{2}(T)}\).

Fix an index i ∈ T and any two x, z ∈ X that differ only in the ith coordinate: x T∖{i} = z T∖{i} and x i  ≠ z i . Let P x, z [i] be any coupling of the conditional laws μ (i, n](⋅ | x [i]) and μ (i, n](⋅ | z [i]). Then for any f ∈ L 1(μ) we can write

$$\displaystyle\begin{array}{rcl} & & K^{(i+1)}f(x) - K^{(i+1)}f(z) {}\\ & & \qquad =\int _{\mathsf{X}^{(i,n]}\times \mathsf{X}^{(i,n]}}\mathbf{P}_{x,z}^{[i]}(\mathrm{d}u^{(i,n]},\mathrm{d}y^{(i,n]})\left (f(x^{[i]},u^{(i,n]}) - f(z^{[i]},y^{(i,n]})\right ) {}\\ & & \qquad \leq \delta _{i}(f)\rho _{i}(x_{i},z_{i}) +\sum _{j\in T:\,j>i}\delta _{j}(f)\int _{\mathsf{X}^{(i,n]}\times \mathsf{X}^{(i,n]}}\mathbf{P}_{x,z}^{[i]}(\mathrm{d}u^{(i,n]},\mathrm{d}y^{(i,n]})\rho _{ j}(u_{j},y_{j}). {}\\ \end{array}$$

Therefore,

$$\displaystyle\begin{array}{rcl} \frac{\vert K^{(i+1)}f(x) - K^{(i+1)}f(z)\vert } {\rho _{i}(x_{i},z_{i})} & \leq & \delta _{i}(f) +\sum _{j\in T:\,j>i}\frac{\int \rho _{j}\mathrm{d}\mathbf{P}_{x,z}^{[i]}} {\rho _{i}(x_{i},z_{i})} \delta _{j}(f) {}\\ & \leq & \delta _{i}(f) +\sum _{j\in T:\,j>i}\sup _{{ x,z\in \mathsf{X} \atop x^{T\setminus \{i\}}=z^{T\setminus \{i\}}} }\frac{\int \rho _{j}\mathrm{d}\mathbf{P}_{x,z}^{[i]}} {\rho _{i}(x_{i},z_{i})} \delta _{j}(f). {}\\ \end{array}$$

Remembering that we only need the ith row of a Wasserstein matrix for K (i+1), we may take

$$\displaystyle\begin{array}{rcl} V _{ij}^{(i+1)} = \left \{\begin{array}{@{}l@{\quad }l@{}} 0, \quad &i> j \\ 1, \quad &i = j \\ \sup _{{ x,z\in \mathsf{X} \atop x^{T\setminus \{i\}}=z^{T\setminus \{i\}}} }\frac{\int \rho _{j}\mathrm{d}\mathbf{P}_{x,z}^{[i]}} {\rho _{i}(x_{i},z_{i})},\quad &i <j \end{array} \right..& &{}\end{array}$$
(36)

We have thus proved the following:

Corollary 5.4

For each index i ∈ T and for each pair x,z ∈ X of configurations with x T∖{i} = z T∖{i} , pick an arbitrary coupling P x,z [i] of the conditional laws μ (i,n] (⋅|x [i] ) and μ (i,n] (⋅|z [i] ). Then, for any t ≥ 0, the concentration inequality  (21) holds with

$$\displaystyle\begin{array}{rcl} \varGamma _{ij} =\|\rho _{i}\|V _{ij}^{(i+1)},\qquad i,j \in T& &{}\end{array}$$
(37)

where the entries V ij (i+1) are given by (36).

In the case when each X i is equipped with the trivial metric ρ i (x i , z i ) = 1{x i  ≠ z i }, the entries Γ ij for j > i take the form

$$\displaystyle\begin{array}{rcl} \varGamma _{ij} =\sup _{{ x,z\in \mathsf{X} \atop x^{T\setminus \{i\}}=z^{T\setminus \{i\}}} }\mathbf{P}_{x,z}^{[i]}\left \{Y _{ j}^{(0)}\neq Y _{ j}^{(1)}\right \},& &{}\end{array}$$
(38)

where \((Y ^{(0)},Y ^{(1)}) =\big ((Y _{i+1}^{(0)},\ldots,Y _{n}^{(0)}),(Y _{i+1}^{(1)},\ldots,Y _{n}^{(1)})\big)\) is a random object taking values in X (i, n] ×X (i, n]. A special case of this construction, under the name of coupling matrix, was used by Chazottes et al. [6]. In that work, each P x, z [i] was chosen to minimize

$$\displaystyle\begin{array}{rcl} \mathbf{P}\{Y ^{(0)}\neq Y ^{(1)}\},& & {}\\ \end{array}$$

over all couplings P of μ (i, n](⋅ | x [i]) and μ (i, n](⋅ | z [i]), in which case we have

$$\displaystyle\begin{array}{rcl} \mathbf{P}_{x,z}^{[i]}\{Y ^{(0)}\neq Y ^{(1)}\}& =& \inf _{\mathbf{ P}\in \mathcal{C}(\mu ^{(i,n]}(\cdot \vert x^{[i]}),\mu ^{(i,n]}(\cdot \vert z^{[i]}))}\mathbf{P}\{Y ^{(0)}\neq Y ^{(1)}\} {}\\ & =& \left \Vert \mu ^{(i,n]}(\cdot \vert x^{[i]}) -\mu ^{(i,n]}(\cdot \vert z^{[i]})\right \Vert _{\mbox{ TV}}. {}\\ \end{array}$$

However, it is not clear how to relate the quantities \(\mathbf{P}_{x,z}^{[i]}\left \{Y _{j}^{(0)}\neq Y _{j}^{(1)}\right \}\) and \(\mathbf{P}_{x,z}^{[i]}\left \{Y ^{(0)}\neq Y ^{(1)}\right \}\), apart from the obvious bound

$$\displaystyle\begin{array}{rcl} \mathbf{P}_{x,z}^{[i]}\{Y _{ j}^{(0)}\neq Y _{ j}^{(1)}\}& \leq & \mathbf{P}_{ x,z}^{[i]}\{Y ^{(0)}\neq Y ^{(1)}\} = \left \Vert \mu ^{(i,n]}(\cdot \vert x^{[i]}) -\mu ^{(i,n]}(\cdot \vert z^{[i]})\right \Vert _{\mbox{ TV}}, {}\\ \end{array}$$

which gives

$$\displaystyle\begin{array}{rcl} \varGamma _{ij} \leq \sup _{{ x,z\in \mathsf{X} \atop x^{T\setminus \{i\}}=z^{T\setminus \{i\}}} }\left \Vert \mu ^{(i,n]}(\cdot \vert x^{[i]}) -\mu ^{(i,n]}(\cdot \vert z^{[i]})\right \Vert _{\mbox{ TV}}.& & {}\\ \end{array}$$

An alternative choice of coupling is the so-called maximal coupling due to Goldstein [13], which for our purposes can be described as follows: let U = (U ℓ ) ℓ = 1 m and Y = (Y ℓ ) ℓ = 1 m be two random m-tuples taking values in a product space E = E 1 × … × E m , where each E ℓ is Polish. Then there exists a coupling P of the probability laws \(\mathcal{L}(U)\) and \(\mathcal{L}(Y )\), such that

$$\displaystyle\begin{array}{rcl} \mathbf{P}\left \{U^{[\ell,m]}\neq Y ^{[\ell,m]}\right \} = \left \Vert \mathcal{L}(U^{[\ell,m]}) -\mathcal{L}(Y ^{[\ell,m]})\right \Vert _{\mbox{ TV}},\qquad \ell \in \{ 1,\ldots,m\}.& &{}\end{array}$$
(39)

Thus, for each i ∈ T and for every pair x, z ∈ X with x T∖{i} = z T∖{i}, let P x, z [i] be the Goldstein coupling of μ (i, n](⋅ | x [i]) and μ (i, n](⋅ | z [i]). Then for each j ∈ { i + 1, …, n}, using (39) we have

$$\displaystyle\begin{array}{rcl} \mathbf{P}_{x,z}^{[i]}\{Y _{ j}^{(0)}\neq Y _{ j}^{(1)}\}& \leq & \mathbf{P}_{ x,z}^{[i]}\{(Y _{ j}^{(0)},\ldots,Y _{ n}^{(0)})\neq (Y _{ j}^{(1)},\ldots,Y _{ n}^{(1)})\} {}\\ & =& \left \Vert \mu ^{[j,n]}(\cdot \vert x^{[i]}) -\mu ^{[j,n]}(\cdot \vert z^{[i]})\right \Vert _{\mbox{ TV}}. {}\\ \end{array}$$

This choice of coupling gives rise to the upper-triangular matrix Γ = (Γ ij ) i, j ∈ T with

$$\displaystyle\begin{array}{rcl} \varGamma _{ij} = \left \{\begin{array}{@{}l@{\quad }l@{}} 0, \quad &i> j \\ 1, \quad &i = j \\ \sup _{{ x,z\in \mathsf{X} \atop x^{T\setminus \left \{i\right \}}=z^{T\setminus \left \{i\right \}}} }\left \Vert \mu ^{[j,n]}(\cdot \vert x^{[i]}) -\mu ^{[j,n]}(\cdot \vert z^{[i]})\right \Vert _{\mbox{ TV}},\quad &i <j \end{array} \right..& &{}\end{array}$$
(40)

Substituting this matrix into (21), we recover the concentration inequality of Kontorovich and Ramanan [17], but with an improved constant in the exponent.
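To make the matrix (40) concrete, the following brute-force sketch of ours evaluates it for a randomly generated (and otherwise arbitrary) distribution μ on {0, 1}⁴ with the trivial metric on each coordinate; the conditional laws μ [j,n] (⋅ | x [i]) are obtained by direct marginalization.

```python
import itertools
import numpy as np

n = 4
states = list(itertools.product([0, 1], repeat=n))
rng = np.random.default_rng(5)
mu = rng.random(len(states)); mu /= mu.sum()       # a made-up law on {0,1}^4

def cond_law(j, prefix):
    # law of X^[j,n] given X^[i] = prefix (with i = len(prefix)), as a dict over suffixes
    i = len(prefix)
    w = {}
    for s, p in zip(states, mu):
        if s[:i] == prefix:
            w[s[j - 1:]] = w.get(s[j - 1:], 0.0) + p
    z = sum(w.values())
    return {k: v / z for k, v in w.items()}

def tv(P, Q):
    return 0.5 * sum(abs(P.get(k, 0.0) - Q.get(k, 0.0)) for k in set(P) | set(Q))

Gamma = np.eye(n)                                  # (40): ones on the diagonal, zeroes below
for i in range(1, n):
    for j in range(i + 1, n + 1):
        Gamma[i - 1, j - 1] = max(
            tv(cond_law(j, prefix + (0,)), cond_law(j, prefix + (1,)))
            for prefix in itertools.product([0, 1], repeat=i - 1))
print(np.round(Gamma, 3))
```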

Remark 5.5

It was erroneously claimed in [16, 14, 15] that the basic concentration inequalities of Chazottes et al. [6] and Kontorovich and Ramanan [17] are essentially the same, only derived using different methods. As the discussion above elucidates, the two methods use different couplings (the former, explicitly, and the latter, implicitly) — which yield quantitatively different and, in general, incomparable mixing coefficients.

Remark 5.6

Kontorovich and Ramanan obtained the matrix (40) using analytic methods without constructing an explicit coupling. In 2012, S. Shlosman posed the following question: could this matrix have been derived using a suitable coupling? We can now answer his question in the affirmative: the coupling is precisely Goldstein’s maximal coupling.

As an illustration, let us consider two specific types of the probability law μ: a directed Markov model (i.e., a Markov chain) and an undirected Markov model (i.e., a Gibbsian Markov random field). In the directed case, suppose that the elements of T are ordered in such a way that μ can be disintegrated in the form

$$\displaystyle\begin{array}{rcl} \mu (\mathrm{d}x) =\mu _{1}(\mathrm{d}x_{1}) \otimes K_{1}(x_{1},\mathrm{d}x_{2}) \otimes K_{2}(x_{2},\mathrm{d}x_{3}) \otimes \ldots \otimes K_{n-1}(x_{n-1},\mathrm{d}x_{n}),& &{}\end{array}$$
(41)

where \(\mu_{1}\) is a Borel probability measure on \((\mathsf{X}_{1},\mathcal{B}_{1})\) and, for each i ∈ [1, n − 1], \(K_{i}\) is a Markov kernel from \(\mathsf{X}_{i}\) to \(\mathsf{X}_{i+1}\). For each i ∈ [1, n), let

$$\displaystyle{ \theta _{i} \triangleq \sup _{x_{i},z_{i}\in \mathsf{X}_{i}}\left \Vert K_{i}(x_{i},\cdot ) - K_{i}(z_{i},\cdot )\right \Vert _{\mbox{ TV}} }$$

be the Dobrushin contraction coefficient of \(K_{i}\). Fix \(1 \leq i < j \leq n\), \(x^{[i-1]} \in \mathsf{X}^{[i-1]}\), and \(y_{i},y_{i}' \in \mathsf{X}_{i}\). An easy calculation [14] shows that, defining the signed measure \(\eta_{i}\) on \(\mathsf{X}_{i+1}\) by \(\eta_{i}(\mathrm{d}x_{i+1}) = K_{i}(y_{i},\mathrm{d}x_{i+1}) - K_{i}(y_{i}',\mathrm{d}x_{i+1})\) and the signed measure \(\zeta_{j}\) on \(\mathsf{X}_{j}\) by

$$\displaystyle{ \zeta _{j} =\eta _{i}K_{i+1}K_{i+2}\ldots K_{j-1}, }$$

we have

$$\displaystyle\begin{array}{rcl} \left \Vert \mu ^{[j,n]}(\cdot \vert x^{[i-1]}y_{ i}) -\mu ^{[j,n]}(\cdot \vert x^{[i-1]}y_{ i}')\right \Vert _{\mbox{ TV}}& =& \left \Vert \zeta _{j}\right \Vert _{\mbox{ TV}} \leq \theta _{i}\theta _{i+1}\ldots \theta _{j-1},{}\end{array}$$
(42)

where (4) was repeatedly invoked to obtain the last inequality. The above yields an upper bound on the entries \(\varGamma_{ij}\) in (40), and hence on the constant in the corresponding concentration inequality (21). When more delicate (e.g., spectral [15, 29]) estimates on \(\left \Vert \zeta _{j}\right \Vert _{\mbox{ TV}}\) are available, these translate directly into tighter concentration bounds.
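
As a quick numerical sanity check of (42) (again a sketch we add here, not part of the original text), the following snippet computes the Dobrushin contraction coefficients of a few explicitly chosen kernels and compares the exact total variation distance between the resulting conditional laws with the product bound; the kernels K1, K2, K3 are arbitrary illustrative choices.

```python
import numpy as np

def tv(p, q):
    return 0.5 * np.abs(p - q).sum()

def dobrushin(K):
    """Dobrushin contraction coefficient of a row-stochastic matrix K:
    the maximal total variation distance between two of its rows."""
    m = K.shape[0]
    return max(tv(K[a], K[b]) for a in range(m) for b in range(m))

# three illustrative kernels of an inhomogeneous binary chain (hypothetical numbers)
K1 = np.array([[0.9, 0.1], [0.2, 0.8]])
K2 = np.array([[0.7, 0.3], [0.4, 0.6]])
K3 = np.array([[0.8, 0.2], [0.3, 0.7]])

# exact TV distance between the laws of X_4 given X_1 = 0 and X_1 = 1,
# versus the product bound theta_1 * theta_2 * theta_3 from (42)
P = K1 @ K2 @ K3
exact = tv(P[0], P[1])
bound = dobrushin(K1) * dobrushin(K2) * dobrushin(K3)   # 0.7 * 0.3 * 0.5 = 0.105
assert exact <= bound + 1e-12
print(exact, bound)
```

In this two-state example the bound happens to be attained with equality; for larger alphabets, (42) is in general only an upper bound.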

In the undirected case, μ is a Gibbsian Markov random field induced by pair potentials [12]. To keep things simple, we assume that the local spaces \(\mathsf{X}_{i}\) are all finite. Define an undirected graph with vertex set T = [n] and edge set \(E = \left \{[i,i + 1]: 1 \leq i <n\right \}\) (i.e., a chain graph with vertex set T). Associate with each edge (i, j) ∈ E a potential function \(\psi_{ij}: \mathsf{X}_{i} \times \mathsf{X}_{j} \rightarrow [0,\infty)\). Together, these define a probability measure μ on X via

$$\displaystyle{ \mu (x) = \frac{\prod _{(i,j)\in E}\psi _{ij}(x_{i},x_{j})} {\sum _{y\in \mathsf{X}}\prod _{(i,j)\in E}\psi _{ij}(y_{i},y_{j})}. }$$

Since μ is a Markov measure on X, there is a sequence of Markov kernels \(K_{1},\ldots,K_{n-1}\) generating μ in the sense of (41). It is shown in [14] that the contraction coefficient \(\theta_{i}\) of the kernel \(K_{i}\) is bounded by

$$\displaystyle\begin{array}{rcl} \theta _{i} \leq \frac{R_{i} - r_{i}} {R_{i} + r_{i}},& & {}\\ \end{array}$$

where

$$\displaystyle{ R_{i} =\sup _{(x_{i},x_{i+1})\in \mathsf{X}_{i}\times \mathsf{X}_{i+1}}\psi _{i,i+1}(x_{i},x_{i+1}),\qquad r_{i} =\inf _{(x_{i},x_{i+1})\in \mathsf{X}_{i}\times \mathsf{X}_{i+1}}\psi _{i,i+1}(x_{i},x_{i+1}). }$$

The estimate above implies a concentration result, either via (42) or via (35). To apply the latter, recall that \(D^{\mu} = \sum_{k=1}^{\infty}(C^{\mu})^{k}\), where \(C^{\mu}\) is the Dobrushin interdependence matrix defined in (27). Assuming that ρ is the unweighted Hamming metric (i.e., \(\rho_{i}(x_{i},z_{i}) = \mathbf{1}\{x_{i}\neq z_{i}\}\) for all i) and that the \(\theta_{i}\)'s are all majorized by some θ < 1, it is easy to see that \((C^{\mu})_{ij} \leq \theta^{\vert i-j\vert }\).
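
The potential-based bound on \(\theta_{i}\) is easy to evaluate numerically. Below is a short sketch (ours; the Ising-type potentials and the helper name theta_bound are illustrative assumptions) that computes the estimates \((R_{i}-r_{i})/(R_{i}+r_{i})\) for a small chain and the resulting entrywise bound \(\theta^{\vert i-j\vert}\) discussed above.

```python
import numpy as np

def theta_bound(psi):
    """The bound (R - r) / (R + r) on the contraction coefficient of the kernel
    induced by a single pair potential psi (an |X_i| x |X_{i+1}| array)."""
    R, r = psi.max(), psi.min()
    return (R - r) / (R + r)

# hypothetical Ising-type potentials on a binary chain with n = 4 sites:
# psi(x, y) = exp(beta) if x == y, exp(-beta) otherwise
beta = 0.5
psi = np.exp(beta * np.array([[1.0, -1.0], [-1.0, 1.0]]))
potentials = [psi, psi, psi]                 # edges [1,2], [2,3], [3,4]

thetas = [theta_bound(p) for p in potentials]
theta = max(thetas)                          # equals tanh(beta) ~ 0.462 here
n = len(potentials) + 1

# entrywise bound theta^{|i-j|} from the paragraph above (trivial on the diagonal)
C_bound = theta ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
print(np.round(thetas, 3))
print(np.round(C_bound, 3))
```

With β = 0.5 each estimate equals tanh(0.5) ≈ 0.46, so the entrywise bound decays geometrically away from the diagonal.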

6 Open Questions

Our focus in this chapter has been on the martingale method for establishing concentration inequalities. In the case of product measures, other techniques, such as the entropy method or transportation-information inequalities, often lead to sharper bounds. However, these alternative techniques are less developed in the dependent setting, and there appears to be a gap between what is achievable using the martingale method and what is achievable using other means. We close the chapter by listing some open questions that are aimed at closing this gap:

  • (Approximate) tensorization of entropy. In the independent case, it is possible to derive the same concentration inequality (e.g., McDiarmid’s inequality) using either the martingale method or the entropy method, often with the same sharp constants. However, once the independence assumption is dropped, the situation is no longer so simple. Consider, for example, tensorization of entropy. Several authors (see, e.g., [26, 3, 27]) have obtained so-called approximate tensorization inequalities for the relative entropy in the case of weakly dependent random variables: under certain regularity conditions on μ, there exists a constant \(A_{\mu} \geq 1\), such that, for any other probability measure ν,

    $$\displaystyle\begin{array}{rcl} D(\nu \|\mu ) \leq A_{\mu } \cdot \sum _{i\in T}\mathbb{E}_{\nu }D\big(\nu _{i}(\cdot \vert X^{T\setminus \left \{i\right \}})\big\|\mu _{ i}(\cdot \vert X^{T\setminus \left \{i\right \}})\big).& & {}\end{array}$$
    (43)

    Having such an inequality in hand, one can proceed to prove concentration for Lipschitz functions in exactly the same way as in the independent case. However, it seems that the constants \(A_{\mu}\) in (43) are not sharp, in the sense that the resulting concentration inequalities are typically worse than what one can obtain using Theorems 4.2 or 4.3 under the same assumptions on μ and f. This motivates the following avenue for further investigation: Derive sharp inequalities of the form (43) by relating the constant \(A_{\mu}\) to appropriately chosen Wasserstein matrices.

  • General Wasserstein-type matrices. Using the techniques pioneered by Marton, Samson proved the following concentration of measure result: Consider a function \(f: \mathsf{X} \rightarrow \mathbb{R}\) satisfying an “asymmetric” Lipschitz condition of the form

    $$\displaystyle{ f(x) - f(y) \leq \sum _{i\in T}\alpha _{i}(x)\mathbf{1}\{x_{i}\neq y_{i}\},\qquad \forall x,y \in \mathsf{X} }$$

    for some functions \(\alpha _{i}: \mathsf{X} \rightarrow \mathbb{R}\), such that \(\sum_{i\in T}\alpha_{i}^{2}(x) \leq 1\) for all x ∈ X. Then, for any Borel probability measure μ on X, we have

    $$\displaystyle\begin{array}{rcl} \mathbb{P}_{\mu }\Big\{f(X) - \mathbb{E}_{\mu }[f(X)] \geq t\Big\} \leq \exp \left (- \frac{t^{2}} {2\|\varDelta \|_{2}^{2}}\right ),& & {}\end{array}$$
    (44)

    where the matrix Δ has entries \(\varDelta _{ij} = \sqrt{\varGamma _{ij}}\) with \(\varGamma _{ij}\) given by (40), and

    $$\displaystyle{ \|\varDelta \|_{2} \triangleq \sup _{v\in \mathbb{R}^{T}\setminus \{0\}}\frac{\|\varDelta v\|_{\ell^{2}(T)}} {\|v\|_{\ell^{2}(T)}} }$$

    is the operator norm of Δ. A more general result in this vein was derived by Marton [24], who showed that an inequality of the form (44) holds with Δ computed in terms of any matrix Γ of the form (36), where each \(\rho_{i}\) is the trivial metric. Samson’s proof relies on a fairly intricate recursive coupling argument. It would be interesting to develop analogs of (44) for arbitrary choices of the metrics \(\rho_{i}\) and with full freedom to choose the Wasserstein matrices \(V^{(i)}\) for each i ∈ T. A recent paper by Wintenberger [35] pursues this line of work.

  • The method of exchangeable pairs and Wasserstein matrices. An alternative route towards concentration inequalities in the dependent setting is via Stein’s method of exchangeable pairs [4, 5]. Using this method, Chatterjee obtained the following result [4, Chap. 4]: Let \(f: \mathsf{X} \rightarrow \mathbb{R}\) be a function which is 1-Lipschitz with respect to the weighted Hamming metric \(\rho _{\boldsymbol{\alpha }}\) defined in (3). Let μ be a Borel probability measure on X whose Dobrushin interdependence matrix \(C^{\mu}\) satisfies the condition \(\Vert C^{\mu}\Vert_{2} < 1\). Then, for any t ≥ 0,

    $$\displaystyle\begin{array}{rcl} \mathbb{P}_{\mu }\Big\{\big\vert f(X) - \mathbb{E}_{\mu }[f(X)]\big\vert \geq t\Big\}\leq 2\exp \left (-\frac{(1 -\| C^{\mu }\|_{2})t^{2}} {\sum _{i\in T}\alpha _{i}^{2}} \right ).& & {}\end{array}$$
    (45)

    The key ingredient in the proof of (45) is the so-called Gibbs sampler, i.e., the Markov kernel \(\bar{K}\) on X given by

    $$\displaystyle{ \bar{K}(x,\mathrm{d}y) \triangleq \frac{1} {\vert T\vert }\sum _{i\in T}\delta _{x^{T\setminus \left \{i\right \}}}(\mathrm{d}y^{T\setminus \left \{i\right \}}) \otimes \mu _{ i}(\mathrm{d}y_{i}\vert x^{T\setminus \left \{i\right \}}). }$$

    This kernel leaves μ invariant, i.e., \(\mu =\mu \bar{ K}\), and it is easy to show (see, e.g., [27]) that it contracts the \(\bar{W}\) distance: for any other probability measure ν on X,

    $$\displaystyle\begin{array}{rcl} \bar{W}(\mu \bar{K},\nu \bar{K}) \leq \left (1 -\frac{1 -\| C^{\mu }\|_{2}} {\vert T\vert } \right )\bar{W}(\mu,\nu ).& & {}\\ \end{array}$$

    Since one can obtain contraction estimates for Markov kernels using Wasserstein matrices, it is natural to ask whether Chatterjee’s result can be derived as a special case of a more general method, which would let us freely choose an arbitrary Markov kernel K that leaves μ invariant and control the constants in the resulting concentration inequality by means of a judicious choice of a Wasserstein matrix for K. Such a method would most likely rely on general comparison theorems for Gibbs measures [31].
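
To make the role of the Gibbs sampler in the last item concrete, here is a small sketch (ours; finite alphabets are assumed, and the helper name gibbs_sampler_kernel is our own) that builds the kernel \(\bar{K}\) displayed above for a finite probability table and checks the invariance \(\mu =\mu \bar{ K}\) numerically.

```python
import itertools
import numpy as np

def gibbs_sampler_kernel(mu):
    """Transition matrix of the random-scan Gibbs sampler for a finite pmf `mu`
    of shape (|X_1|, ..., |X_n|): pick a coordinate i uniformly at random and
    resample X_i from its conditional law given the remaining coordinates."""
    n = mu.ndim
    states = list(itertools.product(*(range(s) for s in mu.shape)))
    index = {x: k for k, x in enumerate(states)}
    K = np.zeros((len(states), len(states)))
    for x in states:
        for i in range(n):
            sl = x[:i] + (slice(None),) + x[i + 1:]
            cond = mu[sl] / mu[sl].sum()      # conditional law of X_i given the rest
            for yi in range(mu.shape[i]):
                y = x[:i] + (yi,) + x[i + 1:]
                K[index[x], index[y]] += cond[yi] / n
    return K, states

# hypothetical example: a strictly positive pmf on {0,1}^3
rng = np.random.default_rng(0)
mu = rng.random((2, 2, 2))
mu /= mu.sum()

K, states = gibbs_sampler_kernel(mu)
mu_vec = np.array([mu[x] for x in states])
assert np.allclose(mu_vec @ K, mu_vec)        # invariance: mu K-bar = mu
assert np.allclose(K.sum(axis=1), 1.0)        # rows of K-bar are probability vectors
print(K.shape)
```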