1 Introduction

In geometric analysis on manifolds, it is by now well-established that the Ricci curvature of the underlying manifold has profound consequences for functional inequalities and for the rate of convergence of Markov semigroups toward equilibrium. One can consult, in particular, the books [1] and [17]. Indeed, in the setting of diffusions (see [1, §1.11]), there is an elegant theory around the Bakry–Émery curvature-dimension condition.

Roughly speaking, in the setting of diffusion on a continuous space, when there is an appropriate “integration by parts” formula (that connects the Dirichlet form to the Laplacian), a positive curvature condition implies powerful functional inequalities. Most pertinent to the present discussion, positive curvature yields transport-entropy and logarithmic Sobolev inequalities.

For discrete state spaces, the situation appears substantially more challenging. There are numerous attempts at generalizing lower bounds on the Ricci curvature to discrete metric measure spaces. At a broad level, these approaches suffer from one of two drawbacks: Either the notion of “positive curvature” is difficult to verify for concrete spaces, or the “expected” functional analytic consequences do not follow readily.

In the present note, we consider the notion of coarse Ricci curvature due to Ollivier [14]. It constitutes an approach of the latter type: There is a large body of finite-state Markov chains that have positive curvature in Ollivier’s sense, but for many of them we do not yet know whether strong functional-analytic consequences hold. This study is made more fascinating by the straightforward connection between coarse Ricci curvature on graphs and the notion of path coupling arising in the study of rapid mixing of Markov chains [3]. The latter is a powerful method for establishing fast convergence to the stationary measure; see, for example, [10, Ch. 14].

In particular, if there were an analogy to the diffusion setting that allowed coarse Ricci curvature lower bounds to yield logarithmic Sobolev inequalities (or variants thereof), it would even imply new mixing time bounds for well-studied chains arising from statistical physics, combinatorics, and theoretical computer science. A conjecture of Peres and Tetali asserts that a modified log-Sobolev inequality (MLSI) should always hold in this setting. Roughly speaking, this means that the underlying Markov chain has exponential convergence to equilibrium in the relative entropy distance.

Our aim is to give some preliminary results in this direction and to suggest a new approach to establishing MLSI. In particular, we prove a \(W_1\) transport-entropy inequality. By results of Bobkov and Götze [2], this is equivalent to a sub-Gaussian concentration estimate for Lipschitz functions. Sammer has shown that such an inequality follows formally from MLSI [16]; thus one can view its verification as evidence in support of the Peres–Tetali conjecture. Our result also addresses Problem J in Ollivier’s survey [15].

1.1 Coarse Ricci Curvature and Transport-Entropy Inequalities

Let \(\Omega\) be a countable state space, and let \(p: \Omega \times \Omega \rightarrow [0,1]\) denote a transition kernel. For \(x \in \Omega\), we will use the notation p(x, ⋅ ) to denote the function \(y \mapsto p(x,y)\). For a probability measure π on \(\Omega\) and \(f: \Omega \rightarrow \mathbb{R}_{+}\), we define the entropy of f by

$$\displaystyle{\mathrm{Ent}_{\pi }(\,f) = \mathbb{E}_{\pi }\left [\,f\log \left ( \frac{f} {\mathbb{E}_{\pi }[\,f]}\right )\right ]\,.}$$

We also equip \(\Omega\) with a metric d. If μ and ν are two probability measures on \(\Omega\), we denote by W 1(μ, ν) the transportation cost (or Wasserstein 1-distance) between μ and ν, with the cost function given by the distance d. Namely,

$$\displaystyle{W_{1}(\mu,\nu ) =\inf \left \{\mathbb{E}\left [d(X,Y )\right ]\right \}}$$

where the infimum is taken over all couplings (X, Y ) of (μ, ν). Recall the Monge–Kantorovitch duality formula for \(W_1\) (see, for instance, [17, Case 5.16]):

$$\displaystyle{ W_{1}(\mu,\nu ) =\sup \left \{\int _{\Omega }f\,d\mu -\int _{\Omega }f\,d\nu \right \}, }$$
(1)

where the supremum is taken over 1–Lipschitz functions f. We consider the following notion of curvature introduced by Ollivier [14].

Definition 1.1

The coarse Ricci curvature of \((\Omega,p,d)\) is the largest \(\kappa \in [-\infty,1]\) such that the inequality

$$\displaystyle{W_{1}(\,p(x,\cdot ),p(y,\cdot )) \leq (1-\kappa )\,d(x,y)}$$

holds true for every \(x,y \in \Omega\).

In the sequel we will be interested in positive Ricci curvature. Under this condition the map \(\mu \mapsto \mu p\) is a contraction for \(W_1\). As a result, it has a unique fixed point and \(\mu p^{n}\) converges to this fixed point as \(n \rightarrow \infty\). In other words the Markov kernel p has a unique stationary measure and is ergodic. The main purpose of this note is to show that positive curvature yields a transport-entropy inequality, or equivalently a Gaussian concentration inequality for the stationary measure.
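To make Definition 1.1 concrete, here is a minimal numerical sketch (not part of the original note; the chain, the lazy random walk on the complete graph \(K_4\) with the graph metric, is an illustrative choice, and the helper routines w1 and coarse_ricci are ours). It computes \(W_1\) by solving the transport linear program and then takes the worst pair of states to obtain the coarse Ricci curvature.

    # Illustrative sketch, not from the original note.
    import numpy as np
    from scipy.optimize import linprog

    def w1(mu, nu, d):
        """W_1(mu, nu) for the cost matrix d, via the transport linear program."""
        n = len(mu)
        A_eq = np.zeros((2 * n, n * n))
        for i in range(n):
            A_eq[i, i * n:(i + 1) * n] = 1.0   # row marginals of the coupling = mu
            A_eq[n + i, i::n] = 1.0            # column marginals of the coupling = nu
        res = linprog(d.reshape(-1), A_eq=A_eq, b_eq=np.concatenate([mu, nu]),
                      bounds=(0, None))
        return res.fun

    def coarse_ricci(p, d):
        """Largest kappa with W_1(p(x,.), p(y,.)) <= (1 - kappa) d(x, y) for all x, y."""
        n = p.shape[0]
        return min(1 - w1(p[x], p[y], d) / d[x, y]
                   for x in range(n) for y in range(n) if d[x, y] > 0)

    n = 4
    d = 1.0 - np.eye(n)                                    # graph metric on K_4
    p = 0.5 * np.eye(n) + 0.5 * (1 - np.eye(n)) / (n - 1)  # lazy random walk on K_4
    print(coarse_ricci(p, d))                              # n/(2(n-1)) = 2/3 here

The brute-force search over all pairs of states is of course only intended as a sanity check on small examples.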

Definition 1.2

We say that a probability measure μ on \(\Omega\) satisfies the Gaussian concentration property with constant C if the inequality

$$\displaystyle{\int _{\Omega }\exp (\,f)\,d\mu \leq \exp \left (\int _{\Omega }f\,d\mu + C\,\Vert \,f\Vert _{\text{Lip}}^{2}\right )}$$

holds true for every Lipschitz function f.
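To spell out the name, a standard Chernoff-bound computation turns this into a sub-Gaussian tail estimate: applying the inequality to λf, using Markov's inequality and optimizing over λ > 0, one finds that for every 1-Lipschitz function f and every t > 0,

$$\displaystyle{\mu \left (\,f \geq \int _{\Omega }f\,d\mu + t\right ) \leq \exp \left (-\frac{t^{2}} {4C}\right )\,.}$$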

Now we spell out the dual formulation of the Gaussian concentration property in terms of transport inequality. Recall first the definition of the relative entropy (or Kullback divergence): for two measures μ, ν on \((\Omega,\mathcal{B})\),

$$\displaystyle{D\!\left (\nu \,\|\,\mu \right ) =\mathrm{ Ent}_{\mu }[\tfrac{d\nu } {d\mu }] =\int _{\Omega }\log \left (\frac{d\nu } {d\mu }\right )\,d\nu }$$

if ν is absolutely continuous with respect to μ and \(D\!\left (\nu \,\|\,\mu \right ) = +\infty\) otherwise. As usual, if X and Y are random variables with laws ν and μ, we will take \(D\!\left (X\,\|\,Y \right )\) to be synonymous with \(D\!\left (\nu \,\|\,\mu \right )\).

Definition 1.3

We say that μ satisfies \((T_1)\) with constant C if for every probability measure ν on \(\Omega\) we have

$$\displaystyle{W_{1}(\nu,\mu )^{2} \leq C\,D\!\left (\nu \,\|\,\mu \right )\,.}$$

As observed by Bobkov and Götze [2], the inequality \((T_1)\) and the Gaussian concentration property are equivalent.

Lemma 1.4

A probability measure μ satisfies the Gaussian concentration property with constant C if and only if it satisfies \((T_1)\) with constant 4C.

This is a relatively straightforward consequence of the Monge–Kantorovitch duality (1); we refer to [2] for details.

Theorem 1.5

Assume that \((\Omega,p,d)\) has positive coarse Ricci curvature 1∕α and that the one-step transitions all satisfy \((T_1)\) with the same constant C; that is, suppose that for every \(x \in \Omega\) and for every probability measure ν we have

$$\displaystyle{ W_{1}(\nu,p(x,\cdot ))^{2} \leq C \cdot D\!\left (\nu \,\|\,p(x,\cdot )\right )\,. }$$
(2)

Then the stationary measure π satisfies \((T_1)\) with constant \(\frac{C\alpha } {2-1/\alpha }\) .

Remark 1.6

Observe that Theorem 1.5 does not assume reversibility.

The hypothesis (2) might seem unnatural at first sight but it is automatically satisfied for the random walk on a graph when d is the graph distance. Indeed, recall Pinsker’s inequality: for all probability measures μ, ν we have

$$\displaystyle{\mathrm{TV}(\mu,\nu ) \leq \sqrt{\frac{1} {2}\,D\!\left (\mu \,\|\,\nu \right )},}$$

where TV denotes the total variation distance. This yields the following lemma.

Lemma 1.7

Let μ be a probability measure on a metric space (M, d) and assume that the support of μ has finite diameter \(\Delta\) . Then μ satisfies \((T_1)\) with constant \(\Delta ^{2}/2\) .

Proof

Let ν be absolutely continuous with respect to μ. Then both μ and ν are supported on a set of diameter \(\Delta\). This implies that

$$\displaystyle{W_{1}(\mu,\nu ) \leq \Delta \cdot \mathrm{ TV}(\mu,\nu ).}$$

Combining this with Pinsker’s inequality we get \(W_{1}(\mu,\nu )^{2} \leq \frac{\Delta ^{2}} {2} D\!\left (\nu \,\|\,\mu \right )\), which is the desired result. □

Random walks on graphs A particular case of special interest will be random walks on finite graphs. Let G = (V, E) be a connected, undirected graph, possibly with self-loops. Given non-negative conductances \(c: E \rightarrow \mathbb{R}_{+}\) on the edges, we recall the Markov chain {X t } defined by

$$\displaystyle{\Pr [X_{t+1} = y\mid X_{t} = x] = \frac{c(\{x,y\})} {\sum _{z\in V }c(\{x,z\})}\,.}$$

We refer to any such chain as a random walk on the graph G. If it holds that \(c(\{x,x\}) \geq \frac{1} {2}\sum _{z\in V }c(\{x,z\})\) for all \(x \in V\), we say that the corresponding random walk is lazy. We will equip G with its graph distance d.
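As a small illustration (a hypothetical toy example, not one from the text), the following sketch builds the transition matrix of such a walk from a conductance table on a 4-cycle with self-loops, checks the laziness condition above, and verifies the standard fact that the stationary measure is proportional to the total conductance at each vertex.

    # Illustrative sketch, not from the original note.
    import numpy as np

    V = [0, 1, 2, 3]
    c = {frozenset({0, 1}): 1.0, frozenset({1, 2}): 2.0,
         frozenset({2, 3}): 3.0, frozenset({3, 0}): 4.0}
    for x in V:     # add self-loops carrying half of the total conductance at x
        c[frozenset({x})] = sum(w for e, w in c.items() if x in e and len(e) == 2)

    def cond(x, y):
        return c.get(frozenset({x, y}), 0.0)

    p = np.array([[cond(x, y) / sum(cond(x, z) for z in V) for y in V] for x in V])
    lazy = all(p[x, x] >= 0.5 for x in V)   # c({x,x}) >= (1/2) sum_z c({x,z})

    weight = np.array([sum(cond(x, z) for z in V) for x in V])
    pi = weight / weight.sum()              # reversible stationary measure
    print(lazy, np.allclose(pi @ p, pi))    # True True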

In this setting, the transitions of the walk are supported on a set of diameter 2. So combining the preceding lemma with Theorem 1.5, one arrives at the following.

Corollary 1.8

If a random walk on a graph has positive coarse Ricci curvature \(\tfrac{1} {\alpha }\) (with respect to the graph distance), then the stationary measure π satisfies

$$\displaystyle{W_{1}(\mu,\pi )^{2} \leq \frac{2\alpha } {2 - 1/\alpha }\,D\!\left (\mu \,\|\,\pi \right ),}$$

for every probability measure μ.

Remark 1.9

One should note that in this context we have

$$\displaystyle{d(x,y) \leq W_{1}\left (\,p(x,\cdot ),p(y,\cdot )\right ) + 2,\quad \forall x,y \in \Omega,}$$

just because after one step the walk is at distance 1 at most from its starting point. As a result, having coarse Ricci curvature 1∕α implies that the diameter \(\Delta\) of the graph is at most 2α. So by the previous lemma, every measure on the graph satisfies \((T_1)\) with constant \(2\alpha^{2}\). The point of Corollary 1.8 is that for the stationary measure π the constant is of order α rather than \(\alpha^{2}\).
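As a quick numerical sanity check of Corollary 1.8 (purely illustrative, not an example from the text), one can take the lazy walk on the complete graph \(K_4\): there the graph distance is 0/1, so \(W_1\) coincides with the total variation distance, and the inequality can be tested directly against randomly sampled measures.

    # Illustrative sketch, not from the original note.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 4
    p = 0.5 * np.eye(n) + 0.5 * (1 - np.eye(n)) / (n - 1)  # lazy walk on K_4
    pi = np.full(n, 1.0 / n)                               # stationary measure
    kappa = n / (2 * (n - 1))    # coarse Ricci curvature of this walk (-> 1/2 as n grows)
    const = 2 * (1 / kappa) / (2 - kappa)                  # constant of Corollary 1.8

    for _ in range(1000):
        mu = rng.random(n) + 1e-12
        mu /= mu.sum()
        w1 = 0.5 * np.abs(mu - pi).sum()                   # = TV, since d is 0/1 on K_4
        d_kl = float(np.sum(mu * np.log(mu / pi)))
        assert w1 ** 2 <= const * d_kl + 1e-12
    print("Corollary 1.8 verified on all sampled measures")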

We now present two proofs of Theorem 1.5. The first proof is rather short and based on the duality formula (1). The second argument provides an explicit coupling based on an entropy-minimal drift process. In Sect. 3, we discuss logarithmic Sobolev inequalities. In particular, we present a conjecture about the structure of the entropy-minimal drift that is equivalent to the Peres–Tetali MLSI conjecture.

After the first version of this note was released we were notified that Theorem 1.5 was proved by Djellout, Guillin and Wu in [4, Proposition 2.10]. Note that this article actually precedes Ollivier’s work. The proof given there corresponds to our first proof, by duality. Our second proof is more original but does share some similarities with the argument given by K. Marton in [11, Proposition 1]. Also, after hearing about our work, Fathi and Shu [5] used their transport-information framework to provide yet another proof.

2 The W 1 Transport-Entropy Inequality

We now present two proofs of Theorem 1.5. Recall the relevant data \((\Omega,p,d)\). Define the process {B t } to be the discrete-time random walk on \(\Omega\) corresponding to the transition kernel p. For \(x \in \Omega\), we will use B t (x) to denote the random variable B t ∣{B 0 = x}. For t ≥ 0, we make the definition

$$\displaystyle{P_{t}[\,f](x) = \mathbb{E}[\,f(B_{t}(x))]\,.}$$
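In other words, \(P_t\) is simply the t-fold power of the one-step averaging operator:

$$\displaystyle{P_{t}[\,f](x) =\sum _{y\in \Omega }p^{t}(x,y)\,f(y),}$$

where \(p^{t}\) denotes the t-step transition kernel; in particular \(P_{s+t} = P_{s}P_{t}\), a fact used repeatedly below.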

2.1 Proof by Duality

Let \(f: \Omega \rightarrow \mathbb{R}\) be a Lipschitz function. Using the hypothesis (2) and Lemma 1.4 we get

$$\displaystyle{P_{1}[\exp (\,f)](x) \leq \exp \left (P_{1}[\,f](x) + \frac{C} {4} \Vert \,f\Vert _{\mathrm{Lip}}^{2}\right ),}$$

for all \(x \in \Omega\). Applying this inequality repeatedly we obtain

$$\displaystyle{ P_{n}[\exp (\,f)](x) \leq \exp \left (P_{n}[\,f](x) + \frac{C} {4} \sum _{k=0}^{n-1}\Vert P_{ k}\,f\Vert _{\mathrm{Lip}}^{2}\right ), }$$
(3)

for every integer n and all \(x \in \Omega\). Now we use the curvature hypothesis. Note that the Monge–Kantorovitch duality (1) yields easily

$$\displaystyle\begin{array}{rcl} 1 -\tfrac{1} {\alpha } \ & =& \sup _{x\neq y}\left \{\frac{W_{1}(\,p(x,\cdot ),p(y,\cdot ))} {d(x,y)} \right \} {}\\ & =& \sup _{x\neq y,g}\left \{\frac{P_{1}[g](x) - P_{1}[g](y)} {\Vert g\Vert _{\mathrm{Lip}}d(x,y)} \right \} {}\\ & =& \sup _{g}\left \{\frac{\Vert P_{1}[g]\Vert _{\mathrm{Lip}}} {\Vert g\Vert _{\mathrm{Lip}}} \right \}. {}\\ \end{array}$$

Therefore \(\Vert P_{1}[g]\Vert _{\mathrm{Lip}} \leq (1 - 1/\alpha )\Vert g\Vert _{\mathrm{Lip}}\) for every Lipschitz function g and thus

$$\displaystyle{\Vert P_{n}[\,f]\Vert _{\mathrm{Lip}} \leq (1 - 1/\alpha )^{n}\Vert \,f\Vert _{ \text{Lip}},}$$

for every integer n. Plugging this bound into inequality (3) and summing the geometric series, \(\sum _{k=0}^{n-1}(1 - 1/\alpha )^{2k} \leq \frac{1} {1 - (1 - 1/\alpha )^{2}} = \frac{\alpha } {2 - 1/\alpha }\), then yields

$$\displaystyle{P_{n}\left [\exp (\,f)\right ](x) \leq \exp \left (P_{n}[\,f](x) + \frac{C\alpha } {4(2 - 1/\alpha )}\Vert \,f\Vert _{\mathrm{Lip}}^{2}\right ).}$$

Letting \(n \rightarrow \infty\) yields

$$\displaystyle{\int _{\Omega }\exp (\,f)\,d\pi \leq \exp \left (\int _{\Omega }f\,d\pi + \frac{C\alpha } {4\,(2 - 1/\alpha )}\Vert \,f\Vert _{\mathrm{Lip}}^{2}\right ).}$$

The stationary measure π thus satisfies Gaussian concentration with constant \(\frac{C\alpha } {4\,(2-1/\alpha )}\). Another application of the duality, Lemma 1.4, yields the desired outcome, proving Theorem 1.5.

2.2 An Explicit Coupling

As promised, we now present a second proof of Theorem 1.5 based on an explicit coupling. The proof does not rely on duality, and our hope is that the method presented will be useful for establishing MLSI; see Sect. 3.

The first step of the proof follows a similar idea to the one used in [11, Proposition 1]. Given the random walk {B t } and another process {X t } (not necessarily Markovian), there is a natural coupling between the two processes that takes advantage of the curvature condition and gives a bound on the distance between the processes at time T in terms of the relative entropy. This step is summarized in the following result.

Proposition 2.1

Assume that \((\Omega,p,d)\) satisfies the conditions of Theorem  1.5 . Fix a time T and a point \(x_{0} \in \Omega\). Let {B 0 = x 0, B 1, … , B T } be the corresponding discrete-time random walk starting from x 0 and let {X 0 = x 0, X 1, … , X T } be an arbitrary random process on \(\Omega\) starting from x 0 . Then, there exists a coupling between the processes (X t ) and (B t ) such that

$$\displaystyle{ E[d(X_{T},B_{T})] \leq \sqrt{ \frac{C\alpha } {2 - 1/\alpha }D\!\left (\{X_{0},X_{1},\ldots,X_{T}\}\,\|\,\{B_{0},B_{1},\ldots,B_{T}\}\right )}. }$$

In view of the above proposition, proving a transportation-entropy inequality for \((\Omega,p,d)\) is reduced to the following: given a measure ν on \(\Omega\), we are looking for a process {X t } which satisfies: (i) \(X_{T} \sim \nu\) and (ii) the relative entropy between (X 0, … , X T ) and (B 0, … , B T ) is as small as possible.

To achieve the above, our key idea is the construction of a process X t which is entropy minimal in the sense that it satisfies

$$\displaystyle{ X_{T} \sim \nu \text{ and }D\!\left (\{X_{0},X_{1},\ldots,X_{T}\}\,\|\,\{B_{0},B_{1},\ldots,B_{T}\}\right ) = D\!\left (X_{T}\,\|\,B_{T}\right ). }$$
(4)

This process can be thought of as the Doob transform of the random walk with a given target law. In the setting of Brownian motion on \(\mathbb{R}^{n}\) equipped with the Gaussian measure, the corresponding process appears in work of Föllmer [6, 7]. See [8] for applications to functional inequalities, and the work of Léonard [9] for a somewhat different perspective on the connection to optimal transportation.

2.2.1 Proof of Proposition 2.1: Construction of the Coupling

Given t ∈ {1, … , T} and \(x_{1},\ldots,x_{t-1} \in \Omega\), let ν(t, x 0, … , x t−1, ⋅ ) be the conditional law of X t given X 0 = x 0, … , X t−1 = x t−1. Now we construct the coupling of X and B as follows. Set X 0 = B 0 = x 0 and given (X 1, B 1), … , (X t−1, B t−1) set (X t , B t ) to be a coupling of ν(t, X 0, … , X t−1, ⋅ ) and p(B t−1, ⋅ ) which is optimal for W 1. Then by construction the marginals of this process coincide with the original processes {X t } and {B t }.

The next lemma follows from the coarse Ricci curvature property and the definition of our coupling.

Lemma 2.2

For every t ∈ {1, … , T},

$$\displaystyle{\mathbb{E}_{t-1}\left [d(X_{t},B_{t})\right ]\leq \sqrt{C \cdot D\!\left (\nu (t, X_{0 }, \mathop{\ldots }, X_{t-1 }, \cdot )\,\|\,p(X_{t-1 }, \cdot ) \right )}+\left (1 -\frac{1} {\alpha } \right )d(X_{t-1},B_{t-1})}$$

where \(\mathbb{E}_{t-1}[\cdot ]\) stands for the conditional expectation given (X 0, B 0), … , (X t−1, B t−1).

Proof

By definition of the coupling, the triangle inequality for W 1, the one-step transport inequality (2) and the curvature condition, we have

$$\displaystyle\begin{array}{rcl} & & \mathbb{E}_{t-1}\left [d(X_{t},B_{t})\right ] {}\\ & & \ \quad = W_{1}\left (\nu (t,X_{0},\mathop{\ldots },X_{t-1},\cdot ),p(B_{t-1},\cdot )\right ) {}\\ & & \ \quad \leq W_{1}\left (\nu (t,X_{0},\mathop{\ldots },X_{t-1},\cdot ),p(X_{t-1},\cdot )\right ) + W_{1}\left (\,p(X_{t-1},\cdot ),p(B_{t-1},\cdot )\right ) {}\\ & & \ \quad \leq \sqrt{C \cdot D\!\left (\nu (t, X_{0 }, \mathop{\ldots }, X_{t-1 }, \cdot )\,\|\,p(X_{t-1 }, \cdot ) \right )} + \left (1 -\frac{1} {\alpha } \right )d(X_{t-1},B_{t-1})\,.\quad \ \qquad {}\\ \end{array}$$

Remark that the chain rule for relative entropy asserts that

$$\displaystyle{ \sum _{t=1}^{T}\mathbb{E}[D\!\left (\nu (t,X_{ 0},\mathop{\ldots },X_{t-1},\cdot )\,\|\,p(X_{t-1},\cdot )\right )] = D\!\left (\{X_{0},X_{1},\ldots,X_{T}\}\,\|\,\{B_{0},B_{1},\ldots,B_{T}\}\right ). }$$
(5)

Using the preceding lemma inductively, then Cauchy–Schwarz, and finally the geometric series bound \(\sum _{t=1}^{T}(1 - 1/\alpha )^{2(T-t)} \leq \frac{\alpha } {2 - 1/\alpha }\) together with (5), yields

$$\displaystyle\begin{array}{rcl} \mathbb{E}[d(X_{T},B_{T})]& \leq &\sum _{t=1}^{T}\left (1 -\frac{1} {\alpha } \right )^{T-t}\mathbb{E}\left [\sqrt{C \cdot D\!\left (\nu (t, X_{ 0},\mathop{\ldots },X_{t-1},\cdot )\,\|\,p(X_{t-1},\cdot )\right )}\right ] {}\\ & \leq &\sqrt{\sum _{t=1 }^{T }\left (1 - \frac{1} {\alpha } \right )^{2(T-t)}}\sqrt{\sum _{t=1 }^{T }C \cdot \mathbb{E}\left [D\!\left (\nu (t, X_{0 }, \mathop{\ldots }, X_{t-1 }, \cdot )\,\|\,p(X_{t-1 }, \cdot ) \right ) \right ]} {}\\ & \stackrel{\text{(5)}}{\leq }& \sqrt{ \frac{\alpha } {2 - 1/\alpha }}\sqrt{ C \cdot D\!\left (\{X_{0},X_{1},\ldots,X_{T}\}\,\|\,\{B_{0},B_{1},\ldots,B_{T}\}\right )}, {}\\ \end{array}$$

completing the proof of Proposition 2.1.

2.2.2 The Entropy-Optimal Drift Process

Our goal in this section is to construct a process x 0 = X 0, X 1, … , X T satisfying equation (4). Suppose that we are given a measure ν on \(\Omega\) along with an initial point \(x_{0} \in \Omega\) and a time T ≥ 1. We define the Föllmer drift process associated to (ν, x 0, T) as the stochastic process \(\{X_{t}\}_{t=0}^{T}\) defined as follows.

Let μ T be the law of B T (x 0) and denote by f the density of ν with respect to μ T . Note that f is well-defined as long as the support of μ T is \(\Omega\). Now let \(\{X_{t}\}_{t=0}^{T}\) be the time-inhomogeneous Markov chain on \(\Omega\) whose transition probabilities at time t are given by

$$\displaystyle{ q_{t}(x,y):= \mathbb{P}(X_{t} = y\mid X_{t-1} = x) = \frac{P_{T-t}f(y)} {P_{T-t+1}f(x)}\,p(x,y). }$$
(6)

We will take care in what follows to ensure the denominator does not vanish. Note that (q t ) is indeed a transition matrix as

$$\displaystyle{ \sum _{y\in \Omega }P_{T-t}f(y)\,p(x,y) = P_{T-t+1}f(x). }$$
(7)

We now state a key property of the drift.

Lemma 2.3

If p T(x 0, x) > 0 for all x ∈ supp(ν), then {X t } is well-defined. Furthermore, for every \(x_{1},\ldots,x_{T} \in \Omega\) we have

$$\displaystyle{ \mathbb{P}\left ((X_{1},\ldots,X_{T}) = (x_{1},\ldots,x_{T})\right ) = \mathbb{P}\left ((B_{1},\ldots,B_{T}) = (x_{1},\ldots,x_{T})\right )\,f(x_{T}). }$$
(8)

In particular X T has law dν = f dμ T .

Proof

By definition of the process (X t ) we have

$$\displaystyle{\begin{array}{rcl} \mathbb{P}\left ((X_{1},\ldots,X_{T})=(x_{1},\ldots,x_{T})\right )& =&\prod _{t=1}^{T}\mathbb{P}\left (X_{t} = x_{t}\mid (X_{1},\ldots,X_{t-1}) = (x_{1},\ldots,x_{t-1})\right ) \\ & =&\prod _{t=1}^{T} \frac{P_{T-t}f(x_{t})} {P_{T-t+1}f(x_{t-1})} p(x_{t-1},x_{t}) \\ & =& \frac{f(x_{T})} {P_{T}f(x_{0})}\left (\prod _{t=1}^{T}p(x_{t-1},x_{t})\right ),\end{array} }$$

which is the result, since \(P_{T}f(x_{0}) =\int _{\Omega }f\,d\mu _{T} = 1\). □

In words, the preceding lemma asserts that the law of the process {X t } has density f(x T ) with respect to the law of the process {B t }. As a result we have in particular

$$\displaystyle{D\!\left (\{X_{0},X_{1},\ldots,X_{T}\}\,\|\,\{B_{0},B_{1},\ldots,B_{T}\}\right ) = \mathbb{E}[\log f(X_{T})] = D\!\left (\nu \,\|\,\mu _{T}\right ),}$$

since X T has law ν. Note that for any other process {Y t } such that Y 0 = x 0 and Y T has law ν, one always has the inequality

$$\displaystyle{ D\!\left (\{Y _{0},Y _{1},\ldots,Y _{T}\}\,\|\,\{B_{0},B_{1},\ldots,B_{T}\}\right ) \geq D\!\left (Y _{T}\,\|\,B_{T}\right ) = D\!\left (\nu \,\|\,\mu _{T}\right ). }$$
(9)

Moreover, {X t } is the unique random process for which this inequality is tight; uniqueness follows from the strict convexity of the relative entropy.
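The construction (6) is easy to carry out numerically. The following self-contained sketch (the chain, the target law and the horizon are illustrative choices, not taken from the text, and the helper names are hypothetical) builds the drift transitions, propagates the law of {X t }, and checks the two properties just discussed: that X T has law ν, and that the path relative entropy computed term by term via the chain rule (5) collapses to \(D(\nu \,\|\,\mu _{T})\), as in (4).

    # Illustrative sketch, not from the original note.
    import numpy as np

    def follmer_transitions(p, x0, T, nu):
        """Transition matrices q_1, ..., q_T of the drift process defined in (6)."""
        mu_T = np.linalg.matrix_power(p, T)[x0]      # law of B_T(x0)
        f = nu / mu_T                                # density of nu w.r.t. mu_T
        F = [np.linalg.matrix_power(p, s) @ f for s in range(T + 1)]  # F[s] = P_s f
        qs = [p * F[T - t][None, :] / F[T - t + 1][:, None] for t in range(1, T + 1)]
        return qs, mu_T

    def kl(a, b):                                    # relative entropy D(a || b)
        mask = a > 0
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))

    n, T, x0 = 3, 3, 0
    p = 0.5 * np.eye(n) + 0.25 * (1 - np.eye(n))     # lazy walk on the complete graph K_3
    nu = np.array([0.6, 0.3, 0.1])                   # target law at time T
    qs, mu_T = follmer_transitions(p, x0, T, nu)

    law = np.zeros(n)
    law[x0] = 1.0
    path_entropy = 0.0
    for q in qs:                                     # chain rule (5), one term per step
        path_entropy += sum(law[x] * kl(q[x], p[x]) for x in range(n))
        law = law @ q
    print(np.allclose(law, nu))                      # X_T has law nu
    print(np.isclose(path_entropy, kl(nu, mu_T)))    # the identity (4)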

We summarize this section in the following lemma.

Lemma 2.4

Let \((\Omega,p)\) be a Markov chain. Fix \(x_{0} \in \Omega\) and let x 0 = B 0, B 1, … be the associated random walk. Let ν be a probability measure on \(\Omega\) and let T > 0 be such that for any \(y \in \Omega\) one has that \(\mathbb{P}(B_{T} = y)> 0\) . Then there exists a process x 0 = X 0, X 1, … , X T such that:

  • {X 0, … , X T } is a (time inhomogeneous) Markov chain.

  • X T is distributed with the law ν.

  • The process satisfies Eq. (4), namely

    $$\displaystyle{ D\!\left (X_{T}\,\|\,B_{T}\right ) = D\!\left (\{X_{0},X_{1},\ldots,X_{T}\}\,\|\,\{B_{0},B_{1},\ldots,B_{T}\}\right ). }$$

2.3 Finishing up the Proof

Fix an arbitrary \(x_{0} \in \Omega\) and consider some \(T \geq \mathrm{ diam}(\Omega,d)\). Let {X t } be the Föllmer drift process associated to the initial data (ν, x 0, T). Then combining Proposition 2.1 and Eq. (4), we have

$$\displaystyle{ W_{1}(\nu,\mu _{T}) \leq \mathbb{E}[d(X_{T},B_{T})] \leq \sqrt{ \frac{C\alpha } {2 - 1/\alpha }\,D\!\left (\nu \,\|\,\mu _{T}\right )}. }$$
(10)

Now let \(T \rightarrow \infty\), so that \(\mu _{T} \rightarrow \pi\), yielding the desired claim.

3 The Peres–Tetali Conjecture and Log-Sobolev Inequalities

Recall that \(p: \Omega \times \Omega \rightarrow [0,1]\) is a transition kernel on the finite state space \(\Omega\) with a unique stationary measure π. Let \(L^{2}(\Omega,\pi )\) denote the space of real-valued functions \(f: \Omega \rightarrow \mathbb{R}\) equipped with the inner product \(\langle \,f,g\rangle = \mathbb{E}_{\pi }[\,fg]\). From now on we assume that the measure π is reversible, which amounts to saying that the operator \(f \mapsto Pf\), where \(Pf(x) =\sum _{y\in \Omega }p(x,y)f(y)\), is self-adjoint in \(L^{2}(\Omega,\pi )\).

We define the associated Dirichlet form

$$\displaystyle{\mathcal{E}(\,f,g) =\langle \, f,(I - p)g\rangle = \tfrac{1} {2}\sum _{x,y\in \Omega }\pi (x)p(x,y)(\,f(x) - f(y))(g(x) - g(y))\,.}$$

Recall the definition of the entropy of a function \(f: \Omega \rightarrow \mathbb{R}_{+}\):

$$\displaystyle{\mathrm{Ent}_{\pi }(\,f) = \mathbb{E}_{\pi }\left [\,f\log \left ( \frac{f} {\mathbb{E}_{\pi }f}\right )\right ]\,.}$$

Now define the quantities

$$\displaystyle\begin{array}{rcl} \rho & =& \inf _{f:\Omega \rightarrow \mathbb{R}_{+}} \frac{\mathcal{E}(\sqrt{f},\sqrt{f})} {\mathrm{Ent}_{\pi }(\,f)} {}\\ \rho _{0}& =& \inf _{f:\Omega \rightarrow \mathbb{R}_{+}} \frac{\mathcal{E}(\,f,\log f)} {\mathrm{Ent}_{\pi }(\,f)}\,. {}\\ \end{array}$$

These numbers are called, respectively, the log-Sobolev and modified log-Sobolev constants of the chain \((\Omega,p)\). We refer to [13] for a detailed discussion of such inequalities on discrete-space Markov chains and their relation to mixing times.
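For small chains these constants can be probed numerically. The following rough sketch (illustrative chain and random test functions, not from the text; the helper names are ours) evaluates the two ratios above on a sample of positive functions f; the minima over the sample are of course only upper bounds on the true infima ρ and ρ 0.

    # Illustrative sketch, not from the original note.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 4
    p = 0.5 * np.eye(n) + 0.5 * (1 - np.eye(n)) / (n - 1)  # lazy walk on K_4
    pi = np.full(n, 1.0 / n)                               # its stationary measure

    def dirichlet(f, g):            # E(f, g) = <f, (I - p) g> in L^2(pi)
        return float(pi @ (f * ((np.eye(n) - p) @ g)))

    def ent(f):                     # Ent_pi(f)
        return float(pi @ (f * np.log(f / (pi @ f))))

    lsi_ratios, mlsi_ratios = [], []
    for _ in range(2000):
        f = rng.random(n) + 1e-3
        e = ent(f)
        if e < 1e-12:
            continue
        lsi_ratios.append(dirichlet(np.sqrt(f), np.sqrt(f)) / e)
        mlsi_ratios.append(dirichlet(f, np.log(f)) / e)
    print(min(lsi_ratios), min(mlsi_ratios))   # upper bounds on rho and rho_0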

One can understand both numbers as measuring the rate of convergence to equilibrium in appropriate senses. The modified log-Sobolev constant, in particular, can be equivalently characterized as the largest value ρ 0 such that

$$\displaystyle{ \mathrm{Ent}_{\pi }(H_{t}\,f) \leq e^{-\rho _{0}t}\mathrm{Ent}_{\pi }(\,f) }$$
(11)

for all \(f: \Omega \rightarrow \mathbb{R}_{+}\) and t > 0 (see [13, Prop. 1.7]). Here, \(H_{t}: L^{2}(\Omega,\pi ) \rightarrow L^{2}(\Omega,\pi )\) is the heat-flow operator associated to the continuous-time random walk, i.e., \(H_{t} = e^{-t(I-P)}\), where P is the operator defined by \(Pf(x) =\sum _{y\in \Omega }p(x,y)f(y)\).

The log-Sobolev constant ρ controls the hypercontractivity of the semigroup (H t ), which in turn yields a stronger notion of convergence to equilibrium; again see [13] for a precise statement. Interestingly, in the setting of diffusions, there is no essential distinction between the two notions; one should consider the following calculation only in a formal sense:

$$\displaystyle{\ \mathcal{E}(\,f,\log f) =\int \nabla f\nabla \log f =\int \frac{\vert \nabla f\vert ^{2}} {f} = 4\int \left \vert \nabla \sqrt{f}\right \vert ^{2} = 4\,\mathcal{E}\!\left (\sqrt{f},\sqrt{f}\right )\,.\ }$$

However, in the discrete-space setting, the tools of differential calculus are not present. Indeed, one has the bound ρ ≤ 2ρ 0 [13, Prop 1.10], but there is no uniform bound on ρ 0 in terms of ρ.

3.1 MLSI and Curvature

We are now in position to state an important conjecture linking curvature and the modified log-Sobolev constant; it asserts that on spaces with positive coarse Ricci curvature, the random walk should converge to equilibrium exponentially fast in the relative entropy distance.

Conjecture 3.1 (Peres–Tetali, unpublished)

Suppose \((\Omega,p)\) corresponds to lazy random walk on a finite graph and d is the graph distance. If \((\Omega,p,d)\) has coarse Ricci curvature κ > 0, then the modified log-Sobolev constant satisfies

$$\displaystyle{ \rho _{0} \geq C\kappa \,, }$$
(12)

where C > 0 is a universal constant.

A primary reason for our interest in Corollary 1.8 is that, by results of Sammer [16], Conjecture 3.1 implies Corollary 1.8. We suspect that a stronger conclusion should hold in many cases; under stronger assumptions, it should be that one can obtain a lower bound on the (non-modified) log-Sobolev constant ρ. See, for instance, the beautiful approach of Marton [12] that establishes a log-Sobolev inequality for product spaces assuming somewhat strong contraction properties of the Gibbs sampler.

However, we recall that this cannot hold under just the assumptions of Conjecture 3.1. Indeed, if G = (V, E) is the complete graph on n vertices, it is easy to see that the coarse Ricci curvature κ of the lazy random walk is 1∕2. On the other hand, one can check that the log-Sobolev constant ρ decays asymptotically like \(\frac{1} {\log n}\) (use the test function f = δ x for some fixed xV ).

3.2 An Entropic Interpolation Formulation of MLSI

We now suggest an approach to Conjecture 3.1 using an entropy-optimal drift process. While we chose to work with discrete-time chains in Sect. 2.2.2, working in continuous-time will allow us more precision in exploring Conjecture 3.1. We will use the notation introduced at the beginning of this section.

A continuous-time drift process Suppose we have some initial data ( f, x 0, T) where \(x_{0} \in \Omega\) and \(f: \Omega \rightarrow \mathbb{R}_{+}\) satisfies \(\mathbb{E}_{\pi }[\,f] = 1\). Let {B t : t ∈ [0, ∞)} denote the continuous-time random walk with jump rates p on the discrete state space \(\Omega\) starting from x 0. We let μ T be the law of B T and let ν be the probability measure defined by

$$\displaystyle{d\nu = \frac{f} {H_{T}f(x_{0})}\,d\mu _{T},}$$

where (H t ) is the semigroup associated to the jump rates p(x, y). Note that ν is indeed a probability measure as \(\int _{\Omega }f\,d\mu _{T} = H_{T}f(x_{0})\) by definition of μ T .

We now define the continuous-time Föllmer drift process associated to the data (x 0, T, f) as the time-inhomogeneous Markov chain {X t , t ≤ T} starting from x 0 and having transition rates at time t given by

$$\displaystyle{ q_{t}(x,y) = p(x,y)\frac{H_{T-t}f(y)} {H_{T-t}f(x)}\,. }$$
(13)

Informally this means that the conditional probability that the process {X t } jumps from x to y between time t and t + dt given the past is q t (x, y)dt. This should be thought of as the continuous-time analogue of the discrete Föllmer process defined by (6). We claim that again the law of the process {X t , t ≤ T} has density f(x T )∕H T f(x 0) with respect to the law of {B t , t ≤ T}. Let us give a brief justification of this claim. Define a new probability measure \(\mathbb{Q}\) by setting

$$\displaystyle{ \frac{d\mathbb{Q}} {d\mathbb{P}} = \frac{f(B_{T})} {H_{T}f(x_{0})}. }$$
(14)

We want to prove that the law of B under \(\mathbb{Q}\) coincides with the law of X under \(\mathbb{P}\). Let \((\mathcal{F}_{t})\) be the natural filtration of the process (B t ), let t ∈ [0, T) and let \(y \in \Omega\). We then have the following computation:

$$\displaystyle{\begin{array}{rcl} \mathbb{Q}(B_{t+\Delta t} = y\mid \mathcal{F}_{t})& =&\frac{\mathbb{E}^{\mathbb{P}}[\,f(B_{T})\,\mathbf{1}_{\{B_{t+\Delta t}=y\}}\mid \mathcal{F}_{t}]} {\mathbb{E}^{\mathbb{P}}[\,f(B_{T})\mid \mathcal{F}_{t}]} + o(\Delta t) \\ & =& \frac{H_{T-t}f(y)} {H_{T-t}f(B_{t})}\,\mathbb{P}(B_{t+\Delta t} = y\mid \mathcal{F}_{t}) + o(\Delta t) \\ & =& \frac{H_{T-t}f(y)} {H_{T-t}f(B_{t})}\,p(B_{t},y)\,\Delta t + o(\Delta t). \end{array} }$$

This shows that under \(\mathbb{Q}\), the process {B t , t ≤ T} is Markovian (time-inhomogeneous) with jump rates at time t given by (13). Hence the claim.

This implies in particular that X T has law ν. This also yields the following formula for the relative entropy of {X t }:

$$\displaystyle{ D\!\left (\{X_{t},\,t \leq T\}\,\|\,\{B_{t},\,t \leq T\}\right ) = \mathbb{E}\left [\log \frac{f(X_{T})} {H_{T}f(x_{0})}\right ] = D\!\left (\nu \,\|\,\mu _{T}\right ). }$$
(15)

The process {X t } starts from x 0 and has law ν at time T. Because X T has law ν and B T has law μ T , the two processes must evolve differently. One can think of the process {X t } as “spending information” in order to achieve the discrepancy between X T and B T . The amount of information spent must at least account for the difference in laws at the endpoint, i.e.,

$$\displaystyle{D\!\left (\{X_{t},\,t \leq T\}\,\|\,\{B_{t},\,t \leq T\}\right ) \geq D\!\left (X_{T}\,\|\,B_{T}\right ).}$$

As pointed out in Sect. 2.2.2, the content of (15) is that {X t } spends exactly this minimum amount.

For 0 ≤ s ≤ s′, we use the notations B [s, s′] = {B t : t ∈ [s, s′]} and X [s, s′] = {X t : t ∈ [s, s′]} for the corresponding trajectories. From the definition of \(\mathbb{Q}\) we easily get

$$\displaystyle{ \frac{d\mathbb{Q}} {d\mathbb{P}} \Big\vert _{\mathcal{F}_{t}} = \mathbb{E}\left [ \frac{f(B_{T})} {H_{T}f(x_{0})}\mid \mathcal{F}_{t}\right ] = \frac{H_{T-t}f(B_{t})} {H_{T}f(x_{0})}. }$$
(16)

As a result

$$\displaystyle{ D\!\left (X_{[0,t]}\,\|\,B_{[0,t]}\right ) = \mathbb{E}\left [\log \frac{H_{T-t}f(X_{t})} {H_{T}f(x_{0})} \right ] }$$
(17)

for all tT. Let us now define the rate of information spent at time t:

$$\displaystyle{I_{t} = \frac{d} {dt}D\!\left (X_{[0,t]}\,\|\,B_{[0,t]}\right )\,.}$$

Intuitively, the entropy-optimal process {X t } will spend progressively more information as t approaches T. Information spent earlier in the process is less valuable (as the future is still uncertain). Let us observe that a formal version of this statement for random walks on finite graphs is equivalent to Conjecture 3.1.

Conjecture 3.2

Suppose \((\Omega,p)\) corresponds to a lazy random walk on a finite graph and d is the graph distance, and that \((\Omega,p,d)\) has coarse Ricci curvature 1∕α. Given \(f: \Omega \rightarrow \mathbb{R}_{+}\) with \(\mathbb{E}_{\pi }[\,f] = 1\) and \(x_{0} \in \Omega\) , for all sufficiently large times T, it holds that if {X t : t ∈ [0, T]} is the associated continuous-time Föllmer drift process with initial data ( f, x 0, T), then

$$\displaystyle{ D\!\left (X_{T}\,\|\,B_{T}\right ) \leq C\alpha I_{T}\,, }$$
(18)

where C > 0 is a universal constant.

As \(T \rightarrow \infty\), we have \(H_{T}f(x_{0}) \rightarrow \mathbb{E}_{\pi }[\,f] = 1\) and thus

$$\displaystyle{D\!\left (X_{T}\,\|\,B_{T}\right ) \rightarrow \mathrm{ Ent}_{\pi }(\,f)}$$

Moreover, we claim that \(I_{T} \rightarrow \mathcal{E}(\,f,\log f)\) as \(T \rightarrow \infty\). Together, these show that Conjectures 3.1 and 3.2 are equivalent.

To verify the latter claim, note that from (17) and (16) we have

$$\displaystyle{\begin{array}{rcl} D\!\left (X_{[0,t]}\,\|\,B_{[0,t]}\right )& =&\mathbb{E}\left [\log \left (\frac{H_{T-t}f(X_{t})} {H_{T}f(x_{0})} \right )\right ] \\ & =&\mathbb{E}\left [\log \left (\frac{H_{T-t}f(B_{t})} {H_{T}f(x_{0})} \right )\frac{H_{T-t}f(B_{t})} {H_{T}f(x_{0})} \right ] \\ & =& \frac{1} {H_{T}f(x_{0})}\,H_{t}\left (H_{T-t}f\log H_{T-t}f\right )(x_{0}) -\log H_{T}f(x_{0}).\end{array} }$$

Differentiating at t = T, using \(\frac{d} {dt}H_{t} = \Delta H_{t}\) and \(\partial _{t}H_{T-t}f = -\Delta H_{T-t}f\), yields

$$\displaystyle{I_{T} = \frac{1} {H_{T}f(x_{0})}\,H_{T}\left (\Delta (\,f\log f) - (\Delta f)(\log f + 1)\right )(x_{0}),}$$

where \(\Delta = p - I\) denotes the generator of the semigroup (H t ). Recall that \(\delta _{x_{0}}H_{T}\) converges weakly to π, and that by stationarity \(\mathbb{E}_{\pi }\Delta g = 0\) for every function g. Thus

$$\displaystyle{\lim _{T\rightarrow \infty }I_{T} = -\mathbb{E}_{\pi }[(\Delta f)\log f].}$$

The latter equals \(\mathcal{E}(\,f,\log f)\) by reversibility, hence the claim.
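This limit is easy to check numerically. The sketch below (illustrative chain, test function and starting point, none taken from the text) evaluates I T from the displayed formula, with \(\Delta = p - I\) and \(H_{t} = e^{t\Delta }\), and compares it to \(\mathcal{E}(\,f,\log f)\) as T grows.

    # Illustrative sketch, not from the original note.
    import numpy as np
    from scipy.linalg import expm

    n = 4
    p = 0.5 * np.eye(n) + 0.5 * (1 - np.eye(n)) / (n - 1)    # lazy walk on K_4
    pi = np.full(n, 1.0 / n)                                  # stationary (uniform) measure
    Delta = p - np.eye(n)                                     # generator of (H_t)

    f = np.array([2.0, 1.0, 0.5, 0.5])                        # E_pi[f] = 1
    x0 = 0
    target = float(pi @ (f * ((np.eye(n) - p) @ np.log(f))))  # E(f, log f)
    inner = Delta @ (f * np.log(f)) - (Delta @ f) * (np.log(f) + 1.0)

    for T in (1.0, 5.0, 20.0):
        H_T = expm(T * Delta)
        I_T = (H_T @ inner)[x0] / (H_T @ f)[x0]
        print(T, I_T, target)                                 # I_T approaches target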