Introduction

1. Endowing a countable group \(G\) with a probability measure \(\mu\) rigs the group with an additional structure in a sense akin to a Riemannian metric on a smooth manifold. In both situations this additional structure gives rise to a Markov dynamics on the original state space: this is a Brownian motion in the manifold case and a random walk in the group case.

In the framework of the general theory of Markov processes, one can then define the Poisson boundary of the state space. In probabilistic language this is a measure space that describes all stochastically significant behaviour of the sample paths of the respective Markov process at infinity. In ergodic terms this is the space of ergodic components of the time shift on the path space, whereas in analytic terms this is the spectrum of the commutative Banach algebra of bounded harmonic functions.

It is then natural to ask about a complete description (identification) of the Poisson boundary in terms of the “observed” limit behaviour of Markov sample paths. This behaviour can be characterized by their analytic, geometric, algebraic, or combinatorial properties. In the particular case where no limit behaviour is observed at all, already the question about triviality of the Poisson boundary, i. e., about absence of non-constant bounded harmonic functions (the Liouville property), or, equivalently, about ergodicity of the time shift on the path space, is highly non-trivial (in the same way as the question about ergodicity of many other dynamical systems).

From a probabilistic point of view, the Liouville property is equivalent to asymptotic vanishing of the dependence of the one-dimensional distributions of the Markov process on its initial distribution as time goes to infinity. This is why for random walks on groups, this property can also be formulated in an entirely analytic language, without relying on any probabilistic interpretations. Namely, an irreducible probability measure \(\mu\) on a group \(G\) is Liouville if and only if the means on \(G\) that arise by passing to a Banach limit of the sequence of the convolution powers \(\mu^{*t}\) are left invariant (see § 1. D, as well as Remarks 18 and 53).

2. The fact that random walks on groups are homogeneous not only in time, but also in space, allows one to define the asymptotic entropy \(h(\mu)\), which makes the aforementioned question about the Liouville property entirely quantitative: the Poisson boundary \(\partial_\mu G\) of a random walk \((G,\mu)\) is trivial if and only if \(h(\mu)=0\) (for the sake of simplicity, we assume, throughout the Introduction, that the Poisson boundary is endowed with the primary harmonic measure \(\nu\) that corresponds to the initial distribution concentrated at the group identity, see § 1.B). There is an analogous criterion (based on the asymptotic entropy of conditional random walks) for the general problem of identification of the Poisson boundary as well. Nonetheless, these criteria only work under the finiteness assumption on the entropy \(H(\mu)\) of the step distribution \(\mu\).

The asymptotic entropy

$$h(\mu)=\lim \frac{H(\mu^{*t})}t$$

(where \(\mu^{*t}\) denotes the \(t\)-fold convolution of the measure \(\mu\) with itself) was introduced by A. Avez in 1972 [14] (and a bit later, independently, by A. M. Vershik), and the closely related (as became clear later) differential entropy of the boundary had been used by H. Furstenberg even earlier [29]. However, the full development of the entropy theory was achieved only on the basis of ergodic methods related to an analogy between the asymptotic entropy of random walks on groups and the entropy of dynamical systems. This program was proposed by A. M. Vershik in the 1970s (see [3, p. 60]) and implemented in his joint works with the author [6], [36], as well as independently (and using a somewhat different approach) by Y. Derriennic [21]. Subsequently, the entropy theory of random walks was extended by the author to a much broader class of “stochastically homogeneous” Markov chains (see Remark 22).

A rather unexpected consequence of the entropy theory was the appearance of the finiteness condition for the entropy of the step distribution \(\mu\) in a situation that, at first glance, is completely unrelated to entropy. The entropy of the reflected measure \(\check\mu(g)=\mu(g^{-1})\) obviously coincides with the entropy of the original measure \(\mu\), and therefore their asymptotic entropies also coincide. Thus, both measures \(\mu\) and \(\check\mu\) are either Liouville or non-Liouville simultaneously, i. e., left invariance of the corresponding limit means on the group is automatically equivalent to their right invariance. On the other hand, for measures with infinite entropy this is not necessarily the case: already in 1983, the author [7, Theorem 1.4] constructed an example of a measure \(\mu\) with infinite entropy that is “one-sided Liouville” (it is Liouville itself, while the reflected measure \(\check\mu\) is not Liouville).

3. The aim of the present paper is to exhibit the critical role of entropy finiteness in yet another situation: for random walks on products of several groups.

Let us consider the product \(\widetilde G=G\times G'\) of two (for simplicity) groups \(G\) and \(G'\), with the random walk defined by a step distribution \(\widetilde\mu\). The projections of this walk onto each coordinate are the random walks on \(G\) and \(G'\) determined by the respective marginal distributions \(\mu\) and \(\mu'\) of the measure \(\widetilde\mu\). It is natural to ask about the relationship between the Liouville properties of these three measures, or, more generally, to ask how the Poisson boundaries of these three walks are related.

The simplest way to obtain a measure with the given marginals \(\mu\) and \(\mu'\) is to take their product \(\widetilde\mu=\mu\otimes\mu'\). One can also consider a convex combination

$$\widetilde\mu=\alpha\mu\otimes\delta_{e'}+(1-\alpha)\delta_e\otimes\mu',$$

where \(e\) and \(e'\) are the identities of the respective groups. In the latter case, the marginals are not the measures \(\mu,\mu'\) themselves, but their “lazy” modifications \(\alpha\mu+(1-\alpha)\delta_e\) and \((1-\alpha)\mu'+\alpha\delta_{e'}\), which does not really change anything as the Poisson boundaries of the lazy measures are the same as those of the respective original measures \(\mu,\mu'\). In both cases the Poisson boundary \(\partial_{\widetilde\mu}\widetilde G\) is the product of the Poisson boundaries of the quotient groups \(\partial_\mu G\) and \(\partial_{\mu'}G'\) as measure spaces (i. e., the primary harmonic measure of the random walk on \(\widetilde G\) is a product measure), and, in particular, the measure \(\widetilde\mu\) is Liouville if and only if both measures \(\mu\) and \(\mu'\) are simultaneously Liouville.

These definitions are not specific to random walks and they can be applied to arbitrary Markov chains (by a formal replacement of step distributions with Markov operators). The resulting Markov chains are called, respectively, direct and Cartesian products of the original chains, and the proofs of the aforementioned properties boil down to verifying that the product of two minimal positive harmonic functions is also minimal with respect to the product operator [39, p. 146]. Conversely, the statements about the factorization of minimal positive harmonic functions of the product operator (see S. A. Molchanov [8] for direct products, and M. Picardello and W. Woess [48] for Cartesian products) are much less obvious (in particular, because they require involvement of \(\lambda\)-harmonic functions with \(\lambda\neq 1\)) and their proofs rely on the Martin theory.

In the context of arbitrary Markov chains, there are no other universal methods for obtaining a Markov chain on the product space whose marginals are prescribed chains on the factor spaces, but this is not the case for random walks on groups. Here we mean precisely the random walks described above, whose step distribution \(\widetilde\mu\) has given marginal distributions \(\mu,\mu'\). Although this setup is very natural, such walks have not been studied systematically, even though “multicomponent” Poisson boundaries did arise, for example, in the work of the author [40] for discrete subgroups in the product of several simple Lie groups, or in the work of S. Brofferio and B. Shapira [19] for groups of rational matrices. The author’s interest in this subject was sparked by a question posed by R. I. Grigorchuk about random walks on the product of several copies of the same group, which arise when considering self-similar groups, as such products have finite index in the corresponding permutational wreath products [33].

4. It is not hard to verify (Proposition 21 and Remark 22) that if the entropy of the measure \(\widetilde\mu\) is finite, then the Poisson boundary \(\partial_{\widetilde\mu}\widetilde G\) is fully described by the behaviour of the quotient walks on the factor groups, i. e., the mapping \(\partial_{\widetilde\mu}\widetilde G\to \partial_\mu G\times\partial_{\mu'} G'\) is an isomorphism of measure spaces. It should be emphasized that although the marginals of the primary harmonic measure \(\widetilde\nu\) are the respective primary harmonic measures \(\nu,\nu'\), this by no means implies that \(\widetilde\nu\) is their product. The question of describing the measure \(\widetilde\nu\) seems to be quite interesting and it remains completely open even in the simplest situations; for example, for the measure \(\widetilde\mu=\frac12(\mu\otimes\mu+\operatorname{diag}\mu)\) in the case when \(G=G'\) is a free group, \(\mu\) is the uniform distribution on the (symmetric) set of free generators and \(\operatorname{diag}\mu\) is the corresponding measure on the diagonal of the product \(G\times G\). Note that spatial homogeneity is essential in defining a random walk on \(G\times G\) with the transition measure \(\operatorname{diag}\mu\), and such a “diagonal” Markov operator cannot be defined for an arbitrary Markov chain.

In particular, if the entropy \(H(\widetilde\mu)\) of the measure \(\widetilde\mu\) is finite, then the Liouville property for this measure is equivalent to its marginals \(\mu\) and \(\mu'\) being Liouville simultaneously (Proposition 21). We show that if the entropy finiteness condition is dropped, this is no longer necessarily the case by explicitly constructing the corresponding counterexample (Theorem 54).

This construction is based on using, in a slightly modified form, the aforementioned example of a one-sided Liouville measure \(\mu\) on the wreath product \(\mathcal L=\mathbb Z\wr\mathbb Z_2\) (the one-dimensional lamplighter group), whose projection onto \(\mathbb Z\) has a non-zero mean. The idea is quite transparent and consists in taking the “semidiagonal” subgroup \(\widetilde{\mathcal L}_0\) with a common \(\mathbb Z\)-component in the product \(\widetilde{\mathcal L}=\mathcal L\times\mathcal L\), and then considering a measure \(\widetilde\mu\) on it whose marginals coincide with \(\mu\), and thus are Liouville. On the other hand, it is easy to choose \(\widetilde\mu\) in such a way that it is “almost” (but not completely) diagonal in the sense that its image on \(\mathcal L\) under the mapping \((n,\varphi,\varphi')\to (n,\varphi-\varphi')\) has a non-singleton finite support and is not Liouville, thus ensuring that the measure \(\widetilde\mu\) is not Liouville either.

5. With rare exceptions (for example, see the “trunk” criterion of A. Erschler and the author [24, Proposition 3.8]), the entropy theory (more precisely, the criterion from [40]) remains the only means of identifying a non-trivial Poisson boundary of random walks on countable groups. Very recently, the Ultima Thule of the entropy theory has been reached for two classes of groups serving as the main examples of this kind: Gromov hyperbolic groups (and their various modifications) in the work of K. Chawla, B. Forghani, J. Frisch, and G. Tiozzo [20], and wreath products in the work of J. Frisch and E. Silva [27]. Namely, it was shown that in both cases convergence to the “natural” boundary (the hyperbolic one in the first case and the space of infinite configurations in the second case) indeed fully describes the Poisson boundary without any additional conditions on the step distribution other than finiteness of its entropy. It is worth noting, though, that in the hyperbolic case, convergence to the boundary does not require any assumptions about the step distribution at all (except for the purely geometric condition of non-elementarity) and the problem of identifying the Poisson boundary, for example, for a random walk on a free group, posed by A. M. Vershik and the author 40 years ago [36, p. 485], still remains completely open (it remains open even for free semigroups in the absence of cancellations). For a description of the current state of affairs, see [20]. The situation with wreath products is somewhat more complicated, and we refrain from discussing the preceding very interesting results in this direction, referring the reader instead to the detailed review given in [27].

In light of this, it should be noted that the example we constructed is, to the best of the author’s knowledge, the first one in which the Poisson boundary of a random walk with infinite entropy (in our example described by non-trivial joint behaviour of pairs of configurations) is qualitatively different from the Poisson boundaries of random walks with finite entropy (described in our case by separate behaviour of the marginal walks).

The recent breakthrough of J. Frisch, Y. Hartman, O. Tamuz, and P. Vahidi Ferdowsi [26] led to a complete description of the groups for which both Liouville and non-Liouville measures coexist. This is precisely the class of amenable groups that are not hyper-FC-central (do not coincide with their hyper-FC-center), or, equivalently, the amenable groups with an ICC quotient group (i. e., one in which all non-trivial conjugacy classes are infinite). Soon thereafter, A. V. Alpeev [12] showed that one-sided Liouville measures exist on any group from this class. After the author had informed him of the main result of this work, A. V. Alpeev [13] proved that an example with the same properties can be constructed for the product of arbitrary groups from this class, and moreover, all measures involved in the construction can be chosen to be symmetric and irreducible.

Finally, it should be noted that it would be wrong to think that “all” measures with infinite entropy are “pathological.” There is a very general construction described in a joint work with B. Forghani [25] that allows, starting from a certain fixed step distribution, to build numerous other measures (in particular, those with infinite entropy) with the same Poisson boundary.

6. The paper is organized as follows. We begin by recalling the main definitions and results used throughout, while formulating and proving necessary auxiliary statements along the way. In the first two sections, we discuss the Poisson boundary (§ 1) and the entropy theory of random walks (§ 2). Then we talk about wreath products of groups (§ 3) and random walks on them (§ 4). Here we describe the random walks on the groups \(\mathcal L\) and \(\widetilde{\mathcal L}_0\) (mentioned in no. 4 above) and establish conditions under which they are not Liouville (Claims 42, 43 and 46). Finally, in § 5, we first establish the main technical ingredient (Lemma 47) – the equivalence of the Liouville property and infiniteness of the step distribution entropy for the considered class of measures on the group \(\mathcal L\) (which, for this class of measures, is also equivalent to the one-sided Liouville property, Theorem 52). Then, we obtain our main result: the existence of a non-Liouville measure (as noted in no. 4, it must have infinite entropy) on the product of two groups, both of whose marginals are Liouville (Theorem 54). Paying tribute to the constant interest of Anatoly Moiseevich Vershik in the history of mathematics and its contemporary impact (see [3]–[5] regarding the topics discussed here), we conclude the article with a brief historical commentary on the origin of the main concepts featured in its title: Liouville’s property and Shannon’s entropy.

1. Poisson Boundary and Liouville Property

1.A. The random walk \((G,\mu)\) on a countable group \(G\) determined by a probability measure \(\mu\) is the Markov chain with the state space \(G\) and transition probabilities

$$p(g,g')=\mu(g^{-1} g'),$$

invariant under the left action of the group \(G\) on itself. In other words, the Markov transitions

$$ g \overset{h\sim\mu}{\rightsquigarrow} g'=gh$$
(1)

of the random walk \((G,\mu)\) consist in right multiplication by a \(\mu\)-distributed random increment \(h\). The resulting Markov operator \(P=P_\mu\) acts on functions on the group \(G\) by averaging as

$$ Pf (g)=\sum_h \mu(h) f(gh),$$
(2)

and its dual operator

$$ \theta \mapsto \theta P=\theta * \mu$$
(3)

acts on measures by applying the Markov transitions (1), i. e., as the right convolution with the measure \(\mu\).
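
For readers who prefer to see the transition mechanism (1)–(3) spelled out computationally, here is a minimal Python sketch for the illustrative choice \(G=\mathbb Z\) with \(\mu\) uniform on \(\{-1,+1\}\) (a toy example not considered elsewhere in the paper); all function and variable names are ours.

```python
import random
from collections import Counter

# A minimal sketch of (1)-(3) for the illustrative choice G = Z with mu uniform
# on {-1, +1}; the group operation is addition, and the identity e is 0.
mu = {-1: 0.5, +1: 0.5}

def markov_operator(f, g):
    """(Pf)(g) = sum_h mu(h) f(gh), formula (2); here gh = g + h."""
    return sum(p * f(g + h) for h, p in mu.items())

def convolve(theta, mu):
    """theta -> theta * mu, formula (3): one step of the dual operator on measures."""
    out = Counter()
    for g, p in theta.items():
        for h, q in mu.items():
            out[g + h] += p * q
    return dict(out)

def sample_path(t, g0=0):
    """A sample path g_t = g_0 h_1 h_2 ... h_t of independent mu-distributed increments."""
    path = [g0]
    for _ in range(t):
        path.append(path[-1] + random.choices(list(mu), weights=list(mu.values()))[0])
    return path

theta = {0: 1.0}                       # initial distribution delta_e
for _ in range(3):
    theta = convolve(theta, mu)        # after the loop: theta = delta_e * mu^{*3}
print(theta)                           # {-3: 0.125, -1: 0.375, 1: 0.375, 3: 0.125}
print(markov_operator(abs, 0))         # 1.0
print(sample_path(5))                  # e.g. [0, 1, 0, -1, -2, -1]
```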

Any initial distribution \(\theta\) (not necessarily a probability one) determines the associated Markov measure \(\mathbf P_\theta\) on the space \(G^{\mathbb Z_+}\) of sample paths \(\mathbf{g}=(g_0,g_1,g_2,\dots)\), where

$$g_t=g_0 h_1 h_2 \dots h_t \quad \forall\,t\in\mathbb Z_+,$$

and \((h_t)\) is a sequence of independent \(\mu\)-distributed increments of the random walk. Thus, the one-dimensional distribution of the measure \(\mathbf P_\theta\) at time \(t\) is \(\theta P^t=\theta*\mu^{*t}\), where \(\mu^{*t}\) denotes the \(t\)-fold convolution of the measure \(\mu\) with itself. Since the space of sample paths is endowed with a natural coordinate-wise action of the group \(G\)

$$ g\mathbf{g}=(gg_0, gg_1, gg_2, \dots) \quad \forall\,g\in G, \quad \mathbf{g}=(g_0,g_1,g_2,\dots)\in G^{\mathbb Z_+},$$
(4)

and the mapping \(\theta\mapsto\mathbf P_\theta\) is equivariant due to the spatial homogeneity of the transition probabilities, the measure \(\mathbf P_\theta\) is the convolution of the initial distribution \(\theta\) with the measure \(\mathbf P=\mathbf P_{\delta_e}\), corresponding to the initial distribution concentrated at the identity \(e\) of the group \(G\), namely,

$$\mathbf P_\theta=\theta * \mathbf P=\sum_g \mu(g) g\mathbf P,$$

where \(g\mathbf P=\mathbf P_g=\mathbf P_{\delta_g}\) is the measure on the space of sample paths issued from a point \(g\in G\). The counting measure \(\#\) on the group \(G\) is preserved by the transition operator (3) of the random walk, and therefore the corresponding \(\sigma\)-finite measure \(\mathbf P_\#\) on the space of sample paths is invariant under the shift

$$ T\colon (g_0,g_1,g_2,\dots) \mapsto (g_1,g_2,g_3,\dots).$$
(5)

1.B. Two sample paths \(\mathbf{g}=(g_0,g_1,\dots)\) and \(\mathbf{g}'=(g'_0,g'_1,\dots)\) are called (asynchronously) asymptotically equivalent if they coincide from some moment onwards (specific to each sample path), i. e., if there exist integers \(t,t'\ge 0\) such that \(g_{t+i}=g'_{t'+i}\) for all \(i\ge 0\). In other words, this is the orbital equivalence relation of the shift (5): sample paths \(\mathbf{g}\) and \(\mathbf{g}'\) are equivalent if \(T^t\mathbf{g}=T^{t'}\mathbf{g}'\) for some \(t,t'\). The measurable hull \(\mathfrak E\) of this equivalence relation, i. e., the \(\sigma\)–algebra of all measurable subsets of the space \((G^{\mathbb Z_+},\mathbf P_\#)\) that are unions of equivalence classes (\(\mathbf P_\#-\operatorname{mod}0\)), is called the exit \(\sigma\)-algebra of the random walk \((G,\mu)\). Equivalently, \(\mathfrak E\) is the completion of the \(\sigma\)-algebra of \(T\)-invariant subsets of the space of sample paths \(G^{{\mathbb Z}_+}\) with respect to the measure \(\mathbf P_\#\). The arising quotient of the space of sample paths \((G^{\mathbb Z_+},\mathbf P_\#)\), i. e., the space of ergodic components of the shift \(T\) with respect to the invariant measure \(\mathbf P_\#\), is called the Poisson boundary \(\partial_\mu G\) of the random walk \((G,\mu)\), and we denote by

$$\operatorname{bnd}\colon G^{\mathbb Z_+} \to \partial_\mu G$$

the corresponding canonical projection. The harmonic measure of an initial distribution \(\theta\) is the image

$$\nu_\theta=\operatorname{bnd}\mathbf P_\theta$$

(which is well defined because the corresponding measure \(\mathbf P_\theta\) on the space of sample paths is absolutely continuous with respect to \(\mathbf P_\#\)). We denote by \(\boldsymbol{\nu}\) the harmonic measure class on \(\partial_\mu G\), i. e., the common class of the harmonic measures \(\nu_\theta\) corresponding to the initial distributions \(\theta\) with full support \(\operatorname{supp}\theta=G\). Note that the image \(\operatorname{bnd}\mathbf P_\#\) of the measure \(\mathbf P_\#\) itself is trivially infinite, which is why we have to introduce the harmonic measure class (alternatively, one could fix a reference measure from this class). The harmonic measure \(\nu_\theta\) determined by any initial probability distribution \(\theta\) is, of course, a probability one, and it is dominated by the class \(\boldsymbol{\nu}\), i. e., it is absolutely continuous with respect to \(\boldsymbol{\nu}\).

Since the action (4) of the group \(G\) on the space of sample paths \(G^{\mathbb Z_+}\) commutes with the shift (5), it descends to the Poisson boundary, and the harmonic measure class \(\boldsymbol{\nu}\) is quasi-invariant under this action. We denote by

$$ \nu=\operatorname{bnd}\mathbf P$$
(6)

the primary harmonic measure determined by the single-point initial distribution concentrated at the identity of the group. Then

$$\nu_g=\operatorname{bnd}\mathbf P_g=g\nu$$

for any initial point \(g\in G\), and \(\nu_\theta=\theta*\nu\) for any initial distribution \(\theta\). The primary harmonic measure \(\nu\) is \(\mu\)-stationary in the sense that

$$ \nu=\operatorname{bnd} \mathbf P=\operatorname{bnd} (T\mathbf P)=\operatorname{bnd} \mathbf P_\mu=\sum_g \mu(g) g\nu=\mu*\nu.$$
(7)

As we have already noted, the measure \(\nu\) is dominated by the harmonic measure class \(\boldsymbol{\nu}\). If the step distribution \(\mu\) is non-degenerate in the sense that the semigroup \(\operatorname{sgr}\mu\), generated by its support \(\operatorname{supp}\mu\), coincides with the whole group \(G\), then, as follows from (7), the measure \(\nu\) is quasi-invariant and belongs to the class \(\boldsymbol{\nu}\). In general, even if the measure \(\mu\) is irreducible (the group \(\operatorname{gr}\mu\) generated by its support coincides with \(G\)), this domination can be strict (which leads to non-trivial questions, see the discussion in the paper by A. Erschler and the author [24, § 5.D]). It is worth adding that the irreducibility condition (unlike non-degeneracy) is not restrictive as it can always be satisfied by passing to the smaller group \(\operatorname{gr}\mu\).

The image of the random walk \((G,\mu)\) under a group epimorphism \(\pi\colon G\to \overline G\) is the random walk on the quotient group \(\overline G\) with the step distribution \(\overline\mu=\pi(\mu)\), and therefore the Poisson boundary \(\partial_{\overline\mu} \overline G\) is a quotient of the Poisson boundary \(\partial_\mu G\) (more precisely, the space of ergodic components of the action of the normal subgroup \(\ker\pi\)). In particular, if the quotient measure \(\overline\mu\) is not Liouville, then the original measure \(\mu\) is not Liouville either.

1.C. A function \(f\) on a group \(G\) is called \(\mu\)-harmonic if \(Pf=f\) for the Markov operator \(P\) (2). Because of the \(\mu\)-stationarity of the primary harmonic measure \(\nu\), any function \(\widehat f\in L^\infty(\partial_\mu G,\boldsymbol{\nu})\) gives rise to the \(\mu\)-harmonic function

$$ f(g)=\langle \widehat f,g\nu\rangle.$$
(8)

Furthermore, the Poisson formula (8) establishes an isometric isomorphism between the space \(L^\infty(\partial_\mu G,\boldsymbol{\nu})\) and the Banach space \(H^\infty(G,\mu)\) of bounded \(\mu\)-harmonic functions on \(G\) endowed with the \(\sup\)-norm. In this sense, the correspondence (8) is an analogue of the classical Poisson formula for harmonic functions on the unit disk of the complex plane. In particular, the Poisson boundary \(\partial_\mu G\) is trivial (i. e., is a singleton) if and only if there are no non-constant bounded \(\mu\)-harmonic functions on the group \(G\) (the Liouville property).

We call a measure \(\mu\) on a group \(G\) Liouville if the primary harmonic measure \(\nu\) is a delta measure, which is equivalent to the absence of non-constant bounded \(\mu\)-harmonic functions on the semigroup \(\operatorname{sgr}\mu\), i. e., on the set of points attainable by the random walk issued from the group identity. It is clear that the same is then true for any initial point. In probabilistic terms (see the Introduction), this means that there is no non-trivial stochastically significant behaviour of the random walk at infinity for an arbitrary one-point initial distribution. The convenience of this definition of the Liouville property is that it allows one to drop the irreducibility assumption on the measure \(\mu\). It is easy to see (e.g., using the Poisson formula) that the Liouville property for the measure \(\mu\) implies the absence of non-constant bounded \(\mu\)-harmonic functions on the group \(\operatorname{gr}\mu\) as well, and therefore triviality of the Poisson boundary \(\partial_\mu G\) is equivalent to the combination of the Liouville property for the measure \(\mu\) with its irreducibility. In the general case, the Poisson boundary of a Liouville measure \(\mu\) is just the homogeneous space of the left cosets of the subgroup \(\operatorname{gr}\mu\).

1.D. In more analytical terms, the Liouville property of the measure \(\mu\) is equivalent to strong convergence of the Cesàro averages

$$ \sigma_t=\frac1t (\mu+\mu^{*2}+\dots+\mu^{*t} )$$
(9)

of convolutions of the measure \(\mu\) to left invariance on the group \(\operatorname{gr}\mu\), i. e., to the left asymptotic invariance of the sequence \(\sigma_t\):

$$ \|g\sigma_t-\sigma_t \| \to 0 \quad \forall\,g\in\operatorname{gr}\mu$$
(10)

(here \(\|\cdot\|\) denotes the total variation, or in other words, the \(\ell^1\)-norm). In particular, if the measure \(\mu\) is Liouville, then the group \(\operatorname{gr}\mu\) is amenable. Instead of the Cesàro averages, one can also consider the measures obtained by averaging the convolutions \(\mu^{*t}\) with the help of any other asymptotically invariant sequence of measures on \(\mathbb Z_+\), for example, the binomial distributions \(\bigl(\frac12(\delta_0+\delta_1)\bigr)^{*t}\), i. e., the sequence of one-dimensional distributions \(\mathring{\mu}^{*t}\) of the “lazy” random walk with the transition measure \(\mathring\mu=\tfrac12(\delta_e+\mu)\), which is the half-sum of the initial measure \(\mu\) and the delta measure at the identity element (see [39, Theorem 2.3]). By passing from asymptotically invariant sequences of measures to invariant means, one can then say that the Liouville property of the measure \(\mu\) is equivalent to the fact that some (\(\equiv\) any) Banach limit of the sequence of its convolutions is a left-invariant mean on the group \(\operatorname{gr}\mu\).
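
As a numerical illustration of the asymptotic invariance (10), the following sketch computes the Cesàro averages (9) for the simple random walk on \(\mathbb Z\) (an illustrative Liouville example, not one of the measures studied below) and checks that the total variation \(\|g\sigma_t-\sigma_t\|\) decreases as \(t\) grows.

```python
from collections import Counter

# Cesaro averages (9) for the simple random walk on Z (an illustrative Liouville
# example): the total variation ||g sigma_t - sigma_t|| in (10) decreases with t.
mu = {-1: 0.5, +1: 0.5}

def convolve(a, b):
    out = Counter()
    for g, p in a.items():
        for h, q in b.items():
            out[g + h] += p * q
    return dict(out)

def cesaro(t):
    """sigma_t = (mu + mu^{*2} + ... + mu^{*t}) / t."""
    power, acc = {0: 1.0}, Counter()
    for _ in range(t):
        power = convolve(power, mu)
        for g, p in power.items():
            acc[g] += p / t
    return dict(acc)

def tv_shift(sigma, g):
    """||g sigma - sigma||: total variation between sigma and its left translate."""
    shifted = {g + x: p for x, p in sigma.items()}
    return sum(abs(shifted.get(x, 0.0) - sigma.get(x, 0.0)) for x in set(shifted) | set(sigma))

for t in (10, 100, 1000):
    print(t, round(tv_shift(cesaro(t), 1), 4))   # the norm tends to 0 as t grows
```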

Remark 11.

It would be interesting to understand the structure of the set of means on a given group \(G\) arising as the Banach limits of the sequences of convolutions \(\mu^{*t}\) of a certain irreducible measure \(\mu\). This question is of interest even in the situation when the measure \(\mu\) is not Liouville (in particular, when the group \(G\) is non-amenable), see the work of A. Fisher and the author [42]. In the latter case, the corresponding means are not invariant but only \(\mu\)-stationary (for this class, see the work of H. Furstenberg and E. Glasner [30]).

In the opposite direction, the existence of an irreducible Liouville measure on any amenable group was established by A. M. Vershik and the author [6], [36] and independently by J. Rosenblatt [51]. On the other hand, the characterization of the groups on which every measure is Liouville (these are the groups that coincide with their hyper-FC-center) was recently completed by J. Frisch, Y. Hartman, O. Tamuz, and P. Vahidi Ferdowsi [26], who proved the existence of non-Liouville non-degenerate measures on all groups without this property (for further results in this direction see the work of A. Erschler and the author [24]).

1.E. Let us note for further reference another property of Liouville measures, directly derived from their characterization in terms of asymptotic invariance (10).

Claim 12.

The Liouville property for the product measure \(\mu\otimes\mu'\) on the product group \(G \times G'\) is equivalent to both factor measures \(\mu\) and \(\mu'\) being Liouville simultaneously.

Remark 13.

This statement is a special case of the fact that for the product \((G\times G',\mu\otimes\mu')\) of the random walks \((G,\mu)\) and \((G',\mu')\), the primary harmonic measure is the product \(\nu\otimes\nu'\) of the primary harmonic measures of each multiplier, which in turn follows from the general result that the tail boundary of the product of two Markov chains is the product of their tail boundaries. Regarding the Poisson boundaries (which are endowed with the harmonic measure classes determined by initial distributions with full support), in general,

$$\partial_{\mu\otimes\mu'} (G\times G') \cong \partial_\mu G\times\partial_{\mu'}G'$$

only if all measures \(\mu\), \(\mu'\), \(\mu\otimes\mu'\) are non-degenerate. A detailed discussion of the relationship between the Poisson boundary and the tail boundary for general Markov chains and their triviality can be found in [39]; their coincidence with respect to a single point initial distribution for random walks on groups is a cornerstone of the entropy theory of random walks (see § 2).

For further information on the Poisson boundary of random walks on groups and of general Markov chains with discrete state spaces, we refer to the papers by A. M. Vershik and the author [36], [39], [40] and the literature cited therein, as well as to more recent surveys by A. Furman [28], A. Erschler [22] and T. Zheng [56].

2. Entropy of Random Walks

2.A. Let us now consider the situation where the step distribution \(\mu\) of a random walk on the group \(G\) has finite entropy

$$H(\mu)=- \sum_g \mu(g)\log\mu(g).$$

In this case, the entropies of convolutions of the measure \(\mu\) with itself are also finite and their sequence is subadditive. The resulting limit

$$h(\mu)=\lim_{t\to\infty} \frac{H(\mu^{*t})}{t}$$

is called the asymptotic entropy of the measure \(\mu\).
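
The following numerical sketch contrasts the two regimes by computing the ratios \(H(\mu^{*t})/t\) exactly for two illustrative walks that are not otherwise used in this paper: the simple random walk on \(\mathbb Z\) (Liouville, so the ratio tends to \(0\)) and the simple random walk on the free group \(F_2\) (non-Liouville, so the ratio stays bounded away from \(0\)).

```python
import math
from collections import Counter

# Exact values of H(mu^{*t})/t for two illustrative walks: the simple random walk
# on Z (Liouville, the ratio tends to 0) and the simple random walk on the free
# group F_2 (non-Liouville, the ratio stays bounded away from 0).

def entropy(dist):
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def convolve(a, b, multiply):
    out = Counter()
    for g, p in a.items():
        for h, q in b.items():
            out[multiply(g, h)] += p * q
    return dict(out)

def mult_free(u, v):
    """Multiplication of reduced words in F_2 = <a, b>; capital letters are inverses."""
    inverse = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}
    word = list(u)
    for c in v:
        if word and word[-1] == inverse[c]:
            word.pop()
        else:
            word.append(c)
    return ''.join(word)

examples = [('Z  ', {-1: 0.5, +1: 0.5}, lambda g, h: g + h, 0),
            ('F_2', {g: 0.25 for g in 'aAbB'}, mult_free, '')]
for name, mu, multiply, identity in examples:
    dist = {identity: 1.0}
    for t in range(1, 11):
        dist = convolve(dist, mu, multiply)
        if t in (5, 10):
            print(name, t, round(entropy(dist) / t, 3))
```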

Theorem 14 (A. M. Vershik, V. A. Kaimanovich [6], [36], Y. Derriennic [21]).

A measure \(\mu\) with finite entropy \(H(\mu)\) is Liouville if and only if \(h(\mu)=0\).

2.B. Let us recall that the opposite (reflected, reversed) measure \(\check\mu\) is the image of a measure \(\mu\) on a group \(G\) under the operation of group inversion:

$$ \check\mu(g)=\mu(g^{-1}) \quad \forall\,g\in G.$$
(15)

The random walk \((G,\check\mu)\) is the reversal of the original random walk with respect to the invariant counting measure \(\#\). Obviously, the entropies \(H(\mu)\) and \(H(\check\mu)\) of the original and of the reversed measures always coincide. Furthermore, since the inversion \(\mu \mapsto \check\mu\) is an anti-automorphism of the group \(\ell^1\)-algebra, the entropies of the respective convolutions are also the same, and therefore, the asymptotic entropies \(h(\mu)\) and \(h(\check\mu)\) coincide as well. Thus, Theorem 14 leads to a rather unexpected property.

Proposition 16 (see [7], [36]).

If \(H(\mu)<\infty\), then the measure \(\mu\) and the reflected measure \(\check\mu\) are either both Liouville or both non-Liouville.

In light of the discussion in § 1, the Liouville property of the reflected measure \(\check\mu\) is equivalent to the left asymptotic invariance of the Cesàro averages of the convolutions \(\check\mu^{*t}\) with respect to the group \(\operatorname{gr}\check\mu=\operatorname{gr}\mu\), or equivalently, to the right asymptotic invariance of the Cesàro averages \(\sigma_t\) (9) of the convolutions of the original measure \(\mu\). Thus, we have the following.

Corollary 17 (see [7], [36]).

If \(H(\mu)<\infty\), then the following properties of asymptotic invariance of the Cesàro averages of the convolution powers of the measure \(\mu\) with respect to the action of the group \(\operatorname{gr}\mu\) are equivalent:

  1. (i)

    left asymptotic invariance;

  2. (ii)

    right asymptotic invariance;

  3. (iii)

    bilateral (i. e., both left and right) asymptotic invariance.

Remark 18.

This statement can also be reformulated directly in terms of the means on the group \(\operatorname{gr}\mu\), obtained by applying Banach limits to the sequence of convolutions \(\mu^{*t}\) (see § 1.D): if \(H(\mu)<\infty\), then any such mean is either invariant on both sides or not invariant on either side.

2.C. Another interesting consequence of the entropy theory is related to random walks on group products (cf. Claim 12). Let \(\widetilde\mu\) be a probability measure on the product \(\widetilde G=G\times G'\) of two countable groups (for simplicity, we consider the case of two multipliers only). Denote by \(\mu\) and \(\mu'\) its marginal distributions, i. e., the results of applying to the measure \(\widetilde\mu\) the coordinate projections onto the groups \(G\) and \(G'\), respectively. Then, as is well known (for example, see V. A. Rokhlin’s article [10, § 4.3 and § 4.8]),

$$ H(\mu), H(\mu') \le H(\widetilde\mu) \le H(\mu)+H(\mu'),$$
(19)

and the same inequalities hold for the corresponding convolution powers as well. Consequently,

$$ h(G,\mu), h(G',\mu') \le h(\widetilde G,\widetilde\mu) \le h(G,\mu)+h(G',\mu').$$
(20)

By the entropy criterion from Theorem 14, we then have the following result.

Proposition 21.

If a probability measure \(\widetilde\mu\) on the product \(\widetilde G=G\times G'\) of two groups \(G\) and \(G'\) has finite entropy \(H(\widetilde\mu)\), then it is Liouville if and only if both its marginal distributions are Liouville.
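
As a quick numerical sanity check of the entropy inequalities (19) behind this proposition, here is a minimal sketch for a toy joint distribution (an arbitrary illustrative choice; the group structure plays no role in the inequalities themselves).

```python
import math
from collections import Counter

# A numerical check of (19) for a toy joint distribution on a two-point set
# times itself.

def H(dist):
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

joint = {('e', 'e'): 0.4, ('e', 'x'): 0.1, ('x', 'e'): 0.1, ('x', 'x'): 0.4}
marginal1, marginal2 = Counter(), Counter()
for (g, g_prime), p in joint.items():
    marginal1[g] += p
    marginal2[g_prime] += p

print(H(marginal1), H(marginal2), H(joint), H(marginal1) + H(marginal2))
# max(H(marginal1), H(marginal2)) <= H(joint) <= H(marginal1) + H(marginal2),
# with equality on the right if and only if joint = marginal1 (x) marginal2.
```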

Remark 22.

This proposition (essentially with the same proof) holds in all other situations where the entropy theory is applicable, namely when there is a natural probability measure preserving transformation on the corresponding path space. This is the case, for example, for random walks in random environment [37], on equivalence relations [43], or in the most general setting of invariant Markov chains with a finite stationary measure on groupoids with countable orbits [41]. By considering conditional random walks or Poisson extensions (bundles) of the corresponding groupoids [41, § 5.A], it follows that, assuming finiteness of the entropy of the measure \(\widetilde\mu\), the Poisson boundary \(\partial_{\widetilde\mu}\widetilde G\) is fully described by the boundary behaviour of the two quotient walks \((G,\mu)\) and \((G',\mu')\). The latter result can also be obtained by a direct application of the author’s entropy criterion for identification of the Poisson boundary [40], as was done by A. Erschler and J. Frisch in [23, Claim 4.1].

Remark 23.

It is well known (for example, see [10, § 4.8]) that the right-hand side inequality in (19) holds as an equality if and only if the measure \(\widetilde\mu\) is the product \(\mu\otimes\mu'\) of its marginals. It is natural to ask the same question about the additivity of the asymptotic entropy, i. e., about the conditions under which the right-hand side inequality in (20) becomes an equality:

$$h(\widetilde G,\widetilde\mu)=h(G,\mu)+h(G',\mu').$$

It is reasonable to assume that this property should be related to the possibility of decomposing the primary harmonic measure of the random walk \((\widetilde G,\widetilde\mu)\) (or, in a weaker form, its measure class) into the product of the primary harmonic measures of the quotient random walks \((G,\mu)\) and \((G',\mu')\) (respectively, of their measure classes).

Our goal is to present examples showing that Propositions 16 and 21 are no longer valid when the entropy finiteness assumption is dropped.

3. Wreath Products, Dynamical Configurations, Etc.

3.A. Let us denote by

$$\Phi=\bigoplus_{-\infty}^\infty \mathbb Z_2=\operatorname{fun}(\mathbb Z,\mathbb Z_2)$$

the countable direct sum of copies of the two-element group \(\mathbb Z_2=\{0,1\}\) (addition \(\operatorname{mod} 2\)), indexed by integers, i. e., the group of functions (configurations) \(\varphi\colon\mathbb Z\to\mathbb Z_2\) with finite support

$$\operatorname{supp}\varphi=\varphi^{-1}(1)$$

with respect to the pointwise addition operation. The group \(\Phi\) is generated by the one-point configurations (“\(\delta\)-functions”) \(\varepsilon_n\), concentrated at the points \(n\in\mathbb Z\), and its identity is the empty configuration \(\vartheta\), i. e., the function identically equal to zero. The shifts

$$ (T^n\varphi)(z)=\varphi(z-n), \qquad n\in\mathbb Z,$$
(24)

are automorphisms of \(\Phi\), and

$$T^n\varepsilon_m=\varepsilon_{n+m} \quad\forall\,n,m\in\mathbb Z.$$

Of course, the support of the configuration \(T^n\varphi\) is the shift of the support of \(\varphi\):

$$ \operatorname{supp} T^n \varphi=n+\operatorname{supp}\varphi \quad\forall\, n\in\mathbb Z, \quad \varphi\in\Phi.$$
(25)

The resulting semidirect product

$$ \mathcal L=\mathbb Z \rightthreetimes \Phi=\mathbb Z \wr \mathbb Z_2$$
(26)

is called the direct (restricted) wreath product of the active group \(\mathbb Z\) with the passive group \(\mathbb Z_2\) (in contrast to the group-theoretic custom, in our notation the active group stands on the left). The group operation in \(\mathcal L\) is

$$(n_1,\varphi_1)(n_2,\varphi_2)=(n_1+n_2, \varphi_1+T^{n_1}\varphi_2).$$

We will assume that the groups \(\mathbb Z\) and \(\Phi\) are embedded into \(\mathcal L\) by means of the mappings

$$n\mapsto (n,\vartheta), \qquad \varphi\mapsto (0,\varphi).$$

Thus, the representation \((n,\varphi)\) of elements of the group \(\mathcal L\) determines its unique decomposition into the product \(\Phi\mathbb Z\), the subgroup \(\Phi\) is a normal subgroup of \(\mathcal L\), and the resulting quotient homomorphism takes the form

$$\mathcal L \to \mathbb Z\cong\mathcal L/\Phi, \qquad (n,\varphi) \mapsto n.$$

It is also worth noting that the group \(\mathcal L\) is generated by the generator of the group \(\mathbb Z\) and the singleton configuration at \(0\)

$$ \varepsilon=\varepsilon_0\in\Phi.$$
(27)

Remark 28.

The history of wreath products dates back almost a hundred years (see the survey of C. Wells [55, § 16]), and the modern terminology and notation originate from P. Hall [34, § 1.4]. As for functional and stochastic analysis on groups, the idea of using wreath products was first suggested by A. M. Vershik, who considered in [2] the growth of Følner sets for the group \(\mathbb Z\wr\mathbb Z\). The family of wreath products \(\mathbb Z^d\wr\mathbb Z_2\) was then used by A. M. Vershik and the author [6], [7], [36] to construct a number of new examples in the theory of random walks on groups. Although the “lamp” interpretation of wreath products with the passive group \(\mathbb Z_2\) was on numerous occasions mentioned by A. M. Vershik at that time, the more formal name “group of dynamical configurations” was chosen in [36]. Starting from the 1990s, the concise and expressive term lamplighter groups has been used in the English language literature (apparently, for the first time by R. Lyons, R. Pemantle, and Y. Peres [45]): the components \(n\in\mathbb Z\) and \(\varphi\in\Phi\) of the element \((n,\varphi)\) of the group \(\mathcal L\) describe, respectively, the position of the lamplighter and the configuration of the lit lamps on the integer line \(\mathbb Z\). The Russian terminology has not stabilized; among the proposed options are лампочная группа, группа мигающих лампочек and группа фонарщика (literally, “lamp group”, “group of blinking lamps” and “lamplighter group”; cf. the somewhat cumbersome French term groupes de l’allumeur de réverbères).

3.B. Let us denote by

$$\widetilde{\mathcal L}=\mathcal L\times\mathcal L$$

the product of the group \(\mathcal L\) by itself. Below we will need the “semi-diagonal” subgroup of \(\widetilde{\mathcal L}\) with a common \(\mathbb Z\)–component, i. e., the semidirect product

$$ \widetilde{\mathcal L}_0=\mathbb Z \rightthreetimes (\Phi\times\Phi),$$
(29)

determined by the diagonal action of the group \(\mathbb Z\) on \(\Phi\times\Phi\) by shifts (24). The elements of the group \(\widetilde{\mathcal L}_0\) are triples \((n,\varphi,\varphi')\), and the group operation takes the form

$$(n_1,\varphi_1,\varphi'_1)(n_2,\varphi_2,\varphi'_2)=(n_1+n_2, \varphi_1+T^{n_1}\varphi_2, \varphi'_1+T^{n_1}\varphi'_2).$$

In addition to the natural “coordinate” homomorphisms

$$\pi\colon \widetilde{\mathcal L}_0 \to \mathcal L, \qquad \pi(n,\varphi,\varphi')=(n,\varphi),$$
(30)
$$\pi'\colon \widetilde{\mathcal L}_0 \to \mathcal L, \qquad \pi'(n,\varphi,\varphi')=(n,\varphi'),$$
(31)

there is also a “combined” or “difference” homomorphism

$$ \overline\pi\colon \widetilde{\mathcal L}_0 \to \mathcal L, \qquad \overline\pi(n,\varphi,\varphi')=(n,\varphi+\varphi')$$
(32)

(we recall that all elements of the group \(\Phi\) have order 2, so that \(\varphi+\varphi'=\varphi-\varphi'\) for any two configurations \(\varphi,\varphi'\in\Phi\)).

4. Random Walks on the Group \(\mathcal L\)

4.A. Let us denote by

$$\Phi_n^m=\langle \varepsilon_i \rangle_{i=n}^m$$

the finite subgroup of the group \(\Phi\) that consists of those configurations whose support is contained in the integer interval \([n\,..\,m]\). We define the sequence

$$\begin{aligned} K_n &=\Phi_0^n \setminus \Phi_0^{n-1} =\varepsilon_n+\Phi_0^{n-1} \\ &=\{\varphi\in\Phi\colon \operatorname{supp}\varphi\subset [0\,..\,n]\;\text{and}\; \varphi(n)=1\}, \qquad n\ge 1, \end{aligned}$$
(33)

of pairwise disjoint finite subsets of \(\Phi\), and note that

$$ |\Phi_0^n|=2^{n+1}, \qquad |K_n|=2^n.$$
(34)

Let us denote by

$$ \varkappa_n=\operatorname{Unif}(K_n)$$
(35)

the corresponding uniform probability measures on the sets \(K_n\subset\Phi\). Now, let us fix a probability measure

$$ \alpha=(\alpha_n)_{n\ge 1}$$
(36)

on the set of positive integers; it gives rise to the probability measure

$$ \lambda=\lambda_\alpha=\sum_{n=1}^\infty \alpha_n \varkappa_n$$
(37)

on \(\Phi\), which is the weighted average of the measures \(\varkappa_n\) with weights \(\alpha_n\). Finally, we define the probability measure

$$ \mu=\mu_\alpha=\delta_{-1}\otimes\lambda$$
(38)

on the group \(\mathcal L\).
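
The following Python sketch spells out the measures (33)–(38); a configuration is encoded by its (finite) support, and the weight sequence \(\alpha\) used here is merely an illustrative choice, not the one needed for the main construction.

```python
import random

# The measures (33)-(38): a configuration phi in Phi is encoded by the frozenset
# supp(phi) of lit positions; alpha here has truncated geometric weights (an
# illustrative choice only).

def K(n):
    """K_n: configurations with support in [0..n] and phi(n) = 1; |K_n| = 2^n."""
    subsets = [frozenset()]
    for i in range(n):                              # enumerate all subsets of [0..n-1]
        subsets = subsets + [s | {i} for s in subsets]
    return [s | {n} for s in subsets]

def sample_lambda(alpha):
    """One sample from lambda_alpha = sum_n alpha_n * Unif(K_n), see (35)-(37)."""
    n = random.choices(list(alpha), weights=list(alpha.values()))[0]
    return random.choice(K(n))

N = 8
alpha = {n: 2.0 ** -n for n in range(1, N)}
alpha[N] = 2.0 ** -(N - 1)                          # so that the weights sum to 1

phi = sample_lambda(alpha)
increment = (-1, phi)                               # a mu-distributed step, mu = delta_{-1} (x) lambda
print(len(K(3)), sorted(phi), increment[0])         # 8, the lit sites, -1
```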

By definition of the measure \(\mu\), the sample paths \(\mathbf{g}=(g_t)\) of the random walk \((\mathcal L,\mu)\) issued from the group identity have the form

$$ g_t=h_1h_2\dots h_t=(-1,\varphi_1)(-1,\varphi_2)\dots (-1,\varphi_t)=(-t,\psi_t),$$
(39)

where \(h_t=(-1,\varphi_t)\) are independent \(\mu\)-distributed increments of the random walk, i. e., \((\varphi_t)\) is a sequence of independent \(\lambda\)-distributed \(\Phi\)-valued random variables, and

$$ \psi_t=\varphi_1+T^{-1} \varphi_2+\dots+T^{-(t-1)} \varphi_t.$$
(40)

In other words, the movement of the lamplighter himself (i. e., of the \(\mathbb Z\)-component of the random walk) is deterministic, and at time \(t\) his position is \(-t\), while the configuration of the lamps (the \(\Phi\)-component \(\psi_t\) of the random walk) is random. Namely, when passing from time \(t\) to time \(t+1\), the lamplighter samples a random configuration \(\varphi_{t+1}\) (or equivalently, a random subset \(\operatorname{supp}\varphi_{t+1}\subset\mathbb Z\)) according to the distribution \(\lambda\), and then changes the states of the lamps around him at the points whose coordinates relative to the lamplighter’s position lie in \(\operatorname{supp}\varphi_{t+1}\) (i. e., at the points whose absolute positions are described by the shift \(T^{-t}\varphi_{t+1}\)), so that

$$ \psi_{t+1}=\psi_t+T^{-t} \varphi_{t+1},$$
(41)

after which the lamplighter himself moves by \(-1\) and transitions from point \(-t\) to point \(-t-1\).
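
To make the dynamics (39)–(41) concrete, here is a short simulation sketch; the finitely supported \(\alpha\) below is an illustrative choice, for which, as in Claim 42 below, the state of every fixed lamp eventually freezes.

```python
import random

# The dynamics (39)-(41): the lamplighter moves deterministically to -t, while the
# configuration psi_t accumulates shifted increments. Since alpha below is finitely
# supported, the increment at time t only touches sites in [-t .. -t+3], so the
# state of every fixed lamp eventually stabilizes (cf. Claim 42).

alpha = {1: 0.5, 2: 0.3, 3: 0.2}

def sample_K(n):
    """A uniform element of K_n: a random subset of [0..n-1] together with the point n."""
    return {i for i in range(n) if random.random() < 0.5} | {n}

def walk(T):
    psi = set()                                       # supp(psi_t), initially empty
    for t in range(T):
        n = random.choices(list(alpha), weights=list(alpha.values()))[0]
        for i in sample_K(n):                         # phi_{t+1}, distributed as lambda
            psi ^= {i - t}                            # formula (41): psi += T^{-t} phi_{t+1} (mod 2)
    return psi                                        # supp(psi_T); the lamplighter is at -T

random.seed(0)
print(sorted(x for x in walk(50) if x > -40))         # this part of psi_50 can no longer change
```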

4.B. Now let us look at the Liouville property for the measures described above.

Claim 42.

If the measure \(\alpha\) (36) has finite support, then the corresponding measure \(\mu\) (38) on the group \(\mathcal L\) is non-Liouville.

Proof.

Since the supports of all configurations \(\varphi_t\) in formula (40) are contained in the finite interval \([0\,..\,M]\) for some integer \(M>0\), one can apply to the random walk \((\mathcal L,\mu)\) an argument stemming from the work of A. M. Vershik and the author [6], [7], [36] (it connects the transience of the projection of the random walk onto the active group of the wreath product with the non-triviality of the Poisson boundary). Indeed, according to (25), in this case,

$$\operatorname{supp} T^{-t} \varphi_{t+1} \subset [-t\,..\,-t+M] \quad\forall\, t\ge 0,$$

and therefore the values \(\psi_t(z)\) of the configurations \(\psi_t\) (40) at any point \(z\in\mathbb Z\) stabilize as \(t\to\infty\). It is easy to verify that the resulting (infinite) configuration \(\psi_\infty=\lim_t\psi_t\) is random (see [38, Theorem 3.3]), and hence the primary harmonic measure of the random walk \((\mathcal L,\mu)\) is non-trivial.

\(\square\)

This reasoning does not hold in the case where the support \(\operatorname{supp}\alpha\) is infinite. Below we will see that in this situation the measure \(\mu\) (38) may also be Liouville, and in Lemma 47 we establish a simple necessary and sufficient condition for this. But before that, keeping in mind our goal of constructing counterexamples to Propositions 16 and 21 in the case of infinite entropy, let us have a look at two other closely related random walks.

4.C. The first one is the random walk on the same group \(\mathcal L\), determined by the reflection \(\check\mu\) (15) of the measure \(\mu\).

Claim 43.

The reflected measure \(\check\mu\) is non-Liouville for any measure \(\alpha\) (36) from the definition (38).

Proof.

Since inversion in the group \(\mathcal L\) is given by

$$(n,\varphi)^{-1}=(-n, T^{-n}\varphi ) \quad\forall\,(n,\varphi)\in\mathcal L,$$

one has

$$\check\mu=\delta_1 \otimes T\lambda,$$

where the shift \(T\lambda\) of the measure \(\lambda\) (37) only charges the configurations concentrated on the positive ray \([1\,..\,\infty)\subset\mathbb Z\). Therefore, the same reasoning as in the proof of Claim 42 applies without any restrictions on the measure \(\alpha\).

\(\square\)

4.D. The state space of the second random walk is the group \(\widetilde{\mathcal L}_0\) (29). To define its step distribution, let us first take a sequence of probability measures \(\widetilde\varkappa_n\) on the squares \(K_n\times K_n\) of the sets \(K_n\) (33) such that their marginal distributions (i. e., the images under the projections onto each of the copies of \(K_n\)) coincide with the corresponding measures \(\varkappa_n\) (35). In other words, since the distributions \(\varkappa_n\) are uniform, the measures \(\widetilde\varkappa_n\) are exactly the result of normalizing bistochastic matrices parameterized by the sets \(K_n\).

We are interested in the images

$$\overline\varkappa_n=\omega(\widetilde\varkappa_n)$$

of the measures \(\widetilde\varkappa_n\) under the action of the group operation in \(\Phi\)

$$ \omega\colon(\varphi,\varphi')\mapsto \varphi+\varphi', \qquad \varphi,\varphi'\in\Phi.$$
(44)

It is clear that \(\overline\varkappa_n=\delta_{\vartheta}\) (we recall that the trivial “empty” configuration \(\vartheta\) is the identity of the group \(\Phi\)) if and only if the measure \(\widetilde\varkappa_n\) is concentrated on the diagonal of the product \(K_n\times K_n\). Of course, there are other measures on \(K_n\times K_n\) with the prescribed marginals \(\varkappa_n\); according to the Birkhoff–von Neumann Theorem, all of them are convex combinations of the extreme measures corresponding to permutations of the sets \(K_n\). In particular, for \(n=1\), the set \(K_1\) consists of two configurations \(\varepsilon_1\) and \(\varepsilon_0+\varepsilon_1\), and we can take for \(\widetilde\varkappa_1\) the uniform distribution on the antidiagonal in \(K_1\times K_1\)

$$\widetilde\varkappa_1=\operatorname{Unif} \{ (\varepsilon_1,\varepsilon_0+\varepsilon_1), (\varepsilon_0+\varepsilon_1,\varepsilon_1) \}$$

that corresponds to the transposition of \(\varepsilon_1\) and \(\varepsilon_0+\varepsilon_1\). With this choice, \(\overline\varkappa_1=\omega(\widetilde\varkappa_1)\) is the delta measure concentrated on the configuration \(\varepsilon=\varepsilon_0\) (27).

In general, it is obvious that

$$\operatorname{supp}\overline\varkappa_n \subset K_n+K_n=\Phi_0^{n-1},$$

and it can be easily seen that for any prescribed measure \(\rho\) on \(\Phi_0^{n-1}\), we can choose a measure \(\widetilde\varkappa_n\) on the product \(K_n\times K_n\) in such a way that its marginal distributions coincide with \(\varkappa_n\), and the image \(\overline\varkappa_n\) under the action of the mapping (44) is \(\rho\). For instance, using the fact that the uniform distribution \(m=\operatorname{Unif}(\Phi_0^{n-1})\) on \(\Phi_0^{n-1}\) is preserved under convolution with any measure on this subgroup, we can take the image of the product measure \(m\otimes\rho\) under the mapping

$$(\varphi_0,\overline\varphi) \mapsto (\varphi_0+\varepsilon_n,\varphi_0+\overline\varphi+\varepsilon_n).$$
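
The following sketch verifies this construction numerically for a small \(n\) and an arbitrary illustrative target measure \(\rho\): both marginals of the resulting measure are uniform on \(K_n\), and its image under (44) is \(\rho\).

```python
import itertools
from collections import Counter

# A check of the construction above: the pushforward of m (x) rho under
# (phi0, phibar) -> (phi0 + eps_n, phi0 + phibar + eps_n) has both marginals
# uniform on K_n, and its image under (44) is rho. Configurations are frozensets
# of lit sites; addition in Phi is the symmetric difference.

n = 3
Phi0 = [frozenset(c) for k in range(n + 1) for c in itertools.combinations(range(n), k)]
eps_n = frozenset({n})
m = {phi: 1.0 / len(Phi0) for phi in Phi0}                 # Unif(Phi_0^{n-1})
rho = {frozenset(): 0.5, frozenset({0, 2}): 0.5}           # an arbitrary target measure

marginal1, marginal2, image = Counter(), Counter(), Counter()
for phi0, p in m.items():
    for phibar, q in rho.items():
        a = phi0 ^ eps_n                                   # phi0 + eps_n
        b = phi0 ^ phibar ^ eps_n                          # phi0 + phibar + eps_n
        marginal1[a] += p * q
        marginal2[b] += p * q
        image[a ^ b] += p * q                              # image under (44); a + b = phibar

def is_uniform_on_K(marg):
    return len(marg) == 2 ** n and all(abs(v - 2.0 ** -n) < 1e-12 for v in marg.values())

print(is_uniform_on_K(marginal1), is_uniform_on_K(marginal2),
      all(abs(image[s] - rho.get(s, 0.0)) < 1e-12 for s in image))   # True True True
```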

Now, let us take the weighted average

$$\widetilde\lambda=\sum_{n=1}^\infty \alpha_n \widetilde\varkappa_n$$

with the same weights \(\alpha_n\) as in the definition (37) of the measure \(\lambda\), and finally set

$$ \widetilde\mu=\delta_{-1} \otimes \widetilde\lambda.$$
(45)

Claim 46.

If the support of the distribution \(\alpha\) (36) is infinite and the set of indices \(n\in\operatorname{supp}\alpha\) for which \(\widetilde\varkappa_n \neq \operatorname{diag}\varkappa_n\) is non-empty and finite, then the measure \(\widetilde\mu\) is non-Liouville.

Proof.

By definition, the image of the random walk \((\widetilde{\mathcal L}_0,\widetilde\mu)\) under the homomorphism (32) is the random walk on the group \(\mathcal L\) with the step distribution

$$\overline\mu=\delta_{-1} \otimes \overline\lambda,$$

where

$$\overline\lambda=\sum_{n=1}^\infty \alpha_n \overline\varkappa_n.$$

Thus, to prove that the measure \(\widetilde\mu\) is not Liouville, it suffices to establish that the quotient measure \(\overline\mu\) is not Liouville. By assumption, the support of the measure \(\overline\lambda\) is finite and contains at least one configuration other than the empty configuration \(\vartheta\) (since \(\operatorname{supp}\overline\varkappa_n=\{\vartheta\}\) if and only if \(\widetilde\varkappa_n = \operatorname{diag}\varkappa_n\)); therefore, the measure \(\overline\mu\) is non-Liouville for the same reasons as in the proof of Claim 42. \(\square\)

5. Infinite Entropy

5.A. The key technical result is the following.

Lemma 47.

The measure \(\mu\) (38) on the group \(\mathcal L\) (26) is Liouville if and only if the entropy \(H(\mu)\) is infinite.

Proof.

Since the reflected measure \(\check\mu\) is always non-Liouville (Claim 43), if the entropy \(H(\mu)\) is finite, then by Proposition 16 the measure \(\mu\) itself is also non-Liouville.

Now, let us consider the situation when the entropy \(H(\mu)\) is infinite and show that in this case the measure \(\mu\) is Liouville. First of all, note that since \(|K_n|=2^n\), the entropy of the measure \(\mu\) is

$$H(\mu)=H(\alpha)+(\log 2)|\alpha|,$$

where

$$|\alpha|=\sum_n n \alpha_n$$

denotes the first moment of the measure \(\alpha\) (36). On the other hand, \(H(\alpha)\le (\log 2)|\alpha|\) as a result of applying the Gibbs inequality to the measures \(\alpha\) and \(\xi=(\xi_n)_{n\ge 1}\) where \(\xi_n=2^{-n}\). Thus, the entropy \(H(\mu)\) is infinite if and only if the first moment \(|\alpha|\) of the measure \(\alpha\) is infinite.

An immediate consequence of this fact is the following observation. Let us denote by

$$|\varphi|=\max \{ |n|\colon n\in\operatorname{supp}\varphi \}$$

the “range” of the configuration \(\varphi\in\Phi\), also putting \(|\vartheta|=0\) for the empty configuration \(\vartheta\). In particular, \(|\varphi|=n\) for all configurations \(\varphi\) from any set \(K_n\) (33), and therefore the first moment \(|\alpha|\) is exactly the expectation of the ranges \(|\varphi_t|\) of the increments of the random walk (39). If it is infinite, then a routine application of the Borel–Cantelli Lemma implies that almost surely

$$ \limsup_{t\to\infty} (|\varphi_t|-t)=\infty.$$
(48)

By the definition of the measure \(\mu\), its convolutions (i. e., the one-dimensional distributions of the associated random walk) have the form

$$\mu^{*t}=\delta_{-t} \otimes \lambda_t,$$

where \(\lambda_t\) are the probability measures on the group \(\Phi\) representing the distributions of the component \(\psi_t\) (40) of the random walk. In other words, they are the convolutions

$$\lambda_t=\lambda * T^{-1}\lambda * \dots * T^{-t+1}\lambda$$

of the shifts of the measure \(\lambda\) (37) on the commutative group \(\Phi\), and in particular

$$\lambda_{t+1}=\lambda * T^{-1}\lambda_t \quad\forall\,t\ge 1.$$

Therefore, as follows from the characterization of the Liouville property in terms of the asymptotic invariance of the Cesàro means (10), for proving the Liouville property of the measure \(\mu\), it is sufficient to verify the asymptotic invariance of the sequence of measures \((\lambda_t)\) with respect to the action of the group \(\Phi\), namely

$$ \| \varphi \lambda_t-\lambda_t \| \xrightarrow[t\to\infty]{} 0 \quad\forall\,\varphi\in\Phi.$$
(49)

The idea behind the proof of the convergence (49) is to replace one of the increments \(\varphi_\tau\) of each sample path \(\mathbf{g}\) (39) with

$$\varphi'_\tau=\varphi_\tau+T^{\tau-1} \varphi,$$

while leaving the other increments unchanged. As follows from formula (40), for the resulting new sample paths,

$$ \psi'_t=\psi_t+T^{-\tau+1} (\varphi'_\tau-\varphi_\tau)=\psi_t+\varphi \quad \forall\,t\ge\tau.$$
(50)

If this replacement preserves the measure on the space of sample paths, or equivalently, on the space of sequences of independent increments \((\varphi_t)\) with a common distribution \(\lambda\) (37), then the corresponding one-dimensional distributions \(\lambda_t\) will have the desired property of asymptotic invariance (49).

To implement this idea, let us fix a non-empty configuration \(\varphi\in\Phi\), and, by using the fact that the limit (48) is infinite, for almost every sample path \(\mathbf{g}\) put

$$ \tau=\tau(\mathbf{g} )=\min \{ t> |\varphi|\colon |\varphi_t|-t> |\varphi| \}.$$
(51)

Now let us define the transformation

$$U=U_\varphi\colon (\varphi_t)\mapsto(\varphi'_t)$$

on the space of sequences of independent \(\lambda\)-distributed increments \((\varphi_t)\) by putting

$$\varphi'_t= \begin{cases} \varphi_t+ T^{\tau-1}\varphi, & t=\tau, \\ \varphi_t, & t\neq\tau. \end{cases}$$

The conditional distribution of \(\varphi_\tau\), given the value \(\tau=t_0\), is a convex combination of the measures \(\varkappa_n\) (35) with the weights obtained by normalizing the restriction of the distribution \(\alpha\) (36) to the ray \((|\varphi|+t_0\,..\,\infty)\), and therefore it is invariant under the action of the subgroup \(\Phi_0^{|\varphi|+t_0}\). On the other hand, the support of the configuration \(\varphi\) is contained in the interval \([-|\varphi|\,..\,|\varphi|]\) by definition of the range \(|\varphi|\), and therefore, due to the inequalities from definition (51), the support of the shift \(T^{t_0-1}\varphi\) is contained in the interval \([0\,..\,|\varphi|+t_0]\), implying that \(T^{t_0-1}\varphi\in \Phi_0^{|\varphi|+t_0}\).

As follows from formula (50), for any \(t_0>0\),

$$\| \varphi\lambda_t-\lambda_t \| \le 2 \mathbf P\{\tau > t_0 \} \quad\forall\,t\ge t_0,$$

which implies the convergence (49) and completes the proof. \(\square\)

5.B. Lemma 47 makes it easy to proceed with the presentation of our main examples.

Theorem 52.

If the measure \(\mu\) (38) on the group \(\mathcal L\) (26) has infinite entropy \(H(\mu)\), then the reflected measure \(\check\mu\) is non-Liouville, while the measure \(\mu\) itself is Liouville.

Proof.

This is a combination of Claim 43 and Lemma 47. \(\square\)

Remark 53.

In light of Remark 18, Theorem 52 implies that if the entropy of the measure \(\mu\) is infinite, then all means on the group \(\operatorname{gr}\mu\) obtained as Banach limits of the sequence of convolutions \(\mu^{*t}\) are left-invariant but not right-invariant. Further elaborating on the question from Remark 11, it would be interesting to understand the structure of the set of strictly one-sided means that arise in this way from convolutions of an irreducible measure \(\mu\) on an arbitrary amenable group \(G\) (let us denote this set by \(\mathcal S(G)\)).

The question of describing the class of groups on which all invariant means (not just those arising from convolutions) are two-sided was first posed by G. M. Adelson-Velsky and Yu. A. Shreider [1, Theorem 7], who, however, erroneously claimed that all groups of locally subexponential growth belong to this class (we note in parentheses that they were not familiar with any of the works on invariant means that appeared after the publication of the Banach – Tarski paradox, including von Neumann’s 1929 paper). This error was discovered by A. Paterson [47], who proved that the class is in fact much smaller and consists (if we restrict ourselves to discrete groups) of the FC-central groups, i. e., those in which all conjugacy classes are finite (see also the further works by P. Milnes [46], J. Rosenblatt–M. Talagrand [52], and J. Hopfensperger [35]). Thus, for all groups \(G\) that are hyper-FC-central (see the discussion in § 1) but not FC-central, the set \(\mathcal S(G)\) is empty. On the other hand, a recent result of A. V. Alpeev [12] implies that \(\mathcal S(G)\) is non-empty for all other amenable groups \(G\).

Theorem 54.

If the measure \(\widetilde\mu\) (45) on the group \(\widetilde{\mathcal L}_0 \subset \mathcal L\times\mathcal L\) (29) is such that the measure \(\alpha\) (36) has infinite first moment, and the set of indices \(n\in\operatorname{supp}\alpha\) with \(\widetilde\varkappa_n \neq \operatorname{diag}\varkappa_n\) is non-empty and finite, then the measure \(\widetilde\mu\) is non-Liouville, whereas both coordinate projections of the measure \(\widetilde\mu\) onto the factors of the product \(\mathcal L\times\mathcal L\) are Liouville.

Proof.

This is a combination of Claim 46 and Lemma 47. \(\square\)

6. Appendix. “Liouville’s Theorem” and “Shannon’s Theorem”

[Figure 6]

6.A. The events related to the emergence of what is now called Liouville’s theorem on the absence of non-constant bounded analytic (as well as harmonic) functions on the complex plane are very well documented in the reports (Comptes Rendus) of the meetings of l’Académie des sciences for the years 1844 and 1851, and even in a brief account they are quite interesting (we omit numerous even more picturesque details, which we hope to return to on another occasion).

At the meeting on December 9, 1844, Liouville, in his remarks on a memoir on elliptic functions presented that day by Chasles, announces, among other things, his result on the absence of non-constant bounded doubly periodic analytic functions on the complex plane (referred to by him as a “principle” because, according to Liouville, it “seems to impart to the study of these functions a quite particular character of unity and simplicity”). Already at the next meeting on December 16 (exactly a week later), Cauchy asserts that Liouville’s principle is a special case of his own results, published as early as 1827, and presents a complete (by the standards of that time) proof without any assumptions of periodicity (this is what we now know as “Liouville’s theorem”). A week later (December 23), Cauchy presents another proof, and altogether, according to U. Bottazzini’s count [17, p. 165], publishes five different proofs in less than a year.

The second act takes place on March 31, 1851. As the chairman of a special ad hoc committee, Cauchy presents to the Academy a report on a memoir by Hermite on doubly periodic functions. At the very end of this report, Liouville’s announcement is mentioned (the proof still has not been published). Liouville immediately states that a complete proof is contained in the notes of the lectures he himself gave in 1847 to two German mathematicians, which were recorded by one of them, Borchardt (“now, I have at home, and I will be able to place on the desk, before the end of the session, a handwritten document that will appear conclusive in this regard”), and indeed manages to present this manuscript before the end of the session (it was published by Borchardt much later, only in 1879). The table of contents of these lecture notes reproduced in Comptes Rendus begins with the theorem on the absence of non-constant bounded doubly periodic functions. But the last word on this day belongs to Cauchy, who once again asserts his priority by referring not only to the note of 1844, but also to his earlier works.

The presentation of Liouville’s theorem in “A Course of Modern Analysis” by Whittaker and Watson is accompanied by a note (starting from the second edition of 1915): “This theorem, which is really due to Cauchy, was given this name by Borchardt, who heard it in Liouville’s lectures in 1847.” This story has been discussed many times by historians of mathematics (in addition to the aforementioned book [17], see also the detailed analyses by J. Lützen [44, §§ XIII.9–17, 21] and U. Bottazzini–J. Gray [18, §§ 3.5.6, 4.2.4]), who ultimately tend to give priority to Liouville ([44, pp. 543, 544] and [18, pp. 231, 232], respectively), referring, among other things, to a note from the summer of 1844 found in his records, which mentions how the general case can be obtained from the doubly periodic one. And yet, as far as the general case is concerned, Cauchy’s reasoning appears much more conceptual from a modern point of view (since in modern terms the doubly periodic case is “just” an exercise in applying the maximum principle …).

Ever since Bernhard Riemann “virtually put equality signs between two-dimensional potential theory and complex function theory” (to quote L. Ahlfors [11]), the statement about the absence of non-constant bounded harmonic functions on the plane has also been called Liouville’s theorem. We will continue the story of the Liouville property for harmonic functions elsewhere.

6.B. The definition of the entropy

$$ H(p)=- \sum p_i \log p_i$$
(55)

of a discrete probability distribution \(p=(p_i)\) is usually associated with Claude Shannon. However, in his seminal paper “A Mathematical Theory of Communication”, he explicitly writes that “entropy appears in the same form in certain formulations of statistical mechanics” [53, § 6]. There is a widely circulated, albeit unconfirmed by Shannon himself, anecdote about his conversation with John von Neumann, who allegedly suggested using the term “entropy”, saying: “You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name. In the second place, and more importantly, no one knows what entropy really is, so in a debate you will always have the advantage” (for instance, see O. Rioul [50, § 12]).

When discussing the genesis of entropy (55) from statistical mechanics, the first names that come to mind are Ludwig Boltzmann and Josiah Willard Gibbs (for entropy in classical and quantum mechanics, see the very informative survey by S. Goldstein, J. Lebowitz, R. Tumulka, and N. Zanghi [32]). Boltzmann provided a probabilistic interpretation of Clausius’ thermodynamic entropy as the negative logarithm of the “probability” that the system is in a given state. The use of quotation marks is explained by the fact that in reality this quantity (the measure of permutability, Permutabilitätsmaß according to Boltzmann [15, p. 192]; in modern terms, this is the differential entropy of a distribution on the phase space) arises as a result of some limiting procedure based on combinatorial calculations with infinitesimal regions, and therefore it is defined (even without demanding excessive rigor from the limiting process) up to an additive constant. Boltzmann himself repeatedly emphasized that the discretization he used “is nothing more than an artificial device, Hilfsmittel, that helps to calculate physical processes”, explicitly calling it “mathematical fiction” [16, p. 348].

Although sums of the form (55) often appear in the works of Boltzmann and (slightly later) Gibbs, their appearance always serves as an auxiliary step towards the differential entropy (“everything infinite in nature means nothing other than a limiting procedure”, as Boltzmann says [15, p. 167]), and the normalization condition for the weights \(\sum p_i=1\) is never imposed. For example, the proof of Theorem VIII in Gibbs’ “Elementary Principles in Statistical Mechanics” [31] essentially boils down to establishing what is now called the Gibbs inequality (the non-negativity of the Kullback – Leibler divergence in the discrete case), but Gibbs does not feel any need to consider it as a property of probability distributions.

Max Planck poses and solves the problem of the physical meaning of Boltzmann’s limiting procedure, which leads him to the understanding that quantization is a physical reality. He vividly recalls in his Nobel lecture of 1920 [9] how, following the discovery of the law of black-body radiation, the necessity arose to give it a conceptual interpretation: “But even if this radiation formula should prove to be absolutely accurate it would after all be only an interpolation formula found by happy guesswork, and would thus leave one rather unsatisfied. I was, therefore, from the day of its origination, occupied with the task of giving it a real physical meaning, and this question led me, along Boltzmann’s line of thought, to the consideration of the relation between entropy and probability; until after some weeks of the most intense work of my life clearness began to dawn upon me, and an unexpected view revealed itself in the distance.”

The perspective that opened up was nothing other than the first glimpse of quantum theory. Later in the same lecture, Planck explains in detail why the new theory implies that the physical entropy (not just its increments) has an “absolute value”, i. e., in modern terms, it is a function and not an additive cocycle. In an expanded form, this explanation is given at the beginning (§§ 113–131) of the section “Entropy and Probability” of his book “The Theory of Heat Radiation” (starting from the second edition in 1913 [49]). As he writes in the preface, “the entropy of a state has a quite definite, positive value, which, as a minimum, becomes zero, while in contrast therewith the entropy may, according to the classical thermodynamics, decrease without limit to minus infinity. For the present, I would consider this proposition as the very quintessence of the hypothesis of quanta.” Formula (173) in § 124 of Planck’s book

$$S=- kN \sum w_n\log w_n,$$

where \(S\) is the physical entropy, \(k\) is the Boltzmann constant introduced by Planck, and \(N\) is the number of molecules, is apparently the first appearance of the “mathematical” entropy (55) with probability weights \(w_n\).

Let us now return to von Neumann. It is precisely this formula (used, of course, together with the fact that the weights in it are normalized; in the quantum setup this means that the corresponding state has unit trace) that he quotes when defining the quantum “von Neumann entropy” [54].

In conclusion, it should be emphasized that all of the above applies only to the entropy of a single discrete probability distribution. Shannon introduced into consideration sequences of distributions whose entropies are subadditive, which makes it possible to speak of the resulting asymptotic entropy, i. e., of the linear growth rate of these entropies. The additional structure used in this case is a stationary sequence of symbols, and the resulting sequence of measures consists of its finite-dimensional distributions. Another example is the sequence of convolutions described in § 2; it is generated by a group structure on a countable set of states (and it is precisely Shannon’s asymptotic entropy that served as a model for A. Avez’s definition of the asymptotic entropy of random walks). The question of a unified approach to these two situations was raised back in the 1970s by A. M. Vershik; later [3, p. 64] he proposed to use Poisson boundary polymorphisms for this purpose. As for the Boltzmann–Planck–von Neumann entropy in statistical physics, as recently noted by Vershik [4, p. 49], its connection with Shannon’s theory remains problematic.