1 Introduction and preliminaries

A metric random walk space \([X,d,m]\) is a metric space \((X,d)\) together with a family \(m = (m_x)_{x \in X}\) of probability measures that encode the jumps of a Markov chain. Important examples of metric random walk spaces are: locally finite weighted connected graphs, finite Markov chains and \([{\mathbb {R}}^N, d, m^J]\) with d the Euclidean distance and

$$\begin{aligned} m^J_x(A) := \int _A J(x - y) d\mathcal {L}^N(y) \ \hbox { for every Borel set } A \subset {\mathbb {R}}^N \ , \end{aligned}$$

where \(J:{\mathbb {R}}^N\rightarrow [0,+\infty [\) is a measurable, nonnegative and radially symmetric function with \(\int J =1\). Furthermore, given a metric measure space \((X,d, \mu )\) satisfying certain properties, we can obtain a metric random walk space \([X, d, m^{\mu ,\epsilon }]\), called the \(\epsilon \)-step random walk associated to \(\mu \), where

$$\begin{aligned} m^{\mu ,\epsilon }_x:= \frac{\mu |\!\_B(x, \epsilon )}{\mu (B(x, \epsilon ))}. \end{aligned}$$

Since its introduction as a means of solving the denoising problem in the seminal work by Rudin et al. [45], the total variation flow has remained one of the most popular tools in Image Processing. Recall that, from the mathematical point of view, the study of the total variation flow in \({\mathbb {R}}^N\) was established in [5]. On the other hand, the use of neighbourhood filters by Buades et al. [12], which were originally proposed by Yaroslavsky [52], has led to an extensive literature on nonlocal models in image processing (see for instance [8, 28, 31, 32] and the references therein). Consequently, there is great interest in studying the total variation flow in the nonlocal context. As further motivation, note that an image can be considered as a weighted graph, where the pixels are taken as the vertices and the “similarity” between pixels as the weights. The way in which these weights are defined depends on the problem at hand, see for instance [24, 32].

The aim of this paper is to study the total variation flow in metric random walk spaces, obtaining general results that can be applied, for example, to the different points of view in Image Processing. In this regard, we introduce the 1-Laplacian operator associated with a metric random walk space, as well as the notions of perimeter and mean curvature for subsets of a metric random walk space. In doing so, we generalize results obtained in [34, 35] for the particular case of \([{\mathbb {R}}^N, d, m^J]\), and, moreover, generalize results in graph theory. We then proceed to prove existence and uniqueness of solutions of the total variation flow in metric random walk spaces and to study its asymptotic behaviour with the help of some Poincaré type inequalities. Furthermore, we introduce the concepts of Cheeger and calibrable sets in metric random walk spaces and characterize calibrability by using the 1-Laplacian operator. Let us point out that, to our knowledge, some of these results are new even for graphs; we specify in the main text which of the important results were already known in that setting. Moreover, in the forthcoming paper [37], we apply the theory developed here to obtain the \((BV,L^p)\)-decomposition, \(p=1,2\), of functions in metric random walk spaces. This decomposition can be applied to Image Processing if, for example, images are regarded as graphs and, moreover, to other nonlocal models.

Partitioning data into sensible groups is a fundamental problem in machine learning, computer science, statistics and science in general. In these fields, it is usual to face large amounts of empirical data, and getting a first impression of the data by identifying groups with similar properties can prove to be very useful. One of the most popular approaches to this problem is to find the best balanced cut of a graph representing the data, such as the Cheeger ratio cut [17]. Consider a finite weighted connected graph \(G =(V, E)\), where \(V = \{x_1, \ldots , x_n \}\) is the set of vertices (or nodes) and E the set of edges, which are weighted by a function \(w_{ji}= w_{ij} \ge 0\), \((i,j) \in E\). The degree of the vertex \(x_i\) is denoted by \(d_i:= \sum _{j=1}^n w_{ij}\), \(i=1,\ldots , n\). In this context, the Cheeger cut value of a partition \(\{ S, S^c\}\) (\(S^c:= V {\setminus } S\)) of V is defined as

$$\begin{aligned} \mathcal {C}(S):= \frac{\mathrm{Cut}(S,S^c)}{\min \{\mathrm{vol}(S), \mathrm{vol}(S^c)\}}, \end{aligned}$$

where

$$\begin{aligned} \mathrm{Cut}(A,B) = \sum _{i \in A, j \in B} w_{ij}, \end{aligned}$$

and \(\mathrm{vol}(S)\) is the volume of S, defined as \(\mathrm{vol}(S):= \sum _{i \in S} d_i\). Furthermore,

$$\begin{aligned} h(G) = \min _{S \subset V} \mathcal {C}(S) \end{aligned}$$

is called the Cheeger constant, and a partition \(\{ S, S^c\}\) of V is called a Cheeger cut of G if \(h(G)=\mathcal {C}(S)\). Unfortunately, the Cheeger minimization problem of computing h(G) is NP-hard [29, 47]. However, it turns out that h(G) can be approximated by the second eigenvalue \(\lambda _2\) of the graph Laplacian thanks to the following Cheeger inequality [18]:

$$\begin{aligned} \frac{\lambda _2}{2} \le h(G) \le \sqrt{2\lambda _2}. \end{aligned}$$
(1.1)

This motivates the spectral clustering method [51], which, in its simplest form, thresholds the second eigenvalue of the graph Laplacian to get an approximation to the Cheeger constant and, moreover, to a Cheeger cut. In order to achieve a better approximation than the one provided by the classical spectral clustering method, a spectral clustering based on the graph p-Laplacian was developed in [13], where it is shown that the second eigenvalue of the graph p-Laplacian tends to the Cheeger constant h(G) as \(p \rightarrow 1^+\). In [47] the idea was taken up by directly considering the following variational characterization of the Cheeger constant h(G):

$$\begin{aligned} h(G) = \min _{u \in L^1} \frac{ \vert u \vert _{TV}}{\Vert u - \mathrm{median}(u) \Vert _1}, \end{aligned}$$
(1.2)

where

$$\begin{aligned} \vert u \vert _{TV} := \frac{1}{2} \sum _{i,j=1}^n w_{ij} \vert u(x_i) - u(x_j) \vert . \end{aligned}$$

The subdifferential of the energy functional \(\vert \cdot \vert _{TV}\) is the 1-Laplacian in graphs \(\Delta _1\). Using the nonlinear eigenvalue problem \(0 \in \Delta _1 u - \lambda \, \mathrm{sign}(u)\), the theory of 1-Spectral Clustering is developed in [14,15,16, 29], and good results on the Cheeger minimization problem have been obtained.
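For readers who wish to experiment with these quantities, the following Python sketch (an illustration of our own, not part of the paper; the toy graph and its unit weights are arbitrary choices) brute-forces the Cheeger constant of a small graph, computes the second eigenvalue of the normalized graph Laplacian, and checks the Cheeger inequality (1.1), together with the fact that, for an indicator function \(u=\upchi _S\), the total variation \(\vert u\vert _{TV}\) appearing in (1.2) equals \(\mathrm{Cut}(S,S^c)\):

```python
import itertools
import numpy as np

# Toy graph: two triangles joined by a single bridge edge; the weights
# are illustrative and not taken from the paper.
n = 6
W = np.zeros((n, n))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    W[i, j] = W[j, i] = 1.0
d = W.sum(axis=1)                                   # degrees d_i

def cut(S):                                         # Cut(S, S^c)
    Sc = [k for k in range(n) if k not in S]
    return W[np.ix_(list(S), Sc)].sum()

def vol(S):                                         # vol(S) = sum of degrees in S
    return d[list(S)].sum()

# Brute-force Cheeger constant h(G) = min_S Cut(S,S^c)/min{vol(S), vol(S^c)}
h = min(cut(S) / min(vol(S), vol([k for k in range(n) if k not in S]))
        for r in range(1, n) for S in itertools.combinations(range(n), r))

# Second eigenvalue of the normalized graph Laplacian I - D^{-1/2} W D^{-1/2}
Dh = np.diag(d ** -0.5)
lam2 = np.sort(np.linalg.eigvalsh(np.eye(n) - Dh @ W @ Dh))[1]
assert lam2 / 2 <= h <= np.sqrt(2 * lam2)           # Cheeger inequality (1.1)

# For an indicator u = chi_S, |u|_TV of (1.2) equals Cut(S, S^c)
u = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
tv = 0.5 * np.sum(W * np.abs(u[:, None] - u[None, :]))
assert np.isclose(tv, cut((0, 1, 2)))               # the bridge cut, of value 1
```

For these weights the optimal cut separates the two triangles through the bridge edge, giving \(h(G)=1/7\).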

In [36], we obtained a generalization, in the framework of metric random walk spaces, of the Cheeger inequality (1.1) and of the variational characterization of the Cheeger constant (1.2). In this paper, in connection with the 1-Spectral Clustering, also in metric random walk spaces, we study the eigenvalue problem of the 1-Laplacian and then relate it to the optimal Cheeger cut problem. Once again, these results apply, in particular, to locally finite weighted connected graphs, complementing the results given in [14,15,16, 29].

Additionally, we provide, via the \(\epsilon \)-step random walk associated to \(\mu \), a characterization of the functions of bounded variation in a metric measure space \((X,d, \mu )\) in the sense introduced by Miranda in [41].

1.1 Metric random walk spaces

Let \((X,d)\) be a Polish metric space equipped with its Borel \(\sigma \)-algebra. A random walk m on X is a family of probability measures \(m_x\) on X, \(x \in X\), satisfying the following two technical conditions: (i) the measures \(m_x\) depend measurably on the point \(x \in X\), i.e., for any Borel set A of X and any Borel set B of \({\mathbb {R}}\), the set \(\{ x \in X \ : \ m_x(A) \in B \}\) is Borel; (ii) each measure \(m_x\) has finite first moment, i.e., for some (hence any, by the triangle inequality) \(z \in X\) and for any \(x \in X\) one has \(\int _X d(z,y) dm_x(y) < +\infty \) (see [44]).

A metric random walk space \([X,d,m]\) is a Polish metric space \((X,d)\) together with a random walk m on X.

Let \([X,d,m]\) be a metric random walk space. A Radon measure \(\nu \) on X is invariant for the random walk \(m=(m_x)\) if

$$\begin{aligned} d\nu (x)=\int _{y\in X}d\nu (y)dm_y(x), \end{aligned}$$

that is, for any \(\nu \)-measurable set A, it holds that A is \(m_x\)-measurable for \(\nu \)-almost all \(x\in X\), \(\displaystyle x\mapsto m_x(A)\) is \(\nu \)-measurable, and

$$\begin{aligned} \nu (A)=\int _X m_x(A)d\nu (x). \end{aligned}$$

Consequently, if \(\nu \) is an invariant measure with respect to m and \(f \in L^1(X, \nu )\), it holds that \(f \in L^1(X, m_x)\) for \(\nu \)-a.e. \(x \in X\), \(\displaystyle x\mapsto \int _X f(y) d{m_x}(y)\) is \(\nu \)-measurable, and

$$\begin{aligned} \int _X f(x) d\nu (x) = \int _X \left( \int _X f(y) d{m_x}(y) \right) d\nu (x). \end{aligned}$$

The measure \(\nu \) is said to be reversible for m if, moreover, the following detailed balance condition holds:

$$\begin{aligned} dm_x(y)d\nu (x) = dm_y(x)d\nu (y), \end{aligned}$$
(1.3)

that is, for any Borel set \(C \subset X \times X\),

$$\begin{aligned} \int _{X}\left( \int _X \upchi _{C}(x,y) dm_x(y)\right) d\nu (x) = \int _X\left( \int _X\upchi _C(x,y) dm_y(x)\right) d\nu (y), \end{aligned}$$

where \(\upchi _{C}\) is the characteristic function of the set C defined as

$$\begin{aligned} \upchi _{C}(x):=\left\{ \begin{array}{ll} 1 &{}\quad \hbox {if } x\in C,\\ 0 &{}\quad \hbox {otherwise.} \end{array}\right. \end{aligned}$$

Note that the reversibility condition implies the invariance condition. However, we will sometimes write that \(\nu \) is invariant and reversible so as to emphasize both conditions.
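On a finite state space both conditions reduce to matrix identities, which makes them easy to verify numerically. The following Python sketch (a toy three-state chain of our own; the conductances are arbitrary) checks the detailed balance condition (1.3) and, from it, the invariance of \(\nu \):

```python
import numpy as np

# A 3-state random walk in the discrete notation of Example 1.1 (2):
# m_x is row x of the kernel K, and nu is a measure on {0, 1, 2}.
# The symmetric "conductances" below are illustrative, not from the paper.
w = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.0, 3.0],
              [1.0, 3.0, 0.0]])        # w_xy = w_yx
nu = w.sum(axis=1)                     # nu(x) = sum_y w_xy
K = w / nu[:, None]                    # K(x,y) = w_xy / nu(x), so m_x = K[x]

# Detailed balance (1.3): dm_x(y) dnu(x) = dm_y(x) dnu(y)
balance = K * nu[:, None]
assert np.allclose(balance, balance.T)

# ...which implies invariance: nu(A) = integral of m_x(A) dnu(x) for every A
for A in [[0], [1, 2], [0, 2]]:
    assert np.isclose(nu[A].sum(), sum(nu[x] * K[x, A].sum() for x in range(3)))
```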

We now give some examples of metric random walk spaces that illustrate the general abstract setting. In particular, Markov chains serve as paradigmatic examples that capture many of the properties of this general setting that we will encounter during our study.

Example 1.1

  1. (1)

    Consider \(({\mathbb {R}}^N, d, \mathcal {L}^N)\), with d the Euclidean distance and \(\mathcal {L}^N\) the Lebesgue measure. For simplicity we will write dx instead of \(d\mathcal {L}^N(x)\). Let \(J:{\mathbb {R}}^N\rightarrow [0,+\infty [\) be a measurable, nonnegative and radially symmetric function verifying \(\int _{{\mathbb {R}}^N}J(x)dx=1\). In \(({\mathbb {R}}^N, d, \mathcal {L}^N)\) we have the following random walk, starting at x,

    $$\begin{aligned} m^J_x(A) := \int _A J(x - y) dy \quad \hbox { for every Borel set } A \subset {\mathbb {R}}^N . \end{aligned}$$

    Applying Fubini’s Theorem it is easy to see that the Lebesgue measure \(\mathcal {L}^N\) is an invariant and reversible measure for this random walk.

    Observe that, if we assume that in \({\mathbb {R}}^N\) we have a homogeneous population and \(J(x-y)\) is thought of as the probability distribution of jumping from location x to location y, then, for a Borel set A in \({\mathbb {R}}^N\), \(m^J_x(A)\) measures how many individuals are going to A from x following the law given by J. See also the interpretation of the m-interaction between sets given in Sect. 2.1. Finally, note that the same ideas are applicable to the countable spaces given in the following two examples.

  2. (2)

    Let \(K: X \times X \rightarrow {\mathbb {R}}\) be a Markov kernel on a countable space X, i.e.,

    $$\begin{aligned} K(x,y) \ge 0 \quad \forall x,y \in X, \quad \quad \sum _{y\in X} K(x,y) = 1 \quad \forall x \in X. \end{aligned}$$

    Then, for

    $$\begin{aligned} m^K_x(A):= \sum _{y \in A} K(x,y), \end{aligned}$$

    \([X, d, m^K]\) is a metric random walk space for any metric d on X.

    Moreover, in Markov chain theory terminology, a measure \(\pi \) on X satisfying

    $$\begin{aligned} \sum _{x \in X} \pi (x) = 1 \quad \hbox {and} \quad \pi (y) = \sum _{x \in X} \pi (x) K(x,y) \quad \forall y \in X, \end{aligned}$$

    is called a stationary probability measure (or steady state) on X. This is equivalent to the definition of invariant probability measure for the metric random walk space \([X, d, m^K]\). In general, the existence of such a stationary probability measure on X is not guaranteed. However, for irreducible and positive recurrent Markov chains (see, for example, [30] or [43]) there exists a unique stationary probability measure.

    Furthermore, a stationary probability measure \(\pi \) is said to be reversible for K if the following detailed balance equation holds:

    $$\begin{aligned} K(x,y) \pi (x) = K(y,x) \pi (y) \ \hbox { for } x, y \in X. \end{aligned}$$

    By Tonelli’s Theorem for series, this balance condition is equivalent to the one given in (1.3) for \(\nu =\pi \):

    $$\begin{aligned} dm^K_x(y)d\pi (x) = dm^K_y(x)d\pi (y). \end{aligned}$$
  3. (3)

    Consider a locally finite weighted discrete graph \(G = (V(G), E(G))\), where each edge \((x,y) \in E(G)\) (we will write \(x\sim y\) if \((x,y) \in E(G)\)) has a positive weight \(w_{xy} = w_{yx}\) assigned. Suppose further that \(w_{xy} = 0\) if \((x,y) \not \in E(G)\).

    A finite sequence \(\{ x_k \}_{k=0}^n\) of vertices on the graph is called a path if \(x_k \sim x_{k+1}\) for all \(k = 0, 1, \ldots , n-1\). The length of a path \(\{ x_k \}_{k=0}^n\) is defined as the number n of edges in the path. Then, \(G = (V(G), E(G))\) is said to be connected if, for any two vertices \(x, y \in V(G)\), there is a path connecting x and y, that is, a path \(\{ x_k \}_{k=0}^n\) such that \(x_0 = x\) and \(x_n = y\). Finally, if \(G = (V(G), E(G))\) is connected, define the graph distance \(d_G(x,y)\) between any two distinct vertices x, y as the minimum of the lengths of the paths connecting x and y. Note that this metric is independent of the weights. We will always assume that the graphs we work with are connected.

    For \(x \in V(G)\) we define the weight at the vertex x as

    $$\begin{aligned} d_x:= \sum _{y\sim x} w_{xy} = \sum _{y\in V(G)} w_{xy}, \end{aligned}$$

    and the neighbourhood of x as \(N_G(x) := \{ y \in V(G) \, : \, x\sim y\}\). Note that, by definition of locally finite graph, the sets \(N_G(x)\) are finite. When \(w_{xy}=1\) for every \(x\sim y\), \(d_x\) coincides with the degree of the vertex x in a graph, that is, the number of edges containing vertex x.

    For each \(x \in V(G)\) we define the following probability measure

    $$\begin{aligned} m^G_x:= \frac{1}{d_x}\sum _{y \sim x} w_{xy}\,\delta _y. \end{aligned}$$

    We have that \([V(G), d_G, m^G]\) is a metric random walk space and it is not difficult to see that the measure \(\nu _G\) defined as

    $$\begin{aligned} \nu _G(A):= \sum _{x \in A} d_x, \quad A \subset V(G), \end{aligned}$$

    is an invariant and reversible measure for this random walk.

    Given a locally finite weighted discrete graph \(G = (V(G), E(G))\), there is a natural definition of a Markov chain on the vertices. We define the Markov kernel \(K_G: V(G)\times V(G) \rightarrow {\mathbb {R}}\) as

    $$\begin{aligned} K_G(x,y):= \frac{1}{d_x} w_{xy}. \end{aligned}$$

    We have that \(m^G\) and \(m^{K_G}\) define the same random walk. If \(\nu _G(V(G))\) is finite, the unique stationary and reversible probability measure is given by

    $$\begin{aligned} \pi _G(x):= \frac{1}{\nu _G(V(G))} \sum _{z \in V(G)} w_{xz}. \end{aligned}$$
  4. (4)

    From a metric measure space \((X,d, \mu )\) we can obtain a metric random walk space, the so called \(\epsilon \)-step random walk associated to \(\mu \), as follows. Assume that balls in X have finite measure and that \(\mathrm{Supp}(\mu ) = X\). Given \(\epsilon > 0\), the \(\epsilon \)-step random walk on X starting at \(x\in X\), consists in randomly jumping in the ball of radius \(\epsilon \) centered at x with probability proportional to \(\mu \); namely

    $$\begin{aligned} m^{\mu ,\epsilon }_x:= \frac{\mu |\!\_B(x, \epsilon )}{\mu (B(x, \epsilon ))}. \end{aligned}$$

    Note that \(\mu \) is an invariant and reversible measure for the metric random walk space \([X, d, m^{\mu ,\epsilon }]\).

  5. (5)

    Given a metric random walk space [Xdm] with invariant and reversible measure \(\nu \), and given a \(\nu \)-measurable set \(\Omega \subset X\) with \(\nu (\Omega ) > 0\), if we define, for \(x\in \Omega \),

    $$\begin{aligned} m^{\Omega }_x(A):=\int _A d m_x(y)+\left( \int _{X{\setminus } \Omega }d m_x(y)\right) \delta _x(A) \ \hbox { for every Borel set } A \subset \Omega , \end{aligned}$$

    we have that \([\Omega ,d,m^{\Omega }]\) is a metric random walk space and it is easy to see that \(\nu |\!\_\Omega \) is reversible for \(m^{\Omega }\).

In particular, if \(\Omega \) is a closed and bounded subset of \({\mathbb {R}}^N\), we obtain the metric random walk space \([\Omega , d, m^{J,\Omega }]\), where \(m^{J,\Omega } = (m^J)^{\Omega }\), that is

$$\begin{aligned} m^{J,\Omega }_x(A):=\int _A J(x-y)dy+\left( \int _{{\mathbb {R}}^N{\setminus } \Omega }J(x-z)dz\right) \delta _x(A) \ \hbox { for every Borel set } A \subset \Omega . \end{aligned}$$

From this point onwards, when dealing with a metric random walk space, we will assume that there exists an invariant and reversible measure for the random walk, which we will always denote by \(\nu \). In this regard, when it is clear from the context, a measure denoted by \(\nu \) will always be an invariant and reversible measure for the random walk under study. Furthermore, we assume that the metric measure space \((X,d,\nu )\) is \(\sigma \)-finite.
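To make Examples 1.1 (3) and (5) concrete, the following Python sketch (with toy weights of our own choosing) builds the random walk \(m^G\) on a small weighted graph, verifies that \(\nu _G\) is reversible, and then checks that the confined walk \(m^{\Omega }\) of Example 1.1 (5), in which the mass that would leave \(\Omega \) stays put as a Dirac delta at x, is again a random walk for which the restricted measure is invariant and reversible:

```python
import numpy as np

# Random symmetric weights on a complete graph (illustrative only).
rng = np.random.default_rng(0)
n = 5
w = rng.random((n, n))
w = (w + w.T) / 2
np.fill_diagonal(w, 0.0)
d = w.sum(axis=1)                       # vertex weights d_x
mG = w / d[:, None]                     # m^G_x = (1/d_x) sum_y w_xy delta_y
nuG = d                                 # candidate invariant measure nu_G

# Reversibility of nu_G for m^G: m^G_x(y) d_x = w_xy is symmetric
assert np.allclose(mG * nuG[:, None], (mG * nuG[:, None]).T)

# Confined walk on Omega = {0, 1, 2}:
# m^Omega_x(A) = m_x(A) + m_x(X \ Omega) * delta_x(A)
Omega = [0, 1, 2]
base = mG[np.ix_(Omega, Omega)]
mO = base + np.diag(1.0 - base.sum(axis=1))
assert np.allclose(mO.sum(axis=1), 1.0)       # each m^Omega_x is a probability

nuO = nuG[Omega]                              # nu_G restricted to Omega
assert np.allclose(mO * nuO[:, None], (mO * nuO[:, None]).T)   # reversible
```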

1.2 Completely accretive operators and semigroup theory

Since Semigroup Theory will be used throughout the paper, we would like to conclude this introduction with some notation and results from this theory, along with results from the theory of completely accretive operators (see [9, 10, 22], or the Appendix in [6], for more details). We denote by \(J_0\) and \(P_0\) the following sets of functions:

$$\begin{aligned} J_0 := \{ j : {\mathbb {R}}\rightarrow [0, +\infty ] \ : \ j \hbox { is convex, lower semicontinuous and } j(0) = 0 \}, \\ P_0:= \left\{ q\in C^\infty ({\mathbb {R}}) \ : \ 0\le q'\le 1, \hbox { supp}(q') \hbox { is compact and } 0\notin \hbox {supp}(q) \right\} . \end{aligned}$$

Let \(u,v\in L^1(X,\nu )\). The following relation between u and v is defined in [9]:

$$\begin{aligned} u\ll v \ \hbox { if, and only if,} \ \int _{X} j(u)\, d\nu \le \int _{X} j(v) \, d\nu \ \ \hbox {for all} \ j \in J_0. \end{aligned}$$

An operator \(\mathcal {A} \subset L^1(X,\nu )\times L^1(X,\nu )\) is called completely accretive if

$$\begin{aligned} \int _X (v_1 - v_2) q(u_1 - u_2)d\nu \ge 0 \quad \hbox {for every} \ \ q \in P_0 \end{aligned}$$

and every \((u_i, v_i) \in \mathcal {A}\), \(i=1,2\). Moreover, an operator \(\mathcal {A}\) in \(L^1(X,\nu )\) is m-completely accretive in \(L^1(X,\nu )\) if \(\mathcal {A}\) is completely accretive and \(\mathrm{Range}(I + \lambda \mathcal {A}) = L^1(X,\nu )\) for all \(\lambda > 0\) (or, equivalently, for some \(\lambda >0\)).

Theorem 1.2

[9, 10] If \(\mathcal {A}\) is an m-completely accretive operator in \(L^1(X,\nu )\), then, for every \(u_0 \in \overline{D(\mathcal {A})}\) (the closure of the domain of \(\mathcal {A}\)), there exists a unique mild solution (see [22]) of the problem

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle \frac{du}{dt} + \mathcal {A}u \ni 0, \\ u(0) = u_{0}. \end{array} \right. \end{aligned}$$

Moreover, if \(\mathcal {A}\) is the subdifferential of a proper convex and lower semicontinuous function in \(L^2(X,\nu )\) then the mild solution of the above problem is a strong solution.

Furthermore, we have the following contraction and maximum principle in any \(L^q(X,\nu )\) space, \(1\le q\le +\infty \): for \(u_{1,0},u_{2,0} \in \overline{D(\mathcal {A})}\), and denoting by \(u_i\) the unique mild solution of the problem

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle \frac{du_i}{dt} + \mathcal {A}u_i \ni 0, \\ u_i(0) = u_{i,0}, \end{array} \right. \end{aligned}$$

\( i=1,2,\) we have

$$\begin{aligned} \Vert (u_1(t)-u_2(t))^+\Vert _{L^q(X,\nu )}\le \Vert (u_{1,0}-u_{2,0})^+\Vert _{L^q(X,\nu )}\quad \forall \, t>0, \end{aligned}$$

where \(r^+:=\max \{r,0\}\) for \(r\in {\mathbb {R}}\).
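As a minimal numeric sketch of how mild solutions are produced in practice, the following Python code discretizes the abstract Cauchy problem by implicit Euler steps \(u^{k+1} = (I+\lambda \mathcal {A})^{-1}u^k\), in the spirit of the Crandall-Liggett exponential formula, for the linear operator \(\mathcal {A}u(x) = u(x) - \int _X u(y)\,dm_x(y)\) on a toy weighted graph, and monitors the contraction of positive parts in \(L^1(X,\nu )\). The graph, step size, and initial data are illustrative choices of our own:

```python
import numpy as np

# Random reversible walk on a small complete graph (illustrative data).
rng = np.random.default_rng(1)
n = 6
w = rng.random((n, n))
w = (w + w.T) / 2
np.fill_diagonal(w, 0.0)
d = w.sum(axis=1)
nu = d                                  # invariant, reversible measure
A = np.eye(n) - w / d[:, None]          # A u = u - integral of u dm_x

lam = 0.05                              # implicit Euler step size
resolvent = np.linalg.inv(np.eye(n) + lam * A)   # (I + lam A)^{-1}

u1, u2 = rng.random(n), rng.random(n)   # two initial data
pos_norm = lambda v: np.sum(nu * np.maximum(v, 0.0))   # ||v^+||_{L^1(nu)}

gaps = [pos_norm(u1 - u2)]
for _ in range(200):
    u1, u2 = resolvent @ u1, resolvent @ u2
    gaps.append(pos_norm(u1 - u2))

# ||(u1(t) - u2(t))^+|| is non-increasing along the discrete flow
assert all(a <= b + 1e-12 for a, b in zip(gaps[1:], gaps))
```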

2 Perimeter, curvature and total variation in metric random walk spaces

2.1 m-perimeter

Let \([X,d,m]\) be a metric random walk space with invariant and reversible measure \(\nu \). We define the m-interaction between two \(\nu \)-measurable subsets A and B of X as

$$\begin{aligned} L_m(A,B):= \int _A \int _B dm_x(y) d\nu (x). \end{aligned}$$

Whenever \(L_m(A,B) < +\infty \), by the reversibility assumption on \(\nu \) with respect to m, we have

$$\begin{aligned} L_m(A,B)=L_m(B,A). \end{aligned}$$

Following the interpretation given in Example 1.1 (1), for a \(\nu \)-homogeneous population which moves according to the law provided by the random walk m, \(L_m(A,B)\) measures how many individuals are moving from A to B, and, thanks to the reversibility, this is equal to the amount of individuals moving from B to A. In this regard, the following concept measures the total flux of individuals that cross the “boundary” (in a very weak sense) of a set.

We define the m-perimeter of a \(\nu \)-measurable subset \(E \subset X\) as

$$\begin{aligned} P_m(E)=L_m(E,X{\setminus } E) = \int _E \int _{X{\setminus } E} dm_x(y) d\nu (x). \end{aligned}$$

It is easy to see that

$$\begin{aligned} P_m(E) = \frac{1}{2} \int _{X} \int _{X} \vert \upchi _{E}(y) - \upchi _{E}(x) \vert dm_x(y) d\nu (x). \end{aligned}$$

Moreover, if \(\nu (E) < +\infty \), we have

$$\begin{aligned} \displaystyle P_m(E)=\nu (E) -\int _E\int _E dm_x(y) d\nu (x). \end{aligned}$$
(2.1)

The notion of m-perimeter can be localized to a bounded open set \(\Omega \subset X\) by defining

$$\begin{aligned} \displaystyle P_m(E, \Omega ):= L_m( E\cap \Omega , X {\setminus } E) + L_m(E {\setminus } \Omega , \Omega {\setminus } E). \end{aligned}$$

Observe that

$$\begin{aligned} \displaystyle L_m(E, X{\setminus } E) =L_m( E\cap \Omega , X {\setminus } E) + L_m(E {\setminus } \Omega , \Omega {\setminus } E)+L_m(E{\setminus } \Omega , X {\setminus } (E \cup \Omega )) \end{aligned}$$

and, consequently, we have

$$\begin{aligned} P_m(E, \Omega ) = \int _E \int _{X {\setminus } E} dm_x(y) d\nu (x) - \int _{E{\setminus } \Omega } \int _{X {\setminus } (E \cup \Omega )} dm_x(y) d\nu (x), \end{aligned}$$

when both integrals are finite.

Example 2.1

  1. (1)

    Let \([{\mathbb {R}}^N, d, m^J]\) be the metric random walk space given in Example 1.1 (1) with invariant measure \(\mathcal {L}^N\). Then,

    $$\begin{aligned} P_{m^J} (E) = \frac{1}{2} \int _{{\mathbb {R}}^N} \int _{{\mathbb {R}}^N} \vert \upchi _{E}(y) - \upchi _{E}(x) \vert J(x -y) dy dx, \end{aligned}$$

    which coincides with the concept of J-perimeter introduced in [34]. On the other hand,

    $$\begin{aligned} P_{ m^{J,\Omega }} (E) = \frac{1}{2} \int _{\Omega } \int _{\Omega } \vert \upchi _{E}(y) - \upchi _{E}(x) \vert J(x -y) dy dx. \end{aligned}$$

    Note that, in general, \(P_{ m^{J,\Omega }} (E) \not = P_{m^J} (E).\)

    Moreover,

    $$\begin{aligned} P_{ m^{J,\Omega }} (E)= & {} \mathcal {L}^N(E) - \int _E \int _E dm_x^{J,\Omega }(y) dx = \mathcal {L}^N(E) - \int _E \int _E J(x-y) dy dx \\&-\, \int _E \left( \int _{{\mathbb {R}}^N {\setminus } \Omega } J(x - z) dz\right) dx \end{aligned}$$

    and, therefore,

    $$\begin{aligned} P_{ m^{J,\Omega }} (E) = P_{ m^{J}} (E) - \int _E \left( \int _{{\mathbb {R}}^N {\setminus } \Omega } J(x - z) dz\right) dx, \quad \forall \, E \subset \Omega . \end{aligned}$$
    (2.2)
  2. (2)

    In the case of the metric random walk space \([V(G), d_G, m^G ]\) associated to a finite weighted discrete graph G, given \(A, B \subset V(G)\), \(\mathrm{Cut}(A,B)\) is defined as

    $$\begin{aligned} \mathrm{Cut}(A,B):= \sum _{x \in A, y \in B} w_{xy} = L_{m^G}(A,B), \end{aligned}$$

    and the perimeter of a set \(E \subset V(G)\) is given by

    $$\begin{aligned} \vert \partial E \vert := \mathrm{Cut}(E,E^c) = \sum _{x \in E, y \in V {\setminus } E} w_{xy}. \end{aligned}$$

    Consequently, we have that

    $$\begin{aligned} \vert \partial E \vert = P_{m^G}(E) \quad \hbox {for all} \ E \subset V(G). \end{aligned}$$
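On a graph these objects are finite sums, so the identities above can be checked directly. The following Python sketch (random illustrative weights) computes \(L_m\) and \(P_m\) for the graph random walk \(m^G\) with \(\nu =\nu _G\), and verifies the symmetry \(L_m(A,B)=L_m(B,A)\), Eq. (2.1), and the identity \(\vert \partial E\vert = P_{m^G}(E)\):

```python
import numpy as np

# Random symmetric weights on a complete graph (illustrative only).
rng = np.random.default_rng(2)
n = 7
w = rng.random((n, n))
w = (w + w.T) / 2
np.fill_diagonal(w, 0.0)
d = w.sum(axis=1)
mG, nu = w / d[:, None], d              # m^G and nu_G

def L_m(A, B):                          # L_m(A,B) = int_A int_B dm_x(y) dnu(x)
    return sum(nu[x] * mG[x, B].sum() for x in A)

E = [0, 2, 5]
Ec = [k for k in range(n) if k not in E]

P = L_m(E, Ec)                                      # P_m(E)
assert np.isclose(P, L_m(Ec, E))                    # L_m(A,B) = L_m(B,A)
assert np.isclose(P, nu[E].sum() - L_m(E, E))       # Eq. (2.1)
assert np.isclose(P, w[np.ix_(E, Ec)].sum())        # graph cut |dE|
```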

Let us now give some properties of the m-perimeter.

Proposition 2.2

Let \(A,\ B \subset X\) be \(\nu \)-measurable sets with finite m-perimeter such that \(\nu (A \cap B) = 0\). Then,

$$\begin{aligned} P_m( A \cup B) = P_m( A) + P_m(B) - 2 L_m(A,B). \end{aligned}$$

Proof

We have

$$\begin{aligned} \displaystyle P_m( A \cup B)= & {} \int _{ A \cup B} \left( \int _{X {\setminus } (A \cup B)} dm_x(y) \right) d\nu (x) \\ \displaystyle= & {} \int _{ A} \left( \int _{X {\setminus } (A \cup B)} dm_x(y) \right) d\nu (x) + \int _{B} \left( \int _{X {\setminus } (A \cup B)} dm_x(y) \right) d\nu (x) \\ \displaystyle= & {} \int _{ A} \left( \int _{X{\setminus } A} dm_x(y) - \int _B dm_x(y) \right) d\nu (x) \\&+\, \int _{B} \left( \int _{X {\setminus } B} dm_x(y) - \int _A dm_x(y)\right) d\nu (x), \end{aligned}$$

and then, by the reversibility assumption on \(\nu \) with respect to m,

$$\begin{aligned} \begin{array}{l} \displaystyle P_m( A \cup B) = P_m( A) + P_m(B)- 2 \int _A \left( \int _B dm_x(y)\right) d\nu (x). \end{array} \end{aligned}$$

\(\square \)

Corollary 2.3

Let \(A,\ B,\ C\) be \(\nu \)-measurable sets in X with pairwise \(\nu \)-null intersections. Then

$$\begin{aligned} P_m( A \cup B\cup C)=P_m( A \cup B) +P_m(A\cup C) + P_m(B\cup C) - P_m( A) - P_m(B)-P_m(C) . \end{aligned}$$
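Both the proposition and the corollary can be verified numerically on a graph, where disjoint vertex sets automatically have \(\nu \)-null intersections. A short Python sketch (toy weights of our own choosing):

```python
import numpy as np

# Random symmetric weights on a complete graph (illustrative only).
rng = np.random.default_rng(3)
n = 9
w = rng.random((n, n))
w = (w + w.T) / 2
np.fill_diagonal(w, 0.0)

def P_m(E):                              # P_m(E) = total weight leaving E
    Ec = [k for k in range(n) if k not in E]
    return w[np.ix_(list(E), Ec)].sum()

def L_m(A, B):                           # m-interaction, nu = nu_G
    return w[np.ix_(list(A), list(B))].sum()

A, B, C = [0, 1], [2, 3], [4, 5, 6]

# Proposition 2.2
assert np.isclose(P_m(A + B), P_m(A) + P_m(B) - 2 * L_m(A, B))

# Corollary 2.3
assert np.isclose(P_m(A + B + C),
                  P_m(A + B) + P_m(A + C) + P_m(B + C)
                  - P_m(A) - P_m(B) - P_m(C))
```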

2.2 m-mean curvature

Let \(E \subset X\) be \(\nu \)-measurable. For a point \(x \in X\) we define the m-mean curvature of \(\partial E\) at x as

$$\begin{aligned} H^m_{\partial E}(x):= \int _{X} (\upchi _{X {\setminus } E}(y) - \upchi _E(y)) dm_x(y). \end{aligned}$$

Observe that

$$\begin{aligned} H^m_{\partial E}(x) = 1 - 2 \int _E dm_x(y). \end{aligned}$$
(2.3)

Note that \(H^m_{\partial E}(x)\) can be computed for every \(x \in X\), not only for points in \(\partial E\). This fact will be used later in the paper. Having in mind (2.1), we have that, for a set \(E\subset X\) with \(\nu (E)<+\infty \),

$$\begin{aligned} \int _E H^m_{\partial E}(x) d\nu (x)= & {} \int _E \left( 1 - 2 \int _E dm_x(y) \right) d\nu (x) = \nu (E) - 2\int _E\int _E dm_x(y) d\nu (x) \\= & {} P_m(E) - \int _E\int _E dm_x(y) d\nu (x) = 2P_m(E) -\nu (E). \end{aligned}$$

Consequently,

$$\begin{aligned} \displaystyle \int _E H^m_{\partial E}(x) d\nu (x)=2P_m(E) -\nu (E). \end{aligned}$$
(2.4)
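The following Python sketch (illustrative weights, not from the paper) evaluates the m-mean curvature on a graph via Eq. (2.3) and confirms the integral identity (2.4):

```python
import numpy as np

# Random symmetric weights on a complete graph (illustrative only).
rng = np.random.default_rng(4)
n = 6
w = rng.random((n, n))
w = (w + w.T) / 2
np.fill_diagonal(w, 0.0)
d = w.sum(axis=1)
mG, nu = w / d[:, None], d              # m^G and nu_G

E = [1, 3, 4]
Ec = [k for k in range(n) if k not in E]

H = 1.0 - 2.0 * mG[:, E].sum(axis=1)    # H^m_{dE}(x) for every x, Eq. (2.3)
P = w[np.ix_(E, Ec)].sum()              # P_m(E)

# Eq. (2.4): integral of H over E equals 2 P_m(E) - nu(E)
assert np.isclose(sum(nu[x] * H[x] for x in E), 2 * P - nu[E].sum())
```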

2.3 m-total variation

Associated to the random walk \(m=(m_x)\) and the invariant measure \(\nu \), we define the space

$$\begin{aligned} BV_m(X,\nu ):= \left\{ u :X \rightarrow {\mathbb {R}}\ \,\nu \hbox {-measurable} \, : \, \int _{X} \int _{X} \vert u(y) - u(x) \vert dm_x(y) d\nu (x) < \infty \right\} . \end{aligned}$$

We have that \(L^1(X,\nu )\subset BV_m(X,\nu )\). The m-total variation of a function \(u\in BV_m(X,\nu )\) is defined by

$$\begin{aligned} TV_m(u):= \frac{1}{2} \int _{X} \int _{X} \vert u(y) - u(x) \vert dm_x(y) d\nu (x). \end{aligned}$$

Note that

$$\begin{aligned} P_m(E) = TV_m(\upchi _E). \end{aligned}$$
(2.5)

Observe that the space \(BV_m(X,\nu )\) is the nonlocal counterpart of the classical local bounded variation spaces. Note further that, in the local context, given a Lebesgue measurable set \(E\subset {\mathbb {R}}^N\), its perimeter is equal to the total variation of its characteristic function (see (2.19)), and Eq. (2.5) above provides the nonlocal counterpart. In (2.21) and Theorem 2.22 we illustrate further relations between these spaces.

However, although they represent analogous concepts in different settings, the classical local BV-spaces and the nonlocal BV-spaces are of a different nature. For example, in our nonlocal framework \(L^1(X,\nu )\subset BV_m(X,\nu )\), in contrast with the classical local bounded variation spaces, which are, by definition, contained in \(L^1\). Indeed, since each \(m_x\), \(x\in X\), is a probability measure and \(\nu \) is invariant with respect to m, we have that

$$\begin{aligned} TV_m(u)\le \frac{1}{2} \int _X\int _X |u(y)|dm_x(y)d\nu (x)+\frac{1}{2}\int _X\int _X|u(x)|dm_x(y)d\nu (x)= \Vert u\Vert _{L^1(X,\nu )}. \end{aligned}$$

Recall that the generalized product measure \(\nu \otimes m_x\) (see, for instance, [3]) is defined as the measure on \(X \times X\) given by

$$\begin{aligned} \nu \otimes m_x(U) := \int _X \int _X \upchi _{U}(x,y) dm_x(y) d\nu (x)\quad \hbox {for } U\in \mathcal {B}(X\times X), \end{aligned}$$

where it is required that the map \(x \mapsto m_x(E)\) is \(\nu \)-measurable for any Borel set \(E \in \mathcal {B}(X)\). Moreover, it holds that

$$\begin{aligned} \int _{X \times X} g d(\nu \otimes m_x) = \int _X \int _X g(x,y) dm_x(y) d\nu (x) \end{aligned}$$

for every \(g\in L^1(X\times X,\nu \otimes m_x)\). Therefore, we can write

$$\begin{aligned} TV_m(u)= \frac{1}{2} \int _{X\times X} \vert u(y) - u(x) \vert d(\nu \otimes m_x)(x,y). \end{aligned}$$

Example 2.4

Let \([V(G), d_G, (m^G_x)]\) be the metric random walk space given in Example 1.1 (3) with invariant and reversible measure \(\nu _G\). Then,

$$\begin{aligned} TV_{m^G} (u)= & {} \frac{1}{2} \int _{V(G)} \int _{V(G)} \vert u(y) - u(x) \vert dm^G_x(y) d\nu _G(x) \\= & {} \frac{1}{2} \int _{V(G)} \frac{1}{d_x} \left( \sum _{y \in V(G)} \vert u(y) - u(x) \vert w_{xy}\right) d\nu _G(x) \\= & {} \frac{1}{2} \sum _{x \in V(G)} d_x \left( \frac{1}{d_x} \sum _{y \in V(G)} \vert u(y) - u(x) \vert w_{xy}\right) \\= & {} \frac{1}{2} \sum _{x \in V(G)} \sum _{y \in V(G)} \vert u(y) - u(x) \vert w_{xy}, \end{aligned}$$

which coincides with the anisotropic total variation defined in [50].

In the following results we give some properties of the total variation.

Proposition 2.5

If \(\phi : {\mathbb {R}}\rightarrow {\mathbb {R}}\) is Lipschitz continuous then, for every \(u \in BV_m(X,\nu )\), \(\phi (u) \in BV_m(X,\nu )\) and

$$\begin{aligned} TV_m(\phi (u)) \le \Vert \phi \Vert _{Lip} TV_m(u). \end{aligned}$$

Proof

$$\begin{aligned} TV_m(\phi (u))= & {} \frac{1}{2} \int _{X} \int _{X} \vert \phi (u)(y) - \phi (u)(x) \vert dm_x(y) d\nu (x)\\\le & {} \Vert \phi \Vert _{Lip} \frac{1}{2} \int _{X} \int _{X} \vert u(y) - u(x) \vert dm_x(y) d\nu (x) = \Vert \phi \Vert _{Lip} TV_m(u). \end{aligned}$$

\(\square \)

Proposition 2.6

\(TV_m\) is convex and continuous in \(L^1(X, \nu )\).

Proof

Convexity follows easily. Let us see that it is continuous. Let \(u_n \rightarrow u\) in \(L^1(X, \nu )\). Since \(\nu \) is invariant and reversible with respect to m, we have

$$\begin{aligned} \vert TV_m(u_n) - TV_m(u) \vert= & {} \frac{1}{2} \left| \int _{X} \int _{X} \left( \vert u_n(y) - u_n(x) \vert - \vert u(y) - u(x) \vert \right) dm_x(y) d\nu (x)\right| \\\le & {} \frac{1}{2} \left( \int _{X} \int _{X} \vert u_n(y) - u(y) \vert dm_x(y) d\nu (x) \right. \\&\left. +\int _{X} \int _{X} \vert u_n(x) - u(x) \vert dm_x(y) d\nu (x)\right) \\= & {} \frac{1}{2} \left( \int _{X} \vert u_n(y) - u(y) \vert d\nu (y) \right. \\&\left. +\int _{X} \vert u_n(x) - u(x) \vert d\nu (x)\right) = \Vert u_n - u \Vert _{L^1(X, \nu )}. \end{aligned}$$

\(\square \)

As in the local case, we have the following coarea formula relating the total variation of a function with the perimeter of its superlevel sets.

Theorem 2.7

(Coarea formula) For any \(u \in L^1(X,\nu )\), let \(E_t(u):= \{ x \in X \ : \ u(x) > t \}\). Then,

$$\begin{aligned} TV_m(u) = \int _{-\infty }^{+\infty } P_m(E_t(u))\, dt. \end{aligned}$$
(2.6)

Proof

Since

$$\begin{aligned} u(x) = \int _0^{+\infty } \upchi _{E_t(u)}(x) \, dt - \int _{-\infty }^0 (1 - \upchi _{E_t(u)}(x)) \, dt, \end{aligned}$$

we have

$$\begin{aligned} u(y) - u(x) = \int _{-\infty }^{+\infty } \left( \upchi _{E_t(u)}(y) - \upchi _{E_t(u)} (x) \right) dt. \end{aligned}$$

Moreover, since \(u(y) \ge u(x)\) implies \(\upchi _{E_t(u)}(y) \ge \upchi _{E_t(u)} (x)\), we obtain that

$$\begin{aligned} \vert u(y) - u(x) \vert = \int _{-\infty }^{+\infty } \vert \upchi _{E_t(u)}(y) - \upchi _{E_t(u)} (x) \vert \,dt. \end{aligned}$$

Therefore, we get

$$\begin{aligned} \displaystyle TV_m(u)= & {} \frac{1}{2} \int _{X} \int _{X} \vert u(y) - u(x) \vert dm_x(y) d\nu (x) \displaystyle \\= & {} \displaystyle \frac{1}{2} \int _{X} \int _{X} \left( \int _{-\infty }^{+\infty } \vert \upchi _{E_t(u)}(y) - \upchi _{E_t(u)} (x) \vert dt \right) dm_x(y) d\nu (x) \\= & {} \displaystyle \int _{-\infty }^{+\infty } \left( \frac{1}{2} \int _{X} \int _{X} \vert \upchi _{E_t(u)}(y) - \upchi _{E_t(u)} (x) \vert dm_x(y) d\nu (x) \right) dt \\= & {} \int _{-\infty }^{+\infty } P_m(E_t(u)) dt, \end{aligned}$$

where Tonelli–Hobson’s Theorem is used in the third equality. \(\square \)
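On a finite weighted graph the coarea formula can be verified directly: a function takes finitely many values, so the integral in (2.6) reduces to a finite sum over the level intervals, and the m-perimeter of a vertex set is the total weight of the edges joining it to its complement. The graph and function below are illustrative.

```python
import numpy as np

# Coarea formula TV_m(u) = integral of P_m(E_t(u)) dt on a finite
# weighted graph; nu is the degree measure and P_m(E) is the total
# weight crossing the cut. All data is illustrative.
w = np.array([[0., 1., 2., 0.],
              [1., 0., 1., 3.],
              [2., 1., 0., 1.],
              [0., 3., 1., 0.]])
nu = w.sum(axis=1)
P = w / nu[:, None]

def TV(u):
    return 0.5 * np.sum(nu[:, None] * P * np.abs(u[None, :] - u[:, None]))

def Per(E):                     # E: boolean mask of a vertex set
    return w[np.ix_(E, ~E)].sum()

u = np.array([0.0, 1.5, -0.5, 2.0])
ts = np.sort(np.unique(u))      # E_t(u) is constant on each [t_i, t_{i+1})
integral = sum(Per(u > t0) * (t1 - t0) for t0, t1 in zip(ts[:-1], ts[1:]))
assert abs(TV(u) - integral) < 1e-9
```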

Let us recall the following concept of m-connectedness introduced in [34]: A metric random walk space \([X,d,m]\) with invariant and reversible measure \(\nu \) is m-connected if, for any pair of \(\nu \)-non-null measurable sets \( A,B\subset X\) such that \(A\cup B=X\), we have \(L_m(A,B)> 0\). Moreover, in [36, Theorem 2.19], we see that this concept is equivalent to the following concept of ergodicity (see [30]) when \(\nu \) is a probability measure.

Definition 2.8

Let \([X,d,m]\) be a metric random walk space with invariant and reversible probability measure \(\nu \). A Borel set \(B \subset X\) is said to be invariant with respect to the random walk m if \(m_x(B) = 1\) whenever x is in B. The invariant probability measure \(\nu \) is said to be ergodic if \(\nu (B) = 0\) or \(\nu (B) = 1\) for every invariant set B with respect to the random walk m.

Furthermore, by [36, Theorem 2.21], we have that \(\nu \) is ergodic if, and only if, for \(u\in L^2(X,\nu )\), \(\Delta _m u = 0\) implies that u is \(\nu \)-a.e. equal to a constant, where

$$\begin{aligned} \Delta _m u(x) := \int _X (u(y) - u(x)) dm_x(y). \end{aligned}$$

As an example, note that the metric random walk space associated to an irreducible and positive recurrent Markov chain on a countable space together with its steady state is m-connected (see [30]). Moreover, the metric random walk space \([V(G), d_G, m^G]\) associated to a locally finite weighted connected discrete graph \(G = (V(G), E(G))\) is \(m^G\)-connected. In [36] we give further examples involving the metric random walk space given in Example 1.1 (1).
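The characterization of ergodicity through \(\Delta_m\) is easy to test on finite spaces: for a reversible chain with transition matrix P, the operator \(\Delta_m = P - I\) has real spectrum, and ergodicity corresponds to 0 being a simple eigenvalue, i.e. the harmonic functions being the constants. A hedged sketch with illustrative graphs:

```python
import numpy as np

def null_dim(w):
    # Dimension of the kernel of Delta_m = P - I for the random walk
    # m_x = w[x, :] / d_x; reversibility makes the spectrum real.
    nu = w.sum(axis=1)
    P = w / nu[:, None]
    eig = np.linalg.eigvals(P - np.eye(len(nu)))
    return int(np.sum(np.abs(eig) < 1e-10))

# A connected graph: only constants are harmonic (ergodic case).
w_conn = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
assert null_dim(w_conn) == 1

# Two disconnected components: a two-dimensional space of harmonic
# functions, so the invariant measure is not ergodic.
w_disc = np.array([[0., 1., 0., 0.], [1., 0., 0., 0.],
                   [0., 0., 0., 1.], [0., 0., 1., 0.]])
assert null_dim(w_disc) == 2
```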

Observe that, for a metric random walk space \([X,d,m]\) with invariant and reversible measure \(\nu \), if the space is m-connected, then the m-perimeter of any \(\nu \)-measurable set E with \(0<\nu (E)<\nu (X)\) is positive.

Lemma 2.9

Assume that \(\nu \) is ergodic and let \(u\in BV_m(X,\nu )\). Then,

$$\begin{aligned} TV_m(u) = 0 \iff u \ \hbox {is constant} \ \nu \hbox {-a.e.} \end{aligned}$$

Proof

(\(\Leftarrow \)) Suppose that u is \(\nu \)-a.e. equal to a constant k, then, since \(\nu \) is invariant with respect to m, we have

$$\begin{aligned} \displaystyle TV_m(u)= & {} \frac{1}{2} \int _{X}\int _X \vert u(y) - u(x) \vert dm_x(y)d\nu (x)\\ \displaystyle= & {} \frac{1}{2} \int _{X}\int _X \vert u(y) - k \vert dm_x(y)d\nu (x) \\ \displaystyle= & {} \frac{1}{2} \int _{X} \vert u(y) - k \vert d\nu (y)=0. \end{aligned}$$

(\(\Rightarrow \)) Suppose that

$$\begin{aligned} 0 = TV_m(u) = \frac{1}{2} \int _{X}\int _X \vert u(y) - u(x) \vert dm_x(y)d\nu (x). \end{aligned}$$

Then, \(\int _X |u(y)-u(x)| dm_x(y)=0\) for \(\nu \)-a.e. \(x\in X\), thus

$$\begin{aligned} |\Delta _mu(x)|=\left| \int _X \big (u(y)-u(x)\big ) dm_x(y)\right| \le \int _X |u(y)-u(x)| dm_x(y)=0 \quad \hbox {for }\nu \hbox {-a.e. }x\in X, \end{aligned}$$

and we are done by the comments preceding the lemma. \(\square \)

From now on we will assume that the metric random walk spaces that we work with are m-connected (this assumption is only dropped in Sect. 2.5). However, we would like to point out that if a metric random walk space \([X,d,m]\) is not m-connected then it may be broken down as \(X=A\cup B\), where A, \(B\subset X\) have \(\nu \)-positive measure and \(L_m(A,B)=0\), allowing us to work with A and B independently. Then, for example, if \(E\subset X\) is a \(\nu \)-measurable set we get

$$\begin{aligned} P_m(E)=P_m(E\cap A)+P_m(E\cap B) \end{aligned}$$

and, if \(u\in BV_m(X,\nu )\),

$$\begin{aligned} TV_m(u)= \frac{1}{2} \int _{A}\int _A \vert u(y) - u(x) \vert dm_x(y)d\nu (x)+ \frac{1}{2} \int _{B}\int _B \vert u(y) - u(x) \vert dm_x(y)d\nu (x). \end{aligned}$$

2.4 Isoperimetric and Sobolev inequalities

The n-dimensional isoperimetric inequality states that

$$\begin{aligned} \mathcal {L}^n(\Omega ){}^{\frac{n-1}{n}} \le c_n \mathcal {H}^{n-1}(\partial \Omega ) \end{aligned}$$
(2.7)

for every domain \(\Omega \subset {\mathbb {R}}^n\) with smooth boundary and compact closure, where \(c_n = \frac{1}{n\,\omega _n^{1/n}}\) and \(\omega _n\) is the volume of the unit ball. It is well known (see for instance [39]) that (2.7) is equivalent to the Sobolev inequality

$$\begin{aligned} \Vert u \Vert _{\frac{n}{n-1}} \le c_n \int _{{\mathbb {R}}^n} \vert \nabla u \vert dx \quad \forall u \in C_0^{\infty }({\mathbb {R}}^n). \end{aligned}$$

If we replace the Euclidean space \({\mathbb {R}}^n\) by a Riemannian manifold M with measure \(\mu _n\), then the isoperimetric inequality takes the following form:

$$\begin{aligned} \mu _n(\Omega )^{\frac{n-1}{n}} \le C_n \mu _{n-1}(\partial \Omega ) \end{aligned}$$
(2.8)

for all bounded sets \(\Omega \subset M\) with smooth boundary, where \(\mu _{n-1}\) denotes the surface measure. As in the Euclidean case (see [38] or [46]), (2.8) is equivalent to the Sobolev inequality

$$\begin{aligned} \left( \int _M \vert u \vert ^{\frac{n}{n-1}} d\mu _n \right) ^{\frac{n-1}{n}} \le C_n \int _M \vert \nabla u \vert d\mu _n \quad \forall u \in C_0^{\infty } (M). \end{aligned}$$
(2.9)

Consequently, it is natural to say that a Riemannian manifold M has isoperimetric dimension n if (2.9) holds (see [21]). The equivalence between isoperimetric inequalities and Sobolev inequalities in the context of Markov chains was obtained by Varopoulos in [49]. Let us state these results in the context treated here.

Definition 2.10

Let \([X,d,m]\) be a metric random walk space with invariant and reversible measure \(\nu \). We say that \([X,d,m,\nu ]\) has isoperimetric dimension n if there exists a constant \(I_n>0\) such that

$$\begin{aligned} \nu (A)^{\frac{n-1}{n}} \le I_n P_m(A) \quad \hbox {for all }A\subset X\hbox { with } 0<\nu (A)<\nu (X). \end{aligned}$$
(2.10)

By convention, for \(n = 1\) we set \( \frac{n}{n-1} = +\infty \).

We will denote by \(BV^0_m(X,\nu )\) the set of functions \(u \in BV_m(X,\nu )\) satisfying that there exists \(A\subset X\), with \( 0< \nu (A)<\nu (X)\), such that \(u=0\) in \(X{\setminus } A\).

Theorem 2.11

\([X,d,m,\nu ]\) has isoperimetric dimension n if, and only if,

$$\begin{aligned} \Vert u \Vert _{L^{\frac{n}{n-1}}(X, \nu ) } \le I_n TV_m(u) \quad \hbox {for all} \ u \in BV^0_m(X,\nu ). \end{aligned}$$
(2.11)

The constant \(I_n\) is the same as in (2.10).

Proof

(\(\Leftarrow \)) Given \(A \subset X\) with \(0<\nu (A)<\nu (X)\), applying (2.11) to \(\upchi _A\), we get

$$\begin{aligned} \nu (A)^{\frac{n-1}{n}} = \Vert \upchi _A \Vert _{L^{\frac{n}{n-1}}(X, \nu )} \le I_n TV_m(\upchi _A) = I_n P_m(A). \end{aligned}$$

(\(\Rightarrow \)) Let us see that (2.10) implies (2.11). Since \(TV_m(|u|) \le TV_m( u )\), we may assume that \(u \ge 0\) without loss of generality.

Suppose first that \(n=1\) and let \(u \in BV^0_m(X,\nu )\) such that \(u\ge 0\) and is not \(\nu \)-a.e. equal to 0 (otherwise, (2.11) is trivially satisfied). Note that, in this case, since u is null outside of a \(\nu \)-measurable set A with \(\nu (A)<\nu (X)\), we have \(\nu (E_t(u))<\nu (X)\) for \(t>0\) and, moreover, by the definition of the \(L^\infty (X,\nu )\)-norm, \(0<\nu (E_t(u))\) for \(t<\Vert u \Vert _{L^\infty (X,\nu )}\). Then, by the coarea formula and (2.10), we have

$$\begin{aligned} TV_m(u)= & {} \int _{0}^{+\infty } P_m(E_t(u))\, dt = \int _{0}^{\Vert u \Vert _{L^\infty (X,\nu )}} P_m(E_t(u))\, dt\\\ge & {} \int _{0}^{\Vert u \Vert _{L^\infty (X,\nu )}} \frac{1}{I_n} dt = \frac{1}{I_n} \Vert u \Vert _{L^\infty (X,\nu )}. \end{aligned}$$

Therefore, we may suppose that \(n >1\). Let \(p:= \frac{n}{n-1}\). Again, by the coarea formula and (2.10), if \(u \in BV^0_m(X,\nu )\), \(u\ge 0\) and not identically \(\nu \)-null, we get

$$\begin{aligned} TV_m(u) = \int _{0}^{+\infty } P_m(E_t(u))\, dt \ge \int _{0}^{\Vert u \Vert _{L^\infty (X,\nu )}} \frac{1}{I_n} \nu (E_t(u))^{\frac{1}{p}}\, dt, \end{aligned}$$
(2.12)

where \(\Vert u \Vert _{L^\infty (X,\nu )}=+\infty \) if \(u\notin L^\infty (X,\nu )\). On the other hand, since the function \(\varphi (t):= \nu (E_t(u))^{\frac{1}{p}}\) is nonnegative and non-increasing, we have

$$\begin{aligned} p t^{p-1} \varphi (t)^p \le p \left( \int _0^t \varphi (s) ds \right) ^{p-1} \varphi (t) = \frac{d}{dt} \left[ \left( \int _0^t \varphi (s) ds \right) ^{p}\right] . \end{aligned}$$

Integrating this inequality over \((0, T)\) and letting \(T \rightarrow \Vert u \Vert _{L^\infty (X,\nu )}\), we obtain

$$\begin{aligned} \int _0^{\Vert u \Vert _{L^\infty (X,\nu )}} p t^{p-1} \varphi (t)^p \, dt \le \left( \int _0^{\Vert u \Vert _{L^\infty (X,\nu )}} \varphi (t) dt \right) ^{p}, \end{aligned}$$

that is,

$$\begin{aligned} \int _0^{\Vert u \Vert _{L^\infty (X,\nu )}}p t^{p-1} \nu (E_t(u)) \, dt \le \left( \int _0^{\Vert u \Vert _{L^\infty (X,\nu )}} \nu (E_t(u))^{\frac{1}{p}} dt \right) ^{p}. \end{aligned}$$
(2.13)

Now,

$$\begin{aligned} \Vert u \Vert ^p_{L^p(X, \nu )}= & {} \int _X u^p(x) d \nu (x) = \int _X \left( \int _0^{u(x)} \frac{dt^p}{dt} dt \right) d\nu (x) \\= & {} \int _X \left( \int _0^{\Vert u \Vert _{L^\infty (X,\nu )}} p t^{p-1} \upchi _{E_t(u)}(x) \, dt \right) d\nu (x) \\= & {} \int _0^{\Vert u \Vert _{L^\infty (X,\nu )}} p t^{p-1} \nu (E_t(u)) dt. \end{aligned}$$

Thus, by (2.13), we get

$$\begin{aligned} \Vert u \Vert _{L^{p}(X, \nu )} \le \int _0^{\Vert u \Vert _{L^\infty (X,\nu )}} \nu (E_t(u))^{\frac{1}{p}} dt. \end{aligned}$$
(2.14)

Finally, from (2.12) and (2.14), we obtain (2.11). \(\square \)
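On a finite space both statements in Theorem 2.11 can be tested by brute force: compute the best constant in (2.10) over all nontrivial vertex sets, then check (2.11) on random functions vanishing somewhere. The graph, the trial dimension \(n = 2\) (so that \(\frac{n}{n-1} = 2\)) and the random data below are all illustrative choices.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
w = np.array([[0., 1., 2., 0., 1.],
              [1., 0., 1., 3., 0.],
              [2., 1., 0., 1., 1.],
              [0., 3., 1., 0., 2.],
              [1., 0., 1., 2., 0.]])
nu = w.sum(axis=1)
P = w / nu[:, None]
N = len(nu)

def TV(u):
    return 0.5 * np.sum(nu[:, None] * P * np.abs(u[None, :] - u[:, None]))

n = 2.0                                   # trial isoperimetric dimension
# Best constant I_n in (2.10) over all sets with 0 < nu(A) < nu(X)
I_n = 0.0
for bits in product([False, True], repeat=N):
    E = np.array(bits)
    if 0 < E.sum() < N:
        I_n = max(I_n, nu[E].sum() ** ((n - 1) / n) / w[np.ix_(E, ~E)].sum())

# Sobolev inequality (2.11) with n/(n-1) = 2, for u vanishing somewhere
for _ in range(100):
    u = rng.random(N)
    u[rng.integers(N)] = 0.0              # u belongs to BV^0_m(X, nu)
    lhs = np.sqrt((nu * u ** 2).sum())    # L^2(X, nu) norm
    assert lhs <= I_n * TV(u) + 1e-9
```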

Note that, if we take \(\Psi _n (r):= \frac{1}{I_n} r^{-\frac{1}{n}}\), we can rewrite (2.10) as

$$\begin{aligned} \nu (A) \Psi _n(\nu (A)) \le P_m(A) \quad \hbox {for all }A\subset X\hbox { with } 0<\nu (A)<\nu (X). \end{aligned}$$

The next definition was given in [21] for Riemannian manifolds.

Definition 2.12

Given a non-increasing function \(\Psi : ]0, \infty [ \rightarrow [0,\infty [\), we say that \([X,d,m,\nu ]\) satisfies a \(\Psi \)-isoperimetric inequality if

$$\begin{aligned} \nu (A) \Psi (\nu (A)) \le P_m(A) \quad \hbox {for all }A\subset X\hbox { with } 0<\nu (A)<\nu (X). \end{aligned}$$

Example 2.13

  1. (1)

    In [48] (see also the references therein) it is shown that the lattice \({\mathbb {Z}}^n\) has isoperimetric dimension n with constant \(I_n= \frac{1}{2n}\), and that the complete graph \(K_n\) satisfies a \(\Psi \)-isoperimetric inequality with \(\Psi (r) = n - r \). In addition, it is also proved that the n-cube \(Q_n\) satisfies a \(\Psi \)-isoperimetric inequality with \(\Psi (r) = \log _2 (\frac{\nu (Q_n)}{r})\).

  2. (2)

    In [35], for \([\mathbb {R}^N,d,m^J]\), it is proved that

    $$\begin{aligned} \Psi _{_{J,N}}(|A|)\le P_J(A) \quad \hbox {for all } \ A \subset X \ \hbox {with} \ |A| < +\infty , \end{aligned}$$

    being

    $$\begin{aligned} \Psi _{_{J,N}}(r)= \int _{B_{\left( r/\omega _N\right) ^\frac{1}{N}}}H^J_{\partial B_{\Vert x\Vert }}(x)dx =\int _0^{r} H_{\partial B_{\left( s/\omega _N\right) ^\frac{1}{N}}}^J (\left( s/\omega _N\right) ^\frac{1}{N}, 0, \ldots , 0)ds, \end{aligned}$$

    where \(B_r\) is the ball of radius r centered at 0 and \(H_{\partial B_r}^J\) is the \(m^J\)-mean curvature of \(\partial B_r\) (see Sect. 2.2). Therefore, \([\mathbb {R}^N,d,m^J, \mathcal {L}^N]\) satisfies a \(\Psi \)-isoperimetric inequality, where \(\Psi (r) = \frac{1}{r} \Psi _{_{J,N}}(r)\) is a decreasing function.
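The complete-graph profile in item (1) can be checked exhaustively for small n. Here we assume the graph-theoretic normalization that we believe is used in [48] (counting measure on the vertices, unit edge weights, perimeter counted as boundary edges); under that convention the inequality \(\nu(A)\,\Psi(\nu(A)) \le P(A)\) with \(\Psi(r) = n - r\) holds, in fact with equality for \(K_n\).

```python
from itertools import combinations

# Brute-force check of the Psi-isoperimetric inequality for K_n with
# Psi(r) = n - r, assuming counting measure and edge-boundary perimeter
# (the normalization is an assumption on our part).
n = 6
V = set(range(n))
for k in range(1, n):
    for A in combinations(V, k):
        boundary_edges = sum(1 for x in A for y in V - set(A))
        assert k * (n - k) <= boundary_edges   # equality holds for K_n
```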

The next result was proved in [21] for Riemannian manifolds and in [20] for graphs (see also [48, Theorem 2]).

Proposition 2.14

Given a non-increasing function \(\Psi : ]0, \infty [ \rightarrow [0,\infty [\), we have that \([X,d,m,\nu ]\) satisfies a \(\Psi \)-isoperimetric inequality if, and only if, the following inequality holds:

$$\begin{aligned} \Psi (\nu (A)) \Vert u \Vert _{L^1(X, \nu )} \le TV_m(u) \end{aligned}$$
(2.15)

for all \(\nu \)-measurable sets \(A \subset X\) with \(0<\nu (A)<\nu (X)\) and all \(u \in L^1(X, \nu )\) with \(u = 0 \ \hbox {in} \ X {\setminus } A.\)

Proof

Taking \(u = \upchi _A\) in (2.15), we obtain that \([X,d,m,\nu ]\) satisfies a \(\Psi \)-isoperimetric inequality. Conversely, since \(TV_m(|u|) \le TV_m( u )\), it is enough to prove (2.15) for \(u \ge 0\). If \(u\equiv 0\) in X the result is trivial. Therefore, let A be a \(\nu \)-measurable set with \(0<\nu (A)<\nu (X)\) and let \(0 \le u \in L^1(X, \nu )\) be a non-\(\nu \)-null function with \(u \equiv 0 \ \hbox {in} \ X {\setminus } A\). For \(t>0\) we have that \(E_t(u) \subset A\) and, therefore, \(\nu (E_t(u)) \le \nu (A)\); thus, since \(\Psi \) is non-increasing, we have that \(\Psi (\nu (E_t(u))) \ge \Psi (\nu (A))\). Therefore, by the coarea formula, we have

$$\begin{aligned} \displaystyle TV_m(u)= & {} \int _0^{+\infty } P_m(E_t(u)) dt = \int _0^{\Vert u \Vert _{L^\infty (X,\nu )}} P_m(E_t(u)) dt \\\ge & {} \int _0^{\Vert u \Vert _{L^\infty (X,\nu )}} \nu (E_t(u)) \Psi (\nu (E_t(u))) dt \\ \displaystyle\ge & {} \Psi (\nu (A)) \int _0^{+\infty } \nu (E_t(u)) dt = \Psi (\nu (A)) \Vert u \Vert _{L^1(X, \nu )}. \end{aligned}$$

\(\square \)

As a consequence of Theorem 2.11 and Proposition 2.14, we obtain the following result.

Corollary 2.15

The following assertions are equivalent:

  1. (i)

    \(\Vert u \Vert _{L^{\frac{n}{n-1}}(X, \nu ) } \le I_n TV_m(u) \quad \forall u \in BV^0_m(X,\nu ).\)

  2. (ii)

    \( \Vert u \Vert _{L^1(X, \nu )} \le I_n \nu (A)^{\frac{1}{n}}TV_m(u)\) for all \(A \subset X\) with \(0<\nu (A)<\nu (X)\) and all \(u \in L^1(X, \nu )\) with \(u = 0\) in \(X {\setminus } A.\)

Consider the Dirichlet energy functional \(\mathcal {H}_m : L^2(X, \nu ) \rightarrow [0, + \infty ]\) defined as

$$\begin{aligned} \mathcal {H}_m(u)= \left\{ \begin{array}{ll} \displaystyle \frac{1}{2} \int _{X \times X} (u(x) - u(y))^2 dm_x(y) d\nu (x) \quad &{}\hbox { if } u\in L^2(X, \nu ) \cap L^1(X, \nu ), \\ \\ + \infty \quad &{}\hbox { otherwise}. \end{array}\right. \end{aligned}$$

The next result, in the context of Markov chains, was obtained by Varopoulos in [49].

Theorem 2.16

Let \(n >2\). If the Sobolev inequality

$$\begin{aligned} \Vert u \Vert _{L^{\frac{n}{n-1}}(X, \nu ) } \le I_n TV_m(u) \quad \hbox {for all} \ u \in BV^0_m(X,\nu ) \end{aligned}$$
(2.16)

holds, then there exists \(C_n >0\) such that

$$\begin{aligned} \Vert u \Vert _{L^{\frac{2n}{n-2}}(X, \nu )}^2 \le C_n \mathcal {H}_m(u) \quad \hbox {for all} \ u \in BV^0_m(X,\nu ). \end{aligned}$$

Proof

We can assume that \(u \ge 0\). Let \(p:= \frac{2(n-1)}{n-2}\). By (2.16), we have

$$\begin{aligned} \Vert u \Vert _{\frac{2n}{n-2}}^p = \Vert u \Vert _{\frac{pn}{n-1}}^p = \Vert u^p \Vert _{\frac{n}{n-1}} \le I_n TV_m(u^p). \end{aligned}$$
(2.17)

On the other hand, since, for \(a,b>0\),

$$\begin{aligned} \vert b^p - a^p\vert \le p(a^{p-1} + b^{p-1} ) \vert b - a\vert \end{aligned}$$

by the convexity of \(|x|^p\), and having in mind the reversibility of \(\nu \), we have

$$\begin{aligned} TV_m(u^p)\le & {} \frac{1}{2} \int _X \int _X p(u^{p-1}(x) + u^{p-1}(y) ) \vert u(y) - u(x)\vert dm_x(y) d\nu (x) \\= & {} p \int _X \int _X u^{p-1}(x) \vert u(y) - u(x)\vert dm_x(y) d\nu (x) \\\le & {} p \left( \int _X \int _X u^{2(p-1)}(x)dm_x(y) d\nu (x) \right) ^{\frac{1}{2}} \left( \int _X \int _X \vert u(y) - u(x)\vert ^2 dm_x(y) d\nu (x)\right) ^{\frac{1}{2}} \\= & {} p \Vert u^{p-1} \Vert _{L^2(X,\nu )} \left( 2 \mathcal {H}_m(u)\right) ^{\frac{1}{2}}. \end{aligned}$$

Then, by (2.17), we get

$$\begin{aligned} \Vert u \Vert _{\frac{2n}{n-2}}^p \le p I_n \Vert u^{p-1} \Vert _{L^2(X,\nu )} \left( 2 \mathcal {H}_m(u)\right) ^{\frac{1}{2}}. \end{aligned}$$
(2.18)

Now,

$$\begin{aligned} \Vert u^{p-1} \Vert _{L^2(X,\nu )} = \left( \int _X u^{\frac{2n}{n-2}} d\nu \right) ^{\frac{1}{2}} = \Vert u \Vert _{\frac{2n}{n-2}}^{\frac{n}{n-2}}, \end{aligned}$$

thus, from (2.18),

$$\begin{aligned} \Vert u \Vert _{\frac{2n}{n-2}}^{\frac{2(n-1)}{ n-2}} \le \textstyle \frac{2(n-1)}{n-2} I_n \Vert u \Vert _{\frac{2n}{n-2}}^{\frac{n}{n-2}} \left( 2 \mathcal {H}_m(u)\right) ^{\frac{1}{2}}, \end{aligned}$$

and, therefore,

$$\begin{aligned} \Vert u \Vert _{\frac{2n}{n-2}}^2 \le C_n \mathcal {H}_m(u) \end{aligned}$$

where \(C_n = \frac{8(n-1)^2}{(n-2)^2} I_n^2.\) \(\square \)

Combining Theorems 2.11 and 2.16, we can also obtain a Sobolev inequality as a consequence of the isoperimetric dimension inequality (2.10).

Corollary 2.17

Assume that \(\nu (X) < \infty \). Let \(n >2\). If \([X,d,m,\nu ]\) has isoperimetric dimension n then there exists \(C_n >0\) such that

$$\begin{aligned} \Vert u \Vert _{L^{\frac{2n}{n-2}}(X, \nu )}^2 \le C_n \mathcal {H}_m(u) \quad \hbox {for all} \ u \in BV_m^0(X,\nu ). \end{aligned}$$

Let us point out that an important consequence of this result is Theorem 5 in [19], which corresponds to Corollary 2.17 for the particular case of finite weighted graphs.

2.5 m-TV versus TV in metric measure spaces

Let \((X, d, \nu )\) be a metric measure space and recall that, for functions in \(L^1(X, \nu )\), Miranda introduced a local notion of total variation in [41] (see also [2]). To define this notion, first note that for a function \(u : X \rightarrow {\mathbb {R}}\), its slope (or local Lipschitz constant) is defined as

$$\begin{aligned} \vert \nabla u \vert (x) := \limsup _{y \rightarrow x} \frac{\vert u(y) - u(x)\vert }{d(x,y)}, \ \ x\in X, \end{aligned}$$

with the convention that \(\vert \nabla u \vert (x) = 0\) if x is an isolated point.

A function \(u \in L^1(X, \nu )\) is said to be a BV-function if there exists a sequence \((u_n)\) of locally Lipschitz functions converging to u in \(L^1(X, \nu )\) and such that

$$\begin{aligned} \sup _{n \in {\mathbb {N}}} \int _X \vert \nabla u_n \vert d\nu (x) < \infty . \end{aligned}$$

We shall denote the space of all BV-functions by \(BV(X,d, \nu )\). For \(u \in BV(X,d, \nu )\), the total variation of u on an open set \(A \subset X\) is defined as

$$\begin{aligned} \vert D u \vert _{\nu }(A):= \inf \left\{ \liminf _{n \rightarrow \infty } \int _A \vert \nabla u_n \vert (x) d \nu (x) \ : \ u_n \in Lip_{loc}(X,\nu ), \ u_n \rightarrow u \ \hbox {in} \ L^1(A, \nu ) \right\} . \end{aligned}$$

A set \(E \subset X\) is said to be of finite perimeter if \(\upchi _E \in BV(X,d, \nu )\) and its perimeter is defined as

$$\begin{aligned} \mathrm{Per}_{\nu }(E):= \vert D \upchi _E \vert _{\nu }(X). \end{aligned}$$
(2.19)

We want to point out that in [2] the BV-functions are characterized using different notions of total variation.

As mentioned above, the local classical BV-spaces and the nonlocal BV-spaces are of a different nature, although they represent analogous concepts in different settings. In this section we compare these spaces, showing that it is possible to relate the nonlocal concept to the local one after rescaling and taking limits.

Remark 2.18

Obviously,

$$\begin{aligned} \vert D u \vert _\nu \le \vert \nabla u \vert \, \nu \quad \hbox {if } u\hbox { is locally Lipschitz}. \end{aligned}$$

Furthermore, there exist metric measure spaces in which the equality in this expression does not hold (see [4, Remark 4.4]).

Proposition 2.19

Let \([X,d,m]\) be a metric random walk space with invariant and reversible measure \(\nu \). Let \(u\in BV(X,d,\nu )\). Then \(u\in BV(X,d,m_x)\) for \(\nu \)-a.e. \(x\in X\) and

$$\begin{aligned} \int _X |Du|_{m_x}(X) d\nu (x)\le |Du|_{\nu }(X). \end{aligned}$$

Proof

Since \(u \in BV(X,d, \nu )\), there exists a sequence \(\{ u_n \}_{n \in {\mathbb {N}}} \subset Lip_{\text {loc}}(X,\nu )\) such that

$$\begin{aligned} \lim _{n \rightarrow \infty } \Vert u_n - u \Vert _{L^1(X,\nu )} = 0 \quad \hbox { and} \quad \lim _{n \rightarrow \infty } \int _{X} \vert \nabla u_n \vert (x) d\nu (x) = \vert Du \vert _{\nu }(X). \end{aligned}$$

Now, using the invariance of \(\nu \),

$$\begin{aligned} \begin{array}{l} \displaystyle \int _X \Vert u_n - u \Vert _{L^1(X, m_x)} d\nu (x)= \int _{X} \left( \int _{X} |u_n(y) - u(y)| dm_x(y) \right) d\nu (x) \\ \displaystyle \qquad = \int _{X} |u_n(y) - u(y)| d\nu (y) = \Vert u_n - u \Vert _{L^1(X,\nu )} \xrightarrow {n\rightarrow \infty } 0 . \\ \end{array} \end{aligned}$$

Therefore, we may take a subsequence, which we still denote by \(u_{n}\), such that \(\lim _{n \rightarrow \infty }\Vert u_n - u \Vert _{L^1(X, m_x)}=0\) for \(\nu \)-a.e. \(x\in X\).

Moreover, by Fatou’s lemma and the invariance of \(\nu \),

$$\begin{aligned} \begin{array}{l} \displaystyle \int _X \left( \liminf _{n \rightarrow \infty } \int _{X} \vert \nabla u_n \vert (y) dm_x(y)\right) d\nu (x) \le \liminf _{n \rightarrow \infty }\int _X \left( \int _{X} \vert \nabla u_n \vert (y) dm_x(y)\right) d\nu (x) \\ \displaystyle \qquad = \liminf _{n \rightarrow \infty }\int _X \vert \nabla u_n \vert (y) d\nu (y) = |Du|_{\nu }(X). \\ \end{array} \end{aligned}$$

Consequently, \(\liminf _{n \rightarrow \infty } \int _{X} \vert \nabla u_n \vert (y) dm_x(y)<\infty \) and \(\lim _{n\rightarrow \infty }u_n=u\) in \(L^1(X,m_x)\) for \(\nu \)-a.e. \(x\in X\), thus \(u\in BV(X,d,m_x)\) for \(\nu \)-a.e. \(x\in X\), and

$$\begin{aligned} \int _X |Du|_{m_x}(X) d\nu (x)\le |Du|_{\nu }(X) \, . \end{aligned}$$

\(\square \)

It is shown in [35] that, in the context of Example 1.1 (1), and assuming that J satisfies

$$\begin{aligned} M_J:=\int _{{\mathbb {R}}^N}J(z)|z|dz<+\infty , \end{aligned}$$

we have that

$$\begin{aligned} TV_{m^J}(u) \le \frac{M_J}{2}|Du|_{\mathcal {L}^N} \end{aligned}$$
(2.20)

for every \(u \in BV({\mathbb {R}}^N)\).

In the next example we see that there exist metric random walk spaces in which it is not possible to obtain an inequality like (2.20).

Example 2.20

Let \(G= (V(G), E(G))\) be a locally finite weighted discrete graph with weights \(w_{x,y}\). For a fixed \(x_0 \in V(G)\) the function \(u = \upchi _{\{x_0\}}\) is a Lipschitz function and, since every vertex is isolated for the graph distance, \(|\nabla u|\equiv 0\), thus

$$\begin{aligned} |Du|_{\nu _G}(V(G))\le \int |\nabla u| d\nu _G(x) =0 . \end{aligned}$$

However, by Example 2.4, we have

$$\begin{aligned} TV_{m^G}(u) = \frac{1}{2} \sum _{x \in V(G)} \sum _{y \in V(G)} \vert u(x) - u(y) \vert w_{xy} = \sum _{x \in V(G), x \not = x_0} w_{x_0 x} > 0. \end{aligned}$$
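This computation is immediate to reproduce: on any finite weighted graph, \(TV_{m^G}(\upchi_{\{x_0\}})\) equals the total weight of the edges meeting \(x_0\), while the slope of \(\upchi_{\{x_0\}}\) vanishes identically. A sketch with illustrative weights:

```python
import numpy as np

# TV_{m^G} of the indicator of a single vertex x0 equals the total
# weight incident to x0 (illustrative weighted graph).
w = np.array([[0., 2., 0., 1.],
              [2., 0., 3., 0.],
              [0., 3., 0., 1.],
              [1., 0., 1., 0.]])
nu = w.sum(axis=1)
P = w / nu[:, None]

def TV(u):
    return 0.5 * np.sum(nu[:, None] * P * np.abs(u[None, :] - u[:, None]))

x0 = 0
u = np.zeros(len(nu)); u[x0] = 1.0
assert np.isclose(TV(u), w[x0].sum())   # = w_{01} + w_{03} = 3
```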

Let \([\mathbb {R}^N,d,m^J]\) be the metric random walk space of Example 1.1 (1). Then, if J is compactly supported and \(u \in BV({\mathbb {R}}^N)\) has compact support we have that (see [23] and [34])

$$\begin{aligned} \lim _{\epsilon \downarrow 0} \frac{C_J}{\epsilon } TV_{m^{J_{\epsilon }}}(u) = \int _{{\mathbb {R}}^N} \vert Du \vert d\mathcal {L}^N, \end{aligned}$$
(2.21)

where

$$\begin{aligned} J_\epsilon (x):=\frac{1}{\epsilon ^{N}} J\left( \frac{x}{\epsilon }\right) \quad \hbox {and} \quad C_J = \frac{2}{\displaystyle \int _{{\mathbb {R}}^N} J(z) \vert z_N \vert dz}. \end{aligned}$$

In particular, if we take

$$\begin{aligned} J(x):= \frac{1}{\mathcal {L}^N(B(0,1))}\upchi _{B(0,1)}(x), \end{aligned}$$

then

$$\begin{aligned} J_\epsilon (x) = \frac{1}{\mathcal {L}^N(B(0,\epsilon ))}\upchi _{B(0,\epsilon )}(x). \end{aligned}$$

Hence,

$$\begin{aligned} m^{{\mathcal {L}^N,\epsilon }}_x = m^{J_{\epsilon }}_x, \end{aligned}$$

and, consequently, by (2.21), we have

$$\begin{aligned} \lim _{\epsilon \downarrow 0} \frac{C_J}{\epsilon } TV_{m^{{\mathcal {L}^N,\epsilon }}}(u) = \int _{{\mathbb {R}}^N} \vert Du \vert d\mathcal {L}^N = \vert Du \vert _{\mathcal {L}^N} ({\mathbb {R}}^N). \end{aligned}$$
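In dimension \(N = 1\) this limit can be checked numerically: for \(J = \frac{1}{2}\upchi_{[-1,1]}\) we get \(C_J = 2/\int J(z)|z|\,dz = 4\), and for \(u = \upchi_{[0,1]}\) the right-hand side is \(|Du|(\mathbb{R}) = 2\). The sketch below evaluates \(P_{m^{J_\epsilon}}([0,1]) = \int_0^1 m_x(\mathbb{R}\setminus[0,1])\,dx\) by a midpoint rule; the grid size is an arbitrary choice.

```python
import numpy as np

def scaled_tv(eps, grid=20000):
    # TV_{m^{J_eps}}(chi_[0,1]) = P_{m^{J_eps}}([0,1]) with m_x uniform
    # on [x - eps, x + eps]; midpoint rule on [0, 1], valid for eps < 1/2.
    x = (np.arange(grid) + 0.5) / grid
    # mass of the interval [x - eps, x + eps] lying outside [0, 1]
    outside = (np.clip(eps - x, 0, None) + np.clip(x - (1 - eps), 0, None)) / (2 * eps)
    C_J = 4.0                 # C_J = 2 / (integral of J(z)|z| dz) = 2 / (1/2)
    return C_J / eps * outside.mean()

for eps in (0.2, 0.1, 0.05):
    assert abs(scaled_tv(eps) - 2.0) < 1e-3   # approaches |Du|(R) = 2
```

In this particular example the scaled perimeter equals 2 for every \(\epsilon < 1/2\), so the numerical limit is attained immediately.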

Therefore, it is natural to pose the following problem: Let \((X,d, \mu )\) be a metric measure space and let \(m^{\mu ,\epsilon }\) be the \(\epsilon \)-step random walk associated to \(\mu \), that is,

$$\begin{aligned} m^{\mu ,\epsilon }_x:= \frac{\mu |\!\_B(x, \epsilon )}{\mu (B(x, \epsilon ))}. \end{aligned}$$

Are there metric measure spaces for which

$$\begin{aligned} \lim _{\epsilon \downarrow 0} \frac{1}{\epsilon }TV_{m^{\mu ,\epsilon }}(u) \approx \vert Du \vert _{\mu }(X) \quad \hbox {for all} \ u \in BV(X,d,\mu )? \end{aligned}$$

To give a positive answer to the previous question we recall the following concepts on a metric measure space \((X,d, \nu )\): The measure \(\nu \) is said to be doubling if there exists a constant \(C_D \ge 1\) such that

$$\begin{aligned} 0< \nu (B(x,2r)) \le C_D \nu (B(x,r)) < \infty \quad \hbox {for all } x \in X \hbox { and all } r >0. \end{aligned}$$

A doubling measure \(\nu \) has the following property: for every \(x \in X\) and \(0< r \le R < \infty \), if \(y \in B(x,R)\), then

$$\begin{aligned} \frac{\nu (B(x,R))}{\nu (B(y,r))} \le C \left( \frac{R}{r} \right) ^{q_{\nu }}, \end{aligned}$$
(2.22)

where C is a positive constant depending only on \(C_D\) and \(q_{\nu } = \log _2 C_D\).

On the other hand, the metric measure space \((X,d, \nu )\) is said to support a 1-Poincaré inequality if there exist constants \(c>0\) and \(\lambda \ge 1\) such that, for any \(u \in \mathrm{Lip}(X,d)\), the inequality

$$\begin{aligned} \int _{B(x,r)} \vert u(y) - u_{B(x,r)} \vert d\nu (y) \le c r \int _{ B(x,\lambda r)} \vert \nabla u \vert (y) d \nu (y) \end{aligned}$$

holds, where

$$\begin{aligned} u_{B(x,r)}:= \frac{1}{\nu (B(x,r))}\int _{ B(x,r) } u(y) d\nu (y). \end{aligned}$$

The following result is proved in [33, Theorem 3.1].

Theorem 2.21

[33] Let \((X,d, \nu )\) be a metric measure space with \(\nu \) doubling and supporting a 1-Poincaré inequality. Given \(u \in L^1(X, \nu )\), we have that \(u \in BV(X,d,\nu )\) if, and only if,

$$\begin{aligned} \liminf _{\epsilon \rightarrow 0^+} \frac{1}{\epsilon } \int _{\Delta _{\epsilon }} \frac{\vert u(y) - u(x) \vert }{\sqrt{\nu (B(x,\epsilon ))} \sqrt{\nu (B(y,\epsilon ))}} d\nu (y) d \nu (x) < \infty , \end{aligned}$$

where \(\Delta _\epsilon := \{ (x,y) \in X \times X \ : \ d(x,y) < \epsilon \}\). Moreover, there is a constant \(C \ge 1\), that depends only on \((X,d, \nu )\), such that

$$\begin{aligned} \frac{1}{C} \vert Du \vert _\nu (X) \le \liminf _{\epsilon \rightarrow 0^+} \frac{1}{\epsilon } \int _{\Delta _{\epsilon }} \frac{\vert u(y) - u(x) \vert }{\sqrt{\nu (B(x,\epsilon ))} \sqrt{\nu (B(y,\epsilon ))}} d\nu (y) d \nu (x) \le C \vert Du \vert _\nu (X). \end{aligned}$$

Now, by Fubini’s Theorem, we have

$$\begin{aligned}&\int _{\Delta _{\epsilon }} \frac{\vert u(y) - u(x) \vert }{\sqrt{\nu (B(x,\epsilon ))} \sqrt{\nu (B(y,\epsilon ))}} d\nu (y) d \nu (x) \nonumber \\&\quad = \int _X \int _{B(x,\epsilon )} \frac{\vert u(y) - u(x) \vert }{\sqrt{\nu (B(x,\epsilon ))} \sqrt{\nu (B(y,\epsilon ))}} d\nu (y) d \nu (x). \end{aligned}$$
(2.23)

On the other hand, by (2.22), there exists a constant \(C_1 >0\), depending only on \(C_D\), such that, for every \(y \in B(x,\epsilon )\),

$$\begin{aligned} \frac{\nu (B(x,\epsilon ))}{\nu (B(y,\epsilon ))} \le C_1 \ . \end{aligned}$$
(2.24)

By (2.24), we have

$$\begin{aligned} \frac{1}{\sqrt{C_1}} \frac{1}{\nu (B(x,\epsilon ))} \le \frac{1}{\sqrt{\nu (B(x,\epsilon ))} \sqrt{\nu (B(y,\epsilon ))}} \le \sqrt{C_1} \frac{1}{\nu (B(x,\epsilon ))} \quad \forall \, y \in B(x, \epsilon ).\nonumber \\ \end{aligned}$$
(2.25)

Hence, from (2.23) and (2.25), we get

$$\begin{aligned} \displaystyle \frac{1}{\sqrt{C_1}} \frac{1}{\epsilon }TV_{m^{\nu ,\epsilon }}(u)&\displaystyle =\frac{1}{\sqrt{C_1}} \frac{1}{\epsilon } \frac{1}{2} \int _X \frac{1}{\nu (B(x,\epsilon ))} \int _{B(x,\epsilon )} \vert u(y) - u(x) \vert d\nu (y) d \nu (x) \\&\displaystyle \le \frac{1}{\epsilon } \frac{1}{2} \int _{\Delta _{\epsilon }} \frac{\vert u(y) - u(x) \vert }{\sqrt{\nu (B(x,\epsilon ))} \sqrt{\nu (B(y,\epsilon ))}} d\nu (y) d \nu (x) \\&\displaystyle \le \sqrt{C_1} \frac{1}{\epsilon } \frac{1}{2} \int _X \frac{1}{\nu (B(x,\epsilon ))} \int _{B(x,\epsilon )} \vert u(y) - u(x) \vert d\nu (y) d \nu (x) \\&\displaystyle = \sqrt{C_1} \frac{1}{\epsilon }TV_{m^{\nu ,\epsilon }}(u). \end{aligned}$$

Therefore, we can rewrite Theorem 2.21 as follows.

Theorem 2.22

Let \((X,d, \nu )\) be a metric measure space with doubling measure \(\nu \) and supporting a 1-Poincaré inequality. Given \(u \in L^1(X, \nu )\), we have that \(u \in BV(X,d,\nu )\) if, and only if,

$$\begin{aligned} \liminf _{\epsilon \rightarrow 0^+} \frac{1}{\epsilon }TV_{m^{\nu ,\epsilon }}(u) < \infty . \end{aligned}$$

Moreover, there is a constant \(C \ge 1\), that depends only on \((X,d, \nu )\), such that

$$\begin{aligned} \frac{1}{C} \vert Du \vert _\nu (X) \le \liminf _{\epsilon \rightarrow 0^+} \frac{1}{\epsilon }TV_{m^{\nu ,\epsilon }}(u) \le C \vert Du \vert _\nu (X). \end{aligned}$$

Remark 2.23

Monti, in [42], defines

$$\begin{aligned} \Vert \nabla u\Vert _{L^1(X,\nu )}^-:= 2\liminf _{\epsilon \rightarrow 0^+} \frac{1}{\epsilon }TV_{m^{\nu ,\epsilon }}(u), \end{aligned}$$

and uses this to prove rearrangement theorems in the setting of metric measure spaces. Moreover, he proposes \(\Vert \nabla u\Vert _{L^1(X,\nu )}^-\) as a possible definition of the \(L^1\)-length of the gradient of functions in metric measure spaces.

3 The 1-Laplacian and the total variation flow in metric random walk spaces

Let \([X,d,m]\) be a metric random walk space with invariant and reversible measure \(\nu \), and assume, as stated above, that \([X,d,m]\) is m-connected.

Given a function \(u : X \rightarrow {\mathbb {R}}\) we define its nonlocal gradient \(\nabla u: X \times X \rightarrow {\mathbb {R}}\) as

$$\begin{aligned} \nabla u (x,y):= u(y) - u(x) \quad \forall \, x,y \in X, \end{aligned}$$

which should not be confused with the slope \(\vert \nabla u \vert (x)\), \(x\in X\), introduced in Sect. 2.5.

For a function \(\mathbf{z}: X \times X \rightarrow {\mathbb {R}}\), its m -divergence \(\mathrm{div}_m \mathbf{z}: X \rightarrow {\mathbb {R}}\) is defined as

$$\begin{aligned} (\mathrm{div}_m \mathbf{z})(x):= \frac{1}{2} \int _{X} (\mathbf{z}(x,y) - \mathbf{z}(y,x)) dm_x(y), \end{aligned}$$

and, for \(p \ge 1\), we define the space

$$\begin{aligned} X_m^p(X):= \left\{ \mathbf{z}\in L^\infty (X\times X, \nu \otimes m_x) \ : \ \mathrm{div}_m \mathbf{z}\in L^p(X,\nu ) \right\} . \end{aligned}$$

Let \(u \in BV_m(X,\nu ) \cap L^{p'}(X,\nu )\) and \(\mathbf{z}\in X_m^p(X)\), \(1\le p\le \infty \). Having in mind that \(\nu \) is reversible, we have the following Green's formula:

$$\begin{aligned} \int _{X} u(x) (\mathrm{div}_m \mathbf{z})(x) d\nu (x) = -\frac{1}{2} \int _{X \times X} \nabla u(x,y) \mathbf{z}(x,y) d(\nu \otimes m_x)(x,y). \end{aligned}$$
(3.1)
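On a finite state space, Green's formula (3.1) becomes an exact linear-algebra identity and can be verified with random data; the graph and the random seed below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
w = np.array([[0., 1., 2.],
              [1., 0., 1.],
              [2., 1., 0.]])
nu = w.sum(axis=1)
P = w / nu[:, None]
N = len(nu)

u = rng.standard_normal(N)
z = rng.standard_normal((N, N))                   # z(x, y), no symmetry assumed

# (div_m z)(x) = (1/2) * integral of (z(x,y) - z(y,x)) dm_x(y)
div = 0.5 * ((P * z).sum(axis=1) - (P * z.T).sum(axis=1))

lhs = (nu * u * div).sum()                        # integral of u div_m z dnu
grad = u[None, :] - u[:, None]                    # nonlocal gradient u(y) - u(x)
rhs = -0.5 * (nu[:, None] * P * grad * z).sum()   # right-hand side of (3.1)
assert abs(lhs - rhs) < 1e-10
```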

In the next result we characterize \(TV_m\) and the m-perimeter using the m-divergence operator. Let us denote by \(\hbox {sign}_0(r)\) the usual sign function and by \(\hbox {sign}(r)\) the multivalued sign function:

$$\begin{aligned} \begin{array}{cc} \mathrm{sign}_0(r):= \left\{ \begin{array}{lll} 1 &{}\quad \hbox {if} \ \ r> 0, \\ \ 0 &{}\quad \hbox {if} \ \ r = 0,\\ -1 &{}\quad \hbox {if} \ \ r< 0; \end{array}\right. \quad &{}\quad \mathrm{sign}(r):= \left\{ \begin{array}{lll} 1 &{}\quad \hbox {if} \ \ r > 0, \\ \left[ -1,1\right] &{}\quad \hbox {if} \ \ r = 0,\\ -1 &{}\quad \hbox {if} \ \ r < 0. \end{array}\right. \end{array} \end{aligned}$$

Proposition 3.1

Let \(1\le p\le \infty \). For \(u \in BV_m(X,\nu ) \cap L^{p'}(X,\nu )\), we have

$$\begin{aligned} TV_m(u) = \sup \left\{ \int _{X} u(x) (\mathrm{div}_m \mathbf{z})(x) d\nu (x) \ : \ \mathbf{z}\in X_m^p(X), \ \Vert \mathbf{z}\Vert _{L^\infty (X\times X, \nu \otimes m_x)} \le 1 \right\} .\nonumber \\ \end{aligned}$$
(3.2)

In particular, for any \(\nu \)-measurable set \(E \subset X\), we have

$$\begin{aligned} P_m(E) = \sup \left\{ \int _{E} (\mathrm{div}_m \mathbf{z})(x) d\nu (x) \ : \ \mathbf{z}\in X_m^1(X), \ \Vert \mathbf{z}\Vert _{L^\infty (X\times X, \nu \otimes m_x)} \le 1 \right\} . \end{aligned}$$

Proof

Let \(u \in BV_m(X,\nu ) \cap L^{p'}(X,\nu )\). Given \(\mathbf{z}\in X_m^p(X)\) with \(\Vert \mathbf{z}\Vert _{L^\infty (X\times X, \nu \otimes m_x)} \le 1\), applying Green’s formula (3.1), we have

$$\begin{aligned}&\int _{X} u(x) (\mathrm{div}_m \mathbf{z})(x) d\nu (x) =-\frac{1}{2} \int _{X\times X} \nabla u(x,y) \mathbf{z}(x,y) d(\nu \otimes m_x)(x,y) \\&\quad \le \frac{1}{2} \int _{X\times X} \vert u(y) - u(x)\vert dm_x(y)d\nu (x) = TV_m(u). \end{aligned}$$

Therefore,

$$\begin{aligned} \sup \left\{ \int _{X} u(x) (\mathrm{div}_m \mathbf{z})(x) d\nu (x) \ : \ \mathbf{z}\in X_m^p(X), \ \Vert \mathbf{z}\Vert _{L^\infty (X\times X, \nu \otimes m_x)} \le 1 \right\} \le TV_m(u). \end{aligned}$$

On the other hand, since \(\nu \) is \(\sigma \)-finite, there exists an increasing sequence of sets \(K_1 \subset K_2 \subset \cdots \subset K_n \subset \cdots \) of finite \(\nu \)-measure such that \(X = \bigcup _{n=1}^\infty K_n\). Then, if we define \(\mathbf{z}_n(x,y):= \mathrm{sign}_0(u(y) - u(x))\upchi _{K_n \times K_n}(x,y)\), we have that \(\mathbf{z}_n \in X_m^p(X)\) with \(\Vert \mathbf{z}_n \Vert _{L^\infty (X\times X, \nu \otimes m_x)} \le 1\) and

$$\begin{aligned} \displaystyle TV_m(u) \displaystyle= & {} \frac{1}{2} \int _{X\times X} \vert u(y) - u(x)\vert d(\nu \otimes m_x)(x,y) \\= & {} \lim _{n \rightarrow \infty } \frac{1}{2} \int _{K_n \times K_n} \vert u(y) - u(x)\vert d(\nu \otimes m_x)(x,y)\\= & {} \lim _{n \rightarrow \infty } \frac{1}{2} \int _{X \times X} \nabla u(x,y) \mathbf{z}_n(x,y) d(\nu \otimes m_x)(x,y) \\= & {} \lim _{n \rightarrow \infty } \int _{X} u(x) (\mathrm{div}_m (-\mathbf{z}_n))(x) d\nu (x) \\\le & {} \sup \left\{ \int _{X} u(x) (\mathrm{div}_m (\mathbf{z}))(x) d\nu (x) \ : \ \mathbf{z}\in X_m^p(X), \ \Vert \mathbf{z}\Vert _{L^\infty (X\times X, \nu \otimes m_x)} \le 1 \right\} . \end{aligned}$$

\(\square \)
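On a finite-state reversible Markov chain, formula (3.2) can be checked by direct computation: as in the proof above, the supremum is attained at \(\mathbf{z}(x,y)=\mathrm{sign}_0(u(y)-u(x))\). A minimal numeric sketch (the three-state chain and its weights are our own illustration, not taken from the text):

```python
# Duality formula (3.2) on a finite reversible Markov chain.
# States {0, 1, 2}, symmetric weights w[i][j]; nu_i = d_i = sum_j w[i][j],
# m_i(j) = w[i][j] / d_i, so nu is invariant and reversible.
w = [[0.0, 2.0, 1.0],
     [2.0, 0.0, 3.0],
     [1.0, 3.0, 0.0]]
n = len(w)
d = [sum(row) for row in w]                                # nu-weights
m = [[w[i][j] / d[i] for j in range(n)] for i in range(n)]

def sign0(r):
    return (r > 0) - (r < 0)

u = [0.0, 1.0, 5.0]

# TV_m(u) = (1/2) sum_i nu_i sum_j m_i(j) |u_j - u_i|
tv = 0.5 * sum(d[i] * m[i][j] * abs(u[j] - u[i])
               for i in range(n) for j in range(n))

# Optimal antisymmetric field z_ij = sign0(u_j - u_i), with
# (div_m z)(i) = (1/2) sum_j m_i(j) (z_ij - z_ji) = sum_j m_i(j) z_ij.
z = [[sign0(u[j] - u[i]) for j in range(n)] for i in range(n)]
div_z = [sum(m[i][j] * z[i][j] for j in range(n)) for i in range(n)]

# int_X u (div_m(-z)) dnu recovers TV_m(u), as in the proof above
dual = -sum(d[i] * u[i] * div_z[i] for i in range(n))
assert abs(dual - tv) < 1e-9
print(tv, dual)
```

Any other admissible \(\mathbf{z}\) with \(\Vert \mathbf{z}\Vert _\infty \le 1\) produces a value no larger than \(TV_m(u)\), matching the first half of the proof.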

Corollary 3.2

\(TV_m\) is lower semi-continuous with respect to the weak convergence in \(L^2(X, \nu )\).

Proof

If \(u_n \rightharpoonup u\) weakly in \(L^2(X, \nu )\) then, given \(\mathbf{z}\in X_m^2(X)\) with \(\Vert \mathbf{z}\Vert _{L^\infty (X\times X, \nu \otimes m_x)} \le 1\), we have that

$$\begin{aligned} \int _X u(x) (\mathrm{div}_m \mathbf{z})(x) d\nu (x) = \lim _{n \rightarrow \infty } \int _X u_n(x) (\mathrm{div}_m \mathbf{z})(x) d\nu (x) \le \liminf _{n \rightarrow \infty } TV_m(u_n) \end{aligned}$$

by Proposition 3.1. Now, taking the supremum over \(\mathbf{z}\) in this inequality, we get

$$\begin{aligned} TV_m(u) \le \liminf _{n \rightarrow \infty } TV_m(u_n). \end{aligned}$$

\(\square \)

Consider the formal nonlocal evolution equation

$$\begin{aligned} u_t(x,t) = \int _{X} \frac{u(y,t) - u(x,t)}{\vert u(y,t) - u(x,t) \vert } dm_x(y), \quad x \in X, t \ge 0. \end{aligned}$$
(3.3)

In order to study the Cauchy problem associated to the previous equation, we will see in Theorem 3.8 that we can rewrite it as the gradient flow in \(L^2(X,\nu )\) of the functional \(\mathcal {F}_m : L^2(X, \nu ) \rightarrow ]-\infty , + \infty ]\) defined by

$$\begin{aligned} \mathcal {F}_m(u):= \left\{ \begin{array}{ll} \displaystyle TV_m(u) &{}\quad \hbox {if} \ u\in L^2(X,\nu )\cap BV_m(X,\nu ), \\ + \infty &{}\quad \hbox {if } u\in L^2(X,\nu ){\setminus } BV_m(X,\nu ), \end{array} \right. \end{aligned}$$

which is convex and lower semi-continuous. Following the method used in [5] we will characterize the subdifferential of the functional \(\mathcal {F}_m\).

Given a functional \(\Phi : L^2(X,\nu ) \rightarrow [0, \infty ]\), we define \(\widetilde{\Phi }: L^2(X,\nu ) \rightarrow [0, \infty ]\) as

$$\begin{aligned} \widetilde{\Phi }(v):= \sup \left\{ \frac{\displaystyle \int _{X} v(x) w(x) d\nu (x)}{\Phi (w)} \ : \ w \in L^2(X,\nu ) \right\} \end{aligned}$$

with the convention that \(\frac{0}{0} = \frac{0}{\infty } = 0\). Obviously, if \(\Phi _1 \le \Phi _2\), then \(\widetilde{\Phi }_2 \le \widetilde{\Phi }_1\).

Theorem 3.3

Let \(u \in L^2(X,\nu )\) and \(v \in L^2(X,\nu )\). The following assertions are equivalent:

  1. (i)

    \(v \in \partial \mathcal {F}_m (u)\);

  2. (ii)

    there exists \(\mathbf{z}\in X_m^2(X)\), \(\Vert \mathbf{z}\Vert _{L^\infty (X\times X, \nu \otimes m_x)} \le 1\) such that

    $$\begin{aligned} v = - \mathrm{div}_m \mathbf{z}\end{aligned}$$
    (3.4)

    and

    $$\begin{aligned} \int _{X} u(x) v(x) d\nu (x) = \mathcal {F}_m (u); \end{aligned}$$
  3. (iii)

    there exists \(\mathbf{z}\in X_m^2(X)\), \(\Vert \mathbf{z}\Vert _{L^\infty (X\times X, \nu \otimes m_x)} \le 1\) such that (3.4) holds and

    $$\begin{aligned} \mathcal {F}_m (u) = \frac{1}{2}\int _{X \times X} \nabla u(x,y) \mathbf{z}(x,y) d(\nu \otimes m_x)(x,y); \end{aligned}$$
  4. (iv)

    there exists \(\mathbf{g}\in L^\infty (X\times X, \nu \otimes m_x)\) antisymmetric with \(\Vert \mathbf{g} \Vert _{L^\infty (X \times X,\nu \otimes m_x)} \le 1\) such that

$$\begin{aligned} -\int _{X}\mathbf{g}(x,y)\,dm_x(y)= v(x) \quad \hbox {for } \nu \hbox {-a.e. } x\in X, \end{aligned}$$
    (3.5)

    and

    $$\begin{aligned} -\int _{X} \int _{X}{} \mathbf{g}(x,y)dm_x(y)\,u(x)d\nu (x)=\mathcal {F}_m(u). \end{aligned}$$
    (3.6)
  5. (v)

    there exists \(\mathbf{g}\in L^\infty (X\times X, \nu \otimes m_x)\) antisymmetric with \(\Vert \mathbf{g} \Vert _{L^\infty (X \times X,\nu \otimes m_x)} \le 1\) verifying (3.5) and

    $$\begin{aligned} \mathbf{g}(x,y) \in \mathrm{sign}(u(y) - u(x)) \quad \hbox {for }(\nu \otimes m_x)-a.e. \ (x,y) \in X \times X. \end{aligned}$$
    (3.7)

Proof

Since \(\mathcal {F}_m\) is convex, lower semi-continuous and positive homogeneous of degree 1, by [5, Theorem 1.8], we have

$$\begin{aligned} \partial \mathcal {F}_m (u) = \left\{ v \in L^2(X,\nu ) \ : \ \widetilde{\mathcal {F}_m}(v) \le 1, \ \int _{X} u(x) v(x) d\nu (x) = \mathcal {F}_m (u)\right\} . \end{aligned}$$
(3.8)

We define, for \(v \in L^2(X,\nu )\),

$$\begin{aligned} \Psi (v):= \inf \left\{ \Vert \mathbf{z}\Vert _{L^\infty (X\times X, \nu \otimes m_x)} \ : \ \mathbf{z}\in X^2_m(X), \ v = - \mathrm{div}_m \mathbf{z}\right\} . \end{aligned}$$
(3.9)

Observe that \(\Psi \) is convex, lower semi-continuous and positive homogeneous of degree 1. Moreover, it is easy to see that, if \(\Psi (v) < \infty \), the infimum in (3.9) is attained, i.e., there exists some \(\mathbf{z}\in X_m^2(X)\) such that \(v = - \mathrm{div}_m\mathbf{z}\) and \(\Psi (v) = \Vert \mathbf{z}\Vert _{L^\infty (X\times X, \nu \otimes m_x)}.\)

Let us see that

$$\begin{aligned} \Psi = \widetilde{\mathcal {F}_m}. \end{aligned}$$

We begin by proving that \(\widetilde{\mathcal {F}_m}(v) \le \Psi (v)\). If \(\Psi (v) = +\infty \) then this assertion is trivial. Therefore, suppose that \(\Psi (v) < +\infty \) and let \(\mathbf{z}\in X_m^2(X)\) be such that \(v = - \mathrm{div}_m \mathbf{z}\). Then, for \(w\in L^2(X,\nu )\), we have

$$\begin{aligned} \int _{X} w(x) v(x) d\nu (x)= & {} \frac{1}{2}\int _{X \times X} \nabla w(x,y) \mathbf{z}(x,y)d(\nu \otimes m_x)(x,y) \\\le & {} \Vert \mathbf{z}\Vert _{L^\infty (X\times X, \nu \otimes m_x)} {\mathcal {F}_m}(w). \end{aligned}$$

Taking the supremum over w we obtain that \(\widetilde{\mathcal {F}_m}(v) \le \Vert \mathbf{z}\Vert _{L^\infty (X\times X, \nu \otimes m_x)}\). Now, taking the infimum over \(\mathbf{z}\), we get \(\widetilde{\mathcal {F}_m}(v) \le \Psi (v)\).

To prove the opposite inequality let us denote

$$\begin{aligned} D:= \{ \mathrm{div}_m \mathbf{z}\ : \ \mathbf{z}\in X^2_m(X) \}. \end{aligned}$$

Then, by (3.2), we have that, for \(v\in L^2(X, \nu )\),

$$\begin{aligned} \begin{array}{rl} \displaystyle \widetilde{\Psi }(v) &{}= \displaystyle \sup _{w \in L^2(X,\nu )} \frac{\displaystyle \int _{X} w(x) v(x) d\nu (x)}{\Psi (w)} \ge \sup _{w \in D } \frac{\displaystyle \int _{X} w(x) v(x) d\nu (x)}{\Psi (w)} \\ \\ &{}\ge \displaystyle \sup _{\mathbf{z}\in X^2_m(X)} \frac{\displaystyle \int _{X} \mathrm{div}_m\mathbf{z}(x) v(x) d\nu (x)}{\Vert \mathbf{z}\Vert _{L^\infty (X\times X, \nu \otimes m_x)}} = \mathcal {F}_m(v). \end{array} \end{aligned}$$

Thus, \( \mathcal {F}_m \le \widetilde{ \Psi }\), which implies, by [5, Proposition 1.6], that \(\Psi = \widetilde{\widetilde{\Psi }} \le \widetilde{ \mathcal {F}_m}\). Therefore, \(\Psi = \widetilde{\mathcal {F}_m}\), and, consequently, from (3.8), we get

$$\begin{aligned} \partial \mathcal {F}_m (u)&= \left\{ v \in L^2(X,\nu ) \ : \ \Psi (v) \le 1, \ \int _{X} u(x) v(x) d\nu (x) = \mathcal {F}_m(u)\right\} \\&= \left\{ v \in L^2(X,\nu ) \ : \ \exists \, \mathbf{z}\in X_m^2(X), \ v = - \mathrm{div}_m\mathbf{z}, \ \Vert \mathbf{z}\Vert _{L^\infty (X\times X, \nu \otimes m_x)} \le 1, \right. \\&\qquad \left. \int _{X} u(x) v(x) d\nu (x) = \mathcal {F}_m(u)\right\} , \end{aligned}$$

from where the equivalence between (i) and (ii) follows.

To prove the equivalence between (ii) and (iii) we only need to apply Green’s formula (3.1).

On the other hand, to see that (iii) implies (iv), it is enough to take \(\mathbf{g}(x,y)=\frac{1}{2}(\mathbf{z}(x,y)-\mathbf{z}(y,x))\). Moreover, to see that (iv) implies (ii), take \(\mathbf{z}(x,y)= \mathbf{g}(x,y)\) (observe that, from (3.5), \(- \mathrm{div}_m(\mathbf{g}) = v\), so \(\mathbf{g}\in X^2_m(X)\)). Finally, to see that (iv) and (v) are equivalent, we need to show that (3.6) and (3.7) are equivalent. Now, since \(\mathbf{g}\) is antisymmetric with \(\Vert \mathbf{g} \Vert _{L^\infty (X \times X,\nu \otimes m_x)} \le 1\) and \(\nu \) is reversible, we have

$$\begin{aligned} - 2 \int _{X} \int _{X}{} \mathbf{g}(x,y)dm_x(y)\,u(x)d\nu (x) = \int _{X\times X}{} \mathbf{g}(x,y) (u(y)- u(x))d(\nu \otimes m_x)(x,y) , \end{aligned}$$

from where the equivalence between (3.6) and (3.7) follows. \(\square \)
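The antisymmetrization step used above, \(\mathbf{g}(x,y)=\frac{1}{2}(\mathbf{z}(x,y)-\mathbf{z}(y,x))\), preserves the m-divergence and does not increase the sup norm, which can be checked numerically. A small sketch (the random four-state chain is our own illustration):

```python
import random

# Antisymmetrization in the proof of (iii) => (iv):
# g(x,y) = (z(x,y) - z(y,x))/2 has the same m-divergence as z,
# is antisymmetric, and has no larger sup norm.
random.seed(1)

n = 4
w = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        w[i][j] = w[j][i] = random.uniform(0.1, 1.0)   # symmetric weights
d = [sum(row) for row in w]
m = [[w[i][j] / d[i] for j in range(n)] for i in range(n)]

def div_m(z):
    # (div_m z)(x) = (1/2) int_X (z(x,y) - z(y,x)) dm_x(y)
    return [0.5 * sum(m[i][j] * (z[i][j] - z[j][i]) for j in range(n))
            for i in range(n)]

z = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
g = [[0.5 * (z[i][j] - z[j][i]) for j in range(n)] for i in range(n)]

# same divergence, antisymmetric, sup-norm does not increase
assert all(abs(a - b) < 1e-12 for a, b in zip(div_m(z), div_m(g)))
assert all(abs(g[i][j] + g[j][i]) < 1e-12 for i in range(n) for j in range(n))
assert max(abs(v) for row in g for v in row) \
       <= max(abs(v) for row in z for v in row) + 1e-12
```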

By Theorem 3.3 and following [6, Theorem 7.5], the next result is easy to prove.

Proposition 3.4

\(\partial \mathcal {F}_m\) is an m-completely accretive operator in \(L^2(X,\nu )\).

Definition 3.5

We define in \(L^2(X,\nu )\) the multivalued operator \(\Delta ^m_1\) by

\((u, v ) \in \Delta ^m_1\) if, and only if, \(-v \in \partial \mathcal {F}_m(u)\).

As usual, we will write \(v\in \Delta ^m_1 u\) for \((u,v)\in \Delta ^m_1\).

Chang in [14] and Hein and Bühler in [29] define a similar operator in the particular case of finite graphs:

Example 3.6

Let \([V(G), d_G, (m^G_x)]\) be the metric random walk space given in Example 1.1 (3) with invariant measure \(\nu _G\). By Theorem 3.3, we have

$$\begin{aligned}&(u, v ) \in \Delta ^{m^G}_1 \iff \hbox {there exists} \ \mathbf{g}\in L^\infty (V(G)\times V(G), \nu _G \otimes m^G_x) \ \hbox { antisymmetric with} \\&\quad \Vert \mathbf{g} \Vert _{L^\infty (V(G)\times V(G), \nu _G \otimes m^G_x)}\le 1 \ \hbox {such that } \frac{1}{d_x}\sum _{y \in V(G)}\mathbf{g}(x,y) w_{xy}= v(x) \quad \forall \, x\in V(G), \end{aligned}$$

and

$$\begin{aligned} \mathbf{g}(x,y) \in \mathrm{sign}(u(y) - u(x)) \quad \hbox {for }(\nu _G \otimes m^G_x)-a.e. \ (x,y) \in V(G) \times V(G). \end{aligned}$$

The following particular case shows that the operator \(\Delta ^{m^G}_1\) is indeed multivalued. Let \(V(G) = \{ a, b \}\) and \(w_{aa} = w_{bb} = p\), \(w_{ab} = w_{ba} = 1- p\), with \(0< p <1\). Then,

$$\begin{aligned} (u, v ) \in \Delta ^{m^G}_1 \iff&\hbox {there exists} \ \mathbf{g}\in L^\infty (\{ a, b \}\times \{ a, b \}, \nu _G \otimes m^G_x) \hbox { antisymmetric with} \\&\Vert \mathbf{g} \Vert _{ L^\infty (\{ a, b \}\times \{ a, b \}, \nu _G \otimes m^G_x)} \le 1\hbox { such that }\\&\quad \mathbf{g}(a,a) p +\, \mathbf{g}(a,b) (1-p)= v(a),\\&\mathbf{g}(b,b) p + \mathbf{g}(b,a) (1-p)= v(b) \end{aligned}$$

and

$$\begin{aligned} \mathbf{g}(a,b) \in \mathrm{sign}(u(b) - u(a)) . \end{aligned}$$

Now, since \(\mathbf{g}\) is antisymmetric, we get

$$\begin{aligned} v(a) = \mathbf{g}(a,b) (1-p), \quad v(b) = - \mathbf{g}(a,b) (1-p) \quad \hbox {and} \quad \mathbf{g}(a,b) \in \mathrm{sign}(u(b) - u(a)). \end{aligned}$$
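These relations can be verified by direct computation. The sketch below (our own illustration in Python; the finite sampling of the multivalued sign is an artifact of the illustration) checks that, when \(u(a)=u(b)\), every \(g\in [-1,1]\) yields an admissible \(v\), whereas for \(u(a)\ne u(b)\) the element \(v\) is unique:

```python
# Two-point graph V = {a, b}: w_aa = w_bb = p, w_ab = 1 - p, d_a = d_b = 1.
# From the computation above: v(a) = g*(1-p), v(b) = -g*(1-p),
# with g = g(a,b) in sign(u(b) - u(a)).
p = 0.3

def sign_set(r, samples=11):
    """Multivalued sign: the full interval [-1,1] at r = 0 (sampled here)."""
    if r > 0:
        return [1.0]
    if r < 0:
        return [-1.0]
    return [-1.0 + 2.0 * k / (samples - 1) for k in range(samples)]

def elements_of_Delta1(u):
    """All v with (u, v) in Delta_1^{m^G} (sampled when multivalued)."""
    return [(g * (1 - p), -g * (1 - p)) for g in sign_set(u['b'] - u['a'])]

# u(a) < u(b): v is single-valued, v = (1-p, -(1-p)).
vs = elements_of_Delta1({'a': 0.0, 'b': 1.0})
assert len(vs) == 1 and abs(vs[0][0] - (1 - p)) < 1e-12

# u(a) = u(b): a whole segment of admissible v's, all with v(a) = -v(b).
vs = elements_of_Delta1({'a': 1.0, 'b': 1.0})
assert len(vs) > 1 and all(abs(va + vb) < 1e-12 for va, vb in vs)
print(vs[0], vs[-1])
```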

Proposition 3.7

[Integration by parts] For any \((u,v)\in \Delta _1^m\) it holds that

$$\begin{aligned} - \int _X v w d \nu \le TV_m(w) \qquad \hbox {for all } \ w \in BV_m(X,\nu )\cap L^2(X,\nu ), \end{aligned}$$
(3.10)

and

$$\begin{aligned} - \int _X v u d \nu = TV_m(u). \end{aligned}$$
(3.11)

Proof

Since \(-v \in \partial \mathcal {F}_m(u)\), given \(w \in BV_m(X,\nu )\), we have that

$$\begin{aligned} -\int _X v w d \nu \le \mathcal {F}_m(u+w) - \mathcal {F}_m(u) \le \mathcal {F}_m(w), \end{aligned}$$

so we get (3.10). On the other hand, (3.11) is given in Theorem 3.3. \(\square \)

As a consequence of Theorem 3.3, Proposition 3.4 and on account of Theorem 1.2, we can give the following existence and uniqueness result for the Cauchy problem

$$\begin{aligned} \left\{ \begin{array}{ll} u_t - \Delta ^m_1 u \ni 0 &{}\quad \hbox {in} \ (0,T) \times X \\ u(0,x) = u_0 (x) &{}\quad \hbox {for } x \in X, \end{array}\right. \end{aligned}$$
(3.12)

which is a rigorous formulation of the formal expression (3.3).

Theorem 3.8

For every \(u_0 \in L^2( X,\nu )\) and any \(T>0\), there exists a unique solution of the Cauchy problem (3.12) in (0, T) in the following sense: \(u \in W^{1,1}(0,T; L^2(X,\nu ))\), \(u(0, \cdot ) = u_0\) in \(L^2(X,\nu )\), and, for almost all \(t \in (0,T)\),

$$\begin{aligned} u_t(t,\cdot ) - \Delta ^m_1 u(t) \ni 0. \end{aligned}$$

Moreover, we have the following contraction and maximum principle in any \(L^q(X,\nu )\)–space, \(1\le q\le \infty \):

$$\begin{aligned} \Vert (u(t)-v(t))^+\Vert _{L^q(X,\nu )}\le \Vert (u_0-v_0)^+\Vert _{L^q(X,\nu )}\quad \forall \, 0<t<T, \end{aligned}$$

for any pair of solutions, \(u,\, v\), of problem (3.12) with initial data \(u_0,\, v_0\) respectively.

Definition 3.9

Given \(u_0 \in L^2(X, \nu )\), we denote by \(e^{t \Delta ^m_1}u_0\) the unique solution of problem (3.12). We call the semigroup \(\{e^{t\Delta ^m_1} \}_{t \ge 0}\) in \(L^2(X, \nu )\) the Total Variation Flow (TVF) in the metric random walk space \([X,d,m]\) with invariant and reversible measure \(\nu \).

In the next result we give an important property of the total variation flow in metric random walk spaces.

Proposition 3.10

The TVF satisfies the mass conservation property: for \(u_0 \in L^2(X, \nu )\),

$$\begin{aligned} \int _X e^{t\Delta ^m_1 }u_0 d \nu = \int _X u_0 d \nu \quad \hbox {for any} \ t \ge 0. \end{aligned}$$

Proof

By Proposition 3.7, we have

$$\begin{aligned} - \frac{d}{dt} \int _X e^{t\Delta ^m_1}u_0 d \nu \le TV_m(1) = 0, \end{aligned}$$

and

$$\begin{aligned} \frac{d}{dt} \int _X e^{t\Delta ^m_1}u_0 d \nu \le TV_m(-1) = 0. \end{aligned}$$

Hence,

$$\begin{aligned} \frac{d}{dt} \int _X e^{t\Delta ^m_1}u_0 d \nu =0, \end{aligned}$$

and, consequently,

$$\begin{aligned} \int _X e^{t\Delta ^m_1}u_0 d \nu = \int _X u_0 d \nu \quad \hbox {for any} \ t \ge 0. \end{aligned}$$

\(\square \)
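For the two-point space of Example 3.6 the flow can be written in closed form: by the computation there, \(u_t(a)=g(1-p)\) and \(u_t(b)=-g(1-p)\) with \(g\in \mathrm{sign}(u(b)-u(a))\), so the two values approach each other at speed \(1-p\) until they meet at the mean, and then stay constant (this explicit formula is our own derivation from Example 3.6, not stated in the text). The following sketch evaluates it and checks the mass conservation property:

```python
# Explicit TVF solution on the two-point space of Example 3.6
# (w_aa = w_bb = p, w_ab = 1 - p, nu(a) = nu(b) = 1).
p = 0.3

def tvf(u0a, u0b, t):
    """e^{t Delta_1^m} u0 on {a, b}: both values move towards the mean at
    speed (1-p) and merge there in finite time."""
    mean = 0.5 * (u0a + u0b)
    gap = 0.5 * abs(u0b - u0a)               # half-distance to the mean
    shrunk = max(gap - (1 - p) * t, 0.0)
    s = 1.0 if u0b >= u0a else -1.0
    return mean - s * shrunk, mean + s * shrunk

u0 = (0.0, 1.0)
for t in [0.0, 0.2, 0.5, 1.0, 5.0]:
    ua, ub = tvf(*u0, t)
    # mass conservation: int u(t) dnu = nu(a) u(t,a) + nu(b) u(t,b)
    assert abs((ua + ub) - (u0[0] + u0[1])) < 1e-12
print(tvf(*u0, 5.0))   # steady state: both values equal the mean 0.5
```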

4 Asymptotic behaviour of the TVF and Poincaré type inequalities

Let \([X,d,m]\) be a metric random walk space with invariant and reversible measure \(\nu \). Assume, as always, that \([X,d,m]\) is m-connected.

Proposition 4.1

For every initial data \(u_0 \in L^2(X, \nu )\),

$$\begin{aligned} \lim _{t \rightarrow \infty } e^{t\Delta ^m_1}u_0 = u_\infty \quad \hbox {in } L^2(X,\nu ), \end{aligned}$$

with

$$\begin{aligned} u_\infty \in \{ u \in L^2(X, \nu ) \, : \, 0 \in \Delta _1^m (u) \}. \end{aligned}$$

Moreover, if \(\nu (X) < \infty \) then

$$\begin{aligned} u_\infty = \frac{1}{\nu (X)} \int _X u_0(x) d\nu (x). \end{aligned}$$

Proof

Since \(\mathcal {F}_m\) is a proper, even and lower semicontinuous function on \(L^2(X,\nu )\) that attains a minimum at the constant zero function, by [11, Theorem 5], we have

$$\begin{aligned} \lim _{t \rightarrow \infty } e^{t\Delta ^m_1}u_0 = u_\infty \quad \hbox {in } L^2(X,\nu ), \end{aligned}$$

with

$$\begin{aligned} u_\infty \in \{ u \in L^2(X, \nu ) \, : \, 0 \in \Delta _1^m (u) \}. \end{aligned}$$

Now, since \(0 \in \Delta _1^m (u_\infty ) \), we have that \(TV_m(u_\infty ) = 0\). Thus, if \(\nu (X)<\infty \) (so that \(\frac{1}{\nu (X)}\nu \) is ergodic), Lemma 2.9 yields that \(u_\infty \) is constant. Therefore, by Proposition 3.10,

$$\begin{aligned} u_\infty = \frac{1}{\nu (X)} \int _X u_0(x) d\nu (x). \end{aligned}$$

\(\square \)

Let us see that we can obtain a rate of convergence for the total variation flow \((e^{t\Delta ^m_1})_{t \ge 0}\) when a Poincaré type inequality holds.

From now on in this section we will assume that

$$\begin{aligned} \nu (X) < +\infty . \end{aligned}$$

Hence, \(\mathcal {F}_m(u) = TV_m(u)\) for all \(u\in L^2(X,\nu )\).

Definition 4.2

We say that \([X,d,m,\nu ]\) satisfies a (q, p)-Poincaré inequality (\(p, q\in [1,+ \infty [\)) if there exists a constant \(c>0\) such that, for any \(u \in L^q(X,\nu )\),

$$\begin{aligned} \left\| u \right\| _{L^p(X,\nu )} \le c\left( \left( \int _{X}\int _{X} |u(y)-u(x)|^q dm_x(y) d\nu (x) \right) ^{\frac{1}{q}}+\left| \int _X u\,d\nu \right| \right) , \end{aligned}$$

or, equivalently (by the triangle inequality for one direction and taking \({\tilde{u}}=u-\nu (u)\) for the other), there exists a \(\lambda > 0\) such that

$$\begin{aligned} \lambda \left\| u - \nu (u) \right\| _{L^p(X, \nu )} \le \Vert \nabla u \Vert _{L^q(X \times X, d(\nu \otimes m_x)) } \quad \hbox {for all} \ u \in L^q(X,\nu ), \end{aligned}$$

where \(\nu (u):= \frac{1}{\nu (X)} \int _X u(x) d \nu (x)\).

When \([X,d,m,\nu ]\) satisfies a (q, p)-Poincaré inequality, we will denote

$$\begin{aligned} \lambda ^{(q,p)}_{[X,d,m,\nu ]} := \inf \left\{ \frac{\Vert \nabla u \Vert _{L^q(X \times X, d(\nu \otimes m_x)) }}{\Vert u \Vert _{ L^p(X, \nu )}} \ : \ \Vert u \Vert _{L^p(X,\nu )} \not = 0, \ \int _X u(x) d \nu (x) = 0 \right\} . \end{aligned}$$

When \([X,d,m,\nu ]\) satisfies a (1, p)-Poincaré inequality, we will say that \([X,d,m,\nu ]\) satisfies a p-Poincaré inequality and write

$$\begin{aligned} \lambda ^p_{[X,d,m,\nu ]} := \lambda ^{(1,p)}_{[X,d,m,\nu ]} = \inf \left\{ \frac{TV_m(u)}{\Vert u \Vert _{L^p(X, \nu )}} \ : \ \Vert u \Vert _{L^p(X, \nu )}\not = 0, \ \int _X u(x) d \nu (x) = 0 \right\} .\nonumber \\ \end{aligned}$$
(4.1)
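On the two-point space of Example 3.6 this infimum is computable by hand: mean-zero functions are of the form \(u=(c,-c)\), for which \(TV_{m^G}(u)=2|c|(1-p)\) and \(\Vert u\Vert _{L^1}=2|c|\), so \(\lambda ^1 = 1-p\) (our own computation, not stated in the text). A quick numeric check that the ratio \(TV_{m^G}(u)/\Vert u-\nu _G(u)\Vert _{L^1}\) is constantly \(1-p\):

```python
import random

# Two-point space of Example 3.6: w_aa = w_bb = p, w_ab = 1 - p,
# d_a = d_b = 1, nu_G(a) = nu_G(b) = 1 (so nu_G(V) = 2).
p = 0.3
random.seed(0)

def tv(ua, ub):
    # TV(u) = (1/2) sum_x nu(x) sum_y m_x(y)|u(y)-u(x)| = (1-p)|u(b)-u(a)|
    return 0.5 * ((1 - p) * abs(ub - ua) + (1 - p) * abs(ua - ub))

ratios = []
for _ in range(100):
    ua, ub = random.uniform(-5, 5), random.uniform(-5, 5)
    if abs(ua - ub) < 1e-3:        # avoid dividing by a tiny norm
        continue
    mean = 0.5 * (ua + ub)                    # nu_G(u)
    l1 = abs(ua - mean) + abs(ub - mean)      # ||u - nu_G(u)||_{L^1}
    ratios.append(tv(ua, ub) / l1)

# lambda^1 = 1 - p: on this space the Rayleigh-type ratio is constant
assert ratios and all(abs(r - (1 - p)) < 1e-9 for r in ratios)
print(len(ratios), ratios[0])
```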

The following result was proved in [6, Theorem 7.11] for the particular case of the metric random walk space \([\Omega , d, m^{J,\Omega }]\).

Theorem 4.3

If \([X,d,m,\nu ]\) satisfies a 1-Poincaré inequality, then, for any \(u_0 \in L^2(X, \nu )\),

$$\begin{aligned} \left\| e^{t\Delta ^m_1}u_0 - \nu (u_0) \right\| _{L^1(X, \nu )} \le \frac{1}{2 \lambda ^{1}_{[X,d,m,\nu ]} } \frac{\Vert u_0 \Vert ^2_{L^2(X, \nu )}}{t} \quad \hbox {for all} \ t >0. \end{aligned}$$

Proof

Since the semigroup \(\{e^{t\Delta ^m_1 } \ : \ t \ge 0 \}\) preserves the mass (Proposition 3.10), we have

$$\begin{aligned} v(t): = e^{t\Delta ^m_1}u_0 - \frac{1}{\nu (X)} \int _X e^{t\Delta ^m_1}u_0 d \nu = e^{t\Delta ^m_1}u_0 - \frac{1}{\nu (X)} \int _X u_0 d \nu . \end{aligned}$$

Furthermore, the complete accretivity of the operator \(-\Delta ^m_1\) (see Sect. 1.2) implies that

$$\begin{aligned} \mathcal {L}(w):= \left\| w - \nu ( u_0)\right\| _{L^1(X, \nu )} \end{aligned}$$

is a Liapunov functional for the semigroup \(\{e^{t\Delta ^m_1 } \ : \ t \ge 0 \}\), which implies that

$$\begin{aligned} \Vert v(t) \Vert _{L^1(X, \nu )} \le \Vert v(s) \Vert _{L^1(X, \nu )} \quad \hbox {if} \ t \ge s. \end{aligned}$$
(4.2)

Now, by the Poincaré inequality we get

$$\begin{aligned} \lambda ^1_{[X,d,m,\nu ]} \Vert v(s) \Vert _{L^1(X, \nu )} \le TV_m(v(s)) \end{aligned}$$
(4.3)

and, by (4.2) and (4.3), we obtain that

$$\begin{aligned} t \Vert v(t) \Vert _{L^1(X, \nu )} \le \int _0^t \Vert v(s) \Vert _{L^1(X, \nu )} ds \le \frac{1}{\lambda ^{1}_{[X,d,m,\nu ]} } \int _0^t TV_m(v(s)) ds. \end{aligned}$$
(4.4)

On the other hand, by integration by parts (Proposition 3.7),

$$\begin{aligned} - \frac{1}{2} \frac{d}{dt} \Vert e^{t\Delta ^m_1}u_0 \Vert ^2_{L^2(X, \nu )} = - \int _X e^{t\Delta ^m_1}u_0 \frac{d}{dt}e^{t\Delta ^m_1}u_0 d\nu = TV_m(e^{t\Delta ^m_1}u_0), \end{aligned}$$

and then

$$\begin{aligned} \frac{1}{2} \Vert e^{t\Delta ^m_1}u_0 \Vert ^2_{L^2(X, \nu )} - \frac{1}{2} \Vert u_0 \Vert ^2_{L^2(X, \nu )} = - \int _0^t TV_m(e^{s\Delta ^m_1}u_0) ds = -\int _0^t TV_m(v(s)) ds, \end{aligned}$$

which implies

$$\begin{aligned} \int _0^t TV_m(v(s)) ds \le \frac{1}{2} \Vert u_0 \Vert ^2_{L^2(X, \nu )}. \end{aligned}$$

Hence, by (4.4)

$$\begin{aligned} \Vert v(t) \Vert _{L^1(X, \nu )} \le \frac{1}{2 \lambda ^1_{[X,d,m,\nu ]} } \frac{\Vert u_0 \Vert ^2_{L^2(X, \nu )}}{t}, \end{aligned}$$

which concludes the proof. \(\square \)

To obtain a family of metric random walk spaces for which a 1-Poincaré inequality holds, we need the following result.

Lemma 4.4

Suppose that \(\nu \) is a probability measure (thus, by the standing m-connectedness assumption, ergodic) and

$$\begin{aligned} m_x\ll \nu \quad \hbox {for all }x\in X. \end{aligned}$$

Let \(q\ge 1\). Let \(\{u_n\}_n\subset L^q(X,\nu )\) be a bounded sequence in \(L^1(X,\nu )\) satisfying

$$\begin{aligned} \lim _n \int _{X}\int _{X}|u_n(y)-u_n(x)|^q dm_x(y)d\nu (x)= 0 . \end{aligned}$$
(4.5)

Then, there exists \(\lambda \in \mathbb {R}\) such that

$$\begin{aligned}&u_n(x) \rightarrow \lambda \quad \hbox {for } \nu \hbox {-a.e. } x\in X,\\&\Vert u_n-\lambda \Vert _{L^q (X,m_x)}\rightarrow 0\quad \hbox {for } \nu \hbox {-a.e. } x\in X. \end{aligned}$$

Proof

Let

$$\begin{aligned} F_n(x,y)=|u_n(y)-u_n(x)| \end{aligned}$$

and

$$\begin{aligned} f_n(x)=\int _{X} |u_n(y)-u_n(x)|^q\, dm_x(y). \end{aligned}$$

From (4.5), it follows that

$$\begin{aligned} f_n\rightarrow 0\quad \hbox {in } L^1(X,\nu ). \end{aligned}$$

Passing to a subsequence if necessary, we can assume that

$$\begin{aligned} f_n(x)\rightarrow 0\quad \forall x\in X{\setminus } B_1,\quad \hbox {where } B_1\subset X \hbox { is } \nu \hbox {-null}. \end{aligned}$$
(4.6)

On the other hand, by (4.5), we also have that

$$\begin{aligned} F_n\rightarrow 0\quad \hbox {in } L^q(X\times X, \nu \otimes m_x). \end{aligned}$$

Therefore, we can suppose that, up to a subsequence,

$$\begin{aligned} F_n(x,y)\rightarrow 0\quad \forall (x,y)\in X^2{\setminus } C,\quad \hbox {where } C\subset X\times X \hbox { is } \nu \otimes m_x\hbox {-null}. \end{aligned}$$
(4.7)

Let \(B_2\subset X\) be a \(\nu \)-null set satisfying that,

$$\begin{aligned} \hbox {for all }x\in X{\setminus } B_2\hbox {, the section }C_{x} := \{ y \in X : (x,y) \in C \}\hbox { of }C\hbox { is }m_x\hbox {-null.} \end{aligned}$$

Finally, set \(B:=B_1\cup B_2.\)

Fix \(x_0\in X{\setminus } B\). Up to a subsequence, we have that \(u_n(x_0)\rightarrow \lambda \) for some \(\lambda \in [-\infty ,+\infty ]\), and then, by (4.7), \(u_n(y)\rightarrow \lambda \) for every \(y\in X{\setminus } C_{x_0}\). Moreover, since \(m_{x_0}\ll \nu \) and \(m_{x_0}( X {\setminus } C_{x_0})>0\), we have that \(\nu (X{\setminus } C_{x_0})>0\); thus, if \(A=\{x\in X: u_n(x)\rightarrow \lambda \}\), then \(\nu (A)>0\).

Let us see that

$$\begin{aligned} m_x(X{\setminus } A)=0 \quad \hbox {for all } x\in A{\setminus } B. \end{aligned}$$

Indeed, let \(x\in A{\setminus } B\). Then, for \(y\in X {\setminus } C_x\), \(u_n(y)\rightarrow \lambda \), thus \(y\in A\); that is, \(X{\setminus } C_x\subset A\), and, consequently, \(m_x(A)=1\). Now, since \(m_x(B)=0\), we have

$$\begin{aligned} m_x(X{\setminus } (A{\setminus } B))=0 \quad \hbox {for all } x\in A{\setminus } B. \end{aligned}$$
(4.8)

Therefore, since \(\nu \) is ergodic, (4.8) implies that \(1=\nu (A{\setminus } B)=\nu (A)\).

Consequently, we have obtained that \(u_n\) converges \(\nu \)-a.e. in X to \(\lambda \):

$$\begin{aligned} u_n(x)\rightarrow \lambda \quad \hbox { for } x\in A,\ \nu (X{\setminus } A)=0. \end{aligned}$$

Since \(\Vert u_n\Vert _{L^1 (X,\nu )}\) is bounded, by Fatou’s Lemma, we must have that \(\lambda \in {\mathbb {R}}\). On the other hand, by (4.6),

$$\begin{aligned} F_n(x,\cdot )\rightarrow 0\quad \hbox {in } L^{q}(X,m_x)\ , \end{aligned}$$

for every \(x\in X{\setminus } B_1\). In other words, \(\Vert u_n(\cdot )-u_n(x)\Vert _{L^q (X, m_x)}\rightarrow 0\). Thus

$$\begin{aligned} \Vert u_n-\lambda \Vert _{L^q (X,m_x)}\rightarrow 0\quad \hbox {for }\nu \hbox {-a.e. }x\in X. \end{aligned}$$

\(\square \)

Theorem 4.5

Suppose that \(\nu \) is a probability measure and

$$\begin{aligned} m_x\ll \nu \quad \hbox {for all }x\in X. \end{aligned}$$

Let (H1) and (H2) denote the following hypotheses.

  1. (H1)

    Given a \(\nu \)-null set B, there exist \(x_1,x_2,\ldots , x_N\in X{\setminus } B\), \(\nu \)-measurable sets \(\Omega _1,\Omega _2,\ldots ,\Omega _N\subset X\) and \(\alpha >0\), such that \( X= \bigcup \nolimits _{i=1}^N\Omega _i\) and \(\displaystyle \frac{dm_{x_i}}{d\nu }\ge \alpha >0\) on \(\Omega _i\), \(i=1,2,\ldots ,N\).

  2. (H2)

    Let \( 1\le p<q\). Given a \(\nu \)-null set B, there exist \(x_1,x_2,\ldots , x_N\in X{\setminus } B\) and \(\nu \)-measurable sets \(\Omega _1,\Omega _2,\ldots ,\Omega _N\subset X\), such that \( X= \bigcup \nolimits _{i=1}^N\Omega _i\) and, for \(g_i:=\displaystyle \frac{dm_{x_i}}{d\nu } \) on \(\Omega _i\), \(\displaystyle g_i^{-\frac{p}{q-p}}\in L^{1}(\Omega _i,\nu )\), \(i=1,2,\ldots ,N\).

Then, if (H1) holds, we have that \([X,d,m,\nu ]\) satisfies a (p, p)-Poincaré inequality for every \(p\ge 1\), and, if (H2) holds, then \([X,d,m,\nu ]\) satisfies a (q, p)-Poincaré inequality.

Proof

Let \(1\le p\le q\). We want to prove that there exists a constant \(c>0\) such that

$$\begin{aligned} \left\| u \right\| _{L^p(X,\nu )}\le & {} c\left( \left( \int _{X}\int _{X} |u(y)-u(x)|^q dm_x(y) d\nu (x) \right) ^{\frac{1}{q}}\right. \\&\left. +\left| \int _X u\,d\nu \right| \right) \, \, \, \hbox {for every }u \in L^q(X,\nu ), \end{aligned}$$

for \(p=q\ge 1\) when assuming (H1), and for the \(1\le p<q\) appearing in (H2) when that hypothesis is assumed. Suppose that this inequality does not hold. Then, there exists a sequence \((u_n)_{n\in {\mathbb {N}}}\subset L^q(X,\nu )\), with \(\Vert u_n\Vert _{L^p (X,\nu )}=1\), satisfying

$$\begin{aligned} \lim _n \int _{X}\int _{X}|u_n(y)-u_n(x)|^q dm_x(y)d\nu (x)= 0 \end{aligned}$$

and

$$\begin{aligned} \lim _n\int _Xu_n\, d\nu = 0. \end{aligned}$$

Therefore, by Lemma 4.4, there exist \(\lambda \in \mathbb {R}\) and a \(\nu \)-null set \(B\subset X\) such that

$$\begin{aligned} u_n\rightarrow \lambda \ \hbox { and }\ \Vert u_n-\lambda \Vert _{L^q (X,m_x)}\rightarrow 0\quad \hbox {for } x\in X{\setminus } B. \end{aligned}$$

We will now prove, distinguishing the cases in which we assume hypothesis (H1) or (H2), that

$$\begin{aligned} \Vert u_n-\lambda \Vert _{L^p (X,\nu )}\rightarrow 0. \end{aligned}$$
(4.9)

Suppose first that hypothesis (H1) is satisfied. Then, there exist \(x_1,x_2,\ldots , x_N\in X{\setminus } B\), \(\nu \)-measurable sets \(\Omega _1,\Omega _2,\ldots ,\Omega _N\subset X\) and \(\alpha >0\), such that \(\displaystyle X= \bigcup _{i=1}^N\Omega _i\) and \(\displaystyle g_i:=\frac{dm_{x_i}}{d\nu }\ge \alpha >0\) on \(\Omega _i\), \(i=1,2,\ldots ,N\). Note that, in this case, \(p=q\) in the previous computations. Now,

$$\begin{aligned} \displaystyle \Vert u_n-\lambda \Vert ^q_{L^q (\Omega _i,\nu )}= & {} \int _{\Omega _i}|u_n(y)-\lambda |^qd\nu (y) \\ \displaystyle\le & {} \frac{1}{\alpha }\int _{\Omega _i}|u_n(y)-\lambda |^qg_i(y)d\nu (y)=\frac{1}{\alpha }\int _{\Omega _i}|u_n(y)-\lambda |^qdm_{x_i}(y). \end{aligned}$$

Consequently, since \(X= \bigcup _{i=1}^N\Omega _i\),

$$\begin{aligned} \Vert u_n-\lambda \Vert ^q_{L^q (X,\nu )}\le \frac{1}{\alpha }\sum _{i=1}^N\Vert u_n-\lambda \Vert ^q_{L^q (\Omega _i,m_{x_i})}. \end{aligned}$$

Therefore,

$$\begin{aligned} \Vert u_n-\lambda \Vert _{L^q (X,\nu )}\rightarrow 0. \end{aligned}$$

Suppose now that hypothesis (H2) holds for \(1\le p<q\). Then, there exist \(x_1,x_2,\ldots , x_N\in X{\setminus } B\) and \(\nu \)-measurable sets \(\Omega _1,\Omega _2,\ldots ,\Omega _N\subset X\) such that \(\displaystyle X= \bigcup _{i=1}^N\Omega _i\) and, for \(g_i:=\displaystyle \frac{dm_{x_i}}{d\nu }\) on \(\Omega _i\), \(\displaystyle g_i^{-\frac{p}{q-p}} \in L^{1}(\Omega _i,\nu )\), \(i=1,2,\ldots ,N\). Hence, by Hölder's inequality,

$$\begin{aligned} \displaystyle \Vert u_n-\lambda \Vert ^p_{L^p (\Omega _i,\nu )}= & {} \int _{\Omega _i}|u_n(y)-\lambda |^pd\nu (y) \\ \displaystyle= & {} \int _{\Omega _i}|u_n(y)-\lambda |^p \frac{g_i(y)^{\frac{p}{q}}}{g_i(y)^{\frac{p}{q}}}d\nu (y) \\ \displaystyle\le & {} \left( \int _{\Omega _i}|u_n(y)-\lambda |^q g_i(y)d\nu (y)\right) ^{\frac{p}{q}} \left( \int _{\Omega _i}\frac{1}{g_i(y)^{\frac{p}{q-p}}}d\nu (y)\right) ^{\frac{q-p}{q}} \\ \displaystyle= & {} \left( \int _{\Omega _i}|u_n(y)-\lambda |^q dm_{x_i}(y)\right) ^{\frac{p}{q}} \left( \int _{\Omega _i}\frac{1}{g_i(y)^{\frac{p}{q-p}}}d\nu (y)\right) ^{\frac{q-p}{q}} . \end{aligned}$$

Consequently, since \(X= \bigcup _{i=1}^N\Omega _i\),

$$\begin{aligned} \Vert u_n-\lambda \Vert ^p_{L^p (X,\nu )} \le \sum _{i=1}^{N} \Vert u_n-\lambda \Vert ^p_{L^q (\Omega _i,m_{x_i})} \left\| \frac{1}{g_i^{\frac{p}{q-p}}}\right\| ^{\frac{q-p}{q}}_{L^{1} (\Omega _i,\nu )} . \end{aligned}$$

Therefore,

$$\begin{aligned} \Vert u_n-\lambda \Vert _{L^p (X,\nu )}\rightarrow 0, \end{aligned}$$

which concludes the proof of (4.9) in both cases.

Now, since \(\displaystyle \lim _n\int _X u_n\,d\nu =0\), by (4.9) we get that \(\lambda =0\), but this implies

$$\begin{aligned} \Vert u_n \Vert _{L^p (X,\nu )}\rightarrow 0, \end{aligned}$$

which is a contradiction with \(\Vert u_n\Vert _{L^p(X,\nu )}=1\) for all \(n\in {\mathbb {N}}\), so we are done. \(\square \)

On account of Theorem 4.3, we obtain the following result on the asymptotic behaviour of the TVF.

Corollary 4.6

Under the hypothesis of Theorem 4.5, for any \(u_0 \in L^2(X, \nu )\),

$$\begin{aligned} \left\| e^{t\Delta ^m_1}u_0 - \nu (u_0) \right\| _{L^1(X, \nu )} \le \frac{1}{2\lambda ^1_{[X,d,m,\nu ]} } \frac{\Vert u_0 \Vert ^2_{L^2(X, \nu )}}{t} \quad \hbox {for all} \ t >0. \end{aligned}$$

Example 4.7

We give two examples of metric random walk spaces in which a 1-Poincaré inequality does not hold.

  1. (1)

    A locally finite weighted discrete graph with infinitely many vertices: Let \([V(G),d_G,m^G]\) be the metric random walk space associated to the locally finite weighted discrete graph with vertex set \(V(G):=\{x_3,x_4,x_5,\ldots ,x_n,\ldots \}\) and weights:

    $$\begin{aligned} w_{x_{3n},x_{3n+1}}=\frac{1}{n^3} , \ w_{x_{3n+1},x_{3n+2}}=\frac{1}{n^2}, \ w_{x_{3n+2},x_{3n+3}}=\frac{1}{n^3} , \end{aligned}$$

    for \(n\ge 1\), and \(w_{x_i,x_j}=0\) otherwise (recall Example 1.1 (3)). Moreover, let

    $$\begin{aligned} f_n(x):=\left\{ \begin{array}{ll} \displaystyle n^2 &{}\quad \hbox {if} \ \ x=x_{3n+1},x_{3n+2} \\ \\ 0 &{}\quad \hbox {else}. \end{array}\right. \end{aligned}$$

    Note that \(\nu _G(V)<+\infty \) (we avoid its normalization for simplicity). Now,

    $$\begin{aligned} 2TV_{m^G}(f_n)= & {} \int _V\int _V \vert f_n(x)-f_n(y)\vert dm_x(y)d\nu _G(x) \\= & {} d_{x_{3n}}\int _V \vert f_n(x_{3n})-f_n(y)\vert dm_{x_{3n}}(y)\\&+\,d_{x_{3n+1}}\int _V \vert f_n(x_{3n+1})-f_n(y) \vert dm_{x_{3n+1}}(y) \\&+\,d_{x_{3n+2}}\int _V \vert f_n(x_{3n+2})-f_n(y) \vert dm_{x_{3n+2}}(y)\\&+\,d_{x_{3n+3}}\int _V \vert f_n(x_{3n+3})-f_n(y) \vert dm_{x_{3n+3}}(y) \\= & {} n^2\frac{1}{n^3}+n^2\frac{1}{n^3}+n^2\frac{1}{n^3}+n^2\frac{1}{n^3}=\frac{4}{n} . \end{aligned}$$

    However, we have

    $$\begin{aligned} \int _V f_n(x)d\nu _G(x)=n^2 (d_{x_{3n+1}}+d_{x_{3n+2}})=2n^2 \left( \frac{1}{n^2}+\frac{1}{n^3}\right) =2\left( 1+\frac{1}{n}\right) , \end{aligned}$$

    thus

    $$\begin{aligned} \nu _G(f_n)=\frac{2\left( 1+\frac{1}{n}\right) }{\nu _G(V)}=O\left( 1\right) , \end{aligned}$$

    where we use the notation

    $$\begin{aligned} \varphi (n) = O(\psi (n)) \iff \limsup _{n \rightarrow \infty } \left| \frac{\varphi (n)}{\psi (n)}\right| = C \ne 0. \end{aligned}$$

    Therefore,

    $$\begin{aligned} \vert f_n(x)-\nu _G(f_n)\vert =\left\{ \begin{array}{ll} \displaystyle O(n^2) &{}\quad \hbox {if} \ \ x=x_{3n+1},x_{3n+2}, \\ \\ O\left( 1\right) &{}\quad \hbox {otherwise}. \end{array}\right. \end{aligned}$$

    Finally,

    $$\begin{aligned} \int _V \vert f_n(x)-\nu _G(f_n)\vert d\nu _G(x)= & {} O\left( 1\right) \sum _{x\ne x_{3n+1},x_{3n+2}}d_{x}+ O(n^2)(d_{x_{3n+1}}+d_{x_{3n+2}}) \\= & {} O\left( 1\right) +2 O(n^2)\left( \frac{1}{n^2}+\frac{1}{n^3}\right) =O(1) . \end{aligned}$$

    Consequently,

    $$\begin{aligned} \inf \left\{ \frac{TV_{m^G}(u)}{\Vert u - \nu _G(u) \Vert _{L^1(V(G),\nu _G)}} \ : \ u\in L^1(V,\nu _G) , \ \Vert u \Vert _{L^1(V(G),\nu _G)} \not = 0 \right\} =0, \end{aligned}$$

    and a 1-Poincaré inequality does not hold for this space.

  2. (2)

    The metric random walk space \([{\mathbb {R}}, d, m^J]\), where d is the Euclidean distance and \(J(x)= \frac{1}{2} \upchi _{[-1,1]}(x)\): Define, for \(n \in {\mathbb {N}}\),

    $$\begin{aligned} u_n= \frac{1}{2^{n+1}} \upchi _{[2^n, 2^{n+1}]} - \frac{1}{2^{n+1}}\upchi _{[-2^{n+1}, - 2^n]}. \end{aligned}$$

    Then \(\Vert u_n \Vert _1 = 1\), \(\displaystyle \int _{\mathbb {R}}u_n(x) dx = 0\) and it is easy to see that, for n large enough,

    $$\begin{aligned} TV_{m^J} (u_n) = \frac{1}{2^{n+1}}. \end{aligned}$$

    Therefore, \((m^J, \mathcal {L}^1)\) does not satisfy a 1-Poincaré inequality.
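Both computations in this example can be reproduced numerically. The following is a small sketch (the choice of n and the grid resolution are arbitrary): part (1) is checked with exact rational arithmetic on the block of edges that meets the support of \(f_n\), and part (2) evaluates \(TV_{m^J}\) of a step function by integrating, piece against piece, the measure of \(\{|x-y|\le 1\}\).

```python
from fractions import Fraction as F

# Part (1): only the edges of the n-th block meet the support of f_n, so
# 2 TV_{m^G}(f_n) and the mass of f_n reduce to finite sums.
def block_quantities(n):
    w_a = F(1, n**3)   # w(x_{3n},   x_{3n+1})
    w_b = F(1, n**2)   # w(x_{3n+1}, x_{3n+2})
    w_c = F(1, n**3)   # w(x_{3n+2}, x_{3n+3})
    fval = n**2        # value of f_n on x_{3n+1} and x_{3n+2}
    # sum of w_{xy} |f_n(x) - f_n(y)| over ordered pairs: each of the two
    # boundary edges of the block contributes twice
    two_tv = 2 * (w_a * fval + w_c * fval)
    # integral of f_n dnu_G = n^2 (d_{x_{3n+1}} + d_{x_{3n+2}})
    mass = fval * ((w_a + w_b) + (w_b + w_c))
    return two_tv, mass

# Part (2): for J = (1/2) chi_[-1,1], TV_{m^J}(u) = (1/4) * double integral
# of |u(x) - u(y)| over {|x - y| <= 1}; for a step function this reduces to
# pairwise interactions between its constant pieces.
def pair_mass(a, b, c, d, grid=100_000):
    # measure of {(x, y) : x in [a, b], y in [c, d], |x - y| <= 1},
    # via a midpoint rule on the (piecewise linear) overlap length in x
    h = (b - a) / grid
    return h * sum(max(0.0, min(d, x + 1) - max(c, x - 1))
                   for x in (a + (k + 0.5) * h for k in range(grid)))

def tv_step(pieces):
    # pieces: [((a, b), value), ...], covering supp(u) and a 1-neighbourhood
    return 0.5 * sum(abs(ci - cj) * pair_mass(a, b, c, d)
                     for k, ((a, b), ci) in enumerate(pieces)
                     for ((c, d), cj) in pieces[k + 1:])

def u_pieces(n):
    # u_n = 2^{-(n+1)} chi_{[2^n, 2^{n+1}]} - 2^{-(n+1)} chi_{[-2^{n+1}, -2^n]}
    p, h = 2.0**n, 2.0**-(n + 1)
    return [((-2*p - 1, -2*p), 0.0), ((-2*p, -p), -h),
            ((-p, p), 0.0), ((p, 2*p), h), ((2*p, 2*p + 1), 0.0)]
```

For instance, \(n=5\) gives \(2\,TV_{m^G}(f_5)=4/5\) and mass \(2(1+\frac15)\), while \(TV_{m^J}(u_2)\approx 2^{-3}\), matching the computations above.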

Let us see that, when \([X,d,m,\nu ]\) satisfies a 2-Poincaré inequality, the solution of the total variation flow reaches the steady state in finite time.

Theorem 4.8

Let [Xdm] be a metric random walk space with invariant and reversible measure \(\nu \). If \([X,d,m,\nu ]\) satisfies a 2-Poincaré inequality then, for any \(u_0 \in L^2(X, \nu )\),

$$\begin{aligned} \Vert e^{t\Delta ^m_1}u_0-\nu (u_0)\Vert _{L^2(X,\nu )}\le \left( \Vert u_0-\nu (u_0)\Vert _{L^2(X,\nu )}-\lambda ^{2}_{[X,d,m,\nu ]}t\right) ^+\quad \hbox { for all }t \ge 0, \end{aligned}$$

where \(\lambda ^{2}_{[X,d,m,\nu ]}\) is given in (4.1). Consequently,

$$\begin{aligned} e^{t\Delta ^m_1}u_0=\nu (u_0)\qquad \forall \, t\ge {\hat{t}}:=\frac{\left\| u_0-\nu (u_0)\right\| _{L^2(X,\nu )}}{\lambda ^{2}_{[X,d,m,\nu ]}}. \end{aligned}$$

Proof

Let \(v(t):= u(t) - \nu (u_0)\), where \(u(t):= e^{t\Delta ^m_1}u_0\). Since \(\Delta _1^m u(t)=\Delta _1^m \big (u(t) - \nu (u_0)\big )\), we have that

$$\begin{aligned} \frac{d}{dt} v(t) \in \Delta _1^m v(t). \end{aligned}$$

Note that \(v(t)\in BV_m(X,\nu )\) for every \(t>0\). Indeed, since \(-\Delta _1^m= \partial \mathcal {F}_m\) is a maximal monotone operator in \(L^2(X,\nu )\), by [10, Theorem 3.7] in the context of the Hilbert space \(L^2(X,\nu )\), we have that \(v(t)\in D(\Delta _1^m)\subset BV_m(X,\nu )\) for every \(t>0\).

Hence, for each \(t>0\), by Theorem 3.3, there exists \(\mathbf{g}_t\in L^\infty (X\times X, \nu \otimes m_x)\) antisymmetric with \(\Vert \mathbf{g}_t \Vert _{L^\infty (X \times X,\nu \otimes m_x)} \le 1\) such that

$$\begin{aligned} \int _{X}{} \mathbf{g}_t(x,y)\,dm_x(y)= \frac{d}{dt} v(t)(x) \quad \hbox {for } \nu -\text{ a.e. } x\in X, \end{aligned}$$
(4.10)

and

$$\begin{aligned} -\int _{X} \int _{X}\mathbf{g}_t(x,y)dm_x(y)\,v(t)(x)d\nu (x)=\mathcal {F}_m(v(t)) =TV_m(u(t)). \end{aligned}$$
(4.11)

Then, multiplying (4.10) by v(t) and integrating over X with respect to \(\nu \), having in mind (4.11), we get

$$\begin{aligned} \frac{1}{2} \frac{d}{dt} \int _X v(t)^2 d \nu + TV_m(v(t))=0, \ \forall t>0. \end{aligned}$$

Now, the semigroup \(\{e^{t\Delta ^m_1 } \ : \ t \ge 0 \}\) preserves the mass (Proposition 3.10), so we have that \(\nu (u(t)) = \nu (u_0)\) for all \(t \ge 0\), and, since \([X,d,m,\nu ]\) satisfies a 2-Poincaré inequality, we have

$$\begin{aligned} \lambda ^{2}_{[X,d,m,\nu ]} \Vert v(t) \Vert _{L^2(X,\nu )} \le TV_m(v(t))\quad \hbox { for all }\,\,t \ge 0. \end{aligned}$$

Therefore,

$$\begin{aligned} \frac{1}{2}\frac{d}{dt} \Vert v(t) \Vert _{L^2(X,\nu )}^2 + \lambda ^{2}_{[X,d,m,\nu ]} \Vert v(t) \Vert _{L^2(X,\nu )} \le 0\quad \hbox {for all }\,\,t \ge 0. \end{aligned}$$

Now, since \(\frac{1}{2}\frac{d}{dt} \Vert v(t) \Vert _{L^2(X,\nu )}^2 = \Vert v(t) \Vert _{L^2(X,\nu )}\frac{d}{dt}\Vert v(t) \Vert _{L^2(X,\nu )}\), we have that \(\frac{d}{dt}\Vert v(t) \Vert _{L^2(X,\nu )} \le -\lambda ^{2}_{[X,d,m,\nu ]}\) whenever \(\Vert v(t) \Vert _{L^2(X,\nu )}>0\). Integrating this ordinary differential inequality (and noting that v remains zero once it vanishes) we get

$$\begin{aligned} \Vert v(t) \Vert _{L^2(X,\nu )} \le \left( \Vert v(0) \Vert _{L^2(X,\nu )} - \lambda ^{2}_{[X,d,m,\nu ]} t\right) ^+\quad \hbox { for all }\,\,t \ge 0, \end{aligned}$$

that is,

$$\begin{aligned} \Vert u(t)-\nu (u_0)\Vert _{L^2(X,\nu )}\le \left( \Vert u_0-\nu (u_0)\Vert _{L^2(X,\nu )}-\lambda ^{2}_{[X,d,m,\nu ]}t\right) ^+\quad \hbox { for all }\,\,t \ge 0. \end{aligned}$$

\(\square \)

We define the extinction time as

$$\begin{aligned} T^*(u_0):= \inf \{ t >0 \ : \ e^{t\Delta ^m_1}u_0 = \nu (u_0) \} , \ u_0\in L^2(X,\nu ). \end{aligned}$$

Under the conditions of Theorem 4.8, we have

$$\begin{aligned} T^*(u_0) \le \frac{\left\| u_0-\nu (u_0)\right\| _{L^2(X,\nu )}}{\lambda ^{2}_{[X,d,m,\nu ]}}, \ u_0\in L^2(X,\nu ) . \end{aligned}$$

To obtain a lower bound on the extinction time, we use the following norm which, in the continuous setting, was introduced in [40]. Given a function \(f \in L^2(X, \nu )\), we define

$$\begin{aligned} \Vert f \Vert _{m,*}:= \sup \left\{ \int _X f(x) u(x) d\nu (x) : u \in L^2(X, \nu )\cap BV_m(X,\nu ), \ TV_m(u) \le 1\right\} . \end{aligned}$$

Theorem 4.9

Let \(u_0 \in L^2(X, \nu )\). If \(T^*(u_0) < \infty \) then

$$\begin{aligned} T^*(u_0) \ge \Vert u_0 - \nu (u_0)\Vert _{m,*}. \end{aligned}$$

Proof

If \(u(t):= e^{t\Delta ^m_1}u_0\), we have

$$\begin{aligned} u_0 - \nu (u_0) = - \int _0^{T^*(u_0)} u'(t)dt. \end{aligned}$$

Then, by integration by parts (Proposition 3.7), we get

$$\begin{aligned} \Vert u_0 - \nu (u_0) \Vert _{m,*}= & {} \sup \left\{ \int _X w (u_0 - \nu (u_0)) d \nu \ : \ TV_m(w) \le 1 \right\} \\= & {} \sup \left\{ \int _X w \left( \int _0^{T^*(u_0)} - u'(t)dt\right) d \nu \ : \ TV_m(w) \le 1 \right\} \\= & {} \sup \left\{ \int _0^{T^*(u_0)} \int _X -w u'(t)dt d \nu \ : \ TV_m(w) \le 1 \right\} \\\le & {} \sup \left\{ \int _0^{T^*(u_0)} TV_m(w) dt \ : \ TV_m(w) \le 1 \right\} = T^*(u_0). \end{aligned}$$

\(\square \)

We will now see that a 2-Poincaré inequality holds for finite weighted connected graphs.

Theorem 4.10

Let \(G = (V(G), E(G))\) be a finite weighted connected discrete graph. Then, following the notation of Example 1.1 (3), \([V(G),d_G,m^G, \nu _G]\) satisfies a 2-Poincaré inequality, that is,

$$\begin{aligned}&\lambda ^{2}_{[V(G),d_G,m^G, \nu _G]} \nonumber \\&\quad =\inf \left\{ \frac{TV_{m^G}(u)}{\Vert u \Vert _{L^2(V(G),\nu _G)}} \ : \ \Vert u \Vert _{L^2(V(G),\nu _G)} \not = 0, \ \int _V u(x) d \nu _G(x) = 0 \right\} >0.\nonumber \\ \end{aligned}$$
(4.12)

Proof

Let \(V := V(G) = \{x_1, \ldots , x_m\}\) and suppose that (4.12) is false. Then, there exists a sequence \((u_n)_{n\in {\mathbb {N}}} \subset L^2(V, \nu _G)\) with \(\Vert u_n \Vert _{L^2(V,\nu _G)} =1\) and \(\int _V u_n(x) d \nu _G(x) =0\), \(n\in {\mathbb {N}}\), such that

$$\begin{aligned} 0 = \lim _{n \rightarrow \infty }TV_{m^G}(u_n) = \lim _{n \rightarrow \infty } \sum _{k=1}^m \sum _{y \sim x_k}w_{x_k y} \vert u_n(x_k) - u_n(y) \vert . \end{aligned}$$

Hence,

$$\begin{aligned} \lim _{n \rightarrow \infty } \vert u_n(x_k) - u_n(y) \vert =0 \quad \hbox {if} \ y \sim x_k, \quad \hbox {for any} \ k \in \{1, \ldots , m\}. \end{aligned}$$

Moreover, since \(\Vert u_n \Vert _{L^2(V,\nu _G)} =1\), we have that, up to a subsequence,

$$\begin{aligned} \lim _{n \rightarrow \infty } u_n(x_k) = \lambda _k \in {\mathbb {R}}\quad \hbox {for} \ \ k= 1, \ldots , m. \end{aligned}$$

Now, since the graph is connected, we have that \(\lambda _1 = \lambda _2 = \cdots = \lambda _m =: \lambda \), thus

$$\begin{aligned} \lim _{n \rightarrow \infty } u_n(y) = \lambda \in {\mathbb {R}}\quad \hbox {for all} \ y \in V. \end{aligned}$$

Then, by the Dominated Convergence Theorem, we get that \(u_n \rightarrow \lambda \) in \(L^2(V,\nu _G )\) and, therefore, since \(\int _V u_n(x) d \nu _G(x) =0\), we have \(\lambda =0\), which contradicts \(\Vert u_n \Vert _{L^2(V,\nu _G)} =1\). \(\square \)

As a consequence of this last result and Theorem 4.8, we get:

Theorem 4.11

Let \(G = (V(G), E(G))\) be a finite weighted connected discrete graph. Then,

$$\begin{aligned} \Vert e^{t\Delta ^{m^G}_1}u_0-\nu (u_0)\Vert _{L^2(V(G),\nu _G)} \le \lambda ^{2}_{[V(G),d_G,m^G,\nu _G]}\left( {\hat{t}}-t\right) ^+, \end{aligned}$$

where \({\hat{t}}:=\frac{\left\| u_0-\nu (u_0)\right\| _{L^2(V(G),\nu _G)}}{\lambda ^{2}_{[V(G),d_G,m^G,\nu _G]}}\). Consequently,

$$\begin{aligned} e^{t\Delta ^{m^G}_1}u_0 = \nu (u_0) \quad \hbox { for all} \ \ t\ge {\hat{t}}. \end{aligned}$$

5 m-Cheeger and m-calibrable sets

Let [Xdm] be a metric random walk space with invariant and reversible measure \(\nu \). Assume, as before, that [Xdm] is m-connected.

Given a set \(\Omega \subset X\) with \(0< \nu (\Omega ) < \nu (X)\), we define its m-Cheeger constant by

$$\begin{aligned} h_1^m(\Omega ) := \inf \left\{ \frac{P_m(E)}{\nu (E)} \, : \, E \subset \Omega , \ E \ \nu \hbox {-measurable with } \, \nu ( E)>0 \right\} , \end{aligned}$$
(5.1)

where the notation \(h_1^m(\Omega )\) is chosen to match the one that we will use for the classical Cheeger constant (see (5.2)). In both of these, the subscript 1 is there to further distinguish them from the upcoming notation \(h_m(X)\) for the m-Cheeger constant of X (see (6.6)). Note that, by (2.1), we have that \(h_1^m(\Omega ) \le 1\).

A \(\nu \)-measurable set \(E \subset \Omega \) achieving the infimum in (5.1) is said to be an m-Cheeger set of \(\Omega \). Furthermore, we say that \(\Omega \) is m-calibrable if it is an m-Cheeger set of itself, that is, if

$$\begin{aligned} h_1^m(\Omega ) = \frac{P_m(\Omega )}{\nu (\Omega )}. \end{aligned}$$

For ease of notation, we will denote

$$\begin{aligned} \lambda ^m_\Omega := \frac{P_m(\Omega )}{\nu (\Omega )}, \end{aligned}$$

for any \(\nu \)-measurable set \(\Omega \subset X\) with \(0<\nu (\Omega )<\nu (X)\).

Remark 5.1

  1. (1)

    Let \([{\mathbb {R}}^N, d, m^J]\) be the metric random walk space given in Example 1.1 (1) with invariant and reversible measure \(\mathcal {L}^N\). Then, the concepts of m-Cheeger set and m-calibrable set coincide with the concepts of J-Cheeger set and J-calibrable set introduced in [34] (see also [35]).

  2. (2)

    If \(G = (V(G), E(G))\) is a locally finite weighted discrete graph without loops (i.e., \(w_{xx} =0\) for all \(x \in V\)) and with more than two vertices, then any subset consisting of two vertices is \(m^G\)-calibrable. Indeed, let \(\Omega = \{x, y \}\), then, by (2.1), we have

    $$\begin{aligned} \frac{P_{m^G}(\{x\})}{\nu _G(\{x\})}= 1 - \frac{1}{\nu _G(\{x\})}\int _{\{x\}}\int _{\{x\}} dm^G_z(y)\, d\nu _G(z) = 1 \ge \frac{P_{m^G}(\Omega )}{\nu _G(\Omega )}, \end{aligned}$$

    and, similarly,

    $$\begin{aligned} \frac{P_{m^G}(\{y\})}{\nu _G(\{y\})} = 1 \ge \frac{P_{m^G}(\Omega )}{\nu _G(\Omega )}. \end{aligned}$$

    Therefore, \(\Omega \) is \(m^G\)-calibrable.
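As a numerical illustration of this remark (a sketch; the randomly weighted complete graph is an arbitrary choice), exact arithmetic confirms that singletons have quotient 1 and that every pair of vertices does at least as well:

```python
import random
from fractions import Fraction as F

random.seed(7)
n = 6
# a loop-free weighted graph (w_xx = 0) with positive edge weights
w = [[F(0)] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        w[i][j] = w[j][i] = F(random.randint(1, 5))
deg = [sum(row) for row in w]

def quotient(E):
    # P_{m^G}(E) / nu_G(E): total weight leaving E over weighted volume of E
    cut = sum(w[i][j] for i in E for j in range(n) if j not in E)
    return cut / sum(deg[i] for i in E)
```

Since the quotient of a singleton is \(1\) and the quotient of a pair is \(1 - 2w_{xy}/(d_x + d_y) \le 1\), every pair is an \(m^G\)-Cheeger set of itself.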

In [34] it is proved that, for the metric random walk space \([{\mathbb {R}}^N, d, m^J]\), each ball is a J-calibrable set. In the next example we will see that this result is not true in general.

Example 5.2

Let \(G\) be the finite weighted discrete graph with vertex set \(V(G)=\{ x_1,x_2,\ldots , x_7 \} \) and the following weights: \( w_{x_1,x_2}=2 , \ w_{x_2,x_3}=1 , \ w_{x_3,x_4}=2 , \ w_{x_4,x_5}=2 , \ w_{x_5,x_6}=1 , \ w_{x_6,x_7}=2 \) and \(w_{x_i,x_j}=0\) otherwise. Then, if \(E_1=B(x_4,\frac{5}{2})=\{ x_2, x_3, \ldots , x_6 \}\), by (6.7) we have

$$\begin{aligned} \frac{P_{m^G}(E_1)}{\nu _G(E_1)} =\frac{w_{x_1x_2}+w_{x_6x_7}}{d_{x_2}+d_{x_3}+d_{x_4}+d_{x_5} +d_{x_6}} =\frac{1}{4}. \end{aligned}$$

But, taking \(E_2=B(x_4,\frac{3}{2})=\{ x_3, x_4, x_5 \}\subset E_1\), we have

$$\begin{aligned} \frac{P_{m^G}(E_2)}{\nu _G(E_2)} = \frac{w_{x_2x_3}+w_{x_5x_6}}{d_{x_3}+d_{x_4}+d_{x_5}} = \frac{1}{5}. \end{aligned}$$

Consequently, the ball \(B(x_4,\frac{5}{2})\) is not \(m^G\)-calibrable.
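The two quotients can be reproduced with exact rational arithmetic (a small sketch; vertex \(x_i\) is encoded as the integer i):

```python
from fractions import Fraction as F

# weights of the path graph in Example 5.2
w = {(1, 2): 2, (2, 3): 1, (3, 4): 2, (4, 5): 2, (5, 6): 1, (6, 7): 2}
V = range(1, 8)
def weight(i, j):
    return F(w.get((min(i, j), max(i, j)), 0))
deg = {i: sum(weight(i, j) for j in V) for i in V}

def quotient(E):
    # P_{m^G}(E) / nu_G(E): weight of the cut over weighted volume
    cut = sum(weight(i, j) for i in E for j in V if j not in E)
    return cut / sum(deg[i] for i in E)

E1 = {2, 3, 4, 5, 6}   # the ball B(x_4, 5/2)
E2 = {3, 4, 5}         # the ball B(x_4, 3/2)
```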

In the next example we will see that there exist metric random walk spaces with sets that do not contain m-Cheeger sets.

Example 5.3

Consider the graph of Example 6.21, that is, the graph with vertex set \(V(G)=\{x_0,x_1,\ldots ,x_n,\ldots \}\) and the following weights:

$$\begin{aligned} w_{x_{2n}x_{2n+1}}=\frac{1}{2^n} , \quad w_{x_{2n+1}x_{2n+2}}=\frac{1}{3^n} \quad \hbox {for} \ n=0, 1, 2, \ldots , \end{aligned}$$

and \(w_{x_i,x_j}=0\) otherwise. If \(\Omega :=\{ x_1, x_2, x_3,\ldots \}\), then \(\frac{P_{m^G}(D)}{\nu _G(D)} > 0\) for every \(D\subset \Omega \) with \(\nu _G(D)>0\) but, working as in Example 6.21, we get \(h_1^m(\Omega )=0\). Therefore, \(\Omega \) has no m-Cheeger set.
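Numerically, one way to see that \(h_1^m(\Omega )=0\) (a sketch; the choice of tail sets and the truncation index are ours) is to test the sets \(D_n=\{x_{2n+2},x_{2n+3},\ldots \}\): their boundary consists of a single edge of weight \(3^{-n}\), while their volume only decays like \(2^{-n}\):

```python
from fractions import Fraction as F

M = 100  # truncation index; the weights beyond x_M are negligible

def w(i, j):
    # w(x_{2n}, x_{2n+1}) = 1/2^n, w(x_{2n+1}, x_{2n+2}) = 1/3^n
    i, j = min(i, j), max(i, j)
    if j != i + 1:
        return F(0)
    return F(1, 2 ** (i // 2)) if i % 2 == 0 else F(1, 3 ** (i // 2))

deg = {i: w(i - 1, i) + w(i, i + 1) for i in range(1, M)}

def tail_quotient(n):
    # P(D_n)/nu_G(D_n) for the tail set D_n = {x_{2n+2}, x_{2n+3}, ...}
    # (truncated at x_{M-1}); the cut is the single edge of weight 3^{-n}
    cut = w(2 * n + 1, 2 * n + 2)
    vol = sum(deg[j] for j in range(2 * n + 2, M))
    return cut / vol
```

The quotients decrease roughly like \((2/3)^n\), so the infimum over subsets of \(\Omega \) is 0, even though every single quotient is positive.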

It is well known (see [25]) that the classical Cheeger constant

$$\begin{aligned} h_1(\Omega ):= \inf \left\{ \frac{Per(E)}{\vert E \vert } \, : \, E\subset \Omega , \ \vert E \vert >0 \right\} , \end{aligned}$$
(5.2)

for a bounded smooth domain \(\Omega \), is an optimal Poincaré constant, namely, it coincides with the first eigenvalue of the 1-Laplacian:

$$\begin{aligned} h_1(\Omega )=\Lambda _1(\Omega ):= \inf \left\{ \frac{\displaystyle \int _\Omega \vert Du \vert +\displaystyle \int _{\partial \Omega } \vert u \vert d \mathcal {H}^{N-1}}{ \displaystyle \Vert u \Vert _{L^1(\Omega )}} \, : \, u \in BV(\Omega ), \ \Vert u \Vert _{L^\infty (\Omega )} = 1 \right\} . \end{aligned}$$

In order to get a nonlocal version of this result, we introduce the following constant. For \(\Omega \subset X\) with \(0<\nu (\Omega )< \nu (X)\), we define

$$\begin{aligned} \displaystyle \Lambda _1^m(\Omega )= & {} \inf \left\{ TV_m(u) \ : \ u \in L^1(X,\nu ), \ u= 0 \ \hbox {in} \ X {\setminus } \Omega , \ u \ge 0, \ \int _X u(x) d\nu (x) = 1 \right\} \\ \\= & {} \displaystyle \inf \left\{ \frac{ TV_m (u)}{\displaystyle \int _X u(x) d\nu (x)} \ : \ u \in L^1(X,\nu ), \ u= 0 \ \hbox {in} \ X {\setminus } \Omega ,\ u \ge 0, \ u\not \equiv 0 \right\} . \end{aligned}$$

Theorem 5.4

Let \(\Omega \subset X\) with \(0< \nu (\Omega ) < \nu (X)\). Then,

$$\begin{aligned} h_1^m(\Omega ) = \Lambda _1^m(\Omega ). \end{aligned}$$

Proof

Given a \(\nu \)-measurable subset \(E \subset \Omega \) with \(\nu (E )> 0\), we have

$$\begin{aligned} \frac{ TV_m(\upchi _E)}{\Vert \upchi _E \Vert _{L^1(X, \nu )}} = \frac{P_m(E)}{\nu (E)}. \end{aligned}$$

Therefore, \(\Lambda _1^m(\Omega ) \le h_1^m(\Omega )\). For the opposite inequality we will follow an idea used in [25]. Given \(u \in L^1(X,\nu )\), with \(u= 0\) in \(X {\setminus } \Omega \), \(u \ge 0\) and \(u\not \equiv 0\), we have

$$\begin{aligned} TV_m(u)= & {} \displaystyle \int _{0}^{+\infty } P_m(E_t(u))\, dt = \displaystyle \int _{0}^{ \Vert u\Vert _{L^\infty (X,\nu )}} \frac{ P_m(E_t(u))}{ \nu (E_t(u))} \nu (E_t(u))\, dt \\\ge & {} h_1^m(\Omega ) \displaystyle \int _{0}^{+\infty } \nu (E_t(u))\, dt = h_1^m(\Omega ) \int _X u(x) d\nu (x) \end{aligned}$$

where the first equality follows by the coarea formula (2.6) and the last one by Cavalieri’s Principle. Taking the infimum over u in the above expression we get \(\Lambda _1^m(\Omega ) \ge h_1^m(\Omega )\). \(\square \)
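A quick numerical sanity check of this equality (a sketch on a path graph with unit weights, an arbitrary choice): the infimum over indicators of subsets of \(\Omega \) is computed by brute force, and random nonnegative functions supported in \(\Omega \) never do better, as Theorem 5.4 predicts.

```python
import itertools
import random

# path graph x_0 - x_1 - x_2 - x_3 - x_4 with unit weights; Omega = {1, 2, 3}
n = 5
def W(i, j):
    return 1.0 if abs(i - j) == 1 else 0.0
deg = [sum(W(i, j) for j in range(n)) for i in range(n)]

def tv(u):
    # TV_m(u) = (1/2) sum_{i,j} w_ij |u_i - u_j|
    return 0.5 * sum(W(i, j) * abs(u[i] - u[j])
                     for i in range(n) for j in range(n))

def mass(u):
    # integral of u with respect to nu_G
    return sum(u[i] * deg[i] for i in range(n))

Omega = (1, 2, 3)
def indicator(E):
    return [1.0 if i in E else 0.0 for i in range(n)]

# h_1^m(Omega): infimum of P_m(E)/nu_G(E) over nonempty subsets E of Omega
h = min(tv(indicator(E)) / mass(indicator(E))
        for r in range(1, len(Omega) + 1)
        for E in itertools.combinations(Omega, r))

def random_check(trials=500, seed=3):
    # Theorem 5.4: TV_m(u)/(integral of u) >= h_1^m(Omega) for every
    # nonnegative u supported in Omega
    rng = random.Random(seed)
    return all(tv(u) / mass(u) >= h - 1e-12
               for u in ([rng.random() if i in Omega else 0.0
                          for i in range(n)] for _ in range(trials)))
```

Here the infimum is attained by \(E=\Omega \) itself, so this \(\Omega \) is also m-calibrable.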

Let us recall that, in the local case, a set \(\Omega \subset {\mathbb {R}}^N\) is called calibrable if

$$\begin{aligned} \frac{\text{ Per }(\Omega )}{\vert \Omega \vert } = \inf \left\{ \frac{\text{ Per }(E)}{\vert E\vert } \ : \ E \subset \Omega , \ E \ \hbox { with finite perimeter,} \ \vert E \vert > 0 \right\} . \end{aligned}$$

The following characterization of convex calibrable sets is proved in [1].

Theorem 5.5

[1] Given a bounded convex set \(\Omega \subset {\mathbb {R}}^N\) of class \(C^{1,1}\), the following assertions are equivalent:

  1. (a)

    \(\Omega \) is calibrable.

  2. (b)

    \(\upchi _\Omega \) satisfies \(- \Delta _1 \upchi _\Omega = \frac{\text{ Per }(\Omega )}{\vert \Omega \vert } \upchi _\Omega \), where \(\Delta _1 u:= \mathrm{div} \left( \frac{Du}{\vert Du \vert }\right) \).

  3. (c)

    \(\displaystyle (N-1) \underset{x \in \partial \Omega }{\mathrm{ess\, sup}} H_{\partial \Omega } (x) \le \frac{\hbox {Per}(\Omega )}{\vert \Omega \vert }.\)

Remark 5.6

  1. (1)

    Let \(\Omega \subset X\) be a \(\nu \)-measurable set with \(0<\nu (\Omega )<\nu (X)\) and assume that there exist a constant \(\lambda >0\) and a \(\nu \)-measurable function \(\tau \) such that \(\tau (x)=1\) for \(x\in \Omega \) and

    $$\begin{aligned} - \lambda \tau \in \Delta _1^m \upchi _\Omega \ \ \hbox {on} \ X. \end{aligned}$$

    Then, by Theorem 3.3, there exists \(\mathbf{g}\in L^\infty (X\times X, \nu \otimes m_x)\) antisymmetric with \(\Vert \mathbf{g} \Vert _{L^\infty (X \times X,\nu \otimes m_x)} \le 1\) satisfying

    $$\begin{aligned} -\int _{X}\mathbf{g}(x,y)\,dm_x(y)= \lambda \tau (x) \quad \hbox {for }\nu -\text{ a.e } x\in X \end{aligned}$$

    and

    $$\begin{aligned} -\int _{X} \int _{X}\mathbf{g}(x,y)dm_x(y)\,\upchi _\Omega (x)d\nu (x)=\mathcal {F}_m(\upchi _\Omega ) = P_m(\Omega ). \end{aligned}$$

    Then,

    $$\begin{aligned} \displaystyle \lambda \nu (\Omega )= & {} \int _{X} \lambda \tau (x)\upchi _\Omega (x) d\nu (x) \\ \displaystyle= & {} -\int _{X} \left( \int _{X}\mathbf{g}(x,y)\,dm_x(y) \right) \upchi _\Omega (x) d\nu (x) \\ \displaystyle= & {} P_m(\Omega ) \end{aligned}$$

    and, consequently,

    $$\begin{aligned} \lambda = \frac{P_m(\Omega )}{\nu (\Omega )}=:\lambda _\Omega ^m. \end{aligned}$$
  2. (2)

    Let \(\Omega \subset X\) be a \(\nu \)-measurable set with \(0<\nu (\Omega )<\nu (X)\), and \(\tau \) a \(\nu \)-measurable function with \(\tau (x)=1\) for \(x\in \Omega \). Then

    $$\begin{aligned} - \lambda ^m_\Omega \tau \in \Delta _1^m \upchi _\Omega \, \quad \hbox {in} \ X \ \Longleftrightarrow \ - \lambda ^m_\Omega \tau \in \Delta _1^m 0 \, \quad \hbox {in} \ X. \end{aligned}$$
    (5.3)

    Indeed, the left to right implication follows from the fact that

    $$\begin{aligned} \partial \mathcal {F}_m(u)\subset \partial \mathcal {F}_m(0), \end{aligned}$$

    and for the converse implication, we have that there exists \(\mathbf{g}\in L^\infty (X \times X,\nu \otimes m_x)\), \(\mathbf{g}(x,y) = -\mathbf{g}(y,x)\) for almost all \((x,y) \in X \times X\), \(\Vert \mathbf{g}\Vert _{L^\infty (X \times X,\nu \otimes m_x)} \le 1\), satisfying

    $$\begin{aligned} - \lambda ^m_\Omega \tau (x)=\int _{X} \mathbf{g}(x,y)\,dm_x(y) \quad \hbox {for }\nu -\text{ a.e. } x\in X. \end{aligned}$$

    Now, multiplying by \(\upchi _\Omega \), integrating over X and applying integration by parts, we get

    $$\begin{aligned} \displaystyle \lambda ^m_\Omega \nu ( \Omega )= & {} \lambda ^m_\Omega \int _{X} \tau (x)\upchi _\Omega (x) d\nu (x) = - \int _{X} \int _{X} \mathbf{g}(x,y) \upchi _\Omega (x) dm_x(y)d\nu (x) \\ \displaystyle= & {} \frac{1}{2} \int _{X} \int _{X} \mathbf{g}(x,y) (\upchi _\Omega (y) - \upchi _\Omega (x) ) dm_x(y)d\nu (x) \\ \displaystyle\le & {} \frac{1}{2} \int _{X} \int _{X} \left| \upchi _\Omega (y) - \upchi _\Omega (x)\right| dm_x(y)d\nu (x) = P_m(\Omega ) . \end{aligned}$$

    Then, since \(P_m(\Omega )=\lambda ^m_\Omega \nu ( \Omega )\), the previous inequality is, in fact, an equality and, therefore, we get

    $$\begin{aligned} \mathbf{g}(x,y) \in \hbox {sign}(\upchi _\Omega (y) - \upchi _\Omega (x) ) \quad \hbox {for }(\nu \otimes m_x)-\text{ a.e. } (x,y) \in X \times X, \end{aligned}$$

    and, consequently,

    $$\begin{aligned} - \lambda ^m_\Omega \tau \in \Delta ^m_1 \upchi _\Omega \quad \hbox {in} \ X. \end{aligned}$$

The next result is the nonlocal version of the fact that (a) is equivalent to (b) in Theorem 5.5.

Theorem 5.7

Let \(\Omega \subset X\) be a \(\nu \)-measurable set with \(0<\nu (\Omega )<\nu (X)\). Then, the following assertions are equivalent:

  1. (i)

    \(\Omega \) is m-calibrable,

  2. (ii)

    there exists a \(\nu \)-measurable function \(\tau \) equal to 1 in \(\Omega \) such that

    $$\begin{aligned} - \lambda ^m_\Omega \tau \in \Delta _1^m \upchi _\Omega \, \quad \hbox {in} \ X, \end{aligned}$$
    (5.4)
  3. (iii)
    $$\begin{aligned} - \lambda ^m_\Omega \tau ^* \in \Delta _1^m \upchi _\Omega \, \quad \hbox {in} \ X, \end{aligned}$$

    for

    $$\begin{aligned} \tau ^*(x)=\left\{ \begin{array}{ll} 1 &{}\quad \hbox {if } x\in \Omega ,\\ \displaystyle - \frac{1}{\lambda _\Omega ^m} m_x(\Omega )&{}\quad \hbox {if } x\in X{\setminus }\Omega . \end{array} \right. \end{aligned}$$

Proof

Observe that, since we are assuming that the metric random walk space is m-connected, we have \(P_m(\Omega )>0\) and, therefore, \(\lambda _\Omega ^m>0\).

\((iii)\Rightarrow (ii)\) is trivial.

\((ii)\Rightarrow (i)\): Suppose that there exists a \(\nu \)-measurable function \(\tau \) equal to 1 in \(\Omega \) satisfying (5.4). Hence, there exists \(\mathbf{g}\in L^\infty (X\times X, \nu \otimes m_x)\) antisymmetric with \(\Vert \mathbf{g} \Vert _{L^\infty (X \times X,\nu \otimes m_x)} \le 1\) satisfying

$$\begin{aligned} -\int _{X}\mathbf{g}(x,y)\,dm_x(y)= \lambda _\Omega ^m \tau (x) \quad \nu -\text{ a.e. } x\in X \end{aligned}$$

and

$$\begin{aligned} -\int _{X} \int _{X}\mathbf{g}(x,y)dm_x(y)\,\upchi _\Omega (x)d\nu (x)=P_m(\Omega ). \end{aligned}$$

Then, for \(F\subset \Omega \) with \(\nu (F) >0\), since \(\mathbf{g}\) is antisymmetric, by using the reversibility of \(\nu \) with respect to m, we have

$$\begin{aligned} \displaystyle \lambda ^m_\Omega \nu ( F)= & {} \lambda ^m_\Omega \int _{X} \tau (x) \upchi _F(x) d\nu (x) = - \int _{X} \int _{X}\mathbf{g}(x,y) \upchi _F(x) \,dm_x(y) d\nu (x) \\ \displaystyle= & {} \frac{1}{2}\int _{X} \int _{X} \mathbf{g}(x,y) (\upchi _F(y) - \upchi _F(x)) \,dm_x(y) d\nu (x) \le P_m(F). \end{aligned}$$

Therefore, \(\lambda ^m_\Omega \le \frac{P_m(F)}{\nu (F)}\) for every such \(F\), so that \(h_1^m(\Omega ) = \lambda ^m_\Omega \) and, consequently, \(\Omega \) is m-calibrable.

\((i)\Rightarrow (iii)\) Suppose that \(\Omega \) is m-calibrable. Let

$$\begin{aligned} \tau ^*(x)=\left\{ \begin{array}{ll} 1 &{}\quad \hbox {if } x\in \Omega ,\\ \displaystyle - \frac{1}{\lambda _\Omega ^m} m_x(\Omega )&{}\quad \hbox {if } x\in X{\setminus }\Omega . \end{array} \right. \end{aligned}$$

We claim that \(-\lambda ^m_\Omega \tau ^* \in \Delta _1^m0\), that is,

$$\begin{aligned} \lambda ^m_\Omega \tau ^* \in \partial \mathcal {F}_m (0). \end{aligned}$$
(5.5)

Take \( w \in L^2(X,\nu )\) with \(\mathcal {F}_m(w)<+\infty \). Since

$$\begin{aligned} w(x) = \int _0^{+\infty } \upchi _{E_t(w)}(x) dt - \int _{-\infty }^0(1-\upchi _{E_t(w)})(x) dt, \end{aligned}$$

and

$$\begin{aligned} \displaystyle \int _X\tau ^*(x)d\nu (x)= \int _\Omega 1d\nu (x)-\frac{1}{\lambda _\Omega ^m}\int _{X{\setminus }\Omega } m_x(\Omega )d\nu (x)=\nu (\Omega )-\frac{1}{\lambda _\Omega ^m}P_m(\Omega )= 0, \end{aligned}$$

we have

$$\begin{aligned} \int _{X} \lambda ^m_\Omega \tau ^*(x) w(x) d\nu (x) = \lambda ^m_\Omega \int _{-\infty }^{+\infty } \int _{X} \tau ^*(x) \upchi _{E_t(w)}(x) d\nu (x) dt. \end{aligned}$$

Now, using that \(\tau ^*=1\) in \(\Omega \) and \(\Omega \) is m-calibrable we have that

$$\begin{aligned}&\displaystyle \lambda _\Omega ^m\int _{-\infty }^{+\infty } \int _{X} \tau ^*(x) \upchi _{E_t(w)}(x) d\nu (x)dt = \lambda _\Omega ^m \int _{-\infty }^{+\infty } \nu ( E_t(w)\cap \Omega ) dt\\&\qquad +\,\lambda _\Omega ^m \int _{-\infty }^{+\infty } \int _{E_t(w){\setminus } \Omega } \tau ^*(x) d\nu (x) dt \\&\quad \displaystyle \le \int _{-\infty }^{+\infty } P_m( E_t(w)\cap \Omega ) dt +\lambda _\Omega ^m\int _{-\infty }^{+\infty } \int _{E_t(w){\setminus } \Omega } \tau ^*(x) d\nu (x) dt. \end{aligned}$$

By Proposition 2.2 and the coarea formula given in Theorem 2.7 we get

$$\begin{aligned} \displaystyle \int _{-\infty }^{+\infty } P_m( E_t(w)\cap \Omega ) dt= & {} \int _{-\infty }^{+\infty } P_m( E_t(w)\cap \Omega ) dt +\int _{-\infty }^{+\infty } P_m( E_t(w){\setminus } \Omega ) dt \\&-\,\int _{-\infty }^{+\infty } 2L_m(E_t(w){\setminus } \Omega ,E_t(w)\cap \Omega )dt\\&\displaystyle - \int _{-\infty }^{+\infty } P_m( E_t(w){\setminus } \Omega ) dt\\&+\,\int _{-\infty }^{+\infty }2L_m(E_t(w){\setminus } \Omega ,E_t(w)\cap \Omega )dt \\= & {} \int _{ -\infty }^{+\infty } P_m( E_t(w) ) dt - \int _{-\infty }^{+\infty } P_m( E_t(w){\setminus } \Omega ) dt\\&+\,\int _{-\infty }^{+\infty }2L_m(E_t(w){\setminus } \Omega ,E_t(w)\cap \Omega )dt \\= & {} \mathcal {F}_m(w) - \int _{-\infty }^{+\infty } P_m( E_t(w){\setminus } \Omega ) dt\\&+\,\int _{-\infty }^{+\infty }2L_m(E_t(w){\setminus } \Omega ,E_t(w)\cap \Omega )dt. \end{aligned}$$

Hence, if we prove that

$$\begin{aligned} I= & {} - \int _{-\infty }^{+\infty } P_m( E_t(w){\setminus } \Omega ) dt+\int _{-\infty }^{+\infty }2L_m(E_t(w){\setminus } \Omega ,E_t(w)\cap \Omega )dt \\&+\,\lambda _\Omega ^m\int _{-\infty }^{+\infty } \int _{E_t(w){\setminus } \Omega } \tau ^*(x) d\nu (x) dt \le 0, \end{aligned}$$

we get

$$\begin{aligned} \int _{X} \lambda ^m_\Omega \tau ^*(x) w(x) d\nu (x) \le \mathcal {F}_m(w), \end{aligned}$$

which proves (5.5). Now, since

$$\begin{aligned} P_m( E_t(w){\setminus } \Omega )= & {} L_m(E_t(w){\setminus } \Omega , X {\setminus } (E_t(w){\setminus } \Omega ) ) \\= & {} L_m(E_t(w){\setminus } \Omega , (E_t(w)\cap \Omega )\overset{.}{\cup } (X {\setminus } E_t(w)) ), \end{aligned}$$

and \(\tau ^*(x)=- \frac{1}{\lambda _\Omega ^m} m_x(\Omega )\) for \(x\in X{\setminus }\Omega \), we have

$$\begin{aligned} \displaystyle I= & {} - \int _{-\infty }^{+\infty } L_m(E_t(w){\setminus } \Omega , X{\setminus } E_t(w) ) dt + \int _{-\infty }^{+\infty }L_m(E_t(w){\setminus } \Omega ,E_t(w)\cap \Omega )dt \\ \displaystyle&-\, \int _{-\infty }^{+\infty } \int _{E_t(w){\setminus } \Omega } \int _{ \Omega }dm_x(y) d\nu (x) dt \\ \displaystyle\le & {} \int _{-\infty }^{+\infty }L_m(E_t(w){\setminus } \Omega ,E_t(w)\cap \Omega )dt -\int _{-\infty }^{+\infty } L_m(E_t(w){\setminus } \Omega ,\Omega )dt \le 0. \end{aligned}$$

Then, by (5.3), we have that

$$\begin{aligned} - \lambda ^m_\Omega \tau ^* \in \Delta _1^m \upchi _\Omega \, \quad \hbox {in} \ X, \end{aligned}$$

and this concludes the proof. \(\square \)

Even though, in principle, the m-calibrability of a set is a nonlocal concept, in the next result we will see that it depends only on the behaviour of the random walk on the set itself.

Theorem 5.8

Let \(\Omega \subset X\) be a \(\nu \)-measurable set with \(0<\nu (\Omega )<\nu (X)\). Then, \(\Omega \) is m-calibrable if, and only if, there exists an antisymmetric function \(\mathbf{g}\) in \(\Omega \times \Omega \) such that

$$\begin{aligned} -1\le \mathbf{g}(x,y)\le 1 \qquad \hbox {for }(\nu \otimes m_x)-\hbox {a.e. }(x,y) \in \Omega \times \Omega , \end{aligned}$$
(5.6)

and

$$\begin{aligned} \lambda _\Omega ^m = -\int _{\Omega }\mathbf{g}(x,y)\,dm_x(y) + 1 - m_x(\Omega ), \quad \hbox {for }\nu \hbox {-a.e. } x \in \Omega . \end{aligned}$$
(5.7)

Observe that, on account of (2.1), (5.7) is equivalent to

$$\begin{aligned} m_x(\Omega )=\frac{1}{\nu (\Omega )}\int _\Omega m_z(\Omega )d\nu (z)-\int _{\Omega } \mathbf{g}(x,y) \,dm_x(y) \qquad \hbox {for }\nu \hbox {-a.e. }x\in \Omega . \end{aligned}$$
(5.8)

Proof

By Theorem 5.7, we have that \(\Omega \) is m-calibrable if, and only if, there exists \(\mathbf{g}\in L^\infty (X\times X, \nu \otimes m_x)\) antisymmetric, \(\Vert \mathbf{g} \Vert _{L^\infty (X \times X,\nu \otimes m_x)} \le 1\), with \(\mathbf{g}(x,y) \in \mathrm{sign}(\upchi _\Omega (y) - \upchi _\Omega (x))\) for \(\nu \otimes m_x\)-a.e. \((x,y) \in X\times X\), satisfying

$$\begin{aligned} \lambda _\Omega ^m = -\int _{X}\mathbf{g}(x,y)\,dm_x(y)\quad \hbox {for } \nu -\text{ a.e. } x\in \Omega \end{aligned}$$
(5.9)

and

$$\begin{aligned} m_x(\Omega ) = \int _{X}\mathbf{g}(x,y)\,dm_x(y) \quad \hbox {for }\nu -\text{ a.e. } x\in X {\setminus } \Omega . \end{aligned}$$

Now, having in mind that \(\mathbf{g}(x,y) = -1\) if \(x \in \Omega \) and \(y \in X {\setminus } \Omega \), we have that, for \(x\in \Omega \),

$$\begin{aligned} \lambda _\Omega ^m= & {} 1 - \frac{1}{\nu (\Omega )}\int _\Omega m_x(\Omega ) d \nu (x) = -\int _{X}\mathbf{g}(x,y)\,dm_x(y) \\= & {} -\int _{\Omega }\mathbf{g}(x,y)\,dm_x(y) -\int _{X {\setminus } \Omega }\mathbf{g}(x,y)\,dm_x(y)\\= & {} -\int _{\Omega }\mathbf{g}(x,y)\,dm_x(y) + m_x(X {\setminus } \Omega ) = -\int _{\Omega }\mathbf{g}(x,y)\,dm_x(y) + 1 - m_x(\Omega ). \end{aligned}$$

Bringing together (5.9) and these equalities we get (5.6) and (5.7).

Let us now suppose that we have an antisymmetric function \(\mathbf{g}\) in \(\Omega \times \Omega \) satisfying (5.6) and (5.7). To check that \(\Omega \) is m-calibrable we need to find \({\tilde{\mathbf{g}}}(x,y)\in \hbox {sign}\left( \upchi _\Omega (y)-\upchi _\Omega (x)\right) \) antisymmetric such that

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle -\lambda _\Omega ^m=\int _X{\tilde{\mathbf{g}}}(x,y)dm_x(y),\quad x\in \Omega ,\\ \displaystyle m_x(\Omega )=\int _X{\tilde{\mathbf{g}}}(x,y)dm_x(y),\quad x\in X{\setminus } \Omega , \end{array}\right. \end{aligned}$$

which is equivalent to

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle -\lambda _\Omega ^m=\int _\Omega {\tilde{\mathbf{g}}}(x,y)dm_x(y)-m_x(X{\setminus }\Omega ), \quad x\in \Omega ,\\ \displaystyle m_x(\Omega )=\int _{X{\setminus }\Omega }{\tilde{\mathbf{g}}}(x,y)dm_x(y)+m_x(\Omega ),\quad x\in X{\setminus } \Omega , \end{array}\right. \end{aligned}$$

since, necessarily, \({\tilde{\mathbf{g}}}(x,y)=-1\) for \(x\in \Omega \) and \(y\in X{\setminus }\Omega \), and \({\tilde{\mathbf{g}}}(x,y)=1\) for \(x\in X{\setminus }\Omega \) and \(y\in \Omega \). Now, the second equality in this system is satisfied if we take \({\tilde{\mathbf{g}}}(x,y)=0\) for \(x,y\in X{\setminus }\Omega \), and the first one is equivalent to (5.8) if we take \({\tilde{\mathbf{g}}}(x,y)=\mathbf{g}(x,y)\) for \(x,y\in \Omega \).

\(\square \)
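As an illustration, for a two-vertex set \(\Omega =\{x,y\}\) in a loop-free graph (which is \(m^G\)-calibrable by Remark 5.1 (2)), the system (5.8) can be solved by hand: the single unknown \(\mathbf{g}(x,y)\) turns out to be \((d_x-d_y)/(d_x+d_y)\), which indeed lies in \([-1,1]\). A sketch with exact arithmetic (the randomly weighted graph is an arbitrary choice):

```python
import random
from fractions import Fraction as F

random.seed(11)
n = 5
# loop-free weighted graph with positive weights; Omega = {x, y} = {0, 1}
w = [[F(0)] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        w[i][j] = w[j][i] = F(random.randint(1, 4))
deg = [sum(row) for row in w]

x, y = 0, 1
dx, dy = deg[x], deg[y]
mxO = w[x][y] / dx   # m_x(Omega): no loops, so only the edge xy counts
myO = w[x][y] / dy   # m_y(Omega)
avg = (dx * mxO + dy * myO) / (dx + dy)   # (1/nu(Omega)) * integral of m_z(Omega)

# solve (5.8) at x for the single unknown gamma = g(x,y) = -g(y,x):
#   m_x(Omega) = avg - gamma * m_x({y}),  with m_x({y}) = w_xy / d_x
gamma = (avg - mxO) / (w[x][y] / dx)
```

Once (5.8) holds at x, it holds automatically at y by antisymmetry, consistent with the fact that every such pair is calibrable.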

Set

$$\begin{aligned} \Omega _m:=\Omega \cup \partial _m\Omega \end{aligned}$$
(5.10)

where

$$\begin{aligned} \partial _m\Omega =\{ x\in X{\setminus } \Omega \ : \ m_x(\Omega )>0\}. \end{aligned}$$

Corollary 5.9

A \(\nu \)-measurable set \(\Omega \subset X\) is m-calibrable if, and only if, it is \(m^{\Omega _m}\)-calibrable as a subset of \([\Omega _m,d,m^{\Omega _m}]\) with reversible measure \(\nu |\!\_\Omega _m\) (see Example 1.1 (5)).

Remark 5.10

  1. (1)

    Let \(\Omega \subset X\) be a \(\nu \)-measurable set with \(0<\nu (\Omega )<\nu (X)\). Observe that, as we have proved,

    $$\begin{aligned} \Omega \ \hbox {is }m\hbox {-calibrable} \ \Longleftrightarrow \ -\lambda _\Omega ^m\,\upchi _\Omega +m_{_{(.)}}(\Omega )\,\upchi _{X{\setminus }\Omega } \in \Delta _1^m\upchi _\Omega \, . \end{aligned}$$
    (5.11)
  2. (2)

    Let \(\Omega \subset X\) be a \(\nu \)-measurable set. If

    $$\begin{aligned} -\lambda _\Omega ^m\,\upchi _\Omega +h\,\upchi _{X{\setminus }\Omega } \in \Delta _1^m\upchi _\Omega \end{aligned}$$
    (5.12)

    for some \(\nu \)-measurable function h, then there exists \(\mathbf{g}\in L^\infty (X\times X, \nu \otimes m_x)\) antisymmetric with \(\Vert \mathbf{g} \Vert _{L^\infty (X \times X,\nu \otimes m_x)} \le 1\) satisfying

    $$\begin{aligned} \mathbf{g}(x,y)\in \mathrm{sign}(\upchi _{\Omega }(y) - \upchi _{\Omega }(x)) \quad (\nu \otimes m_x)-a.e. \ (x,y) \in X \times X \end{aligned}$$

    and

    $$\begin{aligned} -\lambda _\Omega ^m \,\upchi _\Omega (x) +h(x)\,\upchi _{X{\setminus }\Omega }(x)=\int _{X}\mathbf{g}(x,y)\,dm_x(y) \quad \nu -\text{ a.e } x\in X. \end{aligned}$$

    Hence, if

    $$\begin{aligned} \mathbf{g}\hbox { is }\nu \otimes m_x\hbox {-integrable} \end{aligned}$$

    we have that

    $$\begin{aligned} \int _{X{\setminus }\Omega }h(x)d\nu (x)=P_m(\Omega ). \end{aligned}$$

    Indeed, from (5.12), for \(x\in X{\setminus }\Omega \),

    $$\begin{aligned} h(x)= & {} \int _{X}\mathbf{g}(x,y)\,dm_x(y)=\int _{\Omega }\mathbf{g}(x,y)\,dm_x(y)+\int _{X{\setminus }\Omega }\mathbf{g}(x,y)\,dm_x(y)\\= & {} \int _{\Omega }\,dm_x(y)+\int _{X{\setminus }\Omega }\mathbf{g}(x,y)\,dm_x(y)\\= & {} m_x(\Omega )+\int _{X{\setminus }\Omega }\mathbf{g}(x,y)\,dm_x(y). \end{aligned}$$

    Hence, integrating over \(X{\setminus }\Omega \) with respect to \(\nu \), we get

    $$\begin{aligned} \int _{X{\setminus }\Omega }h(x)d\nu (x)=P_m(\Omega )+\int _{X{\setminus }\Omega }\int _{X{\setminus }\Omega }\mathbf{g}(x,y)\,dm_x(y)d\nu (x). \end{aligned}$$

    Moreover, since \(\mathbf{g}\) is antisymmetric and \(\nu \otimes m_x\)-integrable, we have

    $$\begin{aligned} \int _{X{\setminus }\Omega }\int _{X{\setminus }\Omega }\mathbf{g}(x,y)\,dm_x(y)d\nu (x) =\int _{(X{\setminus }\Omega )\times (X{\setminus }\Omega )}\mathbf{g}(x,y)\,d(\nu \otimes m_x)(x,y)=0, \end{aligned}$$

    and, consequently, we get

    $$\begin{aligned} \int _{X{\setminus }\Omega }h(x)d\nu (x)=P_m(\Omega ). \end{aligned}$$
    (5.13)

    As a consequence of (5.13), if \(\nu (X)<\infty \), since the metric random walk space is m-connected, the relation

    $$\begin{aligned} -\lambda _\Omega ^m\,\upchi _\Omega \in \Delta _1^m\upchi _\Omega \quad \hbox {in }X \end{aligned}$$
    (5.14)

does not hold for any \(\nu \)-measurable set \(\Omega \) with \(0<\nu (\Omega )<\nu (X)\) (recall that, for such \(\Omega \), \(P_m(\Omega )>0\) by [36, Theorems 2.21 and 2.24], and thus h is non-null by (5.13)). However, if \(\nu (X)= +\infty \), then (5.14) may be satisfied, as shown in the next example.

Example 5.11

Consider the metric random walk space \([{\mathbb {R}}, d, m^J]\) with \(\nu =\mathcal {L}^1\) and \(J=\frac{1}{2}\upchi _{[-1,1]}\). Let us see that

$$\begin{aligned} -\lambda _{[-1,1]}^{m^J}\upchi _{[-1,1]}\in \Delta _1^{m^J}\upchi _{[-1,1]}, \end{aligned}$$

where \(\lambda _{[-1,1]}^{m^J}=\frac{1}{4}\). Indeed, take \(\mathbf{g}(x,y)\) to be antisymmetric and defined as follows for \(y<x\):

$$\begin{aligned} \mathbf{g}(x,y)= & {} -\upchi _{\{y<x<y+1<0\}}(x,y)-\frac{1}{2}\upchi _{\{-1<y< x< 0\}}(x,y)\\&+\,\frac{1}{2}\upchi _{\{0<y< x< 1\}}(x,y)+\upchi _{\{0<x-1<y<x\}}(x,y). \end{aligned}$$

Then, \(\mathbf{g}\in L^\infty ({\mathbb {R}}\times {\mathbb {R}}, \nu \otimes m^J_x)\), \(\Vert \mathbf{g}\Vert _{L^\infty ({\mathbb {R}}\times {\mathbb {R}},\nu \otimes m^J_x)} \le 1\),

$$\begin{aligned} \mathbf{g}(x,y)\in \mathrm{sign}(\upchi _{[-1,1]}(y) - \upchi _{[-1,1]}(x)) \quad \hbox {for }(\nu \otimes m^J_x)-a.e. \ (x,y) \in {\mathbb {R}}\times {\mathbb {R}}, \end{aligned}$$

and

$$\begin{aligned} -\frac{1}{4} \upchi _{[-1,1]}(x) =\int _{{\mathbb {R}}}\mathbf{g}(x,y)\,dm^J_x(y) \quad \hbox {for }\nu -\text{ a.e } x\in {\mathbb {R}}. \end{aligned}$$

Note that \(\mathbf{g}\) is not \(\nu \otimes m^J_x\)-integrable.
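The identity above can be checked numerically. The following sketch (ours, not from the paper; the quadrature routine and sample points are our choices) approximates \(\int _{{\mathbb {R}}}\mathbf{g}(x,y)\,dm^J_x(y)=\frac{1}{2}\int _{x-1}^{x+1}\mathbf{g}(x,y)\,dy\) by a midpoint rule and compares it with \(-\frac{1}{4}\upchi _{[-1,1]}(x)\) at generic points (the identity is only claimed \(\nu \)-a.e., so the finitely many break points \(x=0,\pm 1\) are avoided):

```python
# Numerical sketch (ours, not from the paper): midpoint-rule check that the
# antisymmetric field g of Example 5.11 satisfies
#   (1/2) * int_{x-1}^{x+1} g(x, y) dy = -(1/4) * chi_[-1,1](x)
# at generic points x.

def g(x, y):
    if y > x:
        return -g(y, x)          # extend by antisymmetry
    if y < x < y + 1 < 0:
        return -1.0
    if -1 < y < x < 0:
        return -0.5
    if 0 < y < x < 1:
        return 0.5
    if 0 < x - 1 < y < x:
        return 1.0
    return 0.0

def nonlocal_div(x, n=20000):
    """Midpoint-rule approximation of (1/2) * int_{x-1}^{x+1} g(x, y) dy."""
    h = 2.0 / n
    return 0.5 * h * sum(g(x, x - 1 + (i + 0.5) * h) for i in range(n))

for x in (-2.0, -0.5, 0.3, 0.5, 1.5, 2.3):
    target = -0.25 if -1 < x < 1 else 0.0
    print(f"x = {x:5.2f}: integral = {nonlocal_div(x):+.4f}, expected {target:+.4f}")
```

Since the integrand is piecewise constant, the midpoint rule resolves each sample point to within a few multiples of the step size.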

Remark 5.12

As a consequence of Theorem 5.5, it holds that (see [1, Introduction] or [5, Section 4.4]) a bounded convex set \(\Omega \subset {\mathbb {R}}^N\) is calibrable if, and only if, \(u(t,x) = \left( 1 - \frac{\text{ Per }(\Omega )}{\vert \Omega \vert } t\right) ^+ \upchi _\Omega (x)\) is a solution of the Cauchy problem

$$\begin{aligned} \left\{ \begin{array}{ll} u_t -\Delta _1 u\ni 0 \quad \hbox {in} \ (0, \infty ) \times {\mathbb {R}}^N, \\ u(0) = \upchi _\Omega .\end{array}\right. \end{aligned}$$

That is, a calibrable set \(\Omega \) is one for which the gradient descent flow associated with the total variation decreases the height of \(\upchi _\Omega \) linearly, without distorting its boundary.

Now, as a consequence of (5.11), we can obtain a similar result in our context if we introduce an absorption term in the corresponding Cauchy problem. The appearance of this term is due to the nonlocality of the diffusion considered. Let \(\Omega \subset X\) be a \(\nu \)-measurable set with \(0<\nu (\Omega )<\nu (X)\), then \(\Omega \) is m-calibrable if, and only if, \(u(t,x) = \left( 1 - \lambda ^m_\Omega t\right) ^+ \upchi _\Omega (x)\) is a solution of

$$\begin{aligned} \left\{ \begin{array}{ll} u_t(t,x) -\Delta ^m_1 u(t,x)\ni - m_{x}(\Omega )\,\upchi _{X{\setminus }\Omega }(x) \upchi _{[0,1/\lambda _\Omega ^m)}(t) &{}\quad \hbox {in} \ (0,\infty ) \times X, \\ u(0,x) = \upchi _\Omega (x), &{}\quad x \in X. \end{array}\right. \end{aligned}$$

Note that the “only if” direction follows from the uniqueness of the solution.

The following result relates m-calibrability with the m-mean curvature; it is the nonlocal version of one of the implications in the equivalence between (a) and (c) in Theorem 5.5.

Proposition 5.13

Let \(\Omega \subset X\) be a \(\nu \)-measurable set with \(0<\nu (\Omega )<\nu (X) \). Then,

$$\begin{aligned} \displaystyle \Omega \ m\hbox {-calibrable} \ \Rightarrow \ \frac{1}{\nu (\Omega )}\int _\Omega m_x(\Omega )d\nu (x) \le 2\,\nu \hbox {-}\underset{x\in \Omega }{\mathrm{ess\ inf}}\ m_x(\Omega ). \end{aligned}$$
(5.15)

Equivalently,

$$\begin{aligned} \Omega \ m\hbox {-calibrable} \ \Rightarrow \ \nu \hbox {-}\underset{x\in \Omega }{\mathrm{ess\,sup}} \ H^{m}_{\partial \Omega }(x) \le \lambda ^m_\Omega . \end{aligned}$$
(5.16)

Proof

By Theorem 5.8, there exists an antisymmetric function \(\mathbf{g}\) in \(\Omega \times \Omega \) such that

$$\begin{aligned} -1\le \mathbf{g}(x,y)\le & {} 1 \qquad \hbox {for }(\nu \otimes m_x)-\text{ a.e. } (x,y) \in \Omega \times \Omega , \end{aligned}$$

and

$$\begin{aligned} \frac{1}{\nu (\Omega )}\int _\Omega m_z(\Omega )d\nu (z)=m_x(\Omega )+\int _{\Omega } \mathbf{g}(x,y) \,dm_x(y) \qquad \hbox {for }\nu \hbox {-a.e. }x\in \Omega . \end{aligned}$$

Hence,

$$\begin{aligned} \frac{1}{\nu (\Omega )}\int _\Omega m_z(\Omega )d\nu (z)\le 2 m_x(\Omega )\quad \hbox {for }\nu \hbox {-a.e. }x\in \Omega , \end{aligned}$$

from which (5.15) follows.

The equivalent thesis (5.16) follows from (5.15) and the fact that

$$\begin{aligned} \nu \hbox {-}\underset{x\in \Omega }{\mathrm{ess\,sup}}\ H^{m}_{\partial \Omega }(x) \le \lambda ^m_\Omega \ \Longleftrightarrow \ \displaystyle \frac{1}{\nu (\Omega )}\int _\Omega m_x(\Omega )d\nu (x) \le 2 \,\nu \hbox {-}\underset{x\in \Omega }{\mathrm{ess\ inf}}\ m_x(\Omega ). \end{aligned}$$

For this last equivalence recall from (2.3) that

$$\begin{aligned} H^m_{\partial \Omega }(x) = 1 - 2 m_x(\Omega ) \end{aligned}$$

and that

$$\begin{aligned} \lambda _\Omega ^m=\frac{P_m(\Omega )}{\nu (\Omega )}=1-\frac{1}{\nu (\Omega )}\int _\Omega m_x(\Omega ) d\nu (x). \end{aligned}$$

\(\square \)

The converse of Proposition 5.13 is not true in general; an example is given in [34] (see also [35]) for \([{\mathbb {R}}^3, d, m^J]\), with d the Euclidean distance and \(J= \frac{1}{|B_1(0)|} \upchi _{B_1(0)}\). Let us now give an example, in the case of graphs, where the converse of Proposition 5.13 fails.

Example 5.14

Let G be the finite weighted discrete graph with vertex set \(V(G)=\{ x_1,x_2,\ldots , x_8 \} \) and the following weights: \( w_{x_1,x_2}= w_{x_2,x_3} = w_{x_6,x_7} = w_{x_7,x_8}=2 , \ w_{x_3,x_4}= w_{x_5,x_6} = 1 , \ w_{x_4,x_5}=10\) and \(w_{x_i,x_j}=0\) otherwise. If \(\Omega := \{ x_2, x_3, x_4, x_5, x_6, x_7 \}\), we have

$$\begin{aligned} \lambda _\Omega ^{m^G} = \frac{1}{9} \quad \hbox {and} \quad H^{m^G}_{\partial \Omega }(x) \le 0 \ \quad \forall \, x \in \Omega . \end{aligned}$$

Therefore, (5.16) holds. However, \(\Omega \) is not \(m^G\)-calibrable since, if \(A:= \{ x_4, x_5 \}\), we have

$$\begin{aligned} \frac{P_{m^G}(A)}{\nu _G(A)} = \frac{1}{11}. \end{aligned}$$
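These values can be double-checked with a few lines of code. The sketch below (ours, not from the paper) encodes the graph with the weights \(w_{x_3,x_4}=w_{x_5,x_6}=1\), \(w_{x_4,x_5}=10\) and computes \(\lambda _\Omega ^{m^G}=P_{m^G}(\Omega )/\nu _G(\Omega )\) and \(P_{m^G}(A)/\nu _G(A)\):

```python
# Verification sketch (ours): lambda_Omega = 1/9 and P(A)/nu(A) = 1/11 for the
# graph of Example 5.14, using P_{m^G}(E) = sum_{x in E, y not in E} w_{xy}
# and nu_G(E) = sum_{x in E} d_x.

V = range(1, 9)                                   # vertices x_1, ..., x_8
w = {}
for i, j, val in [(1,2,2), (2,3,2), (6,7,2), (7,8,2), (3,4,1), (5,6,1), (4,5,10)]:
    w[i, j] = w[j, i] = val                       # symmetric weights

def weight(i, j): return w.get((i, j), 0)
def degree(i): return sum(weight(i, j) for j in V)        # d_x
def nu(E): return sum(degree(i) for i in E)               # nu_G(E)
def perimeter(E):                                         # P_{m^G}(E)
    return sum(weight(i, j) for i in E for j in V if j not in E)

Omega = {2, 3, 4, 5, 6, 7}
A = {4, 5}
print(perimeter(Omega), nu(Omega))   # 4 and 36, so lambda_Omega = 1/9
print(perimeter(A), nu(A))           # 2 and 22, so P(A)/nu(A) = 1/11
```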

Proposition 5.15

Let \(\Omega \subset X\) be a \(\nu \)-measurable set with \(0<\nu (\Omega )<\nu (X) \).

  1. (1)

    If \(\Omega =\Omega _1\cup \Omega _2\) with \(\nu (\Omega _1\cap \Omega _2)=0\), \(\nu (\Omega _1)>0\), \(\nu (\Omega _2)>0\), and \(L_m(\Omega _1,\Omega _2)=0\) (whenever this non-trivial decomposition is satisfied we will write \(\Omega =\Omega _1\cup _m\Omega _2\)), then

    $$\begin{aligned} \min \{\lambda _{\Omega _1}^m,\lambda _{\Omega _2}^m\}\le \lambda _\Omega ^m. \end{aligned}$$
  2. (2)

    If \(\Omega =\Omega _1\cup _m\Omega _2\) is m-calibrable, then each \(\Omega _i\) is m-calibrable, \(i=1,2\), and

    $$\begin{aligned} \lambda _\Omega ^m=\lambda _{\Omega _1}^m=\lambda _{\Omega _2}^m. \end{aligned}$$

Proof

(1) is a direct consequence of Proposition 2.2 and the fact that, for a, b, c, d positive real numbers, \(\min \left\{ \frac{a}{b},\frac{c}{d}\right\} \le \frac{a+c}{b+d}.\) (2) is a direct consequence of (1) together with the definition of m-calibrability. \(\square \)
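The mediant inequality invoked in the proof above is easy to spot-check numerically; a minimal sketch (ours, not from the paper):

```python
# Random spot-check (ours) of the mediant inequality
#   min(a/b, c/d) <= (a+c)/(b+d)   for positive reals a, b, c, d.
import random

random.seed(0)
for _ in range(10000):
    a, b, c, d = (random.uniform(1e-3, 100) for _ in range(4))
    assert min(a / b, c / d) <= (a + c) / (b + d) + 1e-12
print("mediant inequality verified on 10000 random samples")
```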

6 The eigenvalue problem for the 1-Laplacian in metric random walk spaces

Let [Xdm] be a metric random walk space with invariant and reversible measure \(\nu \) and assume that [Xdm] is m-connected.

In this section we introduce the eigenvalue problem associated with the 1-Laplacian \(\Delta ^m_1\) and its relation with the Cheeger minimization problem. For the particular case of finite weighted discrete graphs where the weights are either 0 or 1, this problem was first studied by Hein and Bühler [29] and a more complete study was subsequently performed by Chang in [14].

Definition 6.1

A pair \((\lambda , u) \in {\mathbb {R}}\times L^2(X, \nu )\) is called an m -eigenpair of the 1-Laplacian \(\Delta ^m_1\) on X if \(\Vert u \Vert _{L^1(X,\nu )} = 1\) and there exists \(\xi \in \mathrm{sign}(u)\) (i.e., \(\xi (x) \in \mathrm{sign}(u(x))\) for every \(x\in X\)) such that

$$\begin{aligned} \lambda \, \xi \in \partial \mathcal {F}_m(u) = - \Delta ^m_1 u. \end{aligned}$$

The function u is called an m-eigenfunction and \(\lambda \) an m-eigenvalue associated to u.

Observe that, if \((\lambda , u)\) is an m-eigenpair of \(\Delta ^m_1\), then \((\lambda , - u)\) is also an m-eigenpair of \(\Delta ^m_1\).

Remark 6.2

By Theorem 3.3, the following statements are equivalent:

  1. (1)

    \((\lambda , u)\) is an m-eigenpair of the 1-Laplacian \(\Delta ^m_1\) .

  2. (2)

    There exists \(\mathbf{g}\in L^\infty (X\times X, \nu \otimes m_x)\) antisymmetric with \(\Vert \mathbf{g} \Vert _{L^\infty (X \times X,\nu \otimes m_x)} \le 1\), such that

    $$\begin{aligned} \left\{ \begin{array}{ll} \displaystyle -\int _{X}\mathbf{g}(x,y)\,dm_x(y)= \lambda \xi (x) \quad \hbox {for }\nu -\text{ a.e. } x\in X, \\ \\ \displaystyle -\int _{X} \int _{X}\mathbf{g}(x,y)dm_x(y)\,u(x)d\nu (x)= TV_m(u). \end{array}\right. \end{aligned}$$
    (6.1)
  3. (3)

    There exists \(\mathbf{g}\in L^\infty (X\times X, \nu \otimes m_x)\) antisymmetric with \(\Vert \mathbf{g} \Vert _{L^\infty (X \times X,\nu \otimes m_x)} \le 1\), such that

    $$\begin{aligned} \left\{ \begin{array}{ll} \displaystyle -\int _{X}{} \mathbf{g}(x,y)\,dm_x(y)= \lambda \xi (x) \quad \hbox {for }\nu -\text{ a.e. } x\in X, \\ \\ \displaystyle \mathbf{g}(x,y)(u(y)-u(x))=|u(y)-u(x)|\quad \hbox {for }\nu \otimes m_x-\hbox {a.e. } (x,y)\in X\times X; \end{array}\right. \end{aligned}$$
    (6.2)
  4. (4)

    There exists \(\mathbf{g}\in L^\infty (X\times X, \nu \otimes m_x)\) antisymmetric with \(\Vert \mathbf{g} \Vert _{L^\infty (X \times X,\nu \otimes m_x)}\le 1\), such that

    $$\begin{aligned} \left\{ \begin{array}{ll} \displaystyle -\int _{X}\mathbf{g}(x,y)\,dm_x(y)= \lambda \xi (x) \quad \hbox {for }\nu -\text{ a.e. } x\in X, \\ \\ \displaystyle \lambda = TV_m(u); \end{array}\right. \end{aligned}$$

Remark 6.3

Note that, since \(TV_m(u)=\lambda \) for any m-eigenpair \((\lambda , u)\) of \(\Delta ^m_1\), then

$$\begin{aligned} \lambda =TV_m(u)= & {} \frac{1}{2} \int _X\int _X \vert u(y)-u(x)\vert dm_x(y)d\nu (x)\\\le & {} \frac{1}{2} \int _X\int _X (\vert u(y)\vert + \vert u(x)\vert ) dm_x(y)d\nu (x)=\Vert u \Vert _1=1, \end{aligned}$$

thus

$$\begin{aligned} 0\le \lambda \le 1. \end{aligned}$$

Example 6.4

Let \([V(G), d_G, m^G]\) be the metric random walk space given in Example 1.1 (3) with invariant and reversible measure \(\nu _G\). Then, a pair \((\lambda , u) \in {\mathbb {R}}\times L^2(V(G), \nu _G)\) is an \(m^G\)-eigenpair of \(\Delta ^{m^G}_1\) if \(\Vert u \Vert _{L^1(V(G),\nu _G)} = 1\) and there exists \(\xi \in \mathrm{sign}(u)\) and \(\mathbf{g}\in L^\infty (V(G)\times V(G), \nu _G \otimes m^{G}_x)\) antisymmetric with \(\Vert \mathbf{g} \Vert _{L^\infty (V(G)\times V(G),\nu _G\otimes m^G_x)} \le 1\) such that

$$\begin{aligned} \left\{ \begin{array}{ll} \displaystyle - \sum _{y \in V(G)}\mathbf{g}(x,y)\frac{w_{xy}}{d_x} = \lambda \xi (x) \quad \hbox {for }\nu _G-\text{ a.e. } x\in V(G), \\ \\ \displaystyle \mathbf{g}(x,y) \in \mathrm{sign}(u(y) - u(x))\quad \hbox {for }\nu _G \otimes m^{G}_x-\hbox {a.e. } (x,y)\in V(G)\times V(G). \end{array}\right. \end{aligned}$$

In [14], Chang gives the 1-Laplacian spectrum for some special graphs like the Petersen graph, the complete graph \(K_n\), the circle graph with n vertices \(C_n\), etc. We will now provide an example in which the vertices have loops. Let \(V = V(G)=\{ a, b \}\) and \(w_{aa} = w_{bb} = p\), \(w_{ab} = w_{ba} = 1- p\), with \(0< p <1\). Then, \((\lambda , u) \in {\mathbb {R}}\times L^2(V, \nu _G)\) is an \(m^G\)-eigenpair of \(\Delta ^{m^G}_1\) if \(\vert u(a) \vert + \vert u(b) \vert = 1\) and there exists \(\xi \in \mathrm{sign}(u)\) and \(\mathbf{g}\in L^\infty (V\times V, \nu _G \otimes m^{G}_x)\) antisymmetric with \(\Vert \mathbf{g} \Vert _{L^\infty (V\times V,\nu _G\otimes m^G_x)} \le 1\) such that

$$\begin{aligned} \left\{ \begin{array}{ll} \displaystyle \mathbf{g}(a,a)=\mathbf{g}(b,b)=0, \ \mathbf{g}(a,b)=-\mathbf{g}(b,a), \\ \\ \displaystyle -\mathbf{g}(a,b) (1-p)= \lambda \xi (a), \\ \\ \displaystyle \displaystyle \mathbf{g}(a,b)(1-p)= \lambda \xi (b), \\ \\ \displaystyle \mathbf{g}(a,b)(u(b)-u(a))=|u(b)-u(a)|. \end{array}\right. \end{aligned}$$
(6.3)

Now, it is easy to see from system (6.3), using a case-by-case argument, that the m-eigenvalues of \(\Delta ^{m^G}_1\) are

$$\begin{aligned} \lambda =0\ \hbox { and } \ \lambda =1-p, \end{aligned}$$

and the following pairs are m-eigenpairs of \(\Delta ^{m^G}_1\) (observe that the measure \(\nu _G\) is not normalized):

$$\begin{aligned} \begin{array}{l}\displaystyle \lambda = 0, \quad \hbox {and} \quad (u(a),u(b))= (1/2, 1/2), \\ \displaystyle \lambda = 1 - p, \quad \hbox {and} \quad (u(a),u(b)) =(0,-1)+\mu (1,1) \quad \forall 0\le \mu \le 1. \end{array} \end{aligned}$$

For example, suppose that \((\lambda , u)\) is an m-eigenpair with \(u(a)=u(b)\). Then, \(u(a)=u(b)=\frac{1}{2}\) (\(u(a)=u(b)=-\frac{1}{2}\) yields the same eigenvalue) and, therefore, \(\xi =1\), thus \(\lambda =0\). Alternatively, we could have \(u(a)>u(b)\), in which case \(\mathbf{g}(a,b)=-1\), and we continue by using (6.3).
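The case-by-case argument can be mechanized. The following sketch (ours, not from the paper; the helper names are our choices) checks that the listed pairs satisfy system (6.3) for a sample value of p:

```python
# Sketch (ours): verify that the listed pairs satisfy system (6.3) for the
# two-vertex graph with loops, w_aa = w_bb = p and w_ab = w_ba = 1 - p.

def in_sign(xi, t, tol=1e-12):
    """xi is a valid selection of sign(t)."""
    if t > 0: return abs(xi - 1) < tol
    if t < 0: return abs(xi + 1) < tol
    return -1 - tol <= xi <= 1 + tol

def satisfies_63(p, lam, ua, ub, xi_a, xi_b, gab, tol=1e-12):
    return (abs(abs(ua) + abs(ub) - 1) < tol                 # |u(a)| + |u(b)| = 1
            and in_sign(xi_a, ua) and in_sign(xi_b, ub)
            and abs(gab) <= 1 + tol
            and abs(-gab * (1 - p) - lam * xi_a) < tol       # equation at a
            and abs(gab * (1 - p) - lam * xi_b) < tol        # equation at b
            and abs(gab * (ub - ua) - abs(ub - ua)) < tol)   # g in sign(u(b)-u(a))

p = 0.3
assert satisfies_63(p, 0.0, 0.5, 0.5, 1, 1, 0)               # lambda = 0
for mu in (0.0, 0.25, 0.5, 0.75, 1.0):                       # lambda = 1 - p family
    assert satisfies_63(p, 1 - p, mu, mu - 1, 1, -1, -1)
print("all listed eigenpairs satisfy (6.3)")
```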

Observe that, if a locally finite weighted discrete graph contains a vertex x with no loop, i.e. \(w_{x,x}=0\), then \(\left( 1,\frac{1}{d_x}\delta _x\right) \) is an m-eigenpair of the 1-Laplacian. Conversely, if 1 is an m-eigenvalue of \(\Delta ^{m^G}_1\), then there exists at least one vertex in the graph with no loop (this follows easily from Proposition 6.12).
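The claim about loop-free vertices can be illustrated concretely. Below is a sketch (ours; the triangle graph and the witnesses \(\xi \), \(\mathbf{g}\) are our choices) exhibiting an eigenpair \((1, \frac{1}{d_x}\delta _x)\) on the loop-free graph with \(w_{ab}=w_{ac}=w_{bc}=\frac{1}{2}\):

```python
# Sketch (ours): on the loop-free triangle graph w_ab = w_ac = w_bc = 1/2,
# exhibit witnesses xi and antisymmetric g showing that (1, (1/d_a) chi_{a})
# is an m^G-eigenpair of the 1-Laplacian.

V = ['a', 'b', 'c']
w = {('a', 'b'): 0.5, ('a', 'c'): 0.5, ('b', 'c'): 0.5}
def weight(x, y): return 0 if x == y else w.get((x, y), w.get((y, x), 0))
def degree(x): return sum(weight(x, y) for y in V)

x0 = 'a'
u = {x: (1 / degree(x0) if x == x0 else 0.0) for x in V}
# ||u||_{L^1(nu_G)} = |u(x0)| * d_{x0} = 1
assert abs(sum(abs(u[x]) * degree(x) for x in V) - 1) < 1e-12

# witness g: -1 on edges leaving x0 (where u strictly decreases, so
# g in sign(u(y) - u(x))); 0 on edges where u is constant, which is admissible
g = {(x, y): (-1 if x == x0 else (1 if y == x0 else 0))
     for x in V for y in V if x != y}

lam = 1.0
for x in V:
    xi = -sum(g[x, y] * weight(x, y) / degree(x) for y in V if y != x) / lam
    if u[x] > 0:
        assert abs(xi - 1) < 1e-12      # xi(x) = 1 = sign(u(x))
    else:
        assert -1 <= xi <= 1            # xi(x) in sign(0) = [-1, 1]
print("(1, (1/d_a) chi_{a}) verified as an m^G-eigenpair")
```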

We have the following relation between m-calibrable sets and m-eigenpairs of \(\Delta ^{m}_1\).

Theorem 6.5

Let \(\Omega \subset X\) be a \(\nu \)-measurable set with \(0<\nu (\Omega )<\nu (X)\). We have:

  1. (i)

    If \((\lambda ^m_\Omega , \frac{1}{\nu (\Omega )} \upchi _\Omega )\) is an m-eigenpair of \(\Delta _1^m\), then \(\Omega \) is m-calibrable.

  2. (ii)

    If \(\Omega \) is m-calibrable and

    $$\begin{aligned} m_x(\Omega ) \le \lambda _\Omega ^m \quad \hbox {for }\nu \hbox {-almost every } \ x \in X {\setminus } \Omega , \end{aligned}$$
    (6.4)

    then \((\lambda ^m_\Omega , \frac{1}{\nu (\Omega )} \upchi _\Omega )\) is an m-eigenpair of \(\Delta _1^m\).

Proof

  1. (i):

    Since \((\lambda ^m_\Omega , \frac{1}{\nu (\Omega )} \upchi _\Omega )\) is an m-eigenpair of \(\Delta _1^m\), there exists \(\xi \in \mathrm{sign}(\upchi _\Omega )\) such that \(- \lambda _\Omega ^m \xi \in \Delta _1^m(\upchi _\Omega )\). Then, by Theorem 5.7, we have that \(\Omega \) is m-calibrable.

  2. (ii):

    If \(\Omega \) is m-calibrable, by Theorem 5.7, we have

    $$\begin{aligned} - \lambda ^m_\Omega \tau ^* \in \Delta _1^m \upchi _\Omega \, \quad \hbox {in} \ X \end{aligned}$$

    for

    $$\begin{aligned} \tau ^*(x)=\left\{ \begin{array}{ll} 1 &{}\quad \hbox {if } x\in \Omega ,\\ \displaystyle - \frac{1}{\lambda _\Omega ^m} m_x(\Omega )&{}\quad \hbox {if } x\in X{\setminus }\Omega . \end{array} \right. \end{aligned}$$

    Now, by (6.4), we have that \(\tau ^* \in \mathrm{sign} (\upchi _\Omega )\) and, consequently, \(\left( \lambda ^m_\Omega , \frac{1}{\nu (\Omega )} \upchi _\Omega \right) \) is an m-eigenpair of \(\Delta _1^m\).

\(\square \)

In the next example we see that, in Theorem 6.5, the reverse implications of (i) and (ii) are false in general.

Example 6.6

  1. (1)

    Let \(G=(V,E)\) be the weighted discrete graph where \(V= \{ a,b,c \}\) is the vertex set and the weights are given by \(w_{ab} = w_{ac} = w_{bc} = \frac{1}{2}\) and \(w_{aa}=w_{bb}=w_{cc}=0\). Then, \(m_a = \frac{1}{2} \delta _b +\frac{1}{2} \delta _c\), \(m_b = \frac{1}{2} \delta _a +\frac{1}{2} \delta _c\), \(m_c = \frac{1}{2} \delta _a +\frac{1}{2} \delta _b\) and \(\nu _G = \delta _a + \delta _b + \delta _c\). By Remark 5.1(2), we have that \(\Omega := \{a,b \}\) is \(m^G\)-calibrable. However, \( \lambda _\Omega ^{m^G} = \frac{1}{2}\) and \((\frac{1}{2}, \upchi _\Omega )\) is not an m-eigenpair of \(\Delta _1^m\) since \(0 \not \in \mathrm{med}_\nu (\upchi _\Omega )\) (see Corollary 6.11 and the definition of \(\mathrm{med}_\nu \) above that Corollary). Therefore, (6.4) does not hold (it follows by a simple calculation that \(m^G_c (\Omega ) = 1>\frac{1}{2}=\lambda _\Omega ^{m^G}\)).

  2. (2)

    Consider the locally finite weighted discrete graph \([\mathbb {Z}^2, d_{\mathbb {Z}^2}, m^{\mathbb {Z}^2}]\), where \(d_{\mathbb {Z}^2}\) is the Hamming distance and the weights are defined as usual: \(w_{xy}=1\) if \(d_{\mathbb {Z}^2}(x,y)=1\) and \(w_{xy}=0\) otherwise (see Example 1.1 (3)). For ease of notation we denote \(m:=m^{{\mathbb {Z}}^2}\). Let

    $$\begin{aligned} \Omega _k:=\{(i,j)\in \mathbb {Z}^2:0\le i,j\le k-1\} \ \hbox {for} \ k\ge 1 . \end{aligned}$$

    It is easy to see that

    $$\begin{aligned} \lambda _{\Omega _k}^m=\frac{1}{k}. \end{aligned}$$

    For \(1\le k \le 4\) these sets are m-calibrable and satisfy (6.4). Therefore, for \(1 \le k \le 4\), \(\left( \frac{1}{k},\frac{1}{\nu (\Omega _k)}\upchi _{\Omega _k}\right) \) is an m-eigenpair of the 1-Laplacian in \(\mathbb {Z}^2\) and, by the same reasoning, these pairs are still m-eigenpairs of the 1-Laplacian in the metric random walk space \(\left[ (\Omega _k)_{m},d_{\mathbb {Z}^2}, m^{(\Omega _k)_{m}}\right] \) (recall Corollary 5.9; for ease of notation, let \(m_k:=m^{(\Omega _k)_{m}}\)). For this last space, recall the definition of \((\Omega _k)_{m}\) from (5.10) and that of \(m_k=m^{(\Omega _k)_{m}}\) from Example 1.1 (5). Note further that, in the case of graphs, \(\partial _{m^{G}}\Omega \) is the set of vertices outside of \(\Omega \) which are related to vertices in \(\Omega \), i.e., the vertices outside of \(\Omega \) at graph distance 1 from \(\Omega \). For example, \(\Omega _2=\{(0,0),(1,0),(1,1),(0,1)\}\) and \((\Omega _2)_{m}=\Omega _2\cup \partial _{m}\Omega _2\), where

    $$\begin{aligned} \partial _{m}\Omega _2=\{(2,0),(2,1),(1,2),(0,2),(-1,1), (-1,0),(0,-1),(1,-1)\}. \end{aligned}$$

    Moreover, recalling again Example 1.1 (5), we have that \((m_2)_x(\{y\})=m_x(\{y\})\) for every x, \(y\in (\Omega _2)_m\), i.e., the probabilities associated to the jumps between different vertices in \((\Omega _2)_m\) do not vary. On the other hand,

    $$\begin{aligned} (m_2)_x(\{x\})=m_x(\partial _{m}\Omega _2)=\frac{1}{4}+\frac{1}{4}=\frac{1}{2}, \end{aligned}$$

    for every \(x\in \partial _m\Omega _2\) (note that, in this case, each vertex in \(\partial _m\Omega _2\) is related to 2 vertices outside of \((\Omega _2)_m\)). Consequently, informally speaking, a loop “appears” at each vertex of \(\partial _m\Omega _2\) since there is now the possibility of staying at the same vertex after a jump. However, this new metric random walk space \(\left[ (\Omega _2)_{m},d_{\mathbb {Z}^2}, m_2\right] \) can be reframed as the metric random walk space associated to a weighted discrete graph, thus making the previous informal comment rigorous. In other words, we may define a weighted discrete graph which gives rise to the same associated metric random walk space. This is easily done by taking the vertex set \(V:=(\Omega _2)_m\) and the following weights: \(w_{x,y}=1\) for x, \(y\in (\Omega _2)_m\) with \(d_{{\mathbb {Z}}^2}(x,y)=1\), \(w_{x,x}=2\) for \(x\in \partial _m\Omega _2\) and \(w_{x,y}=0\) otherwise (see Fig. 2).
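The computations for \(\Omega _k\) and \(\partial _m\Omega _2\) above can be reproduced with a short script (ours, not from the paper), using that every vertex of \(\mathbb {Z}^2\) has degree 4:

```python
# Sketch (ours): lambda_{Omega_k} = 1/k on Z^2 with unit nearest-neighbour
# weights, and the boundary set partial_m Omega_2.

def neighbours(v):
    i, j = v
    return [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]

def omega(k):
    return {(i, j) for i in range(k) for j in range(k)}

def lam(E):
    # nu(E) = 4|E| since every vertex has degree 4; P_m(E) counts edges leaving E
    per = sum(1 for v in E for z in neighbours(v) if z not in E)
    return per / (4 * len(E))

for k in range(1, 6):
    print(k, lam(omega(k)))                       # 1/k

O2 = omega(2)
boundary = {z for v in O2 for z in neighbours(v)} - O2
print(sorted(boundary))                           # the 8 vertices of partial_m Omega_2
```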

Let us see what happens for

$$\begin{aligned} \Omega _5:=\{(i,j)\in \mathbb {Z}^2:0\le i,j\le 4\} . \end{aligned}$$

In this case,

$$\begin{aligned} \lambda _{\Omega _5}^{m}=\frac{1}{5}, \end{aligned}$$

and an algebraic calculation gives that \(\left( \frac{1}{5},\frac{1}{\nu ({\Omega _5})}\upchi _{\Omega _5}\right) \) is an m-eigenpair in \(\mathbb {Z}^2\) (see Fig. 1). Moreover, \(\left( \frac{1}{5},\frac{1}{\nu ({\Omega _5})}\upchi _{\Omega _5}\right) \) is also an \(m^A\)-eigenpair of the 1-Laplacian in the metric random walk space

$$\begin{aligned} \left[ A:=\{(i,j)\in \mathbb {Z}^2:-2\le i,j\le 6\}, d_{\mathbb {Z}^2}, m^A\right] \end{aligned}$$

or even in the metric random walk space obtained, in the same way, with the smaller set shown in Fig. 1.

Fig. 1

The numbers in the graph are the values of a function \(\mathbf{g}(x,y)\) satisfying (6.2), where x is the vertex to the left of the number represented in the graph and y the one to the right, or, alternatively, x is the one above and y the one below. Elsewhere, \(\mathbf{g}(x,y)\) is taken as 0. The vertex (0, 0) is labelled in the graph. As an example, \(\mathbf{g}((0,0),(1,0))=1/5\) and \(\mathbf{g}((0,1),(0,0))=-1\)

However,

$$\begin{aligned} (m_5)_{(i,j)}({\Omega _5})= \frac{1}{4}\quad \forall (i,j)\in (\Omega _5)_{m}{\setminus }{\Omega _5} \end{aligned}$$

so (6.4) is not satisfied. Furthermore, \(\left( \frac{1}{5},\frac{1}{\nu ({\Omega _5})}\upchi _{\Omega _5}\right) \) fails to be an \(m_5\)-eigenpair of the 1-Laplacian in the metric random walk space \(\left[ (\Omega _5)_{m},d_{\mathbb {Z}^2}, m_5 \right] \) since the condition on the median given in Corollary 6.11 is not satisfied; nevertheless, \(\Omega _5\) is still \(m_5\)-calibrable in this setting.

Remark 6.7

Let us give some characterizations of (6.4).

  1. (1)

    In terms of the m-mean curvature we have that,

    $$\begin{aligned} (6.4) \Longleftrightarrow \nu \hbox {-}\underset{x\in \Omega ^c}{\hbox {esssup}}\,H_{\partial \Omega ^c}^m(x)\le \frac{1}{\nu (\Omega )}\int _\Omega H_{\partial \Omega }^m(x)d\nu (x), \end{aligned}$$

    where \(\Omega ^c=X{\setminus } \Omega \). Indeed, (6.4) is equivalent to

    $$\begin{aligned} 1-2m_x(\Omega )\ge 1-2\frac{P_m(\Omega )}{\nu (\Omega )}=\frac{\nu (\Omega )-2P_m(\Omega )}{\nu (\Omega )} \quad \hbox {for }\nu \hbox {-almost every } x \in \Omega ^c, \end{aligned}$$

    and this inequality can be rewritten as

    $$\begin{aligned} -H_{\partial \Omega }^m(x)\le \frac{1}{\nu (\Omega )}\int _\Omega H_{\partial \Omega }^m(y)d\nu (y) \quad \hbox {for }\nu \hbox {-almost every } x \in \Omega ^c \end{aligned}$$

    thanks to (2.3) and (2.4). Hence, since \( H_{\partial \Omega }^m(x)= -H_{\partial \Omega ^c}^m(x)\), we are done.

  2. (2)

    Furthermore, we have that

    $$\begin{aligned} (6.4) \Longleftrightarrow \frac{1}{\nu (\Omega )}\int _\Omega m_x(\Omega )d\nu (x)\,\le \nu \hbox {-}\underset{x\in \Omega ^c}{\hbox {essinf}}\, m_x(\Omega ^c). \end{aligned}$$

    Indeed, in this case, on account of (2.1), we rewrite (6.4) as

    $$\begin{aligned} 1-m_x(\Omega ^c)\le 1-\frac{1}{\nu (\Omega )}\int _\Omega m_y(\Omega )d\nu (y) \quad \hbox {for }\nu \hbox {-almost every } x \in \Omega ^c, \end{aligned}$$

    or, equivalently,

    $$\begin{aligned} \frac{1}{\nu (\Omega )}\int _\Omega m_y(\Omega )d\nu (y)\le m_x(\Omega ^c) \quad \hbox {for }\nu \hbox {-almost every } x \in \Omega ^c, \end{aligned}$$

    which gives us the characterization.

In the next example we give m-eigenpairs of the 1-Laplacian for the metric random walk spaces given in Example 1.1 (1).

Example 6.8

Let \(\Omega \subset {\mathbb {R}}^N\) with \(\mathcal {L}^N(\Omega ) < \infty \) and consider the metric random walk space \([\Omega , d, m^{J,\Omega }]\) given in Example 1.1 (1) with \(J:= \frac{1}{\mathcal {L}^N(B_r(0))} \upchi _{B_r(0)}\). Moreover, assume that there exists a ball \(B_\rho (x_0) \subset \Omega \) such that \(\mathrm{dist}(B_\rho (x_0), {\mathbb {R}}^N {\setminus } \Omega ) > r\). Then, by (2.2), we have

$$\begin{aligned} P_{ m^{J,\Omega }} (B_\rho (x_0)) = P_{ m^{J}} (B_\rho (x_0)), \end{aligned}$$

and, since \(B_\rho (x_0)\) is \(m^J\)-calibrable, we have that \(B_\rho (x_0)\) is \(m^{J,\Omega }\)-calibrable. Assume also that \(\mathcal {L}^N(B_\rho (x_0)) < \frac{1}{2} \mathcal {L}^N(B_r(0))\). Let us see that

$$\begin{aligned} m^{J,\Omega }_x(B_\rho (x_0)) \le \lambda _{B_\rho (x_0)}^{m^{J, \Omega }} \quad \hbox {for }\mathcal {L}^N\hbox {-almost every } x \in \Omega {\setminus } B_\rho (x_0). \end{aligned}$$
(6.5)

By Remark 6.7, (6.5) is equivalent to

$$\begin{aligned} \frac{1}{\mathcal {L}^N(B_\rho (x_0))}\int _{B_\rho (x_0)} m^{J,\Omega }_x(B_\rho (x_0))dx \,\le \mathcal {L}^N\hbox {-}\underset{x\in \Omega {\setminus } B_\rho (x_0)}{\hbox {essinf}}\, m^{J,\Omega }_x(\Omega {\setminus } B_\rho (x_0)). \end{aligned}$$

Now, for \(x\in \Omega \), we have

$$\begin{aligned} m^{J,\Omega }_x(B_\rho (x_0)) = m^{J}_x(B_\rho (x_0)) = \frac{1}{\mathcal {L}^N(B_r(0))} \int _{B_\rho (x_0)} \upchi _{B_r(0)}(x-y) dy \le \frac{1}{2}. \end{aligned}$$

Then, for \(x\in \Omega {\setminus } B_\rho (x_0)\), we have

$$\begin{aligned} m^{J,\Omega }_x(\Omega {\setminus } B_\rho (x_0)) = 1 - m^{J,\Omega }_x( B_\rho (x_0)) \ge \frac{1}{2} \ge \frac{1}{\mathcal {L}^N(B_\rho (x_0))}\int _{B_\rho (x_0)} m^{J,\Omega }_x(B_\rho (x_0))dx. \end{aligned}$$

Hence, (6.5) holds. Therefore, by Theorem 6.5, we have that

$$\begin{aligned} \left( \lambda ^{m^{J,\Omega }}_{B_\rho (x_0)}, \frac{1}{\mathcal {L}^N(B_\rho (x_0))} \upchi _{B_{\rho (x_0)}}\right) \end{aligned}$$

is an \(m^{J,\Omega }\)-eigenpair of \(\Delta ^{m^{J,\Omega }}_1\).

Similarly, for the metric random walk space \([{\mathbb {R}}^N, d, m^{J}]\) with \(J= \frac{1}{\mathcal {L}^N(B_r(0))} \upchi _{B_r(0)}\), and for \(\mathcal {L}^N(B_\rho (x_0)) < \frac{1}{2} \mathcal {L}^N(B_r(0))\), we have that

$$\begin{aligned} \left( \lambda ^{m^{J}}_{B_\rho (x_0)}, \frac{1}{\mathcal {L}^N(B_\rho (x_0))} \upchi _{B_{\rho (x_0)}}\right) \end{aligned}$$

is an \(m^{J}\)-eigenpair of \(\Delta ^{m^{J}}_1\).

6.1 The m-Cheeger constant of a metric random walk space with finite measure

In this subsection we give a relation between the non-null m-eigenvalues of the 1-Laplacian and the m-Cheeger constant of X when \(\nu (X)<+\infty \).

From now on in this section we assume that [Xdm] is a metric random walk space with invariant and reversible probability measure \(\nu \). Assuming that \(\nu (X)=1\) entails no loss of generality since, if \(\nu (X)<+\infty \), we may work with \(\frac{1}{\nu (X)}\nu \). Observe that \(\lambda _D^m=\frac{P_m(D)}{\nu (D)}\) remains unchanged if we consider the normalized measure, and the same is true for the m-eigenvalues of the 1-Laplacian.

In [36] we have defined the m-Cheeger constant of X as

$$\begin{aligned} h_m(X):= \inf \left\{ \frac{P_m (D)}{\min \{ \nu (D), \nu (X {\setminus } D)\}} \ : \ D \subset X, \ 0< \nu (D) < 1 \right\} \end{aligned}$$

or, equivalently,

$$\begin{aligned} h_m(X)= \inf \left\{ \frac{P_m (D)}{\nu (D)} \ : \ D \subset X, \ 0 < \nu (D) \le \frac{1}{2}\right\} . \end{aligned}$$
(6.6)

Note that, as a consequence of (2.1), we get

$$\begin{aligned} h_m(X)\le 1. \end{aligned}$$

Furthermore, observe that this definition is consistent with the definition on graphs (see [18], also [7]):

Example 6.9

Let \([V(G), d_G, m^G]\) be the metric random walk space given in Example 1.1 (3) with invariant and reversible measure \(\nu _G\). Then, for \(E \subset V(G)\), since

$$\begin{aligned} P_{m^G}(E) = \sum _{x \in E} \sum _{y \not \in E} w_{x,y} \quad \hbox {and} \quad \nu _G(E):= \sum _{x \in E} d_x, \end{aligned}$$

we have

$$\begin{aligned} \frac{P_{m^G}(E)}{\nu _G(E)} = \frac{1}{\sum _{x \in E} d_x}\sum _{x \in E} \sum _{y \not \in E} w_{x,y}. \end{aligned}$$
(6.7)

Therefore,

$$\begin{aligned} h_{m^G}(V(G)) = \inf \left\{ \frac{1}{\sum _{x \in E} d_x}\sum _{x \in E} \sum _{y \not \in E} w_{x,y} \ : \ E \subset V(G), \ 0 < \nu _G(E) \le \frac{1}{2} \nu _G(V) \right\} . \end{aligned}$$

This minimization problem is closely related to the balanced graph cut problems that appear in Machine Learning (see [26, 27]).
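For small graphs, \(h_{m^G}(V(G))\) can be computed by brute force over all subsets. A sketch (ours, not from the paper), applied to the eight-vertex graph of Example 5.14 (assuming the weights \(w_{x_3,x_4}=w_{x_5,x_6}=1\), \(w_{x_4,x_5}=10\)):

```python
# Brute-force sketch (ours) of the Cheeger constant h_{m^G} via (6.6)-(6.7).
from itertools import combinations

V = range(1, 9)
w = {}
for i, j, val in [(1,2,2), (2,3,2), (6,7,2), (7,8,2), (3,4,1), (5,6,1), (4,5,10)]:
    w[i, j] = w[j, i] = val

def weight(i, j): return w.get((i, j), 0)
def degree(i): return sum(weight(i, j) for j in V)
def nu(E): return sum(degree(i) for i in E)
def perimeter(E): return sum(weight(i, j) for i in E for j in V if j not in E)

half = nu(V) / 2
h = min(perimeter(E) / nu(E)
        for r in range(1, 8)
        for E in map(set, combinations(V, r))
        if nu(E) <= half)
print(h)   # 1/9, attained e.g. at E = {x_1, x_2, x_3}
```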

Recall that in Sect. 5 we defined a different m-Cheeger constant (see (5.1)). However, the m-Cheeger constant \(h_m(X)\) is a global constant of the metric random walk space, while the m-Cheeger constant \(h_1^m(\Omega )\) is defined for non-trivial \(\nu \)-measurable subsets of the space. Note that, if \(\nu (X)=1\), then

$$\begin{aligned} h_m(X)\le h_1^m(\Omega ) \end{aligned}$$

for any \(\nu \)-measurable set \(\Omega \subset X\) such that \(0<\nu (\Omega )\le 1/2\); and, if \(h_m(X)=\frac{P_m (\Omega )}{\nu (\Omega )}\) for a \(\nu \)-measurable set \(\Omega \subset X\) such that \(0<\nu (\Omega )\le 1/2\), then \(h_m(X)=h_1^m(\Omega )\) and, moreover, \(\Omega \) is m-calibrable.

Proposition 6.10

Assume that \(\nu \) is a probability measure (and, therefore, ergodic). Let \((\lambda , u)\) be an m-eigenpair of \(\Delta ^m_1\). Then,

  1. (i)

    \(\lambda = 0 \ \iff \ u\) is constant \(\nu \)-a.e., that is, \(u= 1\), or \(u=-1\).

  2. (ii)

    \(\lambda \not = 0 \ \iff \) there exists \(\xi \in \mathrm{sign}(u)\) such that \(\displaystyle \int _X \xi (x) d \nu (x) = 0.\)

Observe that (0, 1) and \((0,-1)\) are m-eigenpairs of the 1-Laplacian in metric random walk spaces with an invariant and reversible probability measure.

Proof

  1. (i)

    By (6.2), if \(\lambda = 0\), we have that \(TV_m(u) =0\) and then, by Lemma 2.9, we get that u is constant \(\nu \)-a.e. thus, since \(\Vert u\Vert _{L^1(X,\nu )}=1\) (and we are assuming \(\nu (X)=1\)), either \(u=1\), or \(u=-1\). Similarly, if u is constant \(\nu \)-a.e. then \(TV_m(u) =0\) and, by (6.2), \(\lambda =0\).

  2. (ii)

    (\(\Longleftarrow \)) If \(\lambda =0\), by (i), we have that \(u=1\), or \(u=-1\), and this is a contradiction with the existence of \(\xi \in \mathrm{sign}(u)\) such that \( \int _X \xi (x) d \nu (x) = 0\). (\(\Longrightarrow \)) There exists \(\xi \in \mathrm{sign}(u)\) and \(\mathbf{g}\in L^\infty (X\times X, \nu \otimes m_x)\) antisymmetric with \(\Vert \mathbf{g} \Vert _{L^\infty (X \times X,\nu \otimes m_x)} \le 1\) satisfying (6.1). Hence, since \(\mathbf{g}\) is antisymmetric, by the reversibility of \(\nu \), we have

    $$\begin{aligned} \lambda \int _X \xi (x) d\nu (x) = -\int _X \int _{X}\mathbf{g}(x,y)\,dm_x(y) d\nu (x) = 0. \end{aligned}$$

    Therefore, since \(\lambda \not = 0\),

    $$\begin{aligned} \int _X \xi (x) d\nu (x) =0. \end{aligned}$$

\(\square \)

Recall now that, given a function \(u : X \rightarrow {\mathbb {R}}\), \(\mu \in {\mathbb {R}}\) is a median of u with respect to the measure \(\nu \) if

$$\begin{aligned} \nu (\{ x \in X \ : \ u(x) < \mu \}) \le \frac{1}{2} \nu (X) \quad \hbox {and} \quad \nu (\{ x \in X \ : \ u(x) > \mu \}) \le \frac{1}{2} \nu (X). \end{aligned}$$

We denote by \(\mathrm{med}_\nu (u)\) the set of all medians of u. It is easy to see that

$$\begin{aligned}&\mu \in \mathrm{med}_\nu (u) \iff \\&\quad - \nu (\{ u = \mu \}) \le \nu (\{ x \in X \ : \ u(x) > \mu \}) -\, \nu (\{ x \in X \ : \ u(x) < \mu \}) \le \nu (\{ u = \mu \}), \end{aligned}$$

from which it follows that

$$\begin{aligned} 0 \in \mathrm{med}_\nu (u) \iff \exists \xi \in \mathrm{sign}(u) \ \hbox {such that} \ \int _X \xi (x) d \nu (x) = 0. \end{aligned}$$
(6.8)
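The equivalence (6.8) is elementary to verify in the discrete setting. A randomized sketch (ours, not from the paper), where \(\nu \) is an arbitrary positive weight on finitely many points:

```python
# Discrete sanity check (ours) of the equivalence (6.8): 0 is a nu-median of u
# iff some selection xi in sign(u) has integral zero.
import random

def zero_is_median(u, nu):
    total = sum(nu)
    below = sum(n for x, n in zip(u, nu) if x < 0)
    above = sum(n for x, n in zip(u, nu) if x > 0)
    return below <= total / 2 and above <= total / 2

def exists_zero_mean_sign(u, nu):
    # xi = 1 on {u > 0}, -1 on {u < 0}, free in [-1, 1] on {u = 0};
    # a zero-integral selection exists iff |above - below| <= nu({u = 0})
    zero = sum(n for x, n in zip(u, nu) if x == 0)
    below = sum(n for x, n in zip(u, nu) if x < 0)
    above = sum(n for x, n in zip(u, nu) if x > 0)
    return abs(above - below) <= zero

random.seed(1)
for _ in range(2000):
    k = random.randint(1, 6)
    u = [random.choice([-2, -1, 0, 1, 2]) for _ in range(k)]
    nu = [random.randint(1, 5) for _ in range(k)]
    assert zero_is_median(u, nu) == exists_zero_mean_sign(u, nu)
print("equivalence (6.8) verified on random discrete examples")
```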

By Proposition 6.10 and relation (6.8), we have the following result that was obtained for finite graphs by Hein and Bühler in [29].

Corollary 6.11

If \((\lambda , u)\) is an m-eigenpair of \(\Delta ^m_1\) then

$$\begin{aligned} \lambda \not = 0\ \Longleftrightarrow \ 0 \in \mathrm{med}_\nu (u). \end{aligned}$$

Observe that, by this corollary, if \(\lambda \ne 0\) is an m-eigenvalue of \(\Delta ^m_1\), then there exists an m-eigenvector u associated to \(\lambda \) such that its 0-superlevel set \(E_0(u)\) has positive \(\nu \)-measure. In fact, for any m-eigenvector u, either u or \(-u\) will satisfy this condition.

Proposition 6.12

If \((\lambda , u)\) is an m-eigenpair with \(\lambda >0\) and \(\nu (E_0(u)) > 0\), then \(\left( \lambda , \frac{1}{\nu (E_0(u))} \upchi _{E_0(u)}\right) \) is an m-eigenpair, \(\lambda =\lambda _{E_0(u)}^m\) and \(E_0(u)\) is m-calibrable. Moreover, \(\nu (E_0(u))\le \frac{1}{2}\).

Proof

First observe that, by Corollary 6.11, we have that \(\nu (E_0(u))\le \frac{1}{2}\). Since \((\lambda ,u)\) is an m-eigenpair, there exists \(\xi \in \hbox {sign}(u)\) such that

$$\begin{aligned} -\lambda \xi \in \Delta _1^mu; \end{aligned}$$

hence, there exists \(\mathbf{g}(x,y)\in \hbox {sign}(u(y)-u(x))\) antisymmetric with \(\Vert \mathbf{g} \Vert _{L^\infty (X \times X,\nu \otimes m_x)} \le 1\), such that

$$\begin{aligned} -\int _{X}{} \mathbf{g}(x,y)\,dm_x(y)= \lambda \xi (x) \quad \hbox {for} \ \nu -\text{ a.e. } x\in X. \end{aligned}$$

Now,

$$\begin{aligned} \xi (x) = \left\{ \begin{array}{ll} 1&{}\quad \hbox {if } x\in {E_0(u)} \hbox { (since }u(x)>0\hbox {)},\\ \\ \in [-1,1] &{}\quad \hbox {if } x\in X{\setminus }{E_0(u)}, \end{array} \right. \end{aligned}$$

and, therefore, \(\xi \in \hbox {sign}(\upchi _{E_0(u)})\). On the other hand,

$$\begin{aligned} \mathbf{g}(x,y) = \left\{ \begin{array}{ll} \in [-1,1]&{}\quad \hbox {if } x,y\in {E_0(u)},\\ \\ -1&{}\quad \hbox {if } x\in {E_0(u)},\ y\in X{\setminus } {E_0(u)} \hbox { (since } u(x)>0,\ u(y)\le 0), \\ \\ 1&{}\quad \hbox {if } x\in X{\setminus }{E_0(u)},\ y\in {E_0(u)} \hbox { (since } u(x)\le 0,\ u(y)> 0),\\ \\ \in [-1,1] &{}\quad \hbox {if } x,y\in X{\setminus }{E_0(u)}, \end{array} \right. \end{aligned}$$

and, consequently, \(\mathbf{g}(x,y)\in \hbox {sign}(\upchi _{E_0(u)}(y)-\upchi _{E_0(u)}(x))\). Therefore, we have that \(\left( \lambda ,\frac{1}{\nu ({E_0(u)})}\upchi _{E_0(u)}\right) \) is an m-eigenpair of \(\Delta ^m_1\). Moreover, by Theorem 6.5, we have that \({E_0(u)}\) is m-calibrable. \(\square \)

Remark 6.13

As a consequence of Proposition 5.15, when we search for m-eigenpairs of the 1-Laplacian we can restrict ourselves to m-eigenpairs of the form \(\left( \lambda , \frac{1}{\nu (E)} \upchi _{E}\right) \) where E is m-calibrable and not decomposable as \(E=E_1\cup _m E_2\). Indeed, suppose that \(\left( \lambda , \frac{1}{\nu (E)} \upchi _{E}\right) \) is an m-eigenpair and \(E=E_1\cup _m E_2\) for some \(E_1\), \(E_2\subset E\). Then, by (6.2), there exist \(\xi \in \mathrm{sign}(\upchi _E)\) and \(\mathbf{g}\in L^\infty (X\times X, \nu \otimes m_x)\) antisymmetric with \(\Vert \mathbf{g} \Vert _{L^\infty (X \times X,\nu \otimes m_x)} \le 1\), such that

$$\begin{aligned} \left\{ \begin{array}{ll} \displaystyle -\int _{X}{} \mathbf{g}(x,y)\,dm_x(y)= \lambda \xi (x) \quad \nu -\text{ a.e. } x\in X, \\ \\ \displaystyle \mathbf{g}(x,y)\in \hbox {sign}(\upchi _{E}(y)-\upchi _{E}(x))\quad \nu \otimes m_x-\hbox {a.e. } (x,y)\in X\times X. \end{array}\right. \end{aligned}$$

Then, we may take the same \(\xi \) and \(\mathbf{g}(x,y)\) to see that \(\left( \lambda , \frac{1}{\nu (E_1)} \upchi _{E_1}\right) \) is also an m-eigenpair. Indeed, since \(\lambda _E^m=\lambda _{E_1}^m\), we only need to verify that \(\mathbf{g}(x,y)\in \hbox {sign}(\upchi _{E_1}(y)-\upchi _{E_1}(x))\) \(\nu \otimes m_x\)-a.e. For \(x\in E_1\) we have:

  • if \(y\in E_1\), then \(\upchi _E(y)-\upchi _E(x)=0=\upchi _{E_1}(y)-\upchi _{E_1}(x)\),

  • if \(y\in X{\setminus } E\), then \(\upchi _E(y)-\upchi _E(x)=-1=\upchi _{E_1}(y)-\upchi _{E_1}(x)\),

and, since \(L_m(E_1,E_2)=0\), we have that \(\nu \otimes m_x(E_1\times E_2)=0\) so the condition is satisfied. Similarly for \(x\in E_2\) (again \(\nu \otimes m_x(E_2\times E_1)=0\)). If \(x\in X{\setminus } E\) then,

  • if \(y\in E_1\), \(\upchi _E(y)-\upchi _E(x)=1=\upchi _{E_1}(y)-\upchi _{E_1}(x)\),

  • if \(y\in E_2\), \(\upchi _E(y)-\upchi _E(x)=1\in \hbox {sign}(0)=\hbox {sign}(\upchi _{E_1}(y)-\upchi _{E_1}(x))\),

  • if \(y\in X{\setminus } E\), \(\upchi _E(y)-\upchi _E(x)=0=\upchi _{E_1}(y)-\upchi _{E_1}(x)\).

Let

$$\begin{aligned} \Pi (X):= \left\{ u \in L^1(X, \nu ) \ : \ \Vert u \Vert _{L^1(X,\nu )} = 1 \ \hbox {and} \ 0 \in \mathrm{med}_\nu (u) \right\} \end{aligned}$$

and

$$\begin{aligned} \lambda _1^m(X) := \inf \left\{ TV_m(u) \ : \ u \in \Pi (X) \right\} . \end{aligned}$$
(6.9)

In [36] we proved the following result.

Theorem 6.14

[36] Let [Xdm] be a metric random walk space with invariant and reversible probability measure \(\nu \). Then,

  1. (i)

    \( h_m(X) = \lambda _1^m(X). \)

  2. (ii)

    For \(\Omega \subset X\) \(\nu \)-measurable with \(\nu (\Omega ) = \frac{1}{2}\), \(h_m(X) = \lambda _\Omega ^m \iff \upchi _\Omega - \upchi _{X {\setminus } \Omega } \ \hbox { is a minimizer of }\)   (6.9).

By Corollary 6.11, if \((\lambda , u)\) is an m-eigenpair of \(\Delta ^m_1\) and \(\lambda \not = 0\) then \(u \in \Pi (X)\). Now, \(TV_m(u)=\lambda \); thus, as a corollary of Theorem 6.14 (i), we have the following result. Recall that, for finite graphs, it is well known that the first non-zero eigenvalue coincides with the Cheeger constant (see [14]).

Theorem 6.15

If \(\lambda \not = 0\) is an m-eigenvalue of \(\Delta ^m_1\) then

$$\begin{aligned} h_m(X) \le \lambda . \end{aligned}$$

This result also follows by Proposition 6.12 since \(\nu (E_0(u))\le \frac{1}{2}\).

In the next result we will see that if the infimum in (6.6) is attained then \(h_m(X)\) is an m-eigenvalue of \(\Delta ^m_1\).

Theorem 6.16

Let \(\Omega \) be a \(\nu \)-measurable subset of X such that \(0<\nu (\Omega )\le \frac{1}{2}\).

  1. (i)

    If \(\Omega \) and \(X{\setminus }\Omega \) are m-calibrable then \(\left( \lambda _\Omega ^m,\frac{1}{\nu (\Omega )}\upchi _\Omega \right) \) is an m-eigenpair of \(\Delta ^m_1\).

  2. (ii)

    If \(h_m(X)=\lambda ^m_\Omega \) then \(\Omega \) and \(X{\setminus }\Omega \) are m-calibrable.

  3. (iii)

    If \(h_m(X)=\lambda ^m_\Omega \) then \(\left( \lambda _\Omega ^m,\frac{1}{\nu (\Omega )}\upchi _\Omega \right) \) is an m-eigenpair of \(\Delta ^m_1\).

Proof

First of all, observe that, since \(\nu (\Omega )\le \frac{1}{2}\),

$$\begin{aligned} \lambda _{X{\setminus }\Omega }^m \le \lambda _\Omega ^m. \end{aligned}$$

(i): By Theorem 5.8, since \(\Omega \) is m-calibrable, there exists an antisymmetric function \(\mathbf{g}_1\) in \(\Omega \times \Omega \) such that

$$\begin{aligned} -1\le \mathbf{g}_1(x,y)\le 1 \qquad \hbox {for }(\nu \otimes m_x)\hbox {-a.e. }(x,y) \in \Omega \times \Omega , \end{aligned}$$

and

$$\begin{aligned} \lambda _\Omega ^m = -\int _{\Omega }\mathbf{g}_1(x,y)\,dm_x(y) + 1 - m_x(\Omega )\quad \nu \hbox {-a.e. }x\in \Omega ; \end{aligned}$$
(6.10)

and, since \(X{\setminus }\Omega \) is m-calibrable, there exists an antisymmetric function \(\mathbf{g}_2\) in \((X{\setminus }\Omega )\times (X{\setminus }\Omega )\) such that

$$\begin{aligned} -1\le \mathbf{g}_2(x,y)\le 1 \qquad \hbox {for }(\nu \otimes m_x)\hbox {-a.e. }(x,y) \in (X{\setminus }\Omega )\times (X{\setminus }\Omega ), \end{aligned}$$

and

$$\begin{aligned} \lambda _{X{\setminus }\Omega }^m = -\int _{X{\setminus }\Omega }\mathbf{g}_2(x,y)\,dm_x(y) + 1 - m_x(X{\setminus } \Omega )\quad \nu \hbox {-a.e. }x\in X{\setminus }\Omega . \end{aligned}$$
(6.11)

Consequently, by taking

$$\begin{aligned} \mathbf{g}(x,y)=\left\{ \begin{array}{ll} \mathbf{g}_1(x,y)&{}\quad \hbox { if }x,y\in \Omega ,\\ -1&{}\quad \hbox { if }x\in \Omega ,y\in X{\setminus }\Omega ,\\ 1&{}\quad \hbox { if }x\in X{\setminus }\Omega ,y\in \Omega ,\\ -\mathbf{g}_2(x,y)&{}\quad \hbox { if }x,y\in X{\setminus }\Omega , \end{array}\right. \end{aligned}$$

we have that \(\mathbf{g}(x,y)\in \hbox {sign}\left( \upchi _\Omega (y)-\upchi _\Omega (x)\right) \). Moreover, from (6.10),

$$\begin{aligned} \lambda _{\Omega }^m = -\int _{X}\mathbf{g}(x,y)\,dm_x(y)\quad \hbox {for } \nu \hbox {-a.e. }x\in \Omega , \end{aligned}$$

and, since \(\lambda _{X{\setminus }\Omega }^m \le \lambda _\Omega ^m \), from (6.11),

$$\begin{aligned} -\lambda _\Omega ^m\le -\lambda _{X{\setminus }\Omega }^m= -\int _{X}\mathbf{g}(x,y)\,dm_x(y) \le \lambda _\Omega ^m \quad \hbox {for } \nu \hbox {-a.e. }x\in X{\setminus }\Omega . \end{aligned}$$

Hence, by Remark 6.2 (2), we conclude that \(\left( \lambda _\Omega ^m,\frac{1}{\nu (\Omega )}\upchi _\Omega \right) \) is an m-eigenpair of \(\Delta ^m_1\).

(ii): Since \(h_m(X)=\frac{P_m(\Omega )}{\nu (\Omega )}\) and \(0<\nu (\Omega )\le \frac{1}{2}\), we have \(h_m(X)=h_1^m(\Omega )=\frac{P_m(\Omega )}{\nu (\Omega )}\) and, consequently, \(\Omega \) is m-calibrable. Let us suppose that \(X{\setminus }\Omega \) is not m-calibrable. Then, there exists \(E\subset X{\setminus }\Omega \) such that \(\nu (E)<\nu (X{\setminus }\Omega )\) and

$$\begin{aligned} \lambda _E^m<\lambda _{X{\setminus }\Omega }^m. \end{aligned}$$

Now, this implies that \(\nu (E)>\frac{1}{2}\) since, otherwise, we get

$$\begin{aligned} \lambda _E^m<\lambda _{X{\setminus }\Omega }^m\le \lambda _\Omega ^m=h_m(X) \end{aligned}$$

which is a contradiction. Moreover, since \(\nu (E)<\nu (X{\setminus }\Omega )\), \(\lambda _E^m<\lambda _{X{\setminus }\Omega }^m\) also implies that

$$\begin{aligned} P_m(E)<P_m(X{\setminus }\Omega )=P_m(\Omega ). \end{aligned}$$

However, since \(\nu (E)>\frac{1}{2}\), we have that \(\nu (X{\setminus } E)<\frac{1}{2}\) and, consequently, taking into account that \(\nu (\Omega )\le \nu (X{\setminus } E)\), we get

$$\begin{aligned} \lambda _{X{\setminus } E}^m=\frac{P_m(E)}{\nu (X{\setminus } E)}<\frac{P_m(\Omega )}{\nu (\Omega )}=h_m(X), \end{aligned}$$

which is also a contradiction.

Finally, (iii) is a direct consequence of (i) and (ii). \(\square \)
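For a finite weighted graph, where the nonlocal perimeter \(P_m(\Omega )\) reduces to the total weight of the edges leaving \(\Omega \) and \(\nu _G(\Omega )\) to the sum of the degrees of its vertices, Theorem 6.16 (ii) can be checked by brute force over all subsets. A sketch under these discrete conventions (the "two triangles joined by a weak bridge" graph below is an illustrative choice, not an example from the text):

```python
from itertools import chain, combinations

# Illustrative graph: two triangles (intra-triangle weight 10) joined by a
# single weak bridge of weight 1; vertices 0-2 and 3-5.
V = range(6)
w = {}
for a, b in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    w[(a, b)] = w[(b, a)] = 10
w[(2, 3)] = w[(3, 2)] = 1

deg = {x: sum(w.get((x, y), 0) for y in V) for x in V}
nu_total = sum(deg.values())

def nu(S): return sum(deg[x] for x in S)
def P(S):   # P_m(S): total weight of edges leaving S
    return sum(w.get((x, y), 0) for x in S for y in V if y not in S)
def lam(S): return P(S) / nu(S)   # the quotient lambda_S^m

def subsets(U):
    U = tuple(U)
    return chain.from_iterable(combinations(U, k) for k in range(1, len(U) + 1))

def calibrable(S):
    # S is m-calibrable iff lambda_S minimises lambda_F over nonempty F inside S
    return all(lam(S) <= lam(F) for F in subsets(S))

# Cheeger constant h_m(X): minimise lambda_S over sets with nu(S) <= nu(X)/2
candidates = [S for S in subsets(V) if 2 * nu(S) <= nu_total]
omega = min(candidates, key=lam)
comp = tuple(x for x in V if x not in omega)
print("Cheeger set:", omega, " h_m(X) =", lam(omega))
print("calibrable:", calibrable(omega), calibrable(comp))
```

As Theorem 6.16 (ii) predicts, the minimising set (one of the two triangles) and its complement both come out m-calibrable.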

As a consequence of Proposition 6.12 and Theorem 6.16, we have the following result.

Corollary 6.17

If \(h_m(X)\) is a positive m-eigenvalue of \(\Delta ^m_1\), then, for any eigenvector u associated to \(h_m(X)\) with \(\nu (E_0(u))>0\),

$$\begin{aligned} \left( h_m(X),\frac{1}{\nu (E_0(u))}\upchi _{E_0(u)}\right) \hbox { is an }m\hbox {-eigenpair of}~\Delta ^m_1, \end{aligned}$$

\(\nu (E_0(u))\le \frac{1}{2}\), and

$$\begin{aligned} h_m(X)=\lambda _{E_0(u)}^m. \end{aligned}$$

Moreover, both \({E_0(u)}\) and \(X{\setminus }{E_0(u)}\) are m-calibrable.

Remark 6.18

For \(\Omega \subset X\) with \(\nu (\Omega ) = \frac{1}{2}\) (thus \(\lambda _\Omega ^m=2P_m(\Omega )\)) we have that:

  1. (1)

    \(\Omega \) and \(X{\setminus }\Omega \) are m-calibrable if, and only if, \(\left( 2P_m(\Omega ),t\upchi _\Omega -(2-t)\upchi _{X{\setminus }\Omega }\right) \) is an m-eigenpair of \(\Delta ^m_1\) for any \(t\in [0,2]\).

  2. (2)

    If \(h_m(X)= 2P_m(\Omega )\) then \(\left( 2P_m(\Omega ),t\upchi _\Omega -(2-t)\upchi _{X{\setminus }\Omega }\right) \) is an m-eigenpair of \(\Delta ^m_1\) for all \(t\in [0,2]\).

Example 6.19

In Fig. 2, following the notation in Example 6.6(2), we consider the metric random walk space \(\left[ X:=(\Omega _2)_{m}, d_{\mathbb {Z}^2}, m_2:=m^{(\Omega _2)_{m}}\right] \). In Fig. 2a, we show this space partitioned into two \(m_2\)-calibrable sets, \(E =\{ (-1,0), (0,0), (1,0), (-1,1), (0,1), (1,1)\}\) and \(X{\setminus } E\), of equal measure; hence, by the previous remark, both \((\lambda _E^{m_2},\frac{1}{\nu (E)}\upchi _E)\) and \((\lambda _{E}^{m_2},\frac{1}{\nu (X{\setminus } E)}\upchi _{X{\setminus } E})\) are \(m_2\)-eigenpairs. However, the Cheeger constant \(h_{m_2}(X)\) is smaller than the eigenvalue \(\lambda _{E}^{m_2}\) since, for \(D = \{ (1,-1), (1,0), (2,0), (2,1),(1,1), (1,2)\}\), we have \(\lambda _D^{m_2} = \frac{1}{6}\) (see Fig. 2b).

Fig. 2

The line segments represented in the figures correspond to the edges between adjacent vertices, with \(w_{xy}=1\) for any pair of these neighbouring vertices. The loops that “appear” when considering \(m_2\) (see Example 6.6(2)) are represented by circles.

Remark 6.20

By Theorems 6.15 and 6.16, and Corollary 6.17, for finite weighted connected discrete graphs, we have that

$$\begin{aligned} h_m(X) \ \hbox { is the first non-zero eigenvalue of} \ \Delta ^{m^G}_1 \end{aligned}$$
(6.12)

(as already proved in [14, 15] and [29]) and, to solve the optimal Cheeger cut problem, it is enough to find an eigenvector associated to \(h_m(X)\) since then \(\{E_0(u),X{\setminus } E_0(u)\}\) or \(\{E_0(-u),X{\setminus } E_0(-u)\}\) is a Cheeger cut.

In the next examples we will see that (6.12) is not true in general. We obtain infinite weighted connected discrete graphs (with finite invariant and reversible measure) for which there is no first positive m-eigenvalue.

Example 6.21

  1. (1)

    Let \([V(G),d_G,m^G]\) be the metric random walk space defined in Example 1.1 (3) with vertex set \(V(G)=\{x_0,x_1,\ldots ,x_n,\ldots \}\) and weights defined as follows:

    $$\begin{aligned} w_{x_{2n}x_{2n+1}}=\frac{1}{2^n} , \quad w_{x_{2n+1}x_{2n+2}}=\frac{1}{3^n} \quad \hbox {for} \ n=0, 1, 2, \dots \hbox { and } w_{x,y}=0 \hbox { otherwise.} \end{aligned}$$

    We have \(d_{x_0}=1,\ d_{x_1}=2\) and, for \(n\ge 1\),

    $$\begin{aligned} \begin{array}{c} \displaystyle d_{x_{2n}}=w_{x_{2n-1}x_{2n}}+w_{x_{2n}x_{2n+1}}= \frac{1}{3^{n-1}}+\frac{1}{2^n},\\ \\ \displaystyle d_{x_{2n+1}}=w_{x_{2n}x_{2n+1}}+w_{x_{2n+1}x_{2n+2}}=\frac{1}{2^n}+\frac{1}{3^n}. \end{array} \end{aligned}$$

    Furthermore,

    $$\begin{aligned} \nu _G(V) = \sum _{i=0}^\infty d_{x_i} = 3+\sum _{n=1}^\infty \frac{1}{3^{n-1}}+\frac{1}{2^n} + \frac{1}{2^n}+\frac{1}{3^n} = 7. \end{aligned}$$

    Observe that the measure \(\nu _G\) is not normalized, but this does not affect the result because the constants \(\lambda ^m_\Omega \) and the m-eigenvalues of the 1-Laplacian are independent of this normalization.

    Consider \(E_n:=\{ x_{2n}, x_{2n+1}\}\) for \(n\ge 1\). By (2) in Remark 5.1, we have that \(E_n\) is \(m^G\)-calibrable. On the other hand,

    $$\begin{aligned} m_{x_{2n-1}}(E_n)=\frac{1}{1+(\frac{3}{2})^{n-1}}, \quad m_{x_{2n+2}}(E_n)=\frac{1}{1+\frac{3}{4}(\frac{3}{2})^{n-1}}=\lambda _{E_n}^{m^G}, \quad \hbox {and} \quad m_x(E_n)=0 \ \hbox {for every other } x\in V{\setminus } E_n. \end{aligned}$$

    Hence,

    $$\begin{aligned} m_x(E_n) \le \lambda _{E_n}^{m^G} \quad \hbox {for all} \ x \in V {\setminus } E_n. \end{aligned}$$

    Then, by Theorem 6.5, we have that \((\lambda ^{m^G}_{E_n}, \frac{1}{\nu (E_n)} \upchi _{E_n})\) is an \(m^G\)-eigenpair of \(\Delta _1^{m^G}\). Now,

    $$\begin{aligned} \lim _{n \rightarrow \infty } \lambda _{E_n}^{m^G} = \lim _{n \rightarrow \infty }\frac{2^{n+1}}{2^{n+1}+3^{n}} =0. \end{aligned}$$

    Consequently, both by Theorem 6.15 and by definition of \(h_{m^G}(V(G))\), we get

    $$\begin{aligned} h_{m^G}(V(G)) = 0. \end{aligned}$$
  2. (2)

    Let \(0<s<r<\frac{1}{2}\). Let \([V(G),d_G,m^G]\) be the metric random walk space defined in Example 1.1 (3) with vertex set \(V(G)=\{x_0,x_1,\ldots ,x_n,\ldots \}\) and weights defined as follows:

    $$\begin{aligned} w_{x_0x_1}=\frac{r}{1-r}+\frac{s}{1-s},\quad w_{x_{n}x_{n+1}}=r^n+s^n \quad \hbox {for} \ n= 1, 2, 3, \dots \hbox { and } w_{x,y}=0 \hbox { otherwise}. \end{aligned}$$

    Then,

    $$\begin{aligned} h_{m^G}(V(G)) = \displaystyle \frac{1-r}{1+r} \,\hbox { is not an }m^G\hbox {-eigenvalue of }\Delta _1^{m^G}. \end{aligned}$$

    Indeed, to start with, observe that \(\nu _G(V(G))=\frac{4r}{1-r}+\frac{4s}{1-s}\),

    $$\begin{aligned}&\nu _G(\{x_0\})\le \frac{\nu _G(V(G))}{2},\ \nu _G(\{x_0,x_1\})>\frac{\nu _G(V(G))}{2}, \\&\nu _G(\{x_1\})\le \frac{\nu _G(V(G))}{2},\ \nu _G(\{x_1,x_2\})>\frac{\nu _G(V(G))}{2}, \end{aligned}$$

    and, for \(E_n:=\{x_n,x_{n+1},x_{n+2},\dots \},\) \(n\ge 2\),

    $$\begin{aligned} \nu _G(E_n)\le \frac{\nu _G(V(G))}{2}. \end{aligned}$$

    Now, for \(n\ge 2\),

    $$\begin{aligned} \lambda _{E_n}^m=\frac{r^{n-1}+s^{n-1}}{r^{n-1}+s^{n-1}+2\left( \frac{r^n}{1-r}+\frac{s^n}{1-s}\right) } =\frac{r^{n-1}+s^{n-1}}{\frac{1+r}{1-r}r^{n-1}+\frac{1+s}{1-s}s^{n-1}} \end{aligned}$$

    decreases as n increases (therefore, the sets \(E_n\) are not m-calibrable), and

    $$\begin{aligned} \lim _n\lambda _{E_n}^m=\frac{1-r}{1+r}. \end{aligned}$$

    Let us see that, for any \(E\subset V(G)\) with \(0<\nu _G(E)\le \frac{\nu _G(V(G))}{2}\), we have \(\lambda _E^m>\frac{1-r}{1+r}\). Indeed, to start with, observe that if \(E=\{x_0\}\) or \(E=\{x_1\}\) then \(\lambda _{\{x_0\}}^m= \lambda _{\{x_1\}}^m=1> \frac{1-r}{1+r}\). Moreover, we have that \(\{x_0,x_1\} \not \subset E\) and \(\{x_1,x_2\} \not \subset E\) since \(\nu _G(\{x_0,x_1\})\not \le \frac{\nu _G(V(G))}{2}\) and \(\nu _G(\{x_1,x_2\})\not \le \frac{\nu _G(V(G))}{2}\). Therefore, it remains to see what happens for sets E satisfying

    1. (i)

      \(x_0\in E\), \(x_1\notin E\) and \(x_n\in E\) for some \(n\ge 2\),

    2. (ii)

      \(x_1\in E\), \(x_0\notin E\) and \(x_n\in E\) for some \(n\ge 3\),

    3. (iii)

      \(x_0\notin E\), \(x_1\notin E\) and \(x_n\in E\) for some \( n\ge 2\).

    For the case (i), let \(n_1\in \mathbb {N}\) be the first index \(n\ge 2\) such that \(x_n\in E\); for the case (ii), let \(n_2\in \mathbb {N}\) be the first index \(n\ge 3\) such that \(x_n\in E\); and for the case (iii), let \(n_3\in \mathbb {N}\) be the first index \(n\ge 2\) such that \(x_n\in E\). Now, for the case (i) we have that

    $$\begin{aligned} \lambda _E^m\ge \lambda _{\{x_0\}\cup E_{n_1}}\ge \lambda _{E_{n_1}}. \end{aligned}$$

    Indeed, the first inequality follows from the fact that \(P_m(E)\ge P_m(\{x_0\}\cup E_{n_1})\) and \(\nu (E)\le \nu (\{x_0\}\cup E_{n_1})\), and the second one follows since

    $$\begin{aligned} \lambda _{\{x_0\}\cup E_{n_1}}=\frac{\frac{r}{1-r}+\frac{s}{1-s} +P_m(E_{n_1})}{\frac{r}{1-r}+\frac{s}{1-s} +\nu (E_{n_1})}>\frac{ P_m(E_{n_1})}{ \nu (E_{n_1})}=\lambda _{E_{n_1}}. \end{aligned}$$

    Hence, \(\lambda _E^m>\frac{1-r}{1+r}\). With a similar argument we get, in the case (ii),

    $$\begin{aligned} \lambda _E^m\ge \lambda _{\{x_1\}\cup E_{n_2}}\ge \lambda _{E_{n_2}}>\frac{1-r}{1+r}; \end{aligned}$$

    and, in the case (iii),

    $$\begin{aligned} \lambda _E^m\ge \lambda _{E_{n_3}}>\frac{1-r}{1+r}. \end{aligned}$$

Consequently, \(h_{m^G}(V(G)) = \frac{1-r}{1+r}\) and, by Corollary 6.17, it is not an \(m^G\)-eigenvalue of \(\Delta _1^{m^G}\).
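The computations in both examples are exact rational identities, so they can be verified symbolically. A sketch with Python's `fractions` module (the truncation at 60 vertices and the sample values \(r=\frac{1}{3}\), \(s=\frac{1}{4}\) are illustrative assumptions; the closed forms checked are the ones displayed above):

```python
from fractions import Fraction as F

# --- Example (1): exact checks ---
def w1(i, j):
    """Weight w_{x_i x_j} of the first graph (0 unless j = i + 1)."""
    i, j = min(i, j), max(i, j)
    if j != i + 1:
        return F(0)
    n = i // 2
    return F(1, 2**n) if i % 2 == 0 else F(1, 3**n)

def deg(i):
    return w1(i - 1, i) + w1(i, i + 1) if i > 0 else w1(0, 1)

# nu_G(V) = sum of degrees = 7 (the truncated tail is below 1e-6)
total = sum(deg(i) for i in range(60))
assert abs(float(total) - 7) < 1e-6

for n in range(1, 15):
    cut = w1(2*n - 1, 2*n) + w1(2*n + 1, 2*n + 2)        # P_m(E_n)
    lam_En = cut / (deg(2*n) + deg(2*n + 1))             # lambda_{E_n}^{m^G}
    assert lam_En == F(2**(n + 1), 2**(n + 1) + 3**n)    # closed form; -> 0
    # sufficient condition of Theorem 6.5 at the two outside neighbours:
    assert w1(2*n - 1, 2*n) / deg(2*n - 1) <= lam_En     # m_{x_{2n-1}}(E_n)
    assert w1(2*n + 1, 2*n + 2) / deg(2*n + 2) == lam_En # m_{x_{2n+2}}(E_n)

# --- Example (2): lambda_{E_n} strictly decreases to (1-r)/(1+r) ---
r, s = F(1, 3), F(1, 4)   # any 0 < s < r < 1/2 works; these values are illustrative
def lam2(n):
    return (r**(n-1) + s**(n-1)) / ((1+r)/(1-r)*r**(n-1) + (1+s)/(1-s)*s**(n-1))

vals = [lam2(n) for n in range(2, 61)]
assert all(a > b for a, b in zip(vals, vals[1:]))           # strictly decreasing
assert abs(float(vals[-1]) - float((1 - r)/(1 + r))) < 1e-6  # limit (1-r)/(1+r)
print("all closed-form checks passed")
```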