1 Introduction

Subordination is a powerful tool in the study of random processes, see in particular Feller [11, Chapter X], and Sato [26, Chapter 6] in the context of Lévy processes. In the present work, we investigate subordination of random trees, and we apply our results to properties of the random metric space called the Brownian map, which has been proved to be the universal scaling limit of many different classes of random planar maps (see in particular [2, 17, 20]). These applications have been motivated by the work of Miller and Sheffield [21], which is part of a program aiming at the construction of a conformal structure on the Brownian map (see [22,23,24] for recent developments in this direction).

To explain our starting point, let us consider a compact \({\mathbb R}\)-tree \({\mathcal T}\). This means that \({\mathcal T}\) is a compact metric space such that, for every \(a,b\in {\mathcal T}\), there exists a unique (continuous injective) path from a to b, up to reparameterization, and the range of this path, which is called the geodesic segment between a and b and denoted by \(\llbracket a,b \rrbracket \), is isometric to a compact interval of the real line. We assume that \({\mathcal T}\) is rooted, so that there is a distinguished point \(\rho \) in \({\mathcal T}\). This allows us to define a genealogical order on \({\mathcal T}\), by saying that \(a\prec b\) if and only if \(a\in \llbracket \rho ,b\rrbracket \). Consider then a continuous function \(g:{\mathcal T}\longrightarrow {\mathbb R}_+\), such that \(g(\rho )=0\) and g is nondecreasing for the genealogical order. The basic idea of subordination is to identify a and b if g is constant on the geodesic segment \(\llbracket a,b \rrbracket \). So, for every \(a\in {\mathcal T}\), the set of all points that are identified with a is a closed connected subset of \({\mathcal T}\). This gluing operation yields another compact \({\mathbb R}\)-tree \(\widetilde{\mathcal T}\), which is equipped with a metric such that the distance between \(\rho \) and a is g(a), and is called the subordinate tree of \({\mathcal T}\) with respect to g (see Fig. 1 for an illustration). Furthermore, if our initial tree \({\mathcal T}\) was given as the tree coded by a continuous function \(h:[0,\sigma ]\longrightarrow {\mathbb R}_+\) (see [8] or Sect. 2 below), the subordinate tree \(\widetilde{\mathcal T}\) is coded by \(g\circ p_h\), where \(p_h\) is the canonical projection from \([0,\sigma ]\) onto \({\mathcal T}\).
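Anticipating the precise definition given in Sect. 2, the metric on the subordinate tree is the one induced by the pseudo-metric

$$\begin{aligned} d^{(g)}(a,b)=g(a)+g(b)-2\,g(a\wedge b), \end{aligned}$$

where \(a\wedge b\) denotes the most recent common ancestor of a and b for the genealogical order; in particular \(d^{(g)}(\rho ,a)=g(a)\).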

Fig. 1

On the left side, the tree \({\mathcal T}\), with segments where g is constant pictured in thin blue lines. On the right side, the subordinate tree, where each connected component of \({\mathcal T}\) made of thin segments has been glued into a single point (color figure online)

Our main interest is in the case where \({\mathcal T}\) is random, and more precisely \({\mathcal T}={\mathcal T}_\zeta \) is the “Brownian tree” coded by a positive Brownian excursion \(\zeta =(\zeta _s)_{0\le s\le \sigma }\) under the Itô excursion measure. One may view \({\mathcal T}_\zeta \) as a variant of Aldous’ Brownian CRT, for which the total mass is not equal to 1, but is distributed according to an infinite measure on \((0,\infty )\). As previously, we write \(p_\zeta \) for the canonical projection from \([0,\sigma ]\) onto \({\mathcal T}_\zeta \). Next, to define the subordination function, we let \((Z_a)_{a\in {\mathcal T}_\zeta }\) be (linear) Brownian motion indexed by \({\mathcal T}_\zeta \), starting from 0 at the root \(\rho \). A simple way to construct this process is to use the Brownian snake approach, letting \(Z_a=\widehat{W}_s\) if \(a=p_\zeta (s)\), where \(\widehat{W}_s\) is the “tip” of the random path \(W_s\), which is the value at time s of the Brownian snake driven by \(\zeta \). Since \(\zeta \) is distributed according to the Itô measure, W follows the Brownian snake excursion measure away from 0, which we denote by \({\mathbb N}_0\) (see [14, Chapter IV]). We then set \(\overline{Z}_a=\max \{Z_b : b\in \llbracket \rho ,a \rrbracket \}\). In terms of the Brownian snake, we have \(\overline{Z}_a=\overline{W}_s:=\max \{ W_s(t):0\le t\le \zeta _s\}\) whenever \(a=p_\zeta (s)\). We also use the notation \(\underline{W}_s:=\min \{ W_s(t):0\le t\le \zeta _s\}\).

Theorem 1

Let \(\widetilde{\mathcal T}_\zeta \) stand for the subordinate tree of \({\mathcal T}_\zeta \) with respect to the (continuous nondecreasing) function \(a\mapsto \overline{Z}_a\). Under the Brownian snake excursion measure \({\mathbb N}_0\), the tree \(\widetilde{\mathcal T}_\zeta \) is a Lévy tree with branching mechanism

$$\begin{aligned} \psi _0(r)=\sqrt{\frac{8}{3}} \, r^{3/2}. \end{aligned}$$

Recall that Lévy trees represent the genealogy of continuous-state branching processes [8], and can be characterized by a regenerative property analogous to the branching property of Galton–Watson trees [27]. Our identification of the distribution of \(\widetilde{\mathcal T}_\zeta \) is reminiscent of the classical result stating that the right-continuous inverse of the maximum process of a standard linear Brownian motion is a stable subordinator with index 1/2.
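To make the analogy explicit, recall that if B is a standard linear Brownian motion and \(T_a=\inf \{t\ge 0: B_t>a\}\) denotes the right-continuous inverse of its maximum process, then

$$\begin{aligned} E\big [\exp (-\lambda T_a)\big ]=\exp \big (-a\sqrt{2\lambda }\,\big ),\qquad a,\lambda \ge 0, \end{aligned}$$

so that \((T_a)_{a\ge 0}\) is indeed a stable subordinator with index 1/2.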

In view of our applications, it turns out that it is important to have more information than the mere identification of \(\widetilde{\mathcal T}_\zeta \) as a random compact \({\mathbb R}\)-tree. As mentioned above, \(\widetilde{\mathcal T}_\zeta \) can be viewed as the tree coded by the random function \(s\mapsto \overline{Z}_{p_\zeta (s)}=\overline{W}_s\). This coding induces a “lexicographical” order structure on \(\widetilde{\mathcal T}_\zeta \) (see [6] for a thorough discussion of order structures on \({\mathbb R}\)-trees). Somewhat surprisingly, it is not immediately clear that the order structure on \(\widetilde{\mathcal T}_\zeta \) induced by the coding from the function \(s\mapsto \overline{W}_s\) coincides with the usual order structure on Lévy trees, corresponding to the uniform random shuffling at every node in the terminology of [6]. To obtain this property, we relate the function \(s\mapsto \overline{W}_s\) to the height process of [7, 8]. We recall that, from a Lévy process with Laplace exponent \(\psi _0\), we can construct a continuous random process \((H_t)_{t\ge 0}\) called the height process, which codes the \(\psi _0\)-Lévy tree (here and below we say \(\psi _0\)-Lévy tree rather than Lévy tree with branching mechanism \(\psi _0\) for simplicity). See Sect. 3 below for more details.

Theorem 2

There exists a process H, which is distributed under \({\mathbb N}_0\) as the height process of a Lévy tree with branching mechanism \(\psi _0(r)=\sqrt{8/3}\,r^{3/2}\), and a continuous random process \(\Gamma \) with nondecreasing sample paths, such that we have, \({\mathbb N}_0\) a.e. for every \(s\ge 0\),

$$\begin{aligned} \overline{W}_s= H_{\Gamma _s}. \end{aligned}$$

Both H and \(\Gamma \) will be constructed in the proof of Theorem 2, and are measurable functions of \((W_s)_{s\ge 0}\). It is possible to identify the random process \(\Gamma \) as a continuous additive functional of the Brownian snake, but we do not need this fact in the subsequent applications, and we do not discuss this matter in the present work.

Theorem 2 implies that the tree coded by \(\overline{W}_s\) is isometric to the tree coded by \(H_s\), and we recover Theorem 1. But Theorem 2 gives much more, namely that the order structure induced by the coding via \(s\mapsto \overline{W}_s\) is the same as the order structure induced by the usual height function of the Lévy tree. Order structures are crucial for our applications to the Brownian map. Similarly as in [21], we deal with a version of the Brownian map with randomized volume, which is constructed as a quotient space of the tree \({\mathcal T}_\zeta \): Two points a and b of \({\mathcal T}_\zeta \) are identified if \(Z_a=Z_b\) and if \(Z_c\ge Z_a\) for every \(c\in [a,b]\), where \([a,b]\) is the set of all points that are visited when going from a to b around the tree in “clockwise” order (for this to make sense, it is essential that \({\mathcal T}_\zeta \) has been equipped with a lexicographical order structure). We write \(\mathbf{m}\) for the resulting quotient space (the Brownian map) and \(D^*\) for the metric on the Brownian map (see [16, 17] for more details). The space \(\mathbf{m}\) comes with two distinguished points, namely the root \(\rho \) of \({\mathcal T}_\zeta \) and the unique point \(\rho _*\) where Z attains its minimal value (in a sense that can be made precise, these two points are independent and uniformly distributed over \(\mathbf{m}\)). For every \(r\ge 0\), let B(r) be the closed ball of radius r centered at \(\rho _*\) in \(\mathbf{m}\). For \(r\in [0,D^*(\rho _*,\rho ))\), define the hull \(B^\bullet (r)\) as the complement of the connected component of the complement of B(r) that contains \(\rho \) (informally, \(B^\bullet (r)\) is obtained by filling in all holes of B(r) except for the one containing \(\rho \)). In particular, \(B^\bullet (0)=\{\rho _*\}\). Following [21], we define the metric net \(\mathcal {M}\) as the closure of

$$\begin{aligned} \bigcup _{0\le r<D^*(\rho _*,\rho )} \partial B^\bullet (r). \end{aligned}$$

(This definition is in fact a little different from [21] which does not take the closure of the union in the last display.) We can equip the set \(\mathcal {M}\) with an “intrinsic” metric \(\Delta ^*\) derived from the Brownian map metric \(D^*\).

It is not hard to verify that, in the construction of \(\mathbf{m}\) as a quotient space of \({\mathcal T}_\zeta \), points of \(\mathcal {M}\) exactly correspond to vertices a of \({\mathcal T}_\zeta \) such that \(Z_a=\underline{Z}_a:=\min \{Z_b:b\in \llbracket \rho ,a\rrbracket \}\) (Proposition 10). This suggests that the metric net \(\mathcal {M}\) is closely related to the subordinate tree of \({\mathcal T}_\zeta \) with respect to the function \(a\mapsto -\underline{Z}_a\), which is a \(\psi _0\)-Lévy tree by the preceding results. To make this relation precise, we need the notion of the looptree introduced by Curien and Kortchemski [4] in a more general setting. Informally, the looptree associated with a Lévy tree is constructed by replacing each point a of infinite multiplicity by a loop of “length” equal to the “weight” of a, in such a way that the subtrees that are the connected components of the complement of a are grafted along this loop in an order determined by the coding function (see Fig. 3 below for an illustration). To give a more precise definition, first note that Theorem 2 and an obvious symmetry argument allow us to find a process \((H'_s)_{s\ge 0}\) distributed as the height process of a \(\psi _0\)-Lévy tree, and a continuous random process \((\Gamma '_s)_{s\ge 0}\) with nondecreasing sample paths such that \(-\underline{W}_s= H'_{\Gamma '_s}\). Then let \(\chi :=\sup \{s\ge 0:H'_s>0\}\), and let \(\sim \) be the equivalence relation on \([0,\chi ]\) whose graph is the smallest closed symmetric subset of \([0,\chi ]^2\) that contains all pairs \((s,t)\) with \(s\le t\), \(H'_s=H'_t\) and \(H'_r>H'_s\) for every \(r\in (s,t)\). The looptree \({\mathcal L}\) is defined as the quotient space \([0,\chi ]{/}\sim \) equipped with an appropriate metric (see [4] for a definition of this metric). Clearly, \(H'_\alpha \) makes sense for any \(\alpha \in {\mathcal L}\) and is interpreted as the height of \(\alpha \). We observe that this height is not directly related to the metric of the looptree. The latter metric is in fact not relevant for us, since we consider instead the pseudo-metric \(\mathcal {D}^\circ \) defined for \(\alpha ,\beta \in {\mathcal L}\) by

$$\begin{aligned} \mathcal {D}^\circ (\alpha ,\beta )= 2\min \Bigg ( \max _{\gamma \in [\alpha ,\beta ]} H'_\gamma , \max _{\gamma \in [\beta ,\alpha ]} H'_\gamma \Bigg ) - H'_\alpha -H'_\beta , \end{aligned}$$

where \([\alpha ,\beta ]\) corresponds to the subset of \({\mathcal L}\) visited when going from \(\alpha \) to \(\beta \) “around” \({\mathcal L}\) in “clockwise order” (see Sect. 7 for a more formal definition). We write \(\alpha \simeq \beta \) if \(\mathcal {D}^\circ (\alpha ,\beta )=0\) (informally this means that \(\alpha \) and \(\beta \) “face each other” in the tree, in the sense that they are at the same height, and that points “between” \(\alpha \) and \(\beta \) are at a smaller height). It turns out that this defines an equivalence relation on \({\mathcal L}\). Finally, we let \(\mathcal {D}^*\) be the largest symmetric function on \({\mathcal L}\times {\mathcal L}\) that is bounded above by \(\mathcal {D}^\circ \) and satisfies the triangle inequality.
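Equivalently, \(\mathcal {D}^*\) is given by the usual chain construction: for every \(\alpha ,\beta \in {\mathcal L}\),

$$\begin{aligned} \mathcal {D}^*(\alpha ,\beta )=\inf \left\{ \sum _{i=1}^{k} \mathcal {D}^\circ (\gamma _{i-1},\gamma _i)\right\} , \end{aligned}$$

where the infimum is over all choices of the integer \(k\ge 1\) and of the finite chain \(\gamma _0=\alpha ,\gamma _1,\ldots ,\gamma _{k-1},\gamma _k=\beta \) in \({\mathcal L}\). Taking \(k=1\) shows that \(\mathcal {D}^*\le \mathcal {D}^\circ \), and any symmetric function that satisfies the triangle inequality and is bounded above by \(\mathcal {D}^\circ \) is also bounded above by the right-hand side.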

Theorem 3

The metric net \((\mathcal {M}, \Delta ^*)\) is a.s. isometric to the quotient space \({\mathcal L}{/}\simeq \) equipped with the metric induced by \(\mathcal {D}^*\).

See Theorem 12 in Sect. 7 for a more precise formulation. Theorem 3 is closely related to Proposition 4.4 in [21], where, however, the metric net is not identified as a metric space. The description of the metric net is an important ingredient of the axiomatic characterization of the Brownian map discussed in [21].

Let us briefly comment on the motivations for studying the metric net. Roughly speaking, the Brownian map \(\mathbf{m}\) can be recovered from the metric net \(\mathcal {M}\) by filling in the “holes”. To make this precise, we observe that the connected components of \(\mathbf{m}\backslash \mathcal {M}\) are bounded by Jordan curves (Proposition 14) and that these components are in one-to-one correspondence with connected components of \({\mathcal T}_\zeta \backslash \Theta \), where \(\Theta :=\{a\in {\mathcal T}_\zeta : Z_a=\underline{Z}_a\}\). Each of the latter components is associated with an excursion of the Brownian snake above its minimum, in the terminology of [1], and the distribution of such an excursion only depends on its boundary size as defined in [1] (this boundary size can be interpreted as a generalized length of the Jordan curve bounding the corresponding component of \(\mathbf{m}\backslash \mathcal {M}\)). Theorem 40 in [1] shows that, conditionally on their boundary sizes, these excursions are independent and distributed according to a certain “excursion measure”. In the Brownian map setting, this means that the holes in the metric net are filled in independently, conditionally on the lengths of their boundaries (see [18, Section 11] for a precise statement).

The paper is organized as follows. Section 2 gives a brief discussion of subordination for deterministic trees, and Sect. 3 recalls the basic facts about Lévy trees that we need. After a short presentation of the Brownian snake, Sect. 4 gives the distribution of the subordinate tree \(\widetilde{\mathcal T}_\zeta \) (Theorem 1). In view of identifying the order structure of this subordinate tree, Sect. 5 provides a technical result showing that the height process coding a Lévy tree is the limit in a strong sense of the discrete height functions coding embedded Galton–Watson trees. This result is related to the general limit theorems of [7, Chapter 2] proving that Lévy trees are weak limits of Galton–Watson trees, but the fact that we get a strong approximation is crucial for our applications. In Sect. 6, we prove that the Brownian snake maximum process \(s\mapsto \overline{W}_s\) is a time change of the height process associated with a \(\psi _0\)-Lévy tree (Theorem 2). This result is a key ingredient of the developments of Sect. 7, where we identify the metric net of the Brownian map (Theorem 3). Section 8 discusses the connected components of the complement of the metric net, showing in particular that they are in one-to-one correspondence with the points of infinite multiplicity of the associated Lévy tree, and that the boundary of each component is a Jordan curve. Section 9, which is mostly independent of the preceding sections, discusses more general subordinations of the Brownian tree \({\mathcal T}_\zeta \), which lead to stable Lévy trees with an arbitrary index. This section is related to our previous article [3], which dealt with subordination for spatial branching processes, but the latter work did not consider the associated genealogical structures as we do here, and the subordination method, based on the so-called residual lifetime process, was also different. Finally, the appendix presents a more general and more precise version of the special Markov property of the Brownian snake (first established in [13]), which plays an important role in several proofs.

2 Subordination of deterministic trees

In this short section, we present a few elementary considerations about deterministic \({\mathbb R}\)-trees. We refer to [10] for the basic facts about \({\mathbb R}\)-trees that we will need, and to [6] for a thorough study of the coding of compact \({\mathbb R}\)-trees by functions.

Let us consider a compact \({\mathbb R}\)-tree \(({\mathcal T},d)\). We will always assume that \({\mathcal T}\) is rooted, meaning that there is a distinguished point \(\rho \in {\mathcal T}\). If \(a,b\in {\mathcal T}\), the geodesic segment between a and b (the range of the unique geodesic from a to b) is denoted by \(\llbracket a,b \rrbracket \). The point \(a\wedge b\) is then defined by \(\llbracket \rho ,a\wedge b \rrbracket =\llbracket \rho , a \rrbracket \cap \llbracket \rho ,b \rrbracket \). The genealogical partial order on \({\mathcal T}\) is denoted by \(\prec \;\): we have \(a\prec b\) if and only if \(a\in \llbracket \rho ,b\rrbracket \), and we then say that a is an ancestor of b, or b is a descendant of a. Finally, the height of \({\mathcal T}\) is defined by

$$\begin{aligned} \mathcal {H}({\mathcal T})=\max _{a\in {\mathcal T}} d(\rho ,a). \end{aligned}$$

Let \(g:{\mathcal T}\longrightarrow {\mathbb R}_+\) be a nonnegative continuous function on \({\mathcal T}\). Assume that \(g(\rho )=0\) and that g is nondecreasing with respect to the genealogical order (\(a\prec b\) implies that \(g(a)\le g(b)\)). We then define, for every \(a,b\in {\mathcal T}\),

$$\begin{aligned} d^{(g)}(a,b)= g(a)+g(b)-2 \,g(a\wedge b). \end{aligned}$$

Notice that \(d^{(g)}(a,b)\) is a symmetric function of a and b and satisfies the triangle inequality. We can thus consider the equivalence relation

$$\begin{aligned} a\approx _{g} b\quad \hbox {if and only if}\quad d^{(g)}(a,b)=0. \end{aligned}$$

Thus \(a\approx _{g} b\) if and only if \(g(a)=g(b)=g(a\wedge b)\), and this is also equivalent to saying that \(g(c)=g(a)\) for every \(c\in \llbracket a,b\rrbracket \). Write \({\mathcal T}^{(g)}\) for the quotient \({\mathcal T}{/}\approx _{g}\), and \(\pi ^{(g)}\) for the canonical projection from \({\mathcal T}\) onto \({\mathcal T}^{(g)}\). We equip \({\mathcal T}^{(g)}\) with the distance induced by \(d^{(g)}\), for which we keep the same notation \(d^{(g)}\).

Proposition 4

The metric space \(({\mathcal T}^{(g)},d^{(g)})\) is again a compact \({\mathbb R}\)-tree.

Proof

We first note that \(\pi ^{(g)}\) is continuous because, if \((a_n)_{n\ge 0}\) is a sequence in \({\mathcal T}\) that converges to \(a\in {\mathcal T}\), the continuity of g implies that \(d^{(g)}(a_n,a)\) tends to 0. It follows that the metric space \(({\mathcal T}^{(g)},d^{(g)})\) is compact. Then one easily verifies that, for every \(a\in {\mathcal T}\), \(\pi ^{(g)}(\llbracket \rho ,a \rrbracket )\) is a segment in \(({\mathcal T}^{(g)},d^{(g)})\) with endpoints \(\pi ^{(g)}(\rho )\) and \(\pi ^{(g)}(a)\). From Lemma 3.36 in [10], to verify that \(({\mathcal T}^{(g)},d^{(g)})\) is an \({\mathbb R}\)-tree, it suffices to check that the four-point condition

$$\begin{aligned}&d^{(g)}(a_1,a_2) + d^{(g)}(a_3,a_4)\\&\quad \le \max \Big ( d^{(g)}(a_1,a_3)+ d^{(g)}(a_2,a_4), d^{(g)}(a_1,a_4)+d^{(g)}(a_2,a_3)\Big ) \end{aligned}$$

holds for every \(a_1,a_2,a_3,a_4\in {\mathcal T}\). This is straightforward and we omit the details. \(\square \)
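To fill in the omitted details, note that after cancellation of the terms \(g(a_i)\) the four-point condition reduces to

$$\begin{aligned} g(a_1\wedge a_2)+g(a_3\wedge a_4)\ \ge \ \min \Big ( g(a_1\wedge a_3)+g(a_2\wedge a_4),\, g(a_1\wedge a_4)+g(a_2\wedge a_3)\Big ), \end{aligned}$$

an inequality that can be checked by an elementary discussion of the genealogical configuration of \(a_1,a_2,a_3,a_4\) in \({\mathcal T}\), using that g is nondecreasing. Alternatively, see the Remark after Proposition 5, which yields another proof of Proposition 4.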

We call \(({\mathcal T}^{(g)},d^{(g)})\) the subordinate tree of \(({\mathcal T},d)\) with respect to the function g. By convention, \({\mathcal T}^{(g)}\) is rooted at \(\pi ^{(g)}(\rho )\). Since \(d^{(g)}(\rho ,a)= g(a)\) for every \(a\in {\mathcal T}\), we have \(\mathcal {H}({\mathcal T}^{(g)})=\max \{g(a):a\in {\mathcal T}\}\).
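Two extreme cases may help the intuition. If \(g(a)=d(\rho ,a)\) is the height function itself, then

$$\begin{aligned} d^{(g)}(a,b)=d(\rho ,a)+d(\rho ,b)-2\,d(\rho ,a\wedge b)=d(a,b), \end{aligned}$$

so that \({\mathcal T}^{(g)}\) is isometric to \({\mathcal T}\). At the other extreme, if \(g\equiv 0\), then \(d^{(g)}\equiv 0\) and \({\mathcal T}^{(g)}\) is reduced to its root.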

Consider now a continuous function \(h:[0,\sigma ]\longrightarrow {\mathbb R}_+\), where \(\sigma \ge 0\), such that \(h(0)=h(\sigma )=0\), and assume that \(({\mathcal T},d)\) is the tree coded by h in the sense of [8] or [6]. This means that \({\mathcal T}={\mathcal T}_h\) is the quotient space \([0,\sigma ]{/}\sim _h\), where the equivalence relation \(\sim _h\) is defined on \([0,\sigma ]\) by

$$\begin{aligned} s\sim _h t \quad \hbox {if and only if}\quad h(s)=h(t)=\min _{s\wedge t\le r\le s\vee t} h(r), \end{aligned}$$

and \(d=d_h\) is the distance induced on the quotient space by

$$\begin{aligned} d_h(s,t)= h(s)+h(t)-2\min _{s\wedge t\le r\le s\vee t} h(r). \end{aligned}$$

Notice that the topology of \(({\mathcal T}_h,d_h)\) coincides with the quotient topology on \({\mathcal T}_h\).

The canonical projection from \([0,\sigma ]\) onto \({\mathcal T}_h\) is denoted by \(p_h\), and \({\mathcal T}_h\) is rooted at \(\rho _h=p_h(0)\). For \(s\in [0,\sigma ]\), the quantity \(h(s)=d_h(0,s)\) is interpreted as the height of \(p_h(s)\) in the tree. One easily verifies that, for every \(s,t\in [0,\sigma ]\), the property \(p_h(s)\prec p_h(t)\) holds if and only if \(h(s)=\min \{h(r):s\wedge t\le r\le s\vee t\}\).

Remark

The function h is not determined by \({\mathcal T}_h\). In particular, if \(\phi :[0,\sigma ']\rightarrow [0,\sigma ]\) is continuous and nondecreasing, and such that \(\phi (0)=0\) and \(\phi (\sigma ')=\sigma \), the tree coded by \(h\circ \phi \) is isometric to the tree coded by h. This simple observation will be useful later.

Proposition 5

Under the preceding assumptions, if g is a nonnegative continuous function on \({\mathcal T}_h\) such that \(g(\rho _h)=0\) and g is nondecreasing with respect to the genealogical order on \({\mathcal T}_h\), the subordinate tree \(({\mathcal T}_h^{(g)},d_h^{(g)})\) of \(({\mathcal T}_h,d_h)\) with respect to the function g is isometric to the tree \(({\mathcal T}_G,d_G)\) coded by the function \(G=g\circ p_h\).

Proof

Note that the function G is nonnegative and continuous on \([0,\sigma ]\), and \(G(0)=G(\sigma )=0\). We can therefore make sense of the tree \(({\mathcal T}_G,d_G)\) and as above we denote the canonical projection from \([0,\sigma ]\) onto \({\mathcal T}_G\) by \(p_G\). We first notice that, for every \(s,t\in [0,\sigma ]\), the property \(p_h(s)=p_h(t)\) implies \(p_G(s)=p_G(t)\). Indeed, if \(p_h(s)=p_h(t)\), then, for every \(r\in [s\wedge t,s\vee t]\), we have \(p_h(s)\prec p_h(r)\) and therefore \(g(p_h(s))\le g(p_h(r))\), so that

$$\begin{aligned} G(s)=G(t)=\min _{r\in [s\wedge t,s\vee t]} G(r), \end{aligned}$$

and \(p_G(s)=p_G(t)\). We can thus write \(p_G=\mathbf{p}\circ p_h\), where the function \(\mathbf{p}:{\mathcal T}_h \longrightarrow {\mathcal T}_G\) is continuous and onto.

Then, let \(a,b\in {\mathcal T}_h\) and write \(a=p_h(s)\) and \(b=p_h(t)\), with \(s,t\in [0,\sigma ]\). We note that, for every \(r\in [s\wedge t,s\vee t]\), \(p_h(s)\wedge p_h(t)\prec p_h(r)\), and furthermore \(p_h(s)\wedge p_h(t)=p_h(r_0)\), if \(r_0\) is any element of \([s\wedge t,s\vee t]\) at which h attains its minimum over \([s\wedge t,s\vee t]\). It follows that

$$\begin{aligned} g(p_h(s)\wedge p_h(t))=\min _{r\in [s\wedge t,s\vee t]} G(r). \end{aligned}$$

Hence,

$$\begin{aligned} d^{(g)}_h(a,b)&= g(p_h(s))+g(p_h(t))- 2\, g(p_h(s)\wedge p_h(t))\\ &= G(s)+G(t) - 2\min _{r\in [s\wedge t,s\vee t]} G(r)= d_G(s,t). \end{aligned}$$

If \(\pi ^{(g)}\) is the projection from \({\mathcal T}_h\) onto the subordinate tree \({\mathcal T}_h^{(g)}\), we see in particular that the condition \(\pi ^{(g)}(a)=\pi ^{(g)}(b)\) implies \(p_G(s)=p_G(t)\) and therefore \(\mathbf{p}(a)=\mathbf{p}(b)\).

It follows that \(\mathbf{p}= I \circ \pi ^{(g)}\), where \(I:{\mathcal T}_h^{(g)} \longrightarrow {\mathcal T}_G\) is onto. It remains to verify that I is isometric, but this is immediate from the identities in the last display. \(\square \)
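As a minimal numerical illustration of Proposition 5 (not used anywhere in the arguments), the following Python sketch checks the identity \(d^{(g)}_h=d_G\) on a discretized coding function, in the particular case where g is a nondecreasing function \(\phi \) of the height, so that \(G=\phi \circ h\) can be written down directly. The excursion-like function h used here is only a reflected random-walk bridge standing in for a genuine coding function, and all names and parameters are ad hoc.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretized coding function h: a reflected random-walk bridge,
# pinned at 0 at both ends (a crude stand-in for a Brownian excursion).
N = 1000
walk = np.concatenate([[0.0], np.cumsum(rng.standard_normal(N))])
bridge = walk - np.linspace(0.0, walk[-1], N + 1)   # bridge from 0 to 0
h = np.abs(bridge) / np.sqrt(N)                     # nonnegative, h[0] = h[N] = 0

def d_coded(f, i, j):
    """Distance, in the tree coded by f, between the points p_f(i) and p_f(j)."""
    lo, hi = min(i, j), max(i, j)
    return f[i] + f[j] - 2.0 * f[lo:hi + 1].min()

# Subordination function g = phi(height), with phi nondecreasing and phi(0) = 0,
# so that G = g o p_h is simply phi applied to h.
phi = np.sqrt
G = phi(h)

# Check d_h^{(g)}(a, b) = g(a) + g(b) - 2 g(a ^ b) against d_G(s, t).
for _ in range(5):
    i, j = rng.integers(0, N + 1, size=2)
    lo, hi = min(i, j), max(i, j)
    lhs = phi(h[i]) + phi(h[j]) - 2.0 * phi(h[lo:hi + 1].min())   # d_h^{(g)}
    rhs = d_coded(G, i, j)                                        # d_G
    print(f"s={i:4d}  t={j:4d}  d_h^(g)={lhs:.6f}  d_G={rhs:.6f}")
```

The two printed columns agree, reflecting the fact that, for such a g, the minimum of \(G\) over \([s\wedge t,s\vee t]\) is \(\phi \) applied to the minimum of h over the same interval.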

Remark

It is known that any compact \({\mathbb R}\)-tree is isometric to \({\mathcal T}_h\) for some function h (see [6, Corollary 1.2]). Thus Proposition 5 provides an alternative proof of Proposition 4.

3 Lévy trees

In the next sections, we will consider the case where \({\mathcal T}\) is the (random) tree coded by a Brownian excursion distributed under the Itô excursion measure \(\mathbf{n}(\cdot )\), and we will identify certain subordinate trees as Lévy trees. In this section, we recall the basic facts about Lévy trees that will be needed later. We refer to [7, 8] for more details.

We consider a nonnegative function \(\psi \) defined on \([0,\infty )\) of the type

$$\begin{aligned} \psi (r)=\alpha r + \beta r^2 + \int _{(0,\infty )} \pi (\mathrm {d}u)\,(e^{-ur}-1 +ur), \end{aligned}$$
(1)

where \(\alpha ,\beta \ge 0\), and \(\pi (\mathrm {d}u)\) is a \(\sigma \)-finite measure on \((0,\infty )\) such that \(\int (u\,{\wedge }\,u^2)\,\pi (\mathrm {d}u) <\infty \). With any such function \(\psi \), we can associate a continuous-state branching process (see [12] and references therein), and \(\psi \) is then called the branching mechanism function of this process. Notice that the conditions on \(\psi \) are not the most general ones, because we restrict our attention to the critical or subcritical case. Additionally, we will assume that

$$\begin{aligned} \int _1^\infty \frac{\mathrm {d}r}{\psi (r)} <\infty . \end{aligned}$$
(2)

This condition, which implies that at least one of the two properties \(\beta >0\) or \(\int u\,\pi (\mathrm {d}u)=\infty \) holds, is equivalent to the a.s. extinction of the continuous-state branching process with branching mechanism \(\psi \) [12]. Special cases include \(\psi (r)=r^\gamma \) for \(1<\gamma \le 2\).
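For instance, in the stable case \(\psi (r)=r^\gamma \) with \(1<\gamma \le 2\), condition (2) holds since

$$\begin{aligned} \int _1^\infty \frac{\mathrm {d}r}{r^\gamma } = \frac{1}{\gamma -1}<\infty . \end{aligned}$$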

Under the preceding assumptions, one can make sense of the Lévy tree that describes the genealogy of the continuous-state branching process with branching mechanism \(\psi \). We consider, under a probability measure \(\mathbf {P}\), a spectrally positive Lévy process \(X=(X_t)_{t\ge 0}\) with Laplace exponent \(\psi \), meaning that \(\mathbf {E}[\exp (-\lambda X_t)]=\exp (t\psi (\lambda ))\) for every \(t\ge 0\) and \(\lambda >0\). We define the associated height process by setting, for every \(t\ge 0\),

$$\begin{aligned} H_t=\lim _{\varepsilon \rightarrow 0} \frac{1}{\varepsilon } \int _0^t \mathrm {d}s\,\mathbf {1}_{\{X_s<\inf \{X_r:s\le r\le t\}+\varepsilon \}}, \end{aligned}$$

where the limit holds in probability under \(\mathbf {P}\). Then, by [7, Theorem 1.4.3], the process \((H_t)_{t\ge 0}\) has a continuous modification, which we consider from now on. We have \(H_t=0\) if and only if \(X_t=I_t\), where \(I_t=\inf \{X_s:0\le s\le t\}\) is the past minimum process of X.

Let \(\mathbf {N}\) stand for the (infinite) excursion measure of \(X-I\). Here the normalization of \(\mathbf {N}\) is fixed by saying that the local time at 0 of \(X-I\) is the process \(-I\). Let \(\chi \) stand for the duration of the excursion under \(\mathbf {N}\). The height process H is well defined (and has continuous paths) under \(\mathbf {N}\), and we have \(H_0=H_\chi =0\), \(\mathbf {N}\) a.e. To simplify notation, we will write \(\max H=\max \{H_s:0\le s\le \chi \}\) under \(\mathbf {N}\).

By definition (see [8, Definition 4.1]), the Lévy tree with branching mechanism \(\psi \) (or in short the \(\psi \)-Lévy tree) is the random compact \({\mathbb R}\)-tree \({\mathcal T}_H\) coded by the function \((H_t)_{0\le t\le \chi }\) under \(\mathbf {N}\), or more generally any random tree with the same distribution—note that the distribution of the Lévy tree is an infinite measure. We refer to [7, 8] for several results explaining in which sense the Lévy tree codes the genealogy of the continuous-state branching process with branching mechanism \(\psi \). In the special case \(\psi (r)=r^2{/}2\), X is just a standard linear Brownian motion, \(H_t=2(X_t-I_t)\) is twice a reflected Brownian motion, and the Lévy tree is the tree coded by (twice) a positive Brownian excursion under the (suitably normalized) Itô measure. Conditioning on \(\chi =1\) then yields the Brownian continuum random tree. When \(\psi (r)=r^\gamma \) with \(1<\gamma <2\), one gets the stable tree with index \(\gamma \).

The distribution of the height of a Lévy tree is given as follows. For every \(h>0\),

$$\begin{aligned} \mathbf {N}(\mathcal {H}({\mathcal T}_H)>h)=\mathbf {N}(\max H> h)= v(h), \end{aligned}$$

where the function \(v:(0,\infty )\rightarrow (0,\infty )\) is determined by

$$\begin{aligned} \int _{v(h)}^\infty \frac{\mathrm {d}r}{\psi (r)}= h. \end{aligned}$$
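For example, when \(\psi (r)=r^\gamma \) with \(1<\gamma \le 2\), solving the last display gives

$$\begin{aligned} \int _{v(h)}^\infty \frac{\mathrm {d}r}{r^\gamma } = \frac{v(h)^{1-\gamma }}{\gamma -1}=h,\qquad \hbox {hence}\quad v(h)=\big ((\gamma -1)h\big )^{-1/(\gamma -1)}. \end{aligned}$$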

Remark

In the preceding considerations, the normalization of the infinite measure \(\mathbf {N}\) is fixed by our choice of the local time at 0 of \(X-I\), and we can recover \(\psi \) from \(\mathbf {N}\) by the formulas for the distribution of \(\mathcal {H}({\mathcal T}_H)\) under \(\mathbf {N}\). What happens if we multiply \(\mathbf {N}\) by a constant \(\lambda >0\)? The tree \({\mathcal T}_H\) under \(\lambda \mathbf {N}\) is still a Lévy tree in the previous sense, but the associated branching mechanism is now \(\widetilde{\psi }(r)=\lambda \psi (r{/}\lambda )\). To see this, consider the Lévy process \(X'_t=\frac{1}{\lambda }X_{\lambda t}\), whose Laplace exponent is \(\widetilde{\psi }\). It is not hard to verify that the height process corresponding to \(X'\) is \(H'_t=H_{\lambda t}\). Furthermore, if \(\mathbf {N}'\) is the excursion measure of \(X'\) above its past minimum process, one also checks that the distribution of \((H'_{t{/}\lambda })_{t\ge 0}\) under \(\mathbf {N}'\) is the distribution of \((H_t)_{t\ge 0}\) under \(\lambda \mathbf {N}\). However, the tree coded by \((H'_{t{/}\lambda })_{t\ge 0}\) is the same as the tree coded by \((H'_{t})_{t\ge 0}\). This shows that \({\mathcal T}_H\) under \(\lambda \mathbf {N}\) is a Lévy tree with branching mechanism \(\widetilde{\psi }\). Note that this is consistent with the formula for \(\mathbf {N}(\mathcal {H}({\mathcal T}_H)>h)\).
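To verify the claim about the Laplace exponent of \(X'\), simply write, for every \(r>0\),

$$\begin{aligned} \mathbf {E}\big [\exp (-rX'_t)\big ]=\mathbf {E}\Big [\exp \Big (-\frac{r}{\lambda }\,X_{\lambda t}\Big )\Big ]=\exp \big (\lambda t\,\psi (r{/}\lambda )\big )=\exp \big (t\,\widetilde{\psi }(r)\big ). \end{aligned}$$

As for the consistency mentioned at the end of the remark, the change of variables \(r=\lambda u\) in the integral defining v shows that the function \(\widetilde{v}\) associated with \(\widetilde{\psi }\) is \(\widetilde{v}(h)=\lambda \,v(h)\), in agreement with \(\lambda \mathbf {N}(\max H>h)=\lambda v(h)\).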

We now state two results that will be important for our purposes. We first mention that, for every \(h\ge 0\), one can define under \(\mathbf {P}\) a local time process of H at level h, which is denoted by \((L^h_t)_{t\ge 0}\) and is such that the following approximation holds for every \(t>0\),

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \mathbf {E}\left[ \sup _{s\le t}\left| \frac{1}{\varepsilon }\int _0^s \mathbf {1}_{\{h< H_r\le h+\varepsilon \}}\mathrm {d}r - L^h_s\right| \right] =0, \end{aligned}$$
(3)

and the latter convergence is uniform in h (see [7, Proposition 1.3.3]). When \(h=0\) we have simply \(L^0_t=-I_t\). The definition of \((L^h_t)_{t\ge 0}\) also makes sense under \(\mathbf {N}\), with a similar approximation.

We fix \(h>0\), and let \((u_j,v_j)_{j\in J}\) be the collection of all excursion intervals of H above level h: these are all intervals (uv), with \(0\le u<v\), such that \(H_u=H_v=h\) and \(H_r>h\) for every \(r\in (u,v)\). For each such excursion interval \((u_j,v_j)\), we define the corresponding excursion by \(H^{(j)}_s=H_{(u_j+s)\wedge v_j} -h\), for every \(s\ge 0\). Then \(H^{(j)}\) is a random element of the space \(C({\mathbb R}_+,{\mathbb R}_+)\) of all continuous functions from \({\mathbb R}_+\) into \({\mathbb R}_+\). We also let \(\mathbf {N}^\circ \) be the \(\sigma \)-finite measure on \(C({\mathbb R}_+,{\mathbb R}_+)\) which is the “law” of \((H_s)_{s\ge 0}\) under \(\mathbf {N}\).

Proposition 6

  1. (i)

    Under the probability measure \(\mathbf {P}\), the point measure

    $$\begin{aligned} \sum _{j\in J} \delta _{(L^h_{u_j},H^{(j)})}(\mathrm {d}\ell ,\mathrm {d}\omega ) \end{aligned}$$

    is Poisson with intensity \(\mathbf {1}_{[0,\infty )}(\ell )\,\mathrm {d}\ell \,\mathbf {N}^\circ (\mathrm {d}\omega )\).

  2. (ii)

    Under the probability measure \(\mathbf {N}(\cdot \mid \max H >h)\) and conditionally on \(L^h_\chi \), the point measure

    $$\begin{aligned} \sum _{j\in J} \delta _{(L^h_{u_j},H^{(j)})}(\mathrm {d}\ell ,\mathrm {d}\omega ) \end{aligned}$$

    is Poisson with intensity \(\mathbf {1}_{[0,L^h_\chi ]}(\ell )\,\mathrm {d}\ell \,\mathbf {N}^\circ (\mathrm {d}\omega )\).

See [8, Proposition 3.1, Corollary 3.2] for a slightly more precise version of this proposition. It follows from the preceding proposition that Lévy trees satisfy a branching property analogous to the classical branching property of Galton–Watson trees. To state this property, we introduce some notation. If \(({\mathcal T},d)\) is a (deterministic) compact \({\mathbb R}\)-tree rooted at \(\rho \), and \(0<h< \mathcal {H}({\mathcal T})\), we can consider the subtrees of \({\mathcal T}\) above level h. Here, a subtree above level h is just the closure of a connected component of \(\{a\in {\mathcal T}:d(\rho ,a)> h\}\). Such a subtree is itself viewed as a rooted \({\mathbb R}\)-tree (the root is obviously the unique point at height h in the subtree).

Proposition 7

Let \({\mathcal T}\) be a random compact \({\mathbb R}\)-tree defined under an infinite measure \(\mathcal {N}\), such that \(\mathcal {N}(\mathcal {H}({\mathcal T})=0)=0\) and \(0<\mathcal {N}(\mathcal {H}({\mathcal T})>h)<\infty \) for every \(h>0\). For every \(h,\varepsilon >0\), write \(M(h,h+\varepsilon )\) for the number of subtrees of \({\mathcal T}\) above level h with height greater than \(\varepsilon \).

  1. (i)

    Suppose that \({\mathcal T}\) is a Lévy tree. Then, for every \(h,\varepsilon >0\), for every integer \(p\ge 1\), the distribution under \(\mathcal {N}(\cdot \mid M(h,h+\varepsilon )=p)\) of the unordered collection formed by the p subtrees of \({\mathcal T}\) above level h with height greater than \(\varepsilon \) is the same as that of the unordered collection of p independent copies of \({\mathcal T}\) under \(\mathcal {N}(\cdot \mid \mathcal {H}({\mathcal T})>\varepsilon )\).

  2. (ii)

    Conversely, if the property stated in (i) holds, then \({\mathcal T}\) is a Lévy tree.

The property stated in (i) is called the branching property. The fact that it holds for Lévy trees is a straightforward consequence of Proposition 6 (ii). For the converse, we refer to [27, Theorem 1.1]. Note that the branching property remains valid if we multiply the underlying measure \(\mathcal {N}\) by a positive constant, which is consistent with the remark above.

To conclude this section, let us briefly comment on points of infinite multiplicity of the Lévy tree \({\mathcal T}_H\). The multiplicity of a point a of \({\mathcal T}_H\) is the number of connected components of \({\mathcal T}_H\backslash \{a\}\), and a is called a leaf if it has multiplicity one. Suppose that there is no quadratic part in \(\psi \), meaning that the constant \(\beta \) in (1) is 0 (note that condition (2) then implies that \(\pi \) has infinite mass). Then [8, Theorem 4.6], all points of \({\mathcal T}_H\) have multiplicity 1, 2 or \(\infty \). The set of all points of infinite multiplicity is a countable dense subset of \({\mathcal T}_H\), and these points are in one-to-one correspondence with local minima of H, or with jump times of X. More precisely, let a be a point of infinite multiplicity of \({\mathcal T}_H\). Then, if \(s_1=\min p_H^{-1}(a)\) and \(s_2=\max p_H^{-1}(a)\), we have \(s_1<s_2\), \(H_{s_1}=H_{s_2}=\min \{H_s:s_1\le s\le s_2\}\), and \(p_H^{-1}(a)=\{s\in [s_1,s_2]: H_s= H_{s_1}\}\) is a Cantor set contained in \([s_1,s_2]\). In terms of the Lévy process X, \(s_1\) is a jump time of X and \(s_2=\inf \{s\ge s_1: X_s\le X_{s_1-}\}\), and \(p_H^{-1}(a)\) consists exactly of those \(s\in [s_1,s_2]\) such that \(\inf \{X_r:r\in [s_1,s]\}=X_s\). Furthermore, if for every \(r\in [0,\Delta X_{s_1}]\) we set \(\eta _r=\inf \{s\ge s_1: X_{s}<X_{s_1}-r\}\), the points of \(p_H^{-1}(a)\) are all of the form \(\eta _r\) or \(\eta _{r-}\). The quantity \(\Delta X_{s_1}\) is the “weight” of the point of infinite multiplicity a. See [8] for more details.

4 Subordination by the Brownian snake maximum

In this section, we prove Theorem 1. We start with a brief presentation of the Brownian snake. We refer to [14, Chapter IV] for more details.

We let \(\mathcal {W}\) be the space of all finite paths in \({\mathbb R}\). Here a finite path is simply a continuous mapping \(\mathrm {w}:[0,\zeta ]\longrightarrow {\mathbb R}\), where \(\zeta =\zeta _{(\mathrm {w})}\) is a nonnegative real number called the lifetime of \(\mathrm {w}\). The set \(\mathcal {W}\) is a Polish space when equipped with the distance

$$\begin{aligned} d_\mathcal {W}(\mathrm {w},\mathrm {w}')=|\zeta _{(\mathrm {w})}-\zeta _{(\mathrm {w}')}|+\sup _{t\ge 0}|\mathrm {w}(t\wedge \zeta _{(\mathrm {w})})-\mathrm {w}'(t\wedge \zeta _{(\mathrm {w}')})|. \end{aligned}$$

The endpoint (or tip) of the path \(\mathrm {w}\) is denoted by \(\widehat{\mathrm {w}}=\mathrm {w}(\zeta _{(\mathrm {w})})\). For every \(x\in {\mathbb R}\), we set \(\mathcal {W}_x=\{\mathrm {w}\in \mathcal {W}:\mathrm {w}(0)=x\}\). We also identify the trivial path of \(\mathcal {W}_x\) with zero lifetime with the point x.

The standard (one-dimensional) Brownian snake with initial point x is the continuous Markov process \((W_s)_{s\ge 0}\) taking values in \(\mathcal {W}_x\), whose distribution is characterized by the following properties:

  1. (a)

    The process \((\zeta _{(W_s)})_{s\ge 0}\) is a reflected Brownian motion in \({\mathbb R}_+\) started from 0. To simplify notation, we write \(\zeta _s=\zeta _{(W_s)}\) for every \(s\ge 0\).

  2. (b)

    Conditionally on \((\zeta _{s})_{s\ge 0}\), the process \((W_s)_{s\ge 0}\) is time-inhomogeneous Markov, and its transition kernels are specified as follows. If \(0\le s\le s'\),

    • \(W_{s'}(t)=W_{s}(t)\) for every \(t\le m_\zeta (s,s'):=\min \{\zeta _r:s\le r\le s'\}\);

    • the random path \((W_{s'}(m_\zeta (s,s')+t)-W_{s'}(m_\zeta (s,s')))_{0\le t\le \zeta _{s'}- m_\zeta (s,s')}\) is independent of \(W_s\) and distributed as a real Brownian motion started at 0 and stopped at time \(\zeta _{s'}- m_\zeta (s,s')\).

Informally, the value \(W_s\) of the Brownian snake at time s is a one-dimensional Brownian path started from x, with lifetime \(\zeta _s\). As s varies, the lifetime \(\zeta _s\) evolves like reflected Brownian motion in \({\mathbb R}_+\). When \(\zeta _s\) decreases, the path is erased from its tip, and when \(\zeta _s\) increases, the path is extended by adding “little pieces” of Brownian paths at its tip.
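The following toy Python sketch (purely illustrative, and in no way a simulation of the excursion measures considered below) mimics this mechanism in discrete time: the lifetime performs \(\pm 1\) steps reflected at 0, the current path is extended by an independent Gaussian increment when the lifetime goes up, and erased from its tip when it goes down. All names and parameters here are ad hoc.

```python
import numpy as np

rng = np.random.default_rng(1)

# Lifetime process (discrete analogue of zeta): +/-1 random walk reflected at 0.
n_steps = 20
zeta = [0]
for _ in range(n_steps):
    step = 1 if zeta[-1] == 0 else rng.choice([-1, 1])
    zeta.append(zeta[-1] + step)

# The snake: path[k] is the value of the current path W_s at height k.
path = [0.0]          # trivial path at the starting point 0
tips = [path[-1]]     # record of the tip \hat{W}_s after each step
for s in range(1, n_steps + 1):
    if zeta[s] > zeta[s - 1]:
        # lifetime increases: extend the path at its tip by a Brownian-like increment
        path.append(path[-1] + rng.standard_normal())
    else:
        # lifetime decreases: erase the path from its tip
        path.pop()
    tips.append(path[-1])

print("lifetimes:", zeta)
print("tips     :", [round(x, 2) for x in tips])
```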

For every \(x\in {\mathbb R}\), we write \({\mathbb P}_x\) for the probability measure under which \(W_0=x\), and \({\mathbb N}_x\) for the (infinite) excursion measure of W away from x. Also \(\sigma :=\sup \{s>0: \zeta _s> 0\}\) stands for the duration of the excursion under \({\mathbb N}_x\). Under \({\mathbb N}_x\), \((\zeta _s)_{s\ge 0}\) is distributed according to the Itô excursion measure \(\mathbf{n}(\cdot )\), and the normalization is fixed by the formula

$$\begin{aligned} {\mathbb N}_x\left( \max _{0\le s\le \sigma }\zeta _s >\varepsilon \right) = \frac{1}{2\varepsilon }. \end{aligned}$$

The following property of the Brownian snake will be used in several places below. Recall that \(\widehat{W}_s=W_s(\zeta _s)\) is the tip of the path \(W_s\). We say that \(r\in [0,\sigma )\) is a time of right increase of \(s\mapsto \zeta _s\), resp. of \(s\mapsto \widehat{W}_s\), if there exists \(\varepsilon \in (0,\sigma -r]\) such that \(\zeta _u\ge \zeta _r\), resp. \(\widehat{W}_u\ge \widehat{W}_r\), for every \(u\in [r,r+\varepsilon ]\). We can similarly define times of left increase. Then according to [19, Lemma 3.2], \({\mathbb N}_x\) a.e., no time \(r\in (0,\sigma )\) can be simultaneously a time of (left or right) increase of \(s\mapsto \zeta _s\) and a time of (left or right) increase of \(s\mapsto \widehat{W}_s\).

Let us fix \(x=0\) and argue under \({\mathbb N}_0\). As previously, we write \({\mathcal T}_\zeta \) for the (random) tree coded by the function \((\zeta _s)_{0\le s\le \sigma }\), \(p_\zeta :[0,\sigma ]\longrightarrow {\mathcal T}_\zeta \) for the canonical projection, and \(\rho _\zeta =p_\zeta (0)\) for the root of \({\mathcal T}_\zeta \). Properties of the Brownian snake show that the condition \(p_\zeta (s)=p_\zeta (s')\) implies \(W_s=W_{s'}\), and thus we can define \(W_a\) for every \(a\in {\mathcal T}_\zeta \) by setting \(W_a=W_s\) if \(a=p_\zeta (s)\). We note that, if \(a=p_\zeta (s)\) and \(t\in [0,\zeta _s]\), \(W_s(t)\) coincides with \(\widehat{W}_b\) where b is the unique point of \(\llbracket \rho _\zeta ,a\rrbracket \) at distance t from \(\rho _\zeta \). For every \(\mathrm {w}\in \mathcal {W}\), set

$$\begin{aligned} \overline{\mathrm {w}}:=\max _{0\le t\le \zeta _{(\mathrm {w})}} \mathrm {w}(t). \end{aligned}$$

Then, the function \(a\mapsto \overline{W}_a\) is continuous and nondecreasing on \({\mathcal T}_\zeta \) (if \(a=p_\zeta (s)\) and \(b=p_\zeta (s')\), the condition \(a\prec b\) implies that \(\zeta _s\le \zeta _{s'}\) and that \(W_a\) is the restriction of \(W_b\) to the interval \([0,\zeta _s]\), so that obviously \(\overline{W}_a\le \overline{W}_b\)). As in Theorem 1, we write \(\widetilde{\mathcal T}_\zeta \) for the subordinate tree of \({\mathcal T}_\zeta \) with respect to the function \(a\mapsto \overline{W}_a\).

Proof of Theorem 1

We first verify that the branching property stated in Proposition 7 holds for \(\widetilde{\mathcal T}_\zeta \) under \({\mathbb N}_0\), and to this end we rely on the special Markov property of the Brownian snake.

Let \(h>0\), and, for every \(\mathrm {w}\in \mathcal {W}\), set \(\tau _h(\mathrm {w})=\inf \{t\in [0,\zeta _{(\mathrm {w})}]:\mathrm {w}(t)\ge h\}\). Let \((a_i,b_i)_{i\in I}\) be the connected components of the open set \(\{s\ge 0: \tau _h(W_s)<\zeta _s\}\). For every such connected component \((a_i,b_i)\), for every \(s\in [a_i,b_i]\), the path \(W_s\) coincides with \(W_{a_i}=W_{b_i}\) up to time \(\tau _h(W_{a_i})=\zeta _{a_i}=\zeta _{b_i}=\tau _h(W_{b_i})\) (these assertions are straightforward consequences of the properties of the Brownian snake, and we omit the details). We then set, for every \(s\ge 0\),

$$\begin{aligned} W^{(i)}_s(t) := W_{(a_i+s)\wedge b_i}(\zeta _{a_i}+t),\quad \hbox {for } 0\le t\le \zeta ^{(i)}_s:=\zeta _{(a_i+s)\wedge b_i}-\zeta _{a_i}. \end{aligned}$$

We view \(W^{(i)}\) as a random element of the space of all continuous functions from \({\mathbb R}_+\) into \(\mathcal {W}_h\), and the \(W^{(i)}\)’s are called the excursions of W outside the domain \((-\infty ,h)\) (see the appendix below for further details in a more general setting).

By a compactness argument, only finitely many of the excursions \(W^{(i)}\) hit \((h+\varepsilon ,\infty )\). Let \(M_{h,\varepsilon }\) be the number of these excursions. It follows from Corollary 22 in the appendix that, for every \(p\ge 1\), conditionally on \(\{M_{h,\varepsilon }=p\}\), the unordered collection formed by the excursions of W outside \((-\infty ,h)\) that hit \((h+\varepsilon ,\infty )\) is distributed as the unordered collection of p independent copies of W under \({\mathbb N}_h(\cdot \mid \sup \{\widehat{W}_s:s\ge 0\} > h+\varepsilon )\). On the other hand, noting that \(\widetilde{\mathcal T}_\zeta \) is the tree coded by the function \([0,\sigma ]\ni s\mapsto \overline{W}_s\) (by Proposition 5), we also see that subtrees of \(\widetilde{\mathcal T}_\zeta \) above level h with height greater than \(\varepsilon \) are in one-to-one correspondence with excursions of W outside \((-\infty ,h)\) that hit \((h+\varepsilon ,\infty )\), and if a subtree \(\widetilde{\mathcal T}^{(i)}\) corresponds to an excursion \(W^{(i)}\), \(\widetilde{\mathcal T}^{(i)}\) is obtained from the excursion \(W^{(i)}\) (shifted so that it starts from 0) by exactly the same procedure that allows us to construct \(\widetilde{\mathcal T}_\zeta \) from W under \({\mathbb N}_0\): To be specific, \(\widetilde{\mathcal T}^{(i)}\) is coded by the function \(s\mapsto \overline{W}^{(i)}_s-h\) just as \(\widetilde{\mathcal T}_\zeta \) is coded by \(s\mapsto \overline{W}_s\). The preceding considerations show that \(\widetilde{\mathcal T}_\zeta \) satisfies the branching property, and therefore is a Lévy tree.

To get the formula for \(\psi _0\), we note that the distribution of the height of \(\widetilde{\mathcal T}_\zeta \) is given by

$$\begin{aligned} {\mathbb N}_0(\mathcal {H}(\widetilde{\mathcal T}_\zeta )>r)={\mathbb N}_0\left( \sup _{s\ge 0} \widehat{W}_s >r\right) =\frac{3}{2r^2}, \end{aligned}$$

where the last equality can be found in [14, Section VI.1]. Since we also know that the function \(v(r)={\mathbb N}_0(\mathcal {H}(\widetilde{\mathcal T}_\zeta )>r)\) solves \(v'(r)=-\psi _0(v(r))\), the formula for \(\psi _0\) follows. \(\square \)
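To spell out this last step: with \(v(r)=3/(2r^2)\) and \(\psi _0(u)=c\,u^{3/2}\), the equation \(v'(r)=-\psi _0(v(r))\) reads

$$\begin{aligned} -\frac{3}{r^3}= - c\left( \frac{3}{2r^2}\right) ^{3/2}= -c\,\Big (\frac{3}{2}\Big )^{3/2}\frac{1}{r^3}, \end{aligned}$$

which forces \(c=3\,(2/3)^{3/2}=\sqrt{8/3}\), as stated in Theorem 1.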

5 Approximating a Lévy tree by embedded Galton–Watson trees

In this section, we come back to the general setting of Sect. 3. Our goal is to prove that the Lévy tree \({\mathcal T}_H\) is (under the probability measure \(\mathbf {N}(\cdot \mid \mathcal {H}({\mathcal T}_H)>h)\) for some \(h>0\)) the almost sure limit of a sequence of embedded Galton–Watson trees, and that this limit is consistent with the order structure of the Lévy tree. We refer to [15] for basic facts about Galton–Watson trees. A key property for us is the fact that Galton–Watson trees are rooted ordered (discrete) trees, also called plane trees, so that there is a lexicographical ordering on vertices.

In what follows, we argue under the probability measure \(\mathbf {P}\). Recall that X is under \(\mathbf {P}\) a Lévy process with Laplace exponent \(\psi \), and that H is the associated height process. We fix an integer \(n\ge 1\), and, for every integer \(j\ge 0\), we consider the sequence of all excursions of H above level \(j\,2^{-n}\) that hit level \((j+1)2^{-n}\). We let

$$\begin{aligned} 0\le \alpha ^n_0<\alpha ^n_1<\alpha ^n_2<\cdots \end{aligned}$$

be the ordered sequence consisting of all the initial times of these excursions, for all values of the integer \(j\ge 0\) (so, \(\alpha ^n_0\) corresponds to the beginning of an excursion of H above 0 that hits \(2^{-n}\), \(\alpha ^n_1\) may be either the beginning of an excursion of H above 0 that hits \(2^{-n}\) or the beginning of an excursion of H above \(2^{-n}\) that hits \(2\times 2^{-n}\), and so on). For every \(j\ge 0\), we also let \(\beta ^n_j\) be the terminal time of the excursion starting at time \(\alpha ^n_j\).

We then set, for every integer \(k\ge 0\),

$$\begin{aligned} H^n_k= 2^n\, H_{\alpha ^n_k}. \end{aligned}$$

Proposition 8

The process \((H^n_k)_{k\ge 0}\) is the discrete height process of a sequence of independent Galton–Watson trees with the same offspring distribution \(\mu _n\).

Recall that the discrete height process of a sequence of Galton–Watson trees gives the generation of the successive vertices in the trees, assuming that these vertices are listed in lexicographical order in each tree and one tree after another. See [15] or [7, Section 0.2]. The (finite) height sequence of a single tree is defined analogously.

Proof

By construction, \(\alpha ^n_0\) is the initial time of the first excursion of H above 0 that hits \(2^{-n}\). Notice that this excursion is distributed as H under \(\mathbf{N}(\cdot \mid \max H \ge 2^{-n})\). Let \(K\ge 1\) be the (random) integer such that \(\alpha ^n_{K-1}<\beta ^n_0\le \alpha ^n_K\), so that \( \alpha ^n_K\) is the initial time of the second excursion of H above 0 that hits \(2^{-n}\).

With the excursion of H during the interval \([\alpha ^n_0,\beta ^n_0]\), we can associate a (plane) tree \({\mathcal T}^n_0\) constructed as follows. The children of the ancestor correspond to the excursions of H above level \(2^{-n}\), during the time interval \([\alpha ^n_0,\beta ^n_0]\), that hit \(2\times 2^{-n}\), and the order on these children is obviously given by the chronological order. Equivalently, the children of the ancestor correspond to the indices \(i\in \{1,\ldots ,K-1\}\) such that \(H^n_i=1\). Then, assuming that the ancestor has at least one child (equivalently that \(K\ge 2\)), the children of the first child of the ancestor correspond to the excursions of H above level \(2\times 2^{-n}\), during the time interval \([\alpha ^n_1,\beta ^n_1]\), that hit \(3\times 2^{-n}\), and so on. See Fig. 2 for an illustration.

Fig. 2

The sequence \(\alpha ^n_0,\alpha ^n_1,\ldots \) and the tree \({\mathcal T}^n_0\) (in thick lines)

Write \(N^n_0\) for the number of children of the ancestor in \({\mathcal T}^n_0\). It follows from Proposition 6 (ii) that, conditionally on \(N^n_0\), the successive excursions of H above level \(2^{-n}\), during the time interval \([\alpha ^n_0,\beta ^n_0]\), that hit \(2\times 2^{-n}\) are independent and distributed as H under \(\mathbf{N}(\cdot \mid \max H \ge 2^{-n})\) (recall that our definition shifts excursions above a level h so that they start from 0). Recalling the construction of the tree \({\mathcal T}^n_0\), we now obtain that, conditionally on \(N^n_0\), the subtrees of \({\mathcal T}^n_0\) originating from the children of the ancestor are independent and distributed as \({\mathcal T}^n_0\). This just means that \({\mathcal T}^n_0\) is a Galton–Watson tree, and its offspring distribution \(\mu _n\) is the law under \(\mathbf{N}(\cdot \mid \max H \ge 2^{-n})\) of the number of excursions of H above level \(2^{-n}\) that hit \(2\times 2^{-n}\).

With the second excursion of H above 0 that hits \(2^{-n}\), we can similarly associate a Galton–Watson tree \({\mathcal T}^n_1\) with offspring distribution \(\mu _n\), and so on. The trees \({\mathcal T}^n_0,{\mathcal T}^n_1,\ldots \) are independent as a consequence of the strong Markov property of the Lévy process X. By construction, the process \((H^n_k)_{k\ge 0}\) is the discrete height process of the sequence \({\mathcal T}^n_0,{\mathcal T}^n_1,\ldots \). \(\square \)
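For concreteness, here is a small Python sketch (an informal illustration only, with ad hoc names and a crude grid discretization) of the construction used in the preceding proof: given a sampled path H, it lists, level by level, the starting indices of the excursions of H above \(j\,\varepsilon \) that reach \((j+1)\varepsilon \), merges them chronologically, and returns the corresponding levels, which play the role of \((H^n_k)_{k\ge 0}\) with \(\varepsilon =2^{-n}\).

```python
import numpy as np

def embedded_height_process(H, eps):
    """For each level j*eps, find the starting indices of the excursions of the
    sampled path H above j*eps that reach (j+1)*eps; merge all these indices in
    chronological order and return them together with the levels j (the analogue
    of the discrete height process H^n_k, with eps playing the role of 2**-n)."""
    records = []                      # pairs (starting index, level j)
    j = 0
    while j * eps < H.max():
        above = H > j * eps
        starts = np.flatnonzero(above & ~np.roll(above, 1))
        ends = np.flatnonzero(above & ~np.roll(above, -1))
        for s, e in zip(starts, ends):
            if H[s:e + 1].max() >= (j + 1) * eps:
                records.append((s, j))
        j += 1
    records.sort()
    alphas = np.array([s for s, _ in records])
    heights = np.array([j for _, j in records])
    return alphas, heights

# Toy usage, with a reflected random-walk bridge standing in for the height process.
rng = np.random.default_rng(2)
N = 5000
walk = np.concatenate([[0.0], np.cumsum(rng.standard_normal(N))])
H = np.abs(walk - np.linspace(0.0, walk[-1], N + 1)) / np.sqrt(N)
alphas, Hn = embedded_height_process(H, eps=0.1)
print(Hn[:30])
```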

Proposition 9

For every \(n\ge 1\), set \(v_n:= 2^nv(2^{-n})=2^n \mathbf{N}(\max H \ge 2^{-n})\). Then, for every \(A>0\),

$$\begin{aligned} \sup _{t\le A} \left| 2^{-n} H^n_{\lfloor v_n t\rfloor } - H_t\right| \mathrel {\mathop {\longrightarrow }\limits _{n\rightarrow \infty }^{}} 0 \end{aligned}$$

in probability under \(\mathbf {P}\).

Proof

Recall that, for every \(h\ge 0\), \((L^h_t)_{t\ge 0}\) denotes the local time of H at level h. It will be convenient to introduce, for every \(n\ge 1\) and every \(j\ge 0\), the increasing process

$$\begin{aligned} \mathcal {N}^n_{(j)}(t):=\#\left\{ k\ge 0: H^n_k=j,L^{j2^{-n}}_{\alpha ^n_k}\le t\right\} . \end{aligned}$$

As a consequence of Proposition 6(i) applied with \(h=j 2^{-n}\), we get that \(\mathcal {N}^n_{(j)}(t)\) is a Poisson process with parameter \(v(2^{-n})=\mathbf{N}(\max H \ge 2^{-n})\).

We claim that, for every \(A>0\),

$$\begin{aligned} \lim _{n\rightarrow \infty } \sup _{j\ge 0}\left( \mathbf {E}\left[ \sup _{s\le A} |v(2^{-n})^{-1}\,\#\{k: H^n_k=j,\alpha ^n_k\le s\} - L^{j2^{-n}}_s|\right] \right) =0. \end{aligned}$$
(4)

To see this, first observe that, for every \(s>0\),

$$\begin{aligned} \mathcal {N}^n_{(j)}\left( \left( L_s^{j2^{-n}}\right) -\right) \le \#\left\{ k: H^n_k=j,\alpha ^n_k\le s\right\} \le \mathcal {N}^n_{(j)}\left( L_s^{j2^{-n}}\right) , \end{aligned}$$

and then write

$$\begin{aligned}&\mathbf {E}\left[ \sup _{s\le A} \left| v(2^{-n})^{-1}\,\#\left\{ k: H^n_k=j,\alpha ^n_k\le s\right\} - L^{j2^{-n}}_s\right| \right] \\&\qquad \le \sum _{p=1}^\infty \mathbf {E}\Big [\mathbf {1}_{\{p-1\le L^{j2^{-n}}_A\le p\}} \sup _{t\le p} |v(2^{-n})^{-1}\,\mathcal {N}^n_{(j)}(t) - t|\Big ]\\&\qquad \le \sum _{p=1}^\infty \mathbf {P}\left( p-1\le L^{j2^{-n}}_A\right) ^{1/2}\,\mathbf {E}\left[ \sup _{t\le p} |v(2^{-n})^{-1}\,\mathcal {N}^n_{(j)}(t) - t|^2\right] ^{1/2}. \end{aligned}$$

Then, if \(\mathcal {N}(t)\) stands for a standard Poisson process, we have by a classical martingale inequality

$$\begin{aligned} \mathbf {E}\left[ \sup _{t\le p} \left| v(2^{-n})^{-1}\,\mathcal {N}^n_{(j)}(t) - t\right| ^2\right]&= v(2^{-n})^{-2}\mathbf {E}\left[ \sup _{t\le v(2^{-n})p} (\mathcal {N}(t)-t)^2\right] \\&\le 4v(2^{-n})^{-2}\,\mathbf {E}\left[ (\mathcal {N}(v(2^{-n})p)-v(2^{-n})p)^2\right] \\&=4v(2^{-n})^{-1}p. \end{aligned}$$

It follows that, for every \(j\ge 0\),

$$\begin{aligned}&\mathbf {E}\left[ \sup _{s\le A} \left| v(2^{-n})^{-1}\,\#\left\{ k: H^n_k=j,\alpha ^n_k\le s\right\} - L^{j2^{-n}}_s\right| \right] \\&\quad \le \left( \sum _{p=1}^\infty \left( p\,\mathbf {P}\left( p-1\le L^{j2^{-n}}_A\right) \right) ^{1/2}\right) \times 2v(2^{-n})^{-1/2}, \end{aligned}$$

and the proof of (4) is completed by noting that \(v(2^{-n})\longrightarrow \infty \) as \(n\rightarrow \infty \), and that

$$\begin{aligned} \sum _{p=1}^\infty \left( p\,\mathbf {P}\left( p-1\le L^{j2^{-n}}_A\right) \right) ^{1/2} \le \sum _{p=1}^\infty \left( p\,\mathbf {P}\left( p-1\le L^{0}_A\right) \right) ^{1/2}<\infty , \end{aligned}$$

because \(L^{j2^{-n}}_A\) is bounded above in distribution by \(L^0_A\) (cf Definition 1.3.1 in [7]), and we know that \(L^0_A=-I_A\) has exponential moments.

Let \(\ell \ge 1\) be an integer. By summing the convergence in (4) over possible choices of \(0\le j< \ell 2^n\), we also obtain that

$$\begin{aligned} \lim _{n\rightarrow \infty } \mathbf {E}\left[ \sup _{s\le A} \left| 2^{-n}v(2^{-n})^{-1}\,\#\left\{ k: H_{\alpha ^n_k}< \ell ,\alpha ^n_k\le s\right\} - 2^{-n}\sum _{j=0}^{\ell 2^n-1}L^{j2^{-n}}_s\right| \right] =0. \end{aligned}$$
(5)

On the other hand, we have, for every \(s\ge 0\),

$$\begin{aligned}&\left| \int _0^s \mathrm {d}r\,\mathbf {1}_{\{H_r\le \ell \}} - 2^{-n}\sum _{j=0}^{\ell 2^n-1}L^{j2^{-n}}_s\right| \\&\quad = \left| \sum _{j=0}^{\ell 2^n-1}\left( \int _0^s \mathrm {d}r\,\mathbf {1}_{\left\{ j2^{-n}<H_r\le (j+1)2^{-n}\right\} } - 2^{-n}L^{j2^{-n}}_s\right) \right| \\&\quad \le 2^{-n} \sum _{j=0}^{\ell 2^n-1} \left| 2^n \int _0^s \mathrm {d}r\,\mathbf {1}_{\{j2^{-n}<H_r\le (j+1)2^{-n}\}} - L^{j2^{-n}}_s\right| , \end{aligned}$$

and it follows from (3) that

$$\begin{aligned} \lim _{n\rightarrow \infty } \mathbf {E}\left[ \sup _{s\le A} \left| \int _0^s \mathrm {d}r\,\mathbf {1}_{\{H_r\le \ell \}} - 2^{-n}\sum _{j=0}^{\ell 2^n-1}L^{j2^{-n}}_s\right| \right] =0. \end{aligned}$$
(6)

By combining (5) and (6), we get

$$\begin{aligned} \lim _{n\rightarrow \infty } \mathbf {E}\left[ \sup _{s\le A} \left| \int _0^s \mathrm {d}r\,\mathbf {1}_{\{H_r\le \ell \}} - v_n^{-1}\,\#\left\{ k: H_{\alpha ^n_k}< \ell ,\alpha ^n_k\le s\right\} \right| \right] =0, \end{aligned}$$

where \(v_n=2^nv(2^{-n})\). Since \(\mathbf {P}(\max \{H_s:0\le s\le A\}\ge \ell )\) can be made arbitrarily small by choosing \(\ell \) large, we have obtained that

$$\begin{aligned} \sup _{s\le A} \Big | v_n^{-1}\,\#\{k: \alpha ^n_k\le s\} - s\Big | \mathrel {\mathop {\longrightarrow }\limits _{n\rightarrow \infty }^{}} 0 \end{aligned}$$

in probability. Elementary arguments show that this implies

$$\begin{aligned} \sup _{t\le A} \left| \alpha ^n_{\lfloor v_nt\rfloor } - t\right| \mathrel {\mathop {\longrightarrow }\limits _{n\rightarrow \infty }^{}} 0 \end{aligned}$$
(7)

in probability, and therefore also

$$\begin{aligned} \sup _{t\le A} \left| H_{\alpha ^n_{\lfloor v_nt\rfloor }} - H_t\right| \mathrel {\mathop {\longrightarrow }\limits _{n\rightarrow \infty }^{}} 0 \end{aligned}$$

in probability. This completes the proof. \(\square \)

In what follows, we will need an analog of the preceding two propositions for the height process under its excursion measure \(\mathbf{N}\). Let us fix \(n\ge 1\). Under the measure \(\mathbf{N}(\cdot \cap \{ \max H\ge 2^{-n}\})\), we define

$$\begin{aligned} 0= \bar{\alpha }^n_0<\bar{\alpha }^n_1<\bar{\alpha }^n_2<\cdots <\bar{\alpha }^n_{\bar{K}_n-1} \end{aligned}$$

as the ordered sequence consisting of the initial times of all excursions of H above level \(j\,2^{-n}\) that hit level \((j+1)2^{-n}\), for all values of the integer \(j\ge 0\). The analog of Proposition 8 says that, under \(\mathbf{N}(\cdot \mid \max H\ge 2^{-n})\), the finite sequence

$$\begin{aligned} \bar{H}^n_k:= 2^n\, H_{\bar{\alpha }^n_k},\quad 0\le k\le \bar{K}_n-1 \end{aligned}$$

is distributed as the height sequence of a Galton–Watson tree with offspring distribution \(\mu _n\). This is immediate from the fact that an excursion with distribution \(\mathbf{N}(\cdot \mid \{ \max H\ge 2^{-n}\})\) is obtained by taking (under \(\mathbf {P}\)) the first excursion of H with height greater than \(2^{-n}\). By convention, we take \(\bar{\alpha }^n_k=\chi \) and \(\bar{H}^n_k=0\) if \(k\ge \bar{K}_n\).

We next fix a sequence \((n_p)_{p\ge 1}\) such that both (7) and the convergence of Proposition 9 hold \(\mathbf {P}\) a.s. along this sequence, for each \(A>0\). From now on, we consider only values of n belonging to this sequence. We claim that we have then also

$$\begin{aligned} \sup _{t\ge 0} \left| 2^{-n} \bar{H}^n_{\lfloor v_n t\rfloor } - H_t\right| \mathrel {\mathop {\longrightarrow }\limits _{n\rightarrow \infty }^{}} 0,\qquad \mathbf{N}\hbox { a.e.} \end{aligned}$$
(8)

To see this, note that it suffices to argue under \(\mathbf{N}(\cdot \mid \max H >\delta )\) for some \(\delta >0\), and then to consider the first excursion (under \(\mathbf {P}\)) of H away from 0 with height greater than \(\delta \). We abuse notation by still writing \(0= \bar{\alpha }^n_0<\bar{\alpha }^n_1<\bar{\alpha }^n_2<\cdots <\bar{\alpha }^n_{\bar{K}_n-1}\) for the finite sequence of times defined as explained above, now relative to this first excursion with height greater than \(\delta \).

We observe that, provided n is large enough so that \(2^{-n}<\delta \), we have

$$\begin{aligned} \bar{\alpha }^n_k=\alpha ^n_{d_n+k}, \qquad 0\le k< \bar{K}_n=r_n-d_n \end{aligned}$$

where \(d_n\) is the index such that \(\alpha ^n_{d_n}=:d_{(\delta )}\) is the initial time of the first excursion of H away from 0 with height greater than \(\delta \), and \(r_n\) is the first index \(k>d_n\) such that \(\alpha ^n_k\) does not belong to the interval \([d_{(\delta )},r_{(\delta )}]\) associated with this excursion. Notice that \(\alpha ^n_{r_n}\) decreases to \(r_{(\delta )}\) as \(n\rightarrow \infty \). Our claim (8) then reduces to verifying that

$$\begin{aligned} \sup _{t\ge 0} \left| H_{\alpha ^n_{(d_n+\lfloor v_n t\rfloor )\wedge r_n}} - H_{(d_{(\delta )}+t)\wedge r_{(\delta )}}\right| \mathrel {\mathop {\longrightarrow }\limits _{n\rightarrow \infty }^{}} 0,\qquad \mathbf {P}\hbox { a.s.} \end{aligned}$$

which follows from Proposition 9 and (7), recalling that both these convergences hold a.s. on the sequence of values of n that we consider.

6 The coding function of the subordinate tree as a time-changed height process

In this section, we prove Theorem 2. We consider the Brownian snake under its excursion measure \({\mathbb N}_0\), and we recall the notation \(\overline{W}_s=\max \{W_s(t):0\le t\le \zeta _s\}\), and \(\overline{W}_a=\overline{W}_s\) if \(a=p_\zeta (s)\). As in Theorem 1, we write \(\widetilde{\mathcal T}_\zeta \) for the subordinate tree of the Brownian tree \({\mathcal T}_\zeta \) with respect to the function \(a\mapsto \overline{W}_a\).

Proof of Theorem 2

We set \(W^*=\max \{\overline{W}_s:0\le s\le \sigma \}\). Fix \(n\ge 1\) and argue under the probability measure \({\mathbb N}_0(\cdot \mid W^*\ge 2^{-n})\). We define a discrete plane tree \({\mathcal T}^{(n)}\) which is constructed from \((\overline{W}_s)_{0\le s\le \sigma }\) in exactly the same way as \({\mathcal T}^n_0\) was constructed from the excursion of H during \([\alpha ^n_0,\beta ^n_0]\) in the proof of Proposition 8 (see Fig. 2). In this construction, the children of the root correspond to the excursions of the process \(s\mapsto \overline{W}_s\) above level \(2^{-n}\) that hit \(2\times 2^{-n}\), or equivalently to the excursions of W outside \((-\infty ,2^{-n})\) that hit \(2\times 2^{-n}\) (recall the definition of these excursions from the proof of Theorem 1). The point is that, if \((u,v)\) is the time interval associated with an excursion of W outside \((-\infty ,2^{-n})\), the process \(s\mapsto \overline{W}_s\) remains strictly above \(2^{-n}\) on the whole interval \((u,v)\). Similarly as in the previous section, the children of the root are ordered according to the chronological order of the Brownian snake.

By the special Markov property, in the form given in Corollary 21 in the appendix below, conditionally on the number of excursions of W outside \((-\infty ,2^{-n})\) that hit \(2\times 2^{-n}\), these excursions listed in chronological order are independent and distributed according to \({\mathbb N}_0(\cdot \mid W^*\ge 2^{-n})\), modulo the obvious translation by \(-2^{-n}\). This shows that the random plane tree \({\mathcal T}^{(n)}\) satisfies the branching property at the first generation and therefore \({\mathcal T}^{(n)}\) is a Galton–Watson tree.

Let \(\mu _n\) be the offspring distribution found in Proposition 8 in the case where \(\psi (r)=\psi _0(r)\). We claim that \(\mu _n\) is also the offspring distribution of \({\mathcal T}^{(n)}\). To see this, observe that \(\mu _n\) is, by definition, the distribution of the number of points of a \(\psi _0\)-Lévy tree at height \(2^{-n}\) that have descendants at height \(2\times 2^{-n}\) (conditionally on the event that the height of the tree is at least \(2^{-n}\)). Thanks to Theorem 1 and to the fact that \(\widetilde{\mathcal T}_\zeta \) is the tree coded by the function \(s\mapsto \overline{W}_s\), we know that this is the same as the conditional distribution of the number of excursions of \(s\mapsto \overline{W}_s\) above level \(2^{-n}\) that hit \(2\times 2^{-n}\), under \({\mathbb N}_0(\cdot \mid W^*\ge 2^{-n})\).

Let

$$\begin{aligned} 0\le \xi ^n_0<\xi ^n_1<\xi ^n_2<\cdots <\xi ^n_{ K_n-1} \end{aligned}$$

be the ordered sequence consisting of the initial times of all excursions of \(s\rightarrow \overline{W}_s\) above level \(j\,2^{-n}\) that hit level \((j+1)2^{-n}\), for all values of the integer \(j\ge 0\). Note that each such excursion corresponds to a vertex of the tree \({\mathcal T}^{(n)}\), and so \( K_n\) is just the total progeny of \({\mathcal T}^{(n)}\). By convention, we also define \(\xi ^n_{K_n}=\sigma \). Set

$$\begin{aligned} \widetilde{H}^n_k= 2^n\,\overline{W}_{\xi ^n_k},\quad \hbox {if }0\le k< K_n, \end{aligned}$$

and \(\widetilde{H}^n_k=0\) if \(k\ge K_n\). Then \((\widetilde{H}^n_k,0\le k< K_n)\) is the height sequence of \({\mathcal T}^{(n)}\) (note that the lexicographical ordering on vertices of \({\mathcal T}^{(n)}\) corresponds to the chronological order on the associated excursion initial times). Hence \((\widetilde{H}^n_k)_{k\ge 0}\) has the same distribution as the sequence \((\bar{H}^n_k)_{k\ge 0}\), which was defined at the end of the preceding section from the height process H under \(\mathbf{N}(\cdot \mid \max H \ge 2^{-n})\).

But in fact more is true: the whole collection of the discrete sequences \((\widetilde{H}^n_k)_{k\ge 0}\) for all \(n\ge 1\) has the same distribution under \({\mathbb N}_0\) as the similar collection of sequences \((\bar{H}^n_k)_{k\ge 0}\) constructed from the height process H under \(\mathbf{N}\) (the reason is that, in both constructions, the tree at step n can be obtained from the tree at step \(n+1\) by the deterministic operation consisting in keeping only those vertices at even generation that have at least one child, and viewing that set of vertices as a plane tree in the obvious manner). The convergence (8) now allows us to set, for every \(t\ge 0\),

$$\begin{aligned} \widetilde{H}_t=\lim _{n\rightarrow \infty } 2^{-n} \widetilde{H}^n_{\lfloor v_n t\rfloor }, \end{aligned}$$

and the process \((\widetilde{H}_t)_{t\ge 0}\) is distributed as the height process of the Lévy tree with branching mechanism \(\psi _0(r)=\sqrt{8/3}\,r^{3/2}\). The limit in the preceding display holds uniformly in t, \({\mathbb N}_0\) a.e., provided we argue along the subsequence of values of n introduced at the end of the preceding section. We set \(\widetilde{\chi }=\sup \{s\ge 0:\widetilde{H}_s>0\}\).
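
To visualize this limit on simulated data, one can turn a discrete height sequence (such as the one produced by the skeleton sketched in the previous section) into the rescaled step function \(t\mapsto 2^{-n}\widetilde{H}^n_{\lfloor v_nt\rfloor }\). The following lines (Python; the constant \(v_n\) is taken as an input rather than computed, and the uniform grid is an assumption of the example) do just that.

```python
import numpy as np

def rescaled_step_path(heights, n, v_n, T, dt):
    """Evaluate t -> 2**-n * H^n_{floor(v_n * t)} on a grid of [0, T] with mesh dt,
    the step function whose uniform limit defines the process H-tilde above.
    `heights` is the discrete height sequence, extended by 0 beyond its length."""
    heights = np.asarray(heights, dtype=float)
    t = np.arange(0.0, T + dt, dt)
    k = np.floor(v_n * t).astype(int)
    values = np.where(k < len(heights), heights[np.minimum(k, len(heights) - 1)], 0.0)
    return t, (2.0 ** (-n)) * values
```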

We observe that the distribution of \((\widetilde{H}, (\widetilde{H}^n_k)_{n\ge 1,k\ge 0})\) under \({\mathbb N}_0\) is the same as that of \((H, (\bar{H}^n_k)_{n\ge 1,k\ge 0})\) under \(\mathbf {N}\), and so we must have, for every \(n\ge 1\) and \(k\ge 0\),

$$\begin{aligned} \widetilde{H}^n_k=2^n\,\widetilde{H}_{\widetilde{\alpha }^n_k}, \end{aligned}$$

where \(\widetilde{\alpha }^n_0<\widetilde{\alpha }^n_1<\cdots <\widetilde{\alpha }^n_{\widetilde{K}_n-1}\) are the initial times of the excursions of \(\widetilde{H}_s\) above level \(j\,2^{-n}\) that hit level \((j+1)2^{-n}\), for all values of the integer \(j\ge 0\), and \(\widetilde{\alpha }^n_k=\widetilde{\chi }\) if \(k\ge \widetilde{K}_n\). Notice that \(\widetilde{K}_n=K_n\) because the height sequence of the tree \({\mathcal T}^{(n)}\) is \((\widetilde{H}^n_k)_{0\le k\le K_n-1}\). Also, if \(n<m\) and \(k\in \{0,1,\ldots ,K_n-1\}\), \(k'\in \{0,1,\ldots ,K_m-1\}\), the property \(\widetilde{\alpha }^n_k= \widetilde{\alpha }^m_{k'}\) holds if and only if \(\xi ^n_k=\xi ^m_{k'}\): Indeed these properties hold if and only if the vertex with index k in the (lexicographical) ordering of \({\mathcal T}^{(n)}\) coincides with the vertex with index \(k'\) in the ordering of \({\mathcal T}^{(m)}\), modulo the identification of the vertex set of \({\mathcal T}^{(n)}\) as a subset of the vertex set of \({\mathcal T}^{(m)}\), in the way explained above.

We need to verify that \(\overline{W}_s\) can be written as a time change of \(\widetilde{H}\). As a first step, we notice that, for every \(0\le k< K_n\),

$$\begin{aligned} 2^n \overline{W}_{\xi ^n_k}=\widetilde{H}^n_k= 2^n \widetilde{H}_{\widetilde{\alpha }^n_k}, \end{aligned}$$

and so \(\overline{W}_{\xi ^n_k}=\widetilde{H}_{\widetilde{\alpha }^n_k}\). This suggests that the process \(\Gamma \) in the statement of the theorem should be such that \(\Gamma _{\xi ^n_k}=\widetilde{\alpha }^n_k\), for every \(k\in \{0,1,\ldots ,K_n-1\}\) and every n.

At this point, we observe that

$$\begin{aligned} \max _{1\le k\le K_n} (\widetilde{\alpha }^n_k-\widetilde{\alpha }^n_{k-1}) \mathrel {\mathop {\longrightarrow }\limits _{n\rightarrow \infty }^{}} 0,\qquad {\mathbb N}_0\ \hbox {a.e.} \end{aligned}$$
(9)

Indeed, if this property fails, a compactness argument gives two times \(u,v\in [0,\widetilde{\chi }]\) with \(u<v\) such that \(t\mapsto \widetilde{H}_t\) is monotone nonincreasing on \([u,v]\). To see that this cannot occur, we may replace \(\widetilde{H}\) by the process H constructed from a Lévy process excursion X as explained in Sect. 3. We then note that jumps of X are dense in \([0,\chi ]\), and the strong Markov property shows that, for any jump time s of X, for any \(\varepsilon >0\), we can find \(s',s''\in [s,s+\varepsilon ]\), with \(s''>s'\), such that \(H_{s''}>H_{s'}\) (use formula (20) in [7], or see the comments at the end of Sect. 3).

Let \(s\in [0,\sigma )\) and, for every integer \(n\ge 1\), let \(k_n(s)\in \{0,1,\ldots ,K_n-1\}\) be the unique integer such that \(\xi ^n_{k_n(s)}\le s<\xi ^n_{k_n(s)+1}\). We note that the sequence \(\xi ^n_{k_n(s)}\) is monotone nondecreasing (this is obvious since \((\xi ^n_0,\ldots ,\xi ^n_{K_n})\) is a subset of \((\xi ^{n+1}_0,\ldots ,\xi ^{n+1}_{K_{n+1}})\)). It follows that the sequence \(\widetilde{\alpha }^n_{k_n(s)}\) is also monotone nondecreasing: Indeed, if \(n<m\) and \(k\in \{0,1,\ldots ,K_n-1\}\), \(k'\in \{0,1,\ldots ,K_m-1\}\) are such that \(\xi ^n_k\le \xi ^m_{k'}\), we have automatically \(\widetilde{\alpha }^n_k\le \widetilde{\alpha }^m_{k'}\) since, writing \(\xi ^n_k=\xi ^m_{k^*}\), the fact that \(k^*\le k'\) implies that \(\widetilde{\alpha }^n_k=\widetilde{\alpha }^m_{k^*}\le \widetilde{\alpha }^m_{k'}\).

We can now set

$$\begin{aligned} \Gamma _s=\lim _{n\rightarrow \infty } \widetilde{\alpha }^n_{k_n(s)}. \end{aligned}$$
(10)

Note that this limit will exist simultaneously for all \(s\in [0,\sigma )\) outside a set of \({\mathbb N}_0\)-measure 0. We also take \(\Gamma _s=\widetilde{\chi }\) for all \(s\ge \sigma \). Clearly \(s\mapsto \Gamma _s\) is nondecreasing and, by construction, the property \(\Gamma _{\xi ^n_k}=\widetilde{\alpha }^n_k\) holds for every \(k\in \{0,1,\ldots ,K_n\}\) and every n, and we have \(\overline{W}_s=\widetilde{H}_{\Gamma _s}\) when s is of the form \(\xi ^n_k\). We also note that \(s\mapsto \Gamma _s\) is continuous as a consequence of the property (9). To check right-continuity, observe that, if \(s\in [0,\sigma )\) is fixed, and \(s'\) is such that \(\xi ^n_{k_n(s)}<s'<\xi ^n_{k_n(s)+1}\), then, for every \(m\ge n\), the property \(\xi ^m_{k_m(s')}\le s'<\xi ^n_{k_n(s)+1}\) forces \(\widetilde{\alpha }^m_{k_m(s')}\le \widetilde{\alpha }^n_{k_n(s)+1}\), hence (letting m tend to \(\infty \)) \(\Gamma _{s'}\le \widetilde{\alpha }^n_{k_n(s)+1}\); right-continuity then follows from (9) and the definition (10) of \(\Gamma _s\). Left-continuity is derived by a similar argument.

For \(s\in [0,\sigma )\), set \(s'=\lim \uparrow \xi ^n_{k_n(s)}\) and \(s''=\lim \downarrow \xi ^n_{k_n(s)+1}\). Note that \(s'\le s\le s''\). On one hand, by passing to the limit \(n\rightarrow \infty \) in the equality

$$\begin{aligned} \overline{W}_{\xi ^n_{k_n(s)}}=\widetilde{H}_{\widetilde{\alpha }^n_{k_n(s)}}, \end{aligned}$$

we obtain that \(\overline{W}_{s'}=\widetilde{H}_{\Gamma _s}\). On the other hand, \(\overline{W}\) must be constant on the interval \([s',s'']\). To see this, we first observe that \(\overline{W}\) must be nonincreasing on \([s',s'']\) (otherwise there would be some n and some \(k\in \{0,1,\ldots ,K_n-1\}\) such that \(\xi ^n_{k_n(s)}<\xi ^n_k <\xi ^n_{k_n(s)+1}\), which is absurd). So we need to verify that there is no nontrivial interval \([s_1,s_2]\) such that \(s\mapsto \overline{W}_s\) is both nonincreasing and nonconstant on \([s_1,s_2]\), and, to prove this, we may replace nonincreasing by nondecreasing thanks to the invariance of \({\mathbb N}_0\) under time-reversal. Argue by contradiction, and suppose that \(s_1<s_2\) are such that the event where \(0<s_1<s_2<\sigma \) and \(s\mapsto \overline{W}_s\) is both nondecreasing and nonconstant on \([s_1,s_2]\) has positive \({\mathbb N}_0\)-measure. We can then find a stopping time T such that, with positive \({\mathbb N}_0\)-measure on the latter event, we have \(s_1<T<s_2\), \(\overline{W}_T=\widehat{W}_T\) and \(\max \{W_T(s):0\le s \le \zeta _T\}\) is attained only at \(\zeta _T\) (take \(T=\inf \{s>s_1: \widehat{W}_s\ge \overline{W}_{s_1}+\delta \}\), with \(\delta >0\) small enough). Using the strong Markov property of the Brownian snake, we then find \(r\in (T,s_2)\) such that \(W_r\) is the restriction of \(W_T\) to \([0,\zeta _T-\varepsilon ]\), for some \(\varepsilon >0\), which implies \(\overline{W}_r<\overline{W}_T\) and contradicts the fact that \(s\mapsto \overline{W}_s\) is nondecreasing on \([s_1,s_2]\).

Finally, since \(\overline{W}\) is constant on \([s',s'']\), we have \(\overline{W}_s=\overline{W}_{s'}=\widetilde{H}_{\Gamma _s}\), which was the desired result. This completes the proof of Theorem 2. \(\square \)

7 Applications to the Brownian map

In this section, we discuss applications of the previous results to the Brownian map. Analogously to [21], we consider a version of the Brownian map with “randomized volume”, which may be constructed under the Brownian snake excursion measure \({\mathbb N}_0\) as follows. Recall that \(({\mathcal T}_\zeta ,d_\zeta )\) stands for the tree coded by \((\zeta _s)_{0\le s\le \sigma }\) and \(p_\zeta \) is the canonical projection from \([0,\sigma ]\) onto \({\mathcal T}_\zeta \). For \(a,b\in {\mathcal T}_\zeta \), the “lexicographical interval” \([a,b]\) stands for the image under \(p_\zeta \) of the smallest interval \([s,t]\) (\(s,t\in [0,\sigma ]\)) such that \(p_\zeta (s)=a\) and \(p_\zeta (t)=b\) (here we make the convention that if \(s>t\) the interval \([s,t]\) is equal to \([s,\sigma ]\cup [0,t]\)).

For every \(a\in {\mathcal T}_\zeta \), we set \(Z_a=\widehat{W}_s\), where s is such that \(p_\zeta (s)=a\). In particular \(Z_{\rho _\zeta }=0\). The random mapping \({\mathcal T}_\zeta \ni a \mapsto Z_a\) is interpreted as Brownian motion indexed by the “Brownian tree” \({\mathcal T}_\zeta \).

We then define a mapping \(D^\circ :{\mathcal T}_\zeta \times {\mathcal T}_\zeta \longrightarrow {\mathbb R}_+\) by setting

$$\begin{aligned} D^\circ (a,b)=Z_a+Z_b-2\max \left\{ \min _{c\in [a,b]} Z_c, \min _{c\in [b,a]} Z_c\right\} . \end{aligned}$$

For \(a,b\in {\mathcal T}_\zeta \), we set \(a\approx b\) if and only if \(D^\circ (a,b)=0\), or equivalently

$$\begin{aligned} Z_a=Z_b=\max \left\{ \min _{c\in [a,b]} Z_c, \min _{c\in [b,a]} Z_c\right\} . \end{aligned}$$

One can verify that if \(a\approx a'\) then \(D^\circ (a,b)=D^\circ (a',b)\) for any \(b\in {\mathcal T}_\zeta \) (the point is that, if \(a\approx a'\) with \(a\not =a'\), then necessarily a and \(a'\) are leaves of \({\mathcal T}_\zeta \), and the reals \(t,t'\in [0,\sigma )\) such that \(p_\zeta (t)=a\) and \(p_\zeta (t')=a'\) are unique, which implies that \([a,b]\subset [a,a']\cup [a',b]\) and \([b,a]\subset [b,a']\cup [a',a]\)).

We also set

$$\begin{aligned} D^*(a,b)=\inf _{a_0=a,a_1,\ldots ,a_p=b} \sum _{i=1}^p D^\circ (a_{i-1},a_i), \end{aligned}$$

where the infimum is over all choices of the integer \(p\ge 1\) and of the points \(a_1,\ldots ,a_{p-1}\) of \({\mathcal T}_\zeta \). If \(a\approx a'\) then \(D^*(a,b)=D^*(a',b)\) for any \(b\in {\mathcal T}_\zeta \). Furthermore one can also prove that \(D^*(a,b)=0\) if and only if \(a\approx b\). Since \(D^*\) satisfies the triangle inequality, it follows that \(\approx \) is an equivalence relation on \({\mathcal T}_\zeta \). The Brownian map (with randomized volume) is the quotient space \(\mathbf{m}:={\mathcal T}_\zeta {/}\approx \), which is equipped with the distance induced by the function \(D^*\) (with a slight abuse of notation, we still denote the induced distance by \(D^*\)). We write \(\Pi \) for the canonical projection from \({\mathcal T}_\zeta \) onto \(\mathbf{m}\), and \(\mathbf{p}=\Pi \circ p_\zeta \). We also write \(D^\circ (x,y)=D^\circ (a,b)\) if \(x=\Pi (a)\) and \(y=\Pi (b)\).
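
Once the coding functions are sampled at finitely many times, the two displays above become elementary computations. The following sketch (Python) is an illustration under simplifying assumptions that are not part of the construction: points are represented by sample times of the label process \(Z=\widehat{W}\), the lexicographical intervals are replaced by the two cyclic arcs between the chosen times, and the infimum defining \(D^*\) is relaxed to chains passing only through the sampled points (a Floyd–Warshall shortest-path pass).

```python
import numpy as np
from itertools import product

def D_circ(Z, i, j):
    """D°(i, j) for two sample times i, j of the label process Z = (hat W_s),
    with the convention [s, t] = [s, sigma] u [0, t] when t < s."""
    Z = np.asarray(Z, dtype=float)
    if i == j:
        return 0.0
    lo, hi = min(i, j), max(i, j)
    inside = Z[lo:hi + 1].min()                    # min of Z over [lo, hi]
    outside = min(Z[hi:].min(), Z[:lo + 1].min())  # min over the complementary arc
    return Z[i] + Z[j] - 2.0 * max(inside, outside)

def D_star_upper(Z, idx):
    """Upper bound for D* on the sample times listed in idx, obtained by
    restricting the chains a_0, ..., a_p to these times (Floyd-Warshall)."""
    m = len(idx)
    D = np.array([[D_circ(Z, idx[a], idx[b]) for b in range(m)] for a in range(m)])
    for k, a, b in product(range(m), repeat=3):
        D[a, b] = min(D[a, b], D[a, k] + D[k, b])
    return D
```

Since the true infimum runs over all points of \({\mathcal T}_\zeta \), this restricted relaxation can only overestimate \(D^*\); it is nevertheless a convenient way to explore the metric numerically.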

In the usual construction of the Brownian map, one deals with the conditioned measure \({\mathbb N}_0(\cdot \mid \sigma =1)\) instead of \({\mathbb N}_0\), but otherwise the construction is exactly the same and we refer to [16, 17] for more details.

The Brownian map \(\mathbf{m}\) comes with two distinguished points. The first one \(\rho =\mathbf{p}(0)=\Pi (\rho _\zeta )\) corresponds to the root \(\rho _\zeta \) of \({\mathcal T}_\zeta \). The second distinguished point is \(\rho _*=\Pi (a_*)\), where \(a_*\) is the (unique) point of \({\mathcal T}_\zeta \) at which Z attains its minimum:

$$\begin{aligned} Z_{a_*}=\min _{a\in {\mathcal T}_\zeta } Z_a. \end{aligned}$$

We will write \(Z_*=Z_{a_*}\) to simplify notation. The reason for considering \(\rho _*\) comes from the fact that distances from \(\rho _*\) have a simple expression. For any \(a\in {\mathcal T}_\zeta \),

$$\begin{aligned} D^*(\rho _*,\Pi (a))= Z_a - Z_*. \end{aligned}$$
(11)

The following “cactus bound” [16, Proposition 3.1] also plays an important role. Let \(a,b\in {\mathcal T}_\zeta \), and let \(\gamma :[0,1]\rightarrow \mathbf{m}\) be a continuous path in \(\mathbf{m}\) such that \(\gamma (0)=\Pi (a)\) and \(\gamma (1)=\Pi (b)\). Then,

$$\begin{aligned} \min _{0\le t\le 1} D^*(\rho _*,\gamma (t)) \le \min _{c\in \llbracket a,b \rrbracket } D^*(\rho _*,\Pi (c))= \min _{c\in \llbracket a,b \rrbracket } (Z_c-Z_*), \end{aligned}$$
(12)

where we recall that \(\llbracket a,b \rrbracket \) is the geodesic segment between a and b in \({\mathcal T}_\zeta \), not to be confused with the interval \([a,b]\). In other words, any continuous path from \(\Pi (a)\) to \(\Pi (b)\) must come at least as close to \(\rho _*\) as the (image under \(\Pi \) of the) geodesic segment from a to b in \({\mathcal T}_\zeta \).

We now introduce the metric net, in the terminology of [21]. For every \(r\ge 0\), we consider the ball B(r) defined by

$$\begin{aligned} B(r)=\{x\in \mathbf{m}: D^*(\rho _*,x)\le r\}. \end{aligned}$$

For \(0\le r<D^*(\rho _*,\rho )=-Z_*\), we define the hull \(B^\bullet (r)\) as the complement of the connected component of \(\mathbf{m}\backslash B(r)\) that contains \(\rho \). Informally, \(B^\bullet (r)\) is obtained from B(r) by “filling in” the holes of B(r) except for the one containing \(\rho \). Write \(\partial B^\bullet (r)\) for the topological boundary of \(B^\bullet (r)\). We define the metric net \(\mathcal {M}\) as the closure in \(\mathbf{m}\) of the union

$$\begin{aligned} \bigcup _{0\le r<-Z_*} \partial B^\bullet (r). \end{aligned}$$

Our goal is to investigate the structure of \(\mathcal {M}\).

If \(a\in {\mathcal T}_\zeta \) and \(s\in [0,\sigma ]\) are such that \(p_\zeta (s)=a\), we write \(W_a=W_s\) (as previously) and we use the notation

$$\begin{aligned} \underline{W}_a=\underline{W}_s =\min \{W_s(t):0\le t\le \zeta _s\}=\min \{Z_b:b\in \llbracket \rho _\zeta ,a\rrbracket \}, \end{aligned}$$

where the last equality holds because, as already mentioned, the quantities \(W_s(t)\) for \(0\le t\le \zeta _s\) correspond to the values of \(Z_b=\widehat{W}_b\) along the geodesic segment \(\llbracket \rho _\zeta ,a\rrbracket \).

We then introduce the closed subset of \({\mathcal T}_\zeta \) defined by

$$\begin{aligned} \Theta =\{a\in {\mathcal T}_\zeta : \underline{W}_a=\widehat{W}_a\}. \end{aligned}$$

We note that points of \(\Theta \) have multiplicity either 1 or 2 in \({\mathcal T}_\zeta \). Indeed, there are only countably many points of multiplicity 3, and it is not hard to see that these points do not belong to \(\Theta \).
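
Numerically, the set \(\Theta \) can be read off from the pair of coding functions \((\zeta ,\widehat{W})\): the ancestors of \(p_\zeta (s)\) are the points \(p_\zeta (r)\) with \(r\le s\) and \(\zeta _r=\min _{[r,s]}\zeta \), so \(\underline{W}_s\) is the minimum of \(\widehat{W}\) over these times. The sketch below (Python; the uniform grid, the tolerance and the function names are assumptions of the example) maintains this set of ancestor times in a monotone stack and flags the sample times whose image lies in \(\Theta \).

```python
import numpy as np

def running_tree_minimum(zeta, Z):
    """For each sample time s, compute the minimum of the labels Z along the
    ancestral line (in the tree coded by zeta) of the point visited at time s.
    Ancestor times of s are the indices r <= s with zeta[r] = min(zeta[r:s+1]);
    they are kept in a monotone stack as s increases."""
    zeta, Z = np.asarray(zeta, dtype=float), np.asarray(Z, dtype=float)
    stack, prefmin = [], []          # ancestor times, and running min of Z on them
    W_under = np.empty(len(zeta))
    for s in range(len(zeta)):
        while stack and zeta[stack[-1]] > zeta[s]:
            stack.pop()
            prefmin.pop()
        m = Z[s] if not prefmin else min(prefmin[-1], Z[s])
        stack.append(s)
        prefmin.append(m)
        W_under[s] = m               # discrete analogue of underline{W}_s
    return W_under

def theta_times(zeta, Z, tol=1e-9):
    """Sample times s whose image p_zeta(s) lies (approximately) in Theta."""
    W_under = running_tree_minimum(zeta, Z)
    return np.where(np.abs(np.asarray(Z, dtype=float) - W_under) <= tol)[0]
```

In the continuum \(\underline{W}_s<\widehat{W}_s\) for Lebesgue-almost every s, so on simulated data \(\Theta \) only shows up through the tolerance; the sketch is meant to make the definition concrete rather than to produce an accurate picture of \(\mathcal {M}\).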

Proposition 10

Let \(x\in \mathbf{m}\). Then \(x\in \mathcal {M}\) if and only if \(x=\Pi (a)\) for some \(a\in \Theta \).

Proof

Fix \(r\in [0,-Z_*)\) and \(x\in \mathbf{m}\). We claim that \(x\in \partial B^\bullet (r)\) if and only if we can write \(x=\Pi (a)\) with both \(\widehat{W}_a=Z_*+r\), and

$$\begin{aligned} W_a(t)>Z_*+r,\quad \forall t\in [0,\zeta _{(W_a)}). \end{aligned}$$

Indeed, if these conditions hold, we have \(D^*(\rho _*,x)=r\) by (11), and the image under \(\Pi \) of the geodesic segment from a to \(\rho _\zeta \) provides a path from x to \(\rho \) that stays outside B(r) except at the initial point x. Since \(x\in B(r)\subset B^\bullet (r)\), while every other point of this path lies in the connected component of \(\mathbf{m}\backslash B(r)\) containing \(\rho \) and hence outside \(B^\bullet (r)\), it follows that \(\Pi (a)=x\) belongs to \(\partial B^\bullet (r)\).

Conversely, if \(x\in \partial B^\bullet (r)\), then it is obvious that \(D^*(\rho _*,x)=r\) giving \(\widehat{W}_a=Z_*+r\) for any a such that \(\Pi (a)=x\). Write \(x=\lim x_n\), where \(x_n \notin B^\bullet (r)\), and, for every n, let \(a_n\in {\mathcal T}_\zeta \) such that \(\Pi (a_n)=x_n\). The fact that \(x_n \notin B^\bullet (r)\) implies that, for every c belonging to the geodesic segment between \(a_n\) and \(\rho _\zeta \), we have \(Z_c> Z_*+r\) (otherwise the cactus bound (12) would imply that any path between \(x_n\) and \(\rho \) visits B(r), which is a contradiction). By compactness, we may assume that \(a_n\) converges to a as \(n\rightarrow \infty \), and we have \(\Pi (a)=x\). We then get that the property \(Z_c> Z_*+r\) holds for c belonging to the geodesic segment between a and \(\rho _\zeta \), except possibly for \(c=a\). This completes the proof of our claim.

It follows from the claim that the property \(x=\Pi (a)\) for some \(a\in {\mathcal T}_\zeta \) such that \(\widehat{W}_a = \underline{W}_a\) holds for every \(x\in \mathcal {M}\) (this property holds if \(x\in \partial B^\bullet (r)\) for some \(0\le r<-Z_*\) and is preserved under passage to the limit, using the compactness of \({\mathcal T}_\zeta \)). Conversely, suppose that this property holds, with \(a\not =\rho _\zeta \) to discard a trivial case. If the path \(W_a\) hits its minimum only at its terminal point, the first part of the proof shows that \(x\in \partial B^\bullet (r)\) for \(r= \widehat{W}_a-Z_*\). If the path \(W_a\) hits its minimum both at its terminal time and at another time, then Lemma 16 in [1] shows that \(a=\lim a_n\), where \(\widehat{W}_{a_n}<\widehat{W}_a\) for every n. Let \(b_n\) be the first point (i.e. the closest point to \(\rho _\zeta \)) on the ancestral line of \(a_n\) such that \(\widehat{W}_{b_n}= \widehat{W}_{a_n}\). Then, by the first part of the proof, \(\Pi (b_n)\) belongs to \(\partial B^\bullet (r_n)\), with \(r_n= \widehat{W}_{a_n}-Z_*\). Noting that \(b_n\) must lie on the geodesic segment between a and \(a_n\) in the tree \({\mathcal T}_\zeta \), we see that we have also \(a=\lim b_n\), so that we get that \(x=\Pi (a)=\lim \Pi (b_n)\) belongs to \(\mathcal {M}\). \(\square \)

Remark

The preceding arguments are closely related to [5, Section 3] (see in particular formula (16) in [5]), which deals with the slightly different setting of the Brownian plane.

If \(x\in \mathcal {M}\) and \(a\in \Theta \) is such that \(x=\Pi (a)\), the image under \(\Pi \) of the geodesic segment from a to \(\rho _\zeta \) provides a path in \(\mathbf{m}\) that stays in the complement of B(r) for every \(0\le r< D^*(\rho _*,x)=Z_a-Z_*\) (indeed the values of \(Z_b\) for b belonging to this segment are the numbers \(W_a(t)\ge \underline{W}_a=\widehat{W}_a=Z_a\)). It follows that all points belonging to a geodesic from x to \(\rho _*\) also belong to \(\mathcal {M}\).

We note that we can define an “intrinsic” metric on \(\mathcal {M}\) by setting, for every \(x,y\in \mathcal {M}\),

$$\begin{aligned} \Delta ^*(x,y)\,=\,\mathrel {\mathop {\inf _{x=x_0,x_1,\ldots ,x_k=y}}\limits _{x_1,\ldots ,x_{k-1}\in \mathcal {M}}^{}} \sum _{i=1}^k D^\circ (x_{i-1},x_i). \end{aligned}$$
(13)

It is obvious that \(\Delta ^*(x,y)\ge D^*(x,y)\). In particular, \(\Delta ^*(x,y)=0\) implies \(x=y\), and it follows that \(\Delta ^*\) is a metric on \(\mathcal {M}\). The quantity \(\Delta ^*(x,y)\) corresponds to the infimum of the lengths (computed with respect to \(D^*\)) of paths from x to y that are obtained by the concatenation of pieces of geodesics from points of \(\mathcal {M}\) to \(\rho _*\) (we already noticed that these geodesics stay in \(\mathcal {M}\)). We have clearly \(\Delta ^*(\rho _*,x)=D^*(\rho _*,x)=D^\circ (\rho _*,x)\) for every \(x\in \mathcal {M}\), and \(\Delta ^*\)-geodesics from x to \(\rho _*\) coincide with \(D^*\)-geodesics from x to \(\rho _*\) (if \(x,y\in \mathcal {M}\) and x, y belong to the same \(D^*\)-geodesic to \(\rho _*\), the results of [16] imply that \(D^*(x,y)=D^\circ (x,y)=\Delta ^*(x,y)\)).

Remark

The topology induced by \(\Delta ^*\) on \(\mathcal {M}\) coincides with the topology induced by \(D^*\). Since \(\Delta ^*\ge D^*\) and \((\mathcal {M},D^*)\) is compact, it is enough to prove that \((\mathcal {M},\Delta ^*)\) is also compact. To this end, if \((x_n)_{n\ge 1}\) is a sequence in \(\mathcal {M}\), we may write \(x_n=\Pi (a_n)\), with \(a_n\in \Theta \), and then extract a subsequence \((a_{n_k})\) that converges to \(a_\infty \) in \({\mathcal T}_\zeta \). We have \(a_\infty \in \Theta \) because \(\Theta \) is closed. Furthermore, the fact that \(a_{n_k}\) converges to \(a_\infty \) implies that \(D^\circ (a_{n_k},a_\infty )\) tends to 0, and therefore \(\Delta ^*(x_{n_k},\Pi (a_\infty ))\) also tends to 0, showing that \((x_n)_{n\ge 1}\) has a convergent subsequence in \((\mathcal {M},\Delta ^*)\).

The preceding proposition shows that the metric net \(\mathcal {M}\) has close connections with the subset \(\Theta \) of \({\mathcal T}_\zeta \). The latter set is itself related to the subordinate tree of \({\mathcal T}_\zeta \) with respect to the function \(a\mapsto -\underline{W}_a\). By Theorem 2 (and an obvious symmetry argument) we can construct a process \((H_t)_{0\le t\le \chi }\) distributed as the height process of the Lévy tree with branching mechanism \(\psi _0(r)=\sqrt{8/3}\,r^{3/2}\), and a continuous random process \((\Gamma _s)_{s\ge 0}\) with nondecreasing sample paths such that \(\Gamma _0=0\), \(\Gamma _\sigma =\chi \), and for every \(s\in [0,\sigma ]\),

$$\begin{aligned} -\underline{W}_s= H_{\Gamma _s}. \end{aligned}$$
(14)

Formula (14) obviously implies that, if \(\Gamma \) is constant on some interval, then \(\underline{W}\) is constant on the same interval. The converse is also true since H is not constant on any nontrivial interval.

We define a random equivalence relation \(\sim \) on \([0,\chi ]\), by requiring that the graph of \(\sim \) is the smallest closed symmetric subset of \([0,\chi ]^2\) that contains all pairs \((s,t)\) with \(s\le t\), \(H_s=H_t\), and \(H_r>H_s\) for all \(r\in (s,t)\). We leave it to the reader to check that this set is indeed the graph of an equivalence relation (use the comments at the end of Sect. 3). In addition to the pairs \((s,t)\) satisfying the previous relation, the graph of \(\sim \) contains a countable collection of pairs \((u,v)\), each of them associated with a point of infinite multiplicity a of the tree \({\mathcal T}_H\) by the relations \(u=\min p_H^{-1}(a)\) and \(v=\max p_H^{-1}(a)\).
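
On a discretized path, the generating pairs of \(\sim \) are exactly the pairs \((s,t)\) in which t is the first time after s at which H returns to the level \(H_s\), so the equivalence they generate can be built with a monotone stack and a union–find structure. This is only a discrete stand-in (Python; it ignores the closure operation and the additional pairs attached to points of infinite multiplicity, and the tolerance is an artefact of the grid):

```python
def looptree_classes(H, tol=1e-9):
    """Union-find over sample times of H, where s is identified with the first
    later time t at which H returns to the level H[s] (H staying strictly
    above that level in between): a discrete version of [0, chi]/~ ."""
    n = len(H)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    stack = []                              # indices not yet matched by a return
    for t in range(n):
        while stack and H[stack[-1]] >= H[t] - tol:
            s = stack.pop()                 # t is the first time H drops back to H[s]
            if abs(H[s] - H[t]) <= tol:
                parent[find(s)] = find(t)   # the pair (s, t) generates s ~ t
        stack.append(t)
    return [find(i) for i in range(n)]
```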

Fig. 3
figure 3

A simulation of a looptree (simulation by Igor Kortchemski). For technical reasons, some of the trees branching off a loop are pictured inside this loop, but, from the point of view of the present work, it is better to think of these trees as growing outside the loop, so that the space inside the loop may be “filled in” appropriately

We denote the quotient space \([0,\chi ]{/}\sim \) by \({\mathcal L}\). Then \({\mathcal L}\) can be identified with the “looptree” associated with H. Roughly speaking (see [4] for more details), the looptree is obtained by replacing each point of infinite multiplicity a of the tree \({\mathcal T}_H\) by a loop of “length” equal to the weight of a, so that the subtrees, which are the connected components of the complement of a in the tree, branch along this loop in an order determined by the coding function H (see Fig. 3). Note that the looptree associated with H is equipped in [4] with a particular metric. Here we avoid introducing this metric on \({\mathcal L}\), because, for our applications, it is more relevant to introduce the pseudo-metric described below.

Let us introduce the right-continuous inverse of \(\Gamma \). For every \(u\in [0,\chi )\), we set

$$\begin{aligned} \tau _u:=\inf \{s\ge 0: \Gamma _s>u\}. \end{aligned}$$

By convention, we also set \(\tau _\chi =\sigma \). The left limit \(\tau _{u-}\) is equal to \(\inf \{s\ge 0:\Gamma _s=u\}\). Note that \(\Gamma \) (and therefore also \(\underline{W}\)) is constant on every interval \([\tau _{u-},\tau _u]\). We also observe that \(\Gamma _{\tau _u}=u\) and thus \(-\underline{W}_{\tau _u}=H_u\). Furthermore, for every \(\varepsilon >0\), \(\Gamma _{\tau _u+\varepsilon }>\Gamma _{\tau _u}\), and therefore \(\underline{W}\) is not constant on \([\tau _u,\tau _u+\varepsilon ]\) [recall (14)], which implies \(\underline{W}_{\tau _u}=\widehat{W}_{\tau _u}\). We have thus \(p_\zeta (\tau _u)\in \Theta \) and a similar argument gives \(p_\zeta (\tau _{u-})\in \Theta \).
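
When \(\Gamma \) is given as a nondecreasing array on a uniform grid of \([0,\sigma ]\), the right-continuous inverse and its left limits reduce to binary searches. The following sketch (Python; the array representation and the clamping convention \(\tau _\chi =\sigma \) are assumptions of the example) returns \(\tau _u\) and \(\tau _{u-}\).

```python
import numpy as np

def tau(Gamma, dt, u):
    """Right-continuous inverse tau_u = inf{s : Gamma_s > u} for a nondecreasing
    sampled path Gamma with time step dt (clamped at the final time)."""
    i = np.searchsorted(Gamma, u, side="right")     # first index with Gamma > u
    return min(i, len(Gamma) - 1) * dt

def tau_left(Gamma, dt, u):
    """Left limit tau_{u-} = inf{s : Gamma_s >= u}; for a continuous Gamma this
    is inf{s : Gamma_s = u}, as in the text."""
    i = np.searchsorted(Gamma, u, side="left")      # first index with Gamma >= u
    return min(i, len(Gamma) - 1) * dt
```

An interval \([\tau _{u-},\tau _u]\) of positive length then corresponds to a flat stretch of \(\Gamma \), which is precisely the situation giving rise to the set \(\mathcal {S}^1\) introduced just below.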

We next consider the subsets \(\Theta ^\circ \) and \(\Theta ^1\) of \(\Theta \) defined as follows. We let \(\mathcal {S}^1\) be the countable subset of \([0,\sigma ]\) that consists of all times \(\tau _{u-}\) for \(u\in (0,\chi ]\) such that \(\tau _{u-}<\tau _u\) and we set \(\Theta ^1=p_\zeta (\mathcal {S}^1)\). By the preceding remarks, \(\Theta ^1\subset \Theta \), and we also set \(\Theta ^\circ =\Theta \backslash \Theta ^1\). We note that \(s\in [0,\sigma )\) belongs to \(\mathcal {S}^1\) if and only if s is the left end of a maximal open interval on which \(\underline{W}\) (or equivalently \(\Gamma \)) is constant. Alternatively, s belongs to \(\mathcal {S}^1\) if and only if \(\underline{W}_s=\widehat{W}_s\) and \(\underline{W}\) is constant on \([s,s+\varepsilon ]\) for some \(\varepsilon >0\). Indeed, if the latter properties hold then \(\underline{W}\) is not constant on any interval \([s-\delta ,s]\), \(\delta >0\) (thus implying that \(s=\tau _{u-}\) with \(u=\Gamma _s\)) because otherwise, using the fact that s cannot be a time of left increase of \(\zeta \), this would mean that \(W_s\) attains its minimum both at \(\zeta _s\) and at a time \(r<\zeta _s\), leading to a contradiction with [1, Lemma 16].

Let us explain the motivation for these definitions. Our goal is to identify the metric net \(\mathcal {M}\) as a quotient space of the loop tree \({\mathcal L}=[0,\chi ]{/}\sim \). We will verify (see Lemma 11 below) that the mapping \([0,\chi ]\ni u\mapsto p_\zeta (\tau _u)\) induces a bijection from \(\mathcal {L}\) onto \(\Theta ^\circ \) (we can already observe that times of the form \(\tau _u\) do not belong to \(\mathcal {S}^1\)). On the other hand, it is not hard to check that, for any point a of \(\Theta \), there exists \(a'\in \Theta ^\circ \) such that \(\Pi (a)=\Pi (a')\). Hence the preceding bijection will allow us to view \(\mathcal {M}=\Pi (\Theta )=\Pi (\Theta ^\circ )\) as a quotient space of \(\mathcal {L}\), see Theorem 12 below.

We notice that points of \(\Theta ^1\) are leaves of \({\mathcal T}_\zeta \), meaning that if \(a\in \Theta ^1\), there is exactly one value \(s\in [0,\sigma ]\) such that \(p_\zeta (s)=a\). Indeed suppose that \(a=p_\zeta (s)=p_\zeta (s')\) with \(s<s'\) and \(a\in \Theta \). Then both s and \(s'\) are times of (left or right) increase of \(\zeta \), and, together with the property \(\widehat{W}_a=\underline{W}_a\), this implies that \(\widehat{W}\) takes values strictly less than \(\underline{W}_a\) immediately after s, resp. immediately after \(s'\) (see Sect. 4), so that neither s nor \(s'\) can belong to \(\mathcal {S}^1\).

It follows from the last observation that \(p_\zeta (\tau _u)\in \Theta ^\circ \) for every \(u\in [0,\chi ]\). Indeed, we have already seen that \(p_\zeta (\tau _u)\) belongs to \(\Theta \), and if we had \(p_\zeta (\tau _u)\in \Theta ^1\) this would imply that \(\tau _u\in \mathcal {S}^1\), which is impossible.

Remark

One can describe the elements of \(\mathcal {S}^1\) in the following way (this description is not needed for what follows). Let \(b\in \Theta \) be such that b has a strict descendant c with \(\underline{W}_c=\underline{W}_b\) (in the terminology of [1], b is an excursion debut above the minimum). Let \([s_1,s_2]\subset [0,\sigma ]\) be the interval whose image under \(p_\zeta \) gives all descendants of b. Then the set

$$\begin{aligned} \{r\in [s_1,s_2]: \underline{W}_r =\underline{W}_{s_1}\hbox { and }\widehat{W}_r>\underline{W}_{s_1}\} \end{aligned}$$

is an open subset of \([s_1,s_2]\), and the left end of each of its connected components belongs to \(\mathcal {S}^1\). Furthermore any element of \(\mathcal {S}^1\) can be obtained in this way.

Lemma 11

For \(u,v\in [0,\chi ]\), the property \(u\sim v\) holds if and only if \(p_\zeta (\tau _u)=p_\zeta (\tau _v)\). Furthermore, the mapping \(\Psi : [0,\chi ]\ni u\mapsto p_\zeta (\tau _u)\) induces a bijection from \({\mathcal L}=[0,\chi ]{/}\sim \) onto \(\Theta ^\circ \), which will be denoted by \(\Phi \).

Proof

We already know that \(p_\zeta (\tau _u)\in \Theta ^\circ \) for every \(u\in [0,\chi ]\). We then verify that, if \(a\in \Theta ^\circ \), there exists \(u\in [0,\chi )\) with \(p_\zeta (\tau _u)=a\). We can write \(a=p_\zeta (s)\) where \(\widehat{W}_s=\underline{W}_s\) and the mapping \(r\mapsto \underline{W}_r\) is not constant on \([s,s+\varepsilon ]\), for every \(\varepsilon >0\). Using the formula \(-\underline{W}_r= H_{\Gamma _r}\), it follows that \(\Gamma _{s+\varepsilon }>\Gamma _s\) for every \(\varepsilon >0\), and thus \(s=\tau _{\Gamma _s}\). Finally \(a=p_\zeta (\tau _u)\) with \(u=\Gamma _s\).

Next let us prove that \(u\sim v\) implies \(p_\zeta (\tau _u)=p_\zeta (\tau _v)\). Let \(u\sim v\) and without loss of generality suppose that \(0<u<v<\chi \). We first assume that \(H_u=H_v\) and \(H_r>H_u\) for every \(r\in (u,v)\). Then, we must have \(\underline{W}_s<\underline{W}_{\tau _u}=\underline{W}_{\tau _{v-}}\) for every \(s\in (\tau _u,\tau _{v-})\)—recall that, by (14), \(\underline{W}\) is constant over any interval \([\tau _{r-},\tau _r]\). This implies that \(\zeta _s\ge \zeta _{\tau _u}=\zeta _{\tau _{v-}}\) for every \(s\in (\tau _u,\tau _{v-})\) (if there exists \(s\in (\tau _u,\tau _{v-})\) such that \(\zeta _s< \zeta _{\tau _u}\), then, for every \(\delta >0\) small enough, the properties of the Brownian snake allow us to find such an s with the additional property that \(W_s\) is the restriction of \(W_{\tau _u}\) to \([0,\zeta _s-\delta ]\), which contradicts \(\underline{W}_s<\underline{W}_{\tau _u}\)—and we can make a symmetric argument if there exists \(s\in (\tau _u,\tau _{v-})\) such that \(\zeta _s< \zeta _{\tau _{v-}}\)). It follows that \(p_\zeta (\tau _u)=p_\zeta (\tau _{v-})\). Moreover, since \(\tau _{v-}\) is a point of left increase of \(\zeta \), \(\tau _{v-}\) cannot be a point of right increase of \(\widehat{W}\), so that there are values of \(s> \tau _{v-}\) arbitrarily close to \(\tau _{v-}\) such that \(\underline{W}_s<\underline{W}_{\tau _{v-}}\), and therefore \(\Gamma _s>\Gamma _{\tau _{v-}}\). It follows that we have \(\tau _{v-}=\tau _v\) giving \(p_\zeta (\tau _u)=p_\zeta (\tau _v)\) as desired.

Suppose then that \(u\sim v\) but the property \(H_r>H_u\) for every \(r\in (u,v)\) does not hold. Then (uv) is the limit of a sequence \((u_n,v_n)\) such that, for every n, \(H_{u_n}=H_{v_n}\) and \(H_r>H_{u_n}\) for every \(r\in (u_n,v_n)\). We must have \(u_n<u\) and \(v_n>v\) for all n large enough (otherwise a simple argument shows that \(H_r>H_u\) for every \(r\in (u,v)\), which contradicts our assumption). By the first part of the argument, we have \(\zeta _s\ge \zeta _{\tau _{u_n}}=\zeta _{\tau _{v_n}}\) for \(s\in [\tau _{u_n},\tau _{v_n}]\), and, letting n tend to \(\infty \), we get \(\zeta _s\ge \zeta _{\tau _{u-}}=\zeta _{\tau _{v}}\), for every \(s\in [\tau _{u-},\tau _v]\). Then the fact that \(\tau _{u-}\) is a point of right increase of \(\zeta \) implies that \(\tau _{u-}=\tau _u\) (by the same argument as above) and we conclude again that \(p_\zeta (\tau _u)=p_\zeta (\tau _v)\).

Finally, it remains to prove that the property \(p_\zeta (\tau _u)=p_\zeta (\tau _v)\) implies \(u\sim v\). Note that \(\tau _u=\tau _v\) is only possible if \(u=v\), so that we may assume that \(\tau _u<\tau _v\). Then \(a=p_\zeta (\tau _u)=p_\zeta (\tau _v)\) is a point of multiplicity 2 of \({\mathcal T}_\zeta \) (since \(a\in \Theta \), a cannot have multiplicity 3 in \({\mathcal T}_\zeta \)), and the points \(p_\zeta (s)\) for \(\tau _u\le s\le \tau _v\) are the descendants of a, so that \(\underline{W}_s\le \underline{W}_a\) for every \(\tau _u\le s\le \tau _v\). It follows that \(H_r\ge H_u\) for \(u\le r\le v\) (write \(H_r=-\underline{W}_{\tau _r}\)). If \(H_r> H_u\) for \(u< r< v\), this means that \(u\sim v\) and we are done. Otherwise there exists \(r\in (u,v)\) such that \(H_r=H_u\), and this means that a has a strict descendant \(b=p_\zeta (\tau _r)\) such that \(\underline{W}_b=\underline{W}_a\). This implies that the path \(W_a\) hits its minimal value only at its terminal time (otherwise \(W_b\) would have two equal local minima). We know that, just before \(\tau _u\), there are values of s such that \(\zeta _s<\zeta _{\tau _u}\) (otherwise \(\tau _u\) would be a time of local minimum of \(\zeta \), but this is excluded since such times correspond to points of multiplicity 3 of \({\mathcal T}_\zeta \) and thus never satisfy \(\underline{W}_s=\widehat{W}_s\)), and it follows that there are times \(s<\tau _u\) arbitrarily close to \(\tau _u\) such that \(\underline{W}_s> \underline{W}_{\tau _u}\), and thus \(H_{\Gamma _s}<H_u\). Hence, if we set \(u_n=\sup \{r<u: H_r=H_u-\frac{1}{n}\}\), we have \(u_n\longrightarrow u\) as \(n\rightarrow \infty \). Similarly, if \(v_n=\inf \{r>v: H_r=H_v-\frac{1}{n}\}\), we have \(v_n\longrightarrow v\) as \(n\rightarrow \infty \). Clearly \(u_n\sim v_n\), so that we also get \(u\sim v\). \(\square \)

We write \(\mathrm {p}_\mathcal {L}\) for the canonical projection from \([0,\chi ]\) onto \(\mathcal {L}=[0,\chi ]{/}\sim \). If \(\alpha \in \mathcal {L}\) and \(\alpha =\mathrm {p}_\mathcal {L}(s)\), we will also write \(H_\alpha =H_s\).

In a way similar to the definition of intervals in \({\mathcal T}_\zeta \), we can define intervals in \(\mathcal {L}\). If \(\alpha ,\beta \in {\mathcal L}\), we set \([\alpha ,\beta ]=\mathrm {p}_{\mathcal L}([s,t])\), where \(s,t\in [0,\chi ]\) are such that \(\mathrm {p}_{\mathcal L}(s)=\alpha \) and \(\mathrm {p}_{\mathcal L}(t)=\beta \) and \([s,t]\) is as small as possible (here again we use the convention \([s,t]=[s,\chi ]\cup [0,t]\) if \(t<s\)).

We will identify the metric net \((\mathcal {M},\Delta ^*)\) with a quotient space of the looptree \({\mathcal L}\). Informally, the latter quotient space is obtained by identifying two points \(\alpha \) and \(\beta \) if they face each other at the same height in \({\mathcal L}\): This means that we require that \(H_\alpha =H_\beta \), and that vertices “between” \(\alpha \) and \(\beta \) have a smaller height. To make this more precise, we define, for every \(\alpha ,\beta \in {\mathcal L}\),

$$\begin{aligned} \mathcal {D}^\circ (\alpha ,\beta )= 2\min \Bigg ( \max _{\gamma \in [\alpha ,\beta ]} H_\gamma , \max _{\gamma \in [\beta ,\alpha ]} H_\gamma \Bigg ) - H_\alpha -H_\beta , \end{aligned}$$

and

$$\begin{aligned} \mathcal {D}^*(\alpha ,\beta )= \inf _{\alpha _0=\alpha ,\alpha _1,\ldots ,\alpha _{k-1},\alpha _k=\beta } \; \sum _{i=1}^k \mathcal {D}^\circ (\alpha _{i-1},\alpha _i), \end{aligned}$$

where the infimum is over all possible choices of the integer \(k\ge 1\) and of \(\alpha _1,\ldots ,\alpha _{k-1}\in {\mathcal L}\).
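
Under the identity (16) established in the proof below, \(\mathcal {D}^\circ \) is the exact mirror image of \(D^\circ \): maxima of H over the two arcs replace minima of Z. On sampled data it can therefore be evaluated by the same device as in the earlier sketch for \(D^\circ \) (Python; representing points of \({\mathcal L}\) by single sample times of H is an approximation that ignores the quotient by \(\sim \)):

```python
import numpy as np

def calD_circ(H, i, j):
    """Looptree analogue of D°: 2 * min(max of H over [i, j], max over the
    complementary cyclic arc) - H[i] - H[j], for two sample times i, j."""
    H = np.asarray(H, dtype=float)
    if i == j:
        return 0.0
    lo, hi = min(i, j), max(i, j)
    inside = H[lo:hi + 1].max()
    outside = max(H[hi:].max(), H[:lo + 1].max())
    return 2.0 * min(inside, outside) - H[i] - H[j]
```

The chain infimum defining \(\mathcal {D}^*\) can then be relaxed over the sampled points exactly as was done for \(D^*\).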

The following statement is a reformulation, in a more precise form, of Theorem 3 stated in the introduction.

Theorem 12

For \(\alpha ,\beta \in {\mathcal L}\), set \(\alpha \simeq \beta \) if and only if \(\mathcal {D}^*(\alpha ,\beta )=0\). Then the property \(\alpha \simeq \beta \) holds if and only if \(\mathcal {D}^\circ (\alpha ,\beta )=0\), or equivalently

$$\begin{aligned} H_\alpha =H_\beta =\min \Bigg ( \max _{\gamma \in [\alpha ,\beta ]} H_\gamma , \max _{\gamma \in [\beta ,\alpha ]} H_\gamma \Bigg ). \end{aligned}$$
(15)

Furthermore, \(\mathcal {D}^*\) induces a metric on the quotient space \({\mathcal L}{/}\simeq \). If \(\Phi :{\mathcal L}\longrightarrow \Theta ^\circ \) denotes the bijection of Lemma 11, \(\Pi \circ \Phi \) induces an isometry from \(({\mathcal L}{/}\simeq , \mathcal {D}^*)\) onto \((\mathcal {M}, \Delta ^*)\).

Remark

It is not a priori obvious that (15) defines an equivalence relation on \({\mathcal L}\). This property follows from the fact that (15) holds if and only if \(\mathcal {D}^*(\alpha ,\beta )=0\), which we derive in the following proof from the relations between \({\mathcal L}\) and the Brownian map.

Proof

We first verify that, if \(\alpha ,\beta \in {\mathcal L}\) and \(a=\Phi (\alpha ),b=\Phi (\beta )\), we have

$$\begin{aligned} \mathcal {D}^\circ (\alpha ,\beta )= D^\circ (a,b). \end{aligned}$$
(16)

Let \(s\in [0,\chi ]\) be such that \(\alpha =\mathrm {p}_{\mathcal L}(s)\). Note that we have then \(a=p_\zeta (\tau _s)\) by the definition of \(\Phi \). Hence,

$$\begin{aligned} H_\alpha =H_s=-\underline{W}_{\tau _s}=-Z_{\tau _s}=-Z_a. \end{aligned}$$

Similarly, we have \(H_\beta =-Z_b\).

So the proof of (16) reduces to checking that

$$\begin{aligned} \max _{\gamma \in [\alpha ,\beta ]} H_\gamma = \max _{c\in [a,b]} (-Z_c). \end{aligned}$$

To get this equality, we write

$$\begin{aligned} \max _{\gamma \in [\alpha ,\beta ]} H_\gamma&=\min _{\mathrm {p}_{\mathcal L}(s)=\alpha ,\mathrm {p}_{\mathcal L}(t)=\beta } \Big ( \max _{r\in [s,t]} H_r\Big )\\&= \min _{\mathrm {p}_{\mathcal L}(s)=\alpha ,\mathrm {p}_{\mathcal L}(t)=\beta } \Big ( \max _{r\in [\tau _s,\tau _t]} (-Z_r)\Big )\\&= \min _{p_\zeta (u)=a,p_\zeta (v)=b} \Big ( \max _{r\in [u,v]} (-Z_r)\Big )\\&=\max _{c\in [a,b]} (-Z_c). \end{aligned}$$

The second equality holds because \(H_r=-\underline{W}_{\tau _r}=-Z_{\tau _r}\), and \(\underline{W}\) stays constant on intervals \([\tau _{r-},\tau _r]\). To justify the third equality, we note that the elements u of \([0,\sigma ]\) such that \(p_\zeta (u)=a\) are exactly the reals \(u=\tau _s\) where s is such that \(\mathrm {p}_{\mathcal L}(s)=\alpha \): Since \(a\in \Theta ^\circ \), any \(u\in [0,\sigma ]\) such that \(p_\zeta (u)=a\) must be of the form \(u=\tau _s\) with \(s\in [0,\chi ]\), and if u is of this form, the property \(p_\zeta (u)=a\) is equivalent to \(\mathrm {p}_{\mathcal L}(s)=\alpha \) by Lemma 11. This completes the proof of (16).

We then claim that, for every \(x\in \mathcal {M}\), there exists \(a\in \Theta ^\circ \) such that \(\Pi (a)=x\). Let \(x\in \mathcal {M}\). By Proposition 10, we know that \(x=\Pi (a)\) where \(a\in \Theta \). If \(a\in \Theta \backslash \Theta ^\circ =\Theta ^1\), then there exists \(u\in (0,\chi ]\) such that \(\tau _{u-}<\tau _u\) and \(a=p_\zeta (\tau _{u-})\), and we know that \(p_\zeta (\tau _u)\in \Theta ^\circ \). Our claim will follow from the fact that \(\Pi (p_\zeta (\tau _r))=\Pi (p_\zeta (\tau _{r-}))\), for every r such that \(\tau _{r-}<\tau _r\). Indeed, we know that \(\underline{W}\) is constant on \([\tau _{r-},\tau _r]\), so that \(\underline{W}_s=\underline{W}_{\tau _{r-}}=\underline{W}_{\tau _r}\) for every \(s\in [\tau _{r-},\tau _r]\). Since both \(p_\zeta (\tau _{r-})\) and \(p_\zeta (\tau _r)\) belong to \(\Theta \) we have also \(\underline{W}_{\tau _{r-}}=\widehat{W}_{\tau _{r-}}\) and \(\underline{W}_{\tau _r}=\widehat{W}_{\tau _r}\). It follows that \(\widehat{W}_s\ge \widehat{W}_{\tau _{r-}}=\widehat{W}_{\tau _r}\) for every \(s\in [\tau _{r-},\tau _r]\), hence \(Z_c\ge Z_{p_\zeta (\tau _{r-})}=Z_{p_\zeta (\tau _{r})}\) for every \(c\in [p_\zeta (\tau _{r-}),p_\zeta (\tau _{r})]\), which exactly means that \(\Pi (p_\zeta (\tau _{r-}))=\Pi (p_\zeta (\tau _{r}))\) as desired.

Let \(x,y\in \mathcal {M}\), and \(a,b\in \Theta ^\circ \) such that \(\Pi (a)=x\) and \(\Pi (b)=y\). From (13) and the preceding claim, we may write

$$\begin{aligned} \Delta ^*(x,y)=\,\mathrel {\mathop {\inf _{a=a_0,a_1,\ldots ,a_k=b}}\limits _{a_1,\ldots ,a_{k-1}\in \Theta ^\circ }^{}} \sum _{i=1}^k D^\circ (a_{i-1},a_i). \end{aligned}$$

We then use the bijection \(\Phi \) of Lemma 11 to observe that, if \(\alpha =\Phi ^{-1}(a)\) and \(\beta =\Phi ^{-1}(b)\), we have also, thanks to (16),

$$\begin{aligned} \Delta ^*(x,y)=\,\mathrel {\mathop {\inf _{\alpha =\alpha _0,\alpha _1,\ldots ,\alpha _k=\beta }}\limits _{\alpha _1,\ldots ,\alpha _{k-1}\in {\mathcal L}}^{}} \sum _{i=1}^k \mathcal {D}^\circ (\alpha _{i-1},\alpha _i)= \mathcal {D}^*(\alpha ,\beta ). \end{aligned}$$
(17)

In particular, if \(\alpha ,\beta \in {\mathcal L}\) and \(x=\Pi \circ \Phi (\alpha )\), \(y=\Pi \circ \Phi (\beta )\), we see that the condition \(\mathcal {D}^*(\alpha ,\beta )=0\) holds if and only if \(\Delta ^*(x,y)=0\), and (since \(\Delta ^*(x,y)\ge D^*(x,y)\)) the latter condition holds if and only if \(D^\circ (x,y)=0\), or equivalently \(\mathcal {D}^\circ (\alpha ,\beta )=0\) (by (16)). This gives the first assertion of the theorem.

Then \(\mathcal {D}^*\) is symmetric and satisfies the triangle inequality, hence induces a metric on the quotient space \({\mathcal L}{/}\simeq \). From the property \(\Delta ^*(\Pi \circ \Phi (\alpha ),\Pi \circ \Phi (\beta ))= \mathcal {D}^*(\alpha ,\beta )\), we see that the relation \(\alpha \simeq \beta \) implies \(\Pi \circ \Phi (\alpha )=\Pi \circ \Phi (\beta )\), so that \(\Pi \circ \Phi \) induces a mapping from \({\mathcal L}{/}\simeq \) to \(\mathcal {M}\). This mapping is onto since \(\Pi (\Theta ^\circ )=\mathcal {M}\), and is an isometry by (17). \(\square \)

8 Holes in the metric net

In this section, we continue our applications to the Brownian map. We keep the notation and assumptions of the preceding section. In particular, the process \((H_s)_{0\le s\le \chi }\), which is distributed under \({\mathbb N}_0\) as the height process of the \(\psi _0\)-Lévy tree, was introduced so that the representation formula (14) holds.

Our goal is to discuss the connected components of the complement of the metric net in the Brownian map (these are called Brownian disks in [21]). We again argue under the excursion measure \({\mathbb N}_0\). For every \(s>0\), we denote by \(\mathcal {Y}_s\) the (total mass of the) exit measure of the Brownian snake from \((-s,\infty )\), see [14, Chapter V]. Then \((\mathcal {Y}_s)_{s>0}\) has a càdlàg modification that we consider from now on (see the discussion in Section 2.5 of [1]). For every \(s>0\), we can also consider the total local time of H at level s, which we denote by \(L^s_\chi \) in agreement with Sect. 3. The Ray-Knight theorem of [7, Theorem 1.4.1] shows that \((L^s_\chi )_{s>0}\) is distributed under \({\mathbb N}_0\) according to the excursion measure of the continuous-state branching process with branching mechanism \(\psi _0\), and therefore has also a càdlàg modification.

Lemma 13

We have \(L^s_\chi =\mathcal {Y}_s\) for every \(s>0\), \({\mathbb N}_0\) a.e.

Proof

Let \(s>0\) and, for every \(\varepsilon >0\), let \(N_{s,\varepsilon }\) be the number of excursions of the Brownian snake outside \((-s,\infty )\) that hit \(-s-\varepsilon \). Then an easy application of the special Markov property gives

$$\begin{aligned} \mathcal {Y}_s=\lim _{\varepsilon \rightarrow 0} v(\varepsilon )^{-1}\,N_{s,\varepsilon }\;,\qquad {\mathbb N}_0\ \hbox {a.e.} \end{aligned}$$

where \(v(\varepsilon )={\mathbb N}_0(\max \widehat{W}_s>\varepsilon )={\mathbb N}_0(\mathcal {H}({\mathcal T}_H)>\varepsilon )\). On the other hand, (14) shows that \(N_{s,\varepsilon }\) is also the number of excursions of H above level s that hit \(s+\varepsilon \). By comparing the preceding approximation of \(\mathcal {Y}_s\) with [8, Theorem 4.2], we arrive at the stated result. \(\square \)

Following closely [1], we say that \(a\in \Theta \) is an excursion debut if a has a strict descendant b such that \(Z_c>Z_a\) for every \(c\in \rrbracket a,b\rrbracket \). We also say that \(m\in {\mathbb R}_+\) is a local minimum of H if there exist \(s\in (0,\chi )\) and \(\varepsilon \in (0,s\wedge (\chi -s))\) such that \(H_s=m\) and \(H_r\ge m\) for every \(r\in (s-\varepsilon ,s+\varepsilon )\).

We now claim that the following sets are in one-to-one correspondence:

(a) The set of all connected components of \(\mathbf{m}\backslash \mathcal {M}\).

(b) The set of all connected components of \({\mathcal T}_\zeta \backslash \Theta \).

(c) The set of all excursion debuts.

(d) The set of all jump times of the exit measure process \(\mathcal {Y}\).

(e) The set of all points of infinite multiplicity of \({\mathcal T}_H\).

(f) The set of all local minima of H.

Let us explain these correspondences. First, the fact that local minima of H correspond to points of infinite multiplicity of \({\mathcal T}_H\) was explained at the end of Sect. 3. Recall that, for every point of infinite multiplicity of \({\mathcal T}_H\), there is a Cantor set of local minimum times corresponding to the associated local minimum (see the end of Sect. 3). Then, by [8, Theorem 4.7], each branching point b (necessarily of infinite multiplicity) of \({\mathcal T}_H\) corresponds to a discontinuity of \(s\mapsto L^s_\chi =\mathcal {Y}_s\) at time \(H_b\), and the corresponding jump \(\Delta \mathcal {Y}_{H_b}\) is the weight of the branching point b. By [1, Proposition 36], discontinuity times for \(\mathcal {Y}_s\) are in one-to-one correspondence with excursion debuts, and a discontinuity time s corresponds to an excursion debut a such that \(s=-Z_a\). The fact that excursion debuts are in one-to-one correspondence with connected components of \({\mathcal T}_\zeta \backslash \Theta \) is Proposition 20 in [1]: If a is an excursion debut, the associated connected component \(\mathcal {C}\) is the collection of all strict descendants b of a such that \(Z_c>Z_a\) for every \(c\in \rrbracket a,b\rrbracket \), and the boundary \(\partial \mathcal {C}\) consists of all descendants b of a such that \(Z_b=Z_a\) and \(Z_c>Z_a\) for every \(c\in \rrbracket a,b\llbracket \) (here we may replace the strict inequality \(Z_c>Z_a\) by a weak one \(Z_c\ge Z_a\) because the local minima of a Brownian snake path are distinct). Furthermore, the “boundary size” of \(\mathcal {C}\) may be defined as the quantity \(\Delta \mathcal {Y}_s\), if s is the associated jump time of the process \(\mathcal {Y}\) (this is also the weight of the corresponding point of infinite multiplicity of \({\mathcal T}_H\)). Finally, the fact that the sets (a) and (b) are also in one-to-one correspondence is a consequence of the following lemma.

Lemma 14

Let \(\mathcal {C}\) be a connected component of \({\mathcal T}_\zeta \backslash \Theta \). Then \(\Pi (\mathcal {C})\) is a connected component of \(\mathbf{m}\backslash \mathcal {M}\), and \(\partial \Pi (\mathcal {C})= \Pi (\partial \mathcal {C})\).

Proof

Let \(a\in {\mathcal T}_\zeta \) be the excursion debut such that \(\mathcal {C}\) is the collection of all strict descendants b of a such that \(Z_c>Z_a\) for every \(c\in \rrbracket a,b\rrbracket \). We first observe that \(\Pi (\mathcal {C})\) is an open subset of \(\mathbf{m}\backslash \mathcal {M}\). This follows from the fact that the topology of \(\mathbf{m}\) is the quotient topology and \(\Pi ^{-1}(\Pi (\mathcal {C}))=\mathcal {C}\) (to derive the latter equality, note that, if \(b\in \mathcal {C}\) and \(b'\in {\mathcal T}_\zeta \) are such that \(b\approx b'\), we have \( \min \{Z_c:c\in \llbracket b,b'\rrbracket \}=Z_b=Z_{b'} > Z_a\), and it follows that \(b'\in \mathcal {C}\)). Since \(\Pi (\mathcal {C})\) is connected, in order to get the statement of the lemma, we need only verify that, if \(x\in \mathbf{m}\backslash \mathcal {M}\) is such that there is a continuous path \((\gamma (t))_{0\le t\le 1}\) that stays in \(\mathbf{m}\backslash \mathcal {M}\) and connects x to a point y of \(\Pi (\mathcal {C})\), then \(x\in \Pi (\mathcal {C})\). We argue by contradiction and assume that \(x\notin \Pi (\mathcal {C})\). We then set \(t_0:=\inf \{t\in (0,1]:\gamma (t)\in \Pi (\mathcal {C})\}\). Clearly, \(\gamma (t_0)\) belongs to the boundary \(\partial \Pi (\mathcal {C})\) of \(\Pi (\mathcal {C})\). On the other hand, it is easy to verify that \(\partial \Pi (\mathcal {C})\subset \Pi (\partial \mathcal {C})\) (if \(z\in \partial \Pi (\mathcal {C})\), we can write \(z=\lim \Pi (a_n)\) where \(a_n\in \mathcal {C}\), and, by extracting a subsequence, we can assume that \(a_n \longrightarrow a_\infty \) in \({\mathcal T}_\zeta \), so that we have \(z=\Pi (a_\infty )\), and \(a_\infty \in \partial \mathcal {C}\) since \(a_\infty \in \mathcal {C}\) would imply \(z\in \Pi (\mathcal {C})\), contradicting \(z\in \partial \Pi (\mathcal {C})\)). So \(\gamma (t_0)\in \Pi (\partial \mathcal {C})\subset \Pi (\Theta )= \mathcal {M}\), which contradicts our assumption that \(\gamma \) stays in \(\mathbf{m}\backslash \mathcal {M}\).

For the last assertion, it remains to see that \( \Pi (\partial \mathcal {C})\subset \partial \Pi (\mathcal {C})\). This is straightforward: If \(b\in \partial \mathcal {C}\), we have automatically \(Z_b=Z_a\) and \(b\in \Theta \), so that \(\Pi (b)\in \mathcal {M}\), forcing \(\Pi (b)\in \partial \Pi (\mathcal {C})\). \(\square \)

It is worth giving a direct interpretation of the correspondence between sets (c) and (f) above. If a is an excursion debut, we can write \(a=p_\zeta (s_1)=p_\zeta (s_2)\), where \(s_1<s_2\), and the image under \(p_\zeta \) of the interval \([s_1,s_2]\) corresponds to the exploration of descendants of a in \({\mathcal T}_\zeta \). It follows that we have \(\underline{W}_c\le \underline{W}_a=Z_a\) for every \(c\in p_\zeta ([s_1,s_2])\). There are points \(b\in p_\zeta ([s_1,s_2])\) such that \(Z_b<Z_a\) (in fact one can find such points arbitrarily close to a, as a consequence of the fact that points of increase for \(\zeta \) cannot be points of increase for \(\widehat{W}\)). If b is such a point we can consider the last ancestor c of b such that \(Z_c\le Z_a\), noting that the definition of an excursion debut implies that c is a strict descendant of a. Then if \(t_1<t_2\) are the two times in \((s_1,s_2)\) such that \(p_\zeta (t_1)=p_\zeta (t_2)=c\), one verifies that both \(\Gamma _{t_1}\) and \(\Gamma _{t_2}\) are local minimum times of H corresponding to the local minimum \(-Z_a\). In fact the set of all these local minimum times consists of all \(\Gamma _r\) for \(r\in (s_1,s_2)\) such that \(p_\zeta (r)\) belongs to the boundary of the connected component of \({\mathcal T}_\zeta \backslash \Theta \) associated with a.

We will now establish that the boundary of any connected component of \(\mathbf{m}\backslash \mathcal {M}\) is a simple loop. To this end, it is convenient to introduce the Lévy process excursion \((X_t)_{0\le t\le \chi }\) associated with H (see Sect. 3). Note that X can be reconstructed as a measurable function of H, and that any point of infinite multiplicity of \({\mathcal T}_H\) corresponds to a unique jump time of X, the size of the jump being the weight of the branching point. There is thus also a one-to-one correspondence between connected components of \(\mathbf{m}\backslash \mathcal {M}\) and jump times of X.

Recall the notation \(\Psi \) for the mapping introduced in Lemma 11.

Proposition 15

Let C be a connected component of \(\mathbf{m}\backslash \mathcal {M}\), let a be the associated excursion debut, and let \(s\in (0,\chi )\) be the associated jump time of X. For every \(r\in [0,\Delta X_s]\), set

$$\begin{aligned} \eta _r=\inf \{t\ge s: X_t<X_s-r\}. \end{aligned}$$

Then the mapping

$$\begin{aligned} r\mapsto \gamma (r)=\Pi \circ \Psi (\eta _r), \quad 0\le r\le \Delta X_s, \end{aligned}$$

defines a simple loop in \(\mathbf{m}\), whose initial and end points are equal to \(\Pi (a)\). Furthermore, the range of \(\gamma \) is the boundary of C.

Proof

To simplify notation, we set \(h=-Z_a\). By definition, \(\Pi \circ \Psi (u)=\Pi (p_\zeta (\tau _u))\) for \(0\le u\le \chi \). The first step of the proof is to check that every point of \(\partial C\) can be written in the form \(\Pi (p_\zeta (\tau _r))\) for some \(r\in [s,\eta _{\Delta X_s}]\) such that \(H_r=h\).

Let \(\mathcal {C}\) stand for the connected component of \({\mathcal T}_\zeta \backslash \Theta \) such that \(\Pi (\mathcal {C})=C\). Recall that \(\partial C=\Pi (\partial \mathcal {C})\) by Lemma 14. Let \([s_1,s_2]\) be the interval corresponding to the descendants of a in the coding of \({\mathcal T}_\zeta \). Since \(s_1\) and \(s_2\) are both (left or right) increase times of \(\zeta \), they cannot be increase times for \(\widehat{W}\), and this implies \(s_1=\tau _{\Gamma _{s_1}}\) and \(s_2=\tau _{\Gamma _{s_2}}\). Let \(b\in \partial \mathcal {C}\subset \Theta \). We claim that \(b=p_\zeta (\tau _r)\) or \(b=p_\zeta (\tau _{r-})\) for some \(r\in [0,\chi ]\) such that \(s_1\le \tau _r\le s_2\). This is true if \(b=a\) since \(a=p_\zeta (s_1)=p_\zeta (\tau _{\Gamma _{s_1}})\). If \(b\in \partial \mathcal {C}\) and \(b\not = a\), we can write \(b=p_\zeta (u)\) with \(u\in (s_1,s_2)\), and Lemma 16 in [1] implies that there are values of v arbitrarily close to u such that \(\widehat{W}_v<\widehat{W}_u\), which implies \(u=\tau _{\Gamma _u}\) or \(u=\tau _{\Gamma _u-}\), giving our claim. On the other hand, we have seen in the proof of Theorem 12 that \(\Pi (p_\zeta (\tau _{r-}))=\Pi (p_\zeta (\tau _r))\) for any \(r\in (0,\chi ]\). Since any point of \(\partial C\) can be written as \(\Pi (b)\) for some \(b\in \partial \mathcal {C}\), we conclude that any point \(x\in \partial C\) is of the form \(x=\Pi (p_\zeta (\tau _r))=\Pi \circ \Psi (r)\) for some \(r\in [0,\chi ]\) such that \(s_1\le \tau _r\le s_2\). Furthermore, we have then \(H_r=-\underline{W}_{\tau _r}=-Z_{p_\zeta (\tau _r)}=-Z_a=h\), where the second equality holds because \(p_\zeta (\tau _r)\in \Theta \) and the third one because \(Z_b=Z_a\) for every \(b\in \partial \mathcal {C}\) (see the comments before Lemma 14).

Next note that the condition \(s_1\le \tau _r\le s_2\) holds if and only if \(\Gamma _{s_1}\le r\le \Gamma _{s_2}\), and that \([\Gamma _{s_1},\Gamma _{s_2}]\) is the interval corresponding to the descendants of the point of infinite multiplicity associated with C, in the coding of the tree \({\mathcal T}_H\). Using the comments at the end of Sect. 3, we see that this interval is the same as \([s,\eta _{\Delta X_s}]\) with the notation of the proposition. This completes the first step of the proof.

In the second step, we observe that, conversely, if r is such that \(r\in [s,\eta _{\Delta X_s}]\) and \(H_r=h\), we have \(\Pi (p_\zeta (\tau _r))\in \partial C\). Indeed, we have then, recalling that \(p_\zeta (\tau _r)\in \Theta \),

$$\begin{aligned} Z_{p_\zeta (\tau _r)}=\underline{W}_{\tau _r}=-H_r=-h=Z_a, \end{aligned}$$

and the fact that \(p_\zeta (\tau _r)\) is a descendant of a, which belongs to \(\Theta \) and satisfies \(Z_{p_\zeta (\tau _r)}=Z_a\), implies that \(p_\zeta (\tau _r)\in \partial \mathcal {C}\) (see the comments before Lemma 14, or [1, Lemma 19]).

We next note that the mapping \(r\mapsto \eta _r\) is right-continuous. From the results recalled at the end of Sect. 3, the times \(r\in [s,\eta _{\Delta X_s}]\) such that \(H_r=h\) are exactly all reals of the form \(\eta _u\) or \(\eta _{u-}\) for some \(u\in [0,\Delta X_s]\). Moreover, for all u such that \(\eta _{u-}<\eta _{u}\), we have \(H_r> h\) for every \(r\in (\eta _{u-},\eta _u)\), so that \(\eta _{u-}\sim \eta _u\) and (by Lemma 11) \(p_\zeta (\tau _{\eta _{u-}})= p_\zeta (\tau _{\eta _u})\).

By combining the first two steps with the preceding observation, we obtain that the range of the mapping \(\gamma \) of the proposition coincides with \(\partial C\). We note that \(\gamma (0)=\gamma (\Delta X_s)=\Pi (a)\), by the fact that \(s=\eta _0 \sim \eta _{\Delta X_s}\) (because \(s=\min p_H^{-1}(\kappa )\) and \(\eta _{\Delta X_s}=\max p_H^{-1}(\kappa )\), if \(\kappa =p_H(s)\) is the point of infinite multiplicity of \({\mathcal T}_H\) associated with C). It remains to verify that \(\gamma \) is continuous and that the restriction of \(\gamma \) to \([0,\Delta X_s)\) is one-to-one. To get the latter property, we note that the equivalence classes of \(\eta _u\), \(u\in [0,\Delta X_s)\), in the quotient space \({\mathcal L}=[0,\chi ]{/}\sim \) are distinct (they correspond to the points of the loop associated with the point of infinite multiplicity \(\kappa \)), and furthermore, the equivalence classes of these elements of \({\mathcal L}\) in the quotient \({\mathcal L}{/}\simeq \) are also distinct (from the fact that \(H_r> h\) for every \(r\in (\eta _{u-},\eta _u)\) whenever \(u\in [0,\Delta X_s)\) is such that \(\eta _{u-}<\eta _{u}\), it easily follows that property (15) cannot hold when \(\alpha =\mathrm {p}_{\mathcal L}(\eta _u)\) and \(\beta =\mathrm {p}_{\mathcal L}(\eta _v)\) with two distinct elements u and v of \([0,\Delta X_s)\)). Theorem 12 then implies that the quantities \(\Pi (\Psi (\eta _u))\), \(u\in [0,\Delta X_s)\), are distinct elements of \(\mathcal {M}\). Finally, to prove that \(\gamma \) is continuous, we note that \(\gamma \) is right-continuous with left limits by construction, and that the left limit of \(\gamma \) at \(u\in (0,\Delta X_s]\) is

$$\begin{aligned} \gamma (u-)=\Pi (p_\zeta (\tau _{(\eta _{u-})-}))=\Pi (p_\zeta (\tau _{\eta _{u-}}))=\Pi (p_\zeta (\tau _{\eta _{u}})) \end{aligned}$$

where the last equality holds because \(p_\zeta (\tau _{\eta _{u-}})= p_\zeta (\tau _{\eta _u})\) as mentioned above. This shows that \(\gamma (u-)=\gamma (u)\) and completes the proof. \(\square \)

Let us conclude this section with some comments. Recalling that the Brownian map \(\mathbf{m}\) is homeomorphic to the two-dimensional sphere [19], we get from Jordan’s theorem that all connected components of \(\mathbf{m}\backslash \mathcal {M}\) are homeomorphic to the disk. In fact these connected components are called Brownian disks in [21]. If C is a given connected component of \(\mathbf{m}\backslash \mathcal {M}\), the structure of C—in a sense that we do not make precise here—is described by the associated component \(\mathcal {C}\) of \({\mathcal T}_\zeta \backslash \Theta \), and the values of Z on \(\mathcal {C}\) (shifted so that the boundary values vanish). The preceding data correspond to what is called an excursion of the Brownian snake above its minimum in [1]. One key result of [1] states that conditionally on the exit measure process \((\mathcal {Y}_s)_{s>0}\), the excursions above the minimum are independent, and the distribution of the excursion corresponding to a jump \(\Delta \mathcal {Y}_s\) is given by a certain “excursion measure” conditioned on the boundary size being equal to \(\Delta \mathcal {Y}_s\). This suggests that one can reconstruct the Brownian map by first considering the metric net \(\mathcal {M}\) (which is a measurable function of H) and then gluing independently on each “hole” of the metric net (associated with a point of infinite multiplicity of the tree \({\mathcal T}_H\)) a Brownian disk corresponding to a Brownian snake excursion whose boundary size is the weight of the point of infinite multiplicity. See [18, Section 11] for further results in this direction.

9 Subordination by the local time

In this section, which is mostly independent of the previous ones, we generalize the subordination by the maximum discussed in Sect. 4. To this end, we deal with the Brownian snake associated with a more general spatial motion. Specifically, we consider a strong Markov process \(\xi \) with continuous sample paths with values in \({\mathbb R}_+\), and we write \(P_x\) for a probability measure under which \(\xi \) starts from x. We assume that 0 is a regular recurrent point for \(\xi \), and that

$$\begin{aligned} E_0\left[ \int _0^\infty \mathrm {d}t\,\mathbf {1}_{\{\xi _t=0\}}\right] =0. \end{aligned}$$
(18)

We can then define the local time process \((L(t),t\ge 0)\) of \(\xi \) at 0 (up to a multiplicative constant). We make the following continuity assumption: there exist two reals \(p>0\) and \(\varepsilon >0\), and a constant C such that, for every \(t\in [0,1]\) and \(x\in {\mathbb R}_+\),

$$\begin{aligned} E_x\left[ \left( \sup _{r\le t} |\xi _r-x|\right) ^p\right]&\le C\,t^{2+\varepsilon }, \end{aligned}$$
(19)
$$\begin{aligned} E_0[L(t)^p]&\le C\,t^{2+\varepsilon }. \end{aligned}$$
(20)

We write \(\mathscr {N}\) for the excursion measure of \(\xi \) away from 0 associated with the local time process \(L(\cdot )\), and \(\eta \) for the duration of the excursion under \(\mathscr {N}\).
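
As a simple illustration (a sketch that is not needed in what follows), these assumptions hold when \(\xi =|B|\) is a reflected linear Brownian motion, which is the case discussed in the Remark at the end of this section. Indeed, by Brownian scaling, for every \(t\in [0,1]\) and \(x\ge 0\),

$$\begin{aligned} E_x\left[ \left( \sup _{r\le t} |\xi _r-x|\right) ^p\right] \le E_x\left[ \left( \sup _{r\le t} |B_r-x|\right) ^p\right] = c_p\,t^{p/2},\qquad E_0\left[ L(t)^p\right] = c'_p\,t^{p/2}, \end{aligned}$$

with finite constants \(c_p,c'_p\), so that (19) and (20) hold as soon as \(p/2\ge 2+\varepsilon \), for instance with \(p=5\) and \(\varepsilon =1/2\); condition (18) is immediate since the zero set of \(\xi \) has zero Lebesgue measure.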

Under the preceding assumptions, the Brownian snake whose spatial motion is the pair \((\xi ,L)\) is defined by a straightforward adaptation of properties (a) and (b) stated at the beginning of Sect. 4 (see [14, Chapter IV] for more details), and we denote this process by \((W_s,\Lambda _s)\), where for every \(s\ge 0\),

$$\begin{aligned} W_s=(W_s(t))_{0\le t\le \zeta _s},\quad \Lambda _s=(\Lambda _s(t))_{0\le t\le \zeta _s}. \end{aligned}$$

For every \((x,r)\in {\mathbb R}_+\times {\mathbb R}_+\), let \({\mathbb N}_{(x,r)}\) denote the excursion measure of \((W,\Lambda )\) away from \((x,r)\). Under \({\mathbb N}_{(x,r)}\), the “lifetime process” \((\zeta _s)_{s\ge 0}\) is distributed according to the Itô measure \(\mathbf{n}(\cdot )\), and as above we let \(\sigma :=\sup \{s\ge 0:\zeta _s>0\}\) stand for the duration of the excursion \((\zeta _s)_{s\ge 0}\). As previously, \({\mathcal T}_\zeta \) denotes the tree coded by \((\zeta _s)_{0\le s\le \sigma }\) and \(p_\zeta : [0,\sigma ]\longrightarrow {\mathcal T}_\zeta \) is the canonical projection.

We write \(Y_0\) for the total mass of the exit measure of \((W,\Lambda )\) from \((0,\infty )\times {\mathbb R}_+\) (see [14, Chapter V] or the appendix below for the definition of exit measures). This makes sense under the excursion measures \({\mathbb N}_{(x,r)}\) for \(x>0\).

Let \(\widehat{\Lambda }_s=\Lambda _s(\zeta _s)\) be the total local time at 0 accumulated by the path \(W_s\). If \(a=p_\zeta (s)\), we write \(\widehat{\Lambda }_a= \widehat{\Lambda }_s\) (this does not depend on the choice of s). Then the function \(a\mapsto \widehat{\Lambda }_a\) is nondecreasing with respect to the genealogical order.

Theorem 16

Under \({\mathbb N}_{(0,0)}\), the subordinate tree \(\widetilde{\mathcal T}\) of \({\mathcal T}_\zeta \) with respect to the function \(a\mapsto \widehat{\Lambda }_a\) is a Lévy tree whose branching mechanism \(\psi \) can be described as follows:

$$\begin{aligned} \psi (r)= 2\int m(\mathrm {d}x)\,u_r(x)^2, \end{aligned}$$

where m is the invariant measure of \(\xi \) defined by

$$\begin{aligned} \int m(\mathrm {d}x)\,\varphi (x)= \mathscr {N}\left( \int _0^\eta \mathrm {d}t\,\varphi (\xi _t)\right) , \end{aligned}$$

and the function \((u_r(x))_{r\ge 0,x>0}\) is given by

$$\begin{aligned} u_r(x)= {\mathbb N}_{(x,0)}(1-\exp (-r Y_0)). \end{aligned}$$

Proof

As in the proof of Theorem 1, we make use of the special Markov property of the Brownian snake. Fix \(r>0\), and consider the domain \(D_r={\mathbb R}_+\times [0,r)\). Write \(\mathcal {Z}^{D_r}\) for the exit measure from \(D_r\). The first-moment formula for exit measures [14, Proposition V.3] shows that \(\mathcal {Z}^{D_r}\) is \({\mathbb N}_{(0,0)}\) a.e. supported on \(\{(0,r)\}\) (indeed, the pair \((\xi ,L)\) exits \(D_r\) at the first time when L reaches r, and at that time \(\xi \) is at 0, since L increases only on the zero set of \(\xi \)), so that we can write

$$\begin{aligned} \mathcal {Z}^{D_r} = Y_r\,\delta _{(0,r)}, \end{aligned}$$

where \(Y_r\) is a nonnegative random variable. Let \(\mathcal {E}^{D_r}\) stand for the \(\sigma \)-field generated by the paths \((W_s,\Lambda _s)\) before they exit \(D_r\). By Corollary 22 in the appendix, under \({\mathbb N}_{(0,0)}\) and conditionally on \(\mathcal {E}^{D_r}\), the excursions of the Brownian snake \((W,\Lambda )\) “outside” \(D_r\) form a Poisson measure with intensity \(Y_r\,{\mathbb N}_{(0,r)}\). Now notice that, for every \(h>0\), subtrees of \(\widetilde{\mathcal T}\) above level r that hit level \(r+h\) correspond to those among these excursions that exit \(D_{r+h}\) (we again use Proposition 5 to obtain that \(\widetilde{\mathcal T}\) is the tree coded by \(s\mapsto \widehat{\Lambda }_s\)). As in the proof of Theorem 1 (we omit a few details here), it follows that the distribution of \(\widetilde{\mathcal T}\) under \({\mathbb N}_{(0,0)}\) satisfies the branching property of Proposition 7, and so \(\widetilde{\mathcal T}\) under \({\mathbb N}_{(0,0)}\) must be a Lévy tree.

To determine the branching mechanism of this Lévy tree, we fix \(R>0\), and, for \(0\le r<R\), we set

$$\begin{aligned} U_\lambda (x,r)={\mathbb N}_{(x,r)}(1-\exp (-\lambda Y_R)). \end{aligned}$$

By [14, Theorem V.4], \(U_\lambda \) satisfies the integral equation

$$\begin{aligned} U_\lambda (x,r) + 2\,E_{(x,r)}\left[ \int _0^{\tau _R} U_\lambda (\xi _s,L_s)^2 \,\mathrm {d}s\right] =\lambda \end{aligned}$$

where the Markov process \((\xi ,L)\) starts from (xr) under the probability measure \(P_{(x,r)}\), and \(\tau _R=\inf \{t\ge 0: L(t)\ge R\}\) is the exit time from \(D_R\) for the process \((\xi ,L)\). When \(x=0\), excursion theory for \(\xi \) gives

$$\begin{aligned} E_{(0,r)}\left[ \int _0^{\tau _R} U_\lambda (\xi _s,L_s)^2 \,\mathrm {d}s\right] &= \int _r^R \mathscr {N}\left( \int _0^\eta U_\lambda (\xi _t,\ell )^2\,\mathrm {d}t\right) \mathrm {d}\ell \\ &= \int _r^R \mathrm {d}\ell \int m(\mathrm {d}y)\,U_\lambda (y,\ell )^2. \end{aligned}$$
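
For completeness, let us sketch the excursion-theory identity used in the first equality above (this is just the standard compensation formula for the excursions of \(\xi \) away from 0, written with our normalization of L). Under \(P_{(0,r)}\), write \((\alpha _i,\beta _i)\), \(i\in I\), for the excursion intervals of \(\xi \) away from 0 before time \(\tau _R\), and \(\ell _i\) for the value of L on the i-th interval (L is constant on this interval, since L increases only on the zero set of \(\xi \)). Since the time spent by \(\xi \) at 0 is Lebesgue negligible by (18), we get, for any nonnegative measurable function F,

$$\begin{aligned} E_{(0,r)}\left[ \int _0^{\tau _R} F(\xi _s,L_s)\,\mathrm {d}s\right] = E_{(0,r)}\left[ \sum _{i\in I}\int _{\alpha _i}^{\beta _i} F(\xi _s,\ell _i)\,\mathrm {d}s\right] = \int _r^R \mathrm {d}\ell \,\mathscr {N}\left( \int _0^\eta F(\xi _t,\ell )\,\mathrm {d}t\right) , \end{aligned}$$

and the case \(F=U_\lambda ^2\) is the one needed above.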

Set \(v_\lambda (x,r)=U_\lambda (x,R-r)\) for \(0< r\le R\). By a translation argument, \(v_\lambda (x,r)\) does not depend on our choice of R provided that \(R\ge r\). It follows from the preceding considerations that

$$\begin{aligned} v_\lambda (0,r) +2 \int _0^r \mathrm {d}\ell \int m(\mathrm {d}y)\,v_\lambda (y,\ell )^2 =\lambda . \end{aligned}$$

On the other hand, by applying the special Markov property (Corollary 22) to the domain \((0,\infty )\times {\mathbb R}_+\), we have, for every \(y> 0\) and \(r>0\),

$$\begin{aligned} v_\lambda (y,r)= {\mathbb N}_{(y,0)}(1-\exp (-\lambda Y_r))= {\mathbb N}_{(y,0)}(1-\exp (-Y_0 v_\lambda (0,r))) =u_{v_\lambda (0,r)}(y), \end{aligned}$$

with the notation introduced in the theorem. We conclude that

$$\begin{aligned} v_\lambda (0,r)+ \int _0^r \mathrm {d}\ell \,\psi (v_\lambda (0,\ell )) =\lambda , \end{aligned}$$
(21)

where \(\psi \) is as in the statement of the theorem. Note that the functions \(r\mapsto u_r(x)\) are monotone increasing, and so is \(\psi \). Then (21) also implies that \(v_\lambda (0,r)\) is a continuous nonincreasing function of r that tends to \(\lambda \) as \(r\rightarrow 0\). It follows that \(\psi (r)<\infty \) for every \(r>0\), and then by dominated convergence that \(\psi \) is continuous on \([0,\infty )\). The unique solution of (21) is given by

$$\begin{aligned} \int _{v_\lambda (0,r)}^\lambda \frac{\mathrm {d}\ell }{\psi (\ell )} = r \end{aligned}$$

(in particular, we must have \(\int _{0+}\psi (\ell )^{-1}\mathrm {d}\ell =\infty \)). As \(\lambda \rightarrow \infty \), \(v_\lambda (0,r)\) converges to \({\mathbb N}_{(0,0)}(Y_r\not =0)\), which coincides with \({\mathbb N}_{(0,0)}(\mathcal {H}(\widetilde{\mathcal T})>r)\), where \(\mathcal {H}(\widetilde{\mathcal T})\) denotes the height of \(\widetilde{\mathcal T}\). Hence, the function \(v(r)= {\mathbb N}_{(0,0)}(\mathcal {H}(\widetilde{\mathcal T})>r)\) is given by

$$\begin{aligned} \int _{v(r)}^\infty \frac{\mathrm {d}\ell }{\psi (\ell )} = r, \end{aligned}$$

and this suffices to establish that the branching mechanism of \(\widetilde{\mathcal T}\) is \(\psi \). \(\square \)

The formula for \(\psi \) that appears in Theorem 16 is not explicit, and in general it does not allow one to compute this function. We will now argue that \(\psi \) can nonetheless be identified, up to a multiplicative constant, when \(\xi \) satisfies a scaling property. From now on until the end of the section, we assume (in addition to the previous hypotheses) that there exists a constant \(\alpha >0\) such that, for every \(x\ge 0\) and \(\lambda >0\), the law of

$$\begin{aligned} (\lambda ^{\alpha }\xi _{t/\lambda })_{t\ge 0} \end{aligned}$$

under \(P_x\) coincides with the law of \((\xi _t)_{t\ge 0}\) under \(P_{x\lambda ^{\alpha }}\). In other words, the process \(\xi \) is a self-similar Markov process with values in \([0,\infty )\); see the survey [25] for more information on this class of processes. A particular case (with \(\alpha =1/2\)) is the Bessel process of dimension \(d\in (0,2)\).

The excursion measure \(\mathscr {N}\) must then satisfy a similar scaling invariance property. More precisely, for every \(\lambda >0\), the law of

$$\begin{aligned} (\lambda ^{\alpha }\xi _{t/\lambda })_{t\ge 0} \end{aligned}$$

under \(\mathscr {N}\) must be equal to \(\lambda ^{\beta }\) times the law of \((\xi _t)_{t\ge 0}\) under \(\mathscr {N}\), for some constant \(\beta \in (0,1)\). The fact that \(\beta <1\) is clear since the scaling property implies that \(\mathscr {N}(\eta>r)=r^{-\beta }\mathscr {N}(\eta >1)\) and we must have \(\mathscr {N}(\eta \wedge 1)<\infty \). The inverse local time of \(\xi \) at 0 is then a stable subordinator of index \(\beta \), which is consistent with assumption (20).
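
To spell out the scaling argument for the tail of \(\eta \) (a one-line verification included here for completeness), note that the duration of the rescaled excursion \((\lambda ^{\alpha }\xi _{t/\lambda })_{t\ge 0}\) is \(\lambda \eta \), so that applying the preceding identity of measures to the set of excursions with duration greater than \(\lambda \) gives

$$\begin{aligned} \lambda ^{\beta }\,\mathscr {N}(\eta>\lambda ) = \mathscr {N}(\lambda \eta >\lambda ) = \mathscr {N}(\eta >1), \end{aligned}$$

which is the formula \(\mathscr {N}(\eta >r)=r^{-\beta }\mathscr {N}(\eta >1)\) used above, with \(r=\lambda \).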

Proposition 17

Under the preceding assumptions, there exists a constant \(c>0\) such that \(\psi (r)=c\,r^{1+\beta }\).

Proof

We first observe that

$$\begin{aligned} m(\mathrm {d}x)=c'\,x^{-1+\frac{1-\beta }{\alpha }}\,\mathrm {d}x, \end{aligned}$$

for some positive constant \(c'\). To see this, we write, for every \(\lambda >0\),

$$\begin{aligned} \int m(\mathrm {d}x)\,\varphi (\lambda x)&= \mathscr {N}\left( \int _0^\eta \mathrm {d}t\,\varphi (\lambda \xi _t)\right) \\&= \lambda ^{-1/\alpha }\mathscr {N}\left( \int _0^{\lambda ^{1/\alpha }\eta } \mathrm {d}t\,\varphi (\lambda \xi _{t/\lambda ^{1/\alpha }})\right) \\&= \lambda ^{\beta /\alpha -1/\alpha }\mathscr {N}\left( \int _0^\eta \mathrm {d}t\,\varphi (\xi _t)\right) \\&=\lambda ^{\frac{\beta -1}{\alpha }} \int m(\mathrm {d}x)\,\varphi (x). \end{aligned}$$

It follows that m has the form stated above. We then observe that, for every \(\lambda >0\), we can also consider the following scaling transformation of the Brownian snake:

$$\begin{aligned} W'_s(t)=\lambda \,W_{\lambda ^{-2/\alpha }s}(\lambda ^{-1/\alpha }t),\quad \hbox {for } 0\le t\le \zeta '_s:=\lambda ^{1/\alpha }\zeta _{\lambda ^{-2/\alpha }s}, \end{aligned}$$

and the “law” of \(W'\) under \({\mathbb N}_{(x,0)}\) coincides with \(\lambda ^{1/\alpha }\) times the “law” of W under \({\mathbb N}_{(\lambda x,0)}\). Furthermore the exit measure \(Y'_0\) associated with \(W'\) is equal to \(\lambda ^{1/\alpha } Y_0\) (we leave the details as an exercise for the reader). With the notation of Theorem 16, it follows that, for every \(x>0\) and \(\lambda >0\),

$$\begin{aligned} u_r(\lambda x) &= {\mathbb N}_{(\lambda x,0)}(1-\exp (-r Y_0))=\lambda ^{-1/\alpha } {\mathbb N}_{(x,0)}(1-\exp (-r \lambda ^{1/\alpha } Y_0))\\ &= \lambda ^{-1/\alpha }\,u_{\lambda ^{1/\alpha } r}(x). \end{aligned}$$

Hence, for every \(r>0\) and \(\mu >0\),

$$\begin{aligned} \psi (\mu r)= c\int \mathrm {d}x\,x^{-1+\frac{1-\beta }{\alpha }}\,u_{\mu r}(x)^2 =c\int \mathrm {d}x\,x^{-1+\frac{1-\beta }{\alpha }}\,\mu ^2\,u_r(\mu ^\alpha x)^2 =\mu ^{1+\beta }\,\psi (r), \end{aligned}$$

using the change of variables \(y=\mu ^\alpha x\). This completes the proof. \(\square \)
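
As a concrete illustration (a sketch relying on standard facts about Bessel processes rather than on anything proved here), take \(\xi \) to be a Bessel process of dimension \(d\in (0,2)\), which is self-similar with \(\alpha =1/2\) as noted above. The inverse local time of \(\xi \) at 0 is then a stable subordinator of index \(1-d/2\), so that

$$\begin{aligned} \alpha =\frac{1}{2},\qquad \beta =1-\frac{d}{2},\qquad \psi (r)=c\,r^{2-d/2}, \end{aligned}$$

for some constant \(c>0\) depending on the normalization of the local time. As d ranges over (0, 2), the stable index \(1+\beta =2-d/2\) ranges over (1, 2); the case \(d=1\) corresponds to the reflected Brownian motion of the Remark below.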

Remark

If \(\xi =|B|\) is the absolute value of a linear Brownian motion B, then a famous theorem of Lévy asserts that the pair \((\xi ,L)\) has the same distribution as \((S-B,S)\), where \(S_t=\max \{B_s:0\le s\le t\}\) (to be specific, this holds with a particular choice of the normalization of L). We then see that Theorem 1 is a special case of Theorem 16 and Proposition 17. In that case, \(\alpha =\beta =1/2\), and we recover the formula \(\psi (r)=c\,r^{3/2}\).
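
In this stable case, we can also record (as an illustrative sketch, not needed elsewhere) the explicit solution of (21): when \(\psi (\ell )=c\,\ell ^{1+\beta }\), the relation \(\int _{v_\lambda (0,r)}^\lambda \psi (\ell )^{-1}\,\mathrm {d}\ell =r\) gives

$$\begin{aligned} v_\lambda (0,r)=\left( \lambda ^{-\beta }+c\beta r\right) ^{-1/\beta },\qquad v(r)=\lim _{\lambda \rightarrow \infty } v_\lambda (0,r)=(c\beta r)^{-1/\beta }. \end{aligned}$$

In particular, in the case \(\alpha =\beta =1/2\) of the preceding Remark, \({\mathbb N}_{(0,0)}(\mathcal {H}(\widetilde{\mathcal T})>r)=4/(c^2r^2)\), which is consistent, for the appropriate value of c, with the classical formula \({\mathbb N}_0(\sup _s \widehat{W}_s>r)=3/(2r^2)\), the height of the subordinate tree of Theorem 1 being the maximal value of \(\widehat{W}\).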