1 Introduction

A planar map is a proper embedding of a finite connected planar graph in the sphere, taken up to orientation-preserving homeomorphisms. These objects were first studied from a combinatorial point of view in the works of Tutte in the 1960s (see notably [32]) and have since been of use in different domains of mathematics, such as algebraic geometry (see, e.g., [18]) and theoretical physics (as in [4]). There has been great progress in their probabilistic study ever since the work of Schaeffer [30], which has among other things led to finding the scaling limit of many large random maps (we mention [26] and [19]).

Our subject of interest here is the local convergence of large random maps, which means that we are not interested in scaling limits but in the combinatorial structure of a map around a chosen root. Such problems were first studied by Angel and Schramm [5] and Krikun [16], who showed that the distributions of uniform triangulations and quadrangulations with n vertices converge weakly as n goes to infinity. Each limit is the distribution of an infinite random map, respectively known as the uniform infinite planar triangulation (UIPT) and the uniform infinite planar quadrangulation (UIPQ). Of particular interest to us is the paper [10] where the convergence to the UIPQ is shown by a method involving the well-known Cori–Vauquelin–Schaeffer bijection [30].

We will generalize this to a large family of random maps known as Boltzmann-distributed random maps. Let \({{\mathbf {q}}}=(q_n)_{n\in \mathbb {N}}\) be a sequence of nonnegative numbers. We assign to every finite planar map a weight equal to the product of the weights of its faces, the weight of a face being \(q_d\) where d is the number of edges adjacent to said face, counted with multiplicity. If the total weight of all maps is finite, then one can normalize these weights into a probability distribution.

The use of the so-called Bouttier–Di Francesco–Guitter bijection (see [8], or Sect. 5.3) allows us to obtain the convergence to infinite maps for a fairly large class of weight sequences \({\mathbf {q}}\). For \({\mathbf {q}}\) in this class, let \((M_n,E_n)\) be a \({\mathbf {q}}\)-Boltzmann rooted map conditioned to have n vertices (or n edges, or n faces); our main result, Theorem 6.1, states that this sequence converges in distribution to a random map \((M_{\infty },E_{\infty })\), which we call the infinite \({\mathbf {q}}\)-Boltzmann map. For combinatorial reasons, we have to restrict n to a sublattice of \(\mathbb {Z}_+\).

The classes of weight sequences for which this is true are the class of critical sequences when we condition by the number of vertices, and the class of regular critical sequences when we condition by the number of edges or faces (both classes are defined in Sect. 5.2). These classes contain all sequences with finite support (up to multiplicative constants). Taking \(q_n={\mathbbm {1}}_{\{n=p\}}\) with \(p\geqslant 3\) gives us the case of the uniform p-angulation, making our results an extension of what was known about the UIPT and UIPQ.

Local limits of Boltzmann random maps have notably been studied recently in [7]. A key difference with our work here is the fact that the maps are supposed to be bipartite in [7] (the weight sequence \({\mathbf {q}}\) is supported on the even integers). In this context, conditioning maps by their number of edges ends up being more natural than in our work, and it is sufficient to only assume criticality and not regular criticality.

The proof of convergence to an infinite map hinges on a similar result for critical multi-type Galton–Watson trees and forests (Theorem 3.1). This theorem itself generalizes the well-known fact that critical monotype Galton–Watson trees, when conditioned to be large, converge to an infinite tree formed by a unique infinite spine to which many finite trees are grafted. This infinite tree was first indirectly mentioned in [15], Lemma 1.14, and many details about the convergence are given in [2] and [13]. One of its properties is that one can obtain its distribution from the distribution of the finite tree by a size-biasing process, as explained in [20].

In the multi-type case, as with maps, we have two different kinds of conditioning. If the tree is simply critical, then we must condition it by the number of vertices of a single type, while if it is regular critical (criticality and regular criticality being defined in Sect. 2.1), we can condition it by its “size,” for a general notion of size where we count all the vertices, giving some integer weight to each vertex depending on its type. The distribution of the infinite limiting tree can once again be described by a biasing process from the original tree, as explained in Proposition 3.1, something which was anticipated in [17].

A fairly important issue in Theorem 3.1 is the problem of periodicity: as with maps, a multi-type Galton–Watson tree cannot have any number of vertices. To be precise, the size of the tree is always in \(\alpha +d{\mathbb {Z}}_+\), where d and \(\alpha \) are integers which depend on the offspring distribution and the type of the root (for \(\alpha \) only). Particular care must thus be taken when counting the vertices of forests or specific subtrees.

We end this introduction by mentioning two papers which deal with similar limits and have appeared since this work was begun. The limit of the multi-type Galton–Watson process associated to the tree is studied in [28], while the authors of [3] are also interested in the local limit of the tree. The difference is that they focus on the aperiodic case and condition on the vector of population sizes of each type, rather than on a linear function of it. It is, however, shown in [3] that, when one conditions on only one type, our result can be deduced from theirs.

The paper is split into two halves: we start by working on trees and later on apply the results to maps. To be precise, after recalling facts about multi-type Galton–Watson trees in Sect. 2, we state in Sect. 3 the convergence of large critical multi-type Galton–Watson forests to their infinite counterpart, the proof of which is done in Sect. 4. Section 5 then states the basic background on planar maps, and we state and prove Theorem 6.1, our main theorem of convergence of maps, in Sect. 6. The final section is then dedicated to an application, namely showing that the infinite Boltzmann map is almost surely a recurrent graph.

2 Background on Multi-type Galton–Watson Trees

2.1 Basic Definitions

Multi-type Plane Trees We recall the standard formalism for family trees, first introduced by Neveu [27]. We denote by \(\mathbb {N}\) the set of strictly positive integers, and \(\mathbb {Z}_+\) the set of nonnegative integers. Let

$$\begin{aligned} {\mathcal {U}}=\bigcup _{k=0}^{\infty } \mathbb {N}^{k} \end{aligned}$$

be the set of finite words on \(\mathbb {N}\), also known as the Ulam–Harris tree. Elements of \({\mathcal {U}}\) are written as sequences \(u=u^1u^2\ldots u^k\), and we call \(|u|=k\) the height of u. We also let \(u^-=u^1u^2\ldots u^{k-1}\) be the father of u when \(k>0\). In the case of the empty word \(\emptyset \), we let \(|\emptyset |=0\) and we do not give it a father. If \(u=u^1\ldots u^k\) and \(v=v^1\ldots v^l\) are two words, we define their concatenation \(uv=u^1\ldots u^k v^1\ldots v^l.\)

A plane tree is a subset \({\mathbf {t}}\) of \({\mathcal {U}}\) which satisfies the following conditions:

  • \(\emptyset \in {\mathbf {t}}\),

  • \(u \in {\mathbf {t}}{\setminus }\{\emptyset \} \Rightarrow u^-\in {\mathbf {t}}\),

  • \(\forall u\in {\mathbf {t}}, \exists k_u({\mathbf {t}})\in \mathbb {Z}_+, \forall j\in \mathbb {N}, uj\in {\mathbf {t}} \Leftrightarrow j \leqslant k_u({\mathbf {t}})\).

Given a tree \({\mathbf {t}}\) and an integer \(n\in \mathbb {Z}_+\), we let \({\mathbf {t}}_n=\{u\in {\mathbf {t}},|u|= n\}\) and \({\mathbf {t}}_{\leqslant n}=\{u\in {\mathbf {t}},|u|\leqslant n\}\). We call the height of \({\mathbf {t}}\) the supremum \(\text {ht}({\mathbf {t}})\) of the heights of all its elements. If \(u\in {\mathbf {t}}\), we let \({\mathbf {t}}_u=\{v\in {\mathcal {U}},uv\in {\mathbf {t}}\}\) be the subtree of \({\mathbf {t}}\) rooted at u.
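The Ulam–Harris formalism is straightforward to manipulate concretely. As an illustration only (this sketch is not part of the paper's formalism), one can encode a plane tree as a set of integer tuples, the empty tuple being the root, and check the three defining conditions:

```python
# A plane tree as a set of Ulam-Harris words; each word is a tuple of
# positive integers, and () is the root.

def is_plane_tree(t):
    """Check the three conditions defining a plane tree."""
    if () not in t:                          # the root must be present
        return False
    for u in t:
        if u and u[:-1] not in t:            # every non-root word has its father
            return False
    for u in t:
        k = 0                                # k_u(t): children are u1, ..., uk
        while u + (k + 1,) in t:
            k += 1
        # no child uj with j > k_u(t) may appear (no gaps in the numbering)
        if any(v and v[:-1] == u and v[-1] > k for v in t):
            return False
    return True

def height(t):
    """ht(t): supremum of the heights |u| of the elements of t."""
    return max(len(u) for u in t)

def subtree(t, u):
    """t_u = {v : uv in t}, the subtree of t rooted at u."""
    return {v[len(u):] for v in t if v[:len(u)] == u}

t = {(), (1,), (2,), (2, 1)}                 # root with two children; 2 has one child
```

For instance, `subtree(t, (2,))` returns `{(), (1,)}`, the one-edge tree hanging below the second child of the root.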

Note that the finiteness of \(k_u({\mathbf {t}})\) for any vertex u implies that all the trees which we consider are locally finite: a vertex can only have a finite number of neighbors. We do, however, allow infinite trees.

Let now \(K\in \mathbb {N}\) be an integer. We call \([K]=\{1,2,\ldots ,K\}\) the set of types. A K-type tree is then a pair \(({\mathbf {t}},{\mathbf {e}})\) where \({\mathbf {t}}\) is a plane tree and \({\mathbf {e}}:{\mathbf {t}}\rightarrow [K]\) is a function which gives a type \({\mathbf {e}}(u)\) to every vertex \(u\in {\mathbf {t}}\). For a vertex \(u\in {\mathbf {t}}\), we also let \({\mathbf {w}}_{{\mathbf {t}}}(u)=\big ({\mathbf {e}}(u1),\ldots ,{\mathbf {e}}(uk_u({\mathbf {t}}))\big )\) be the list of types of the ordered offspring of u. Note of course that the knowledge of \({\mathbf {e}}(\emptyset )\) and of all the \({\mathbf {w}}_{{\mathbf {t}}}(u)\), \(u\in {\mathbf {t}}\) gives us the complete type function \({\mathbf {e}}\).

We let

$$\begin{aligned} \mathcal {W}_K =\bigcup _{n=0}^{\infty } [K]^n \end{aligned}$$

be the set of finite type-lists. Given such a list \({\mathbf {w}}\in \mathcal {W}_K\) and a type \(i\in [K]\), we let \(p_i({\mathbf {w}})=\#\{j,w_j=i\}\) and \(p({\mathbf {w}})=(p_i({\mathbf {w}}))_{i\in [K]}\). This defines a natural projection from \(\mathcal {W}_K\) onto \((\mathbb {Z}_+)^{K}\). We also let \(|{\mathbf {w}}|=\sum _i p_i({\mathbf {w}})\) be the length of \({\mathbf {w}}\). Elements of \(\mathcal {W}_K\) should be seen as orderings of types, such that the type i appears \(p_i({\mathbf {w}})\) times in the order \({\mathbf {w}}\).

Offspring Distributions We call ordered offspring distribution any sequence \(\mathbf {\zeta }=(\zeta ^{(i)})_{i\in [K]}\) where, for all \(i\in [K]\), \(\zeta ^{(i)}\) is a probability distribution on \(\mathcal {W}_K\). Letting, for all i, \(\mu ^{(i)}=p_*\zeta ^{(i)}\) be the image measure of \(\zeta ^{(i)}\) on \((\mathbb {Z}_+)^K\) under p, we then call \(\mu =(\mu ^{(i)})_{i\in [K]}\) the associated unordered offspring distribution.

We will always assume the condition

$$\begin{aligned} \exists i\in [K], \mu ^{(i)} \left( \left\{ \mathbf {z}\in (\mathbb {Z}_+)^K,\sum _{j=1}^K z_j \ne 1\right\} \right) >0 \end{aligned}$$

to avoid degenerate cases which lead to infinite linear trees.

Uniform Orderings Let us give details about a particular case of ordered offspring distribution. For \({\mathbf {n}}=(n_i)_{i\in [K]}\in (\mathbb {Z}_+)^K\), we call uniform ordering of \({\mathbf {n}}\) any uniformly distributed random variable on the set of words \({\mathbf {w}}\in \mathcal {W}_K\) satisfying \(p({\mathbf {w}})={\mathbf {n}}\). Such a random variable can be obtained by taking the word \((1,1,\ldots ,1,2,\ldots ,2,3,\ldots ,K,\ldots ,K)\) (where each i is repeated \(n_i\) times) and applying a uniform permutation to it. Now let \(\mathbf {\mu }=(\mu ^{(i)})_{i\in [K]}\) be a family of distributions on \((\mathbb {Z}_+)^K\); we call uniform ordering of \(\mathbf {\mu }\) the ordered offspring distribution \(\mathbf {\zeta }=(\zeta ^{(i)})_{i\in [K]}\) where, for each i, \(\zeta ^{(i)}\) is the distribution of a uniform ordering of a random variable with distribution \(\mu ^{(i)}\).
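As an illustrative aside (not from the paper), a uniform ordering of \({\mathbf {n}}\) is exactly a uniform shuffle of the multiset of types, which is how one would sample it in practice:

```python
import random

def uniform_ordering(n):
    """Uniformly random word w with p_i(w) = n_i for each type i.

    n is a tuple (n_1, ..., n_K); types are 1, ..., K.  We build the sorted
    word (1, ..., 1, 2, ..., K) and apply a uniform permutation to it.
    """
    w = [i + 1 for i, n_i in enumerate(n) for _ in range(n_i)]
    random.shuffle(w)                        # uniform shuffle of the multiset
    return tuple(w)

w = uniform_ordering((2, 0, 1))              # two vertices of type 1, one of type 3
```

Every word produced satisfies \(p({\mathbf {w}})={\mathbf {n}}\), and all \(\binom{|{\mathbf {w}}|}{n_1,\ldots ,n_K}\) orderings are equally likely.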

Galton–Watson Distributions We can now define the distribution of a K-type Galton–Watson tree rooted at a vertex of type \(i\in [K]\) and with ordered offspring distribution \(\mathbf {\zeta }\), which we call \(\mathbb {P}^{(i)}_{\mathbf {\zeta }}\), by

$$\begin{aligned} \mathbb {P}^{(i)}_{\mathbf {\zeta }}({\mathbf {t}},{\mathbf {e}}) ={\mathbbm {1}}_{\{{\mathbf {e}}(\emptyset )=i\}} \prod _{u\in {\mathbf {t}}}\zeta ^{({\mathbf {e}}(u))}({\mathbf {w}}_{{\mathbf {t}}}(u)) \end{aligned}$$
(2.1)

for any finite tree \(({\mathbf {t}},{\mathbf {e}})\). This formula only defines a subprobability measure in general; however, in the cases which interest us (namely critical offspring distributions, see the next section) we will indeed have a probability distribution. In practice, we are not interested in this formula as much as in the branching property, which also characterizes these distributions: the types of the children of the root of a tree \(({\mathbf {T,E}})\) with law \(\mathbb {P}^{(i)}_{\mathbf {\zeta }}\) are determined by a random variable with law \(\zeta ^{(i)}\) and, conditionally on the offspring of the root being equal to a word \({\mathbf {w}}\), the subtrees rooted at the points j with \(j\in [|{\mathbf {w}}|]\) are independent, each one with distribution \(\mathbb {P}^{(w_j)}_{\mathbf {\zeta }}\).

Criticality Let \(M=(m_{i,j})_{i,j\in [K]}\) be the \(K\times K\) matrix defined by

$$\begin{aligned} m_{i,j}=\sum _{\mathbf {z}\in (\mathbb {Z}_+)^K} z_j \mu ^{(i)}({\mathbf {z}}), \quad \forall i,j\in [K]. \end{aligned}$$

We assume that M is irreducible, which means that, for all i and j in [K], there exists some power p such that the (ij)th entry of \(M^p\) is nonzero. In this case, we know by the Perron–Frobenius theorem that the spectral radius \(\rho \) of M is in fact an eigenvalue of M. We say that \(\zeta \) (or \(\mu \), or M) is subcritical if \(\rho <1\) and critical if \(\rho =1\), which both in particular imply that Eq. (2.1) does define a probability distribution and that Galton–Watson trees with ordered offspring distribution \(\zeta \) are almost surely finite. We will always assume criticality in the rest of the paper. The Perron–Frobenius theorem also tells us that, up to multiplicative constants, the left and right eigenvectors of M for \(\rho \) are unique. We call them \({\mathbf {a}}=(a_1,\ldots ,a_K)\) and \({\mathbf {b}}=(b_1,\ldots ,b_K)\) and normalize them such that \(\sum _i a_i= \sum _i a_ib_i=1\), in which case their components are all strictly positive.

The fact that \({\mathbf {b}}\) is a right eigenvector of M translates as

$$\begin{aligned} b_i=\sum _{{\mathbf {z}}\in (\mathbb {Z}_+)^{K}} \mu ^{(i)}({\mathbf {z}}){\mathbf {b}}\cdot {\mathbf {z}}, \end{aligned}$$

where \(\cdot \) is the usual dot product. One can deduce from this the existence of a martingale naturally associated with the Galton–Watson tree. Let \(({\mathbf {T}},{\mathbf {E}})\) have distribution \(\mathbb {P}^{(i)}_{\mathbf {\zeta }}\) for some \(i\in [K]\) and, for all \(n\in \mathbb {Z}_+\) and \(j\in [K]\), let \(Z^{(j)}_n\) be the number of vertices of \({\mathbf {T}}\) which have height n and type j, and set \({\mathbf {Z}}_n=(Z^{(j)}_n)_{j\in [K]}\). Define then, for \(n\in \mathbb {Z}_{+}\),

$$\begin{aligned} X_n= {\mathbf {b}}\cdot {\mathbf {Z}}_n =\sum _{j=1}^K b_j Z^{(j)}_n. \end{aligned}$$
(2.2)

The process \((X_n)_{n\in \mathbb {Z}_+}\) is then a martingale.
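The martingale property can be verified in expectation: the branching property gives \(\mathbb {E}[{\mathbf {Z}}_{n+1}]=\mathbb {E}[{\mathbf {Z}}_n]M\) (as row vectors), and since \(M{\mathbf {b}}={\mathbf {b}}\), the quantity \({\mathbf {b}}\cdot \mathbb {E}[{\mathbf {Z}}_n]\) does not depend on n. A quick deterministic sketch (our own hypothetical critical mean matrix, with right eigenvector \({\mathbf {b}}=(2,1)\)):

```python
def step_mean(z, M):
    """One generation in expectation: E[Z_{n+1}] = E[Z_n] M (row vector)."""
    K = len(M)
    return [sum(z[i] * M[i][j] for i in range(K)) for j in range(K)]

# Hypothetical critical mean matrix with right eigenvector b = (2, 1).
M = [[0.5, 1.0], [0.25, 0.5]]
b = [2.0, 1.0]

z = [1.0, 0.0]                               # root of type 1: Z_0 = (1, 0)
means = []
for n in range(10):
    means.append(sum(b[j] * z[j] for j in range(2)))   # E[X_n] = b . E[Z_n]
    z = step_mean(z, M)
# every entry of means equals b_1 = 2: (X_n) has constant expectation
```

This only checks constancy of the mean, of course; the full martingale property follows from the branching property applied conditionally on the first n generations.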

Finally, we say that \(\zeta \) (or \(\mu \)) is regular critical if, in addition to being critical, \(\zeta \) has small exponential moments in the following sense:

$$\begin{aligned} \exists z>1,\forall i\in [K], \sum _{{\mathbf {w}}\in \mathcal {W}_K} \zeta ^{(i)}({\mathbf {w}}) z^{|{\mathbf {w}}|} <\infty . \end{aligned}$$

Spatial Trees Later on in this paper we will be looking at spatial K-type trees, that is trees coupled with labels on their vertices. We define a K-type spatial tree to be a triple \(({\mathbf {t}},{\mathbf {e}},{\mathbf {l}})\) where \(({\mathbf {t}},{\mathbf {e}})\) is a K-type tree and \({\mathbf {l}}\) is any real-valued function on \({\mathbf {t}}\). Note that, given \({\mathbf {t}}\), \({\mathbf {e}}\) and \({\mathbf {l}}(\emptyset )\), the rest of \({\mathbf {l}}\) is completely determined by the differences \({\mathbf {l}}(u)-{\mathbf {l}}(u^-)\) for \(u\in {\mathbf {t}}{\setminus }\{\emptyset \}\). This is why we let, for \(u\in {\mathbf {t}}\), \({\mathbf {y}}_u=\Big ({\mathbf {l}}(u1)-{\mathbf {l}}(u),{\mathbf {l}} (u2)-{\mathbf {l}}(u),\ldots ,{\mathbf {l}}\big (uk_u ({\mathbf {t}})\big )-{\mathbf {l}}(u)\Big )\in {\mathbb {R}}^{|{\mathbf {w}}_{{\mathbf {t}}}(u)|}\) be the list of ordered label displacements of the offspring of u.

Consider, for all types \(i\in [K]\) and words \({\mathbf {w}}\in \mathcal {W}_K\), a probability distribution \(\nu ^{(i)}_{{\mathbf {w}}}\) on \({\mathbb {R}}^{|{\mathbf {w}}|}\), as well as a number \(\varepsilon \). We let \(\mathbb {P}^{(i,\varepsilon )}_{\mathbf {\zeta },{\mathbf {\nu }}}\) be the distribution of a triple \(({\mathbf {T,E,L}})\) where \(({\mathbf {T,E}})\) is a K-type tree with distribution \(\mathbb {P}^{(i)}_{\mathbf {\zeta }}\), the root \(\emptyset \) has label \(\varepsilon \) and the label displacements \(\big ({\mathbf {L}}(u1)-{\mathbf {L}}(u), {\mathbf {L}}(u2)-{\mathbf {L}}(u),\ldots , {\mathbf {L}}(uk_u( {\mathbf {T}}))-{\mathbf {L}}(u)\big )\) (with \(u\in {\mathbf {T}}\)) are all independent, each one having distribution \(\nu ^{\big ({\mathbf {E}}(u)\big )}_{{\mathbf {w}}_{{\mathbf {T}}}(u)}\) conditionally on \({\mathbf {E}}(u)\) and \({\mathbf {w}}_{\mathbf {T}}(u)\).

Forests We will not only look at trees but also at multi-type (and, when needed, labelled) forests, a forest being defined as an ordered finite collection of trees: elements of the form \(({\mathbf {f,e,l}}) =\big (({\mathbf {t}}^1,{\mathbf {e}}^1,{\mathbf {l}}^1),\ldots , ({\mathbf {t}}^p,{\mathbf {e}}^p,{\mathbf {l}}^p)\big )\).

A Galton–Watson random forest will be a forest where the trees are mutually independent, and each one has a Galton–Watson distribution with the same ordered offspring distribution (and label increment distribution, in the labelled case). We can thus let, for \({\mathbf {w}}\in \mathcal {W}_K\), \(\mathbb {P}^{({\mathbf {w}})}_{\mathbf {\zeta }}\) be the distribution of \(({\mathbf {T}}^i,{\mathbf {E}}^i)_{i\in [|{\mathbf {w}}|]}\), where the \(({\mathbf {T}}^i,{\mathbf {E}}^i)\) are independent and each \(({\mathbf {T}}^i,{\mathbf {E}}^i)\) has distribution \(\mathbb {P}^{(w_i)}_{\mathbf {\zeta }}\). Given also a list of initial labels \(\varepsilon =(\varepsilon _1,\ldots ,\varepsilon _{|{\mathbf {w}}|})\), we let \(\mathbb {P}^{({\mathbf {w}}),(\varepsilon )}_{\mathbf {\zeta },\nu }\) be the distribution of \(({\mathbf {T}}^i,{\mathbf {E}}^i,{\mathbf {L}}^i)_{i\in [|{\mathbf {w}}|]}\), where the terms of the sequence are independent and, for each i, \(({\mathbf {T}}^i,{\mathbf {E}}^i,\mathbf {L}^i)\) has distribution \(\mathbb {P}^{(w_i,\varepsilon _i)}_{\zeta ,\nu }.\)

All previous notation will be adapted to forests; for example, the height of a forest \({\mathbf {f}}\) is the maximum of the heights of its elements, \({\mathbf {f}}_{\leqslant n}\) is the forest where each tree has been cut at height n, and so on.

Remarks Concerning Notation For readability, we will throughout the paper use the canonical variable \(({\mathbf {T}},{\mathbf {E}})\), which is simply the identity function of the space of K-type trees, as well as \(({\mathbf {T}},{\mathbf {E}},{\mathbf {L}})\), \((\mathbf {F},{\mathbf {E}})\) and \(({\mathbf {F}},{\mathbf {E}},{\mathbf {L}})\) when looking at labelled trees or forests. Thus, we will, for instance, write \(\mathbb {P}^{(i)}_{\zeta }\big (({\mathbf {T}},{\mathbf {E}})=({\mathbf {t}},{\mathbf {e}})\big )\) instead of \(\mathbb {P}^{(i)}_{\zeta }({\mathbf {t}},{\mathbf {e}})\), for a given type i and a given K-type tree \(({\mathbf {t}},{\mathbf {e}})\).

Moreover, since we will never change the types and labels of vertices of a tree, we will often drop \({\mathbf {e}}\) and \({\mathbf {l}}\) from the notation, once again for readability, and, in the same vein, since we only consider one offspring distribution at a time, we also often drop \(\zeta \) from the \(\mathbb {P}_{\zeta }\) notation.

Local Convergence of Multi-type Trees and Forests Take a sequence of K-type forests \(({\mathbf {f}}^{(n)},{\mathbf {e}}^{(n)})_{n\in \mathbb {N}}\). We say that this sequence converges locally to a K-type forest \(({\mathbf {f}},{\mathbf {e}})\) if, for all \(k\in \mathbb {N}\), and \(n\in \mathbb {N}\) large enough (depending on k), we have \(({\mathbf {f}}^{(n)}_{\leqslant k},{\mathbf {e}}^{(n)}_{\leqslant k})=({\mathbf {f}}_{\leqslant k},{\mathbf {e}}_{\leqslant k})\). This convergence can be metrized: we can, for example, set, for two K-type forests \(({\mathbf {f}},{\mathbf {e}})\) and \(({\mathbf {f}}',{\mathbf {e}}')\), \(d\big (({\mathbf {f}},{\mathbf {e}}),({\mathbf {f}}',{\mathbf {e}}')\big )=\frac{1}{1+p}\) where p is the supremum of all integers k such that \(({\mathbf {f}}_{\leqslant k},{\mathbf {e}}_{\leqslant k})=({\mathbf {f}}'_{\leqslant k},{\mathbf {e}}'_{\leqslant k})\).
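As a concrete illustration (our own sketch, with a typed tree encoded as a pair of a word set and a word-to-type dictionary), the distance d can be computed by comparing truncations of increasing height:

```python
def truncate(t, k):
    """Keep only vertices of height <= k (with their types)."""
    tree, types = t
    kept = {u for u in tree if len(u) <= k}
    return kept, {u: types[u] for u in kept}

def local_distance(t1, t2, max_k=100):
    """d(t1, t2) = 1/(1+p), p the largest k with equal truncations.

    Returns 0 for equal trees (p = infinity); for distinct trees the search
    is capped at max_k, which suffices whenever they differ below that height.
    """
    if t1 == t2:
        return 0.0
    p = -1
    for k in range(max_k + 1):
        if truncate(t1, k) != truncate(t2, k):
            break
        p = k
    return 1.0 / (1.0 + p) if p >= 0 else 1.0    # p = -1: roots already differ

def with_types(tree):                        # all vertices of type 1, for brevity
    return tree, {u: 1 for u in tree}

t1 = with_types({(), (1,), (2,)})
t2 = with_types({(), (1,), (2,), (1, 1)})
```

Here `t1` and `t2` agree up to height 1 and differ at height 2, so their distance is \(1/(1+1)=1/2\).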

Convergence in distribution of random forests for this metric is simply characterized: if \(({\mathbf {F}}^{(n)},{\mathbf {E}}^{(n)})_{n\in \mathbb {N}}\) is a sequence of random K-type forests, it converges in distribution to a certain random forest \(({\mathbf {F}},{\mathbf {E}})\) if and only if, for all \(k\in \mathbb {N}\) and finite K-type forests \(({\mathbf {f}},{\mathbf {e}})\), the quantity \(\mathbb {P}\big (({\mathbf {F}}^{(n)}_{\leqslant k},{\mathbf {E}}^{(n)}_{\leqslant k})=({\mathbf {f}},{\mathbf {e}})\big )\) converges to \(\mathbb {P}\big (({\mathbf {F}}_{\leqslant k},{\mathbf {E}}_{\leqslant k})=({\mathbf {f}},{\mathbf {e}})\big ).\)

All these definitions can directly be adapted to the case of spatial forests: when asking for equality between the forests below height k, we also ask equality of the labels below this height.

2.2 Cutting a Tree at the First Generation of Fixed Type

In this section, we fix a reference type \(j\in [K]\). We are interested in the first generation of type j, that is, in a K-type tree \({\mathbf {t}}\), the set of vertices of \({\mathbf {t}}\) with type j which have no ancestors of type j, except maybe for the root. We then call \(C_j({\mathbf {t}})\) the tree formed by all the vertices which lie below or on the first generation of type j, including all vertices which lie on branches with no individuals of this type. If \({\mathbf {T}}\) has distribution \(\mathbb {P}^{(i)}_{\zeta }\) for some type i, we let \(\mathbb {P}^{(i)}_{\text {cut}_j}\) be the distribution of \(C_j({\mathbf {T}})\) and let \(\mu _{i,j}\) be the distribution of the number of leaves of \(C_j({\mathbf {T}})\) which have type j (that is, the size of the first generation of type j in \(\mathbf {T}\)). Finally, if \(\mathbf {T}\) has distribution \(\mathbb {P}^{(j)}\), we let \(\xi _{i,j}\) be the distribution of the number of vertices of type i in \(C_j({\mathbf {T}})\) (excluding the root, so that when \(i=j\) we end up with \(\xi _{j,j}=\mu _{j,j}\)).
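For illustration, the first generation of type j and the cut tree \(C_j\) can be computed on a small hypothetical two-type example of our own, encoding a typed tree as a dictionary from Ulam–Harris words to types:

```python
def first_generation(types, j):
    """Type-j vertices (other than the root) whose proper non-root ancestors
    all have a type different from j."""
    gen = set()
    for u, tu in types.items():
        if u == () or tu != j:
            continue
        ancestors = [u[:k] for k in range(1, len(u))]    # proper, non-root
        if all(types[a] != j for a in ancestors):
            gen.add(u)
    return gen

def cut(types, j):
    """C_j: drop every vertex lying strictly below the first generation of
    type j, keeping the generation itself as leaves."""
    gen = first_generation(types, j)
    return {u: tu for u, tu in types.items()
            if all(u[:k] not in gen for k in range(1, len(u)))}

# Hypothetical tree: root of type 1, with type-2 vertices at (1,), (2,1), (2,1,1).
t = {(): 1, (1,): 2, (2,): 1, (2, 1): 2, (2, 1, 1): 2, (3,): 1}
```

On this example the first generation of type 2 is \(\{(1),(2,1)\}\), and \(C_2\) removes only the vertex \((2,1,1)\), which lies strictly below it; the branch \((3)\), which carries no type-2 vertex, is kept in full, as in the definition above.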

The following proposition gives a few properties of the moments of the \(\mu _{i,j}\) and \(\xi _{i,j}\). Most of them are already proven in [25].

Proposition 2.1

Let i and j be two different types.

  1. (i)
    $$\begin{aligned} \sum _{k=0}^{\infty } k\mu _{i,j}(k)=\frac{b_i}{b_j}. \end{aligned}$$
  2. (ii)
    $$\begin{aligned} \sum _{k=0}^{\infty } k\xi _{i,j}(k)=\frac{a_i}{a_j}. \end{aligned}$$
  3. (iii)

    Assume that \(\zeta \) has finite second moments. Then

    $$\begin{aligned} \text {Var}(\mu _{i,i})=\frac{\sigma ^2}{a_ib_i^2}, \end{aligned}$$

    where the number \(\sigma > 0\) is defined by \(\sigma ^2=\sum _{i,j,k} a_ib_jb_kQ^{(i)}_{j,k}\), with

    \(Q^{(i)}_{j,j}=\sum _{{\mathbf {z}}\in (\mathbb {Z}_+)^K} \mu ^{(i)}({\mathbf {z}}) z_j(z_j-1)\) and \(Q^{(i)}_{j,k}=\sum _{{\mathbf {z}}\in (\mathbb {Z}_+)^K} \mu ^{(i)}({\mathbf {z}}) z_jz_k\) for \(j\ne k\).

  4. (iv)

    Assume that \(\zeta \) is regular critical. Then \(\mu _{i,i}\) and \(\xi _{i,j}\) also have some finite exponential moments:

    $$\begin{aligned} \exists z>1, \sum _{n\in \mathbb {Z}_+} \mu _{i,i}(n) z^n<\infty \text { and } \sum _{n\in \mathbb {Z}_+} \xi _{i,j}(n)z^n<\infty . \end{aligned}$$

Proof

We start with point (i). We fix \(j\in [K]\) and, for all \(i\in [K]\), let \(c_i=\sum _{k=0}^{\infty } k\mu _{i,j}(k)\). The proof that \(c_i=\frac{b_i}{b_j}\) for all i is done in two steps: first show that \(c_j=1\) and then that the vector \({\mathbf {c}}=(c_i)_{i\in [K]}\) is a right eigenvector of M for the eigenvalue 1.

The fact that \(c_j=1\) is proven in [25], Proposition 4, (i). It is obtained by removing the types different from j one by one, and noticing that criticality is conserved at every step until we are left with a critical monotype Galton–Watson tree.

To prove that \({\mathbf {c}}\) is a right eigenvector of M, consider a type \(i\in [K]\) and apply the branching property at height 1 in a tree with distribution \(\mathbb {P}^{(i)}_{\zeta }\); we get

$$\begin{aligned} c_i&=\sum _{{\mathbf {z}}\in (\mathbb {Z}_+)^K} \mu ^{(i)}({\mathbf {z}}) \Big (\sum _{l\in [K]{\setminus }\{j\}} z_lc_l + z_j\Big ) \\&= \sum _{{\mathbf {z}}\in (\mathbb {Z}_+)^K} \mu ^{(i)}({\mathbf {z}}) \Big (\sum _{l=1}^K z_lc_l\Big ) \\&= \sum _{l=1}^K m_{i,l}c_l. \end{aligned}$$

Since \(\sum _{l=1}^K m_{i,l}c_l\) is the ith component of \(M{\mathbf {c}}\), the proof is complete.

Point (ii) was also proven in [25], as part of the proof of Proposition 4, (ii). Similarly, points (iii) and (iv) feature in [25], Proposition 4. \(\square \)

2.3 Size of a Tree and Periodicity

As said earlier, we plan on conditioning trees on being large. To this end, we need to define a notion of “size” of a tree. One natural notion of size would be the total number of vertices of the tree. Another one, which, as will be shown later, is easier to work with combinatorially, would be to count only the number of vertices of one fixed type. We propose a fairly general notion of size which contains the above two examples: let \(\gamma =(\gamma _1,\ldots ,\gamma _K)\) be a vector of nonnegative integers, at least one of them being nonzero. We then let, for a K-type tree \(({\mathbf {t}},{\mathbf {e}})\),

$$\begin{aligned} |{\mathbf {t}}|_{\gamma }=\sum _{i=1}^K \gamma _i \#_i({\mathbf {t}}) \end{aligned}$$

where \(\#_i({\mathbf {t}})\) denotes the number of vertices of \({\mathbf {t}}\) with type \(i\in [K]\).

One consequence of criticality is that, while a Galton–Watson tree with ordered offspring distribution \(\zeta \) is almost surely finite, the expected value of its size is infinite.

Lemma 2.1

For all \(i\in [K]\), we have

$$\begin{aligned} {\mathbb {E}}^{(i)}\big [|{\mathbf {T}}|_{\gamma }\big ]=\infty . \end{aligned}$$

Proof

We just need to check that, for all \(j\in [K]\), we have \({\mathbb {E}}^{(i)}\big [\#_j{\mathbf {T}}\big ]=\infty .\) Let us generalize the previous section by calling, recursively, for \(k>1\), the kth generation of type j of a tree the set of its vertices of type j whose closest ancestor of type j is in the \((k-1)\)th generation of type j. By Proposition 2.1, point (i), the number of vertices in each of these generations has expected value \(\frac{b_i}{b_j}>0\), and thus, their sum has infinite expected value. \(\square \)

As it happens, when \({\mathbf {(T,E)}}\) is a Galton–Watson tree, \(|{\mathbf {T}}|_{\gamma }\) cannot take arbitrary integer values. For example, in a classical monotype tree, if an individual can only have an even number of children, then the total number of vertices in the tree has to be odd. Here is a precise statement for the general case.
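The parity example can be checked by simulation: in a monotype tree where each individual has two children with probability 1/2 and none otherwise, every finite tree has odd size, since its size is 1 plus a sum of even offspring counts. An illustrative sketch (the size cap is ours, to discard the rare very large samples of this critical process):

```python
import random

def binary_gw_size(p=0.5, cap=10**5, rng=random):
    """Total size of a Galton-Watson tree in which each vertex has 2 children
    with probability p and 0 otherwise; None if the cap is exceeded."""
    size, alive = 1, 1                       # root counted, one vertex to grow
    while alive > 0:
        births = sum(2 for _ in range(alive) if rng.random() < p)
        size += births
        alive = births                       # next generation to grow
        if size > cap:
            return None
    return size

random.seed(0)
sizes = [s for s in (binary_gw_size() for _ in range(200)) if s is not None]
# every vertex has an even number of children, so every completed size is odd
```

In the notation of Proposition 2.2 below, this corresponds to \(d=2\) and \(\alpha =1\).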

Proposition 2.2

There exist an integer \(d\in \mathbb {N}\) and integers \(\alpha _i\) in \(\{0,1,\ldots ,d-1\}\) for all types \(i\in [K]\) such that, for \(n\in \mathbb {Z}_+\):

  • if \(\mathbb {P}^{(i)}(|{\mathbf {T}}|_{\gamma }=n)>0\), then \(n\equiv \alpha _i \pmod d\).

  • if \(n\equiv \alpha _i \pmod d\) and n is large enough, then \(\mathbb {P}^{(i)}(|\mathbf {T}|_{\gamma }=n)>0\).

Remark 1

This immediately extends to forests: if \({\mathbf {w}}\in \mathcal {W}_K\), then let \(\alpha _{{\mathbf {w}}}\in \{0,\ldots ,d-1\}\) be such that \(\alpha _{{\mathbf {w}}}\equiv \sum _{k=1}^{|{\mathbf {w}}|} \alpha _{w_k} \pmod d\). Then the size of a forest with distribution \(\mathbb {P}^{({\mathbf {w}})}_{\zeta }\) is a.s. of the form \(\alpha _{{\mathbf {w}}}+dn\) with \(n\in \mathbb {Z}_+\) and, if n is large enough, the forest has size \(\alpha _{{\mathbf {w}}}+dn\) with nonzero probability.

The proof of Proposition 2.2 requires the following lemma, which is a variant of the well-known “Frobenius coin problem.”

Lemma 2.2

Let \(n_1,\ldots ,n_p\) be p positive integers and let \(d=\gcd (n_1,\ldots ,n_p)\). There exist integers \(N_2,\ldots ,N_p\) such that the set

$$\begin{aligned} \left\{ \sum _{i=1}^p k_in_i: k_1\in \mathbb {Z}_+ ,\; k_i\in \{0,\ldots ,N_i\} \, \forall i\in \{2,\ldots ,p\}\right\} \end{aligned}$$

contains all large enough multiples of d.

The values of \(N_2,\ldots ,N_p\) are of no importance for us in this paper. All we need to know is that, when adding multiples of \(n_1,\ldots ,n_p\), if we allow all multiples of \(n_1\), then we only need a finite amount of multiples of the others.

Proof

A straightforward induction shows that, if we can prove Lemma 2.2 in the case \(p=2\), then we can generalize it to all p. We will thus restrict ourselves to the case where \(p=2\), and can in fact further simplify the problem by dividing \(n_1\) and \(n_2\) by their gcd, making them coprime. In this case, \(N_2\) and “large enough” can easily be made explicit: we will show that every integer greater than or equal to \(n_1n_2\) can be written as \(k_1n_1+k_2n_2\) with \(k_1\in \mathbb {Z}_+\) and \(k_2\in \{0,\ldots ,n_1-1\}\).

Let \(n\geqslant n_1n_2\). Since \(n_1\) and \(n_2\) are coprime, \(n_2\) is invertible modulo \(n_1\), and we know that there exists \(k_2\) in \(\{0,\ldots ,n_1-1\}\) such that \(k_2n_2 \equiv n \pmod {n_1}\). Therefore, there exists \(k_1\in \mathbb {Z}\) such that \(n-k_2n_2=k_1n_1\), and since \(n\geqslant n_1n_2\) and \(k_2 \leqslant n_1-1\), we also have \(k_1\geqslant 0\).\(\square \)
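The construction in this proof is effectively computable. An illustrative sketch (the function name is ours) recovering \(k_1\) and \(k_2\) exactly as in the argument, via the inverse of \(n_2\) modulo \(n_1\):

```python
from math import gcd

def represent(n, n1, n2):
    """Write n = k1*n1 + k2*n2 with k1 >= 0 and 0 <= k2 < n1, following the
    proof: k2 = n * n2^{-1} mod n1 (requires n1, n2 coprime and n >= n1*n2)."""
    assert gcd(n1, n2) == 1 and n >= n1 * n2
    k2 = (n * pow(n2, -1, n1)) % n1          # inverse of n2 modulo n1
    k1 = (n - k2 * n2) // n1                 # nonnegative since n >= n1*n2
    return k1, k2

# Check the representation on a few coprime pairs and a range of n.
for n1, n2 in [(3, 5), (4, 7), (5, 9)]:
    for n in range(n1 * n2, n1 * n2 + 50):
        k1, k2 = represent(n, n1, n2)
        assert k1 >= 0 and 0 <= k2 < n1 and k1 * n1 + k2 * n2 == n
```

For instance, with \(n_1=3\), \(n_2=5\) and \(n=16\), the construction yields \(k_1=2\), \(k_2=2\), i.e., \(16=2\cdot 3+2\cdot 5\).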

Throughout the following proof, we use for \(i\in [K]\) the notation \(\mathbb {T}^{(i)}_{\zeta }\) for the set of trees which can be obtained with positive probability starting with a root of type i:

$$\begin{aligned} \mathbb {T}^{(i)}_{\zeta }=\left\{ {\mathbf {t}}: \mathbb {P}^{(i)}_{\zeta }\big ({\mathbf {T}}={\mathbf {t}}\big )>0\right\} \end{aligned}$$

Proof of Proposition 2.2

Let \(j\in [K]\) be a type such that \(\zeta ^{(j)}(\emptyset )>0\), i.e., an individual of type j can die without having any children. We start by proving the existence of d and \(\alpha _j\) by focusing on what happens when we jump from one generation of type j to the next. Let

$$\begin{aligned} G=\left\{ n\in \mathbb {N}, \mathbb {P}^{(j)}(|{\mathbf {T}}|_{\gamma }=n)>0\right\} . \end{aligned}$$

Let us also introduce some notation: if \(A_1,\ldots ,A_p\) are subsets of \(\mathbb {Z}\), then we let \(A_1+\ldots +A_p\) be their sumset, that is, the set of integers which can be obtained as sums \(\sum _{i=1}^p a_i\) with \(a_i\in A_i\) for all i. We also let \(G^{+p}\) be the p-fold iterated sumset of G:

$$\begin{aligned} G^{+p}=\left\{ \sum _{i=1}^p n_i: \forall i, n_i\in G\right\} \end{aligned}$$

The set G can be obtained inductively by cutting \(\mathbf {T}\) at its first generation of type j, and then grafting new trees at each vertex of this generation. To be precise, take a tree \({\mathbf {t}}\in \mathbb {T}^{(j)}_{\zeta }\). Let \(a_{{\mathbf {t}}}\) be the sum of the \(\gamma \)-weights of all the vertices which do not have type j in the cut tree \(C_j({\mathbf {t}})\), and let \(p_{{\mathbf {t}}}\) be the number of vertices in the first generation of type j of \({\mathbf {t}}\). We then have

$$\begin{aligned} G = \bigcup _{{\mathbf {t}}\in \mathbb {T}^{(j)}_{\zeta }} \{\gamma _j+a_{{\mathbf {t}}}\} + G^{+p_{{\mathbf {t}}}} \end{aligned}$$

Note of course that there is much redundancy in this union, since \(a_{{\mathbf {t}}}\) and \(p_{{\mathbf {t}}}\) only depend on \(C_j({\mathbf {t}})\) (\({\mathbf {t}}\) cut at its first generation of type j). Next, we do some reindexing to remove the overlap, and at the same time separate the union into three classes:

  • the tree with only one vertex (of type j) is isolated in its own class.

  • the second class contains the trees with no vertices of type j except for the root.

  • the third class contains all the other possible trees cut at their first generation of type j.

We thus obtain

$$\begin{aligned} G = \{\gamma _j\} \cup \bigcup _{x\in X} \{\gamma _j + a_x\} \cup \bigcup _{y\in Y} \{\gamma _j+b_y\} + G^{+p_y} \end{aligned}$$

where X and Y are two abstract sets which we need not worry about, \(a_x\in \mathbb {N}\) for all \(x\in X\) and \((b_y,p_y)\in \mathbb {Z}_+\times \mathbb {N}\) for \(y\in Y\). Note that Y is non-empty (by criticality or aperiodicity).

It then follows that

$$\begin{aligned} G=\left\{ \gamma _j + \sum _{x\in X}k_x a_x +\sum _{y\in Y} k_y (b_y+p_y\gamma _j): \sum _{x\in X}k_x \leqslant \sum _{y\in Y}k_yp_y \right\} . \end{aligned}$$

From this, let

$$\begin{aligned} d=\gcd \Big (\{a_x:x\in X\} \cup \{b_y+p_y\gamma _j:y\in Y\}\Big ) \end{aligned}$$

and let \(\alpha _j\in \{0,\ldots ,d-1\}\) be the remainder of \(\gamma _j\) mod d. Then it is immediate that \(G\subset \alpha _j+d\mathbb {Z}_+\), and Lemma 2.2 ensures that G contains all large enough members of \(\alpha _j+d\mathbb {Z}_+\) (the condition \(\sum _{x\in X}k_x \leqslant \sum _{y\in Y}k_yp_y\) being weaker than the condition of Lemma 2.2, we in fact only need all the \(k_x\) and \(k_y\) to be in a finite set, except for any one specific \(k_y\)).
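The arithmetic at the heart of this step can be checked numerically. The sketch below uses toy values of our own (one class-X value \(a_x=4\), one class-Y pair \((b_y,p_y)=(3,1)\), \(\gamma _j=3\)), not any specific offspring distribution; it builds G from the parametric description above and verifies that G is contained in \(\alpha _j+d\mathbb {Z}_+\) and eventually exhausts it.

```python
# Toy check of the period/remainder structure of G (our own values,
# not derived from an actual offspring distribution).
from math import gcd

gamma_j = 3
c_x = 4                          # a_x, the contribution of the class-X tree
b_y, p_y = 3, 1
c_y = b_y + p_y * gamma_j        # b_y + p_y*gamma_j = 6

d = gcd(c_x, c_y)                # here d = 2
alpha_j = gamma_j % d            # here alpha_j = 1

LIMIT = 250
G = set()
# G = { gamma_j + k_x*c_x + k_y*c_y : k_x <= k_y*p_y }
for k_y in range(LIMIT // c_y + 2):
    for k_x in range(k_y * p_y + 1):
        n = gamma_j + c_x * k_x + c_y * k_y
        if n <= LIMIT:
            G.add(n)

assert all(n % d == alpha_j for n in G)        # G is a subset of alpha_j + d*Z_+
assert 17 not in G                             # small elements can be missing...
assert all(n in G for n in range(19, 200, d))  # ...but G eventually fills the progression
```

In this toy case the exceptional set is \(\{5,7,11,17\}\): every larger element of \(\alpha _j+d\mathbb {Z}_+\) is reached, as the lemma predicts.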

Now, let i be a type different from j. We first want to show that, if \({\mathbf {t}}\) and \({\mathbf {t'}}\) are both in \(\mathbb {T}^{(i)}_{\zeta }\), then \(|{\mathbf {t}}|_{\gamma }\equiv |{\mathbf {t'}}|_{\gamma } \pmod d\). To this end, consider a tree \({\mathbf {t}}^0\) in \(\mathbb {T}^{(j)}_{\zeta }\) which contains at least one vertex u of type i (such a tree exists by virtue of irreducibility). Now let \({\mathbf {t}}^1\) and \({\mathbf {t}}^2\) be \({\mathbf {t}}^0\), except that we replace the subtree rooted at u by, respectively, \({\mathbf {t}}\) and \({\mathbf {t'}}\). Both \({\mathbf {t}}^1\) and \({\mathbf {t}}^2\) also belong to \(\mathbb {T}^{(j)}_{\zeta }\), which implies that \(|{\mathbf {t}}^1|_{\gamma }\equiv |{\mathbf {t}}^2|_{\gamma }\equiv \alpha _j \pmod d\), which in turn implies \(|{\mathbf {t}}|_{\gamma }\equiv |{\mathbf {t'}}|_{\gamma } \pmod d\). This shows the existence of \(\alpha _i\).

Finally, we want to show that, if n is large enough, then \(\mathbb {P}^{(i)} \big (|{\mathbf {T}}|_{\gamma }=\alpha _i+dn\big )>0\). Take any tree \(({\mathbf {t}},{\mathbf {e}})\in \mathbb {T}^{(i)}_{\zeta }\) which contains at least one vertex u of type j. Let m and p be integers such that \(|{\mathbf {t}}|_{\gamma }=\alpha _i+dm\) and \(|{\mathbf {t}}_u|_{\gamma }=\alpha _j+dp\), where \({\mathbf {t}}_u\) is the subtree rooted at u. We know that, if n is large enough, there exists \({\mathbf {t'}}\) in \(\mathbb {T}^{(j)}_{\zeta }\) such that \(|{\mathbf {t'}}|_{\gamma }=\alpha _j+d(n+p-m)\). Replacing \({\mathbf {t}}_u\) by \({\mathbf {t'}}\) in \({\mathbf {t}}\) then yields a tree of size \(\alpha _i+dn\) which is itself in \(\mathbb {T}^{(i)}_{\zeta }\), thus ending our proof. \(\square \)

Proposition 2.2 can be refined in the case where we only count the number of vertices of one specific type. We leave the proof of the following corollary to the reader.

Corollary 2.1

Assume that \(\gamma _i={\mathbbm {1}}_{i=1}\) for all \(i\in [K]\). Then:

  • the period d is the gcd of the support of \(\mu _{1,1}\), and \(\alpha _1=1\).

  • for \(i\in \{2,\ldots ,K\}\) the measure \(\mu _{i,1}\) is supported on \(\alpha _i+d\mathbb {Z}_+\).

3 Infinite Multi-type Galton–Watson Trees and Forests

In this section we will consider unlabelled trees and forests with a critical ordered offspring distribution \(\mathbf {\zeta }\), and will omit mentioning \(\mathbf {\zeta }\) for readability purposes. We could in fact work with spatial trees; however, since the labellings are done conditionally on the tree and independently for each vertex, the reader can check that the proofs do not change at all if we add the labellings in.

Just as in the case of critical monotype Galton–Watson trees, multi-type trees have an infinite variant which is obtained through a size-biasing method which was first introduced in [17].

3.1 Existence of the Infinite Forest

Proposition 3.1

Let \({\mathbf {w}}\in \mathcal {W}\). There exists a unique probability measure \(\widehat{\mathbb {P}}^{({\mathbf {w}})}\) on the space of infinite K-type forests such that, for any \(n\in \mathbb {Z}_{+}\) and for any finite K-type forest \({\mathbf {f}}\) with height n,

$$\begin{aligned} \widehat{\mathbb {P}}^{({\mathbf {w}})}\big ({\mathbf {F}}_{\leqslant n}={\mathbf {f}}\big )=\frac{1}{Z_{{\mathbf {w}}}}\left( \sum _{u\in {\mathbf {f}}_n} b_{{\mathbf {e}}(u)}\right) \mathbb {P}^{({\mathbf {w}})}\big ({\mathbf {F}}_{\leqslant n}={\mathbf {f}}\big ), \end{aligned}$$
(3.1)

where the normalizing constant \(Z_{{\mathbf {w}}}\) is equal to \(\sum _{i=1}^{|{\mathbf {w}}|} b_{w_i}=p({\mathbf {w}})\cdot {\mathbf {b}}.\)

Proof

Our proof is structured as the one given in [20] for monotype trees. Let \(n\in \mathbb {Z}_{+}\); we will first define a probability distribution \(\widehat{\mathbb {P}}^{({\mathbf {w}})}_n\) on the space of K-type forests with height exactly n, paired with a vertex at height n. Let \(({\mathbf {f}},{\mathbf {e}})\) be such a forest and \(u\in {\mathbf {f}}_n\), and set

$$\begin{aligned} \widehat{\mathbb {P}}^{({\mathbf {w}})}_n\big ({\mathbf {f}},u\big ) =\frac{b_{{\mathbf {e}}(u)}}{Z_{{\mathbf {w}}}} \mathbb {P}^{({\mathbf {w}})}\big ({\mathbf {F}}={\mathbf {f}}\big ). \end{aligned}$$

The martingale property of the process \((X_n)_{n\in \mathbb {Z}_{+}}\) defined by (2.2) under \(\mathbb {P}^{({\mathbf {w}})}\) ensures that these are indeed probability measures: the total mass of \(\widehat{\mathbb {P}}^{({\mathbf {w}})}_n\) is \(\frac{1}{Z_{{\mathbf {w}}}} {\mathbb {E}}^{({\mathbf {w}})}_{\mathbf {\zeta }}[X_n] =\frac{p({\mathbf {w}})\cdot {\mathbf {b}}}{Z_{{\mathbf {w}}}}=1\).
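The constancy of \({\mathbb {E}}[X_n]\) used here can be checked exactly in a small example. In the sketch below, the 2-type critical offspring distribution and the right eigenvector \({\mathbf {b}}=(3,2)\) of its mean matrix are toy choices of our own, not an example from the text.

```python
# Toy check (our own 2-type example) that E[X_n] = b_i is constant in n
# for a tree rooted at type i, where X_n sums b over generation n.
from fractions import Fraction

half, third = Fraction(1, 2), Fraction(1, 3)
# zeta^{(i)}: offspring word (tuple of child types) -> probability
zeta = {
    1: {(): half, (2, 2, 2): half},      # mean matrix row (0, 3/2)
    2: {(): 2 * third, (1, 1): third},   # mean matrix row (2/3, 0)
}
b = {1: Fraction(3), 2: Fraction(2)}     # right eigenvector, eigenvalue 1 (critical)

def expected_X(i, n):
    """E[X_n] for a single tree rooted at type i, computed exactly."""
    if n == 0:
        return b[i]
    return sum(p * sum(expected_X(j, n - 1) for j in word)
               for word, p in zeta[i].items())

for i in (1, 2):
    for n in range(5):
        assert expected_X(i, n) == b[i]  # the martingale property in expectation
```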

We will check that these are compatible in the sense that, for \(n\in \mathbb {Z}_{+}\), if \(({\mathbf {F}},U)\) has distribution \(\widehat{\mathbb {P}}^{({\mathbf {w}})}_{n+1}\), then \(({\mathbf {F}}_{\leqslant n},U^-)\) has distribution \(\widehat{\mathbb {P}}^{({\mathbf {w}})}_n\). Therefore, fix a K-type forest \({\mathbf {f}}\) of height n and a vertex u of \({\mathbf {f}}\) at height n. We have

$$\begin{aligned} \widehat{\mathbb {P}}^{({\mathbf {w}})}_{n+1}\big (({\mathbf {F}}_{\leqslant n},U^-)=({\mathbf {f}},u)\big )&= \frac{1}{Z_{{\mathbf {w}}}}\mathbb {P}^{({\mathbf {w}})}\big ({\mathbf {F}}_{\leqslant n}={\mathbf {f}}\big ) \sum _{\mathbf {x}\in \mathcal {W}_K} \zeta ^{({\mathbf {e}}(u))}(\mathbf {x}) \sum _{j=1}^{|\mathbf {x}|} b_{x_j} \\&= \frac{1}{Z_{{\mathbf {w}}}}\mathbb {P}^{({\mathbf {w}})}\big ({\mathbf {F}}_{\leqslant n}={\mathbf {f}}\big ) \sum _{{\mathbf {z}}\in (\mathbb {Z}_+)^K} \mu ^{({\mathbf {e}}(u))}({\mathbf {z}})\,{\mathbf {z}}\cdot {\mathbf {b}} \\&=\frac{1}{Z_{{\mathbf {w}}}}\mathbb {P}^{({\mathbf {w}})}\big ({\mathbf {F}}_{\leqslant n}={\mathbf {f}}\big ) \, b_{{\mathbf {e}}(u)}. \end{aligned}$$

Kolmogorov’s consistency theorem then allows us to define a distribution \(\widehat{\mathbb {P}}^{({\mathbf {w}})}_{\infty }\) on the set of forests where one of the trees has a distinguished infinite path. Forgetting the infinite path then gives us the distribution \(\widehat{\mathbb {P}}^{({\mathbf {w}})}\) which we were looking for. \(\square \)

For \(n\in \mathbb {Z}_+\), \({\mathbf {f}}\) a forest of height \(n+1\) and \(u\in {\mathbf {f}}_{n+1}\), we have

$$\begin{aligned} \widehat{\mathbb {P}}^{({\mathbf {w}})}_{n+1}\Big (({\mathbf {F}},U)&=({\mathbf {f}},u) \;|\; ({\mathbf {F}}_{\leqslant n},U^-)=({\mathbf {f}}_{\leqslant n},u^-)\Big ) \\&= \frac{b_{\mathbf {e(u)}}}{b_{\mathbf {e(u^-)}}} \mathbb {P}^{({\mathbf {w}})}_{n+1}\Big ({\mathbf {F}}={\mathbf {f}} \; | \; {\mathbf {F}}_{\leqslant n}={\mathbf {f}}_{\leqslant n}\Big ). \end{aligned}$$

From this formula follows a simple description of these infinite forests.

Given a type \(i\in [K]\), a random tree with distribution \(\widehat{\mathbb {P}}^{(i)}\) can be described in the following way: it is made of a spine, that is an infinite ascending chain starting at the root, on which we have grafted independent trees with ordered offspring distribution \(\mathbf {\zeta }\). Elements of the spine have a different offspring distribution, called \(\widehat{\mathbf {\zeta }}\), which is a size-biased version of \(\mathbf {\zeta }\). It is defined by

$$\begin{aligned} \widehat{\zeta }^{(j)}({\mathbf {x}})=\frac{1}{b_j}\sum _{l=1}^{|{\mathbf {x}}|} b_{x_l}\zeta ^{(j)}({\mathbf {x}}), \end{aligned}$$
(3.2)

with \(j\in [K]\) and \({\mathbf {x}}\in \mathcal {W}_K\). Given an element of the spine \(u\in {\mathcal {U}}\) and its offspring \({\mathbf {x}}\in \mathcal {W}_K\), the probability that the next element of the spine is uj for \(j\in [|{\mathbf {x}}|]\) is proportional to \(b_{x_j}\), and therefore equal to

$$\begin{aligned} \frac{b_{x_j}}{\sum _{l=1}^{|{\mathbf {x}}|} b_{x_l}}. \end{aligned}$$

To get a forest with distribution \(\widehat{\mathbb {P}}^{({\mathbf {w}})}\), let first J be a random variable taking values in \([|{\mathbf {w}}|]\) such that \(J=j\) with probability proportional to \(b_{w_j}\). Conditionally on J, let \(\mathbf {T}_J\) be a tree with distribution \(\widehat{\mathbb {P}}^{(w_J)}\), and, for \(i\in [|{\mathbf {w}}|]\) with \(i\ne J\), let \({\mathbf {T}}_i\) be a tree with distribution \(\mathbb {P}^{(w_i)}\), all these trees being mutually independent. Then the forest \(({\mathbf {T}}_i)_{i\in [|{\mathbf {w}}|]}\) has distribution \(\widehat{\mathbb {P}}^{({\mathbf {w}})}\).
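The construction above can be made concrete in a small example. In the following sketch (the 2-type offspring distribution and the vector \({\mathbf {b}}\) are toy choices of our own), we check that \(\widehat{\zeta }\) defined by (3.2) is indeed a probability distribution (this holds because \({\mathbf {b}}\) is a right eigenvector of the mean matrix), that a spine vertex never dies, and that the spine-step probabilities are proportional to \(b_{x_j}\) as stated.

```python
# Toy 2-type critical offspring distribution (our own example).
from fractions import Fraction

half, third = Fraction(1, 2), Fraction(1, 3)
zeta = {1: {(): half, (2, 2, 2): half},
        2: {(): 2 * third, (1, 1): third}}
b = {1: Fraction(3), 2: Fraction(2)}  # right eigenvector of the mean matrix

def zeta_hat(j):
    """Size-biased offspring law (3.2): (1/b_j) * sum_l b_{x_l} * zeta^{(j)}(x)."""
    return {x: sum(b[t] for t in x) * p / b[j] for x, p in zeta[j].items()}

for j in (1, 2):
    zh = zeta_hat(j)
    assert sum(zh.values()) == 1   # a genuine probability distribution
    assert zh[()] == 0             # a spine vertex never dies

# Spine-step probabilities given offspring x: proportional to b_{x_l}.
x = (2, 2, 2)
total = sum(b[t] for t in x)
steps = [b[t] / total for t in x]
assert steps == [third, third, third]
```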

Remark 2

Recall that a tree with law \(\mathbb {P}^{(i)}\) is finite for any \(i\in [K]\). Therefore, a forest with distribution \(\widehat{\mathbb {P}}^{({\mathbf {w}})}\) can only have one infinite path, and thus, we do not lose any information by going from \(\widehat{\mathbb {P}}^{({\mathbf {w}})}_{\infty }\) to \(\widehat{\mathbb {P}}^{({\mathbf {w}})}\).

3.2 Convergence to the Infinite Forest

Recall from Sect. 2.3 the notations d and \(\alpha _{{\mathbf {w}}}\): the size of a forest with distribution \(\mathbb {P}^{({\mathbf {w}})}_{\zeta }\) is always of the form \(\alpha _{{\mathbf {w}}}+dn\).

Theorem 3.1

Assume one of the following:

  • \(\gamma _j={\mathbbm {1}}_{j=1}\) for \(j\in [K]\).

  • \(\zeta \) is regular critical.

As n tends to infinity, a forest \({\mathbf {F}}\) with distribution \(\mathbb {P}^{({\mathbf {w}})}\), conditioned on \(|{\mathbf {F}}|_{\gamma }=\alpha _{{\mathbf {w}}}+dn\), converges in distribution to a forest with distribution \(\widehat{\mathbb {P}}^{({\mathbf {w}})}\). In other words, given a forest \(({\mathbf {f}},{\mathbf {e}})\) of height k, we have

$$\begin{aligned} \mathbb {P}^{({\mathbf {w}})}\big ({\mathbf {F}}_{\leqslant k}={\mathbf {f}} \mid |{\mathbf {F}}|_{\gamma }=\alpha _{{\mathbf {w}}}+dn\big ) \underset{n\rightarrow \infty }{\longrightarrow }\widehat{\mathbb {P}}^{({\mathbf {w}})}\big ({\mathbf {F}}_{\leqslant k}={\mathbf {f}}\big ) \end{aligned}$$

This theorem is split into two quite distinct parts. For the first part, we assume that the notion of size we use is simply the number of vertices of one fixed type, which we may take to be type 1 by symmetry. In this case, the theorem will be proved with purely combinatorial tools, notably ratio limit theorems for random walks. In the second part, we make no assumptions on \(\gamma \), and in exchange we have to restrict ourselves to the case where the offspring distribution has exponential moments. The result will then be proved with the help of techniques from analytic combinatorics.

4 Proof of Theorem 3.1

4.1 The Main Ingredient

Whether we count only vertices of one type or assume that the offspring distribution is regular critical, the proof of Theorem 3.1 will rely on the following asymptotic equivalence, indexed by any word \({\mathbf {w}}\in \mathcal {W}_K\):

$$\begin{aligned} \mathbb {P}^{({\mathbf {w}})}(|{\mathbf {F}}|_{\gamma }=\alpha _{{\mathbf {w}}}+dn) \underset{n\rightarrow \infty }{\sim } \frac{Z_{{\mathbf {w}}}}{b_1} \mathbb {P}^{(1)} \big (|{\mathbf {T}}|_{\gamma }=\alpha _1+d(n+p)\big ), \quad \forall p\in \mathbb {Z}\quad \qquad (H_{{\mathbf {w}}}) \end{aligned}$$

What equation \((H_{{\mathbf {w}}})\) means is that, when we ask a forest to have size of order dn with large n, then exactly one of its tree components will have size of order dn, while the others will be comparatively microscopic.

Proof that Theorem 3.1 Follows from \((H_{{\mathbf {w}}})\)

Take a K-type forest \({\mathbf {f}}\) with height \(k\in \mathbb {N}\), and let \({\mathbf {x}}\in \mathcal {W}_K\) be the word obtained by taking the types of the vertices of \({\mathbf {f}}\) with height k (the order of the elements of \({\mathbf {x}}\) actually has no influence). For n large enough, we have

$$\begin{aligned} \mathbb {P}^{({\mathbf {w}})}\big ({\mathbf {F}}_{\leqslant k}={\mathbf {f}} \mid |{\mathbf {F}}|_{\gamma }=\alpha _{{\mathbf {w}}}+dn\big )&=\frac{\mathbb {P}^{({\mathbf {w}})}\big ({\mathbf {F}}_{\leqslant k}={\mathbf {f}},|{\mathbf {F}}|_{\gamma }=\alpha _{{\mathbf {w}}}+dn \big )}{\mathbb {P}^{({\mathbf {w}})}(|{\mathbf {F}}|_{\gamma }=\alpha _{{\mathbf {w}}}+dn)} \\&=\mathbb {P}^{({\mathbf {w}})}\big ({\mathbf {F}}_{\leqslant k}={\mathbf {f}}\big )\frac{\mathbb {P}^{({\mathbf {x}})}(|{\mathbf {F}}|_{\gamma }=\alpha _{{\mathbf {w}}}+dn-q)}{\mathbb {P}^{({\mathbf {w}})}(|{\mathbf {F}}|_{\gamma }=\alpha _{{\mathbf {w}}}+dn)} \end{aligned}$$

where \(q=|{\mathbf {f}}_{\leqslant k-1}|_{\gamma }\). By the results of Sect. 2.3, if \(\mathbb {P}^{({\mathbf {w}})}\big ({\mathbf {F}}_{\leqslant k}={\mathbf {f}}\big )>0\) then \(\alpha _{{\mathbf {w}}}-q\) must be congruent to \(\alpha _{{\mathbf {x}}}\) modulo d, giving us

$$\begin{aligned} \mathbb {P}^{({\mathbf {w}})}\big ({\mathbf {F}}_{\leqslant k}={\mathbf {f}} \mid |{\mathbf {F}}|_{\gamma }= \alpha _{{\mathbf {w}}}+dn\big ) = \mathbb {P}^{({\mathbf {w}})}\big ({\mathbf {F}}_{\leqslant k}={\mathbf {f}}\big )\frac{\mathbb {P}^{({\mathbf {x}})}\big (|{\mathbf {F}}|_{\gamma }=\alpha _{{\mathbf {x}}}+d(n+p)\big )}{\mathbb {P}^{({\mathbf {w}})}\big (|{\mathbf {F}}|_{\gamma }=\alpha _{{\mathbf {w}}}+dn\big )} \end{aligned}$$

for some integer p, possibly negative. Now if we let n tend to infinity, using both \((H_{{\mathbf {w}}})\) and \((H_{{\mathbf {x}}})\), we obtain

$$\begin{aligned} \frac{\mathbb {P}^{({\mathbf {x}})}\big (|{\mathbf {F}}|_{\gamma } =\alpha _{{\mathbf {x}}}+d(n+p)\big )}{\mathbb {P}^{({\mathbf {w}})}\big (|{\mathbf {F}}|_{\gamma }=\alpha _{{\mathbf {w}}}+dn\big )} \underset{n\rightarrow \infty }{\longrightarrow }\frac{Z_{{\mathbf {x}}}}{Z_{{\mathbf {w}}}}=\frac{1}{Z_{{\mathbf {w}}}}\left( \sum _{u\in {\mathbf {f}}_k} b_{{\mathbf {e}}(u)}\right) , \end{aligned}$$

which concludes the proof of Theorem 3.1, assuming \((H_{{\mathbf {w}}})\). \(\square \)

4.2 Proving \((H_{{\mathbf {w}}})\) When Counting Only One Type

We assume from now on that \(\gamma _j={\mathbbm {1}}_{j=1}\) for all \(j\in [K]\), and will therefore from now on write \(\#_1 {\mathbf {T}}\) for \(|{\mathbf {T}}|_{\gamma }\). Recall from Sect. 2.3 in particular that d is the gcd of the support of \(\mu _{1,1}\) and that \(\alpha _1=1\).

Obtaining \((H_{{\mathbf {w}}})\) for every word \({\mathbf {w}}\) will be done in several small steps. We will first prove it for some fairly simple words and gradually enlarge the class of \({\mathbf {w}}\) for which it holds, until we have every element of \(\mathcal {W}_K\).

4.2.1 Ratio Limit Theorems for a Random Walk

Let \((S_n)_{n\in \mathbb {N}}\) be a random walk which starts at 0 and whose jumps are all greater than or equal to \(-1\), their distribution being given by \(\mathbb {P}(S_1=k)=\mu _{1,1}(k+1)\) for \(k\geqslant -1\).

Lemma 4.1

For all \(\alpha \in \{0,\ldots ,d-1\}\) we have

$$\begin{aligned} \mathbb {P}(S_{\alpha +dn}=-\alpha ) \underset{n\rightarrow \infty }{\sim } \mathbb {P}(S_{dn}=0) \underset{n\rightarrow \infty }{\sim } \mathbb {P}(S_{d(n+1)}=0) \end{aligned}$$

Proof

The first thing to notice is that the random walk \((\frac{S_{dn}}{d})_{n\in \mathbb {N}}\) is irreducible, recurrent and aperiodic on \(\mathbb {Z}\). It is indeed integer-valued because, by definition, \(S_{n+1}\equiv S_n-1\pmod d\) for every n, and thus, taking d steps at a time keeps us in the same class modulo d. Irreducibility comes from the fact that each step of \((S_n)_{n\in \mathbb {N}}\) has a nonzero probability of being equal to \(-1\), because \(\mu _{1,1}(0)>0\); thus, \((\frac{S_{dn}}{d})_{n\in \mathbb {N}}\) can have positive jumps as well as jumps equal to \(-1\). Since the jumps of \((S_n)_{n\in \mathbb {N}}\) are centered by point (i) of Proposition 2.1, this makes \((\frac{S_{dn}}{d})_{n\in \mathbb {N}}\) an irreducible and centered random walk on \(\mathbb {Z}\), so that it is recurrent (see, e.g., Theorem 8.2 in [14]). Finally, aperiodicity is obtained from the fact that, if \(\mu _{1,1}(n)>0\), then \(\mathbb {P}(S_n=0)>0\): jump straight to \(n-1\), then go down to 0 one step at a time.

As a consequence of this, we can apply Spitzer’s strong ratio theorem (see [31], p.49) to the random walk \((\frac{S_{dn}}{d})_{n\in \mathbb {N}}\). We obtain that, for any \(k\in \mathbb {Z}\),

$$\begin{aligned} \mathbb {P}(S_{dn}=0) \underset{n\rightarrow \infty }{\sim }\mathbb {P}(S_{d(n+1)}=0) \underset{n\rightarrow \infty }{\sim }\mathbb {P}(S_{dn}=dk). \end{aligned}$$

This proves the second half of Lemma 4.1 and can also be used to prove the first half. Let \(\mu _{1,1}^{* \alpha }\) be the distribution of the sum of \(\alpha \) independent variables with distribution \(\mu _{1,1}\). For \(n\in \mathbb {N}\), we then have

$$\begin{aligned} \mathbb {P}(S_{\alpha +dn}=-\alpha )=\sum _{p\in \mathbb {Z}}\mathbb {P}(S_{dn}=-\alpha -p)\mu ^{* \alpha }_{1,1} (p+\alpha ). \end{aligned}$$

Fatou’s lemma then gives us

$$\begin{aligned} \underset{n\rightarrow \infty }{\liminf }\ \frac{\mathbb {P}(S_{\alpha +dn}=-\alpha )}{\mathbb {P}(S_{dn}=0)}\geqslant \sum _{p\in \mathbb {Z}}\mu ^{* \alpha }_{1,1} (p+\alpha )=1 \end{aligned}$$

A similar argument also shows that

$$\begin{aligned} \underset{n\rightarrow \infty }{\liminf }\ \frac{\mathbb {P}(S_{d(n+1)}=0)}{\mathbb {P}(S_{\alpha +dn}=-\alpha )} \geqslant 1, \end{aligned}$$

and this ends the proof. \(\square \)
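Lemma 4.1 lends itself to a direct numerical check. In the sketch below we take the toy choice \(\mu _{1,1}(0)=2/3\), \(\mu _{1,1}(3)=1/3\) (our own example: then \(d=3\) and the jumps of S lie in \(\{-1,+2\}\) and are centered), compute the law of \(S_n\) by exact convolution, and verify that both ratios in the lemma are close to 1.

```python
# Numerical illustration of Lemma 4.1 (toy case of our own):
# mu_{1,1}(0) = 2/3, mu_{1,1}(3) = 1/3, so d = 3, alpha = 1, and the walk
# jumps by -1 with prob 2/3 and by +2 with prob 1/3 (centered).
N = 901                       # total number of steps to compute (= 1 + 3*300)
OFF = N                       # value v is stored at index v + OFF
probs = [0.0] * (3 * N + 1)   # supports values in [-N, 2N]
probs[OFF] = 1.0              # S_0 = 0

snap = {}
for m in range(1, N + 1):
    new = [0.0] * (3 * N + 1)
    for i, p in enumerate(probs):
        if p > 0.0:
            new[i - 1] += p * (2 / 3)  # jump -1, prob mu_{1,1}(0)
            new[i + 2] += p * (1 / 3)  # jump +2, prob mu_{1,1}(3)
    probs = new
    if m in (897, 900, 901):           # 3*299, 3*300, 1 + 3*300
        snap[m] = probs[:]

r1 = snap[900][OFF] / snap[897][OFF]      # P(S_{3(n+1)}=0) / P(S_{3n}=0)
r2 = snap[901][OFF - 1] / snap[900][OFF]  # P(S_{1+3n}=-1) / P(S_{3n}=0)
assert abs(r1 - 1) < 5e-3 and abs(r2 - 1) < 5e-3
```

At \(n=300\) both ratios already differ from 1 by less than \(2\cdot 10^{-3}\), consistent with the \(1+O(1/n)\) behaviour the strong ratio theorem suggests.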

4.2.2 The Case Where \({\mathbf {w}}=(1,1,\ldots ,1)\)

Consider a tree \(\mathbf {T}\) with distribution \(\mathbb {P}^{(1)}\). Consider then the reduced tree \(\Pi ^{(1)}(\mathbf {T})\), where all the vertices with types different from 1 have been erased but ancestral lines are kept (so that the father of a vertex of \(\Pi ^{(1)}(\mathbf {T})\) is its closest ancestor of type 1 in \({\mathbf {T}}\)). This tree is studied in detail in [25], where it is shown to be a monotype Galton–Watson tree, its offspring distribution naturally being \(\mu _{1,1}\). As a result, the well-known cyclic lemma (see [29], Sections 6.1 and 6.2) tells us that

$$\begin{aligned} \mathbb {P}^{(1)}(\#_1 {\mathbf {T}}=1+dn)=\frac{1}{1+dn}\mathbb {P}(S_{1+dn}=-1), \end{aligned}$$

where \((S_n)_{n\in \mathbb {N}}\) is the random walk defined in Sect. 4.2.1. One particular consequence of this is the fact that, thanks to Lemma 4.1, in order to prove \((H_{{\mathbf {w}}})\) for a certain word \({\mathbf {w}}\), we can restrict ourselves to proving the asymptotic equivalence for a single value of p, which we will take to be 0.

Consider now a word \({\mathbf {w}}=(1,1,\ldots ,1)\) of length k, where k is any integer. The cyclic lemma can be adapted to forests (see [29] again), and we have

$$\begin{aligned} \mathbb {P}^{({\mathbf {w}})}(\#_1 {\mathbf {F}}=k+dn)=\frac{k}{k+dn}\mathbb {P}(S_{k+dn}=-k). \end{aligned}$$

Lemma 4.1 then implies \((H_{{\mathbf {w}}})\) in this case since \(Z_{{\mathbf {w}}}=kb_1\) and \(\alpha _{{\mathbf {w}}}=k\).
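Both cyclic-lemma identities above can be verified exactly in the simplest monotype toy case, binary offspring \(\mu (0)=\mu (2)=1/2\) (so \(d=2\)); the tree-size law is then given by Catalan numbers, and the forest case is a convolution. This sketch is our own illustration, not part of the proof.

```python
# Exact check of the cyclic lemma (toy monotype case, our own example):
# binary offspring mu(0) = mu(2) = 1/2, so the walk S has +-1 steps and
# P^{(k)}(#F = k+n) = (k/(k+n)) * P(S_{k+n} = -k).
from math import comb

MAX = 41

# tree-size distribution: P(#T = 2j+1) = Catalan(j) / 2^{2j+1}
p_tree = [0.0] * (MAX + 1)
for j in range((MAX - 1) // 2 + 1):
    m = 2 * j + 1
    p_tree[m] = comb(2 * j, j) / (j + 1) / 2 ** m

def forest_dist(k):
    """Size distribution of a forest of k independent trees (k-fold convolution)."""
    dist = [1.0] + [0.0] * MAX
    for _ in range(k):
        dist = [sum(dist[m - s] * p_tree[s] for s in range(1, m + 1))
                for m in range(MAX + 1)]
    return dist

k = 3
dist = forest_dist(k)
for n in range(0, MAX - k, 2):   # forest sizes k + n, n even
    lhs = dist[k + n]
    # P(S_{k+n} = -k) for +-1 steps: choose n/2 up-steps
    rhs = k / (k + n) * comb(k + n, n // 2) / 2 ** (k + n)
    assert abs(lhs - rhs) < 1e-12
```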

The cases where \({\mathbf {w}}\) contains types different from 1 will be much less simple; we start with an inequality.

4.2.3 A Lower Bound for General \({\mathbf {w}}\)

Let \({\mathbf {w}}\in \mathcal {W}_K\). In order to count the number of vertices of type 1 of a forest with distribution \(\mathbb {P}^{({\mathbf {w}})}\), we cut it at its first generation of type 1:

$$\begin{aligned} \mathbb {P}^{({\mathbf {w}})}(\#_1{\mathbf {F}} =\alpha _{{\mathbf {w}}}+dn)=\sum _{k_1,\ldots ,k_{|{\mathbf {w}}|}=0}^{\infty }\left( \prod _{i=1}^{|{\mathbf {w}}|}\mu _{{w_i},1}(k_i)\right) \mathbb {P}^{(1,\ldots ,1)}(\#_1{\mathbf {F}}=\alpha _{{\mathbf {w}}}-q+dn) \end{aligned}$$

where q is the number of times 1 appears in \({\mathbf {w}}\) and 1 is repeated \(k_1+k_2+\cdots +k_{|{\mathbf {w}}|}\) times in \(\mathbb {P}^{(1,\ldots ,1)}\). By Corollary 2.1, whenever \(\mu _{{w_i},1}(k_i)>0\), we have \(\alpha _{w_i}\equiv k_i+{\mathbbm {1}}_{w_i=1} \pmod d\), and thus, the use of \(H_{(1,\ldots ,1)}\), combined with Fatou’s lemma, gives us the following lower bound:

$$\begin{aligned} \underset{n\rightarrow \infty }{\liminf }\,\frac{\mathbb {P}^{({\mathbf {w}})} (\#_1{\mathbf {F}}=\alpha _{{\mathbf {w}}}+dn)}{\mathbb {P}^{(1)} (\#_1{\mathbf {T}}=1+dn)} \geqslant \sum _{i=1}^{|{\mathbf {w}}|}\sum _{k_i}k_i\mu _{w_i,1}(k_i). \end{aligned}$$

We can then use point (i) of Proposition 2.1 to identify the right-hand side and obtain

$$\begin{aligned} \underset{n\rightarrow \infty }{\liminf }\,\frac{\mathbb {P}^{({\mathbf {w}})}(\#_1{\mathbf {F}} =\alpha _{{\mathbf {w}}}+dn)}{\mathbb {P}^{(1)} (\#_1{\mathbf {T}}=1+dn)} \geqslant \frac{Z_{\mathbf {w}}}{b_1}. \end{aligned}$$
(4.1)

To prove the reverse inequality for the limsup, we will try to fit a forest with distribution \(\mathbb {P}^{({\mathbf {w}})}\) “inside” a tree with distribution \(\mathbb {P}^{(1)}\). We first need some additional notions.

4.2.4 The Extension Relation

We describe here a tool which will be useful in the future. Let \(({\mathbf {t}},{\mathbf {e}})\) and \(({\mathbf {t'}},{\mathbf {e'}})\) be two K-type trees. We say that \({\mathbf {t}}'\) extends \({\mathbf {t}}\), which we write \({\mathbf {t}}'\vdash {\mathbf {t}}\) (omitting as usual the type functions for clarity), if \({\mathbf {t}}'\) can be obtained from \({\mathbf {t}}\) by grafting trees on the leaves of \({\mathbf {t}}\). More precisely, \({\mathbf {t}}'\vdash {\mathbf {t}}\) if:

  • \({\mathbf {t}}\subset {\mathbf {t}}'\).

  • \(\forall u\in {\mathbf {t}}\), \({\mathbf {e}}(u)=\mathbf {e'}(u)\).

  • \(\forall u\in {\mathbf {t}}'{\setminus }{\mathbf {t}},\exists v\in \partial {\mathbf {t}},w\in {\mathcal {U}}: \;u=vw\).

Here, \(\partial {\mathbf {t}}\) is the set of leaves of \({\mathbf {t}}\), that is the set of vertices v of \({\mathbf {t}}\) such that \(k_v({\mathbf {t}})=0\). See Fig. 1 for an example.

Fig. 1

An example of a 2-type tree extending another. Here, \({\mathbf {t}}'\vdash {\mathbf {t}}\)

This is once again adaptable to forests: if \(({\mathbf {f}},{\mathbf {e}})\) and \((\mathbf {f'},\mathbf {e'})\) are two K-type forests, then we say that \({\mathbf {f}}'\vdash {\mathbf {f}}\) if they have the same number of tree components and each tree of \({\mathbf {f}}'\) extends the corresponding tree of \({\mathbf {f}}\).
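For readers who prefer a computational rendering, the relation \(\vdash \) amounts to a short recursion. The nested (type, subtrees) encoding below is our own convenience; the text itself works with label sets in \({\mathcal {U}}\).

```python
# A concrete rendering of the extension relation (our own encoding,
# not the text's formalism): a K-type tree is a pair (type, [subtrees]).
def extends(tp, t):
    """Return True iff tp |- t, i.e., tp grafts trees onto the leaves of t."""
    type_p, children_p = tp
    type_t, children_t = t
    if type_p != type_t:                # types must agree on t
        return False
    if not children_t:                  # a leaf of t: anything may hang below it
        return True
    if len(children_p) != len(children_t):
        return False                    # internal vertices of t keep their offspring
    return all(extends(cp, ct) for cp, ct in zip(children_p, children_t))

t  = (1, [(2, []), (1, [(2, [])])])
t1 = (1, [(2, [(1, [])]), (1, [(2, [(2, []), (1, [])])])])

assert extends(t1, t)               # t1 |- t
assert not extends(t, t1)           # but not conversely
assert extends(t, t)                # |- is reflexive
assert not extends(t1, (2, []))     # root types differ
```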

The extension relation behaves well with respect to Galton–Watson random forests. For example, the following is immediate from the branching property:

Lemma 4.2

If \((\mathbf {f,e})\) is a finite forest and \({\mathbf {w}}\) the list of types of the roots of its components, then

$$\begin{aligned} \mathbb {P}^{({\mathbf {w}})}({\mathbf {F}}\vdash {\mathbf {f}})=\prod _{u\in {\mathbf {f}} {\setminus }\partial {\mathbf {f}}}\zeta ^{({\mathbf {e}}(u))}({\mathbf {w}}_{{\mathbf {f}}}(u)) \end{aligned}$$

Moreover, we have a generalization of the branching property: conditionally on \({\mathbf {F}}\vdash {\mathbf {f}}\), \({\mathbf {F}}\) is obtained by appending independent trees at the leaves of \({\mathbf {f}}\), and for every such leaf v, the tree grafted at v has distribution \(\mathbb {P}^{({\mathbf {e}}(v))}\).

For infinite trees, we get a generalization of (3.1):

Lemma 4.3

If \(({\mathbf {f,e}})\) is a finite forest, let \({\mathbf {x}}\) be the word formed by the types of the leaves of \({\mathbf {f}}\) in lexicographical order. We have

$$\begin{aligned} \widehat{\mathbb {P}}^{({\mathbf {w}})}({\mathbf {F}}\vdash {\mathbf {f}})=\frac{Z_{{\mathbf {x}}}}{Z_{{\mathbf {w}}}}\mathbb {P}^{({\mathbf {w}})}({\mathbf {F}}\vdash {\mathbf {f}}). \end{aligned}$$

Proof

Let n be the height of \({\mathbf {f}}\). Any forest of height n which extends \({\mathbf {f}}\) can be obtained by adding after each leaf u of \({\mathbf {f}}\) a tree with height smaller than \(n-|u|\). Let \(u_1,\ldots ,u_p\) be the leaves of \({\mathbf {f}}\) and \(e_1,\ldots ,e_p\) be their types; for each i, we will append a tree \(({\mathbf {t}}^i,{\mathbf {e}}^i)\) to the leaf \(u_i\) and call the resulting forest \((\tilde{{\mathbf {f}}},\tilde{{\mathbf {e}}})\), implicitly a function of \({\mathbf {f}}\) and \({\mathbf {t}}^1,\ldots ,{\mathbf {t}}^p\). Thus, recalling the notation X for the martingale defined in Eq. (2.2),

$$\begin{aligned} \widehat{\mathbb {P}}^{({\mathbf {w}})}({\mathbf {F}}\vdash {\mathbf {f}})&=\sum _{{\mathbf {t}}^1,\ldots ,{\mathbf {t}}^p} \sum _{v\in \tilde{{\mathbf {f}}}_n}\frac{ b_{\tilde{{\mathbf {e}}}(v)}}{Z_{{\mathbf {w}}}} \mathbb {P}^{({\mathbf {w}})}\big (({\mathbf {F}}_{\leqslant n},{\mathbf {E}}_{\leqslant n})=(\tilde{{\mathbf {f}}},\tilde{{\mathbf {e}}})\big )\\&=\sum _{{\mathbf {t}}^1,\ldots ,{\mathbf {t}}^p} \sum _{i=1}^p\sum _{v\in {\mathbf {t}}^i_{n-|u_i|}}\frac{ b_{{\mathbf {e}}^i(v)}}{Z_{{\mathbf {w}}}}\mathbb {P}^{({\mathbf {w}})}({\mathbf {F}}\vdash {\mathbf {f}})\prod _{l=1}^p\mathbb {P}^{(e_l)}({\mathbf {T}}_{\leqslant n-|u_l|}={\mathbf {t}}^l)\\&=\frac{\mathbb {P}^{({\mathbf {w}})}({\mathbf {F}}\vdash {\mathbf {f}})}{Z_{{\mathbf {w}}}}\sum _{i=1}^p \sum _{{\mathbf {t}}^1,\ldots ,{\mathbf {t}}^p}\sum _{v\in {\mathbf {t}}^i_{n-|u_i|}}b_{{\mathbf {e}}^i(v)}\prod _{l=1}^p\mathbb {P}^{(e_l)}({\mathbf {T}}_{\leqslant n-|u_l|}={\mathbf {t}}^l)\\&=\frac{\mathbb {P}^{({\mathbf {w}})}({\mathbf {F}}\vdash {\mathbf {f}})}{Z_{{\mathbf {w}}}}\sum _{i=1}^p {\mathbb {E}}^{(e_i)}[X_{n-|u_i|}] \\&=\frac{\mathbb {P}^{({\mathbf {w}})}({\mathbf {F}}\vdash {\mathbf {f}})}{Z_{{\mathbf {w}}}}\sum _{i=1}^p b_{e_i} \\&=\frac{Z_{{\mathbf {x}}}}{Z_{{\mathbf {w}}}}\mathbb {P}^{({\mathbf {w}})}({\mathbf {F}}\vdash {\mathbf {f}}). \end{aligned}$$

\(\square \)

4.2.5 The Case Where There is a Tree \({\mathbf {t}}\) such that \(\mathbb {P}^{(1)}({\mathbf {T}}\vdash {\mathbf {t}})>0\) and \({\mathbf {w}}\) is the Word Formed by the Leaves of \({\mathbf {t}}\)

Let \(({\mathbf {t}},{\mathbf {e}})\) be a tree with root of type 1 such that \(\mathbb {P}^{(1)}({\mathbf {T}}\vdash {\mathbf {t}})>0\), and let \({\mathbf {w}}\) be the word formed by the types of the leaves of \({\mathbf {t}}\); we will prove \((H_{{\mathbf {w}}})\). We first need an intermediate lemma.

Lemma 4.4

There exists a countable family of trees \({\mathbf {t}}^{(2)},{\mathbf {t}}^{(3)},\ldots \) such that, for any K-type tree \({\mathbf {t}}'\) with root of type 1:

  • either \({\mathbf {t}}\vdash {\mathbf {t}}'\).

  • or \({\mathbf {t}}'\vdash {\mathbf {t}}\).

  • or there is a unique i such that \({\mathbf {t}}'\vdash {\mathbf {t}}^{(i)}\).

Proof

For all \(k\in \{2,3,\ldots ,ht({\mathbf {t}})\}\), take all the trees \({\mathbf {t'}}\) which have height k and which satisfy both \({\mathbf {t'}}_{\leqslant k-1}={\mathbf {t}}_{\leqslant k-1}\) and \({\mathbf {t'}}_k\ne {\mathbf {t}}_k.\) There are countably many of them, and we can therefore call them \(({\mathbf {t}}^{(i)})_{i\geqslant 2}\) in any order. Now for any K-type tree \({\mathbf {t'}}\) with root of type 1, by considering the largest integer k such that \({\mathbf {t'}}_{\leqslant k-1}={\mathbf {t}}_{\leqslant k-1}\), we directly obtain that, if neither of \({\mathbf {t}}\) and \({\mathbf {t'}}\) extends the other, then \({\mathbf {t}}'\) extends one of the \({\mathbf {t}}^{(i)}\). \(\square \)

Now let \({\mathbf {t}}^{(1)}={\mathbf {t}}\), and, for all \(i\in \mathbb {N}\), let also \({\mathbf {w}}^i\) be the word formed by the types of the leaves of \({\mathbf {t}}^{(i)}\). Write

$$\begin{aligned} \mathbb {P}^{(1)}(\#_1 {\mathbf {T}}=1+dn)&=\sum _{i=1}^{\infty }\mathbb {P}^{(1)}({\mathbf {T}}\vdash {\mathbf {t}}^{(i)},\#_1 {\mathbf {T}}=1+dn) \\&\quad + \mathbb {P}^{(1)}( {\mathbf {t}}\vdash {\mathbf {T}},{\mathbf {t}}\ne {\mathbf {T}},\#_1 {\mathbf {T}}=1+dn)\\&=\sum _{i=1}^{\infty }\mathbb {P}^{(1)}({\mathbf {T}}\vdash {\mathbf {t}}^{(i)})\mathbb {P}^{({\mathbf {w}}^i)}(\#_1{\mathbf {F}}=1-q^{(i)}+dn)\\&\quad + \mathbb {P}^{(1)}( {\mathbf {t}}\vdash {\mathbf {T}},{\mathbf {t}}\ne {\mathbf {T}},\#_1 {\mathbf {T}}=1+dn) \end{aligned}$$

where \(q^{(i)}\) is the number of vertices of type 1 of \({\mathbf {t}}^{(i)}\) which are not leaves. Divide by \(\mathbb {P}^{(1)}(\#_1 {\mathbf {T}}=1+dn)\) on both sides of the equation to obtain

$$\begin{aligned} \sum _{i=1}^{\infty }\mathbb {P}^{(1)}({\mathbf {T}}\vdash {\mathbf {t}}^{(i)})\frac{\mathbb {P}^{({\mathbf {w}}^i)}(\#_1{\mathbf {F}}=1-q^{(i)}+dn)}{\mathbb {P}^{(1)}(\#_1 {\mathbf {T}}=1+dn)}+\mathbb {P}^{(1)}( {\mathbf {t}}\vdash {\mathbf {T}}\;| \;\#_1 {\mathbf {T}}=1+dn) =1 \end{aligned}$$
(4.2)

Note that

$$\begin{aligned} \mathbb {P}^{(1)}( {\mathbf {t}}\vdash {\mathbf {T}}\;| \;\#_1 {\mathbf {T}}=1+dn) \end{aligned}$$

is equal to 0 for n large enough, since \({\mathbf {t}}\) is finite.

By the results of Sect. 2.3, we have \(1-q^{(i)}\equiv \alpha _{{\mathbf {w}}^i} \pmod d\) for all \(i\in \mathbb {N}\), and thus, using the lower bound (4.1), we have

$$\begin{aligned} \underset{n\rightarrow \infty }{\liminf }\, \mathbb {P}^{(1)}({\mathbf {T}}\vdash {\mathbf {t}}^{(i)})\frac{\mathbb {P}^{({\mathbf {w}}^i)}(\#_1{\mathbf {F}}=1-q^{(i)}+dn)}{\mathbb {P}^{(1)}(\#_1 {\mathbf {T}}=1+dn)} \geqslant \mathbb {P}^{(1)}({\mathbf {T}}\vdash {\mathbf {t}}^{(i)})\frac{Z_{{\mathbf {w}}^i}}{b_1} \end{aligned}$$

for all \(i\in \mathbb {N}\). However, by Lemmas 4.3 and 4.4, we have

$$\begin{aligned} \sum _{i=1}^{\infty } \mathbb {P}^{(1)}({\mathbf {T}}\vdash {\mathbf {t}}^{(i)})\frac{Z_{{\mathbf {w}}^i}}{b_1} = \sum _{i=1}^{\infty } \widehat{\mathbb {P}}^{(1)}({\mathbf {T}}\vdash {\mathbf {t}}^{(i)}) =1, \end{aligned}$$

and thus, whenever \(\mathbb {P}^{(1)}({\mathbf {T}}\vdash {\mathbf {t}}^{(i)})\) is nonzero, we must have

$$\begin{aligned} \underset{n\rightarrow \infty }{\limsup }\,\frac{\mathbb {P}^{({\mathbf {w}}^i)}(\#_1{\mathbf {F}}=1-q^{(i)}+dn)}{\mathbb {P}^{(1)}(\#_1 {\mathbf {T}}=1+dn)} \leqslant \frac{Z_{{\mathbf {w}}^i}}{b_1}, \end{aligned}$$

which ends the proof of \((H_{{\mathbf {w}}})\).

4.2.6 Removing One Element from \({\mathbf {w}}\)

Lemma 4.5

Let \({\mathbf {w}}\in \mathcal {W}_K\) be such that \((H_{{\mathbf {w}}})\) holds. Let m be any integer in \([|{\mathbf {w}}|]\), and let \(\tilde{{\mathbf {w}}}\) be the word obtained from \({\mathbf {w}}\) by removing \(w_m\). Then \((H_{\tilde{{\mathbf {w}}}})\) also holds.

Proof

For \(n\in \mathbb {N}\), we split the event \(\{\#_1{\mathbf {F}}=\alpha _{{\mathbf {w}}}+dn\}\) according to the first and second generations of type 1 in the mth tree of the forest. By calling k the number of vertices in the first generation of type 1 issued from the mth tree, and then \(i_1,\ldots ,i_k\) the numbers of vertices in the first generation of type 1 of each corresponding subtree, we have

$$\begin{aligned}&\mathbb {P}^{({\mathbf {w}})}(\#_1{\mathbf {F}} =\alpha _{{\mathbf {w}}}+dn)\\&\quad = \sum _{k}\mu _{w_m,1}(k)\sum _{i_1,\ldots ,i_k}\left( \prod _{r=1}^k\mu _{1,1}(i_r)\right) \mathbb {P}^{(\tilde{{\mathbf {w}}}^{i_1+\cdots +i_k})}(\#_1{\mathbf {F}}= \alpha _{{\mathbf {w}}}-k-{\mathbbm {1}}_{\{w_m=1\}}+dn) \end{aligned}$$

where, for \(s\in \mathbb {Z}_+\), \(\tilde{{\mathbf {w}}}^{s}\) is the word \({\mathbf {w}}\) where \(w_m\) has been replaced by 1, repeated s times. Note that the term of the sum where \(k=0\) is to be interpreted as \(\mathbb {P}^{(\tilde{{\mathbf {w}}})}(\#_1{\mathbf {F}}=\alpha _{{\mathbf {w}}}- {\mathbbm {1}}_{\{w_m=1\}}+dn)\).

We now use the same argument as at the end of the previous section: we first divide by \(\mathbb {P}^{({{\mathbf {w}}})}(\#_1{\mathbf {F}}=\alpha _{{\mathbf {w}}}+dn)\) to get

$$\begin{aligned} \sum _{k}\mu _{w_m,1}(k)\sum _{i_1,\ldots ,i_k}\prod _{r=1}^k\mu _{1,1}(i_r)\, \frac{\mathbb {P}^{(\tilde{{\mathbf {w}}}^{i_1+\cdots +i_k})}(\#_1{\mathbf {F}}=\alpha _{{\mathbf {w}}} -k-{\mathbbm {1}}_{\{w_m=1\}}+dn)}{\mathbb {P}^{({{\mathbf {w}}})}(\#_1{\mathbf {F}}=\alpha _{{\mathbf {w}}}+dn)}=1. \end{aligned}$$

For each choice of k and \(i_1,\ldots ,i_k\), using lower bound (4.1) as well as \((H_{{\mathbf {w}}})\), we have

$$\begin{aligned}&\underset{n\rightarrow \infty }{\liminf }\,\mu _{w_m,1}(k)\prod _{r=1}^k\mu _{1,1}(i_r)\frac{\mathbb {P}^{(\tilde{{\mathbf {w}}}^{i_1+\cdots +i_k})}(\#_1{\mathbf {F}}=\alpha _{{\mathbf {w}}}-k-\mathbbm {1}_{\{w_m=1\}}+dn)}{\mathbb {P}^{({{\mathbf {w}}})}(\#_1{\mathbf {F}}=\alpha _{{\mathbf {w}}}+dn)}\\&\quad \geqslant \mu _{w_m,1}(k)\prod _{r=1}^k\mu _{1,1}(i_r)\frac{Z_{\tilde{{\mathbf {w}}}}+\sum _{r=1}^k i_r}{Z_{{\mathbf {w}}}}. \end{aligned}$$

A repeated use of point (i) of Proposition 2.1 shows that these add up to 1, and thus, for k and \(i_1,\ldots ,i_k\) such that \(\mu _{w_m,1}(k)\prod _{r=1}^k\mu _{1,1}(i_r)\ne 0\), we do have

$$\begin{aligned} \underset{n\rightarrow \infty }{\lim }\frac{\mathbb {P}^{(\tilde{{\mathbf {w}}}^{i_1+\cdots +i_k})}(\#_1{\mathbf {F}}=\alpha _{{\mathbf {w}}}-k- {\mathbbm {1}}_{\{w_m=1\}}+dn)}{\mathbb {P}^{({\mathbf {w}})} (\#_1{\mathbf {F}}=\alpha _{{\mathbf {w}}}+dn)}=\frac{Z_{\tilde{{\mathbf {w}}}} +\sum _{r=1}^k i_r}{Z_{{\mathbf {w}}}}. \end{aligned}$$

By irreducibility, one can find k such that \(\mu _{w_m,1}(k)\ne 0\), and by criticality one has \(\mu _{1,1}(0)\ne 0\), meaning that we can take \(i_1,\ldots ,i_k\) all equal to zero, and this ends the proof. \(\square \)

4.2.7 End of the Proof

By applying Lemma 4.5 repeatedly and using the fact that \((H_{{\mathbf {w}}})\) stays true if we permute the terms of \({\mathbf {w}}\), we obtain that, if \({\mathbf {w}}\) and \({\mathbf {w}}'\) are two words such that every type appears at most as many times in \({\mathbf {w}}'\) as in \({\mathbf {w}}\), then \((H_{{\mathbf {w}}})\) implies \((H_{{\mathbf {w}}'})\). Thus, by Sect. 4.2.5, we now only need to show the following lemma.

Lemma 4.6

For all nonnegative integers \(n_1,\ldots ,n_K\), there exists a K-type tree \(({\mathbf {t}},{\mathbf {e}})\) which has more than \(n_i\) leaves of type i for all \(i\in [K]\), and such that \(\mathbb {P}^{(1)}({\mathbf {T}}\vdash {\mathbf {t}})>0\).

Proof

The first step is showing that, for p large enough, the pth generation of type 1 of \({\mathbf {T}}\) has positive probability of having more than \(n_1+\cdots +n_K\) vertices, where the pth generation of type 1 is the set of vertices of type 1 which have exactly p ancestors of type 1, including the root. This is immediate because the average of \(\mu _{1,1}\) is 1 and the tree is not degenerate; thus, each generation of type 1 has positive probability of being strictly larger than the previous one.

Irreducibility then tells us that, after each vertex of the pth generation of type 1, there is a positive probability of finding a vertex of type i for any i. \(\square \)

4.3 Proving \((H_{{\mathbf {w}}})\) When \(\zeta \) is Regular Critical

We now take general \(\gamma \) and assume that \(\zeta \) is regular critical. Our aim here is to prove the following refinement of \((H_{{\mathbf {w}}})\): for all \({\mathbf {w}}\in \mathcal {W}_K\),

$$\begin{aligned} \mathbb {P}^{({\mathbf {w}})}\Big (|{\mathbf {F}}|_{\gamma }=\alpha _{{\mathbf {w}}}+dn\Big ) \underset{n\rightarrow \infty }{\sim }Z_{{\mathbf {w}}}\sqrt{\frac{\gamma \cdot {\mathbf {a}}}{2\pi d \sigma ^2 n^3}},\qquad \qquad \qquad \qquad \qquad \qquad \qquad (H'_{{\mathbf {w}}}) \end{aligned}$$

where \({\mathbf {a}}\) is the left eigenvector of the mean matrix M, and \(\sigma ^2\) was defined in Sect. 2.2. The actual values do not matter much; the important part is that the right-hand side is \(Z_{{\mathbf {w}}}\) divided by \(n^{3/2}\), times a constant which does not depend on \({\mathbf {w}}\). We will prove this by using analytic methods, notably the smooth implicit-function schema (see [11], Section VII.4, and [22]).

4.3.1 Proving \((H'_{{\mathbf {w}}})\) for One-Letter Words

Let \(i\in [K]\) and, for appropriate \(z\in {\mathbb {C}}\), let

$$\begin{aligned} \psi _i(z)= {\mathbb {E}}^{(i)}\big [z^{|{\mathbf {T}}|_{\gamma }}\big ]=\sum _{n\in \mathbb {Z}_+} \mathbb {P}^{(i)}\big (|{\mathbf {T}}|_{\gamma }=n\big )z^n. \end{aligned}$$

This power series has nonnegative coefficients and, since \(\zeta \) is critical, its radius of convergence is 1. This is because \(\psi _i(1)=1\) (since \(\psi _i\) is the generating function of a probability distribution) and \(\psi '_i(1)=\infty \) (Lemma 2.1). We let \({\mathbb {D}}\) be the open unit disk. The periodicity structure of Sect. 2.3 lets us rewrite \(\psi _i\) in a more precise way: there exists another power series \(\phi _i\) such that

$$\begin{aligned} \forall z\in {\mathbb {D}}, \psi _i(z)=z^{\alpha _i}\phi _i(z^d), \end{aligned}$$

and all but finitely many of the coefficients of \(\phi _i\) are strictly positive. Our aim is then to show that the coefficient of \(z^n\) in \(\phi _i\) behaves like \(n^{-3/2}\) as n tends to infinity.

Recall from Sect. 2.2 the distribution \(\mathbb {P}^{(i)}_{\text {cut}_i}\) of the Galton–Watson tree cut at its first generation of type i. Given such a cut tree \({\mathbf {t}}\), we call \(p_{{\mathbf {t}}}\) its number of leaves of type i. We obtain from the Galton–Watson construction the following equation:

$$\begin{aligned} \psi _i(z)=z^{\gamma _i}\sum _{{\mathbf {t}}}\mathbb {P}^{(i)}_{\text {cut}_i}({\mathbf {t}}) z^{\sum _{j\ne i}\gamma _j\#_{j}({\mathbf {t}})} \big (\psi _i(z)\big )^{p_{{\mathbf {t}}}}. \end{aligned}$$

This can be refined with the periodicity structure: we know from Proposition 2.2 that, if \(\mathbb {P}^{(i)}_{\text {cut}_i}({\mathbf {t}})>0\), then \(\alpha _i\equiv \gamma _i+\sum _{j\ne i}\gamma _j\#_{j}({\mathbf {t}})+p_{{\mathbf {t}}}\alpha _i\pmod d\). We let \(n_{{\mathbf {t}}}\in \mathbb {Z}_+\) be such that \(\gamma _i+\sum _{j\ne i}\gamma _j\#_{j}({\mathbf {t}})+p_{{\mathbf {t}}}\alpha _i=\alpha _i+n_{{\mathbf {t}}}d\), and then obtain

$$\begin{aligned} z^{\alpha _i}\phi _i(z^d)=\sum _{{\mathbf {t}}}\mathbb {P}^{(i)}_{\text {cut}_i}({\mathbf {t}}) z^{\alpha _i+dn_{{\mathbf {t}}}}\big (\phi _i(z^d)\big )^{p_{{\mathbf {t}}}}, \end{aligned}$$

which reduces to

$$\begin{aligned} \phi _i(z)=\sum _{{\mathbf {t}}}\mathbb {P}^{(i)}_{\text {cut}_i}({\mathbf {t}}) z^{n_{{\mathbf {t}}}}\big (\phi _i(z)\big )^{p_{{\mathbf {t}}}}. \end{aligned}$$

The function \(\phi _i\) thus solves

$$\begin{aligned} \phi _i(z) = G\big (z,\phi _i(z)\big ) \end{aligned}$$

where, for appropriate z and w,

$$\begin{aligned} G(z,w)={\mathbb {E}}^{(i)}_{\text {cut}_i}\big [ z^{n_{{\mathbf {T}}}}w^{p_{{\mathbf {T}}}}\big ]. \end{aligned}$$

We will now apply the smooth implicit-function schema, as stated in [11], Theorem VII.3. We first have to check several conditions on the double power series \(G(z,w)=\sum _{m,n}g_{m,n}z^mw^n\), whose coefficients are nonnegative.

  • We show that G is analytic in a domain \(\{|z|<R, |w|<R\}\) with \(R>1\). Because of regular criticality and Proposition 2.1, the numbers of vertices of each type lying before or on the first generation of type i are exponentially integrable variables (in the sense of the “Appendix”), and thus, their sum, the total number of vertices lying before or on the first generation of type i, is also exponentially integrable. Thus, there exists \(z>1\) such that \({\mathbb {E}}^{(i)}_{\text {cut}_i}\big [ z^{\#\mathbf {T}}\big ]<\infty \), and then, bounding \(|\mathbf {T}|_{\gamma }\) by \(\gamma _{\max } \#\mathbf {T}\) (where \(\gamma _{\max }\) is the highest value of \(\gamma _i,\) \(i\in [K]\)) and rewriting \(n_{{\mathbf {T}}}\) in terms of \(|\mathbf {T}|_{\gamma }\), we obtain \(R>1\) such that \(G(R,R)<\infty \).

  • Unlike in the assumptions of [11], it is possible that \(g_{0,0}=0\) (e.g., if \(\gamma _i=0\) and an individual of type i can die without giving birth to any offspring), but this is just an unneeded normalization assumption. We do know, however, that \(g_{0,1}\ne 1\) and that \(g_{0,n}\ne 0\) for some \(n\geqslant 2\), since the measure \(\mu _{i,i}\) has expected value 1 and nonzero variance.

  • The pair \((r,s)=(1,1)\) lies inside the domain of analyticity of G and satisfies the so-called characteristic system

    $$\begin{aligned} G(r,s)=s \quad \text {and}\quad \partial _{w}G(r,s)=1. \end{aligned}$$

    Of course, in our setting, we are just saying that the coefficients of G sum up to 1 and that the average of \(\mu _{i,i}\) is 1, which we know from Proposition 2.1.

Knowing all of this and the fact that \(\phi _i\) is aperiodic (in the sense of [11], since all but finitely many of its coefficients are nonzero), the smooth implicit-function schema gives us the following estimate for the coefficient of \(z^n\) in \(\phi _i\):

$$\begin{aligned} \mathbb {P}^{(i)}\Big (|{\mathbf {F}}|_{\gamma }=\alpha _i+dn\Big ) \underset{n\rightarrow \infty }{\sim }\sqrt{\frac{\partial _zG(1,1)}{2\pi \partial ^2_{ww}G(1,1)n^3}}. \end{aligned}$$

Proposition 2.1 gives us the wanted values for the partial derivatives:

$$\begin{aligned} \partial _zG(1,1)&={\mathbb {E}}_{\text {cut}_i}\big [n_{\mathbf {T}}\big ]\\&=\frac{1}{d}{\mathbb {E}}_{\text {cut}_i}\big [\gamma _i+(p_{\mathbf {T}}-1)\alpha _i+\sum _{j\ne i}\gamma _j\#_{j}({\mathbf {T}})\big ]\\&=\frac{1}{d}\Big (\gamma _i+0+\sum _{j\ne i}\frac{\gamma _j a_j}{a_i}\Big )\\&=\frac{\gamma \cdot {\mathbf {a}}}{da_i}, \end{aligned}$$

and

$$\begin{aligned} \partial ^{2}_{ww}G(1,1)&={\mathbb {E}}_{\text {cut}_i}\big [p_{\mathbf {T}}(p_{\mathbf {T}}-1)\big ]\\&=\frac{\sigma ^2}{a_ib_i^2}, \end{aligned}$$

and this ends our proof. \(\square \)
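The schema can be sanity-checked numerically on a classical example rather than on the mobile generating functions themselves: \(G(z,w)=1+zw^2\) satisfies the conditions above (its characteristic system gives \(r=1/4\), \(s=2\)), its solution \(\phi \) is the Catalan generating function, and the schema predicts \(C_n\sim 4^n/(\sqrt{\pi }\,n^{3/2})\). A short check, illustrative only:

```python
# Sanity check of the n^{-3/2} singularity exponent on the Catalan example
# phi(z) = 1 + z*phi(z)^2 (an illustrative case, not the mobile GF itself).
import math

n = 2000
# log of the exact Catalan number C_n = binom(2n, n) / (n + 1)
log_exact = math.lgamma(2 * n + 1) - 2 * math.lgamma(n + 1) - math.log(n + 1)
# log of the predicted asymptotics 4^n / (sqrt(pi) * n^{3/2})
log_pred = n * math.log(4) - 0.5 * math.log(math.pi) - 1.5 * math.log(n)
ratio = math.exp(log_exact - log_pred)  # tends to 1 as n grows
```

Working in log scale (via `lgamma`) avoids overflowing floating-point arithmetic with \(4^{2000}\).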

4.3.2 Moving on to General Words

The general case of \((H'_{{\mathbf {w}}})\) follows from the following lemma:

Lemma 4.7

Let \(a>1\) and let X and Y be two independent integer-valued random variables such that

$$\begin{aligned} \mathbb {P}(X=n)\underset{n\rightarrow \infty }{\sim }\frac{C_X}{n^a} \quad \text {and} \quad \mathbb {P}(Y=n)\underset{n\rightarrow \infty }{\sim }\frac{C_Y}{n^a}. \end{aligned}$$

Then we also have

$$\begin{aligned} \mathbb {P}(X+Y=n)\underset{n\rightarrow \infty }{\sim }\frac{C_X+C_Y}{n^a}. \end{aligned}$$

Proof

We will separately show that

$$\begin{aligned} \underset{n\rightarrow \infty }{\limsup }\; n^a\mathbb {P}(X+Y=n)\leqslant C_X+C_Y \end{aligned}$$

and

$$\begin{aligned} \underset{n\rightarrow \infty }{\liminf }\; n^a\mathbb {P}(X+Y=n)\geqslant C_X+C_Y. \end{aligned}$$

For \(n\in \mathbb {Z}_+\), let \(x_n=\mathbb {P}(X=n)\), \(y_n=\mathbb {P}(Y=n)\) and \(z_n=\mathbb {P}(X+Y=n)=\sum _{k=0}^nx_ky_{n-k}\). Cut the sum the following way:

$$\begin{aligned} z_n= \sum _{k=0}^{K} x_ky_{n-k} + \sum _{k=K+1}^{n-K-1} x_ky_{n-k} + \sum _{k=n-K}^{n} x_ky_{n-k}. \end{aligned}$$
(4.3)

For the lower bound, let \(\varepsilon >0\), and choose K large enough that \(\sum _{k=0}^K x_k\geqslant (1-\varepsilon )\), \(\sum _{k=0}^K y_k\geqslant (1-\varepsilon )\) and, for n larger than K, \(x_n\geqslant (1-\varepsilon )C_X n^{-a}\) and \(y_n\geqslant (1-\varepsilon )C_Y n^{-a}\). Now take \(n> 2K\). In the first sum, use \(y_{n-k}\geqslant (1-\varepsilon )C_Y n^{-a}\) (valid since \(n-k> K\) and \((n-k)^{-a}\geqslant n^{-a}\)), and in the third, use \(x_k\geqslant (1-\varepsilon )C_X n^{-a}\) to obtain

$$\begin{aligned} z_n \geqslant (1-\varepsilon )n^{-a} \big (C_Y\sum _{k=0}^K x_k+C_X\sum _{k=0}^K y_k\big )\geqslant (1-\varepsilon )^2n^{-a}(C_X+C_Y). \end{aligned}$$

Taking n to infinity, we get

$$\begin{aligned} \underset{n\rightarrow \infty }{\liminf }\; n^a z_n\geqslant (1-\varepsilon )^2(C_X+C_Y), \end{aligned}$$

and letting \(\varepsilon \) tend to 0 gives us the lower bound.

The upper bound will require more work. Let \(0<\varepsilon <1/2\); we do the same cut as in Eq. (4.3), but with a varying K, equal to \(\lfloor \varepsilon n\rfloor .\) Take n large enough that, for \(k\geqslant \lfloor \varepsilon n\rfloor \), \(k^{a}x_k\leqslant (1+\varepsilon )C_X\) and \(k^{a}y_k\leqslant (1+\varepsilon )C_Y.\) Write in the first sum \(y_{n-k}\leqslant (1+\varepsilon )C_Y(n-\lfloor \varepsilon n\rfloor )^{-a}\) and in the third one \(x_k\leqslant (1+\varepsilon )C_X(n-\lfloor \varepsilon n\rfloor )^{-a}\), while for the middle one we use \(x_ky_{n-k}\leqslant (1+\varepsilon )^2C_XC_Y \lfloor \varepsilon n\rfloor ^{-2a}\). We then have

$$\begin{aligned} z_n&\leqslant \sum _{k=0}^{\lfloor \varepsilon n\rfloor }x_k(1+\varepsilon )(n-\lfloor \varepsilon n\rfloor )^{-a} C_Y+\sum _{k=0}^{\lfloor \varepsilon n\rfloor }y_k(1+\varepsilon )(n-\lfloor \varepsilon n\rfloor )^{-a} C_X \\&\quad + \sum _{k=0}^n (1+\varepsilon )^2C_XC_Y\lfloor \varepsilon n\rfloor ^{-2a} \\&\leqslant (1+\varepsilon )(C_X+C_Y)(n-\lfloor \varepsilon n\rfloor )^{-a}+(1+\varepsilon )^2C_XC_Y\, n\,\lfloor \varepsilon n\rfloor ^{-2a}. \end{aligned}$$

Since \(a>1,\) we have \(1-2a<-a,\) and thus, the last term is negligible compared to \(n^{-a}\). Hence, \(\underset{n\rightarrow \infty }{\limsup }\; n^az_n\leqslant (1+\varepsilon )(1-\varepsilon )^{-a}(C_X+C_Y),\) and letting \(\varepsilon \) tend to 0 gives us the wanted bound. \(\square \)

The case \(a=3/2\) coupled with a simple induction then proves \((H'_{{\mathbf {w}}})\) for general \({\mathbf {w}}\in \mathcal {W}_K\).
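Lemma 4.7 can also be illustrated numerically (with two hypothetical truncated power-law distributions and \(a=3/2\), chosen only for this test):

```python
# Numerical check of Lemma 4.7 with a = 3/2 on two made-up distributions:
# P(X = n) ∝ n^{-a} and P(Y = n) ∝ (n + 10)^{-a}, for n = 1..N (truncated).
import numpy as np

a = 1.5
N = 100_000
n = np.arange(1, N + 1, dtype=float)

Sx = (n ** -a).sum()
Sy = ((n + 10.0) ** -a).sum()
x = n ** -a / Sx                # P(X = k); tail constant C_X = 1 / Sx
y = (n + 10.0) ** -a / Sy       # same exponent, different constant C_Y = 1 / Sy
C_X, C_Y = 1.0 / Sx, 1.0 / Sy

# convolution via FFT (zero-padded, so no wraparound): z[j] = P(X + Y = j + 2)
L = 1
while L < 2 * N:
    L *= 2
z = np.fft.irfft(np.fft.rfft(x, L) * np.fft.rfft(y, L), L)

n0 = 20_000
ratio = z[n0 - 2] * n0 ** a / (C_X + C_Y)   # should be close to 1
```

The ratio is only close to 1 up to lower-order corrections, so the test below uses a loose tolerance.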

5 Background on Random Planar Maps

5.1 Planar Maps

As stated in the Introduction, a planar map is a proper embedding m of a finite connected planar graph in the sphere, in the sense that edges do not intersect. These are taken up to orientation-preserving homeomorphisms of the sphere, thus making them combinatorial objects. We call faces of a map m the connected components of its complement in the sphere, and let \({\mathcal {F}}_m\) be their set. The degree of a face f, denoted by \(\deg (f)\), is the number of edges it is adjacent to, counting multiplicity: we count every edge as many times as we encounter it when circling around f. The numbers of vertices, edges and faces of a map are respectively denoted by \(\#V(m)\), \(\#E(m)\) and \(\#F(m)\). Finally, the graph distance on m is denoted by d.

We are going to look at maps which are both rooted and pointed. These are triplets (m, e, r), where m is a planar map, e is an oriented edge of m called the root edge, starting at a vertex \(e^-\) and pointing to a vertex \(e^+\), and r is a vertex of m. We call \(\mathcal {M}\) the set of all such maps and \(\mathcal {M}_n\) the set of such maps with n vertices for \(n\in \mathbb {N}\). A map (m, e, r) will be called positive (resp. null, negative) if \(d(r,e^+)=d(r,e^-)+1\) (resp. \(d(r,e^-)\), \(d(r,e^-)-1\)). We call \(\mathcal {M}^+\), \(\mathcal {M}^0\) and \(\mathcal {M}^-\) the corresponding sets of maps and, for \(n\in \mathbb {N}\), \(\mathcal {M}_n^+\), \(\mathcal {M}_n^0\) and \(\mathcal {M}_n^-\) the corresponding sets of maps which have n vertices. Since there is a trivial bijection between positive and negative maps, we will mostly restrict ourselves to \(\mathcal {M}^+\) and \(\mathcal {M}^0\). By convention, we add to \(\mathcal {M}^+\) the vertex map \(\dagger \), which consists of one vertex, no edges and one face.

5.2 Boltzmann Distributions

Let \({\mathbf {q}}=(q_n)_{n\in \mathbb {N}}\) be a sequence of nonnegative numbers such that there exists \(i\geqslant 3\) with \(q_i>0\). For any map m, let

$$\begin{aligned} W_{{\mathbf {q}}}(m)=\prod _{f\in {\mathcal {F}}_m} q_{\deg (f)}. \end{aligned}$$

Note that this quantity only depends on the map m, and not on the root edge e or the distinguished vertex r. We say that the sequence \({\mathbf {q}}\) is admissible if the sum

$$\begin{aligned} Z_{{\mathbf {q}}}= \sum _{(m,e,r)\in \mathcal {M}} W_{{\mathbf {q}}}(m) \end{aligned}$$

is finite. When \({\mathbf {q}}\) is admissible, we can define the Boltzmann probability distribution \(B_{{\mathbf {q}}}\) by setting, for a pointed rooted map (m, e, r),

$$\begin{aligned} B_{{\mathbf {q}}}(m,e,r)=\frac{W_{{\mathbf {q}}}(m)}{Z_{{\mathbf {q}}}}. \end{aligned}$$

We also introduce the versions of \(B_{{\mathbf {q}}}\) conditioned to be positive or null: let \(Z_{{\mathbf {q}}}^+= \sum _{(m,e,r)\in \mathcal {M}^+} W_{{\mathbf {q}}}(m)\) and \(Z_{{\mathbf {q}}}^0= \sum _{(m,e,r)\in \mathcal {M}^0} W_{{\mathbf {q}}}(m)\) and, for any map (m, e, r), \(B^+_{{\mathbf {q}}}(m,e,r)=\frac{W_{{\mathbf {q}}}(m)}{Z_{{\mathbf {q}}}^+}\) if it is positive and \(B^0_{{\mathbf {q}}}(m,e,r)=\frac{W_{{\mathbf {q}}}(m)}{Z_{{\mathbf {q}}}^0}\) if it is null.
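The weight and normalization above can be illustrated on a toy example, representing each map only by the list of its face degrees (both the finite family and the weight sequence below are made up):

```python
# Toy computation of Boltzmann weights W_q(m) = prod over faces of q_{deg(f)};
# the three "maps" and the weight sequence are purely hypothetical.
from math import prod

q = {3: 0.1, 4: 0.05}                  # q_d on the support, 0 otherwise

def W(face_degrees):
    return prod(q.get(d, 0.0) for d in face_degrees)

family = [[3, 3], [4, 4], [3, 3, 4]]   # face-degree lists of three toy maps
weights = [W(m) for m in family]       # ≈ [0.01, 0.0025, 0.0005]
Z = sum(weights)
probs = [w / Z for w in weights]       # Boltzmann probabilities on the family
```

In the actual definition the sum runs over the infinite set \(\mathcal {M}\), which is why admissibility of \({\mathbf {q}}\) is needed.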

For nonnegative numbers x and y, let

$$\begin{aligned} f^{\bullet }(x,y)=\sum _{k,k'}{\left( {\begin{array}{c}2k+k'+1\\ k+1\end{array}}\right) } {\left( {\begin{array}{c}k+k'\\ k\end{array}}\right) }q_{2+2k+k'}\,x^ky^{k'} \end{aligned}$$

and

$$\begin{aligned} f^{\diamond }(x,y)=\sum _{k,k'}{\left( {\begin{array}{c}2k+k'\\ k\end{array}}\right) } {\left( {\begin{array}{c}k+k'\\ k\end{array}}\right) }q_{1+2k+k'}\,x^ky^{k'}. \end{aligned}$$

It was shown in [24], Proposition 1, that \({\mathbf {q}}\) is admissible if and only if the system

$$\begin{aligned} 1-\frac{1}{x}&=f^{\bullet }(x,y) \end{aligned}$$
(5.1)
$$\begin{aligned} y&=f^{\diamond }(x,y) \end{aligned}$$
(5.2)

has a solution with \(x>1\), such that the spectral radius of the matrix

$$\begin{aligned} \left( \begin{array}{c@{\quad }c@{\quad }c} 0 &{} 0 &{} x-1 \\ \frac{x}{y}\partial _xf^{\diamond }(x,y) &{} \partial _yf^{\diamond }(x,y) &{} 0 \\ \frac{x^2}{x-1}\partial _xf^{\bullet }(x,y) &{} \frac{xy}{x-1}\partial _yf^{\bullet }(x,y) &{} 0 \end{array} \right) \end{aligned}$$

is smaller than or equal to 1. The existence of such a solution implies its uniqueness, with \(x=Z_{{\mathbf {q}}}^+\) and \(y=\sqrt{Z_{{\mathbf {q}}}^0}\). We let \(Z^{\diamond }_{{\mathbf {q}}}=\sqrt{Z_{{\mathbf {q}}}^0}\).

We then say that \({\mathbf {q}}\) is critical if the spectral radius of the aforementioned matrix is exactly 1 and that it is regular critical if, moreover, for some \(\varepsilon >0\), we have \(f^{\bullet }(Z_{{\mathbf {q}}}^++\varepsilon ,Z^{\diamond }_{{\mathbf {q}}}+\varepsilon )<\infty \).
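As a concrete illustration, consider quadrangulations, i.e. only \(q_4>0\). The map is then bipartite, \(Z^{\diamond }_{{\mathbf {q}}}=0\), and \(f^{\bullet }(x,0)=3q_4x\), so Eq. (5.1) becomes the quadratic \(3q_4x^2-x+1=0\). Criticality then corresponds to this quadratic having a double root, which is the classical critical weight \(q_4=1/12\) with \(Z^+_{{\mathbf {q}}}=2\); the sketch below only checks the arithmetic:

```python
# Quadrangulations: q_4 is the only nonzero weight. Admissibility (5.1) reads
# 1 - 1/x = 3*q_4*x, i.e. 3*q_4*x^2 - x + 1 = 0, whose discriminant 1 - 12*q_4
# vanishes exactly at the critical weight q_4 = 1/12, giving Z_q^+ = x = 2.
import math

q4 = 1.0 / 12.0
disc = 1.0 - 12.0 * q4                                # = 0 at criticality
x = (1.0 + math.sqrt(max(disc, 0.0))) / (6.0 * q4)    # double root, = 2
```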

Random Non-pointed Maps We will also occasionally consider rooted maps (m, e) without any specified point r. If \({\mathbf {q}}\) is admissible, we let \(B^{\emptyset }_{{\mathbf {q}}}\) be the probability measure on the set of rooted maps such that, for a rooted map (m, e),

$$\begin{aligned} B^{\emptyset }_{{\mathbf {q}}}(m,e)=\frac{W_{{\mathbf {q}}}(m)}{Z^{\emptyset }_{{\mathbf {q}}}} \end{aligned}$$

where \(Z^{\emptyset }_{{\mathbf {q}}}\) is an appropriate constant.

Note that, if a random rooted and pointed map (M, E, R) has distribution \(B_{{\mathbf {q}}}\), then the distribution of (M, E) (ignoring R) is not \(B^{\emptyset }_{{\mathbf {q}}}\), but \(B^{\emptyset }_{{\mathbf {q}}}\) biased by the number of vertices: if (m, e) is a rooted map with n vertices, then \(\sum _{r\in m} B_{{\mathbf {q}}}(m,e,r)\) is proportional to \(nB^{\emptyset }_{{\mathbf {q}}}(m,e)\). This is because there are exactly n ways of pointing (m, e), and they all lead to different rooted and pointed maps.
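This vertex-biasing relation can be checked on a toy example (the two "maps", their weights and their vertex counts below are all made up):

```python
# Toy check: pointing a rooted map at one of its n vertices biases the
# non-pointed Boltzmann law by n. All data below is hypothetical.
maps = [("m1", 0.20, 3), ("m2", 0.10, 5)]     # (name, W_q(m), #vertices)

# marginal of (M, E) under the pointed law: each of the n pointings of (m, e)
# carries the same weight W_q(m)
Z_pointed = sum(w * k for _, w, k in maps)
pointed_marginal = {name: w * k / Z_pointed for name, w, k in maps}

# non-pointed law B^∅_q, then biased by the number of vertices and renormalized
Z_rooted = sum(w for _, w, _ in maps)
Z_bias = sum((w / Z_rooted) * k for _, w, k in maps)
biased = {name: (w / Z_rooted) * k / Z_bias for name, w, k in maps}
```

Both dictionaries coincide, which is exactly the biasing statement above.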

5.3 The Bouttier–Di Francesco–Guitter Bijection

In [8], a bijection was exhibited between rooted and pointed maps and a certain class of 4-type labelled trees called mobiles. Let us quickly recall the facts here, with a few variations to make the bijection more adapted to our study.

5.3.1 Mobiles

A finite spatial 4-type tree \(({\mathbf {t}},{\mathbf {e}},{\mathbf {l}})\) is called a mobile if the types satisfy the following conditions:

  • The root has type 1 or 2,

  • The children of a vertex of type 1 all have type 3,

  • If a vertex has type 2 and is not the root, then it has only one child, which has type 4; if the root \(\emptyset \) has type 2, then it has exactly two children, both of type 4,

  • Vertices of types 3 and 4 can only have children of types 1 and 2,

and the labels satisfy the following conditions:

  • Vertices of types 1 and 3 have integer labels, and vertices of types 2 and 4 have labels in \(\mathbb {Z}+\frac{1}{2}\),

  • The root has label 0 if it is of type 1, \(\frac{1}{2}\) if it is of type 2,

  • Vertices of type 3 or 4 have the same label as their father.

  • If \(u\in {\mathbf {t}}\) has type 3 or 4, let by convention \(u0=u\underline{k_u({\mathbf {t}})+1}=u^-\). Then, for all \(i\in \{0,\ldots ,k_u({\mathbf {t}})\}\), \({\mathbf {l}}\big (u\underline{i+1}\big )-{\mathbf {l}}(ui)\geqslant -\frac{1}{2} ({\mathbbm {1}}_{\{{\mathbf {e}}(ui)=1\}} + {\mathbbm {1}}_{\{{\mathbf {e}}(u\underline{i+1})=1\}})\).

The notation \(u\underline{i+1}\) means that we are looking at \(i+1\) as a letter, the word \(u\underline{i+1}\) being the concatenation of u and \(i+1\).

Fig. 2

Authorized labelling differences when circling around a vertex of type 3 or 4

Fig. 3

An example of a mobile, with root of type 1

Traditionally, vertices of type 1 are represented as white circles \(\bigcirc \), vertices of type 2 are “flags” \(\diamond \) while the other two types are dots \(\bullet \). Notice also that we do not need to mention the labels of vertices with types 3 and 4 since the label of such a vertex is the same as that of its father. We let \({\mathbb {T}}_M\) be the set of finite mobiles, \({\mathbb {T}}_M^+\) be the set of finite mobiles such that \({\mathbf {e}}(\emptyset )=1\) and \({\mathbb {T}}_M^0\) be the set of finite mobiles such that \({\mathbf {e}}(\emptyset )=2\). See Fig. 2 for an illustration of the labelling rule, and Fig. 3 for an example of a mobile.

5.3.2 The Bijection

Let \(({\mathbf {t}},{\mathbf {e}},{\mathbf {l}})\) be a mobile and let us describe how to transform it into a map (an illustration is given in Fig. 4). Let \(v_1,v_2,\ldots ,v_p\) be, in order, the vertices of type 1 or 2 of \({\mathbf {t}}\) appearing in the standard contour process and \(e_1,e_2,\ldots ,e_p\) and \(l_1,l_2,\ldots ,l_p\) be the corresponding types and labels. We refer to \(v_1,\ldots ,v_p\) as the corners of the tree because a vertex will be visited a number of times equal to the number of angular sectors around it delimited by the tree. Draw \({\mathbf {t}}\) in the plane and add an extra type 1 vertex r outside of \({\mathbf {t}}\), giving it label \(\underset{{\mathbf {e}}(u)=1}{\min }{\mathbf {l}}(u)-1\). Now, for every \(i\in [p]\), define the successor of the ith corner as the next corner of type 1 with label \(l_i-1\) if \(e_i=1\) and \(l_i-\frac{1}{2}\) if \(e_i=2\). If there is no such vertex, then let its successor be r. In both cases, draw an arc between \(v_i\) and the successor. This construction can be done without having any of the arcs intersect. Now erase all the original edges of the tree, as well as vertices of types 3 and 4. Erase as well all the vertices of type 2, merging the corresponding pairs of arcs. We are left with a planar map, with a distinguished vertex r. The root edge depends on the type of the root of the tree: if \({\mathbf {e}}(\emptyset )=1\) then we let the root edge be the first arc which was drawn (have it point to \(\emptyset \) for a positive map, and away from \(\emptyset \) for a negative map). If \({\mathbf {e}}(\emptyset )=2\) then we let the root edge be the result of the merging of the two edges adjacent to \(\emptyset \), pointing to the successor of the first corner encountered in the contour process.

This construction gives us two bijections: one between \({\mathbb {T}}_M^+\) and \({\mathcal {M}}^+\) and one between \({\mathbb {T}}_M^0\) and \({\mathcal {M}}^0\), which we both call \(\Psi \).

Fig. 4

Having added a vertex with label \(-2\) to the mobile of Fig. 3, we transform it into a map

It was shown in [24] that the BDFG bijection serves as a link between Galton–Watson mobiles and Boltzmann maps.

Proposition 5.1

Consider an admissible weight sequence \({\mathbf {q}}\) and define an unordered 4-type offspring distribution \(\mathbf {\mu }\) by

$$\begin{aligned}&\mu ^{(1)}(0,0,k,0)=\frac{1}{Z^+_{\mathbf {q}}}(1-\frac{1}{Z^+_{\mathbf {q}}})^k \\&\mu ^{(2)}(0,0,0,1)=1 \\&\mu ^{(3)}(k,k',0,0)=\frac{(Z^+_{\mathbf {q}})^k (Z^{\diamond }_{\mathbf {q}})^{k'}{\left( {\begin{array}{c}2k+k'+1\\ k+1\end{array}}\right) } {\left( {\begin{array}{c}k+k'\\ k\end{array}}\right) }q_{2+2k+k'}}{f^{\bullet }(Z^+_{\mathbf {q}},Z^{\diamond }_{\mathbf {q}})} \\&\mu ^{(4)}(k,k',0,0)=\frac{(Z^+_{\mathbf {q}})^k (Z^{\diamond }_{\mathbf {q}})^{k'}{\left( {\begin{array}{c}2k+k'\\ k\end{array}}\right) }{\left( {\begin{array}{c}k+k'\\ k\end{array}}\right) }q_{1+2k+k'}}{f^{\diamond }(Z^+_{\mathbf {q}},Z^{\diamond }_{\mathbf {q}})}. \end{aligned}$$

Let then \(\mathbf {\zeta }\) be the ordered offspring distribution which is the uniform ordering of \(\mu \), as explained in Sect. 2.1. This offspring distribution is irreducible, and it is critical (resp. regular critical) if the weight sequence \({\mathbf {q}}\) is critical (resp. regular critical), while it is subcritical if \({\mathbf {q}}\) is admissible but not critical. Define also, for every ordered offspring type-list \({\mathbf {w}}\), \(\nu ^{(i)}_{{\mathbf {w}}}\) as the uniform measure on the set \(D^{(i)}_{{\mathbf {w}}}\) of displacements allowed in a mobile, which is precisely \(D^{(i)}_{{\mathbf {w}}} = \{0\}^{|{\mathbf {w}}|}\) if \(i=1\) or \(i=2\) and

$$\begin{aligned} D^{(i)}_{{\mathbf {w}}} = \left\{ {\mathbf {y}}=(y_j)_{j\in [|{\mathbf {w}}|]}:\ \forall j\in \{0,1,\ldots ,|{\mathbf {w}}|\}, \ y_{j+1}-y_j + \frac{1}{2}\big ({\mathbbm {1}}_{\{w_j=1\}}+{\mathbbm {1}}_{\{w_{j+1}=1\}}\big )\in \mathbb {Z}_+ \right\} , \end{aligned}$$

if \(i=3\) or \(i=4\), in which case we set by convention \(w_0=w_{|{\mathbf {w}}|+1}=i-2\) and \(y_0=y_{|{\mathbf {w}}|+1}=0\).

Then:

  • if \(({\mathbf {T,E,L}})\) has distribution \(\mathbb {P}^{(1),(0)}_{\mathbf {\zeta },\nu }\), then the random map \(\Psi ({\mathbf {T,E,L}})\) has distribution \(B^+_{{\mathbf {q}}}\).

  • if \(({\mathbf {F,E,L}})\) is a forest with distribution \(\mathbb {P}^{(2,2),(\frac{1}{2},\frac{1}{2})}_{\mathbf {\zeta },\nu }\), consider the mobile formed by merging both tree components at their roots. The image of this mobile by \(\Psi \) has law \(B_{{\mathbf {q}}}^0\).

Remark 3

The operation of merging two trees at their roots can be formalized the following way. Consider two trees \(({\mathbf {t}}_1,{\mathbf {e}}_1)\) and \(({\mathbf {t}}_2,{\mathbf {e}}_2)\) such that, in both trees, the root has type 2 and has a unique child, with type 4. For \(u\in {\mathbf {t}}_2{\setminus }\{\emptyset \}\), we can write \(u=1u^2\ldots u^k\). Let then \(u'=2u^2\ldots u^k\), and let \({\mathbf {t}}_2'=\left\{ u',\; u\in {\mathbf {t}}_2{\setminus }\{\emptyset \}\right\} \). We can now define \({\mathbf {t}}={\mathbf {t}}_1\cup {\mathbf {t}}_2'\), which is easily checked to be a tree. Types can then simply be assigned by setting, for \(u\in {\mathbf {t}}_1\), \({\mathbf {e}}(u)={\mathbf {e}}_1(u)\) and, for \(u\in {\mathbf {t}}_2{\setminus }\{\emptyset \}\), \({\mathbf {e}}(u')={\mathbf {e}}_2(u)\).

This operation is of course continuous for the local convergence topology since, for any \(k\in \mathbb {Z}_+\), the kth generation of \({\mathbf {t}}\) is completely determined by the kth generations of \({\mathbf {t}}_1\) and \({\mathbf {t}}_2\).

Remark 4

If the weight sequence \({\mathbf {q}}\) is such that \(q_{2n+1}=0\) for all \(n\in \mathbb {Z}_+\), then a \({\mathbf {q}}\)-Boltzmann map is a.s. bipartite, which implies that there will be no vertices of type 2 or 4 in the corresponding tree, in which case we consider the mobile as a tree with two types, and it stays irreducible. Moreover, we then have \(Z^{\diamond }_{{\mathbf {q}}}=0\).

Remark 5

The number of vertices, edges and faces of the map can be read on the tree.

  • \(\#V\big (\Psi ({\mathbf {T,E,L}})\big )=1 + \#_1{\mathbf {T}}.\)

  • \(\#E\big (\Psi ({\mathbf {T,E,L}})\big )=|{\mathbf {T}}|_{\gamma }-1\) with \(\gamma =(1,0,1,1).\)

  • \(\#F\big (\Psi ({\mathbf {T,E,L}})\big ) =|{\mathbf {T}}|_{\gamma }\) with \(\gamma =(0,0,1,1).\)

5.4 Infinite Maps and Local Convergence

If (m, e) is a rooted map and \(k\in \mathbb {N}\), we let \(B_{m,e}(k)\) be the map formed by all vertices whose graph distance to \(e^+\) is less than or equal to k, and all edges connecting such vertices, except those edges both of whose endpoints are at distance exactly k from \(e^+\). The map \(B_{m,e}(k)\) is still rooted at the same oriented edge e. For two rooted maps (m, e) and \((m',e')\), let \(d\big ((m,e),(m',e')\big )=\frac{1}{1+p}\), where p is the supremum of all integers k such that \(B_{m,e}(k)\) is equivalent to \(B_{m',e'}(k)\). This defines a metric on the set of rooted maps. Call then \(\overline{\mathcal {M}}\) the completion of this set. Elements of \(\overline{\mathcal {M}}\) which are not finite maps are called infinite maps, which we mostly consider as sequences of compatible finite maps: \((m,e)=(m_i,e_i)_{i\in \mathbb {N}}\) with \((m_i,e_i)=B_{m_{i+1},e_{i+1}}(i)\) for all i. Note in particular that infinite maps are not pointed.
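Once the embedding data is set aside, the ball \(B_{m,e}(k)\) is a purely graph-theoretic operation; a minimal sketch on a plain adjacency-list graph (the helper name is ours, and it ignores multiple edges and the planar structure, which a faithful implementation of maps would also carry):

```python
# Sketch of the vertex/edge selection defining B_{m,e}(k): keep vertices at
# distance <= k from e^+, and edges between kept vertices except those whose
# two endpoints are both at distance exactly k. Embedding data omitted.
from collections import deque

def ball(adj, e_plus, k):
    dist = {e_plus: 0}
    queue = deque([e_plus])
    while queue:                       # BFS from e^+
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    verts = {u for u, d in dist.items() if d <= k}
    edges = {(u, w) for u in verts for w in adj[u]
             if w in verts and u < w and not (dist[u] == k == dist[w])}
    return verts, edges

# path 0 - 1 - 2 - 3, rooted so that e^+ = 0
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```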

As with trees and forests, convergence in distribution is simply characterized: if \((M_n,E_n)_{n\in \mathbb {N}}\) is a sequence of random rooted maps, one can check that it converges in distribution to a certain random map (M, E) if and only if, for all finite deterministic maps \((m',e')\) and all \(k\in \mathbb {N}\), \(\mathbb {P}\big (B_{M_n,E_n}(k)=(m',e')\big )\) converges to \(\mathbb {P}\big (B_{M,E}(k)=(m',e')\big ).\)

6 Convergence to Infinite Boltzmann Maps

We now take a critical weight sequence \({\mathbf {q}}\), and take \(\mu \), \(\mathbf {\zeta }\) and \(\nu \) as defined in Proposition 5.1. Since, in the BDFG bijection, the numbers of vertices, edges and faces of the map correspond to the numbers of vertices of certain types of the tree, we expect Theorem 3.1 to tell us that Boltzmann maps with a large number of vertices converge locally. This section is dedicated to establishing that this is indeed the case.

We first define three different periodicity factors \(d_V\), \(d_E\) and \(d_F\), corresponding to vertices, edges and faces:

$$\begin{aligned} d_V&=\gcd \Big (\left\{ n\in \mathbb {N}:\,q_{2n+2}>0\right\} \cup \left\{ m\in 2\mathbb {Z}_+ +1:\,q_{m+2}>0\right\} \Big ) \\ d_E&=\gcd \Big (\left\{ n\in \mathbb {N}:\,q_{2n}>0\right\} \cup \left\{ m\in 2\mathbb {Z}_+ +1:\,q_m>0\right\} \Big )\\ d_F&= {\left\{ \begin{array}{ll} 1&{}\text { if }\exists n\in \mathbb {N}:\,q_{2n}>0 \\ 2&{}\text { otherwise.} \end{array}\right. } \end{aligned}$$

Let also \(\alpha _V=2\) and \(\alpha _E=\alpha _F=0\).
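A small sketch computing \(d_V\) and \(d_E\) from the support \(\{d:\,q_d>0\}\) of the weight sequence (the helper names are ours):

```python
# Hypothetical helpers computing the periodicity factors d_V and d_E above
# from the support {d : q_d > 0} of the weight sequence.
from math import gcd
from functools import reduce

def d_V(support):
    terms = [(d - 2) // 2 for d in support if d % 2 == 0 and d >= 4]  # n = (d-2)/2
    terms += [d - 2 for d in support if d % 2 == 1]                   # m = d - 2 odd
    return reduce(gcd, terms)

def d_E(support):
    terms = [d // 2 for d in support if d % 2 == 0]   # n with q_{2n} > 0
    terms += [d for d in support if d % 2 == 1]       # odd m with q_m > 0
    return reduce(gcd, terms)
```

For quadrangulations (support \(\{4\}\)) this gives \(d_V=1\) and \(d_E=2\), matching the fact that a quadrangulation with n faces has \(n+2\) vertices and 2n edges; for hexangulations (\(\{6\}\)), \(d_V=2\) and \(d_E=3\).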

Theorem 6.1

Let \(I\in \{V,E,F\}\). If \(I=E\) or \(I=F\), we also assume that \({\mathbf {q}}\) is regular critical. For appropriate \(n\in \mathbb {N}\), let \((M_n,E_n,R_n)\) be a variable with distribution \(B_{{\mathbf {q}}}\), conditioned on \(\#I(M)=n\). We then have

$$\begin{aligned} (M_n,E_n) \underset{\underset{n\in \alpha _I+d_I\mathbb {Z}_+}{n\rightarrow \infty }}{\Longrightarrow }(M_{\infty },E_{\infty }) \end{aligned}$$

in distribution for the local convergence, where \((M_{\infty },E_{\infty })\) is an infinite rooted map which we call the infinite \({\mathbf {q}}\)-Boltzmann map.

If \(I=E\) or \(I=F\), assuming \({\mathbf {q}}\) is regular critical, consider now \((M_n,E_n)\) with distribution \(B^{\emptyset }_{{\mathbf {q}}}\) conditioned on \(\#I(M)=n\). We then also have

$$\begin{aligned} (M_n,E_n) \underset{\underset{n\in \alpha _I+d_I\mathbb {Z}_+}{n\rightarrow \infty }}{\Longrightarrow }(M_{\infty },E_{\infty }) \end{aligned}$$

with the same limiting map \((M_{\infty },E_{\infty })\).

The choice of the subsequence \((\alpha _I+d_In)_{n\in \mathbb {Z}_+}\) is explained by the fact that, just as with trees, the number of vertices/edges/faces of a map with distribution \(B_{{\mathbf {q}}}\) can only be of the form \(\alpha _I+d_In\) for integer n, and this has nonzero probability for n large enough. This will be explained in Sect. 6.3.1.

The infinite map \((M_{\infty },E_{\infty })\) is moreover planar, in the sense that it is possible to embed it in the plane in such a way that bounded subsets of the plane only encounter a finite number of edges, see Lemma 6.3.

6.1 Two Applications

6.1.1 The Example of Uniform p-Angulations

Here we take an integer \(p\geqslant 3\) and consider maps which only have faces of degree p, which we call p-angulations. Euler’s formula shows that the numbers of vertices and edges of such a map are determined by its number of faces. Let m be a finite p-angulation. Since each edge is adjacent to two faces (counted with multiplicity), we have \(p\#F(m)=2\#E(m)\). Euler’s formula, on the other hand, states that \(\#V(m)-\#E(m)+\#F(m)=2\). Combining the two shows that

$$\begin{aligned} \#V(m)=2+\left( \frac{p}{2}-1\right) \#F(m). \end{aligned}$$

Note that these relations imply that there is no real difference between pointed and non-pointed maps when looking at p-angulations, since a uniform pointed map with a fixed number of faces can be obtained by taking a uniform non-pointed map and then choosing the distinguished point uniformly.
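These relations can be checked on classical polyhedral maps, for instance the tetrahedron (a 3-angulation of the sphere) and the cube (a 4-angulation); a quick sanity check:

```python
from fractions import Fraction

# (V, E, F, p): vertex, edge and face counts of two p-angulations of the sphere
examples = [(4, 6, 4, 3),    # tetrahedron
            (8, 12, 6, 4)]   # cube

for V, E, F, p in examples:
    assert V - E + F == 2                        # Euler's formula
    assert p * F == 2 * E                        # each edge is adjacent to two faces
    assert V == 2 + (Fraction(p, 2) - 1) * F     # the displayed relation
```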

At this point, we must split the discussion according to the parity of p.

Uniform Infinite 2p-angulation Let \(p\geqslant 2\). It has been shown in [21] that the weight sequence \({\mathbf {q}}\) defined by

$$\begin{aligned} q_n=\frac{(p-1)^{p-1}}{p^p {\left( {\begin{array}{c}2p-2\\ p-1\end{array}}\right) } } {\mathbbm {1}}_{\{n=2p\}} \end{aligned}$$

is critical, and it is in fact regular critical because it has finite support. Since the weight of a map here only depends on its number of faces (or vertices), it is immediate that conditioning the distribution \(B_{{\mathbf {q}}}\) to the set of maps with n faces yields the uniform 2p-angulation with n faces. We thus obtain the following.

Proposition 6.1

(Uniform infinite 2p-angulation) Let \(p\geqslant 2\) and, for \(n\in \mathbb {N}\), let \((M_n,E_n)\) be a uniform rooted map among the set of rooted 2p-angulations with n faces. Then \((M_n,E_n)\) converges locally in distribution as n goes to infinity, the limit being a random rooted map which we call the uniform infinite 2p-angulation.

In the case where \(2p=4\), we obtain the local convergence in distribution of large uniform quadrangulations to the UIPQ, which was first obtained by Krikun [16]. In fact, our method here ends up being essentially the same as that of [10], where we have used the BDFG bijection in a situation where the simpler Cori–Vauquelin–Schaeffer bijection would have sufficed.
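The critical weight displayed above can be evaluated exactly for small p; a short sketch (the function name `critical_weight` is ours):

```python
from fractions import Fraction
from math import comb

def critical_weight(p):
    # q_{2p} = (p-1)^(p-1) / (p^p * binom(2p-2, p-1)), as displayed above
    return Fraction((p - 1) ** (p - 1), p ** p * comb(2 * p - 2, p - 1))

# p = 2 (quadrangulations), p = 3 (hexangulations), p = 4 (octangulations)
print([str(critical_weight(p)) for p in (2, 3, 4)])  # ['1/8', '2/81', '27/5120']
```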

Uniform Infinite \((2p+1)\)-angulation Let \(p\in \mathbb {N}\) and consider \((2p+1)\)-angulations. It follows from the relation \(\#V(m)=2+\left( p-\frac{1}{2}\right) \#F(m)\) that a \((2p+1)\)-angulation must have an even number of faces. As in the even case, a uniform \((2p+1)\)-angulation can be seen as a conditioned Boltzmann-distributed random map for the weight sequence \({\mathbf {q}}\) defined by

$$\begin{aligned} q_n=\alpha {\mathbbm {1}}_{\{n=2p+1\}}, \end{aligned}$$

for any positive number \(\alpha \). It has been shown in [9, Proposition A.2] that there is one value of \(\alpha \) which makes this sequence regular critical. Theorem 6.1 then gives us the following.

Proposition 6.2

(Uniform infinite \((2p+1)\)-angulation) Let \(p\in \mathbb {N}\) and, for \(n\in \mathbb {N}\), let \((M_n,E_n)\) be a uniform rooted map among the set of rooted \((2p+1)\)-angulations with 2n faces. Then \((M_n,E_n)\) converges locally in distribution as n goes to infinity, to a random rooted map called the uniform infinite \((2p+1)\)-angulation.

6.2 Uniform Planar Maps

It has been shown in [23] that a uniform map chosen among the set of rooted maps with n edges converges locally in distribution. Our methods allow us to recover this result, and to show that it also holds for a uniform rooted and pointed map.

Proposition 6.3

  1. (i)

    For \(n\in \mathbb {N}\), let \((M_n,E_n,R_n)\) be a uniform random variable in the set of rooted and pointed maps with n edges. Then \((M_n,E_n)\) converges locally in distribution to an infinite map called the uniform infinite planar map (UIPM).

  2. (ii)

    For \(n\in \mathbb {N}\), let \((M_n,E_n)\) be a uniform random variable in the set of rooted maps with n edges. Then \((M_n,E_n)\) converges locally in distribution to the UIPM.

Proof

Let \(\lambda =\frac{1}{2\sqrt{3}}\) and define the weight sequence \({\mathbf {q}}\) by \(q_n=\lambda ^n\) for \(n\in \mathbb {N}\). Given any map \((m,e,r)\), notice that \(W_{{\mathbf {q}}}(m)=\lambda ^{\sum _{f\in {\mathcal {F}}_m} \deg (f)}=\lambda ^{2\#E(m)}\), where \(\#E(m)\) is the number of edges of m and we have used the fact that the face degrees of a map sum to twice its number of edges. Thus, assuming that \({\mathbf {q}}\) is admissible (which we prove in the following), conditioning a map with distribution \(B_{{\mathbf {q}}}\) (resp. \(B_{{\mathbf {q}}}^{\emptyset }\)) on having n edges gives a uniform rooted and pointed map (resp. uniform rooted map) with n edges.

Since we clearly have \(d_E=1\) (for instance, \(q_1>0\)), all we need to do now is to prove that \({\mathbf {q}}\) is regular critical. We start by computing the two generating functions \(f^{\bullet }\) and \(f^{\diamond }\). Recall the formulas

$$\begin{aligned} \sum _{k=0}^{\infty } {\left( {\begin{array}{c}k+p\\ k\end{array}}\right) }x^k=\frac{1}{(1-x)^{p+1}} \end{aligned}$$

for \(p\in \mathbb {Z}_+\) and \(|x|<1\), as well as

$$\begin{aligned} \sum _{k=0}^{\infty } {\left( {\begin{array}{c}2k\\ k\end{array}}\right) } z^k = \frac{1}{\sqrt{1-4z}} \end{aligned}$$

and

$$\begin{aligned} \sum _{k=0}^{\infty } {\left( {\begin{array}{c}2k+1\\ k\end{array}}\right) } z^k = \frac{1-\sqrt{1-4z}}{2z\sqrt{1-4z}} \end{aligned}$$

for \(|z|<1/4\). For \(x\geqslant 0\) and \(y\geqslant 0\), we have, with \(z=\frac{\lambda ^2x}{(1-\lambda y)^2}\):

$$\begin{aligned} f^{\bullet }(x,y)&=\sum _{k,k'}\frac{(2k+k'+1)!}{k!(k+1)!(k')!}\lambda ^{2+2k+k'}x^k y^{k'} \\&=\lambda ^2\sum _{k,k'}{\left( {\begin{array}{c}2k+1\\ k\end{array}}\right) }{\left( {\begin{array}{c}2k+k'+1\\ k'\end{array}}\right) }(\lambda ^2x)^k(\lambda y)^{k'} \\&=\lambda ^2\sum _k {\left( {\begin{array}{c}2k+1\\ k\end{array}}\right) } \frac{(\lambda ^2x)^k}{(1-\lambda y)^{2k+2}} \\&=\lambda ^2\frac{1}{(1-\lambda y)^2}\sum _k {\left( {\begin{array}{c}2k+1\\ k\end{array}}\right) } z^k \\&=\lambda ^2\frac{1}{(1-\lambda y)^2} \frac{1-\sqrt{1-4z}}{2z\sqrt{1-4z}}. \end{aligned}$$

and

$$\begin{aligned} f^{\diamond }(x,y)&=\lambda \sum _{k,k'}\frac{(2k+k')!}{(k!)^2(k')!}(\lambda ^2x)^k(\lambda y)^{k'} \\&=\lambda \sum _{k,k'}{\left( {\begin{array}{c}2k\\ k\end{array}}\right) }{\left( {\begin{array}{c}2k+k'\\ k'\end{array}}\right) }(\lambda ^2x)^k(\lambda y)^{k'} \\&=\lambda \sum _k {\left( {\begin{array}{c}2k\\ k\end{array}}\right) } \frac{(\lambda ^2x)^k}{(1-\lambda y)^{2k+1}} \\&=\lambda \frac{1}{1-\lambda y} \frac{1}{\sqrt{1-4z}}. \end{aligned}$$

Both series then converge if and only if \(4z<1\), in other words \(4\lambda ^2x<(1-\lambda y)^2\). We let the reader check that \(x=4/3\) and \(y=1/\sqrt{3}\) satisfy the desired criticality conditions and, since this point lies strictly inside the domain of convergence (there, \(4z=16/25\)), we even have regular criticality. \(\square \)
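The series identities used in this proof, as well as the position of the point \((x,y)=(4/3,1/\sqrt{3})\) inside the domain of convergence, are easy to verify numerically; a sketch (the truncation levels are ours):

```python
from math import comb, isclose, sqrt

def S1(x, p, N=200):   # sum_k C(k+p, k) x^k = 1/(1-x)^(p+1)
    return sum(comb(k + p, k) * x**k for k in range(N))

def S2(z, N=400):      # sum_k C(2k, k) z^k = 1/sqrt(1-4z)
    return sum(comb(2 * k, k) * z**k for k in range(N))

def S3(z, N=400):      # sum_k C(2k+1, k) z^k = (1-sqrt(1-4z))/(2z*sqrt(1-4z))
    return sum(comb(2 * k + 1, k) * z**k for k in range(N))

x, z = 0.3, 0.2
assert isclose(S1(x, 3), (1 - x) ** -4, rel_tol=1e-9)
assert isclose(S2(z), (1 - 4 * z) ** -0.5, rel_tol=1e-6)
assert isclose(S3(z), (1 - sqrt(1 - 4 * z)) / (2 * z * sqrt(1 - 4 * z)), rel_tol=1e-6)

# the point (x, y) = (4/3, 1/sqrt(3)) of the proof lies strictly inside
# the domain of convergence: 4z = 4*lambda^2*x / (1 - lambda*y)^2 = 16/25 < 1
lam = 1 / (2 * sqrt(3))
four_z = 4 * lam**2 * (4 / 3) / (1 - lam / sqrt(3)) ** 2
assert isclose(four_z, 16 / 25, rel_tol=1e-12)
```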

6.3 Proof of Theorem 6.1

The proof of Theorem 6.1 starts with the proof of the case of rooted and pointed maps. This involves showing the convergence for maps conditioned to be null or positive by using the BDFG bijection and identifying the limiting map as the image of an infinite tree by the bijection, and then removing the conditionings. To go from pointed to non-pointed maps, we will follow ideas from [6] and [1] to show that if a map is conditioned to have n faces or edges, then its number of vertices is well concentrated around a deterministic multiple of n, and thus, biasing by it will not change the convergence.

6.3.1 On the Trees Associated to \(B_{{\mathbf {q}}}\)

We want to investigate the periodic structure of Galton–Watson trees with ordered offspring distribution \(\mathbf {\zeta }\). We thus let \(\gamma _V=(1,0,0,0)\), \(\gamma _E=(1,0,1,1)\) and \(\gamma _F=(0,0,1,1)\) and, for \(I\in \{V,E,F\}\), we let \(d^{I}\) and \((\alpha ^I_i)_{i\in [4]}\) be the periodicity factors given by Proposition 2.2. Note that we do not yet know that \(d^I=d_I\), where \(d_I\) was defined at the beginning of Sect. 6. This is the main content of the following lemma:

Lemma 6.1

We have, for \(I\in \{V,E,F\}\):

  • \(d^I=d_I\).

  • \(\alpha ^I_1=\gamma ^I_1\).

Moreover, if the weight sequence \({\mathbf {q}}\) is not supported only on \(2\mathbb {N}\), then we also have

  • \(2\alpha ^I_2\equiv \alpha ^I_1\pmod {d_I}\).

Proof

We will concentrate on the case where \(I=V\), the other two cases having similar proofs. We first treat the bipartite case separately. In this case, types 1 and 3 alternate in the tree, and it is straightforward that \(d^V=\gcd \big (\{m\in \mathbb {N}:\,q_{2m+2}>0\}\big )\) and that \(\alpha _3=0\). We now assume that we are not in this case.

It is immediate that \(\alpha _3=0\) and \(\alpha _4=\alpha _2\) because a vertex of type 1 or 2 can give birth to a single vertex of type 2 or 4, respectively.

Next, take \(m\in \mathbb {N}\) such that \(q_{2m+2}>0\). Using the fact that a vertex of type 3 can give birth to m vertices of type 1, one obtains \(m\equiv 0 \pmod {d^V}\). Thus, \(d^V\) divides the gcd of \(\{m\in \mathbb {N}:\,q_{2m+2}>0\}\).

Now take an odd integer \(m=2n+1\) such that \(q_{m+2}=q_{2n+3}>0\). A vertex of type 3 can then give birth to n vertices of type 1 and one vertex of type 2, and a vertex of type 4 can give birth to \(n+1\) vertices of type 1. We thus obtain \(n+\alpha _2\equiv 0 \pmod {d^V}\) and \(n+1 \equiv \alpha _2 \pmod {d^V}\). Combining these gives us \(m=2n+1\equiv 0 \pmod {d^V}\) and \(2\alpha _2 \equiv m+1 \equiv 1 \pmod {d^V}\).

We have thus shown \(2\alpha _2 \equiv 1 \pmod {d^V}\) and that \(d^V\) divides \(d_V\). To show that they are equal we require some more refined analysis.

Notice that for words \({\mathbf {w}}=(k,k',0,0)\) such that \(\mu ^{(3)}({\mathbf {w}})>0\), we have \(2k+k'\equiv 0 \pmod {d_V}\). Indeed, if \(k'\) is even, letting \(n=k+\frac{k'}{2}\), we then have \(q_{2n+2}>0\), implying that \(d_V\) divides n, while if \(k'\) is odd, we let \(n=2k+k'\), and then \(q_{n+2}>0\) and therefore \(d_V\) divides n. Similarly, if \(\mu ^{(4)}({\mathbf {w}})>0\), then \(2k+k'\equiv 1 \pmod {d_V}\). Applying this repeatedly to a tree \(({\mathbf {t}},{\mathbf {e}})\) such that \(\mathbb {P}^{(1)}_{\zeta }({\mathbf {T}}\vdash {\mathbf {t}})>0\) and such that all its leaves are of type 1 or 2, one obtains \(2k+k'\equiv 0 \pmod {d_V}\), where k and \(k'\) are respectively the numbers of leaves of types 1 and 2 in \({\mathbf {t}}\). Taking \(({\mathbf {t}},{\mathbf {e}})\) with only one generation of type 1, we obtain that \(d_V\) divides every member of the support of \(\mu _{1,1}\), which is all we need. \(\square \)

6.3.2 Infinite Mobiles and the BDFG Bijection

We call infinite mobile any infinite 4-type labelled tree \(({\mathbf {t}},{\mathbf {e}},{\mathbf {l}})\) which satisfies the conditions of Sect. 5.3.2, which has a unique infinite spine and such that the labels of vertices of type 1 of the spine do not have a lower bound. We let \(\overline{\mathbb {T}}_M\) be the set of all finite and infinite mobiles, and split it into \(\overline{\mathbb {T}}_M=\overline{\mathbb {T}}_M^+\cup \overline{\mathbb {T}}_M^0\) as before.

The BDFG bijection \(\Psi \) can be naturally extended to \(\overline{\mathbb {T}}_M\). Given \(({\mathbf {t}},{\mathbf {e}},{\mathbf {l}})\in \overline{\mathbb {T}}_M\), we let \((u_n)_{n\in \mathbb {N}}\) be the sequence of the elements of the spine. This sequence splits the tree in two: the part which is on the left-hand side of the spine, and the part which is on the right-hand side. To be precise, we say that \(v\in {\mathbf {t}}\) is on the left-hand side of the spine if there exist three integers n, k and l and a sequence of integers x such that \(v=u_nkx\), \(u_{n+1}=u_nl\) and \(k<l\), and v is on the right-hand side if we have the same, but with \(k>l\).

This splitting allows us to define a contour process, but it has to be indexed by \(\mathbb {Z}\): since every subtree branching out of the spine is finite, we can let \(\big (v(n)\big )_{n\in \mathbb {N}}\) be the contour process of the left-hand side and \(\big (v(-n)\big )_{n\in \mathbb {N}}\) be the other half. This determines a unique sequence \(\big (v(n)\big )_{n\in \mathbb {Z}}\). Since we have assumed that the labels of the vertices of type 1 do not have a lower bound, the notion of successor we used for finite trees is still valid, and in fact, unlike in the case of a finite tree, we do not need to add an extra vertex. As in the finite case, we connect every vertex of type 1 or 2 to its successor, erase all the original edges of the tree, erase vertices of types 2, 3 and 4, merging the two edges adjacent to every vertex of type 2. This leaves us with an infinite map (by construction, the arcs do not intersect, while the following Lemma 6.2 implies that it is locally finite). We give this map a root edge which is determined with the same rules as in the finite case; however, it is not pointed. We call this rooted map \(\Psi ({\mathbf {t}},{\mathbf {e}},{\mathbf {l}})\).

Lemma 6.2

The extended BDFG function \(\Psi \) is continuous on \(\overline{\mathbb {T}}_M\).

Proof

Let \(({\mathbf {t}},{\mathbf {e}},{\mathbf {l}})\) be an infinite mobile. We assume \({\mathbf {e}}(\emptyset )=1\), the other case being treated the same way. For \(n\in \mathbb {N}\), we need to find \(p \in \mathbb {N}\) such that, for another mobile \(({\mathbf {t}}',{\mathbf {e}}',{\mathbf {l}}')\), if \(({\mathbf {t}}_{\leqslant p},{\mathbf {e}}_{\leqslant p},{\mathbf {l}}_{\leqslant p})=({\mathbf {t}}'_{\leqslant p},{\mathbf {e}}'_{\leqslant p},{\mathbf {l}}'_{\leqslant p})\) then \(B_{m,e}(n)=B_{m',e'}(n)\), where \((m,e)=\Psi ({\mathbf {t}},{\mathbf {e}},{\mathbf {l}})\) and \((m',e')=\Psi ({\mathbf {t}}',{\mathbf {e}}',{\mathbf {l}}')\). Let \(s\in \mathbb {N}\) be large enough that all the arcs in \(B_{m}(n)\) connect vertices of \({\mathbf {t}}_{\leqslant s}\), let \(x=\underset{v\in {\mathbf {t}}_{\leqslant s}}{\inf }{\mathbf {l}}(v)\) and let u be any type 1 vertex of the spine such that \({\mathbf {l}}(u)<x-1\). Notice now that there are no arcs connecting \({\mathbf {t}}_{\leqslant s}\) and the subtree above u. Indeed, the successor of any vertex of \({\mathbf {t}}_{\leqslant s}\) will be encountered below u while, if v is above u, \({\mathbf {l}}(v)\geqslant x\) would imply that its successor is also above u, while \({\mathbf {l}}(v)\leqslant x-1\) would make it impossible for its successor to be in \(B_{(t,l)}(s)\). Taking p to be the height of u then ends the proof.\(\square \)

Lemma 6.3

For any infinite mobile \(({\mathbf {t}},{\mathbf {e}},{\mathbf {l}})\), the infinite map \(\Psi ({\mathbf {t}},{\mathbf {e}},{\mathbf {l}})\) is planar, in the sense that it can be embedded in the plane in such a way that bounded subsets of the plane only encounter a finite number of edges.

Proof

We first start by embedding the mobile in the plane in a convenient way. We draw its infinite spine as the subset \(\{0\}\times \mathbb {Z}_+\), where the child of (0, n) is \((0,n+1)\) for \(n\in \mathbb {Z}_+\). Starting from this, we can then embed the tree in \(\mathbb {Z}\times \mathbb {Z}_+\) such that the second coordinate of a vertex is always its graph distance to the root, and also such that the children of any vertex u always form a set of the type \(\{(n,m),(n+1,m),\ldots ,(n+k_{u}({\mathbf {t}})-1,m)\}\), with \(n\in \mathbb {Z}\) and \(m\in \mathbb {N}\) and their first coordinates are in the correct order. With such an embedding, it is also apparent that there exists a continuous function \(f:{\mathbb {R}}\rightarrow {\mathbb {R}}\) which is decreasing on \((-\infty ,0]\) and increasing on \([0,+\infty )\), which has limit \(+\infty \) at both \(-\infty \) and \(+\infty \) such that \({\mathbf {t}}\) is strictly above the graph of f.

We point out an important fact of the bijection: let u be any corner of type 1 or 2 and let v be its successor. Then, for any corner w of type 1 or 2 which is encountered between u and v in the contour process of the mobile, the successor of w, which we call x, is then encountered between w and v, and the arc between w and x is then enclosed between \({\mathbf {t}}\) and the arc connecting u and v. From this fact, we obtain that all the arcs which connect two points on the left-hand side of the tree can be embedded without any issues: first draw the arcs connecting the line of successors starting at the root, and enclose in each of them the other necessary arcs.

The arcs which originate from the right-hand side of the tree are a more complex issue, because some of them might start very high on the right-hand side, go around a large part of the tree and end up high on the left-hand side. To make sure that these are well separated, we introduce for \(n\in \mathbb {Z}_+\) the “strip”

$$\begin{aligned} S_n=\left\{ (x,y)\in {\mathbb {R}}^2: \ f(x)-n\leqslant y < f(x)-n+1\right\} . \end{aligned}$$

We now explore the right-hand side of the tree in counterclockwise order and, when we encounter the nth corner of type 1 or 2, we join it to \(S_n\). We point out that it is possible to do this in such a way that the second coordinate along the path is nondecreasing. We then do the same thing for the corner’s successor, and then join both halves by a path which stays in \(S_n\).

The paths we have drawn this way still do not intersect because of the “enclosure” property as before, and this embedding is indeed such that bounded subsets of \({\mathbb {R}}^2\) only encounter a finite number of edges. This is because we have split these edges into parts which are in \(S_n\), of which there is only one for every n, and parts which originate from vertices of the tree and have nondecreasing second coordinate, of which there are a finite number in bounded subsets because there is a finite number of vertices of \({\mathbf {t}}\) with bounded second coordinate. \(\square \)

6.3.3 Behavior of the Labels on the Spine of the Infinite Tree

Let \(({\mathbf {T,E,L}})\) be a 4-tree with law \(\widehat{\mathbb {P}}^{(1),(0)}_{\mathbf {\zeta },\nu }\) or the tree obtained from merging both components of a forest with distribution \(\widehat{\mathbb {P}}^{(2,2),(\frac{1}{2},\frac{1}{2})}_{\mathbf {\zeta },\nu }\) at their roots. The aim of this section is to show that it is an infinite mobile, that is, that the labels on the spine do not have a lower bound. Let us first describe it quickly.

The root of \(\mathbf {T}\) has either type 1 and label 0, or type 2 and label 1/2, in which case it has (exceptionally) two children of type 4, one of them (uniformly selected) being on the spine. The vertices which are not on the spine have offspring distribution \(\zeta \), which was defined in Proposition 5.1 as the uniform ordering of \(\mu \), while vertices which are on the spine have offspring distribution \(\widehat{\zeta }\), defined by (3.2). The distribution \(\widehat{\zeta }\) is itself the uniform ordering of a distribution \(\widehat{\mu }\) on \((\mathbb {Z}_+)^4\) defined by

$$\begin{aligned} \widehat{\mu }^{(i)}(k_1,k_2,k_3,k_4)=\frac{k_1b_1+k_2b_2+k_3b_3+k_4b_4}{b_i}\mu ^{(i)}(k_1,k_2,k_3,k_4) \end{aligned}$$

for \(i\in [4]\) and \(k_1,k_2,k_3,k_4\in \mathbb {Z}_+\), and where \(b_1,b_2,b_3,b_4\) are some positive numbers which depend on \({\mathbf {q}}\). The label displacement distribution \(\nu ^{(i)}_{{\mathbf {w}}}\) for a type \(i\in [4]\) and a word \({\mathbf {w}}\) is then the uniform distribution on the set \(D^{(i)}_{{\mathbf {w}}}\) which was defined in Proposition 5.1.
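The fact that \(\widehat{\mu }\) is a probability distribution reflects criticality: \((b_1,\ldots ,b_4)\) is then a right eigenvector, with eigenvalue 1, of the mean matrix of \(\mu \). A toy two-type illustration of this size-biasing (the offspring distribution and eigenvector below are our example, not derived from any weight sequence \({\mathbf {q}}\)):

```python
# Toy critical 2-type offspring distribution:
# type 1: two children of type 2 w.p. 1/2, none w.p. 1/2  -> mean row (0, 1)
# type 2: two children of type 1 w.p. 1/2, none w.p. 1/2  -> mean row (1, 0)
mu = {
    1: {(0, 2): 0.5, (0, 0): 0.5},
    2: {(2, 0): 0.5, (0, 0): 0.5},
}
b = (1.0, 1.0)  # right eigenvector of the mean matrix [[0,1],[1,0]], eigenvalue 1

def size_biased(mu_i, b_i):
    # mu_hat^(i)(k) = (k . b) / b_i * mu^(i)(k), as in the displayed formula
    return {k: sum(kj * bj for kj, bj in zip(k, b)) / b_i * p
            for k, p in mu_i.items()}

for i in (1, 2):
    mu_hat = size_biased(mu[i], b[i - 1])
    # criticality (b is a 1-eigenvector) makes mu_hat a probability distribution
    assert abs(sum(mu_hat.values()) - 1.0) < 1e-12
```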

Lemma 6.4

Let \(i\in \{1,2,3,4\}\) and \({\mathbf {w}}\in \mathcal {W}_4\) such that \(\zeta ^{(i)}_{{\mathbf {w}}}>0\). Define the reversed word \(\overset{\leftarrow }{{\mathbf {w}}}=(w_{|{\mathbf {w}}|},\ldots ,w_1)\), and, for a label sequence \({\mathbf {y}}=(y_i)_{i\in [|{\mathbf {w}}|]}\), let \(\overset{\leftarrow }{{\mathbf {y}}}=(-y_{|{\mathbf {w}}|},-y_{|{\mathbf {w}}|-1},\ldots ,-y_1)\). The function which maps \({\mathbf {y}}\) to \(\overset{\leftarrow }{{\mathbf {y}}}\) is a bijection between \(D^{(i)}_{{\mathbf {w}}}\) and \(D^{(i)}_{\overset{\leftarrow }{{\mathbf {w}}}}\), sets which are defined in Proposition 5.1.

As a corollary, we get that, if \({\mathbf {W}}\) has distribution \(\widehat{\zeta }^{(i)}\) for some i and \({\mathbf {Y}}\) has distribution \(\nu ^{(i)}_{\mathbf {W}}\) conditionally on \({\mathbf {W}}\), then the pair \((\overset{\leftarrow }{\mathbf {W}},\overset{\leftarrow }{\mathbf {Y}})\) has the same distribution as \(({\mathbf {W}},{\mathbf {Y}})\).

Proof

If \(i=1\) or \(i=2\) then the result is immediate, since \(\overset{\leftarrow }{{\mathbf {w}}}={\mathbf {w}}\) and \(D^{(i)}_{{\mathbf {w}}}\) only has one element.

If \(i=3\) or \(i=4\), bijectivity of the map comes from the fact that reversing a sequence (and possibly changing the signs of its elements) is an involution, and thus, we only need to check that \(\overset{\leftarrow }{{\mathbf {y}}}\in D^{(i)}_{\overset{\leftarrow }{{\mathbf {w}}}}\) for any displacement list \({\mathbf {y}}\), which is straightforward given the definitions, since \((-y_{|{\mathbf {w}}|+1-(j+1)})-(-y_{|{\mathbf {w}}|+1-j})=y_{|{\mathbf {w}}|-j+1}-y_{|{\mathbf {w}}|-j}\) for \(j\in \{0,\ldots ,|{\mathbf {w}}|\}\). \(\square \)

Lemma 6.5

Let, for \(n\in \mathbb {Z}_+\), \(U_n\) be the \((n+1)\)th vertex of type 1 of the spine of \({\mathbf {T}}\). We then have

$$\begin{aligned} \underset{n\in \mathbb {N}}{\inf }\ {\mathbf {L}}(U_n) = -\infty . \end{aligned}$$

Proof

Note that \(U_n\) is well defined for all \(n\in \mathbb {Z}_+\), because the number of vertices of type 1 on the spine of \({\mathbf {T}}\) is a.s. infinite. Indeed, if it were not the case then all the vertices on the spine after a certain height would have type 2 or 4, but since a vertex of type 4 has positive probability of having at least one child of type 1, an infinite sequence of spine vertices of types 2 and 4 has probability 0.

Notice then that \(\big ({\mathbf {L}}(U_n)\big )_{n\in \mathbb {Z}_+}\) is in fact a centered random walk on \(\mathbb {Z}\). It is a random walk because of the construction—the set of descendants of a vertex of type 1 of the spine with label k will have distribution \(\widehat{\mathbb {P}}^{(1),(k)}_{\mathbf {\zeta },\nu }\). We can see that it is centered thanks to Lemma 6.4. Define the mirrored tree \((\overset{\leftarrow }{{\mathbf {T}}},\overset{\leftarrow }{{\mathbf {E}}},\overset{\leftarrow }{{\mathbf {L}}})\) by reversing the order of all the offspring of \({\mathbf {T}}\). To be precise, if \(u=u_1u_2\ldots u_n\in {\mathbf {T}}\), then let, for \(i\in [n]\), \(v_i=k_{u_1\ldots u_{i-1}}({\mathbf {T}})-u_i+1\) and let then \(\overset{\leftarrow }{u}=v_1\ldots v_n\). Let then \(\overset{\leftarrow }{{\mathbf {E}}}(\overset{\leftarrow }{u})={\mathbf {E}}(u)\) and define the labels \(\overset{\leftarrow }{{\mathbf {L}}}\) on \(\overset{\leftarrow }{{\mathbf {T}}}\) by \(\overset{\leftarrow }{{\mathbf {L}}}(\emptyset )={\mathbf {L}}(\emptyset )\) and, for all u, \({\mathbf {y}}_{\overset{\leftarrow }{u}}=\overset{\leftarrow }{{\mathbf {y}}_u}\) (as defined in Lemma 6.4). Since, for \(i\in [4]\) and \({\mathbf {w}}\in \mathcal {W}_4\), the distribution \(\zeta ^{(i)}\) is the uniform ordering of \(\mu ^{(i)}\) and \(\nu ^{(i)}_{{\mathbf {w}}}\) is uniform on \(D^{(i)}_{{\mathbf {w}}}\), we obtain from Lemma 6.4 that \((\overset{\leftarrow }{{\mathbf {T}}},\overset{\leftarrow }{{\mathbf {E}}},\overset{\leftarrow }{{\mathbf {L}}})\) has the same distribution as \(({\mathbf {T}},{\mathbf {E}},{\mathbf {L}})\). In particular, \({\mathbf {L}}(U_1)-{\mathbf {L}}(U_0)\) has the same distribution as \({\mathbf {L}}(U_0)-{\mathbf {L}}(U_1)\), making its distribution centered. Consequently, the centered random walk \(\big ({\mathbf {L}}(U_n)\big )_{n\in \mathbb {Z}_+}\) a.s. has no upper or lower bound, for example by [14, Theorem 8.2]. \(\square \)
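The last step — a centered, non-degenerate random walk on \(\mathbb {Z}\) is almost surely unbounded below — can be illustrated on the simple symmetric random walk (our stand-in for the label walk, whose increment law depends on \({\mathbf {q}}\)), using the reflection principle to compute hitting probabilities exactly:

```python
from fractions import Fraction
from math import comb

def prob_min_le(n, k):
    """P(min_{m<=n} S_m <= -k) for the simple symmetric random walk,
    via the reflection principle: = P(S_n <= -k) + P(S_n <= -(k+1))."""
    def tail(r):  # P(S_n <= -r) = P(#down steps >= ceil((n+r)/2))
        j0 = -(-(n + r) // 2)
        return Fraction(sum(comb(n, j) for j in range(j0, n + 1)), 2 ** n)
    return tail(k) + tail(k + 1)

p_500, p_2000 = float(prob_min_le(500, 3)), float(prob_min_le(2000, 3))
assert 0.8 < p_500 < p_2000 < 1   # probability of reaching level -3 grows with n
assert p_2000 > 0.9               # and tends to 1, as for any centered walk
```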

6.3.4 Removing Conditionings

Take once again \(I\in \{V,E,F\}\). We need some extra notation for this section: for \(n\in \mathbb {N}\), \({\mathcal {M}}_n\) is the set of pointed and rooted maps \((m,e,r)\) with \(\#I(m)=n\), and \({\mathcal {M}}_n^+\) and \({\mathcal {M}}_n^0\) are the analogous sets of positive and null maps. The probability measures \(B_{{\mathbf {q}}}^n\), \(B_{{\mathbf {q}}}^{n,+}\) and \(B_{{\mathbf {q}}}^{n,0}\) are the correspondingly conditioned versions of \(B_{{\mathbf {q}}}\).

The work done in the previous sections shows that maps with distribution \(B_{{\mathbf {q}}}^{n,+}\) and \(B_{{\mathbf {q}}}^{n,0}\) converge in distribution along the subsequence \((\alpha _I+d_In)_{n\in \mathbb {N}}\) (considering \(B_{{\mathbf {q}}}^{n,0}\) only in the non-bipartite case). To show that maps with distribution \(B_{{\mathbf {q}}}^n\) converge, all that is left for us to do is to show that the two quantities \(B_{{\mathbf {q}}}^n ({\mathcal {M}}_n^+)\) and \(B_{{\mathbf {q}}}^n({\mathcal {M}}_n^0)\) converge (along the same subsequence). Since \(2B_{{\mathbf {q}}}^n ({\mathcal {M}}_n^+)+B_{{\mathbf {q}}}^n({\mathcal {M}}_n^0)=1\), we can in fact restrict ourselves to showing that the quotient \(\frac{B_{{\mathbf {q}}}^n ({\mathcal {M}}_n^+)}{B_{{\mathbf {q}}}^n( {\mathcal {M}}_n^0)}\) converges. Elementary calculations on conditionings give us

$$\begin{aligned} \frac{B_{{\mathbf {q}}}^n ({\mathcal {M}}_n^+)}{B_{{\mathbf {q}}}^n( {\mathcal {M}}_n^0)}&=\frac{B_{{\mathbf {q}}}\Big ((M,E,R)\in {\mathcal {M}}^+\;|\;(M,E,R)\in {\mathcal {M}}_n\Big )}{B_{{\mathbf {q}}}\Big ((M,E,R)\in {\mathcal {M}}^0\;|\;(M,E,R)\in {\mathcal {M}}_n\Big )} \\&=\frac{B_{{\mathbf {q}}}^+\Big ((M,E,R) \in {\mathcal {M}}_n\Big )}{B_{{\mathbf {q}}}^0\Big ((M,E,R)\in {\mathcal {M}}_n\Big )}\frac{B_{{\mathbf {q}}}( {\mathcal {M}}^+)}{B_{{\mathbf {q}}}({\mathcal {M}}^0)}. \end{aligned}$$

Recall that, in the BDFG bijection, the number of vertices of the map is exactly one more than the number of vertices of type 1 in the mobile. As a consequence, we have

$$\begin{aligned} \frac{B_{{\mathbf {q}}}^+\Big ((M,E,R)\in {\mathcal {M}}_n\Big )}{B_{{\mathbf {q}}}^0\Big ((M,E,R)\in {\mathcal {M}}_n\Big )} =\frac{\mathbb {P}^{(1)}_{\zeta } \big ( |{\mathbf {T}}|_{\gamma _I}=n-1\big )}{\mathbb {P}^{(2,2)}_{\zeta }\big ( |{\mathbf {F}}|_{\gamma _I}=n-1\big )}. \end{aligned}$$

We then deduce from \((H_{{\mathbf {w}}})\) and Lemma 6.1 that this quotient indeed converges as n tends to infinity along the subsequence \((\alpha _I+d_In)_{n\in \mathbb {N}}\).

6.3.5 The Non-pointed Case

We follow here the method given in [1], Sect. 6. We now assume that \({\mathbf {q}}\) is regular critical and that \(I\in \{E,F\}\); \((M_n,E_n,R_n)\) is still a map with distribution \(B_{{\mathbf {q}}}\) conditioned on \(\#I(M_n)=n\). We show that the number of vertices of \(M_n\) is concentrated around a deterministic multiple of n.

Lemma 6.6

There exists \(m>0\) such that, for \(\delta >0\), there exists \(C_{\delta }>0\) such that, for \(n\in \mathbb {N}\) large enough,

$$\begin{aligned} \mathbb {P}\Big [\big |\#V(M_n) - m n\big |>\delta n\Big ]\leqslant \exp (-C_{\delta }n) \end{aligned}$$

This lemma is itself a consequence of this similar result on trees:

Lemma 6.7

Let \(K\in \mathbb {N}\) and let \(\nu \) be a non-degenerate, irreducible and regular critical K-type ordered offspring distribution. Let \(\gamma \in (\mathbb {Z}_+)^K\) and \(\gamma '\in (\mathbb {Z}_+)^K\) be two size measuring vectors. There then exists \(m>0\) such that, for any type \(i\in [K]\) and \(\delta >0\), there exists \(c_{\delta }>0\) such that, for \(n\in \mathbb {N}\) large enough,

$$\begin{aligned} \mathbb {P}^{(i)}\big (|\mathbf {T}|_{\gamma }=n,\Big ||\mathbf {T}|_{\gamma '}-mn\Big |\geqslant \delta n\big )\leqslant \exp (-c_{\delta }n) \end{aligned}$$

Proof

We treat the case where \(\gamma _k={\mathbbm {1}}_{k=i}\) (i being the type of the root) and \(\gamma '_k={\mathbbm {1}}_{k=j}\), and leave the generalization to all \(\gamma \) and \(\gamma '\) to the reader. In this case, the value of m we are looking for is in fact \(a_j/a_i\), the average of the measure \(\xi _{j,i}\) defined in Sect. 2.2.

Consider an infinite sequence of i.i.d. trees \(({\mathbf {T}}_n)_{n\in \mathbb {N}}\) with distribution \(\mathbb {P}^{(i)}\) and list the vertices of type i of these trees in lexicographical order. For n in \(\mathbb {N}\), call \(A_n\) the total number of vertices of type j placed between the nth element of this list and the next. Notice now that the event where \(|\#_j\mathbf {T}_1-\frac{a_j}{a_i} n|>\delta n \) and \(\#_i\mathbf {T}_1=n\) is included in the event where \(|A_1+\cdots +A_n - \frac{a_j}{a_i} n|>\delta n\), thus giving us

$$\begin{aligned} \mathbb {P}^{(i)}\Big (|\#_j{\mathbf {T}}-\frac{a_j}{a_i} n|>\delta n \;\bigcap \; \#_i{\mathbf {T}}=n\Big )\leqslant \mathbb {P}\big (|A_1+\cdots +A_n - \frac{a_j}{a_i} n|>\delta n\big ). \end{aligned}$$

Since these variables are i.i.d. with distribution \(\xi _{j,i}\) and have an exponential moment thanks to Proposition 2.1, point (iv), we can apply Cramér’s theorem, which gives us a constant \(c_{\delta }>0\) such that, for large enough \(n\in \mathbb {N}\),

$$\begin{aligned} \mathbb {P}\big (|A_1+\cdots +A_n - \frac{a_j}{a_i} n|>\delta n\big ) \leqslant \exp (-c_{\delta }n), \end{aligned}$$

thus ending the proof. \(\square \)

Proving Lemma 6.6 from Lemma 6.7 then simply consists in applying the BDFG bijection and noticing that conditioning on events of the form \(\{|\mathbf {T}|_{\gamma }=n\}\) does not change anything here, because the probability of such an event is of order \(n^{-3/2}\), the inverse of which can be absorbed in the exponential factor. \(\square \)
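The exponential bound provided by Cramér’s theorem can be illustrated with a toy computation: for i.i.d. Bernoulli(1/2) summands (our stand-in for the variables \(A_n\), which likewise have an exponential moment), the deviation probability can be evaluated exactly and its exponential decay in n observed:

```python
from fractions import Fraction
from math import comb

def dev_prob(n, delta):
    """P(|A_1 + ... + A_n - n/2| > delta*n) for i.i.d. Bernoulli(1/2) A_i,
    computed exactly; Cramer's theorem predicts decay of order exp(-c_delta*n)."""
    lo = n * Fraction(1, 2) - delta * n
    hi = n * Fraction(1, 2) + delta * n
    mass = sum(comb(n, j) for j in range(n + 1) if lo <= j <= hi)
    return 1 - Fraction(mass, 2 ** n)

p100, p200, p400 = (dev_prob(n, Fraction(1, 10)) for n in (100, 200, 400))
assert p400 < p200 < p100       # the deviation probability decays with n
assert float(p400) < 1e-3       # and the decay is exponentially fast
```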

End of the Proof of Theorem 6.1, Second Part Since we will simultaneously manipulate pointed and non-pointed maps, we change our notation slightly here: \((M_n,E_n,R_n)\) is a rooted and pointed map with distribution \(B_{{\mathbf {q}}}\) conditioned on \(\#I(M_n)=n\) and \((M_n^{\emptyset }, E_n^\emptyset )\) is a rooted map with distribution \(B_{{\mathbf {q}}}^\emptyset \) conditioned on \(\#I(M_n^\emptyset )=n\). As mentioned earlier, the distribution of \((M_n,E_n)\) is a version of that of \((M_n^{\emptyset }, E_n^\emptyset )\) biased by the number of vertices. We write this relation in the inverse direction: for bounded functions F, we have

$$\begin{aligned} {\mathbb {E}}\big [F(M_n^\emptyset ,E_n^\emptyset )\big ] ={\mathbb {E}}[\frac{1}{\#V(M_n)}]^{-1}{\mathbb {E}}\big [\frac{F(M_n,E_n)}{\#V(M_n)}\big ]. \end{aligned}$$

If we let \(X_n=mn(\#V(M_n))^{-1}\), we get the statement

$$\begin{aligned} {\mathbb {E}}\big [F(M_n^\emptyset ,E_n^\emptyset )\big ] ={\mathbb {E}}\big [F(M_n,E_n)\frac{X_n}{\mathbb {E}[X_n]}\big ], \end{aligned}$$

which yields

$$\begin{aligned} \Big |{\mathbb {E}}\big [F(M_n^\emptyset ,E_n^\emptyset )\big ]-{\mathbb {E}}\big [F(M_n,E_n)\big ]\Big |= \Big |{\mathbb {E}}\Big [F(M_n,E_n)\Big (\frac{X_n}{\mathbb {E}[X_n]}-1\Big )\Big ]\Big |\leqslant \Vert F\Vert _{\infty }\,{\mathbb {E}}\Big [\Big |1-\frac{X_n}{\mathbb {E}[X_n]}\Big |\Big ]. \end{aligned}$$

Proving that \(X_n\) converges to 1 in \(L^1\) will then end the proof. Take \(\delta >0\) and write

$$\begin{aligned} {\mathbb {E}}[|X_n-1|]\leqslant \delta + {\mathbb {E}}[|X_n-1|{\mathbbm {1}}_{\{|X_n-1|>\delta \}}]. \end{aligned}$$

Let \(\varepsilon =\delta (1-\delta )^{-1}\), so that the event \(\{|X_n-1|>\delta \}\) is included in the event \(\{|\#V(M_n) - m n|>\varepsilon n\}\). Recalling that \(X_n\leqslant mn\), we then have

$$\begin{aligned} {\mathbb {E}}[|X_n-1|]\leqslant \delta + (mn+1)\mathbb {P}(|\#V(M_n) - m n|>\varepsilon n), \end{aligned}$$

and Lemma 6.6 ends the proof of Proposition 6.3. \(\square \)

7 Recurrence of the Infinite Map

The aim of this section is to prove the following:

Theorem 7.1

The random rooted graph \((M_{\infty },E_{\infty }^+)\) is almost surely recurrent.

Our principal tool for the proof will be the main result of [12]: since \((M_{\infty },E_{\infty }^+)\) is the limit in distribution of \(\big ((M_n,E_n^+),n\in \mathbb {N}\big )\), and since \(E_n^+\) is chosen according to the stationary distribution on \(M_n\) (that is, a vertex is chosen with probability proportional to its degree, i.e., its number of incident edges), Theorem 1.1 of [12] states that if we can find positive constants \(\lambda \) and C such that, for all \(n\in \mathbb {N}\),

$$\begin{aligned} \mathbb {P}({\mathrm {deg}}(E^+)\geqslant n) \leqslant C{\mathrm {e}}^{-\lambda n}, \end{aligned}$$

then Theorem 7.1 will be proven.

We invite the reader to read Appendix 8 where we discuss a few elementary results concerning random variables with such exponential tails.

7.1 The Case of Positive Maps

Picture a mobile \(({\mathbf {T,E,L}})\) with distribution \(\widehat{\mathbb {P}}^{(1),(0)}_{\zeta ,\nu }\): it has an infinite spine, with finite trees grafted on its left and right sides. Since the BDFG bijection maps \(\emptyset \) to \(E^+\), we will want to show that \(\emptyset \) has an exponentially integrable number of successors and is the successor of an exponentially integrable number of vertices. We start with a simplified case.

Proposition 7.1

Let \({\mathbf {A}}\) be a mobile with distribution \(\mathbb {P}_{\mathbf {\zeta },\nu }^{(1),(0)}\) conditioned to the event where \(\emptyset \) has exactly one child. Let X be the number of corners of \({\mathbf {A}}\) for which \(\emptyset \) is the successor. Then X is an \(EI(\lambda )\) variable for a certain \(\lambda >0\).

Proof

Recall that X is the number of corners labelled 1 or \(\frac{1}{2}\) met before encountering a vertex labelled 0 while circling counterclockwise around the tree \({\mathbf {A}}\). We will separately treat corners of types 1 and 2.

Let \(X_1\) be the number of corners of type 1 encountered. We claim that, for all n,

$$\begin{aligned} \mathbb {P}(X_1=n \; | \; X_1\geqslant n)\geqslant \alpha (1-\frac{1}{Z^+}), \end{aligned}$$
(7.1)

where \(\alpha >0\) is the probability that, given a vertex of type 3 labelled 1, its rightmost offspring is of type 1 and has label 0. The fact that \(\alpha \) is strictly positive comes from the existence of an \(i\geqslant 3\) such that \(q_i>0\). If such an i exists with \(i\ne 3\), the vertex of type 3 can have offspring including at least one child of type 1, the uniform ordering of the offspring means that this child can be the rightmost one, and the distribution of the label displacements shows that it can have label 0. If instead \(q_3>0\) and \(q_i=0\) for \(i\geqslant 4\), the type 3 vertex can have a unique child of type 2 with label \(\frac{1}{2}\), which can have a unique child of type 4, which can in turn have a unique child of type 1 with label 0.

Inequality (7.1) is obtained by recalling from Proposition 5.1 that the offspring of a vertex of type 1 consists only of vertices of type 3, and that their number follows a geometric distribution with parameter \(1-\frac{1}{Z^+}\). Thus, whenever we visit a corner of a type 1 vertex with label 1, there is probability \(1-\frac{1}{Z^+}\) that this vertex has another child. This immediately gives us (7.1), and a simple induction shows that \(X_1\) is indeed an EI variable.
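Spelling out the induction: writing \(\beta =\alpha (1-\frac{1}{Z^+})>0\), inequality (7.1) gives

$$\begin{aligned} \mathbb {P}(X_1\geqslant n+1)=\mathbb {P}(X_1\geqslant n)\,\mathbb {P}(X_1\ne n \; | \; X_1\geqslant n)\leqslant (1-\beta )\,\mathbb {P}(X_1\geqslant n), \end{aligned}$$

so that \(\mathbb {P}(X_1\geqslant n)\leqslant (1-\beta )^n\) for all n, which is precisely an exponential tail bound for \(X_1\).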

Let now \(X_2\) be the number of vertices of type 2 with label \(\frac{1}{2}\) encountered before the first vertex labelled 0. We insist that each vertex is counted exactly once, when it is first met on the counterclockwise exploration path. Then the same argument as for vertices of label 1 shows that \(\mathbb {P}(X_2=n \; | \; X_2\geqslant n)\geqslant \alpha '\) for some strictly positive \(\alpha '\), and \(X_2\) is an EI variable.

Since \(X\leqslant X_1 + 2X_2\), we now have our conclusion. \(\square \)
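The last step, namely that \(X\leqslant X_1+2X_2\) with \(X_1,X_2\) EI implies that X is EI, can be seen with a crude union bound (the constants below are those coming from the exponential tails of \(X_1\) and \(X_2\)): the event \(\{X\geqslant n\}\) forces \(X_1\geqslant n/2\) or \(X_2\geqslant n/4\), whence

$$\begin{aligned} \mathbb {P}(X\geqslant n)\leqslant \mathbb {P}(X_1\geqslant n/2)+\mathbb {P}(X_2\geqslant n/4)\leqslant C_1\mathrm {e}^{-\lambda _1 n/2}+C_2\mathrm {e}^{-\lambda _2 n/4}, \end{aligned}$$

so X itself has an exponential tail.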

The following lemma provides some additional information on the structure of \({\mathbf {T}}\).

Lemma 7.1

Let \(n\in \mathbb {Z}_+,\) and let V be the nth vertex of the spine of \({\mathbf {T}}\) to have type 1. Let also \(N_r\) and \(N_l\) be the numbers of subtrees rooted at V on the right and left sides of the spine. These variables are i.i.d., and their common distribution is geometric with parameter \(1-\frac{1}{Z^+}\).

Proof

By combining Propositions 5.1 and 3.1, we obtain that the total offspring N of V follows a size-biased geometric distribution: we have

$$\begin{aligned} \mathbb {P}(N=k)=\frac{k}{(Z^+)^2}\left( 1-\frac{1}{Z^+}\right) ^{k-1} \end{aligned}$$

for \(k\geqslant 1\). Recall also that the child of V which is on the spine is chosen uniformly among the offspring of V. We thus have

$$\begin{aligned} \mathbb {P}(N_l=k,N_r=k')=\frac{\mathbb {P}(N=1+k+k')}{1+k+k'}=\frac{1}{(Z^+)^2}\left( 1-\frac{1}{Z^+}\right) ^{k+k'}, \end{aligned}$$

ending the proof. \(\square \)
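As a sanity check of this factorization (a numerical sketch; the value of \(Z^+\) below is arbitrary and purely illustrative):

```python
# Check: if P(N=m) = m p^2 (1-p)^{m-1} (size-biased geometric, p = 1/Z^+)
# and the spine child is uniform among the m children, then the numbers of
# children to its left and right are independent geometric(p) on {0,1,...}.
Z_plus = 3.0                 # arbitrary illustrative value
p = 1.0 / Z_plus

def P_N(m):
    """Size-biased geometric: P(N=m) = m p^2 (1-p)^{m-1}, m >= 1."""
    return m * p * p * (1.0 - p) ** (m - 1)

def P_joint(k, kp):
    """P(N_l=k, N_r=k') = P(N=1+k+k') / (1+k+k') (uniform spine child)."""
    m = 1 + k + kp
    return P_N(m) / m

def P_geom(k):
    """Geometric on {0,1,...}: P(G=k) = p (1-p)^k."""
    return p * (1.0 - p) ** k

# The joint law factorizes as a product of two geometric(p) laws.
ok = all(abs(P_joint(k, kp) - P_geom(k) * P_geom(kp)) < 1e-12
         for k in range(20) for kp in range(20))
```

The factorization is in fact exact: the factor \(1+k+k'\) from the size-biasing cancels against the uniform choice of the spine child.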

Proof of Theorem 7.1 for Positive Maps. First, by Lemma 7.1, \(\emptyset \) has an EI number of children, since geometric variables are EI, and therefore an EI number of successors. Next, consider all the subtrees of \({\mathbf {T}}\) rooted at \(\emptyset \), excluding the subtree containing the spine. They are EI in number, all independent, and, by Proposition 7.1, the root \(\emptyset \) is connected to an EI number of vertices in each of them. Lemma 8.2 allows us to combine all of this: outside of the subtree containing the spine, \(\emptyset \) is connected to an EI number of vertices. It thus only remains to prove a variant of Proposition 7.1 for this last subtree. This is done in the same way since, during the counterclockwise exploration process, the number of children of a vertex of type 1 on the spine is still geometric by Lemma 7.1, while vertices of type 2 correspond to only one corner. \(\square \)

7.2 The Case of Null Maps

The situation for null maps is slightly different, because the vertex \(E^+\) is no longer the root of the mobile. Consider a mobile \(({\mathbf {T,E,L}})\) obtained by merging at their roots the two components of a forest with distribution \(\widehat{\mathbb {P}}^{(2,2),(\frac{1}{2},\frac{1}{2})}_{\mathbf {\zeta },\nu }\), and let (MER) be the map obtained after applying the BDFG bijection. Recall that \(E^+\) is the first vertex of type 1 and label 0 encountered when running the clockwise contour process of \({\mathbf {T}}\). Note that it is either on the spine or on its left side.

An adaptation of the reasoning used in the previous section gives us that the number of vertices to which \(E^+\) is connected is indeed EI. First, for the number of such vertices which are descendants of \(E^+\) in \({\mathbf {T}}\), we are exactly back in the positive case: if \(E^+\) is not on the spine, then we apply Proposition 7.1 to an EI number of subtrees rooted at \(E^+\), and if \(E^+\) is on the spine, we treat separately the subtrees on the left side of the spine, those on the right side, and the subtree containing the spine. Second, we look for vertices of which \(E^+\) is the successor but which are not descendants of \(E^+\). These can be found by running both the clockwise and counterclockwise contour processes, starting at the root, and stopping each the first time it reaches a label 0. The same arguments as in the proof of Proposition 7.1 show that we encounter an EI number of vertices with labels 1 and \(\frac{1}{2}\) on the way, thus ending the complete proof of Theorem 7.1. \(\square \)