
1 Introduction

Loop measures have become important in the analysis of random walks and fields arising from random walks. Such measures appear in work of Symanzik [9] but the recent revival came from the Brownian loop soup [7] which arose in the study of the Schramm-Loewner evolution. The random walk loop soup is a discrete analogue for which one can show convergence to the Brownian loop soup. The study of such measures and soups has continued: in continuous time by Le Jan [8] and in discrete time in [5, 6]. The purpose of this note is to give an introduction to the discrete time measures and to discuss two of the applications: the relation with loop-erased walk and spanning trees, and a distributional identity between a function of the loop soup and the square of the Gaussian free field. This paper is not intended to be a survey but only a sample of the uses of the loop measure.

While the term “loop measure” may seem vague, we are talking about a specific measure from which a probabilistic construction, the “loop soup”, is derived. We are emphasizing the loop measure rather than the loop soup, which is a Poissonian realization of the measure, because we want to allow the loop measure to take negative or complex values. However, we do consider the loop soup as a complex measure. Measures with negative and complex weights can arise even when studying probabilistic objects; for example, sharp asymptotics for the planar loop-erased random walk were derived in [4] using a loop measure with signed weights.

We will start with some basic definitions. In many ways, the loop measure can be considered a way to understand matrices, especially determinants, and some of the results have very classical counterparts. Most of the theorems about the basic properties can be found in [5, Chap. 9] although that book restricted itself to positive measures. We redo some proofs just to show that positivity of the entries is not important. A key fact is that the total mass of the loop measure is the negative of the logarithm of the determinant of the Laplacian.

We next introduce the loop-erased random walk and show how one can use loop measures to give a short proof of Kirchhoff's matrix-tree theorem by using an algorithm due to David Wilson for generating uniform spanning trees.

Our next section describes an isomorphism theorem found by Le Jan that is related to earlier isomorphism theorems of Brydges et al. [1, 2] and Dynkin [3]. In this case, one shows that the local time of a continuous time version of the loop soup has the same distribution as the square of a Gaussian field. Le Jan established this by constructing a continuous-time loop soup. We choose a slightly different, but essentially equivalent, method of using the discrete loop soup and then adding exponential waiting times. This is similar to the construction of continuous time Markov chains by starting with a discrete time chain and then adding the waiting times. In order to get the formulas to work, one needs to consider a correction term that is given by “trivial loops”.

We finally give some discussion of complex Gaussian fields with positive definite Hermitian weights. We first consider real (signed) weights and relate this to the real Gaussian free field. Finally we consider a complex Gaussian field and show that it can be considered as a pair of real Gaussian fields.

2 Definitions

We will consider edge weights, perhaps complex valued, on a finite state space A. A set of weights is the same thing as a matrix Q indexed by A.

  • We call Q acceptable if the matrix with entries | Q(x, y) | has all eigenvalues in the interior of the unit disc. (This is not a standard term, but we will use it for convenience.)

  • We say Q is positive if the entries are nonnegative and Q is real if the entries are real.

  • As usual, we say that Q is symmetric if Q(x, y) = Q(y, x) for all x, y and Q is Hermitian if \(Q(x,y) = \overline{Q(y,x)}\) for all x, y.

  • If Q is Hermitian we say that Q is positive definite if all the eigenvalues are strictly greater than zero, or equivalently if \(\overline{\mathbf{x}} \cdot Q\mathbf{x} > 0\) for all non-zero x.

If \(A \subsetneq A^{{\prime}}\) and Q is the transition matrix for an irreducible Markov chain on \(A^{{\prime}}\), then Q restricted to A is positive and acceptable. This is one of the main examples of interest. If Q is any matrix, then \(\lambda Q\) is acceptable for \(\lambda > 0\) sufficiently small.
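
Acceptability is a purely spectral condition and is easy to test numerically. The following is a minimal sketch in Python (the matrix and function name are illustrative, not from the text):

```python
import numpy as np

def is_acceptable(Q):
    """Check that the matrix of absolute values (|Q(x,y)|) has all
    eigenvalues in the interior of the unit disc (spectral radius < 1)."""
    return np.max(np.abs(np.linalg.eigvals(np.abs(Q)))) < 1

# A complex weight matrix on a two-point space (illustrative values).
Q = np.array([[0.2 + 0.1j, -0.3],
              [0.25, 0.1 - 0.2j]])
print(is_acceptable(Q))       # True
print(is_acceptable(10 * Q))  # False: lambda*Q is acceptable only for small lambda
```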

If \(V \subset A\) has k elements, we will write \(Q_{V}\) for the k × k matrix obtained by restricting Q to V. A path in A of length n is a finite sequence of points

$$\displaystyle{\omega = [\omega _{0},\ldots,\omega _{n}],\;\;\;\;\omega _{j} \in A.}$$

We write \(\vert \omega \vert = n\) for the number of steps in the path and \(\omega ^{R}\) for the reversed path

$$\displaystyle{\omega ^{R} = [\omega _{ n},\ldots,\omega _{0}].}$$

We allow the trivial paths with \(\vert \omega \vert = 0\). We write \(\mathcal{K}_{x,y}(A)\) for the set of all paths in A with \(\omega _{0} = x,\ \omega _{n} = y\); if x = y, we include the trivial path.

The matrix Q gives the path measure defined by

$$\displaystyle{Q(\omega ) =\prod _{ j=1}^{n}Q(\omega _{ j-1},\omega _{j}),\quad \omega = [\omega _{0},\ldots,\omega _{n}] \in \bigcup _{x,y\in A}\mathcal{K}_{x,y}(A),}$$

where Q(ω) = 1 if \(\vert \omega \vert = 0\). Note that if Q is Hermitian, then \(Q(\omega ^{R}) = \overline{Q(\omega )}\). A path ω is a (rooted) loop (rooted at \(\omega _{0}\)) if \(\omega _{0} =\omega _{n}\). Note that we write Q both for the edge weights (matrix entries) and for the induced measure on paths.

We let Δ = I − Q denote the Laplacian. We write \(G(x,y) = G^{Q}(x,y)\) for the Green’s function, which can be defined either as

$$\displaystyle{G =\varDelta ^{-1} =\sum _{ j=0}^{\infty }Q^{j}}$$

or by

$$\displaystyle{G(x,y) = Q[\mathcal{K}_{x,y}(A)] =\sum _{\omega \in \mathcal{K}_{x,y}(A)}Q(\omega ).}$$

Provided Q is acceptable, these sums converge absolutely. We write

$$\displaystyle{G(x,y) = G_{R}(x,y) + i\,G_{I}(x,y),}$$

where \(G_{R},G_{I}\) are real matrices.
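
The equivalence of the two definitions (matrix inverse versus the sum over paths, i.e. the Neumann series) can be checked numerically; a short sketch, with an illustrative acceptable matrix:

```python
import numpy as np

Q = np.array([[0.2 + 0.1j, -0.3],
              [0.25, 0.1 - 0.2j]])   # acceptable: spectral radius of |Q| < 1
n = Q.shape[0]

# Definition 1: G = Delta^{-1} = (I - Q)^{-1}.
G_inv = np.linalg.inv(np.eye(n) - Q)

# Definition 2: G = sum_j Q^j, the total path measure of K_{x,y}(A).
G_series = sum(np.linalg.matrix_power(Q, j) for j in range(200))

print(np.allclose(G_inv, G_series))   # True
```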

Let

$$\displaystyle{ f_{x} =\sum Q(\omega ) }$$
(1)

where the sum is over all paths ω from x to x of length at least one that have no other visits to x. A standard renewal argument shows that

$$\displaystyle{ G(x,x) =\sum _{ k=0}^{\infty }f_{ x}^{k}, }$$
(2)

and since the sum is convergent,

$$\displaystyle{\vert f_{x}\vert < 1.}$$

If \(V \subset A\), we will write

$$\displaystyle{G_{V }(x,y) = G^{Q_{V } }(x,y) =\sum _{\omega \in \mathcal{K}_{x,y}(V )}Q(\omega ),}$$

for the corresponding Green’s function associated to paths in V. The next proposition is a well known relation between the determinant of the Laplacian and the Green’s function.

Proposition 2.1

If \(A =\{ x_{1},\ldots,x_{n}\}\) and \(A_{j} = A\setminus \{x_{1},\ldots,x_{j-1}\}\),

$$\displaystyle{\frac{1} {\det \varDelta } =\prod _{ j=1}^{n}G_{ A_{j}}(x_{j},x_{j}).}$$

Proof

By induction on n. If n = 1 and \(q = Q(x_{1},x_{1})\), there is exactly one path of length k in \(A_{1}\) and it has measure \(q^{k}\). Therefore

$$\displaystyle{G_{A_{1}}(x_{1},x_{1}) =\sum _{ k=0}^{\infty }q^{k} = \frac{1} {1 - q}.}$$

Assume the result is true for each \(A_{j} \subsetneq A\), and note that if \(g(x) = G_{A_{j}}(x,x_{j})\), then

$$\displaystyle{[I - Q_{A_{j}}]\,g =\delta _{x_{j}}.}$$

Using Cramer’s rule to solve this linear system, we see that

$$\displaystyle{G_{A_{j}}(x_{j},x_{j}) = \frac{\det [I - Q_{A_{j+1}}]} {\det [I - Q_{A_{j}}]}.}$$

Taking the product over \(j = 1,\ldots,n\), the right-hand sides telescope to \(1/\det [I - Q_{A}] = 1/\det \varDelta \), which gives the result.
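
Proposition 2.1 is easy to verify numerically; a sketch with a random complex acceptable matrix (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
Q = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q *= 0.1 / np.max(np.abs(np.linalg.eigvals(np.abs(Q))))   # rescale: acceptable

lhs = 1 / np.linalg.det(np.eye(n) - Q)   # 1 / det(Delta)

rhs = 1.0
for j in range(n):
    Q_j = Q[j:, j:]                           # Q restricted to A_j = {x_j,...,x_n}
    G_j = np.linalg.inv(np.eye(n - j) - Q_j)  # Green's function on A_j
    rhs *= G_j[0, 0]                          # G_{A_j}(x_j, x_j)

print(np.allclose(lhs, rhs))   # True
```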

Proposition 2.2

If Q is a Hermitian acceptable matrix, then for each x, G(x,x) > 0. In particular, Δ and \(G =\varDelta ^{-1}\) are positive definite Hermitian matrices.

Proof

It is immediate that Δ and G are Hermitian. If ω is a path in (1), then so is \(\omega ^{R}\). Since \(Q(\omega ^{R}) = \overline{Q(\omega )}\), we can see that \(\mathfrak{I}[f_{x}] = 0\), and hence \(-1 < f_{x} < 1\). As in (2), we can write

$$\displaystyle{G(x,x) =\sum _{ k=0}^{\infty }f_{ x}^{k} = \frac{1} {1 - f_{x}} > 0.}$$

Combining this with Proposition 2.1, we see that each principal minor of Δ is positive and hence Δ is positive definite.

3 Loop Measures

3.1 Definition

Let \(\mathcal{O} = \mathcal{O}(A)\) denote the set of rooted loops of strictly positive length. If Q is an acceptable weight, then the (rooted) loop measure (associated to Q) is the complex measure m = m Q on \(\mathcal{O}\), given by

$$\displaystyle{m(\omega ) = \frac{Q(\omega )} {\vert \omega \vert }.}$$

Note that the loop measure is not the same thing as the path measure restricted to loops. An unrooted loop is an equivalence class of rooted loops in \(\mathcal{O}\) under the equivalence relation generated by

$$\displaystyle{[\omega _{0},\ldots,\omega _{n}] \sim [\omega _{1},\ldots,\omega _{n},\omega _{1}].}$$

In other words, an unrooted loop is a loop for which one forgets the “starting point”. We will write \(\tilde{\omega }\) for unrooted loops and we let \(\tilde{\mathcal{O}}\) denote the set of unrooted loops. We write \(\omega \sim \tilde{\omega }\) if ω is in the equivalence class \(\tilde{\omega }\). The measure m induces a measure that we call \(\tilde{m}\) by

$$\displaystyle{\tilde{m}(\tilde{\omega }) =\sum _{\omega \sim \tilde{\omega }}m(\omega ).}$$

We make several remarks.

  • Unrooted loops have forgotten their roots but have not lost their orientation. In particular, \(\tilde{\omega }\) and \(\tilde{\omega }^{R}\) may be different unrooted loops.

  • Since Q(ω) and | ω | are functions of the unrooted loop, we can write \(Q(\tilde{\omega }),\vert \tilde{\omega }\vert \). If Q is Hermitian, then \(Q(\tilde{\omega }^{R}) = \overline{Q(\tilde{\omega })}.\)

  • Let \(d(\tilde{\omega })\) denote the number of rooted loops ω with \(\omega \sim \tilde{\omega }\). Note that \(d(\tilde{\omega })\) is an integer that divides \(\vert \tilde{\omega }\vert \), but it is possible that \(d(\tilde{\omega }) < \vert \tilde{\omega }\vert \). For example, if a, b, c are distinct elements and \(\tilde{\omega }\) is the unrooted loop with representative

    $$\displaystyle{\omega = [a,b,c,a,b,a,b,c,a,b,a],}$$

    then \(\vert \tilde{\omega }\vert = 10\) and \(d(\tilde{\omega }) = 5\). Note that

    $$\displaystyle{\tilde{m}(\tilde{\omega }) = \frac{d(\tilde{\omega })} {\vert \tilde{\omega }\vert } \,Q(\tilde{\omega }).}$$
  • Suppose that an unrooted loop \(\tilde{\omega }\) with \(\vert \tilde{\omega }\vert = n\) has \(d = d(\tilde{\omega })\) rooted representatives. In other words, the loop “repeats” itself after d steps and makes n∕d such repetitions. Suppose k > 0 of these rooted representatives are rooted at x. In the example above, k = 2 for x = a and x = b and k = 1 for x = c. Then the total number of times that the loop visits x is k(n∕d). Suppose that we give each of the k loops that are rooted at x measure \(Q(\tilde{\omega })/[kn/d]\) and give all the other rooted representatives of \(\tilde{\omega }\) measure zero. Then the induced measure on unrooted loops is the same as the usual unrooted loop measure, giving measure \((d/n)\,Q(\tilde{\omega })\) to \(\tilde{\omega }\).

  • In other words, if we give each rooted loop rooted at x measure Q(ω)∕k where k is the number of visits to x, then the induced measure on unrooted loops restricted to loops that intersect x is the same as \(\tilde{m}\).

  • One reason that the unrooted loop measure is useful is that one can move the root around to do calculations. The next lemma is an example of this.

Let

$$\displaystyle{F(A) = F^{Q}(A) =\exp \left (\sum _{\tilde{\omega } \in \tilde{\mathcal{O}}}\tilde{m}(\tilde{\omega })\right ) =\exp \left (\sum _{\omega \in \mathcal{O}}m(\omega )\right ).}$$

If \(V \subset A\), we let

$$\displaystyle{F_{V }(A) =\exp \left (\sum _{\tilde{\omega }\in \tilde{\mathcal{O}},\tilde{\omega }\cap V \neq \emptyset }\tilde{m}(\tilde{\omega })\right ).}$$

Note that \(F_{A}(A) = F(A)\). If V = { x}, we write just \(F_{x}(A)\). The next lemma relates the Green’s function to the exponential of the loop measure; considering the case where Q is positive shows that the sum converges absolutely. As a corollary, we will have a relationship between the determinant of the Laplacian and the loop measure.

Lemma 3.1

$$\displaystyle{F_{x}(A) = G(x,x).}$$

More generally, if \(V =\{ x_{1},\ldots,x_{l}\} \subset A\) and \(A_{j} = A\setminus \{x_{1},\ldots,x_{j-1}\}\) , then

$$\displaystyle{F_{V }(A) =\prod _{ j=1}^{l}G_{ A_{j}}(x_{j},x_{j}).}$$

Proof

Let \(\mathcal{A}_{k}\) denote the set of \(\tilde{\omega } \in \tilde{\mathcal{O}}\) that have k different representatives that are rooted at x. By spreading the mass evenly over these k representatives, as described in the second and third to last bullets above, we can see that

$$\displaystyle{\tilde{m}\left [\mathcal{A}_{k}\right ] = \frac{1} {k}\,f_{x}^{k}.}$$

Hence,

$$\displaystyle{\tilde{m}\left [\bigcup _{k=1}^{\infty }\mathcal{A}_{ k}\right ] =\sum _{ k=1}^{\infty }\frac{1} {k}\,f_{x}^{k} = -\log [1 - f_{ x}] =\log G(x,x).}$$

This gives the first equality and by iterating this fact, we get the second equality.

Corollary 3.2

$$\displaystyle{F(A) = \frac{1} {\det \varDelta }.}$$

Proof

Let \(A =\{ x_{1},\ldots,x_{n}\}\), \(A_{j} =\{ x_{j},\ldots,x_{n}\}\). By Proposition 2.1 and Lemma 3.1,

$$\displaystyle{\frac{1} {\det \varDelta } =\prod _{ j=1}^{n}G_{ A_{j}}(x_{j},x_{j}) = F(A).}$$
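
Since the rooted loops of length k (rooted anywhere) have total Q-measure \(\mathrm{tr}(Q^{k})\), the total mass of the loop measure is \(\sum _{k\geq 1}\mathrm{tr}(Q^{k})/k\), and the corollary says that its exponential equals 1∕detΔ. A numerical sketch of this check (matrix illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
Q = rng.standard_normal((n, n))
Q *= 0.3 / np.max(np.abs(np.linalg.eigvals(np.abs(Q))))   # acceptable

# Total mass of the loop measure: loops of length k contribute tr(Q^k)/k.
loop_mass = sum(np.trace(np.linalg.matrix_power(Q, k)) / k
                for k in range(1, 200))

print(np.exp(loop_mass))                    # F(A)
print(1 / np.linalg.det(np.eye(n) - Q))     # 1 / det(Delta): same value
```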

Suppose f is a complex valued function defined on A to which we associate the diagonal matrix

$$\displaystyle{D_{f}(x,y) =\delta _{x,y}\,f(x).}$$

Let \(Q_{f} = D_{1/(1+f)}\,Q\), that is,

$$\displaystyle{Q_{f}(x,y) = \frac{Q(x,y)} {1 + f(x)}.}$$

If Q is acceptable, then for f sufficiently small, \(Q_{f}\) will be an acceptable matrix for which we can define the loop measure \(m_{f}\). More specifically, if \(\omega = [\omega _{0},\ldots,\omega _{n}] \in \mathcal{O}\), then

$$\displaystyle\begin{array}{rcl} m_{f}(\omega )& =& \frac{Q_{f}(\omega )} {\vert \omega \vert } = m(\omega )\,\prod _{j=1}^{n} \frac{1} {1 + f(\omega _{j})}, {}\\ \tilde{m}_{f}(\tilde{\omega })& =& \tilde{m}(\tilde{\omega })\,\prod _{j=1}^{n} \frac{1} {1 + f(\omega _{j})}. {}\\ \end{array}$$

Hence, if \(G_{f} = G^{Q_{f}}\),

$$\displaystyle{\det G_{f} =\exp \left (\sum _{\omega \in \mathcal{O}}m_{f}(\omega )\right ) =\exp \left (\sum _{\tilde{\omega }\in \tilde{\mathcal{O}}}\tilde{m}_{f}(\tilde{\omega })\right ).}$$

Example

Consider a one-point space A = { x} with Q(x,x) = q ∈ (0,1). For each n > 0, there is exactly one loop \(\omega ^{n}\) of length n, with \(Q(\omega ^{n}) = q^{n}\) and \(m(\omega ^{n}) = q^{n}/n\). Then Δ is the 1 × 1 matrix with entry 1 − q,

$$\displaystyle{G_{A}(x,x) =\sum _{ n=0}^{\infty }q^{n} = \frac{1} {1 - q},}$$

and

$$\displaystyle{\sum _{\omega \in \mathcal{O}}m(\omega ) =\sum _{ n=1}^{\infty }\frac{q^{n}} {n} = -\log [1 - q].}$$

3.2 Relation to Loop-Erased Walk

Suppose \(\overline{A}\) is a finite set, \(A \subsetneq \overline{A},\partial A = \overline{A}\setminus A\), and Q is an acceptable matrix on \(\overline{A}\). Let \(\mathcal{K}(A)\) denote the set of paths \(\omega = [\omega _{0},\ldots,\omega _{n}]\) with \(\omega _{n} \in \partial A\) and \(\{\omega _{0},\ldots,\omega _{n-1}\} \subset A.\) For each path ω, there exists a unique loop-erased path LE(ω) obtained from ω by chronological loop-erasure as follows (a code sketch follows the list).

  • Let \(j_{0} =\max \{ j:\omega _{j} =\omega _{0}\}\).

  • Recursively, if \(j_{k} < n\), then \(j_{k+1} =\max \{ j:\omega _{j} =\omega _{j_{k}+1}\}\).

  • If \(j_{k} = n\), then \(LE(\omega ) = [\omega _{j_{0}},\ldots,\omega _{j_{k}}].\)
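
The recursion can be transcribed directly into code. A minimal Python sketch (the function name and test path are illustrative):

```python
def loop_erase(omega):
    """Chronological loop erasure: repeatedly jump to the last visit
    of the current point, following the recursion above."""
    n = len(omega) - 1
    j = max(i for i, w in enumerate(omega) if w == omega[0])   # j_0
    eta = [omega[j]]
    while j < n:
        j = max(i for i, w in enumerate(omega) if w == omega[j + 1])  # j_{k+1}
        eta.append(omega[j])
    return eta

# The loop [b, c, b] is erased:
print(loop_erase(['a', 'b', 'c', 'b', 'd']))   # ['a', 'b', 'd']
```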

If \(\eta = [\eta _{0},\ldots,\eta _{k}]\) is a self-avoiding path in \(\mathcal{K}(A)\), we define its loop-erased measure by

$$\displaystyle{\hat{Q}(\eta;A) =\sum _{\omega \in \mathcal{K}(A),\,LE(\omega )=\eta }Q(\omega ).}$$

The loop measure gives a convenient way to describe \(\hat{Q}(\eta;A)\).

Proposition 3.3

$$\displaystyle{\hat{Q}(\eta;A) = Q(\eta )\,F_{\eta }(A).}$$

Proof

We can decompose any path ω with LE(ω) = η uniquely as

$$\displaystyle{l^{0} \oplus [\eta _{ 0},\eta _{1}] \oplus l^{1} \oplus [\eta _{ 1},\eta _{2}] \oplus \cdots \oplus l^{k-1} \oplus [\eta _{ k-1},\eta _{k}]}$$

where \(l^{j}\) is a rooted loop rooted at \(\eta _{j}\) that is contained in \(A_{j}:= A\setminus \{\eta _{0},\eta _{1},\ldots,\eta _{j-1}\}\). By considering all the possibilities, we see that the measure of all walks with LE(ω) = η is

$$\displaystyle{G_{A}(\eta _{0},\eta _{0})\,Q(\eta _{0},\eta _{1})\,G_{A_{1}}(\eta _{1},\eta _{1})\,\cdots \,G_{A_{k-1}}(\eta _{k-1},\eta _{k-1})\,Q(\eta _{k-1},\eta _{k}),}$$

which can be written as

$$\displaystyle{Q(\eta )\,\prod _{j=0}^{k-1}G_{ A_{j}}(\eta _{j},\eta _{j}) = Q(\eta )\,F_{\eta }(A).}$$

There is a nice application of this to spanning trees. Let \(A =\{ x_{0},x_{1},\ldots,x_{n}\}\) be the vertex set of a finite connected graph, write \(A^{{\prime}} = A\setminus \{x_{0}\}\), and let Q be the transition probability for simple random walk on the graph, that is, Q(x,y) = 1∕d(x) if x and y are adjacent, where d(x) is the degree of x. Consider the following algorithm, due to David Wilson [10], to choose a spanning tree of A (a code sketch follows the list):

  • Start with the trivial tree consisting of a single vertex, \(x_{0}\), and no edges.

  • Start a random walk at \(x_{1}\) and run it until it reaches \(x_{0}\). Erase the loops (chronologically) and add the edges of the loop-erased walk to the tree.

  • Let \(x_{j}\) be the vertex of smallest index that has not been added to the tree yet. Start a random walk at \(x_{j}\) and let it run until it hits a vertex that has been added to the tree. Erase loops and add the remaining edges to the tree.

  • Continue until we have a spanning tree.
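
A minimal Python sketch of the algorithm for a graph given as an adjacency list, reusing the `loop_erase` sketch from Sect. 3.2 (the names and the example graph are illustrative):

```python
import random

def wilson(adjacency, vertices):
    """Sample a spanning tree by Wilson's algorithm; vertices[0] is x_0."""
    in_tree = {vertices[0]}
    edges = []
    for x in vertices[1:]:
        if x in in_tree:
            continue
        path = [x]
        while path[-1] not in in_tree:          # walk until the tree is hit
            path.append(random.choice(adjacency[path[-1]]))
        branch = loop_erase(path)               # erase loops chronologically
        in_tree.update(branch)
        edges.extend(zip(branch[:-1], branch[1:]))
    return edges

# A spanning tree of the 4-cycle 0-1-2-3-0.
adjacency = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(wilson(adjacency, [0, 1, 2, 3]))
```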

It is a straightforward exercise using the last proposition to see that for any tree, the probability that it is chosen is exactly

$$\displaystyle{\left [\prod _{j=1}^{n}d(x_{ j})\right ]^{-1}\,F(A^{{\prime}})}$$

which by Corollary 3.2 can be written as

$$\displaystyle{\left [\det [I - Q_{A^{{\prime}}}]\,\prod _{j=1}^{n}d(x_{ j})\right ]^{-1} = \frac{1} {\det [D - K]}.}$$

Here \(D(x,y) =\delta _{x,y}\,d(x)\) is the diagonal matrix of degrees and K is the adjacency matrix, both restricted to \(A^{{\prime}}\). (The matrix D − K is what graph theorists call the Laplacian.) We can therefore conclude the following. The second assertion is a classical result due to Kirchhoff called the matrix-tree theorem.

Theorem 3.4

Every spanning tree is equally likely to be chosen in Wilson’s algorithm. Moreover, the total number of spanning trees is \(\det [D - K].\) In particular, \(\det [D - K]\) does not depend on the ordering \(\{x_{0},\ldots,x_{n}\}\) of the vertices of A.
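
Assuming the `wilson` and `adjacency` sketches above, both assertions can be checked empirically on the 4-cycle, for which \(\det [D - K] = 4\):

```python
import numpy as np
from collections import Counter

# D - K restricted to A' = {1, 2, 3} for the 4-cycle.
D_minus_K = np.array([[ 2, -1,  0],
                      [-1,  2, -1],
                      [ 0, -1,  2]])
print(round(np.linalg.det(D_minus_K)))   # 4: the number of spanning trees

# Each of the 4 trees should appear with frequency about 1/4.
samples = Counter(frozenset(map(frozenset, wilson(adjacency, [0, 1, 2, 3])))
                  for _ in range(20000))
print(samples)   # four trees, each appearing roughly 5000 times
```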

4 Loop Soup and Gaussian Free Field

4.1 Soups

If \(\lambda > 0\), then the Poisson distribution on \(\mathbb{N} =\{ 0,1,2,\ldots \}\) is given by

$$\displaystyle{q^{\lambda }(k) = e^{-\lambda }\,\frac{\lambda ^{k}} {k!}.}$$

We can use this formula to define the Poisson “distribution” for \(\lambda \in \mathbb{C}\). In this case \(q^{\lambda }\) is a complex measure supported on \(\mathbb{N}\) with variation measure \(\vert q^{\lambda }\vert \) given by

$$\displaystyle{\vert q^{\lambda }\vert (k) = \vert e^{-\lambda }\vert \,\frac{\vert \lambda \vert ^{k}} {k!} = e^{-\mathfrak{R}(\lambda )}\,\frac{\vert \lambda \vert ^{k}} {k!},}$$

and total variation

$$\displaystyle{\|q^{\lambda }\| =\sum _{ k=0}^{\infty }\vert q^{\lambda }\vert (k) =\exp \{ \vert \lambda \vert -\mathfrak{R}(\lambda )\} \leq e^{2\vert \lambda \vert }.}$$

Note that

$$\displaystyle{\sum _{k=1}^{\infty }\vert q^{\lambda }\vert (k) =\exp \{ \vert \lambda \vert -\mathfrak{R}(\lambda )\}\,[1 - e^{-\vert \lambda \vert }] \leq \vert \lambda \vert \,e^{2\vert \lambda \vert }.}$$

The usual convolution formula \(q^{\lambda _{1}} {\ast} q^{\lambda _{2}} = q^{\lambda _{1}+\lambda _{2}}\) holds, and if

$$\displaystyle{\sum _{j=1}^{\infty }\vert \lambda _{ j}\vert < \infty,}$$

 we can define the infinite convolution

$$\displaystyle{\prod _{j}^{{\ast}}q^{\lambda _{j} } =\lim _{n\rightarrow \infty }(q^{\lambda _{1}} {\ast}\cdots {\ast} q^{\lambda _{n}}) = q^{\sum \lambda _{j}}.}$$
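
The convolution identity is easy to check numerically for complex parameters; a sketch with truncated supports (the parameters are illustrative):

```python
import numpy as np
from math import factorial

def q(lam, kmax=60):
    """The complex measure q^lambda(k) for k = 0, ..., kmax."""
    return np.array([np.exp(-lam) * lam**k / factorial(k)
                     for k in range(kmax + 1)])

lam1, lam2 = 0.7 - 0.4j, -0.2 + 1.1j
conv = np.convolve(q(lam1), q(lam2))[:61]     # (q^{l1} * q^{l2})(k), k <= 60
print(np.allclose(conv, q(lam1 + lam2)))      # True
```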

If \(\lambda > 0\) and \(M_{t}\) is a Poisson process with parameter \(\lambda\), then the distribution of \(M_{t}\) is

$$\displaystyle{ q_{t}(\{k\}) = q^{t\lambda }(k) = e^{-t\lambda }\,\frac{(t\lambda )^{k}} {k!},\;\;\;\;k = 0,1,2,\ldots }$$
(3)

The family of measures \(\{q_{t}\}\) satisfies the semigroup law \(q_{s+t} = q_{s} {\ast} q_{t}\). If we are only interested in the measure \(q_{t}\), then we may choose \(\lambda\) in (3) to be complex. In this case the measures \(\{q_{t}\}\) are not probability measures, but they still satisfy the semigroup law. We call this the Poisson semigroup of measures with parameter \(\lambda\) and note that the Laplace transform is given by

$$\displaystyle{\sum _{k=0}^{\infty }e^{k\alpha }q_{ t}(\{k\}) =\exp (t\lambda (e^{\alpha } - 1)).}$$

Suppose m is a complex measure on a countable set X, that is, a complex function with

$$\displaystyle{\sum _{x\in X}\vert m(x)\vert < \infty.}$$

Then we say that the soup generated by m is the semigroup of measures \(\{q_{t}: t \geq 0\}\) on \(\mathbb{N}^{X}\), where \(q_{t}\) is the product measure of \(\{q_{t}^{x}: x \in X\}\) and \(\{q_{t}^{x}: t \geq 0\}\) is a Poisson semigroup of measures with parameter m(x). Pushing forward \(q_{t}\) along the map \(\phi \mapsto \sum _{x\in X}\phi (x)\) to a measure on \(\mathbb{N} \cup \{\infty \}\), we see that it agrees with \(\prod ^{{\ast}}q_{t}^{x}\) on \(\mathbb{N}\), and thus \(q_{t}\) is supported on the pre-image of \(\mathbb{N}\), the set of \(\phi \in \mathbb{N}^{X}\) with finite support, which we will call \(\mathbb{N}_{\mathrm{fin}}^{X}\). The complex measure \(q_{t}\) satisfies

$$\displaystyle{\|q_{t}\| \leq \prod _{x\in X}\|q_{t}^{x}\| \leq \exp \left \{2t\sum _{ x\in X}\vert m(x)\vert \right \}.}$$

Soups were originally defined when m is a positive measure on X, in which case it is defined as an independent collection of Poisson processes \(\{M_{t}^{x}: x \in X\}\) where \(M_{t}^{x}\) has rate m(x). A realization \(\mathcal{C}_{t}\) of the soup at time t is a multiset of X in which the element x appears \(M_{t}^{x}\) times. In this case \(q_{t}\) gives the distribution of the vector \((M_{t}^{x}: x \in X)\).

4.2 Loop Soup

Suppose Q is an acceptable weight with associated rooted loop measure m. Let 0 < ε < 1 be such that the matrix with entries \(P_{\epsilon }(x,y):= e^{\epsilon }\,\vert Q(x,y)\vert \) is still acceptable, and note that

$$\displaystyle{ \sum _{\omega \in \mathcal{O}}\vert \omega \vert \,e^{\epsilon \vert \omega \vert }\,\vert m(\omega )\vert =\sum _{\omega \in \mathcal{O}}P_{\epsilon }(\omega ) < \infty. }$$
(4)

The (rooted) loop soup is a “Poissonian realization” of the measure m. To be more precise, recall that \(\mathcal{O}\) is the set of rooted loops in A with positive length. A multiset \(\mathcal{C}\) of loops is a generalized subset of \(\mathcal{O}\) in which loops can appear more than once. In other words it is an element \(\{\mathcal{C}(\omega ):\omega \in \mathcal{O}\}\) of \(\mathbb{N}^{\mathcal{O}}\) where \(\mathcal{C}(\omega )\) denotes the number of times that ω appears in \(\mathcal{C}\). Then the rooted loop soup is the semigroup of measures \(\mathcal{M}_{t} = \mathcal{M}_{t,m}\) on \(\mathbb{N}^{\mathcal{O}}\) given by the product measure of the Poisson semigroups \(\{\mathcal{M}_{t}^{\omega }:\omega \in \mathcal{O}\}\) where \(\mathcal{M}_{t}^{\omega }\) has parameter m(ω). The measure \(\mathcal{M}_{t}^{\omega }\) is Poisson with parameter tm(ω) and hence \(\|\mathcal{M}_{t}^{\omega }\| \leq \exp \{ 2t\vert m(\omega )\vert \}\) and

$$\displaystyle{ \sum _{k=1}^{\infty }\vert \mathcal{M}_{ t}^{\omega }(k)\vert \leq t\vert m(\omega )\vert \,e^{2t\vert m(\omega )\vert }. }$$
(5)

For any x ∈ A and rooted loop ω, we define the (discrete) local time \(N^{\omega }(x)\) to be the number of visits of ω to x:

$$\displaystyle{N^{\omega }(x) =\sum _{ j=0}^{\vert \omega \vert -1}1\{\omega _{ j} = x\} =\sum _{ j=1}^{\vert \omega \vert }1\{\omega _{ j} = x\}.}$$

Note that this is a function of an unrooted loop, so we can also write \(N^{\tilde{\omega }}(x)\). Also \(N^{\omega ^{R} }(x) = N^{\omega }(x)\). We define the additive function \(L: \mathbb{N}_{\mathrm{fin}}^{\mathcal{O}}\rightarrow \mathbb{N}^{A}\) by

$$\displaystyle{L_{\mathcal{C}}(x) =\sum _{\omega \in \mathcal{O}}\mathcal{C}(\omega )\,N^{\omega }(x).}$$

By pushing forward by L, the loop soup \(\mathcal{M}_{t}\) induces a measure on \(\mathbb{N}^{A}\) which we denote by \(\mu _{t} =\mu _{t,m}\) and refer to as the discrete occupation field. Indeed, since \(\mathcal{M}_{t}\) is a product measure, we can write \(\mu _{t}\) as

$$\displaystyle{\mu _{t} =\prod _{ \omega \in \mathcal{O}}^{{\ast}}\mu _{ t}^{\omega }}$$

where the notation \(\prod ^{{\ast}}\) means convolution and \(\mu _{t}^{\omega }\) denotes the measure supported on \(\{kN^{\omega }: k = 0,1,2,\ldots \}\) with

$$\displaystyle{\mu _{t}^{\omega }(kN^{\omega }) = e^{-tm(\omega )}\,\frac{[tm(\omega )]^{k}} {k!}.}$$

For future reference we note that since \(N^{\omega } = N^{\omega ^{R} }\),

$$\displaystyle{[\mu _{t}^{\omega } {\ast}\mu _{ t}^{\omega ^{R} }](kN^{\omega }) = e^{-t[m(\omega )+m(\omega ^{R})] }\,\frac{t^{k}\,[m(\omega ) + m(\omega ^{R})]^{k}} {k!},}$$

and hence,

$$\displaystyle{ \mu _{2t} =\prod _{ \omega \in \mathcal{O}}^{{\ast}}\mu _{ 2t}^{\omega } =\prod _{ \omega \in \mathcal{O}}^{{\ast}}\mu _{ t}^{\omega } {\ast}\mu _{ t}^{\omega } =\prod _{ \omega \in \mathcal{O}}^{{\ast}}\mu _{ t}^{\omega } {\ast}\mu _{ t}^{\omega ^{R} } =\mu _{t,m^{R}}, }$$
(6)

where

$$\displaystyle{m^{R}(\omega ) = m(\omega ) + m(\omega ^{R}).}$$

4.3 A Continuous Occupation Field

In order to get a representation of the Gaussian free field, we need to change the discrete occupation field to a continuous time occupation field. We will do so in a simple way by replacing N ω(x) with a sum of N ω(x) independent rate one exponential random variables. This is similar to the method of constructing continuous time Markov chains from discrete time chains by adding exponential waiting times.

We say that a process Y (t) is a gamma process if it has independent increments, Y (0) = 0, and for any t, s ≥ 0,  Y (t + s) − Y (t) has a Gamma(s, 1) distribution. In particular, Y (n) is distributed as the sum of n independent rate one exponential random variables. Let {Y x: x ∈ A} be a collection of independent gamma processes. If \(\bar{s} =\{ s_{x}: x \in A\} \in [0,\infty )^{A}\), we write \(Y (\bar{s})\) for the random vector \((Y ^{x}(s_{x}))\). The Laplace transform is well known,

$$\displaystyle{\mathbb{E}\left [\exp \{-Y (\bar{s}) \cdot f\}\right ] =\prod _{x\in A} \frac{1} {[1 + f(x)]^{s_{x}}},}$$

provided that \(\|f\|_{\infty } < 1\). In particular, if \(\mathcal{C}\in \mathbb{N}_{\mathrm{fin}}^{\mathcal{O}}\), then

$$\displaystyle\begin{array}{rcl} \mathbb{E}\left [\exp \{-Y (L_{\mathcal{C}}) \cdot f\}\right ]& =& \prod _{x\in A} \frac{1} {[1 + f(x)]^{L_{\mathcal{C}}(x)}} \\ & =& \prod _{\omega \in \mathcal{O}}\prod _{x\in A} \frac{1} {[1 + f(x)]^{\mathcal{C}(\omega )\,N^{\omega }(x)}} \\ & =& \prod _{\omega \in \mathcal{O}}\exp \left [-\mathcal{C}(\omega )(\ln (1 + f) \cdot N^{\omega })\right ].{}\end{array}$$
(7)

For positive Q, we could define the continuous occupation field directly in terms of random variables; in general, we let \(\mathcal{L}_{t} = Y (L_{\mathcal{C}_{t}})\) where \(\mathcal{C}_{t}\) is an independent loop soup corresponding to | Q | . In order to handle the general case, we define the “distribution” of the continuous occupation field at time t to be the complex measure \(\nu _{t} =\nu _{t,m}\) on \([0,\infty )^{A}\) given by

$$\displaystyle{ \nu _{t}(V ) =\sum _{\mathcal{C}\in \mathbb{N}_{\mathrm{fin}}^{\mathcal{O}}}\mathcal{M}_{t}(\mathcal{C})\,\mathbb{P}\{Y (L_{\mathcal{C}}) \in V \} =\sum _{\bar{k}\in \mathbb{N}^{A}}\mu _{t}(\bar{k})\,\mathbb{P}\{Y (\bar{k}) \in V \}, }$$
(8)

where \(V \subset [0,\infty )^{A}.\) We will write

$$\displaystyle{\nu _{t}[h(\mathcal{L})] =\int _{[0,\infty )^{A}}h(\mathcal{L})d\nu _{t}(\mathcal{L})}$$

provided that \(\int _{[0,\infty )^{A}}\vert h(\mathcal{L})\vert \,d\vert \nu _{t}\vert (\mathcal{L}) < \infty.\)

Lemma 4.1

If \(\mathbb{E}[\vert h(\mathcal{L}_{t})\vert ] < \infty \) , then \(\vert \nu _{t}\vert [\vert h(\mathcal{L})\vert ] < \infty \) .

Proof

First, note that

$$\displaystyle\begin{array}{rcl} \vert \mathcal{M}_{t}(\mathcal{C})\vert & =& \left \vert \prod _{\omega \in \mathcal{O}}\mathcal{M}_{t}^{\omega }(\mathcal{C}(\omega ))\right \vert {}\\ & =& \left \vert e^{-t\sum _{\omega \in \mathcal{O}}m(\omega )}\right \vert \prod _{\omega \in \mathcal{O}}\frac{(t\vert m(\omega )\vert )^{\mathcal{C}(\omega )}} {\mathcal{C}(\omega )!} =\alpha \mathbb{P}\{\mathcal{C}_{t} = \mathcal{C}\} {}\\ \end{array}$$

with \(\alpha = e^{t\sum _{\omega \in \mathcal{O}}\vert m(\omega )\vert -\mathfrak{R}(m(\omega ))}\). Thus, taking \(\sup\) over all finite partitions \(\{V _{i}\}_{i=1}^{n}\) of V into measurable sets,

$$\displaystyle\begin{array}{rcl} \vert \nu _{t}\vert (V )& =& \sup \sum _{i=1}^{n}\vert \nu _{ t}(V _{i})\vert =\sup \sum _{ i=1}^{n}\left \vert \sum _{ \mathcal{C}\in \mathbb{N}_{\mathrm{fin}}^{\mathcal{O}}}\mathcal{M}_{t}(\mathcal{C})\,\mathbb{P}\{Y (L_{\mathcal{C}}) \in V _{i}\}\right \vert {}\\ &\leq & \sup \sum _{i=1}^{n}\sum _{ \mathcal{C}\in \mathbb{N}_{\mathrm{fin}}^{\mathcal{O}}}\vert \mathcal{M}_{t}(\mathcal{C})\vert \,\mathbb{P}\{Y (L_{\mathcal{C}}) \in V _{i}\} {}\\ & =& \sum _{\mathcal{C}\in \mathbb{N}_{\mathrm{fin}}^{\mathcal{O}}}\vert \mathcal{M}_{t}(\mathcal{C})\vert \,\mathbb{P}\{Y (L_{\mathcal{C}}) \in V \} =\alpha \mathbb{P}\{\mathcal{L}_{t} \in V \}. {}\\ \end{array}$$

We will compute the Laplace transform of the measure \(\nu _{t}\), but first we need the following lemma.

Lemma 4.2

Suppose S is a countable set and \(F: S \times \mathbb{N} \rightarrow \mathbb{C}\) is a function with F(s,0) = 1 for all s ∈ S,

$$\displaystyle{\sum _{s\in S}\left \vert \sum _{n=1}^{\infty }F(s,n)\right \vert < \infty }$$

and

$$\displaystyle{\sum _{\psi \in \mathbb{N}_{\mathrm{fin}}^{S}}\left \vert \prod _{s\in S}F(s,\psi (s))\right \vert < \infty.}$$

Then,

$$\displaystyle{\prod _{s\in S}\sum _{n=0}^{\infty }F(s,n) =\sum _{\psi \in \mathbb{N}_{\mathrm{fin}}^{S}}\prod _{s\in S}F(s,\psi (s)).}$$

Proof

Since \(\sum _{s\in S}\vert \sum _{n=1}^{\infty }F(s,n)\vert < \infty \), the product on the left-hand side does not depend on the order. For this reason we may assume that S is the positive integers and write

$$\displaystyle\begin{array}{rcl} \prod _{s=1}^{\infty }\sum _{ n=0}^{\infty }F(s,n)& =& \lim _{ J\rightarrow \infty }\prod _{s=1}^{J}\sum _{ n=0}^{\infty }F(s,n) {}\\ & =& \lim _{J\rightarrow \infty }\sum _{\psi \in \mathbb{N}^{J}}\prod _{s=1}^{J}F(s,\psi (s)) =\sum _{\psi \in \mathbb{N}_{\mathrm{fin}}^{\infty }}\prod _{s=1}^{\infty }F(s,\psi (s)). {}\\ & & {}\\ \end{array}$$

The last equality uses the absolute convergence of the final sum.

Proposition 4.3

For f sufficiently small,

$$\displaystyle{ \nu _{t}[\exp (-\mathcal{L}\cdot f)] = \left (\frac{\det G_{f}} {\det G} \right )^{t}. }$$
(9)

Proof

We first claim that there exists δ > 0 such that if \(\|f\|_{\infty } <\delta\),

$$\displaystyle{\mathbb{E}[\vert \exp \{-\mathcal{L}_{t} \cdot f\}\vert ] < \infty,}$$

so that the left hand side of (9) is well defined. Indeed, if \(\|f\|_{\infty } <\delta\) and \(1-\delta = e^{-\epsilon }\), then for any \(\mathcal{C}\in \mathbb{N}_{\mathrm{fin}}^{\mathcal{O}}\)

$$\displaystyle{\mathbb{E}\left [\vert \exp \{- Y (L_{\mathcal{C}}) \cdot f\}\vert \right ] \leq \prod _{\omega \in \mathcal{O}}\vert 1 -\delta \vert ^{-\mathcal{C}(\omega )\,\vert \omega \vert } =\prod _{\omega \in \mathcal{O}}e^{\epsilon \,\vert \omega \vert \,\mathcal{C}(\omega )},}$$

and hence

$$\displaystyle\begin{array}{rcl} \mathbb{E}[\vert \exp \{- Y (L_{\mathcal{C}_{t}}) \cdot f\}\vert ]& =& \mathbb{E}\left [\mathbb{E}[\vert \exp \{- Y (L_{\mathcal{C}_{t}}) \cdot f\}\vert \big\vert \mathcal{C}_{t}]\right ] {}\\ & \leq & \mathbb{E}\left [\prod _{\omega \in \mathcal{O}}e^{\epsilon \vert \omega \vert \mathcal{C}_{t}(\omega )}\right ] {}\\ & =& \prod _{\omega \in \mathcal{O}}\exp \left (t\vert m(\omega )\vert (e^{\epsilon \vert \omega \vert }- 1)\right ) {}\\ \end{array}$$

which is finite for ε sufficiently small by (4).

We assume that \(\|f\|_{\infty } <\delta\). Using (7) we get

$$\displaystyle\begin{array}{rcl} \nu _{t}[\exp (-\mathcal{L}\cdot f)]& =& \sum _{\mathcal{C}\in \mathbb{N}_{\mathrm{fin}}^{\mathcal{O}}}\mathcal{M}_{t}(\mathcal{C})\,\mathbb{E}\left [\exp (-Y (L_{\mathcal{C}}) \cdot f)\right ] {}\\ & =& \sum _{\mathcal{C}\in \mathbb{N}_{\mathrm{fin}}^{\mathcal{O}}}\prod _{\omega \in \mathcal{O}}\mathcal{M}_{t}^{\omega }(\mathcal{C}(\omega ))\exp \left [\mathcal{C}(\omega )(-\ln (1 + f) \cdot N^{\omega })\right ] {}\\ & =& \prod _{\omega \in \mathcal{O}}\sum _{n=0}^{\infty }\mathcal{M}_{ t}^{\omega }(n)\exp \left [n(-\ln (1 + f) \cdot N^{\omega })\right ] {}\\ & =& \prod _{\omega \in \mathcal{O}}\exp \left [tm(\omega )(\exp (-\ln (1 + f) \cdot N^{\omega }) - 1)\right ]. {}\\ \end{array}$$

The third equality uses Lemma 4.2, which is valid as

$$\displaystyle{\sum _{\omega \in \mathcal{O}}\left \vert \sum _{n=1}^{\infty }\frac{(tm(\omega ))^{n}} {n!} \exp \left [n(-\ln (1 + f) \cdot N^{\omega })\right ]\right \vert =\sum _{\omega \in \mathcal{O}}\left \vert \exp \left [tm_{f}(\omega )\right ] - 1\right \vert < \infty }$$

and

$$\displaystyle{\sum _{\mathcal{C}\in \mathbb{N}_{\mathrm{fin}}^{\mathcal{O}}}\left \vert \prod _{\omega \in \mathcal{O}}\mathcal{M}_{t}^{\omega }(\mathcal{C}(\omega ))\exp \left [\mathcal{C}(\omega )(-\ln (1 + f) \cdot N^{\omega })\right ]\right \vert < \infty }$$

for sufficiently small f since \(\left \vert \mathcal{M}_{t}\right \vert \) is a finite measure on \(\mathbb{N}_{\mathrm{fin}}^{\mathcal{O}}\). Above we used that

$$\displaystyle\begin{array}{rcl} \exp (-\ln (1 + f) \cdot N^{\omega })& =& \prod _{x\in A}\exp \ln \left [\left ( \frac{1} {1 + f(x)}\right )^{N^{\omega }(x)}\right ] {}\\ & =& \prod _{j=0}^{\vert \omega \vert -1} \frac{1} {1 + f(\omega _{j})}, {}\\ \end{array}$$

which also gives us

$$\displaystyle\begin{array}{rcl} \nu _{t}[\exp (-\mathcal{L}\cdot f)]& =& \exp \left [t\sum _{\omega \in \mathcal{O}}m(\omega )\prod _{j=0}^{\vert \omega \vert -1} \frac{1} {1 + f(\omega _{j})}\right ]\,\exp \left [-t\sum _{\omega \in \mathcal{O}}m(\omega )\right ] {}\\ & =& \exp \left [t\sum _{\omega \in \mathcal{O}}m_{f}(\omega )\right ]\,\exp \left [-t\sum _{\omega \in \mathcal{O}}m(\omega )\right ] {}\\ & =& \left (\frac{\det G_{f}} {\det G} \right )^{t}. {}\\ \end{array}$$

The last equality is by Corollary 3.2.

Consider the one-point example at the end of Sect. 3.1. If we take f to be the constant function with value s, then

$$\displaystyle{m_{s}(\omega ^{n}) = \frac{1} {n}\,\left [ \frac{q} {1 + s}\right ]^{n},}$$

and

$$\displaystyle{\sum _{\omega ^{n}\in \mathcal{O}}m_{s}(\omega ^{n}) =\sum _{ n=1}^{\infty }\frac{1} {n}\,\left [ \frac{q} {1 + s}\right ]^{n} = -\log \left [1 - \frac{q} {1 + s}\right ].}$$

Therefore, if \(\mathcal{L}_{t}\) denotes the continuous time occupation field at time t,

$$\displaystyle{ \mathbb{E}\left [e^{-s\mathcal{L}_{t} }\right ] = \left (\frac{\det G_{s}} {\det G} \right )^{t} = \left [\frac{1 + s - q(1 + s)} {1 + s - q} \right ]^{t}. }$$
(10)

We recall that we have defined the (discrete time) loop measure \(m(\omega ) = \frac{Q(\omega )} {\vert \omega \vert }\) and then we have added continuous holding times. Another approach, which is the original one taken by Le Jan [8], is to construct a loop measure on continuous time paths. Here we start with Q, add the waiting times to give a measure on continuous time loops, and then divide the measure by the (continuous) length. Considered as a measure on unrooted continuous time loops, the two procedures are essentially equivalent (although using discrete time loops makes it easier to have “jumps” from a site to itself).

4.4 Trivial Loops

We will see soon that the loop soup and the square of the Gaussian free field are closely related, but because our construction of the loop soup used discrete loops and only added continuous time afterwards, we restricted our attention to loops of positive length. We will need to add a correction factor to the occupation time to account for these trivial loops which are formed by viewing the continuous time process before its first jump.

Consider the one-point example at the end of Sect. 3.1. The Gaussian free field with covariance matrix \([I - Q]^{-1}\) is just a centered normal random variable Z with variance 1∕(1 − q), which we can write as \(N/\sqrt{1 - q}\) where N is a standard normal. Since \(N^{2}\) has a \(\chi ^{2}\) distribution with one degree of freedom, we see that

$$\displaystyle{\mathbb{E}\left [e^{-sZ^{2}/2 }\right ] = \sqrt{ \frac{1 - q} {1 - q + s}}.}$$

If we compare this to (10), we can see that

$$\displaystyle{\mathbb{E}\left [e^{-sZ^{2}/2 }\right ] = \mathbb{E}\left [e^{-s\mathcal{L}_{\frac{1} {2} } }\right ]\,\left [1 + s\right ]^{-1/2}.}$$

The second term on the right-hand side is the Laplace transform of a Gamma\((\frac{1} {2},1)\) random variable. Hence we can see that \(Z^{2}/2\) has the same distribution as \(\mathcal{L}_{\frac{1} {2} } + Y\) where Y is an independent Gamma\((\frac{1} {2},1)\) random variable.
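
This one-point identity can be tested by direct simulation, since for q ∈ (0,1) the soup is a genuine collection of Poisson random variables: sample the count of loops of each length, add the gamma holding times and the trivial-loop correction, and compare Laplace transforms with those of Z²∕2. A Monte Carlo sketch (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
q, t, trials, nmax = 0.4, 0.5, 100_000, 40

# Discrete soup at t = 1/2: independent Poisson counts, one per loop
# length n, with parameter t*m(omega^n) = t*q^n/n.
n = np.arange(1, nmax + 1)
counts = rng.poisson(t * q**n / n, size=(trials, nmax))
local_time = counts @ n                  # discrete occupation field at x

# Continuous field: a Gamma(local_time, 1) variable, plus the independent
# Gamma(1/2, 1) correction for the trivial loops.
gamma_part = rng.gamma(np.maximum(local_time, 1.0))
occupation = np.where(local_time > 0, gamma_part, 0.0) + rng.gamma(0.5, size=trials)

Z = rng.standard_normal(trials) / np.sqrt(1 - q)
for s in (0.3, 1.0):
    print(np.mean(np.exp(-s * occupation)), np.mean(np.exp(-s * Z**2 / 2)))
```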

The trivial loops we will add are not treated in the same way as the other loops. To be specific, we add another collection of independent gamma processes \(\{Y _{\mbox{ trivial}}^{x}\}_{x\in A}\) and define the occupation field of the trivial loops as

$$\displaystyle{\mathcal{T}_{t}(x) = Y _{\mbox{ trivial}}^{x}(t).}$$

When viewed in terms of the discrete time loop measure, this seems unmotivated. It is useful to consider the continuous time loop measure in terms of continuous time Markov chains. For any t prior to the first jump of the Markov chain, the path will form a trivial loop of time duration t. As the Markov chain has exponential holding times, the path measure (analogue of Q) to assign to such a trivial loop is \(e^{-t}\,dt\), and so the loop measure (analogue of m) should be \(t^{-1}e^{-t}\,dt\). Hence in the continuous time measure, we give trivial loops of time duration t weight \(e^{-t}/t\). Since \(t^{-1}e^{-t}\,dt\) is the intensity measure for the jumps of a gamma process, we see that the added occupation time at x corresponds to \(\mathcal{T}_{t}(x)\).

We write \(\nu _{t}^{\mathcal{T}}\) for the probability distribution of \(\mathcal{T}_{t}\). In other words, it is the distribution of the independent gamma processes \(\{Y _{\mbox{ trivial}}^{x}(t): x \in A\}\). Note that if \(\mathcal{L}\in [0,\infty )^{A}\),

$$\displaystyle{ \nu _{t}^{\mathcal{T}}[\exp \{-f \cdot \mathcal{L}\}] =\prod _{ x\in A} \frac{1} {[1 + f(x)]^{t}} = [\det D_{1+f}]^{-t}. }$$
(11)

We will also write

$$\displaystyle{\rho _{t} =\nu _{t} {\ast}\nu _{t}^{\mathcal{T}},}$$

which using (8) can also be written as

$$\displaystyle{\rho _{t}(V ) =\sum _{\mathcal{C}\in \mathbb{N}_{\mathrm{fin}}^{\mathcal{O}}}\mathcal{M}_{t}(\mathcal{C})\,\mathbb{P}\{Y (L_{\mathcal{C}} +\bar{ t}) \in V \},}$$

where \(\bar{t}\) denotes the vector each of whose components equals t.

4.5 Relation to the Real Gaussian Free Field

If A is a finite set with | A |  = n, and G is a symmetric, positive definite real matrix, then the (centered, discrete) Gaussian free field on A with covariance matrix G is the random function \(\phi: A \rightarrow \mathbb{R}\), defined by having density

$$\displaystyle{ \frac{1} {(2\pi )^{n/2}\,\sqrt{\det G}}\,\exp \left (-\frac{1} {2}\phi \cdot G^{-1}\phi \right ) = \frac{1} {(2\pi )^{n/2}\,\sqrt{\det G}}\,\exp \left (-\frac{1} {2}\vert J\phi \vert ^{2}\right )}$$

with respect to Lebesgue measure on \(\mathbb{R}^{n}\). Here J is a positive definite, symmetric square root of \(G^{-1}\). In other words, ϕ is a | A | -dimensional mean zero normal random variable with covariance matrix \(\mathbb{E}[\phi (x)\phi (y)] = G(x,y)\).

Lemma 4.4

Suppose G is a symmetric positive definite matrix, \(\varDelta = G^{-1}\), and let ϕ denote a Gaussian free field with covariance matrix G. Then for all f sufficiently small,

$$\displaystyle{ \mathbb{E}\left [\exp \left (-\frac{1} {2}\phi ^{2} \cdot f\right )\right ] = \frac{1} {\sqrt{\det (\varDelta + D_{f } )}}\, \frac{1} {\sqrt{\det G}}. }$$
(12)

Proof

This is a standard calculation,

$$\displaystyle{ \begin{array}{rl} &\mathbb{E}\bigg[\exp \left (-\frac{1} {2}\phi ^{2} \cdot f\right )\bigg] \\ &\quad = \frac{1} {(2\pi )^{n/2}\,\sqrt{\det G}}\int _{\mathbb{R}^{n}}\exp \left (-\frac{1} {2}\phi ^{2} \cdot f\right )\exp \left (-\frac{1} {2}\phi \cdot G^{-1}\phi \right )d\phi \\ &\quad = \frac{1} {(2\pi )^{n/2}\,\sqrt{\det G}}\int _{\mathbb{R}^{n}}\exp \left (-\frac{1} {2}\phi \cdot (\varDelta + D_{f})\phi \right )d\phi.\\ \end{array} }$$

If f is sufficiently small, then \(\varDelta + D_{f}\) is a positive definite symmetric matrix and so has a positive definite square root, call it \(R_{f}\). Then

$$\displaystyle\begin{array}{rcl} \int _{\mathbb{R}^{n}}\exp \left (-\frac{1} {2}\phi \cdot (\varDelta + D_{f})\phi \right )d\phi & =& \int _{\mathbb{R}^{n}}\exp \left (-\frac{1} {2}R_{f}\phi \cdot R_{f}\phi \right )d\phi {}\\ & =& \frac{1} {\det R_{f}}\int _{\mathbb{R}^{n}}\exp \left (-\frac{1} {2}\phi \cdot \phi \right )d\phi {}\\ & =& \frac{(2\pi )^{n/2}} {\sqrt{\det (\varDelta +D_{f } )}}. {}\\ \end{array}$$

Theorem 4.5

If Q is a symmetric, acceptable real matrix, and ϕ is the discrete Gaussian free field on A with covariance matrix G, then the distribution of \(\frac{1} {2}\phi ^{2}\) is  \(\rho _{\frac{1} {2} }\) .

Proof

It suffices to show that the Laplace transforms of \(\frac{1} {2}\phi ^{2}\) and \(\rho _{\frac{1} {2} }\) exist and agree on a neighborhood of zero. We have calculated the transforms of \(\nu _{\frac{1} {2} }\) and \(\nu _{\frac{1} {2} }^{\mathcal{T}}\) in (9) and (11), giving

$$\displaystyle\begin{array}{rcl} \rho _{\frac{1} {2} }\left [\exp \{-\mathcal{L}\cdot f\}\right ]& =& \nu _{\frac{1} {2} }\left [\exp \{-\mathcal{L}\cdot f\}\right ]\,\nu _{\frac{1} {2} }^{\mathcal{T}}\left [\exp \{-\mathcal{L}\cdot f\}\right ] {}\\ & =& \det (D_{1+f}^{-1})^{\frac{1} {2} }\left (\frac{\det G_{f}} {\det G} \right )^{\frac{1} {2} } {}\\ & =& \det (D_{1+f}^{-1})^{\frac{1} {2} }\left (\frac{\det (I - D_{1+f}^{-1}Q)^{-1}} {\det G} \right )^{\frac{1} {2} } {}\\ & =& \left (\frac{\det (D_{1+f} - Q)^{-1}} {\det G} \right )^{\frac{1} {2} } {}\\ & =& \frac{1} {\sqrt{\det (\varDelta + D_{f } )}}\, \frac{1} {\sqrt{\det G}}. {}\\ \end{array}$$

Comparing this to (12) completes the proof.
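
For symmetric positive weights the theorem can be checked by simulation: sample ϕ ∼ N(0,G) and compare the empirical value of \(\mathbb{E}[\exp (-\frac{1} {2}\phi ^{2} \cdot f)]\) with the determinant formula of Lemma 4.4. A Monte Carlo sketch (the matrix and f are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
B = rng.standard_normal((n, n))
Q = (B + B.T) / 2
Q *= 0.3 / np.max(np.abs(np.linalg.eigvals(np.abs(Q))))   # symmetric, acceptable

Delta = np.eye(n) - Q
G = np.linalg.inv(Delta)                                  # positive definite
f = np.array([0.1, 0.25, 0.05])

phi = rng.multivariate_normal(np.zeros(n), G, size=200_000)
lhs = np.mean(np.exp(-0.5 * (phi**2 @ f)))
rhs = 1 / np.sqrt(np.linalg.det(Delta + np.diag(f)) * np.linalg.det(G))
print(lhs, rhs)   # agree up to Monte Carlo error
```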

Conversely, suppose that a symmetric, positive definite real matrix G is given, indexed by the elements of A, and let {ϕ(x): x ∈ A} denote the Gaussian free field. If the matrix \(Q:= I - G^{-1}\) is positive definite and acceptable, then we can use loops to give a representation of \(\{\phi (x)^{2}: x \in A\}\). If G has negative entries, then so must Q (since the Green’s function for positive weights is always positive).

4.6 Complex Weights

There is also a relation between complex, Hermitian weights and a complex Gaussian field. Let A be a finite set with n elements. Suppose \(G^{{\prime}}\) is a positive definite Hermitian matrix and let K be a positive definite Hermitian square root of \((G^{{\prime}})^{-1}\). The (centered) complex Gaussian free field on A with covariance matrix \(G^{{\prime}}\) is defined to be the measure on complex functions \(h: A \rightarrow \mathbb{C}\) with density

$$\displaystyle{ \frac{1} {\pi ^{n}\,\det G^{{\prime}}}\,\exp \left (-\overline{h} \cdot (G^{{\prime}})^{-1}h\right ) = \frac{1} {\pi ^{n}\,\det G^{{\prime}}}\,\exp \left (-\vert Kh\vert ^{2}\right )}$$

with respect to Lebesgue measure on \(\mathbb{C}^{n}\) (or \(\mathbb{R}^{2n}\)). Equivalently, the function \(\psi = \sqrt{2}\,h\) has density

$$\displaystyle{ \frac{1} {(2\pi )^{n}\,\det G^{{\prime}}}\,\exp \left (-\frac{1} {2}\overline{\psi } \cdot (G^{{\prime}})^{-1}\psi \right ) = \frac{1} {(2\pi )^{n}\,\det G^{{\prime}}}\,\exp \left (-\frac{1} {2}\left \vert K\psi \right \vert ^{2}\right ). }$$
(13)

It satisfies the covariance relations

$$\displaystyle\begin{array}{rcl} \mathbb{E}\left [\overline{h}(x)\,h(y)\right ]& =& \frac{1} {2}\,\mathbb{E}\left [\overline{\psi (x)}\,\psi (y)\right ] = G^{{\prime}}(x,y), \\ \mathbb{E}\left [h(x)\,h(y)\right ]& =& 0. {}\end{array}$$
(14)

The complex Gaussian free field on a set of n elements can be considered as a real field on 2n elements by viewing the real and imaginary parts as separate components. The next proposition makes this precise. Let \(A^{{\ast}} =\{ x^{{\ast}}: x \in A\}\) be another copy of A and \(\overline{A} = A \cup A^{{\ast}}\). We can view \(\overline{A}\) as a “covering space” of A and let \(\varPhi: \overline{A} \rightarrow A\) be the covering map, that is, \(\varPhi (x) =\varPhi (x^{{\ast}}) = x\). We call A and \(A^{{\ast}}\) the two “sheets” in \(\overline{A}\). Let \(G^{{\prime}} = G_{R} + iG_{I}\) and define G on \(\overline{A}\) by

$$\displaystyle\begin{array}{rcl} G(x,y)& =& G(x^{{\ast}},y^{{\ast}}) = G_{ R}(x,y), {}\\ G(x,y^{{\ast}})& =& -G(x^{{\ast}},y) = -G_{ I}(x,y). {}\\ \end{array}$$

Note that G is a real, symmetric, positive definite matrix.

Proposition 4.6

Suppose \(G^{{\prime}} = G_{R} + iG_{I}\) is a positive definite Hermitian matrix indexed by A and let G be the positive definite, symmetric matrix indexed by \(\overline{A}\) defined above. Let \(\{\phi _{z}: z \in \overline{A}\}\) be a centered Gaussian free field on \(\overline{A}\) with covariance matrix G. If

$$\displaystyle{ \psi _{x} =\phi _{x} + i\,\phi _{x^{{\ast}}}, }$$
(15)

then {ψ x : x ∈ A} is a complex centered Gaussian free field with covariance matrix \(2G^{{\prime}}\) .

Proof

Let \(K = K_{R} + iK_{I}\) be the Hermitian positive definite square root of \((G^{{\prime}})^{-1}\) and write \((G^{{\prime}})^{-1} =\varDelta _{R} + i\,\varDelta _{I}\). The relation \(K^{2} = (G^{{\prime}})^{-1}\) implies

$$\displaystyle{K_{R}^{2} - K_{ I}^{2} =\varDelta _{ R},\quad K_{R}\,K_{I} + K_{I}\,K_{R} =\varDelta _{I},}$$

and \(G^{{\prime}}\,(G^{{\prime}})^{-1} = I\) implies

$$\displaystyle{G_{R}\,\varDelta _{R} - G_{I}\,\varDelta _{I} = I,\quad G_{R}\,\varDelta _{I} + G_{I}\,\varDelta _{R} = 0.}$$

Therefore,

$$\displaystyle{G^{-1} = \left [\begin{array}{cc} \varDelta _{R}& -\varDelta _{I} \\ \varDelta _{I} & \varDelta _{R} \end{array} \right ],}$$

and J 2 = G −1 where

$$\displaystyle{J = \left [\begin{array}{cc} K_{R}& - K_{I} \\ K_{I} & K_{R} \end{array} \right ].}$$

In particular, \(\vert J\phi \vert ^{2} = \vert K\psi \vert ^{2}\). Moreover, if \(\lambda > 0\) is an eigenvalue of \(G^{{\prime}}\) with eigenvector \(\mathbf{x} + i\,\mathbf{y}\), then

$$\displaystyle{G_{R}\,\mathbf{x} - G_{I}\mathbf{y} =\lambda \, \mathbf{x},\quad G_{R}\,\mathbf{y} + G_{I}\,\mathbf{x} =\lambda \, \mathbf{y},}$$

from which we see that

$$\displaystyle{G\,\left [\begin{array}{c} \mathbf{x}\\ \mathbf{y} \end{array} \right ] =\lambda \, \left [\begin{array}{c} \mathbf{x}\\ \mathbf{y} \end{array} \right ],\;\;\;\;G\,\left [\begin{array}{c} -\mathbf{y}\\ \mathbf{x} \end{array} \right ] =\lambda \, \left [\begin{array}{c} -\mathbf{y}\\ \mathbf{x} \end{array} \right ].}$$

Since the eigenvalues of G are the eigenvalues of \(G^{{\prime}}\) with double the multiplicity,

$$\displaystyle{\det G = [\det G^{{\prime}}]^{2}.}$$

Therefore, (13) can be written as

$$\displaystyle{ \frac{1} {(\sqrt{2\pi })^{2n}\,\sqrt{\det G}}\,\exp \left (-\frac{1} {2}\left \vert J\phi \right \vert ^{2}\right ),}$$

which is the density for the centered real field on \(\overline{A}\) with covariance matrix G.
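
The covering construction is easy to set up numerically: build G from a random positive definite Hermitian \(G^{{\prime}}\), and observe the doubled spectrum and the identity \(\det G = [\det G^{{\prime}}]^{2}\). A sketch (the matrix is illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Gp = M @ M.conj().T + np.eye(n)          # positive definite Hermitian G'
GR, GI = Gp.real, Gp.imag

# The real symmetric matrix on A union A*: blocks (A,A) and (A*,A*) are
# G_R; the block (A,A*) is -G_I and (A*,A) is G_I.
G = np.block([[GR, -GI], [GI, GR]])

print(np.linalg.eigvalsh(Gp))                       # each eigenvalue of G' ...
print(np.linalg.eigvalsh(G))                        # ... appears twice in G
print(np.linalg.det(G), np.linalg.det(Gp).real**2)  # equal
```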

We will discuss the analogue of Theorem 4.5 for complex Hermitian weights. We can either use the complex weights \(Q^{{\prime}} = I - (G^{{\prime}})^{-1} = Q_{R} + i\,Q_{I}\) on A to give a representation of \(\{\vert \psi (x)\vert ^{2}: x \in A\}\) or we can use the weights on \(\overline{A}\) given by

$$\displaystyle{ Q = \left [\begin{array}{cc} Q_{R}& - Q_{I} \\ Q_{I} & Q_{R} \end{array} \right ] = I-G^{-1}, }$$
(16)

to give a representation of \(\{\vert \phi (z)\vert ^{2}: z \in \overline{A}\}\). The latter contains more information so we will do this. Note that Q is a positive definite symmetric matrix, but may not be acceptable even if \(Q^{{\prime}}\) is.

Provided that \(Q^{{\prime}}\) and Q are acceptable, let \(\hat{m},m\) denote the loop measures derived from them respectively. As before, let \(\mathcal{O}\) denote the set of (rooted) loops of positive length in A. Let \(\overline{\mathcal{O}}\) be the set of such loops in \(\overline{A}\). Note that \(\hat{m}\) is a complex measure on \(\mathcal{O}\) and m is a real measure on \(\overline{\mathcal{O}}\). We write

$$\displaystyle{\hat{m}(\omega ) = \hat{m}_{R}(\omega ) + i\,\hat{m}_{I}(\omega ).}$$

Recall that \(\varPhi: \overline{A} \rightarrow A\) is the covering map. We also write \(\varPhi: \overline{\mathcal{O}}\rightarrow \mathcal{O}\) for the projection, that is, if \(\omega ^{{\prime}} = [\omega _{0}^{{\prime}},\ldots,\omega _{k}^{{\prime}}] \in \overline{\mathcal{O}}\) then \(\varPhi (\omega ^{{\prime}})\) is the loop of length k whose jth component is \(\varPhi (\omega _{j}^{{\prime}})\). We define the pushforward measure Φ m on \(\mathcal{O}\) by

$$\displaystyle{\varPhi _{{\ast}}m(\omega ) = m\left [\varPhi ^{-1}(\omega )\right ] =\sum _{\varPhi (\omega ^{{\prime}})=\omega }m(\omega ^{{\prime}}).}$$

Proposition 4.7

$$\displaystyle{\varPhi _{{\ast}}m(\omega ) = 2\,\hat{m}_{R}(\omega ) = \hat{m}(\omega ) + \hat{m}(\omega ^{R}).}$$

Proof

Let \(\mathcal{S}_{k} =\{ R,I\}^{k}\) and if \(\pi = (\pi ^{1},\ldots,\pi ^{k}) \in \mathcal{S}_{k}\) we write d(π) for the number of components that equal I. Let \(\mathcal{S}_{k}^{e}\) denote the set of sequences \(\pi \in \mathcal{S}_{k}\) with d(π) even.

Suppose \(\omega = [\omega _{0},\ldots,\omega _{k}] \in \mathcal{O}\). There are \(2^{k}\) loops \(\omega ^{{\prime}} = [\omega _{0}^{{\prime}},\omega _{1}^{{\prime}},\ldots,\omega _{k}^{{\prime}}] \in \overline{\mathcal{O}}\) such that \(\varPhi (\omega ^{{\prime}}) =\omega\). We can write each such loop as an ordered triple \((\omega,\theta,\pi )\). Here \(\theta \in \{ 0,{\ast}\}\) and \(\pi \in \mathcal{S}_{k}^{e}\). We obtain \(\omega ^{{\prime}}\) from \((\omega,\theta,\pi )\) as follows. If \(\theta = 0\), then \(\omega _{0}^{{\prime}} =\omega _{0}\); otherwise, \(\omega _{0}^{{\prime}} =\omega _{0}^{{\ast}}\). For j ≥ 1, \(\omega _{j}^{{\prime}} \in \{\omega _{j},\omega _{j}^{{\ast}}\}\): if \(\pi ^{j} = R\), then \(\omega _{j}^{{\prime}}\) is chosen to be in the same sheet as \(\omega _{j-1}^{{\prime}}\), and if \(\pi ^{j} = I\), then \(\omega _{j}^{{\prime}}\) is chosen in the opposite sheet to \(\omega _{j-1}^{{\prime}}\). Since d(π) is even, we see that \(\omega _{k}^{{\prime}} =\omega _{0}^{{\prime}}\), so this gives a loop in \(\overline{\mathcal{O}}\) with \(\varPhi (\omega ^{{\prime}}) =\omega\).

By expanding the product we see that

$$\displaystyle\begin{array}{rcl} Q^{{\prime}}(\omega )& =& \prod _{ j=1}^{k}\left [Q_{ R}(\omega _{j-1},\omega _{j}) + i\,Q_{I}(\omega _{j-1},\omega _{j})\right ] {}\\ & =& \sum _{\pi \in \mathcal{S}_{k}}i^{d(\pi )}\prod _{ j=1}^{k}\,Q_{\pi ^{ j}}(\omega _{j-1},\omega _{j}), {}\\ \mathrm{Re}\left [Q^{{\prime}}(\omega )\right ]& =& \sum _{\pi \in \mathcal{S}_{k}^{e}}i^{d(\pi )}\prod _{ j=1}^{k}\,Q_{\pi ^{ j}}(\omega _{j-1},\omega _{j}), {}\\ \end{array}$$

Note that

$$\displaystyle\begin{array}{rcl} Q(\omega _{j-1}^{{\prime}},\omega _{ j}^{{\prime}})& =& -Q_{ I}(\omega _{j-1},\omega _{j}),\quad \omega _{j-1}^{{\prime}}\in A,\;\;\;\omega _{ j}^{{\prime}}\in A^{{\ast}}, {}\\ Q(\omega _{j-1}^{{\prime}},\omega _{ j}^{{\prime}})& =& Q_{ I}(\omega _{j-1},\omega _{j}),\quad \omega _{j-1}^{{\prime}}\in A^{{\ast}},\;\;\;\omega _{ j}^{{\prime}}\in A, {}\\ Q(\omega _{j-1}^{{\prime}},\omega _{ j}^{{\prime}})& =& Q_{ R}(\omega _{j-1},\omega _{j}),\quad \mbox{ otherwise}. {}\\ \end{array}$$

If d(π) is even, then d(π)∕2 is the number of times that the path \(\omega ^{{\prime}}\) goes from A to \(A^{{\ast}}\), which equals the number of times it goes from \(A^{{\ast}}\) to A. Using this we can write

$$\displaystyle{\mathrm{Re}\left [Q^{{\prime}}(\omega )\right ] = \frac{1} {2}\,Q\left [\varPhi ^{-1}(\omega )\right ] = \frac{1} {2}\sum _{\varPhi (\omega ^{{\prime}})=\omega }Q(\omega ^{{\prime}}).}$$

The factor 1∕2 compensates for the initial choice of \(\omega _{0}^{{\prime}}\). Since \(Q^{{\prime}}(\omega ^{R}) = \overline{Q^{{\prime}}(\omega )}\), we see that

$$\displaystyle{Q^{{\prime}}(\omega ) + Q^{{\prime}}(\omega ^{R}) = Q\left [\varPhi ^{-1}(\omega )\right ].}$$

Since

$$\displaystyle{\hat{m}(\omega ) = \frac{Q^{{\prime}}(\omega )} {\vert \omega \vert },\;\;\;\;\varPhi _{{\ast}}m(\omega ) = \frac{Q\left [\varPhi ^{-1}(\omega )\right ]} {\vert \omega \vert },}$$

we get the result.
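
For a single short loop the proposition can be verified by brute force: enumerate all \(2^{k}\) lifts of ω to \(\overline{A}\), sum their Q-weights, and compare with \(Q^{{\prime}}(\omega ) + Q^{{\prime}}(\omega ^{R})\) (the factors 1∕|ω| cancel on both sides; the identity only uses that \(Q^{{\prime}}\) is Hermitian). A sketch (the matrix and loop are illustrative):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(5)
n = 3
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Qp = (M + M.conj().T) / 8                # Hermitian weights Q' = Q_R + i Q_I
Q = np.block([[Qp.real, -Qp.imag], [Qp.imag, Qp.real]])   # weights on A union A*

def weight(mat, path):
    w = 1.0 + 0j
    for a, b in zip(path[:-1], path[1:]):
        w *= mat[a, b]
    return w

omega = [0, 1, 2, 0]                     # a loop of length k = 3 in A

# Sum Q(omega') over the 2^k lifts: pick a sheet (0 = A, 1 = A*) for each
# of the k positions; the final point returns to the starting sheet.
lift_sum = sum(
    weight(Q, [x + s * n for x, s in zip(omega, sheets + (sheets[0],))])
    for sheets in product((0, 1), repeat=3))

print(lift_sum)                                         # real
print(weight(Qp, omega) + weight(Qp, omega[::-1]))      # same value
```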

Since

$$\displaystyle{\det G^{{\prime}} =\exp \left \{\sum _{\omega \in \mathcal{O}}\hat{m}(\omega )\right \},\quad \det G =\exp \left \{\sum _{\omega ^{{\prime}}\in \overline{\mathcal{O}}}m(\omega ^{{\prime}})\right \},}$$

we get another derivation of the relation \(\det G = [\det G^{{\prime}}]^{2}.\)

Given the loop measure \(\hat{m}\) on A (or the loop measure m on \(\overline{A}\)), we can consider the discrete occupation field at time t as a measure \(\mu _{t,\hat{m}}\) on \(\mathbb{N}^{A}\) (or \(\mu _{t,m}\) on \(\mathbb{N}^{\overline{A}}\), respectively). The measure \(\mu _{t,m}\) pushes forward to a measure \(\varPhi _{{\ast}}\mu _{t,m}\) on \(\mathbb{N}^{A}\) by adding the components of x and \(x^{{\ast}}\). It follows from (6) and Proposition 4.7 that

$$\displaystyle{\varPhi _{{\ast}}\mu _{t,m} =\mu _{2t,\hat{m}}.}$$

Also, the “trivial loop occupation field” on \(\overline{A}\) at time t induces an occupation field on A by addition. This has the same distribution as the trivial loop occupation field on A at time 2t since there are two points in \(\overline{A}\) corresponding to each point in A. Hence \(\varPhi _{{\ast}}\rho _{t,m}\) is the same as \(\rho _{2t,\hat{m}}\).

Using Theorem 4.5 and Proposition 4.6 we get the following.

  • Suppose \(Q^{{\prime}}\) is a positive definite acceptable Hermitian matrix indexed by A. Let \(G^{{\prime}} = (I - Q^{{\prime}})^{-1}\).

  • Let Q be the positive definite real matrix on \(\overline{A}\) as in (16), and let \(G = (I - Q)^{-1}\).

  • Provided Q is acceptable, let \(\{\phi (z): z \in \overline{A}\}\) be a centered Gaussian free field on \(\overline{A}\) with covariance matrix G.

  • If \(h(x) = [\phi (x) + i\,\phi (x^{{\ast}})]/\sqrt{2}\), then h is a complex Gaussian free field on A with covariance matrix \(G^{{\prime}}\).

  • If \(\rho _{t}\) denotes the continuous occupation field on \(\overline{A}\) (including trivial loops) given by Q at time t, then \(\{\frac{1} {2}\phi (z)^{2}: z \in \overline{A}\}\) has distribution \(\rho _{\frac{1} {2} }.\)

  • If \(\rho _{t}^{{\prime}}\) denotes the continuous occupation field on A (including trivial loops) given by \(Q^{{\prime}}\) at time t, then \(\{\vert h(z)\vert ^{2}: z \in A\}\) has distribution \(\rho _{1}^{{\prime}}.\)