1 Introduction

In this paper we construct Gaussian-type measures on the space of Riemannian metrics on a fixed manifold and make some elementary observations about them, leaving deeper results for further work. We begin with several motivations for our construction and with directions for further work.

Let \((M,g)\) be a Riemannian manifold. Quantum Chaos is a general term for the study of connections between the dynamics of the associated geodesic flow on TM (corresponding to the physics of a classical particle moving freely on M) and the spectrum of the Laplace–Beltrami operator on \(L^2(M)\) (corresponding to the physics of a quantum particle moving freely on M). We note the conjectures of Bohigas, Giannoni and Schmit [7] about the asymptotic behaviour of level spacings between Laplace eigenvalues for classically chaotic systems, and M. Berry’s random wave conjectures [5] about asymptotic properties of eigenfunctions. These conjectures appear to be very difficult to prove using standard semiclassical methods, and a natural idea is instead to consider them on average in some sense over the space of metrics on M, or perhaps to use random methods to construct examples or counterexamples.

Another motivation for our construction is the development of geometric analysis on (often infinite-dimensional) manifolds of metrics. The most important progress to date has involved differentiation on manifolds of metrics, in particular the study of the \(L^2\) distance between metrics and related questions [10, 12]. The next natural step is to define integration on manifolds of metrics, hence the need to define and study measures on those manifolds. Related questions have been considered in [17] (for manifolds of maps) and in [8].

We now turn to our construction. In the predecessor work [8], the authors took a fixed “reference” (or “background”) metric \(g^0\) on M, and then considered a random metric \(g = e^{2\varphi }g^0\) in the conformal class of \(g^0\), where the logarithm \(\varphi \) of the conformal factor varied as a Gaussian random field on M, constructed using the eigenfunctions of the Laplacian for the reference metric. In the present paper, we work in a transverse direction: given the reference metric \(g^0\) we choose a random deformation g among those metrics having the same volume form as \(g^0\). Again we parametrize those metrics by exponentiating a Gaussian random field on M. Beyond describing the construction we limit our study to the statistics of various distance functions on the space of metrics, leaving deeper investigation for later papers.

Remark 1.1

It is possible to combine the two constructions, adding a conformal factor to our deformation. We mainly avoid doing this since the (completion of the) space of all Riemannian metrics is singular, unlike the case of a fixed volume form.

Remark 1.2

Our construction depends on a choice of a global orthonormal frame in the tangent bundle (a global section of the frame bundle). The existence of such a frame is known as parallelizability, and is a topological property of M. We do not believe this assumption is essential; rather it simplifies the presentation here. For example, if one patches together deformations on parts of the manifold using a partition of unity, the distance statistics would not be as nice.

With this choice in hand, our construction is equivariant under the diffeomorphism group of the manifold: the pushforward of our probability measure by a diffeomorphism is equal to the measure obtained by pushing forward the reference metric and the frame.

We give the construction in Sect. 3. It is based on viewing the space of metrics with a given volume form as the space of sections of a bundle over M with fibers diffeomorphic to the symmetric space \(S = \mathrm {SL}_n({\mathbb {R}})/\mathrm {SO}(n)\) (\(n=\dim M\)). This symmetric space supports an invariant Riemannian metric which can then be used to define an \(L^2\) distance on the space of metrics, which coincides with the distance arising from a Riemannian structure on this (infinite-dimensional) space. This distance is introduced in Sect. 2.3 and is studied as a random variable in Sect. 4, where tail estimates are obtained in terms of geometric constants.

In the Appendix, similar computations are carried out for a Lipschitz-type distance, also considered in [3]. Those estimates are then applied to establish integrability and the existence of exponential moments for the diameter, Laplace eigenvalue and volume entropy functionals of our random Riemannian metrics.

Initial directions for further work involve studying the nature of the deformation we obtain (computation of the probability that the metric lies in a small ball around the reference metric, and of the behaviour of the isoperimetric constant under the deformation). In a foundational direction, we will address in a sequel questions about convergence and tightness (i.e. relative compactness in the weak-* topology) of our families of measures.

We expect that the Gaussian measure we have introduced in this paper will have applications that extend significantly beyond the basic questions considered here, in particular to the motivating problems discussed above.

2 The space of metrics

We fix once and for all a compact smooth manifold M without boundary and write n for its dimension. We also fix a smooth volume form \(\mathrm{dv}\) on M.

We rely crucially on the symmetric space structure of the space P of positive-definite matrices of determinant 1 and on the related structure theory of \(\mathrm {SL}_n({\mathbb {R}})\). In the discussion below we state the facts we use; proofs and further details may be found in the text [18], which concentrates on this case, and in [13] which develops the general theory of symmetric spaces associated to semisimple Lie groups.

2.1 The space of metrics

We start by giving a coordinate-free description of the set of Riemannian metrics with the volume form \(\mathrm{dv}\) on M. We then restrict to a class of manifolds for which there is a coordinate system simplifying the description.

Let V be a finite-dimensional real vector space with dual space \(V^*\), and let \(\mathrm {Sym}(V) = \{ g\in {{\mathrm{Hom}}}(V,V^*) \vert g^* = g\}\) be the space of symmetric bilinear forms on V. Among those we distinguish \(\mathrm {Pos}(V) = \{g\in \mathrm {Sym}(V) \vert \forall v\in V:g(v,v) > 0\}\), the space of positive-definite bilinear forms on V. Let \(\mathrm {SL}(V)\subset \mathrm {GL}(V)\) denote the special and general linear groups on V, and \(\mathfrak {sl}(V)\subset \mathfrak {gl}(V)\) their Lie algebras. Then \(\mathrm {GL}(V)\) acts on \(\mathrm {Pos}(V)\) by

$$\begin{aligned} h^{-1} \cdot g = h^*\circ g \circ h. \end{aligned}$$
(1)

It is well-known that this action is transitive; the stabilizer of any \(h\in \mathrm {Pos}(V)\) is a maximal compact subgroup isomorphic to \(\mathrm {O}(n)\). Moreover, the orbits of \(\mathrm {SL}(V)\) are precisely the level sets of the determinant function \(g\mapsto \det (g_0^{-1} g)\) where \(g_0\) is a fixed isomorphism \(V\rightarrow V^*\). Each level set is then of the form \(\mathrm {SL}(V)/K_{g_0}\) where \(K_{g_0} = {{\mathrm{Stab}}}_{\mathrm {SL}(V)}(g_0) \simeq \mathrm {SO}(n)\) and we give it the \(\mathrm {SL}(V)\)-invariant Riemannian structure coming from the Killing form of \(\mathrm {SL}(V)\), making it into a simply connected Riemannian manifold of non-positive curvature.

Remark 2.1

Since \(\mathrm {Pos}(V)\) is an open subset of the vector space \(\mathrm {Sym}(V)\), we may trivialize its tangent bundle by identifying each tangent space with \(\mathrm {Sym}(V)\). The reader may then verify that with this identification, the tangent space at g to the \(\mathrm {SL}(V)\)-orbit of g is exactly \(\{ X \in \mathrm {Sym}(V) \vert {{\mathrm{Tr}}}(g^{-1}X) = 0 \}\). Here we compose the linear maps \(X\in {{\mathrm{Hom}}}(V,V^*)\) and \(g^{-1} \in {{\mathrm{Hom}}}(V^*,V)\) to obtain a map in \({{\mathrm{End}}}(V)\) which has a trace. The reader may also verify that, since the congruence action above is linear as an action on \(\mathrm {Sym}(V)\), the derivative of the action of \(h^{-1}\) at g is the map \(X\mapsto h^{-1}Xh\) (composition of linear maps).

Now the Riemannian structure on the orbit claimed above is

$$\begin{aligned} \rho _g(X,X) = {{\mathrm{Tr}}}(g^{-1}Xg^{-1}X), \end{aligned}$$
(2)

and an immediate calculation shows that this metric is \(\mathrm {SL}(V)\)-invariant: that is, \(\rho _{h.g}(h.X,h.X) = \rho _g(X,X)\).
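The following short numerical sketch (our own illustration in Python, using NumPy and SciPy; not part of the construction) checks the invariance of (2) under the action (1), for a tangent vector X satisfying the trace condition of Remark 2.1.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 3

# A positive-definite g of determinant 1, and a symmetric X tangent to the
# SL(V)-orbit through g, i.e. with Tr(g^{-1} X) = 0 (Remark 2.1).
A = rng.standard_normal((n, n))
g = A @ A.T
g /= np.linalg.det(g) ** (1.0 / n)

X = rng.standard_normal((n, n)); X = X + X.T
X -= (np.trace(np.linalg.solve(g, X)) / n) * g      # project onto Tr(g^{-1} X) = 0

# An element h of SL_n(R), obtained as the exponential of a trace-free matrix.
Y = rng.standard_normal((n, n)); Y -= (np.trace(Y) / n) * np.eye(n)
h = expm(Y)

def rho(g, X):
    """The Riemannian metric (2): rho_g(X, X) = Tr(g^{-1} X g^{-1} X)."""
    gi = np.linalg.inv(g)
    return np.trace(gi @ X @ gi @ X)

# In matrices the action of h^{-1} from (1) reads g -> h^T g h; tangent
# vectors transform the same way since the action is linear, and (2) is unchanged.
print(np.isclose(rho(h.T @ g @ h, h.T @ X @ h), rho(g, X)))   # True
```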

With the usual translation of notions from vector spaces to vector bundles, we associate to the tangent bundle TM the vector bundles \({{\mathrm{Hom}}}(TM,T^*M)\) and \(\mathrm {Sym}(TM)\), the symmetric space-valued bundle \(\mathrm {Pos}(TM)\), and the group bundles \(\mathrm {GL}(TM)\) and \(\mathrm {SL}(TM)\).

By definition, a Riemannian metric on M is a smooth section of \(\mathrm {Pos}(M)\); we denote the space of sections by \({{\mathrm{Met}}}(M)\). To such a metric there is an associated Riemannian volume form, and we let \({{\mathrm{Met}}}_{\mathrm{dv}}(M)\) denote the space of metrics whose volume form is \(\mathrm{dv}\). Fixing a metric \(g_0 \in {{\mathrm{Met}}}_{\mathrm{dv}}(M)\), the above discussion identifies \({{\mathrm{Met}}}_{\mathrm{dv}}(M)\) with the space of sections of the bundle over M whose fibers are isomorphic to \(\mathrm {SL}_n({\mathbb {R}})/\mathrm {SO}(n)\). Moreover, the fibre at x of this bundle is equipped with a transitive isometric action of \(\mathrm {SL}(T_x M)\), where the metric is the one pulled back from the identification with \(S = \mathrm {SL}_n({\mathbb {R}})/\mathrm {SO}(n)\) (the pullback is well-defined since the metric on S is \(\mathrm {SL}_n({\mathbb {R}})\)-invariant).

Remark 2.2

It is a classical result of Ebin [11] that the diffeomorphism group acts transitively on the space of smooth volume forms of total volume 1, and therefore that the foliation of \({{\mathrm{Met}}}(M)\) by the orbits of the diffeomorphism group \(\mathrm{Diff}(M)\) descends to a foliation of \({{\mathrm{Met}}}_{\mathrm{dv}}(M)\) by the group \(\mathrm{Diff}_\mathrm{dv}(M)\) of volume-preserving diffeomorphisms. It follows that \({{\mathrm{Met}}}(M)/\mathrm{Diff}(M) \simeq {{\mathrm{Met}}}_{\mathrm{dv}}(M)/\mathrm{Diff}_\mathrm{dv}(M)\); we regard this space as the space of geometries on M.

In local co-ordinates \((x^1,\ldots ,x^n)\), the above construction reads as follows. One takes the basis \(\{\frac{\partial }{\partial x^i}\}_{i=1}^{n}\) for \(T_x M\) and its dual basis \(\{dx^i\}_{i=1}^{n}\) for \(T_x^* M\). Then fibers of \(\mathrm {Sym}(M)\) are represented by symmetric matrices, and fibers of \(\mathrm {Pos}(M)\) by positive-definite symmetric matrices. The volume form associated to \(g\in {{\mathrm{Met}}}(M)\) is then given by \(|\det (g_x)|^{1/2} dx^1 \wedge \cdots \wedge dx^n\). \({{\mathrm{Met}}}_{\mathrm{dv}}(M)\) then consists of the metrics g such that \(\det (g_x) = \det (g^0_x)\) for all \(x\in M\), where \(g^0\) is any metric with Riemannian volume form \(\mathrm{dv}\). The group \(\mathrm {GL}_n({\mathbb {R}})\) then acts on the fibres via congruence transformations \(h^{-1} \cdot g = h^t g h\), with the stabilizer of \(g_x\) being the orthogonal group \(\mathrm {O}_{g_x}({\mathbb {R}})\simeq \mathrm {O}(n)\). Similarly, the group \(\mathrm {SL}_n({\mathbb {R}})\) acts transitively on the subset of the fibre with a given determinant, with point stabilizer \(\mathrm {SO}_{g_x}({\mathbb {R}}) \simeq \mathrm {SO}(n)\).

2.2 Deforming a metric

Fix \(g^0 \in {{\mathrm{Met}}}_{\mathrm{dv}}(M)\), and for \(x\in M\) let \(K_x \subset G_x = \mathrm {SL}(T_x M)\) be the orthogonal group of the positive-definite quadratic form \(g^0_x\), which is also the stabilizer of \(g^0_x\) under the congruence action (1). Fix a frame \(f_x\) in \(T_x M\), orthonormal with respect to the inner product defined by \(g^0_x\), and let \(A_x \subset G_x\) be the subgroup of matrices which are diagonal with positive entries in the basis \(f_x\). As noted above we can identify the set of positive-definite quadratic forms on \(T_x M\) with the same determinant as \(g^0\) with the symmetric space \(G_x / K_x\).

Remark 2.3

We warn the reader that we use the letter G, as usual, to denote a semisimple Lie group and the letter g to denote a Riemannian metric. In particular \(g_x\) does not denote an element of \(G_x\); we rather use \(h_x\) to denote an arbitrary element of \(G_x\).

Recall now the Cartan decomposition

$$\begin{aligned} G_x = K_x A_x K_x \end{aligned}$$
(3)

(see for example [18, §5.1]). This states that every \(h_x \in G_x\) can be written in the form \(h_x = k_{1,x} a_x k_{2,x}\) with \(k_{i,x}\in K_x\) and \(a_x\in A_x\), with \(a_x\) being unique up to the action of the Weyl group \(N_{G_x}(A_x) / Z_{G_x}(A_x)\), a group isomorphic to \(S_n\) acting by permutation of the coordinates with respect to the basis \(f_x\). Given \(a_x\), the two elements \(k_{i,x}\in K_x\) are unique up to the fact that \(Z_{K_x}(a_x)\) may not be trivial (generically this centralizer is equal to \(Z_{K_x}(A_x)\), which is either trivial or \(\{ \pm 1 \}\) depending on whether n is odd or even).
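For the computationally-minded reader, the decomposition (3) can be realized by the singular value decomposition; the following sketch (our own, using NumPy/SciPy, not part of the paper's argument) is one way to do it, including the sign fix needed to land both orthogonal factors in \(\mathrm {SO}(n)\).

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 3

# A generic element h of SL_n(R), built as exp of a trace-free matrix.
Y = rng.standard_normal((n, n)); Y -= (np.trace(Y) / n) * np.eye(n)
h = expm(Y)

# The SVD h = k1 diag(sigma) k2 realizes the Cartan decomposition (3):
# sigma is positive with product |det h| = 1, and k1, k2 are orthogonal.
k1, sigma, k2 = np.linalg.svd(h)
if np.linalg.det(k1) < 0:          # here det k1 = det k2 = -1; flip one
    k1[:, 0] *= -1                 # column/row pair to put both factors
    k2[0, :] *= -1                 # into SO(n), leaving sigma unchanged
a = np.diag(sigma)

print(np.allclose(k1 @ a @ k2, h))                    # h = k1 a k2
print(np.isclose(np.prod(sigma), 1.0))                # det a = 1
print(np.isclose(np.linalg.det(k1), 1.0) and np.isclose(np.linalg.det(k2), 1.0))
```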

Recalling that \(k_{2,x}\in K_x\) stabilizes \(g^0_x\), it follows that for \(h_x\in G_x\) decomposed as above we have

$$\begin{aligned} h_x \cdot g^0_x = (k_{1,x} a_x)\cdot g^0_x. \end{aligned}$$

Since \(G_x\) acts transitively on the level set, it follows that every \(g^1_x\) with the same determinant as \(g^0_x\) is of this form, and moreover that in that form the \(a_x\) is unique up to the action of \(S_n\) on \(A_x\).

Our goal is to randomly deform \(g^0\) by choosing elements \(k_x\) and \(a_x\) for every \(x\in M\). We shall discuss the “random” aspect of the construction in the next section, and concentrate at the moment on the topological issues involved in making such constructions well-defined.

Given the orthonormal frame \(f_x\), we can identify \(A_x\) with the space of positive diagonal matrices of determinant 1. Further, using the exponential map we may identify this group with its Lie algebra \(\mathfrak {a}\simeq {\mathbb {R}}^{n-1}\) of diagonal matrices of trace zero. We will therefore specify \(a_x\) by choosing such a matrix at each x, that is by choosing a function \(H:M\rightarrow \mathfrak {a}\).

While this clearly works locally, making a global identification requires a choice of frame \(f_x\) at every \(x\in M\), that is, a global smooth section of the frame bundle of M or equivalently a trivialization of the tangent bundle of M, something which is not possible in general. For simplicity we have decided to only discuss here the case of manifolds where such sections exist, and defer more general constructions to future papers.

Remark 2.4

We required the existence of a smooth \(g^0\)-orthonormal frame. However, this is equivalent to the topological condition (“parallelizability”) of the existence of a smooth, but not necessarily orthonormal, frame. To see this, note that starting with any smooth section of the frame bundle, applying pointwise the Gram–Schmidt procedure with respect to the metric \(g^0\) is a smooth operation and will produce a smooth orthonormal frame.
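Concretely, the pointwise orthonormalization can be carried out as in the following sketch (our own illustration; the metric and frame below are arbitrary stand-ins for the values of \(g^0\) and of a given smooth frame at a single point).

```python
import numpy as np

def gram_schmidt(frame, g0):
    """Orthonormalize the columns of `frame` with respect to the inner
    product <u, v> = u^T g0 v determined by the metric g0 at a point."""
    ip = lambda u, v: u @ g0 @ v
    out = []
    for v in frame.T:
        w = v - sum(ip(v, e) * e for e in out)
        out.append(w / np.sqrt(ip(w, w)))
    return np.column_stack(out)

rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n))
g0 = A @ A.T                                    # the metric at one point x
frame = rng.standard_normal((n, n))             # an arbitrary frame at x (columns)
f = gram_schmidt(frame, g0)
print(np.allclose(f.T @ g0 @ f, np.eye(n)))     # f is g0-orthonormal
```

Every step is a smooth (indeed rational or square-root) expression in the entries of the frame and of \(g^0\), which is the smoothness claim of the remark.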

We survey here some facts about parallelizable manifolds, mainly to note that this class is rich enough to make our construction interesting. First, a parallelizable manifold is clearly orientable. Second, a necessary condition for parallelizability is the vanishing of the second Stiefel–Whitney class of the tangent bundle, which for orientable manifolds is equivalent to M being a spin manifold. Examples of parallelizable manifolds include all orientable 3-manifolds, all Lie groups, the frame bundle of any manifold and the spheres \(S^n\) with \(n\in \{1,3,7\}\).

2.3 The \(L^2\) metric

Once the volume form is fixed, the action of \(\mathrm {SL}(T_x M)\) on the stalk of \({{\mathrm{Met}}}_{\mathrm{dv}}(M)\) at x identifies it with the symmetric space \(S = \mathrm {SL}_n({\mathbb {R}})/\mathrm {SO}(n)\). As noted above this space supports an \(\mathrm {SL}_n({\mathbb {R}})\)-invariant Riemannian metric of non-positive curvature. Denote its distance function \(d_S\); we then write \(d_x\) for the well-defined metric on the stalk at x of \({{\mathrm{Met}}}_{\mathrm{dv}}(M)\). Integrating this over M then gives a metric (to be denoted \(\Omega _2\)) on \({{\mathrm{Met}}}_{\mathrm{dv}}(M)\): given two Riemannian metrics \(g^0, g^1 \in {{\mathrm{Met}}}_{\mathrm{dv}}(M)\) on M with the same Riemannian volume form \(\mathrm{dv}\), we set

$$\begin{aligned} \Omega _2^2(g^0, g^1) = \int _M d_x^2(g^0_x, g^1_x) \mathrm{dv}(x). \end{aligned}$$

For a different point of view on this metric, recall that \(d_S\) is the distance function associated to the Riemannian metric (2). Fixing \(V={\mathbb {R}}^n\) with its standard metric and frame, we write \(G=\mathrm {SL}_n({\mathbb {R}})\), \(K = \mathrm {SO}(n)\) so that \(S=G/K\). In this setting one can find directly the geodesics connecting the standard metric to any metric which is diagonal in the standard basis. Using G-equivariance and the Cartan decomposition (3), the upshot is the following (for details see [18, §5.1]): let \(hK, h'K \in S=G/K\) correspond to two metrics of equal determinant. Then \(K h^{-1} h' K\) is a well-defined element of \(K\backslash G / K \simeq A/S_n\), where A is the group of diagonal matrices of determinant 1 and positive entries. Let \(a\in A\) be a representative for \(K h^{-1} h' K\). We then say that the two metrics are in relative position a. Writing \(\log a\) for the vector of n logarithms of the diagonal entries of a (note that the entries of \(\log a\) sum to 0, since \(\det a=1\)), it turns out that \(d_S(hK, h'K) = \left\| {\log a} \right\| \), where \(\left\| {\cdot } \right\| \) is the usual \(\ell ^2\) norm.
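In matrix terms, the fibrewise distance can be evaluated from the generalized eigenvalues of the pair \((g^1_x, g^0_x)\). The following sketch (our own, using the normalization \(d_x^2 = \sum _i b_i^2\) of Sect. 4.1, where \(e^{2b_i}\) are those eigenvalues) also checks the congruence-invariance.

```python
import numpy as np
from scipy.linalg import eigvalsh

def d_fibre(g0, g1):
    """Fibrewise distance: d^2 = sum_i b_i^2, where exp(2 b_i) are the
    eigenvalues of g0^{-1} g1 (cf. Sect. 4.1)."""
    lam = eigvalsh(g1, g0)                    # generalized eigenvalues of (g1, g0)
    return 0.5 * np.linalg.norm(np.log(lam))

rng = np.random.default_rng(3)
n = 3

def rand_form():
    A = rng.standard_normal((n, n)); g = A @ A.T
    return g / np.linalg.det(g) ** (1.0 / n)   # positive definite, det 1

g0, g1 = rand_form(), rand_form()
h = rng.standard_normal((n, n))                # any invertible h; congruence g -> h^T g h
print(np.isclose(d_fibre(h.T @ g0 @ h, h.T @ g1 @ h), d_fibre(g0, g1)))   # invariance
```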

3 Gaussian measures on the space of metrics

We next turn to the question of actually constructing our Gaussian measures. For a general reference on Gaussian random variables see [6]. In view of the decomposition considered in Sect. 2.1, it is natural to split the construction into diagonal and orthogonal parts.

Let \(g^0\) be our reference metric. Every other metric in \({{\mathrm{Met}}}_{\mathrm{dv}}\) is of the form \(g^1_x = k_x a_x \cdot g^0_x\) where k, a are smooth functions on M such that \(k_x\in K_x\) and \(a_x\in A_x\). In Sects. 3.1 and 3.2 we describe random constructions of \(a_x\) and \(k_x\) respectively.

It is not hard to verify that \(\bigcup _x K_x,\bigcup _x A_x,\bigcup _x G_x\) are subbundles of the Lie-group bundle \(\mathrm {GL}(TM)\), and that their Lie algebras therefore furnish subbundles of the Lie algebra bundle \(\mathfrak {gl}(TM) \simeq {{\mathrm{End}}}(TM)\). Specifically, \({{\mathrm{Lie}}}(G_x)\) consists of the endomorphisms of \(T_x M\) of trace zero, \({{\mathrm{Lie}}}(K_x)\) consists of the endomorphisms which are skew-symmetric in the frame \(f_x\), and \({{\mathrm{Lie}}}(A_x)\) consists of those which are diagonal of trace zero in the frame.

For the constructions below we fix a complete orthonormal basis \(\{\psi _j\}_{j=0}^{\infty } \subset L^2(M)\) such that \(\Delta _{g^0}\psi _j+\lambda _j\psi _j=0\), with \(\lambda _j\) being a non-decreasing ordering of the spectrum of the Laplace operator \(\Delta _{g^0}\). Our constructions are in fact independent of the choice of basis of each eigenspace, but it is more convenient to make an explicit choice.
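For a concrete (and parallelizable) instance of such eigendata, one may take the flat torus \(T^3 = {\mathbb {R}}^3/(2\pi {\mathbb {Z}})^3\), whose Laplace eigenfunctions are explicit. The following sketch (our own illustration, not an example singled out in the paper) lists them together with their eigenvalues.

```python
import numpy as np
from itertools import product

# On the flat torus T^3 = R^3 / (2*pi*Z)^3 the non-constant Laplace
# eigenfunctions are cos(m.x) and sin(m.x) for m in Z^3 \ {0} (one
# representative of each pair {m, -m}, chosen lexicographically),
# with eigenvalue |m|^2.
M_max = 3
modes = [m for m in product(range(-M_max, M_max + 1), repeat=3) if m > (0, 0, 0)]
lam = np.array([float(np.dot(m, m)) for m in modes for _ in (0, 1)])
order = np.argsort(lam)            # non-decreasing ordering of the (truncated) spectrum
lam = lam[order]

def psi(x):
    """L^2-normalized values of the non-constant eigenfunctions at x in R^3."""
    vals = []
    for m in modes:
        t = np.dot(np.asarray(m), x)
        vals += [np.cos(t), np.sin(t)]
    return np.asarray(vals)[order] / np.sqrt(4 * np.pi ** 3)

print(lam[:6])     # the six eigenfunctions with |m|^2 = 1
```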

3.1 The radial part

We begin by defining a measure on the space of smooth functions \(x\mapsto H_x\) such that \(H_x \in {{\mathrm{Lie}}}(A_x)\) (sections of the bundle \(\bigcup _x {{\mathrm{Lie}}}(A_x)\)). We follow the recipe of [17]: choose decay coefficients \(\beta _j = F(\lambda _j)\) where F(t) is an eventually monotonically decreasing function of t and \(F(t)\rightarrow 0\) as \(t\rightarrow \infty \). Then set

$$\begin{aligned} H_x = \sum \limits _{j=1}^{\infty }\pi _n(\underline{\xi }_j) \beta _j\psi _j(x), \end{aligned}$$
(4)

where \(\underline{\xi }_j\) are i.i.d. standard Gaussians in \({\mathbb {R}}^{n}\), and \(\pi _n:{\mathbb {R}}^n \rightarrow {\mathbb {R}}^n\) is the orthogonal projection on the hyperplane \(\sum _{i=1}^{n} x_i = 0\).

Finally, set

$$\begin{aligned} a_x = \exp (H_x) \end{aligned}$$

where \(\exp \) is the exponential map to \(A_x\) from its Lie algebra.
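To make the recipe concrete, the following sketch (our own illustration) draws a truncated sample of (4) at a finite set of sample points; the eigenvalue and eigenfunction arrays are placeholders standing in for genuine eigendata of \((M, g^0)\), such as the torus basis sketched above.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3                                   # dim M
J, P = 200, 50                          # truncation level, number of sample points on M

# Placeholder eigendata; in practice lam[j] = lambda_j and psi[j, p] = psi_j(x_p).
lam = np.sort(rng.uniform(1.0, 100.0, J))
psi = rng.standard_normal((J, P))

s = 2.0
beta = lam ** (-s)                      # decay coefficients beta_j = F(lambda_j)

# i.i.d. standard Gaussians in R^n, projected onto the hyperplane sum_i x_i = 0.
xi = rng.standard_normal((J, n))
xi -= xi.mean(axis=1, keepdims=True)    # pi_n(xi_j): subtract the mean of the entries

# H[p] is the diagonal of the trace-zero diagonal matrix H_{x_p}; a_{x_p} = exp(H_{x_p}).
H = np.einsum('jn,j,jp->pn', xi, beta, psi)
a_diag = np.exp(H)

print(np.allclose(H.sum(axis=1), 0.0))             # H_x lies in Lie(A_x)
print(np.allclose(np.prod(a_diag, axis=1), 1.0))   # a_x has determinant 1
```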

The smoothness of H defined by (4) is given by [17, Theorem 6.3]. The following two propositions apply whenever \(\underline{\xi }_j\) in (4) denotes a d-dimensional standard Gaussian, while M has dimension n.

Proposition 3.1

If \(\beta _j=O(j^{-r})\) where \(r>(q+\alpha )/n+1/2\), then H defined by (4) converges a.s. in \(C^{q,\alpha }(M,{\mathbb {R}}^d)\).

We remark that the exponents in Proposition 3.1 are independent of d (the dimension of the “target” space). Now Weyl’s law for M [2, 16] states that \(\lambda _j\) grows roughly as \(j^{2/n}\). It follows that

Proposition 3.2

If \(\beta _j=O(\lambda _j^{-s})\) where \(s>q/2+n/4\), then H defined by (4) converges a.s. in \(C^q(M,{\mathbb {R}}^d)\).
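Indeed, if \(\beta _j=O(\lambda _j^{-s})\) then by Weyl's law \(\beta _j = O(j^{-2s/n})\), so Proposition 3.1 (with \(\alpha = 0\)) applies with \(r = 2s/n\), and the condition
$$\begin{aligned} \frac{2s}{n} > \frac{q}{n} + \frac{1}{2} \end{aligned}$$
is exactly \(s > q/2 + n/4\).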

3.2 The angular part

In this paper we study invariants of \(g^1\) that can be bounded using a alone, so that our later calculations will only depend on the marginal distribution of a. Thus, as long as the choices of k and a are independent, the choice of k has no effect. In future work we plan to ask more detailed questions where this choice will become relevant. For example, determining the curvature of \(g^1\) following the ideas of [8] requires differentiating \(g^1_x\) with respect to x and this immediately involves the choice of \(k_x\). We thus propose the following specific choice, again using the recipe of Eq. (4). We set

$$\begin{aligned} k_x = \exp _x(u_x) \end{aligned}$$

where \(u_x\) is the Gaussian vector

$$\begin{aligned} u_x = \sum _{j=1}^{\infty } \underline{\eta }_j \delta _j \psi _j(x). \end{aligned}$$
(5)

Here \(\underline{\eta }_j\in \mathfrak {so}_n\) are i.i.d. standard Gaussian anti-symmetric matrices (i.e. each \(\underline{\eta }_j\) is given by \(d_n = n(n-1)/2\) i.i.d. standard Gaussian variables corresponding to the upper-triangular part of \(\underline{\eta }_j\)), and \(\delta _j = F_2(\lambda _j)\) are decay factors, given as functions of the corresponding eigenvalues.
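A sketch of (5) at a single point, parallel to the radial construction above (our own illustration, again with placeholder values of \(\lambda _j\) and \(\psi _j(x)\)):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
n, J = 3, 200
lam = np.sort(rng.uniform(1.0, 100.0, J))     # placeholder eigenvalues
psi_x = rng.standard_normal(J)                # placeholder values psi_j(x)
delta = np.exp(-0.1 * lam)                    # decay factors delta_j = F_2(lambda_j)

d_n = n * (n - 1) // 2                        # dim so(n)
iu = np.triu_indices(n, k=1)
eta = rng.standard_normal((J, d_n))           # i.i.d. Gaussians: the upper triangle of eta_j

u_x = np.zeros((n, n))
u_x[iu] = eta.T @ (delta * psi_x)             # sum_j eta_j delta_j psi_j(x), upper triangle
u_x -= u_x.T                                  # antisymmetrize: u_x lies in so(n)

k_x = expm(u_x)                               # k_x = exp_x(u_x)
print(np.allclose(k_x @ k_x.T, np.eye(n)))    # k_x is orthogonal
print(np.isclose(np.linalg.det(k_x), 1.0))    # indeed k_x lies in SO(n)
```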

Proposition 3.1 above applies again to give the smoothness properties of our random sections. In particular, since the exponents in Proposition 3.1 are independent of \(d_n\), substituting into Weyl’s law we get a straightforward analogue of Proposition 3.2 for the expression (5).

3.3 Remarks on the construction

For readers who may not wish to refer to a textbook such as [6], we briefly recall that a random vector such as \(H_x\) is Gaussian if its finite-dimensional marginals are Gaussian, which in our case roughly means (though we want more) that for every k points \(x_1,\ldots ,x_k\in M\), the joint distribution of the finite-dimensional vector \((H_{x_1},\ldots ,H_{x_k})\) is Gaussian.

Our Gaussian vectors are centered (their expectation is zero) and they are therefore determined by their covariance function (roughly, the function on \(M\times M\) given by the expectation of \(H_{x_1}\otimes H_{x_2}\)).

Remark 3.3

For the convenience of the reader who prefers Gaussian variables to be defined by their covariance function, we note here the covariance functions relevant to our case.

Let \(\mathfrak {g}_x = \mathfrak {sl}(T_xM)\) denote the Lie algebra of \(\mathrm {SL}(T_x M)\). As noted above our Gaussian measure is defined on appropriate spaces of sections of subbundles of the bundle \(\bigcup _x \mathfrak {g}_x\). With sufficient continuity it is enough to consider the covariance operator evaluated on linear functionals of the form \(X\mapsto \alpha _x(X(x))\), where X is a section of the bundle and \(\alpha _x \in \mathfrak {g}_x^*\).

Our Gaussian measure for the diagonal part then has the covariance functions

$$\begin{aligned} R((x,k),(x',k'))=\delta _{kk'}\sum _j \beta _j^2 \psi _j(x)\psi _j(x'), \end{aligned}$$
(6)

where k is an index for the diagonal entries of a matrix in \(\mathfrak {g}_x\), diagonal with respect to our fixed frame, and (x, k) therefore denotes the functional mapping the section \(H_x\) to the kth entry of the diagonal matrix at x. The angular part has a similar covariance function.

For standard choices of \(\beta _k\), we note that the covariance functions for analogously-defined scalar fields would be well-known spectral invariants: we would have

$$\begin{aligned} r(x,y)= {\left\{ \begin{array}{ll} Z(x,y,2s):=\sum _{k=1}^\infty \frac{\psi _k(x)\psi _k(y)}{\lambda _k^{2s}}, \qquad \beta _k=\lambda _k^{-s};\\ e^*(x,y,2t):=\sum _{k=1}^\infty \frac{\psi _k(x)\psi _k(y)}{e^{2t\lambda _k}}, \qquad \beta _k=e^{-t\lambda _k}. \end{array}\right. } \end{aligned}$$
(7)

Here \(Z(x,y,2s)\) is known as the spectral zeta function of \(\Delta _0\) (see e.g. [16]), while \(e^*(x,y,2t)\) is the corresponding heat kernel (see e.g. [4] or [9, Ch. 6]), both taken without the constant term that would correspond to the constant eigenfunction \(\psi _0\) with eigenvalue zero.

Remark 3.4

When taking \(\beta _j = \lambda _j^{-s}\), the parameter s determines the a.s. Sobolev regularity of the random metric g via Propositions 3.1 and 3.2. If the metric \(g^0\) is real-analytic, then letting \(\beta _k=e^{-t\lambda _k}\) makes the random metric g real-analytic as well, with the parameter t related to the a.s. radius of analyticity (the exponent in rate of decay of Fourier coefficients).

Remark 3.5

A similar construction applies to the space of all Riemannian metrics on M (without necessarily fixing the volume form). We now work in the symmetric space \(\mathrm {GL}(T_x M)/\mathrm {O}(g^0_x)\). The only change is that in Eq. (4) one lets \(\underline{\xi }_j\) be standard vector-valued Gaussians in \({\mathbb {R}}^n\), without applying the projection \(\pi _n\).

There is a Riemannian structure and an \(L^2\) metric (due to Ebin) defined on the space of all metrics. A detailed study of the metric properties of this space was undertaken in [10].

4 \(\Omega _2\) as a random variable

In this section we study the statistics of \(\Omega _2^2\).

4.1 The distribution function

We recall one definition of the (fiber-wise) distance \(d_x\) introduced in Sect. 2.3. For this choose a basis for \(T_x M\) orthonormal with respect to \(g^0(x)\) (in this basis the reference metric \(g^0_x\) is represented by the identity matrix). If the translation from \(g^0_x\) to \(g^1_x\) is given by the element \(k_x a_x \in G_x\) with \(a_x\) diagonal in the chosen basis, \(k_x\) orthogonal, then the metric \(g^1_x\) is represented by the symmetric positive-definite matrix \(k_x a_x^2 k_x^{-1}\). Writing \(e^{b_i(x)}\) for the diagonal entries of \(a_x\), we have

$$\begin{aligned} d_x^2(g^0_x,g^1_x) = \sum _{i=1}^{n} b_i(x)^2. \end{aligned}$$

Accordingly,

$$\begin{aligned} \Omega _2^2(g^0,g^1) = \int _M \left( \sum _{i=1}^n b_i(x)^2\right) \mathrm{dv}(x). \end{aligned}$$
(8)

In our random model, the vector-valued function b(x) is a Gaussian random field, chosen according to Eq. (4), where here we choose \(\pi _n\) to be the orthogonal projection. In other words b(x) is defined by projecting an isotropic Gaussian in \({\mathbb {R}}^n\) orthogonally onto the hyperplane \(\sum _i b_i(x) = 0\). Integrating over x, we find that the distribution of \(\Omega _2^2\) is given by:

$$\begin{aligned} \Omega _2^2 \overset{D}{=} \sum _j \beta _j^2 \sum _{i=1}^{n-1} W_{i,j} \end{aligned}$$

where the \(W_{i,j}\) are independent random variables, each with the \(\chi ^2_1\) distribution (one degree of freedom). We can rewrite this as

$$\begin{aligned} \Omega _2^2 \overset{D}{=} \sum _j \beta _j^2 V_j \end{aligned}$$

with i.i.d. \(V_j \sim \chi ^2_{n-1}\) (\(\chi ^2\) distribution with \(n-1\) degrees of freedom).

Recall that the moment generating function of the random variable X is the function \(M_X(t) = E\left( \exp (tX)\right) \). These can be used, for example, to estimate the probability of large deviations of the variable X. Having represented \(\Omega _2^2\) as the sum of independent variables with known distribution, we can now explicitly compute its moment generating function as the product

$$\begin{aligned} M_{\Omega ^2_2}(t)&= \prod _j \prod _{i=1}^{n-1} M_{\chi ^2_1}(t\beta _j^2) = \prod _j \prod _{i=1}^{n-1} (1-2t \beta _j^2)^{-1/2}\\&= \prod _j(1-2t \beta _j^2)^{-(n-1)/2}. \end{aligned}$$
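As a sanity check, the product formula can be compared against a Monte Carlo simulation of the representation \(\Omega _2^2 \overset{D}{=} \sum _j \beta _j^2 V_j\); the sketch below (our own, with an arbitrary truncated decay sequence chosen only for illustration) does this for a small value of t.

```python
import numpy as np

rng = np.random.default_rng(6)
n, J, N = 3, 50, 200_000
beta = np.arange(1, J + 1, dtype=float) ** (-2.0)   # illustrative decay sequence
t = 0.1                                             # needs 2 t beta_1^2 < 1

# N samples of Omega_2^2 = sum_j beta_j^2 V_j with V_j ~ chi^2_{n-1} i.i.d.
omega2_sq = rng.chisquare(df=n - 1, size=(N, J)) @ beta ** 2

mc = np.mean(np.exp(t * omega2_sq))                 # empirical E exp(t Omega_2^2)
exact = np.prod((1.0 - 2.0 * t * beta ** 2) ** (-(n - 1) / 2.0))
print(mc, exact)                                    # agree up to sampling error
```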

The following result is proved similarly.

Proposition 4.1

The characteristic function \(E(\exp (it \Omega ^2_2))\) can be computed explicitly as

$$\begin{aligned} \prod _j \prod _{k=1}^{n-1} (1-2it \beta _j^2)^{-1/2} =\prod _j(1-2it \beta _j^2)^{-(n-1)/2}. \end{aligned}$$

4.2 Tail estimates for \(\Omega _2^2\)

Here we apply [14, Lemma 1, (4.1)] to estimate the probability of the following events:

$$\begin{aligned} \mathrm{{Prob}}\{\Omega _2^2>R^2\},\quad R\rightarrow \infty . \end{aligned}$$
(9)

We let \(W=\sum _i a_i Z_i^2\) with \(Z_i\) i.i.d. standard Gaussians, and for \((n-1)(j-1)+1\le i\le (n-1)j\), we have \(a_i=\beta _j^2\) (i.e. each \(\beta _j^2\) is repeated \((n-1)\) times). We let \(\left\| {a} \right\| _\infty =\sup _j a_j\). Assume from now on that \(\beta _j=F(\lambda _j)\) is a monotone decreasing function; then \(\left\| {a} \right\| _\infty =a_1=\beta _1^2\).

It is shown in [14, Lemma 1, (4.1)] that for \(W_k=\sum _{i=1}^{k(n-1)}a_i Z_i^2\), we have

$$\begin{aligned} \mathrm{{Prob}}\left\{ W_k\ge \sum _{i=1}^{k(n-1)}a_i+2\left( \sum _{i=1}^{k(n-1)}a_i^2\right) ^{1/2}\sqrt{y} + 2\left\| {a} \right\| _\infty y\right\} \le e^{-y}. \end{aligned}$$

Letting \(k\rightarrow \infty \), we get the following quantities:

  • \(W:=\lim _{k\rightarrow \infty }W_k=\Omega _2^2\);

  • \(A^2=\sum _{i=1}^\infty a_i=(n-1)\sum _{j=1}^\infty \beta _j^2\);

  • \(B^4=\sum _{i=1}^\infty a_i^2=(n-1)\sum _{j=1}^\infty \beta _j^4\);

  • \(\left\| {a} \right\| _\infty =a_1=\beta _1^2\).

With \(x^2\) instead of y, we get:

$$\begin{aligned} \mathrm{{Prob}}\{W\ge A^2 + 2B^2 x +2\left\| {a} \right\| _\infty x^2\} \le e^{-x^2}. \end{aligned}$$

Solving

$$\begin{aligned} R^2=2||a||_\infty x^2+2B^2 x+A^2 \end{aligned}$$

for x gives (for \(R\ge A\)) the following root:

$$\begin{aligned} x(R)=\frac{-B^2+\sqrt{B^4+2(R^2-A^2)||a||_\infty }}{2||a||_\infty }. \end{aligned}$$
(10)

We conclude that

$$\begin{aligned} \mathrm{{Prob}}\{\Omega _2\ge R\} \le e^{-(x(R))^2}, \end{aligned}$$

where x(R) is given by (10).
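The resulting bound is easy to evaluate numerically; the following sketch (our own, with a decay sequence following the Weyl-law growth \(\lambda _j \sim j^{2/n}\) purely for illustration) computes x(R) from (10) and the corresponding upper bound.

```python
import numpy as np

n, J, s = 3, 10_000, 2.0
lam = np.arange(1, J + 1, dtype=float) ** (2.0 / n)   # Weyl-law growth, for illustration
beta = lam ** (-s)                                    # beta_j = lambda_j^{-s}

a_inf = beta[0] ** 2                                  # ||a||_inf = beta_1^2
A2 = (n - 1) * np.sum(beta ** 2)                      # A^2
B2 = np.sqrt((n - 1) * np.sum(beta ** 4))             # B^2

def tail_bound(R):
    """Upper bound exp(-x(R)^2) for Prob{Omega_2 >= R}, with x(R) as in (10)."""
    x = (-B2 + np.sqrt(B2 ** 2 + 2.0 * (R ** 2 - A2) * a_inf)) / (2.0 * a_inf)
    return np.exp(-x ** 2)

for R in np.sqrt(A2) * np.array([1.0, 2.0, 4.0]):
    print(R, tail_bound(R))
```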

It is easy to show that there exists a constant \(C=C(A,B,||a||_\infty )\) such that for \(R\ge A\), we have

$$\begin{aligned} x(R)^2\ge \frac{R^2}{2||a||_\infty }-CR=\frac{R^2}{2\beta _1^2}-CR. \end{aligned}$$

We also notice that

$$\begin{aligned} \mathrm{{Prob}}\{\Omega _2\ge R\}\ge \mathrm{{Prob}}\{\beta _1^2Z_1^2\ge R^2\}= \Psi \left( \frac{R}{\beta _1}\right) \ge C\frac{\beta _1e^{-R^2/(2\beta _1^2)}}{R}, \end{aligned}$$

provided \(R\ge \beta _1\).

To summarize:

Theorem 4.2

For \(R\ge A\), we have

$$\begin{aligned} \frac{C\beta _1}{R}\exp \left( \frac{-R^2}{2\beta _1^2}\right) \le \mathrm{{Prob}}\{\Omega _2\ge R\} \le \exp \left( \frac{-R^2}{2\beta _1^2}+CR\right) . \end{aligned}$$