
Introduction

The Heat Kernel Signature (HKS), introduced by Sun et al. in [8], is known to be a powerful shape signature. In [8] it is shown that the HKS is not only an isometric invariant but contains almost all intrinsic information of a surface. Thus it is well suited for detecting similarly shaped regions of surfaces. The HKS is derived from the process of heat diffusion and is consequently equipped with a time parameter. This multiscale property allows one to adjust the size of the neighborhood that influences the value of the HKS at a point. Additionally, the HKS is not sensitive to small perturbations of the underlying surface, e.g. a tunnel between small sets of points. Several methods employ the HKS to detect similarly shaped surfaces globally, see [1, 46]. Also from a visual point of view the HKS characterizes a surface very well, since, for small time values, it is closely related to the Gaussian curvature of the surface. For large time values it can be considered as the curvature on a larger scale. Our idea is to use the HKS for the visualization of tensor fields. Since positive definite tensor fields can be considered as Riemannian metrics, i.e. together with their domain as Riemannian manifolds, the definition of the HKS remains applicable to positive definite tensor fields. Consequently, we obtain a scalable Gaussian curvature of the Riemannian manifold associated with the tensor field.

For a better understanding we illustrate the relation between the HKS of a two-dimensional surface M and a positive definite tensor field (i.e. the metric tensor field of the surface) in Fig. 1. If g is the metric of the surface M and \(f: {\mathbb{R}}^{2} \supset U \rightarrow {\mathbb{R}}^{3}\) a parametrization of M, i.e. f(U) = M, we can compute the pullback of the metric g to U by f, denoted by \({f}^{\ast }g\). The metric \({f}^{\ast }g\) is a positive definite tensor field on U which is well characterized by the HKS of the surface. In this chapter we propose a method for computing the HKS directly for a positive definite tensor field defined on \(U \subset {\mathbb{R}}^{2}\), interpreting the tensor field as the metric of a surface. We do not require that there exists a simple embedding of the surface in some Euclidean space. We restrict ourselves to 2D positive definite tensor fields in this chapter, but the definition of the HKS and the numerical realization presented here are also valid in higher dimensions. However, the computational complexity becomes a problem in higher dimensions, see also the remark in Sect. 6.

Fig. 1

Commutative diagram illustrating the relation between the HKS of a surface and a positive definite tensor field. Metric of the surface depicted as ellipses (top left), the parametrized surface (top right), the HKS on the surface (bottom right) and the HKS on U (bottom left)

In Sect. 2 we give a short introduction to the HKS. The application of the HKS to tensor fields is explained in Sect. 3. To compute the HKS we need to compute the eigenvalues of the Laplacian on a Riemannian manifold (M, g). In the case of surfaces the embedding in Euclidean space can be utilized, whereas, in the case of tensor fields, all computations must be done using the tensor only. A finite element method that achieves this for tensor fields on a uniform grid is proposed in Sect. 4, along with some numerical tests. Results of our method are shown in Sect. 5.

Heat Kernel Signature

The Heat Kernel Signature (HKS) is typically used for the comparison of surfaces. It is derived from the heat equation and assigns to each point of the surface a time-dependent function \([0,\infty ) \rightarrow \mathbb{R}\) which depends only on the metric of the surface. Conversely, all information about the metric is contained in the HKS under quite weak assumptions. For smaller time values the HKS at a point is governed by smaller neighbourhoods, i.e. one can control the portion of the surface which should be taken into account. This makes the HKS a powerful tool for the identification of similarly shaped parts at different levels of detail by comparing the HKS for different time values. However, the HKS is not restricted to surfaces; it is defined for arbitrary Riemannian manifolds. We employ this fact to apply the HKS to positive definite tensor fields. A short introduction to the HKS is given in this section. For details we refer the reader to [8]. A detailed treatment of the heat operator and the heat kernel can be found in [7].

Let (M, g) be a compact, oriented Riemannian manifold and Δ the Laplace-Beltrami operator (also called simply the Laplacian) on M, which is equivalent to the usual Laplacian in the case of flat spaces. Given an initial heat distribution \(h(x) = h(0,x) \in {C}^{\infty }(M)\) on \(M\), considered to be perfectly insulated, the heat distribution \(h(t,x) \in {C}^{\infty }({\mathbb{R}}^{+}\times M)\) at time t is governed by the heat equation

$$\displaystyle{(\partial _{t}+\varDelta )h(t,x) = 0.}$$

One can show that there exists a function \(k(t,x,y) \in {C}^{\infty }({\mathbb{R}}^{+} \times M \times M)\) satisfying

$$\displaystyle\begin{array}{rcl} (\partial _{t} +\varDelta _{x})k(t,x,y)& =& 0, {}\\ \lim _{t\rightarrow 0}\int k(t,x,y)h(y)\,dy& =& h(x), {}\\ \end{array}$$

where Δ x denotes the Laplacian acting in the x variable. The function k(t, x, y) is called the heat kernel. Let now H t be the integral operator defined by

$$\displaystyle{H_{t}h(x) =\int _{M}k(t,x,y)h(y)\,dy,}$$

then h(t, x) = H t h(x) satisfies the heat equation. Consequently H t takes an initial heat distribution h(x) to the heat distribution h(t, x) at time t. The operator H t is called heat operator.

The heat kernel can be computed by the formula

$$\displaystyle{ k(t,x,y) =\sum _{i}{e}^{-\lambda _{i}t}\phi _{ i}(x)\phi _{i}(y), }$$
(1)

where λ i and ϕ i are the eigenvalues and eigenfunctions of Δ. Since Δ is invariant under isometries, Eq. (1) shows that this is also true for the heat kernel. Moreover, the metric can be computed from the heat kernel by the formula

$$\displaystyle{\lim _{t\rightarrow 0}t\log k(t,x,y) = -\frac{1} {4}{d}^{2}(x,y),}$$

where d(x, y) denotes the geodesic distance between two points x, y ∈ M. Thus, for a given manifold M, the information contained in the heat kernel and in the metric is equivalent. Another important property of the heat kernel is its multi-scale property. For the heat kernel, t plays the role of a spatial scale of influence, i.e. k(t, x, ⋅ ) depends mainly on small neighborhoods of x for small t, whereas k(t, x, ⋅ ) is influenced by larger neighborhoods of x for larger t.
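Both properties can be checked numerically on the simplest closed manifold, the unit circle, where the eigenvalues n² and eigenfunctions are known in closed form. The following sketch (our own illustration, not part of the chapter; the function name is an assumption) evaluates the spectral expansion of the heat kernel and the limit above:

```python
import numpy as np

def heat_kernel_circle(t, theta, nmax=400):
    # spectral expansion of the heat kernel on the unit circle: eigenvalues n^2
    # with eigenfunctions 1/sqrt(2*pi), cos(n*x)/sqrt(pi), sin(n*x)/sqrt(pi),
    # so k(t, x, y) depends only on theta = x - y
    n = np.arange(1, nmax + 1)
    return (1 + 2*np.sum(np.exp(-n**2 * t) * np.cos(n*theta))) / (2*np.pi)

d = 0.5  # geodesic distance between the two points
approx = {t: t*np.log(heat_kernel_circle(t, d)) for t in (0.05, 0.01)}
# both values approach -d**2/4 = -0.0625 as t decreases
```

Decreasing t from 0.05 to 0.01 visibly tightens the agreement with −d²∕4, in line with the limit formula.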

The HKS is defined in [8] to be the function \(\mathit{HKS} \in {C}^{\infty }({\mathbb{R}}^{+} \times M)\) given by

$$\displaystyle{ \mathit{HKS}(t,x) = k(t,x,x). }$$
(2)

Since the heat kernel is much more complex than the HKS, one might expect to lose a lot of information when considering the HKS instead of the heat kernel. But, as shown in [8], the metric can be reconstructed from the HKS under quite weak assumptions. This means that the HKS of a positive definite tensor field contains almost all information about the tensor field itself and is consequently much more informative than usual scalar quantities like the trace or the determinant.
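Given any eigendecomposition of the Laplacian, Eqs. (1) and (2) combine into a few lines of code. A minimal sketch (the names hks, lams, phis are ours) using the analytically known eigenpairs of the unit circle, whose HKS must be constant by homogeneity:

```python
import numpy as np

def hks(ts, lams, phis):
    # HKS(t, x) = sum_i exp(-lambda_i t) phi_i(x)^2, combining Eqs. (1) and (2);
    # phis holds the eigenfunctions sampled at the points, one column per function
    return np.exp(-np.outer(ts, lams)) @ (phis**2).T

# eigenpairs of the Laplacian on the unit circle (arc-length parametrization)
x = np.linspace(0, 2*np.pi, 100, endpoint=False)
n = np.arange(1, 50)
lams = np.concatenate(([0.0], n**2, n**2))
phis = np.column_stack([np.full((100, 1), 1/np.sqrt(2*np.pi)),
                        np.cos(np.outer(x, n))/np.sqrt(np.pi),
                        np.sin(np.outer(x, n))/np.sqrt(np.pi)])
H = hks(np.array([0.5, 1.0]), lams, phis)
# the circle is homogeneous, so every row of H is constant in x
```

The series is truncated after the eigenvalues whose contribution e^{−λt} is negligible for the time values of interest; for larger t fewer eigenpairs suffice.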

Relation to Curvature

In order to obtain a more intuitive understanding of the HKS we study its relation to the curvature of the manifold M. For small values of the time parameter t the HKS has the series expansion

$$\displaystyle{ \mathit{HKS}(t,x) = \frac{1} {4\pi t}\sum _{i=0}^{\infty }u_{i}(x){t}^{i}. }$$
(3)

The general form of the functions u i (x) is discussed in [7]. For the two-dimensional manifolds considered in this chapter the first three functions can be written as

$$\displaystyle\begin{array}{rcl} u_{0}(x)& =& 1, {}\\ u_{1}(x)& =& \frac{1} {3}K(x), {}\\ u_{2}(x)& =& \frac{1} {45}\left (4K{(x)}^{2} - 3\varDelta K(x)\right ), {}\\ \end{array}$$

where K is the Gaussian curvature of M. Consequently, for a small value of t, the value of the HKS consists mainly of \(\frac{1} {3}K\) plus a constant. The derivation of the stated u i from the general case can be found in the Appendix.
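The expansion can be checked concretely on the unit sphere, where K ≡ 1, ΔK = 0, and the diagonal of the heat kernel is known through the spherical harmonics (eigenvalues l(l+1) with multiplicity 2l+1). The sketch below (our own illustration) compares the exact HKS with the truncated expansion:

```python
import numpy as np

t = 0.1
l = np.arange(0, 60)
# exact HKS on the unit sphere: sum over spherical-harmonic eigenspaces
exact = np.sum((2*l + 1)*np.exp(-l*(l + 1)*t)) / (4*np.pi)
# expansion (3) with u_0 = 1, u_1 = K/3 = 1/3, u_2 = 4*K^2/45 = 4/45
with_u2 = (1 + t/3 + t**2 * 4/45) / (4*np.pi*t)
without_u2 = (1 + t/3) / (4*np.pi*t)
# including the u_2 term noticeably tightens the approximation
```

Already at t = 0.1 the truncation after u₂ agrees with the exact value to better than 0.1%.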

HKS for Tensor Fields

The HKS introduced in Sect. 2 is defined for any compact, oriented Riemannian manifold. Thus the HKS is not restricted to surfaces embedded in \({\mathbb{R}}^{n}\). If we have a metric tensor g, i.e. a symmetric positive definite tensor field, defined on a region \(U \subset {\mathbb{R}}^{n}\), then (U, g) forms a Riemannian manifold. Since there is a Riemannian manifold associated with a positive definite tensor field in this way, we can compute the HKS for any positive definite tensor field. In this section we illustrate the relation of the HKS for surfaces and tensor fields by considering a parametrized surface and the pullback of its metric.

Let \(f: {\mathbb{R}}^{2} \supset U \rightarrow {\mathbb{R}}^{3}\) be a parametrized surface. On the one hand, we can compute the HKS for the surface f(U). On the other hand, we can define a metric g on U by

$$\displaystyle{ g: T_{u}(U) \times T_{u}(U) \rightarrow \mathbb{R},\quad (v,w)\mapsto \left \langle J(u)v\,,\,J(u)w\right \rangle, }$$
(4)

where J(u) denotes the Jacobian of f at u ∈ U and 〈 ⋅ , ⋅ 〉 the standard scalar product on \({\mathbb{R}}^{3}\). That is, g is the pullback \({f}^{\ast }\langle \cdot,\cdot \rangle\) of 〈 ⋅ , ⋅ 〉 by f. The components of g are given by \(g_{ij} = ({J}^{T}J)_{ij}\).
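Numerically, the pullback metric is simply JᵀJ with a finite-difference Jacobian. A small sketch (function name ours; assumes a smooth parametrization), checked against the standard torus whose metric is diag((R + r cos v)², r²):

```python
import numpy as np

def pullback_metric(f, u, eps=1e-6):
    # g_ij = (J^T J)_ij with the Jacobian J of f approximated by
    # central differences in the two parameter directions
    J = np.column_stack([(f(u + eps*e) - f(u - eps*e)) / (2*eps)
                         for e in np.eye(2)])
    return J.T @ J

# test surface: a standard torus with radii R = 3, r = 1
R, r = 3.0, 1.0
torus = lambda u: np.array([np.cos(u[0])*(R + r*np.cos(u[1])),
                            np.sin(u[0])*(R + r*np.cos(u[1])),
                            r*np.sin(u[1])])
g = pullback_metric(torus, np.array([0.4, 1.1]))
# analytically g = diag((R + r*cos(u_2))**2, r**2)
```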

This makes (U, g) a Riemannian manifold which is isometric to f(U) (equipped with the metric induced by \({\mathbb{R}}^{3}\)), with f the associated isometry. Now we can compute the HKS directly on U by using g as the metric. This is equivalent to computing the HKS on the surface f(U) and then pulling it back to the parameter space U by f, i.e.

$$\displaystyle{\mathit{HKS}_{U}(t,u) = \mathit{HKS}_{f(U)}(t,f(u)),}$$

where HKS U and HKS f(U) denote the HKS on U and f(U), respectively. In other words: The diagram in Fig. 1 commutes.

Figure 1 also shows that the HKS of (U, g) (bottom left) is a meaningful visualization of the metric. Thus we are interested in a method for computing the HKS directly for tensor fields, so that no embedded surface with the tensor field as metric tensor needs to be constructed. We propose such a method in Sect. 4.

Numerical Realization

To our knowledge, the HKS has so far only been used for triangulated surfaces. We want to use the HKS for the visualization of two-dimensional symmetric positive definite tensor fields T defined on a rectangular region \(U \subset {\mathbb{R}}^{2}\). Thus we need a method to compute the HKS of T or, more precisely, of the Riemannian manifold (U, T) associated with T. A finite element method for solving this problem is proposed in this section. Moreover, we discuss the boundary conditions and check the correctness of our results numerically.

From Eq. (1) it follows that we can compute the heat kernel signature by the formula

$$\displaystyle{\mathit{HKS}(t,x) =\sum _{i}{e}^{-\lambda _{i}t}\phi _{ i}(x)\phi _{i}(x),}$$

where λ i and ϕ i are the eigenvalues and eigenfunctions of the Laplacian Δ on (U, T). Thus we need a suitable discretization of Δ. Our first idea was to adapt the Laplacian from the framework of discrete exterior calculus, see [3], which is closely related to the cotangent Laplacian and widely used for triangulated surfaces. However, this discretization makes intensive use of edge lengths, whereas triangulating the domain U and computing edge lengths from the metric g results in triangles which might not even satisfy the triangle inequality. Thus there seems to be no easy modification of this approach. Instead we propose a finite element method to compute the eigenvalues of the Laplacian.
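The failure of the triangle inequality is easy to provoke with made-up numbers: if each edge of a triangle in U is measured with a single, strongly differing tensor sample, the three lengths need not be consistent. A hypothetical illustration (all names and values are ours):

```python
import numpy as np

def edge_length(p, q, T):
    # length of the edge pq measured with a single tensor sample T
    e = np.asarray(q) - np.asarray(p)
    return float(np.sqrt(e @ T @ e))

# three vertices of one triangle in the parameter domain
A, B, C = (0.0, 0.0), (1.0, 0.0), (0.5, 0.1)
# strongly varying field: a large tensor sampled on edge AB, small ones on AC, CB
T_ab = 100*np.eye(2)
T_ac = T_cb = np.eye(2)
ab = edge_length(A, B, T_ab)
ac = edge_length(A, C, T_ac)
cb = edge_length(C, B, T_cb)
# ac + cb < ab: the sampled edge lengths violate the triangle inequality
```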

According to Sect. 3 we can think of T as the metric of a surface in local coordinates. In this case the Laplacian is given by

$$\displaystyle{\varDelta f = \frac{1} {\sqrt{\vert T\vert }\,}\mathrm{div}\left (\sqrt{\vert T\vert }\,{T}^{-1}\nabla f\right )}$$

for any function \(f \in {C}^{\infty }(M)\), where \(\mathrm{div}\) and ∇ denote the divergence and the gradient on U, respectively. Hence we have to solve the eigenvalue equation

$$\displaystyle{ \frac{1} {\sqrt{\vert T\vert }\,}\mathrm{div}\left (\sqrt{\vert T\vert }\,{T}^{-1}\nabla \phi \right ) =\lambda \phi,}$$

or equivalently

$$\displaystyle{\mathrm{div}\left (\sqrt{\vert T\vert }\,{T}^{-1}\nabla \phi \right ) =\lambda \sqrt{\vert T\vert }\,\phi.}$$

The weak formulation of this problem is given by

$$\displaystyle{\int \nolimits_{U}\mathrm{div}\left (\sqrt{\vert T\vert }\,{T}^{-1}\nabla \phi \right )\psi \,\mathit{dx} =\lambda \int \nolimits_{ U}\sqrt{\vert T\vert }\,\phi \psi \,\mathit{dx}}$$

where the equation must hold for every smooth test function ψ. We can rewrite the left-hand side as

$$\displaystyle\begin{array}{rcl} & & \int_{U}\mathrm{div}\left (\sqrt{\vert T\vert }\,{T}^{-1}\nabla \phi \right )\psi \,\mathit{dx} {}\\ & =& \int_{U}\mathrm{div}\left (\sqrt{\vert T\vert }\,{T}^{-1}(\nabla \phi )\psi \right )\,\mathit{dx} -\int_{U}\sqrt{\vert T\vert }\,{T}^{-1}\nabla \phi \, \cdot \,\nabla \psi \,\mathit{dx} {}\\ & =& \int _{\partial U}\left (\sqrt{\vert T\vert }\,{T}^{-1}(\nabla \phi )\psi \right )\,\cdot \,n\,\mathit{dx} -\int_{U}\sqrt{\vert T\vert }\,{T}^{-1}\nabla \phi \,\cdot \,\nabla \psi \,\mathit{dx}, {}\\ \end{array}$$

where n denotes the outward-pointing normal of the boundary. If we apply Neumann boundary conditions, i.e. the conormal derivative \({T}^{-1}\nabla \phi \,\cdot \,n = 0\), the first term vanishes. Finally, we have to solve the equation

$$\displaystyle{\int_{U}\sqrt{\vert T\vert }\,{T}^{-1}\nabla \phi \,\cdot \,\nabla \psi \,\mathit{dx} = -\lambda \int_{U}\sqrt{\vert T\vert }\,\phi \psi \,\mathit{dx}.}$$

Choosing basis functions h i , the stiffness matrix L and the mass matrix M are given by

$$\displaystyle\begin{array}{rcl} L_{ij}& =& \int_{U}\sqrt{\vert T\vert }\,{T}^{-1}\nabla h_{ i}\,\cdot \,\nabla h_{j}\,\mathit{dx}, {}\\ M_{ij}& =& -\int_{U}\sqrt{\vert T\vert }\,h_{i}h_{j}\,\mathit{dx}, {}\\ \end{array}$$

and we solve the generalized eigenvalue equation

$$\displaystyle{Lv =\lambda Mv.}$$

In our examples the tensor fields are given on regular grids and we use bilinear basis functions h k .
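A possible realization of this scheme (our own sketch, not the authors' code) assembles L and M with bilinear elements and a 2×2 Gauss rule, treating the tensor as constant per cell; the sign is folded so that the generalized eigenvalues come out nonnegative, and scipy's shift-invert solver returns M-orthonormal eigenvectors, which is exactly the normalization the spectral expansion needs:

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import eigsh

def assemble(T, h):
    # T: (ny, nx, 2, 2) SPD tensor per grid node, h: grid spacing;
    # the tensor is approximated as constant per cell (corner average)
    ny, nx = T.shape[:2]
    L = lil_matrix((nx*ny, nx*ny))
    M = lil_matrix((nx*ny, nx*ny))
    gauss = [0.5 - 0.5/np.sqrt(3), 0.5 + 0.5/np.sqrt(3)]  # 2-point rule on [0, 1]
    for cy in range(ny - 1):
        for cx in range(nx - 1):
            nodes = [cy*nx + cx, cy*nx + cx + 1,
                     (cy + 1)*nx + cx, (cy + 1)*nx + cx + 1]
            Tc = 0.25*(T[cy, cx] + T[cy, cx+1] + T[cy+1, cx] + T[cy+1, cx+1])
            w = np.sqrt(np.linalg.det(Tc))   # sqrt(|T|)
            Tinv = np.linalg.inv(Tc)
            for a in gauss:
                for b in gauss:
                    # bilinear shape functions and their physical gradients
                    N = np.array([(1 - a)*(1 - b), a*(1 - b), (1 - a)*b, a*b])
                    G = np.array([[-(1 - b), -(1 - a)], [1 - b, -a],
                                  [-b, 1 - a], [b, a]]) / h
                    q = 0.25*h*h             # quadrature weight on the cell
                    for i in range(4):
                        for j in range(4):
                            L[nodes[i], nodes[j]] += q*w*(G[i] @ Tinv @ G[j])
                            M[nodes[i], nodes[j]] += q*w*N[i]*N[j]
    return csr_matrix(L), csr_matrix(M)

# sanity check: for the identity tensor on the unit square the eigenvalues
# should approach the Neumann eigenvalues 0, pi^2, pi^2, 2*pi^2, ...
n = 25
L, M = assemble(np.tile(np.eye(2), (n, n, 1, 1)), 1.0/(n - 1))
lams, phis = eigsh(L, k=4, M=M, sigma=-1e-8, which='LM')
lams = np.sort(lams)
```

The small negative shift sigma keeps the factorization in the shift-invert step nonsingular despite the zero eigenvalue of the Neumann problem.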

Usual boundary conditions like Dirichlet or Neumann boundary conditions influence the HKS significantly. In particular for large time values the influence is not limited to the immediate vicinity of the boundary. Neumann boundary conditions cause the HKS to have higher values close to the boundary; their physical meaning is that the heat is perfectly insulated. Dirichlet boundary conditions force the HKS to a fixed value at the boundary. To overcome this problem we reflect a part of the field at the boundary. We can then use Neumann boundary conditions for the sake of simplicity and obtain a significantly reduced influence of the boundary, see Fig. 2. The physical meaning of these reflected boundaries is that the heat at the boundary can diffuse outwards in the same way as inwards.
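One way to implement the reflection (a sketch under our own conventions; the function name and component handling are assumptions, not the authors' code) pads the sampled field with numpy's mirror mode and accounts for the transformation behaviour of the components: under reflection of one coordinate the diagonal entries are unchanged while T12 flips its sign, and in the corners the two sign flips cancel:

```python
import numpy as np

def reflect_tensor(T, pad):
    # mirror `pad` samples of the (ny, nx, 2, 2) tensor field across each boundary
    Tp = np.pad(T, ((pad, pad), (pad, pad), (0, 0), (0, 0)), mode='reflect')
    # sign pattern: -1 in singly reflected strips, +1 in the interior and corners
    s0 = np.ones(Tp.shape[0]); s0[:pad] = -1; s0[-pad:] = -1
    s1 = np.ones(Tp.shape[1]); s1[:pad] = -1; s1[-pad:] = -1
    sign = np.outer(s0, s1)
    Tp[..., 0, 1] *= sign
    Tp[..., 1, 0] *= sign
    return Tp

ny = nx = 6; pad = 2
T = np.zeros((ny, nx, 2, 2))
T[..., 0, 0] = 2.0; T[..., 1, 1] = 3.0
rng = np.random.default_rng(0)
T[..., 0, 1] = T[..., 1, 0] = rng.normal(size=(ny, nx)) * 0.1
Tp = reflect_tensor(T, pad)
```

After the HKS is computed on the padded field, the reflected margin is cropped again, as in Fig. 2.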

Fig. 2

The result on the left is strongly influenced by the boundary. This effect can be reduced significantly by reflecting a portion of the tensor field on the boundary (middle) and cropping the result (right)

Numerical Verification

We check the correctness of the FEM described above experimentally by comparing the HKS for a surface with that of the pullback of its metric, i.e. our verification reflects the commutativity of the diagram in Fig. 1. Consider a bumpy torus parametrized by

$$\displaystyle{f(u,v) = \left (\begin{array}{c} \cos (u)\left (r_{1}(u) + r_{2}(v)\cos (v)\right ) \\ \sin (u)\left (r_{1}(u) + r_{2}(v)\cos (v)\right ) \\ r_{2}(v)\sin (v) \end{array} \right ),}$$

where the radii are modulated, i.e. major radius and minor radius are given by \(r_{1}(u) = 3 + \frac{1} {2}\cos (10u)\) and \(r_{2}(v) = 1 + \frac{1} {5}\cos (8v)\), respectively. This results in the surface that is also used in Fig. 1. We compute the metric of the bumpy torus on \({[0,2\pi ]}^{2}\) by formula (4) and sample the resulting tensor field at four different resolutions of \({50}^{2}\), \({100}^{2}\), \({200}^{2}\) and \({400}^{2}\) points. For these four datasets we compute the HKS with the FEM described above. The results are compared with the HKS for the bumpy torus given as a triangulated surface with \({400}^{2}\) points, where the HKS is computed by the standard FEM Laplacian for triangulated surfaces, see e.g. [9]. If we denote the HKS for the tensor field by HKS T and for surfaces by HKS S , the relative difference for t = 1 is given by

$$\displaystyle{\frac{\Vert \mathit{HKS}_{T}(1,\,\cdot \,) -\mathit{HKS}_{S}(1,\,\cdot \,)\Vert _{L_{2}}} {\Vert \mathit{HKS}_{S}(1,\,\cdot \,)\Vert _{L_{2}}} = \frac{{\left (\int _{{[0,2\pi ]}^{2}}{\left (\mathit{HKS}_{T}(1,x) -\mathit{HKS}_{S}(1,x)\right )}^{2}\,\mathit{dx}\right )}^{\frac{1} {2} }} {{\left (\int _{{[0,2\pi ]}^{2}}{\left (\mathit{HKS}_{S}(1,x)\right )}^{2}\,\mathit{dx}\right )}^{\frac{1} {2} }}.}$$
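On the sampled grids this quantity reduces to discrete L2 norms; a minimal sketch with a Riemann-sum quadrature (function name ours):

```python
import numpy as np

def relative_l2_difference(hks_t, hks_s, h):
    # discrete version of the relative L2 difference above; hks_t, hks_s are
    # node samples on a uniform grid with spacing h, evaluated at a fixed t
    num = np.sqrt(np.sum((hks_t - hks_s)**2) * h*h)
    den = np.sqrt(np.sum(hks_s**2) * h*h)
    return num / den
```

For a field that is uniformly 10% off, the measure returns 0.1 independently of resolution and spacing, as one would expect of a relative error.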

See Table 1 for the relative difference between HKS T for different resolutions and HKS S . The values show that HKS T approaches HKS S quickly with increasing resolution.

Table 1 Relative difference between HKS for tensors and surfaces

Results

We show several results of our method in this section, investigating the significance of the HKS and the meaning of Gaussian curvature in the general tensor context. To get a more intuitive understanding, we analyze the influence of the eigenvalues and eigenvectors on the HKS and consider synthetic tensor fields with constant eigenvectors and constant eigenvalues, respectively. As central structural components of tensor fields we also include isolated degenerate points in our analysis. As a real-world example we apply the method to a diffusion tensor dataset of a brain. Throughout this section we use the colormap shown in Fig. 2, which ranges from the minimum to the maximum over all results in one figure, unless otherwise stated.

In Fig. 3 we analyze simple examples of diagonal tensor fields, i.e. T 12 = 0. These fields serve as examples of tensor fields with variable eigenvalues but constant eigenvectors. For the tensor field T 1, where the component \(T_{11}^{1}\) is a Gaussian function depending on u 1 and \(T_{22}^{1}\) is constant, the HKS is constant. The field T 2 is very similar to T 1; the only difference is that \(T_{11}^{2}\) depends on u 2. In this case the HKS is no longer constant. To understand this we consider formula (3) for t = 1, i.e.

$$\displaystyle{ \mathit{HKS}(1,x) = \frac{1} {4\pi }\left (1 + \frac{1} {3}K(x) + \frac{1} {45}\left (4K{(x)}^{2} - 3\varDelta K(x)\right )+\ldots \right ), }$$
(5)

while the Gaussian curvature for diagonal T is given by

$$\displaystyle{ K = - \frac{1} {2\sqrt{T_{11 } T_{22}}}\left (\partial _{u_{1}} \frac{\partial _{u_{1}}T_{22}} {\sqrt{T_{11 } T_{22}}} + \partial _{u_{2}} \frac{\partial _{u_{2}}T_{11}} {\sqrt{T_{11 } T_{22}}}\right ). }$$
(6)

Consequently, if \(\partial _{u_{1}}T_{22} = 0\) and \(\partial _{u_{2}}T_{11} = 0\) we have K = 0 and thus HKS(1, x) is constant for T 1. The tensor field T 3 has components \(T_{11}^{3}\) and \(T_{22}^{3}\) depending on u 2 and u 1, respectively; consequently both diagonal components influence the HKS. The field T 4 satisfies \(T_{11}^{4} = T_{22}^{4}\), where \(T_{11}^{4}\) and \(T_{22}^{4}\) depend radially on u. As expected, the HKS also depends radially on u.
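The curvature formula for a diagonal metric can be checked numerically against a metric with known curvature, e.g. the standard torus metric diag((R + r cos v)², r²) with K = cos v∕(r(R + r cos v)); here the \(\partial _{u_{1}}\) term vanishes because T 22 is constant (a sketch, names ours):

```python
import numpy as np

R, r = 3.0, 1.0
v = np.linspace(0.0, 2*np.pi, 4001)
dv = v[1] - v[0]
T11 = (R + r*np.cos(v))**2      # depends on u_2 = v only
T22 = np.full_like(v, r**2)     # constant
s = np.sqrt(T11 * T22)          # sqrt(T11*T22)
d = lambda f: np.gradient(f, dv)
K = -d(d(T11)/s) / (2*s)        # the partial_{u1} term vanishes for this metric
K_exact = np.cos(v) / (r*(R + r*np.cos(v)))
```

The finite-difference curvature agrees with the analytic one to high accuracy away from the first and last few samples, where np.gradient falls back to one-sided differences.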

Fig. 3

HKS of diagonal tensor fields for small t. The function f is given by \(f(x) = 1 + 10{e}^{-{x}^{2} }\) and the tensor fields are defined for u ∈ [−5, 5]2

The HKS for tensor fields with constant eigenvalues and variable eigenvectors is shown in Fig. 4. These fields are defined by choosing fixed eigenvalues and a variable major eigenvector field, which is visualized by some integral lines. We observe that the HKS is also influenced by the eigenvectors: it has high values in compressing regions and low values in expanding regions. This also distinguishes the HKS from other scalar quantities like trace, determinant and anisotropy, which depend only on the eigenvalues.

Fig. 4

Tensor fields defined on [0, 1]2 by constant eigenvalues and an analytic major eigenvector field (see caption). The HKS is shown for small t and the major eigenvector field is visualized by some integral lines. The dependency of the HKS on the eigenvectors demonstrates a significant difference to other scalar quantities like trace, determinant and anisotropy. The colormap ranges from the minimum to the maximum of the single images

In Fig. 5 we consider degenerate points of tensor fields with indices between − 2 and 3. The index ind is defined as the number of rotations of the eigenvector field along a curve enclosing the degenerate point (and no other degenerate points). For a more formal definition see [2]. The results show that the HKS also hints at topological features like degenerate points, although there seems to be no obvious way to derive the tensor field topology from the HKS.

Fig. 5

HKS of degenerate points with different index ind for small t. The tensor field is defined on u ∈ [−1, 1]2 and the eigenvalues are given by 10 + 3 | u | and 10 − 3 | u | 

As a last example, Fig. 6 shows a diffusion tensor dataset of a brain. Again, the HKS is evaluated for different time steps. Although the extraction of a slice may cut away valuable information, the structure of the brain becomes apparent from the HKS. The reason for this is that heat transfer rests on the same mathematical foundations as the diffusion process described by the diffusion tensor field. This means that the HKS has an immediate link to the diffusion process, which makes it a promising quantity for the analysis of diffusion tensor data. It also suggests that the curvature has a deeper meaning for Riemannian manifolds associated with a diffusion tensor dataset. Moreover, the time parameter t allows one to focus on smaller or larger structures.

Fig. 6

HKS of a brain dataset for different t. The colormap ranges from the minimum to the maximum of the single images (Brain dataset courtesy of Gordon Kindlmann at the Scientific Computing and Imaging Institute, University of Utah, and Andrew Alexander, W.M. Keck Laboratory for Functional Brain Imaging and Behavior, University of Wisconsin-Madison)

Conclusion and Future Work

By applying the HKS to tensor fields we have developed a new method for the visualization of tensor fields. Compared to common scalar invariants like the trace or the determinant it provides additional information, as Fig. 4 shows. A special strength of the method is its inherent level-of-detail property, which makes it possible to emphasize smaller or larger structures. In contrast to naive Gaussian smoothing, the scaling is driven directly by the tensor data itself. For diffusion tensor data the results are very promising. In the future we plan to investigate the significance of the HKS for further applications. It might be of interest to compare the scaling properties to ideas of anisotropic diffusion. We will also work on an extension to 3D, since much valuable information is lost in the projection of 3D tensors onto 2D slices.

From a theoretical point of view the method generalizes easily to 3D tensor fields. With the exception of the formulas indicating the relation to Gaussian curvature, all formulas are valid in higher dimensions. The problem is that the computation of the eigenvalues of the Laplacian takes very long for most 3D data. The computation of the first 500 eigenvalues for a dataset with \({256}^{2}\) points already takes a few minutes, so the computation time for a dataset with \({256}^{3}\) points will not be feasible. For tensor fields defined on surfaces a generalization is also unproblematic from a theoretical point of view, but in this case the interpretation is even more difficult: the HKS of the standard metric on the surface results in the usual HKS for surfaces, i.e. the HKS is influenced not only by the tensor field but also by the surface itself.

Appendix

The series expansion of the HKS for small values of t is given by

$$\displaystyle{\mathit{HKS}(t,x) = \frac{1} {4\pi t}\sum _{i=0}^{\infty }u_{i}(x){t}^{i}.}$$

In the general case the functions u i are given by

$$\displaystyle\begin{array}{rcl} u_{0}(x)& =& 1, {}\\ u_{1}(x)& =& \frac{1} {6}R(x), {}\\ u_{2}(x)& =& \frac{1} {360}\left (2R_{ijkl}{R}^{ijkl}(x) + 2R_{ jk}{R}^{jk}(x) + 5{R}^{2}(x) - 12\varDelta R(x)\right ), {}\\ \end{array}$$

where R ijkl is the Riemann curvature tensor, \(R_{jk} = R_{jik}^{i}\) the Ricci tensor and \(R = R_{j}^{j}\) the Ricci scalar or scalar curvature. For surfaces these tensors can be written in terms of the Gaussian curvature as

$$\displaystyle\begin{array}{rcl} R_{ijkl}& =& K(g_{ik}g_{jl} - g_{il}g_{jk}), {}\\ R_{jk}& =& Kg_{jk}, {}\\ R& =& 2K. {}\\ \end{array}$$

Thus we find

$$\displaystyle\begin{array}{rcl} R_{ijkl}{R}^{ijkl}& =& K(g_{ ik}g_{jl} - g_{il}g_{jk})K({g}^{ik}{g}^{jl} - {g}^{il}{g}^{jk}) {}\\ & =& {K}^{2}\left (g_{ ik}g_{jl}{g}^{ik}{g}^{jl} - g_{ ik}g_{jl}{g}^{il}{g}^{jk} - g_{ il}g_{jk}{g}^{ik}{g}^{jl} + g_{ il}g_{jk}{g}^{il}{g}^{jk}\right ) {}\\ & =& {K}^{2}\left (\delta _{ i}^{i}\delta _{ j}^{j} -\delta _{ k}^{l}\delta _{ l}^{k} -\delta _{ k}^{l}\delta _{ l}^{k} +\delta _{ i}^{i}\delta _{ j}^{j}\right ) {}\\ & =& {K}^{2}\left (4 - 2 - 2 + 4\right ) = 4{K}^{2}, {}\\ & & {}\\ R_{jk}{R}^{jk}& =& {K}^{2}g_{ jk}{g}^{jk} = {K}^{2}\delta _{ j}^{j} = 2{K}^{2}. {}\\ \end{array}$$

Consequently the functions u i can be expressed in terms of K by

$$\displaystyle\begin{array}{rcl} u_{0}(x)& =& 1, {}\\ u_{1}(x)& =& \frac{1} {3}K(x), {}\\ u_{2}(x)& =& \frac{1} {45}\left (4K{(x)}^{2} - 3\varDelta K(x)\right ). {}\\ & & {}\\ \end{array}$$