Abstract
This work derives the basic balance laws of Codazzi, Ricci, and Gauss for the isometric embedding of an n-dimensional Riemannian manifold into the \(m=\frac{n}{2}\left( n+1\right) \)-dimensional Euclidean space. It is shown how the balance laws can be expressed in quasi-linear symmetric form and how weak solutions for the linearized problem can be established from the Lax-Milgram theorem.
1 Introduction
Riemann introduced the notion of an abstract manifold with metric structure, his motivation being the problem of defining a surface in Euclidean space independently of the underlying Euclidean space. The isometric embedding problem seeks to establish conditions for the Riemannian manifold to be a submanifold of a Euclidean space having the same metric. For example, consider the smooth n-dimensional Riemannian manifold \(M^{n}\) with metric tensor g. In terms of local coordinates \(x_{i},\,i=1,2,\ldots ,n\) the distance on \(M^{n}\) between neighbouring points is
where here and throughout the standard summation convention is adopted. Now let \(\mathrm {I\!R\!}^{m}\) be m-dimensional Euclidean space, and let \(y:M^{n}\rightarrow \mathrm {I\!R\!}^{m}\) be a smooth map so that the distance between neighbouring points is given by
where the subscript comma denotes partial differentiation with respect to the local coordinates \(x_{i}\). Global embedding of \(M^{n}\) in \(\mathrm {I\!R\!}^{m}\) is equivalent to proving the existence of the smooth map y for each \(x\in M^{n}\). Isometric embedding requires the existence of maps y for which the distances (4.1.1) and (4.1.2) are equal. That is,
or
which may be compactly rewritten as
where
and the inner product in \(\mathrm {I\!R\!}^{m}\) is denoted by the symbol “\(\cdot \)”.
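Assembling the relations above, the system referred to as (4.1.5) takes the standard first-order form (reconstructed here from the surrounding definitions):

```latex
% Isometric embedding system: n(n+1)/2 equations for the m components of y
y_{,i}\cdot y_{,j} \;=\; g_{ij}(x), \qquad 1\le i\le j\le n .
```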
The classical isometric embedding of a 2-dimensional Riemannian manifold into a 3-dimensional Euclidean space is comparatively well studied and comprehensively discussed in the book by Han and Hong [HH06]. By contrast, the embedding of n-dimensional Riemannian manifolds into \(n(n+1)/2\)-dimensional Euclidean space has only a comparatively small literature. When \(n=3\), the main results are due to Bryant et al. [BGY83], Nakamura and Maeda [NM86, NM89], Goodman and Yang [GY88], and most recently to Poole [Poo10]. The general, but related, case when \(n\ge 3\) is considered by Han and Khuri [HK12]. These studies all rely on a linearization of the full nonlinear system (4.1.4) to establish the embedding y for a given metric \(g_{ij}\) of the Riemannian manifold.
Applied analysts familiar with continuum mechanics and quasi-linear balance laws might find a presentation of the embedding problem within the context of symmetric quasi-linear forms appealing, since there is an accompanying extensive literature originating with Friedrichs [Fri56]. For this and related references, the reader may consult Han and Hong [HH06]. It appears, however, that when the critical Janet dimension is \(m=n(n+1)/2\), the isometric embedding problem \((M^{n},g) \rightarrow \mathrm {I\!R\!}^{m}\) has not yet been expressed in symmetric quasi-linear form. The purpose of these self-contained notes is to demonstrate how this may be achieved using the Gauss, Codazzi, and Ricci relations. The existence and uniqueness of a weak solution to these equations is then proved by means of the Lax-Milgram theorem.
2 Basic Isometric Embedding Equations
Let (X, g) denote an n-dimensional Riemannian manifold with ascribed metric tensor g. Suppose the manifold (X, g) can be embedded globally into \(\mathrm {I\!R\!}^{m}\). (The term immersion is used when the embedding is local.) As stated in Sect. 4.1, this assumption implies that there exists a system of local coordinates \(x_{i},\, i=1,2,\ldots ,n\) on X and embeddings \(y_{j}(x_{i}), \,j=1,2,\ldots ,m\) such that (4.1.5) holds.
As an example, consider the 2-dimensional Riemannian manifold viewed as a surface in \(\mathrm {I\!R\!}^{3}\) and given by \(y^{1}=x_{1},y^{2}=x_{2},y^{3}=f(x_{1},x_{2})\), for a smooth function f. See Fig. 4.1.
In introductory courses, Pythagoras’ theorem is used to write the distance along the surface as
and consequently the corresponding metric is
Now consider the inverse problem: given the metric as a positive-definite covariant symmetric tensor, to find components \(y^{1},y^{2},y^{3}\) that determine the surface. The components \(y^{1}=x_{1},\,y^{2}=x_{2}\) are known, so the question is, can the nonlinear system of partial differential equations (4.2.1) be solved for f given g? (The general system is provided by (4.1.5).) For the example of the embedding of \((M^{2},g)\) into \(\mathrm {I\!R\!}^{3}\), the metric tensor may be displayed in the matrix form
which shows that for the system (4.2.1), there is an equation for each component of g. More generally, the symmetry of \(g_{jk}\) reduces (4.1.5) to three equations for the three unknowns \(y^{1},y^{2},y^{3}\), leading to a determined system. On the other hand, the embedding of \((M^{2},g)\) in \(\mathrm {I\!R\!}^{2}\) still has three equations but only two components \(y^{1},y^{2}\) of the unknown vector y (the overdetermined case), while the embedding of \((M^{2},g)\) into \(\mathrm {I\!R\!}^{4}\) has three equations to determine four unknown components \((y^{1},y^{2},y^{3},y^{4})\) (the underdetermined case).
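The isometry condition can be checked numerically for a graph surface: the induced metric \(g_{11}=1+f_{,1}^{2},\; g_{12}=f_{,1}f_{,2},\; g_{22}=1+f_{,2}^{2}\) must reproduce Euclidean arclength in \(\mathrm {I\!R\!}^{3}\). A minimal sketch, with the surface \(f=x_{1}^{2}+x_{2}^{2}\) chosen purely for illustration (it is not taken from the text):

```python
import math

# Induced metric (4.2.1) of the graph surface y = (x1, x2, f(x1, x2)):
#   g11 = 1 + f_x^2,  g12 = f_x f_y,  g22 = 1 + f_y^2.
# Illustrative choice: f = x1^2 + x2^2, so the partials are
fx = lambda x1, x2: 2.0 * x1     # df/dx1
fy = lambda x1, x2: 2.0 * x2     # df/dx2

def metric(x1, x2):
    """First fundamental form of the graph surface at (x1, x2)."""
    p, q = fx(x1, x2), fy(x1, x2)
    return (1.0 + p * p, p * q, 1.0 + q * q)    # (g11, g12, g22)

# Isometry check along the curve (x1, x2) = (t, t), 0 <= t <= 1: arclength
# measured with g on M^2 must equal Euclidean arclength of the image in R^3.
N = 10000
len_g = len_euc = 0.0
for k in range(N):
    t, dt = (k + 0.5) / N, 1.0 / N
    g11, g12, g22 = metric(t, t)
    len_g += math.sqrt(g11 + 2.0 * g12 + g22) * dt   # dx1/dt = dx2/dt = 1
    dy = (1.0, 1.0, fx(t, t) + fy(t, t))             # tangent of the image curve
    len_euc += math.sqrt(sum(c * c for c in dy)) * dt
assert abs(len_g - len_euc) < 1e-9
```

Both integrands agree pointwise, so the two arclengths coincide up to rounding error.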
For an n-dimensional Riemannian manifold the components of the corresponding metric tensor may be represented by the \(n\times n\) symmetric matrix
There are \(n(n+1)/2\) entries on and above the diagonal, and we conclude in general that the isometric embedding problem (recovering the “surface” from the metric) is
where m is the number of unknowns \((y^{1},y^{2},\ldots ,y^{m})\) and \(n(n+1)/2\) is the number of equations. The crucial number
is called the Janet dimension.
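The count above is easy to mechanise; a small helper (the function names are ours, purely illustrative) classifies an embedding problem by comparing unknowns with equations:

```python
def janet_dimension(n: int) -> int:
    """n(n+1)/2: independent components of g, hence number of equations."""
    return n * (n + 1) // 2

def classify(n: int, m: int) -> str:
    """Compare the m unknowns y^1, ..., y^m with the n(n+1)/2 equations."""
    eqs = janet_dimension(n)
    if m < eqs:
        return "overdetermined"
    if m == eqs:
        return "determined"
    return "underdetermined"

# The three cases for a 2-dimensional manifold discussed above:
assert classify(2, 2) == "overdetermined"    # (M^2, g) -> R^2
assert classify(2, 3) == "determined"        # (M^2, g) -> R^3
assert classify(2, 4) == "underdetermined"   # (M^2, g) -> R^4
assert janet_dimension(3) == 6               # Janet dimension for n = 3
```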
Not too many solutions can be expected in the overdetermined case, and the question of uniqueness has been pursued by several mathematicians. The underdetermined case provides the flexibility of more unknowns than equations, rendering Riemann’s concept of an abstract surface superfluous. Specifically, for m sufficiently large, the manifold \((M^{n},g)\) embeds globally and smoothly into \(\mathrm {I\!R\!}^{m}\), and \((M^{n},g)\) looks exactly like a surface. The following theorem is the precise statement.
Theorem 4.2.1
(Nash [Nas56]) Let \(3\le k \le \infty \). A \(C^{k}\)-Riemannian manifold \((M^{n},g)\) has a \(C^{k}\)-embedding into \(\mathrm {I\!R\!}^{m}\) (globally) if
Nash’s theorem has since been improved, but the main point to note is that results for global embedding always refer to the underdetermined system. Smooth global embedding is in general not possible for determined systems, in which the number of equations equals the number of unknowns and which are conceptually more familiar in applied mathematics.
It is appropriate to quote the following relevant section from the paper by S.-T. Yau [Yau06]:
Section 3.13. Isometric embedding. Given a metric tensor on a manifold, the problem of isometric embedding is equivalent to finding enough functions \(f_{1},\ldots , f_{N}\) so that the metric can be written as \(\Sigma (df_{i})^{2}\). Much work was accomplished for two-dimensional surfaces (as mentioned in Sect. 2.1.2). Isometric embedding for general dimensions was solved in the famous work of J. Nash. Nash used his implicit function theorem which depends on various smoothing operations to gain derivatives. In a remarkable work, Gunther was able to avoid the Nash procedure. He used only standard Hölder regularity estimates for the Laplacian to reproduce the Nash isometric embedding with the same regularity result. In his book, Gromov was able to lower the codimension of the work of Nash. He called his method the h-principle.
When the dimension of the manifold is n, the expected dimension of the Euclidean space for the manifold to be isometrically embedded is \(n(n+1)/2\). It is important to understand manifolds isometrically embedded into Euclidean space with this optimal dimension. Only in such a dimension does it make sense to talk about rigidity questions. It remains a major open problem whether one can find a nontrivial family of isometric embeddings of a closed manifold into Euclidean space with an optimal dimension.....
Chern told me that he and Levy studied local isometric embedding of a three manifold into six dimensional Euclidean space, but they did not write any manuscript on it. The major work in this subject is due to E. Berger, Bryant, Griffiths, and Yang. They show that a generic three dimensional embedding system is strictly hyperbolic, and the generic four dimensional system is of principle type. Local existence is true for a generic metric using a hyperbolic operator and the Nash-Moser implicit function theorem...
Remark 4.2.1
The theory of isometric embedding is a classical subject, but our knowledge is still rather limited, especially in dimensions greater than three. Many difficult problems are related to nonlinear mixed type equations or hyperbolic differential equations over closed manifolds.
2.1 Preliminary Lemmas
In this section, we state and prove some lemmas of subsequent interest.
Lemma 4.2.1
Let \(X= X^{\prime }\times I\subset \mathrm {I\!R\!}^{n}\), where \(X^{\prime }\subset \mathrm {I\!R\!}^{n-1}\) is an open domain and I is a connected open interval. Given smooth functions \(f: X\times \mathrm {I\!R\!}^{m}\rightarrow \mathrm {I\!R\!}^{m}\) and \(A_{0}:X^{\prime }\rightarrow \mathrm {I\!R\!}^{m}\), and an initial time \(t\in I\), there exists a unique solution \(A: X\rightarrow \mathrm {I\!R\!}^{m}\) to the system of ordinary differential equations
where \(\partial _{n}=\partial _{x_{n}}\).
Proof
The proof is just that of the standard existence-uniqueness theorem for ordinary differential equations. Here, the independent variable \(x_{n}\) is “time”, t is the initial time at which the data \(A_{0}(x^{\prime })\) is specified, \(x^{\prime }\) are parameters on which the data \(A_{0}(x^{\prime })\) and the prescribed \(f(x^{\prime },x_{n},A)\) may depend, and A is the unknown function (dependent variable) to be determined.
Lemma 4.2.2
Let \(X\subset \mathrm {I\!R\!}^{n}\) be an open contractible domain and let \(f_{i}:X\times \mathrm {I\!R\!}^{m}\rightarrow \mathrm {I\!R\!}^{m}\) satisfy
for each \((x,A)\in X\times \mathrm {I\!R\!}^{m}\), where the Einstein summation convention is used here and throughout unless otherwise stated. Then given \(x_{0}\in X\) and \(A_{0}\in \mathrm {I\!R\!}^{m}\), there exists a unique solution \(A:X\rightarrow \mathrm {I\!R\!}^{m}\) to the system
where \(\partial _{i}=\partial _{x_{i}},\) and \(x=(x_{1},\ldots , x_{n})\).
Proof
Lemma 4.2.1 establishes existence and uniqueness provided the system of ordinary differential equations is consistent. But differentiation gives
and the required condition is given by
On expanding the partial derivatives, we obtain
which by (4.2.4) reduces to
which is the hypothesis (4.2.3) stipulated in the Lemma.\(\Box \)
Remark 4.2.2
Lemma 4.2.2 is a nonlinear version of the Poincaré lemma, which uses the existence and uniqueness theorem for ordinary differential equations in place of the fundamental theorem of calculus. In the standard Poincaré lemma, the functions \(f_{i}\) do not depend upon A, and the statement
implies the existence of a “potential” A with
where
2.2 Riemannian Structure in Local Coordinates
We recall some standard results whose derivation and further discussion may be found in most textbooks on differential geometry or tensor analysis.
Let (X, g) be an n-dimensional Riemannian manifold with metric g, and denote the kth covariant derivative by \(\nabla _{k}\). This derivative permits differentiation along the manifold, and for scalars \(\phi \), vectors \(\phi _{i}\) and second order tensors \(\phi _{ij}\) is given respectively by
where the Christoffel symbols are calculated from the metric g by the formula
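The formula referred to is the classical expression of the Christoffel symbols in terms of the metric and its inverse:

```latex
\Gamma^{k}_{ij} \;=\; \tfrac{1}{2}\,g^{kl}\left( \partial_{i} g_{jl}
\;+\; \partial_{j} g_{il} \;-\; \partial_{l} g_{ij} \right).
```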
The metric tensor with components \(g^{kl}\) (upper indices) is the inverse of that with components \(g_{ij}\) (lower indices) so that
where \(\delta ^{k}_{l}\) is the usual Kronecker delta of mixed order defined by
Kronecker deltas with two upper or two lower indices are defined similarly.
It is well-known that the following identities hold between the above quantities:
The Riemann curvature tensor \(R^{l}_{\textit{ijk}}\), defined in terms of Christoffel symbols by
is known to satisfy the operator identity
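Sign and index conventions for the curvature tensor vary between texts; one common choice for the definition and the subsequent lowering of the index is:

```latex
R^{l}_{\;ijk} \;=\; \partial_{j}\Gamma^{l}_{ik} \;-\; \partial_{k}\Gamma^{l}_{ij}
\;+\; \Gamma^{l}_{jp}\,\Gamma^{p}_{ik} \;-\; \Gamma^{l}_{kp}\,\Gamma^{p}_{ij},
\qquad
R_{lijk} \;=\; g_{lp}\,R^{p}_{\;ijk}.
```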
By lowering indices, we have the covariant Riemann curvature tensor
or
which possesses the minor skew-symmetries
and the interchange (or major) symmetry
Cyclic interchange of indices leads to the first Bianchi identity:
and also to the second Bianchi identity:
Remark 4.2.3
(Special case n \(=\) 2) When \(n=2\), the covariant Riemann curvature tensor reduces to
where K is the Gauss curvature given by
for any vectors \(\xi ,\, \eta \).
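In two dimensions, every component of \(R_{ijkl}\) is proportional to a single quantity; with indices arranged to respect the skew-symmetries (4.2.21), the reduction and the Gauss curvature read:

```latex
R_{ijkl} \;=\; K \left( g_{ik}\, g_{jl} - g_{il}\, g_{jk} \right),
\qquad
K \;=\; \frac{R_{ijkl}\, \xi^{i} \eta^{j} \xi^{k} \eta^{l}}
             {\left( g_{ik} g_{jl} - g_{il} g_{jk} \right) \xi^{i} \eta^{j} \xi^{k} \eta^{l}}
   \;=\; \frac{R_{1212}}{\det g}.
```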
Remark 4.2.4
The mixed and covariant Riemann curvature tensors involve the first derivatives of the Christoffel symbols and therefore second derivatives of the metric g. Consequently, the Gauss curvature is expressed in terms of first and second derivatives of the metric. This is Gauss’ Theorema Egregium.
2.3 Non-commutativity of Covariant Derivatives of Vectors
We establish the operator identity (4.2.17) when applied to a vector. That is, we prove the formula
demonstrating that second covariant derivatives of a vector do not commute.
It follows from (4.2.6) and (4.2.7) that
since relation (4.2.7) yields
Similarly, it may be shown that
on using the relation
Subtraction of (4.2.28) from (4.2.27) gives
because by definition (4.2.16) we have
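In one common sign convention (the sign depends on the definition adopted in (4.2.16)), the non-commutation formula established in this section reads:

```latex
\nabla_{k}\nabla_{j}\,\phi_{i} \;-\; \nabla_{j}\nabla_{k}\,\phi_{i}
\;=\; R^{l}_{\;ijk}\,\phi_{l}.
```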
3 Isometric Immersion
As before, we let (X, g) be an n-dimensional Riemannian manifold with metric g. An isometric immersion is an \(\mathrm {I\!R\!}^{m}\)-valued function \(y:(X,g)\rightarrow (\mathrm {I\!R\!}^{m}, \cdot )\) for which the induced metric is the same as the original. That is, in terms of local coordinates \((x^{1},x^{2},\ldots ,x^{n})\) there holds
where the dot “\(\cdot \)” denotes the canonical Euclidean metric in the coordinate patch \((y^{1},\ldots ,y^{m})\) in \(\mathrm {I\!R\!}^{m}\).
On letting ds be the distance between neighbouring points in \(\mathrm {I\!R\!}^{m}\), when y is known, we have from Pythagoras’ theorem that
On the other hand, the general distance formula for the abstract Riemannian manifold (X, g) due to Riemann is given by
It is then natural to ask under what conditions can the two expressions for the distance be equated to determine a realization of the manifold.
We investigate this question by again first considering the case \(n=2,m=3\). The tangents to the surface (manifold) are given by \(\partial _{1}y\) and \(\partial _{2}y\) and span the tangent space at the point \(y(x)=\left( y^{1}(x^{1},x^{2}), y^{2}(x^{1},x^{2}),y^{3}(x^{1},x^{2})\right) \). The unit normal vector at this point is defined (up to a sign) by the usual vector cross product
In higher dimensions, although there is no cross-product, similar ideas may be used. Indeed, on the manifold (X, g) the coordinate patch \(y=(y^{1},\ldots ,y^{m})\) generates the collection of tangents
that span the tangent space to the manifold. Define this tangent space to be \(T_{x}X\) and note that it is n-dimensional. Let \(N_{x}X\) denote the \((m-n)\)-dimensional subspace orthogonal and complementary to \(T_{x}X\), and for each x choose a fixed orthogonal basis of \(N_{x}X\) given by
where each \(N_{r},\,r=n+1,\ldots , m\), is assumed to depend smoothly on x.
3.1 The Second Derivative of an Immersion
Now, for each x, the vectors \(\left\{ \partial _{1}y(x),\ldots ,\partial _{n}y(x),N_{n+1}(x),\ldots , N_{m}(x)\right\} \) comprise a basis of \(\mathrm {I\!R\!}^{m}\), and as such are linearly independent. Therefore, for each pair of indices \(1\le i,\,j\le n\), the vector \(\partial ^{2}_{ij}y(x)\) can be written as a linear combination of these base vectors. In other words, there exist unique coefficients \(\tilde{\Gamma }^{k}_{ij},\, 1\le k\le n\) and \(H^{\mu }_{ij},\, n+1\le \mu \le m\) such that
or in components,
Since partial derivatives commute, the decomposition (4.3.2) implies
As just mentioned, the set \(\left\{ \partial _{1}y(x),\ldots ,N_{m}\right\} \) is a basis in \(\mathrm {I\!R\!}^{m}\), and therefore we have the symmetries
The notation \(\tilde{\Gamma }^{k}_{ij}\) is intentional since it will be proved in Sect. “The Coefficients \(\tilde{\Gamma }^{k}_{ij}\)” that the coefficients are precisely the Christoffel symbols \(\Gamma ^{k}_{ij}\) defined in (4.2.8). It will then follow from (4.2.7) that in terms of the covariant derivative, the relation (4.3.2) can be expressed as
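For reference, the decomposition (4.3.2) and its covariant restatement are, in the notation just introduced (with \(1\le k\le n\) and \(n+1\le \mu \le m\) summed):

```latex
\partial^{2}_{ij}\,y \;=\; \tilde{\Gamma}^{k}_{ij}\,\partial_{k} y
\;+\; H^{\mu}_{ij}\,N_{\mu},
\qquad\text{equivalently}\qquad
\nabla_{i}\,\partial_{j} y \;=\; H^{\mu}_{ij}\,N_{\mu}.
```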
3.1.1 The Coefficients \(\tilde{\Gamma }^{k}_{ij}\)
We prove that in the expressions (4.3.2) and (4.3.3) for the tangential part of the second derivatives \(\partial ^{2}_{ij}y(x)\), the coefficients \(\tilde{\Gamma }^{k}_{ij}\) are precisely the Christoffel symbols \(\Gamma ^{k}_{ij}\). On taking the scalar product of both sides of (4.3.2) with the tangent vector \(\partial _{q}y(x)\), and after noting that \(\partial _{q}y(x)\cdot N_{\mu }(x)=0\), we obtain
The last equation follows since y(x) is an immersion and therefore
Differentiation with respect to \(x_{i}\) of relation (4.3.8) yields the identity
which by (4.3.7) reduces to
This expression, together with the symmetry (4.3.4) of \(\tilde{\Gamma }^{k}_{ij}\), is now used in definition (4.2.8) to give
which establishes the assertion. Note that these derivations have employed the formula (4.2.9).
Henceforth, the superposed tilde is removed from the coefficient \(\tilde{\Gamma }^{k}_{ij}\) in the decomposition (4.3.2).
3.1.2 The Coefficients \(H^{\mu }_{ij}\)
Let us further consider the decomposition (4.3.2). The assumed orthogonality of the set \(\left\{ \partial _{1}y(x),\ldots ,N_{m}(x)\right\} \), and in particular the orthonormality (4.3.10) of the set \(\left\{ N_{n+1}(x),\ldots , N_{m}(x)\right\} \), enables us to write
The tensors \(H^{\mu }_{ij}(x),\, \mu =n+1,\ldots ,m\), as already shown in (4.3.5), are symmetric with respect to i, j and form the second fundamental form. The first fundamental form is given by the tensor g.
3.2 Decomposition of First Derivative of \(N_{\mu }(x)\)
In this section, the first derivatives of the normals \(N_{\mu }(x)\) are treated analogously to the decomposition of the first derivatives of the tangent vectors expressed by (4.3.2). We prove
Lemma 4.3.1
There exist functions (the induced connection on the normal bundle over the embedding)
such that
whose component version is given by
Proof
The normals \(N_{\mu }\) are postulated to form an orthonormal set in \(\mathrm {I\!R\!}^{m}\) so that differentiation of (4.3.10) gives
Moreover, because the tangents and normals form a full set of orthonormal vectors that span \(\mathrm {I\!R\!}^{m}\), we have the decomposition
which on scalar multiplication by \(N_{\nu }\) and use of (4.3.10) leads to
Upon substitution in (4.3.15), we conclude that
as stated in the Lemma.
On the other hand, we also have
and on recalling (4.3.10) and (4.3.11), we deduce that
where we again recall the relation \(g^{jk}g_{kp}=\delta ^{j}_{p}\). We conclude that
which after substitution in (4.3.16) and in conjunction with (4.3.17) proves the Lemma.\(\Box \)
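Collecting the two computations, the Lemma gives a Weingarten-type decomposition of the derivatives of the normals; up to the sign conventions adopted for \(H^{\mu}_{ij}\) and \(A^{\nu}_{\mu i}\), it takes the standard form:

```latex
\partial_{i} N_{\mu} \;=\; -\,g^{jk}\,H^{\mu}_{ij}\,\partial_{k} y
\;+\; A^{\nu}_{\mu i}\,N_{\nu},
\qquad A^{\nu}_{\mu i} \;=\; -\,A^{\mu}_{\nu i}.
```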
3.3 The Second Partial Derivatives of Normal Vectors
In this section we establish the well-known Codazzi and Ricci equations as a consequence of the property that second partial derivatives of the normal vectors commute. The Gauss equations are derived in the next section after further discussion of the Codazzi equations.
Now, differentiation of (4.3.13) gives
which after substitution from (4.3.2) and (4.3.13) leads to
On collecting terms in the tangential and normal directions, we rewrite the last equation as
But the second derivatives of the normal commute, so that
and from the terms in the tangent direction, we can read off the Codazzi equations
Similarly, terms in the normal direction lead to the Ricci equations
The more traditional form of the Codazzi equations is recovered by the following simple computation. On differentiation of (4.2.9), we obtain
which after multiplying by \(g^{rs}\) and appealing to (4.2.9) leads to
where (4.3.1) is used. We now conclude from (4.3.2) in conjunction with the orthogonality relations (4.3.18) and (4.2.9) that
We perform the differentiation of the first term on the left and right of the Codazzi equations (4.3.21), and then substitute from (4.3.26) after suitably changing indices to obtain
Multiplication of both sides of the last equation by \(g_{q\alpha }\) together with (4.2.9) yields
By virtue of the symmetry \(g^{rp}=g^{pr}\), and by changing dummy superscripts, the third and fourth terms on either side cancel to give
The usual form of the Codazzi equations is now obtained by the subtraction of \(\Gamma ^{p}_{ij}H^{\mu }_{\alpha p}\) from both sides of the last equation. This gives
We apply the formula (4.2.7) for the covariant derivative of a second order tensor to write
and use the symmetry of the Christoffel symbols and of the coefficients \(H^{\mu }_{ij}\) to derive the Codazzi equations in the form
Remark 4.3.1
(The hypersurface) When the manifold is a hypersurface, we have \(m=n+1\) and there is only one normal \(N_{n+1}\), since \(n+1\le \nu \le m=n+1\). But \(N_{n+1}\) is a unit vector so that
and consequently
The appropriate member of the system (4.3.13) is
which after using (4.3.29) and the orthogonal set \(\partial _{1}y,\ldots , N_{n+1}\) leads us to
and therefore \(A^{n+1}_{(n+1)i}=0\). The conclusion, which can be alternatively derived by applying the skew-symmetry \(A^{\nu }_{\mu i}=-A^{\mu }_{\nu i}\), implies that for a hypersurface the Codazzi equations simplify to
Remark 4.3.2
(Determined case for hypersurfaces) When dealing with hypersurfaces in the determined case, we have \(m=n(n+1)/2=(n+1)\) so that \(n=2\) and \(m=3\). This is the classical case of \((M^{2},g)\) embedded into \(\mathrm {I\!R\!}^{3}\).
4 The Gauss and Codazzi Equations
This section further discusses the derivation of equations obtained in the previous section.
We commute partial derivatives and then use (4.3.2) to obtain
On appealing again to (4.3.2) and also to (4.3.13), we can reduce (4.4.1) to
where the last expression has been separated into tangential and normal components. In consequence, the orthogonality relation (4.3.18) implies that each component must vanish. We have
where the antisymmetry relation (4.3.17) for the vectors \(A^{\nu }_{\mu k}\) is employed. The system (4.4.3) comprises the previously derived Codazzi equations.
From the tangential component in (4.4.2), we have
which upon noting the expression (4.2.16) for the Riemann curvature tensor becomes
from which follows the Gauss relation
on recalling the antisymmetry \(R_{\textit{qijk}}=-R_{iqjk}\), and that summation over repeated superscripts is implied.
5 Summary for \((M^{n},g)\rightarrow (\mathrm {I\!R\!}^{m},\cdot )\)
We summarise the conclusions obtained so far. Notice that \(A^{\nu }_{\mu j}\) are components of vectors for \(j=1,2,3, \ldots , n\) with the indices \(\nu ,\mu \) accounting only for the dimensions \(n+1\le \mu , \nu \le m\).
A necessary condition for the existence of an isometric embedding is that there exist functions
such that the Gauss equations hold
along with the Codazzi equations
and the Ricci equations
The Ricci system (4.5.3) can be expressed in covariant form by the addition and subtraction of the term
to obtain
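For reference, and up to the sign conventions fixed by the definitions of \(R_{ijkl}\), \(H^{\mu}_{ij}\), and \(A^{\nu}_{\mu i}\) above, the Gauss, Codazzi, and Ricci systems take the standard forms (summation over repeated Greek superscripts implied):

```latex
% Gauss:
H^{\mu}_{ik}\,H^{\mu}_{jl} \;-\; H^{\mu}_{il}\,H^{\mu}_{jk} \;=\; R_{ijkl},
% Codazzi:
\nabla_{k} H^{\mu}_{ij} \;-\; \nabla_{j} H^{\mu}_{ik}
\;=\; A^{\mu}_{\nu j}\,H^{\nu}_{ik} \;-\; A^{\mu}_{\nu k}\,H^{\nu}_{ij},
% Ricci:
\partial_{i} A^{\nu}_{\mu j} \;-\; \partial_{j} A^{\nu}_{\mu i}
\;+\; A^{\nu}_{\lambda i}\,A^{\lambda}_{\mu j}
\;-\; A^{\nu}_{\lambda j}\,A^{\lambda}_{\mu i}
\;=\; g^{kl}\left( H^{\nu}_{ik}\,H^{\mu}_{jl} \;-\; H^{\nu}_{jk}\,H^{\mu}_{il} \right).
```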
6 Reconstruction of an Isometric Embedding
In this section we state and sketch the proof of a theorem giving necessary and sufficient conditions for the existence of an isometric embedding. We have
Theorem 4.6.1
Consider a simply connected n-dimensional Riemannian manifold X with coordinates \((x^{1},\ldots ,x^{n})\) and Riemannian metric \(g\,(=\!g_{ij}).\) Let \(1\le i,j\le n,\) and suppose there exist symmetric functions \(H^{\mu }_{ij}=H^{\mu }_{ji}\) and anti-symmetric functions
such that Eqs. (4.5.1)–(4.5.3) are satisfied.
Then there exist functions \(N_{n+1},\ldots ,N_{m}:X\rightarrow \mathrm {I\!R\!}^{m}\) and a function \(y:X\rightarrow \mathrm {I\!R\!}^{m}\) for which the following formulae hold
and
Remark 4.6.1
The theorem states that the conditions on \(H^{\mu }_{ij}, A^{\nu }_{\mu i}\) together with (4.6.1)–(4.6.3) are both necessary and sufficient for the embedding \((M^{n},g)\rightarrow (\mathrm {I\!R\!}^{m},\cdot )\), \(X=M^{n}\); that is, the conditions are necessary and sufficient for the existence of vector functions y(x).
Sketch of Proof
Let \(\left\{ e_{1},\ldots ,e_{m}\right\} \) be the standard orthonormal basis of \(\mathrm {I\!R\!}^{m}\). For a fixed point \(x_{0}\in X\), define \(\left\{ \partial _{1}y(x_{0}),\ldots ,\partial _{n}y(x_{0}),N_{n+1}(x_{0}),\ldots ,N_{m}(x_{0})\right\} \) to satisfy (4.5.3)–(4.6.2). As a possible choice, we set \(N_{\mu }(x_{0})=e_{\mu }\) and \(y(x_{0})=0\), and select \(\left\{ \partial _{1}y(x_{0}),\ldots ,\partial _{n}y(x_{0})\right\} \) to be a linear combination of \(\left\{ e_{1},\ldots ,e_{n}\right\} \) such that (4.6.2) holds at \(x_{0}\).
Remark 4.6.2
When \(g_{ij}(x_{0})=\delta _{ij}\), we may choose
Let \(\phi _{p}=\partial _{p}y(x_{0})\), and observe that (4.6.4)–(4.6.5) form a total differential system for the unknown \(\mathrm {I\!R\!}^{m}\)-valued functions \(\left\{ \phi _{1},\ldots ,\phi _{n}, N_{n+1},\ldots ,N_{m}\right\} \). This may be checked by differentiating equations (4.6.4) and (4.6.5) to show that the compatibility conditions obtained by commuting partial derivatives are consequences of the Gauss equations (4.5.1), Codazzi equations (4.5.2), Ricci equations (4.5.3), and the original equations (4.6.4) and (4.6.5). In consequence, and by Lemma 4.2.2, we conclude that there exists a unique solution (the “potential” \(\phi _{p}\)) that extends the initial data specified at \(x_{0}\).
Moreover, the differentials of Eqs. (4.6.1)–(4.6.3) are consequences of (4.6.4) and (4.6.5). Therefore, they hold not only at \(x_{0}\) but also on all of X.
Finally, the symmetry of the right side of (4.6.4) implies \(\partial _{j}\phi _{i}=\partial _{i}\phi _{j}\), and consequently by Lemma 4.2.2, there exists a unique \(\mathrm {I\!R\!}^{m}\)-valued function y on X such that
The proof of Theorem 4.6.1 is complete.\(\Box \)
6.1 Examples
It is important that the number of independent equations matches the number of independent unknowns. The following examples illustrate this aspect, and also serve as an introduction to a counting process developed by Blum.
6.1.1 Example 1. \((M^{2},g)\rightarrow (\mathrm {I\!R\!}^{3},\cdot )\)
In this example, we have \(n=2\) and \(m=3\) so that \(1\le i,j,k\le 2\) and \(\mu =\nu =3\). The second fundamental form therefore can be represented as the matrix
Furthermore, since \(n=2\), we may use (4.2.24) to write
where K is the Gauss curvature. Consequently, the Gauss equations (4.4.4) reduce to the single equation
Upon slight rearrangement, the Codazzi equations (4.5.2) become
which on specialisation to the example under consideration reduce to
Consequently, there are three equations (4.6.7), (4.6.9) and (4.6.10) in the three unknowns \(H^{3}_{11},H^{3}_{12},H^{3}_{22}\).
On employing the Gauss equations (4.6.7) to eliminate one of the unknowns, we obtain a quasi-linear system. Accordingly, the Gauss relation becomes a “constitutive relation”.
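The role of the Gauss equation as a “constitutive relation” can be illustrated numerically for a graph surface, where \(\det H = K \det g\) recovers the classical formula \(K=(f_{,11}f_{,22}-f_{,12}^{2})/(1+f_{,1}^{2}+f_{,2}^{2})^{2}\). A minimal sketch (the paraboloid below is our illustrative choice, not taken from the text):

```python
import math

# Gauss equation for n = 2, m = 3: det H = K det g. For a graph surface
# y = (x1, x2, f), with w = 1 + f_1^2 + f_2^2, one has det g = w and
# H_ij = f_ij / sqrt(w), giving K = (f_11 f_22 - f_12^2) / w^2.
def gauss_curvature(f1, f2, f11, f12, f22):
    w = 1.0 + f1 * f1 + f2 * f2
    det_g = (1.0 + f1 * f1) * (1.0 + f2 * f2) - (f1 * f2) ** 2   # equals w
    det_H = (f11 * f22 - f12 * f12) / w                           # H_ij = f_ij / sqrt(w)
    return det_H / det_g

# Illustrative check: at the vertex of the paraboloid f = x1^2 + x2^2
# both principal curvatures equal 2, so K = 4 there.
K = gauss_curvature(f1=0.0, f2=0.0, f11=2.0, f12=0.0, f22=2.0)
assert math.isclose(K, 4.0)
```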
6.1.2 Example 2. \((M^{3},g)\rightarrow (\mathrm {I\!R\!}^{6},\cdot )\)
In this example, we have \(1\le i,j\le 3\) and \(4\le \mu ,\nu \le 6\), and the Gauss equations (4.4.4) reduce to
where the six non-zero components of the Riemann curvature tensor are
We are left, therefore, with six non-trivial Gauss equations, the remainder being identically satisfied.
The second fundamental form may be expressed as the matrix array of 6 independent entries for each \(\mu \):
from which it can be seen that the Codazzi equations (4.5.2) are just a statement about cross derivatives along rows (or columns, since \(H^{\mu }_{ij}\) is symmetric). At first sight, there are 3 equations across each row, but the couplings
after subtraction yield
Thus instead of 9 couplings for each \(\mu \), there are only 8. In consequence, since \(\mu =4,5,6\), there are 24 Codazzi equations. In summary, we have
1. Equations

(a) 6 Gauss equations.

(b) 24 Codazzi equations.

(c) 9 Ricci equations.

(d) Thus, there are a total of 39 equations.

2. Unknowns

(a) \(6\times 3=18\) independent components \(H^{\mu }_{ij}\) of the second fundamental form.

(b) \(3\times 3=9\) coefficients \(A^{\nu }_{\mu k}=-A^{\mu }_{\nu k}\).

(c) Thus, there are a total of 27 unknowns.
We conclude that there are more equations than unknowns despite the embedding problem \((M^{3},g)\rightarrow (\mathrm {I\!R\!}^{6},\cdot )\) being determined (\(m=n(n+1)/2; n=3, m=6\)), which implies that not all equations are independent in the Gauss, Codazzi, Ricci system.
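The elementary count carried out above is easily mechanised; the sketch below hardcodes the 8 independent row couplings per normal found in this example:

```python
# Equation/unknown count for (M^3, g) -> R^6, as tallied in the example above.
n, m = 3, 6
p = m - n                                        # codimension; mu = 4, 5, 6
gauss = n * n * (n * n - 1) // 12                # independent components of R_ijkl
codazzi = 8 * p                                  # 8 independent couplings per normal
ricci = (p * (p - 1) // 2) * (n * (n - 1) // 2)  # pairs (mu, nu) times pairs (i, j)
unknowns_H = p * n * (n + 1) // 2                # symmetric H^mu_ij: 6 per normal
unknowns_A = n * p * (p - 1) // 2                # antisymmetric A^nu_mu k
assert (gauss, codazzi, ricci) == (6, 24, 9)     # 39 equations in total
assert unknowns_H + unknowns_A == 27             # 18 + 9 unknowns
```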
6.2 Blum’s Counting Process
The rather painful counting process illustrated in the previous examples is examined in a series of papers published in the 1940s and 1950s by R. Blum [Blu55, Blu46, Blu47] and further described in the excellent survey by Goenner [Goe77].
The description in [Goe77, p. 143] of Blum’s counting result for the embedding \((M^{n},g)\rightarrow (\mathrm {I\!R\!}^{m},\cdot )\) may be paraphrased as follows.
Theorem 4.6.2
When the Gauss equations (4.4.4) are satisfied, and Goenner’s matrices M and N, defined below, are of maximal rank, then (i) for \(0\le p=m-n\le n(n-2)/8\) all Codazzi and Ricci equations are consequences of the Gauss equations; (ii) for \(n(n-2)/8<p=m-n\le n(n-1)/2\) a system of
Codazzi equations is independent. The remaining Codazzi equations and all the Ricci equations are consequences of the independent Codazzi system and of the Gauss equations.
Goenner’s matrices M and N are given by
Of course, even these definitions are not particularly enlightening, and Goenner has given results that are easier to state but which we will not repeat here. Since the above notation may be confusing, we note that M, N are the coefficient matrices in systems (4.2.5) and (4.2.6) of Goenner, i.e.,
The matrix M has \(\frac{1}{2}\left( {\begin{array}{c}n+1\\ 2\end{array}}\right) \left( {\begin{array}{c}n\\ 3\end{array}}\right) \) rows and \(\frac{1}{3}pn(n^{2}-1)\) columns, the matrix N has \(\frac{p}{2}\left( {\begin{array}{c}n+1\\ 2\end{array}}\right) \left( {\begin{array}{c}n-1\\ 2\end{array}}\right) \) rows and \(\left( {\begin{array}{c}p\\ 2\end{array}}\right) \left( {\begin{array}{c}n\\ 2\end{array}}\right) \) columns. Notice that
and
A useful example is given by the case \(n=3, m=6, p=3\). In this case, the symmetries in \(N_{abcd}^{\mu ij}\) imply that the only non-zero terms are of the form \(N_{a123}^{\mu ij}\) and the equations
become
where
But row operations reduce the coefficient matrix N to obtain
and Blum’s condition on N is just that \(H^{4},H^{5},H^{6}\) each have full rank 3. The matrix N is \(9\times 9\) as predicted by Blum’s theorem and the matrix M is \(3\times 24\). We can write the system
in the form
In this representation, the three repetitions of \(\mathcal {C} _{123}^{\mu }\) are not accounted for, and hence the vector \(\mathcal {C}\) has 27 entries instead of the 24 predicted by Blum’s theorem. If any one of the \(H^{\mu }\) has full rank 3, then M will have full rank 3.
The example \((M^{3},g)\rightarrow (\mathrm {I\!R\!}^{6},\cdot )\) in Sect. 4.6.1, for which \(m=6, n=3\) and \(p=3\), satisfies the condition in category (ii) of the above theorem, which gives
and there are
independent Codazzi equations. All the Ricci equations are implied by these independent Codazzi equations and the Gauss equations. Thus, Blum’s count gives 21 independent Codazzi equations, whereas the elementary count conducted in the example produced 24 Codazzi equations.
The discrepancy is explained by observing that the elementary counting omitted the three equations of Bianchi’s second identity. Substituting the Gauss equations into these three equations yields three more relations between derivatives of the second fundamental forms, and consequently there are only 21, not 24, independent Codazzi equations.
Combined with the 6 Gauss equations there are 27 equations for the 27 unknowns consisting, as already shown, of 18 entries of the second fundamental forms and 9 coefficients \(A^{\nu }_{\mu k}\). Nevertheless, it is unclear how even local existence can be proved for this system.
In the determined system, we have \(m=n(n+1)/2\), and category (ii) of Blum’s theorem again applies with \(p=n(n-1)/2\) so that there are \(n^{2}(n^{2}-1)(3n-2)/24\) independent Codazzi equations. Under the maximal rank condition, the Codazzi and Gauss equations imply the Ricci equations.
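As a quick arithmetic check on Blum’s count in the determined case (the helper names are illustrative, not from the text):

```python
def codazzi_count(n):
    # Blum's count of independent Codazzi equations: n^2 (n^2 - 1)(3n - 2) / 24.
    return n**2 * (n**2 - 1) * (3 * n - 2) // 24

def determined_dims(n):
    # Determined system: target dimension m = n(n+1)/2, codimension p = m - n.
    m = n * (n + 1) // 2
    p = n * (n - 1) // 2
    return m, p

print(determined_dims(3), codazzi_count(3))  # (6, 3) 21
```

For \(n=3\) this reproduces \(m=6,\,p=3\) and the 21 independent Codazzi equations obtained above.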
6.2.1 Sketch of the Proof of Blum’s Theorem When \(n=3,m=6\)
Throughout this section, unless otherwise stated, the summation convention is suspended for repeated indices \(\mu \).
Step 1
Particular forms of the covariant Codazzi equations (4.3.28) are
which by subtraction yield the equation
We conclude that the Codazzi equations (4.6.16) are implied by the pair (4.6.14) and (4.6.15) so that for \(n=3,\,m=6\) the number of independent Codazzi equations is reduced by 3.
Step 2
Next, we rewrite the Codazzi equations (4.3.28) as
where \(\epsilon _{ijk}\) is the standard Einstein alternating tensor given by
Let \(\text{ cof } A\) be the cofactor of the entry A in the matrix [A]. Then we have
and consequently
where the last expression is obtained by interchange of suffixes \(j\leftrightarrow k,\,m\leftrightarrow n.\)
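The three-dimensional cofactor identity underlying this step, \(\text{cof}(A)_{il}=\tfrac{1}{2}\epsilon _{ijk}\epsilon _{lmn}A_{jm}A_{kn}\), can be verified numerically. The following sketch (our own illustration, not from the text) checks it against the classical relation \(\text{cof}(A)=\det (A)\,A^{-T}\):

```python
import numpy as np

# Levi-Civita (alternating) symbol in three dimensions.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations
    eps[i, k, j] = -1.0  # odd permutations

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

# Cofactor matrix via the alternating tensor:
# cof(A)_{il} = (1/2) eps_{ijk} eps_{lmn} A_{jm} A_{kn}.
cof = 0.5 * np.einsum('ijk,lmn,jm,kn->il', eps, eps, A, A)

# Cross-check against cof(A) = det(A) * inv(A)^T.
assert np.allclose(cof, np.linalg.det(A) * np.linalg.inv(A).T)
```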
After a further interchange of suffixes, the Codazzi equations (4.6.17) may be written as
and substituting these relations in (4.6.19) yields
where there is no sum on \(\mu \). The interchange \(m\leftrightarrow n,\, j\leftrightarrow k\) in the last expression then gives
Now sum over \(\mu \) to obtain
Next, define the second order Ricci tensor R to be
which can be concisely written in matrix form as
It then follows from the Gauss equations (4.6.11) that
and on substituting in (4.6.24) to eliminate the cofactor term, we obtain
It is easy to infer from the second Bianchi identity (4.2.23) that the first term on the left in the last equation vanishes, i.e.,
The second term on the left of (4.6.30) is zero as \(A^{\mu }_{\nu k}\) is skew-symmetric in \(\mu ,\,\nu \) (see (4.3.12)). Consequently, the left side of (4.6.30) is identically zero. The combination, therefore, of the Codazzi and Gauss equations leads to three trivial relations which reduce the number of independent Codazzi equations by an additional 3.
Step 3
It is convenient to introduce extra notation for the covariant Codazzi (4.6.17) and Ricci (4.5.4) equations as follows:
Covariant differentiation of (4.6.31) yields
The Codazzi equations (4.6.31) enable the last term to be expressed as
and (4.6.33) then is reduced to
The interchange of indices \(i\rightarrow j\rightarrow k\rightarrow i\) in the last term leads to the further reduction
But from (4.2.26), we may derive the commutation relation
which may be expressed as
where Gauss’ equations (4.5.1) are employed in the derivation of the last line of the previous equation.
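For orientation, the commutation relation in question is an instance of the standard Ricci identity for a covariant 2-tensor; in one common sign convention (the convention of (4.2.26) may differ in sign or index ordering) it reads

```latex
(\nabla_{k}\nabla_{j}-\nabla_{j}\nabla_{k})H^{\mu}_{i\alpha}
  = -R^{l}{}_{ikj}\,H^{\mu}_{l\alpha} - R^{l}{}_{\alpha kj}\,H^{\mu}_{il},
```

so that contracting with \(\epsilon _{ijk}\) replaces the antisymmetrized second covariant derivatives by curvature terms, which the Gauss equations (4.5.1) then express through the second fundamental forms.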
Finally, on appealing to the formula for the commutativity of \(\epsilon _{ijk}\nabla _{k}\nabla _{j}H^{\mu }_{i\alpha }\), and the Gauss equations, we find from (4.6.34) that
The maximal rank condition on \(H^{\nu }_{lj}\) implies (4.6.35) has a unique solution
and the 9 Ricci equations are satisfied.
7 Symmetrization of the Codazzi Equations
The required symmetrization is achieved by using the Codazzi equations to derive a certain matrix equation.
On noting the skew-symmetric relation (4.3.12), we may rewrite the Codazzi equations (4.6.17) in the slightly different form
a subset of which is
The Codazzi equations (4.7.1) may now be used to eliminate the covariant derivative on the right of the identity (4.6.19) to obtain
Now let
so that by standard algebra of determinants, we have
which together with the expression (4.6.18) enables the identity (4.6.23) to be written as
Terms on the left may be simplified on further appeal to (4.7.5) to give
and consequently,
The next part of the construction of a matrix equation involves the multiplication of (4.7.2) by
to obtain
We combine the systems (4.7.6) and (4.7.7) into the matrix array of equations given by
We examine in detail the terms in this matrix equation and for this purpose introduce further notation. For example, the block matrices in the matrix coefficient of \(\nabla _{2}\) are given by
so that the \(12\times 12\) composite matrix \(B^{\mu }_{2}\) defined as
is symmetric, and the second term on the left in (4.7.8) involving \(\nabla _{2}\) becomes
where
The matrices \(B^{\mu }_{l}\) appearing in the coefficient of \(\nabla _{l}\) are defined in a manner similar to (4.7.11).
Every coefficient matrix in (4.7.8) is symmetric, including that for \(l=1\), although a separate argument is required to check the first coefficient matrix in the first term on the left of (4.7.8). This matrix, denoted by \(B^{\mu }_{0}\), is written as
where
Consequently, in terms of the vector \(U^{\mu }\) given by (4.7.12), the Codazzi system (4.7.8) may be expressed as
where \(Q^{\mu }\), the third term on the left of (4.7.8), is given explicitly by the 12-dimensional vector
8 Symmetrization of the Linearized Codazzi Equations
8.1 Remarks on Linearization
Let \(\epsilon >0\) be a small positive parameter, and suppose that a small perturbation in the variable \(y_{i}\) is given by
with corresponding small perturbations in other quantities given, for example, by
In these expansions, the superposed dot is intended to suggest differentiation with respect to \(\epsilon \).
8.2 Linearization of the Codazzi Equations
We now linearize (4.7.2) and (4.7.1) in the sense that after substitution from (4.8.2)–(4.8.4) all terms of order higher than the first in \(\epsilon \) are neglected. Moreover, in the linearization it is convenient to remove the overbar without risk of confusion. Then, in view of the definition of the covariant derivative (see (4.2.5)–(4.2.7)), linearization of (4.7.2) and (4.7.1) respectively yields
and
which by interchange of indices becomes
On multiplying (4.8.7) by \(\epsilon _{jim}H^{\mu }_{kj}\) and suspending summation over the repeated index \(\mu \), we obtain
which on recalling (4.7.5), we rewrite as
Next, consider the particular equation (4.8.5), which after multiplication by
becomes
The linearized Codazzi system (4.8.9) and (4.8.10) may be concisely expressed by introducing the definitions
with which the system may be written as the single matrix equation
where \(B^{\mu }_{l}\) is defined analogously to (4.7.11), and \(\dot{U}^{\mu }\) is the linearization of the vector (4.7.12).
9 The Ricci Equations
We next discuss the Ricci equations (4.5.3) and, without loss of generality (see Note 1), set
and (4.5.2) simplify to
We note that \(A^{\nu }_{\mu 2},\,A^{\nu }_{\mu 3}\) are therefore completely determined by their data on a plane \(x_{1}=\text{ constant }=-L\) and on the set \(H^{\nu }_{jk}\). Accordingly, we may introduce the substitutions
in the expression (4.7.16) for the matrix Q to eliminate explicit dependence on \(A^{\mu }_{\nu l}.\) Observe that dependence on \(A^{\mu }_{\nu l}\) is reduced to dependence on data provided on \(x_{1}=-L\). This data, of course, must be consistent with the additional Ricci equations.
10 The Full Nonlinear System
We emphasize analogies with continuum mechanics by restating the full nonlinear system in terms employed by that theory.
The balance laws are given by the quasi-linear Codazzi equations (4.7.15):
where from (4.7.12) we have \(U^{\mu }\in \mathrm {I\!R\!}^{12}\) for each \(\mu =4,5,6\), and \(Q^{\mu }\) is given by (4.7.16).
The constitutive relations are provided by the Gauss equations (4.5.1)
together with constitutive relations for \(A^{\mu }_{\nu l}\) given by (4.9.1), (4.9.4), and (4.9.5).
According to Blum’s theorem [Blu55], when the elements \(H^{\mu }_{jk}\) form a full rank matrix, there are 27 independent equations in the 27 unknowns \(H^{\mu }_{jk}\) and \(A^{\mu }_{\nu l}\), since the Ricci equations follow from the Gauss and Codazzi equations. Observe, however, that relations (4.9.4) and (4.9.5) do not completely eliminate the terms \(A^{\mu }_{\nu l}\) in favour of the terms \(H^{\mu }_{ij}\), because initial data on \(x_{1}=-L\) still enter into the values of \(A^{\mu }_{\nu l}\).
11 The Linearized Ricci Equations
In the notation of Sect. 4.8.1, the linearized Ricci equations (4.9.1), (4.9.4), and (4.9.5) are given by
When \(\dot{A}^{\nu }_{\mu 2}(-L,x_{2},x_{3})\) and \(\dot{A}^{\nu }_{\mu 3}(-L,x_{2},x_{3})\) vanish on the boundary of the domain, their contribution to (4.11.2) and (4.11.3) is zero. Furthermore, the integral terms in (4.11.1) and (4.11.3) are bounded by \(K\text{ vol } (\Omega )\), where
and in consequence, we obtain
Proposition 4.11.1
The quantities \(\dot{A}^{\nu }_{\mu 2}\) and \(\dot{A}^{\nu }_{\mu 3}\) satisfy the bounds
Proof of (4.11.4)
Typical terms in the relation (4.11.2) may be expressed as
where there is no sum on p, q.
The Cauchy-Schwarz inequality applied to the first expression leads to the bounds
and consequently, on noting that the term on the right is independent of \(x_{1}\), we have
or
A similar argument gives
where the expression on the right is again independent of \(x_{1}\).
Thus we conclude that
from which follows
which leads to the final bound
12 The Linearized Gauss Equations
In view of the notation adopted in Sect. 4.8.1, the linearized Gauss equations become
The system (4.12.1) consists of 6 equations in the 18 components \(\dot{H}^{\mu }_{ij}\). We say \(H^{\mu }_{ij}\) is non-degenerate in the neighbourhood of \(x=0\) when 6 of the components of \(\dot{H}^{\mu }_{ij}\) can be solved in terms of the remaining 12 components and \(\dot{R}_{ijkl}\). A sufficient condition for non-degeneracy is provided by [BGY83, Theorem F] which establishes non-degeneracy when at least one component of the Riemann curvature tensor \(R_{ijkl}\) is non-zero.
Accordingly, let us assume that the set \(H^{\nu }_{ij}\) is non-degenerate in a neighbourhood of \(x=0\). This implies that the vector
where \(U^{\mu }\), defined in (4.7.12), can be written as
In this relation, \(\dot{H}\) denotes the distinguished 12 components of the set \(\dot{H}_{ij}^{\mu }\), and \(\dot{R}\) denotes the 6 non-trivial elements corresponding to the perturbed Riemann curvature tensor. It follows that \(\dot{U}\in \mathrm {I\!R\!}^{36}\), \(\dot{H}\in \mathrm {I\!R\!}^{12}\), \(\dot{R}\in \mathrm {I\!R\!}^{6}\), and therefore in (4.12.3), C represents a \(36\times 12\) matrix, while D represents a \(36\times 6\) matrix.
13 The Closed Symmetric System for the Linearized Problem and Quasi-linear Problem
With reference to the symmetrized and linearized Codazzi equations (4.8.13), let us set
and use this notation to write (4.8.13) as
Observe that since \(\dot{Q}\) depends linearly on the sets \(\dot{H}^{\mu }_{ij}\) and \(\dot{A}^{\mu }_{\nu m}\) as given in (4.8.11), we may introduce matrices \(E,\,F\) to represent the dependence by
where \(\dot{U}\in \mathrm {I\!R\!}^{36},\,\dot{A}\in \mathrm {I\!R\!}^{6}\), E is a \(36\times 36\) matrix, and F is a \(36\times 6\) matrix.
Upon substitution of (4.12.3) in (4.13.3) we obtain
where \(G=EC\) is a \(36\times 12\) matrix, and \(J=ED\) is a \(36\times 36\) matrix. In consequence, the system (4.13.2) has the form
which after rearrangement becomes
We multiply (4.13.6) on left by the \(12\times 36\) matrix \(C^{T}\) to obtain the equivalent but compact form
where
The linearized Ricci equations (4.11.2) and (4.11.3) with
next give
Insertion of (4.13.8) and (4.13.9) into (4.13.7) yields a symmetric system of 12 equations in the 12 unknowns \(\dot{H}\) which are weakly non-local due to (4.13.9). The relations (4.11.4) and (4.11.5), however, indicate that the non-locality is very weak.
Remark 4.13.1
(Non-linear problem) The derivation just described is for the linearized system, but examination of the individual steps in the argument shows that for the non-linear problem the same procedure also yields a quasi-linear system of 12 equations.
14 The Weak Form of the Closed System
The purpose of the previous sections has been to formulate the theory in a manner suitable for proofs of existence and uniqueness in the embedding problem, which are developed in this section.
Define the linear operator \(\mathcal {L}\) in terms of the general variable \(\widehat{H}\) by
where \(\dot{A}\) is defined by (4.13.8) and (4.13.9).
We wish to consider the weak form of equations associated with the operator \(\mathcal {L}\). For this purpose, let \((\cdot ,\cdot )\) denote the inner product on the space \(L^{2}(\Omega ,\mathrm {I\!R\!}^{12})\) and let the function \(V\in C^{\infty }_{0}(\Omega ,\mathrm {I\!R\!}^{12})\). The weak form of the equation
is then given by
where \(\mathcal {L}^{*}\) is the adjoint operator to \(\mathcal {L}\). We conclude from (4.14.2) that \((\mathcal {L}^{*}V,\widehat{H})\) defines a bilinear form on \(H^{1}_{0}(\Omega ,\mathrm {I\!R\!}^{12})\).
The proofs of existence and uniqueness rely upon the Lax-Milgram theorem (see, for example, [Yos65]) stated here for convenience.
Theorem 4.14.1
(Lax-Milgram Theorem). Let X be a Hilbert space and \(\mathcal {C}(\chi ,\psi )\) a (possibly complex) bilinear functional defined on the product space \(X\times X\). Let \(\Vert \cdot \Vert _{X}\) and \((\cdot ,\cdot )_{X}\) denote the norm and inner product on X. Suppose that
for positive constants \(\delta ,\gamma \). Then there exists a uniquely determined bounded linear operator T with bounded inverse \(T^{-1}\) such that whenever \(\chi ,\,\psi \in X\) there holds
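In finite dimensions the Lax-Milgram setup reduces to linear algebra: a bounded, coercive bilinear form \(\mathcal {C}(\chi ,\psi )=\chi ^{T}A\psi \) with \(\mathcal {C}(\chi ,\chi )\ge \delta \Vert \chi \Vert ^{2}\) has an invertible representing matrix, and the solution of \(A\chi =\Lambda \) obeys \(\Vert \chi \Vert \le \Vert \Lambda \Vert /\delta \). A toy numerical illustration (entirely ours, with \(n=12\) chosen to echo the 12 unknowns of the closed system):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12
B = rng.standard_normal((n, n))
A = B @ B.T + np.eye(n)  # symmetric and coercive: x^T A x >= |x|^2, so delta = 1
delta = 1.0

# Coercivity gives unique solvability and the a priori bound |x| <= |b| / delta.
b = rng.standard_normal(n)
x = np.linalg.solve(A, b)

assert np.allclose(A @ x, b)
assert np.linalg.norm(x) <= np.linalg.norm(b) / delta + 1e-12
```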
To apply the Lax-Milgram theorem to the weak equation (4.14.2), we set \(X=H^{1}_{0}(\Omega ,\mathrm {I\!R\!}^{12})\), and let \(\mathcal {C}(\chi ,\psi )=(\mathcal {L}^{*}\chi ,\psi )\). Note, however, that Condition (i) holds but not Condition (ii). To overcome this difficulty, we introduce additional terms to (4.14.2) that regularize the equation. Let \(\epsilon >0\) be an arbitrary positive constant. Then the regularized problem is given by
in which we employ the notation
where we recall that \((\cdot ,\cdot )\) denotes the Euclidean inner product in \(\mathrm {I\!R\!}^{12}\). We now let the bilinear form \(\mathcal {C}_{\epsilon }\) be defined by the expression on the left of (4.14.3). Upon assuming the weaker coerciveness estimate
for some positive constant \(\delta _{1}\), we have
The Lax-Milgram theorem clearly applies to the regularized problem and shows that a solution \(\widehat{H}_{\epsilon }=T_{\epsilon }\Lambda \) exists to (4.14.3) and satisfies
or alternatively
for all \(V\in H^{1}_{0}(\Omega ,\mathrm {I\!R\!}^{12})\). Accordingly, on setting \(V=\widehat{H}_{\epsilon }\) in (4.14.5), we obtain
The first term on the left of (4.14.7) may be bounded from below using assumption (4.14.4), while terms on the right may be bounded from above using the Cauchy-Schwarz inequality. These operations lead to the bounds
The arithmetic-geometric mean inequality in the form
applied to terms on the right then yields
which after rearrangement gives
We conclude that \(\widehat{H}_{\epsilon }\) is bounded independently of \(\epsilon \) when \(\Lambda \in H^{1}_{0}(\Omega )\), and consequently \(\widehat{H}_{\epsilon }\) has a weakly convergent subsequence (also denoted by \(\widehat{H}_{\epsilon }\)) so that
We now pass to the limit as \(\epsilon \rightarrow 0\) in (4.14.6) and, for all \(V\in C^{\infty }_{0}(\Omega )\), obtain the relation
which proves the existence of a weak solution \(\widehat{H}\). Its uniqueness follows from the coercivity assumption (4.14.4).
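The form of the arithmetic-geometric mean inequality used above is presumably the weighted (Young) version

```latex
ab \le \frac{\eta}{2}\,a^{2} + \frac{1}{2\eta}\,b^{2}, \qquad \eta > 0,
```

which splits each product on the right of (4.14.7) so that the quadratic terms can be absorbed by the coercive left-hand side while leaving a bound independent of \(\epsilon \).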
Let us summarize the result in the following theorem.
Theorem 4.14.2
Suppose the operator \(\mathcal {L}\) defined by (4.14.1) satisfies the coercivity condition
for some \(\delta _{1} >0\). Then the weak form of the linearized embedding problem (4.14.2) has a unique solution for all \(\Lambda \in H^{1}_{0}(\Omega )\).
The next step is to apply Theorem 4.6.1 to the system (4.14.2), (4.14.6) and (4.14.7). Assume first that the (undotted) embedding is perturbed in a small neighbourhood of the point \(x=0\), chosen as the origin of a system of normal coordinates in which the Christoffel symbols \(\Gamma ^{q}_{ij}\) vanish. When the small neighbourhood is taken to be the box \(-L\le x_{i}\le L,\,i=1,2,3\), the quantity \(\dot{A}\), defined by (4.13.8) and (4.13.9) and satisfying the bounds (4.11.4) and (4.11.5), becomes negligible in the box and does not enter into the coercivity computations. Accordingly, we have
Theorem 4.14.3
When the quadratic form
is positive-definite (or negative-definite) at \(x=0\) there exists a unique weak solution to the linearized isometric embedding equations (4.14.2), (4.13.8), and (4.13.9).
The parameters entering into the \(12\times 12\) symmetric coefficient matrix
are \(H^{\mu }_{ij},\,\partial _{1}\mathcal {A}_{0},\,\partial _{1}\mathcal {A}_{1},\,\partial _{2}\mathcal {A}_{2},\,\partial _{3}\mathcal {A}_{3},\,A^{\nu }_{\mu 2},\,A^{\nu }_{\mu 3}\) all evaluated at \(x=0\). In consequence, the classical chain rule may be applied to \(\mathcal {A}_{0},\,\mathcal {A}_{1},\,\mathcal {A}_{2},\,\mathcal {A}_{3}\) to show that the parameters in the coefficient matrix reduce to \(H^{\mu }_{ij},\,\partial _{l}H^{\mu }_{il},\,A^{\nu }_{\mu 2},\,A^{\nu }_{\mu 3}\) evaluated at \(x=0\). We therefore conclude that
- (i): The Gauss relations provide 12 independent \(H^{\mu }_{ij}\).
- (ii): The differentiated Gauss relations provide 15 independent \(\partial _{l}H^{\mu }_{ij}\). (See, for example, Poole [Poo10].)
- (iii): There are 6 independent \(A^{\nu }_{\mu 2},\,A^{\nu }_{\mu 3}\).
Hence there are \(12+15+6=33\) free parameters entering into the \(12\times 12\) matrix (4.14.11) resulting in considerable simplification.
Notes
- 1. Deane Yang pointed out this equality to me and called it a “gauge condition”. An analogy with continuum mechanics might be setting the pressure equal to zero on the surface of a water wave.
References
Blum R (1946) Über die Bedingungsgleichungen einer Riemann’schen Mannigfaltigkeit, die in einer euklidischen Mannigfaltigkeit eingebettet ist (in German). Bull Math Soc Roum 47:144–201
Blum R (1947) Sur les tenseurs dérivés de Gauss et Codazzi. C R Acad Sci Paris 244:708–709
Blum R (1955) Subspaces of Riemannian spaces. Can J Math 7:445–452
Bryant RL, Griffiths PA, Yang D (1983) Characteristics and existence of isometric embeddings. Duke Math J 50:893–994
Friedrichs KO (1956) Symmetric positive linear differential equations. Commun Pure Appl Math 11:333–418
Goenner HF (1977) On the interdependency of the Gauss-Codazzi-Ricci equations of local isometric embedding. Gen Relativ Gravit 8:139–145
Goodman J, Yang D (1988) Local solvability of nonlinear partial differential equations of real principal type. Unpublished www.deaneyang.com/paper,goodman-yang.pdf
Han Q, Khuri M (2012) The linearized system for isometric embeddings and its characteristic variety. Adv Math 23:263–293
Han Q, Hong J-X (2006) Isometric embedding of Riemannian manifolds in Euclidean spaces. American Mathematical Society, Providence
Nakamura G, Maeda Y (1986) Local isometric embedding problem of Riemannian \(3\)-manifolds into \({\rm {I}}\!{\rm {R}}^{6}\). Proc Jpn Acad Ser A Math Sci 62:257–259
Nakamura G, Maeda Y (1989) Local smooth isometric embeddings of low-dimensional Riemannian manifolds into Euclidean spaces. Trans Am Math Soc 313:1–51
Poole TE (2010) The local isometric embedding problem for \(3\)-dimensional Riemannian manifolds with cleanly vanishing curvature. Commun Partial Differ Equ 35:1802–1826
Nash JF Jr (1956) The embedding problem for Riemannian manifolds. Ann Math 63:20–63
Yau S-T (2006) Perspectives on geometric analysis [arXiv:math/0602363, vol 2, 16 Feb. 2006]: also Proceedings of International Conference on Complex Geometry and Related Fields, AMS/IP. Stud Adv Math., vol 39, pp 289–378. American Mathematical Society, Providence (2007)
Yosida K (1965) Functional analysis. Springer, Berlin
Acknowledgments
I would like to thank my co-investigators on the research project on Higher Dimensional Isometric Embedding: G.-Q. Chen (Oxford), J. Clelland (Colorado), D. Wang (Pittsburgh), and D. Yang (Poly-NYU) for their many comments and suggestions over several years. Our project commenced in Palo Alto, California, at the American Institute for Mathematics (AIM). Financial support from Estelle Basor and Brian Conrey via the AIM SQuaRE’s program has been especially helpful. In addition, my research has been supported by the Simons Foundation Collaborative Research Grant 252531 and the Korean Mathematics Research Station at KAIST (Daejeon, S. Korea). In fact, these notes were the basis for a lecture series at KAIST given at the kind invitation of Professor Y.-J. Kim. Additional thanks are due to Keble College (Oxford), where I was a visitor in April 2013 and had further opportunity to complete these notes. Very special thanks are extended to Irene Spencer and Mary McAuley of the Department of Mathematics and Statistics, University of Strathclyde (Glasgow) for the wonderful success in transforming my moderately intelligible handwritten draft into the present “tex” version. Finally, I would like to express my gratitude to the organizers of the ICMS (Edinburgh) Workshop “Differential Geometry and Continuum Mechanics”: Jack Carr, G.-Q. Chen, M. Grinfeld, R.J. Knops, and J. Reese. Indeed, it was Michael Grinfeld who kindly arranged for Irene and Mary to type and organize these notes.
© 2015 Springer International Publishing Switzerland
Slemrod, M. (2015). Lectures on the Isometric Embedding Problem \((M^{n},g)\rightarrow \mathrm {I\!R\!}^{m},\,m=\frac{n}{2}(n+1)\). In: Chen, G.-Q., Grinfeld, M., Knops, R. (eds) Differential Geometry and Continuum Mechanics. Springer Proceedings in Mathematics & Statistics, vol 137. Springer, Cham. https://doi.org/10.1007/978-3-319-18573-6_4