
1 Outline of the Chapter

Assume that S is a smooth surface, in the sense explained in Sect. 13.2, that there is a function u(P), harmonic in Ω (the exterior of S) and regular at infinity, and that we have performed a very large number of measurements that can be expressed as functionals of u(P) at every point P ∈ S

$$\begin{array}{rcl} F[u(P),P] = f(P),& &\end{array}$$
(15.1)

then one can attempt to determine u(P) by solving the BVP

$$\begin{array}{rcl} \left \{\begin{array}{ll} \Delta u = 0 &\text{ in }\Omega \\ F[u] = f &\text{ on }S \\ u \rightarrow 0 &\text{ at }\infty.\end{array} \right.& &\end{array}$$
(15.2)

In our context u(P) is in fact the anomalous gravity potential T(P), which is actually related to the full gravity potential W(P) through

$$\begin{array}{rcl} T(P) = W(P) - U(P)\ ;& &\end{array}$$
(15.3)

U(P), the normal potential, is a known function of the point P in space, where P is identified by its coordinates in a Cartesian frame centered on the reference ellipsoid. It is important to remember that (see Part I, (1.147))

$$\begin{array}{rcl} O(T)\cong1{0}^{-5}O(U),& &\end{array}$$
(15.4)

so that T can legitimately be considered as a perturbation of U.

In this chapter S is either the earth topographic surface or the telluroid (see Part I, Sect. 2.3), suitably smoothed, taking into account that our data f(P) are not really given everywhere on the surface and that in any case what we are aiming at is only an approximation of the solution of (15.2) by means of a finite sum of spherical harmonics, namely a global model. In this context S can clearly be averaged over squares of some kilometers without increasing the approximation error, up to a maximum degree of a few thousand. This is particularly true when the influence of the uppermost thin layer of topographic masses is (approximately) accounted for by the residual terrain correction (cf. Part I, Chap. 4). Another warning is that, as we know, true global models are built by using data other than those referring to the boundary; in fact they are rather derived by space geodetic techniques, like ground satellite tracking, satellite-to-satellite tracking, satellite gradiometry etc. Here these data will be considered as known, since we concentrate only on the BVP part of this approximation procedure, and we would like to know whether the procedure is stable, i.e. whether, once we fix a norm for the data {f(P)} and another norm for the solution {T(P)}, so as to be able to decide what is “small” and what is “large”, a small perturbation of the data corresponds to a small perturbation of the solution.

In this sense the theory that we shall outline in this chapter is a basis for the construction of the so-called high resolution earth gravity models, represented by a set of harmonic coefficients up to a maximum degree of some thousands. This can be done with or without the help of the knowledge of lower degree harmonics depending on what data we consider as boundary values.

Typical in this sense would be either the free air gravity anomaly or the gravity disturbance. The first, in a linearized version, reads

$$\begin{array}{rcl} & & \Delta g(P) ={ \mathbf{e}}_{\gamma } \cdot \nabla T(P) + \frac{\gamma \prime } {\gamma }T(P) \\ & & \left ({\mathbf{e}}_{\gamma } = \frac{\gamma } {\vert \gamma \vert },\ \gamma \prime = \frac{\partial \gamma } {\partial h}\right ). \end{array}$$
(15.5)

The linearized equation (15.5) holds according to Molodensky’s theory in the scalar version, where we know, for each point of the boundary, \(({\vartheta},\lambda )\), W and g (cf. Part I, Sect. 2.3, point 3). The second can be written

$$\begin{array}{rcl} \delta g ={ \mathbf{e}}_{\gamma } \cdot \nabla T\ ;& &\end{array}$$
(15.6)

(15.6) applies when we assume that, in addition to \(({\vartheta},\lambda )\), also the ellipsoidal height h of the point P and the gravity modulus g(P) are known (cf. Part I, Sect. 2.3, point 2).

In the first case we derive (15.5) by linearizing a free boundary BVP, or Molodensky’s problem, where the ellipsoidal height \(h({\vartheta},\lambda )\) of the boundary is unknown and in fact related to the known normal height \({h}^{{_\ast}}({\vartheta},\lambda )\) by Bruns’s relation (cf. Part I, (2.36))

$$\begin{array}{rcl} h({\vartheta},\lambda ) = {h}^{{_\ast}}({\vartheta},\lambda ) + \zeta \ \ ;\ \ \zeta = \frac{T({\vartheta},\lambda )} {\gamma ({\vartheta},\lambda )}.& &\end{array}$$
(15.7)

In the second case, (15.6) is derived by linearizing the expression of \(\vert \mathbf{g}\vert \) on a known fixed boundary.

So, the surface where the data (15.5) are given is a “smoothed” version of the telluroid, while for (15.6) we can think of a smoothed version of the actual topographic surface. In both cases we shall make on S the hypothesis that it is star-shaped, i.e. that it can be expressed in spherical coordinates by an equation of the form \(r = R({\vartheta},\lambda )\); furthermore we shall assume that \(R({\vartheta},\lambda )\) has bounded first and second derivatives, i.e. that S has a bounded inclination with respect to \({\mathbf{e}}_{r}\) and a bounded curvature. To streamline the notation we shall use, throughout this chapter, the following symbols

$$\begin{array}{rcl} & & \sigma \equiv ({\vartheta},\lambda ) = \text{ corresponding to a direction in space } \\ & & \text{ or a point on the unit sphere} \\ & & {R}_{\sigma } = R(\sigma ) = R({\vartheta},\lambda ) \\ & & {\nabla }_{\sigma } ={ \mathbf{e}}_{{\vartheta}} \frac{\partial } {\partial {\vartheta}} +{ \mathbf{e}}_{\lambda } \frac{1} {\sin {\vartheta}} \frac{\partial } {\partial \lambda } \\ & &{ \mathbf{n}}_{P} = \text{ unit vector normal to $S$ at $P$} \\ & & I = \text{ inclination of $S$ with respect to }{\mathbf{e}}_{r} \\ & & \cos I ={ \mathbf{n}}_{P} \cdot {\mathbf{e}}_{r} \\ & & J = {(\cos I)}^{-1} \\ & & d\sigma = \text{ unit sphere area element } \\ & & dS = J{R}_{\sigma }^{2}d\sigma = \text{ surface $S$ - area element} \end{array}$$
(15.8)

Furthermore we shall use an index + or − to represent respectively the maximum and the minimum of a certain quantity with respect to σ; so

$$\begin{array}{rcl} & & {R}_{+} {=\max }_{\sigma }{R}_{\sigma },\ {R}_{-} {=\min }_{\sigma }{R}_{\sigma },\ \delta R = {R}_{+} - {R}_{-} \\ & & {J}_{+} {=\max }_{\sigma }J = {(\cos {I}_{+})}^{-1} \end{array}$$
(15.9)

and so forth.
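As a concrete illustration of the notation (15.8)–(15.9) (our own sketch, not part of the original text), the following Python fragment evaluates the inclination I, the factor J = (cos I)⁻¹ and the area element dS = JR σ 2dσ for a hypothetical star-shaped surface sampled on a regular (θ, λ) grid, using the relation tan I = |∇ σ R|∕R σ which follows from r = R(θ, λ); both the surface and the grid resolution are illustrative assumptions.

import numpy as np

# Sketch: geometry of a star-shaped surface r = R(theta, lambda) on a regular grid,
# using tan(I) = |grad_sigma R| / R_sigma (cf. the symbols in (15.8)-(15.9)).
n_t, n_l = 90, 180
theta = np.linspace(1e-3, np.pi - 1e-3, n_t)          # colatitude, poles excluded
lam = np.linspace(0.0, 2.0 * np.pi, n_l, endpoint=False)
TH, LM = np.meshgrid(theta, lam, indexing="ij")

R0 = 6_371_000.0                                      # mean radius (m), illustrative
R = R0 + 4_000.0 * np.sin(TH) ** 2 * np.cos(3 * LM)   # hypothetical "topography"

# grad_sigma R = e_theta dR/dtheta + e_lambda (1/sin theta) dR/dlambda
dR_dt = np.gradient(R, theta, axis=0)
dR_dl = np.gradient(R, lam, axis=1)
grad_sigma = np.sqrt(dR_dt ** 2 + (dR_dl / np.sin(TH)) ** 2)

tan_I = grad_sigma / R                                # inclination of S w.r.t. e_r
J = np.sqrt(1.0 + tan_I ** 2)                         # J = 1 / cos(I)
dsigma = np.sin(TH) * (theta[1] - theta[0]) * (lam[1] - lam[0])
dS = J * R ** 2 * dsigma                              # surface area element of S

print("J_+ (max of J on the grid): %.4f" % J.max())
print("area of S: %.4e m^2, sphere of radius R0: %.4e m^2"
      % (dS.sum(), 4 * np.pi * R0 ** 2))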

Finally, let us remark that we have used the notation

$$\begin{array}{rcl}{ \mathbf{e}}_{\gamma } \cdot \nabla u\cong\frac{\partial u} {\partial h} = u \prime \ ;& &\end{array}$$
(15.10)

in the sequel the same notation will be used as well for \(\frac{\partial u} {\partial r} = u \prime \), when the context implies an unambiguous identification between the two alternatives.

This said, in Sect. 15.2 we give the precise definitions of the BVPs we are going to analyze, and of the spaces where the solution will be sought. In Sect. 15.3 and in Sect. 15.4 we analyze the two main BVPs, namely Molodensky’s problem and the fixed boundary BVP, proving theorems of existence and uniqueness under suitable conditions on the geometry of S (cf. Sansò and Venuti 2008). In Sect. 15.5 we start the discussion of the numerical implementation of an approximate solution of our BVPs; in particular we start from the traditional least squares method and we show how it compares with the classical Galerkin approach. Despite some simplifications, even Galerkin’s equations are too complicated to allow for a direct numerical solution, although some numerical work has been done to study direct solutions on the surface S. So, following the geodetic tradition, some simplified iterative solutions have to be devised. These are illustrated in Sect. 15.6 (Sacerdote and Sansò 2010). Finally, in Sect. 15.7 we briefly introduce new data sets relative to the gravity field that space technology has recently been providing. Such data sets can be used along the lines of the solution of a BVP, and for this reason they are shortly presented within this chapter.

2 A Precise Definition of the Two Main BVP’s and of Their Solution Spaces

What is peculiar to this chapter is that, within the rather large literature concerning the geodetic BVP, we shall choose for the data the L 2(S) topology, because this is what is implicitly assumed in many approximation procedures, especially when we discretize S so that the L 2(S) norm resembles the quite familiar sum of squares.

Correspondingly we expect \(T \prime = \frac{\partial T} {\partial r}\) to be in L 2(S) too so that for the solution a suitable norm could be that of H 1, 2(S), namely the one that guarantees that \(\vert \nabla T\vert \) is in L 2(S) too. This is essential if we want to build an approximation valid for gravity anomalies and deflections of the vertical, up to the boundary.

For technical reasons however we shall not use exactly the classical L 2(S) and H 1, 2(S) norms but rather an equivalent form, namely (remember that we shall put an H in front of the symbols of our Hilbert spaces, to underline that we are dealing with spaces of harmonic functions) we shall put

$$\begin{array}{rcl} v \in {\mathit{HL}}^{2}(S),& & \|{v\|}_{ 0}^{2} = \int\limits{v}^{2}({R}_{ \sigma },\sigma ){R}_{\sigma }d\sigma \end{array}$$
(15.11)
$$\begin{array}{rcl} u \in H{H}^{1,2}(S),& & \|{u\|}_{ 1}^{2} =\| r\vert \nabla u{\vert \|}_{ 0}^{2} = \int\limits\,\vert \nabla u({R}_{\sigma },\sigma ){\vert }^{2}{R}_{ \sigma }^{3}d\sigma.\end{array}$$
(15.12)

Let us notice that, based on the above remark, the norm \({\|\ \|}_{0}\) defined by (15.11) can be used both for harmonic functions defined throughout Ω, which admit a trace on S according to Cimmino (Cimmino 1952; Miranda 1970), and for functions which are just defined on S, like the data f. The same is not true for the \({\|\ \|}_{1}\), (15.12), because ∇ u implies also the knowledge of u′.
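Since the norms (15.11) and (15.12) are the quantities we shall later discretize, a few lines of Python (a sketch of ours, assuming a regular (θ, λ) grid and a simple rectangle quadrature) show how \(\|{v\|}_{0}^{2}\) is approximated in practice; \(\|{u\|}_{1}^{2}\) is obtained in the same way, with v²R σ replaced by |∇u|²R σ 3.

import numpy as np

def norm0_squared(v, R_sigma, theta, lam):
    # Discrete version of ||v||_0^2 = int v^2(R_sigma, sigma) R_sigma dsigma
    # on a regular colatitude/longitude grid (illustrative quadrature only).
    dsigma = np.sin(theta)[:, None] * (theta[1] - theta[0]) * (lam[1] - lam[0])
    return np.sum(v ** 2 * R_sigma * dsigma)

# Hypothetical check: v = 1 on a sphere of radius R0 must give about 4*pi*R0
theta = np.linspace(1e-3, np.pi - 1e-3, 180)
lam = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
R0 = 6_371_000.0
v = np.ones((theta.size, lam.size))
print(norm0_squared(v, R0 * np.ones_like(v), theta, lam), 4 * np.pi * R0)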

The target of this chapter is precisely to show that, assuming that the data are bounded in L 2(S), the solution T of our geodetic BVPs is bounded in HH 1, 2(S), i.e. that, under suitable conditions on R σ, there is a constant C such that

$$\begin{array}{rcl} \|{T\|}_{1} \leq C\|{f\|}_{0}& &\end{array}$$
(15.13)

where f is − R σ Δg or − R σδg, depending on whether we treat the problem with data (15.5) or (15.6).

The linearized Molodensky problem with boundary values (15.5) is in fact the one we are still using for the determination of high resolution global models, yet it has less favourable mathematical properties, due to the fact that in spherical approximation its solution is non-unique.

On the contrary, the BVP with boundary values (15.6) is much easier to analyze and has superior stability properties. Yet one could argue that the data for such a problem are not available. This is certainly true at the present time; however, the possibility of a direct survey of the topographic surface from space by SAR, and the nowadays common use of GPS together with gravimeters, providing the ellipsoidal height at every new point of gravity measurement, make this form of the geodetic BVP more and more important for the future.

A warning on the notation used in the chapter: many times we need an otherwise unspecified constant; for that we shall always use the symbol C, without necessarily implying that it is a specific constant assuming the same value in all cases.

To give the appropriate formulation of our problems we first need some preliminary propositions.

Proposition 1.

There are functions \(\{{Z}_{\mathcal{l}m}\}\) in HL 2(S) such that, fixing a radius \(\overline{R} > {R}_{+}\) and a sphere \(\overline{S}\), with radius \(\overline{R}\), encompassing \(S,\forall u \in {\mathit{HL}}^{2}(S)\)

$$\begin{array}{rcl}{ \left < {Z}_{\mathcal{l}m},u\right >}_{0}& =& \int\limits {Z}_{\mathcal{l}m}({R}_{\sigma },\sigma )u({R}_{\sigma },\sigma ){R}_{\sigma }d\sigma \\ & =& \int\limits_{\overline{S}}{S}_{\mathcal{l}m}(\overline{R},\sigma )u(\overline{R},\sigma )d\sigma ,\qquad \left (d\sigma = \frac{d\overline{S}} {{\overline{R}}^{2}}\right ),\end{array}$$
(15.14)

with \({S}_{\mathcal{l}m}\) the outer solid spherical harmonics of degree \(\mathcal{l}\) and order m.

Proof.

Since dist \((\overline{S},S) = \overline{R} - {R}_{+} > 0\), we have for the Green function of S

$$\begin{array}{rcl} P \in \overline{S},Q \in S\ ;\quad \vert {G}_{{n}_{Q}}(P,Q)\vert \leq C.& &\end{array}$$
(15.15)

Therefore, using (13.168) and the fact that \(dS = J{R}_{\sigma }^{2}d\sigma \), we have \(\forall u \in {\mathit{HL}}^{2}(S)\)

$$\begin{array}{rcl} \int\limits_{\overline{S}}{u}^{2}(P)d\sigma \leq C\int\limits_{S}{u}^{2}(Q)dS \leq C{J}_{ +}{R}_{+}\|{u\|}_{0}^{2}.& &\end{array}$$
(15.16)

On the other hand the linear functionals

$$\begin{array}{rcl}{ L}_{\mathcal{l}m}(u) =\int\limits_{\overline{S}}{S}_{\mathcal{l}m}ud\sigma & &\end{array}$$
(15.17)

are indeed bounded, since

$$\begin{array}{rcl} \vert {L}_{\mathcal{l}m}(u){\vert }^{2} \leq \int\limits_{\overline{S}}{S}_{\mathcal{l}m}^{2}d\sigma \int\limits_{\overline{S}}{u}^{2}d\sigma \leq \frac{4\pi } {{\overline{R}}^{2\mathcal{l}+2}}C{J}_{+}{R}_{+}\|{u\|}_{0}^{2}.& &\end{array}$$
(15.18)

Therefore (15.14) is just Riesz theorem (Theorem 2) applied to \({L}_{\mathcal{l}m}(u)\).

Let us underline that the functions \({Z}_{\mathcal{l}m}\) so defined are in HL 2(S), namely they are harmonic in the whole Ω and it is their trace on S that is used to verify the identity (15.14). \(\square \)

Proposition 2.

Let us define

$$\begin{array}{rcl}{ V }_{L} = \text{ Span}\{{Z}_{\mathcal{l}m}\ ;\vert m\vert \leq \mathcal{l},\mathcal{l} \leq L\}& &\end{array}$$
(15.19)

and call \({V }_{L}^{\perp }\) the orthogonal complement of V L in HL 2(S); then

$$\begin{array}{rcl} u \in {V }_{L}^{\perp }\Leftrightarrow u = O\left ( \frac{1} {{r}^{L+2}}\right ).& &\end{array}$$
(15.20)

Furthermore \(\{{Z}_{\mathcal{l}m}\}\) is a system of linearly independent functions.

Proof.

In fact if \(u \in {V }_{L}^{\perp }\), i.e. \(u\perp {V }_{L}\), we have (recall that \({S}_{\mathcal{l}m}(\overline{R},\sigma )\,=\,{Y }_{\mathcal{l}m}(\sigma )/{\overline{R}}^{\mathcal{l}+1}\))

$$\begin{array}{rcl}{ \overline{u}}_{\mathcal{l}m} = \frac{1} {4\pi }\int\limits_{\overline{S}}{Y }_{\mathcal{l}m}(\sigma )u(\overline{R},\sigma )d\sigma = 0\quad \forall m,\forall \mathcal{l} \leq L& &\end{array}$$
(15.21)

and vice versa. Hence, for \(r \geq \overline{R}\),

$$\begin{array}{rcl} u(r,\sigma ) = \sum\limits_{\mathcal{l}=L+1}^{+\infty }\ \sum\limits_{m=-\mathcal{l}}^{\mathcal{l}}{\overline{u}}_{ \mathcal{l}m}{\left (\frac{\overline{R}} {r} \right )}^{\mathcal{l}+1}{Y }_{ \mathcal{l}m}(\sigma ),& &\end{array}$$
(15.22)

confirming the statement (15.20). To prove that \(\{{Z}_{\mathcal{l}m}\}\) is a system of linearly independent functions, we note that

$$\begin{array}{rcl} \sum\limits_{\mathcal{l}=0}^{L}\ \sum\limits_{m=-\mathcal{l}}^{\mathcal{l}}{a}_{ \mathcal{l}m}{Z}_{\mathcal{l}m} = 0& & \\ \end{array}$$

implies

$$\begin{array}{rcl} 0& =& \sum\limits_{\mathcal{l}=0}^{L}\ \sum\limits_{m=-\mathcal{l}}^{\mathcal{l}}{a}_{ \mathcal{l}m}{\left < {Z}_{\mathcal{l}m},{S}_{jk}\right >}_{0} \\ & =& \sum\limits_{\mathcal{l}=0}^{L}\ \sum\limits_{m=-\mathcal{l}}^{\mathcal{l}}{a}_{ \mathcal{l}m}\int\limits_{\overline{S}}\frac{{Y }_{\mathcal{l}m}(\sigma )} {{\overline{R}}^{\mathcal{l}+1}} \frac{{Y }_{jk}(\sigma )} {{\overline{R}}^{j+1}} d\sigma \\ & =& \frac{4\pi } {{\overline{R}}^{2j+2}}{a}_{jk},\qquad (\vert k\vert \leq j,\ j \leq L).\end{array}$$

\(\square \)

Let us define for the moment the linearized Molodensky problem, modified to exploit the knowledge of low-degree harmonics up to degree L, as

$$\begin{array}{rcl} \left \{\begin{array}{ll} \Delta T = 0 &\text{ in }\Omega \\ {\mathbf{e}}_{\gamma } \cdot \nabla T + \frac{\gamma \prime } {\gamma }T = -\Delta g -\sum\limits_{\mathcal{l},m=0}^{L}{a}_{ \mathcal{l}m}\frac{1} {r}{Z}_{\mathcal{l}m}&\text{ on }S \\ T = O\left ( \frac{1} {{r}^{L+2}} \right ) &\text{ for }r \rightarrow \infty.\end{array} \right.& &\end{array}$$
(15.23)

The unknowns in (15.23) are both T and the coefficients \(\{{a}_{\mathcal{l}m}\}\). The boundary condition in (15.23) can be conveniently put into the perturbative form

$$\begin{array}{rcl} & & \mathbf{r} \cdot \nabla T + 2T = f + \sum\limits_{\mathcal{l},m=0}^{L}{a}_{ \mathcal{l}m}{Z}_{\mathcal{l}m} + \\ & & +r({\mathbf{e}}_{r} +{ \mathbf{e}}_{\gamma }) \cdot \nabla T + \left (2 + r\frac{\gamma \prime } {\gamma }\right )T. \end{array}$$
(15.24)

Proposition 3.

The equation (15.24) is perturbative in the sense that, calling as usual \(\nu \) the ellipsoidal normal,

$$\begin{array}{rcl} & & \left \{\begin{array}{l} \epsilon ={ \mathbf{e}}_{r} +{ \mathbf{e}}_{\gamma }\cong{\mathbf{e}}_{r} -\nu \\ \vert \epsilon \vert \leq \frac{1} {2}{e}^{2} \end{array} \right. \\ & & ({e}^{2} = \text{ ellipsoid eccentricity}) \end{array}$$
(15.25)

and

$$\begin{array}{rcl} \left \{\begin{array}{l} \eta = 2 + r\frac{\gamma \prime } {\gamma } \\ \vert \eta \vert \leq 2{e}^{2},\end{array} \right.& &\end{array}$$
(15.26)

i.e.  \(\epsilon \) and η can be taken as perturbation parameters, small to the first order in e 2 .

Proof.

The estimate (15.25) is easily derived from the explicit expressions of \(\nu (\sigma )\) and \({\mathbf{e}}_{r}(\sigma )\), as functions of ellipsoidal colatitude \(\overline{{\vartheta}}\) and longitude λ, namely

$$\begin{array}{rcl} \nu (\sigma )& =& \left \vert \begin{array}{c} \sin \overline{{\vartheta}}\cos \lambda \\ \sin \overline{{\vartheta} } \sin \lambda \\ \cos \overline{{\vartheta}}\end{array} \right \vert \ ; \\ {\mathbf{e}}_{r}(\sigma )& =& \frac{1} {\sqrt{{\sin }^{2 } \overline{{\vartheta} } + {(1 - {e}^{2 } ){}^{2 } \cos }^{2 } \overline{{\vartheta}}}}\left \vert \begin{array}{c} \sin \overline{{\vartheta}}\cos \lambda \\ \sin \overline{{\vartheta} } \sin \lambda \\ (1 - {e}^{2})\cos \overline{{\vartheta}} \end{array} \right \vert.\end{array}$$

In fact, by using an approximation to the order of e 2, we have

$$\begin{array}{rcl}{ \mathbf{e}}_{r} \sim (1 + {e{}^{2}\cos }^{2}\overline{{\vartheta}})\nu - {e}^{2}\left \vert \begin{array}{c} 0\\ 0 \\ \cos \overline{{\vartheta}} \end{array} \right \vert & & \\ \end{array}$$

i.e.

$$\begin{array}{rcl} & & \vert {\mathbf{e}}_{r} -\nu \vert \sim {e}^{2}\vert \cos \overline{{\vartheta}}\vert \cdot \vert \left \vert \begin{array}{c} \cos \overline{{\vartheta}}\sin \overline{{\vartheta}}\cos \lambda \\ \cos \overline{{\vartheta} } \sin \overline{{\vartheta} } \sin \lambda \\ {\cos }^{2}\overline{{\vartheta}} - 1 \end{array} \right \vert \vert \\ & & = {e}^{2}\vert \cos \overline{{\vartheta}}\vert \sqrt{{\cos }^{2 }{ \overline{{\vartheta} } \sin }^{2 } \overline{{\vartheta} } {+\sin }^{4 } \overline{{\vartheta}}} = {e}^{2}\vert \cos \overline{{\vartheta}}\sin \overline{{\vartheta}}\vert \leq \frac{1} {2}{e}^{2} \\ \end{array}$$

The estimate (15.26) is calculated from the approximate expression (see Part I, (2.122))

$$\begin{array}{rcl} r\frac{\gamma \prime } {\gamma }\cong -\left ( \frac{r} {\mathcal{M}} + \frac{r} {\mathcal{N}}\right ) - 2\frac{{\omega }^{2}r} {\gamma } ,& & \\ \end{array}$$

making the computation up to O(e 2).

Remember that \(\mathcal{M}\) and \(\mathcal{N}\) are respectively the radius of curvature of the meridian and the grand normal already met in Part I, (1.206) and (1.137).

In fact, disregarding the height of the point on the surface S which gives a smaller contribution, one can write

$$\begin{array}{rcl} r& =& \mathcal{N}{{[\sin }^{2}\overline{{\vartheta}} + {(1 - {e}^{2}){}^{2}\cos }^{2}\overline{{\vartheta}}]}^{(1/2)} \sim \mathcal{N}(1 - {e{}^{2}\cos }^{2}\overline{{\vartheta}}) \\ \frac{\mathcal{N}} {\mathcal{M}}& =& \frac{1 - {e{}^{2}\cos }^{2}\overline{{\vartheta}}} {1 - {e}^{2}} \cong1 + {e{}^{2}\sin }^{2}\overline{{\vartheta}} \\ \frac{{\omega }^{2}r} {\gamma } & \cong & \frac{1} {2}{e}^{2}\ ; \\ \end{array}$$

the last estimate is done on a purely numerical basis with r equal to the mean radius of the earth. Accordingly one finds

$$\begin{array}{rcl} \left \vert r\frac{\gamma \prime } {\gamma } + 2\right \vert \sim {e}^{2}{\vert \sin }^{2}\overline{{\vartheta}} - {2\cos }^{2}\overline{{\vartheta}} + 1\vert = {e}^{2}\vert {2\sin }^{2}\overline{{\vartheta}} {-\cos }^{2}\overline{{\vartheta}}\vert \leq 2{e}^{2}.& & \\ \end{array}$$

\(\square \)
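The bound (15.25) can also be checked numerically from the explicit vectors written above; the short Python sketch below (ours; the value of e 2 is a GRS80-like value adopted only for illustration) evaluates \(\vert {\mathbf{e}}_{r} -\nu \vert \) over a range of ellipsoidal colatitudes and compares it with \({e}^{2}\vert \sin \overline{{\vartheta}}\cos \overline{{\vartheta}}\vert \) and with the bound e 2∕2.

import numpy as np

e2 = 0.00669438                      # squared eccentricity (GRS80-like, assumed)
tbar = np.linspace(0.0, np.pi, 1801) # ellipsoidal colatitude
lam = 0.0                            # |e_r - nu| does not depend on lambda

nu = np.stack([np.sin(tbar) * np.cos(lam),
               np.sin(tbar) * np.sin(lam),
               np.cos(tbar)])
norm = np.sqrt(np.sin(tbar) ** 2 + (1.0 - e2) ** 2 * np.cos(tbar) ** 2)
er = np.stack([np.sin(tbar) * np.cos(lam),
               np.sin(tbar) * np.sin(lam),
               (1.0 - e2) * np.cos(tbar)]) / norm

eps = np.linalg.norm(er - nu, axis=0)
print("max |e_r - nu|         : %.3e" % eps.max())
print("max e2 |sin(t) cos(t)| : %.3e" % (e2 * np.abs(np.sin(tbar) * np.cos(tbar))).max())
print("bound e2 / 2           : %.3e" % (e2 / 2.0))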

We are now able to give the definition of the linearized Molodensky problem in perturbative form.

Definition 1.

We say that the linearized Molodensky problem is to find the potential T and numbers \(\{{a}_{\mathcal{l}m};0\,\leq \,\mathcal{l}\,\leq \,L,\ \vert m\vert \,\leq \,\mathcal{l}\}\) such that, denoting \(rT \prime = \mathbf{r} \cdot \nabla T\),

$$\begin{array}{rcl} \left \{\begin{array}{c} \Delta T = 0\qquad \text{ in }\Omega \\ rT \prime + 2T = f + \sum\limits_{\mathcal{l}=0}^{L}\sum\limits_{m=-\mathcal{l}}^{\mathcal{l}}{a}_{ \mathcal{l}m}{Z}_{\mathcal{l}m} + r\epsilon \cdot \nabla T + \eta T\text{ on }S \\ T = O\left ( \frac{1} {{r}^{L+2}} \right ),\ r \rightarrow \infty.\end{array} \right.& &\end{array}$$
(15.27)

As is easy to verify by comparing with (15.23) and (15.24), in (15.27) we have put

$$\begin{array}{rcl} f = -r\Delta g.& &\end{array}$$
(15.28)

For future reference we note that, denoting

$$\begin{array}{rcl}{ A}_{1} = r\left (- \frac{\partial } {\partial h} + \frac{\gamma \prime } {\gamma }\right )& &\end{array}$$
(15.29)

the Molodensky boundary operator, we have used in (15.27) the perturbative form

$$\begin{array}{rcl}{ A}_{1}& =& {A}_{1S} + {D}_{1} \equiv \left (-\frac{\partial } {\partial r} - 2\right ) + (r\epsilon \cdot \nabla + \eta )\end{array}$$
(15.30)
$$\begin{array}{rcl} \epsilon & =&{ \mathbf{e}}_{r} -\nu ,\ \eta = \frac{2} {r} + \frac{\gamma \prime } {\gamma }.\end{array}$$
(15.31)

In a similar way the fixed boundary BVP in a linearized form can be written as

$$\begin{array}{rcl} \left \{\begin{array}{ll} \Delta T = 0 &\text{ in }\Omega \\ {\mathbf{e}}_{ \gamma } \cdot \nabla T = \delta g&\text{ on }S \\ T = O\left (\frac{1} {r}\right ) &r \rightarrow \infty.\end{array} \right.& &\end{array}$$
(15.32)

Note that we haven’t introduced into (15.32) the knowledge of a certain number of harmonics of low degree and the corresponding unknowns \(\{{a}_{\mathcal{l}m}\}\); the reason is simply that (15.32) can be very easily analyzed without introducing such an artifice, which is not possible for problem (15.27).

Paralleling Definition 1 we can put (15.32) too into a perturbative form, by exploiting (15.25).

Definition 2.

The linearized fixed boundary BVP in perturbative form is to find a potential T in Ω satisfying

$$\begin{array}{rcl} \left \{\begin{array}{ll} \Delta T = 0 &\text{ in }\Omega \\ rT \prime = f + r\epsilon \cdot \nabla T&\text{on }S \\ T = O\left (\frac{1} {r}\right ) &r \rightarrow \infty.\end{array} \right.& &\end{array}$$
(15.33)

Also here one finds that in (15.33) we have put

$$\begin{array}{rcl} f = -r\delta g.& &\end{array}$$
(15.34)

3 The Analysis of the Linearized Molodensky Problem

The results of this section and of the next are based on the work (Sansò and Venuti 2008). The analysis of this problem can be performed basically in two steps. First of all we define a simplified problem, without perturbative terms, and we analyze it completely. Then we go back to the original form (15.27) and we get the desired result.

Definition 3 (simple Molodensky’s problem). 

The simple Molodensky problem or Molodensky’s problem in spherical approximation is to find \(\{T,{a}_{\mathcal{l}m}\}\) such that

$$\begin{array}{rcl} \left \{\begin{array}{ll} \Delta T = 0 &\text{ in }\Omega \\ rT \prime + 2T = f + \sum\limits_{\mathcal{l}=0}^{L}\sum\limits_{m=-\mathcal{l}}^{\mathcal{l}}{a}_{ \mathcal{l}m}{Z}_{\mathcal{l}m}&\text{ on }S \\ T = O\left ( \frac{1} {{r}^{L+2}} \right ) &r \rightarrow \infty.\end{array} \right.& &\end{array}$$
(15.35)

To proceed with the analysis of (15.35) we need a result which is adapted from Hörmander (cf. Hörmander 1976) to the specific star-shaped geometry used here.

Theorem 1 (energy integral). 

Let T be harmonic in Ω and satisfy the equation

$$\begin{array}{rcl} rT \prime + \alpha T = \mathbf{r} \cdot \nabla T + \alpha T = v\ ;& &\end{array}$$
(15.36)

then v is harmonic in Ω too. Furthermore assume that v ∈ HL 2(S), then T satisfies the identity

$$\begin{array}{rcl} \|{T\|}_{1}^{2} + (1 - 2\alpha )\int\limits\,d\Omega \vert \nabla T{\vert }^{2} = 2\int\limits_{S}v{T}_{n}\mathit{dS},& &\end{array}$$
(15.37)

with \({T}_{n} = \frac{\partial T} {\partial n}\) and \(\mathbf{n}\) the exterior normal of S.

Proof.

From (15.36) we derive by differentiation

$$\begin{array}{rcl} \Delta v = \mathbf{r} \cdot \nabla (\Delta T) + (\alpha + 2)\Delta T = 0& & \\ \end{array}$$

proving that v is harmonic in Ω too. Now let T be harmonic in Ω; note that the following identity holds

$$\begin{array}{rcl} & & \nabla \cdot [(\mathbf{r} \cdot \nabla T + \alpha T)\nabla T] = [(\mathbf{r} \cdot \nabla )\nabla T] \cdot \nabla T + (\alpha + 1)\vert \nabla T{\vert }^{2} \\ & & \equiv \frac{1} {2}r \frac{\partial } {\partial r}(\vert \nabla T{\vert }^{2}) + (\alpha + 1)\vert \nabla T{\vert }^{2}. \end{array}$$
(15.38)

Remember that, to apply Gauss’ theorem, we must consider that the normal \(\mathbf{n}\) to S is pointing into Ω. So by integrating (15.38) over Ω, with dΩ = r 2 drdσ, we find

$$\begin{array}{rcl} & & -\int\limits_{S}(\mathbf{r} \cdot \nabla T + \alpha T)\mathbf{n} \cdot \nabla \mathit{TdS} = -\int\limits_{S}v{T}_{n}\mathit{dS} \\ & & \quad = \frac{1} {2}\int\limits\,d\sigma \int\limits_{{R}_{\sigma }}^{+\infty }{\mathit{drr}}^{3} \frac{\partial } {\partial r}\vert \nabla T{\vert }^{2} + (\alpha + 1)\int\limits \vert \nabla T{\vert }^{2}d\Omega \\ & & \quad = \frac{1} {2}\int\limits d\sigma {R}_{\sigma }^{3}\vert \nabla T{\vert }^{2} + \left (\alpha -\frac{1} {2}\right )\int\limits \vert \nabla T{\vert }^{2}d\Omega. \end{array}$$
(15.39)

Rearranging we get (15.37). \(\square \)
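As an elementary check of (15.37) (a verification of ours, in the idealized case of a spherical boundary), take S = {r = R} and T = R∕r; then |∇T| = R∕r², v = rT′ + αT = (α − 1)R∕r, and on S one has T n = ∂T∕∂r = −1∕R, dS = R²dσ, so that

$$\|{T\|}_{1}^{2} + (1 - 2\alpha )\int\limits \vert \nabla T{\vert }^{2}d\Omega = 4\pi R + (1 - 2\alpha )4\pi R = 8\pi R(1 - \alpha ) = 2\int\limits_{S}v{T}_{n}\mathit{dS},$$

since \(2\int_{S}v{T}_{n}\mathit{dS} = 2(\alpha - 1)(-1/R){R}^{2}4\pi = 8\pi R(1 - \alpha )\), in agreement with the identity and with the orientation of \(\mathbf{n}\) adopted in the proof.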

Corollary 1.

Assume that \(\alpha \leq \frac{1} {2}\) and v ∈ HL 2(S), then T ∈HH 1,2 (S) and

$$\begin{array}{rcl} \|{T\|}_{1} \leq 2{J}_{+}\|{v\|}_{0};& &\end{array}$$
(15.40)

in particular (15.40) holds for α = 0, i.e. for v = rT′.

Assume, vice versa, that \(\alpha > \frac{1} {2}\) ; then

$$\begin{array}{rcl} \|{T\|}_{1} \leq (2\alpha - 1){J}_{+}\|{T\|}_{0} + 2{J}_{+}\|{v\|}_{0},& &\end{array}$$
(15.41)

meaning that if one can prove that T ∈ HL 2(S) then we have T ∈HH 1,2 (S) too.

Proof.

Note that, by the Schwarz inequality, whatever v is, the following inequality holds

$$\begin{array}{rcl} \vert 2\int\limits_{S}v{T}_{n}\mathit{dS}\vert & =& 2\vert \int\limits_{S}v{T}_{n}{R}_{\sigma }^{2}Jd\sigma \vert \\ & &\leq 2{J}_{+}{\left \{\int\limits_{S}{v}^{2}{R}_{ \sigma }d\sigma \right \}}^{(1/2)}{\left \{\int\limits_{S}{T}_{n}^{2}{R}_{ \sigma }^{3}d\sigma \right \}}^{(1/2)} \\ & & \leq 2{J}_{+}\|{v\|}_{0}\|{T\|}_{1}. \end{array}$$
(15.42)

So if \((1 - 2\alpha ) \geq 0\), from (15.37) and (15.42) we find

$$\begin{array}{rcl} \|{T\|}_{1}^{2} \leq \vert 2\int\limits_{S}v{T}_{n}\mathit{dS}\vert \leq 2{J}_{+}\|{v\|}_{0}\|{T\|}_{1},& & \\ \end{array}$$

proving (15.40).

If, on the contrary, 1 − 2α < 0, we have from (15.37)

$$\begin{array}{rcl} \|{T\|}_{1}^{2}& =& -(2\alpha - 1)\int\limits_{S}T{T}_{n}\mathit{dS} + 2\int\limits_{S}v{T}_{n}\mathit{dS} \\ & =& \int\limits_{S}[-(2\alpha - 1)T + 2v]{T}_{n}{R}_{\sigma }^{2}\mathit{Jd}\sigma ;\end{array}$$
(15.43)

if we apply the Schwarz inequality to (15.43) we get (15.41). \(\square \)

We are now able to proceed with the analysis of (15.35).

Proposition 4.

The simple Molodensky problem (15.35) is equivalent to the modified Dirichlet problem

$$\begin{array}{rcl} \left \{\begin{array}{ll} \Delta v = 0 &\text{ in }\Omega \\ v{\vert }_{S} = f + \sum\limits_{\mathcal{l}=0}^{L}\ \sum\limits_{m=-\mathcal{l}}^{\mathcal{l}}{a}_{ \mathcal{l}m}{Z}_{\mathcal{l}m}&\text{ on }S \\ v = O\left ( \frac{1} {{r}^{L+2}} \right ), &\end{array} \right.& &\end{array}$$
(15.44)

with

$$\begin{array}{rcl} v = rT \prime + 2T& &\end{array}$$
(15.45)

on condition that

$$\begin{array}{rcl} L \geq 1& &\end{array}$$
(15.46)

Proof.

If T is harmonic, v is harmonic too by virtue of Theorem 1. That the boundary condition in (15.44) is satisfied is tautological, given the definition (15.45). Furthermore, if \(T\,=\,O\left ( \frac{1} {{r}^{L+2}} \right )\), recalling Proposition 2, we must have for \(r \geq \overline{R}\)

$$\begin{array}{rcl} T = \sum\limits_{\mathcal{l}=L+1}^{+\infty }\ \sum\limits_{m=-\mathcal{l}}^{\mathcal{l}}{\overline{T}}_{ \mathcal{l}m}{\left (\frac{\overline{R}} {r} \right )}^{\mathcal{l}+1}{Y }_{ \mathcal{l}m}(\sigma ),& &\end{array}$$
(15.47)

so that

$$\begin{array}{rcl} v = \sum\limits_{\mathcal{l}=L+1}^{+\infty }\ \sum\limits_{m=-\mathcal{l}}^{\mathcal{l}} - (\mathcal{l} - 1){\overline{T}}_{ \mathcal{l}m}{\left (\frac{\overline{R}} {r} \right )}^{\mathcal{l}+1}{Y }_{ \mathcal{l}m}(\sigma ),& &\end{array}$$
(15.48)

and the third of (15.44) is satisfied.

Vice versa, if v satisfies (15.44) with L ≥ 1, we can reverse (15.47) and (15.48), in the sense that from the known development \((r \geq \overline{R})\)

$$\begin{array}{rcl} v = \sum\limits_{\mathcal{l}=L+1}^{+\infty }\ \sum\limits_{m=-\mathcal{l}}^{\mathcal{l}}{\overline{v}}_{ \mathcal{l}m}{\left (\frac{\overline{R}} {r} \right )}^{\mathcal{l}+1}{Y }_{ \mathcal{l}m}(\sigma )& &\end{array}$$
(15.49)

we derive in \(r \geq \overline{R}\), for T, the expression

$$\begin{array}{rcl} T = -\sum\limits_{\mathcal{l}=L+1}^{+\infty }\ \sum\limits_{m=-\mathcal{l}}^{\mathcal{l}} \frac{{\overline{v}}_{\mathcal{l}m}} {\mathcal{l} - 1}{\left (\frac{\overline{R}} {r} \right )}^{\mathcal{l}+1}{Y }_{ \mathcal{l}m}(\sigma )\ & &\end{array}$$
(15.50)

which shows that in that region T is harmonic too and furthermore it satisfies the third of (15.44). Now from the identity

$$\begin{array}{rcl} r \frac{\partial } {\partial r}(\Delta T) + 4\Delta T = \Delta v = 0\text{ in }\Omega & & \\ \end{array}$$

multiplied by r 3, we can write

$$\begin{array}{rcl} \frac{\partial } {\partial r}({r}^{4}\Delta T) = 0,& &\end{array}$$
(15.51)

which integrated between r and \(\overline{R}\), considering that \(\Delta T{\vert }_{r=\overline{R}} = 0\), gives

$$\begin{array}{rcl}{ r}^{4}\Delta T = 0,\ {R}_{ \sigma } \leq r \leq \overline{R}.& &\end{array}$$
(15.52)

Therefore ΔT = 0 in the whole of Ω.

Note that it is critical in our reasoning that we can never have \(\mathcal{l} = 1\) in (15.50), because the smallest value of \(\mathcal{l}\) is L + 1 ≥ 2.

In fact, it is easy to see that there can be no function T which simultaneously satisfies

$$\begin{array}{rcl} \Delta T = 0,\ rT \prime + 2T = \frac{{Y }_{1m}} {{r}^{2}}.& & \\ \end{array}$$

In addition, for any first degree spherical harmonic \(\frac{{Y }_{1m}} {{r}^{2}}\), one has

$$\begin{array}{rcl} \left (r \frac{\partial } {\partial r} + 2\right )\frac{{Y }_{1m}} {{r}^{2}} \equiv 0,& & \\ \end{array}$$

i.e. there is no one-to-one correspondence between T and v, when first degree spherical harmonics are still present. This is avoided by condition (15.46).

Finally, we note that, when L ≥ 1, we can write from (15.45)

$$\begin{array}{rcl} \frac{\partial } {\partial r}({r}^{2}T) = rv& &\end{array}$$
(15.53)

which integrates to

$$\begin{array}{rcl} T(r,\sigma ) = -\frac{1} {{r}^{2}}\int\limits_{r}^{+\infty }sv(s,\sigma )\mathit{ds}.& &\end{array}$$
(15.54)

Again the fact that \(v\,=\,O\left ( \frac{1} {{r}^{3}} \right )\), at least, guarantees the convergence of the integral in (15.54). \(\square \)
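As a quick consistency check (ours, not contained in the original text), differentiating (15.54) gives back (15.53), and evaluating (15.54) on a single external harmonic reproduces the coefficient relation contained in (15.48)–(15.50):

$$\frac{\partial } {\partial r}\left ({r}^{2}T\right ) = \frac{\partial } {\partial r}\left (-\int\limits_{r}^{+\infty }sv(s,\sigma )\mathit{ds}\right ) = rv(r,\sigma ),$$

and, for \(v = {\overline{v}}_{\mathcal{l}m}{\left (\overline{R}/r\right )}^{\mathcal{l}+1}{Y }_{\mathcal{l}m}(\sigma )\) with \(\mathcal{l} \geq L + 1 \geq 2\),

$$T(r,\sigma ) = -\frac{{\overline{v}}_{\mathcal{l}m}{\overline{R}}^{\mathcal{l}+1}} {{r}^{2}} {Y }_{\mathcal{l}m}(\sigma )\int\limits_{r}^{+\infty }{s}^{-\mathcal{l}}ds = -\frac{{\overline{v}}_{\mathcal{l}m}} {\mathcal{l} - 1}{\left (\frac{\overline{R}} {r} \right )}^{\mathcal{l}+1}{Y }_{\mathcal{l}m}(\sigma ),$$

so that indeed \(rT\prime + 2T = -(\mathcal{l} - 1)T = v\).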

Proposition 5.

Let us call w the solution of the Dirichlet problem

$$\begin{array}{rcl} \left \{\begin{array}{l} \Delta w = 0\text{ in }\Omega \\ w = f\text{ on } S\ ;\end{array} \right.& &\end{array}$$
(15.55)

that w ∈ HL 2(S) exists and is unique, when f ∈ L2(S), is a theorem by Cimmino that we don’t prove here (cf. Cimmino 1952).

Then the solution of (15.44) is given by

$$\begin{array}{rcl} v = {P}_{{V }_{L}^{\perp }}w& &\end{array}$$
(15.56)

with

$$\begin{array}{rcl}{ P}_{{V }_{L}^{\perp }} = I - {P}_{{V }_{L}}& &\end{array}$$
(15.57)

the orthogonal projector on \({V }_{L}^{\perp }\) in HL 2(S), and the solution of the simple Molodensky problem T is given by (15.54).

Proof.

Equation 15.44 can be written as

$$\begin{array}{rcl} \left \{\begin{array}{ll} \Delta \left (v -\sum\limits_{\mathcal{l},m=0}^{L}{a}_{ \mathcal{l}m}{Z}_{\mathcal{l}m}\right ) = 0\ \ &\text{ in }\Omega \\ v -\sum\limits_{\mathcal{l},m=0}^{L}{a}_{ \mathcal{l}m}{Z}_{\mathcal{l}m} = f &\text{ on }S,\end{array} \right.& &\end{array}$$
(15.58)

showing that we must have

$$\begin{array}{rcl} v -\sum\limits_{\mathcal{l},m=0}^{L}{a}_{ \mathcal{l}m}{Z}_{\mathcal{l}m} = w.& &\end{array}$$
(15.59)

Since the third of (15.44) is equivalent to \(v \in {V }_{L}^{\perp }\) (see Proposition 2), it is enough to apply \({P}_{{V }_{L}^{\perp }}\) to both sides of (15.59) to get (15.56). \(\square \)

We note also that (15.59) determines \(\{{a}_{\mathcal{l}m}\}\) too, since

$$\begin{array}{rcl} \sum\limits_{\mathcal{l},m=0}^{L}{a}_{ \mathcal{l}m}{Z}_{\mathcal{l}m} = v - w = -{P}_{{V }_{L}}w& &\end{array}$$
(15.60)

and \(\{{Z}_{\mathcal{l}m}\}\) are linearly independent.

We note as well that (15.56) implies the important relation

$$\begin{array}{rcl} \|{v\|}_{0} \leq \| {w\|}_{0} =\| {f\|}_{0}\ ;& &\end{array}$$
(15.61)

because the orthogonal projection of an element of a Hilbert space always has norm not larger than that of the projected element.

Before we close the analysis of the simple Molodensky problem, we still need another technical result which we formulate as a proposition.

Proposition 6.

Let u be any function in \({V }_{L}^{\perp }\) , i.e.  \(u = O\left ( \frac{1} {{r}^{L+2}} \right )\) when \(r \rightarrow \infty \) ; assume further that \(u \in {\mathit{HH}}^{1,2}(S)\) , i.e. that \(\|{u\|}_{1} < +\infty \) ; then the following inequality holds

$$\begin{array}{rcl} \|{u\|}_{0} \leq {C}_{0L}\|{R}_{\sigma }{u \prime \|}_{0} \leq {C}_{0L}\|{u\|}_{1},& &\end{array}$$
(15.62)

with

$$\begin{array}{rcl}{ C}_{0L} = {J}_{+}\left ( \frac{\delta R} {{R}_{+}} + \frac{2} {L + 2}\right ) \equiv {J}_{+}{C}_{L}\ ;& &\end{array}$$
(15.63)

see (15.9) for the meaning of symbols.

Proof.

Put

$$\begin{array}{rcl}{ u}_{+} = u({R}_{+},\sigma )& &\end{array}$$
(15.64)

and note that

$$\begin{array}{rcl} u{\vert }_{S} = u({R}_{\sigma },\sigma ) = u{\vert }_{S} - {u}_{+} + {u}_{+}& & \\ \end{array}$$

so that one can write

$$\begin{array}{rcl} \|{u\|}_{0} \leq \| {u{}_{+}\|}_{0} +\| u - {u{}_{+}\|}_{0}.& &\end{array}$$
(15.65)

Note that for \(r = {R}_{+}\) one can put

$$\begin{array}{rcl}{ u}_{+} = \sum\limits_{\mathcal{l}=L+1}^{+\infty }\ \sum\limits_{m=-\mathcal{l}}^{\mathcal{l}}{u}_{ \mathcal{l}m}^{+}{Y }_{ \mathcal{l}m}(\sigma )& &\end{array}$$
(15.66)

and that for \(r > {R}_{+}\) one has

$$\begin{array}{rcl} ru \prime = -\sum\limits_{\mathcal{l}=L+1}^{+\infty }\ \sum\limits_{m=-\mathcal{l}}^{\mathcal{l}}{u}_{ \mathcal{l},m}^{+}(\mathcal{l} + 1){\left (\frac{{R}_{+}} {r} \right )}^{\mathcal{l}+1}{Y }_{ \mathcal{l}m}(\sigma ).& &\end{array}$$
(15.67)

A direct computation shows that

$$\begin{array}{rcl} \|{u{}_{+}\|}_{0}^{2}& =& \int\limits d\sigma {R}_{\sigma }{u}_{+}^{2} \leq {R}_{ +}4\pi \sum\limits_{\mathcal{l}=L+1}^{+\infty }\ \sum\limits_{m=-\mathcal{l}}^{\mathcal{l}}{u}_{ \mathcal{l}m}^{{+}^{2} } \\ & & \leq {R}_{+}4\pi \frac{2L + 3} {{(L + 2)}^{2}}\sum\limits_{\mathcal{l}=L+1}^{+\infty }\ \sum\limits_{m=-\mathcal{l}}^{\mathcal{l}}\frac{{(\mathcal{l} + 1)}^{2}} {2\mathcal{l} + 1} {u}_{\mathcal{l}m}^{{+}^{2} }\ ;\end{array}$$
(15.68)

similarly

$$\begin{array}{rcl} & & \int\limits d\sigma \int\limits_{{R}_{+}}^{+\infty }{(ru \prime )}^{2}dr =\int\limits_{{\Omega }_{+}}{(u \prime )}^{2}d\Omega \\ & & = (4\pi {R}_{+})\sum\limits_{\mathcal{l}=L+1}^{+\infty }\ \sum\limits_{m=-\mathcal{l}}^{\mathcal{l}}\frac{{(\mathcal{l} + 1)}^{2}} {2\mathcal{l} + 1} {u}_{\mathcal{l}m}^{{+}^{2} }.\end{array}$$
(15.69)

Comparing (15.68) and (15.69), noticing that \(\frac{(2L+3)} {{(L+2)}^{2}} < \frac{2} {L+2}\), we derive

$$\begin{array}{rcl} \|{u{}_{+}\|}_{0}^{2} \leq \frac{2} {L + 2}\int\limits_{{\Omega }_{+}}{(u \prime )}^{2}d\Omega.& &\end{array}$$
(15.70)

On the other hand

$$\begin{array}{rcl} \vert {u}_{+} - u{\vert }_{S}{\vert }^{2}& =& \vert u({R}_{ +},\sigma ) - u({R}_{\sigma },\sigma ){\vert }^{2} ={ \left \vert \int\limits_{{R}_{\sigma }}^{{R}_{+} }u \prime \mathit{dr}\right \vert }^{2} \\ & \leq & \int\limits_{{R}_{\sigma }}^{{R}_{+} }{r}^{2}{(u \prime )}^{2}dr\int\limits_{{R}_{\sigma }}^{{R}_{+} } \frac{1} {{r}^{2}}dr = \frac{{R}_{+} - {R}_{\sigma }} {{R}_{\sigma }{R}_{+}} \int\limits_{{R}_{\sigma }}^{{R}_{+} }{r}^{2}{u \prime }^{2}\mathit{dr}.\end{array}$$
(15.71)

Multiplying (15.71) by R σ and integrating on dσ we obtain

$$\begin{array}{rcl} \|{u}_{+} - {u\|}_{0}^{2} \leq \frac{\delta R} {{R}_{+}}\int\limits_{\Omega \setminus {\Omega }_{+}}{(u \prime )}^{2}d\Omega.& &\end{array}$$
(15.72)

So, going back to (15.65) and applying the Cauchy inequality, we get

$$\begin{array}{rcl} \|{u\|}_{0}^{2}& & \leq {\left \{\sqrt{ \frac{2} {L + 2}}{\left [\int\limits_{{\Omega }_{+}}{(u \prime )}^{2}d\Omega \right ]}^{(1/2)} + \sqrt{ \frac{\delta R} {{R}_{+}}}{\left [\int\limits_{\Omega \setminus {\Omega }_{+}}{(u \prime )}^{2}d\Omega \right ]}^{(1/2)}\right \}}^{2} \\ & & \leq {C}_{L}\left [\int\limits_{{\Omega }_{+}}{(u \prime )}^{2}d\Omega +\int\limits_{\Omega \setminus {\Omega }_{+}}{(u \prime )}^{2}d\Omega \right ] \\ & & = {C}_{L}\|{u \prime \|}_{{L}^{2}(\Omega )}^{2} \leq {C}_{ L}\|\nabla {u\|}_{{L}^{2}(\Omega )}^{2}. \end{array}$$
(15.73)

On the other hand

$$\begin{array}{rcl} \int\limits_{\Omega }\vert \nabla u{\vert }^{2}d\Omega & =& -\int\limits_{S}u{u}_{n}\mathit{dS} = -\int\limits u{u}_{n}J{R}_{\sigma }^{2}d\sigma \\ & & \leq {J}_{+}{\left (\int\limits {u}^{2}{R}_{ \sigma }d\sigma \right )}^{(1/2)}{\left (\int\limits {u}_{n}^{2}{R}_{ \sigma }^{3}d\sigma \right )}^{(1/2)} \leq {J}_{ +}\|{u\|}_{0} \cdot \| {u\|}_{1}.\end{array}$$
(15.74)

By using (15.74) in (15.73) and simplifying by \(\|{u\|}_{0}\), we get (15.62). \(\square \)

We are ready now to derive the sought result for the simple Molodensky problem.

Theorem 2 (simple Molodensky’s problem). 

The solution of the problem (15.35), explicitly provided by formula (15.54) with v defined in (15.45) and satisfying the inequality (15.61), is such that, if

$$\begin{array}{rcl} 4{J}_{+}{C}_{0L} < 1,& &\end{array}$$
(15.75)

then

$$\begin{array}{rcl} \|{T\|}_{1} \leq {C}_{1L}\|{f\|}_{0}& &\end{array}$$
(15.76)

with

$$\begin{array}{rcl}{ C}_{1L} = \frac{2{J}_{+}} {1 - 4{J}_{+}{C}_{0L}}.& &\end{array}$$
(15.77)

Proof.

Write (15.45) as

$$\begin{array}{rcl} rT \prime = v - 2T& &\end{array}$$
(15.78)

to the effect that (use (15.61) and (15.62) with u = T)

$$\begin{array}{rcl} \|{R}_{\sigma }{T \prime \|}_{0}& \leq & \|{v\|}_{0} + 2\|{T\|}_{0} \\ & \leq & \|{f\|}_{0} + 2\|{T\|}_{0} \\ & \leq & \|{f\|}_{0} + 2{C}_{0L}\ \|{T\|}_{1}.\end{array}$$
(15.79)

Now recall (15.40), basically claiming that, when α = 0,

$$\begin{array}{rcl} \|{T\|}_{1} \leq 2{J}_{+}\|{R}_{\sigma }{T \prime \|}_{0}& &\end{array}$$
(15.80)

and combine with (15.79) to get (15.76) and (15.77). \(\square \)
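To get a feeling for condition (15.75) and for the size of the constants involved, the following Python sketch (the numerical values are illustrative assumptions of ours, not taken from the text) evaluates C 0L from (15.63), checks 4J +C 0L < 1 and, when it holds, computes C 1L from (15.77).

import numpy as np

def simple_molodensky_constants(I_plus_deg, delta_R, R_plus, L):
    # C_0L of (15.63), condition (15.75) and C_1L of (15.77); sketch only.
    J_plus = 1.0 / np.cos(np.radians(I_plus_deg))
    C0L = J_plus * (delta_R / R_plus + 2.0 / (L + 2))
    condition_ok = 4.0 * J_plus * C0L < 1.0
    C1L = 2.0 * J_plus / (1.0 - 4.0 * J_plus * C0L) if condition_ok else np.inf
    return C0L, condition_ok, C1L

# Hypothetical smoothed boundary: maximum inclination 30 deg, radial excursion
# delta R ~ 9 km, R_+ ~ 6,380 km, known harmonics up to L = 20
print(simple_molodensky_constants(30.0, 9.0e3, 6.38e6, 20))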

At this point we can pass to the analysis of the linearized Molodensky problem, which we shall consider as written in the perturbative form (15.27).

Theorem 3.

The solution of the linearized Molodensky problem (15.27) exists, is unique in HH 1,2(S) and is such that

$$\begin{array}{rcl} \|{T\|}_{1} \leq {C}_{2L}\|{f\|}_{0}& &\end{array}$$
(15.81)

with

$$\begin{array}{rcl}{ C}_{2L} = {C}_{1L}{[1 - {C}_{1L}({\epsilon }_{+} + {C}_{0L}{\eta }_{+})]}^{-1},& &\end{array}$$
(15.82)

where \(\epsilon = \vert \epsilon \vert \), if the condition

$$\begin{array}{rcl} 2{C}_{0L} < \frac{1 - 2{\epsilon }_{+}{J}_{+}} {{J}_{+}(2 + {\eta }_{+})}& &\end{array}$$
(15.83)

is satisfied.

Furthermore, under the same condition (15.83), the simple iterative sequence

$$\begin{array}{rcl} r{T \prime }_{(n+1)} + 2{T}_{(n+1)}& =& f + \sum\limits_{\mathcal{l},m=0}^{L}{a}_{ \mathcal{l}m}^{(n+1)}{Z}_{ \mathcal{l}m} \\ & & +r\epsilon \cdot \nabla {T}_{(n)} + \eta {T}_{(n)} \end{array}$$
(15.84)

converges to the solution of (15.27) in HH 1,2(S).

Proof.

From the equation

$$\begin{array}{rcl} rT \prime + 2T = f + \sum\limits_{\mathcal{l},m=0}^{L}{a}_{ \mathcal{l}m}{Z}_{\mathcal{l}m} + r\epsilon \cdot \nabla T + \eta T& &\end{array}$$
(15.85)

and (15.76) of Theorem 2 we derive

$$\begin{array}{rcl} \|{T\|}_{1}& \leq & {C}_{1L}\|f + r\epsilon \cdot \nabla T + \eta {T\|}_{0} \\ & \leq & {C}_{1L}\|{f\|}_{0} + {C}_{1L}{\epsilon }_{+}\|{R}_{\sigma }\vert \nabla T{\vert \|}_{0} + {C}_{1L}{\eta }_{+}\|{T\|}_{0}.\end{array}$$
(15.86)

With the help of (15.62), (15.86) becomes

$$\begin{array}{rcl} \|{T\|}_{1} \leq {C}_{1L}\|{f\|}_{0} + {C}_{1L}{\epsilon }_{+}\|{T\|}_{1} + {C}_{1L}{\eta }_{+}{C}_{0L}\|{T\|}_{1}.& &\end{array}$$
(15.87)

Reordering, we see that if condition

$$\begin{array}{rcl}{ C}_{1L}({\epsilon }_{+} + {\eta }_{+}{C}_{0L}) < 1& &\end{array}$$
(15.88)

is satisfied, then (15.81) and (15.82) hold true.

Recalling (15.77) we verify that (15.88) is equivalent to (15.83).

Moreover, let us re-write (15.85) in the form

$$\begin{array}{rcl} & & -{A}_{1S}T -\sum\limits_{\mathcal{l},m=0}^{L}{a}_{ \mathcal{l}m}{Z}_{\mathcal{l}m} \equiv rT \prime + 2T -\sum\limits_{\mathcal{l},m=0}^{L}{a}_{ \mathcal{l}m}{Z}_{\mathcal{l}m} \\ & & = f + {D}_{1}T \equiv f + r\epsilon \cdot \nabla T + \eta T \end{array}$$
(15.89)

By means of Theorem 2, if L is such as to satisfy (15.75), (15.89) can be written, after applying the projection operator \({P}_{{V }_{L}^{\perp }}\) (cf. (15.57)) and noting that \({P}_{{V }_{L}^{\perp }}T = T\) because of the third of (15.27), as well as \({P}_{{V }_{L}^{\perp }}{A}_{1S}T = {A}_{1S}{P}_{{V }_{L}^{\perp }}T = {A}_{1S}T\), in the form

$$\begin{array}{rcl} T = -{A}_{1S}^{-1}{P}_{{ V }_{L}^{\perp }}f - {A}_{1S}^{-1}{P}_{{ V }_{L}^{\perp }}{D}_{1}T.& &\end{array}$$
(15.90)

Equation 15.90 is meaningful because A 1S is indeed invertible when restricted to \({V }_{L}^{\perp }\).

As is easily understood, condition (15.88) implies, for the operator \({A}_{1S}^{-1}{P}_{{V }_{L}^{\perp }}{D}_{1}\), which maps HH 1, 2(S) into itself,

$$\begin{array}{rcl} \|{A}_{1S}^{-1}{P}_{{ V }_{L}^{\perp }}{D}_{1}\| \leq {C}_{1L}({\epsilon }_{+} + {\eta }_{+}{C}_{0L}) < 1.& &\end{array}$$
(15.91)

So (15.88) is precisely the condition that \({A}_{1S}^{-1}{P}_{{V }_{L}^{\perp }}{D}_{1}\) is a contraction, so that (15.90) can be solved by simple iteration, as is well known. This proves the convergence of (15.84). \(\square \)

It is interesting to observe that the updating at each step of the constants \(\{{a}_{\mathcal{l}m}\}\) is necessary to implement the action of \({P}_{{V }_{L}^{\perp }}\), i.e. to guarantee that the known term of (15.84) at step n is purged of its component on V L , so that \({T}_{(n+1)}\) keeps the correct asymptotic behaviour \({T}_{(n+1)} = O\left ( \frac{1} {{r}^{L+2}} \right )\).
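In abstract terms, (15.84) is a fixed-point iteration for (15.90). A minimal finite-dimensional analogue in Python (the matrices below are arbitrary stand-ins for A 1S, \({P}_{{V }_{L}^{\perp }}\) and D 1, chosen by us only so that the contraction condition (15.91) holds; they are not the actual geodetic operators) shows the mechanism.

import numpy as np

rng = np.random.default_rng(0)
n = 50

A = np.diag(rng.uniform(1.0, 5.0, n))          # invertible, plays the role of A_1S
D = 0.02 * rng.standard_normal((n, n))         # small perturbation, like D_1
q, _ = np.linalg.qr(rng.standard_normal((n, n - 5)))
P = q @ q.T                                    # orthogonal projector, like P_{V_L^perp}
f = rng.standard_normal(n)

M = np.linalg.solve(A, P @ D)                  # A^{-1} P D, must be a contraction
print("operator norm of A^-1 P D:", np.linalg.norm(M, 2))

T = np.zeros(n)
for _ in range(60):                            # simple iteration, cf. (15.84), (15.90)
    T = -np.linalg.solve(A, P @ f) - M @ T

T_direct = np.linalg.solve(np.eye(n) + M, -np.linalg.solve(A, P @ f))
print("iteration vs direct solve:", np.linalg.norm(T - T_direct))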

4 The Analysis of the Linearized Fixed Boundary BVP

We can now switch to the discussion of the fixed-boundary BVP in linearized form, where the observation equations on the boundary S are as in (15.6).

In analogy with Definition 3, we can introduce here too the linearized problem in spherical approximation.

Definition 4 (simple Hotine’s problem). 

The simple Hotine problem, or fixed boundary problem in spherical approximation, is to find T such that

$$\begin{array}{rcl} \left \{\begin{array}{lll} \Delta T = 0 &&\text{ in }\Omega \\ rT \prime = f&&\text{on }S \\ T = O\left (\frac{1} {r}\right )&&r \rightarrow \infty.\end{array} \right.& &\end{array}$$
(15.92)

Theorem 4.

If f ∈ L 2 (S), the simple Hotine problem has one and only one solution in HH 1,2(S) satisfying the inequality

$$\begin{array}{rcl} \|{T\|}_{1} \leq 2{J}_{+}\|{f\|}_{0}.& &\end{array}$$
(15.93)

Proof.

In analogy to what we did for the simple Molodensky problem, we first transform (15.92) into an equivalent Dirichlet problem

$$\begin{array}{rcl} \left \{\begin{array}{ll} \Delta v = 0\qquad &\text{ in }\Omega \\ v = f&\text{on }S \\ v = O\left (\frac{1} {r}\right )&r \rightarrow \infty.\end{array} \right.& &\end{array}$$
(15.94)

where

$$\begin{array}{rcl} v = rT \prime .& &\end{array}$$
(15.95)

We note that (15.95) can indeed be inverted providing T as

$$\begin{array}{rcl} T(r,\sigma ) = -\int\limits_{r}^{+\infty }\frac{1} {s}v(s,\sigma )ds.& &\end{array}$$
(15.96)

Since there is one and only one v, solution of (15.94), there is one and only one T solution of (15.92), given by (15.96).

Moreover, this solution satisfies (15.93), which is nothing but the energy integral theorem (see (15.40)) applied in this case with α = 0. \(\square \)

Theorem 5.

If f ∈ L 2 (S), the linearized fixed boundary BVP (15.33) has one and only one solution T in HH 1,2(S), satisfying

$$\begin{array}{rcl} \|{T\|}_{1} \leq {C}_{3L}\|{f\|}_{0},& &\end{array}$$
(15.97)

where

$$\begin{array}{rcl}{ C}_{3L} = 2{J}_{+}{(1 - 2{\epsilon }_{+}{J}_{+})}^{-1},& & \\ \end{array}$$

on condition that

$$\begin{array}{rcl} 2{\epsilon }_{+}{J}_{+} < 1.& &\end{array}$$
(15.98)

Proof.

We apply (15.93) to the boundary condition in (15.33), obtaining

$$\begin{array}{rcl} \|{T\|}_{1} \leq 2{J}_{+}\|{f\|}_{0} + 2{J}_{+}{\epsilon }_{+}\|{T\|}_{1}\ ;& &\end{array}$$
(15.99)

if condition (15.98) is satisfied, (15.97) is proved with

$$\begin{array}{rcl}{ C}_{3L} = 2{J}_{+}{(1 - 2{\epsilon }_{+}{J}_{+})}^{-1}.& & \\ \end{array}$$

\(\square \)

Remark 1.

It goes without saying that, defining \({A}_{2S} = -\frac{\partial } {\partial r}\) and \({D}_{2} = r\epsilon \cdot \nabla \), the condition (15.98) guarantees that, similarly to (15.84), the iterative scheme

$$\begin{array}{rcl} -{A}_{2S}{T}_{n+1} = f + {D}_{2}{T}_{n}& & \\ \end{array}$$

is convergent in HH 1, 2(S).

Remark 2.

The very ease with which we obtain the result of Theorem 5, as compared with the difficulty, or at least the complexity, of proving Theorem 3, is a clear symptom of the superiority of the linearized fixed boundary BVP with respect to the linearized Molodensky problem. As a matter of fact, the conditions under which we can guarantee the well-posedness (existence, uniqueness and stability of the solution) of the latter are more demanding than for the former.

In fact, remember that to prove Theorem 3 we assumed the harmonic coefficients of the asymptotic development of T to be already known up to degree L.

In Exercises 24 the reader is invited to relate the conditions of validity of Theorems 3 and 5 to the geometry of the boundary.

5 From Least Squares to Galerkin’s Method

Now that existence and stability of problems (15.27) and (15.32) have been studied, we would like to implement a numerical method to approximate the solution.

This can be done by constructing some finite dimensional subspace of HH 1, 2(S), where we can look for a model potential T M (r, σ) such that the corresponding boundary values f M (σ) approximate the data f(σ) in L 2(S); this is basically what can be called the least squares method in a Hilbert space.

Note that if we can reasonably guarantee that \({f}_{M} \rightarrow f\) in L 2(S), Theorems 3 and 5 tell us that \({T}_{M} \rightarrow T\) in HH 1, 2(S).

Yet, as already observed, the harmonic coefficients expressing T M between degrees L and M will vary from one solution to another, as one can see for instance from the plots of the degree variances of EGM96 and EGM08 (see Fig. 3.6 in Part I).

The two models in fact, although using different data, have been computed with essentially the same methodology which, as we shall see in the rest of this section and in the next, can be traced back to an approximation of a least squares solution.

A natural subspace useful for our purpose is that generated by the outer spherical harmonics \({S}_{\mathcal{l}m}(r,\sigma )\). In fact let us put

$$\begin{array}{rcl} & & {H}_{\mathit{LM}} = \text{ Span}\left \{{S}_{\mathcal{l}m}(r,\sigma ),\ \vert m\vert \leq \mathcal{l},\ L \leq \mathcal{l} \leq M\right \}, \\ & & {S}_{\mathcal{l}m}(r,\sigma ) ={ \left (\frac{R} {r} \right )}^{\mathcal{l}+1}{Y }_{ \mathcal{l}m}(\sigma ) \end{array}$$
(15.100)

In (15.100) it is naturally not strictly necessary to use the radius R, yet it is numerically convenient, and in particular it is convenient to put R equal to the mean value of R σ, in such a way that \(\delta \overline{R} = {R}_{\sigma } - R\) is on average as small as possible. It has to be noted that, by suitably choosing L > 0, the harmonic functions in H LM are all \(O\left ( \frac{1} {{r}^{L+1}} \right )\) at infinity; this fact can be used to automatically account for the case in which we have a-priori knowledge of the coefficients up to degree L − 1. This means that we shall look for a potential T M of the form

$$\begin{array}{rcl}{ T}_{M}(r,\sigma ;\mathbf{T}) = \sum\limits_{\mathcal{l}=L}^{M}\ \sum\limits_{m=-\mathcal{l}}^{\mathcal{l}}{T}_{ \mathcal{l}m}{S}_{\mathcal{l}m}(r,\sigma ),& &\end{array}$$
(15.101)

depending on the vector of unknown parameters \(\mathbf{T} =\{ {T}_{\mathcal{l}m}\}\), and we shall compute from it the data model

$$\begin{array}{rcl}{ f}_{M}(\sigma )& =& {A}_{1}{T}_{M} -\sum\limits_{\mathcal{l}=0}^{L-1}\ \sum\limits_{m=-\mathcal{l}}^{\mathcal{l}}{a}_{ \mathcal{l}m}{Z}_{\mathcal{l}m} \\ & =& \sum\limits_{\mathcal{l}=L}^{M}\ \sum\limits_{m=-\mathcal{l}}^{\mathcal{l}}{T}_{ \mathcal{l}m}{A}_{1}({S}_{\mathcal{l}m}) -\sum\limits_{\mathcal{l}=0}^{L-1}\ \sum\limits_{m=-\mathcal{l}}^{\mathcal{l}}{a}_{ \mathcal{l}m}{Z}_{\mathcal{l}m},\end{array}$$
(15.102)

with

$$\begin{array}{rcl}{ A}_{1} = r\left (- \frac{\partial } {\partial h} + \frac{\gamma \prime } {\gamma }\right )& &\end{array}$$
(15.103)

for the linearized Molodensky problem, and

$$\begin{array}{rcl}{ f}_{M}(\sigma ) = {A}_{2}{T}_{M} = \sum\limits_{\mathcal{l}=L}^{M}\ \sum\limits_{m=-\mathcal{l}}^{\mathcal{l}}{T}_{ \mathcal{l}m}{A}_{2}({S}_{\mathcal{l}m})& &\end{array}$$
(15.104)

with

$$\begin{array}{rcl}{ A}_{2} = r\left (- \frac{\partial } {\partial h}\right )& &\end{array}$$
(15.105)

for the linearized fixed boundary BVP.

Note that f M (σ) is a linear function of the unknown vector \(\mathbf{T}\) too. The idea is that we should use the coefficients \(\mathbf{T}\) (and \(\{{a}_{\mathcal{l}m}\}\) in the case of (15.102)) at our disposal to produce by means of f M (σ) the best approximation of f(σ) in the sense of L 2(S); Theorems 3 and 5 then tell us that we are at the same time approximating T in HH 1, 2(S). In other words we have to solve the least squares problem

$$ \begin{array}{rcl} {\min }_{\mathbf{T}}\|f(\sigma ) - {f}_{M}{(\sigma )}\|_{0}^{2}\ ;& & \\ \end{array}$$

the minimization extends to \(\{{a}_{\mathcal{l}m}\}\) in the case (15.102).

Recalling that f M (σ) is given by (15.102), with the operator A 1 defined by (15.103), we obtain for such a problem the linear system

$$\begin{array}{rcl} \left \{\begin{array}{l} \sum\limits_{j=L}^{M}\ \sum\limits_{k=-j}^{j}{\left < {A}_{ 1}{S}_{\mathcal{l}m},\ {A}_{1}{S}_{jk}\right >}_{0}{T}_{jk}+ \\ -\sum\limits_{j=0}^{L-1}\ \sum\limits_{k=-j}^{j}{\left < {A}_{ 1}{S}_{\mathcal{l}m},\ {Z}_{jk}\right >}_{0}{a}_{jk} ={ \left < {A}_{1}{S}_{\mathcal{l}m},\ f\right >}_{0} \\ -\sum\limits_{j=L}^{M}\ \sum\limits_{k=-j}^{j}{\left < {Z}_{ \mathcal{l}m},\ {A}_{1}{S}_{jk}\right >}_{0}{T}_{jk}+ \\ + \sum\limits_{j=0}^{L-1}\ \sum\limits_{k=-j}^{j}{\left < {Z}_{ \mathcal{l}m},\ {Z}_{jk}\right >}_{0}{a}_{jk} = -{\left < {Z}_{\mathcal{l}m},\ f\right >}_{0}.\end{array} \right.& &\end{array}$$
(15.106)

In this system the first equations hold for all compatible m and \(L \leq \mathcal{l} \leq M\), while the second equations hold for \(0 \leq \mathcal{l} \leq L - 1\).

Similarly, but in a simpler form, for the problem (15.104) one arrives at

$$\begin{array}{rcl} \sum\limits_{j=L}^{M}\ \sum\limits_{k=-j}^{j}{\left < {A}_{ 2}{S}_{\mathcal{l}m},\ {A}_{2}{S}_{jk}\right >}_{0}{T}_{jk} ={ \left < {A}_{2}{S}_{\mathcal{l}m},\ f\right >}_{0}.& &\end{array}$$
(15.107)

The numerical solution of (15.106) or (15.107) cannot be obtained by direct methods, when M is as large as several thousands, because the normal matrices implied are fully populated, since the integrals implicit in the scalar products are performed on functions like \({S}_{\mathcal{l}m}({R}_{\sigma },\sigma )\) or \({Z}_{\mathcal{l}m}({R}_{\sigma },\sigma )\), which are not orthogonal in L 2(S).

Nevertheless they can be solved iteratively by taking the perturbative structure of \({A}_{1},{A}_{2}\) into account.

In order to simplify our reasoning we shall for the moment forget the \(\{{a}_{jk}\}\) unknowns and we shall rather treat the principal part of (15.106) and, in a perfectly parallel way, of (15.107). We leave to the last part of this section the task of reintroducing \({Z}_{\mathcal{l}m}\) into the perturbative scheme.

Remember that \({\|\ \|}_{0}\) and the scalar product in (15.106) should be consistent with (15.11); however in the present context we choose to map the points P of the surface S on the unit sphere σ as shown in Fig. 15.1.

Fig. 15.1 The mapping \(P \rightarrow {P}_{e} \rightarrow {P}_{0}\) from S to σ

Subsequently we decide to write the L 2 scalar product in an equivalent form as

$$\begin{array}{rcl} \|f{(P)\|}_{0}^{2} = \frac{1} {4\pi }\int\limits {f}^{2}(P)d{\sigma }_{{ P}_{0}}.& &\end{array}$$
(15.108)

With this proviso, (15.106) yields the solution of the least squares principle

$$ \begin{array}{rcl} {\min }_{{T}_{M}}\|r\Delta g - r{\Delta g{}_{M}\|}^{2} ={\min }_{{ T}_{M}} \frac{1} {4\pi }\int\limits {[\Delta g(P) - \Delta {g}_{M}(P)]}^{2}{r}_{ P}^{2}d{\sigma }_{{ P}_{0}};\qquad & &\end{array}$$
(15.109)

where P is mapped to P 0 as explained above.

Also, to simplify the notation, in the following formulas we do not write explicitly the range of summation of the indices \(\mathcal{l},m\) or j, k, which however are assumed to run over \(-\mathcal{l} \leq m \leq \mathcal{l},\ \mathcal{l} = L,\ldots ,M\), and so forth. Therefore we can write the main part of (15.106) more explicitly as

$$\begin{array}{rcl} & & \sum\limits_{\mathcal{l},m}{\left < {A}_{1}{S}_{jk},{A}_{1}{S}_{\mathcal{l}m}\right >}_{0}{T}_{\mathcal{l}m} \\ & & =\sum\limits_{\mathcal{l},m}\left \{ \frac{1} {4\pi }\int\limits ({A}_{1}{S}_{jk}(P))\frac{GM} {\overline{R}} ({A}_{1}{S}_{\mathcal{l}m}(P))d{\sigma }_{{P}_{0}}\right \}{T}_{\mathcal{l}m} \\ & & = \frac{1} {4\pi }\int\limits ({A}_{1}{S}_{jk}(P))\Delta g(P){r}_{P}d{\sigma }_{{P}_{0}} ={ \left < {A}_{1}{S}_{jk},r\Delta g\right >}_{0}\end{array}$$
(15.110)

where the point P is restricted to the surface S, i.e. we set

$$\begin{array}{rcl}{ r}_{P} = R + \delta R({\vartheta},\lambda )& & \\ \end{array}$$

so that we have

$$\begin{array}{rcl}{ S}_{\mathcal{l}m}(P) ={ \left [ \frac{R} {R + \delta R({\vartheta},\lambda )}\right ]}^{\mathcal{l}+1}{Y }_{ \mathcal{l}m}({\vartheta},\lambda ).& &\end{array}$$
(15.111)

Remark 3.

Since we do not really have observations Δg(P) at every point of the boundary, (15.110) can be conveniently discretized in the following way.

Let B rs denote the geographic square

$$\begin{array}{rcl} \left \{\begin{array}{l} {B}_{rs} =\{ r\Delta \leq {\vartheta} < (r + 1)\Delta ,\ s\Delta \leq \lambda \leq (s + 1)\Delta \} \\ r = 0,1,\ldots N - 1,\ s = 0\ldots 2N - 1,\ \Delta = \frac{18{0}^{\circ }} {N}\end{array} \right.& &\end{array}$$
(15.112)

and put

$$\begin{array}{rcl} \left \{\begin{array}{l} {(\overline{f})}_{rs} = \frac{1} {\vert {B}_{rs}\vert }\int\limits_{{B}_{rs}}f({\vartheta},\lambda )d\sigma \\ \vert {B}_{rs}\vert = \text{ area of }{B}_{rs} = \Delta [\cos r\Delta -\cos (r + 1)\Delta ]\cong\sin {{\vartheta}}_{r} \cdot {\Delta }^{2} \\ {{\vartheta}}_{r} = (r + \frac{1} {2})\Delta \end{array} \right.\qquad & &\end{array}$$
(15.113)

If we have N rs observations (or point values) of f in B rs we can put, instead of (15.113),

$$\begin{array}{rcl}{ (\overline{f})}_{\mathit{rs}} = \frac{1} {{N}_{\mathit{rs}}}{\Sigma }_{n}f({P}_{n})\ ;\ {P}_{n} \in {B}_{\mathit{rs}},\quad & &\end{array}$$
(15.114)

with an error which tends to zero if f is smooth enough and \({N}_{\mathit{rs}} \rightarrow \infty \), i.e. the data density tends to infinity.

Accordingly the elements of the normal matrix of (15.110) can be approximated by

$$\begin{array}{rcl} \frac{1} {4\pi }{\Sigma }_{\mathit{rs}}{(\overline{{A}_{1}{S}_{jk}})}_{\mathit{rs}}\frac{GM} {\overline{R}} {(\overline{{A}_{1}{S}_{\mathcal{l}m}})}_{\mathit{rs}}\vert {B}_{\mathit{rs}}\vert = {D}_{jk,\mathcal{l}m},& &\end{array}$$
(15.115)

where the summation is over all the blocks \({B}_{\mathit{rs}}\) covering the unit sphere σ; the known term of (15.110) can be written as

$$\begin{array}{rcl} \frac{1} {4\pi }{\Sigma }_{\mathit{rs}}{(\overline{{A}_{1}{S}_{jk}})}_{\mathit{rs}}{(\overline{r\Delta g})}_{\mathit{rs}}\vert {B}_{\mathit{rs}}\vert = {F}_{jk},& &\end{array}$$
(15.116)

so that (15.110) becomes the algebraic system

$$\begin{array}{rcl} \sum\limits_{\mathcal{l}=L}^{M}\ \sum\limits_{m=-\mathcal{l}}^{\mathcal{l}}{D}_{ jk,\mathcal{l}m}{T}_{\mathcal{l}m} = {F}_{jk}.& &\end{array}$$
(15.117)
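
To give an idea of how (15.115)–(15.117) can be assembled and solved, the following sketch works in a deliberately simplified setting: zonal harmonics only, S taken as the sphere itself, and \(A_1 S_{jk}\) replaced by its spherical-approximation value \((j-1)S_{jk}\) of (15.123); the degree band, the resolution and the synthetic data are our own choices, so this only illustrates the bookkeeping, not an operational implementation.

import numpy as np
from scipy.special import eval_legendre

# Toy assembly of the normal matrix D (15.115), the known term F (15.116)
# and the solution of (15.117), with zonal harmonics only and S = sphere.
GM_over_R = 6.26e7                      # GM/R, acts only as a scale factor
N = 18                                  # latitude bands, Delta = 180/N degrees
Delta = np.pi / N
degrees = [2, 3, 4, 5]                  # toy degree band L..M (zonal only)

r_idx = np.arange(N)
theta_c = (r_idx + 0.5) * Delta                        # block-centre colatitudes
area = Delta * (np.cos(r_idx * Delta) - np.cos((r_idx + 1) * Delta))  # |B_rs|
area = np.repeat(area, 2 * N)
theta = np.repeat(theta_c, 2 * N)

def Y(l, th):
    """L2(sigma)-orthonormal zonal harmonic sqrt(2l+1) P_l(cos theta)."""
    return np.sqrt(2 * l + 1) * eval_legendre(l, np.cos(th))

# synthetic block means of r*Delta g generated from "true" coefficients
rng = np.random.default_rng(1)
T_true = rng.normal(size=len(degrees))
rdg = sum((l - 1) * GM_over_R * c * Y(l, theta) for l, c in zip(degrees, T_true))

D = np.zeros((len(degrees), len(degrees)))
F = np.zeros(len(degrees))
for a, j in enumerate(degrees):
    Aj = (j - 1) * GM_over_R * Y(j, theta)             # ~ A_1 S_jk, cf. (15.123)
    F[a] = np.sum(Aj * rdg * area) / (4 * np.pi)
    for b, l in enumerate(degrees):
        Al = (l - 1) * GM_over_R * Y(l, theta)
        D[a, b] = np.sum(Aj * Al * area) / (4 * np.pi)

T_est = np.linalg.solve(D, F)                          # solve (15.117)
print(np.allclose(T_est, T_true))                      # True in this toy setting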

Remark 4.

In spite of its deceptive simplicity, one has to consider that (15.117), written for instance for M = 2,160 and L = 20, implies the solution of a system, with no particular structure, with 4,665,200 unknowns. This is a formidable numerical task which at present can be tackled only by suitable sequential techniques, to get at least an approximate solution. This is exactly what we shall illustrate, guided by geodetic intuition.

We shall obtain a simple solution of (15.117) through some steps:

(a)

    First of all we put our basic equation into a perturbative form, namely, recalling (15.30) and (15.31),

    $$\begin{array}{rcl}{ A}_{1}T = {A}_{1S}T + {D}_{1}T = r\Delta g& & \end{array}$$
    (15.118)

    with

    $$\begin{array}{rcl}{ A}_{1S} = -r \frac{\partial } {\partial r} - 2& & \end{array}$$
    (15.119)

    and

    $$\begin{array}{rcl} \left \{\begin{array}{l} {D}_{1} = r(\epsilon \cdot \nabla + \eta ) \\ \epsilon = ({\mathbf{e}}_{r} -\nu ),\ \eta = \frac{2} {r} + \frac{\gamma \prime } {\gamma }.\end{array} \right.& & \end{array}$$
    (15.120)

    Since (see Proposition 3)

    $$\begin{array}{rcl} \vert \epsilon \vert \leq \frac{1} {2}{e}^{2}\quad \vert \eta \vert \leq 2{e}^{2}& & \end{array}$$
    (15.121)

    we can conveniently write (15.118) as

    $$\begin{array}{rcl}{ A}_{1S}T = \left (-r \frac{\partial } {\partial r} - 2\right )T = r\Delta g - r\epsilon \cdot \nabla T - r\eta T.& & \end{array}$$
    (15.122)

    If we have a good prior model we can compute \(\Delta {g}_{c} = \Delta g -\epsilon \cdot \nabla T - \eta T\) from it, or else we can iterate to get a better solution.

    Moreover, since

    $$\begin{array}{rcl}{ A}_{1S}{S}_{jk} = (j - 1){S}_{jk},& & \end{array}$$
    (15.123)

    (15.110) simplifies to

    $$\begin{array}{rcl}{ \left < {S}_{jk},\ {A}_{1S}{T}_{M}\right >}_{0} ={ \left < {S}_{jk},\ r\Delta {g}_{c}\right >}_{0}.& & \end{array}$$
    (15.124)

Remark 5.

Equation 15.124 has a nice functional interpretation. In fact let us put

$$\begin{array}{rcl}{ V }_{L,M} = \text{ Span}\{{S}_{\mathcal{l}m}{\vert }_{S}\ ;\ \vert m\vert \leq \mathcal{l},\ \mathcal{l} = L,\ldots M\},& & \\ \end{array}$$

i.e. the subspace generated by linear combinations of functions \(\{{S}_{\mathcal{l}m}{\vert }_{S},{S}_{\mathcal{l}m}\,\in \,{H}_{\mathit{LM}}\}\) where, as in (15.111), we mean \({S}_{\mathcal{l}m}{\vert }_{S} = {S}_{\mathcal{l}m}(R + \delta R({\vartheta},\lambda ),{\vartheta},\lambda )\). If we call P M the orthogonal projector of L 2 on V L, M , we find T M by solving (recall (15.100))

$$\begin{array}{rcl}{ P}_{M}({A}_{1S}{T}_{M}) = {P}_{M}(r\Delta {g}_{c}),\ {T}_{M} \in {H}_{L,M},& &\end{array}$$
(15.125)

i.e. by projecting the original equation onto V L, M .

In fact one way to express that \(({A}_{1S}{T}_{M})\) and \(r\Delta {g}_{c}\) have the same projection on V L, M is exactly to require that they have the same scalar product with a basis of V L, M , which in our case is (15.124).

Such a method of simple projection is known in functional analysis as Galerkin’s method. The interested reader can find much more material in the mathematical literature, e.g. in Mikhlin (1964) and Kirsch (1996). So by switching from the general form of the operator A 1 to its spherical approximation we find that least squares equations become identical to Galerkin’s equations.

To perform the next step (b) we need to understand more clearly how Galerkin’s method works. Basically we could say that, given two Hilbert spaces X, Y and a bounded operator \(A : X \rightarrow Y\), which we already know to have a bounded inverse \({A}^{-1} : Y \rightarrow X\), we want to solve the infinite dimensional equation

$$\begin{array}{rcl} Ax = y.& &\end{array}$$
(15.126)

In order to make (15.126) finite dimensional, we first select two sequences of subspaces W M and V M in X and Y respectively

$$\begin{array}{rcl} & & {W}_{M} = \text{ Span}\{{\xi }_{n},n = 1\ldots M\} \\ & & {V }_{M} = \text{ Span}\{{\eta }_{n},n = 1\ldots M\}, \\ \end{array}$$

such that {ξ n } and {η n } are complete (generally non-orthonormal) bases, each in its own space. Then we substitute (15.126) with the finite dimensional square system

$$\begin{array}{rcl} \sum\limits_{m=1}^{M}\left < {\eta }_{ n},A{\xi }_{m}\right >{\lambda }_{m} = \left < {\eta }_{n},y\right >,& &\end{array}$$
(15.127)

where

$$\begin{array}{rcl}{ x}_{M} = \sum\limits_{m=1}^{M}{\lambda }_{ m}{\xi }_{m}.& &\end{array}$$
(15.128)

gives an approximation of x.

When

$$\begin{array}{rcl}{ \eta }_{n} = A{\xi }_{n}& & \\ \end{array}$$

the system (15.127) is the same as that of least squares and the convergence of x M given by (15.128) to the right solution is guaranteed, otherwise it has to be studied case by case.
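
A compact numerical illustration of (15.126)–(15.128) follows; the dimension, the operator A and the bases are purely illustrative choices of ours. With the Galerkin choice η n = Aξ n the system (15.127) becomes the least squares normal system, and the residual of the approximate solution is orthogonal to the test space.

import numpy as np

# Toy Galerkin solver for Ax = y on X = Y = R^100.
rng = np.random.default_rng(2)
n, M = 100, 12
A = np.eye(n) + 0.1 * rng.normal(size=(n, n))   # bounded, with bounded inverse
x_true = rng.normal(size=n)
y = A @ x_true

xi = rng.normal(size=(n, M))                    # columns of xi span W_M
eta = A @ xi                                    # Galerkin choice eta_n = A xi_n

G = eta.T @ (A @ xi)                            # <eta_n, A xi_m>, see (15.127)
b = eta.T @ y                                   # <eta_n, y>
lam = np.linalg.solve(G, b)
x_M = xi @ lam                                  # approximation (15.128)

# the Galerkin condition: the residual is orthogonal to Span{eta_n}
print(np.allclose(eta.T @ (A @ x_M - y), 0.0, atol=1e-8))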

So up to the level of the system (15.124) we are on fully justified theoretical ground. Specifically in this case we have

$$\begin{array}{rcl} \{{\xi }_{n}\} \equiv \{ {S}_{\mathcal{l}m}(r,{\vartheta},\lambda )\}& &\end{array}$$
(15.129)

in the space \(X \equiv H{H}^{1,2}(S)\), i.e. the space of potentials T, while

$$\begin{array}{rcl} \{{\eta }_{n}\} =\{ (\mathcal{l} - 1){S}_{\mathcal{l}m}(R + \delta R({\vartheta},\lambda ),{\vartheta},\lambda )\}.& &\end{array}$$
(15.130)

Note that the use of the same functions \({S}_{\mathcal{l}m}\) for ξ n and η n should not be misunderstood; in fact {ξ n } are potentials defined in Ω, while {η n } are surface functions in L 2(S), i.e. defined on S only.

The next step (b) is then based on the remark that \({S}_{\mathcal{l}m}(R + \delta R,{\vartheta},\lambda )\) are quite close to \({Y }_{\mathcal{l}m}({\vartheta},\lambda )\), because \(O(\delta R/R) = 1{0}^{-3}\) at most:

(b)

    We decide now to use \({Y }_{jk}({\vartheta},\lambda )\) instead of \({S}_{jk}{\vert }_{S}\) in (15.124), namely to substitute (15.124) with

    $$\begin{array}{rcl}{ \left < {Y }_{jk},\ {A}_{1S}{T}_{M}\right >}_{0} ={ \left < {Y }_{jk},r\Delta {g}_{c}\right >}_{0}.& & \end{array}$$
    (15.131)

    Numerical experiments fully support this choice.

Finally a third step has to be taken to arrive at a tractable solution. We concentrate on the left-hand side of (15.131) and first of all we set up the following identity:

$$\begin{array}{rcl} & &{ \left < {Y }_{jk},{A}_{1S}{T}_{M}\right >}_{0} \\ & & = {\Sigma }_{\mathcal{l},m}{T}_{\mathcal{l}m}\frac{GM} {R} (\mathcal{l} - 1) \cdot \frac{1} {4\pi }\int\limits_{\sigma }{Y }_{jk}({\vartheta},\lambda ){Y }_{\mathcal{l}m}({\vartheta},\lambda ) \cdot {\left ( \frac{R} {R + \delta R}\right )}^{\mathcal{l}+1}d\sigma \\ & & = \frac{GM} {R} {T}_{jk}(j - 1) \\ & & +\frac{GM} {R} {\Sigma }_{\mathcal{l},m}{T}_{\mathcal{l}m}(\mathcal{l} - 1) \cdot \frac{1} {4\pi }\int\limits_{\sigma }{Y }_{jk}({\vartheta},\lambda ){Y }_{\mathcal{l}m}({\vartheta},\lambda ) \cdot \left [{\left ( \frac{R} {R + \delta R}\right )}^{\mathcal{l}+1} - 1\right ]d\sigma \\ & & = \frac{GM} {R} {T}_{jk}(j - 1) + \frac{GM} {R} {\Sigma }_{\mathcal{l},m}{T}_{\mathcal{l}m}(\mathcal{l} - 1)\left < {Y }_{jk},{Y }_{\mathcal{l}m}{W}_{\mathcal{l}}\right > \\ & & = \frac{GM} {R} {T}_{jk}(j - 1) +{ \left < {Y }_{jk},r\Delta {g}_{M}(r,{\vartheta},\lambda ) - R\Delta {g}_{M}(R,{\vartheta},\lambda )\right >}_{0},\quad (r = R + \delta R).\end{array}$$
(15.132)

Notice that \({W}_{\mathcal{l}} = \left [{\left ( \frac{R} {R+\delta R}\right )}^{\mathcal{l}+1} - 1\right ]\) are weights which depend on \(({\vartheta},\lambda )\), because δR is a function of these variables, and which are generally small; as a matter of fact Table 15.1 gives an idea of the behaviour of \({W}_{\mathcal{l}}\) as functions of δR at very high frequencies \((\mathcal{l} = \mbox{ 2,000})\).

Table 15.1 Weights \({W}_{\mathcal{l}}\) at degree 2,000 for various heights δR
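
A few indicative values of these weights can be generated directly; the mean radius R and the sample heights δR in the short computation below are our own choices and are not taken from Table 15.1.

import numpy as np

# Weights W_l = (R/(R + dR))**(l+1) - 1 of (15.132) at degree l = 2000
R = 6371.0                                # mean radius [km], illustrative value
l = 2000
for dR in [0.1, 0.5, 1.0, 3.0, 6.0]:      # heights [km]
    W = (R / (R + dR)) ** (l + 1) - 1.0
    print(f"dR = {dR:4.1f} km   W_{l} = {W:+.3f}")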

So we can take the final step:

(c)

    We write (15.131), using (15.132) too, in the form

    $$\begin{array}{rcl}{ T}_{jk}& =& \frac{1} {j - 1}{\left (\frac{GM} {R} \right )}^{-1}\left < {Y }_{ jk},r\Delta {g}_{c}\right > - \frac{1} {j - 1}{\left (\frac{GM} {R} \right )}^{-1} \\ & & \cdot {\left < {Y }_{jk},[r\Delta {g}_{M}(r,{\vartheta},\lambda ) - R\Delta {g}_{M}(R,{\vartheta},\lambda )]\right >}_{0} \\ & =& \frac{1} {j - 1}{\left (\frac{GM} {R} \right )}^{-1}\left < {Y }_{ jk},\{r\Delta {g}_{c} - [r\Delta {g}_{M}(r,{\vartheta},\lambda )\right. \\ & & -{\left.R\Delta {g}_{M}(R,{\vartheta},\lambda )]\}\right >}_{0}. \end{array}$$
    (15.133)

    If we remember what Δg c is in terms of the original free air anomalies, we can ultimately rewrite (15.133) in the perturbative form

    $$\begin{array}{rcl}{ T}_{jk}& =& \frac{1} {j - 1}{\left (\frac{GM} {R} \right )}^{-1}\left < {Y }_{ jk},\{r\Delta g - r\epsilon \cdot \nabla {T}_{M} - r\eta {T}_{M}\right. \\ & & -{\left.[r\Delta {g}_{M}(r,{\vartheta},\lambda ) - R\Delta {g}_{M}(R,{\vartheta},\lambda )]\}\right >}_{0} \end{array}$$
    (15.134)

    where, on the right-hand side, \({T}_{M},\Delta {g}_{M}\) do depend on \(\{{T}_{jk}\}\) and r means \(r = R + \delta R({\vartheta},\lambda )\).

In principle (15.134), with the above remarks, is still an exact way of writing the Galerkin system, to find an approximate solution \(\{{T}_{jk},\vert k\vert \leq j,\ j = L,\ldots M\}\) from the datum Δg on S.

Its true numerical implementation naturally passes through a discretization of integrals similar to that presented in (15.115).

We note that if all the terms appearing in the right hand side of (15.134) can be computed as “corrective terms” from some prior model \({({T}_{M})}_{\mathrm{prior}}\), then indeed (15.134) gives us the sought solution straightforwardly. Otherwise we still have to work on the right hand side, as explained in the next section.

Remark 6.

We want to call attention to a point that has not been treated in this section. Namely, we have just used a sphere of radius R as reference for the topography, while it would have been more suitable to use directly the ellipsoid \(\mathcal{E}\). This can be done, without much difficulty, by using the ellipsoidal harmonics representation of Part I, Sect. 3.9, with the difference that now it is no longer true that \(rA{S}_{\mathcal{l}m}^{e} = {\lambda }_{\mathcal{l}m}{S}_{\mathcal{l}m}^{e}\) for some constant \({\lambda }_{\mathcal{l}m}\), as happened in (15.123); so one further approximation has to be introduced, or the already approximated “radial” functions of Part I, (3.197) are to be used.

What has been done for the Molodensky boundary operator A 1 can be repeated step by step for the Hotine operator A 2. The only difference is in the perturbative decomposition (cf. Sect. 14.4, Remark 1)

$$\begin{array}{rcl}{ A}_{2} = {A}_{2S} + {D}_{2}& &\end{array}$$
(15.135)

where now

$$\begin{array}{rcl} \left \{\begin{array}{l} {A}_{2S} = -r \frac{\partial } {\partial r} \\ {D}_{2} = r\epsilon \cdot \nabla.\end{array} \right.& &\end{array}$$
(15.136)

Naturally the known term in this case is rδg, and instead of (15.123) we now have

$$\begin{array}{rcl}{ A}_{2S}{S}_{jk} = (j + 1){S}_{jk}.& &\end{array}$$
(15.137)

Accordingly the analogue of the perturbative system (15.134) becomes

$$\begin{array}{rcl}{ T}_{jk}& =& \frac{1} {j + 1}{\left (\frac{GM} {R} \right )}^{-1}\left < {Y }_{jk},\{r\delta g - r\epsilon \cdot \nabla {T}_{M} - [r\delta {g}_{M}(r,{\vartheta},\lambda )\right. \\ & & -\left.R\delta {g}_{M}(R,{\vartheta},\lambda )]\}\right >, \end{array}$$
(15.138)

with \(r = R + \delta R({\vartheta},\lambda )\).

As promised we return finally to the true normal system (15.106) for Molodensky’s problem.

Before doing that we simplify somewhat our equations, at least in notation, by recognizing that our perturbative scheme can be reduced to writing the following identities

$$\begin{array}{rcl}{ A}_{1}{S}_{\mathcal{l}m}& =& (\mathcal{l} - 1){S}_{\mathcal{l}m} + r\epsilon \cdot \nabla {S}_{\mathcal{l}m} + r\eta {S}_{\mathcal{l}m} \\ & =& (\mathcal{l} - 1){Y }_{\mathcal{l}m} + (\mathcal{l} - 1)[{S}_{\mathcal{l}m} - {Y }_{\mathcal{l}m}] + r\epsilon \cdot \nabla {S}_{\mathcal{l}m} + r\eta {S}_{\mathcal{l}m} \\ & =& (\mathcal{l} - 1){Y }_{\mathcal{l}m} + {\Psi }_{\mathcal{l}m} \end{array}$$
(15.139)
$$\begin{array}{rcl}{ A}_{2}{S}_{\mathcal{l}m}& =& (\mathcal{l} + 1){S}_{\mathcal{l}m} + r\epsilon \cdot \nabla {S}_{\mathcal{l}m} \\ & =& (\mathcal{l} + 1){Y }_{\mathcal{l}m} + (\mathcal{l} + 1)[{S}_{\mathcal{l}m} - {Y }_{\mathcal{l}m}] + r\epsilon \cdot \nabla {S}_{\mathcal{l}m} \\ & =& (\mathcal{l} + 1){Y }_{\mathcal{l}m} + {\Phi }_{\mathcal{l}m} \end{array}$$
(15.140)

and considering \({\Psi }_{\mathcal{l}m},{\Phi }_{\mathcal{l}m}\) as perturbations of the principal terms \((\mathcal{l} \pm 1){Y }_{\mathcal{l}m}\). In (15.139) and (15.140) \({S}_{\mathcal{l}m}\) means \({S}_{\mathcal{l}m}(R + \delta R({\vartheta},\lambda ),{\vartheta},\lambda )\).

Now we return to (15.106) and, in order to make our solution more transparent, we do so in an example, assuming that S itself is a sphere, so that \({Y }_{\mathcal{l}m} - {S}_{\mathcal{l}m}{\vert }_{S} \equiv 0\).

Example 1.

Assume S to be a sphere. We note immediately that in this case, according to the definition (15.14) of \({Z}_{\mathcal{l}m}\), we can take directly

$$\begin{array}{rcl}{ Z}_{\mathcal{l}m}(R,{\vartheta},\lambda ) = {Y }_{\mathcal{l}m}({\vartheta},\lambda ).& &\end{array}$$
(15.141)

Accordingly (15.106) becomes

$$\begin{array}{rcl}{ (\mathcal{l} - 1)}^{2}{T}_{ \mathcal{l}m}& =& -{\left < {\Psi }_{\mathcal{l}m},\sum\limits_{j=L}^{M}\ \sum\limits_{k=-j}^{j}(j - 1){T}_{ jk}{Y }_{jk}\right >}_{0} + \\ & & -(\mathcal{l} - 1){\left < {Y }_{\mathcal{l}m},\sum\limits_{j=L}^{M}\ \sum\limits_{k=-j}^{j}{T}_{ jk}{\Psi }_{jk}\right >}_{0}\ + \\ & & -{\left < {\Psi }_{\mathcal{l}m},\sum\limits_{j=L}^{M}\ \sum\limits_{k=-j}^{j}{T}_{ jk}{\Psi }_{jk}\right >}_{0} + \sum\limits_{j=0}^{L-1}\ \sum\limits_{k=-j}^{j}{\left < {\Psi }_{ \mathcal{l}m},{Y }_{jk}\right >}_{0}{a}_{jk} \\ & & +{\left < {A}_{1}{S}_{\mathcal{l}m},r\Delta g\right >}_{0}, \end{array}$$
(15.142)
$$\begin{array}{rcl}{ a}_{\mathcal{l}m}& =&{ \left < {Y }_{\mathcal{l}m},\sum\limits_{j=L}^{M}\ \sum\limits_{k=-j}^{j}{T}_{ jk}{\Psi }_{jk}\right >}_{0} -{\left < {Y }_{\mathcal{l}m},r\Delta g\right >}_{0}.\end{array}$$
(15.143)

Note that in deriving (15.142) and (15.143) one has to exploit carefully the fact that \(\sum\limits_{j,k=0}^{L-1}{a}_{jk}{Z}_{jk}\) is, in this context, always L 2 orthogonal to all \({Y }_{\mathcal{l}m}\) with \(\mathcal{l} \geq L\).

6 Two Geodetic Solutions of Galerkin’s System

To the knowledge of the author, only two methods have been applied to produce high resolution \((M > 1{0}^{3})\) global models from Galerkin’s equations (15.133): one is the finite dimensional version of the change of boundary method (Sansò and Sona 1995), which has been implemented by Wenzel (see Wenzel 1998); the other is the so-called downward continuation method, implemented by Rapp (1997a) and developed with his co-workers Pavlis et al. (2008). This second method is described, with some variants and in much more detail, in the second part of the book, in Chap. 6.

(a)

    Change of boundary

The concept can be illustrated for the Dirichlet problem, which is the only one for which we have a theoretical result. In any case remember that by means of the change of unknown

$$\begin{array}{rcl} u = -r\frac{\partial T} {\partial r} - 2T = r\Delta g,& &\end{array}$$
(15.144)

we can transform the BVPs (15.27) and (15.33), in spherical approximation, into a Dirichlet problem (see Proposition 4), so the latter is by no means useless in a set-up of geodetic relevance.

The idea is to take the boundary value u = f, given on the true surface S, and to shift it to a Bjerhammar sphere \(\overline{S}\) (see Fig. 15.2) by means of the pull-back correspondence (radial projection) \(P \rightarrow \overline{P}\). In other words we substitute the original Dirichlet BVP

$$\begin{array}{rcl} \left \{\begin{array}{ll} \Delta u = 0 &\text{ in }\Omega \\ u(P){\vert }_{S} = f(P)&\text{ on }S\end{array} \right.& &\end{array}$$
(15.145)

with the new Dirichlet problem

$$\begin{array}{rcl} \left \{\begin{array}{ll} \Delta u = 0 &\text{ in }\Omega \\ u(\overline{P}) = f(P)&\text{ on }\overline{S}.\end{array} \right.& &\end{array}$$
(15.146)

Fig. 15.2 Illustration of the pull-back operator \(f(P)\,\rightarrow \,\widetilde{f}(\overline{\sigma })\)

The problem (15.146) can be easily solved by means of the Poisson integral, because it refers to a sphere. The operator that defines the new function on \(\overline{S}\)

$$\begin{array}{rcl} \overline{f}(\overline{P}) = f(P)& &\end{array}$$
(15.147)

is called here the pull-back operator and denoted as

$$\begin{array}{rcl} \mathit{PB} : {L}^{2}(S) \rightarrow {L}^{2}(\overline{S}).& &\end{array}$$
(15.148)

The function \(\overline{u}\) which is the solution of (15.146) can then be evaluated back at the surface S where it takes values \(\overline{u}{\vert }_{S}\) which are indeed different from f(P).

For the sake of definiteness we call \(\Pi \) the Poisson operator that gives \(\overline{u}\) from \(\overline{f}\) and L the “lift” operator such that

$$\begin{array}{rcl} L(\overline{f}) = (\Pi \overline{f}){\vert }_{S} = \overline{u}(P)& &\end{array}$$
(15.149)

With f we can form residuals

$$\begin{array}{rcl} \delta f(P) = f(P) -\overline{u}(P),& &\end{array}$$
(15.150)

which provide a new (hopefully smaller) boundary function on S. Now we pull back δf from S to \(\overline{S}\) and we iterate. The scheme is known to converge with continuous data in the sense of uniform convergence on S (Sansò and Sona 1995). How it works in L 2(S) is not known, yet if we implement a finite dimensional version of it by introducing the \({L}^{2}(\overline{S})\) projector

$$\begin{array}{rcl}{ P}_{\mathit{LM}}f = \sum\limits_{\mathcal{l},m=L}^{M}\left ( \frac{1} {4\pi }\int\limits {Y }_{\mathcal{l}m}({\vartheta},\lambda )f({\vartheta},\lambda )d\sigma \right ){Y }_{\mathcal{l}m}({\vartheta},\lambda )& &\end{array}$$
(15.151)

we get the iterative scheme of Fig. 15.3 that can be ultimately transformed back into a corresponding scheme for T by inverting (15.144). Since (15.144) is solved, for a finite dimensional potential, by

$$\begin{array}{rcl} \left \{\begin{array}{l} {T}_{M} = \sum\limits_{\mathcal{l}=L}^{M}\ \sum\limits_{m=-\mathcal{l}}^{\mathcal{l}}\left ( \frac{1} {\mathcal{l} - 1}\right ){u}_{\mathcal{l}m}{S}_{\mathcal{l}m}(r,{\vartheta},\lambda ), \\ {T}_{\mathcal{l}m} = {(\mathcal{l} - 1)}^{-1}{u}_{\mathcal{l}m}\end{array} \right.& &\end{array}$$
(15.152)

we end up with an iterative scheme which is exactly the simple iterative solution of the Galerkin equations (15.133); therefore we can continue our reasoning on Fig. 15.2 and at the end transform the result back to the anomalous potential T by (15.152).

Fig. 15.3 The finite dimensional change of boundary iterative scheme

Naturally we would like to know whether a scheme like that is convergent and, in case of a positive answer, whether it converges to the right solution.

For this purpose we can examine more closely Fig. 15.3. First we note that all functions on the left are defined on S, while those on the right are defined on \(\overline{S}\ (\delta {\overline{f}}_{k})\), or in \(\overline{\Omega }\ (r \geq \overline{R})\ ({\overline{u}}_{k})\). We note too that if the points P on S (see Fig. 15.2) are already expressed in terms of spherical coordinates, i.e. as functions of the \(({\vartheta},\lambda )\) of \(\overline{P}\), then the pull-back operator PB is just the identity.

Moreover, we observe that while \({f}_{k},\delta {f}_{k}\) are general functions in \({L}^{2}\), the functions \(\delta {\overline{f}}_{k},{\overline{u}}_{k}\) are on the contrary always finite dimensional, with maximum degree M. This is achieved when we move horizontally to the right in Fig. 15.3, because we first apply the pull-back to δf k and then we truncate the resulting function in \({L}^{2}(\overline{S})\)

$$\begin{array}{rcl} \delta {\overline{f}}_{k} = \frac{GM} {\overline{R}} \sum\limits_{\mathcal{l}=L}^{M}\ \sum\limits_{m=-\mathcal{l}}^{\mathcal{l}}{(\delta {\overline{f}}_{ k})}_{\mathcal{l}m}{Y }_{\mathcal{l}m}({\vartheta},\lambda ).& &\end{array}$$
(15.153)

The potentials corresponding to (15.153) are then

$$\begin{array}{rcl}{ \overline{u}}_{k} = \Pi \delta {\overline{f}}_{k} = \frac{GM} {\overline{R}} \sum\limits_{\mathcal{l}=L}^{M}\ \sum\limits_{m=-\mathcal{l}}^{\mathcal{l}}{(\delta {\overline{f}}_{ k})}_{\mathcal{l}m}{S}_{\mathcal{l}m}(r,{\vartheta},\lambda ).& &\end{array}$$
(15.154)

Since we evaluate the size of \({\overline{u}}_{k}\) in \({\mathit{HL}}^{2}(\overline{S})\) as the size of \(\delta {\overline{f}}_{k}\) in \({L}^{2}(\overline{S})\), we have identically

$$\begin{array}{rcl} \|{\overline{u}{}_{k}\|}^{2} =\| \delta {\overline{f}{}_{ k}\|}^{2} ={ \left (\frac{GM} {\overline{R}} \right )}^{2}{\Sigma }_{ \mathcal{l},m}{(\delta {\overline{f}}_{k})}_{\mathcal{l}m}^{2}.& &\end{array}$$
(15.155)

Please notice that the factor \(\frac{GM} {\overline{R}}\) in front of (15.153) and (15.154) is conventional and introduced to make (15.152) consistent with (15.133).

Furthermore, still with reference to Fig. 15.3, we read

$$\begin{array}{rcl} \delta {f}_{k}& =& f -\sum\limits_{n=1}^{k}{f}_{ n}\end{array}$$
(15.156)
$$\begin{array}{rcl}{ f}_{k+1}& =& L\delta {\overline{f}}_{k} \equiv {\overline{u}}_{k}{\vert }_{S},\end{array}$$
(15.157)
$$\begin{array}{rcl} \delta {f}_{k+1}& =& \delta {f}_{k} - L\delta {\overline{f}}_{k},\end{array}$$
(15.158)
$$\begin{array}{rcl} \delta {\overline{f}}_{k+1}& =& {P}_{\mathit{LM}}PB\delta {f}_{k+1} = \delta {\overline{f}}_{k} - {P}_{\mathit{LM}}\mathit{PBL}\delta {\overline{f}}_{k},\end{array}$$
(15.159)

Since \(\delta {\overline{f}}_{k}\) are just the residuals δf k , pulled back to \(\overline{S}\) and projected by P LM , we expect that convergence means \(\delta {\overline{f}}_{k} \rightarrow 0\) in \({L}^{2}(\overline{S})\). As all the functions \(\{\delta {\overline{f}}_{k}\}\) are in Span \(\{{Y }_{\mathcal{l}m},L \leq \mathcal{l} \leq M\}\), we see that (15.159) can be written in the equivalent form (remember that \({P}_{\mathit{LM}}^{2} = {P}_{\mathit{LM}}\))

$$\begin{array}{rcl} \delta {\overline{f}}_{k+1} = ({P}_{\mathit{LM}} - {P}_{\mathit{LM}}\mathit{PBL}{P}_{\mathit{LM}})\delta {\overline{f}}_{k}.& &\end{array}$$
(15.160)

Therefore a sensible sufficient condition for \(\delta {\overline{f}}_{k}\) to tend to zero is just

$$\begin{array}{rcl}{ \chi }_{\mathit{LM}} =\| {P}_{\mathit{LM}} - {P}_{\mathit{LM}}{\mathit{PBLP}}_{\mathit{LM}}\| < 1;& &\end{array}$$
(15.161)

in (15.161) the norm can be understood in the sense of the L 2 operator norm or, considering that after all (15.160) is a finite dimensional relation, it can be cast in the form of the norm of the matrix that implements (15.160) as a transformation of the harmonic coefficients of \(\delta {\overline{f}}_{k}\) into those of \(\delta {\overline{f}}_{k+1}\). In any event (15.161) puts a bound on the topography (remember that in this section, for the sake of simplicity, the topography is directly attached to a sphere instead of the ellipsoid, without however changing the basic nature of the problem).
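
The contraction condition (15.161) can be probed numerically. The sketch below builds the matrix of (15.160) for zonal harmonics only, with a toy topography δR(ϑ) and a toy degree band [L, M]; these choices, as well as the quadrature, are ours and serve only to show how χ LM could be evaluated, not to assess a realistic topography.

import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import eval_legendre

# Iteration matrix of (15.160) and contraction number chi_LM of (15.161),
# zonal harmonics only; PB is the identity in (theta, lambda).
Rbar = 6371e3
L, M = 2, 36
t, w = leggauss(200)                              # nodes/weights in t = cos(theta)
theta = np.arccos(t)
dR = 6000.0 * (0.3 + np.cos(3 * theta) ** 2)      # toy topography [m]

deg = np.arange(L, M + 1)
Ybar = np.array([np.sqrt(2 * l + 1) * eval_legendre(l, t) for l in deg])

# lift of each basis function from the Bjerhammar sphere to S ...
lift = (Rbar / (Rbar + dR)) ** (deg[:, None] + 1) * Ybar
# ... and projection back: B[j, l] = <Ybar_j, lift_l>_0
B = 0.5 * (Ybar * w) @ lift.T

chi = np.linalg.norm(np.eye(len(deg)) - B, 2)     # spectral norm of (15.160)
print("chi_LM =", chi, "(contraction)" if chi < 1 else "(no contraction)")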

Yet condition (15.161) still has to be studied in detail, though present numerical experiments say that up to degree \(2 \cdot 1{0}^{3}\) convergence is verified. So we shall make the conjecture that (15.161) is satisfied by a realistic topography, and we try to answer the second question. We first note that \(\delta {\overline{f}}_{k} \rightarrow 0\) implies \({f}_{k} \rightarrow 0\) because of (15.157), as well as \({\overline{u}}_{k} \rightarrow 0\) because of (15.155). Even more, from (15.161) we see that

$$\begin{array}{rcl} \|\delta {\overline{f}}_{k+1}\| \leq {\chi }_{\mathit{LM}}^{k}\|\delta {\overline{f}}_{ 1}\|& &\end{array}$$
(15.162)

so that the series

$$\begin{array}{rcl} \sum\limits_{k=1}^{+\infty }\delta {\overline{f}}_{ k}& & \\ \end{array}$$

has to be \({L}^{2}(\overline{S})\) convergent. Accordingly the two series

$$\begin{array}{rcl} g = \sum\limits_{k=1}^{+\infty }{f}_{ k},\ \overline{u} = \sum\limits_{k=1}^{+\infty }{\overline{u}}_{ k}& &\end{array}$$
(15.163)

have to be convergent too. So if we use (15.156) we are justified in writing

$$ \begin{array}{rcl} {\lim }_{k\rightarrow \infty }\delta {f}_{k} = f -\sum\limits_{n=1}^{+\infty }{f}_{ n} = f - g& &\end{array}$$
(15.164)

as well as

$$\begin{array}{rcl} 0& =& {\lim }_{k\rightarrow \infty }\delta {\overline{f}}_{k} ={\lim }_{k\rightarrow \infty }{P}_{\mathit{LM}}PB\delta {f}_{k} \\ & =& {P}_{\mathit{LM}}PBf - {P}_{\mathit{LM}}PBg.\end{array}$$
(15.165)

Just another way of writing (15.165) is

$$\begin{array}{rcl} \left < {Y }_{\mathcal{l}m},f\right > = \left < {Y }_{\mathcal{l}m},g\right >,\ \vert m\vert \leq \mathcal{l},L \leq \mathcal{l} \leq M.& &\end{array}$$
(15.166)

On the other hand

$$\begin{array}{rcl} g({\vartheta},\lambda )& =& \sum\limits_{k=1}^{+\infty }{f}_{ k} = \sum\limits_{k=1}^{+\infty }{\overline{u}}_{ k}(r,{\vartheta},\lambda ){\vert }_{S} \\ & =& \overline{u}(r,{\vartheta},\lambda ){\vert }_{S} = \overline{u}(\overline{R} + \delta R({\vartheta},\lambda ),{\vartheta},\lambda ),\end{array}$$
(15.167)

so that \(\overline{u}(r,{\vartheta},\lambda )\) is a potential with maximum degree M satisfying the Galerkin equations

$$\begin{array}{rcl} \left < {Y }_{\mathcal{l}m}({\vartheta},\lambda ),f({\vartheta},\lambda )\right > = \left < {Y }_{\mathcal{l}m}({\vartheta},\lambda ),\overline{u}(r,{\vartheta},\lambda ){\vert }_{S}\right >& &\end{array}$$
(15.168)

for all orders and all appropriate degrees. In other words the series obtained by adding all the \({\overline{u}}_{k}\) provides the solution of our problem and answers our question.

Remark 7.

We have to note that the series (15.163) defining the potential \(\overline{u}\) is summed over the iteration index k, but it provides in any event a sum which is still a finite degree potential. This should not be confused with the possibility of defining a convergent series with infinitely many degrees representing the solution of our BVP. Such a series in fact does not exist in general, as we have already pointed out in Part I, Sect. 3.5.

(b)

    The downward continuation approach

This approach is seemingly completely different from the previous one, because it goes back to the Galerkin system (15.133) and tries to transform it in such a way that the solution comes out without many iterations. This is more easily done in terms of the harmonic function (see (15.144))

$$\begin{array}{rcl} u = r\Delta g& &\end{array}$$
(15.169)

and the corresponding approximation (with degrees from L to M).

$$\begin{array}{rcl}{ u}_{M} = r\Delta {g}_{M}.& &\end{array}$$
(15.170)

As a matter of fact, recalling also (15.152), and putting \(f({\vartheta},\lambda ) = u{\vert }_{S}\), (15.133) can be written as

$$\begin{array}{rcl}{ ({u}_{M})}_{jk}& =&{ \left (\frac{GM} {\overline{R}} \right )}^{-1}\left < {Y }_{ jk}({\vartheta},\lambda ),f - [{u}_{M}(\overline{R} + \delta R,{\vartheta},\lambda ) - {u}_{M}(\overline{R},{\vartheta},\lambda )]\right > \\ \vert k\vert & \leq & j,\ L \leq j \leq M\ ; \end{array}$$
(15.171)

in (15.171) δR, the topography, is a function of \(({\vartheta},\lambda )\). Now instead of iterating on the coefficients, which in the right hand side are hidden in u M , we rather make a kind of Taylor development of \({u}_{M}(\overline{R} + \delta R,{\vartheta},\lambda ) - {u}_{M}(\overline{R},{\vartheta},\lambda )\), i.e.

$$\begin{array}{rcl}{ u}_{M}(\overline{R} + \delta R,{\vartheta},\lambda )\cong{u}_{M}(\overline{R},{\vartheta},\lambda ) -\sum\limits_{\alpha =1}^{a}\frac{{\partial }^{\alpha }{u}_{ M}(\overline{R} + \frac{1} {2}\delta R,{\vartheta},\lambda )} {\partial {r}^{\alpha }} \ \frac{{(-\delta R)}^{\alpha }} {\alpha !} \quad \qquad & &\end{array}$$
(15.172)

As we can see the development is done at the level of the midpoint between the topography and the ellipsoid, because this guarantees the best performance of the Taylor formula. We note as well that the development (15.172) is performed for the model u M and not for the true potential; in this way we overcome the objection that there cannot be downward continuation for \(u(r,{\vartheta},\lambda )\) in general.

Experience shows that, with one iteration at most, (15.172) used in (15.171) can provide the right answer with an accuracy compatible with the order of magnitude of the coefficients at degree \(2 \cdot 1{0}^{3}\). As such, the use of (15.172) can be considered as an accelerator of the iteration procedure, i.e., by computing a certain number of derivatives at the first iteration step, one avoids (or reduces) the subsequent steps.

This phenomenon is well illustrated by the next elementary example where some features of the two methods are highlighted.

Example 2.

We consider a situation in which the surface S has equation

$$\begin{array}{rcl} r& =& {(1 - \epsilon \cos {\vartheta})}^{-1} \\ & =& 1 + \epsilon \cos {\vartheta} + O({\epsilon }^{2}) \\ \end{array}$$

and \(\overline{S}\) is just the sphere with unit radius. The potential we want to retrieve is

$$\begin{array}{rcl} u = \frac{1} {r}& & \\ \end{array}$$

so that the corresponding true boundary values are given by

$$\begin{array}{rcl} f = u{\vert }_{S} = 1 - \epsilon \cos {\vartheta}.& & \\ \end{array}$$

Method (a): we just consider f as given on \(\overline{S}\) and we note that, denoting \(\cos {\vartheta} = t\),

$$\begin{array}{rcl} f = 1 - \epsilon \cos {\vartheta} = {P}_{0} - \epsilon {P}_{1},\ (O(f) = O(1)),& & \\ \end{array}$$

so that the corresponding potential is, with the same notation of Fig. 15.3,

$$\begin{array}{rcl}{ \overline{u}}_{1} = \frac{{P}_{0}} {r} - \epsilon \frac{{P}_{1}} {{r}^{2}}.& & \\ \end{array}$$

If we now compute

$$\begin{array}{rcl}{ f}_{1}& =&{ \overline{u}}_{1}{\vert }_{S} = 1(1 - \epsilon t) - \epsilon t{(1 - \epsilon t)}^{2} \\ & =& 1 - 2\epsilon t + O({\epsilon }^{2}), \\ \end{array}$$

we can put

$$\begin{array}{rcl} \delta {f}_{1} = f - {f}_{1} = \epsilon t + O({\epsilon }^{2})\ \ (O(\delta {f}_{ 1}) = O(\epsilon )).& & \\ \end{array}$$

Therefore

$$\begin{array}{rcl}{ \overline{u}}_{2} = \frac{\epsilon t} {{r}^{2}} + O({\epsilon }^{2}) = \epsilon \frac{{P}_{1}(t)} {{r}^{2}} + O({\epsilon }^{2})& & \\ \end{array}$$

and

$$\begin{array}{rcl}{ f}_{2} ={ \overline{u}}_{2}{\vert }_{S}& =& \epsilon t{(1 - \epsilon t)}^{2} + O({\epsilon }^{2}) \\ & =& \epsilon t + O({\epsilon }^{2}), \\ \end{array}$$

It is then clear that, if we iterate, we get for δf k an approximation of the order \(O({\epsilon }^{k})\).

It is useful to observe that \(\delta {f}_{k} \rightarrow 0\) indeed holds in this case because u already belongs to a finite dimensional space. We note as well that u has a degree zero component, which we usually don’t have in T; this is because, to solve the BVP for free air anomalies, one usually assumes that the degrees present in T start at least from \(\mathcal{l} = 2\). Yet one can always think of solving the BVP for the gravity disturbance δg, which in principle is in one-to-one correspondence with u = rδg. After all, one should keep in mind that the example is built on the Dirichlet problem and is kept so simple that one can immediately grasp the type of convergence of the iteration scheme.
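
The O(ε^k) behaviour of method (a) can also be checked numerically; the sketch below iterates the scheme of Fig. 15.3 for the toy data of this example, with ε, the quadrature and the variable names chosen by us.

import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import eval_legendre

# Change of boundary iteration for u = 1/r with S: r = (1 - eps*t)**(-1),
# t = cos(theta), keeping degrees 0 and 1 only (as in the example).
eps = 1e-2
t, w = leggauss(64)
r_S = 1.0 / (1.0 - eps * t)                      # radius of S at the nodes

deg = np.array([0, 1])
Ybar = np.array([np.sqrt(2 * l + 1) * eval_legendre(l, t) for l in deg])

def project(g):
    """Degree 0-1 coefficients (1/2) * int g * Ybar_l dt."""
    return 0.5 * (Ybar * w) @ g

f = 1.0 / r_S                                    # boundary values of u on S
delta_f = f.copy()
for k in range(5):
    c = project(delta_f)                         # pull back (identity) + projection
    lift = (c[:, None] * Ybar / r_S ** (deg[:, None] + 1)).sum(axis=0)  # u_k on S
    delta_f = delta_f - lift                     # residual update (15.158)
    print(k + 1, np.linalg.norm(project(delta_f)))   # decays roughly like eps**(k+1)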

Method (b): we aim to prove that in the present example implementing the downward continuation, by means of the first vertical derivative only, speeds up convergence to the \(O({\epsilon }^{3})\) level of approximation in one step.

Note that in our case

$$\begin{array}{rcl} \delta R = r - 1 = {(1 - \epsilon t)}^{-1} - 1 = \epsilon t + {\epsilon }^{2}{t}^{2} + O({\epsilon }^{3})\ ;& & \\ \end{array}$$

moreover \(\frac{\partial u} {\partial r} = -\frac{1} {{r}^{2}}\) so that

$$\begin{array}{rcl} \frac{\partial u} {\partial r}\left (1 + \frac{1} {2}\delta R\right ) = -{\left ( \frac{1} {1 + \frac{1} {2}\epsilon t + O({\epsilon }^{2})}\right )}^{2} = -(1 - \epsilon t + O({\epsilon }^{2})).& & \\ \end{array}$$

Therefore the downward continued boundary value is

$$\begin{array}{rcl} & & f -\frac{\partial u(1 + \frac{1} {2}\delta R)} {\partial r} \cdot \delta R \\ & & = 1 - \epsilon t + (1 - \epsilon t + O({\epsilon }^{2}))(\epsilon t + {\epsilon }^{2}{t}^{2} + O({\epsilon }^{3})) \\ & & = 1 - \epsilon t + \epsilon t - {\epsilon }^{2}{t}^{2} + {\epsilon }^{2}{t}^{2} + O({\epsilon }^{3}) = 1 + O({\epsilon }^{3}).\end{array}$$

Accordingly the approximate potential is

$$\begin{array}{rcl} \overline{u} = \frac{1} {r} + O({\epsilon }^{3})& & \\ \end{array}$$

as announced.
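
The same toy setting can be used to verify method (b) numerically: one downward continuation step using only the first radial derivative already reduces the boundary residual from O(ε) to O(ε³). Again, ε and the node set are our own choices.

import numpy as np
from numpy.polynomial.legendre import leggauss

# One-step downward continuation for u = 1/r on r = (1 - eps*t)**(-1).
eps = 1e-2
t, _ = leggauss(64)
r_S = 1.0 / (1.0 - eps * t)
dR = r_S - 1.0                                   # topography above the unit sphere

f = 1.0 / r_S                                    # u on S
du_dr_mid = -1.0 / (1.0 + 0.5 * dR) ** 2         # du/dr at the midpoint level
f_dc = f - du_dr_mid * dR                        # downward continued boundary value

print(np.max(np.abs(f - 1.0)))                   # O(eps): raw boundary values
print(np.max(np.abs(f_dc - 1.0)))                # O(eps**3): after the correction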

7 New Data Sets from Spatial Gravity Surveying

The introduction of accelerometers on satellites and the possibility of accurately and continuously tracking orbits with the GNSS constellation have opened a new era of direct measurement of functionals of the gravity potential. Indeed every measurement of spatial geodesy is in one way or another related to the gravity field through the dynamics of the satellite. Yet what we are achieving now is something different, namely localized (i.e. referring to a point) functionals of W (and therefore of T) which can then be treated very much in the same way as we treat gravity observations on the earth surface. As already mentioned, this can be particularly useful, for instance, in areas with data gaps. Naturally an overall analysis of such data is usually performed so as to produce a set of harmonic coefficients up to some maximum degree. Yet another way of synthesizing the results of a specific mission is to produce, at satellite altitude or a little below, grids of various functionals of T. This is the so-called spacewise approach to the analysis of satellite missions (cf. Rummel et al. 1993).

To fix the ideas we shall briefly describe what one can do with a satellite mission at ∼ 250 km altitude, carrying on board a GPS receiver and a cluster of accelerometers linked to form a gradiometer. We shall assume that we are able to retrieve the position of the satellite every second (i.e. every 8 km along the orbit) with 1 cm error. The accelerometers, which feel the same gravitational acceleration as the barycenter of the satellite, measure only the accelerations due to non-gravitational forces \(\mathbf{f}\), with an accuracy in the range of a fraction of a nGal (1 nGal\(= 1{0}^{-9}\) Gal), for instance \(O(\mathbf{f}) \sim \) 0.5 nGal. Therefore they are sensitive to variations in the gravitational acceleration, from one position to another one meter apart, of \(5 \cdot 1{0}^{-12}{\mathrm{\,s}}^{-2}\) (5 mEU; 1 Eotvos Unit\(\,= 1{0}^{-9}{\mathrm{\,s}}^{-2}\)).

A mission of this type, GOCE (Gravity field and steady-state Ocean Circulation Explorer), was launched by ESA in 2009.

Since the orbit is not exactly polar but has an inclination of \(\sim \)96\({}^{\circ }\) (chosen so as to keep the attitude of the satellite constant with respect to the sun), there are two small caps over the poles that are never visited by the satellite. All the rest of a sphere at satellite altitude is more or less covered by a dense, irregular net of data points. Let us see briefly what observables are derived from the satellite. These are of two types: one comes from satellite tracking by GPS, the others are gradiometric.

We shall not enter into the detailed analysis of the observables but we shall rather give the principles from which we can derive the observation equations.

Tracking data and energy balance. We assume that GPS data can provide positions \({\mathbf{X}}_{I}(t)\) of the satellite in a quasi-inertial system I (cf. Part I, Sect. 1.4) as well as the velocity \(\dot{{\mathbf{X}}}_{I}(t)\) of the satellite. In the inertial system we can write the dynamic equation

$$\begin{array}{rcl} \ddot{{\mathbf{X}}}_{I} = {\nabla }_{\mathbf{X}}V +{ \mathbf{g}}_{P} + \mathbf{f},& &\end{array}$$
(15.173)

where V is the purely gravitational part of the earth gravity potential (remember that I is non-rotating with respect to the stars), \({\mathbf{g}}_{P}\) is the set of perturbative gravitational accelerations (luni-solar attraction, tides, etc.), which can be assumed to be known, and \(\mathbf{f}\) includes all the non-gravitational accelerations acting on the satellite (atmospheric drag, light pressure, albedo, etc.).

It is essential that \(\mathbf{f}\) is observed by the accelerometers. The only warning in using (15.173) is that if \(V (\mathbf{x})\) is the gravitational potential at the earth-fixed position \(\mathbf{x}\), then the function of \({\mathbf{X}}_{I}\) to be used in (15.173) is

$$\begin{array}{rcl} V = V ({R}^{t}(t){\mathbf{X}}_{ I})& &\end{array}$$
(15.174)

where R(t) is the rotation matrix that brings \(\mathbf{x}\) into \({\mathbf{X}}_{I} = R(t)\mathbf{x}\) and R t(t) its inverse. Accordingly the acceleration \({\mathbf{g}}_{I}(\mathbf{X}) = {\nabla }_{\mathbf{X}}V\) is given by

$$\begin{array}{rcl}{ \mathbf{g}}_{I}(\mathbf{X})& =&{ \left.R(t){\nabla }_{\mathbf{x}}V (\mathbf{x})\right \vert }_{\mathbf{x}={R}^{t}\mathbf{X}} \\ & =& R(t)\mathbf{g}({R}^{t}(t){\mathbf{X}}_{ I}),\end{array}$$
(15.175)

where \(\mathbf{g}\) is the pure gravitational part of the earth fixed gravity acceleration vector.

Multiplying (15.173) by \(\dot{{\mathbf{X}}}_{I}\), we find the (specific) power balance equation

$$\begin{array}{rcl} \frac{1} {2} \frac{d} {\mathit{dt}}\vert \dot{{\mathbf{X}}}_{I}{\vert }^{2} = \frac{1} {2} \frac{d} {\mathit{dt}}{v}_{I}^{2} = \frac{d} {\mathit{dt}}V ({\mathbf{X}}_{I}) + ({\mathbf{g}}_{P} + \mathbf{f}) \cdot \dot{{\mathbf{X}}}_{I}.& &\end{array}$$
(15.176)

Integrating along the orbit from 0 to t we find

$$\begin{array}{rcl} \frac{1} {2}{v}_{I}^{2}(t) -\frac{1} {2}{v}_{I}^{2}(0)& =& V ({\mathbf{X}}_{ I}(t)) - V ({\mathbf{X}}_{I}(0)) \\ & & +\int\limits_{0}^{t}({\mathbf{g}}_{ P} + \mathbf{f}) \cdot \dot{{\mathbf{X}}}_{I}\mathit{dt} \\ \end{array}$$

If we consider that \(V ({\mathbf{X}}_{I}) = {V }_{e}({\mathbf{X}}_{I}) + T({\mathbf{X}}_{I})\) and we collect all the constants into

$$\begin{array}{rcl}{ E}_{0} = \frac{1} {2}{v}_{I}^{2}(0) - V ({\mathbf{X}}_{I}(0))& &\end{array}$$
(15.177)

we see that the integrated balance can be written as

$$\begin{array}{rcl} T({\mathbf{X}}_{I}) + {E}_{0}& =& -{V }_{e}({\mathbf{X}}_{I}(t)) + \frac{1} {2}{v}_{I}^{2}(t) \\ & & +\int\limits_{0}^{t}({\mathbf{g}}_{ P} + \mathbf{f}) \cdot \dot{{\mathbf{X}}}_{I}dt,\end{array}$$
(15.178)

where on the LHS we have the unknown functional \(T({\mathbf{X}}_{I})\) and the unknown parameter E 0, while on the right we have only known or observed quantities.

So (15.178) is a localized observation equation for T, with the unknown parameter E 0.
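
A toy numerical check may help to fix the structure of the energy balance: for a planar orbit in a point mass field, with a small “measured” non gravitational acceleration f, the combination ½v² − V − ∫f·Ẋ dt stays (numerically) constant along the integrated orbit; here T = 0 and g P = 0, and all numerical values, as well as the integrator, are our own illustrative choices.

import numpy as np

# Planar orbit in V = GM/r with a constant-magnitude drag f opposite to the
# velocity; the "energy constant" 0.5*v**2 - V - int(f . Xdot) dt is tracked.
GM = 3.986005e14                          # m^3/s^2
drag = 1e-7                               # m/s^2, magnitude of f

def accel(x, v):
    g = -GM * x / np.linalg.norm(x) ** 3  # g = grad(GM/r)
    f = -drag * v / np.linalg.norm(v)     # "measured" non gravitational part
    return g, f

def rhs(state):
    x, v = state[:2], state[2:]
    g, f = accel(x, v)
    return np.concatenate([v, g + f])

r0 = 6371e3 + 250e3                       # ~250 km altitude
x = np.array([r0, 0.0])
v = np.array([0.0, np.sqrt(GM / r0)])     # near circular start
dt, nsteps = 1.0, 5400                    # 1 s sampling, about one revolution

work, E = 0.0, []
for _ in range(nsteps):
    _, f = accel(x, v)
    work += np.dot(f, v) * dt             # running integral of f . Xdot dt
    s = np.concatenate([x, v])            # one RK4 step for (x, v)
    k1 = rhs(s); k2 = rhs(s + 0.5 * dt * k1)
    k3 = rhs(s + 0.5 * dt * k2); k4 = rhs(s + dt * k3)
    s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    x, v = s[:2], s[2:]
    E.append(0.5 * np.dot(v, v) - GM / np.linalg.norm(x) - work)

E = np.array(E)
print("spread of the energy constant [m^2/s^2]:", E.max() - E.min())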

Gradiometric observations. These are just derivatives of the vector of gravitational acceleration. They are obtained by differencing the signals of the accelerometers, considering that the common part, namely the external forces, acts in a similar way on all the accelerometers.

So what is left is a matrix of second derivatives M I . This is observed in the system I, but has to be related to the matrix of second derivatives \(M = \left [ \frac{{\partial }^{2}V } {\partial \mathbf{x}\partial {\mathbf{x}}^{t}}\right ]\) in the earth fixed system. Such a relation is known to be simply

$$\begin{array}{rcl}{ M}_{I} = R(t)M{R}^{t}(t).& &\end{array}$$
(15.179)

Accordingly, if we know R(t), we can retrieve the matrix of second derivatives which, in a local spherical triad, provides the observation equations (in spherical approximation).

Among them the observation of T rr (P) is particularly easy to handle and we have already invited the reader to compute its covariance function in Part I, Chap. 5, Exercise 2.
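
The transformation (15.179) can be verified on the simplest possible case, namely the second derivative tensor of a point mass potential; the rotation, the position and the value of GM in the sketch below are arbitrary illustrative choices of ours.

import numpy as np

# Check of (15.179): M_I = R M R^t for V = GM/r, with X_I = R x.
GM = 3.986005e14

def second_derivatives(x):
    """Tensor of second derivatives of GM/r at position x."""
    r = np.linalg.norm(x)
    return GM * (3.0 * np.outer(x, x) / r ** 5 - np.eye(3) / r ** 3)

ang = 0.7                                 # rotation angle about the z axis [rad]
c, s = np.cos(ang), np.sin(ang)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

x = np.array([5.2e6, 2.1e6, 3.0e6])       # earth fixed position [m]
M = second_derivatives(x)                 # earth fixed tensor
M_I = R @ M @ R.T                         # transformed as in (15.179)

print(np.allclose(M_I, second_derivatives(R @ x)))   # computed directly in I: True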

Naturally the true analysis of the data and the realistic observation equations are much more complicated than that. Yet the principles have been presented, and the observation equations, after a first filtering along the orbit that takes into account the strong correlation of the accelerometer noise, can be used to predict by collocation suitable functionals on a sphere, more or less at satellite altitude.

It turns out that a good solution is to predict a regular grid of \(1^{\circ }\times 1^{\circ }\) size on a sphere at about 150 km altitude; at each node T is predicted with an error, in terms of \(\frac{T} {\gamma }\), of the order of 2 mm, and T rr with an error of 0.6 mEU.

Such grids constitute entirely new data sets covering most of the earth at 150 km altitude, and their use, in conjunction with ground data, for an optimal estimate of a high resolution geoid, for instance along the lines illustrated in Part I, Sect. 5.11, is a challenge that will keep geodesists busy for some years.

8 Exercises

Exercise 1.

Refer to the notation introduced in Sect. 3.1.

When S is a sphere we have \({I}_{+} = 0,{J}_{+} = 1\). Prove directly the inequality (15.40) for this case, when α = 0, by using the explicit representation

$$\begin{array}{rcl} T = \sum\limits_{\mathcal{l}=0}^{+\infty }\ \sum\limits_{m=-\mathcal{l}}^{\mathcal{l}}{T}_{ \mathcal{l}m}{\left (\frac{R} {r} \right )}^{\mathcal{l}+1}{Y }_{ \mathcal{l}m}(\sigma ),\ v = \sum\limits_{\mathcal{l},m=0}^{+\infty }{v}_{ \mathcal{l}m}{\left (\frac{R} {r} \right )}^{\mathcal{l}+1}{Y }_{ \mathcal{l}m}(\sigma ),& & \\ \end{array}$$

and showing that (15.40) is reduced to the algebraic inequality

$$\begin{array}{rcl} (\mathcal{l} + 1)(\mathcal{l} + 2) \leq 2{(\mathcal{l} + 1)}^{2},\ \mathcal{l} \geq 0.& & \\ \end{array}$$

Observe that this can be an equality if the degree zero only is present in the harmonic series.

(Hint: remember that \(\vert \nabla T{\vert }^{2} = {(u \prime )}^{2} + \frac{1} {{R}^{2}} \vert {\nabla }_{\sigma }T{\vert }^{2}\). Furthermore use the surface Gauss theorem, namely

$$\begin{array}{rcl} \int\limits_{\sigma }\vert {\nabla }_{\sigma }T{\vert }^{2}d\sigma =\int\limits_{\sigma }(-{\Delta }_{\sigma }T)Td\sigma ,& & \\ \end{array}$$

and remember that the spherical harmonics \(\{{Y }_{\mathcal{l}m}\}\) are eigenfunctions of the Laplace Beltrami operator Δ σ, i.e.

$$\begin{array}{rcl} -{\Delta }_{\sigma }{Y }_{\mathcal{l}m} = \mathcal{l}(\mathcal{l} + 1){Y }_{\mathcal{l}m}.)& & \\ \end{array}$$

Exercise 2.

Prove that the condition (15.75) can be cast in the form

$$\begin{array}{rcl} L \geq \frac{4{J}_{+}^{2}} {1 - 4{J}_{+}^{2} \frac{\delta R} {{R}_{+}} } - 2.& & \\ \end{array}$$

Verify that with \({I}_{+} = 6{0}^{\circ },\ {J}_{+} = 2,\ \frac{\delta R} {{R}_{+}} = \frac{30} {6,400}\), the simple Molodensky problem has a regular solution in HH 1, 2(S) if L ≥ 33, i.e. if the first 33 degrees of T are considered as known. The estimate is not strict.

Exercise 3.

Note that recalling the estimates (15.25) and (15.26) one can cast (15.83) into the form

$$\begin{array}{rcl} 2{C}_{0L} < \frac{1 - {e}^{2}{J}_{+}} {2{J}_{+}(1 + {e}^{2})}\cong \frac{1} {2{J}_{+}}[1 - {e}^{2}({J}_{ +} + 1)].& & \\ \end{array}$$

Verify that if we want to be able to handle a geometry where \({I}_{+} = 6{0}^{\circ }\) and δR = 30 km (i.e. mountains up to 7,200 m) we must have

$$\begin{array}{rcl} L \geq 34,& & \\ \end{array}$$

which is not very different from the case of Exercise 2.

Exercise 4.

Use the estimate (15.25) to prove that condition (15.98) is satisfied if

$$\begin{array}{rcl}{ I}_{+} \leq 8{9}^{\circ }.6.& & \\ \end{array}$$

Exercise 5.

Consider the normal systems (15.106) and (15.107) in case that S is directly a sphere of radius R and note that in such a case one has \({S}_{\mathcal{l}m}{\vert }_{S} \equiv {Y }_{\mathcal{l}m},{Z}_{\mathcal{l}m} \equiv {Y }_{\mathcal{l}m}\) and

$$\begin{array}{rcl} & & {\Psi }_{\mathcal{l}m} = -r\epsilon \cdot \nabla {S}_{\mathcal{l}m} - \eta {S}_{\mathcal{l}m} \\ & & {\Phi }_{\mathcal{l}m} = -r\epsilon \cdot \nabla {S}_{\mathcal{l}m}, \\ \end{array}$$

as in Example 1. Prove that in this case the normal systems can be solved by the iterative schemes

$$\begin{array}{rcl}{ (\mathcal{l} - 1)}^{2}{T}_{ \mathcal{l}m}^{(N+1)}& =& -{\left < {\Psi }_{ \mathcal{l}m},\sum\limits_{j=L}^{M}\ \sum\limits_{k=-j}^{j}(j - 1){Y }_{ jk}{T}_{jk}^{(N)}\right >}_{ 0} + \\ & & -(\mathcal{l} - 1){\left < {Y }_{\mathcal{l}m},\ \sum\limits_{j=L}^{M}\ \sum\limits_{k=-j}^{j}{\Psi }_{ jk}{T}_{jk}^{(N)}\right >}_{ 0} + \\ & & -{\left < {\Psi }_{\mathcal{l}m},\sum\limits_{j=L}^{M}\ \sum\limits_{k=-j}^{j}{\Psi }_{ jk}{T}_{jk}^{(N)}\right >}_{ 0} \\ & & +\sum\limits_{j=0}^{L-1}\ \sum\limits_{k=-j}^{j}{\left < {\Psi }_{ \mathcal{l}m},{Y }_{jk}\right >}_{0}{a}_{jk}^{(N)} +{ \left < {A}_{ 1}{S}_{\mathcal{l}m},f\right >}_{0},\ L \leq \mathcal{l} \leq M, \\ {a}_{\mathcal{l}m}^{(N)}& =&{ \left < {Y }_{ \mathcal{l}m},\sum\limits_{j=L}^{M}\ \sum\limits_{k=-j}^{j}{\Psi }_{ jk}{T}_{jk}^{(N)}\right >}_{ 0} -{\left < {Y }_{\mathcal{l}m},f\right >}_{0},\ 0 \leq \mathcal{l} \leq L - 1, \\ \end{array}$$

for (15.106), and

$$\begin{array}{rcl}{ (\mathcal{l} + 1)}^{2}{T}_{ \mathcal{l}m}^{(N+1)}& =& (\mathcal{l} + 1){\left < {Y }_{ \mathcal{l}m},\sum\limits_{j=0}^{M}\ \sum\limits_{k=-j}^{j}{\Phi }_{ jk}{T}_{jk}^{(N)}\right >}_{ 0} \\ & & +{\left < {\Phi }_{\mathcal{l}m},\ \sum\limits_{j=0}^{M}\ \sum\limits_{k=-j}^{j}{Y }_{ jk}(j + 1){T}_{jk}^{(N)}\right >}_{ 0} + \\ & & -{\left < {\Phi }_{\mathcal{l}m},\sum\limits_{j=0}^{M}\ \sum\limits_{k=-j}^{j}{\Phi }_{ jk}{T}_{jk}^{(N)}\right >}_{ 0} +{ \left < {A}_{2}{S}_{\mathcal{l}m},f\right >}_{0}\\ \end{array}$$

for (15.107).

Exercise 6.

Consider the situation of Example 1 and write the perturbative system for {T jk }, when δg is given on the spherical boundary S.

(Hint: verify that (15.90) becomes

$$\begin{array}{rcl}{ (\mathcal{l} + 1)}^{2}{T}_{ \mathcal{l}m}& =& -(\mathcal{l} + 1)\sum\limits_{j,k=2}^{M}{\left < {Y }_{ \mathcal{l}m},{\Phi }_{\mathcal{l}m}\right >}_{0}{T}_{jk} + \\ & & -\sum\limits_{j,k=2}^{M}(j + 1){\left < {\Phi }_{ \mathcal{l}m},{Y }_{jk}\right >}_{0}{T}_{jk} + \\ & & -\sum\limits_{j,k=2}^{M}{\left < {\Phi }_{ \mathcal{l}m},{\Phi }_{jk}\right >}_{0}{T}_{jk} +{ \left < {A}_{2}{S}_{\mathcal{l}m},r\delta g\right >}_{0}.) \\ \end{array}$$