
9.1 Curvature Estimation on Discrete Data

Context and Objectives

In many shape processing applications, the estimation of differential quantities on the shape boundary is an important step. Estimating them correctly eases further processing, such as quantitative evaluation, feature detection, shape matching or visualization. This chapter focuses on estimating the curvature tensor on the boundary of digital shapes. Such digital structures are subsets of the 3-dimensional digital space \(\mathbb{Z}^{3}\) and generally come from the digitization of some Euclidean shape. Of course, the curvature tensor estimation should be as close as possible to the curvature tensor of the underlying Euclidean shape before digitization. Digital data form a special case of discrete data with specific properties: (1) digital data cannot sample the boundary of the Euclidean shape (i.e. they do not lie on the shape boundary), (2) digital data are distributed around the true sample according to arithmetic noise, which looks rather uniform over a range [−h, h] from a statistical point of view, where h is the digitization grid step. Another way of stating these characteristics is to say that the Hausdorff distance between the Euclidean shape and its digitization is some O(h). Of course, the quality of the estimation should improve as the digitization step gets finer and finer. This property is called multigrid convergence [12, 29]. It is similar in spirit to the stability property in geometry processing: given a continuous shape and a specific sampling of its boundary, the estimated measure should converge to the Euclidean one when the sampling becomes denser (e.g. [2, 45]).

Curvature Estimation on Meshes

Since digital data are discrete in nature, it is interesting to look at curvature estimation techniques for triangulated meshes. In computer graphics and geometry processing, there exists a vast family of techniques to estimate either the mean or Gaussian curvatures, or sometimes the full curvature tensor. Most of them are local (i.e. limited to a 1-ring or 2-ring of neighbors) but give correct results on nice meshes. They generally fall into three categories: fitting, discrete methods, and curvature tensor estimation. We refer to [56] and [24] for comprehensive evaluations, and to Desbrun et al. [20] or Bobenko and Suris [5] for an entirely discrete theory. Most of these methods have no theoretical convergence guarantee, even without noise on the mesh. We may quote [47] and [54] as approaches trying to tackle perturbation through averaging.

For the Gaussian curvature estimated with the Gauss–Bonnet approach (angle defect), Xu [58] provides a stability theorem for triangulated meshes whose vertices lie on the underlying smooth manifold, with valence 6 and a parallelogram condition (each 1-ring of neighbors projects onto a plane as a parallelogram). Assuming a sampling with density δ, he provides an additional convergence property whenever the sampling is perturbed by some \(O(\delta ^{\alpha })\), but with α > 2 (inadequate for discrete data). Note that if the triangulated mesh does not satisfy these requirements, such estimation does not converge.

The integral measures of curvature based on normal cycle theory [16, 17] are another notable approach for estimating curvature information on a triangulated mesh. The authors exhibit convergence results for triangulated meshes whose vertices lie on the underlying smooth Euclidean shape boundary. In this case, if the mesh is within Hausdorff distance ε of the shape boundary, convergence is obtained with speed O(ε) under some hypotheses.

Finally, in geometry processing, interesting mathematical tools have been developed to design differential estimators on smooth surfaces based on integral invariants [48, 49]. They consist in moving a kernel along the shape surface and computing integrals on the intersection between the shape and the kernel. The authors have demonstrated that some integral quantities provide interesting curvature information when the kernel size tends to zero. They also achieve stability depending on the kernel radius and on ε, for instance in the case of a mesh sampling. Our new estimators rely on the same ideas.

Curvature Estimation on Point Clouds

When only discrete data are available (i.e. a point cloud), the most natural way to approach curvature(s) is to fit a polynomial surface of degree at least two. Perhaps the most representative of these techniques is the osculating jets of Cazals and Pouget [10]. The authors provide \(O(\delta ^{2})\) convergence results when the data is a surface sampling with density δ. There is no theoretical result in the presence of noise, although the least-square fitting of osculating jets is very robust to noise in practice.

Another family of techniques exploits the Voronoi diagram [1, 44, 45]. The idea behind these approaches is, instead of fitting the tangent space, to estimate the orthogonal space as well as possible. The convolved covariance measure introduced by Mérigot et al. [45] is particularly appealing since it achieves robustness even for arbitrary compact sets, essentially in \(O(\sqrt{\epsilon })\). It is, in some sense, an integral measure of the covariance matrix of the normal cone around the point of interest. However, convergence of curvature(s) depends on two parameters r and R which have opposite effects on the error. In practice, this approach gives results comparable to osculating jets for curvatures.

Recently, several authors have developed new interesting approaches for estimating the normal vector field on noisy point clouds, even in the presence of sharp features [6, 41, 59]. Furthermore, Boulch and Marlet [6] give probabilistic convergence results. Although these methods cannot be used “as is” for curvature computation, they could be used alongside curvature estimation techniques to locate sharp features in a first pass, and to limit curvature estimation to smooth zones.

Following the integral invariant approaches proposed in [48, 49], Digne and Morel [21] propose several differential estimators on point clouds. These estimators (normal vector field, curvature tensor, etc.) also consider a spherical integration kernel, and covariance matrices are constructed from point clouds or oriented point clouds (i.e. points equipped with a normal vector). For a large set of estimators, the authors provide convergence results (as the spherical kernel radius tends to zero) in the continuous case. Our estimators on digital surfaces follow similar principles while making the convergence speed explicit.

Last, a very recent approach coming from geometric measure theory uses the theory of varifolds to design stable mean curvature estimators [7, 8]. This theory is generic enough to include smooth manifolds, discrete meshes, point clouds, and digital data in the same framework. For geometric estimation, this approach requires both a position and a normal approximation. If both are convergent (in some sense), then the regularized first variation of the varifold measure converges toward the mean curvature. The speed of convergence of this approach, as well as its accuracy in practice, remain to be explored.

Curvature Estimation on Digital Contours and Surfaces

In digital geometry, we usually consider multigrid convergence as an essential criterion [12]. Hence, in dimension 2, parameter-free convergence results have been obtained for length [11] and normal vector estimation [57]. Based either on binomial convolution principles [22, 42] or on polynomial fitting [50], convergence results can also be obtained for higher order derivatives of digital curves. These algorithms are parametrized by the size of the convolution or fitting kernel support, and convergence theorems hold when this support size is an increasing function of the grid resolution and of some shape characteristics.

For curvature estimation along 2D curves, multigrid convergence of parameter-free estimators is still challenging, although accurate experimental results have been obtained with maximal digital circular arcs [53] and with global optimization [28]. In the 3D digital space, several empirical methods exist for estimating curvatures, but none achieves multigrid convergence (e.g. see [23, 38]). In [14], we recently presented a digital estimator of (mean) curvature for 2D and 3D digital objects, which achieves multigrid convergence in \(O(h^{\frac{1} {3} })\).

Desirable Properties for Digital Curvature Estimators

Our objective is to design a curvature tensor estimator for digital data such that: (1) it is provably multigrid convergent, (2) it is accurate in practice, (3) it is computable in an exact manner, (4) it can be efficiently computed either locally or globally (evaluation at a single surface point or extraction of the curvature tensor field), (5) it is robust to further perturbations (like bad digitization around the boundary, outliers).

Contributions and Outline of the Chapter

We achieve this goal by adapting the integral invariant tool originally designed for smooth manifolds and triangulated meshes [48, 49]. We begin in Sect. 9.2 by giving some background on digitization processes with a few useful lemmas, and by recalling the multigrid convergence property as well as integral invariants in the continuous setting. In order to define a sound digital version of such objects, it appears clearly that digital moments (up to second order) must replace continuous moments. This is why we study digital moments in Sect. 9.3, where we establish several results on volume and moment approximation. Then, Sect. 9.4 shows how curvature in 2D and mean curvature in 3D can be approximated with digital integral invariants. This approach relies solely on simple volume estimates. We are then ready to address in Sect. 9.5 the more involved issue of estimating principal curvatures and directions, as well as the normal vector. This is done by building a local digital covariance matrix. Section 9.6 provides a method to automatically set the radius parameter of integral invariants so as to achieve multigrid convergence. Section 9.8 presents a comprehensive evaluation of the practical accuracy of our estimators. Their robustness to perturbations of the input data is then illustrated on numerous examples. Last, such estimators are shown to be useful for feature detection on digital surfaces: singularities (edges) on shapes are detected by examining the behavior of curvature estimators in scale-space.

9.2 Background: Multigrid Convergence and Integral Invariants

9.2.1 Digital Space and Digitization Operators

Since we are interested in evaluating both theoretically and experimentally the behavior of several differential estimators on digital object boundaries, we first have to formalize the links between Euclidean objects and digital ones with the help of a digitization process. Let us consider a family \(\mathbb{X}\) of compact subsets of \(\mathbb{R}^{d}\) whose smoothness requirements will be specified later. The digital space is defined as the set of points of \(\mathbb{R}^{d}\) with integer coordinates, naturally denoted by \(\mathbb{Z}^{d}\). We denote by \(\mathtt{G}_{h}(X)\) the Gauss digitization of X in a d-dimensional grid of grid step h:

$$\displaystyle{ \mathtt{G}_{h}(X):=\left \{\mathbf{z} \in \mathbb{Z}^{d},(h \cdot \mathbf{z}) \in X\right \}\,, }$$
(9.1)

where h ⋅ z is the uniform scaling of z by factor h. If \(\mathbf{z} \in \mathbb{Z}^{d}\), then \(Q_{\mathbf{z}}\) denotes the (closed) unit d-dimensional cube of \(\mathbb{R}^{d}\) centered on z and aligned with the axes of \(\mathbb{Z}^{d}\). We further define \(Q_{h}(\mathbf{z}):=h \cdot Q_{\mathbf{z}}\), the so-called h-cube, as the scaled version of \(Q_{\mathbf{z}}\) (i.e. \(Q_{h}(\mathbf{z})\) is a d-dimensional cube centered at h ⋅ z with edge length h). In addition to the Gauss digitization operator, we consider the inner Jordan digitization \(\mathtt{J}^{-}_{h}(X)\) and the outer Jordan digitization \(\mathtt{J}^{+}_{h}(X)\) at step h of a shape \(X \in \mathbb{X}\), defined as follows (see Fig. 9.1):

$$\displaystyle\begin{array}{rcl} \mathtt{J}^{-}_{ h}(X)&:=& \left \{\mathbf{z} \in \mathbb{Z}^{d},Q_{ h}(\mathbf{z}) \subset X\right \}\,,{}\end{array}$$
(9.2)
$$\displaystyle\begin{array}{rcl} \mathtt{J}^{+}_{ h}(X)&:=& \left \{\mathbf{z} \in \mathbb{Z}^{d},Q_{ h}(\mathbf{z}) \cap X\neq \emptyset \right \}\,.{}\end{array}$$
(9.3)

Given a digital set \(Z \subset \mathbb{Z}^{d}\), the body of Z is the embedding of Z into \(\mathbb{R}^{d}\), denoted by \([\cdot ]_{h}\) and defined as:

$$\displaystyle{ [Z]_{h}:=\bigcup _{\mathbf{z}\in Z}Q_{h}(\mathbf{z})\,. }$$
(9.4)
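To make these definitions concrete, here is a small Python sketch (an illustration, not part of the chapter; the function name and the restriction to Euclidean balls are ours). For a ball, the cube tests of Eqs. (9.2) and (9.3) are exact: \(Q_{h}(\mathbf{z}) \subset X\) if and only if the farthest corner of \(Q_{h}(\mathbf{z})\) lies in the ball, and \(Q_{h}(\mathbf{z}) \cap X\neq \emptyset\) if and only if the closest point of \(Q_{h}(\mathbf{z})\) does.

    from itertools import product

    def ball_digitizations(center, radius, h):
        """Gauss digitization G_h(X) (Eq. (9.1)) and inner/outer Jordan
        digitizations J^-_h(X), J^+_h(X) (Eqs. (9.2)-(9.3)) of the
        Euclidean ball X = B_radius(center), at gridstep h."""
        b = int(radius / h) + 2  # half-width of the scanned integer box
        ranges = [range(int(round(c / h)) - b, int(round(c / h)) + b + 1)
                  for c in center]
        gauss, inner, outer = set(), set(), set()
        for z in product(*ranges):
            p = [zi * h for zi in z]  # embedded point h.z, center of Q_h(z)
            far = sum((abs(pi - ci) + h / 2) ** 2 for pi, ci in zip(p, center))
            near = sum(max(abs(pi - ci) - h / 2, 0.0) ** 2
                       for pi, ci in zip(p, center))
            if sum((pi - ci) ** 2 for pi, ci in zip(p, center)) <= radius ** 2:
                gauss.add(z)   # h.z is in X
            if far <= radius ** 2:
                inner.add(z)   # Q_h(z) is included in X
            if near <= radius ** 2:
                outer.add(z)   # Q_h(z) intersects X
        return gauss, inner, outer

On this sketch, one may check Lemma 1 below directly: inner <= gauss <= outer holds (as set inclusion) for any center, radius and gridstep.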

Let us now formalize the relationships between the Gauss and Jordan digitizations and the Euclidean shape X. We call Jordan strip the digitization \(\mathtt{J}^{0}_{h}(X):=\mathtt{J}^{+}_{h}(X)\setminus \mathtt{J}^{-}_{h}(X)\). Its body is clearly a union of h-cubes, with a thickness of at least one h-cube. First, it is straightforward to check that:

Fig. 9.1

Illustration of the digitization models and notations

Lemma 1

\(\mathtt{J}^{-}_{h}(X) \subset \mathtt{G}_{h}(X) \subset \mathtt{J}^{+}_{h}(X)\) and \([\mathtt{J}^{-}_{h}(X)]_{h} \subset X \subset [\mathtt{J}^{+}_{h}(X)]_{h}\).

From these objects, it is natural to consider the relations of ∂X, the topological boundary of X, with the digitized boundaries \(\partial [\mathtt{G}_{h}(X)]_{h}\), \(\partial [\mathtt{J}^{-}_{h}(X)]_{h}\) or \(\partial [\mathtt{J}^{+}_{h}(X)]_{h}\). We have:

Lemma 2

\([\mathtt{J}^{0}_{h}(X)]_{h} = [\mathtt{J}^{+}_{h}(X)]_{h}\setminus \mathrm{Int}([\mathtt{J}^{-}_{h}(X)]_{h})\) and \(\partial X \subset [\mathtt{J}^{0}_{h}(X)]_{h}\).

Proof

The first equality is straightforward. For the inclusion \(\partial X \subset [\mathtt{J}^{0}_{h}(X)]_{h}\), on the one hand we have by Lemma 1 that \([\mathtt{J}^{-}_{h}(X)]_{h} \subset X\), hence \(\mathrm{Int}([\mathtt{J}^{-}_{h}(X)]_{h}) \subset \mathrm{Int}(X)\), so \(\partial X \cap \mathrm{Int}([\mathtt{J}^{-}_{h}(X)]_{h}) = \emptyset\). On the other hand, we have by the same lemma that \(X \subset [\mathtt{J}^{+}_{h}(X)]_{h}\). By compactness of X, we have \(\partial X \subset X\) and thus \(\partial X \subset [\mathtt{J}^{+}_{h}(X)]_{h}\). Putting these two facts together concludes. □

The ε-offset of a shape X is the set of points at distance lower than or equal to ε from X. It is denoted by \(X^{\epsilon }\). Furthermore, the medial axis MA(∂X) of ∂X is the subset of \(\mathbb{R}^{d}\) whose points have more than one closest point on ∂X. The reach reach(X) of X is the infimum of the distance between ∂X and its medial axis. Shapes with positive reach have principal curvatures bounded by ± 1∕reach(X). They have a \(C^{2}\)-smooth boundary almost everywhere. The boundary of the Gauss digitization of X is close to the surface ∂X for smooth shapes:

Theorem 1 ([36])

Let X be a compact domain of \(\mathbb{R}^{d}\) such that the reach of ∂X is greater than ρ. Then, for any digitization step \(0 <h <\frac{2\rho } {\sqrt{d}}\) , the Hausdorff distance between the sets ∂X and \(\partial [\mathtt{G}_{h}(X)]_{h}\) is less than \(\sqrt{d}h/2\) . Hence

$$\displaystyle{ \partial [\mathtt{G}_{h}(X)]_{h} \subset \left (\partial X\right )^{\frac{\sqrt{d}} {2} h}. }$$
(9.5)

This also holds for Jordan digitizations, with weaker hypotheses on X, but with a larger bound:

Lemma 3

Let X be a compact domain of \(\mathbb{R}^{d}\) . Jordan digitizations of X are close to the boundary of X in the Hausdorff sense:

$$\displaystyle\begin{array}{rcl} \partial [\mathtt{J}^{-}_{ h}(X)]_{h}& \subset (\partial X)^{\sqrt{d}h}\,,&{}\end{array}$$
(9.6)
$$\displaystyle\begin{array}{rcl} \partial [\mathtt{J}^{+}_{ h}(X)]_{h}& \subset (\partial X)^{\sqrt{d}h}\,,&{}\end{array}$$
(9.7)
$$\displaystyle\begin{array}{rcl} [\mathtt{J}^{0}_{ h}(X)]_{h}& \subset (\partial X)^{\sqrt{d}h}\,.&{}\end{array}$$
(9.8)

Proof

First of all, one can check that \([\mathtt{J}^{0}_{h}(X)]_{h} \supset \partial [\mathtt{J}^{-}_{h}(X)]_{h} \cup \partial [\mathtt{J}^{+}_{h}(X)]_{h}\). It follows that proving the last inclusion implies the first two ones.

Now, let \(x \in [\mathtt{J}^{0}_{h}(X)]_{h}\). There exists some h-cube \(Q_{h}(\mathbf{z})\) that contains x, such that (i) \(Q_{h}(\mathbf{z}) \cap X\neq \emptyset\) since \(\mathbf{z} \in \mathtt{J}^{+}_{h}(X)\), and (ii) \(Q_{h}(\mathbf{z})\not\subset X\) since \(\mathbf{z}\not\in \mathtt{J}^{-}_{h}(X)\). From (i), there exists \(\mathbf{y}_{1} \in Q_{h}(\mathbf{z}) \cap X\). From (ii), there exists \(\mathbf{y}_{2} \in Q_{h}(\mathbf{z})\) with \(\mathbf{y}_{2}\not\in X\). The straight segment \([\mathbf{y}_{1}\mathbf{y}_{2}]\) joining y₁ to y₂ is an arc that goes from X to the complement of X. By compactness of X, there exists some \(\mathbf{y}_{3} \in [\mathbf{y}_{1}\mathbf{y}_{2}]\) with \(\mathbf{y}_{3} \in \partial X\). By convexity of \(Q_{h}(\mathbf{z})\), \(\mathbf{y}_{1} \in Q_{h}(\mathbf{z})\) and \(\mathbf{y}_{2} \in Q_{h}(\mathbf{z})\) imply \([\mathbf{y}_{1}\mathbf{y}_{2}] \subset Q_{h}(\mathbf{z})\). It follows that y₃ also belongs to \(Q_{h}(\mathbf{z})\). We have just found a point in the same h-cube as x which lies on ∂X. Since two points in an h-cube may not be further away than \(\sqrt{d}h\), x is no further away than this distance from ∂X. □

The following result will also be useful.

Lemma 4 (Proof of Lemma 10 [36])

Let X be a compact domain of \(\mathbb{R}^{d}\) such that the reach of ∂X is greater than ρ, and let ε be some value smaller than or equal to ρ. Then

$$\displaystyle{ \mathop{\mathrm{Vol}}\nolimits (\partial X^{\epsilon }) \leq 2^{d+1}\mathrm{Area}(\partial X)\epsilon. }$$
(9.9)

9.2.2 Multigrid Convergence of Global and Local Geometric Estimators

As discussed in various previous works (see for instance [12] for a survey), the idea of multigrid convergence is that, when we define a quantity estimator on the digitization of some shape \(X \subset \mathbb{R}^{d}\), we check whether the estimated quantity converges (theoretically and/or experimentally) to the associated one on X as h tends to zero. More formally,

Definition 1 (Multigrid Convergence for Local Geometric Quantities)

Given a digitization process D, a local discrete geometric estimator \(\hat{E}\) of some geometric quantity E is multigrid convergent for the family of shapes \(\mathbb{X}\) if and only if, for any \(X \in \mathbb{X}\), there exists a grid step \(h_{X} > 0\) such that the estimate \(\hat{E}(\mathtt{D}_{h}(X),\hat{\mathbf{x}},h)\) is defined for all \(\hat{\mathbf{x}} \in \partial [\mathtt{D}_{h}(X)]_{h}\) with \(0 < h < h_{X}\), and for any \(\mathbf{x} \in \partial X\),

$$\displaystyle{ \forall \hat{\mathbf{x}} \in \partial [\mathtt{D}_{h}(X)]_{h}\ \text{with}\ \|\hat{\mathbf{x}} -\mathbf{x}\|_{\infty }\leq h,\vert \hat{E}(\mathtt{D}_{h}(X),\hat{\mathbf{x}},h) - E(X,\mathbf{x})\vert \leq \tau _{X,\mathbf{x}}(h), }$$
(9.10)

where \(\tau _{X,\mathbf{x}}: \mathbb{R}^{+}\setminus \{0\} \rightarrow \mathbb{R}^{+}\) has null limit at 0. This function defines the speed of convergence of \(\hat{E}\) toward E at point x of X. The convergence is uniform for X when every \(\tau _{X,\mathbf{x}}\) is bounded from above by a function \(\tau _{X}\) independent of \(\mathbf{x} \in \partial X\) with null limit at 0.

Note that when a geometric quantity is global (e.g. area or volume), we do not need an explicit mapping between ∂X and \(\partial [\mathtt{D}_{h}(X)]_{h}\), and Definition 1 can be rephrased to define the simpler multigrid convergence of global geometric quantities [12].

Instead of estimating the geometric quantity for all \(\hat{\mathbf{x}} \in \partial [\mathtt{G}_{h}(X)]_{h}\), classical local discrete estimators estimate the quantity at cells of the cellular boundary of a digital set, otherwise said at elements of the interpixel representation of the digital set boundary (pointels, linels or surfels). We usually consider a canonical Euclidean embedding of k-cells into \(\mathbb{R}^{d}\) (1-cells are mapped onto unit Euclidean segments, 2-cells onto unit squares, etc.), scaled by the factor h. Furthermore, the estimated quantity \(\hat{E}(\mathtt{D}_{h}(X),\hat{\mathbf{x}},h)\) is constant for all \(\hat{\mathbf{x}}\) belonging to the embedding of a boundary k-cell.
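In practice, the convergence speed \(\tau _{X}(h)\) of Definition 1 is observed experimentally by digitizing the same shape at decreasing gridsteps and regressing the error against h in log-log coordinates. A minimal harness for this (ours, not from the chapter) could be:

    import math

    def convergence_order(gridsteps, errors):
        """Least-squares slope of log(error) versus log(h): if the error
        behaves like C * h^alpha, the returned value approaches alpha."""
        xs = [math.log(h) for h in gridsteps]
        ys = [math.log(e) for e in errors]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                / sum((x - mx) ** 2 for x in xs))

For instance, errors (0.021, 0.0105, 0.0052) measured at gridsteps (0.1, 0.05, 0.025) yield an observed order close to 1.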

9.2.3 Integral Invariants in the Continuous Setting

In geometry processing, integral invariants have been widely investigated to define estimators of differential quantities (see [48, 49] for a complete overview). In short, the main idea is to move a kernel along points \(\mathbf{x} \in \partial X\) and to compute integrals on the intersection between X and the kernel. Even if different kernels (e.g. Euclidean ball, Euclidean sphere) and different integration functions can be considered, we focus here on volumetric integral invariants, defined as follows:

Definition 2

Given \(X \in \mathbb{X}\) and a radius \(R \in \mathbb{R}^{+{\ast}}\), the volumetric integral \(V_{R}(\mathbf{x})\) at \(\mathbf{x} \in \partial X\) is given by (see Fig. 9.3)

$$\displaystyle{ V _{R}(\mathbf{x}):=\int _{B_{R}(\mathbf{x})}\chi (\mathbf{p})d\mathbf{p}\,, }$$
(9.11)

where \(B_{R}(\mathbf{x})\) is the Euclidean ball with radius R and center x, and χ(⋅ ) is the characteristic function of X. In dimension 2, we simply denote this quantity by \(A_{R}(\mathbf{x})\).

Several authors have detailed the connections between \(V_{R}(\mathbf{x})\) and the curvature (resp. mean curvature) at x for shapes in \(\mathbb{R}^{2}\) (resp. \(\mathbb{R}^{3}\)) [9, 48, 49].

Lemma 5 ([49])

For a sufficiently smooth shape X in \(\mathbb{R}^{2}\) and \(\mathbf{x} \in \partial X\), we have

$$\displaystyle{ A_{R}(\mathbf{x}) = \frac{\pi } {2}R^{2} -\frac{\kappa (X,\mathbf{x})} {3} R^{3} + O(R^{4})\,, }$$
(9.12)

where κ(X, x) is the curvature of ∂X at x. For a sufficiently smooth shape X in \(\mathbb{R}^{3}\) and \(\mathbf{x} \in \partial X\), we have

$$\displaystyle{ V _{R}(\mathbf{x}) = \frac{2\pi } {3}R^{3} -\frac{\pi H(X,\mathbf{x})} {4} R^{4} + O(R^{5})\,, }$$
(9.13)

where H(X, x) is the mean curvature of ∂X at x.

Such results are obtained by a Taylor expansion at x of the surface ∂X, approximated by a parametric function y = f(x) in 2D and z = f(x, y) in 3D. From Eqs. (9.12) and (9.13), and for a fixed radius R, one can derive local estimators \(\tilde{\kappa }^{R}\) and \(\tilde{H}^{R}\) respectively:

$$\displaystyle{ \tilde{\kappa }^{R}(X,\mathbf{x}):= \frac{3\pi } {2R} -\frac{3A_{R}(\mathbf{x})} {R^{3}},\quad \tilde{H}^{R}(X,\mathbf{x}):= \frac{8} {3R} -\frac{4V _{R}(\mathbf{x})} {\pi R^{4}} \,. }$$
(9.14)

In this way, as R tends to zero, both estimated values converge to the expected ones (respectively κ and H). More formally:

$$\displaystyle{ \tilde{\kappa }^{R}(X,\mathbf{x}) = \kappa (X,\mathbf{x}) + O(R),\quad \tilde{H}^{R}(X,\mathbf{x}) = H(X,\mathbf{x}) + O(R)\,. }$$
(9.15)
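These expansions are easy to check numerically in the continuous setting. For X a disk of radius ρ and x on its boundary, \(A_{R}(\mathbf{x})\) is the area of a lens between two circles whose centers are at distance ρ, for which a closed formula exists. The sketch below (an illustration under these assumptions, not part of the chapter) shows \(\tilde{\kappa }^{R}\) of Eq. (9.14) approaching 1∕ρ as R decreases:

    import math

    def disk_A_R(rho, R):
        """Exact A_R(x) for X a disk of radius rho and x on its boundary:
        standard intersection area of two circles with radii rho and R
        whose centers lie at distance d = rho (requires 0 < R < 2*rho)."""
        d = rho
        a1 = rho ** 2 * math.acos((d * d + rho * rho - R * R) / (2 * d * rho))
        a2 = R ** 2 * math.acos((d * d + R * R - rho * rho) / (2 * d * R))
        a3 = 0.5 * math.sqrt((-d + rho + R) * (d + rho - R)
                             * (d - rho + R) * (d + rho + R))
        return a1 + a2 - a3

    rho = 7.5
    for R in (2.0, 1.0, 0.5, 0.25):
        kappa = 3 * math.pi / (2 * R) - 3 * disk_A_R(rho, R) / R ** 3  # Eq. (9.14)
        print(R, kappa, abs(kappa - 1 / rho))  # the error vanishes with R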

Similarly, directional information such as principal curvatures, and thus the Gaussian curvature, can be retrieved from integral computations. Indeed, instead of computing the measure of \(B_{R}(\mathbf{x}) \cap X\) as in Definition 2, we consider its covariance matrix. Given a non-empty subset \(Y \subset \mathbb{R}^{d}\), the covariance matrix of Y is given by

$$\displaystyle{ \mathcal{V }(Y ):=\int _{Y }(\mathbf{p} -\overline{Y })(\mathbf{p} -\overline{Y })^{T}d\mathbf{p} =\int _{ Y }\mathbf{p}\mathbf{p}^{T}d\mathbf{p} -\mathop{\mathrm{Vol}}\nolimits (Y )\overline{Y }\overline{Y }^{T}, }$$
(9.16)

where \(\overline{Y }\) is the centroid of Y and \(\mathop{\mathrm{Vol}}\nolimits (Y )\) its volume. For non-negative integers p, q and s, we recall the definition of the pqs-moments \(m^{pqs}(Y )\) of Y:

$$\displaystyle{ m^{pqs}(Y ):=\iiint _{ Y }x^{p}y^{q}z^{s}dxdydz\,. }$$
(9.17)

Note that the volume \(\mathop{\mathrm{Vol}}\nolimits (Y )\) is the 0-moment \(m^{000}(Y )\), and that the centroid \(\overline{Y }\) is the vector of 1-moments normalized by the 0-moment, i.e. \((m^{100}(Y ),m^{010}(Y ),m^{001}(Y ))^{T}/m^{000}(Y )\). For simplicity, let us denote by A the Euclidean set \(B_{R}(\mathbf{x}) \cap X\). The covariance matrix of A is then rewritten as:

$$\displaystyle\begin{array}{rcl} \mathcal{V }(A)& =& \left [\begin{array}{ccc} m^{200}(A)&m^{110}(A)&m^{101}(A) \\ m^{110}(A)&m^{020}(A)&m^{011}(A) \\ m^{101}(A)&m^{011}(A)&m^{002}(A) \end{array} \right ]\quad \quad \quad \quad \quad \quad \\ & & \quad - \frac{1} {m^{000}(A)}\left [\begin{array}{c} m^{100}(A) \\ m^{010}(A) \\ m^{001}(A)\\ \end{array} \right ] \otimes \left [\begin{array}{c} m^{100}(A) \\ m^{010}(A) \\ m^{001}(A)\\ \end{array} \right ]^{T}.{}\end{array}$$
(9.18)
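Equation (9.18) translates directly into code once the moments of A are available. The helper below is a sketch (ours, not from the chapter), with moments supplied as a dictionary indexed by exponent tuples; it works in any dimension:

    import numpy as np

    def covariance_from_moments(m):
        """Covariance matrix of Eq. (9.18) from a dictionary of moments,
        e.g. m[(2, 0, 0)] = m^200(A) and m[(0, 0, 0)] = Vol(A)."""
        d = len(next(iter(m)))  # dimension, recovered from the key length
        one = lambda i: tuple(int(k == i) for k in range(d))
        two = lambda i, j: tuple(int(k == i) + int(k == j) for k in range(d))
        second = np.array([[m[two(i, j)] for j in range(d)]
                           for i in range(d)], float)
        first = np.array([m[one(i)] for i in range(d)], float)
        return second - np.outer(first, first) / m[(0,) * d]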

In [48], the authors have demonstrated that the eigenvalues and eigenvectors of \(\mathcal{V }(A)\) provide principal curvature and principal direction information:

Lemma 6 ([48], Theorem 2)

Given a shape \(X \in \mathbb{X}\) , the eigenvalues \(\lambda _{1}\), \(\lambda _{2}\), \(\lambda _{3}\) of \(\mathcal{V }(A)\) , where \(A:= B_{R}(\mathbf{x}) \cap X\) and \(\mathbf{x} \in \partial X\), with \(\lambda _{1} \geq \lambda _{2} \geq \lambda _{3}\) , have the following Taylor expansions:

$$\displaystyle\begin{array}{rcl} \lambda _{1}& =& \frac{2\pi } {15}R^{5} - \frac{\pi } {48}(3\kappa _{1}(X,\mathbf{x}) + \kappa _{2}(X,\mathbf{x}))R^{6} + O(R^{7})\,,{}\end{array}$$
(9.19)
$$\displaystyle\begin{array}{rcl} \lambda _{2}& =& \frac{2\pi } {15}R^{5} - \frac{\pi } {48}(\kappa _{1}(X,\mathbf{x}) + 3\kappa _{2}(X,\mathbf{x}))R^{6} + O(R^{7})\,,{}\end{array}$$
(9.20)
$$\displaystyle\begin{array}{rcl} \lambda _{3}& =& \frac{19\pi } {480}R^{5} - \frac{9\pi } {512}(\kappa _{1}(X,\mathbf{x}) + \kappa _{2}(X,\mathbf{x}))R^{6} + O(R^{7})\,,{}\end{array}$$
(9.21)

where \(\kappa _{1}(X,\mathbf{x})\) and \(\kappa _{2}(X,\mathbf{x})\) denote the principal curvatures of ∂X at x.

Hence, similarly to Eq. ( 9.14 ), one can define local estimators \(\tilde{\kappa }_{1}^{R}\) , \(\tilde{\kappa }_{2}^{R}\) , and finally the Gaussian curvature \(\tilde{K}_{R}:=\tilde{\kappa }_{1}^{R} \cdot \tilde{\kappa }_{2}^{R}\) , as functions of \(\{\lambda _{i}\}_{i=1,2,3}\) and R. From Lemma 6, all these estimators approach the expected quantities when R tends to 0.
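Solving Eqs. (9.19) and (9.20) for \(\kappa _{1}\) and \(\kappa _{2}\) and dropping the O(R) remainders yields explicit formulas. The sketch below (ours, written for the continuous covariance matrix) makes the construction of \(\tilde{\kappa }_{1}^{R}\) and \(\tilde{\kappa }_{2}^{R}\) concrete:

    import numpy as np

    def principal_curvatures(cov, R):
        """Estimates (kappa_1, kappa_2) from the covariance matrix of
        A = B_R(x) ∩ X, by inverting the Taylor expansions (9.19)-(9.20)
        up to their O(R^7) remainders."""
        lam = np.sort(np.linalg.eigvalsh(cov))[::-1]  # lambda_1 >= lambda_2 >= lambda_3
        l1, l2 = lam[0], lam[1]
        k1 = 6.0 * (l2 - 3.0 * l1) / (np.pi * R ** 6) + 8.0 / (5.0 * R)
        k2 = 6.0 * (l1 - 3.0 * l2) / (np.pi * R ** 6) + 8.0 / (5.0 * R)
        return k1, k2  # their product estimates the Gaussian curvature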

When dealing with digital shapes \(\mathtt{D}_{h}(X)\), the implementation of these estimators becomes straightforward: choose a radius R, center a Euclidean (or digital) ball at chosen points of \([\mathtt{D}_{h}(X)]_{h}\) (e.g. centroids of linels or surfels), compute the quantities (area, volume, covariance matrix) and finally estimate the curvature information \(\tilde{\kappa }\), \(\tilde{H}\), \(\tilde{\kappa }_{1}\), \(\tilde{\kappa }_{2}\) or \(\tilde{K}\).

However, several issues are hidden in this approach: What are meaningful values for R according to the shape size and geometry? Do points of \([\mathtt{D}_{h}(X)]_{h}\) converge to points \(\mathbf{x} \in \partial X\) for which Lemmas 5 and 6 are valid? Does counting the number of pixels (resp. voxels) converge to \(A_{R}(\mathbf{x})\) (resp. \(V_{R}(\mathbf{x})\))? Does the digital covariance matrix converge to the expected one? The rest of the chapter addresses all these questions.

9.3 Digital Moments

Integral invariants rely on the precise estimation of the volume and covariance matrix of specific Euclidean subsets. These quantities can be expressed as functions of zeroth, first and second order moments of Euclidean subsets. It is thus of critical importance to properly estimate moments in the digital world in order to use integral invariants for approximating curvatures of digital shapes.

Since the approximation of moments is directly related to area and volume estimation, but also to integral approximation, there exists a vast literature on this topic. It has been known since Gauss and Dirichlet that counting the number of integer points within a convex set provides an order one approximation of its total area (in 2D) or volume (in dD). In fact, much better bounds are achievable for 2D shapes if the boundary is strictly \(C^{3}\)-convex [26]. This holds also in higher dimensions [25, 33, 46]. The estimation of moments of digital sets was tackled in a series of papers by Klette and Žunić [30–32]. To sum up their results, they give error upper bounds that are similar to Huxley’s bound for arbitrary moments of 2D shapes.

However, we will not use these bounds for several reasons: (i) they are valid for (strictly) smooth convex shapes while integral invariants may involve non smooth and non convex shapes, (ii) the bounds are given as big “O” and some shape geometry is hidden in the constant. We prefer possibly weaker bounds but we want them to be explicit and valid for more general shapes.

9.3.1 Moments and Digital Moments

Let X be some compact domain of \(\mathbb{R}^{d}\). The \(p_{1}\cdots p_{d}\)-moment of X is defined as

$$\displaystyle{ m^{p_{1}\cdots p_{d} }(X):=\int \cdots \int _{X}x_{1}\,^{p_{1} }\cdots x_{d}\,^{p_{d} }\,dx_{1}\cdots dx_{d}\,. }$$
(9.22)

The 0⋯0-moment of X is the volume of X (denoted \(\mathop{\mathrm{Vol}}\nolimits (X)\)). For any subset Z of \(\mathbb{Z}^{d}\), the \(p_{1}\cdots p_{d}\)-digital moment of Z at step h is defined as

$$\displaystyle{ \hat{m}_{h}^{p_{1}\cdots p_{d} }(Z):=h^{d+p_{1}+\cdots +p_{d} }\sum _{(z_{1},\ldots,z_{d})\in Z}z_{1}\,^{p_{1} }\cdots z_{d}\,^{p_{d} }. }$$
(9.23)

The 0⋯0-digital moment of Z is the digital volume of Z (denoted by \(\widehat{\mathrm{Area}}(Z,h)\) when d = 2 and by \(\widehat{\mathop{\mathrm{Vol}}\nolimits }(Z,h)\) when d ≥ 3).
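Since Eq. (9.23) is a finite sum, digital moments are computable exactly (with integer arithmetic when h is rational). A direct Python transcription (a sketch, not from the chapter) reads:

    from math import prod

    def digital_moment(Z, p, h):
        """p_1...p_d digital moment of a digital set Z at step h, Eq. (9.23).
        Z is an iterable of integer d-tuples, p the tuple of exponents."""
        return h ** (len(p) + sum(p)) * sum(
            prod(zi ** pi for zi, pi in zip(z, p)) for z in Z)

    # e.g. the digital volume of a 2D digital set: digital_moment(Z, (0, 0), h)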

In the sequel, points and vectors of \(\mathbb{R}^{d}\) and \(\mathbb{Z}^{d}\) are written in bold, and for some point or vector \(\mathbf{z} \in \mathbb{Z}^{d}\), its coordinates or components are written with subscripts as \(\mathbf{z} = (z_{1},\ldots,z_{d})\). We wish to bound the error between the moments of X and the digital moments of the digitization of X as some function of the digitization gridstep h. We thus give particular attention to moments and digital moments of h-cubes. The following equalities are easily obtained by simple integration.

Lemma 7

Let \(\mathbf{z} \in \mathbb{Z}^{d}\) . The point z is the Gauss digitization of the h-cube \(Q_{h}(\mathbf{z})\), but also its inner and outer Jordan digitization. Low order moments and digital moments of h-cubes are as follows:

$$\displaystyle\begin{array}{rcl} \begin{array}{lllll} m^{0\cdots 0}(Q_{h}(\mathbf{z}))& = h^{d}\qquad \qquad \hat{m}_{h}^{0\cdots 0}(\{\mathbf{z}\})& = h^{d}\end{array} & &{}\end{array}$$
(9.24)
$$\displaystyle\begin{array}{rcl} \begin{array}{lllll} m^{10\cdots 0}(Q_{h}(\mathbf{z}))& = h^{d+1}z_{1}\qquad \qquad \hat{m}_{h}^{10\cdots 0}(\{\mathbf{z}\})& = h^{d+1}z_{1}\end{array} & &{}\end{array}$$
(9.25)
$$\displaystyle\begin{array}{rcl} \begin{array}{lllll} m^{110\cdots 0}(Q_{h}(\mathbf{z}))& = h^{d+2}z_{1}z_{2}\qquad \qquad \hat{m}_{h}^{110\cdots 0}(\{\mathbf{z}\})& = h^{d+2}z_{1}z_{2}\end{array} & &{}\end{array}$$
(9.26)
$$\displaystyle\begin{array}{rcl} \begin{array}{lllll} m^{20\cdots 0}(Q_{h}(\mathbf{z}))& = h^{d+2}\left (z_{1}^{2} + \frac{1} {12} \right )\qquad \qquad \hat{m}_{h}^{20\cdots 0}(\{\mathbf{z}\})& = h^{d+2}z_{ 1}^{2} \end{array} & &{}\end{array}$$
(9.27)

Discrepancies between digital and continuous moments appear for moments \(p_{1}\cdots p_{d}\) as soon as one of the \(p_{i}\) is greater than or equal to 2.

9.3.2 General Results for Volume Estimation Errors

The theorem below shows that the error between the volume of a shape X and the naive volume estimation on its digitization by simple enumeration is smaller than the volume of the offset of ∂X with distance \(\sqrt{d}h\).

Theorem 2

Let X be a compact domain of \(\mathbb{R}^{d}\) . Let D be any digitization process such that \(\mathtt{J}^{-}_{h}(X) \subset \mathtt{D}_{h}(X) \subset \mathtt{J}^{+}_{h}(X)\). Digital and continuous volumes are related as follows:

$$\displaystyle{ \left \vert \mathop{\mathrm{Vol}}\nolimits (X) -\widehat{\mathop{\mathrm{Vol}}\nolimits }(\mathtt{D}_{h}(X),h)\right \vert \leq \mathop{\mathrm{Vol}}\nolimits (\partial X^{\sqrt{d}h}). }$$
(9.28)

Proof

First of all, 0-order continuous and digital moments of h-cubes coincide, so we have the following equality:

$$\displaystyle\begin{array}{rcl} \widehat{\mathop{\mathrm{Vol}}\nolimits }(\mathtt{D}_{h}(X),h)& =& \sum _{\mathbf{z}\in \mathtt{D}_{h}(X)}\hat{m}_{h}^{0\cdots 0}(\{\mathbf{z}\}) \\ & =& \sum _{\mathbf{z}\in \mathtt{D}_{h}(X)}m^{0\cdots 0}(Q_{ h}(\mathbf{z}))\qquad \text{(Lemma 7)} \\ & =& \mathop{\mathrm{Vol}}\nolimits ([\mathtt{D}_{h}(X)]_{h})\,. {}\end{array}$$
(9.29)

Then, denoting by \(A\varDelta B\) the symmetric difference between sets A and B, we bound the volume difference as

$$\displaystyle\begin{array}{rcl} \left \vert \mathop{\mathrm{Vol}}\nolimits (X) -\mathop{\mathrm{Vol}}\nolimits ([\mathtt{D}_{h}(X)]_{h})\right \vert \leq \mathop{\mathrm{Vol}}\nolimits (X\varDelta [\mathtt{D}_{h}(X)]_{h})\,.& &{}\end{array}$$
(9.30)

Indeed, given two sets A and B, we have

$$\displaystyle\begin{array}{rcl} \vert \mathop{\mathrm{Vol}}\nolimits (A) -\mathop{\mathrm{Vol}}\nolimits (B)\vert & =& \vert \mathop{\mathrm{Vol}}\nolimits (A\setminus B) -\mathop{\mathrm{Vol}}\nolimits (B\setminus A)\vert {}\end{array}$$
(9.31)
$$\displaystyle\begin{array}{rcl} & \leq & \vert \mathop{\mathrm{Vol}}\nolimits (A\setminus B) +\mathop{ \mathrm{Vol}}\nolimits (B\setminus A)\vert {}\end{array}$$
(9.32)
$$\displaystyle\begin{array}{rcl} & \leq & \mathop{\mathrm{Vol}}\nolimits (A\varDelta B)\,.{}\end{array}$$
(9.33)

Now, for any sets A, B, Y₁, Y₂ with \(A \subset Y_{1} \subset B\) and \(A \subset Y_{2} \subset B\), we have \(Y_{1}\varDelta Y_{2} \subset B\setminus A\). This follows from \(Y_{1}\varDelta Y_{2} = (Y_{1} \cup Y_{2})\setminus (Y_{1} \cap Y_{2})\). Then, obviously \((Y_{1} \cup Y_{2}) \subset B\) and \(A \subset (Y_{1} \cap Y_{2})\).

Now, by Lemma 1, we have \([\mathtt{J}^{-}_{h}(X)]_{h} \subset X \subset [\mathtt{J}^{+}_{h}(X)]_{h}\). But by hypothesis, \(\mathtt{J}^{-}_{h}(X) \subset \mathtt{D}_{h}(X) \subset \mathtt{J}^{+}_{h}(X)\), so we also have \([\mathtt{J}^{-}_{h}(X)]_{h} \subset [\mathtt{D}_{h}(X)]_{h} \subset [\mathtt{J}^{+}_{h}(X)]_{h}\). We may thus apply the preceding property, setting \(A:= [\mathtt{J}^{-}_{h}(X)]_{h}\), \(B:= [\mathtt{J}^{+}_{h}(X)]_{h}\), \(Y_{1}:= X\) and \(Y_{2}:= [\mathtt{D}_{h}(X)]_{h}\). We get

$$\displaystyle{ X\varDelta [\mathtt{D}_{h}(X)]_{h} \subset [\mathtt{J}^{+}_{ h}(X)]_{h}\setminus [\mathtt{J}^{-}_{ h}(X)]_{h}\,. }$$
(9.34)

Putting preceding relations together gives

$$\displaystyle\begin{array}{rcl} \begin{array}{llllll} \left \vert \mathop{\mathrm{Vol}}\nolimits (X) -\widehat{\mathop{\mathrm{Vol}}\nolimits }(\mathtt{D}_{h}(X),h)\right \vert & \leq &\mathop{\mathrm{Vol}}\nolimits ([\mathtt{J}^{+}_{h}(X)]_{h}\setminus [\mathtt{J}^{-}_{h}(X)]_{h})&&\text{(using Eq.(9.29), (9.30) and (9.34))} \\ & \leq &\mathop{\mathrm{Vol}}\nolimits ([\mathtt{J}^{0}_{h}(X)]_{h}) &&\text{(definition of Jordan strip)} \\ & \leq &\mathop{\mathrm{Vol}}\nolimits (\partial X^{\sqrt{d}h})\,. &&\text{(Lemma 3)}\end{array} & &{}\end{array}$$
(9.35)

This concludes. □

It would be tempting to extend the preceding result to arbitrary moments. However, some moments are not non-negative measures, so we cannot directly use the preceding argument. Hence, we postpone the results on moment estimation to a later section.

Lemma 4 in conjunction with the preceding theorem allows us to relate this error to the shape area, for smooth enough shapes.

Corollary 1

Let X be a compact domain of \(\mathbb{R}^{d}\) such that the reach of ∂X is greater than ρ, and let h be smaller than \(\frac{\rho }{\sqrt{d}}\) . Let D be any digitization process such that \(\mathtt{J}^{-}_{h}(X) \subset \mathtt{D}_{h}(X) \subset \mathtt{J}^{+}_{h}(X)\). Digital and continuous volumes are related as follows:

$$\displaystyle{ \left \vert \mathop{\mathrm{Vol}}\nolimits (X) -\widehat{\mathop{\mathrm{Vol}}\nolimits }(\mathtt{D}_{h}(X),h)\right \vert \leq 2^{d+1}\sqrt{d}\mathrm{Area}(\partial X)h\,. }$$
(9.36)
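Corollary 1 is easy to observe experimentally. Reusing the illustrative ball_digitizations sketch of Sect. 9.2.1 on a disk of radius 2 (for which \(\mathop{\mathrm{Vol}}\nolimits (X) = 4\pi\)), the volume error of the Gauss digitization indeed decreases at least linearly in h:

    import math

    for h in (0.5, 0.25, 0.125, 0.0625):
        gauss, _, _ = ball_digitizations((0.0, 0.0), 2.0, h)
        vol_hat = h ** 2 * len(gauss)         # digital volume of G_h(X) at step h
        print(h, abs(4 * math.pi - vol_hat))  # error bounded by O(h)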

9.3.3 Volume Approximation Within a Ball of Radius R

Integral invariants rely on moment estimation along the boundary of a shape X within a ball \(B_{R}(\mathbf{x})\) of given radius R, for \(\mathbf{x} \in \partial X\). Most results on volume and moment estimation are valid for smooth enough shapes, which is not the case of \(X \cap B_{R}(\mathbf{x})\). We thus establish results for intersections of smooth shapes. The following lemma is required.

Lemma 8

Let A, B be compact domains of \(\mathbb{R}^{d}\) and ε some positive number. Then \((\partial (A \cap B))^{\epsilon } = ((\partial A) \cap B)^{\epsilon } \cup (A \cap (\partial B))^{\epsilon }\) .

Proof

Figure 9.2 illustrates this lemma. It suffices to show that \(\partial (A \cap B) = ((\partial A) \cap B) \cup (A \cap (\partial B))\). For \(\subset\), this comes from the facts that \(\partial (A \cap B) \subset A \cap B\) and \(\partial (A \cap B) \subset (\partial A) \cup (\partial B)\). For \(\supset\), let y be a point of \((\partial A) \cap B\). If Int(A) is the interior of A, we have that \(\partial A = A\setminus \mathrm{Int}(A)\) since A is compact. It follows that \(\mathbf{y} \in (A\setminus \mathrm{Int}(A)) \cap B\), hence \(\mathbf{y} \in (A \cap B)\setminus (\mathrm{Int}(A) \cap B)\). But it holds that \(\mathrm{Int}(A \cap B) \subset (\mathrm{Int}(A) \cap B)\). Hence \((A \cap B)\setminus \mathrm{Int}(A \cap B) \supset (A \cap B)\setminus (\mathrm{Int}(A) \cap B)\). Noticing that \((A \cap B)\setminus \mathrm{Int}(A \cap B) = \partial (A \cap B)\), we conclude that \(\mathbf{y} \in \partial (A \cap B)\). The case \(\mathbf{y} \in A \cap \partial B\) is similar. □

Fig. 9.2

Illustration of Lemma 8

We prove below that the volume of the \(\sqrt{ d}h\)-offset of the boundary of a compact domain X intersected with a ball of radius R can be upper bounded by a constant times h. Furthermore, this bound does not depend on the geometry of X, as long as the radius R is smaller than half the reach of X.

Theorem 3

Let X be a compact domain of \(\mathbb{R}^{d}\) such that the reach of ∂X is greater than ρ. Let \(\mathbf{x} \in \mathbb{R}^{d}\) . Let R (the radius of the ball) and h (the gridstep) be some positive numbers such that \(h \leq \frac{R} {\sqrt{2d}}\) and \(2R \leq \rho\). Then

$$\displaystyle\begin{array}{rcl} \mathop{\mathrm{Vol}}\nolimits \left ((\partial (X \cap B_{R}(\mathbf{x})))^{\sqrt{d}h}\right ) \leq K_{ 1}(d)R^{d-1}h\,,& &{}\end{array}$$
(9.37)

with \(K_{1}(d):=2\sqrt{d}\left (d + \left (1 + \frac{1} {\sqrt{2}}\right )^{d}\right )V _{d} + \frac{2\sqrt{6}} {3} \left (\frac{4+\sqrt{2}} {2} \right )^{d}V _{ d-1}\) . As a corollary, this volume is upper bounded by 68Rh when d = 2, and by \(154R^{2}h\) when d = 3.

Proof

Lemma 8 shows that

$$\displaystyle{ \mathop{\mathrm{Vol}}\nolimits \left ((\partial (X \cap B_{R}(\mathbf{x})))^{\sqrt{d}h}\right ) \leq \mathop{\mathrm{Vol}}\nolimits \left ((\partial X \cap B_{ R}(\mathbf{x}))^{\sqrt{d}h}\right )+\mathop{\mathrm{Vol}}\nolimits \left ((X \cap \partial B_{ R}(\mathbf{x}))^{\sqrt{d}h}\right )\,. }$$
(9.38)

The rightmost term is not hard to bound from above. Letting \(V_{d}\) be the volume of the d-dimensional ball of radius 1, we proceed as follows:

$$\displaystyle\begin{array}{rcl} & & \mathop{\mathrm{Vol}}\nolimits \left ((X \cap \partial B_{R}(\mathbf{x}))^{\sqrt{d}h}\right ) \\ & & \leq \mathop{\mathrm{Vol}}\nolimits \left ((\partial B_{R}(\mathbf{x}))^{\sqrt{d}h}\right ) \\ & & =\mathop{ \mathrm{Vol}}\nolimits \left (B_{R+\sqrt{d}h}(\mathbf{x})) -\mathop{\mathrm{Vol}}\nolimits (B_{R-\sqrt{d}h}(\mathbf{x})\right ) \\ & & = V _{d}\left [(R + \sqrt{d}h)^{d} - (R -\sqrt{d}h)^{d}\right ] \\ & & = 2V _{d}\left [\binom{d}{1}R^{d-1}\sqrt{d}h +\sum _{ k=1}^{\lfloor \frac{d-1} {2} \rfloor }\binom{d}{2k + 1}R^{d-(2k+1)}\left (\sqrt{d}h\right )^{2k+1}\right ] \\ & & \leq 2V _{d}\left [d\sqrt{d}R^{d-1}h + R^{d-1}\sqrt{d}h\sum _{ k=1}^{\lfloor \frac{d-1} {2} \rfloor }\binom{d}{2k + 1} \frac{1} {\sqrt{2}^{d-(2k+1)}}\right ]\ \text{(since }\sqrt{d}h \leq R/\sqrt{2}\text{)} \\ & & \leq 2V _{d}\left [d\sqrt{d}R^{d-1}h + \left (1 + \frac{1} {\sqrt{2}}\right )^{d}\sqrt{d}R^{d-1}h\right ]\quad \text{(since }\sum _{ i=0}^{d}\binom{d}{i}x^{d-i} = (1 + x)^{d}\text{)} \\ & & \leq 2V _{d}\left (d + \left (1 + \frac{1} {\sqrt{2}}\right )^{d}\right )\sqrt{d}R^{d-1}h\,. {}\end{array}$$
(9.39)

The other term is harder to bound. The idea is to define a kind of cylinder of thickness \(2\sqrt{d}h\) that includes the set \((\partial X \cap B_{R}(\mathbf{x}))^{\sqrt{d}h}\), and then we bound its volume.

We define the set \(P:=\partial X \cap B_{R+\sqrt{2d}h}(\mathbf{x})\). It is a (d − 1)-manifold which is homeomorphic to a (d − 1)-disk since \(R + \sqrt{2d}h \leq \rho\). We thus choose a local parameterization \(\mathbf{z}: \mathcal{U} \rightarrow P\) sending any \(\mathbf{u} \in \mathcal{U}\) into P, where \(\mathcal{U} \subset \mathbb{R}^{d-1}\). Let n be the function that associates to each point of ∂X its outward unit normal vector. Let also N be the function which assigns to each \(\mathbf{u} \in \mathcal{U}\) the vector N(u) = n(z(u)), i.e. the unit vector normal to ∂X at z(u). By definition of the reach, the map \(\mathbf{Z}^{t}: \mathcal{U} \rightarrow \mathbb{R}^{d}\) such that \(\mathbf{Z}^{t}(\mathbf{u}) = \mathbf{z}(\mathbf{u}) + t\,\mathbf{N}(\mathbf{u})\) is injective for \(-\rho \leq t \leq \rho\). Let S denote the shape operator of P, defined for surfaces by \(\mathrm{S}\mathbf{v}:= -\nabla _{\mathbf{v}}\mathbf{N}\) for any tangent vector v of P. It is straightforward to check that \(\mathrm{S} \frac{\partial \mathbf{z}} {\partial u_{i}} = -\frac{\partial \mathbf{N}} {\partial u_{i}}\).

Looking now at partial derivatives of Z t, it follows that \(\frac{\partial \mathbf{Z}^{t}} {\partial u_{i}} = (\mathrm{Id} - t\mathrm{S}) \frac{\partial \mathbf{z}} {\partial u_{i}}\). The deformation of an area element of z onto Z t is thus given by det(Id − tS) and we may write:

$$\displaystyle{ \mathrm{Area}(\mathbf{Z}^{t}(\mathcal{U})) =\int _{\mathcal{U}}\det (\mathrm{Id} - t\mathrm{S})\,dA_{\mathbf{z}(\mathbf{u})}\,, }$$
(9.40)

where \(dA_{\mathbf{z}(\mathbf{u})}\) denotes the area element of P at z(u).

Now, since ∂X has positive reach, function N is differentiable almost everywhere (a.e.). Furthermore, the normal vector variation is a.e. bounded by 1∕ρ. It follows that det(Id − tS) ≤ (1 + | t | ∕ρ)d−1 almost everywhere. Injecting this inequality into Eq.(9.40) gives

$$\displaystyle\begin{array}{rcl} \mathrm{Area}(\mathbf{Z}^{t}(\mathcal{U}))& \leq & (1 + \vert t\vert /\rho )^{d-1}\mathrm{Area}(P).{}\end{array}$$
(9.41)

For any \(x \in \mathbb{R}\), 0 < xρ, the cylindric shape \(C(x):=\{\mathbf{Z}^{t}(\mathbf{u}),\mathbf{u} \in \mathcal{U},t \in \mathbb{R},-x \leq t \leq x\}\) has a volume that is bounded as follows:

$$\displaystyle\begin{array}{rcl} \mathop{\mathrm{Vol}}\nolimits (C(x))& =& \int _{-x}^{x}\mathrm{Area}(\mathbf{Z}^{t}(\mathcal{U}))dt \\ & \leq & 2\mathrm{Area}(P)\int _{0}^{x}(1 + t/\rho )^{d-1}dt \\ & =& 2\mathrm{Area}(P) \frac{\rho } {d}\left ((1 + x/\rho )^{d} - 1\right ).{}\end{array}$$
(9.42)

We look more precisely at the volume of \(C(\sqrt{d}h)\). After some simple computations, we get:

$$\displaystyle\begin{array}{rcl} \mathop{\mathrm{Vol}}\nolimits (C(\sqrt{d}h))& \leq & 2\mathrm{Area}(P) \frac{\rho } {d} \frac{\sqrt{d}h} {\rho } \left (\sum _{i=1}^{d}\binom{d}{i}\left (\frac{\sqrt{d}h} {\rho } \right )^{i-1}\right ), \\ & \leq & 4\left (\frac{4 + \sqrt{2}} {4} \right )^{d}\frac{\sqrt{d}} {d} \mathrm{Area}(P)h, {}\end{array}$$
(9.43)

since \(\sqrt{d}h \leq \frac{R} {\sqrt{2}} \leq \frac{\sqrt{2}\rho } {4}\) and using the binomial expansion of \((1 + \frac{\sqrt{2}} {4} )^{d}\).

It remains to show that (i) \((\partial X \cap B_{R}(\mathbf{x}))^{\sqrt{d}h} \subset C(\sqrt{d}h)\) and (ii) to estimate Area(P).

  (i)

    Let \(\mathbf{y} \in (\partial X \cap B_{R}(\mathbf{x}))^{\sqrt{d}h}\). There is some \(\mathbf{y}^{{\prime}} \in \partial X \cap B_{R}(\mathbf{x})\) with \(\|\mathbf{y} -\mathbf{y}^{{\prime}}\|\leq \sqrt{d}h\). As y is within the reach of ∂X, since \(\mathfrak{d}(\mathbf{y},\partial X) \leq \sqrt{d}h \leq \rho /2\), there is only one closest point \(\mathbf{y}^{{\prime\prime}}\) to y on ∂X. Let t be the distance \(\|\mathbf{y} -\mathbf{y}^{{\prime\prime}}\|\). It follows that \(t \leq \|\mathbf{y} -\mathbf{y}^{{\prime}}\|\leq \sqrt{d}h\) since y′ belongs also to ∂X. And we may write \(\mathbf{y} = \mathbf{y}^{{\prime\prime}} + t\,\mathbf{n}(\mathbf{y}^{{\prime\prime}})\).

    If \(\mathbf{y}^{{\prime}} = \mathbf{y}^{{\prime\prime}}\), then \(\mathbf{y}^{{\prime\prime}} \in P\) and y belongs to \(C(\sqrt{d}h)\) since \(t \leq \sqrt{d}h\).

    Otherwise, we prove that \(\|\mathbf{y}^{{\prime}}-\mathbf{y}^{{\prime\prime}}\|\leq \sqrt{2d}h\). Without loss of generality, assume y is outside of X. Since \(\mathbf{y}^{{\prime\prime}} \in \partial X\) and X has reach greater than ρ, there is an outside osculating ball of radius ρ at \(\mathbf{y}^{{\prime\prime}}\) [36], which contains no point of X except \(\mathbf{y}^{{\prime\prime}}\). It contains of course y. We denote by p the orthogonal projection of \(\mathbf{y}^{{\prime}}\) onto the straight line passing through \(\mathbf{y}^{{\prime\prime}}\) and pointing along the normal direction to ∂X. This straight line goes through y and through the center c of this osculating ball. We may also write \(\mathbf{p} = \mathbf{y}^{{\prime\prime}} + p\,\mathbf{n}(\mathbf{y}^{{\prime\prime}})\), where p is the signed distance between \(\mathbf{y}^{{\prime\prime}}\) and p along the normal direction. We now use Pythagoras’ theorem to get

    $$\displaystyle\begin{array}{rcl} \begin{array}{llllll} \|\mathbf{y}^{{\prime}}-\mathbf{y}^{{\prime\prime}}\|^{2} & = p^{2} +\| \mathbf{y}^{{\prime}}-\mathbf{p}\|^{2},&\|\mathbf{y}^{{\prime}}-\mathbf{p}\|^{2} + (t - p)^{2} & =\| \mathbf{y} -\mathbf{y}^{{\prime}}\|^{2},\end{array} & & {}\\ \end{array}$$

    which implies

    $$\displaystyle\begin{array}{rcl} \|\mathbf{y}^{{\prime}}-\mathbf{y}^{{\prime\prime}}\|^{2} =\| \mathbf{y} -\mathbf{y}^{{\prime}}\|^{2} + t(2p - t).& & {}\end{array}$$
    (9.44)

    Using first the fact that \(\mathbf{c} = \mathbf{y}^{{\prime\prime}} + \rho \,\mathbf{n}(\mathbf{y}^{{\prime\prime}})\) and that the point \(\mathbf{y}^{{\prime}}\) is outside the osculating ball of radius ρ, and second the fact that \(\|\mathbf{y} -\mathbf{y}^{{\prime}}\|\leq \sqrt{d}h\), we obtain

    $$\displaystyle\begin{array}{rcl} \|\mathbf{y}^{{\prime}}-\mathbf{p}\|^{2} + (\rho -p)^{2}& \geq &\rho ^{2}\ \ \text{and}\ \ \|\mathbf{y}^{{\prime}}-\mathbf{p}\|^{2} + (t - p)^{2} \leq dh^{2} {}\\ \Rightarrow dh^{2}& \geq & t^{2} + 2p(\rho -t) {}\\ \Rightarrow p& \leq & \frac{dh^{2} - t^{2}} {\rho } \qquad \quad \text{(since }t \leq \sqrt{d}h \leq \rho /2\text{)}. {}\\ \end{array}$$

    We now inject the last inequality into Eq. (9.44) to bound \(\|\mathbf{y}^{{\prime}}-\mathbf{y}^{{\prime\prime}}\|\):

    $$\displaystyle\begin{array}{rcl} \|\mathbf{y}^{{\prime}}-\mathbf{y}^{{\prime\prime}}\|^{2}& \leq & (\|\mathbf{y} -\mathbf{y}^{{\prime}}\|^{2} - t^{2}) + 2\frac{t} {\rho } (dh^{2} - t^{2}) {}\\ & \leq & 2dh^{2}.\qquad \quad \text{(since }t/\rho <1/2\text{ and }\|\mathbf{y} -\mathbf{y}^{{\prime}}\|\leq \sqrt{d}h\text{)} {}\\ \end{array}$$

    It follows that \(\|\mathbf{y}^{{\prime\prime}}-\mathbf{x}\| \leq \|\mathbf{y}^{{\prime\prime}}-\mathbf{y}^{{\prime}}\| +\| \mathbf{y}^{{\prime}}-\mathbf{x}\| \leq \sqrt{2d}h + R\), hence \(\mathbf{y}^{{\prime\prime}} \in P\), and since \(t \leq \sqrt{d}h\) we conclude that \(\mathbf{y} \in C(\sqrt{d}h)\).

  (ii)

    To estimate Area(P), we go back to its parametric definition as \(\mathrm{Area}(\mathbf{z}(\mathcal{U}))\). Since the maximal distance of an element of P to x is \(R + \sqrt{2d}h\), which is smaller than the reach, P projects injectively and orthogonally onto the tangent plane to ∂X at x. We may thus choose \(\mathcal{U}\) to be the orthogonal projection of P onto this tangent plane. It follows that we can define z as \(\forall \mathbf{u} \in \mathcal{U},\mathbf{z}(\mathbf{u}):=(u_{1},\ldots,u_{d-1},f(\mathbf{u}))\), where f is a height function. The area of P is then:

    $$\displaystyle{ \mathrm{Area}(P) =\int _{\mathcal{U}}\sqrt{1 +\sum _{i=1 }^{d-1 }\left (\frac{\partial f} {\partial u_{i}}\right )^{2}}\,du_{1}\cdots du_{d-1}\,. }$$
    (9.45)

    Let \(\varDelta:=R + \sqrt{2d}h\). Now, on the boundary of a shape with reach greater than ρ, the angle variation of the normal vector cannot exceed the angle variation on the sphere of radius ρ. It follows that we can bound above this angle variation by measuring the angle variation β between the pole and a point at distance Δ of the pole onto this sphere. One easily checks that \(\sin \beta = \frac{\varDelta } {2\rho }\). Since \(\varDelta = R + \sqrt{2d}h \leq \rho\), we get \(\beta \leq \frac{\pi } {6}\). It follows immediately that \(\left \vert \frac{\partial f} {\partial u_{i}}\right \vert\) cannot exceed the tangent of β, that is \(\left \vert \frac{\partial f} {\partial u_{i}}\right \vert \leq \tan \beta \leq \frac{\sqrt{3}} {3}\). We may now bound above all partial derivatives in Eq.(9.45) to get:

    $$\displaystyle{ \mathrm{Area}(P) \leq \sqrt{1 + \frac{d - 1} {3}} \,\mathrm{Area}(\mathcal{U}) \leq \sqrt{\frac{d} {3}}\sqrt{1 + \frac{2} {d}}\,V _{d-1}(2R)^{d-1}\,, }$$
    (9.46)

    since \(\mathcal{U}\) is included in the disk of radius Δ centered on x and Δ ≤ 2R.

We are now in a position to conclude the proof.

$$\displaystyle\begin{array}{rcl} \mathop{\mathrm{Vol}}\nolimits \left ((\partial X \cap B_{R}(\mathbf{x}))^{\sqrt{d}h}\right )& \leq & \mathop{\mathrm{Vol}}\nolimits (C(\sqrt{d}h))\qquad \qquad \text{(using (i))} {}\\ & \leq & 4\left (\frac{4 + \sqrt{2}} {4} \right )^{d}\frac{\sqrt{d}} {d} \mathrm{Area}(P)h\qquad \qquad \text{(with Eq.(9.43))} {}\\ & \leq & \frac{2\sqrt{6}} {3} \left (\frac{4 + \sqrt{2}} {2} \right )^{d}V _{ d-1}R^{d-1}h\qquad \qquad \text{(with Eq.(9.46))}. {}\\ \end{array}$$

In the last line, we also use the fact that for d ≥ 2 we have \(\sqrt{ 1 + 2/d} \leq \sqrt{2}\). Putting everything together gives the result. □

9.3.4 Errors on Volume and Moment Estimation Within a Ball of Radius R

Theorem 3 gives us the main key to bound from above the error in volume estimation, and more generally in moment estimation, within a ball around the boundary of a compact domain X. We summarize our results on moment estimation in the following theorem.

Theorem 4

Let X be a compact domain of \(\mathbb{R}^{d}\) such that the reach of ∂X is greater than ρ. Let D be any digitization process such that \(\mathtt{J}^{-}_{h}(X) \subset \mathtt{D}_{h}(X) \subset \mathtt{J}^{+}_{h}(X)\). Let x be any point of \(\mathbb{R}^{d}\) . Let R (the radius of the ball) and h (the gridstep) be some positive numbers such that \(h \leq \frac{R} {\sqrt{2d}}\) and \(2R \leq \rho\). Let \((p_{i})_{i=1\ldots d}\) be the integers defining the moment exponents, with \(0 \leq p_{i} \leq 2\), and let \(\sigma:= p_{1} + \cdots + p_{d}\), with σ ≤ 2. Then digital moments within a ball are multigrid convergent toward continuous moments as follows:

$$\displaystyle\begin{array}{rcl} & \left \vert m^{p_{1}\cdots p_{d}}(X \cap B_{R}(\mathbf{x})) -\hat{m}_{h}^{p_{1}\cdots p_{d}}\left (\mathtt{D}_{h}(X \cap B_{R}(\mathbf{x}))\right )\right \vert & \\ & \leq K_{1}(d)R^{d-1}(\|\mathbf{x}\|_{\infty } + 2R)^{\sigma }\,h + \frac{h^{2}} {12} V _{d}R^{d}, &{}\end{array}$$
(9.47)

where \(V_{d}\) is the volume of the unit d-dimensional ball. Furthermore, the term in \(h^{2}\) is only present when one \(p_{i}\) is equal to 2.

Proof

The difficult part lies in the fact that moments may not be positive. To simplify notations, let \(Y:= X \cap B_{R}(\mathbf{x})\). We must split the error in moment estimation into three sets, corresponding to the parts of Y lying in \([\mathtt{J}^{-}_{h}(Y )]_{h}\), in \([\mathtt{D}_{h}(Y )]_{h}\setminus [\mathtt{J}^{-}_{h}(Y )]_{h}\) and in \([\mathtt{J}^{+}_{h}(Y )]_{h}\setminus [\mathtt{D}_{h}(Y )]_{h}\).

$$\displaystyle\begin{array}{rcl} \left \vert m^{p_{1}\cdots p_{d} }(Y ) -\hat{m}_{h}^{p_{1}\cdots p_{d} }\left (\mathtt{D}_{h}(Y )\right )\right \vert & \leq & \sum _{\mathbf{z}\in \mathtt{J}^{+}_{h}(Y )\setminus \mathtt{D}_{h}(Y )}\left \vert m^{p_{1}\cdots p_{d} }(Y \cap Q_{h}(\mathbf{z}))\right \vert \\ & &\quad +\sum _{\mathbf{z}\in \mathtt{D}_{h}(Y )\setminus \mathtt{J}^{-}_{h}(Y )}\left \vert m^{p_{1}\cdots p_{d} }(Y \cap Q_{h}(\mathbf{z})) -\hat{m}_{h}^{p_{1}\cdots p_{d} }(\{\mathbf{z}\})\right \vert \\ & &\quad +\sum _{\mathbf{z}\in \mathtt{J}^{-}_{h}(Y )}\left \vert m^{p_{1}\cdots p_{d} }(Q_{h}(\mathbf{z})) -\hat{m}_{h}^{p_{1}\cdots p_{d} }(\{\mathbf{z}\})\right \vert {}\end{array}$$
(9.48)

The first term of this sum does not have any digital moment contribution since it lies outside the digitization of Y. The third term does not require intersecting Y with the h-cube, since we are within the inner Jordan digitization.

  1.

    We look at the third term. Lemma 7 tells us that it is equal to zero as long as \(0 \leq p_{i} \leq 1\) for \(1 \leq i \leq d\). Otherwise, if one \(p_{i}\) is equal to 2, it is straightforward to bound it as follows:

    $$\displaystyle\begin{array}{rcl} & & \sum _{\mathbf{z}\in \mathtt{J}^{-}_{h}(Y )}\left \vert m^{p_{1}\cdots p_{d} }(Q_{h}(\mathbf{z})) -\hat{m}_{h}^{p_{1}\cdots p_{d} }(\{\mathbf{z}\})\right \vert =\sum _{\mathbf{z}\in \mathtt{J}^{-}_{h}(Y )}\left \vert \frac{h^{d+2}} {12} \right \vert \qquad \qquad \text{(Lemma 7)} \\ & & = \frac{h^{2}} {12}\mathop{\mathrm{Vol}}\nolimits ([\mathtt{J}^{-}_{ h}(Y )]_{h}) \\ & & \leq \frac{h^{2}} {12}\mathop{\mathrm{Vol}}\nolimits (Y )\,, {}\end{array}$$
    (9.49)

    since the inner Jordan digitization of Y lies inside Y. It is clear then that \(\mathop{\mathrm{Vol}}\nolimits (Y ) =\mathop{ \mathrm{Vol}}\nolimits (X \cap B_{R}(\mathbf{x})) \leq \mathop{\mathrm{Vol}}\nolimits (B_{R}(\mathbf{x})) = V _{d}R^{d}\).

  2.

    We now consider the second term. For some \(\mathbf{z} \in \mathtt{D}_{h}(Y )\setminus \mathtt{J}^{-}_{h}(Y )\), we reason on the sign of each component \(z_{i}\) for i from 1 to d. We notice that, in the h-cube \(Q_{h}(\mathbf{z})\), the component \(x_{i}\) has the same sign as \(z_{i}\) except when \(z_{i} = 0\). Let \(\epsilon _{i} = 1\) when \(z_{i} \geq 0\) and \(\epsilon _{i} = -1\) otherwise. We can thus eliminate the signs and rewrite the difference below as:

    $$\displaystyle{ \left \vert m^{p_{1}\cdots p_{d} }(Y \cap Q_{h}(\mathbf{z})) -\hat{m}_{h}^{p_{1}\cdots p_{d} }(\{\mathbf{z}\})\right \vert = \left \vert \int _{Y \cap Q_{h}(\mathbf{z})}\vert x_{1}\vert ^{p_{1} }\cdots \vert x_{d}\vert ^{p_{d} }\,d\mathbf{x} - h^{d}\,\vert hz_{1}\vert ^{p_{1} }\cdots \vert hz_{d}\vert ^{p_{d} }\right \vert \,. }$$
    (9.50)

    The integral on the left is necessarily non-negative, while the term on the right is non-negative and is subtracted from it. It follows that this error can be upper bounded by the maximum of both terms.

    $$\displaystyle{ \left \vert m^{p_{1}\cdots p_{d} }(Y \cap Q_{h}(\mathbf{z})) -\hat{m}_{h}^{p_{1}\cdots p_{d} }(\{\mathbf{z}\})\right \vert \leq \max \left (\int _{Y \cap Q_{h}(\mathbf{z})}\vert x_{1}\vert ^{p_{1} }\cdots \vert x_{d}\vert ^{p_{d} }\,d\mathbf{x}\,,\ h^{d}\,\vert hz_{1}\vert ^{p_{1} }\cdots \vert hz_{d}\vert ^{p_{d} }\right )\,. }$$
    (9.51)

    Since the points involved in the left integral are all in \(B_{R}(\mathbf{x})\), the left term is easily bounded by \((\|\mathbf{x}\|_{\infty } + R)^{\sigma }\mathop{\mathrm{Vol}}\nolimits (Y \cap Q_{h}(\mathbf{z}))\). Since the point z is in \(\mathtt{D}_{h}(Y )\setminus \mathtt{J}^{-}_{h}(Y )\), we have \(\mathbf{z} \in \mathtt{J}^{+}_{h}(Y )\setminus \mathtt{J}^{-}_{h}(Y ) = \mathtt{J}^{0}_{h}(Y )\), and Lemma 3, Eq. (9.8), concludes that \((h\mathbf{z}) \in (\partial Y )^{\sqrt{d}h}\). It follows that \(\|h\mathbf{z}\|_{\infty }\leq \|\mathbf{x}\|_{\infty } + R + \sqrt{d}h \leq \|\mathbf{x}\|_{\infty } + 2R\), since \(h <R/\sqrt{2d}\). The second term is thus bounded by \((\|\mathbf{x}\|_{\infty } + 2R)^{\sigma }\mathop{\mathrm{Vol}}\nolimits (Q_{h}(\mathbf{z}))\). Since \(\mathop{\mathrm{Vol}}\nolimits (Y \cap Q_{h}(\mathbf{z})) \leq \mathop{\mathrm{Vol}}\nolimits (Q_{h}(\mathbf{z}))\), we obtain

    $$\displaystyle\begin{array}{rcl} & & \sum _{\mathbf{z}\in \mathtt{D}_{h}(Y )\setminus \mathtt{J}^{-}_{h}(Y )}\left \vert m^{p_{1}\cdots p_{d} }(Y \cap Q_{h}(\mathbf{z})) -\hat{m}_{h}^{p_{1}\cdots p_{d} }(\{\mathbf{z}\})\right \vert \\ &&\quad \leq (\|\mathbf{x}\|_{\infty } + 2R)^{\sigma }\mathop{\mathrm{Vol}}\nolimits ([\mathtt{D}_{h}(Y )\setminus \mathtt{J}^{-}_{ h}(Y )]_{h}). {}\end{array}$$
    (9.52)
  3.

    We finally look at the first term of the sum, which is easier to bound:

    $$\displaystyle\begin{array}{rcl} \sum _{\mathbf{z}\in \mathtt{J}^{+}_{h}(Y )\setminus \mathtt{D}_{h}(Y )}\left \vert m^{p_{1}\cdots p_{d} }(Y \cap Q_{h}(\mathbf{z}))\right \vert &\leq & (\|\mathbf{x}\|_{\infty } + R)^{\sigma }\sum _{\mathbf{z}\in \mathtt{J}^{+}_{h}(Y )\setminus \mathtt{D}_{h}(Y )}\mathop{ \mathrm{Vol}}\nolimits (Y \cap Q_{h}(\mathbf{z})). \\ & \leq & (\|\mathbf{x}\|_{\infty } + R)^{\sigma }\mathop{\mathrm{Vol}}\nolimits ([\mathtt{J}^{+}_{ h}(Y )\setminus \mathtt{D}_{h}(Y )]). {}\end{array}$$
    (9.53)

Since \(\mathtt{D}_{h}(Y )\setminus \mathtt{J}^{-}_{h}(Y )\) and \(\mathtt{J}^{+}_{h}(Y )\setminus \mathtt{D}_{h}(Y )\) are disjoint, we may add inequalities Eqs. (9.52) and (9.53) to get

$$\displaystyle\begin{array}{rcl} & & \sum _{\mathbf{z}\in \mathtt{D}_{h}(Y )\setminus \mathtt{J}^{-}_{h}(Y )}\left \vert m^{p_{1}\cdots p_{d} }(Y \cap Q_{h}(\mathbf{z})) -\hat{m}_{h}^{p_{1}\cdots p_{d} }(\{z\})\right \vert {}\\ & &\qquad +\sum _{\mathbf{z}\in \mathtt{J}^{+}_{h}(Y )\setminus \mathtt{D}_{h}(Y )}\left \vert m^{p_{1}\cdots p_{d} }(Y \cap Q_{h}(\mathbf{z}))\right \vert {}\\ & &\quad \leq (\|\mathbf{x}\|_{\infty } + 2R)^{\sigma }\mathop{\mathrm{Vol}}\nolimits ([\mathtt{J}^{+}_{ h}(Y )\setminus \mathtt{J}^{-}_{ h}(Y )]_{h}) {}\\ & & \quad \leq (\|\mathbf{x}\|_{\infty } + 2R)^{\sigma }\mathop{\mathrm{Vol}}\nolimits \left ((\partial Y )^{\sqrt{d}h}\right )\qquad \text{(Lemma 3, Eq.(9.8), page 299)} {}\\ & & \quad \leq (\|\mathbf{x}\|_{\infty } + 2R)^{\sigma }K_{1}(d)R^{d-1}\,h\qquad \qquad \text{(Theorem 3)} {}\\ \end{array}$$

This concludes by simple addition of the bound on the third term, i.e. Eq.(9.49). □

9.3.5 Conclusion

In this section we have determined links between continuous and digital moments of order up to 2 in arbitrary dimension. We have established several approximation error bounds for arbitrary compact domains. Furthermore, we have been able to estimate the approximation error on moments of the intersection of a smooth shape with a ball of radius R and center x. Our error bound only depends on the dimension of the space, the radius R of the ball and the norm \(\|\mathbf{x}\|_{\infty }\), and it scales linearly with the digitization gridstep h. It is worth noting that our results apply to arbitrary digitization processes, as long as they contain the inner Jordan digitization and are included in the outer Jordan digitization. In particular, this includes the Gauss digitization. We will use these results in the next sections to show the multigrid convergence of curvature estimators based on digital integral invariants.

9.4 Multigrid Convergence of Mean Curvature in 2D and 3D

We show in this section that the local mean curvature on the boundary of a digital shape can be approximated simply by intersecting the digital shape with a ball around the point of interest and counting the number of digital points within. This is related to the integral invariant results of [49] (recalled in Lemma 5, page 301), which require only the computation of the volume of the shape intersected with a ball.

9.4.1 Definition of Mean Curvature Estimators

We begin by defining a 2D digital curvature estimator and a 3D digital mean curvature estimator, whose computation requires only the enumeration of digital points within a neighborhood around the point of interest.

Definition 3 (2D Integral Digital Curvature Estimator)

For any positive radius R, we define the 2D integral digital curvature estimator \(\hat{\kappa }^{R}\) of a digital shape \(Z \subset \mathbb{Z}^{2}\) at any point \(\mathbf{x} \in \mathbb{R}^{2}\) and for a grid step h > 0 as:

$$\displaystyle{ \forall 0 <h <R,\quad \hat{\kappa }^{R}(Z,\mathbf{x},h):= \frac{3\pi } {2R} -\frac{3\widehat{\mathrm{Area}}(B_{R/h}(\mathbf{x}/h) \cap Z,h)} {R^{3}} \,. }$$
(9.54)

Definition 4 (3D Integral Digital Mean Curvature Estimator)

For any positive radius R, we define the 3D integral digital mean curvature estimator \(\hat{H}^{R}\) of a digital shape \(Z \subset \mathbb{Z}^{3}\) at any point \(\mathbf{x} \in \mathbb{R}^{3}\) and for a grid step h > 0 as:

$$\displaystyle{ \forall 0 <h <R,\quad \hat{H}^{R}(Z,\mathbf{x},h):= \frac{8} {3R} -\frac{4\widehat{\mathop{\mathrm{Vol}}\nolimits }(B_{R/h}(\mathbf{x}/h) \cap Z,h)} {\pi R^{4}} \,. }$$
(9.55)

As one can see in Fig. 9.3, these estimators place a ball of Euclidean radius R around the point of interest x and count the number of digital points of Z, scaled back into \((h\mathbb{Z})^{d}\), within this ball. A simple linear formula is then applied to this number to get a curvature approximation.

Fig. 9.3

Illustration of the 2D integral digital curvature estimator \(\hat{\kappa }^{R}\). The shape X is a disk of radius 7.5. To the left, the digitization gridstep h is 1, while h is 0.5 to the right. We choose a ball of radius R = 3 and we wish to estimate the curvature at some arbitrary point x. We count the number of digital points within the orange ball, centered at x and of radius 3. To the left we count 12 points. Hence for h = 1, the estimated curvature \(\hat{\kappa }^{3}(\mathtt{G}_{1}(X),\mathbf{x},1)\) is 3π∕(2 × 3) − 3 × 12 × 1²∕3³ ≈ 0.237. To the right we count 51 points. Hence for h = 0.5, the estimated curvature \(\hat{\kappa }^{3}(\mathtt{G}_{0.5}(X),\mathbf{x},0.5)\) is 3π∕(2 × 3) − 3 × 51 × 0.5²∕3³ ≈ 0.154. The ground truth curvature is 1∕7.5 ≈ 0.133

The 2D example displayed in Fig. 9.3 indicates that the finer the digitization step, the better the approximation. This is indeed true, at least for shapes with a smooth enough boundary.
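Definition 3 amounts to a single point-counting loop. The following sketch (illustrative Python, ours, not the chapter's implementation) estimates \(\hat{\kappa }^{R}\) on the Gauss digitization of a disk, in the spirit of Fig. 9.3:

    import math

    def kappa_R_estimator(Z, x, h, R):
        """2D integral digital curvature estimator of Eq. (9.54): count the
        points of Z within the ball of radius R/h centered at x/h, multiply
        by h^2 to get the digital area, then apply the linear formula."""
        cx, cy = x[0] / h, x[1] / h
        count = sum(1 for (i, j) in Z
                    if (i - cx) ** 2 + (j - cy) ** 2 <= (R / h) ** 2)
        return 3 * math.pi / (2 * R) - 3 * (h * h * count) / R ** 3

    rho, x, R = 7.5, (7.5, 0.0), 3.0   # disk of radius 7.5, x on its boundary
    for h in (1.0, 0.5, 0.25, 0.125):
        b = int(rho / h) + 1
        Z = {(i, j) for i in range(-b, b + 1) for j in range(-b, b + 1)
             if (h * i) ** 2 + (h * j) ** 2 <= rho ** 2}  # G_h(X)
        print(h, kappa_R_estimator(Z, x, h, R))  # near 1/7.5, up to the O(R) bias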

9.4.2 Convergence at Points of ∂X (Weak Formulation)

In this section, we show that the curvature estimator \(\hat{\kappa }^{R}\) (resp. the mean curvature estimator \(\hat{H}^{R}\)) converges to the expected curvature (resp. mean curvature) at points x belonging to the boundary of a compact shape X, as long as X has positive reach. A preliminary version of this theorem, requiring X to be convex with a \(C^{3}\)-smooth boundary, was published in [14].

Theorem 5 (Convergence of Curvature Estimator \(\hat{\boldsymbol{\kappa }}^{\boldsymbol{R}}\) Along \(\boldsymbol{\partial X}\))

Let X be a compact domain of \(\mathbb{R}^{2}\) such that its boundary ∂X is \(C^{3}\)-smooth and has reach greater than ρ.

Then the curvature estimator \(\hat{\kappa }^{R}\) at any point x of ∂X is multigrid convergent to the curvature κ(X, x) of X at point x for the Gauss digitization process, with convergence speed at least \(O(h^{\frac{1} {3} })\) when \(R =\varTheta (h^{\frac{1} {3} })\) and R < ρ∕2. More precisely, we have

$$\displaystyle{ \forall 0 <h \leq \frac{R} {2},\quad \left \vert \hat{\kappa }^{R}(\mathtt{G}_{ h}(X),\mathbf{x},h) -\kappa (X,\mathbf{x})\right \vert \leq O\left (h^{\frac{1} {3} }\right ). }$$
(9.56)

Proof

Note that the boundary is smooth enough so that integral invariants do have a Taylor expansion (and Lemma 5 applies). Furthermore, the boundary being compact and \(C^{3}\)-smooth, it necessarily has positive reach. Since the Gauss digitization lies in the Jordan strip (Lemma 1, page 298), we are in the hypotheses of Theorem 4. We bound the curvature approximation error as follows:

$$\displaystyle\begin{array}{rcl} & & \vert \hat{\kappa }^{R}(\mathtt{G}_{ h}(X),\mathbf{x},h) -\kappa (X,\mathbf{x})\vert \\ & &\quad \leq \left \vert \hat{\kappa }^{R}(\mathtt{G}_{ h}(X),\mathbf{x},h) -\tilde{\kappa }^{R}(X,\mathbf{x})\right \vert + O(R)\qquad \qquad \qquad \qquad \text{(from Eq.(9.15), page 301)} \\ & & \quad = \left \vert \frac{3A_{R}(\mathbf{x})} {R^{3}} -\frac{3\widehat{\mathrm{Area}}(B_{R/h}(\mathbf{x}/h) \cap \mathtt{G}_{h}(X),h)} {R^{3}} \right \vert + O(R)\qquad \text{(from Eq.(9.54) and (9.14))} \\ & & \quad = \frac{3} {R^{3}}\left \vert m^{00}(X \cap B_{ R}(\mathbf{x})) -\hat{m}_{h}^{00}\left (\mathtt{G}_{ h}(X \cap B_{R}(\mathbf{x}))\right )\right \vert + O(R)\,. {}\end{array}$$
(9.57)

The last equality follows from the definitions of \(A_{R}\) and \(\widehat{\mathrm{Area}}\) expressed as 00-moments, and also from the fact that \(\mathtt{G}_{h}(X \cap B_{R}(\mathbf{x})) = B_{R/h}(\mathbf{x}/h) \cap \mathtt{G}_{h}(X)\) (this is easily checked for the Gauss digitization). Theorem 4 then implies

$$\displaystyle\begin{array}{rcl} \vert \hat{\kappa }^{R}(\mathtt{G}_{ h}(X),\mathbf{x},h) -\kappa (X,\mathbf{x})\vert \leq \frac{3K_{1}(2)} {R^{2}} h + O(R)\,.& &{}\end{array}$$
(9.58)

The error is the sum of two terms, on which R has opposite effects: the right term requires a radius R tending to zero, while the left term is minimized by a large radius. If R is chosen as some function \(kh^{\alpha }\), where k is a constant, then the error is asymptotically minimized for \(\alpha = \frac{1} {3}\). □
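For the record, the choice \(\alpha = \frac{1} {3}\) follows from a standard balancing argument, which we sketch here: writing the right-hand side of Eq. (9.58) as \(f(R) = C_{1}h/R^{2} + C_{2}R\) for some positive constants \(C_{1}\) and \(C_{2}\), we get

$$\displaystyle{ f^{{\prime}}(R) = -\frac{2C_{1}h} {R^{3}} + C_{2} = 0\quad \Longleftrightarrow \quad R = \left (\frac{2C_{1}} {C_{2}} \,h\right )^{\frac{1} {3} },\quad \text{whence}\quad f(R) =\varTheta \left (h^{\frac{1} {3} }\right )\,. }$$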

We have a similar result for the digital mean curvature estimator on 3D shapes.

Theorem 6 (Convergence of Mean Curvature Estimator \(\hat{\boldsymbol{H}}^{\boldsymbol{R}}\) Along \(\boldsymbol{\partial X}\))

Let X be a compact domain of \(\mathbb{R}^{3}\) such that its boundary ∂X is \(C^{3}\)-smooth and has reach greater than ρ.

Then the mean curvature estimator \(\hat{H}^{R}\) at any point x of ∂X is multigrid convergent to the mean curvature H(X, x) of X at point x for the Gauss digitization process, with convergence speed at least \(O(h^{\frac{1} {3} })\) when \(R =\varTheta (h^{\frac{1} {3} })\) and R < ρ∕2. More precisely, we have

$$\displaystyle{ \forall 0 <h \leq \frac{R} {\sqrt{6}},\quad \left \vert \hat{H}^{R}(\mathtt{G}_{ h}(X),\mathbf{x},h) - H(X,\mathbf{x})\right \vert \leq O\left (h^{\frac{1} {3} }\right ). }$$
(9.59)

Proof

The proof follows exactly the same steps as the proof of Theorem 5, since on the one hand integral invariants also have a Taylor expansion in 3D, and on the other hand Theorem 4 applies in arbitrary dimension. We bound the mean curvature approximation error as follows:

$$\displaystyle\begin{array}{rcl} & & \vert \hat{H}^{R}(\mathtt{G}_{ h}(X),\mathbf{x},h) - H(X,\mathbf{x})\vert \\ & &\quad \leq \left \vert \hat{H}^{R}(\mathtt{G}_{ h}(X),\mathbf{x},h) -\tilde{H}^{R}(X,\mathbf{x})\right \vert + O(R)\qquad \qquad \qquad \qquad \text{(from Eq.(9.15), page 301)} \\ & & \quad = \left \vert \frac{4V _{R}(\mathbf{x})} {\pi R^{4}} -\frac{4\widehat{\mathop{\mathrm{Vol}}\nolimits }(B_{R/h}(\mathbf{x}/h) \cap \mathtt{G}_{h}(X),h)} {\pi R^{4}} \right \vert + O(R)\qquad \qquad \text{(from Eq.(9.55) and (9.14))} \\ & & \quad = \frac{4} {\pi R^{4}}\left \vert m^{000}(X \cap B_{ R}(\mathbf{x})) -\hat{m}_{h}^{000}\left (\mathtt{G}_{ h}(X \cap B_{R}(\mathbf{x}))\right )\right \vert + O(R)\,. {}\end{array}$$
(9.60)

The last equality follows from the definitions of \(V_{R}\) and \(\widehat{\mathop{\mathrm{Vol}}\nolimits }\) expressed as 000-moments, and also from the fact that \(\mathtt{G}_{h}(X \cap B_{R}(\mathbf{x})) = B_{R/h}(\mathbf{x}/h) \cap \mathtt{G}_{h}(X)\). Theorem 4 then implies

$$\displaystyle\begin{array}{rcl} \left \vert \hat{H}^{R}(\mathtt{G}_{ h}(X),\mathbf{x},h) - H(X,\mathbf{x})\right \vert \leq \frac{4K_{1}(3)} {\pi R^{2}} h + O(R)\,.& &{}\end{array}$$
(9.61)

As in 2D, the error is the sum of two terms on which R has opposite effects. We also obtain that the error is asymptotically minimized for \(R = kh^{\frac{1} {3} }\), with k some constant. □

9.4.3 Multigrid Convergence for Smooth Enough Shapes

Theorems 5 and 6 are not multigrid convergence theorems, since their convergence results are only valid at points of ∂X. However, the exact location of ∂X is generally unknown, and only approximate digital data is available. To achieve multigrid convergence theorems, we have to take into account the possible error on the position at which the curvature is estimated.

We therefore examine the perturbation of the moments when they are evaluated at a shifted position x + t.

Lemma 9

For any point \(\mathbf{x}\), a positive number R, and any vector \(\mathbf{t}\) with norm \(t:= \|\mathbf{t}\|_{2} \leq R\), we have

$$\displaystyle\begin{array}{rcl} \left \vert \mathop{\mathrm{Vol}}\nolimits (B_{R}(\mathbf{x} + \mathbf{t})) -\mathop{\mathrm{Vol}}\nolimits (B_{R}(\mathbf{x}))\right \vert \leq \frac{3^{d}} {2^{d}}V _{d}R^{d-1}\,t\,.& &{}\end{array}$$
(9.62)

Proof

We simply bound this difference of volumes by the volume of the difference of two balls with the same center, as illustrated on Fig. 9.4. More precisely, we have:

$$\displaystyle\begin{array}{rcl} \left \vert \mathop{\mathrm{Vol}}\nolimits (B_{R}(\mathbf{x} + \mathbf{t})) -\mathop{\mathrm{Vol}}\nolimits (B_{R}(\mathbf{x}))\right \vert & =& \left \vert \mathop{\mathrm{Vol}}\nolimits (B_{R}(\mathbf{x} + \mathbf{t})\varDelta B_{R}(\mathbf{x}))\right \vert {}\\ &\leq & \left \vert \mathop{\mathrm{Vol}}\nolimits \left (B_{R+ \frac{t} {2} }(\mathbf{q})\setminus B_{R-\frac{t} {2} }(\mathbf{q})\right )\right \vert, {}\\ \end{array}$$
Fig. 9.4

Left: illustration of Lemma 9 for bounding volumes. The symmetric difference of two balls is included in the shell whose thickness equals the distance between the centers of the two balls and which is centered at their midpoint. Right: illustration of Lemma 11 for bounding moments. The symmetric difference of two balls is included in the shell whose thickness equals twice the distance between the centers of the two balls and which is centered on one of the balls

where q is the midpoint between x and x + t. Since \(V_{d}\) is the volume of the unit d-dimensional ball, it follows that:

$$\displaystyle\begin{array}{rcl} \left \vert \mathop{\mathrm{Vol}}\nolimits (B_{R}(\mathbf{x} + \mathbf{t})) -\mathop{\mathrm{Vol}}\nolimits (B_{R}(\mathbf{x}))\right \vert & \leq & V _{d}\left ((R + \frac{t} {2})^{d} - (R - \frac{t} {2})^{d}\right ) {}\\ & \leq & \frac{3^{d}} {2^{d}}V _{d}R^{d-1}\,t\,. {}\\ \end{array}$$
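The last inequality can be checked by expanding the binomials: only odd powers of \(\frac{t} {2}\) remain, and since t ≤ R we have \((\frac{t} {2})^{k} \leq \frac{t} {2}(\frac{R} {2})^{k-1}\), so that

$$\displaystyle{ \left (R + \frac{t} {2}\right )^{d} -\left (R -\frac{t} {2}\right )^{d} = 2\sum _{k\ \mathrm{odd}}\binom{d}{k}R^{d-k}\left (\frac{t} {2}\right )^{k} \leq R^{d-1}t\left (\left (\frac{3} {2}\right )^{d} -\left (\frac{1} {2}\right )^{d}\right ) \leq \frac{3^{d}} {2^{d}}R^{d-1}t\,. }$$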

We may now prove the uniform multigrid convergence of both the 2D curvature estimator \(\hat{\kappa }^{R}\) and the 3D mean curvature estimator \(\hat{H}^{R}\) towards respectively the curvature κ and the mean curvature H for the Gauss digitization process and smooth enough shapes.

Theorem 7 (Multigrid Convergence of 2D Curvature Estimator \(\hat{\boldsymbol{\kappa }}^{\boldsymbol{R}}\))

Let X be a compact domain of \(\mathbb{R}^{2}\) such that its boundary ∂X is \(C^{3}\)-smooth and has reach greater than ρ.

Then the curvature estimator \(\hat{\kappa }^{R}\) is multigrid convergent to the curvature κ for the Gauss digitization process on such shapes X, with convergence speed at least \(O(h^{\frac{1} {3} })\) when \(R =\varTheta (h^{\frac{1} {3} })\) and R < ρ∕2. More precisely, we have

$$\displaystyle\begin{array}{rcl} & & \forall 0 <h \leq \frac{R} {2},\,\,\forall \mathbf{x} \in \partial X,\,\,\forall \hat{\mathbf{x}} \in \partial [\mathtt{G}_{h}(X)]_{h}\text{ with }\|\hat{\mathbf{x}} -\mathbf{x}\|_{\infty }\leq h, \\ & & \qquad \qquad \qquad \left \vert \hat{\kappa }^{R}(\mathtt{G}_{ h}(X),\hat{\mathbf{x}},h) -\kappa (X,\mathbf{x})\right \vert \leq O\left (h^{\frac{1} {3} }\right )\,. {}\end{array}$$
(9.63)

More precisely the bound is no greater than \((\frac{27} {4} \pi \sqrt{2} + 3K_{1}(2))R^{-2}h + O(R)\) .

Proof

Note that, from Theorem 1 (page 299), any point of ∂X is close to a point of \(\partial [\mathtt{G}_{h}(X)]_{h}\) (at distance less than \(h/\sqrt{d}\)) and conversely. We proceed similarly as in Theorem 5, using Eq. (9.15) to get:

$$\displaystyle\begin{array}{rcl} & & \vert \hat{\kappa }^{R}(\mathtt{G}_{ h}(X),\hat{\mathbf{x}},h) -\kappa (X,\mathbf{x})\vert {}\\ & &\quad \leq \left \vert \hat{\kappa }^{R}(\mathtt{G}_{ h}(X),\hat{\mathbf{x}},h) -\tilde{\kappa }^{R}(X,\hat{\mathbf{x}})\right \vert + \left \vert \tilde{\kappa }^{R}(X,\hat{\mathbf{x}}) -\tilde{\kappa }^{R}(X,\mathbf{x})\right \vert + O(R). {}\\ \end{array}$$

The first error term is bounded by \(\frac{3K_{1}(2)} {R^{2}} h\) (see proof of Theorem 5). We take care of the second error term, letting \(\mathbf{t}:=\hat{ \mathbf{x}} -\mathbf{x}\) and \(t:= \|\mathbf{t}\|_{2}\):

$$\displaystyle\begin{array}{rcl} \left \vert \tilde{\kappa }^{R}(X,\hat{\mathbf{x}}) -\tilde{\kappa }^{R}(X,\mathbf{x})\right \vert & =& \frac{3} {R^{3}}\left \vert A_{R}(\hat{\mathbf{x}}) - A_{R}(\mathbf{x})\right \vert {}\\ & =& \frac{3} {R^{3}}\left \vert \mathop{\mathrm{Vol}}\nolimits (B_{R}(\mathbf{x} + \mathbf{t}) \cap X) -\mathop{\mathrm{Vol}}\nolimits (B_{R}(\mathbf{x}) \cap X)\right \vert {}\\ &\leq & \frac{3} {R^{3}}\left \vert \mathop{\mathrm{Vol}}\nolimits (B_{R}(\mathbf{x} + \mathbf{t})) -\mathop{\mathrm{Vol}}\nolimits (B_{R}(\mathbf{x}))\right \vert {}\\ &\leq & \frac{27} {4} \pi R^{-2}\,t\,.\quad \text{(using Lemma 9)} {}\\ \end{array}$$

The last bound is valid since \(t =\|\hat{ \mathbf{x}} -\mathbf{x}\|_{2} \leq \sqrt{2}h\) by the fact that \(\|\hat{\mathbf{x}} -\mathbf{x}\|_{\infty }\leq h\). By hypothesis, \(h \leq R/2\), so t < R.

We have just shown \(\left \vert \tilde{\kappa }^{R}(X,\hat{\mathbf{x}}) -\tilde{\kappa }^{R}(X,\mathbf{x})\right \vert \leq \frac{27} {4} \pi \sqrt{2}R^{-2}h\), which concludes the proof. □

We have a similar result for the 3D mean curvature estimator, whose proof follows the same arguments.

Theorem 8 (Multigrid Convergence of 3D Mean Curvature Estimator \(\hat{\boldsymbol{H}}^{\boldsymbol{R}}\))

Let X be a compact domain of \(\mathbb{R}^{3}\) such that its boundary ∂X is \(C^{3}\)-smooth and has reach greater than ρ.

Then the 3D mean curvature estimator \(\hat{H}^{R}\) is multigrid convergent to the mean curvature H for the Gauss digitization process on such shapes X, with convergence speed at least \(O(h^{\frac{1} {3} })\) when \(R =\varTheta (h^{\frac{1} {3} })\) and R < ρ∕2. More precisely, we have

$$\displaystyle\begin{array}{rcl} & & \forall 0 <h \leq \frac{R} {\sqrt{6}},\,\,\forall \mathbf{x} \in \partial X,\,\,\forall \hat{\mathbf{x}} \in \partial [\mathtt{G}_{h}(X)]_{h} \text{{with}} \|\hat{\mathbf{x}} -\mathbf{x}\|_{\infty }\leq h, \\ & & \qquad \qquad \qquad \quad \left \vert \hat{H}^{R}(\mathtt{G}_{ h}(X),\hat{\mathbf{x}},h) - H(X,\mathbf{x})\right \vert \leq O\left (h^{\frac{1} {3} }\right ). {}\end{array}$$
(9.64)

More precisely the bound is no greater than \((18\sqrt{3} + 4K_{1}(3)/\pi )R^{-2}h + O(R)\) .

9.5 Multigrid Convergence of Curvature Tensor in 3D

We show in this section how we can approximate the whole curvature tensor by simply intersecting a ball with the digital input shape. However, we do not simply count the number of digital points within it; we rather compute the digital covariance matrix. The digital covariance matrix is shown to be close to the continuous covariance matrix, which is known to contain curvature information. Although most of the results presented here could be naturally extended to arbitrary dimension d, we prefer to expose them in 3D.

9.5.1 Digital Covariance Matrix and Digital Principal Curvature Estimators

Similarly to the covariance matrix, the digital covariance matrix is defined through zeroth, first and second order digital moments.

Definition 5 (Digital Covariance Matrix)

For any digital subset \(Z \subset \mathbb{Z}^{3}\), its digital covariance matrix \(\mathcal{V }_{h}\) at step h is

$$\displaystyle\begin{array}{rcl} \mathcal{V }_{h}(Z)&:=& \left [\begin{array}{ccc} \hat{m}_{h}^{200}(Z)&\hat{m}_{h}^{110}(Z)&\hat{m}_{h}^{101}(Z) \\ \hat{m}_{h}^{110}(Z)&\hat{m}_{h}^{020}(Z)&\hat{m}_{h}^{011}(Z) \\ \hat{m}_{h}^{101}(Z)&\hat{m}_{h}^{011}(Z)&\hat{m}_{h}^{002}(Z) \end{array} \right ] \\ & & - \frac{1} {\hat{m}_{h}^{000}(Z)}\left [\begin{array}{c} \hat{m}_{h}^{100}(Z) \\ \hat{m}_{h}^{010}(Z) \\ \hat{m}_{h}^{001}(Z)\\ \end{array} \right ] \otimes \left [\begin{array}{c} \hat{m}_{h}^{100}(Z) \\ \hat{m}_{h}^{010}(Z) \\ \hat{m}_{h}^{001}(Z)\\ \end{array} \right ]^{T}.{}\end{array}$$
(9.65)

Following the truncated Taylor expansion of Lemma 6, we define estimators of principal curvatures from the diagonalization of the digital covariance matrix.

Definition 6

Let \(Z \subset \mathbb{Z}^{3}\) be a digital shape and h > 0 a grid step. For \(R \geq h\), we define the integral principal curvature estimators \(\hat{\kappa }_{1}^{R}\) and \(\hat{\kappa }_{2}^{R}\) of Z at point \(\mathbf{y} \in \mathbb{R}^{3}\) and step h, their respective integral principal direction estimators \(\hat{\mathbf{w}}_{1}^{R}\) and \(\hat{\mathbf{w}}_{2}^{R}\), and the integral normal estimator \(\hat{\mathbf{n}}^{R}\) as

$$\displaystyle\begin{array}{rcl} \begin{array}{llllll} \hat{\kappa }_{1}^{R}(Z,\mathbf{y},h)&:= \frac{6} {\pi R^{6}} (\hat{\lambda }_{2} - 3\hat{\lambda }_{1}) + \frac{8} {5R},&\hat{\mathbf{w}}_{1}^{R}(Z,\mathbf{y},h)&:=\hat{\nu }_{ 1}\end{array} & &{}\end{array}$$
(9.66)
$$\displaystyle\begin{array}{rcl} \begin{array}{lllll} \hat{\kappa }_{2}^{R}(Z,\mathbf{y},h)&:= \frac{6} {\pi R^{6}} (\hat{\lambda }_{1} - 3\hat{\lambda }_{2}) + \frac{8} {5R},&\hat{\mathbf{w}}_{2}^{R}(Z,\mathbf{y},h)&:=\hat{\nu }_{ 2}\end{array} & &{}\end{array}$$
(9.67)
$$\displaystyle\begin{array}{rcl} \begin{array}{lllll} &&\hat{\mathbf{n}}^{R}(Z,\mathbf{y},h)&:=\hat{\nu }_{3}\,,\end{array} & &{}\end{array}$$
(9.68)

where \(\hat{\lambda }_{1} \geq \hat{\lambda }_{2} \geq \hat{\lambda }_{3}\) are the eigenvalues of \(\mathcal{V }_{h}(B_{R/h}(\mathbf{y}/h) \cap Z)\), and \(\hat{\nu }_{1},\hat{\nu }_{2},\hat{\nu }_{3}\) are their corresponding eigenvectors.
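As an illustration, Definitions 5 and 6 translate almost directly into code. The following Python sketch is a simplified stand-in for the DGtal implementation (function names are ours); it assumes the digital moment convention \(\hat{m}_{h}^{pqs}(Z) = h^{3+p+q+s}\sum _{(x,y,z)\in Z}x^{p}y^{q}z^{s}\) of Sect. 9.3:

```python
import numpy as np

def digital_covariance(Z, h):
    """Digital covariance matrix of Definition 5, from digital moments of
    order 0, 1 and 2 (the moment of order p+q+s is scaled by h**(3+p+q+s))."""
    P = np.asarray(Z, dtype=float)               # digital points, shape (n, 3)
    m000 = h ** 3 * len(P)
    m1 = h ** 4 * P.sum(axis=0)                  # (m100, m010, m001)
    M2 = h ** 5 * (P.T @ P)                      # matrix of all order-2 moments
    return M2 - np.outer(m1, m1) / m000

def integral_curvature_estimators(Z, y, h, R):
    """Principal curvature and normal estimators of Definition 6 at point y."""
    P = np.asarray(Z, dtype=float)
    ball = P[np.sum((P - np.asarray(y) / h) ** 2, axis=1) <= (R / h) ** 2]
    lam, nu = np.linalg.eigh(digital_covariance(ball, h))
    lam, nu = lam[::-1], nu[:, ::-1]             # eigenpairs in decreasing order
    k1 = 6 / (np.pi * R ** 6) * (lam[1] - 3 * lam[0]) + 8 / (5 * R)
    k2 = 6 / (np.pi * R ** 6) * (lam[0] - 3 * lam[1]) + 8 / (5 * R)
    return k1, k2, nu[:, 2]                      # kappa1_hat, kappa2_hat, n_hat
```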

In the following sections, we show that such estimators are indeed multigrid convergent to their respective geometric quantity. Proofs rely on the convergence of digital moments, on the stability of moments with respect to small displacements, and on matrix perturbation theory. Most of the work presented here has been published in [15].

9.5.2 Some Properties of Covariance Matrix and Moments

Before proving the multigrid convergence, we need to establish a few preliminary lemmas. First of all, we notice easily that covariance matrices are translation invariant.

Lemma 10

Translation invariance for covariance matrices:

  • \(\forall Y \subset \mathbb{R}^{3}\) , \(\forall \mathbf{v} \in \mathbb{R}^{3}\) , \(\mathcal{V }(Y + \mathbf{v}) = \mathcal{V }(Y )\) .

  • \(\forall Z \subset \mathbb{Z}^{3}\) , \(\forall \mathbf{v} \in \mathbb{Z}^{3}\) , \(\forall h> 0\) , \(\mathcal{V }_{h}(Z + \mathbf{v}) = \mathcal{V }_{h}(Z)\) .

Then, we must examine how moments are perturbed by a positioning error of the ball.

Lemma 11

For any point \(\mathbf{x}\), a positive number R, and any vector \(\mathbf{t}\) with norm \(t:= \|\mathbf{t}\|_{2} \leq R\), we have

$$\displaystyle\begin{array}{rcl} \left \vert m^{000}(B_{ R}(\mathbf{x} + \mathbf{t})) - m^{000}(B_{ R}(\mathbf{x}))\right \vert & \leq & O(tR^{2}),{}\end{array}$$
(9.69)
$$\displaystyle\begin{array}{rcl} \left \vert m^{100}(B_{ R}(\mathbf{x} + \mathbf{t})) - m^{100}(B_{ R}(\mathbf{x}))\right \vert & \leq & O(tR^{3}) + O(\|\mathbf{x}\|_{ \infty }tR^{2}),{}\end{array}$$
(9.70)
$$\displaystyle\begin{array}{rcl} \left \vert m^{200}(B_{ R}(\mathbf{x} + \mathbf{t})) - m^{200}(B_{ R}(\mathbf{x}))\right \vert & \leq & O(tR^{4}) + O(\|\mathbf{x}\|_{ \infty }tR^{3}) + O(\|\mathbf{x}\|_{ \infty }^{2}tR^{2}).{}\end{array}$$
(9.71)

Other moments of the same order obey the same respective bounds. Furthermore, these relations remain true if both \(B_{R}(\mathbf{x} + \mathbf{t})\) and \(B_{R}(\mathbf{x})\) are intersected with the same set X.

Proof

We notice first that Eq. (9.69) has already been established in Lemma 9. For higher order moments \(m^{pqs}\), we will use the following fact:

$$\displaystyle{ \emptyset \neq Y _{1} \subset Y _{2} \subset \mathbb{R}^{3}\Rightarrow \sup _{ Y \subset Y _{1}}\left \vert m^{pqs}(Y )\right \vert \leq \sup _{ Y \subset Y _{2}}\left \vert m^{pqs}(Y )\right \vert \,. }$$
(9.72)

As in Lemma 9, we notice that the symmetric difference of the two balls is included in the difference of two balls with the same center. This is illustrated in Fig. 9.4, right.

$$\displaystyle\begin{array}{rcl} \left \vert m^{pqs}(B_{ R}(\mathbf{x} + \mathbf{t})) - m^{pqs}(B_{ R}(\mathbf{x}))\right \vert & \leq & \sup _{Y \subset B_{R}(\mathbf{x}+\mathbf{t})\varDelta B_{R}(\mathbf{x})}\left \vert m^{pqs}(Y )\right \vert, {}\\ & \leq & \sup _{Y \subset B_{R+t}(\mathbf{x})\setminus B_{R-t}(\mathbf{x})}\left \vert m^{pqs}(Y )\right \vert. {}\\ \end{array}$$

Hereafter we denote \(S_{R,t}(\mathbf{x}):= B_{R+t}(\mathbf{x})\setminus B_{R-t}(\mathbf{x})\).

For first order moments, we translate the shape to the origin, then we use the previous result plus the fact that the centered 100-moment is maximized for the x-positive half-ball:

$$\displaystyle\begin{array}{rcl} \sup _{Y \subset S_{R,t}(\mathbf{x})}\vert m^{100}(Y )\vert & \leq & \sup _{ Y \subset S_{R,t}(\mathbf{0})}\vert m^{100}(Y )\vert + \vert x_{ 1}\vert \cdot m^{000}(Y ) \\ & =& m^{100}\left (B_{ R+t}^{+}(\mathbf{0})\setminus B_{ R-t}^{+}(\mathbf{0})\right ) + O(\vert x_{ 1}\vert tR^{2}) \\ & =& 2\pi (R^{3}t + Rt^{3}) + O(\|\mathbf{x}\|_{ \infty }tR^{2}) \\ & =& O(tR^{3}) + O(\|\mathbf{x}\|_{ \infty }tR^{2}). {}\end{array}$$
(9.73)

For second order moments, we translate the shape to the origin, then we use the two previous results plus the fact that the 200-moment is maximized for the ball:

$$\displaystyle\begin{array}{rcl} \sup _{Y \subset S_{R,t}(\mathbf{x})}\vert m^{200}(Y )\vert & \leq & \sup _{ Y \subset S_{R,t}(\mathbf{0})}\vert m^{200}(Y )\vert + 2\vert x_{ 1}\vert \cdot \vert m^{100}(Y )\vert + x_{ 1}^{2} \cdot m^{000}(Y ) \\ & =& m^{200}(S_{ R,t}(\mathbf{0})) +\| \mathbf{x}\|_{\infty }(O(tR^{3}) + O(\|\mathbf{x}\|_{ \infty }tR^{2})) +\| \mathbf{x}\|_{ \infty }^{2}O(tR^{2}) \\ & =& O(tR^{4}) + O(\|\mathbf{x}\|_{ \infty }tR^{3}) + O(\|\mathbf{x}\|_{ \infty }^{2}tR^{2}) {}\end{array}$$
(9.74)

Other moments are proved similarly. □

9.5.3 Multigrid Convergence of Digital Covariance Matrix

With Lemma 10 and Lemma 11, we can prove the multigrid convergence of the digital covariance matrix. Theorem 9 establishes its simple convergence (covariance matrices are computed at the same point y) and then Theorem 10 establishes its multigrid convergence.

Theorem 9 (Convergence of Digital Covariance Matrix)

Let X be a compact domain of \(\mathbb{R}^{3}\) such that its boundary ∂X has reach greater than ρ.

Then for any grid step h and radius R with \(0 <h \leq R/\sqrt{6} <\rho /(2\sqrt{3})\) , for arbitrary \(\mathbf{y} \in \mathbb{R}^{3}\) with d(y, ∂X) ≤ h, we have:

$$\displaystyle\begin{array}{rcl} \left \|\mathcal{V }_{h}(B_{R/h}(\mathbf{y}/h) \cap \mathtt{G}_{h}(X)) -\mathcal{V }(B_{R}(\mathbf{y}) \cap X)\right \| \leq O(R^{4}h).& & {}\\ \end{array}$$

The constants hidden in the big O do not depend on the shape size or geometry. ∥⋅ ∥ denotes the spectral norm on matrices.

Proof

To simplify expressions, we set \(A:= B_{R}(\mathbf{y}) \cap X\) and \(A_{h}:= B_{R/h}(\mathbf{y}/h) \cap \mathtt{G}_{h}(X) = \mathtt{G}_{h}(B_{R}(\mathbf{y}) \cap X)\) (the last equality is valid for the Gauss digitization). We begin by translating the sets A and \(A_{h}\) towards the origin w.r.t. y. We must use a vector that takes into account the digitization, hence we shift \(A_{h}\) by the vector \(\left [\frac{\mathbf{y}} {h}\right ]\), the integer vector closest to \(\frac{\mathbf{y}} {h}\), and we shift A by the vector \(h\left [\frac{\mathbf{y}} {h}\right ]\). We further set \(\tilde{A}_{h}:=A_{h} -\left [\frac{\mathbf{y}} {h}\right ]\) and \(\tilde{A}:=A - h\left [\frac{\mathbf{y}} {h}\right ]\). With these definitions and the translation invariance of digital covariance matrices (Lemma 10), we get

$$\displaystyle\begin{array}{rcl} \mathcal{V }_{h}(B_{R/h}(\mathbf{y}/h) \cap \mathtt{G}_{h}(X))& =& \mathcal{V }_{h}(A_{h}) = \mathcal{V }_{h}\left (\tilde{A}_{h} + \left [\frac{\mathbf{y}} {h}\right ]\right ) = \mathcal{V }_{h}(\tilde{A}_{h}) \\ & =& \mathcal{V }_{h}(\mathtt{G}_{h}(\tilde{A})). {}\end{array}$$
(9.75)

The last equality comes from the fact that \(\tilde{A}_{h} = \mathtt{G}_{h}(A) -\left [\frac{\mathbf{y}} {h}\right ] = \mathtt{G}_{h}(A - h\left [\frac{\mathbf{y}} {h}\right ]) = \mathtt{G}_{h}(\tilde{A})\). We use also the translation invariance of the (continuous) covariance matrix to get

$$\displaystyle\begin{array}{rcl} \mathcal{V }(B_{R}(\mathbf{y}) \cap X)& =& \mathcal{V }(A) = \mathcal{V }\left (\tilde{A} + h\left [\frac{\mathbf{y}} {h}\right ]\right ) = \mathcal{V }(\tilde{A}).{}\end{array}$$
(9.76)

With Eq.(9.75) and Eq.(9.76), we rewrite the covariance estimation error as:

$$\displaystyle\begin{array}{rcl} \|\mathcal{V }_{h}(B_{R/h}(\mathbf{y}/h) \cap \mathtt{G}_{h}(X)) -\mathcal{V }(B_{R}(\mathbf{y}) \cap X)\| =\| \mathcal{V }_{h}(\mathtt{G}_{h}(\tilde{A})) -\mathcal{V }(\tilde{A})\|.& &{}\end{array}$$
(9.77)

We take a closer look at \(\mathcal{V }_{h}(\mathtt{G}_{h}(\tilde{A}))\):

$$\displaystyle\begin{array}{rcl} \mathcal{V }_{h}(\mathtt{G}_{h}(\tilde{A}))& =& \left [\begin{array}{ccc} \hat{m}_{h}^{200}(\tilde{A}_{h})&&\\ &\ddots &\\ \end{array} \right ] - \frac{1} {\hat{m}_{h}^{000}(\tilde{A}_{h})}\left [\begin{array}{c} \hat{m}_{h}^{100}(\tilde{A}_{h})\\ \vdots \end{array} \right ] \otimes \left [\begin{array}{c} \hat{m}_{h}^{100}(\tilde{A}_{h})\\ \vdots \end{array} \right ]^{T}.{}\\ \end{array}$$

We may rewrite the factor in front of the tensorial product using Theorem 4:

$$\displaystyle\begin{array}{rcl} \frac{1} {\hat{m}_{h}^{000}(\tilde{A}_{h})} = \frac{1} {m^{000}(\tilde{A})} + \left ( \frac{1} {m^{000}(\tilde{A})}\right )^{2}O(R^{2}h).& & {}\\ \end{array}$$

Since \(\tilde{A}\) is some translation of \(X \cap B_{R}(\mathbf{y})\), it has the same volume. This volume is some \(\varTheta (R^{3})\) since y is at most at distance h from ∂X and \(h \leq R/\sqrt{6}\). We get:

$$\displaystyle\begin{array}{rcl} \frac{1} {\hat{m}_{h}^{000}(\tilde{A}_{h})} = \frac{1} {m^{000}(\tilde{A})} + O(R^{-4}h).& &{}\end{array}$$
(9.78)

We now bound from above the terms in the tensorial product. We write \(\mathbf{t}:=\mathbf{y} - h\left [\frac{\mathbf{y}} {h}\right ]\), which is the center of the ball of \(\tilde{A}\). As an example, we examine its top left term and use again Theorem 4:

$$\displaystyle\begin{array}{rcl} \hat{m}_{h}^{100}(\tilde{A}_{ h}) \times \hat{m}_{h}^{100}(\tilde{A}_{ h})& =& \left (m^{100}(\tilde{A}) + O\left (R^{2}\left (\left \|\mathbf{t}\right \|_{ \infty } + 2R\right )h\right )\right )^{2} \\ & =& \left (m^{100}(\tilde{A})\right )^{2} + O(R^{7}h) + O(R^{6}h^{2}).{}\end{array}$$
(9.79)

The last equality comes from the fact that \(\vert m^{100}(\tilde{A})\vert \leq m^{100}(B_{R+\sqrt{3}h}^{+}(\mathbf{0})) \leq 2\pi R^{4}\) and that \(\|\mathbf{t}\|_{\infty }\) is smaller than h, which is smaller than a constant times R.

Other terms of the tensorial product are upper bounded similarly. We look now at the upper left term of the whole covariance matrix difference:

$$\displaystyle\begin{array}{rcl} & & \left \vert \left (\mathcal{V }_{h}(\mathtt{G}_{h}(\tilde{A})) -\mathcal{V }(\tilde{A})\right )_{1,1}\right \vert \\ & & = \left \vert \hat{m}_{h}^{200}(\tilde{A}_{ h}) - \frac{1} {\hat{m}_{h}^{000}(\tilde{A}_{h})}\left (\hat{m}_{h}^{100}(\tilde{A}_{ h})\right )^{2} - m^{200}(\tilde{A}) + \frac{1} {m^{000}(\tilde{A})}\left (m^{100}(\tilde{A})\right )^{2}\right \vert. \\ & & \leq \left \vert \hat{m}_{h}^{200}(\tilde{A}_{ h}) - m^{200}(\tilde{A})\right \vert \qquad \qquad \text{(using Eq.(9.78) and Eq.(9.79))} {}\end{array}$$
(9.80)
$$\displaystyle\begin{array}{rcl} & & \quad + \left \vert \left ( \frac{1} {m^{000}(\tilde{A})} + O(R^{-4}h)\right )\left (\left (m^{100}(\tilde{A})\right )^{2} + O(R^{7}h) + O(R^{6}h^{2})\right ) -\frac{\left (m^{100}(\tilde{A})\right )^{2}} {m^{000}(\tilde{A})} \right \vert \\ & &\leq O(R^{2}(\|\mathbf{t}\|_{ \infty } + 2R)^{2})h + O(R^{3}h^{4})\qquad \qquad \qquad \text{(Theorem 4)} \\ & & \quad + O(R^{4}h) + O(R^{3}h^{2}) + O(R^{2}h^{3}).\qquad \qquad \text{(using }m^{100}(\tilde{A}) = O(R^{4})\text{)} {}\end{array}$$
(9.81)

Noticing that \(\|\mathbf{t}\|_{\infty }\leq h\) and \(h \leq R/\sqrt{6}\), we conclude for this term. All other terms of the matrix are bounded above in a similar way. □

Theorem 10 (Multigrid Convergence of Digital Covariance Matrix)

Let X be a compact domain of \(\mathbb{R}^{3}\) such that its boundary ∂X has reach greater than ρ.

Then the digital covariance matrix is multigrid convergent toward the covariance matrix for the Gauss digitization on such shapes X for any radius \(R <\frac{\rho } {2}\) . More precisely, we have

$$\displaystyle\begin{array}{rcl} \forall 0 <h& <& \frac{R} {\sqrt{6}},\,\,\forall \mathbf{x} \in \partial X,\,\,\forall \hat{\mathbf{x}} \in \partial [\mathtt{G}_{h}(X)]_{h} \text{{ with }} \|\hat{\mathbf{x}} -\mathbf{x}\|_{\infty }\leq h, \\ & & \left \|\mathcal{V }_{h}(B_{R/h}(\hat{\mathbf{x}}/h) \cap \mathtt{G}_{h}(X)) -\mathcal{V }(B_{R}(\mathbf{x}) \cap X)\right \| \leq O(R^{4}h)\,.{}\end{array}$$
(9.82)

The constants hidden in the big O do not depend on the shape size or geometry. ∥⋅ ∥ denotes the spectral norm on matrices.

Proof

We split the difference of matrices into two parts:

$$\displaystyle\begin{array}{rcl} & & \|\mathcal{V }_{h}(B_{R/h}(\hat{\mathbf{x}}/h) \cap \mathtt{G}_{h}(X)) -\mathcal{V }(B_{R}(\mathbf{x}) \cap X)\| {}\\ & & \quad \leq \|\mathcal{V }_{h}(B_{R/h}(\hat{\mathbf{x}}/h) \cap \mathtt{G}_{h}(X)) -\mathcal{V }(B_{R}(\hat{\mathbf{x}}) \cap X)\| {}\\ & & \qquad +\| \mathcal{V }(B_{R}(\hat{\mathbf{x}}) \cap X) -\mathcal{V }(B_{R}(\mathbf{x}) \cap X)\|. {}\\ \end{array}$$

The first term is directly bounded by \(O(R^{4}h)\) with Theorem 9, since point \(\hat{\mathbf{x}}\) is at distance less than \(h/\sqrt{3}\) from ∂X according to Theorem 1. It remains to bound the second term. Denoting \(\mathbf{t}:=\hat{\mathbf{x}} -\mathbf{x}\), \(t:= \|\mathbf{t}\|_{2}\), and \(X^{{\prime}}:= X -\mathbf{x}\), we simply shift everything to the origin using the translation invariance of the covariance matrix:

$$\displaystyle\begin{array}{rcl} \|\mathcal{V }(B_{R}(\hat{\mathbf{x}}) \cap X) -\mathcal{V }(B_{R}(\mathbf{x}) \cap X)\| =\| \mathcal{V }(B_{R}(\mathbf{t}) \cap X^{{\prime}}) -\mathcal{V }(B_{ R}(\mathbf{0}) \cap X^{{\prime}})\|.& &{}\end{array}$$
(9.83)

We will apply Lemma 11 to the different moments occurring in the covariance matrix \(\mathcal{V }\). We denote by \(Y_{\mathbf{t}}\) the set \(B_{R}(\mathbf{t}) \cap X^{{\prime}}\) and by \(Y_{\mathbf{0}}\) the set \(B_{R}(\mathbf{0}) \cap X^{{\prime}}\).

$$\displaystyle\begin{array}{rcl} \|\mathcal{V }(Y _{\mathbf{t}}) -\mathcal{V }(Y _{\mathbf{0}})\|& =& \left \|\left [\begin{array}{cc} m^{200}(Y _{\mathbf{t}}) - m^{200}(Y _{\mathbf{0}})&\\ &\ddots \end{array} \right ]\right. \\ & & \quad - \frac{1} {m^{000}(Y _{\mathbf{t}})}\left [\begin{array}{c} m^{100}(Y _{\mathbf{t}})\\ \vdots \end{array} \right ] \otimes \left [\begin{array}{c} m^{100}(Y _{\mathbf{t}})\\ \vdots \end{array} \right ]^{T} \\ & & \left.\quad + \frac{1} {m^{000}(Y _{\mathbf{0}})}\left [\begin{array}{c} m^{100}(Y _{\mathbf{0}})\\ \vdots \end{array} \right ] \otimes \left [\begin{array}{c} m^{100}(Y _{\mathbf{0}})\\ \vdots \end{array} \right ]^{T}\right \|.{}\end{array}$$
(9.84)

Matrix \(\mathcal{V }(Y _{\mathbf{t}}) -\mathcal{V }(Y _{\mathbf{0}})\) contains differences of geometrical moments of order two (e.g. \(m^{200}(Y_{\mathbf{t}}) - m^{200}(Y_{\mathbf{0}})\)) and quantities of the form \(\varDelta:=\frac{m^{100}(Y _{\mathbf{ t}})^{2}} {m^{000}(Y _{\mathbf{t}})} -\frac{m^{100}(Y _{\mathbf{ 0}})^{2}} {m^{000}(Y _{\mathbf{0}})}\) (component (1, 1) of the matrix \(\mathcal{V }(Y _{\mathbf{t}}) -\mathcal{V }(Y _{\mathbf{0}})\)). From Lemma 11, every error on second-order moments is in \(O(tR^{4})\). To bound the Δ quantities, we first observe that \(\vert m^{000}(Y_{\mathbf{t}}) - m^{000}(Y_{\mathbf{0}})\vert =\pi R^{2}(t + O(t^{2}) + O(tR^{2}))\) using Theorem 7 in [49]. Hence, we get

$$\displaystyle\begin{array}{rcl} \varDelta & =& \frac{m^{100}(Y _{\mathbf{t}})^{2}} {m^{000}(Y _{\mathbf{0}}) + O(tR^{2})} -\frac{m^{100}(Y _{\mathbf{0}})^{2}} {m^{000}(Y _{\mathbf{0}})} {}\\ & =& O(tR^{2}) \frac{m^{100}(Y _{\mathbf{t}})^{2}} {m^{000}(Y _{\mathbf{0}})^{2}} + \frac{m^{100}(Y _{\mathbf{t}})^{2} - m^{100}(Y _{\mathbf{0}})^{2}} {m^{000}(Y _{\mathbf{0}})} \,. {}\\ \end{array}$$

Since \(\frac{a} {b+O(x)} = \frac{a} {b} + \frac{a} {b^{2}} O(x)\) and \(a^{2} - b^{2} = (a - b)(a + b)\), and using again Lemma 11, we have

$$\displaystyle\begin{array}{rcl} \varDelta & =& O(tR^{4}) + (m^{100}(Y _{\mathbf{ t}}) + m^{100}(Y _{\mathbf{ 0}}))\frac{m^{100}(Y _{\mathbf{t}}) - m^{100}(Y _{\mathbf{0}})} {m^{000}(Y _{\mathbf{0}})} {}\\ \;& =& O(tR^{4}) + (O(tR^{3}) + O(R^{4}))\frac{m^{100}(Y _{\mathbf{t}}) - m^{100}(Y _{\mathbf{0}})} {m^{000}(Y _{\mathbf{0}})} {}\\ \end{array}$$

Since the boundary of \(X^{{\prime}} = X -\mathbf{x}\) goes through the point 0 and is smooth, the volume of \(Y_{\mathbf{0}} = B_{R}(\mathbf{0}) \cap X^{{\prime}}\) is some \(\varTheta (R^{3})\). Then, Lemma 11, Eq. (9.70), bounds the error on the 100-moments. Last, we have t < R since \(t <\sqrt{3}h\) and \(h <R/\sqrt{6}\). It follows

$$\displaystyle\begin{array}{rcl} \varDelta & = O(tR^{4}) + (O(tR^{3}) + O(R^{4}))\frac{O(tR^{3})} {\varTheta (R^{3})} = O(tR^{4}).& {}\\ \end{array}$$

The same bound is found for all terms of the matrix. Putting everything together gives the result. □

9.5.4 Useful Results of Matrix Perturbation Theory

We have shown above that the digital covariance matrix tends toward the continuous covariance matrix when both are computed on the intersection of the shape with a ball. We have defined curvature estimators from the eigenvalues and eigenvectors of the digital covariance matrix. To show their multigrid convergence, it is thus necessary to understand how an error on a matrix can perturb its eigendecomposition. This is why we first present useful results from matrix perturbation theory [3, 4, 19, 55] before establishing the multigrid convergence of our estimators.

Let M and \(M^{{\prime}}\) be two symmetric matrices. We want to quantify the difference between the eigenvalues and eigenvectors of M and the eigenvalues and eigenvectors of \(M^{{\prime}}\) as functions of norms of \(M - M^{{\prime}}\). For instance, if \(M^{{\prime}}\) is the covariance matrix of noisy data and M the noise-free covariance matrix, matrix perturbation results allow us to bound the eigenvalue defect (and thus the principal curvature defect, as described in Lemma 6).

Let \((\lambda,\boldsymbol{\nu })\) be an eigenpair of M, i.e., λ is an eigenvalue of M and \(\boldsymbol{\nu }\) its associated eigenvector. The eigengap \(\delta _{\lambda }(M)\) associated to an eigenvalue λ is the minimum distance between λ and any distinct eigenvalue of M. Note that the eigengap vanishes if λ has multiplicity greater than 1. We also consider the operator norm (or 2-norm) of a matrix:

$$\displaystyle{ \|M\|_{op}:=\sup _{\|\mathbf{x}\|_{2}=1}\|M\mathbf{x}\|_{2}\,, }$$
(9.85)

please note that if \(m_{max}:=\max _{m_{ij}\in M}\vert m_{ij}\vert\), then \(m_{max} \leq \| M\|_{op} \leq \sqrt{nm} \cdot m_{max}\) for an n × m matrix M.

The main theorem we use is due to Davis and Kahan [19], presented here in a simplified version due to [43]:

Theorem 11 (Davis-Kahan)

Let M and \(M^{{\prime}}\) be two symmetric matrices, λ an eigenvalue of M and \(\delta:=\delta _{\lambda }(M)\). Then for every eigenpair \((\lambda,\boldsymbol{\nu })\) of M, there exists an eigenpair \((\lambda ^{{\prime}},\boldsymbol{\nu }^{{\prime}})\) of \(M^{{\prime}}\) such that:

$$\displaystyle{ \ \vert \lambda -\lambda ^{{\prime}}\vert \leq \sqrt{2}\|M - M^{{\prime}}\|_{ op}\quad \mathit{\text{and}}\quad \|\boldsymbol{\nu } -\boldsymbol{\nu }^{{\prime}}\|\leq \frac{2\|M - M^{{\prime}}\|_{ op}} {\delta } \,, }$$
(9.86)

provided that \(\|M - M^{{\prime}}\|_{op} <\delta \sqrt{2}\) .

Concerning eigenvalues, stronger results can be used to pair eigenvalues of M to eigenvalues of \(M^{{\prime}}\). For instance, sorting the values in increasing order and using the rank to pair them leads to an equivalent result on the eigenvalue difference. We mention here the Lidskii-Weyl inequality, which bounds the eigenvalue defect without any constraint on the eigengap:

Theorem 12 (Lidskii-Weyl Inequality)

If \(\lambda _{i}(M)\) denotes the ordered eigenvalues of some symmetric matrix M and \(\lambda _{i}(M^{{\prime}})\) the ordered eigenvalues of some symmetric matrix \(M^{{\prime}}\), then \(\max _{i}\vert \lambda _{i}(M) -\lambda _{i}(M^{{\prime}})\vert \leq \sqrt{2}\|M - M^{{\prime}}\|_{op}\).

Another consequence of Theorem 11 is that the eigengap between eigenvalues plays an important role in eigenvector deviations. For example, if M is the covariance matrix of a shape that is perfectly symmetric around the Oz axis, one eigenvalue has multiplicity 2. Arbitrarily small perturbations of the shape breaking the symmetry can then lead to large defects of the eigenvectors associated to the eigenvalues with arbitrarily small eigengaps (in other words, the two eigenvectors may rotate around the third one).
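This sensitivity is easy to reproduce numerically. The following hedged Python sketch (synthetic matrices, not actual shape covariances) perturbs a symmetric matrix having a near-degenerate pair of eigenvalues: the eigenvalues stay within the Lidskii-Weyl bound, while the eigenvectors of the pair may rotate by an O(1) amount since \(2\|M - M^{{\prime}}\|_{op}/\delta\) is huge:

```python
import numpy as np

rng = np.random.default_rng(0)
M = np.diag([0.5, 1.0, 1.0 + 1e-8])            # eigengap of the top pair: 1e-8
E = rng.standard_normal((3, 3)) * 1e-6
M_prime = M + (E + E.T) / 2                    # symmetric perturbation, ~1e-6

lam, nu = np.linalg.eigh(M)                    # eigenvalues in ascending order
lamp, nup = np.linalg.eigh(M_prime)

print(np.max(np.abs(lam - lamp)))              # tiny: Lidskii-Weyl (Theorem 12)
print(abs(nu[:, 0] @ nup[:, 0]))               # isolated eigenvalue 0.5: ~1
print(abs(nu[:, 2] @ nup[:, 2]))               # near-degenerate pair: can be << 1
```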

In our context, where eigenvalues/eigenvectors are used to estimate principal curvatures and principal directions, it means that on flat or spherical surface patches, we can expect convergence of the principal curvature values but not of the directions (in the sense of the norm of the vector deviation).

9.5.5 Multigrid Convergence of Integral Principal Curvature Estimators

The multigrid convergence of (local) digital covariance matrix and stability results of matrix perturbation theory induce multigrid convergence for integral principal curvature estimators:

Theorem 13 (Multigrid Convergence of Integral Principal Curvature Estimators \(\hat{\boldsymbol{\kappa }}_{\boldsymbol{1}}^{\boldsymbol{R}}\) and \(\hat{\boldsymbol{\kappa }}_{\boldsymbol{2}}^{\boldsymbol{R}}\))

Let X be a compact domain of \(\mathbb{R}^{3}\) such that its boundary ∂X has reach greater than ρ and has \(C^{3}\)-continuity.

Then, for the Gauss digitization process, the integral principal curvature estimators are multigrid convergent toward the principal curvatures on such shapes X for well-chosen gridsteps h and radius R. More precisely, setting \(R = kh^{\frac{1} {3} }\) with k an arbitrary positive constant, we have

$$\displaystyle\begin{array}{rcl} & & \forall h \in \mathbb{R},\,\,0 <h <\min \left (\left ( \frac{\rho } {2k}\right )^{3},\,\,\left ( \frac{k} {\sqrt{6}}\right )^{\frac{3} {2} }\right ),\,\,\quad \quad \quad \quad \\ & & \forall \mathbf{x} \in \partial X,\forall \hat{\mathbf{x}} \in \partial [\mathtt{G}_{h}(X)]_{h} \text{{ with }} \|\hat{\mathbf{x}} -\mathbf{x}\|_{\infty }\leq h,\quad \quad \quad \quad \\ & & \quad \quad \quad \quad \quad \|\hat{\kappa }_{1}^{R}(\mathtt{G}_{ h}(X),\hat{\mathbf{x}},h) -\kappa _{1}(X,\mathbf{x})\| \leq O(h^{\frac{1} {3} })\,, {}\end{array}$$
(9.87)
$$\displaystyle\begin{array}{rcl} \|\hat{\kappa }_{2}^{R}(\mathtt{G}_{ h}(X),\hat{\mathbf{x}},h) -\kappa _{2}(X,\mathbf{x})\| \leq O(h^{\frac{1} {3} })\,.& &{}\end{array}$$
(9.88)

Proof

First of all, requirements on h imply that \(h <\frac{R} {\sqrt{6}}\) and \(R <\frac{\rho } {2}\). We are thus in the hypotheses of Theorem 10, which indicates that the digital covariance matrix is multigrid convergent to the covariance matrix. We denote \(M^{{\prime}}:=\mathcal{V }_{h}(B_{R/h}(\hat{\mathbf{x}}/h) \cap \mathtt{G}_{h}(X))\) and \(M:=\mathcal{V }(B_{R}(\mathbf{x}) \cap X)\). It follows from this convergence property that:

$$\displaystyle\begin{array}{rcl} \|M^{{\prime}}- M\|_{ op} \leq O(R^{4}h).& & {}\\ \end{array}$$

Since both M and \(M^{{\prime}}\) are symmetric, it follows from matrix perturbation theory (Theorem 12) that the eigenvalues of M and \(M^{{\prime}}\) are close. Recalling that \(\lambda _{1} \geq \lambda _{2} \geq \lambda _{3}\) are the eigenvalues of the covariance matrix M and \(\hat{\lambda }_{1} \geq \hat{\lambda }_{2} \geq \hat{\lambda }_{3}\) are the eigenvalues of the digital covariance matrix \(M^{{\prime}}\), it follows that \(\forall i \in \{ 1,2,3\},\vert \hat{\lambda }_{i} -\lambda _{i}\vert <O(R^{4}h)\). For instance, for \(\hat{\kappa }_{1}^{R}\), we write:

$$\displaystyle\begin{array}{rcl} \hat{\kappa }_{1}^{R}(\mathtt{G}_{ h}(X),\hat{\mathbf{x}},h)& =& \frac{6} {\pi R^{6}}(\hat{\lambda }_{2} - 3\hat{\lambda }_{1}) + \frac{8} {5R} {}\\ & =& \frac{6} {\pi R^{6}}\left (\lambda _{2} - 3\lambda _{1} + O(R^{4}h)\right ) + \frac{8} {5R}. {}\\ \end{array}$$

With the hypothesis of \(C^{3}\)-continuity, we can substitute the truncated Taylor expansion of Lemma 6 into the latter equation. After some simple calculations, we get:

$$\displaystyle\begin{array}{rcl} \hat{\kappa }_{1}^{R}(\mathtt{G}_{ h}(X),\hat{\mathbf{x}},h) = \kappa _{1}(X,\mathbf{x}) + O(R) + O(h/R^{2}).& &{}\end{array}$$
(9.89)

Setting \(R = kh^{\alpha }\), we optimize the value α to minimize all errors. The optimal value is \(\alpha = \frac{1} {3}\) and the bound follows. The reasoning is strictly similar for \(\hat{\kappa }_{2}^{R}\). □

As noticed in [15], slightly better bounds could be obtained if the shape X has a \(C^{3}\)-boundary with strictly positive curvature and if the distance between x and \(\hat{\mathbf{x}}\) could be reduced. The formulation of this theorem in [15] (Theorem 6) requires that the shape X can be decomposed into a finite number of monotone pieces. Here, this is unnecessary since we have rewritten the convergence theorems of digital moments for shapes with the sole property of having positive reach (see Sect. 9.3).

As indicated by Theorem 11, convergence of principal curvature directions is trickier. The problem lies near places of the surface where there are umbilical points, i.e. places where the principal curvatures are equal. In other words, at these points, two eigenvalues coincide and their eigenvectors span a plane. We can nevertheless state:

Theorem 14 (Multigrid Convergence of Integral Principal Direction Estimators \(\hat{\mathbf{w}}_{\boldsymbol{1}}^{\boldsymbol{R}}\) and \(\hat{\mathbf{w}}_{\boldsymbol{2}}^{\boldsymbol{R}}\) and of Integral Normal Estimator \(\hat{\mathbf{n}}^{\boldsymbol{R}}\))

Let X be a compact domain of \(\mathbb{R}^{3}\) such that its boundary ∂X has reach greater than ρ and has \(C^{3}\)-continuity.

Then, for the Gauss digitization process, the integral principal direction estimators are multigrid convergent toward the principal directions \(\mathbf{w}_{1}\) and \(\mathbf{w}_{2}\) on such shapes X for well-chosen gridsteps h and radius R, provided the principal curvatures are distinct. The integral normal vector estimator is also multigrid convergent toward the normal direction \(\mathbf{n}\). More precisely, setting \(R = kh^{\frac{1} {3} }\) with k an arbitrary positive constant, we have

$$\displaystyle\begin{array}{rcl} & & \exists h_{X} \in \mathbb{R}^{+},\,\,\forall h \in \mathbb{R},\,\,0 <h <h_{ X},\,\,\,\,\quad \quad \quad \quad \\ & & \forall \mathbf{x} \in \partial X,\forall \hat{\mathbf{x}} \in \partial [\mathtt{G}_{h}(X)]_{h} \text{{ with }} \|\hat{\mathbf{x}} -\mathbf{x}\|_{\infty }\leq h,\quad \quad \quad \quad \\ & & \quad \quad \quad \quad \|\hat{\mathbf{w}}_{1}^{R}(\mathtt{G}_{ h}(X),\hat{\mathbf{x}},h) -\mathbf{w}_{1}(X,\mathbf{x})\| \leq \frac{1} {\vert \kappa _{1}(X,\mathbf{x}) -\kappa _{2}(X,\mathbf{x})\vert }O(h^{\frac{1} {3} })\,,{}\end{array}$$
(9.90)
$$\displaystyle\begin{array}{rcl} \|\hat{\mathbf{w}}_{2}^{R}(\mathtt{G}_{ h}(X),\hat{\mathbf{x}},h) -\mathbf{w}_{2}(X,\mathbf{x})\| \leq \frac{1} {\vert \kappa _{1}(X,\mathbf{x}) -\kappa _{2}(X,\mathbf{x})\vert }O(h^{\frac{1} {3} })\,,& &{}\end{array}$$
(9.91)
$$\displaystyle\begin{array}{rcl} \|\hat{\mathbf{n}}^{R}(\mathtt{G}_{ h}(X),\hat{\mathbf{x}},h) -\mathbf{n}(X,\mathbf{x})\| \leq O(h^{\frac{2} {3} })\,.& &{}\end{array}$$
(9.92)

NB: Involved vectors are oriented to point to the same side.

Proof

As stated in Theorem 11, eigenvectors of two similar symmetric matrices are close provided the corresponding eigengap is not null. We take the same notations as in the proof of Theorem 13 and recall that \(\|M^{{\prime}}- M\|_{op} \leq O(R^{4}h)\). We denote \((\lambda _{1},\nu _{1})\), \((\lambda _{2},\nu _{2})\) and \((\lambda _{3},\nu _{3})\) the decreasing eigenpairs of M and \((\hat{\lambda }_{1},\hat{\nu }_{1})\), \((\hat{\lambda }_{2},\hat{\nu }_{2})\) and \((\hat{\lambda }_{3},\hat{\nu }_{3})\) the decreasing eigenpairs of \(M^{{\prime}}\). Since no confusion may arise, we write \(\kappa _{1}\) for \(\kappa _{1}(X,\mathbf{x})\) and \(\kappa _{2}\) for \(\kappa _{2}(X,\mathbf{x})\). We start by expressing the eigengaps \(\delta _{i}(M)\) using Lemma 6:

$$\displaystyle\begin{array}{rcl} \delta _{1}(M)&:=& \lambda _{1} -\lambda _{2} = \frac{\pi } {24}(\kappa _{2} -\kappa _{1})R^{6} + O(R^{7}), {}\\ \delta _{3}(M)&:=& \lambda _{2} -\lambda _{3} = \frac{3\pi } {32}R^{5} - \frac{\pi } {1536}(69\kappa _{2} + 5\kappa _{1})R^{6} + O(R^{7}), {}\\ \delta _{2}(M)&:=& \min (\delta _{1}(M),\delta _{3}(M)). {}\\ \end{array}$$

Since R approaches zero as h tends toward zero, it is clear that there exists some \(h_{X}> 0\) such that \(\delta _{1}(M) \leq \delta _{3}(M)\). Hence we suppose hereafter that \(\delta _{2}(M) =\delta _{1}(M)\).

If \(\kappa _{1}\neq \kappa _{2}\), it follows from Theorem 11 that

$$\displaystyle\begin{array}{rcl} \|\hat{\nu }_{1} -\nu _{1}\|& \leq & \frac{2} {\delta _{1}(M)}\|M - M^{{\prime}}\|_{ op} \\ & \leq & \frac{48} {\pi (\kappa _{2} -\kappa _{1})}O\left ( \frac{h} {R^{2}}\right ) + O\left ( \frac{h} {R}\right ).{}\end{array}$$
(9.93)

By Definition 6, \(\hat{\nu }_{1} = \hat{\mathbf{w}}_{1}^{R}(\mathtt{G}_{h}(X),\hat{\mathbf{x}},h)\). Second, Theorem 3 of [48] states that the eigenvectors of M are close to the directions of the principal curvatures and of the normal. More precisely, we have:

$$\displaystyle\begin{array}{rcl} \angle (\nu _{1},\mathbf{w}_{1}(X,\mathbf{x}))& =& O\left ( \frac{R} {\kappa _{2} -\kappa _{1}}\right ), {}\\ \angle (\nu _{2},\mathbf{w}_{2}(X,\mathbf{x}))& =& O\left ( \frac{R} {\kappa _{2} -\kappa _{1}}\right ), {}\\ \angle (\nu _{3},\mathbf{n}(X,\mathbf{x}))& =& O\left (R^{2}\right ). {}\\ \end{array}$$

Third, for unit vectors u and v, \(\|\mathbf{u} -\mathbf{v}\| = 2\sin \frac{\alpha }{2}\) where α is the angle between u and v. If \(0 \leq \alpha \leq \frac{\pi } {2}\), it is thus straightforward to check that \(\frac{\alpha }{\sqrt{2}} \leq \sin \alpha \leq \alpha\) and \(\frac{\alpha }{\sqrt{2}} \leq \|\mathbf{u} -\mathbf{v}\| \leq \alpha\). Putting these three facts together with Eq.(9.93) gives:

$$\displaystyle\begin{array}{rcl} \left \|\hat{\mathbf{w}}_{1}^{R}(\mathtt{G}_{ h}(X),\hat{\mathbf{x}},h) -\mathbf{w}_{1}(X,\mathbf{x})\right \|& \leq & \|\hat{\nu }_{1} -\nu _{1}\| +\|\nu _{1} -\mathbf{w}_{1}(X,\mathbf{x})\| {}\\ & \leq & \frac{1} {\vert \kappa _{1} -\kappa _{2}\vert }O\left ( \frac{h} {R^{2}}\right ) + O\left ( \frac{R} {\kappa _{2} -\kappa _{1}}\right ), {}\\ & \leq & \frac{1} {\vert \kappa _{1} -\kappa _{2}\vert }O\left (h^{\frac{1} {3} }\right ), {}\\ \end{array}$$

since \(R = kh^{\frac{1} {3} }\). The proof for the second principal direction is strictly similar, since \(\delta _{2}(M) =\delta _{1}(M)\). For the normal estimator, we use the relation on \(\delta _{3}(M)\) to get:

$$\displaystyle\begin{array}{rcl} \|\hat{\nu }_{3} -\nu _{3}\|& \leq & \frac{2} {\delta _{3}(M)}\|M - M^{{\prime}}\|_{ op} {}\\ & \leq & \frac{48} {3\pi } O\left ( \frac{h} {R}\right ) + O(h). {}\\ \end{array}$$

Since \(\hat{\nu }_{3} = \hat{\mathbf{n}}^{R}(\mathtt{G}_{h}(X),\hat{\mathbf{x}},h)\) by definition and with the same reasoning as above, we derive:

$$\displaystyle\begin{array}{rcl} \|\hat{\mathbf{n}}^{R}(\mathtt{G}_{ h}(X),\hat{\mathbf{x}},h) -\mathbf{n}(X,\mathbf{x})\|& \leq & \|\hat{\nu }_{3} -\nu _{3}\| +\|\nu _{3} -\mathbf{n}(X,\mathbf{x})\| {}\\ & \leq & O\left ( \frac{h} {R}\right ) + O(R^{2}) = O\left (h^{\frac{2} {3} }\right ), {}\\ \end{array}$$

since \(R = kh^{\frac{1} {3} }\). □

To conclude this section, we have shown that digital integral invariants provide convergent estimates of curvature values and directions in 3D. Furthermore, they also provide a 3D normal estimator for digital surfaces, whose worst-case convergence speed compares very favorably with the literature (see [12] for a survey). Last, everything has been presented for the 2D and 3D cases, but most of the results remain valid in arbitrary dimension. This is due to the fact that we have shown the convergence of digital moments in arbitrary dimension, while the other stability results come from matrix perturbation theory, which is also valid in arbitrary dimension. Only the Taylor expansion of integral invariants in general dimension is missing. We will show in the next sections that these estimators compare well with other estimators in practice.

9.6 Parameter-Free Digital Curvature Estimation

In the multigrid convergence framework, the notion of scale is explicit with the parameter h. For instance, convergence results in \(O(h^{\frac{1} {3} })\) can be obtained for a suitable integration kernel radius R. In some practical situations, we have to analyze a given digital object \(Z \subset \mathbb{Z}^{d}\) without any information about its Euclidean embedding. As a consequence, setting suitable values for R can be challenging. Note that besides the integral invariant estimators described in the previous sections, most existing curvature estimators are also parametrized by such a kernel size or window size parameter.

In this section, we propose techniques to automatically set such a radius parameter for a given digital object Z in dimensions 2 and 3, with the property that if Z comes from a multigrid digitization process, then we remain within the hypotheses of the convergence theorems. In short, the key point is to rely on geometrical information extracted from the digital contour, here maximal digital straight segments, from which the multigrid parameter h can be retrieved. Let us first formalize this property.

9.6.1 Asymptotic Laws of Straight Segments Along the Boundary of Digitized Shapes and Scale Determination

First of all, we recall the definition of digital straight segments [29, 51].

Definition 7 (Standard Line and Digital Straight Segment)

The set of points \((x,y) \in \mathbb{Z}^{2}\) satisfying \(\mu \leq ax - by <\mu +\vert a\vert + \vert b\vert\), with a, b and μ integer numbers, is called the standard digital line with slope a∕b and shift μ. Any connected subset of pixels of a standard digital line is a digital straight segment (DSS for short).
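As a small illustration of Definition 7, the following Python sketch (our own hypothetical helper) checks the defining arithmetic inequality; incremental recognition of the DSS parameters a, b, μ along a contour is handled by classical algorithms from the digital geometry literature and is not reproduced here:

```python
def on_standard_line(points, a, b, mu):
    """True iff every point (x, y) satisfies mu <= a*x - b*y < mu + |a| + |b|."""
    width = abs(a) + abs(b)
    return all(mu <= a * x - b * y < mu + width for (x, y) in points)

# A 4-connected piece of the standard line of slope 1/2 (a=1, b=2, mu=0):
print(on_standard_line([(0, 0), (1, 0), (2, 0), (2, 1), (3, 1)], 1, 2, 0))  # True
```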

On a digital set \(Z \subset \mathbb{Z}^{2}\), we denote by Bd(Z) its (cellular) topological boundary, composed of pointels (0-cells) and linels (1-cells). Following the discussion in Sect. 9.2.3, the canonical embedding of Bd(Z) into \(\mathbb{R}^{2}\) coincides with \(\partial [Z]_{h}\). On Bd(Z), we can define maximal segments and maximal segment pencils:

Definition 8 (Maximal Segment and Maximal Segment Pencil [37])

The pointels composing the digital boundary Bd(Z) form a 4-connected contour. They can thus be numbered consecutively as \((\mathbf{p}_{i})_{i=0\ldots n-1}\). A sequence of pointels \((\mathbf{p}_{i},\ldots,\mathbf{p}_{j})\), indices taken modulo n, is a maximal segment on Bd(Z) iff \(\{\mathbf{p}_{i},\ldots,\mathbf{p}_{j}\}\) is a DSS, while neither \(\{\mathbf{p}_{i-1},\mathbf{p}_{i},\ldots,\mathbf{p}_{j}\}\) nor \(\{\mathbf{p}_{i},\ldots,\mathbf{p}_{j},\mathbf{p}_{j+1}\}\) are DSS. At a given pointel \(\mathbf{p} \in Bd(Z)\), the pencil of maximal segments at p is the set of maximal segments on Bd(Z) containing p.

In many situations, maximal segments and maximal segment pencils play a very important role in multigrid digital contour geometry processing [34, 57]. For the purpose of this paper, let us focus on the asymptotic properties of lengths of maximal segment:

Lemma 12 (Asymptotic Laws of Maximal Segments [34, 57])

Let X be some convex shape of \(\mathbb{R}^{2}\), with at least \(C^{3}\)-boundary and non-null bounded curvature. The discrete lengths of maximal segments in Bd(Z) for \(Z = \mathtt{G}_{h}(X)\) satisfy:

  • the shortest is lower bounded by \(\varOmega (h^{-\frac{1} {3} })\) ;

  • the longest is upper bounded by \(O(h^{-\frac{1} {2} })\) ;

  • their average length, denoted \(L_{D}(Z)\), is such that:

    $$\displaystyle{ \varTheta \left (h^{-\frac{1} {3} }\right ) \leq L_{D}(Z) \leq \varTheta \left (h^{-\frac{1} {3} }\log \left (\frac{1} {h}\right )\right )\,. }$$
    (9.94)

We now have the key ingredient for the scale inference of Z (if Z is the digitization of a \(C^{3}\) Euclidean shape): if we compute the average length of all maximal segments in Bd(Z), Eq. (9.94) allows us to retrieve the scale parameter h. We can now introduce our parameter-free curvature estimator in 2D:

Definition 9

Given \(Z \subset \mathbb{Z}^{2}\), the parameter-free digital curvature estimator \(\hat{\kappa }^{{\ast}}\) at a pointel \(\mathbf{p} \in Bd(Z)\) is defined as:

$$\displaystyle{ \hat{\kappa }^{{\ast}}(Z,\mathbf{p}):= \frac{3\pi } {2\rho (Z)} -\frac{3A(Z,\mathbf{p})} {\rho (Z)^{3}} \,, }$$
(9.95)

where \(\rho (Z) = L_{D}^{2}(Z)\) and \(A(Z,\mathbf{p}) = \mathrm{Card}(B_{\rho (Z)}(\mathbf{p}) \cap Z)\).

To rephrase the definition, we first compute the average discrete length of all maximal segments on Bd(Z). Then ρ is the square of this length. The estimation \(\hat{\kappa }^{{\ast}}(Z,\mathbf{p})\) is a function of the number of digital points of Z within the ball of radius ρ centered at p. This definition mimics the continuous definition in Eq. (9.14) and is a scale-independent version of Definitions 3 and 4 (page 314).
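A hedged Python sketch of Definition 9 follows; the discrete lengths of the maximal segments are assumed to be precomputed by a DSS recognition algorithm (e.g. the one available in DGtal), which is not reproduced here:

```python
import numpy as np

def kappa_star(Z, p, maximal_segment_lengths):
    """Parameter-free curvature estimator of Eq. (9.95) at pointel p.
    Z: array of digital points of the object; maximal_segment_lengths:
    discrete lengths of all maximal segments on Bd(Z) (assumed given)."""
    rho = np.mean(maximal_segment_lengths) ** 2        # rho(Z) = L_D(Z)^2
    Z = np.asarray(Z, dtype=float)
    A = np.sum(np.sum((Z - np.asarray(p, dtype=float)) ** 2, axis=1) <= rho ** 2)
    return 3 * np.pi / (2 * rho) - 3 * A / rho ** 3
```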

9.6.2 Parameter-Free Digital Curvature Estimators

In dimension 2, \(\hat{\kappa }^{{\ast}}(Z,\mathbf{p})\) is parameter-free and only related to the digital object Z. However, if Z is the digitization of a continuous shape X, i.e. if \(Z = \mathtt{G}_{h}(X)\), then multigrid convergence is obtained:

Theorem 15 (Multigrid Convergence of Curvature Estimator \(\hat{\kappa }^{{\ast}}\) [39])

Let X be some convex shape of \(\mathbb{R}^{2}\), with at least \(C^{3}\)-boundary and non-null bounded curvature. Let \(Z = \mathtt{G}_{h}(X)\). Then there exists a positive constant \(h_{0}\) such that, for any \(0 <h \leq h_{0}\), we have, \(\forall \mathbf{x} \in \partial X\) and \(\forall \mathbf{p} \in Bd(Z)\),

$$\displaystyle\begin{array}{rcl} \|h\mathbf{p} -\mathbf{x}\|_{\infty }\leq h\Rightarrow \left \vert \frac{1} {h}\hat{\kappa }^{{\ast}}(Z,\mathbf{p}) -\kappa (X,\mathbf{x})\right \vert \leq O\left (h^{\frac{1} {3} }\log ^{2}\left (\frac{1} {h}\right )\right )\,.& &{}\end{array}$$
(9.96)

Note that \(\mathbf{p} \in Bd(Z)\) implies \(h\mathbf{p} \in \partial [\mathtt{G}_{h}(X)]_{h}\). The parameter-free curvature estimate is rescaled by \(\frac{1} {h}\) in order to compare comparable shapes. To sketch the proof, Eq. (9.94) implies that \(h\rho (Z)\) is in \(O\left (h^{\frac{1} {3} }\log ^{2}\left (\frac{1}{h}\right )\right )\). Theorem 7 (page 318) only requires the kernel radius R to be in \(O(h^{\frac{1} {3} })\) to achieve multigrid convergence. In [39], we show that considering \(R = O\left (h^{\frac{1} {3} }\log ^{2}\left (\frac{1}{h}\right )\right )\) does not change the convergence property.

9.6.3 Local Parameter-Free Digital Curvature Estimator and 3D Case

Several extensions can be considered. The first one is a local definition of \(\hat{\kappa }^{{\ast}}(Z,\mathbf{p})\). Indeed, Definition 9 imposes the same radius parameter at each point of Bd(Z) (the average of all maximal segment lengths). We may be interested in a local version to obtain adaptive estimations. For instance, a local version is easily defined by considering, at each point \(\mathbf{p}\), the average length of the DSS in its maximal segment pencil, denoted \(\rho (Z,\mathbf{p})\). Hence, we define:

$$\displaystyle{ \hat{\kappa }^{{\ast}}_{ l}(Z,\mathbf{p}):= \frac{3\pi } {2\rho (Z,\mathbf{p})} -\frac{3A^{{\prime}}(Z,\mathbf{p})} {\rho (Z,\mathbf{p})^{3}} \,, }$$
(9.97)

where \(A^{{\prime}}(Z,\mathbf{p}) = \mathrm{Card}(B_{\rho (Z,\mathbf{p})}(\mathbf{p}) \cap Z)\).

Doing so, we can no longer prove the multigrid convergence of \(\frac{1} {h}\hat{\kappa }^{{\ast}}_{ l}(Z,\mathbf{p})\), since the maximal pencil may contain pathological DSS that are too long (those in \(O(h^{-\frac{1} {2} })\) in Lemma 12). However, experimental evaluation shows good convergence properties, and we observe that \(\hat{\kappa }^{{\ast}}_{l}(Z,\mathbf{p})\) outperforms \(\hat{\kappa }^{{\ast}}(Z,\mathbf{p})\).

A second extension considers the 3D case with \(Z \subset \mathbb{Z}^{3}\). In this case, we first want to construct a mean curvature estimator setting the kernel radius R from geometrical characteristics, denoted ρ, of Bd(Z). A first issue is that in dimension 3, even if digital planes can be defined, no result similar to Lemma 12 exists. In [39], we have presented a slice-based approach: intersecting Z with planes aligned with the grid axes defines a set of 2D digital curves on which maximal DSS can be computed, and thus the average maximal DSS length can be obtained. Similarly, at a given surfel s, two specific 2D curves can be defined, and thus a local parameter-free estimator can be defined considering the average DSS length in the two maximal segment pencils containing the projection of s. As in the local 2D case, some pathological configurations may occur, preventing us from having a complete convergence proof for such estimators (see details in [39]). However, experimental convergence can be observed, and the following section details these results.
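Before turning to the experiments, here is a hedged sketch of this slice decomposition (our own minimal version, with Z stored as a 3D boolean occupancy array); the 2D maximal segment analysis of Sect. 9.6.1 is then run on the boundary of each slice:

```python
import numpy as np

def axis_slices(Z):
    """Decompose a 3D digital object into its axis-aligned 2D slices.
    Z: 3D boolean occupancy array. Yields (axis, index, 2D boolean slice)
    for every non-empty slice; each slice is a 2D digital set whose
    boundary supports the DSS/maximal segment machinery of Sect. 9.6.1."""
    for axis in range(3):
        for i in range(Z.shape[axis]):
            sl = np.take(Z, i, axis=axis)
            if sl.any():
                yield axis, i, sl
```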

9.7 Experimental Evaluation

In this section, all presented estimators are evaluated experimentally and compared with state-of-the-art techniques.

9.7.1 Implementation Details

Integral invariant estimators are based on spherical kernels with radius given by \(R = kh^{\alpha }\) as described in the theorem statements. Then, the digital object boundary is tracked and the kernel is centered on each surface element. For the 2D and 3D mean curvature estimators, the volumetric integral of the intersection between the kernel and the object is computed; for the 3D principal curvature estimators, the covariance matrix of this intersection is computed, and then eigenvalues and eigenvectors are deduced from it by diagonalization.

A brute-force implementation would lead to a computational cost of \(O((R/h)^{d})\) per surface element (i.e. the size of the kernel at grid step h). However, all quantities are additive and we can take advantage of the digital surface structure to speed up this algorithm considerably: if we consider a surface tracker for which surface elements are processed by proximity (the current surface element is a neighbor of the previous one through a translation vector \(\boldsymbol{\delta }\)), the area/volume estimation can be done incrementally:

$$\displaystyle\begin{array}{rcl} & & \widehat{\mathrm{Area}}\left (\mathtt{G}_{h}(X) \cap B_{R}(\mathbf{x}+\boldsymbol{\delta }),h\right ) = \widehat{\mathrm{Area}}\left (\mathtt{G}_{h}(X) \cap B_{R}(\mathbf{x}),h\right ) {}\\ & & \quad + \widehat{\mathrm{Area}}\left (\mathtt{G}_{h}(X) \cap (B_{R}(\mathbf{x}+\boldsymbol{\delta })\setminus B_{R}(\mathbf{x})),h\right ) {}\\ & & \quad -\widehat{\mathrm{Area}}\left (\mathtt{G}_{h}(X) \cap (B_{R}(\mathbf{x})\setminus B_{R}(\mathbf{x}+\boldsymbol{\delta })),h\right )\,. {}\\ \end{array}$$

Similarly we have for moments:

$$\displaystyle\begin{array}{rcl} & & \hat{m}_{p,q,s}\left (\mathtt{G}_{h}(X) \cap B_{R}(\mathbf{x}+\boldsymbol{\delta }),h\right ) =\hat{ m}_{p,q,s}\left (\mathtt{G}_{h}(X) \cap B_{R}(\mathbf{x}),h\right ) {}\\ & & \quad +\hat{ m}_{p,q,s}\left (\mathtt{G}_{h}(X) \cap (B_{R}(\mathbf{x}+\boldsymbol{\delta })\setminus B_{R}(\mathbf{x})),h\right ) {}\\ & & \quad -\hat{ m}_{p,q,s}\left (\mathtt{G}_{h}(X) \cap (B_{R}(\mathbf{x})\setminus B_{R}(\mathbf{x}+\boldsymbol{\delta })),h\right )\,. {}\\ \end{array}$$

Then, if we precompute all kernels \(\mathtt{G}_{h}(B_{R}(0\pm \boldsymbol{\delta })\setminus B_{R}(0))\) for the possible \(\boldsymbol{\delta }\) displacements (based on surface element umbrella configurations, 8 in 2D and 26 in 3D for \(\|\boldsymbol{\delta }\|_{\infty } = h\)), the computational cost per surface element can be reduced to \(O((R/h)^{d-1})\). Finally, in the ideal case of a Hamiltonian traversal of the surface, only the first surfel has to be computed using the full kernel \(B_{R}(\hat{\mathbf{x}})\); every subsequent neighboring surfel is processed using the sub-kernels \(\mathtt{G}_{h}(B_{R}(0\pm \boldsymbol{\delta })\setminus B_{R}(0))\).
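To make the incremental scheme concrete, here is a hedged Python sketch for the simplest quantity, the point count behind \(\widehat{\mathrm{Area}}\) and \(\widehat{\mathop{\mathrm{Vol}}\nolimits }\) (our own simplified version, not the DGtal code); the per-surfel cost is proportional to the size of the two precomputed shell masks, i.e. \(O((R/h)^{d-1})\):

```python
import numpy as np

def ball_points(r):
    """Integer points of the ball of radius r centered at the origin."""
    n = int(np.ceil(r))
    rng = range(-n, n + 1)
    return {(x, y, z) for x in rng for y in rng for z in rng
            if x * x + y * y + z * z <= r * r}

def precompute_masks(R, h, deltas):
    """For each digital displacement delta, precompute the two shells
    of points entering (in the shifted ball only) and leaving (in the
    original ball only) when the kernel center moves by delta."""
    ball0 = ball_points(R / h)
    masks = {}
    for d in deltas:
        balld = {(x + d[0], y + d[1], z + d[2]) for (x, y, z) in ball0}
        masks[d] = (balld - ball0, ball0 - balld)      # (entering, leaving)
    return masks

def incremental_count(Z, count, x, delta, masks):
    """Update Card(B_{R/h}(x) intersected with Z) when the center moves by delta.
    Z: set of integer point tuples; x: previous center (integer tuple)."""
    entering, leaving = masks[delta]
    count += sum((x[0] + u, x[1] + v, x[2] + w) in Z for (u, v, w) in entering)
    count -= sum((x[0] + u, x[1] + v, x[2] + w) in Z for (u, v, w) in leaving)
    return count
```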

To perform a precise performance evaluation, both in the multigrid framework and with respect to the state of the art, we need a family of Euclidean shapes \(\mathbb{X}\) on which the estimated quantity is known. Table 9.1 and Fig. 9.5 present the continuous shapes considered in this analysis. Please note that, for the parameter-free curvature estimators, only the convex ones match the theorem statements. However, we can still experimentally evaluate the behavior of the estimators on shapes that do not satisfy all theorem hypotheses.

Table 9.1 Equations, parameters and domains of Euclidean shapes considered in the experimental evaluation (t ∈ [0, 2π] for parametric curves)
Fig. 9.5

Illustrations of 2D and 3D shapes considered in the experimental evaluation (please refer to Table 9.1 for equations and parameters): ellipse (a), flower (b), accelerated flower (c), sphere (d), rounded cube (e) and Goursat’s surface (f)

All curvature estimators have been implemented in DGtal, an open-source library devoted to digital geometry tools. Besides the integral invariant curvature estimators presented here, this library offers mathematical shapes with known curvature values and the Gauss digitization of such objects on a grid with grid step h. Furthermore, many existing curvature estimators from the literature are also available, which makes comparative evaluations easy.

9.7.2 Multigrid Convergence Analysis

We have first checked experimentally that the α parameter (for the ball radius \(R = kh^{\alpha }\)) should indeed be set around \(\frac{1} {3}\) to get multigrid convergence. This was empirically observed in [15], which details the convergence behavior for different values of α. In all following experiments, we have set α to \(\frac{1} {3}\).

In Figs. 9.6 and 9.7, we compare our digital integral invariant estimators (II) to state-of-the-art methods for the \(\ell _{\infty }\) and \(\ell _{2}\) error norms respectively. In dimension 2, the other curvature estimators are: curvature from Most-centered Maximal Segments with length information (MDSS) [13, 57], curvature from Most-centered Digital Circular Arcs (MDCA) [53] and binomial-based convolution (BC) [22]. In dimension 3, we have considered the curvature estimation from polynomial surface approximation (Jet Fitting) [10]. For the latter, we have also chosen a window size in \(kh^{\frac{1} {3} }\). In Fig. 9.8, we detail timings in logscale for various estimators on the flower shape in 2D and the rounded cube in 3D. As expected, approaches based on object recognition in dimension 2 (MDSS and MDCA) provide faster computations. We also observe that II is a bit slower but has a much more favorable asymptotic behavior than BC. In dimension 3 (Fig. 9.8b), we observe that Jet Fitting and II behave similarly and that II is 10 times faster than our implementation of Jet Fitting.

Fig. 9.6

Multigrid analysis of the estimation error with the \(\ell _{\infty }\) norm in 2D and 3D. (a) Ellipse-\(\hat{\kappa }^{R}\)-\(\ell _{\infty }\). (b) Flower-\(\hat{\kappa }^{R}\)-\(\ell _{\infty }\). (c) AccFlower-\(\hat{\kappa }^{R}\)-\(\ell _{\infty }\). (d) Sphere-\(\hat{H}^{R}\)-\(\ell _{\infty }\). (e) RoundedCube-\(\hat{H}^{R}\)-\(\ell _{\infty }\). (f) Goursat-\(\hat{H}^{R}\)-\(\ell _{\infty }\). (g) Sphere-\(\hat{\kappa }_{R}^{1}\)-\(\ell _{\infty }\). (h) RoundedCube-\(\hat{\kappa }_{R}^{1}\)-\(\ell _{\infty }\). (i) Goursat-\(\hat{\kappa }_{R}^{1}\)-\(\ell _{\infty }\). (j) Sphere-\(\hat{\kappa }_{R}^{2}\)-\(\ell _{\infty }\). (k) RoundedCube-\(\hat{\kappa }_{R}^{2}\)-\(\ell _{\infty }\). (l) Goursat-\(\hat{\kappa }_{R}^{2}\)-\(\ell _{\infty }\)

Fig. 9.7

Multigrid analysis of the estimation error with the \(\ell _{2}\) norm in 2D and 3D. (a) Ellipse-\(\hat{\kappa }^{R}\)-\(\ell _{2}\). (b) Flower-\(\hat{\kappa }^{R}\)-\(\ell _{2}\). (c) AccFlower-\(\hat{\kappa }^{R}\)-\(\ell _{2}\). (d) Sphere-\(\hat{H}^{R}\)-\(\ell _{2}\). (e) RoundedCube-\(\hat{H}^{R}\)-\(\ell _{2}\). (f) Goursat-\(\hat{H}^{R}\)-\(\ell _{2}\). (g) Sphere-\(\hat{\kappa }_{R}^{1}\)-\(\ell _{2}\). (h) RoundedCube-\(\hat{\kappa }_{R}^{1}\)-\(\ell _{2}\). (i) Goursat-\(\hat{\kappa }_{R}^{1}\)-\(\ell _{2}\). (j) Sphere-\(\hat{\kappa }_{R}^{2}\)-\(\ell _{2}\). (k) RoundedCube-\(\hat{\kappa }_{R}^{2}\)-\(\ell _{2}\). (l) Goursat-\(\hat{\kappa }_{R}^{2}\)-\(\ell _{2}\)

Fig. 9.8

Timings in milliseconds for 2D estimators on a flower (a) and 3D estimators on a rounded cube (b). Results have been obtained on an Intel Xeon 2.27 GHz desktop machine

Finally, Fig. 9.9 illustrates various curvature maps on binary shapes.

Fig. 9.9

Integral invariant based differential estimators on a digital surface (\(256^{3}\) OctaFlower shape). From left to right: mean curvature, Gaussian curvature, first principal direction, second principal direction and normal vector field (zooms in the third row)

9.7.3 Parameter-Free Estimators

As described in Sect. 9.6, the ball radius can be deduced from the contour/surface geometry in order to design a parameter-free estimator. More precisely, in dimension 2, setting the radius to the square of the average length of all maximal DSS allows us to keep a multigrid convergence property (estimator \(\hat{\kappa }^{{\ast}}(Z,p)\) in Theorem 15). Furthermore, this ball radius can be defined locally (estimator \(\hat{\kappa }^{{\ast}}_{l}(Z,p)\)) to capture the local behavior of the contour and thus reduce the \(\ell _{2}\) error. The main drawback of the locally adapted estimator is that, since the radius may change for each surfel, we can no longer use the incremental propagation of area/volume or moments described in Sect. 9.7.1. We can define an intermediate estimator based on a quantization of all local ball radii along the contour using, for instance, a k-means approach (\(\hat{\kappa }^{{\ast}}_{K=5}\) denotes the local curvature estimator defined from a quantization of ball radii into five classes; see the sketch after this paragraph). Figures 9.10 and 9.12 present convergence results for the \(\ell _{\infty }\) and \(\ell _{2}\) error norms respectively. As expected, we observe the multigrid convergence of the parameter-free estimators for both error metrics. Furthermore, we can observe that local approaches (and local approaches based on a quantization) induce smaller \(\ell _{2}\) errors. This means that better estimations can be obtained if we locally adapt the ball radius to the curve/surface geometry. Fig. 9.11 shows, in a scale-space representation, the scales that have been selected for a 2D flower shape. Finally, in Fig. 9.13, we show the multiresolution behavior of \(\hat{\kappa }^{{\ast}}(Z,p)\): since the parameter depends only on the object geometry, we obtain consistent curvature estimations whatever the object resolution.
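As an illustration of the radius quantization step, here is a minimal 1D k-means sketch in Python (the clustering scheme actually used may differ; the names and the quantile initialization are illustrative). Each surfel then uses the representative radius of its class, so that only K sub-kernel families need to be precomputed:

```python
import numpy as np

def quantize_radii(radii, K=5, iters=50):
    """Quantize local ball radii into K classes with a plain 1D k-means,
    and return the class-representative radius for each surfel."""
    radii = np.asarray(radii, dtype=float)
    # initialize the K centers on evenly spaced quantiles of the radii
    centers = np.quantile(radii, np.linspace(0.0, 1.0, K))
    for _ in range(iters):
        # assign each radius to its nearest center
        labels = np.argmin(np.abs(radii[:, None] - centers[None, :]), axis=1)
        # move each center to the mean of its class
        for c in range(K):
            if np.any(labels == c):
                centers[c] = radii[labels == c].mean()
    return centers[labels]
```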

Fig. 9.10

Comparison in log-space of parameter-free curvature \(\ell _{\infty }\) errors in dimension 2 (first row), and parameter-free mean and principal curvature \(\ell _{\infty }\) errors on a multigrid ellipsoid (second row). (a) Ellipse-\(\ell _{\infty }\). (b) Flower-\(\ell _{\infty }\). (c) Ellipsoid-\(\ell _{\infty }\). (d) Ellipsoid-\(\ell _{\infty }\). (e) Ellipsoid-\(\ell _{\infty }\)

Fig. 9.11

Curvature scale-space analysis of a flower: the x-axis is the curvilinear abscissa, the y-axis is the kernel radius; curvature values are mapped between blue (lowest curvature) and yellow (highest curvature). Drawn in black are the radius ρ(Z) for the global estimator \(\hat{\kappa }^{{\ast}}\) (first row), the radii ρ(Z, p) for the local estimator \(\hat{\kappa }^{{\ast}}_{l}\) (second row), and the radii ρ(Z, p) after k-means clustering for the local estimator \(\hat{\kappa }^{{\ast}}_{K=5}\) (last row)

Fig. 9.12

Comparison in log-space of parameter-free curvature \(\ell _{2}\) errors in dimension 2 (first row), and parameter-free mean and principal curvature \(\ell _{2}\) errors on a multigrid ellipsoid (second row). (a) Ellipse-\(\ell _{2}\). (b) Flower-\(\ell _{2}\). (c) Ellipsoid-\(\ell _{2}\). (d) Ellipsoid-\(\ell _{2}\). (e) Ellipsoid-\(\ell _{2}\)

Fig. 9.13

(Left) Mean curvature mapped on the “bunny” at different resolutions using \(\hat{H}^{{\ast}}_{l}\) (yellow is the highest curvature, blue the lowest)

9.8 Discussion, Applications and Further Works

9.8.1 Robustness to Noise

Thanks to the volumetric integration principle of integral invariant estimators, robustness to noise and outliers can be expected: intuitively, if the noise is modeled as zero-mean noise around the object surface, or if the outliers are of measure zero, the geometrical moments remain stable.

Experimentally, robustness in dimensions 2 and 3 can be observed in Fig. 9.14. Our noise model consists in swapping the grid point value at p with a probability following the power law \(\beta ^{1+dt(p)}\) for some user-specified β ∈ [0, 1], where dt(p) is the distance of p to the boundary of the original digital shape. Such a noise model, the so-called Kanungo noise [27], is particularly well suited to evaluate the stability of digital geometry algorithms. As expected, the integral invariant approaches provide robust estimations as the noise parameter increases. Since Jet Fitting considers a principal component analysis of the point set, robustness to noise can also be observed for this approach. In dimension 2, methods which rely on the geometry of the digital contour (MDCA, MDSS) are highly perturbed, even for small noise parameters.
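For reproducibility, a minimal sketch of this noise model in Python could read as follows (SciPy's Euclidean distance transform is used for dt(p); the exact distance convention of [27] may differ slightly, so this is an illustration rather than a reference implementation):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def kanungo_noise(img, beta, rng=None):
    """Flip each pixel of a binary image with probability beta**(1 + dt(p)),
    where dt(p) is (one possible reading of) the distance of p to the
    boundary of the original shape."""
    rng = np.random.default_rng() if rng is None else rng
    d_in = distance_transform_edt(img)        # > 0 inside the shape
    d_out = distance_transform_edt(1 - img)   # > 0 outside the shape
    dt = np.maximum(d_in, d_out) - 1          # 0 on boundary pixels
    flip = rng.random(img.shape) < beta ** (1 + dt)
    return np.where(flip, 1 - img, img)
```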

Fig. 9.14

Robustness of the II estimator on the OctaFlower shape: mean, (a)–(b), and Gaussian curvature, (c)–(d), estimation on a noise-free and a noisy surface (noise level = 0.5). Comparison of 2D estimators on different levels of noise on an ellipse (e). Comparison of 3D estimators for mean (f) and principal curvatures (g) and (h) on different levels of noise on an ellipsoid

9.8.2 Feature Detection with Multiscale Approach

As discussed in the previous sections, the integration radius should be set properly to obtain relevant curvature estimations. The radius is set either by the user or automatically thanks to the parameter-free approach. In this section, we use the radius as a scale-space parameter to detect features on digital surfaces. First of all, as described in Eqs. (9.12) and (9.13), the area and volume at a point x on a smooth manifold ∂X are related to the curvature in 2D and to the mean curvature in 3D. If we now consider a piecewise smooth shape X and a point x lying on a \(C^{1}\)-discontinuity of ∂X, Pottmann et al. [48, 49] have shown that the area and volume are related not only to the left/right curvatures but also to the angle α₀ spanned by the normal vectors at x (see Fig. 9.15):

$$\displaystyle\begin{array}{rcl} A_{R}(\mathbf{x})& =& \frac{\alpha _{0}R^{2}} {2} -\frac{\kappa _{-} +\kappa _{+}} {6} R^{3} + O(R^{4})\,,{}\end{array}$$
(9.98)
$$\displaystyle\begin{array}{rcl} V _{R}(\mathbf{x})& =& \frac{2\alpha _{0}R^{3}} {3} -\frac{\pi (H_{-} + H_{+})} {8} R^{4} + O(R^{5})\,.{}\end{array}$$
(9.99)

Let us now define the following quantities:

$$\displaystyle{ G_{X,\mathbf{x}}(R):= \frac{3\pi } {2R} -\frac{3A_{R}(\mathbf{x})} {R^{3}},\quad \mathcal{G}_{X,\mathbf{x}}(R):= \frac{8} {3R} -\frac{4V _{R}(\mathbf{x})} {\pi R^{4}} \,. }$$
(9.100)

At a smooth \(C^{3}\) point x, these quantities converge to κ(X, x) and H(X, x) respectively as R tends to zero (see Sect. 9.2.3, Eq. (9.15)). On the contrary, at a singular point x, we have:

$$\displaystyle\begin{array}{rcl} G_{X,\mathbf{x}}(R)& =& \frac{3} {2} \frac{1} {R}(\pi -\alpha _{0}) + \frac{\kappa _{-} +\kappa _{+}} {2} + O(R)\,,{}\end{array}$$
(9.101)
$$\displaystyle\begin{array}{rcl} \mathcal{G}_{X,\mathbf{x}}(R)& =& \frac{8} {3} \frac{1} {R}(1 -\frac{\alpha _{0}} {\pi } ) + \frac{H_{-} + H_{+}} {2} + O(R)\,.{}\end{array}$$
(9.102)

In other words, as R tends to zero, these quantities have a dominant \(\frac{1} {R}\) term. We do not go further into details here (please refer to [40] for a complete state-of-the-art discussion and mathematical insights), but a feature detector on digital surfaces can be defined by looking at the behavior of \(G_{X,\mathbf{x}}(R)\) and \(\mathcal{G}_{X,\mathbf{x}}(R)\) over a given range of radii: if the quantities remain constant, we classify x as belonging to a smooth part of the object; if the quantities follow a Θ(R −1) growth, we classify x as belonging to an edge. In Fig. 9.16 we present some classification results into Edge (red), Flat (green) and Smooth (blue) classes on both noise-free and noisy data for the same set of parameters.
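A minimal sketch of such a classifier for the 3D quantity \(\mathcal{G}_{X,\mathbf{x}}(R)\) could read as follows (Python; the least-squares reading of the Θ(R −1) criterion and the thresholds are illustrative simplifications of the procedure described in [40]):

```python
import numpy as np

def classify_surfel(radii, volumes, t_edge=0.2, t_flat=0.05):
    """Classify a surface point from the scale behavior of
    G(R) = 8/(3R) - 4 V_R / (pi R^4)   (Eq. (9.100), 3D case).

    radii, volumes -- measured V_R(x) over a range of kernel radii.
    Returns 'edge' when G behaves like Theta(1/R), 'flat' when it stays
    near zero, and 'smooth' otherwise.  Thresholds are illustrative.
    """
    R = np.asarray(radii, dtype=float)
    G = 8.0 / (3.0 * R) - 4.0 * np.asarray(volumes) / (np.pi * R ** 4)
    # least-squares fit of the model G(R) ~ a / R + b over the radii range
    A = np.stack([1.0 / R, np.ones_like(R)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, G, rcond=None)
    if abs(a) > t_edge:                 # dominant 1/R term: singular point
        return "edge"
    return "flat" if abs(b) < t_flat else "smooth"
```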

Fig. 9.15

Scale-space analysis on a spherical shape (a) and a shape with a singularity (b)

Fig. 9.16

Evaluation of feature detection on perfectly digitized and noisy shapes. SpheresUnion: 400 × 200 × 200 voxels, CubeSphere: \(200^{3}\) voxels, Fandisk: \(512^{3}\) voxels, OctaFlower: \(512^{3}\) voxels. The range of radii used for all classifications is given by \(r_{\min } = 5\) and \(r_{\max } = 25\)

9.8.3 Current Limitations and Possible Research Directions

We have presented a whole set of estimators based on the integral invariant principle that can be applied to analyze the local geometry of digital shapes: mean and principal curvatures, normal and principal directions. We have shown that they achieve multigrid convergence for some classes of shapes, and we have made their convergence speed explicit. Furthermore, these estimators compare favorably to other methods in practice, even in the presence of noise.

Of course, these estimators suffer from some limitations. A first one is that they require an approximation of the volumetric shape X as input: an approximation of the boundary of X alone is not usable. This is a strong limitation when analyzing point clouds. Fortunately, in digital geometry, most input shapes are approximations of X.

Secondly, these estimators are multigrid convergent for a whole range of radii and, even in the case of the parameter-free curvature estimator, the optimal value of the parameter k is not obvious. The error bounds related to moment estimation are well understood, but the error bound on the Taylor expansion is not explicit. The parameter k certainly depends in some way on the maximal curvature derivatives, but this result remains to be shown.

Thirdly, these estimators are rather costly to compute for “big” digital shapes (about 12 s for a \(256^{3}\) object with R = 5, about 58 s with R = 10). However, this has to be tempered by the fact that it is the price of robustness to noise: even if the object is highly perturbed along its boundary, the volumetric integrals make the estimators very robust.

Many research directions could be pursued from this point. A first one would be to use kernels other than the simple ball. A good candidate is the Gaussian kernel, since computations could then be carried out in the Fourier domain. Computations would hence be almost linear in the size of the object, whatever the kernel radius, and multiscale analysis would be much less costly. It remains to establish the multigrid convergence of such estimators.

Another direction is related to Voronoi-based approaches, especially Voronoi Covariance Measure (VCM) methods [18, 44, 45]. They are also based on covariance analysis, not of the shape itself but of its tangent cone. Such approaches are thus complementary and should be combined. Besides, looking at image processing methods, VCM approaches share similarities with image curvature estimators based on the image structure tensor [35, 52], and could also benefit from Fourier analysis.

Further information on digital integral invariant estimators can be found in [14, 15, 39, 40].