
1 Introduction

Shape analysis is a central problem for many applications in image processing and computer vision, such as object recognition or segmentation; see e.g. [27] for an overview. While boundary descriptors are a classic instrument for object representation, specific tasks in shape analysis demand alternative shape representations that unite geometrical and topological information. The widely-used medial axis transform (MAT) is such a shape descriptor. The MAT was introduced in 1967 by Blum [3] as a mathematical tool for modelling the biological problem of shape vision. It represents a shape by a thin set of lines or arcs that are centred in the shape, eliminating superfluous information about local symmetries. Intuitively, the MAT resembles bone structures, which motivates the alternative term (topological) skeleton. Due to its useful properties, including equivalence to the shape boundary, homotopy to the original shape and invariance under Euclidean transformations, the MAT has been used for numerous practical applications; examples are object recognition [22, 28, 32] and medical imaging [31]. However, the results of existing skeletonisation algorithms depend largely on model parameters that often need to be adjusted manually. Moreover, the literature lacks comprehensive quality analysis tools for quantifying important features of skeletonisation algorithms.

In this work, we deal with these problems. We propose two new thinning algorithms for MAT computation, called FOAT and MDT. Both FOAT and MDT are robust w.r.t. the choice of parameters. In addition, MDT is particularly easy to understand and implement, which makes it especially appealing for users. Moreover, we present new methods for the comparative analysis of MAT algorithms.

Previous work. Algorithms for MAT computation are plentiful and diverse, both in their methods and in their theoretical background. In general, four classes can be distinguished: Voronoi-based methods [10, 18] that exploit the similarity of the MAT and Voronoi diagrams, approaches related to the distance map [15, 21], thinning algorithms [19, 33], and morphological approaches [2, 12, 16, 24].

In our paper we focus on combinations of thinning algorithms and distance map methods, further enhanced by additional pruning steps. In particular, the Hamilton-Jacobi method by Siddiqi et al. [29] combines thinning and distance map methods with physical concepts from Hamiltonian mechanics, and it forms the basis for one of our algorithms. We also propose a new algorithm combining maximal disc detection via exact Euclidean distance maps with homotopic thinning. The method of Pudney [20] is similar to ours, but uses a different maximal disc detection method based on Chamfer distance maps. For mathematical definitions of thinning and pruning operators, see [23].

Despite the large number of publications on MAT computation, systematic comparisons of existing methods are rare. Comparative studies of MAT algorithms are usually performed by evaluating them in the context of a particular application, e.g. for the purpose of object recognition. An example of such a comparison strategy is the matching approach by Siddiqi et al. [30]; see also [1, 6, 9, 14, 32] for related approaches. One reason for this kind of indirect comparative study is the lack of formal quality criteria for skeletonisation algorithms. However, from a more general point of view it is of interest to measure the qualities of skeletons independently of a specific application. In order to achieve this goal, we propose ways to assess to what extent discrete algorithms mimic structural properties of the corresponding continuous-scale MAT. For this purpose, we also aim to quantify useful properties of the discrete skeletons. An additional problem, especially in the context of practical applications, is that many algorithms, like the Hamilton-Jacobi method [29], require manual parameter adjustment for optimal results.

Our contribution. We address the aforementioned issues by proposing refined thinning algorithms that combine several state-of-the-art methods, and by introducing quality measures that make it possible to assess meaningful properties of MAT algorithms:

  (i) Flux-Ordered Adaptive Thinning (FOAT) extends the Hamilton-Jacobi method [29] with a secondary MAT detection (SMD) step. This allows an automatic, robust adaptation of input parameters to individual shapes.

  (ii) Maximal Disc Thinning (MDT) is an extension of the maximal disc detection method of Remy and Thiel [21]. It combines the latter with the homotopy-preserving thinning steps of the Hamilton-Jacobi method [29]. The resulting method preserves homotopy and is largely independent of input parameters.

  (iii) Quality Criteria are proposed that quantify useful properties of discrete skeletons in a new way, namely exactness of reconstruction, skeleton minimality and skeleton complexity. These new quality measures make it possible to distinguish important properties of MAT schemes.

  (iv) Comparative Skeleton Graph Matching is employed as a means to test MAT algorithms for invariances. Skeletons are transformed into graphs that can be compared using a graph matching method [26], in a similar way as in object recognition approaches.

Tests of both the newly proposed algorithms and the quality measures were conducted on CE-Shape-1, a widely-used image database consisting of 1,400 shapes, which was specifically designed for testing shape descriptors [13].

Paper organisation. In Sect. 4.2 we present our new skeletonisation algorithms along with general information about thinning methods. Quality assessment tools are discussed in Sect. 4.3. Section 4.4 contains the results of experiments conducted with the new algorithms and quality criteria. Section 4.5 concludes the paper.

2 Algorithms

The algorithms described in the following paragraphs operate on a binary image \(u : \Omega \subset \mathbb{R}^{2} \rightarrow \{0,1\}\). In the image domain \(\Omega\), the points of the shape form the object domain \(\mathcal{O} =\{ x \in \Omega \;\vert \;u(x) = 0\}\). The points of the skeleton \(\Sigma \subset \mathcal{O}\) are exactly the centres of inscribed circles that are tangent to the object boundary in at least two points. The distance map \(D\) of the object boundary \(\partial \mathcal{O}\) is defined as

$$D : \Omega \rightarrow \mathbb{R}_{0}^{+},\quad D(x) = \min_{y\in \partial \mathcal{O}}\vert y - x\vert$$
(4.1)
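For illustration, the distance map (4.1) can be computed with an off-the-shelf exact Euclidean distance transform. The following minimal sketch (in Python, using SciPy) assumes the convention above, namely that object pixels carry the value 0; the half-pixel offset of discrete boundaries is ignored.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_map(u: np.ndarray) -> np.ndarray:
    # distance_transform_edt assigns each non-zero entry its exact
    # Euclidean distance to the nearest zero entry. Feeding it the
    # object mask (u == 0) therefore yields, for every object pixel,
    # the distance to the nearest background pixel, which serves as a
    # discrete approximation of the boundary distance D.
    return distance_transform_edt(u == 0)
```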

Homotopic thinning. Both of the new algorithms are thinning methods that share the same set of thinning rules. In each thinning step, the smallest point in terms of the thinning order \(>_{\mathrm{thin}}\) is deleted from the object domain \(\mathcal{O}\), until only the skeleton \(\Sigma\) remains. In addition to an appropriate thinning order, a termination criterion is needed to compute an approximation to the skeleton.

Current thinning methods enforce important MAT properties by applying additional thinning rules. In our work we use homotopic thinning [20, 29], which guarantees both homotopy to the original object and thinness of the skeleton. Homotopy to the original ensures that major topological features of the object are preserved in its skeleton [11]. In particular, the skeleton has the same number and configuration of connected components and holes. For discrete skeletons, thinness means that the maximal width of the medial axis is one pixel.

In practice, the homotopic thinning rules enforce that points are only removed from the object if they neither disconnect it nor introduce holes, i.e. if they do not change the discrete topology. Additionally, endpoints of thin lines are preserved. A sketch of such a test is given below.
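To make these rules concrete, the sketch below implements a commonly used crossing-number test for such deletable ("simple") pixels, together with the endpoint test; the function names and the specific test are our choices, and a rigorous treatment of thinning operators can be found in [23]. Image borders are assumed to be padded with background.

```python
import numpy as np

# Circular enumeration of the 8-neighbourhood of a pixel.
OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
           (0, -1), (1, -1), (1, 0), (1, 1)]

def is_simple(obj: np.ndarray, y: int, x: int) -> bool:
    """Crossing-number test: deleting (y, x) keeps the local topology."""
    n = [bool(obj[y + dy, x + dx]) for dy, dx in OFFSETS]
    # Count background-to-object transitions on one walk around the
    # pixel. Exactly one transition means the object neighbours form a
    # single connected arc, so removing the pixel neither disconnects
    # the object nor introduces a hole.
    return sum(1 for i in range(8) if not n[i] and n[(i + 1) % 8]) == 1

def is_endpoint(obj: np.ndarray, y: int, x: int) -> bool:
    """Endpoint of a thin line: exactly one object neighbour."""
    return sum(bool(obj[y + dy, x + dx]) for dy, dx in OFFSETS) == 1
```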

Hamilton-Jacobi skeletons. The Hamilton-Jacobi method of Siddiqi et al. [29], also known as Flux-Ordered Thinning (FOT), is the basis for our FOAT algorithm. Exploiting properties from Hamiltonian mechanics, Siddiqi et al. [29] conclude that MAT points are exactly the sinks of the gradient vector field \(\nabla D\) of the precomputed distance map \(D\), where \(\nabla := \left(\frac{\partial}{\partial x}, \frac{\partial}{\partial y}\right)^{\top}\). Sinks of a vector field can be identified as points with large negative values of the divergence \(\operatorname{div}\nabla D = \nabla \cdot \nabla D\) of the field.

In order to compute \(\operatorname{div}\nabla D\) efficiently, the average outward flux \(\mathcal{F} : \Omega \rightarrow \mathbb{R}\) is used, cf. [29] for a detailed discussion. Relating the corresponding quantities via the divergence theorem [8], the average outward flux of a point \(p \in \Omega\) can be computed as

$$\mathcal{F}(p)\;:=\;\sum_{i=1}^{8}\frac{\langle \nabla D(n_{i}(p)),\,N(n_{i}(p))\rangle}{8}$$
(4.2)

Thereby, \(n_{1}(p),\ldots ,n_{8}(p)\) are the direct neighbours of \(p\) on a discrete lattice, and \(N(n_{i}(p))\) is the outward normal in \(n_{i}(p)\) with respect to the corresponding circle around \(p\). The resulting flux map \(\mathcal{F}\) is used to define the thinning order \(>_{\mathrm{iflux}}\):
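A direct discretisation of (4.2) can be written as follows; here the outward normal \(N(n_{i}(p))\) is taken as the unit vector pointing from the point to its neighbour, the gradient is approximated by finite differences, and boundary handling (np.roll wraps around) is neglected for brevity.

```python
import numpy as np

def average_outward_flux(D: np.ndarray) -> np.ndarray:
    gy, gx = np.gradient(D)                      # discrete gradient of D
    flux = np.zeros_like(D)
    for dy, dx in [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]:
        norm = np.hypot(dy, dx)                  # 1 or sqrt(2)
        ny, nx = dy / norm, dx / norm            # outward unit normal N
        # <grad D(n_i(p)), N(n_i(p))>: shift the gradient images so that
        # position p holds the values of its neighbour p + (dy, dx).
        flux += (np.roll(gy, (-dy, -dx), axis=(0, 1)) * ny +
                 np.roll(gx, (-dy, -dx), axis=(0, 1)) * nx)
    return flux / 8.0                            # sinks: large negative values
```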

$$\forall p,q \in \Omega : \qquad p >_{\mathrm{iflux}} q\quad \Leftrightarrow \quad \mathcal{F}(p) < \mathcal{F}(q)$$
(4.3)

In the thinning process of FOT, weak points with high flux values are removed first. As a termination criterion, thresholding of the flux map is applied to endpoints of thin lines that result from the thinning process. Points with a flux value below the given threshold τ are marked as skeleton endpoints; other points are removed in accordance with homotopic thinning.

2.1 New Algorithm: Flux-Ordered Adaptive Thinning (FOAT)

The FOT method uses thresholding of the flux map to identify skeleton endpoints. Siddiqi et al. [29] propose choosing the threshold τ either as a fixed value or as a quantile of the number of object points.

Both of these choices imply the need for manual adjustment in order to obtain good results, because geometric properties of the shapes are not taken into account in the definition of the threshold. While this is obvious for the choice of a globally fixed value of τ, note that quantiles merely guarantee that a chosen percentage of the image points is removed by thresholding the flux map. Quantiles do not take into account that the ratio of skeleton size to object size varies with the amount of local symmetry in a shape. Extreme examples of this relation are a circle and a thin line of equal area: while the circle has exactly one valid skeleton point, its centre, the thin line coincides with its skeleton.

As a remedy, we propose to employ a secondary MAT detection (SMD) step. The underlying idea is to use a first approximation of the skeleton to make an educated guess for the threshold parameter. The outward flux values of the preliminary skeleton \(\hat{\Sigma }\) are used to determine the threshold by averaging:

$$\tau := (1-\lambda )\,\frac{\sum _{x\in \hat{\Sigma }}\mathcal{F}(x)}{\vert \hat{\Sigma }\vert }$$
(4.4)
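In code, the adaptive threshold (4.4) is a one-liner once the flux map and the preliminary SMD skeleton are available; the default value of λ below is purely illustrative.

```python
import numpy as np

def adaptive_threshold(flux: np.ndarray, skel_hat: np.ndarray,
                       lam: float = 0.1) -> float:
    # skel_hat is a boolean mask of the preliminary skeleton points;
    # tau is their mean flux, damped by the adjustment parameter lambda.
    return (1.0 - lam) * flux[skel_hat].mean()
```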

where λ is an adjustment parameter that weights the removal of boundary artefacts against the accuracy of reconstruction. While FOAT does not remove parameter dependence entirely, the construction in (4.4) improves the robustness of the approach, as we will show experimentally. In particular, the parameter λ does not need to be adjusted to the size and geometry of each shape. Instead, it influences the general balance of competing skeleton features such as size and exactness.

Any skeletonisation algorithm can be used for SMD; however, fast algorithms with less restrictive demands on skeleton properties such as homotopy or thinness should be preferred in order to lessen the impact on overall performance. We choose the method of Remy and Thiel (RT) [21] for maximal disc detection, which identifies MAT points as centres of maximal inscribed discs using lookup tables. In addition, we use Meijster's algorithm [17] for distance map computation, which offers better performance and exactness than the D-Euclidean method [4, 5] proposed for this purpose in [29].

2.2 New Algorithm: Maximal Disc Thinning (MDT)

MDT is proposed as an extension of the RT maximal disc method [21] and acts as a largely parameter-independent alternative to FOT and FOAT. While MDT uses the same homotopic thinning process as FOT and FOAT, it entirely discards the flux components of the algorithm, thus removing the necessity for several computational steps, cf. Fig. 4.1.

Fig. 4.1 Flow chart of the steps used in the FOT, FOAT and MDT algorithms. Note that the maximal disc computation in the FOAT scheme can be performed completely in parallel to the gradient and flux map computation

The central idea of the MDT method is to process object points in order of their distance to the boundary, mimicking the wave propagation idea of the classical grass-fire model of Blum [3] and retaining only skeleton endpoints. In this way, each level set of the boundary's distance map is fully processed before the algorithm moves on to the level set with the next higher distance value. This behaviour of the thinning process is achieved by defining the thinning order \(>_{\mathrm{dist}}\) via the distance map D from (4.1):

$$\forall p,q \in \mathcal{O} : \qquad p >_{\mathrm{dist}} q\quad \Leftrightarrow \quad D(p) < D(q)$$
(4.5)
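A simplified, single-pass sketch of the MDT thinning loop is given below: object points are popped from a priority queue keyed by their boundary distance, realising the order (4.5), and are deleted unless they are RT maximal-disc points or their removal would violate homotopy. The full method additionally verifies endpoints and re-examines neighbours whose status changes during thinning; is_simple and is_endpoint refer to the earlier sketch.

```python
import heapq
import numpy as np

def mdt_thinning(obj: np.ndarray, D: np.ndarray,
                 rt_points: np.ndarray) -> np.ndarray:
    """obj, rt_points: boolean masks; D: distance map from (4.1)."""
    obj = obj.copy()
    # Grass-fire order: points closest to the boundary are burnt first.
    heap = [(D[y, x], y, x) for y, x in zip(*np.nonzero(obj))]
    heapq.heapify(heap)
    while heap:
        _, y, x = heapq.heappop(heap)
        # Keep RT maximal-disc points; delete other points only if this
        # preserves the topology and they are not verified endpoints.
        if (obj[y, x] and not rt_points[y, x]
                and is_simple(obj, y, x) and not is_endpoint(obj, y, x)):
            obj[y, x] = False
    return obj          # the surviving points form the skeleton
```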

Object points are removed if (i) they are not identified as skeleton points by the RT scheme, and (ii) their removal does not violate the homotopy. Endpoints of preliminary skeleton branches after the RT step are removed if they are not verified as endpoints during the thinning process.

In order to minimise the homotopy-induced occurrence of large skeleton branches that connect scattered isolated skeleton points, we perform a pruning on the skeleton computed via the RT method. Isolated points correspond mainly to small boundary perturbations, or they are caused by discretisation artefacts. A pruning step is recommended in the context of skeletonisation methods, cf. [1, 25].

In our pruning step, we consider the number of points of the preliminary skeleton that lie in a fixed neighbourhood of the pruning candidate. The candidate is removed from the preliminary skeleton if the number of such neighbours does not exceed a predefined value. This value is in fact the only parameter left in the MDT algorithm; a sketch follows below.
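The pruning rule translates directly into a neighbourhood count. In the sketch below, the window size is our own illustrative choice, while the neighbour threshold k corresponds to the parameter mentioned above.

```python
import numpy as np
from scipy.ndimage import convolve

def prune_isolated(skel: np.ndarray, k: int = 2, size: int = 5) -> np.ndarray:
    # Number of preliminary skeleton points in a size x size window
    # around each pixel, excluding the pixel itself.
    window = np.ones((size, size), dtype=int)
    counts = convolve(skel.astype(int), window, mode='constant') - skel
    # Keep a candidate only if its neighbour count exceeds k.
    return skel & (counts > k)
```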

3 Analysis of MAT Algorithms

The performance of MAT algorithms is difficult to assess: depending on the specific application, the desired priority of skeleton properties may vary significantly. For example, compression methods require skeletons of minimal size that produce reconstructions of maximal exactness. Other properties like homotopy, thinness or Euclidean invariance are not crucial for this application, but might be for others like object recognition or segmentation. Because of these varying requirements, we propose a novel modular approach to the analysis of MAT algorithms. It allows the quality of methods to be expressed in terms of distinct properties that can be weighted as the application demands.

3.1 Quality Criteria

The aim of the quality criteria that we will introduce is to measure the difference between the output of MAT algorithms (i.e. approximations of the MAT) and the exact MAT with respect to important properties of the latter.

Homotopy to the original shape and thinness are two binary criteria that can easily be checked automatically. Within this paper, these criteria are not relevant for comparisons, since all of the proposed algorithms guarantee both properties by construction.

A natural quality criterion for discrete skeletons is the exactness of reconstruction, which is automatically given in the continuous-scale MAT setting. In the discrete setting, comparing the reconstructed shape to the original yields the set E of erroneous points, comprising both false positives and false negatives. Normalising the number of errors, in order to make the resulting classification number independent of the size of the shape, yields

$$\hat{e}(u,\Sigma ) := \frac{\min \{\vert E\vert ,\vert \mathcal{O}\vert \}} {\vert \mathcal{O}\vert }$$
(4.6)

As we strive for a value of one (rather than zero) to indicate perfect exactness, we subtract \(\hat{e}(u,\Sigma )\) from one, finally yielding the quality measure

$$e(u,\Sigma ) := 1 -\frac{\min \{\vert E\vert ,\vert \mathcal{O}\vert \}} {\vert \mathcal{O}\vert } \quad \mbox{ (exactness of reconstruction)}$$
(4.7)

Another central property of the discrete skeleton is its size. The exact MAT should ideally be the smallest thin, connected subset of the object domain that yields an exact reconstruction of the original. Thus, the size of computed skeletons should be minimal. This defines the quality measure

$$m(u,\Sigma ) := 1 -\frac{\min \{\vert \Sigma \vert ,\vert \mathcal{O}\vert \}} {\vert \mathcal{O}\vert } \quad \mbox{ (skeleton minimality)}$$
(4.8)

Additionally, noise and errors resulting from the discretisation of the boundary must be taken into account. Skeletons are quite sensitive to boundary perturbations, as small deviations from the exact boundary can lead to spurious skeleton branches. Thus, a robust algorithm should produce a minimal number of branches, which can be identified via the set P of end and branching points of the skeleton. Consequently, the corresponding quality measure can be defined as

$$c(u,\Sigma ) := 1 -\frac{\min \{\vert P\vert ,\vert \Sigma \vert \}} {\vert \Sigma \vert } \quad \mbox{ (skeleton complexity)}$$
(4.9)

Skeleton complexity is also important for applications that use the branches or their endpoints as graph nodes. The three quality measures above quantify major features of the skeleton and should be considered holistically.
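Assuming a reconstruction routine that stamps a disc of radius D(x) at every skeleton point x, the three measures (4.7)–(4.9) can be evaluated jointly as in the following sketch; all inputs are boolean masks, and pts marks the set P of end and branching points.

```python
import numpy as np

def quality_scores(obj, skel, recon, pts):
    """Return (e, m, c) as defined in (4.7)-(4.9)."""
    O, S, P = obj.sum(), skel.sum(), pts.sum()
    E = np.logical_xor(recon, obj).sum()   # false positives + negatives
    e = 1.0 - min(E, O) / O                # exactness of reconstruction
    m = 1.0 - min(S, O) / O                # skeleton minimality
    c = 1.0 - min(P, S) / S                # skeleton complexity
    return e, m, c
```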

3.2 Graph Matching for Invariance Validation

Graph matching is a standard tool in the context of using the MAT for object recognition, cf. [1, 7] for discussions. We investigate this approach in the context of desired invariances of the discrete skeleton, i.e. invariance under Euclidean transformations and noise. Rotational and translational invariance are natural properties of the continuous-scale MAT and should be preserved in the discrete setting. Invariance under noise is an obvious practical criterion, although it is not included in a formal definition of the skeleton.

The quality of an algorithm's invariance behaviour can be assessed by comparing the skeleton of the transformed shape to the skeleton of the original object. It is possible to use the quality criteria from the previous section for this task, although it is advisable to use the unnormalised cardinalities of the sets instead of the measures e, m and c, since in general boundary perturbations change the cardinality of the object domain.

We propose here to use graph matching as an effective way of assessing quality w.r.t. invariances. A skeleton graph is composed of nodes that correspond to the end and branching points of the MAT. The graph's edges mirror the connections of the end and branching points in the skeleton, and branch lengths are used as edge weights. Similar set-ups for graph representations have been used before in the literature, and refinements have also been proposed, see e.g. [6]. However, to our knowledge they have not been used before to investigate invariances. For our purposes, we do not consider merge or cut operations on the skeleton graphs as recommended in [9, 14].

The graph matching method graphdiff by Shasha et al. [26] computes a one-to-one mapping between the nodes of a query graph Q and a database graph D, using a scoring function that takes node types and edge weights into account. Each possible mapping has a total score, and the mapping with the highest score is considered the best match. See Fig. 4.2 for an illustrative example.

Fig. 4.2 Rotational invariances are tested with rotations of shapes from CE-Shape-1, varying from 5° to 45°. The image on the right displays the skeleton produced by FOAT and superimposed nodes of the corresponding skeleton graphs. Square nodes represent endpoints, circles stand for branching points. Connecting lines between nodes mark the one-to-one matching produced by graphdiff [26]. The matching score of this example is 0.74 (perfect score: 1.0), indicating subtle differences in branch lengths

If i, j are nodes in Q that are mapped to i′ and j′ in D, respectively, the score for the match is determined by the matching edges of the nodes. The edge (i, j, w) with weight w matches (i′, j′, w′) if i and i′, as well as j and j′, are of the same node type. Then, the score is computed as \(\min (w/w^\prime,w^\prime/w)\). These scores can be used directly as quality measures for invariances.
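The per-edge contribution to the matching score follows directly from this description; a minimal sketch, with edges represented as (node type, node type, weight) triples:

```python
def edge_score(edge_q, edge_d):
    """Score for matching a query edge against a database edge."""
    (ti, tj, w), (tip, tjp, wp) = edge_q, edge_d
    if (ti, tj) != (tip, tjp):
        return 0.0                  # node types must agree
    return min(w / wp, wp / w)      # 1.0 for identical branch lengths
```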

4 Experiments

The experiments conducted for this work serve two distinct purposes: (i) to investigate whether the newly proposed quality assessment methods yield meaningful new information; (ii) to compare the new algorithms to the FOT method. The general quality of the skeletonisation results is tested with the MPEG-7 core experiment database for shape descriptors, CE-Shape-1.

Invariance tests are performed with images from CE-Shape-1 with added boundary noise, translation and rotation. Both the new quality criteria and the graph matching method are used to assess the invariance behaviour of the MAT algorithms along with visual observations.

Comparison of algorithms. The invariance tests reveal that the new algorithms either surpass FOT or yield comparable results. Table 4.1 shows that FOAT has consistently higher scores than FOT with respect to rotational and noise invariance.

Table 4.1 Average matching scores for invariance tests using the CE-Shape-1 database. A score of 1.0 denotes a perfect match

The MDT method is often competitive with FOT, but in some tests a careful choice of parameters in FOT can give slightly better results. In particular, FOT/FOAT and MDT behave differently with regard to rotational and noise invariance. FOT and FOAT tend to produce large spurious branches connecting the locations of boundary perturbations with the exact skeleton, while MDT skeletons vary little in size but show subtle structural changes, cf. Fig. 4.3.

Fig. 4.3 Results of a noise invariance test. The shapes in the upper row contain 15 distinct boundary perturbations; for all algorithms, the corresponding skeletons show little or no deviation from the unmodified version. In the lower row, 30 perturbations were added to the boundary. FOT (lower left) and FOAT (centre) produce spurious branches of significant size, while for MDT (lower right) the skeleton size varies only slightly but structural changes occur

We do not compare against the RT method here, since it often does not result in connected skeletons, and the construction of a graph-based representation suitable for graph matching is beyond the scope of this paper.

Turning to the quality averages for the shape database, these reveal that FOAT produces skeletons of slightly simpler structure than those of FOT, while skeleton exactness and absolute size are on average slightly higher. The MDT scheme confirms its tendency to produce skeletons of minimal length, staying nevertheless quite accurate but introducing some small structures that result in a slightly higher complexity than FOT and FOAT, see Table 4.2. Let us note in this context that the parameters used for the FOT and FOAT results displayed in Table 4.2 are chosen so as to give the best balance in terms of all three quality scores e, m and c, i.e. one could obtain better results w.r.t. e at the expense of weaker scores in c and m.

Table 4.2 Quality averages for the CE-Shape-1 database. The abbreviations e, m, c denote the normalised quality measures as defined in (4.7)–(4.9). Ideally, scores should be close to 1

As an additional algorithm for our comparison, we employ here the accurate, yet not homotopy-preserving RT method which we also used within the SMD computation. We show these results in order to clarify the benefit of using a homotopy-based approach w.r.t. skeleton size and complexity.

All of the algorithms work on average with high precision. However, let us point out that due to the size of 1,400 shapes in our test database, one may infer that the first three to four digits after the decimal point in the error measures bear significance: as the computational results of the three algorithms are similar for many shapes, leading to about the same error contribution in (4.7)–(4.9), significant deviations may exist that do not show up in the first one or two digits.

Making algorithms even more comparable. With our new quality measures we have the tools at hand to study how the algorithms perform at a common level of accuracy. To this end, we have performed a systematic parameter search for FOT and FOAT that gives a uniform exactness score of e(u, Σ) = 0.9535, matching the exactness of the MDT method (which is parameter-free). The results are given in Table 4.3. We observe that, in comparison to the other schemes, MDT favours skeletons of smaller size but more complex structure. Moreover, somewhat surprisingly, we observe that at the accuracy of the MDT method – or, more generally speaking, at a given average accuracy – FOT seems to slightly outperform FOAT.

Table 4.3 Comparison of methods for the CE-Shape-1 database. Parameters for FOT and FOAT are chosen so as to match the accuracy e(u, Σ) = 0.9535 of the MDT method

Robustness vs. parameter choice. From the results documented in Table 4.3, one may come to the conclusion that the FOAT method is no better than the already existing FOT scheme. However, with the new rigorous quality measures at hand, we can now perform a systematic study of the sensitivity of the FOT and FOAT methods w.r.t. the parameter choice. To this end, we have sampled a window of length 0.1 around the optimal parameter choice employed before and studied the deviation in the error measures. As shown in Table 4.4, the FOAT method is generally much less sensitive to parameter variations than the FOT scheme; w.r.t. skeleton minimality and complexity, the difference amounts to considerable factors of about 10 and 40, respectively. Moreover, for FOT, deviations already appear in the second digit after the decimal point, which indicates a considerable difference.

Table 4.4 Parameter sensitivity for the CE-Shape-1 database. The table gives the maximal deviation in the measures e, m, c as defined in (4.7)–(4.9) when varying the parameter in an interval of length 0.1. Ideally, scores should be as small as possible

The score differences illustrate the distinct behaviours of the algorithms. When attempting to fine-tune the threshold parameter τ (see Sect. 4.2), or when using the same value of τ for different shapes, FOT easily produces more and larger branches in narrow regions of a shape. There, additional skeleton points increase size and complexity, but yield only slight differences in exactness of reconstruction, see Fig. 4.4. On some shapes, however, FOT also omits large branches, thus losing boundary information. Both effects can occur simultaneously, as displayed in Fig. 4.5.

Fig. 4.4 The lizard shape is an example of skeletons that are similar in size and exactness of reconstruction, but vary in complexity. FOAT (middle) and MDT (right) feature fewer branches in the small details of the lizard, such as its toes. FOT (left) produces additional branches, in particular at jagged edges that result from the discretisation of the shape boundary, as it strives in this example for a very exact representation

Fig. 4.5 The skeletons of the elephant shape reveal the improved adaptation capabilities of FOAT (middle) and MDT (right) to individual shapes. Similar to Fig. 4.4, FOT (left) produces spurious branches at jagged edges (e.g. at the feet of the elephant), but omits branches that would give a refined reconstruction of the elephant's head and back

A refined analysis of the new MDT algorithm. We now illustrate that the new quality measures can be useful for inferring information about the properties of an algorithm when a related method is at hand.

Let us recall that, by construction, the MDT algorithm relies on the RT method. As observable in Table 4.2, the latter gives on average considerably more accurate skeletons than the MDT scheme. While the RT method lacks thinness, homotopy and connectivity, one may wonder why the loss in the accuracy score is relatively large, showing up already in the second digit after the decimal point. The question arises whether there is a general loss of accuracy for the shapes in our database (which would indicate a shortcoming of qualities like thinness, homotopy and connectedness), or whether it is due to a few outliers that diminish the average quality score.

We now investigate this issue. To illustrate the evolution step introduced by the MDT algorithm in comparison to the RT method, we present some typical skeletonisation results, cf. Fig. 4.6. By adding the homotopy-preserving thinning component, we clearly obtain thin, connected skeletons with no spurious points. However, in very few cases, as in the car or camel examples given here, a significant branch can be missing because of the general tendency of the MDT algorithm to produce skeletons that are as minimal as possible. In such singular cases the accuracy is obviously highly diminished; in turn, one can infer from the high average accuracy score in Table 4.2 that these cases are very rare.

Fig. 4.6 Comparison of skeletonisation results illustrating the MDT algorithm. Top row: the algorithm of Remy and Thiel without pruning. Second row: the algorithm of Remy and Thiel with an additional pruning step. Third row: the new MDT scheme including pruning. Note that not all branches of the pruned RT skeleton are preserved during the MDT thinning phase due to the thinning order; for these selected shapes, important branches are pruned. Bottom row: result of the modified MDT algorithm, which gives a better accuracy for such shapes

As a proof of concept that a remedy for such rare cases is easily available (they can in principle be identified, e.g. by computing the reconstruction error e individually for each shape, or by a more local procedure), we propose to fine-tune the basic MDT method as follows. In a first step, the modified MDT scheme uses the RT pilot points to construct a preliminary skeleton that is homotopic, but not necessarily thin. This skeleton is then thinned according to homotopy-preserving thinning rules that obey the distance ordering but do not remove skeleton endpoints, i.e. no branches are deleted from the preliminary skeleton. Finally, branches of very small length are removed with a simple graph-based pruning algorithm. See the last row of Fig. 4.6 for typical results of this procedure.

5 Conclusions

In this paper we have proposed two new MAT algorithms based on homotopic thinning, FOAT and MDT. The FOAT scheme relies on a robustification of the thresholding step in FOT. The MDT scheme is, with the exception of the pruning step, parameter-free. As it is also relatively easy to implement and generally gives results of good quality, it is in our opinion an attractive algorithm for applications.

Additionally, we have proposed quality assessment techniques that give insight into the properties of MAT algorithms. Moreover, they make it possible to perform a systematic parameter sensitivity analysis and to analyse individual construction steps within an algorithm. The quality measures are easy to evaluate and can be applied with any MAT method. However, the evaluation of the invariance tests as proposed here is not straightforward for MAT methods that do not preserve homotopy, since it relies on graph matching.