1 Introduction

In laser sintering (LS), a laser is used to manufacture a part (solid model) by sintering powder-based materials. A thin layer of powder is spread across the building platform (inside a process chamber). Then the laser traces a two-dimensional cross section of the part, thereby sintering the new layer of powder with the previous layer. This process continues layer-by-layer until all parts in a job are completed and is finished by a cooling phase.

Typically, the input data for LS is encoded as an STL file. A Standard Tessellation Language (STL) file describes a raw unstructured triangulated surface of a three-dimensional object. Despite its shortcomings, it has become the de facto standard in the entire rapid-prototyping industry; see Burns [3] for more details on the STL format.

Laser sintering enables the production of durable and functional parts whose properties largely match those of their “standard” molded or machined counterparts, for a variety of applications, even if only a few parts of the same shape are needed. Furthermore, snap fits and living hinges can be produced.

Unfortunately, the laser-induced heating and subsequent cooling of the material may cause process-inherent problems like highly inhomogeneous temperature fields and the “warpage” phenomenon. Roughly speaking, warpage is the result of a change in the morphology of the molten powder: during the cooling process the morphology changes from amorphous to partially crystalline. The crystalline regions have a higher density than the amorphous regions, which leads to a loss in volume. Since laser sintering is a layer-wise process, individual layers may undergo a different warpage, thus leading to inter-layer tensions. These tensions may result in a bimetallic effect called “curl” in the rapid-prototyping industry. Summarizing, a part produced by LS may undergo a noticeable warpage, as shown in Fig. 1 for a real-world example. (Please note that, for ease of visualization, the warpage was exaggerated by a multiplicative factor of ten in this picture.)

Fig. 1
figure 1

A simple test part produced by laser sintering that ends up with a slight U-shaped deformation

Of course, process engineers do their best to research the warpage of LS parts and to fine-tune the process parameters. However, being geometers rather than process engineers, we were tempted to come up with a different approach to mitigating the effects of warpage: can we apply an inverse deformation to a model prior to its LS manufacture, in an attempt to compensate for the warpage that will occur, such that the part actually manufactured matches the goal shape more closely? Similar approaches have been pursued successfully in textile production [Öchsner M. Private communication (2008)] and injection molding [9]. (We are not aware of any algorithmic investigation of such an inverse deformation, though.)

2 Problem studied

Consider a polyhedral part \({{\mathcal{P}}}\) bounded by a triangulated surface which we assume to be a closed polyhedral 2-manifold.Footnote 1 In order to apply an inverse deformation, we

  1. need to obtain deformation vectors for at least some vertices of the part, and

  2. need to interpolate between appropriate deformation vectors in order to obtain deformation vectors for all the other vertices of the part.

In the sequel we assume that deformation vectors are already known for some vertices, and focus on the appropriate interpolation of those vectors. We use the term data sites for those vertices of the part for which deformation vectors are known.

Let \(P:=\{p_1,\ldots,p_n\}\) denote the set of vertices of \({{\mathcal{P}}}\) and let \(S :=\{s_1, \ldots,s_k\}\) denote the sites given, with \(S\subset P.\) The deformation vector to be applied to \(s_i\) is denoted by \(v_i\). Our goal is to construct an interpolation function \(f:P \rightarrow {\mathbb{R}}^3\) such that \(f(s_i) = v_i\) and such that \(f(p)\) forms a decent deformation vector for \(p\in P\setminus S.\)

To solve this discrete interpolation problem we adapt a well-known multivariate interpolation scheme to make it work on triangulated surfaces: we resort to natural neighbor interpolation (NNI). In the next section we review the basics of NNI. We then proceed with an explanation of how NNI can be adapted to solve our deformation problem.

3 Natural neighbor interpolation

Natural neighbor interpolation is a multivariate interpolation scheme based on natural coordinates, introduced by Sibson [12, 13]. NNI classifies as a distance-based, statistical scheme [11] and has gained recognition as a dependable method, especially for sparse and erratic data. Thus, NNI seems fit for a generalization to arbitrary metric spaces.Footnote 2

The basic idea of NNI is to express a point p as a linear combination of its neighboring sites, where the weights are derived from the Voronoi diagramFootnote 3 of the sites together with p.

Definition 1

The natural coordinates of a point p with respect to a set of sites \(S =\{s_1, \ldots,s_k\}\) are given as a vector of weights \(({\lambda_1,}\ldots,\lambda_k),\) where

$$ \lambda_i(p) := {\frac{|V(p, s_i)|} {|V(p)|}}. $$
(1)

Here, \(|V(p)|\) denotes the area of the Voronoi cell of p, and \(|V(p, s_i)|\) corresponds to the area of the ordered second-order Voronoi cell of p and \(s_i\), i.e. the region that lies closest to p and second closest to \(s_i\).

The so-called local coordinates property establishes the fact that every point p within the convex hull of S can be expressed in natural coordinates \(({\lambda_1,}\ldots, \lambda_k){:}\)

$$ p = \sum_{i = 1}^k s_i \lambda_i(p). $$
(2)

For a proof we refer to Sibson’s original work [12], or to a recent study on Voronoi-based interpolation schemes by Hiyoshi and Sugihara [7]. Thus, the following definition is sound.
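The definitions above can be checked numerically. The following is a minimal Monte Carlo sketch (our own illustration, not part of the original method, and all names are ours): it estimates the natural coordinates of a point p among four planar sites by random sampling, and verifies that they sum to one and satisfy the local coordinates property of Eq. (2).

```python
import numpy as np

rng = np.random.default_rng(42)

def natural_coords_mc(p, sites, n_samples=400_000, box=2.0):
    # Sample points uniformly in [-box, box]^2 and classify each sample
    # by its nearest site.  A sample lies in V(p) iff p is closer to it
    # than every original site; lambda_i(p) is then the fraction of
    # those V(p) samples whose nearest *original* site is s_i, i.e. the
    # relative area "stolen" from s_i (Eq. 1).
    q = rng.uniform(-box, box, size=(n_samples, 2))
    d_sites = np.linalg.norm(q[:, None, :] - sites[None, :, :], axis=2)
    nearest_site = d_sites.argmin(axis=1)
    in_cell_p = np.linalg.norm(q - p, axis=1) < d_sites.min(axis=1)
    counts = np.bincount(nearest_site[in_cell_p], minlength=len(sites))
    return counts / counts.sum()

sites = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
p = np.array([0.4, 0.3])
lam = natural_coords_mc(p, sites)
print(lam.sum())    # natural coordinates sum to one
print(lam @ sites)  # local coordinates property: approximately p itself
```

The box must be chosen large enough to contain all of V(p), which is bounded whenever p lies strictly inside the convex hull of the sites.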

Definition 2

For a set S of k sites, with \(\lambda_i\) as defined above,

$$ f(p) := \sum_{i = 1}^k v_i \lambda_i(p) $$
(3)

gives the natural neighbor interpolation for p within the convex hull of S.

The resulting interpolant is at least \(C^1\)-continuous everywhere except at the sites, where it is not differentiable and hence only \(C^0\). Recently, a method for getting rid of this continuity limitation of natural neighbor interpolation was presented by Hiyoshi and Sugihara [8], boosting the interpolant to global \(C^m\) continuity for an arbitrary non-negative integer m. However, provided that a large enough sample of data sites is available, even the standard version is able to produce decent results; see Fig. 2.
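Since Eq. (3) is a convex combination of the data values, the local coordinates property of Eq. (2) implies that Sibson's interpolant reproduces any linear function of the sites exactly. A minimal sketch (the function name and the example data are ours) demonstrates this:

```python
import numpy as np

def nni_interpolate(lams, values):
    # Sibson's interpolant (Eq. 3): the weighted sum of the data values
    # v_i, with the natural coordinates lambda_i(p) as convex weights.
    lams = np.asarray(lams, dtype=float)
    values = np.asarray(values, dtype=float)
    assert abs(lams.sum() - 1.0) < 1e-9 and (lams >= 0).all()
    return lams @ values

# If v_i = 2 * s_i + b is a linear deformation field, then by Eqs. (2)
# and (3) the interpolated value f(p) must equal 2 * p + b exactly.
sites = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
lams = np.array([0.5, 0.3, 0.2])   # natural coordinates of some point p
p = lams @ sites                   # local coordinates property (Eq. 2)
b = np.array([1.0, -1.0])
f_p = nni_interpolate(lams, 2.0 * sites + b)
print(f_p, 2.0 * p + b)
```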

Fig. 2
figure 2

Natural neighbor interpolation of 100 randomly spaced data points sampled from a paraboloid

Figure 3 illustrates why NNI is also called the area stealing method. The Voronoi cell of p “steals” area from the Voronoi cells of its neighbors. The percentage of the Voronoi area around p stolen from a site \(s_i\) is given by \(\lambda_i(p)\). That is, in Fig. 3, the weight \(\lambda_i(p)\) is given by the area of the region shaded in dark grey divided by the area of the union of the two regions shaded in dark grey and in light grey. (This union represents the Voronoi cell of p.)

Fig. 3
figure 3

A vertex p (depicted by a solid disk) is inserted into the Voronoi diagram of the data sites (depicted by circles) under the standard Euclidean metric. The Voronoi diagram of the sites is indicated by thin solid and dashed line segments, while the Voronoi cell of p is shown by thick line segments

Note that the definition of the natural coordinates implies that all coordinates are non-negative and sum to one. From the definition it also follows that NNI is only defined for points p constrained to the convex hull of the sites. (Otherwise the new Voronoi cell of p would be unbounded and, thus, would have infinite area.)

Brown studies natural neighbor interpolation on the sphere in the homonymous paper [2] but, surprisingly, little work has been done on generalizing NNI to arbitrary surfaces. Flötotto and Boissonnat [1] recently published a technique that can be used to interpolate on polyhedral and point-sampled surfaces. They start with the observation that the Voronoi diagram restricted to a tangent plane of a manifold becomes a power diagram of one dimension less. Power diagrams, also known as weighted Voronoi diagrams, give rise to regular coordinates, the more general counterpart of natural coordinates. Flötotto and Boissonnat use regular coordinates to develop a coordinate system that is defined not only at every point on the manifold but also close to it. If the manifold is sampled densely (especially in areas with high curvature), all those tangent power diagrams approximate the geodesic Voronoi diagram closely enough to yield good results. However, their examples require hundreds or even thousands of sample points for rather simple geometries, implying that their method would be very difficult to apply to our scenario.

4 Using NNI for interpolating deformation vectors on surfaces

Recall that the network of vertices and edges of \({{\mathcal{P}}}\) forms a planar graph. Thus, standard concepts of graph theory are applicable. In particular, we stress that \({{\mathcal{P}}}\) has O(n) edgesFootnote 4 if it has n vertices.

Since our interpolation is to operate on the triangulated surface of \({{\mathcal{P}}},\) the choice of a suitable metric d and the computation of the corresponding Voronoi diagram become immediate problems. Unfortunately, no analytical approach to computing the corresponding geodesic Voronoi diagram on a triangulated surface is known even for the Euclidean metric. Very recently, Fort and Sellarès [5] explained how to employ the capabilities of modern graphics hardware for computing the discretized Voronoi diagram of a set of n points on a polyhedral surface, using \(O(n^2 \log n)\) time and \(O(n^2)\) space. The main advantage of their work over our approach is that the shortest path between two vertices is allowed to move across surface triangles.

We also resort to computing discretized approximations of the Voronoi cells of the sites. In our approach, the distance between two vertices p i and p j of the triangulated surface of \({{\mathcal{P}}}\) is given by the length of the shortest path between p i and p j that is formed by edges of \({{\mathcal{P}}}.\) Furthermore, we take the number of vertices of \({{\mathcal{P}}}\) that belong to the Voronoi cell of a site s as a discrete measure for the area of that Voronoi cell.

The main drawback of this simplification is that it will yield good results only if the density of the vertices may be assumed to be roughly uniform on \({{\mathcal{P}}}.\) While real-world models often lack this property (see our discussion of the results achieved in Sect. 5), a roughly uniform vertex density is an indispensable prerequisite for any algorithm that attempts to deform a triangulated surface by shifting its vertices. Thus, it seems fair to demand a roughly uniform distribution of the vertices.

We note that working on closed surfaces allows us to bypass the restriction of NNI to the convex hull of S. If we applied NNI directly in 3D space, we would either have to ensure that all vertices of the convex hull of \({{\mathcal{P}}}\) are data sites, a requirement that seems difficult to meet in practice, or would have to resort to more elaborate means for extending the interpolation to an extrapolation.

4.1 The algorithm

To compute the Voronoi cells of the sites, we apply a wavefront propagation very similar to Dijkstra's algorithm. Initially, if a vertex p and a site s form an edge of the triangulated surface, then the distance between p and s is given by the length of that edge, and a pointer is set that links p to s. Otherwise, the minimum distance from p to a site is set to infinity. If a vertex forms edges with more than one site then the distance to the closest site is taken. (Ties are broken randomly.) All vertices with finite distances to sites are inserted into a priority queue ordered according to increasing distances.

In every subsequent step of the wavefront propagation we delete the vertex p with (currently) minimum distance from the priority queue. If a vertex p′ forms an edge e with p, then its minimum distance to a site is set to the distance of p plus the length of e if this yields a shorter distance than the one currently stored at p′. In this case we also update the pointer at p′ in order to link p′ to the same site as p. All vertices with updated distances are inserted into the priority queue and their outdated representatives are deleted from it. Effectively, we grow Voronoi cells around the sites by visiting the vertices in order of their distances to the sites.
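The wavefront propagation amounts to a multi-source Dijkstra run over the edge graph of the surface. The following is a simplified sketch (the dict-based graph representation and all names are ours; rather than deleting outdated queue entries, it leaves them in the heap and skips them on extraction, and ties are resolved by heap order instead of randomly):

```python
import heapq

def discrete_voronoi(adjacency, edge_len, sites):
    # adjacency: vertex -> list of neighboring vertices
    # edge_len:  frozenset({u, v}) -> Euclidean length of edge {u, v}
    # sites:     the data sites (a subset of the vertices)
    # Returns, for every vertex, its shortest edge-path distance to the
    # nearest site and a pointer ("owner") to that site.
    dist = {v: float("inf") for v in adjacency}
    owner = {v: None for v in adjacency}
    heap = []
    for s in sites:
        dist[s] = 0.0
        owner[s] = s
        heapq.heappush(heap, (0.0, s))
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:                 # outdated queue entry, skip it
            continue
        for w in adjacency[u]:
            nd = d + edge_len[frozenset((u, w))]
            if nd < dist[w]:
                dist[w] = nd
                owner[w] = owner[u]     # w inherits u's site pointer
                heapq.heappush(heap, (nd, w))
    return dist, owner

# A path graph 0-1-2-3-4 with unit edges and sites at both ends:
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
lengths = {frozenset((i, i + 1)): 1.0 for i in range(4)}
dist, owner = discrete_voronoi(adj, lengths, [0, 4])
print(owner[1], owner[3], dist[2])
```

Counting, for each site, how many vertices point to it then yields the discrete "areas" of the Voronoi cells described above.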

This algorithm is illustrated in Figs. 4, 5, 6. Figure 4 shows the status after the initialization, Fig. 5 shows the status after the first two steps of the wavefront propagation (note that one pointer changed!), and Fig. 6 shows the final pointers of all the vertices and, thus, implicitly also the discrete “Voronoi cells” of the sites. By our definition, the “area” of the Voronoi cell of the site in the upper-right corner is seven (since it contains seven vertices of P). Figure 9 shows the resulting discrete Voronoi diagram on the surface of a polyhedral model.

Fig. 4
figure 4

A triangulation (dashed lines) of vertices (depicted by solid disks) and data sites (depicted by circles) together with the pointers established in the initialization of the wavefront propagation

Fig. 5
figure 5

After two steps of the wavefront propagation

Fig. 6
figure 6

Pointers (and implicit discrete Voronoi cells of the sites) after the finish of the wavefront propagation

4.2 Theoretical analysis of the algorithm

The correctness of this algorithm is established easily by observing that before and after every step of the wavefront propagation each vertex either knows its closest site (within the subgraph explored so far) or has its distance set to infinity.

What is the complexity of this algorithm? We note that a vertex is updated (and possibly inserted into or deleted from the priority queue) if and only if it is connected via an edge to the vertex currently visited. Thus, the number of vertex updates and insertions/deletions is bounded by the number of edges, that is, by O(n). Since a priority queue of size O(n) can be maintained in O(log n) time per insertion or deletion, we get O(n log n) as the overall time complexity of this algorithm. One final scan of all vertices produces the size numbers for the Voronoi cells of all sites.

It remains to calculate natural coordinates for all the vertices. For each vertex p, we grow its Voronoi cell and sum up the area stolen from the data sites. Once again, this is carried out by performing a wavefront propagation. The wavefront stops whenever another vertex is reached that is closer to a site than to p (again, ties are broken randomly). As in the case of sites we once again use the vertex count as a discrete measure of the size of the Voronoi cell of p.

In the worst case, the determination of the Voronoi cell of p results in O(n) other vertices being visited. Thus, the worst-case complexity of the determination of the natural coordinates for all the vertices is \(O(n^2 \log n)\). This bound seems too pessimistic in practice, though. Indeed, if the k sites are uniformly distributed among the n vertices then the complexity can be expected to be \(O({\frac{n^2} {k}} \log n).\)
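The per-vertex step can be sketched in the same Dijkstra-like style (again a simplified illustration with our own names; `dist_site` and `owner` are the distances and site pointers produced by the site wavefront, ties are broken in favor of the sites, and p itself is counted toward the cell of its original owner):

```python
import heapq

def discrete_natural_coords(p, adjacency, edge_len, dist_site, owner):
    # Grow the discrete Voronoi cell of vertex p: a vertex joins the
    # cell iff it is strictly closer to p (along edges) than to its
    # nearest site; the wavefront stops at all other vertices.
    dist_p = {p: 0.0}
    heap = [(0.0, p)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist_p.get(u, float("inf")):
            continue
        for w in adjacency[u]:
            nd = d + edge_len[frozenset((u, w))]
            if nd < dist_p.get(w, float("inf")) and nd < dist_site[w]:
                dist_p[w] = nd
                heapq.heappush(heap, (nd, w))
    # Count, per site, how many vertices p "steals"; these counts are
    # the discrete |V(p, s_i)|, normalized to the natural coordinates.
    stolen = {}
    for u in dist_p:
        stolen[owner[u]] = stolen.get(owner[u], 0) + 1
    total = sum(stolen.values())
    return {s: c / total for s, c in stolen.items()}

# Path graph 0-...-6 with unit edges and sites at 0 and 6; the site
# distances and owners are given by hand (tie at vertex 3 broken to 0):
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 6] for i in range(7)}
lengths = {frozenset((i, i + 1)): 1.0 for i in range(6)}
dist_site = {0: 0, 1: 1, 2: 2, 3: 3, 4: 2, 5: 1, 6: 0}
owner = {0: 0, 1: 0, 2: 0, 3: 0, 4: 6, 5: 6, 6: 6}
lam = discrete_natural_coords(3, adj, lengths, dist_site, owner)
print(lam)
```

Here the midpoint vertex 3 steals vertices 2 and 3 (itself) from site 0 and vertex 4 from site 6, so its natural coordinates come out as 2/3 and 1/3.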

One might be tempted to insert vertices permanently into the Voronoi diagram once their natural coordinates are known, in an attempt to let them act as data sites for subsequent iterations of the main loop. Tests showed that this modification sometimes speeds up the algorithm by up to a multiplicative factor of 2–3 (as could be expected) but also downgrades the results obtained! Since Sibson’s interpolant is only \(C^0\) at the data sites, this observation is quite understandable [4].

5 Discussion of results

5.1 CPU-time consumption

The complexity analysis of our deformation algorithm suggests that the practical CPU-time consumption will depend on the number k of sites and on the overall number n of vertices. Indeed, the CPU-time consumptions measured in practical tests of the implementation of our deformation algorithm reflect the complexity bounds. If interpolations are performed on parts of various complexity, relative to a fixed number of 100 data sites (and deformation vectors per part), our algorithm clearly runs in time quadratic in the number of vertices (see Fig. 7). (All timings were obtained on a notebook with a 1.73 GHz Pentium M processor and are given in seconds.)

Fig. 7
figure 7

The y-axis shows the CPU time in seconds necessary to interpolate 100 randomly placed sites on triangulated surfaces having x vertices

As suggested by the complexity bound \(O({\frac{n^2} {k}} \log n),\) CPU time can indeed be saved by providing additional data sites. Figure 8 shows the decrease in the CPU-time consumption (y-axis) for an STL model of the Stanford dragon, cf. Fig. 9, as the number of data sites distributed randomly among the 19 547 vertices is increased from 2 to 1 000. Thus, for a practical application of our algorithm it will be important to strike a good balance between the efforts spent on obtaining data sites together with the deformation vectors and the time consumed by the actual inverse deformation.

We note that by applying ideas used for the computation of chessboard distances in 2D, we could shave a log-factor off the complexity bounds if we used the link distance rather than the true Euclidean distance as a metric. However, the speed-up gained in practice is rather tiny, while using the link distance makes the interpolation even more susceptible to irregularly spaced vertices. (Typically, less than 1% of the CPU time is spent on the computation of the Voronoi diagram of the sites.)

Fig. 8
figure 8

Decrease of CPU-time consumption (y-axis) as the number of data sites (x-axis) is increased (for n = 19 547)

Fig. 9
figure 9

Voronoi diagram on the surface of the Stanford dragon [14]. The data sites are marked as points

5.2 Practical applicability

Experiments quickly revealed that the various approximations and the discrete nature of our approach made it worthwhile to correct irregularities of the deformation vectors. Using a straightforward mean value smoothing, this postprocessing step takes only linear time: if \(p_{i_1},\ldots,p_{i_m}\) are the neighbors of \(p_i\), then \(v_i\) is replaced by \(1/m \sum_{j=1}^m v_{i_j}.\)
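A minimal sketch of one such smoothing pass (the function name and the dict-based mesh representation are ours):

```python
def smooth_deformation_vectors(vectors, adjacency):
    # One pass of mean-value smoothing: each deformation vector v_i is
    # replaced by the average of its neighbors' vectors.  Every edge is
    # touched a constant number of times, so the pass is linear time.
    smoothed = {}
    for i, nbrs in adjacency.items():
        m = len(nbrs)
        smoothed[i] = tuple(sum(vectors[j][c] for j in nbrs) / m
                            for c in range(3))
    return smoothed

# Triangle 0-1-2: each vector becomes the mean of the other two.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
vecs = {0: (0.0, 0.0, 0.0), 1: (1.0, 0.0, 0.0), 2: (0.0, 1.0, 0.0)}
print(smooth_deformation_vectors(vecs, adj)[0])
```

Note that the smoothed vectors are computed from the original ones, so the result does not depend on the order in which vertices are processed.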

Comparisons between parts sintered with and without inverse deformation clearly showed that an inverse deformation does pay off. For instance, after applying our algorithm to a model of the bar depicted in Fig. 1, the resulting part matched its template better than the one built without inverse deformation: the deviation from the target geometry was reduced by a factor of 10! (We placed 44 data sites along the bar’s main edges, among a total of 6 708 vertices.) Similar results were obtained for other long and thin parts. Experiments also showed that the inverse deformation works well for polyhedral objects of higher genus (such as a plate with dozens of cylindrical holes). The reader is referred to [6] for high-resolution color images of parts deformed by our algorithm.

During the practical evaluation of our algorithm two problems quickly surfaced and turned into major hurdles for a large-scale test:

  • How can the deformation vectors be obtained in an automated way?

  • How can we re-triangulate surfaces such that we obtain meshes that are suitable for a deformation?

All commercial 3D scanners available to us do a point-to-surface matching but do not support a point-to-point matching. As a result, we know how far a point on the manufactured surface deviates from the input surface, but we are not supplied with deformation vectors attached to specific vertices (“sites”) of the part. Thus, we would need a surface registration mechanism that directly matches automatically scanned data with the STL model in order to obtain a suitable set of deformation vectors fully automatically. The current approach to getting deformation vectors for vertices of the part (rather than for arbitrary points on the surface of the part) involves operations done by hand and is far too time consuming for large models. While such a manual intervention is good enough in a lab environment to produce a proof of concept, actual production runs would require this step to be automated as well.

The availability of decent input triangulations is also a critical problem: STL models are generated by engineers using various CAD packages, but few of them are suitable for a (surface-based) deformation. For instance, one obviously cannot hope to deform the bar depicted in Fig. 1 appropriately if its surface consists of a handful of long rectangles (recall that we only shift vertices but do not bend edges!). Similarly, long but skinny triangles or highly irregular sizes of the triangles downgrade the quality of the deformation achieved by our algorithm. Thus, prior to fine-tuning our inverse deformation algorithm, it turned out to be much more important for the actual real-world application to apply good re-meshing software.

6 Conclusion

We presented an inverse deformation algorithm for correcting warpage of parts manufactured by means of laser sintering. Experiments run by sintering parts with and without our surface-based inverse deformation made it apparent that this purely geometric approach does indeed help to mitigate warpage effects. Currently, problems with the automatic generation of suitable input data (rather than the quality of the output generated by our algorithm) form the main hurdle for applying our algorithm in actual production runs. Acquiring suitable deformation vectors and re-meshing part surfaces in a fully automated way turned out to be problems that we could not easily solve within the industrial environment that supported this study. However, given an automated solution to these problems, the actual inverse deformation introduced in this paper can indeed be expected to reduce warpage effects that occur in laser sintering.