8.1 Introduction

Portrayal of relief and landforms on maps relies on different techniques that provide a stylised representation of the terrain. The oldest maps mostly give a qualitative representation of relief with limited accuracy; more rigorous techniques, beginning with contour lines at the start of the eighteenth century, offer a more accurate description of the terrain. In addition to contouring, other techniques are used depending on the scale and purpose of the map. Maps are now commonly derived from remotely sensed Digital Terrain Models (DTM). They are designed for 2D visualisation on paper and mobile devices, but also for perspective views, more common on electronic devices, where the level of detail depends on the distance from the viewpoint.

Depicting terrain has always been a challenge for cartographers, who must strike a balance between visually effective techniques for portraying landforms and methods that yield accurate terrain values. Each technique is applied according to the scale of the map and the level of detail to be represented, and requires a trade-off between visual quality and accuracy (Table 8.1). Relief on large-scale maps is usually portrayed with spot heights, contour lines, and shaded relief. Shaded relief can also be used at smaller scales, although hypsometric colours are preferred for small and very small scale maps (Imhof 1982).

Table 8.1 Terrain visualisation types according to map scale (After Imhof 1982)

The generalisation process is performed by simplifying the relief and emphasising characteristic landforms from a DTM. Earlier methods mostly focused on adapting the amount of information to the scale of the map by filtering out or smoothing details. Although they efficiently provide simplified terrain representations at a required accuracy, they fall short in highlighting landforms so that relevant features visually stand out from others on the map. More recent developments, in addition to a constant focus on accuracy, give further consideration to integrating knowledge about landforms and surrounding topographic elements in order to better model the relationships between terrain and associated entities. In other words, relief is no longer perceived solely as a field-based phenomenon but can also be considered as being composed of landforms seen as objects. Landforms are then related to each other and to other objects on the map, and can have their own semantic attributes and methods.

This chapter provides a review of recent developments in terrain generalisation. It begins with an overview of the problem, describing the different representation techniques and the issues raised by multiple-scale representation on maps. The next section addresses the characterisation of terrain features, covering their identification from terrain data and their classification in a topological data structure. Section 8.4 reviews recent advances in generalisation techniques, presenting algorithms designed for traditional portrayal on 2D maps and for DTM generalisation, with a focus on cartographic generalisation. The following three sections present applications of these techniques. First, a method for generalising hypsometric maps with consideration of terrain features is presented. Second, a method for selecting isobaths with respect to nautical chart constraints, where navigation hazards must be emphasised, is detailed. The third application presents a model which preserves the relationships between the terrain and the objects on the map. The last section provides concluding remarks and perspectives on future developments.

8.2 Issues in Terrain Generalisation

8.2.1 Approaches to Terrain Generalisation

DTM generalisation is often considered as an optimisation problem where a representation at a given resolution is required. The objective is mainly to reduce confusion and to convey the underlying trends of the terrain (Jordan 2007). In cartography, further work is often required to adapt the terrain representation to the map purpose, to avoid conflicts with other topographic elements, or to improve map aesthetics. In generalisation, the first problem is referred to as model generalisation and yields a digital landscape model (DLM); the second is referred to as cartographic generalisation and produces the digital cartographic model (DCM) (Fig. 8.1). In the latter, specific tools are used to highlight or modify terrain features for each representation technique.

Fig. 8.1 Model and cartographic generalisation (João 1998)

Weibel (1992) described three different methodologies that can be combined in DTM generalisation. The first approach, global filtering, is based on resampling and filtering methods, such as regular resampling or the smoothing operators used in image processing, to smooth the surface. Such methods do not take the terrain morphology into consideration and therefore cannot integrate cartographic constraints. They are usually considered for sampling very large datasets or for large changes in scale.

The second approach, selective filtering, eliminates non-significant points of the DTM. It consists mainly of grid and triangulated irregular network (TIN) DTM generalisation methods that preserve morphometric features. Such methods were developed not only for terrain generalisation but also for data simplification and compression in computer graphics, and for hydrological and geological applications. Two types of method are considered: the first selects critical points based on a distance or error threshold (Fei and He 2009); the second is based on the extraction of feature points and lines obtained from the drainage system (Chen et al. 2012). Point selection methods are simple to implement and usually fast, whereas drainage-based methods tend to better preserve terrain features and derivatives. Global and selective filtering approaches rely on mathematical principles and are mainly used to derive a secondary DLM from a primary DLM.

The third approach, heuristic generalisation, utilises operators that generalise specific terrain elements. It attempts to emulate manual techniques and consists of applying individual operators to the various elements (e.g. contours, spot heights) composing landforms for the production of the DCM. Operators are defined that perform specific tasks such as smoothing, displacement or removal. Each operation can be automated, but combining operations is still a difficult task, as combinations are not unique and the final result depends on the order in which they are applied. Currently, the most efficient models are based on multi-agent systems, which allow the integration of both continuous and discrete operations and can draw up plans of action in order to evaluate different solutions (Ruas and Duchêne 2007).

8.2.2 Representation of Landforms

Weibel’s (1992) strategy suggests that generalisation should be structure- and purpose-dependent. The idea is that generalisation procedures should include mechanisms for terrain structure recognition. Landforms should be addressable as objects so that certain generalisation operators can be applied to specific objects. Landform characterisation depends on the purpose and scale of the map, as landforms are generalised according to their meaning and the required level of detail. Landform classification methods fall into two groups (Deng 2007): set theory, where components are morphometric points, and category theory, where landforms are identified as objects. In the first group, each point of the terrain belongs to one of six morphometric classes (peak, pit, pass, ridge, channel and plane). The results are scale-dependent and landform delineation may be fuzzy; multi-scale classifications have therefore been proposed in which fuzziness is considered explicitly in the classification process (Wood 1996; Fisher et al. 2004).

In the second group, landforms are identified as belonging to categories of objects. Landforms are usually associated with salient terrain features rather than with their boundaries, which are not always well defined. For example, the presence of a mountain is easily associated with the existence of a peak significantly higher than its surroundings, but there is no consensual definition of the spatial extent of a mountain or of the difference between a hill and a mountain. The uncertainty of landform boundaries is a modelling issue discussed by Frank (1996) and Smith and Mark (2003). A landform is considered as a subjectively defined region in a rough part of the Earth’s surface. It follows that the objective of qualitative methods is not to locate explicitly where a landform begins and ends, but to detect the presence of landforms corresponding to an end-user typology. Therefore, landforms are not restricted to morphometric features but must be classified according to the map requirements.

Although these methods provide a classification of the terrain, landforms are not organised in a data structure describing the surface topology through different scales. The first structure describing the topology of a 2D manifold was the Reeb graph (Rana 2004), whose nodes are the peaks, pits and passes of the surface. A topologically equivalent data structure is the contour tree (Fig. 8.2, right), which can be built from a contour map of the surface (Takahashi 2004). Surface networks (Rana 2004) describe the surface topology in a graph whose edges are the ridges and channels, the lines that connect critical points (Fig. 8.2, bottom left). Contour trees were used for terrain analysis and identification of landforms (Kweon and Kanade 1994), but multiple scale representation was not considered and only the features characterised by the tree leaves were identified.

Fig. 8.2 Topological structures of a terrain: critical net, surface network and contour tree

In the computer graphics field, several methods were developed for TIN simplification with preservation of morphometric features, based on hierarchical watersheds (Beucher 1994) and on a critical net (Danovaro et al. 2003). Danovaro et al. (2010) also proposed a data structure that gives access to representations at adaptive resolutions. Such approaches provide a terrain representation at multiple resolutions by removing points while preserving feature lines. As highlighted by Jenny et al. (2011), the emphasis was on performance rather than on cartographic generalisation; these methods are therefore more relevant to model generalisation.

8.3 Object-Oriented Classification of Landforms

Landform recognition has received more attention in the last decade and methods have been developed for the characterisation of specific landforms (Feng and Bittner 2010; Straumann and Purves 2011) and for the representation of landforms at different levels of detail. Levels can be defined by fixed resolutions or scales of observation from a raster DTM (Chaudhry and Mackaness 2008) or based on relationships between contours (Guilbert 2013). In both cases, landforms are bounded by contours and are identified at a resolution given by the vertical interval. The objective of these methods is to enable the representation of the relief at various levels of detail and its storage in a single database.

Chaudhry and Mackaness (2008) are interested in detecting hills and ranges from a raster DTM. Contours at a given vertical interval are first computed, and the summits within those contours are then identified. The prominence of a summit is defined by the height difference between the summit and the key contour, that is, the lowest contour containing this summit and no other higher summit (Fig. 8.3, left). The terrain is then classified into morphometric features using Wood’s (1996) approach. Each morphometric feature which is neither a plane nor a pass is converted into a “morphologically variable polygon”. The spatial extent of a summit is defined by the contour that best overlaps with the morphologically variable polygon containing the summit (Fig. 8.3, right). The overlap value is defined as the area of intersection between the contour polygon and the morphologically variable polygon, divided by the contour polygon area.

Fig. 8.3 Left: summit A with key contour and morphologically variable polygons. Right: extent of the summit in blue (Chaudhry and Mackaness 2008)

Once all extents are computed, partonomic relationships between summits are defined. If the extent of a summit is contained by the extent of another summit, a parent–child relationship can be defined between them. Based on the definition of the summit extent, a summit can only be the child of a higher summit. The authors can then build a hierarchy of summits, identifying an isolated summit as a mountain or hill, and a parent summit with its child summits as a range.
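A minimal sketch of these two steps, the overlap-based choice of a summit extent and the containment-based partonomy, is given below. It assumes shapely polygons; the names Summit, overlap and build_hierarchy are illustrative and not from the original implementation.

```python
from dataclasses import dataclass, field
from shapely.geometry import Polygon

@dataclass
class Summit:
    name: str
    extent: Polygon                      # contour polygon chosen as extent
    children: list = field(default_factory=list)

def overlap(contour: Polygon, mvp: Polygon) -> float:
    """Overlap value: intersection area divided by the contour polygon area."""
    return contour.intersection(mvp).area / contour.area

def best_extent(candidate_contours, mvp: Polygon) -> Polygon:
    """Contour that best overlaps the morphologically variable polygon."""
    return max(candidate_contours, key=lambda c: overlap(c, mvp))

def build_hierarchy(summits):
    """Attach each summit to the smallest summit extent containing it."""
    roots = []
    for s in sorted(summits, key=lambda s: s.extent.area):
        parents = [p for p in summits
                   if p is not s and p.extent.contains(s.extent)]
        if parents:
            min(parents, key=lambda p: p.extent.area).children.append(s)
        else:
            roots.append(s)              # isolated mountain or hill
    return roots
```

A root summit with children then corresponds to a range, while a childless root corresponds to an isolated mountain or hill.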

Guilbert’s (2013) work focuses on contour maps and provides a hierarchical structure, the feature tree, which makes explicit the relationships between features. A feature is defined by a region bounded by one or several contours and can be classified as a prominence (boundary contours are lower than other contours inside the feature) or a depression (boundary contours are higher than other contours). The contour map (Fig. 8.4a) is processed first by building the inter-contour region graph (Fig. 8.4b). The structure has the advantage that contours can be either open or closed and a feature, such as a channel stretching across the map, can be delineated by several contours. Features are extracted recursively in a bottom-up approach by collapsing edges of the region graph. Each round of the process goes through three steps.

Fig. 8.4 Contour map (a) and its corresponding feature tree (d). Prominences in light grey, depressions in dark grey and unclassified features in white (Guilbert 2013)

In the first step, pairs of adjacent regions which have no more than two neighbours and share the same slope direction are merged by collapsing their connecting edge; e.g. regions K and L and regions A, B, C and D of Fig. 8.4b are respectively merged into regions KL and ABCD. In the second step, the new leaves obtained are copied to the feature tree. In the first round, they form the leaves of the feature tree. In the following rounds, newly extracted features are added on top of existing ones.

In the third step, leaves are aggregated to their neighbouring regions. Candidate regions for aggregation are regions r of the graph for which all adjacent regions but one are leaves. The region which is not a leaf connects r to the rest of the graph through an edge called the base of the region, as it encloses the subset formed by r and its leaves. Candidate regions are classified according to their edge elevations. If the edges connecting r to its leaves are at the same elevation, different from that of the base, r contains a pass connecting the leaves (region S with leaves T and U). Other candidate regions, where at least one leaf edge is at the same elevation as the base (region I with leaves J and KL), are aggregated if there is no pass left; they correspond to channels or ridges connecting different parts of the map. Regions connecting to the smallest features are aggregated first, so that more prominent features are closer to the root. The process stops when the whole map is partitioned into features. Finally, spurious features may be removed. For example, F is classified as a depression in the first round and is later aggregated with E to form another depression EF. F becomes redundant, as it is a part of EF carrying the same meaning, and is therefore removed.
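The chain-merging step lends itself to a graph formulation. Below is a sketch of the first step only, assuming the region graph is held in networkx with a 'slope' attribute per node and string node names such as 'K' and 'L', so that merged labels read 'KL'; it illustrates the principle rather than Guilbert's implementation.

```python
import networkx as nx

def merge_chains(G: nx.Graph) -> nx.Graph:
    """Collapse edges between adjacent regions that have at most two
    neighbours and share the same slope direction."""
    merged = True
    while merged:
        merged = False
        for u, v in list(G.edges):
            if (G.degree[u] <= 2 and G.degree[v] <= 2
                    and G.nodes[u]['slope'] == G.nodes[v]['slope']):
                label = str(u) + str(v)              # 'K' + 'L' -> 'KL'
                G = nx.contracted_nodes(G, u, v, self_loops=False)
                G = nx.relabel_nodes(G, {u: label})
                merged = True
                break                                # edges changed: rescan
    return G
```

For regions A, B, C and D of Fig. 8.4b, repeated passes of this loop would yield the single region ABCD.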

The depth of the feature tree does not depend on the scale but on the terrain morphology. The data structure extends the previous work on topological structures presented above by building an explicit hierarchy of features. An example on a contour map is shown in Fig. 8.5. The method allows the identification of the channel which crosses the map; such a feature cannot be characterised with a contour tree, as it is delineated by two contours.

Fig. 8.5 Top: contour map with feature leaves in light grey (depressions) and dark grey (prominences). Bottom: feature tree with depressions (light grey) and prominences (black). Below the root, the map is partitioned into three features: one channel in the middle and two prominences on each side. Features labelled on the map are highlighted in the feature tree

Guilbert (2013) provides a richer topological structure, as a summit can belong to several features delineated by different contours. Chaudhry and Mackaness (2008) associate a summit with only one hill in their hierarchy, but the summit extent and summit relationships are related to the terrain morphometry. For example, in Fig. 8.3, summit C is not part of summit A. Using a feature tree, only contours are considered, and A would be the summit of two features: one containing only A and one containing all three summits.

Both methods provide an object-oriented description of landforms and can be used to enrich a topographic database. Summits in the first case and features in the second can be stored and queried in a database. Geometric and semantic attributes, such as the feature or summit name and height, can also be added. Both methods can therefore be considered as landform generalisation as described by Weibel (1992), and they allow the automatic selection and application of heuristic operators. They can be applied to either the DLM or the DCM; however, classifying a DLM at too high a resolution will lead to an excessive decomposition of the map. They are therefore more appropriate for producing a DCM, either from an existing DCM or from a DLM which has already been simplified. The methods are limited to the description of prominences and depressions. Further knowledge about the features could be gained through terrain analysis in order to provide a more detailed classification of landforms; however, this would necessitate a formal description of landforms adapted to the map application. Such a description can be achieved through an ontology, but its definition is still an open problem (Smith and Mark 2003).

8.4 Generalisation Methods

8.4.1 Spot Height Selection

Automated spot height selection for topographic maps has received less attention than other design tasks, and research has mostly focused on the classification of feature points through selective filtering. However, spot heights are not limited to VIPs (Very Important Points) or feature points describing landforms. As mentioned in Sect. 8.2.2, filtering methods are mostly relevant for scale reduction. Spot heights on a map are also selected according to user needs and their distribution over the map. Palomar-Vázquez and Pardo-Pascual (2008) present a method where spot heights are selected according to their significance, and applied it to the production of a recreational topographic map. The selection criteria relate to proximity to hiking trails, transit points and places of touristic interest. Furthermore, morphometric points are extracted from the TIN and classified according to their type (peak, pass, depression). The importance of a peak also depends on its prominence, which is modelled in terms of its height, the centrality of the peak and the mean slope (Fig. 8.6). A peak with a high centrality or steep slope marks an abrupt change in the terrain and should therefore be given more significance.

Fig. 8.6 Left: the centrality is defined by the ratio between the area of the contour offset passing through the peak (dashed line) and the area of the contour (solid line). Right: the mean slope is defined by the average of the slopes connecting the peak to the points of the curves

Once the classification is done, each peak is assigned a weight according to its type. Palomar-Vázquez and Pardo-Pascual (2008) give a higher weight to points of interest, as they are the most relevant to hikers. Finally, spot heights are selected with due consideration of their distribution. This is controlled by partitioning the map into half-planes forming a binary tree with approximately the same number of spot heights in each block. Points are eliminated according to their concentration, which is defined by the length of the minimal spanning tree connecting all the points within a block. Starting from the block with the highest concentration, the spot height with the lowest weight is removed from each block until the selection percentage is reached. The principle of the method can be applied to topographic and thematic maps; however, constraints are specific to each type of map, and spot height density is related to map scale. Non-morphological constraints need to be translated into a weighting that reflects their importance and which must be assessed by a cartographer.
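A sketch of this distribution-controlled elimination is given below, assuming the blocks and weights have already been computed. The concentration is taken here as the inverse of the MST length (shorter trees meaning denser blocks), which is one reading of the description; the names and the keep ratio are illustrative.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def concentration(points: np.ndarray) -> float:
    """Concentration of a block, from the MST connecting its spot heights."""
    if len(points) < 2:
        return -np.inf                   # single points are never thinned
    mst = minimum_spanning_tree(squareform(pdist(points)))
    return 1.0 / mst.sum()               # shorter tree = denser block

def thin(blocks, weights, keep_ratio=0.5):
    """Remove the lowest-weight spot height from the densest block until
    the target selection percentage is reached."""
    target = int(sum(len(b) for b in blocks) * keep_ratio)
    while sum(len(b) for b in blocks) > target:
        i = max(range(len(blocks)), key=lambda k: concentration(blocks[k]))
        if len(blocks[i]) < 2:
            break                        # nothing left to thin safely
        j = int(np.argmin(weights[i]))
        blocks[i] = np.delete(blocks[i], j, axis=0)
        weights[i] = np.delete(weights[i], j)
    return blocks, weights
```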

8.4.2 Contour Line Generalisation

Contour generalisation is performed either when moving from one scale to a smaller scale or within a given scale to improve the quality of the representation. In the first case, simplification is done either by filtering the grid or TIN DTM (whether already available or generated from the contours) and extracting contours from the simplified representation or by directly simplifying the contours to the destination scale. In the second case, specific operators providing local corrections on contours are performed to fulfil cartographic constraints.

8.4.2.1 Contour Simplification

Traditional line simplification methods can be applied to individual contours; however, they do not always maintain topological integrity. Recently, a line simplification method that preserves topological relationships and can be applied to any kind of line (contours or road networks) was presented by Dyken et al. (2009).

Specific contour line simplification methods are presented by Gökgöz (2005) and Matuk et al. (2006). Gökgöz (2005) first computes an error band around each contour. Characteristic points of the contour are then computed using a deviation angle, defined by the angle between consecutive segments, which is more robust than the line curvature; points are ordered according to their importance (the higher the angle, the more characteristic the point). Each simplified contour is built iteratively by adding characteristic points and smoothing the line through cubic interpolation until the whole line lies within the error band. Results presented by Gökgöz (2005) show that the method provides the same amount of simplification as the Li and Openshaw (1993) algorithm, but points are not distributed regularly along the contours, as no point is kept along straight lines. Gökgöz’s (2005) solution is more computationally expensive, since it requires a priori computation of the error bands and each simplified contour is smoothed by cubic interpolation.
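The deviation-angle measure itself is straightforward to compute; a minimal sketch for a polyline stored as an n × 2 array follows (illustrative, not Gökgöz's code).

```python
import numpy as np

def deviation_angles(line: np.ndarray) -> np.ndarray:
    """Deviation angle (degrees) at each interior vertex of a polyline."""
    v1 = line[1:-1] - line[:-2]          # incoming segment vectors
    v2 = line[2:] - line[1:-1]           # outgoing segment vectors
    cos = np.sum(v1 * v2, axis=1) / (
        np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def rank_characteristic_points(line: np.ndarray) -> np.ndarray:
    """Interior vertex indices ordered by importance: the higher the
    deviation angle, the more characteristic the point."""
    return np.argsort(deviation_angles(line))[::-1] + 1
```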

Matuk et al. (2006) build the skeleton of the regions bounded by contours. The skeleton is formed by the points which have at least two nearest neighbours on the boundary and is equal to the set of Voronoi edges, built from the contours, that do not intersect the contour edges (Fig. 8.7). A potential residual function is associated with each point of the skeleton, defined by the distance along the boundary connecting its two nearest neighbours. Simplification is performed by pruning the skeleton (edges whose potential is smaller than a given threshold are removed) and reconstructing the contours from the pruned skeleton. The method presents an original approach where the pruning threshold is set according to the scale factor; however, it may create visual artefacts if the scale factor is too large. Furthermore, the algorithm does not guarantee the absence of intersections between contours. The method should therefore be applied iteratively with smaller thresholds.

Fig. 8.7 Contour lines in black and skeleton in grey (Matuk et al. 2006)

In the context of nautical charts, these methods are not appropriate because the depth portrayed on the chart cannot be greater than the real depth (that is referred to as a safety constraint). Peters (2012) presents a method for extracting isobaths at a smaller scale where soundings below the plane formed by their surroundings are pushed ‘upward’ to smooth out the surface. Isobaths can also be aggregated and smoothed by interpolating between new soundings. The method defines a higher surface from which new isobaths are extracted, guaranteeing the safety constraint. The approach is very efficient in extracting isobaths at a smaller chart scale and is applicable to high resolution DTMs.
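The safety idea can be sketched as follows for a regular grid of depths (positive downwards), with the local plane approximated by the mean of the four neighbours. Peters (2012) works on a TIN, so this is a simplification of the principle rather than the published algorithm; border handling is also ignored for brevity.

```python
import numpy as np

def push_up(depths: np.ndarray, iterations: int = 10) -> np.ndarray:
    """Return a surface that is everywhere shallower than or equal to the
    input, so isobaths extracted from it satisfy the safety constraint."""
    d = depths.astype(float).copy()
    for _ in range(iterations):
        # local plane approximated by the mean of the 4-neighbourhood
        plane = (np.roll(d, 1, 0) + np.roll(d, -1, 0)
                 + np.roll(d, 1, 1) + np.roll(d, -1, 1)) / 4.0
        d = np.minimum(d, plane)         # cells deeper than the plane rise
    return d
```

Because the update only ever takes the minimum of a depth and its neighbourhood plane, the resulting surface can never be deeper than the measured one.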

TIN- or grid-based simplification methods have a major advantage over line simplification methods: they are more robust and applicable to large changes of scale, since contours extracted from the simplified terrain are always topologically correct. They mostly apply to model generalisation. Line simplification methods are more appropriate for cartographic generalisation from a source DCM to a target DCM, or for updating surfaces, as they can directly integrate cartographic constraints such as the distance between contours. In both cases, the results yielded by these methods may still require further processing to provide a final map: visual conflicts may remain, and some terrain landforms may be removed or emphasised according to the purpose of the map.

8.4.2.2 Cartographic Generalisation Operations on Contours

In order to improve the quality of a representation, algorithms have been developed for selective contour removal (Mackaness and Steven 2006), smoothing (Irigoyen et al. 2009; Lopes et al. 2011) and displacement (Guilbert and Saux 2008). These methods apply to a set of contours which have already been rescaled, and perform local corrections to improve their legibility or the aesthetics of the map.

Mackaness and Steven (2006) developed an algorithm that detects and removes segments of intermediate contours in steep areas of terrain. The method is illustrated on a 1:50,000 map with index contours at a 50 m interval and intermediate contours at a 10 m interval. The method first computes the gradient from the DTM and defines regions where the gradient is greater than a 30° threshold. The gradient threshold depends on the scale of the map, the vertical interval and the legibility distance. One or all contour segments crossing these regions are removed according to the gradient value. The choice of contours to be removed is based upon rules set by mapping agencies.
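The steep-area detection step is easily reproduced on a regular grid; the following sketch uses the 30° threshold from the description, while the removal rules themselves, being agency-specific, are not shown.

```python
import numpy as np

def steep_mask(dtm: np.ndarray, cell_size: float,
               threshold_deg: float = 30.0) -> np.ndarray:
    """Boolean mask of cells whose gradient exceeds the threshold."""
    gy, gx = np.gradient(dtm, cell_size)
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))
    return slope > threshold_deg
```

Intermediate contour segments crossing cells of the mask are then candidates for removal.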

Guilbert and Saux (2008) propose a model combining contour smoothing and displacement. The method is applied to cartographic generalisation of depth contours (isobaths) on nautical charts at fixed scale. Isobaths are not modelled by polygonal lines but by cubic B-spline curves (Saux 2003). The benefit is that curves are modelled by a mathematical expression so that derivative and curvature computations are more robust and the curve has a smooth representation. The limitation is that the quality of the approximation depends on the quality of the sampling and an ill-conditioned problem can lead to a non-reliable approximation.

For navigation safety reasons, generalisation can be done only by pushing isobaths towards areas of greater depth. Shallow isobaths are generalised first so that their displacements are propagated to deeper isobaths. Prior to its generalisation, the ‘isobath admissible area’ is computed. This is the area within which the curve should stay, or to which it should be moved, to correct conflicts with isobaths at the same and lower depths (Fig. 8.8). Deformation is performed by minimising an energy associated with each isobath. Two energy terms are defined: an internal energy related to the smoothness of the curve and an external energy related to the curve position, which is non-zero if the curve is not within its admissible area. Convergence to an admissible solution is guaranteed by fixing critical points characterising the shape features that one wishes to preserve, and by removing bottlenecks or self-intersections that may occur during the process.

Fig. 8.8 Left: isobaths with depths. Right: admissible area for the 10 m isobath

The method is applied to a set of isobaths in a semi-automatic way, as not all conflicts can be corrected by displacement (removal and aggregation may also be required). Propagating displacement from lower to greater depths can result in large deformations and in the artificial smoothing of steep slopes (Fig. 8.9). Finally, the method, based on iterative deformation, is quite computationally demanding.

Fig. 8.9 Generalisation by smoothing and displacement. In the centre are examples of where isobath segments could be removed to avoid too large a displacement

Overall, cartographic generalisation operations are designed to correct one specific type of conflict or to perform one type of operation. These methods are still required to adapt the terrain representation to mapping agency requirements that global DTM-based approaches cannot consider. Mackaness and Steven (2006) correct local conflicts by removing contours; the correction remains contained in a small area and there are no side effects. This is different for smoothing and displacement (Guilbert and Saux 2008), which may have side effects that propagate to other contours. Corrections may be applied in sequence; however, it is still up to the user to decide which operator to apply. For example, the same conflict may be corrected either by removing a contour or by displacing it. Choosing an operation depends on the type of conflict and the terrain morphology, and requires the application of a strategy that can be automated. Such an approach is presented in Sect. 8.6, where features formed by groups of isobaths are selected according to the morphology. Furthermore, the methods presented in these sections apply to only one type of element (contours or spot heights), while conflicts can also occur between both, or with other map elements. Work in this direction is discussed in Sect. 8.4.4.

8.4.3 DTM Generalisation for Relief Shading and Hypsometric Colouring

The computation of shaded relief from a DTM was pioneered by Yoeli (1966), and various computational models were documented by Horn (1982). Enhancements to relief shading methods have been proposed that seek to improve the depiction of terrain structures (Brassel 1974; Jenny 2001; Kennelly and Stewart 2006; Loisios et al. 2007; Kennelly 2008; Podobnikar 2012). Before relief shading can be applied to a DTM for display at medium or small scales, the DTM data often requires generalisation: digital shaded relief computed from high-resolution terrain models is excessively detailed at small scales, making it difficult or impossible to perceive major landforms.

Leonowicz et al. (2010a, b) proposed a DTM generalisation method developed for relief shading. First, the DTM is simplified with a strong low-pass filter, removing details such as mountain ridges or smaller valleys. In the next step, important details are detected in the original DTM using curvature operators. The detected high-frequency details are then amplified and added to the smoothed grid generated in the first step. This procedure is carried out separately for mountainous and flatter areas, using separate sets of parameters. The resulting two terrain models are then combined with a slope mask and, finally, a shaded relief image is computed.
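The two-band principle can be sketched as follows; the filter sizes, curvature threshold and amplification factor are illustrative assumptions, and the separate treatment of mountainous and flat areas with a slope mask is omitted.

```python
import numpy as np
from scipy import ndimage

def generalise_for_shading(dtm: np.ndarray,
                           sigma: float = 15.0,
                           amplify: float = 2.0) -> np.ndarray:
    base = ndimage.gaussian_filter(dtm, sigma)     # strong low-pass filter
    curvature = np.abs(ndimage.laplace(dtm))       # detail detector
    detail = dtm - ndimage.gaussian_filter(dtm, sigma / 3)  # high frequencies
    # keep only the details flagged as important by the curvature operator
    important = detail * (curvature > np.percentile(curvature, 90))
    return base + amplify * important
```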

Similarly, hypsometric colouring requires DTM generalisation as it applies to small scale maps. The process is therefore also based on terrain filtering and the identification of the main elements of relief that will be emphasised. Much recent work in this domain has been done by Leonowicz and Jenny (2011) (see Sect. 8.5).

8.4.4 Modelling the Relation Between Field and Object Type Data

A map is more accurate if the terrain generalisation is undertaken with respect to other objects on the map. Relationships between map objects can be expressed as constraints that must be maintained during the generalisation process (Harrie and Weibel 2007). With a terrain model described by a field function, the difficulty is to integrate these constraints in the generalisation process. Filtering methods rely on a mathematical representation to simplify the relief. Therefore, these constraints should be expressed as mathematical functions and integrated into the filtering process. However, these filtering methods usually only perform simplification and are not adapted for local deformation operations such as displacement or enlargement of a protrusion on the surface. Furthermore, maintaining a relationship may require modifying the terrain and the object at the same time. Therefore, both should be considered at the same time in the same model.

Most research in this domain is concerned with maintaining topological relationships between rivers, contours and spot heights. Contours are considered as individual objects, so constraints can be defined directly between a contour and a river. Chen et al. (2007) provide a classification of conflicts between rivers and contours. Lopes et al. (2011) also take into account the relation between rivers and spot heights when deforming contours: displacement is controlled so that topological relationships between contours and spot heights are preserved and contours still cross rivers at an inflection point. Baella et al. (2007) apply the method of Palomar-Vázquez and Pardo-Pascual (2008) to detect spot heights for topographic maps, adding a weight to selected spot heights close to points of interest (for example, near roads and inhabited zones).

However, these methods are limited to generalising one element with consideration of position constraints imposed by other objects. A more pertinent approach is to generalise the terrain and objects at the same time so that operators can be applied either to one or the other according to constraints. Such a model was proposed by Gaffuri (2007b) and Gaffuri et al. (2008). In their approach, each geometrical element of the terrain is considered as an object under constraints and conflicts are solved by using a multi-agent system approach. The model is reviewed in detail in Sect. 8.7.

8.5 Case Study I: Hypsometric Colouring

Hypsometric tinting is mainly used for small-scale maps (Table 8.1). Imhof (1982) recommends hypsometric tints for maps at scales of 1:500,000 and smaller. With the advent of chromolithography (the first colour printing technology), cartographers started producing maps with a variety of hypsometric colour schemes and, by the mid-twentieth century, hypsometric tints had become the de facto standard for physical reference maps at small scales. For an overview of the historical development and contemporary application of hypsometric colour schemes, the reader is referred to Patterson and Jenny (2011).

A raster image with hypsometric colours can be easily derived from a DTM. A simple linear mapping of the elevation range to a colour range is sufficient to determine a colour for each cell in the DTM. Colour can be arranged in discrete steps, or can be interpolated to create continuous tone hypsometric tints. Imhof (1982) provides guidance on the vertical distribution of colour, suggesting a geometric progression, with small vertical steps between neighbouring colours for low elevations, and large vertical steps for higher elevations. Hypsometric colours are often combined with shaded relief to accentuate the third dimension of the terrain, except for extremely small scales when shaded relief is unable to effectively show terrain (Imhof 1982).
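As an illustration, the sketch below derives class limits following a geometric progression and assigns a class index to each cell; the number of classes and the ratio are illustrative assumptions.

```python
import numpy as np

def hypsometric_limits(z_min, z_max, n_classes=8, ratio=1.5):
    """Upper class bounds with geometrically growing vertical steps."""
    steps = ratio ** np.arange(n_classes)          # 1, 1.5, 2.25, ...
    return z_min + (z_max - z_min) * np.cumsum(steps) / steps.sum()

def classify(dtm, limits):
    """Hypsometric colour class index (0 .. n_classes-1) per DTM cell."""
    return np.digitize(dtm, limits[:-1])
```

Each class index is then mapped to a colour table, possibly interpolated for continuous tones.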

The case study discussed here generalises a DTM, which is then used to derive hypsometric colour. The method seeks to accentuate important landforms, such as major valleys and ridgelines, and remove distracting small terrain details (Leonowicz et al. 2009). The DTM is filtered with lower and upper quartile filters. These quartile filters assign to each raster cell the 25 or 75 percentile of its neighbouring values. The lower quartile filter is applied along valleys, and the upper quartile filter is used in the remainder of the DTM. Valley regions are identified based on a drainage network derived from the DTM.

When developing this method, one of the goals was to take into account design principles developed in manual cartography for increasing the readability of hypsometric colours, as documented by Horn (1945), Pannekoek (1962), and Imhof (1982). Manually generalised contour lines were used as a reference for evaluating the approach. These contour lines were generalised by an experienced cartographer, Emeritus Professor Ernst Spiess of ETH Zurich, who added hypsometric tints to a map of the Swiss World Atlas (Spiess 2008). The target map scale was 1:15,000,000. The cartographer used contour lines derived from the GTOPO30 elevation model with a 30-arc-second resolution as a base for retracing the generalised contours with a digital pen tool.

When generalising terrain for hypsometric colouring, the main landforms should be accentuated, while secondary features should be eliminated. When removing elements, Horn (1945) recommends treating each landform as an entity with the ambition of either removing or retaining the entire entity. For example, if a side valley of a major valley is not important, it should not be shortened, but removed entirely.

The generalisation method applied in this study uses a series of operations performed on the GTOPO30 digital elevation model, illustrated in Fig. 8.10 (after Leonowicz and Jenny 2011). The flow diagram in Fig. 8.11 shows the sequence of the procedure.

Fig. 8.10 Steps leading to a generalised terrain model for hypsometric colouring at small scales

Fig. 8.11 Flow diagram for the generalisation of terrain models for hypsometric colouring at small scales

  1. The initial DTM (Fig. 8.10a) is filtered with an upper-quartile filter, which assigns to each raster cell the 75th percentile of its neighbouring values. The upper-quartile filter preserves elevated areas (ridgelines) and aggregates isolated small hills and mountain peaks. This step generates the first intermediate DTM (Fig. 8.10b).

  2. The initial DTM is also filtered with a lower-quartile filter, which assigns to each raster cell the 25th percentile of its neighbouring values. The lower-quartile filter preserves elevations along valley bottoms and prevents valleys from being dissected into a series of unconnected depressions. This filter also retains mountain passes. This step generates the second intermediate DTM (Fig. 8.10c).

  3. The D8 hydrological flow accumulation algorithm (O’Callaghan and Mark 1984) is applied to the initial DTM to compute a drainage network. The D8 (deterministic eight-node) algorithm first computes a flow direction for each cell (the direction of the steepest path). The flow accumulation value is then calculated for each cell as the number of cells draining into that cell. A threshold is applied to the accumulation grid to identify the cells that are considered part of the drainage network (Fig. 8.10d).

  4. The drainage network is simplified to the desired level of generalisation. Starting at each raster cell, an upstream path is created by following cells that have smaller accumulation values than the current cell. The algorithm follows the path with the smallest absolute difference. If the path is longer than a predefined threshold it is retained; otherwise it is discarded.

  5. The rivers found in step 4 are enlarged by a series of buffer operators (Fig. 8.10e). The resulting grid is then used as a weight to combine the two intermediate DTMs created with the upper- and lower-quartile filters in steps 1 and 2. This weighting applies the grid filtered with the lower-quartile filter to valley bottoms, and the grid filtered with the upper-quartile filter to the other areas. Care is taken to create a smooth transition between valley bottoms and the surrounding areas (for details, see Leonowicz et al. 2009). The final result is shown in Fig. 8.10f. A sketch of this filtering and weighting is given after this list.
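A compact sketch of steps 1, 2 and 5 follows, assuming the valley mask from steps 3 and 4 is already available as a binary grid; the window size and the Gaussian blending of the mask are illustrative choices.

```python
import numpy as np
from scipy import ndimage

def generalise_for_hypsometry(dtm: np.ndarray, valley_mask: np.ndarray,
                              size: int = 9) -> np.ndarray:
    upper = ndimage.percentile_filter(dtm, 75, size=size)   # step 1: ridges
    lower = ndimage.percentile_filter(dtm, 25, size=size)   # step 2: valleys
    # step 5: smooth the 0/1 valley mask so the weighting blends the two
    # grids gradually between valley bottoms and the surrounding areas
    w = ndimage.gaussian_filter(valley_mask.astype(float), sigma=size / 3)
    w = np.clip(w / max(w.max(), 1e-9), 0.0, 1.0)
    return w * lower + (1.0 - w) * upper
```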

The first map in Fig. 8.12 shows hypsometric colours derived from the ungeneralised GTOPO30 DTM. The second map is the manually drawn reference map. The third map is derived automatically with the method described above. The manually generalised map is slightly more generalised than the map produced by the algorithm. The digital method successfully aggregates mountain ridges; small valleys are removed while the bigger ones are retained, but not shortened. Though the method is intended for hypsometric tinting at small scales, it could be adapted to the derivation of contour lines and hypsometric tints at intermediate scales, but this option has not been explored yet.

Fig. 8.12 A comparison of ungeneralised, manually generalised and digitally generalised hypsometric colouring. 1:15,000,000, southeast France

8.6 Case Study II: Isobathic Line Generalisation

Nautical charts provide a schematic representation of the seafloor, defined by soundings and isobaths, and are used by navigators to plan their routes. As the seafloor is not visible to navigators, they have to rely on the chart to identify hazards (reefs, shoals) and fairways. As a consequence, the depth reported on the chart must never be deeper than the real depth to ensure safety of navigation and submarine features are selected according to their relevance for navigation (Fig. 8.13). Indeed, nautical charts provide a more schematic representation of landforms when compared to topographic maps. As reported in NOAA (1997, pp. 4–11), “[cartographers] do, deliberately and knowingly, and on behalf of the navigator, include all lesser depths within a contour even if it means that [their] catch includes many deep ones as well”.

Fig. 8.13 Isobaths are generalised according to the type of feature they characterise

Isobaths can be extracted from the DTM using DTM-based methods (Peters 2012); however, emphasising features to produce the DCM is done using heuristic methods: isobaths are enlarged, displaced or removed according to the landforms they characterise. In order to mimic the manual process performed by cartographers, the seafloor relief portrayed on the chart is perceived as a set of discrete submarine features, which need to be generalised according to their significance from the navigator’s point of view. Constraints can be classified into (Guilbert and Zhang 2012):

  • The legibility constraint: generalised contours must be legible by observing a minimum size or distance between them;

  • The position and shape constraints: position and shape of isobaths are preserved as much as possible;

  • The structural and topological constraints: spatial relationships as well as distribution and mean distance between isobaths are preserved;

  • The functional constraint: a reported depth cannot be greater than the real depth and navigation routes are preserved.

The first three constraints (legibility, position and shape) apply to individual contours or locally to groups of isobaths. The objective of structural and topological constraints is to maintain morphological details by preserving groups of isobaths corresponding to submarine features. Constraints are expressed not only at the local level but also at more global levels on larger features.

8.6.1 A Feature Driven Approach to Isobath Generalisation

In this research, the initial set of isobaths was provided by the French hydrographic service. Isobaths were extracted from the bathymetric database by first simplifying the original set of soundings through sounding selection (interpolation, displacement or modification of soundings were not considered by cartographers, because such soundings cannot be reported on the chart) and then extracting the isobaths by interpolation. The objective was to select the isobaths according to the features they characterise. Automating the generalisation process requires the identification of features formed by groups of isobaths and the definition of a strategy that applies various operators. Features were characterised using the approach of Zhang and Guilbert (2011) and Guilbert (2013). Topological relationships are stored in two data structures: the contour tree connecting the isobaths, and the feature tree, where each feature is composed of a boundary isobath and all the isobaths within the boundary. Features are classified as either peaks or pits.

Automating the process requires the definition of a generalisation strategy so that operators are selected to satisfy generalisation constraints. Guilbert and Zhang (2012) proposed a multi-agent system (MAS) to select features formed by groups of isobaths on the chart. Features and isobaths are respectively modelled as meso-agents and micro-agents at two different levels (Ruas and Duchêne 2007). At the micro level, operations and constraints relate to a single isobath (minimum area, isobath smoothness) or to adjacent isobaths (distance between isobaths). The terrain morphology is defined at the meso level. Features hold information related to the seafloor morphology which is used to evaluate whether the morphology is preserved and which operation can be performed with respect to the safety constraint.

Feature agents are able to communicate with their environment (that is, with other features and inner isobaths), in order to evaluate their state and decide upon further actions. Isobaths, on the other hand, evaluate their environment by estimating constraints (area, distance to neighbouring isobaths) and act based only on information received from a feature. The whole generalisation process is therefore driven by the features and each feature agent goes through a series of steps. These are summarised in Fig. 8.14.

Fig. 8.14 Flowchart of the feature generalisation process (Guilbert and Zhang 2012)

The feature first evaluates whether generalisation must be performed by communicating with the contour agent forming its boundary. The feature passes information about the neighbouring features and the direction of greater depth. The contour checks whether any area or distance conflict has occurred and returns the result to the feature. The feature then evaluates its situation with regard to the different generalisation constraints that apply to features:

  • A feature on the chart must be large enough to contain a sounding marking its deepest or shallowest point;

  • A pit cannot be enlarged or aggregated with another feature;

  • A pit that is too small or not relevant is removed;

  • A peak cannot be removed;

  • A peak that is too small is enlarged or aggregated to an adjacent peak;

  • A minimum distance must be observed between adjacent features.

If a constraint is violated, the feature draws up a list of plans (Table 8.2). Each plan consists of one or several generalisation operations, which are of two kinds: continuous and discrete. Continuous operations deform the boundary isobath in order to modify the extent of the feature. They are performed by applying a ‘snake model’ where an internal energy term expresses the shape preservation constraint and an external energy term models the other constraints (distance, safety, area). The snake model is detailed in Guilbert and Zhang (2012). Such operations do not modify the structure of the terrain, so the contour tree and feature tree are not affected. The safety constraint is guaranteed by imposing that the force applied to a point in the snake model is oriented towards the greater depth. Discrete operations, on the other hand, may remove isobaths and features and so may update both topological structures. It should be noted that aggregation is seen as a two-step operation: a continuous deformation, where features are deformed until their boundaries overlap, followed by a discrete transformation, where the new boundary contour is created and the feature tree is updated. In this way, the deformation is performed smoothly and distance constraints with other neighbouring isobaths are also taken into account.
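A much-simplified sketch of such a continuous operation is given below for a closed isobath stored as a polygon: the internal force smooths the line while the external force pushes conflicting vertices towards greater depth, so the safety constraint holds. The callables outside and towards_deep stand in for the admissible-area test and the safety direction; they are assumed interfaces, not those of Guilbert and Zhang (2012).

```python
import numpy as np

def deform(isobath, outside, towards_deep, step=0.1, alpha=0.5, n_iter=200):
    """isobath: (n, 2) vertex array of a closed curve;
    outside(p) -> bool, True if vertex p violates a constraint;
    towards_deep(p) -> unit 2D vector pointing towards greater depth."""
    p = isobath.astype(float).copy()
    for _ in range(n_iter):
        # internal force: pull each vertex towards its neighbours' midpoint
        internal = 0.5 * (np.roll(p, 1, 0) + np.roll(p, -1, 0)) - p
        # external force: push conflicting vertices towards greater depth
        external = np.array([towards_deep(v) if outside(v) else np.zeros(2)
                             for v in p])
        p += step * (alpha * internal + external)
    return p
```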

Table 8.2 The list of actions (after Zhang and Guilbert 2011)

When processing a plan, the topological and safety constraints are always maintained as any operation that violates these constraints would be rejected. Once a feature has reviewed all its plans, the best plan is selected by checking which one best preserves the terrain morphology: feature areas are compared and the plan with the smallest variation of area is selected. Aesthetic and shape constraints are not considered and consequently the boundary of aggregated features presents sharp angles at the place where isobaths are merged.

Results for the generalisation of the isobaths of Fig. 8.15 are presented in Fig. 8.16. Figure 8.17 presents the feature trees before and after processing. The process was performed automatically, from the building of the feature tree to the application of the generalisation operators. The MAS approach has the advantage that the user does not impose an order on the operations, and the process continues until no further operation can be performed. At mark A, the grey feature was enlarged after the larger peak was aggregated, providing space for the enlargement. Similarly, at mark B, the peaks were aggregated after the pit was removed. Some small features were not enlarged or aggregated because no valid solution was found.

Fig. 8.15 Partial view of the original map (units in cm) with feature tree leaves. Dark grey: peaks; light grey: pits

Fig. 8.16 The map after processing

Fig. 8.17 Feature trees before and after generalisation

This work provides a basic strategy for automatic generalisation; however, only feature selection was considered, and legibility conflicts between isobaths were not corrected: no smoothing and no displacement were performed, although the legibility distance was considered during the deformations. As a consequence, the result is not acceptable as it stands, and the model needs to be extended by giving more autonomy to isobath agents in evaluating and correcting local conflicts.

8.7 Case Study III: Preserving Relations with Other Objects During Generalisation

The first generalisation models were mainly focused on the generalisation of individual objects or of object groups belonging to the same data layers. This section presents the GAEL generalisation model (Gaffuri 2007b; Gaffuri et al. 2008), which deals with the co-generalisation of two layer types: objects and fields. Fields, also called coverages, are a common method in GIS and cartography for representing phenomena defined at each point in geographic space. Relief is an example of such a field: it exists everywhere, and other objects, such as buildings, roads and rivers, lie upon it. As a consequence, many relations exist between these objects and the relief that it is important to preserve. For example, river objects should flow down the relief field and remain in their valleys. The GAEL model handles such object-field relations throughout the generalisation process, allowing the co-generalisation of objects and fields.

The principle of the GAEL model is to explicitly represent relations between objects and fields and to include constraints on these relations’ preservation in the generalisation process. The fields are deformed by the objects, and the objects are constrained by the fields (Fig. 8.18) in order to preserve the relations that they share.

Fig. 8.18 Field-object relations in the GAEL model

The following sections present in more detail these mechanisms using the river-relief outflow relation as an example.

8.7.1 Object-Field Relations and Their Constraints

Specific spatial analysis methods can be used to make explicit the relations between objects and fields. In the case of the river-relief outflow relation, an indicator is defined that assesses how the river flows down the relief: a river is considered to be flowing ‘downwards’ if each segment composing its geometry is directed toward the relief slope. With this indicator, rivers that do not flow properly on the relief (or even sometimes appear to flow ‘up’) are detected. A qualitative satisfaction function representing how the outflow relation is satisfied is then defined from this indicator.
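A sketch of such an indicator follows: it computes the fraction of river segments directed towards the downslope direction of the underlying relief. The helper slope_at, returning the downslope unit vector at a point, is an assumed interface rather than part of the GAEL model.

```python
import numpy as np

def outflow_satisfaction(river: np.ndarray, slope_at) -> float:
    """river: (n, 2) polyline ordered from source to mouth. Returns the
    fraction of segments directed towards the relief slope."""
    ok = 0
    for a, b in zip(river[:-1], river[1:]):
        direction = (b - a) / np.linalg.norm(b - a)
        downslope = slope_at((a + b) / 2.0)
        # a negative dot product marks a segment appearing to flow 'up'
        ok += int(np.dot(direction, downslope) > 0)
    return ok / (len(river) - 1)
```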

In order to consider field-object relations in the generalisation process, constraints on these relations are defined. One constraint is defined for the field and another one for the object. The purpose of each constraint is, of course, to force the relation satisfaction to be as high as possible. The modelling of these relations and their associated constraints uses the same modelling pattern as the CartACom model (Gaffuri et al. 2008; Duchêne et al. 2012): a relation object is shared between both objects involved in the relation. Its role is to assess the satisfaction state of the relation between both objects. The two objects bear one constraint each, which models how each object sees the relation and how it should be transformed to improve the relation satisfaction state. Object-field constraints are included in generalisation processes whose purpose is to balance all generalisation constraints. Any constraint-based generalisation process may be used. For our experiments, we used the agent generalisation model of Ruas and Duchêne (2007). The following section describes the algorithms used to transform objects and fields in order to satisfy their common object-field constraints.

8.7.2 Algorithms for Object-Field Relation Preservation

The GAEL model includes a generic deformation algorithm whose principles are:

  1. To decompose the objects into small components such as points, segments and triangles.

  2. To define constraints on these components depending on the deformation requirements. Some of these constraints may be preservation constraints (forcing the object to keep its initial shape) or deformation constraints (forcing the object to have its shape changed). Figure 8.19 shows examples of such constraints.

    Fig. 8.19 Constraints connected with the deformation algorithm

  3. To balance the preservation and deformation constraints by moving the points. This balance is found using an agent optimisation method. The advantage of this method is that it performs deformations locally, only around the location where the deformation is required.

In the example of the river-relief outflow relation, the relief is represented as a TIN constrained by the contour line geometries. The following preservation constraints are used:

  1. Triangle area preservation constraints;

  2. Contour segment length and orientation preservation constraints;

  3. Point position preservation constraints.

Both the relief and the hydrographic network are modelled as deformable features. In order that the outflow constraint is satisfied, both constraints of Fig. 8.20 are used. Their purpose is to have the angle α between the hydrographical segments (in dark grey) and the triangle slope (in light grey) as small as possible. River segments are constrained to rotate toward the slope direction, like a compass needle (Fig. 8.20a), while relief triangles are also constrained to rotate toward the flow direction of the rivers above them (Fig. 8.20b).

Fig. 8.20 Outflow constraints for hydrographical segments (a) and relief triangles (b)

Using both deformation algorithms, the relief is deformed by the hydrographical network and the hydrographical network is deformed by the relief in order to have their common outflow relation preserved. Figure 8.21 shows the result of the relief deformation for a ‘fixed’ river. Figure 8.22 shows the result of a river deformation over a fixed relief. In both cases, the outflow relation between them has been preserved.

Fig. 8.21 Relief deformation. Initial state (a): some triangles in dark grey are not well oriented according to the river over them. Final state (b): the relief has been deformed according to the outflow triangle constraint. The valley has ‘shifted’ so that it is aligned with the river (c)

Fig. 8.22 River deformation. Initial state: some segments in dark grey do not flow correctly with the relief triangles under them. Final state: the river has been deformed according to the outflow segment constraint. The river now falls in its valley

Further details on this outflow relation example are given in Gaffuri (2007a). The same approach can be applied to other kinds of object-field relations (Fig. 8.18). It requires:

  • Spatial analysis methods to measure the relation satisfaction;

  • Field decomposition and constraints to perform the field deformation;

  • Object deformation or displacement algorithms.

Gaffuri (2008) proposes such elements for other relief relations (with building objects, for example) and for a land cover field. The GAEL model is now part of the production environment of the 1:25,000 base map of France.

8.8 Conclusions

This chapter has reviewed recent research in terrain generalisation. Following Weibel’s (1992) classification of generalisation methods, current thinking in terrain generalisation is built around ideas of selective filtering and heuristic methods. Filtering methods provide a simplified representation of the terrain represented by a field function but can less easily take into account non morphometric constraints. In heuristic methods, features composing the terrain are seen as objects and are generalised by applying individual operators that allow us to model constraints related to the purpose of the map and the relation with other objects portrayed on the map. Therefore, a first step in the generalisation process is to extract terrain information. Although much work has focused on the classification of point and line features for filtering methods, new approaches presented in Sect. 8.3 were developed to characterise landforms as objects defined with their own spatial and non-spatial attributes and on which heuristic operators could be applied.

Section 8.4 described new advances in different representation techniques. The focus has been on utilising DTMs that are stored in a grid, TIN or contour form. It can be seen that terrain generalisation for cartographic purposes does not solely focus on simplification and on performance or compression aspects but, as illustrated in the different examples, also on the information retained on the map according to the quality of the visual information (Sect. 8.5), its purpose (Sect. 8.6), and on the integration of terrain with other map elements in the generalisation process (Sect. 8.7).

Although terrain representation is a major aspect of cartographic generalisation, this review also shows that much work still remains. Classification of landforms as individual objects is still limited to morphometric classification and to basic features such as hills and valleys. As mentioned by Smith and Mark (2003), such classification is complex and the definition of an ontology of landforms is still an open problem. Another research area is in the logical definition of constraints and operators that apply to these landforms. Work presented in Sect. 8.6 is limited to a small number of constraints and operators. As discussed in Chap. 3, ontologies that formalise the generalisation process for terrain generalisation need to be developed in order to extract more knowledge from the terrain model and design efficient implementation strategies. The agent model presented in Sect. 8.7 continues to show promise in facilitating the implementation of a generalisation strategy that allows terrain data to be integrated with other layers of the map. However, the method comes with a high computational cost and its application to several types of layer greatly increases the complexity of the problem due to the large number of constraints that have to be considered.

In the context of research into continuous generalisation and on-demand mapping, the work outlined in Sect. 8.2.2 does not yet incorporate user requirements analysis. One reason for this is that the representation of field data requires a large amount of data to be processed, as well as semantic knowledge and data enrichment prior to the generalisation process. Modelling user requirements in this context is difficult and remains another open problem.