Introduction

Since laser scanning started to mature as a surveying methodology, efforts have been made to identify changes in scenes repeatedly sampled by LIDAR surveys. In fact, change detection, deformation analysis, and structural monitoring are different terms for strongly related topics. In laser scanning, all these topics have in common that point clouds of the same scene or object, acquired at different epochs, are compared. From this comparison, conclusions are drawn on the local geometric state of the scene.

Before point clouds are ready to be compared, they must first be acquired and aligned. Data from each epoch may have a different error budget; in most cases, the quality of the data already varies strongly within a single point cloud (Soudarissanane et al. 2011). On top of that, additional uncertainty is introduced by the alignment procedure, a process also referred to as registration (Vosselman and Maas 2010). This setting for change detection and related methods of point cloud comparison has existed for several years now, and different methodologies exist for dealing with challenges such as data blunders, uncertainty variations, occlusions, varying point densities, and detecting changes of individual objects in a complex scene.

A relatively new approach, which will be discussed in more detail below, is to incorporate the position of the sensor in point cloud processing. Traditionally, the position of the sensor is only used when creating a point cloud, but several papers have demonstrated that the acquisition geometry contains additional information.

In this paper, a review is made of both established methodology and recent methods triggered by two developments that add further challenges to the topic. First, equipment for point cloud acquisition is spreading quickly: laser mobile mapping systems, Kinect range cameras, and smartphones (using photogrammetry) are three relatively new sensor systems for acquiring point clouds. As a consequence, it becomes feasible to combine and subsequently compare point clouds acquired by completely different sensors. The second challenge is the constant increase in data volume. Notably, laser mobile mapping systems sample complete cities at a rate and point density that make it very difficult to exploit the full information content of the data. Laser data is also increasingly used in official regional and nationwide data archives. In the Netherlands, for example, a third version of the Dutch national airborne laser scan archive is being acquired from 2014 onwards (Swart 2010). It can be foreseen that in the coming years, more and more individual cities will acquire an official city point cloud. This may trigger the need for change detection methods that follow a specific protocol.

There are also two other developments directly caused by the maturing of laser scanning technology. First, more methods are becoming available to characterize the quality of acquired data. The availability of such methods complicates data acquisition and processing, as these more sophisticated methods should be integrated into the workflow. But clearly, reducing the error bounds of the input data also reduces the error bounds of the results. Another development is that laser scanning is becoming known as a surveying technique to a wider audience. The consequence is that laser scanning is now often only part of a bigger project. For example, the result of a laser scanning survey could be used to set boundary conditions for a numerical simulation. In this paper, the wider use of laser scanning technology is notably reflected in the references. In recent years, more and more papers discussing laser scanning have appeared in journals outside the geomatics community. In this paper, we briefly discuss recent developments in forestry, geomorphology, structural monitoring, and urban management. This paper is a fully worked-out extension of Lindenbergh (2013).

Main pre-processing challenges

In this chapter, an overview is given of largely unavoidable issues in both acquisition and pre-processing that have to be taken into account by a successful change detection method. These effects have in common that they may all cause the detection of false changes.

Measurement geometry and surface properties

An issue that is not specific to change detection is the effect of local variations in measurement geometry and surface properties. In both static and kinematic scanning, both the range between sensor and object and the incidence angle vary locally (Soudarissanane et al. 2011). The incidence angle is the angle between the incoming laser ray and the normal of the tangent plane of the surface at the location where the laser hits. Low incidence angles therefore correspond to almost perpendicular laser rays. Both local point density and local noise level vary with measurement range and incidence angle. In addition, the noise level is influenced by the properties of the scattering surface, relative to the properties of the laser system, such as wavelength and footprint size. In extreme cases, surface properties may be such that part of the scene is sampled in one epoch but not in another. This may notably happen with wet surfaces that, depending on the wavelength, may absorb most of the incoming laser light.
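
As a minimal sketch of how the incidence angle can be computed per point, assuming a known scanner position and surface normals estimated from a local plane fit; the function names and the neighborhood-based normal estimation are illustrative assumptions, not part of any cited method:

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=10):
    """Estimate a unit normal per point from a plane fit to its k nearest neighbors."""
    tree = cKDTree(points)
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)
        nbrs = points[idx] - points[idx].mean(axis=0)
        # The normal is the singular vector of the smallest singular value.
        _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
        normals[i] = vt[-1]
    return normals

def incidence_angles(points, normals, scanner_pos):
    """Angle between the incoming laser ray and the local surface normal, in radians."""
    rays = points - scanner_pos                       # direction of the incoming ray
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)
    cos_inc = np.abs(np.sum(rays * normals, axis=1))  # |cos| handles normal sign ambiguity
    return np.arccos(np.clip(cos_inc, 0.0, 1.0))
```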

Local variations in point density will, for example, affect the cloud-to-cloud distances when comparing two point clouds from different epochs. Variations in local noise level make it more difficult to decide globally whether a scene has locally changed, as how easily real changes can be distinguished from noise-induced differences varies locally.

The effects of measurement geometry and surface properties on point density and noise level are at least partly understood, but they are difficult to incorporate in a change detection method, as doing so requires an additional processing step to identify the local variations and a strategy to handle them.

Registration

In the following, it is assumed, unless stated otherwise, that point cloud data representing the same location is available for at least two epochs. It is also assumed that the point clouds are represented in the same local or global coordinate system. In practice, this means that some preprocessing has already taken place, often depending on the method of acquisition.

Point clouds acquired by a mobile platform, such as an airplane or a car, are typically directly georeferenced. This means that the position of the platform in a global coordinate system is obtained by a Global Navigation Satellite System (GNSS) and its orientation by an Inertial Measurement Unit (IMU). The global coordinates of a point whose distance to the platform is measured by laser ranging are then obtained by combining all measurements together with the orientation of the laser at acquisition time. In contrast, panoramic scans obtained from a static viewpoint are typically concatenated to form a larger point cloud by 3D matching. Initially, such point clouds are in a local coordinate system. If necessary, conversion to a global coordinate system can be made by incorporating known global coordinates of targets visible in the cloud. Specific methods are discussed in Chapter 3 of Vosselman and Maas (2010) and in Tam et al. (2013).
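
As a minimal sketch of direct georeferencing, assuming the platform pose and the laser pointing direction in the platform (body) frame are given; the rotation convention and all names below are illustrative assumptions:

```python
import numpy as np

def georeference(platform_pos, platform_rot, ray_dir_body, rng):
    """Global coordinates of a single laser return.

    platform_pos : (3,) GNSS position of the platform in the global frame
    platform_rot : (3, 3) rotation matrix from body to global frame (from the IMU)
    ray_dir_body : (3,) unit pointing direction of the laser in the body frame
    rng          : measured range in meters
    """
    return platform_pos + rng * (platform_rot @ ray_dir_body)
```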

It is important to note that the processes of registration and/or direct georeferencing add to the error budget in a particular way. When concatenating scans, but also when applying a strip adjustment, i.e., the fine matching of points from different flight lines in airborne laser scanning, use is most often made of a rigid body transformation. Such a transformation rotates and translates one point cloud in such a way that it optimally matches another point cloud. When comparing registered data from different epochs, this process notably results in systematic shifts that resemble changes at locations in the matched point cloud away from where the matches were made. In georeferencing, errors in the positioning and orientation propagate directly into locally varying errors in the resulting point clouds. Both direct georeferencing and registration errors are often at the millimeter to centimeter level, therefore often larger than the error in the laser range, and are consequently easily misinterpreted as change.
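
Once point correspondences are available, the rigid body transformation itself can be estimated in closed form. The sketch below uses the standard SVD-based (Kabsch) solution on matched point pairs; it is a simplified illustration, not the strip adjustment of any particular software:

```python
import numpy as np

def rigid_body_fit(source, target):
    """Least-squares rotation R and translation t with target ≈ source @ R.T + t.

    source, target : (n, 3) arrays of corresponding points from two epochs.
    """
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t
```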

Other systematic errors that could lead to false change detection results are unresolved errors in the system calibration or remaining errors in the positioning of the sensor, typically errors that cannot be removed by a rigid body transformation. In addition, some surface materials result in systematic offsets in laser scan point clouds.

Varying viewpoints

Data sampling the same scene in different epochs may or may not be acquired from different viewpoints. Initially, most laser scan data was airborne; as a consequence, the viewpoint was mostly positioned above the scene of interest. Static terrestrial scanning often involves scans obtained from different scan positions. Most mobile terrestrial scan data is acquired from moving cars. In repeated mobile acquisitions, the viewpoint may be similar, from the street, but still significantly different, as the car may drive in a different lane or cars with scanners mounted at different heights may be used. Modern airborne scanning also shows more variation in viewpoint, as, for example, low-flying helicopters are used or data is acquired with different pointing angles in mountainous regions. Even more variation in viewpoint has to be dealt with when data obtained from totally different sensors is compared (Young et al. 2010).

The obvious consequence of comparing point clouds obtained from different viewpoints is shadow effects: part of a scene may be visible in one epoch but invisible in another. Also, the addition or removal of objects between acquisitions will lead to shadow effects. Therefore, in general, a change detection procedure has three possible outcomes: changed, unchanged, or unknown.

Temporary objects

Temporary objects are another challenge in multi-epoch point cloud analysis. In deciduous forests, variations in acquisition season will result in point clouds with more or fewer leaves, but at least the season can be taken into account when planning the acquisition. City modeling, but also change detection, is unavoidably affected, however, by temporary objects on streets such as cars, people, or sun screens. One possibility is to classify and remove the temporary objects from each single scene, but this will result in a point cloud with holes (Aijazi et al. 2013a). Another possibility, in some cases, is to combine data from different epochs or from different acquisitions within one epoch to identify objects as moving or as not belonging to the facade background (Hähnel et al. 2003).

Change detection

In Chapter 7 of Vosselman and Maas (2010), a first breakdown of methodology aiming at change detection and deformation analysis was presented. This division into approaches is first briefly recalled. Then, for each type of approach, new methodology, where present and identified, is discussed in this and the following chapter.

Change detection versus deformation analysis

In Vosselman and Maas (2010), the following distinction between change detection and deformation analysis is made. Change detection looks for a binary answer: did the situation change, yes or no? Is the tree still there, or was it removed? Deformation analysis looks for a quantified change: how much did the tree grow in 3 years? Essential for choosing a method to answer either question is the expected signal-to-noise ratio. If changes are large and obvious, a simple and efficient method should be used. More involved methods should only be used when required by the application. If in doubt, start simple, for example by using only part of the available points, and switch to more advanced methods only if the initial results indicate the need.

Change direction

Essential for change detection is the spatial dimension of change and, if applicable, its spatial direction in relation to the acquisition geometry. Consider for example Fig. 1. In this figure, two georeferenced ALS point clouds are compared, sampling a patch of forest in 2008 and 2012, respectively. Visual validation of the coregistration and of possible flight strip effects did not reveal differences at flat horizontal and vertical patches, such as roads and walls. Therefore, it is assumed that the registration error in this case is not more than a few centimeters, which agrees with the reported quality of the two data sets. The same two point clouds are also compared in Fig. 2a, b, but in a completely different way. Note that all three figures show exactly the same patch of forest. In Fig. 1, the difference in maximal elevation per 25-cm grid cell is plotted. The underlying grid is a regular horizontal grid; consequently, the changes considered are in the vertical direction. In this particular example, decrease of elevation, corresponding to the larger red patches, is notably caused by trees that were removed, while some increase of elevation, indicated in green, occurs on top of and around the tree canopies as a result of natural growth.
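
A minimal sketch of this 1D comparison, assuming two numpy arrays of georeferenced points: rasterize the maximum elevation per horizontal grid cell in each epoch and difference the rasters. The 25-cm cell size follows the example above; the binning strategy and names are illustrative assumptions.

```python
import numpy as np

def max_elevation_grid(points, cell=0.25, extent=None):
    """Maximum z per horizontal grid cell; NaN where a cell holds no points."""
    if extent is None:
        extent = (points[:, 0].min(), points[:, 1].min(),
                  points[:, 0].max(), points[:, 1].max())
    x0, y0, x1, y1 = extent
    nx = int(np.ceil((x1 - x0) / cell)) + 1
    ny = int(np.ceil((y1 - y0) / cell)) + 1
    ix = ((points[:, 0] - x0) / cell).astype(int)
    iy = ((points[:, 1] - y0) / cell).astype(int)
    grid = np.full((ny, nx), np.nan)
    np.fmax.at(grid, (iy, ix), points[:, 2])   # running cell-wise maximum
    return grid

# Vertical (1D) change on a shared extent: positive = growth, negative = removal.
# diff = max_elevation_grid(cloud_2012, extent=ext) - max_elevation_grid(cloud_2008, extent=ext)
```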

Fig. 1

Differences in maximal elevation in meters between point clouds sampling a forest patch in 2008 and 2012

Fig. 2

Points with a cloud-to-cloud distance over 1 m, colored by elevation. The cloud-to-cloud distances were determined between two high-density ALS data sets sampling a forest patch in 2008 and 2012, respectively. a Top view; b side view. The red and blue circles in both figures indicate the same trees

What is completely disregarded by the method used to generate Fig. 1 are changes between the terrain and the tree tops. This is visualized in Fig. 2b, where all 2008 points are shown that are at least 1 m away, in ordinary 3D Euclidean distance, from the closest 2012 point. In this way, not only changes in the local elevation of what is often referred to as the canopy height model are obtained, but also most changes in the 3D structure from the terrain up to the top of the canopy. In Fig. 2b, this means that complete trees are visible that apparently were removed from the forest between acquisitions. Figure 2a meanwhile shows, in a top view, the 2008 points that are at least 1 m away from the closest 2012 point. Comparison to Figs. 1 and 2b shows that large differences in maximal elevation in Fig. 1 indeed correspond to completely removed trees. In addition, there are smaller changes in Fig. 2a that are difficult to identify in Fig. 1 and could correspond to changes in the understory of the trees.
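
The 3D comparison behind Fig. 2 amounts to a nearest-neighbor query. A minimal sketch with SciPy, assuming the two epochs are given as (n, 3) arrays in the same coordinate system; the 1-m threshold follows the example above:

```python
import numpy as np
from scipy.spatial import cKDTree

def changed_points(cloud_a, cloud_b, threshold=1.0):
    """Points of cloud_a farther than `threshold` (3D Euclidean) from any point of cloud_b."""
    dist, _ = cKDTree(cloud_b).query(cloud_a, k=1)
    return cloud_a[dist > threshold], dist

# removed, d = changed_points(cloud_2008, cloud_2012)  # e.g., trees removed after 2008
```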

The method used to generate Fig. 1 is in fact a 1D change detection method, as changes in only one direction, the up direction, are considered. The method used for Fig. 2 is a 3D method, as changes in all possible directions are reported. An example of 2D changes is given in Schwalbe et al. (2008), where a vector field is derived from repeated scan data sampling a glacier. In this case, the vector field indicates the direction and velocity of the glacial surface flow. In both 1D and 2D change detection, the operator has a choice of which directions to consider.

Range image methods

Laser ranging by definition determines the distance from the laser device to the scene. If the line of sight from the laser device to a part of the scene is blocked, no point on the scene of interest is recorded, but rather a point on the blocking element; think of a tree or car hampering the visibility of a facade. In van Goor et al. (2011), the effect of occlusions is mitigated by explicitly determining the overlap in a repeatedly scanned scene of a metro tunnel. At locations where corresponding planar segments were found, apparently no large change, such as the placement of platform furniture, took place. But at these overlapping locations, a detailed deformation analysis can still be performed to identify possible subtle changes at the millimeter level due to, e.g., changing moisture conditions.

In some recent papers, the acquisition geometry is explicitly taken into account for the purpose of change detection, notably for identifying areas where no conclusion can be drawn because they were not visible in all acquisitions. Such an idea was already implemented in Zeibak and Filin (2007), where changes are considered from the point of view of one of the scan stations. In Hwang et al. (2013), a line-of-sight analysis is used to decide which part of a map can be updated using newly available laser mobile mapping data.

In Lindenbergh et al. (2011), a sandy beach is scanned several times from a fixed position by a terrestrial laser scanner. Such a scanner operates in a spherical way. Variation in the horizontal plane is obtained by the rotation of the scanner head around its vertical axis, while variation in the vertical plane is obtained by a fast rotating mirror. If such a scanner is placed over an almost flat surface like a beach, the local point density will decrease rapidly with increasing distance to the scanner. Therefore, a subdivision of the point cloud into a Cartesian 2D grid will also result in a large variation in the number of scan points per grid cell. This variation can be avoided by using a spherical grid, similar to the organization of a panoramic scan in a depth or range image. A range image is an image where the pixel values represent ranges and the pixel locations correspond to the way in which the ranges were acquired. For a panoramic scanner, the pixel location therefore corresponds to the horizontal and vertical angle at which a range was determined.

In the beach example, time series per spherical pixel were analyzed for change. In Kang et al. (2013), a similar organization in a range image is used to efficiently detect changes on a repeatedly scanned building facade. A large advantage of working with range images is that they can be treated as a raster, which, for example, enables fast neighborhood identification. To obtain such a raster, a point cloud that appears irregular in a Cartesian coordinate system is transformed into an image or array that is regular in spherical coordinates. The use of range images is one approach to coping with large data volumes.
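
A minimal sketch of this Cartesian-to-spherical rasterization, assuming the scanner sits at the origin of the point coordinates and a fixed angular resolution; the bin layout and the choice to keep the nearest return per pixel are illustrative assumptions:

```python
import numpy as np

def to_range_image(points, az_res_deg=0.1, el_res_deg=0.1):
    """Rasterize a scanner-centered point cloud into a spherical range image.

    Pixel (row, col) = (elevation bin, azimuth bin); value = nearest range, NaN if empty.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.sqrt(x**2 + y**2 + z**2)
    az = np.degrees(np.arctan2(y, x)) % 360.0        # horizontal angle, 0..360
    el = np.degrees(np.arcsin(z / rng))              # vertical angle, -90..90
    col = (az / az_res_deg).astype(int)
    row = ((el + 90.0) / el_res_deg).astype(int)
    img = np.full((int(180 / el_res_deg) + 1, int(360 / az_res_deg) + 1), np.nan)
    np.fmin.at(img, (row, col), rng)                 # keep the nearest return per pixel
    return img
```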

Ray analysis

One step further is to explicitly incorporate all information extracted along the line of sight from sensor to scene. The sensor information not only tells us that the scene in the direction of the line of sight is located at the range distance, but also that no other objects are between the sensor and the scene, that is, space is not occupied there. Simultaneously, from the information in this one ray, it follows that we do not know what space is occupied along the ray behind the surface. In that sense, each pulse with its associated ray results in some constraints on the possible shape of the scene. To summarize, for a given ray, the status along the ray changes from empty, for the line segment between scanner and scene, to occupied, for the point where the half-line intersects the scene, to unknown, for the areas behind the scene. A similar idea is exploited in the theory of space carving (Kutulakos and Seitz 2000), where the 3D shape of a scene is reconstructed from photos obtained from different known locations.
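
The empty/occupied/unknown status along a single ray can be made concrete on a voxel grid. The sketch below uses simple uniform stepping rather than an exact voxel traversal, and all names are illustrative assumptions rather than the implementation of Hebel et al. (2013):

```python
import numpy as np

EMPTY, OCCUPIED, UNKNOWN = 0, 1, 2

def ray_status(origin, point, voxel_size=0.5, step=0.1, behind=2.0):
    """Label voxels along one laser ray: empty up to the return, occupied at the
    return, unknown for a short stretch behind it. Returns {voxel index: status}."""
    direction = point - origin
    rng = np.linalg.norm(direction)
    direction /= rng
    labels = {}
    for s in np.arange(0.0, rng + behind, step):
        pos = origin + s * direction
        voxel = tuple((pos // voxel_size).astype(int))
        if s < rng - voxel_size:
            labels.setdefault(voxel, EMPTY)
        elif s <= rng:
            labels[voxel] = OCCUPIED           # the voxel containing the return
        else:
            labels.setdefault(voxel, UNKNOWN)  # occluded space behind the surface
    return labels
```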

For airborne laser scanning, this ray analysis is extensively described in Hebel et al. (2013). A 3D grid structure is used to store the exact positions of scan rays and measured 3D points. This grid structure is applied to efficiently determine which 3D points or scan rays are in the proximity of a given 3D point or scan ray. The uncertainty in the measurements is incorporated by a so-called belief function, which encodes that the transitions empty-occupied and occupied-unknown are not abrupt but fuzzy, depending on the data quality. For the actual change detection, rules are given to combine belief functions corresponding to different rays. These rules decide whether a new ALS measurement confirms or contradicts the stored information, as extracted from previous measurements. The same methodology is applied to laser mobile mapping data in Xiao et al. (2013).

Note that this method requires that the acquisition location is known. This means that the trajectory of the airplane or car and the correspondence between each 3D point and the trajectory must be explicitly stored. In addition, the size of the voxels used in the 3D grid structure will have some influence on the computational efficiency. Using larger voxels will increase the local search space for finding nearby rays or 3D points and will therefore increase the running time. As the 3D grid structure is only used to facilitate search operations, variations in voxel size will only have a minor impact on the result.

Deformation analysis

In our definition, deformation analysis identifies quantified changes.

Point-wise deformation analysis

Point-wise deformation analysis quantifies changes at the level of single point locations. These locations may be the individual scan points of one epoch or grid point locations of some regular grid. In both cases, no features like cars, boulders, or traffic signs are identified before applying the deformation analysis. Figure 2 is an example of point-wise deformation analysis, as distances per point are determined to a reference point cloud, in this case the cloud acquired first, in 2008. In this example, a next step could be to identify single trees by clustering nearby points.

By its nature, point-wise analysis is often an obvious choice for quantifying erosion or sedimentation in geomorphological applications. For example, in Lague et al. (2013), riverbed changes are quantified in the direction normal to the local terrain surface. Point-wise analysis is also an obvious choice when identifying changes at a scale close to the point density and the precision of the scanner. This topic is further discussed in section “Morphological maps”.
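
Quantifying change along the local surface normal can be sketched as follows. This is a strongly simplified illustration of the idea behind Lague et al. (2013), whose M3C2 method additionally handles multi-scale normals and confidence intervals; the neighborhood radius is an illustrative assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def normal_direction_change(reference, compared, radius=0.5):
    """Signed change along the local surface normal, per reference point.

    For each reference point, fit a plane to its neighborhood, then measure the
    mean offset of nearby points of the other epoch along that plane's normal.
    """
    ref_tree, cmp_tree = cKDTree(reference), cKDTree(compared)
    change = np.full(len(reference), np.nan)
    for i, p in enumerate(reference):
        nbrs = reference[ref_tree.query_ball_point(p, radius)]
        if len(nbrs) < 3:
            continue
        _, _, vt = np.linalg.svd(nbrs - nbrs.mean(axis=0), full_matrices=False)
        normal = vt[-1]                          # local surface normal
        others = compared[cmp_tree.query_ball_point(p, radius)]
        if len(others) == 0:
            continue
        change[i] = np.mean((others - p) @ normal)
    return change
```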

Object-oriented deformation analysis

The strong point of laser scanning is its ability to acquire, in a short time, a large number of single points sampling the geometry of a scene. A static scanner typically acquires millions of 3D points in a few minutes. Much man-made infrastructure consists of a concatenation of geometric primitives, notably planes and cylinders. Planes form streets and the walls and roofs of houses, while cylinders form poles of street furniture and pillars supporting buildings. A point cloud representing a flat wall sampled at 6 m by a static or mobile laser scanner will also consist of hundreds of thousands to millions of points. Still, only three non-collinear points are sufficient to uniquely define a plane.

This large measurement redundancy demonstrates the potential of laser scanning for object-oriented analysis. In this case, the objects are either components corresponding to a single primitive, like one flat wall or one cylinder as part of a light pole, or complete objects, like a full facade, possibly composed of several walls, or a complete street lamp.

Object-oriented change detection has notably been applied for detecting changes in buildings (Rutzinger et al. 2010; Xu et al. 2013). In these cases, airborne laser scanning data was classified into different classes. Local class changes correspond to object changes, like a building that has been added or demolished. Similarly, Oude Elberink et al. (2011) compared classified airborne laser scanning data sampled after the 2010 Haïti earthquake to a reference map.

Morphological maps

In Pesci et al. (2013), the notion of morphological maps is used to identify seismic-induced building deformations. A morphological map consists of the point-wise deviations from a geometric primitive, like a plane or cylinder, representing a hypothetical previous, unaltered state of the investigated object. In that sense, we consider this method as belonging to the class of object-oriented deformation analysis. The advantage of considering deviations with respect to such a primitive is that point cloud registration is not required: if point clouds from two epochs are available, the deviations from the geometric primitive in the first epoch are simply compared to the deviations from the corresponding primitive in the second epoch, and a direct cloud-to-cloud comparison is not necessary.
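
A minimal sketch of such a morphological map for a planar primitive, assuming a wall scanned in one epoch as an (n, 3) array: fit a best-fit plane and report the signed point-wise residuals. Repeating this per epoch yields deviation fields that can be compared without registration; the least-squares plane fit is an illustrative choice.

```python
import numpy as np

def plane_residuals(points):
    """Signed point-wise deviations from the least-squares plane through `points`.

    The residual field is a 'morphological map' of the object: comparing the
    fields from two epochs does not require registering the point clouds.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]                        # direction of least variance
    return (points - centroid) @ normal    # signed distance to the fitted plane

# map_2008 = plane_residuals(wall_2008)
# map_2012 = plane_residuals(wall_2012)   # compare the two deviation fields
```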

In Pesci et al. (2013), it is even argued that it is not necessary to sample a scene before and after an event like an earthquake to identify changes. Most buildings are constructed such that walls are vertical and planar; therefore, deviations from the plumb line or in the local planarity of walls can often be related to the impact of high-energy events such as an earthquake, notably if additional information is available on the state of a building before the event.

Also in Elberink et al. (2012), only post-event data is considered. In this case, the event considered is the 2010 Haïti earthquake. After the event, the affected area was sampled by airborne laser scanning. Using this data, an inventory of damaged buildings was made with a classification approach. First, the data was segmented. Next, attributes were derived for the resulting segments, such as mean height above the terrain or spread of the return intensity. These attributes were used as input for both rule-based and supervised classification.
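
Deriving such per-segment attributes is straightforward once each point carries a segment label. A minimal sketch, where the attribute choice mirrors the two examples named above and the array layout is an illustrative assumption:

```python
import numpy as np

def segment_attributes(height_above_terrain, intensity, segment_id):
    """Per-segment attributes for classification: mean height above the terrain
    and spread (standard deviation) of the return intensity."""
    attributes = {}
    for seg in np.unique(segment_id):
        mask = segment_id == seg
        attributes[seg] = (height_above_terrain[mask].mean(),
                           intensity[mask].std())
    return attributes
```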

Related is also the assessment of the as-built state of, for example, an industrial installation compared to a design model, or of the actual flatness of a wall compared to a pure mathematical plane (Tang et al. 2011).

Incorporating measurement geometry

As stated above, Pesci et al. (2013) discusses the possible deformation of high medieval towers. These towers, with heights sometimes close to 100 m, were scanned from a low position, which inevitably leads to an unfavorable incidence angle (Soudarissanane et al. 2011). As the expected deformation signal in Pesci et al. (2013) was also relatively small, a detailed study of the impact of the incidence angle on the signal-to-noise ratio is incorporated in the deformation analysis by considering point-to-point differences between scans of the same wall acquired from different scan locations. Therefore, this paper is a good example of how progressing knowledge of the impact of measurement geometry on data quality can be incorporated.

Using intensity information

Laser scanners not only store the range between scanner and object but almost always also store the signal strength as an intensity value. The signal strength depends on system characteristics, ambient conditions, measurement geometry, and material properties (Soudarissanane et al. 2011). Using the laser range equation, it is in principle possible to correct for the influence of measurement geometry and ambient conditions (Höfle and Pfeifer 2007). If the system characteristics allow, or if an additional calibration step is performed, the intensity can be used as an additional information channel for classification (Antonarakis et al. 2008) and change detection.
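
A minimal sketch of such a geometry correction, using the commonly applied range-squared and cosine-of-incidence terms of the laser range equation (cf. Höfle and Pfeifer 2007); the reference range and the assumption of a diffusely reflecting (Lambertian) surface are illustrative simplifications:

```python
import numpy as np

def correct_intensity(intensity, rng, incidence, ref_range=10.0):
    """Normalize raw intensities to a reference range, assuming Lambertian targets.

    intensity : raw recorded intensities
    rng       : measured ranges in meters
    incidence : incidence angles in radians (0 = perpendicular hit)
    """
    # Intensity falls off with range squared and with the cosine of the incidence angle.
    return intensity * (rng / ref_range) ** 2 / np.cos(incidence)
```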

Sensor fusion

There are different ways in which sensors can be fused to aid in the detection of deformation. One way is to combine laser scan data with data from other sensors; another way is to combine and compare laser scan data acquired from different platforms.

Wang et al. (2009) describes a method to obtain deformations in tunnels, where projected laser pulses are photographed and converted to a 3D profile in a photogrammetric procedure. This method is quite similar to the principles applied in range cameras. In Wujanz et al. (2013), it is explained how laser scanning can enhance a deformation analysis based on ground-based InSAR. A general problem in InSAR (Hanssen 2001) is the need for phase unwrapping, which means that the number of cycles of a periodic signal has to be fixed in an underdetermined system. Here, laser scanning can assist by providing range constraints to the InSAR processing.

A completely different fusion approach is described in Bremer and Sass (2012). In this paper, scan data obtained before and after a landslide are compared to obtain an estimate of local erosion and deposition volumes. What makes it interesting is that the first acquisition was made by airborne laser scanning, while the second acquisition used terrestrial laser scanning. Comparison of the data was hampered by the presence of dense shrub vegetation, which had to be removed by an advanced filtering approach. In general, the large difference in looking angle during data acquisition may cause problems when combining airborne and terrestrial data, as areas of overlap may suffer from unfavorable scanning geometry. Also, Young et al. (2010) compares results from analyzing airborne and terrestrial scan data and reports that the eroded volume estimated from the terrestrial scans is 30 % larger.

Emerging applications

In this chapter, four application fields are briefly discussed. These fields have in common that they all use laser scanning for change detection. The types of applications, and therefore the ways of processing, are quite different, however.

Structural monitoring

Structural monitoring considers how structures deform under stress. Deformations are often relatively small, on the order of a few millimeters to a few centimeters (Park et al. 2007), and the objects under consideration are also relatively small, on the order of a few meters. The analysis therefore takes place at a level of detail close to the resolution of the scanner, and the type of scanner employed is typically a static terrestrial laser scanner.

In Olsen et al. (2010), it is described how terrestrial laser scanning is used for damage detection and volume change analysis in a full-scale structural test in a laboratory setting. Pesci et al. (2011) considers deformation of towers, as does Pesci et al. (2013), but this paper explicitly links the deformation measured from the scan data to the theoretically expected deformation obtained from a finite element model (FEM) analysis of the possible impact of a sequence of seismic events. In Riveiro et al. (2008), a combination of laser scanning, close-range photogrammetry, ground-penetrating radar, and FEM is described to document the structural state of historical arch bridges. Grosse-Schwiep et al. (2013) considers the monitoring of the rotor blades of wind turbines during operation, which means that the object of interest is moving during the data acquisition. It can be expected that mobile scan data will also be used more and more for structural monitoring of, for example, single building walls in city environments where construction work takes place.

Forestry

In contrast to structural monitoring, laser scanning is applied in forestry at a variety of scales, varying from individual trees to nationwide forest inventories. One of the main large-scale interests is biomass monitoring (Skowronski et al. 2014), where airborne laser scan data is used to obtain a canopy height model, which is combined with statistical descriptors describing, e.g., the local point density, to obtain biomass estimates. At a smaller spatial scale, terrestrial laser scanning was used to determine the diameter at breast height of stems removed between two epochs because of harvesting (Liang et al. 2012). Also, high-density, i.e., 10–50 pts/m², airborne laser scanning data can be used to identify individual planted or removed trees (Xiao et al. 2012), at least outside forest areas, where individual trees are easier to identify. One step further would be to identify changes within single trees, for example the detection of removed branches or the identification of dead branches from terrestrial laser scan data. One approach is to first derive the structure of a tree in both epochs, e.g., by a skeletonization method, then match the derived structures (Bucksch and Khoshelham 2013; van Kaick et al. 2011), and, finally, analyze structure parts for which no match in the other epoch is present.

Geomorphology

Geomorphology considers the shape of landforms. Laser scanning has meanwhile become an often-used tool for geomorphologists to detect and quantify landscape dynamics (Paar et al. 2012). Changes can be caused by flow, as on a glacier (Schwalbe et al. 2008), or on a slope, somehow announcing a rock fall event (Abellán et al. 2009). Flow-induced changes are typically identified by feature matching or correlation, as patterns in the landscape move due to the flow. Alternatively, local mass movements are determined using some variation of DEM comparison. Mass movements can be caused by landslides (Barbarella et al. 2013), by rivers, which cause local erosion and sedimentation (Lague et al. 2013), or by permafrost degradation (Barnhart and Crosby 2013). A third type of change is the dislocation of boulders or rocks due to wave impact (Hoffmeister et al. 2012) or direct rock fall. In such a case, the movement of the individual rocks can be parameterized by a rigid body transformation.

Urban changes

The final application domain considered here is that of urban change. Both airborne and mobile laser scanning are increasingly used for city inventories. The logical next step is also to identify changes in these inventories. In this case, the focus is typically object-oriented and considers objects like street poles, roofs, or facades that are sampled by many points. Given two point clouds from different epochs, two obvious approaches are to (1) first classify scan points into objects, followed by (2) comparing the resulting classification maps, or, alternatively, to (1) identify changed parts in the scene, followed by (2) classifying the changes (Teo and Shih 2013). Xu et al. (2013) uses change detection to identify building parts that were built without a permit, while Aijazi et al. (2013b) uses change detection to notably identify non-permanent objects that can subsequently be removed to obtain a final clean database. An upcoming challenge in this topic is to create and maintain meaningful street inventories for complete cities using laser mobile mapping data.

Conclusions

In this report, a review is given of recent methods aiming at detecting changes from laser scan data. The main challenges are outlined: locally varying point cloud properties, registration, possibly varying viewpoints during acquisition, and the presence of temporary objects. When starting a project, the signal-to-noise ratio should be assessed first. That is, an inventory should be made of the expected changes compared to the expected quality and redundancy of the point clouds. If the expected changes are large and obvious, a straightforward and efficient method can be used that ignores the data quality. If, on the other hand, changes are expected to be small, the measurement geometry is unfavorable, and outcomes are critical, a careful measurement setup is needed in combination with a possibly stochastic approach that systematically propagates the quality of the input data towards the results by considering the local effect of each processing step. Promising new methods not only consider the final point cloud but also explicitly incorporate the position of the sensor during acquisition. What is largely, but not completely (Rieg et al. 2014), missing in current methodology is a systematic analysis of how methods can be applied to the huge data sets that are currently acquired using, e.g., laser mobile mapping systems.