1 Introduction

1.1 Earthquake Response and Recovery

Disaster management is characterized as encompassing the mitigation, preparedness (pre-disaster hazard research), response and recovery strategies (post-disaster research) undertaken to lessen vulnerability and assess the impact of a disaster (Alexander 1991; Baird 2010; Van Westen 2013). Emergency disaster management involves multi-criteria decision analysis and an increasing demand for up-to-date and accurate spatial information on the current situation during all phases of the disaster management cycle (Fig. 1). This information is essential for planning a rapid response and deploying large numbers of first responders, volunteer teams, technicians and experts to the affected areas in time (Alexander 2005).

Fig. 1 Disaster management cycle

Geographic information systems (GIS) are a powerful tool in natural disaster assessment and management. Spatially referenced data on buildings, streets, services, population, the positions of hazardous facilities and topography are entered into a GIS by conversion to shapefiles and geodatabases (Erden and Karaman 2012). Analysis and visualization of this spatial information can support risk identification, the development of preparedness and response scenarios, the estimation of building damage, relief operations, socio-economic impact assessment, the simulation of disaster effects and, in general, decision making before, during and after a natural disaster (Cutter 2003; Altan 2005; Goodchild 2006).

Geo-informatics plays the important role of connecting technology acquisition and the identification of information resources with the methodologies and previous experience in handling natural disasters, mainly earthquakes (Altan et al. 2001; Yang and Zhang 2013; Sivakumar and Ghosh 2017). A variety of earthquake damage assessment methodologies have been developed and applied by the research community worldwide, focusing on reducing either physical or socio-economic vulnerability (Hashemi and Alesheikh 2011).

An earthquake, however, strikes suddenly, with little or no warning, and has a significant effect on human life and property. After an earthquake, rapid damage assessment is vital for emergency response actions, rescue operations and post-disaster reconstruction (Tu et al. 2016). Building damage is one of the major issues that civil protection agencies must cope with during the crisis management of an earthquake. Various types of remote sensing data (aerial or satellite images, SAR, LiDAR) and different techniques are utilized in building damage detection and assessment; they either evaluate changes using data from before and after a disaster, or interpret only post-disaster data for initial building damage evaluation and rapid response (Dong and Shan 2013; Rastiveis et al. 2015; Tu et al. 2016). Tong et al. (2012) detected both individual collapsed buildings and regions of collapsed buildings based on building height changes, using IKONOS stereo image pairs acquired before and after the Wenchuan earthquake. Gerke and Kerle (2011) developed a building damage classifier based on airborne oblique, multi-perspective pictometry data; the oblique images allowed the assessment of facade information in addition to roofs, which would not be visible with traditional nadir image-based methods. Aerial imagery, UAVs and LiDAR systems have emerged as alternative sources of building damage information in earthquake-damaged areas (Yamazaki et al. 2015; Fernandez Galarreta et al. 2015; Vetrivel et al. 2015).

All the aforementioned methods assess damage information without taking into consideration parts of the building that are about to collapse. Detailed building damage information is crucial for stakeholders in response and rescue operations, as accurate damage recognition on individual parts of a building can direct experts to the right place at the appropriate time (Vetrivel et al. 2015).

Nowadays, 3D modeling is becoming very popular for documenting the built environment. Textured photorealistic 3D models can describe the shape and size of an object in full detail and with a high level of feature accuracy. These models are widely acquired with three different survey methods: (i) aerial photogrammetry; (ii) terrestrial photogrammetry (TP); or (iii) terrestrial laser scanning (TLS). Fusion of the above methods has also been explored (Lerma et al. 2010).

1.2 Terrestrial Photogrammetry and Terrestrial Laser Scanning 3D Modeling

Photogrammetry is the scientific methodology for determining the 3D geometry of a target by analyzing its 2D images, and it is divided into two categories: aerial and terrestrial photogrammetry. When the images are acquired by a camera located on or near the earth's surface, the photogrammetry is called terrestrial, while a distance of less than 100 m between camera and target defines close-range photogrammetry. A photogrammetric project includes all the steps from data acquisition to the construction of a 3D virtual model. The most common representations of a 3D model are a point cloud and a textured triangulated surface called a polygon mesh.

In previous years, an analytical photogrammetry workflow was applied in order to establish a relationship between a measuring system, several photo coordinate systems and target coordinate systems. This workflow includes procedures such as interior orientation, exterior orientation and bundle adjustment, and the user's interaction with the digital photogrammetric workstation was intense. Recent trends in processing algorithms and low-cost software, however, provide users with a complete digital workflow (Jiang et al. 2008). One of the main advantages of close-range terrestrial applications is the use of inexpensive non-metric cameras. Laboratory calibration is not recommended for off-the-shelf cameras, because the calibration rarely remains valid over the long period between calibration and data acquisition.

Aerial and terrestrial photogrammetry rank among the methods that can be used to survey building structures and produce 3D models. This approach became more popular when photogrammetric algorithms were enhanced by computer vision techniques, leading to the well-known methodology of Structure from Motion (SfM) (Ullman 1979). SfM became popular largely due to the Scale Invariant Feature Transform (SIFT) algorithm (Lowe 1999; Snavely et al. 2006, 2008).

The evolution of technology in the imaging sciences in recent years has introduced new measurement techniques for objects and whole areas. The introduction of LASER (Light Amplification by Stimulated Emission of Radiation) technology in the early 1960s, and the understanding of the advantages of its radiation characteristics, such as monochromaticity, proper alignment and ease of beam formation, led to its application, among other things, in the manufacturing of distance-measuring and imaging instruments. The first applications were in military instruments; over the years, laser technology was transferred to and established in the geodetic total stations that are the precursors of terrestrial laser scanners. The primary differences between TLSs and geodetic stations are the absence of an optical sighting device and of the ability to center and level the instrument, which is replaced by special sensors.

The data obtained by measuring with a TLS are the scanner-to-object-point distance, the polar coordinates of the points of the scanned surface (two steering angles and the distance) and the intensity value of the returned laser beam. The Cartesian coordinates (x, y, z) are automatically calculated from the polar coordinates of the measured points, so each point carries the information (x, y, z, I) describing its position and intensity. During scanning, the TLS also collects RGB values for each scanned point with its embedded camera, in order to define the color and texture of the surface. After appropriate processing, the points of the scanned object are presented uniformly as a set of points with seven values per point, defined as a point cloud, which accurately forms and delivers the object's three-dimensional model.
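
A minimal sketch of this polar-to-Cartesian conversion is given below (Python/NumPy; the angle convention used here, azimuth plus elevation, is one common choice and may differ between scanner vendors):

```python
import numpy as np

def polar_to_cartesian(distance, h_angle, v_angle):
    """Convert TLS polar measurements to Cartesian coordinates.

    distance : slope distance to the point (m)
    h_angle  : horizontal steering angle / azimuth (rad)
    v_angle  : vertical steering angle / elevation (rad)
    """
    x = distance * np.cos(v_angle) * np.cos(h_angle)
    y = distance * np.cos(v_angle) * np.sin(h_angle)
    z = distance * np.sin(v_angle)
    return np.column_stack([x, y, z])

# One measured point: 12.5 m away, 30 deg azimuth, 10 deg elevation (made-up values).
xyz = polar_to_cartesian(np.array([12.5]), np.radians([30.0]), np.radians([10.0]))
```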

The acquired point cloud represents the geometry of the object at a level of detail that was impossible for conventional geodetic stations, and it constitutes the raw data ready for measurement and processing. However, because modern TLSs acquire millions of points per second, the point cloud is huge in volume. TLSs are nowadays used to measure objects in a wide range of applications, such as civil engineering, archaeology and industrial facilities, and they can acquire the geometric data of an object or a larger area which can later be used to create 3D models.

Based on their principle of operation, TLS systems can be categorized into three main categories: (a) Triangulation, (b) Time of Flight and (c) Phase Shift (Vosselman and Maas 2010).

Under the triangulation principle, the instrument generates a laser beam that is deflected across the object by a rotating mirror. The beam is reflected by the surface of the object and focused onto the sensor by the lens. The location of the laser spot on the sensor, together with the known distance between sensor and mirror and the recorded angle of the mirror, determines the point coordinates by triangulation. Triangulation scanners are only applicable to small objects measured at close distances, as the error of the measured distance grows with the square of the distance between instrument and object.

TLSs using the Time-of-Flight (ToF) principle emit a laser beam and measure the travel time of the signal between transmission and reception. As the speed of light is known, recording the times of transmission and reception yields the distance traveled, which is twice the distance between the device and the object. The measurements are made separately for each point, so the laser beam must change direction for each measurement, which is achieved by using mirrors that deflect the beam.

Phase-Shift (or Phase-Comparison) TLSs operate on the principle of comparing the phase difference between the transmitted light wave and the returned wave. The transmitted beam is modulated into a harmonic wave, and the distance is calculated from the phase difference between the transmitted and received waves.
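
The two distance equations can be written in a few lines; a minimal sketch with made-up values (the modulation frequency and timings are illustrative only, not the specifications of any particular instrument):

```python
import math

C = 299_792_458.0  # speed of light in vacuum (m/s)

def tof_distance(round_trip_time_s):
    """Time-of-Flight: the beam travels to the object and back."""
    return C * round_trip_time_s / 2.0

def phase_shift_distance(phase_rad, modulation_freq_hz):
    """Phase-shift: distance from the phase difference of the modulated wave.
    Unambiguous only within half the modulation wavelength."""
    wavelength = C / modulation_freq_hz
    return (phase_rad / (2.0 * math.pi)) * wavelength / 2.0

print(tof_distance(66.7e-9))                    # ~10 m for a ~66.7 ns round trip
print(phase_shift_distance(math.pi / 2, 10e6))  # ~3.75 m for a quarter-cycle shift at 10 MHz
```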

Various types of devices and methods have been used in post-earthquake scenarios to estimate damage such as the deformation of structures. TLS is one of the recently emerging technologies and is even more preferable in hazardous areas, because there is no need for direct contact with the object or structure to be scanned. Researchers have argued in recent years that TLS can provide data of high precision and spatial resolution suitable for damage assessment in general, and especially after a natural disaster such as an earthquake.

Kayen et al. (2006) were among the first to investigate the potential of TLS data for the rapid assessment of damaged terrain, presenting examples of geometric measurement and change detection of seismically induced landslides. Devilat (2014) presented a methodology to build accurate, colored 3D models from TLS of a heritage area hit by an earthquake, which can be used for assessment and further design. Olsen and Kayen (2012) discussed the challenges and benefits of using TLS in post-disaster reconnaissance efforts and argued that TLS can preserve the scene digitally for post-disaster assessment.

Many researchers have shown that it is possible to detect deformations in buildings from TLS data, proposing various automated and semi-automated methods (Olsen et al. 2013; Pesci et al. 2013; Jafari et al. 2017; Zhao et al. 2018; Puente et al. 2018; Xu et al. 2018). Erkal (2017) utilized previously developed surface damage detection algorithms implemented in a software application, which extracts damage features such as shape and size from imported texture-mapped point clouds, determines condition ratings and produces damage reports.

Olsen et al. (2013) utilized optical techniques to show that recently developed change detection algorithms can quickly provide regional damage information, with applications to structures damaged by tsunamis, coastal erosion and the cracking of a historical building. Chang et al. (2008) investigated the use of TLS data to acquire the displacement of columns and beams for safety evaluations, while Anil et al. (2013) showed that cracks as small as ~1 mm can be detected from TLS data.

In contrast to TLS, TP has rarely been used for damage assessment, while Unmanned Aerial Vehicles (UAVs) are the most common platform for image acquisition. The 3D dense point clouds and 3D building models generated from TP are used to recognize cracks, holes, horizontal drifts, debris and other detailed damage on building facades after an earthquake (Dai et al. 2011; Papakonstantinou et al. 2018).

Images for rapid 3D reconstruction in post-earthquake damage assessment can also be taken with tablets (Dabove et al. 2018). In that study, the point clouds extracted from tablet cameras were compared with the 3D model retrieved from a professional camera: the differences were less than 2 cm, while the georeferencing process resulted in approximately 5 cm error at the control points. For a specific building, the total time for data acquisition was 30 min, which is fast enough for rapid mapping during emergency response.

Many researchers propose combining and integrating TLS and photogrammetric methods, as both are capable of collecting precise and dense 3D point clouds (Lerma et al. 2010; Moussa et al. 2013; Zhihua et al. 2014). Li et al. (2008) proposed the fusion of a 3D model with high-resolution imagery for reconstructing 3D building models. LiDAR data acquired before and after an earthquake have also been proposed for detecting building changes by measuring the proportion of destroyed rooftops in building models. Fernandez Galarreta et al. (2015) demonstrated that the combination of 3D point clouds with damage features extracted from oblique images can be useful for intermediate damage assessment at the building level. Yamazaki et al. (2015) highlighted the usefulness of the SfM technique in depicting the damage to buildings caused by the 2011 Tohoku earthquake. Following the 24 August 2014 Napa earthquake, Morelan et al. (2015) used SfM to produce extremely high-resolution 3D point clouds with mm-scale resolution of the surface rupture through anthropogenic features. Nowadays, research interest focuses on comparing the 3D accuracy of high-resolution TLS and TP measurements, in order to examine point cloud characteristics for accuracy assessment and suitability for different 3D applications (Widyaningrum and Gorte 2017; Selvaggi et al. 2018).

2 Methodology

2.1 Data Acquisition

The complexity of building structures, especially after an earthquake event, requires a large number of photographs, because a single stereo pair of images cannot provide the necessary details. Therefore, a convergent bundle of horizontal, vertical and oblique images, generally pointing towards the center of the building, is required (Hanke et al. 2002). The user should also ensure that each point of interest is visible in at least two images taken with an adequate intersection angle.

Along with the off-the-shelf cameras, a typical survey requires additional elements, i.e. scale objects (Fig. 2) (Jiang et al. 2008). Moreover, retro-reflective targets can be established if the project requires high-accuracy measurements. During the survey, the camera settings should be configured appropriately. Photos should be stored in RAW or high-quality JPEG format, with a typical f-stop between f/8 and f/16. For outdoor surveys, a maximum ISO of 400 and a minimum shutter speed of 1/30 s can provide high-quality images. Furthermore, cameras and lenses often have image stabilization features; the user should switch off any vibration reduction setting, because features that aim to reduce vibrations and stabilize the images can reduce the achievable accuracy (Rieke-Zapp and Peipe 2006).

Fig. 2 Scale objects, i.e. objects with known dimensions

The methodology used for the acquisition process with a TLS differs from that of conventional methods. In classical techniques, the correlation of the captured points with the instrument position, whose coordinates are known, plays a key role. Scanning with a TLS, on the other hand, produces data that are correlated with each other and not with the position of the instrument. The main consideration in positioning the scanner is the full coverage of the 3D space or object to be scanned. In particular, the scanning stations should be well distributed so as to cover the entire desired area, with no obstructions between them and the scanned object. It is also necessary to consider the range of the scanner in relation to the required accuracy: the farther the distance from the object, the lower the resolution and accuracy of the final product or, for the same resolution and accuracy, the more time will be needed.

After selecting the acquisition stations, an essential step in the measurement methodology is to determine the locations of the control points used to merge the scans and/or to georeference the 3D model. These points are usually circular or spherical targets and high-reflectivity stickers, which are automatically recognized by point cloud processing software. There must be at least three well-distributed points in each scan for the optimal definition of its reference system relative to the previous one. It should be noted that the merging of the scans can also be achieved using features of the 3D space visible in the scans, without artificial targets.

2.2 Data Processing

The SfM and multi-view stereo (MVS) approach is the most popular for the generation of 3D point clouds (Fig. 3). This approach has been extensively applied over the last decade to 3D mapping at different scales. For example, Westoby et al. (2012) used terrestrial images for 3D modeling of meso- and micro-scale landforms, while Gallo et al. (2014) applied SfM to the 3D reconstruction of objects with a bounding box diagonal ranging from 13.5 to 41 mm. The combination of SfM and MVS has been implemented, in different variations, in several commercial and free software packages (Snavely et al. 2006; Wu et al. 2011).

Fig. 3 Pipeline for generation of dense point cloud

The first step of the SfM approach is the identification of common points between the images and the generation of a descriptor for each of these points. The most popular algorithms are SIFT (Lowe 1999) and Speeded Up Robust Features (SURF) (Bay et al. 2006). The advantage of these methods is that corresponding points in two or more images can be matched regardless of the scale of the images, i.e. the distance between the camera and the target. Within the software, the user can also specify a set of points to aid the image matching, create scale bars and check the accuracy of the procedure. The above algorithms require images that meet specific quality standards for the appropriate distinction of the textures appearing in them (Wu et al. 2013). Therefore, quality control should be applied to ensure that no blurred images are included in the process. Furthermore, moving and other undesirable objects (e.g. sky, humans, trees moving in the wind, reflections on windowpanes, shadowed or sun-glinted areas) should be masked. Another problem that the user may meet during a survey is homogeneous surfaces, e.g. walls and railings; combined with the small available distance between stations and objects, these quite often prevent the matching algorithms from matching adjacent images. The output of SIFT and SURF is the list of common points forming a sparse point cloud. These points are also used for the estimation of the intrinsic and extrinsic camera orientation parameters through a bundle adjustment algorithm (Triggs et al. 2000). The next step is the generation of the final dense point cloud by the MVS algorithm, based on the camera locations and the sparse point cloud produced by the feature extraction and matching; the surface normal for each point can also be estimated (Furukawa and Ponce 2010). However, MVS performs well on Lambertian surfaces and often fails on non-Lambertian objects (Wu et al. 2011).
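
As an illustration of the feature extraction and matching step, a minimal sketch with OpenCV (the image file names are hypothetical; production SfM pipelines add geometric verification and operate on many more images):

```python
import cv2

# Two overlapping survey images (hypothetical file names).
img1 = cv2.imread("facade_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("facade_002.jpg", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and compute their 128-dimensional descriptors.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors with k-nearest neighbours and keep only matches that pass
# Lowe's ratio test, discarding ambiguous correspondences.
matcher = cv2.BFMatcher()
knn = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in knn if m.distance < 0.75 * n.distance]
print(f"{len(good)} tie-point candidates between the two images")
```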

The acquired raw TLS data consist of individual point clouds whose position and orientation must be changed so that every point cloud uses a universal coordinate system. This process is called cloud alignment or registration: the point clouds resulting from the different scans of the object are "joined" together using common points. The process differs among software packages but generally relies on three techniques. The first, target-to-target registration, aligns the clouds with the help of common artificial targets (which come in a variety of forms, such as physical objects like spheres or paper targets with a recognizable printed pattern) captured in each scan, or of feature-specific physical points visible in two consecutive scans. The second, cloud-to-cloud registration, attempts to align scans based on common areas, with the constraint that there is enough overlap (>30%) between two consecutive scans. The third, surface-to-surface registration, aligns the scans based on the geometry of their surfaces.
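
For target-to-target registration, the six-parameter rigid-body transformation can be estimated in a least-squares sense from three or more corresponding target centers. A minimal sketch of the standard SVD-based (Kabsch/Horn) solution, with made-up target coordinates:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid-body transform (Kabsch/Horn) mapping src onto dst.

    src, dst: (N, 3) arrays of corresponding target centres (N >= 3).
    Returns rotation R and translation t such that dst ~ src @ R.T + t.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Sphere-target centres seen in two consecutive scans (made-up coordinates):
scan_a = np.array([[1.2, 0.4, 0.1], [3.5, 0.2, 0.3], [2.1, 2.8, 0.2], [0.5, 1.9, 1.4]])
theta = np.radians(30.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
scan_b = scan_a @ Rz.T + np.array([5.0, -2.0, 0.1])
R, t = rigid_transform(scan_a, scan_b)
assert np.allclose(scan_a @ R.T + t, scan_b)
```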

At this stage, the 3D model can be georeferenced by aligning the individual point clouds to a particular coordinate system. The georeferencing process can use the same targets used to register the scans, provided their coordinates are known. Alternatively, the point clouds can be registered through georeferencing itself: each point cloud is oriented on known points whose coordinates are determined by topographic mapping, so that all point clouds refer to a universal reference system.

During the scanning process, the TLS captures points which do not represent correct information about the geometry of the object being scanned. TLS point clouds are affected by a disturbance generally referred to as "noise", which depends on the scanning method and on the surface characteristics of the object. Processing at this stage consists of "cleaning" these points, which also significantly reduces the data volume and results in better point cloud management. Noise dramatically affects the quality of the models derived from the scans. Noise reduction is performed either automatically, using dedicated software and specialized algorithms, or manually. Automatic noise reduction applies filters that analyze statistical indicators (maximum distance, mean distance and mean square deviation) computed over the cloud. Finally, it is necessary to detect and remove points that do not belong to the object, such as vegetation or any obstacles between the scanner and the object.
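
As an illustration of such a statistical filter, a minimal sketch using the open-source Open3D library (the file name and parameter values are hypothetical; dedicated TLS software exposes similar filters):

```python
import open3d as o3d

# Load a registered TLS point cloud (hypothetical file name).
pcd = o3d.io.read_point_cloud("registered_scan.ply")

# Statistical outlier removal: for each point, compare its mean distance to its
# 20 nearest neighbours against the global distribution; points farther than
# 2 standard deviations are treated as noise and dropped.
clean, kept_idx = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
print(f"kept {len(kept_idx)} of {len(pcd.points)} points")
```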

A lot of ongoing research analyzes an object or an area directly from the point clouds, but sometimes there is still a need to "connect" the points and construct the final 3D model. This can be achieved through the triangulation process, which creates a triangulated irregular network (TIN), a polygon mesh, from the point cloud. In the first step, the polygons are processed: polygons that do not intersect with the main object or that are tangled are removed. The gaps created by this process then need to be filled; their number depends on the integrity of the data and on the number of removed polygons. Gap filling in most software is based on the curvature of the surrounding surface. After all gaps are filled, the polygons are cleaned again; since filling the gaps creates new fragmented polygons, the process is repeated until no gaps remain. The surface adaptation stage follows, where the sectors are constructed, e.g. by segmentation based on surface analysis. After the construction of the sectors is completed, the next step is to construct the networks, which can be made artificially symmetrical and coherent; in general, the denser the network, the more accurately it describes the surface. The last step is to apply texture to the final model from the photographs taken at the same time as the scanning.
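
Commercial packages implement the meshing workflow described above; as an open-source illustration, a minimal sketch that reconstructs a mesh from a cleaned point cloud with Open3D (Poisson reconstruction is one of several surface reconstruction options, not the specific algorithm of any package named here; file names and parameters are hypothetical):

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("cleaned_scan.ply")  # hypothetical file name

# Surface normals are required by most reconstruction algorithms.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

# Poisson reconstruction produces a watertight triangle mesh; vertices poorly
# supported by points (low density) can be trimmed afterwards.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)
o3d.io.write_triangle_mesh("mesh.ply", mesh)
```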

The quality of TLSs, and therefore of the data they acquire, depends on a variety of parameters, not only on the precision of the instrument. Factors such as the environment, the features of the scanned object, the specifications of the TLS and the methodology followed in each survey can cause significant errors and affect the final result. These factors can be grouped into those related to the operation of the instrument, the shape and nature of the object, the environmental conditions and the choice of methodology for the measurement process (Boehler et al. 2003).

The laser beam diameter affects the resolution of the point cloud: the larger the diameter, the more likely there are deviations in the coordinates of the scanned points. In the same way, laser beam divergence (the angular measure of the increase in beam diameter or radius with range) can cause significant errors. Another source of error is the "edge effect", observed at the edges of scanned objects. When the laser beam reaches a point on the edge of the object, only a part of it is reflected and returned to the scanner; the rest may be reflected by a surface behind the edge, if one exists, by an adjacent point, or not at all, so the scanner receives returns from different regions. Most TLSs use mirrors to deflect the laser beam in a particular direction, and any deviation in the angle between the laser beam and the surface of the mirror may lead to the calculation of wrong coordinates.
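
The effect of divergence can be quantified with a first-order footprint model; a minimal sketch with made-up but plausible values (the exit diameter and divergence used here are assumptions, not the specifications of any particular scanner):

```python
def beam_footprint(exit_diameter_m, divergence_rad, range_m):
    """Approximate laser footprint diameter at a given range
    (first-order model, small full divergence angle)."""
    return exit_diameter_m + range_m * divergence_rad

# Made-up values: 3 mm exit diameter, 0.2 mrad divergence.
for r in (5.0, 10.0, 25.0):
    print(f"{r:5.1f} m -> {beam_footprint(0.003, 0.2e-3, r) * 1e3:.1f} mm footprint")
```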

The various surface characteristics of the scanned objects can affect the accuracy of the distance measurements, which depends on the reflection of the laser beam (Ingensand et al. 2003). The reflected beam varies with the material, color, roughness, temperature and humidity of the scanned surfaces. Corresponding errors also arise on surfaces of translucent materials, in which the laser beam is refracted as well as reflected on the surface itself. Other object features that affect the distance measurements are size, curvature and orientation.

Environmental factors such as atmospheric temperature, pressure, humidity and vibrations affect the accuracy of TLS measurements. Radiation interference from external light sources, such as a projector or sunlight, can alter the measurement results because it affects the power of the laser beam. Methodological factors can also affect the accuracy of the measurements: errors may result from a wrong selection of settings, such as the sampling resolution and the distance from the scanned object, or from a wrong approach to georeferencing the point cloud.

Various software packages have been developed for TLS and TP data processing. Most of them implement the algorithms presented above in a similar pipeline. Table 1 lists software packages that are available under different licenses and operating systems.

Table 1 List of software packages implementing photogrammetry techniques for image-based creation of 3D point clouds and meshes and/or processing of laser scanning point clouds

2.3 Point Cloud Comparison

Various methods have been developed for the comparison of two 3D models. The most common is the Multiscale Model to Model Cloud Comparison (M3C2) algorithm (Lague et al. 2013). The method operates on both 3D models in raw point cloud format, as opposed to methods that require meshes or grids. Furthermore, M3C2 is proposed for high-accuracy distance measurements, while other methods, e.g. Cloud to Cloud (C2C), are applied for rapid change detection on very dense point clouds (Girardeau-Montaut et al. 2005). The M3C2 computation is based on the local normal direction rather than only on the vertical direction between points. The user defines a radius based on the roughness of the objects, and the algorithm creates a cylinder oriented along the normal vector; the intersection of the cylinder with the two point clouds defines two point subsets, between whose means the distance is computed.
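
To make the geometry concrete, a heavily simplified sketch of the M3C2 idea follows (our own simplification: no multi-scale normal estimation and no uncertainty intervals, both of which the real algorithm provides; parameter values are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def m3c2_simplified(core_pts, normals, cloud_a, cloud_b, radius=0.05, depth=0.5):
    """Very simplified M3C2-style distance (after Lague et al. 2013).

    For each core point, the points of each cloud falling inside a cylinder of
    the given radius and half-length `depth`, oriented along the local normal,
    are averaged along that normal; the distance is the difference of the two
    mean positions.
    """
    tree_a, tree_b = cKDTree(cloud_a), cKDTree(cloud_b)
    search_r = np.hypot(radius, depth)  # radius of a sphere enclosing the cylinder
    dists = np.full(len(core_pts), np.nan)
    for i, (p, n) in enumerate(zip(core_pts, normals)):
        means = []
        for tree, cloud in ((tree_a, cloud_a), (tree_b, cloud_b)):
            idx = tree.query_ball_point(p, search_r)
            if not idx:
                means.append(np.nan)
                continue
            d = cloud[idx] - p
            along = d @ n                                   # signed offset along the normal
            radial = np.linalg.norm(d - np.outer(along, n), axis=1)
            inside = (radial <= radius) & (np.abs(along) <= depth)
            means.append(along[inside].mean() if inside.any() else np.nan)
        dists[i] = means[1] - means[0]
    return dists
```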

3 Application

3.1 Study Area

On 12 June 2017, an Mw 6.3 earthquake occurred offshore Lesvos Island in the SE Aegean Sea, Greece (Kiratzi 2018). The heaviest damage was reported in the village of Vrisa, where the majority of buildings are constructed of stone masonry (Papadimitriou et al. 2018). In the official inspection, engineers examined all 788 buildings: nearly 35% suffered very heavy damage or destruction and were characterized as beyond repair, about 39% were reported as buildings with moderate to heavy non-structural damage, and 26% were characterized as buildings with negligible to slight damage.

For this research, the Vrisa village was divided into sectors, each including several road sections. TP and TLS data were acquired for the entire settlement immediately after the earthquake. All surveys were conducted under real conditions, with extensive debris and decrepit buildings (Fig. 4, upper left and right). The weather conditions were quite extreme in some cases: during working hours in the month following the earthquake, the maximum air temperature was 39 °C and the maximum wind speed was 27.5 m/s. For this research, we selected a road section including four buildings: two were damaged beyond repair, one was damaged and in need of restoration, and one was undamaged (Fig. 4, lower left and right).

Fig. 4 Street conditions during the surveys (up) and two buildings of the study area damaged beyond repair (down)

3.2 Results and Discussion

All images were acquired by two NIKON D3400 cameras, using an 18–55 mm and an 18–105 mm lens respectively. This 24.2-megapixel DSLR camera is equipped with a 23.5 mm × 15.6 mm CMOS sensor. The acquisitions were performed with an 18 mm focal length; the dimension of each pixel of the 6000 × 4000 pixel image is 4 × 4 μm. A total of 189 images were shot for this roadside section. The approximate distance between the stations and the facades was 4 m, which was also the width of the road. Images were shot perpendicular to the facades and at an angle of approximately 45° to the X and Z axes. A hand-held mounting pole was also used to shoot photos from higher stations (Fig. 5).
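
As a quick sanity check using the values from this paragraph and the standard pinhole relation, the expected object-space footprint of one pixel at exactly 4 m is:

```python
pixel_size = 4e-6     # pixel pitch (m): ~23.5 mm sensor width / 6000 pixels
focal_length = 0.018  # focal length used for acquisition (m)
distance = 4.0        # approximate camera-to-facade distance (m)

gsd = pixel_size * distance / focal_length  # pinhole-camera pixel footprint
print(f"expected GSD ~ {gsd * 1e3:.2f} mm/pixel")  # ~0.89 mm at exactly 4 m
```

This ~0.89 mm value is of the same order as the 0.763 mm/pixel ground resolution reported by the SfM software further below, which averages over the actual station-to-facade distances.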

Fig. 5 Projection centers at street level (blue) and by using the mounting pole (pink)

In order to scale the model to ground units, we used two artificial scale objects with known dimensions: a two-sided wooden bar with sides of 21 cm and 51 cm, and a 2 × 2 chessboard sized 18 × 18 cm. In the present road section, only the first scale object was identified and used.

The laser scanning was performed using the Focus3D terrestrial laser scanner, manufactured by FARO. This scanner features a full 360° × 305° field of view and a high scan speed (976 k pts/s), and its distance measurement is realized by the phase-shift principle.

The first step was to determine the positions and the appropriate parameter setup of the scanner. The fundamental consideration in positioning the scanner is to fully cover the desired 3D area, so the chosen positions should be well distributed. A single scan was not sufficient, because of occlusions and possible danger to the physical safety of the team members (near ready-to-collapse walls); thus, three positions were chosen (Fig. 6). Three partial scans with a 120° × 305° field of view were captured with the Focus3D laser scanner. The selected resolution was ¼, translating to a spatial resolution (point spacing) of 6.13 mm at 10 m, and the selected quality was 2×, meaning that every point was fired at twice by the laser beam for a more accurate distance value. During data collection, digital photos were captured by the integrated camera of the scanner, and measurements were taken from its two integrated sensors, the digital compass and the inclinometer, which provide useful information for the later registration of the scans.

Fig. 6 The study area, the TLS positions and the TP projection centers

During the survey presented in this paper, artificial targets were not used, because placing them is time- and effort-consuming and not suitable for a post-earthquake rapid response. The SCENE software by FARO (2018) was used to process the scans, and the registration took place using the cloud-to-cloud method. This is an automated registration approach which uses distinctive features extracted from the point clouds; these features are matched between pairwise scans in order to estimate an initial approximation of the six-parameter rigid-body transformation, followed by an error minimization step using a surface matching algorithm such as the Iterative Closest Point (ICP).

Two filters were applied to the registered point cloud: (a) an outlier removal filter to eliminate isolated and undesired points such as noise, and (b) a crop to the study area. The next step was to apply texture to the segmented point cloud using the photos acquired by the scanner, which were automatically mapped to the corresponding point measurements. The final step was the quality assurance of the point cloud: a number of control distances taken on fixed objects, such as windows and doors, were compared with the corresponding measurements on the point cloud, resulting in a deviation of less than 2 mm.

Comparison between two point clouds requires that both datasets be co-registered. The observation stations of the laser scanner were georeferenced to the Greek Geodetic Reference System (GGRS87, EPSG:2100) with the use of Real Time Kinematic (RTK) measurements. Georeferencing all camera stations was not feasible, because more than 20,000 images were acquired for the whole settlement. Furthermore, no ground control points were established for the terrestrial photogrammetry, as a result of the rapid deployment of the survey immediately after the earthquake. Instead, georeferencing to GGRS87 was accomplished by identifying common points between the images and an orthomosaic created during a UAV survey that took place at the same time.

For the present study, a fine registration of the two datasets was required. The main methods usually applied for this task are: (a) alignment by picking an adequate number of point pairs in both point clouds; and (b) ICP (Besl and McKay 1992). It should be noted that ICP has been extensively used in the co-registration of point clouds, not only in its primary form but also in hundreds of variations (Pomerleau et al. 2013). ICP was finally chosen for this research because of its robustness and its better performance, especially when the two datasets have small differences and overlap to a large extent. After the co-registration, the two datasets were clipped with the same bounding box, ensuring that the comparison takes place over the same spatial extent.
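
The fine registration of this study was performed in CloudCompare (see below); for illustration, a minimal point-to-point ICP sketch with the open-source Open3D library, using convergence settings similar to those reported in the following paragraphs (the file names are hypothetical):

```python
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("tp_cloud.ply")   # TP dense cloud (hypothetical)
target = o3d.io.read_point_cloud("tls_cloud.ply")  # TLS registered cloud (hypothetical)

# Point-to-point ICP: correspondences within 10 cm, at most 20 iterations,
# stop when the relative RMS improvement falls below 1e-5.
result = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence_distance=0.10, init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    criteria=o3d.pipelines.registration.ICPConvergenceCriteria(
        relative_rmse=1e-5, max_iteration=20))

print(result.fitness, result.inlier_rmse)
source.transform(result.transformation)  # bring the TP cloud onto the TLS cloud
```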

The alignment of the 189 images was based on 48,409 tie points automatically identified by the SIFT algorithm implemented in Agisoft PhotoScan (Agisoft 2018). Due to the available processing power, the large dataset and the limited time, the original images were downscaled to 25%. Thirteen checkpoints were required to assist the alignment. The reprojection error was 1.81 pixels, and the final ground resolution of the images, given the distance between the camera stations and the houses, was 0.763 mm/pixel. Finally, three control points were used for the georeferencing, resulting in a Root Mean Square (RMS) error of 7.57 cm, while the error at the scale bars was 1.02 cm. The area covered by the facades of the four houses of this road section was 130 m².

Regarding the TLS methodology, the mean registration error of the three scans was 2.3715 mm, and 66.3% and 66.7% of the points, respectively, had an error of less than 4 mm in the two pairwise registrations. This error represents the distance between the positions calculated for a specific point in two consecutive scans. The point density of the registered point cloud was very high (41 M points) due to the massive pairwise overlap of consecutive scans (53.4% and 40.9%).

The next step was the fine co-registration of the two datasets with the ICP algorithm. Registration was based on 50,000 randomly sampled points, and the process stopped either when the computation exceeded 20 iterations or when the RMS error dropped by less than 10⁻⁵ between two consecutive iterations. The theoretical overlap between the two datasets was set to 90%. The RMS error of this procedure was 3.58 cm. Figure 7 shows the final point clouds of the two methodologies cropped to the same extent. Both methodologies capture the facades in a similar way. Blind spots are created (a) at the second floor due to balconies; and (b) at doors or windows installed in a niche (Fig. 8). These spots are more frequent in the TLS approach due to the fewer scanning positions (Fig. 6).

Fig. 7 Point cloud based on terrestrial photogrammetry (up) and laser scanning (down)

Fig. 8 Facade extensions (i.e. balconies) creating blind spots

Concerning the point density of the generated clouds, TLS locally produced a higher-density point cloud, with a higher mean value of 142,458 points per 0.0314 m². However, the density of the TP point cloud is uniform throughout the facades, with a mean value of 121,399 points per 0.0314 m², whereas TLS presents its highest density at the center of the scene (Fig. 9). The nature of the TLS approach, with the sight beams converging towards the center of the study area, resulted in the irregular distribution of the point density.

Fig. 9 Point cloud density based on terrestrial photogrammetry (up) and laser scanning (down)

The implementation of the M3C2 algorithm reveals that the point clouds produced by the two methodologies describe the facades and the roughness of the buildings similarly (Fig. 10). Usually the user defines one point cloud as the reference; however, we wanted to examine both the areas seen by TLS but not by TP and vice versa. Thus, we estimated the distances twice, assigning each point cloud in turn as the reference. When using the TLS point cloud as the reference, the mean absolute difference is 3.8 cm, and 94.9% of the computed differences are lower than 10 cm (Fig. 11). The higher differences are observed on the right side of the study area, although even there the differences remain below 10 cm; this is because fewer camera stations were located in this area. The 5% of points at distances greater than 10 cm are detected mainly in two areas. On the left side of the study area, the damaged door of a building exposes the interior of the house, which can be scanned by the TLS but cannot be photographed because of the light conditions. Another subset of points that appears misplaced between the two point clouds lies behind glass windows. The optical properties of materials such as glass are a common error source for TLS: the distance measurement is affected and limited by the physical laws of reflection, including refraction and inner-reflection effects (Ingensand et al. 2003), so the reflection of a laser beam on glass normally produces reflected beams in many directions.

Fig. 10 Absolute difference between the TP and TLS point clouds, setting the TLS dataset as reference (up) and the TP dataset as reference (down)

Fig. 11 Histograms of the absolute differences between the TP and TLS point clouds, setting the TLS dataset as reference (left) and the TP dataset as reference (right)

When using the TP point cloud as the reference, the results are quite similar: the mean absolute difference is 5.6 cm, and 91.4% of the computed differences are lower than 10 cm. Using both point clouds as reference during the distance estimation, we see that 94.9% of the point cloud generated by TP overlaps the TLS point cloud within a distance of 10 cm, while 86.9% of the TLS point cloud overlaps the TP dataset within the same distance. Both the ICP and M3C2 algorithms were applied through the open-source software CloudCompare (2018).

4 Conclusions and Future Work

Geo-information technologies can be utilized rapidly in a hazard area for building damage assessment, both before and after an earthquake occurs. A significant number of studies have shown that TP and TLS can assist the response phase by gathering metric and qualitative information at the building scale. Both TP based on DSLR images and TLS lead to 3D building models that are geometrically accurate and textured at high quality. An important issue regarding data collected in a hazard area is their reliability and credibility: after a natural disaster, data acquired by different agencies using different techniques need to be cross-checked or compared across sources. 3D models carrying 3D spatial information can become a valuable tool for thorough communication and development planning between engineers, constructors and citizens during the recovery phase of an earthquake.

This research evaluates whether terrestrial photogrammetry is a reliable methodology for creating a 3D model with acceptable accuracy. Compared to laser scanners, terrestrial photogrammetry is based on low-cost equipment such as DSLR cameras and smartphones. The SfM processing approach and the supporting software are more user-friendly to non-expert users, although a basic background in analytical photogrammetry remains critical for survey planning and for the evaluation of the results.

The 3D models produced by the above processing were delivered as embedded 3D objects in PDF files. The civil protection agencies of Greece used these models for a complete representation of the post-event physical state of Vrisa's buildings. Based on the measurements that can be retrieved from the 3D model within the PDF file, the agencies recognized and measured cracks, holes, volumes of debris and other detailed damage to building facades, and estimated the compensation for property loss. As the three-dimensional depiction tends towards realism, stakeholders and civil protection agencies can use the geometric information of 3D models for post-earthquake reconstruction and gain knowledge for pre-earthquake hazard research. The extracted models are valuable components that help engineers understand seismic behavior in a more comprehensible way.

Recently, Unmanned Aerial Systems (UAS) have appeared as an alternative source for less time-consuming spatial data acquisition in emergency situations and for cost-effective 3D documentation, even in areas with limited access for civil protection agencies or TP and TLS experts. New UAS are smaller and more flexible in terms of flight parameters, and together with camera optimization, their use can replace TP. More specifically, the use of multi-view oblique images together with vertical ones can support damage assessment more efficiently, because very high spatial resolution images of roofs and facades can be investigated for cracks, deformations and collapses (Gerke and Kerle 2011; Fernandez Galarreta et al. 2015). One of the most promising methodologies for this task is Convolutional Neural Networks (CNNs), which have been applied to object classification and segmentation in remote sensing applications (Karpathy et al. 2014; Hu et al. 2015; Kampffmeyer et al. 2016). The usefulness of point clouds has started to be investigated through CNNs for object classification, part segmentation and semantic labeling (Qi et al. 2017; Maltezos et al. 2017; Hackel et al. 2018), with some efforts focusing on earthquake damage assessment (Vetrivel et al. 2018). The transferability of CNN learning is an important aspect of these methodologies that should be taken into consideration in future studies. The integration of a common database including training data (i.e. images and point clouds) from various earthquake events can further assist damage assessment across different landscapes.