1 Monitoring Disasters from Space

Earth observation has received considerable attention in disaster management in recent years. The imaging capability of national and international earth observation missions has improved steadily, and, driven by technological innovation in New Space, the number of satellites has increased dramatically. Satellite constellations enable high-frequency data acquisition, which is often required for disaster monitoring and rapid response.

In the last two decades, enormous efforts have been made in international cooperative projects and services for sharing and analyzing satellite imagery in emergency response. Some representative ones are listed below.

  • International Charter ‘Space and Major Disasters’: The International Charter ‘Space and Major Disasters’ is an international collaboration among space agencies and companies (e.g., Maxar and Planet Labs) to support disaster response activities by providing information and products derived from satellite data. The charter was initiated by the European Space Agency (ESA) and the French space agency (CNES) and came into operation in 2000; as of April 1, 2019, it had been activated 601 times for 125 countries, supported by 17 charter members with 34 satellites.

  • UNOSAT: UNOSAT is a technology-intensive programme of the United Nations Institute for Training and Research (UNITAR) that provides satellite imagery analysis and solutions to the UN system and its partners for decision making in critical areas, including humanitarian response to natural disasters. UNOSAT was established in 2001; its Humanitarian Rapid Mapping service was launched in 2003 and contributed to 28 humanitarian responses to natural disasters in 22 countries in 2018.

  • Sentinel Asia: The Sentinel Asia initiative is a voluntary international collaboration among space agencies, disaster management agencies, and international agencies to support disaster management activities in the Asia-Pacific region by applying remote sensing and Web-GIS technologies. Sentinel Asia was initiated by the Asia-Pacific Regional Space Agency Forum (APRSAF) in 2005, and its members consist of 93 organizations from 28 countries/regions and 16 international organizations. In 2018, there were 25 emergency observation requests, and disaster response activities were supported by 8 data provider nodes and 48 data analysis nodes.

  • Copernicus Emergency Management Service (Copernicus EMS): Copernicus EMS provides geospatial information for emergency response to disasters, as well as for prevention, preparedness, and recovery activities, by analyzing satellite imagery. Copernicus EMS is coordinated by the European Commission as one of the key services of the European Union’s Earth Observation programme Copernicus. The two mapping services of Copernicus EMS (i.e., Rapid Mapping and Risk and Recovery Mapping) have been operational since April 2012, and 349 mapping activations had been conducted as of April 3, 2019.

Owing to the development of hardware, big earth observation data is now available from various types of satellites and imaging sensors. The large volume and wide variety of earth observation data promote new applications but also raise challenges in understanding satellite imagery for disaster response. In this book chapter, we summarize recent advances and challenges in the processing of big earth observation data for disaster management.

2 Earth Observation Satellites

Over the last decades, the number of earth observation satellites has steadily increased, providing an unprecedented amount of available data. This includes optical (multi- and hyperspectral) images (e.g., Fig. 4.1b) as well as synthetic aperture radar (SAR) images (e.g., Fig. 4.1e, f). Regarding disaster response, the sheer number of satellites ensures quick post-event acquisitions and often, due to the regular acquisition patterns of many satellite missions, the availability of a recent pre-event image. In the following paragraphs, we provide a summary of current and future earth observation satellite missions and how they benefit mapping the damages and extent of disasters.

Fig. 4.1 (a) Illustration of optical remote sensing. (b) Sentinel-2 imagery. (c) NDVI derived from Sentinel-2 data. (d) Illustration of SAR remote sensing. (e) ALOS-2 (L-band) imagery. (f) Sentinel-1 (C-band) imagery

2.1 Optical Satellite Missions

Table 4.1 lists current optical satellite missions. An explosive amount of data has become available in the last decade; the Sentinel-2 satellites alone acquire over one petabyte per year. Data policies differ depending on resolution: datasets from moderate-resolution satellites (e.g., Landsat-8 and Sentinel-2) are freely available, whereas those from very high-resolution satellites (e.g., Pleiades and WorldView-3) are commercial. For emergency response, even some commercial satellite images are openly distributed through special data programs (e.g., the Open Data Program for WorldView images and the Disaster Data Program for PlanetScope).

Table 4.1 Current optical satellite missions

Optical remote sensing records the solar radiation reflected from the surface in the visible, near-infrared, and short-wave infrared ranges, as illustrated in Fig. 4.1a. Reflected spectral signatures allow us to discriminate different types of land cover. Owing to its similarity to human vision, optical imagery is straightforward to analyze for damage recognition. A pair of pre- and post-disaster optical images is commonly used to detect pixel-wise or object-wise changes and identify damage levels of affected areas. In particular, if there is a clear change in the normalized difference vegetation index (NDVI) (e.g., Fig. 4.1c) or the normalized difference water index (NDWI) due to landslides or floods, affected areas can be detected easily and accurately, as in the sketch below.
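As a minimal sketch of this NDVI-based change analysis, assuming the pre- and post-event red and near-infrared bands are already loaded as NumPy arrays (the 0.3 threshold is an illustrative assumption, not a value from the text):

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + 1e-10)  # epsilon avoids division by zero

def ndvi_change_mask(pre_red, pre_nir, post_red, post_nir, drop=0.3):
    """Flag pixels whose NDVI dropped by more than `drop` after the event,
    a simple proxy for vegetation loss caused by landslides or floods."""
    delta = ndvi(post_red, post_nir) - ndvi(pre_red, pre_nir)
    return delta < -drop  # boolean mask of candidate affected pixels
```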

Optical satellite imaging systems have evolved in terms of spatial, temporal, and spectral resolution. Spatial and temporal resolution are critical for disaster damage mapping. Improvements in temporal resolution have been achieved by forming satellite constellations. For instance, the five-day revisit cycle of Sentinel-2 is accomplished by a constellation of twin satellites (i.e., Sentinel-2A and B). An extreme example is PlanetScope: daily acquisition of the entire globe is possible with a constellation of 135+ small satellites (i.e., Doves). This evolution in temporal resolution allows disaster damage mapping within a day under good weather conditions.

Spatial resolution is another key factor for ensuring the accuracy of disaster damage mapping. Medium-resolution satellites such as Landsat-8 and Sentinel-2 are sufficient for mapping large-scale surface changes due to floods, landslides, wildfires, and volcanic eruptions. High-resolution satellite data are necessary particularly when analyzing damage in urban areas. Visual interpretation in emergency response relies on sub-meter satellite imagery, such as Pleiades and WorldView, to identify building damage.

The major drawback of optical satellites is that they cannot acquire images when affected areas are covered by clouds. Because of this limitation, in many real cases the only pre- and post-disaster datasets available within a few days of a disaster come from different sensors. Integration and fusion of multisensor data sources are therefore crucial to deliver disaster damage map products.

2.2 SAR Satellite Missions

Unlike optical sensors, SAR sensors have the advantage of being undisturbed by clouds, making them invaluable for disaster response thanks to their reliable image acquisition schedule. Table 4.2 lists current and future SAR missions, together with their highest-resolution modes, the corresponding swath widths, their frequency bands, and launch dates. All of these satellites also have lower-resolution acquisition modes with increased spatial coverage. As can be seen from Table 4.2, even moderately large areas can easily result in multiple gigabytes of data if several sensors are used and acquisitions before and after an event are collected.

Table 4.2 Current and future SAR missions. Resolutions and swath widths depend on the acquisition mode. The table lists the maximum resolution and the corresponding swath width

As a quite recent development, several startup companies (ICEYE, Capella, and Synspective) have announced plans to create constellations of dozens of comparatively small and cheap satellites, which enable frequent and short-notice acquisitions. Such constellations would produce a wealth of data, compounding the need for both big data systems and algorithms.

The following publications provide more detailed information on the respective SAR satellites and list additional references: Morena et al. (2004) for RADARSAT-2, Lee (2010) for KOMPSAT-5, and Werninghaus and Buckreuss (2010) for TerraSAR-X, TanDEM-X, and the essentially identical PAZ satellite (Suri et al. 2015). Torres et al. (2012) describe ESA’s Sentinel-1 satellites, Bird et al. (2013) NovaSAR-S, Caltagirone et al. (2014) COSMO-SkyMed, Rosenqvist et al. (2014) SAOCOM, and Sun et al. (2017) Gaofen-3. Future SAR missions are covered by Rosen et al. (2017) for NISAR and Motohka et al. (2017) for ALOS-4, and De Lisle et al. (2018) introduce the RADARSAT Constellation Mission. Technical details and developments regarding small SAR satellite constellations are given in Farquharson et al. (2018) and Obata et al. (2019).

Many satellites have acquisition modes whose resolution suffices to detect changes and damage to individual buildings. In any case, large-scale destruction caused by earthquakes (Karimzadeh et al. 2018), wildfires (Tanase et al. 2010; Verhegghen et al. 2016), landslides, or flooding (Martinis et al. 2018) can be observed by all sensors. We cover these in greater detail in Sects. 4.4.1, 4.4.2, and 4.4.3. Here we introduce the reader to SAR image formation and how its characteristics are applicable to disaster damage mapping. For a more thorough introduction, we advise the interested reader to consult Moreira et al. (2013).

SAR sensors emit electromagnetic waves and measure the reflected energy (see Fig. 4.1d), called backscatter, which depends on the geometric and geophysical properties of the target. This renders SAR sensors sensitive not only to different kinds of land cover but also to physical parameters, such as soil moisture. In addition, depending on the SAR’s operating frequency, parts of the electromagnetic wave also penetrate the surface and image layers below the uppermost land cover.

Just like visible light, microwaves are polarized, and the polarimetric composition of reflected waves depends on the imaged targets’ geometric and physical properties. These polarimetric signatures permit further analysis and classification of the imaged area.

Inside one SAR resolution cell, i.e., pixel, numerous elemental scatterers reflect the impinging electromagnetic wave. The superposition of all these reflections makes up the received signal at the SAR sensor. Between two SAR acquisitions, changes of the elemental scatterers can be estimated via the so-called coherence, which provides a direct measure of differences.

All of these properties (backscatter, polarimetric composition, and coherence) are useful when analyzing disaster-struck areas; coherence, in particular, can be estimated directly from two co-registered acquisitions, as sketched below.
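A minimal sketch of such a coherence estimator, assuming two co-registered single-look complex SAR images `s1` and `s2` and an illustrative 5 x 5 averaging window:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(s1: np.ndarray, s2: np.ndarray, win: int = 5) -> np.ndarray:
    """Sample coherence |<s1 s2*>| / sqrt(<|s1|^2><|s2|^2>) over a
    win x win sliding window; values near 0 indicate strong change."""
    cross = s1 * np.conj(s2)
    # uniform_filter operates on real arrays, so filter real/imag parts separately
    num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
    den = np.sqrt(uniform_filter(np.abs(s1) ** 2, win)
                  * uniform_filter(np.abs(s2) ** 2, win))
    return np.abs(num) / (den + 1e-10)
```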

Some newer SAR satellite systems, namely PAZ, NovaSAR-S and the RADARSAT constellation, are additionally equipped with automatic identification system (AIS) receivers, enabling them to track shipping traffic. In most countries AIS transceivers are mandatory for vessels above a certain size. AIS is an additional data source that could be exploited for responding to disasters affecting ships.

3 Land Cover Mapping

Map information is necessary in all phases of disaster management. Mapping of buildings and roads is essential for rescue, relief, and recovery activities. Map information is generally well maintained in developed countries; however, this is not the case in developing countries, particularly where uncontrolled urbanization is taking place, and thus there is high demand for automatic updating of map information from satellite imagery at a large (e.g., country) scale.

Mapping of buildings, roads, and land cover types is one of the key applications of satellite imagery. Global land cover maps at high resolution have been derived from satellite data in the last decade. The Global Urban Footprint (GUF) was created with a ground sampling distance of 12 m by the German Aerospace Center by processing 180,000 TerraSAR-X and TanDEM-X scenes (Esch et al. 2013). The GUF data was released in 2012, freely available at full resolution for any scientific use and also open to nonprofit applications at a degraded resolution of 84 m. GlobeLand30 is the first open-access high-resolution land cover map, comprising 10 land cover classes for the years 2000 and 2010, produced by analyzing more than 20,000 Landsat and Chinese HJ-1 satellite images (Jun Chen et al. 2015). In 2014, China donated the GlobeLand30 data to the United Nations to contribute to global sustainable development and climate change mitigation.

Recently, building and road mapping technologies that apply machine and deep learning to high-resolution satellite imagery have improved dramatically. For instance, Ecopia U.S. Building Footprints powered by DigitalGlobe (now part of Maxar) was released in 2018 as the first precise, GIS-ready building footprint dataset covering the entire United States, produced by semi-automated processing based on machine learning. The 2D vector polygon dataset will be updated every six months using the latest DigitalGlobe satellite imagery to ensure up-to-date building footprint information. Going beyond 2D is the next standard in the field of urban mapping: 3D reconstruction and 3D semantic reconstruction using large-scale satellite imagery have been receiving particular attention in recent years.

Benchmark datasets and data science competitions have been playing key roles in advancing 2D/3D mapping technologies. Representative benchmark datasets are listed below.

  • SpaceNet: SpaceNet is a repository of freely available high-resolution satellite imagery and labeled training data for computer vision and machine learning research. SpaceNet was initiated by CosmiQ Works, DigitalGlobe, and NVIDIA in 2016. SpaceNet building and road extraction competitions were organized with over 685,000 building footprints and 8000 km of roads from large cities around the world (i.e., Rio de Janeiro, Las Vegas, Paris, Shanghai, and Khartoum).

  • DeepGlobe: DeepGlobe is a challenge-based workshop initiated by Facebook and DigitalGlobe in conjunction with CVPR 2018 to promote research on machine learning and computer vision techniques applied to satellite imagery and to bridge people from the respective fields with different perspectives. DeepGlobe was composed of three challenges: road extraction, building detection, and land cover classification. The building detection challenge used the SpaceNet data; the road extraction and land cover classification challenges used images sampled from the DigitalGlobe Basemap +Vivid dataset. The road extraction challenge dataset comprises images of rural and urban areas in Thailand, Indonesia, and India, whereas the land cover classification challenge focuses on rural areas (Demir et al. 2018).

  • BigEarthNet: The BigEarthNet archive was constructed by the Technical University of Berlin and released in 2019. The archive is a large-scale dataset composed of 590,326 Sentinel-2 image patches with land cover labels. BigEarthNet was created from 125 Sentinel-2 tiles covering 10 European countries, and the corresponding labels were provided by the CORINE Land Cover database. BigEarthNet advances research on the analysis of big earth observation data archives.

  • 2019 IEEE GRSS Data Fusion Contest: The 2019 IEEE GRSS Data Fusion Contest, organized by the Image Analysis and Data Fusion Technical Committee (IADF TC) of the IEEE Geoscience and Remote Sensing Society (GRSS) and Johns Hopkins University (JHU), promoted research in semantic 3D reconstruction and stereo using machine learning and satellite images. The contest was composed of four challenges: three of them address simultaneous estimation of land cover semantics and height information from single-view, pairwise, and multi-view satellite images, respectively; the last one is 3D point cloud classification. The contest used high-resolution satellite imagery and airborne LiDAR data over Jacksonville and Omaha, US (Le Saux et al. 2019).

One major challenge in land cover mapping is generalization ability. Most training data have been prepared for a limited number of countries and cities, and models trained on such data do not always work globally due to the differing characteristics of structures. The technical focus has therefore been on how to ensure generalization between different cities (Yokoya et al. 2018); a simple way to quantify this gap is sketched below. To exploit the capability of machine learning and maximize mapping accuracy, the simplest approach is to increase training data. Many mapping projects have been progressing in developing countries through annotation efforts by local people (e.g., Open Cities Africa). Collaborative mapping based on crowdsourced data, represented by OpenStreetMap, plays a major role in creating training data. The synergy of openly available big earth observation data, crowdsourcing-based annotations, and machine learning technologies will accelerate land cover mapping capability for the entire globe.
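One concrete way to expose the cross-city generalization gap is to train on one city and test on another. A hedged sketch with scikit-learn, where the per-pixel feature and label arrays for the two cities are hypothetical inputs:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def cross_city_gap(X_a, y_a, X_b, y_b):
    """Train on city A (features X_a, labels y_a) and evaluate on both
    city A and a geographically distinct city B to expose the gap."""
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_a, y_a)
    in_domain = accuracy_score(y_a, model.predict(X_a))     # optimistic upper bound
    cross_domain = accuracy_score(y_b, model.predict(X_b))  # realistic deployment score
    return in_domain, cross_domain
```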

4 Disaster Mapping

4.1 Flood Mapping

Besides the international cooperative projects and services for disaster response mentioned in the introduction, several flood mapping systems are also available.

  • Global flood detection system. The objective of this system is to detect and map major river floods using daily passive microwave remote sensing data (AMSR2 and GPM).

  • NASA global flood detection system. This system uses real-time TRMM Multi-satellite Precipitation Analysis (TMPA) and Global Precipitation Measurement (GPM) Integrated Multi-Satellite Retrievals (IMERG) data.

  • Tiger-Net. Through this initiative, ESA supports Africa with earth observation data from its satellites for water resource monitoring (including flood mapping).

  • Dartmouth Flood Observatory. Founded in 1993 at Dartmouth College, Hanover, NH, USA, and moved to INSTAAR at the University of Colorado in 2010, the observatory has used all available satellite datasets (optical and SAR) to estimate flood inundation maps using change detection methods.

  • DLR flood service. Sentinel-1 and TerraSAR-X SAR datasets are used to extract flood maps via a fully automatic processing chain (i.e., pre-processing, auxiliary dataset collection, initial classification, and post-processing) accessible through a web client.

For flood mapping, SAR images are a better choice than optical and UAV images, as the electromagnetic waves penetrate clouds and the resulting image is not corrupted. Water bodies are usually easy to detect owing to their low reflectance in optical data and low backscattering in SAR data. Two traditional but efficient methods are usually utilized (see Fig. 4.2). The first is to apply change detection between pre- and post-flood images and then use filters (e.g., morphological closing and opening) to remove noise; a minimal sketch is given below. This kind of technique is suitable for detecting flooded areas using single-source datasets, such as the Landsat series (Chignell et al. 2015), ENVISAT ASAR (Schlaffer et al. 2015), and Sentinel SAR (Li et al. 2018).
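A minimal sketch of this first approach, assuming co-registered pre- and post-event SAR backscatter images in decibels; the 3 dB threshold and 3 x 3 structuring element are illustrative choices, not values from the cited studies:

```python
import numpy as np
from scipy.ndimage import binary_closing, binary_opening

def flood_mask_from_sar(pre_db: np.ndarray, post_db: np.ndarray,
                        drop_db: float = 3.0) -> np.ndarray:
    """Change detection between pre- and post-flood SAR images followed by
    morphological opening/closing to suppress speckle-induced false alarms.
    Open water reflects the signal away from the sensor, so newly flooded
    pixels show a strong backscatter decrease."""
    candidates = (pre_db - post_db) > drop_db              # pixels that darkened
    cleaned = binary_opening(candidates, np.ones((3, 3)))  # remove isolated noise
    return binary_closing(cleaned, np.ones((3, 3)))        # fill small holes
```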

Fig. 4.2 Flood detection methods. (a) Change detection. (b) ‘Water’ change analysis

The second method is to extract water bodies using classification methods (water vs. non-water areas) and indices (listed in Table 4.3) from the pre- and post-flood images; the flooded area is then produced by analyzing the changes between the water bodies of the two periods (see the sketch after Table 4.3). Tong et al. (2018) applied a support vector machine and the active contour without edges model to extract water from Landsat 8 and COSMO-SkyMed data and then mapped the flood using an image differencing method.

Table 4.3 Water indices with their equations and sources for optical datasets
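As a sketch of this second approach, the snippet below uses McFeeters’ NDWI (one index of the kind listed in Table 4.3) to classify water in each epoch and then differences the masks; the zero threshold is a common but illustrative default:

```python
import numpy as np

def ndwi(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """McFeeters NDWI = (Green - NIR) / (Green + NIR); water is typically > 0."""
    return (green - nir) / (green + nir + 1e-10)

def flood_from_water_masks(pre_green, pre_nir, post_green, post_nir, thr=0.0):
    """Classify water before and after the event, then flag pixels that are
    water only in the post-event image, i.e., newly inundated areas."""
    water_pre = ndwi(pre_green, pre_nir) > thr
    water_post = ndwi(post_green, post_nir) > thr
    return water_post & ~water_pre
```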

Technical challenges and future directions are listed as follows:

  1. Mapping floods in small, specific areas. Very high-resolution remote sensing provides an opportunity to monitor floods at a small scale (e.g., a downtown area). However, water is often mixed with shadow areas; separating shadow from water bodies will improve the performance of flood monitoring.

  2. Developing more computationally efficient and robust methods that are insensitive to spatial resolution, spectral signature, or viewing angle. The Normalized Difference Flood Index (NDFI) (Cian et al. 2018), which is computed using multi-temporal statistics of SAR images, offers inspiration.

  3. Flood detection via satellite and social media using deep learning. Satellite images can provide large-scale flooding information, but we have to wait for the data, whereas social media can provide real-time information. A proper way should be found to integrate the information derived from satellite images and social multimedia. Interested readers can find more details at http://www.multimediaeval.org/mediaeval2018/.

Here, a typical example of combining medium-resolution SAR (i.e., Sentinel-1) and high-resolution optical (i.e., Jilin-1 sp06) datasets to detect flooded areas in Iran is shown in Fig. 4.3. Due to the coarse resolution of Sentinel-1, the small flooded areas in the city center (red rectangle areas in Fig. 4.3b) could not be detected using Sentinel-1 images alone; they can, however, be identified in the high-resolution optical images. Thus, the final flood map combines the urban flooded areas extracted from the high-resolution data with the non-urban flooded areas generated from Sentinel-1, as sketched below.
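A hedged sketch of this combination step, assuming the two flood masks and an urban mask have already been resampled onto a common grid (hypothetical preprocessing):

```python
import numpy as np

def merge_flood_maps(sar_flood: np.ndarray, optical_flood: np.ndarray,
                     urban_mask: np.ndarray) -> np.ndarray:
    """Trust the high-resolution optical result inside the city, where coarse
    SAR misses small flooded patches, and the SAR result elsewhere.
    All inputs are boolean arrays on the same grid."""
    return np.where(urban_mask, optical_flood, sar_flood)
```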

Fig. 4.3 (a) Location map of the target area. (b) False-color composite of Sentinel-1 SAR imagery (R: pre-event; G, B: post-event). (c) Post-event high-resolution optical image (Jilin-1 sp06). (d) Mapping of flooded areas

4.2 Landslide Mapping

Landslide disasters are frequently triggered by heavy rains and earthquakes (Martelloni et al. 2012; Tanyaş et al. 2019), and these deadly events can cause a large number of fatalities (Intrieri et al. 2019). As a result, there have been several efforts to map landslide susceptibility globally using big earth observation data sources (Stanley and Kirschbaum 2017). These activities take advantage of the relationship between landslides and four main variables: topographic slope computed from global topography models (SRTM, ASTER GDEM), land cover, rainfall data, and seismic activity (NASA Goddard Space Flight Center 2007; Muthu and Petrou 2007; Kirschbaum et al. 2010, 2015; Kirschbaum and Stanley 2018). These techniques are mainly based on models that integrate all variables using heuristic functions to evaluate the possibility of landslide occurrence (a toy version is sketched below). Such models can map landslide susceptibility at a continental scale (approximately 1 km2) as well as at regional and local scales with resolutions of a few hundred meters. These studies provide an overview of landslide hazard and can be used for mitigation and preparation activities before disasters occur.
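A toy version of such a heuristic model: each driver layer is normalized to [0, 1] and blended with expert weights. The weights below are purely illustrative, not taken from any of the cited models:

```python
import numpy as np

def landslide_susceptibility(slope, land_cover_score, rainfall, seismicity,
                             weights=(0.40, 0.20, 0.25, 0.15)):
    """Blend four driver layers (slope, land cover proneness, rainfall,
    seismic activity) into a single susceptibility score per grid cell."""
    def rescale(x):
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min() + 1e-10)  # min-max to [0, 1]
    layers = (slope, land_cover_score, rainfall, seismicity)
    return sum(w * rescale(layer) for w, layer in zip(weights, layers))
```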

Earth observation data is also applied to mapping landslide damage at smaller scales, focusing on particular events. Visual interpretation methods employ very-high-resolution optical imagery acquired from either space- or airborne platforms. Although these approaches provide highly reliable damage assessments, their applicability is often restricted by the availability of suitable images, i.e., cloud-free acquisitions under good illumination conditions. It is also important to note that these techniques require huge human effort for damage interpretation, especially in the case of rapid disaster response.

Change detection models, on the other hand, use a set of images acquired before and after the disaster to evaluate the damage. The land cover changes estimated from multi-temporal optical imagery are used for delineating the extent of landslides. Furthermore, spectral indices (e.g., normalized vegetation and soil indices) are also employed for landslide mapping (Rau et al. 2014; Lv et al. 2018; Yang et al. 2013; Zhuo et al. 2019; Ramos-Bernal et al. 2018). Integration of high-resolution digital terrain models allows estimation of landslide-induced damage such as the distribution of debris and land scars in the affected area (Dou et al. 2019; Bunn et al. 2019). As with the visual interpretation approach, the availability of suitable multi-temporal image datasets firmly bounds the deployment of these techniques.

In the case of SAR data, which can be acquired under almost all weather conditions, mapping techniques take advantage of the side-looking nature of these sensors. Two properties of SAR data, the intensity and phase information of the backscattered signal, are exploited for detecting landslide damage. The latter is widely applied for monitoring and mapping seismically induced landslides (Cascini et al. 2009; Kalia 2018). Interferometric SAR (InSAR) analysis using detailed DEM data provides the spatial distribution and displacement fields of the ground movement (Riedel and Walther 2008; Rabus and Pichierri 2018; Amitrano et al. 2019). Furthermore, time-series InSAR models allow monitoring of slow-moving landslides (Kang et al. 2017). On the other hand, change detection techniques using SAR intensity images are also powerful means to estimate the spatial distribution of landslide damage (Shi et al. 2015). For instance, texture features computed from multi-temporal datasets show good correlation with the areas affected by landslides (Darvishi et al. 2018; Mondini et al. 2019). Furthermore, in disaster response, where rapid geolocation of affected areas is crucial for rescue efforts, change detection based on intensity information has great applicability because of its low computation time and direct manipulation of geocoded images. For instance, on September 6, 2018, the Hokkaido Eastern Iburi Earthquake caused numerous landslides distributed over an extensive area (Yamagishi and Yamazaki 2018). Figure 4.4 shows a rapid landslide mapping (yellow segments) using a combination of pixel- and object-based change detection analysis, proposed by Adriano et al. (2020), of pre- and post-event Sentinel-1 intensity images acquired on September 1 and 13, 2018, respectively.

Fig. 4.4 (a) Location of the target area; the red star shows the earthquake epicenter. (b) Color composite of pre- and post-event Sentinel-1 intensity images (R: pre-event; G, B: post-event). (c) Google Satellite imagery corresponding to the same area shown in (b). (d) Landslide mapping results using multi-temporal Sentinel-1 imagery; the background corresponds to the color-composite RGB image

Recently, machine learning algorithms have been applied together with earth observation data to detect landslide areas. Well-established classifiers such as support vector machines and ensemble learning models are used to identify landslide areas from optical, SAR intensity, and SAR coherence images (Bui et al. 2018; Park et al. 2018; Burrows et al. 2019). Furthermore, deep neural networks are also employed for landslide detection (Ghorbanzadeh et al. 2019; Wang et al. 2019). These approaches focus on high-resolution remote sensing imagery and landslide-influencing features such as DEM data, land cover, and rainfall information.

4.3 Building Damage Mapping

Assessing building damage in the aftermath of major disasters, such as earthquakes, tsunamis, and typhoons, is crucial for rapid and efficient post-disaster relief activities. In this context, earth observation data is a good alternative for damage mapping because satellite imagery can observe large scenes of remote or inaccessible affected areas (Matsuoka and Yamazaki 2004). Based on the evolution of sensor platforms and their spatial resolution, damage mapping can be divided into two generations. Initial applications for building damage recognition were based on change detection analysis of moderate-resolution optical and SAR imagery, mainly from sensors launched in the late 1990s such as the Landsat-7 satellite and the European Remote Sensing (ERS-1) SAR satellite. These applications relied on the interpretation of texture and linear correlation features computed from pre- and post-event datasets. Moreover, owing to their relatively low spatial resolution (about 30 m), these methods were efficiently applied to building damage mapping at a block scale (Yusuf et al. 2001; Matsuoka and Yamazaki 2005; Kohiyama and Yamazaki 2005).

The following generation of high-resolution optical and SAR imagery, starting in the early 2000s with satellites such as QuickBird, GeoEye-1, TerraSAR-X, and COSMO-SkyMed, contributed to the development of frameworks for detailed mapping of building damage. These methods, in addition to change detection techniques, implemented sophisticated pixel- and object-based image processing algorithms for damage recognition (Miura et al. 2016; Tong et al. 2012; Brett and Guida 2013; Gokon et al. 2015; Ranjbar et al. 2018). Moreover, taking advantage of very-high-resolution datasets, sophisticated frameworks were implemented to extract building damage using only post-event images (Gong et al. 2016). Most of these methodologies rely on specific features of SAR data. For instance, some studies analyzed the polarimetric characteristics of radar backscattering that correlate with building damage patterns observed in SAR images (Yamaguchi 2012; Chen and Sato 2013). Furthermore, SAR platforms such as Sentinel-1 and ALOS-2 repeatedly acquire images, building up large time-series datasets. Phase coherence computed from multi-temporal SAR acquisitions can characterize the degree of change in urban areas in the case of earthquake-induced damage (Yun et al. 2015; Olen and Bookhagen 2018; Karimzadeh et al. 2018).

Recently, advanced machine learning algorithms have been implemented using multi-temporal and multi-source remote sensing data for mapping building damage. These methodologies learn from limited but properly labeled samples of damaged buildings to assign damage levels over the whole affected area (Endo et al. 2018). As a recent example, Adriano et al. (2019) used an ensemble learning classifier on SAR and optical datasets to map the building damage following the 2018 Sulawesi earthquake and tsunami in Palu, Indonesia. Their methodology successfully classified three levels of building damage with an overall accuracy greater than 90% (Fig. 4.5). Furthermore, their framework provided a reliable thematic map only three hours after all raw remote sensing datasets were acquired. A simplified sketch of this kind of classifier is given below.
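A simplified, hypothetical sketch of such an ensemble classifier (not the exact model of Adriano et al. 2019), where each row of `features` stacks multi-sensor observations for one building, e.g., SAR backscatter change, coherence loss, and optical index differences:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_damage_classifier(features: np.ndarray, labels: np.ndarray):
    """Fit an ensemble classifier for three damage levels
    (e.g., 0: minor, 1: moderate, 2: destroyed) from labeled samples."""
    clf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                                 random_state=0)
    return clf.fit(features, labels)

# Usage (hypothetical arrays): model = train_damage_classifier(X_train, y_train)
# damage_map = model.predict(X_all_buildings)
```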

Fig. 4.5 (a) Location of the target area; the red star shows the earthquake epicenter. (b) Pre-event WorldView-3 image. (c) Post-event WorldView-3 image. (d) Damage mapping results using multi-sensor and multi-temporal remote sensing data; the background corresponds to the pre-event Sentinel-1 SAR image captured on May 26, 2018

5 Conclusion and Future Lines

Open data policy in earth observation and international cooperation in emergency responses have expanded practical use of image and signal processing techniques for rapid disaster damage mapping. In this chapter, we have reviewed earth observation systems available for disaster management and showcased recent advances in land cover mapping, flood mapping, landslide mapping, and building damage mapping.

Although human visual interpretation is still required to determine detailed building damage levels, acquiring high-resolution images and conducting visual interpretation takes a long time. One possible future direction is to construct training data on past disasters via human visual interpretation and to develop machine learning models that can respond quickly to unknown disasters. Another challenge is that in many cases data cannot be obtained from the same sensor before and after a disaster (He and Yokoya 2018); how to extract disaster-induced changes from multisensor and possibly heterogeneous data sources acquired before and after disasters is a practical problem in damage mapping. Furthermore, it is important for the entire disaster management process to verify the accuracy of damage assessment results using in-situ data. Integration and fusion of earth observation data with ground-shot images and text information available online (e.g., news and SNS) is also a future subject. Building on remote sensing image and signal processing technology and human expert knowledge, machine learning technologies have the potential to improve both the accuracy and speed of damage mapping from big earth observation data.