
1 Introduction

Flood and surface water mapping is becoming increasingly necessary, as extreme flooding events worldwide can damage crop yields and contribute billions of dollars in economic damages, as well as social effects including fatalities and destroyed communities (Xiao et al. 2004; Kwak et al. 2015; Mueller et al. 2016). Utilizing earth observing satellite data to map standing water from space is indispensable to flood mapping for disaster response, mitigation, prevention, and warning (McFeeters 1996; Brakenridge and Anderson 2006). Since the early 1970s (Landsat, USGS 2013), researchers have been able to remotely sense surface processes such as extreme flood events to help offset some of these problems. Researchers have demonstrated countless methods, and modifications of those methods, to help increase knowledge of areas at risk and areas that are flooded using remote sensing data from optical and radar systems, drawing on both freely available public datasets and costly commercial ones.

In 1972, Landsat 1, also called the Earth Resources Technology Satellite-1 (ERTS-1), was launched, prompting an explosion of literature on the ability to map surface processes from space using wavelengths in the optical and near-infrared spectrum (Irons et al. 2016). Landsat 1 paved the way for the Landsat sensor series (1972 to the most recent launch, Landsat 8, in 2013), along with many other optical and radar surface monitoring sensors and sensor series, including the Advanced Very High Resolution Radiometer (AVHRR); Moderate Resolution Imaging Spectroradiometer (MODIS); Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER); Visible Infrared Imager Radiometer Suite (VIIRS); Satellite Pour l’Observation de la Terre (SPOT); Sentinel; Advanced Land Observing Satellite (ALOS); Envisat; Radarsat; and Soil Moisture Active Passive (SMAP).

Earth observing satellites capture images of the earth at varying spatial scales and with different orbital periods, making each satellite sensor and dataset unique. Optical sensors may capture surface reflectance data from visible blue, green, and red wavelengths, as well as emissivity data through infrared wavelengths. Though a variety of wavelengths and bandwidths are available, two bands found in many sensors are centered around the red wavelength (~650 nm) and the near-infrared wavelength (~850 nm) (Berkeley Lab 2016), with varying bandwidths. These two bands are widely used in surface monitoring studies, including mapping vegetation phenology, surface water and flooding, snowmelt, and drought (Rouse et al. 1973; Colwell 1974; Tucker 1979; Song et al. 2004; Pettorelli 2009; Hasan 2011; Gopinath et al. 2014; Abbas 2014). Infrared wavelengths are key for water studies because water absorbs them strongly, making it relatively easy to identify beside dry land (Frazier 2000; Lei 2009).

The availability of Landsat/ERTS-1 and AVHRR-1 data allowed scientists (Rouse et al. 1973; Colwell 1974; Tucker 1979) to develop an algorithm to quantify vegetation change over time, the Normalized Difference Vegetation Index (NDVI). Because the index uses near-infrared and red wavelengths, it can be applied to most sensor datasets. In addition, exploiting the difference between the near-infrared and red wavelengths allows NDVI to be applied to other studies such as water detection (Lei 2009), and it has inspired similar spectral indices designed specifically for water mapping, such as the Normalized Difference Water Index (NDWI) (Gao 1996; McFeeters 1996), the Vegetation Supply Water Index (VSWI) (Cai et al. 2011; Abbas et al. 2014), and the Normalized Difference Pond Index (NDPI) (Lacaux et al. 2006).

While these indices may be useful for analyzing surface conditions, they are also subject to errors introduced by input datasets. Some sensor-based errors include scan line errors (SLC-Off Products, 2013), scan angle errors (TIRS SSM Anomaly, 2015), and edge of swath pixel-bowtie effects (NOAA-NESDIS VIIRS User’s Guide 2013), while atmospheric conditions such as cloud coverage, cloud shadow, haze, pollution, scattered light from ground-based reflective objects, and atmospheric scattering may render many images unsuitable for analysis (Anderson et al. 2007; Holben 2007; Pettorelli et al. 2009). In areas where cloud coverage is pervasive, the use of cloud-penetrating radar is ideal, allowing data on surface texture to be collected instead of reflectance or emissivity (Brivio et al. 2002; Parinussa et al. 2016). When radar data is unavailable, preprocessing methods such as regression (Swets et al. 1999; Zhang et al. 2003), curve fitting (van Dijk et al. 1987), and maximum value (Holben 2007) formulas as well as cloud masking techniques (Fayne et al. 2015) have been implemented alongside atmospheric correction (University of California Berkeley College of Natural Resources) to determine the likely pixel value.

The sensors chosen for flood mapping efforts largely depend on the goals of the individual project, as well as sensor availability and funding. Optical sensors may be limited to detecting surface inundation, while radar sensors may be able to determine water depths (Smith 1997; Hess et al. 2003). The ability to see fine details on the ground, known as spatial resolution, and the ability to obtain data in a timely manner, known as temporal resolution, are important considerations in remote sensing research. Studies may sacrifice temporal resolution in favor of spatial resolution to create risk maps (Mueller et al. 2016; Revilla-Romero et al. 2015), while others may create high temporal resolution near real-time mapping products (Nigro et al. 2014). Further, studies may require elevation models (Gallant and Dowling 2003; Guerschman et al. 2011), land cover datasets (Townsend and Walsh 1998; Gallant and Dowling 2003; Sun et al. 2012), or hydrological modeling software (Knebl et al. 2005) to obtain a certain level of precision or accuracy.

This chapter will review methods for mapping floods and open water using spectral formulas and statistical methods, commenting on false color composite techniques with optical data and on physical models using radar and ancillary datasets such as land cover maps and digital elevation models (DEMs), and will conclude with a look into the future of flood mapping techniques and applications. Some methods are demonstrated with MODIS Terra 250- and 500-m data (path/row tile H28v07) (NASA-LPDAAC; USGS-EROS 2016) over the Lower Mekong Basin (LMB) to illustrate the visual impacts of the differences over the same study area. The demonstration area is located in Cambodia and Southern Vietnam, focused around the Tonle Sap Lake and Mekong Delta. This region was chosen because the LMB experiences monsoonal flooding between May and December; the MODIS data and a modified flood extent polygon (UNITAR-UNOSAT 2013) used in the example figures were observed during October 2013 (Fig. 5.1).

Fig. 5.1
figure 1

The demonstration region is the Lower Mekong Basin, seen here with MODIS (path/row tile H28v07). The southern region of the MODIS tile is extracted to highlight areas that are commonly flooded along with the modified flood extent

2 Optical Sensors

Flood mapping research using optical and near-infrared sensors may use a combination of statistics and empirical formulas to measure flood extent, such as spectral indices and single-band thresholds. To understand the capabilities and limitations of spectral indices, researchers must consider atmospheric penetration at certain wavelengths and the product availability of the desired wavelengths.

The United States Geological Survey (USGS) has produced an online Spectral Characteristics Viewer (USGS 2014), graphing the spectral response patterns of nine different minerals, nine vegetation types, four water types (ice, snow, clear, turbid), and three types of “desert varnish,” between 0 and 3000 nm (0–3 μm). The spectral viewer also allows users to view the bands relative to the spectral response graph for four Landsat sensors, Earth Observing-1 Advanced Land Imager, (EO-1 ALI), the ASTER and MODIS sensors on the Terra platform, and Sentinel 2A MultiSpectral Instrument (MSI).

Spectral graphs like this, and others (ASTER Spectral Library 2008), help to explain how spectral indices and wavelength-based algorithms allow scientists to identify water features as separate from land. The spectral bands from MODIS show sensitivity to differences in reflectance from lawn grass, dry grass, and water. Note how the lawn grass reflectance increases sharply in the near-infrared, while clear and turbid water show very low reflectance in the visible wavelengths and almost zero reflectance in the infrared wavelengths. The clear and turbid water, in blue and dark blue, do not reflect light past 1.2 μm, or 1200 nm. This allows for a clear delineation of water and other features using wavelengths beyond 1200 nm, such as MODIS bands 5, 6, and 7. In addition, the contrast between highly reflective grass and the very low reflectivity of clear and turbid water makes band 2 particularly useful.

Because water absorbs infrared radiation instead of reflecting it (Campbell and Wynne 2011), as also evidenced by the Spectral Viewer, many studies have been able to take advantage of the “dark pixel” values that result from low reflectance. As seen in Fig. 5.2, the near-infrared shows clear and turbid water having much lower reflectance than grasses, which yields an infrared image, where both water and grasses are present, showing grass as very bright and water as very dark.

Fig. 5.2
figure 2

USGS Spectral Viewer with dry and lawn grass, along with melting snow, clear water, and turbid water with Terra MODIS bands 1–7

2.1 Band Thresholding

In 1993, Manavalan et al. applied a density slicing technique to a series of near-infrared (760–900 nm) images to identify the appropriate thresholds for the land–water boundary in order to monitor reservoir capacity. Density slicing generally involves an iterative process of arbitrarily segmenting image values into intervals to aid in the visual identification of spectrally dissimilar features. Similarly, Frazier and Page (2000) compared the classification accuracy of density slicing to a more sophisticated maximum-likelihood classification for identifying water bodies. In Frazier and Page (2000), the boundaries to extract the water bodies were selected by first identifying 12 different training sites over three water body types (river, lagoon, and dam) and using the maximum and minimum values for all training areas across Landsat-5 TM bands 1–7. The study found that the values extracted for band 5 (1550–1750 nm) gave the best visual approximation of the ground truth image, achieving an overall accuracy of 96.9 %, compared to 97.4 % for the maximum-likelihood classification, demonstrating that single-band threshold techniques can be nearly as effective for mapping water as more data- and computation-intensive methods (Fig. 5.3).

Fig. 5.3
figure 3

Density slice threshold using Terra MODIS Band 6 (1628–1652 nm) 500 m MOD09A1 8-day composite October 24, 2013

The density slice technique demonstrated here shows how a grayscale infrared image (a) can first be transformed by applying a color ramp that helps distinguish between different features (b). The image here is in the raw digital number format from MODIS. Areas that reflect very strongly may reach values near 3000, while areas that absorb infrared will have very low values. Because water absorbs infrared, it is expected to have a very low reflectance in MODIS band 6. Finally, upon visual examination, a threshold, such as 1500 in this case, can be selected that best depicts the feature of interest (c).
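The density slice described above reduces to a single comparison against the raw digital numbers. A minimal sketch in NumPy, using invented band 6 values and the 1500 threshold from the example:

```python
import numpy as np

# Hypothetical MODIS band 6 digital-number values; the array and the 1500
# threshold follow the example in the text, not a calibrated product.
band6 = np.array([[120, 300, 2900],
                  [1400, 1600, 2500],
                  [90, 200, 3100]])

# Density slice: pixels at or below the threshold are flagged as water,
# since water absorbs strongly in the shortwave infrared.
water_mask = band6 <= 1500

print(water_mask)
```

In practice the threshold would be tuned visually per scene, as described above, rather than fixed in code.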

2.2 Spectral Indices

Other uses of red and infrared reflectance data combine the two bands into spectral indices. One such spectral index is the Normalized Difference Vegetation Index (Rouse et al. 1973; Colwell 1974; Tucker 1979). The index was created to identify and measure vegetation health and phenology in ERTS-1 data, using a normalized scale of −1 to 1, where 1 is very healthy vegetation and values approaching zero are unhealthy vegetation or not vegetation at all (McFeeters 1996). The contrast between the high absorption of the red wavelength and the high reflectance of infrared by plant chlorophyll allows researchers to normalize chlorophyll activity between different plant types and various stages of development (Tucker 1979; Gao 1996). In addition, the normalization of the infrared and red wavelengths allows researchers to use the negative side of NDVI, exploiting water’s absorption of infrared to separate land and water (Lacaux et al. 2006; Lei et al. 2009), which may also be useful in flood monitoring studies.
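The index itself is the standard normalized ratio (NIR − Red)/(NIR + Red); a short sketch, where the reflectance values are illustrative rather than from any real scene:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Healthy vegetation reflects strongly in the NIR, giving a high positive
# NDVI; water absorbs NIR, driving NDVI negative.
print(ndvi(0.45, 0.08))  # vegetation: strongly positive
print(ndvi(0.02, 0.06))  # water: negative
```

The same function applies per pixel to whole arrays, which is how it would be used on a reflectance image.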

Two similar studies in 1996 created indices specifically for measuring water, each called the Normalized Difference Water Index (NDWI). One index maximizes the green (Landsat 4 MSS 500–600 nm) reflectance of water features while minimizing the infrared (Landsat 4 MSS 800–1100 nm) reflectance (called NDWIg hereafter) in order to delineate open water features (McFeeters 1996). A modification of the NDWIg index helped to reduce sensitivity and over-detection of water in urban areas (Xu 2006). The MNDWI (MNDWIg) replaces the near-infrared wavelength (Landsat-5 TM 760–900 nm) with mid-wave infrared (Landsat-5 TM 1550–1750 nm), as there is greater contrast in the reflectance of lake water, urban areas, and vegetation in the mid-wave band in the study region than in infrared reflectance (Fig. 2 in Xu 2006).

Researchers compared these spectral indices to help understand their relevance across water coverage fractions and sensor types (Lei 2009). The analysis focused on combinations of the available infrared (SWIR/MWIR) wavelengths with green (variations of NDWIg), using Landsat 7 ETM+, SPOT-5, ASTER, and MODIS. The study found that for all sensors except MODIS, the shortwave band (~1550–1750 nm) worked best with the green band (~500–600 nm). For MODIS, a band with a wavelength shorter than the other sensors’ SWIR but longer than the NIR is recommended, as evidenced by the choice of MODIS band 5 (1230–1250 nm) over band 6 (1628–1652 nm), which is more similar to the SWIR bandwidths on the other sensors in the study. The research further concluded that NDVI is an inappropriate choice for delineating water bodies when shortwave and green bands are available (Lei 2009).

The second NDWI focused on vegetation liquid moisture using short wave infrared (AVIRIS 1240 nm) and infrared (AVIRIS 860 nm) (called NDWIs hereafter), where both wavelengths are sensitive to canopy chlorophyll and moisture content (Gao 1996). As NDWIs is sensitive to variations in vegetation moisture, it may be beneficial to identify spatial variations where forested or crop areas are becoming inundated with floodwater in order to delineate open water from vegetation with high water content.

The Normalized Difference Pond Index (NDPI) helps to identify small water bodies (greater than 100 m²) where vegetation might be present, which may not be detected by other water indices or NDVI, particularly as pixel sizes coarsen (Lacaux et al. 2006). NDPI was developed using the SPOT-5 sensor, utilizing the green (500–590 nm) and shortwave (1580–1750 nm, called middle infrared, MIR, in the original text) wavelengths. NDPI identifies standing water where vegetation is present, which may allow a more rigorous accounting of shallow water bodies that are not easily distinguishable with other water indices, which focus on pure water or turbid water with little vegetation, conditions that may dominate in deeper flooded areas.
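The water indices discussed in this section all share the normalized difference form and differ only in the band pairing. A sketch with hypothetical single-pixel reflectances (the values are invented; the NDPI sign convention follows Lacaux et al. 2006, with MIR first):

```python
import numpy as np

def normalized_difference(a, b):
    """Generic normalized difference: (a - b) / (a + b)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return (a - b) / (a + b)

# Illustrative reflectances for a single turbid-water pixel (assumed values).
green, nir, swir = 0.06, 0.03, 0.01

ndwi_g = normalized_difference(green, nir)    # NDWIg, McFeeters (1996)
mndwi_g = normalized_difference(green, swir)  # MNDWIg, Xu (2006)
ndpi = normalized_difference(swir, green)     # NDPI, Lacaux et al. (2006)
```

The choice of band pair, not the arithmetic, is what distinguishes these indices, which is why their flood extents in Fig. 5.4 differ.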

While some rice and grain crops may benefit from seasonal flooding, prolonged floods can cause immense damage to agricultural areas by washing away soils and crops. To contribute to agricultural flood risk assessment, the Modified Land Surface Water Index (MLSWI) was created by comparing combinations of near-infrared (841–876 nm) and two different shortwave bands (1628–1652 and 2105–2155 nm) (Kwak 2015). The near-infrared MODIS band 2 (841–876 nm) and shortwave infrared MODIS band 7 (2105–2155 nm) were shown to be ideal in Bangladesh. This is a surprising result compared with the laboratory spectra (Lei 2009), where the shorter wavelength (1230–1250 nm) was an improvement over the longer wavelength (1630–1650 nm) when green (550–570 nm) was the complementary bandwidth.

The surface water indices all show very similar maps of flooding in the Mekong Region, seen in Fig. 5.4. The Normalized Difference Water Index (NDWIg), when using a threshold of 0–1, shows the smallest flood extent (a). The Modified Normalized Difference Water Index (MNDWIg) shows more flooding, as it was meant to be more sensitive to the effects of water in heterogeneous pixels (b). Finally, the Normalized Difference Pond Index (NDPI) shows the largest flood extent, as it is very sensitive to water turbidity (c). The variance between the water indices might be due to the nature of water turbidity in the region and the relatively coarse resolution, compared to the finer resolution data used in the original index development.

Fig. 5.4
figure 4

Surface water indices (a) NDWIg, (b) mNDWIg, (c) NDPI

2.3 Color Composite

Earlier research (Ali et al. 1989) identified AVHRR bands 1 (580–680 nm) and 2 (725–1100 nm) as suitable for studying water turbidity and land–water separation, respectively, prompting a study by Rasid and Pramanik (1990) that used the two bands in a color composite method to delineate flood boundaries and identify areas inundated with deeper water. Researchers have also used near-infrared color composites to determine which pixels might contain a mixture of water and land, as transition zones can be difficult to distinguish in coarse resolution imagery (Chen 2013). The low reflectance in the near-infrared wavelengths allows researchers to identify turbid water, which may be shallower than clear water. The color composites aid in preliminary visual inspection before further studies incorporate other datasets such as elevation models and land cover datasets, which will be discussed in the next section (Table 5.1).

Table 5.1 Formulas, wavelengths, and thresholds used in Figs. 5.3 and 5.4

3 Physically Based Models, Additional Input Data

Physical data such as temperature, texture, or elevation have also proven reliable for mapping floods. Temperature can be derived from longwave infrared data or from brightness temperature conversions of passive microwave measurements, while texture and elevation can be derived from active microwave products and from photogrammetric products based on optical data. Additional data sources used in flood mapping include land cover datasets, outputs from hydrological models, and soil moisture and precipitation information.

3.1 Thermal

As water bodies have a relatively consistent temperature compared to dry land masses (Schaaf and Lakshmi 2000), land surface temperature (LST) can be used to delineate water bodies by identifying temperature contrasts between dry land and water. Researchers have studied this relationship over the United States using the High-resolution Infrared Radiation Sounder (HIRS) (Schaaf and Lakshmi 2000) as well as in Australia using MODIS (Parinussa et al. 2016), citing differences in surface temperature recorded between morning and evening satellite overpasses. The sensors used in these studies contain longwave infrared bands ranging from 10,600 to 12,510 nm that measure temperature through computations based on surface radiance and emissivity (Chedin et al. 1984; Wan 1999; Lei et al. 2016).

Cloud coverage during flood events makes the use of infrared imaging for flood detection difficult. While infrared bands may penetrate small particle masses such as haze or cirrus clouds, denser stratus clouds prevent clear observations of the surface. In these cases, cloud-penetrating radar can be used, as its long wavelengths pass through the relatively small particles of the clouds (Liou 2002; Parinussa 2008). Surface temperature from microwave measurements can similarly be used to identify flooding while penetrating cloud cover. It can be calculated from brightness temperatures observed at microwave wavelengths. Brightness temperature, a measure of radiation emitted from the surface, is recorded as the temperature at the antenna of the sensor. Surface temperature T_S (in kelvin) may be produced with varying accuracy, depending on the wavelength and algorithms used; the general principle of the conversion is given by the relationship between the brightness temperature T_B, the surface emissivity e, and the surface reflectivity r (Lakshmi 2013):

$$ e=1-r, $$
$$ {T}_{\mathrm{B}}=e{T}_{\mathrm{S}}. $$
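Rearranging the two relations above gives a single-channel estimate of surface temperature from brightness temperature. A minimal sketch with illustrative values; real retrievals use wavelength-dependent emissivities and considerably more elaborate algorithms:

```python
def surface_temperature(t_brightness, reflectivity):
    """Invert T_B = e * T_S with e = 1 - r (simplified single-channel form)."""
    emissivity = 1.0 - reflectivity
    return t_brightness / emissivity

# Illustrative (assumed) values: 280 K brightness temperature at the antenna
# and a surface reflectivity of 0.05, giving an emissivity of 0.95.
t_s = surface_temperature(280.0, 0.05)
print(t_s)  # surface temperature in kelvin, slightly above T_B
```

Because e ≤ 1, the retrieved surface temperature is always at least the brightness temperature, consistent with the equations above.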

The Advanced Microwave Scanning Radiometer—Earth Observing System (AMSR-E) and its successor AMSR-2 are dual-polarized passive microwave sensors with a wide range of spatial resolutions and a twice-daily revisit at 1:30 p.m. and 1:30 a.m. AMSR-E functioned from June 2002 to October 2011, while AMSR-2 became available beginning July 2012. MODIS is flown along with AMSR-E on the Aqua satellite platform as a complement of sensor types for hydrologic applications, capturing measurements at the same time. Both sensors can derive surface temperature products; however, Parinussa et al. (2008, 2016) found that, because of the relative consistency between the datasets and the high accuracy compared with ground measurements, a combined dataset could be proposed, featuring MODIS as the primary data and filling cloud cover gaps with AMSR-2. The combined MODIS/AMSR-2 surface temperature product could produce a daily flood map with no cloud coverage when day and night observations are compared. In contrast, studies have identified that the difference between vertically and horizontally polarized brightness temperatures observed simultaneously can also be used to identify open water, as large differences signify the strongly polarized signals found over open water (Choudhury 1989; Smith 1997).

In addition to identifying flooding through diurnal changes in surface temperature, LST data can be used with NDVI to create the Vegetation Supply Water Index (VSWI), as \( \mathrm{VSWI}=\mathrm{NDVI}/\mathrm{LST} \), which was originally created to monitor drought conditions by identifying vegetation stress in arid settings (Cai et al. 2011). The inverse of the drought values may be used to identify above-average moisture. A modification of the VSWI, known as the Normalized Vegetation Supply Water Index,

$$ \mathrm{NVSWI}=\left(\frac{\mathrm{VSWI}-{\mathrm{VSWI}}_{\min }}{\left({\mathrm{VSWI}}_{\max }-{\mathrm{VSWI}}_{\min}\right)}\right)*100 $$

normalizes the values over the study period to give more context to the severity of the output values, which are then segmented into equal interval classifications. The drought values are below 60, while wet values are above 80 (Abbas et al. 2014).
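The normalization above is a standard min–max rescaling of VSWI over the study period. A sketch with hypothetical NDVI and LST values (three pixels, invented for illustration):

```python
import numpy as np

def nvswi(ndvi, lst):
    """VSWI = NDVI / LST, min-max normalized to 0-100 over the study data."""
    vswi = np.asarray(ndvi, dtype=float) / np.asarray(lst, dtype=float)
    return (vswi - vswi.min()) / (vswi.max() - vswi.min()) * 100.0

# Hypothetical pixels: low NDVI over hot ground through high NDVI over
# cooler ground (LST in kelvin). Values are assumptions for illustration.
scores = nvswi([0.2, 0.5, 0.8], [310.0, 300.0, 295.0])
print(scores)  # 0 for the driest pixel, 100 for the wettest
```

Per the classification described above, pixels scoring below 60 would fall in the drought classes and those above 80 in the wet classes (Abbas et al. 2014).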

The three vegetation indices focus on identifying vegetation health rather than water. However, based on the premises that floodwater may obscure vegetation or that floods might destroy healthy vegetation, vegetation indices may be used to help map floods. The Normalized Difference Water Index (NDWIs) identifies the water content of vegetation; in Fig. 5.5a, the flooding in the region is shown with high water content values of 0–0.20. The Normalized Difference Vegetation Index (NDVI) is generally symbolized with values from −1 to 1, with healthy vegetation above zero; Fig. 5.5b shows the flooded region as negative values. The Vegetation Supply Water Index in Fig. 5.5c is based on NDVI and provides more information than NDVI alone because land surface temperature information is added. Over surface water, surface temperature cannot always be retrieved from MODIS, leaving the no-data regions shown in white inside the floodplain.

Fig. 5.5
figure 5

Vegetation indices for water content and vegetation health (a) NDWIs, (b) NDVI, (c) VSWI

3.2 Radar Imaging

The surface temperature example is not the only combination of radar and optical remote sensing methods used to map floods. Radar measures ground texture through backscatter at wavelengths much longer than those of the optical spectrum; radar wavelengths are generally expressed as lengths in centimeters (cm) or as frequencies (MHz, GHz). Ground features are expected to have a coarse texture, giving a speckled appearance in radar imagery, whereas water features are expected to be very flat, or specular. One study showed that while a C-band (5.6 cm) synthetic aperture radar (SAR) image could penetrate cloud cover to identify surface features, backscatter from wind-caused waves reduced the specular nature of the water bodies, preventing the water from being identified by the sensor (Alsdorf et al. 2007). It was then suggested that an L-band (24-cm wavelength) sensor may be ideal for measuring inland surface water bodies, as it is not as sensitive to the rough texture of water caused by wind or flow turbulence.

Although wind roughening can make raw backscatter data unreliable for direct observation of water surfaces, the backscatter coefficients of X- and L-band sensors have been used to extract flood inundation extent when surface water is not specular or is mixed with vegetative features, such as in wetlands (Rosenqvist and Birkett 2002; Hess et al. 2003). To complement SAR systems that can measure stage height, Smith (1997) devised a method combining European Space Agency SAR data with optically derived inundation extent from Landsat to obtain an elevation/discharge rating curve for deriving water elevations at the land–water boundary.

Researchers have also noted that while satellite imagery in the visible and near-infrared wavelengths is useful for mapping water extents, canopy cover and emergent vegetation can obscure and mix pixels, respectively, causing classification errors (Töyrä et al. 2002). That study combined C-band data from the RADARSAT satellite radar sensor with NIR (790–890 nm) data from SPOT to create a composite for identifying flood boundaries; the approach was repeated in another study, where visible imagery from the Landsat thematic mapper (TM) and the Envisat advanced synthetic aperture radar (ASAR) system were used to identify flooded regions (Ramsey et al. 2012). Acquiring flood depth information can also be difficult using visible imagery in areas with varying vegetation types or regularly flooded marshes (Rasid and Pramanik 1990; Ramsey et al. 2012). The Ramsey et al. (2012) study addressed this problem by using SAR and ASAR to identify relative water penetration depths in different marsh areas.

3.3 Digital Elevation Models

Incorporating ancillary data such as elevation is an important part of flood mapping. When available, digital elevation models are used in conjunction with optical, radar, and modeled data.

The integration of optical and radar data with digital elevation models using geographic information system (GIS) processing techniques is described in Townsend and Walsh (1998). Their Position Above the River Index (PARI) model is an integrative approach that creates a potential inundation map based on the river’s proximity to other hydrologic features, such as tributaries or streams.

In a study based in northern Italy, researchers found that due to the delay of the satellite overpass relative to the peak inundation time, only a fraction of the flooded area was observed by the satellite (Brivio et al. 2002). In a technique similar to the PARI model of Townsend and Walsh (1998), Brivio et al. (2002) used a digital elevation model to build a cost-distance matrix quantifying the difficulty of water traveling from the river to the regions mapped as flooded with C-band SAR after the peak; the matrix was then used to trace the path of the water from the river to the flooded regions.

Flood depths add a useful dynamic to flood maps, giving the user specific information about the inundation level and the types of risk that exist in an area. Emergency planners and disaster mitigation teams typically require water depth information in areas other than at gage locations. One method creates water depth grids by subtracting the DEM from the water surface elevation of the inundation extent (Lant 2013).
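Under the simplifying assumption of a flat water surface, the subtraction can be sketched in a few lines; the transect elevations and water surface value below are invented for illustration:

```python
import numpy as np

# Hypothetical 1-D terrain transect: ground elevations in meters, and a
# flat water surface elevation inferred at the mapped inundation boundary.
dem = np.array([12.0, 10.5, 9.0, 8.2, 9.5, 13.0])
water_surface = 11.0  # assumed elevation of the land-water boundary (m)

# Depth grid: water surface minus terrain, clipped to zero where the
# ground sits above the water surface (outside the flood).
depth = np.clip(water_surface - dem, 0.0, None)
print(depth)
```

On a real grid the water surface would vary spatially, interpolated from gage readings or from elevations sampled along the mapped flood boundary.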

In calibrating coarse resolution mapped flood extents, it has been suggested (Fayne and Bolten 2014; Guerschman et al. 2011) that a higher resolution elevation model be used to remove areas that would be unlikely to flood, such as ridge tops or high hillsides. Gallant and Dowling (2003) created a multistep iterative process to categorize digital elevation models as flat valley bottoms, flat ridge tops, and areas in between, known as the Multi-Resolution Index of Valley Bottom Flatness (MRVBF). As flooding is expected to occur along valley bottoms, a threshold may be used to mask out values that MRVBF and its complement, the Multi-Resolution Ridge Top Flatness (MRRTF), classify as high hillsides or hilltops.

3.4 Classification Algorithms Using Elevation Data

Another method of mapping floods using coarse resolution imagery and elevation models is the Open Water Likelihood (OWL) algorithm. OWL uses a logistic regression incorporating MODIS shortwave infrared reflectance bands, NDVI, NDWIs, and MRVBF to estimate the fraction of a coarse resolution pixel that is inundated (Guerschman et al. 2011). The formula is as follows:

$$ \mathrm{OWL} = {\left(1+ \exp\;\left({a}_0+{\displaystyle \sum_{i=1}^5}{a}_i*{x}_i\right)\right)}^{-1} $$

where

  • a0 = 3.41375620
  • a1 = 0.000959735270
  • a2 = 0.00417955330
  • a3 = 14.1927990
  • a4 = 0.430407140
  • a5 = 0.0961932990
  • x1 = SWIR (1628–1652 nm) MODIS band 6 (reflectance × 1000)
  • x2 = SWIR (2105–2155 nm) MODIS band 7 (reflectance × 1000)
  • x3 = NDVI
  • x4 = NDWIs
  • x5 = MRVBF
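Taking the coefficients exactly as listed (their signs should be verified against Guerschman et al. 2011 before any real use, as minus signs may have been lost in typesetting), the inverse-logit form of OWL can be sketched as:

```python
import math

# Coefficients as printed in the list above; signs are unverified assumptions.
A0 = 3.41375620
A = [0.000959735270, 0.00417955330, 14.1927990, 0.430407140, 0.0961932990]

def owl(swir_b6, swir_b7, ndvi, ndwi_s, mrvbf):
    """Open Water Likelihood: inverse logit of a linear combination of inputs.

    swir_b6, swir_b7 are MODIS band 6/7 reflectance x 1000, per the text.
    """
    x = [swir_b6, swir_b7, ndvi, ndwi_s, mrvbf]
    z = A0 + sum(a * xi for a, xi in zip(A, x))
    return 1.0 / (1.0 + math.exp(z))

# Illustrative (invented) inputs; the output is a fraction between 0 and 1.
print(owl(100.0, 50.0, -0.3, 0.2, 5.0))
```

The inverse-logit guarantees an output in (0, 1), matching OWL's interpretation as an inundated pixel fraction.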

This method was again validated (Chen et al. 2013), as it was applied to 500-m MODIS Daily and 8-day images, and was visually compared against Landsat 5 (Fig. 5.6).

Fig. 5.6
figure 6

(a) SRTM Elevation Model in meters, (b) MrVBF, (c) Water depth grid in meters, (d) OWL Water Likelihood Fraction

The four elevation-based products shown here are derived from 90-m elevation data (a) collected by the Shuttle Radar Topography Mission and preprocessed by removing voids and sinks (CGIAR-CSI). The permanent water bodies are overlaid in white. The Multi-resolution Valley Bottom Flatness (b) used the SRTM data as an input to estimate the flatness of the floodplain, and therefore the likelihood of deposition. The light blue region in b is the flattest, while the white region is the smoothed, flat water surface. The water depth grid (c) was created by subtracting a binary classification of the NDPI (Fig. 5.4c) from the elevation model. Using this method, the region surrounding the lake shows the deepest flooding, while the delta shows shallower flooding. The Open Water Likelihood (OWL) algorithm combines inputs from two shortwave infrared bands, NDWIs (Fig. 5.5a), NDVI (Fig. 5.5b), and MrVBF (b) to identify what fraction of each pixel is likely inundated. Unlike other water detection algorithms, OWL is more sensitive to mixed pixels, reducing the uncertainty introduced by algorithms that use fewer input variables.

Outputs from the OWL and MrVBF algorithms have also been combined with Landsat imagery in a decision tree and logistic regression approach to create binary classifications of water bodies over time, which were then merged into a map of cumulative observations of surface water from space. Similarly, another study used a regression tree approach to integrate predictors of water presence and derive a map of water fraction, rather than a binary classification, from coarse resolution MODIS imagery.

As many studies have cited problems with cloud and terrain shadow being spectrally similar to the low reflectance of water in the infrared wavelengths (Xu et al. 2006; Sun et al. 2012; Nigro et al. 2014), Li et al. (2015) used a geometric algorithm to identify and remove cloud shadow. This study is relevant to all of the optical research discussed here, as many of the spectral indices and thresholds mentioned in section one use infrared reflectance. The terrain shadow removal study found that the root-mean-square (RMS) height and the internal and external height differences are good indicators of surface roughness and effective for delineating water from terrain.

3.5 Software Models, Rainfall, and Soil Moisture

It is also worth mentioning that flood inundation maps can be prepared using computer software, with input data such as gage height, elevation models, and land cover maps. The Hydrologic Engineering Center, part of the US Army Corps of Engineers, developed the River Analysis System (HEC-RAS) and other hydrological modeling software, which is commonly used in combination with remotely sensed data (Lant 2013). The software suite enables researchers to create inundation maps and surface profiles and to model flow direction and physics, while taking into account topographic features such as surface roughness and slope, seasonal changes in vegetation, and anthropogenic factors such as changes in impervious surfaces or crop cycles (United States Army Corps of Engineers HEC-RAS).

The implementation of software for flood modeling is particularly useful compared to, or in conjunction with, remote sensing studies, as issues with temporal latency, cloud cover, cloud or terrain shadow, and spatial resolution are reduced or eliminated. Incorporating rainfall estimates is particularly useful for flood forecasting and flash flood analysis (Krajewski and Smith 2002). Many of the studies discussed here focus on slow and persistent flooding; however, one study created a framework to map inundation threats and flash flooding at city and regional scales by integrating the HEC-RAS system with precipitation data (Knebl 2005), while another estimated flood extent by combining precipitation data with a routing model (Wu et al. 2014).
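The routing idea can be illustrated with a minimal linear reservoir, in which stored water drains at a rate proportional to its volume, turning a rainfall series into an outflow hydrograph. The function name, recession constant, and rainfall series below are illustrative assumptions, not the routing model used in the cited studies:

```python
def linear_reservoir(rainfall, k=0.3, storage=0.0):
    """Route a rainfall series (mm per step) through a linear
    reservoir: the outflow each step is k * storage.

    Returns the outflow hydrograph (mm per step).
    """
    outflow = []
    for p in rainfall:
        storage += p          # rain adds to storage
        q = k * storage       # linear release
        storage -= q
        outflow.append(q)
    return outflow

# A pulse of rain followed by dry steps: flow peaks, then recedes
hydrograph = linear_reservoir([10.0, 0.0, 0.0, 0.0])
```

Even this toy model reproduces the characteristic rise and exponential recession of a flood wave; operational routing models add channel geometry, travel times, and spatially distributed inputs.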

As the surface soil moisture state is key to the infiltration or runoff of precipitation (Entekhabi et al. 2010), the recently launched Soil Moisture Active Passive (SMAP) sensor (NASA-JPL-SMAP) and other soil moisture products can be useful tools for mapping floods and determining flood risk when soil moisture approaches saturation. Analogous to the use of radar backscattering to detect standing water, the distinct dielectric properties of water and dry soil allow the water content of soil to be measured as a volumetric fraction. If the soil is approaching saturation, additional rainfall is more likely to run off and cause flooding. While SMAP data were not available during the demonstration year, the 2015 volumetric soil moisture from the SMAP sensor (L3 SM_P 36 km 2015, National Snow and Ice Data Center NSIDC 2015, 2016) captures increased soil moisture around the areas shown as flooded in the 2013 MODIS data.

The SMAP volumetric soil moisture data is publicly available at 36 km, although special algorithms and processing may be capable of producing higher resolution products at 9 or 3 km. The 36-km pixels may be too coarse to measure flooding independently, but it is clear in Fig. 5.7 that the SMAP sensor captures increased moisture over the target region. While SMAP is not intended to map floods, its ability to identify soils that are approaching saturation before a flood event is very important for flood hazard and damage mitigation.
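The idea that near-saturated soils signal elevated flood risk can be sketched as a simple threshold on volumetric soil moisture. The porosity value and threshold fraction here are illustrative assumptions, not SMAP product parameters:

```python
import numpy as np

def saturation_risk(vsm, porosity=0.45, threshold=0.9):
    """Flag pixels whose volumetric soil moisture (m^3/m^3) is
    approaching saturation, i.e. a high fraction of the pore
    space is already filled and further rain is likely to run off.

    vsm       -- SMAP-style volumetric soil moisture grid
    porosity  -- assumed soil porosity (the saturation ceiling)
    threshold -- fraction of porosity treated as "near-saturated"
    """
    fill_fraction = np.clip(vsm / porosity, 0.0, 1.0)
    return fill_fraction >= threshold

vsm = np.array([0.10, 0.30, 0.42])
print(saturation_risk(vsm))  # only the wettest pixel is flagged
```

In practice porosity varies with soil texture, so an operational version would draw the saturation ceiling from a soil database rather than a single constant.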

Fig. 5.7

SMAP 36 km Volumetric Soil Moisture Data October 23, 2015

3.6 Change Detection Methods

Finally, change detection is a universal method for monitoring flood events, as areas that were not previously flooded will appear different spectrally, thermally, and texturally. Simple subtraction between images of different dates, also known as differencing (Song et al. 2004); computing the z-score of newer data against the mean and standard deviation of a baseline period (Sarp 2011); and percent change formulas (Hasan and Islam 2011) are all useful tools for identifying surface changes and can be implemented across sensor types and spatial and temporal resolutions. However, it is important to note that although these algorithms may not be temporally dependent, the latency between compared datasets may undermine the validity of the results, as other factors may contribute to the observed change. Trend analysis is one method of monitoring flooded areas over time, as individual pixel values or basin averages can be assembled into a time series to identify when flooding is occurring, or may occur in the future when flooding is cyclical. While Nash et al. (2014) used auto-regression techniques on NDVI to predict seasonal variations in vegetation health, another study used the TIMESAT software (Eklundh et al. 2009) to fit a function to the time series of NDVI data for snow-vegetation dynamics (Jönsson et al. 2010).
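The differencing and z-score approaches described above can be sketched in a few lines of NumPy. The function names, the two-standard-deviation threshold, and the toy baseline stack are illustrative assumptions:

```python
import numpy as np

def difference_change(before, after):
    """Simple image differencing: after - before, per pixel."""
    return after - before

def zscore_anomaly(baseline_stack, new_image, z_thresh=2.0):
    """Flag pixels where the new image departs from a baseline of
    earlier dates by more than z_thresh standard deviations.

    baseline_stack -- 3-D array (time, rows, cols) of past images
    new_image      -- 2-D array for the date being tested
    """
    mean = baseline_stack.mean(axis=0)
    std = baseline_stack.std(axis=0)
    # Guard against zero variance (perfectly stable pixels)
    z = (new_image - mean) / np.where(std == 0, np.nan, std)
    return np.abs(z) > z_thresh

# Toy baseline of three dates over a 2x2 scene; one pixel floods
baseline = np.array([[[0.2, 0.2], [0.5, 0.5]],
                     [[0.3, 0.2], [0.5, 0.6]],
                     [[0.1, 0.2], [0.5, 0.4]]])
new = np.array([[0.2, 0.2], [0.5, 1.5]])
print(zscore_anomaly(baseline, new))  # only the flooded pixel is True
```

As the surrounding text cautions, an anomaly flag only indicates change relative to the baseline; attributing it to flooding rather than, say, a crop cycle still requires an index or classification step.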

4 Conclusion

The field of remotely sensed flood mapping continues to evolve and improve. The development of real-time data access systems has allowed scientists to harness the power of programming languages such as Python, C++, and R to ingest data as soon as it becomes available and to produce outputs rapidly. Instead of simply hosting data in a list format online for users to download, many authors have found it useful to present their flood products on online web dashboards, such as the Dartmouth Flood Observatory (Brakenridge and Daniel 1996), the MODIS Near Real-Time (NRT) Global Flood Mapping Project (Nigro et al. 2014), and the Near Real-Time Flooding in Southeast Asia Project (Ahamed and Bolten 2016). Both the Global Flood Mapping and the Flooding in Southeast Asia projects rely on methods discussed here using MODIS data: infrared band thresholding and spectral indices combined with change detection, respectively.

Free publicly available and costly commercial Earth observing satellites are able to capture images of the earth at varying spatial scales and with different orbital periods. Optical sensors may capture surface reflectance data in the visible blue, green, and red wavelengths, as well as longer infrared bands and emissivity data. Passive and active microwave sensors may measure brightness temperature or backscattering coefficients to identify moist and saturated soil. Finally, these first-level datasets can be used as inputs to computational modeling software, time series analyses, or regression algorithms to provide value-added improvements, increasing the spatial or temporal resolution of the input datasets or creating an entirely different product.

Utilizing earth observing satellite data to map standing water from space is indispensable to flood mapping for disaster response, mitigation, prevention, and warning, as extreme flooding events worldwide can damage crop yields and contribute to billions of dollars in economic damages, as well as social effects including fatalities and destroyed communities. The increase in the quantity and variety of flood mapping techniques using satellite data has allowed broader, less technical audiences to benefit from flood products. The use of remotely sensed data by diverse audiences increases general knowledge of flooding in a given area and may help to mitigate the pervasive economic and social damages caused by flooding.