1 Introduction

Disaster monitoring from space can provide a powerful tool for advance warning, assessment of the affected areas and coordination of relief measures. Disaster monitoring requires a quick, almost instantaneous, response, which could be achieved if the complete data evaluation were carried out on board. Most earth observation (EO) satellites, such as Landsat, SPOT and IKONOS, are large and expensive missions, taking many years to develop. However, small satellites, and micro-satellites in particular, are emerging as an attractive alternative owing to their lower cost, shorter development time and state-of-the-art imaging sensors (Bretschneider et al. 2005). Surrey Satellite Technology Limited (SSTL), a spin-off company of the University of Surrey and a commercial manufacturer of small satellites, has deployed the disaster monitoring constellation (DMC) (Curiel et al. 2003), which at present consists of five micro-satellites in low earth orbit (LEO). The DMC provides images to disaster relief agencies worldwide in times of need. The constellation offers a low-cost approach to disaster monitoring based on daily coverage of any place on earth using medium spatial resolution multispectral images.

The ability to detect temporal changes in images is one of the most important functions of intelligent image processing systems for hazard and disaster monitoring applications. Change detection analysis, which requires two or more multispectral or synthetic aperture radar (SAR) images acquired at different times, has recently been adopted in various applications. NASA's earth observing one (EO-1) spacecraft carries several on-board science analysis experiments, including cloud detection, flood scene classification and change detection (Chien et al. 2005). In a related project, a specialised processor is being developed for repeat-pass change detection and hazard management on board (Lou et al. 2004). It is predicted that future satellite missions will be capable of carrying out intelligent on-board processing tasks such as image classification, compression and change detection (Zhou and Kaufmann 2002).

Flooding is a devastating disaster that occurs all over the earth. The main causes of flooding are heavy rain, hurricanes and undersea earthquakes, with heavy rain being the dominant factor. Rain that falls over extended periods causes rivers to overflow and flood nearby areas. Hurricanes destroy dams and levees, releasing large volumes of water. Undersea earthquakes can generate large tidal waves, such as tsunamis, destroying and flooding coastal areas. The tsunami disaster of 26 December 2004 (Chen et al. 2005) caused human tragedy and loss of infrastructure on a very large scale. Processing of downloaded satellite imagery has proven to be effective in monitoring flooding events. Although envisaged for future missions, flood monitoring on board EO satellites is not yet available due to satellite design limitations and restricted on-board computing resources.

This paper presents the results of a feasibility study on intelligent image processing and decision-making for flood monitoring on board small satellites. An automatic change detection system for flood monitoring missions is proposed (Vladimirova et al. 2006). A novel solution to flood detection based on the combined use of optical imagery and GPS reflectometry data is introduced. A fuzzy inference engine is employed in the decision-making process, which generates control signals to other subsystems on board the satellite. Multispectral images from SSTL and Landsat satellites, as well as images obtained from the Internet, are used to evaluate the performance of the investigated algorithms and concepts.

The paper is structured as follows. Section 2 reviews related work on intelligent image processing for use on board satellites. Section 3 introduces the proposed intelligent on-board system and discusses the decision-making block. Section 4 presents evaluation results and Sect. 5 concludes the paper.

2 Image processing on board small satellites

Current commercial EO satellites have very limited image processing capabilities on board. They mostly operate according to a ‘store-and-forward’ mechanism, where the images are stored on board after being acquired from the sensors and are downlinked when contact with a ground station occurs. The implementation of high performance image processing on board satellites is a challenging task. Many factors have to be considered when deciding on the type of hardware and software to be used on board spacecraft. Intelligent imaging capabilities have already been incorporated in several EO small satellite missions for experimental purposes. For example, small satellites such as UoSAT-5 (Fouquet 1992), BIRD (Haller et al. 2002) and the project for on-board autonomy (PROBA) (Bermyn 2000) carry sophisticated experimental imaging payloads. This section briefly reviews advanced imaging payload systems of earth observing small satellites.

The on-board imaging architecture first flown on UoSAT-5 has also been implemented on other SSTL small satellite missions. For example, TiungSAT-1, which was launched in 2000, had two earth imaging systems (EIS): a multi-spectral earth imaging system (MSEIS) and a meteorological earth imaging system (MEIS). TiungSAT-1 carried two transputers (T805) as imaging processors, clocked at 20 MHz and equipped with 4 MB of SRAM. The TiungSAT-1 EISs were capable of autonomous histogram analysis ensuring optimum image quality and dynamic range, image compression, autonomous cloud editing and highly compressed thumbnail image previews. UK-DMC is a satellite of the standard DMC design, with added research and development payloads. Like all of the standard DMC satellites, it carries an optical imaging payload developed by SSTL to provide 32 m ground resolution with an exceptionally wide swath of over 640 km. The payload uses green, red and near infrared bands equivalent to Landsat ETM+ bands 2, 3 and 4. In comparison to the other DMC satellites, UK-DMC features increased on-board data storage, with 1.5 GB capacity. Images are returned to the SSTL mission operations centre using the Internet Protocol over an 8 Mbps S-band downlink (UK-DMC 2008).

The BIRD satellite, developed by the German Aerospace Center (DLR), is another small satellite with image processing on board. The imaging system of the BIRD satellite is based on two infrared sensors and one charge-coupled device (CCD) camera. A distinctive feature is the specialised hardware unit based on the NI1000 neural network processor, which was integrated into the payload data handling system (PDH) of the satellite. The PDH is a dedicated computer system responsible for high-level command distribution and science data collection among all payloads on the BIRD satellite. The neural network processor implements an image classification system capable of detecting fires and hotspots.

PROBA, a small satellite mission of the European Space Agency (ESA), carries advanced autonomy experiments on board (Bermyn 2000). The Compact High Resolution Imaging Spectrometer (CHRIS), an EO instrument, demonstrated on-board autonomy with respect to the attitude and orbit control system (AOCS), data handling and resource management. The images from CHRIS are processed in a digital signal processor (DSP) based payload processing unit (PPU) operating at 20 MHz. The PPU provides 1.28 Gbit (164 MB) of mass memory and acts as the main processing block for all on-board cameras and other payload sensors.

Significant milestones with respect to intelligent on-board processing are the NASA autonomous “Sciencecraft” concept and the TechSat-21 satellite mission of the US Air Force Research Laboratory (AFRL). The TechSat-21 experiment was intended to demonstrate the ability of multiple small satellites flying in formation to perform missions traditionally carried out by single, larger satellites. The TechSat-21 project also included the development of on-board science algorithms for image classification, compression and change detection (Chien et al. 2002). Under this programme, a specialised processor for change detection was developed, implemented as a multiprocessor system (Lou et al. 2004). The hardware is based on a hybrid architecture that combines field programmable gate arrays (FPGAs) and distributed multiprocessors. Customised FPGA boards and high-speed multiprocessors are being developed because of the limited on-board memory (512 MB or less) and the relatively slow processing speed of current commercial-off-the-shelf (COTS) components. The required processor performance and memory capacity for the change detection task are estimated at 2.4 GFLOPS and 4.5 GB, respectively. The multiprocessor card is intended to have up to 8 GB of on-board memory for the change detection processing task.

FedSat is an Australian scientific small satellite that carries a high performance computing payload (HPC-1). In this mission, a cloud detection system (Williams et al. 2002) and lossless image compression (Dawood et al. 2002) were implemented using reconfigurable computing. Further developments in advanced on-board processing can be seen in the parallel processing unit (PPU) of X-Sat, a small satellite under development at Nanyang Technological University, Singapore. One of the main functions of the PPU is processing of images acquired from a multispectral camera payload. The processing unit comprises 20 SA1110 StrongARM processors that are interlinked by FPGAs and clocked at 266 MHz. The increased computational power of the PPU is needed for specialised image processing such as real-time compression and unsupervised analysis to optimise the utilisation of the downlink bandwidth (Bretschneider 2003).

Table 1 illustrates the trends in image processing on board small satellites in terms of functionality and summarises the computing characteristics of image processing payloads. It can be seen that more on-board image processing functions are included in recent missions. This is made possible by the availability of more powerful computing resources on board, as shown in Table 1.

Table 1 Functionality trends and computer characteristics of image processing payloads on board small satellites

3 An automatic flood monitoring system for use on board satellites

The goal of this research is to implement an intelligent on-board system for flood monitoring using optical images. The flood monitoring process is based on detection of changes between multispectral images taken at different observation times and making decisions based on the identified changes.

So far, SAR images have been the remote sensing data of choice for flood detection because of their capability to penetrate clouds, which can be quite heavy during flooding events. However, multispectral images have recently risen in importance because optical EO satellites are cheaper and have better revisit capability than SAR satellites. Multispectral images taken by DMC satellites are used to assess the feasibility of the on-board flood monitoring system. DMC images are similar to Landsat TM images in terms of spectral and spatial resolution and are taken in the near-infrared (NIR) (0.76–0.9 μm), red (0.63–0.69 μm) and green (0.52–0.62 μm) spectral bands.

The feasibility study, presented in this paper, is carried out under the following assumptions:

  • the EO satellite has a repeating ground track and therefore no geo-rectification is needed;

  • the earth areas to be monitored are pre-determined;

  • the reference images, which are required for the change detection process, are stored in an on-board image database;

  • the sun illumination, atmospheric conditions and other factors affecting the sensed images are the same as those of the reference images stored in the on-board database.

3.1 System description

The proposed intelligent system for flood monitoring is targeted at the DMC small satellite platform. The decision-making process is based on fuzzy logic, which allows the system to deal with the uncertainty and ambiguity of the data input. The block diagram of the proposed system is shown in Fig. 1. The input to the system is a newly acquired image from the optical imagers on board the satellite. The image is split into smaller tiles for easier processing.

Fig. 1
figure 1

Block diagram of the proposed automatic flood monitoring system

The rectangular boxes in Fig. 1 denote the processing blocks of the system: image tiling, image registration, cloud detection, flood detection, fuzzy inference engine and database update. The parallelogram blocks denote data that flows between the processing blocks, and the cylinder block denotes the on-board database, which stores reference images and processing results. Global positioning system (GPS) data are used to “stamp” the images with the geo-location at the time of capture, providing additional information for the subsequent tasks of image co-registration and identification of the reference image in the database. Bi-statically reflected GPS signals, which are able to penetrate clouds, assist in the water detection process.

Change detection and flood detection are very challenging tasks to implement on board small satellites, because they operate on a pair of images: a sensed and a reference image. Image tiling, image registration and cloud detection are pre-processing blocks that perform additional critical tasks on the input image. Image tiling divides the image into manageable pieces of image data. Image tiling could also be used to make the change detection operation more efficient by replacing direct pixel-to-pixel image comparison with a comparison of statistical features extracted from the individual image tiles.
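
As a rough illustration of the tiling step, the sketch below (Python/NumPy; the function names are illustrative and not taken from the flight software) splits a single band into the 500 × 500 tiles used later in Sect. 4 and computes simple per-tile statistics that could stand in for pixel-level comparison:

```python
import numpy as np

def tile_image(band, tile_size=500):
    """Split a 2-D band (rows x cols) into non-overlapping tiles.

    Edge tiles smaller than tile_size are kept so that no pixels are lost.
    """
    rows, cols = band.shape
    tiles = []
    for r in range(0, rows, tile_size):
        for c in range(0, cols, tile_size):
            tiles.append(band[r:r + tile_size, c:c + tile_size])
    return tiles

def tile_statistics(tile):
    """Per-tile features that could replace direct pixel-to-pixel comparison."""
    return {
        "mean": float(tile.mean()),
        "std": float(tile.std()),
        "histogram": np.histogram(tile, bins=32)[0],
    }

# Example: a 1,500 x 2,500 single-band image yields 3 x 5 = 15 tiles.
band = np.random.randint(0, 1024, size=(1500, 2500))
features = [tile_statistics(t) for t in tile_image(band)]
```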

Image registration is the process of overlaying two or more images of the same scene taken at different times, from different viewpoints, and by different sensors (Zitova and Flusser 2003). The registration process is aimed at finding the optimal spatial and intensity transformations so that the images are aligned. The change detection process is very sensitive to registration errors, especially if it is done on a pixel-by-pixel basis. The effect of image misregistration on the accuracy of remotely sensed change detection is analysed in (Dai and Khorram 1998), where it is reported that change detection accuracy drops dramatically within the first pixel of misregistration. Also, an evaluation of the effect of misregistration on moderate resolution satellite imagery concluded that high accuracy of registration is needed in order to achieve a reliable change detection system (Townshend et al. 1992). Several algorithms aimed at the implementation of the image registration pre-processing block in Fig. 1 have been investigated (Yuhaniz et al. 2005a, b), as discussed in Sect. 4 below.

Cloud cover is a problem for multispectral analysis, especially in tropical regions where the cloud cover is quite heavy. As this research work is focused on using optical images, it is very important to detect clouds in the images before the decision-making process takes place. The automatic cloud cover assessment (ACCA) algorithm used on Landsat 7 is chosen for cloud detection due to the similarity between DMC and Landsat images in terms of spectral bands. In addition, ACCA has also been used in several real-time on-board remote sensing experiments (El-Araby et al. 2005; Williams et al. 2002).
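
For illustration only, the following toy sketch flags pixels that are bright in all three DMC bands; it is a crude placeholder written for this setting and not the ACCA algorithm itself, which relies on additional Landsat bands and a multi-pass rule set, and the threshold value is an arbitrary assumption.

```python
import numpy as np

def simple_cloud_mask(green, red, nir, reflectance_threshold=0.3):
    """Toy cloud screen: flag pixels that are bright in all three bands.

    Inputs are assumed to be top-of-atmosphere reflectances in [0, 1];
    the threshold is a placeholder, not an ACCA parameter.
    """
    return ((green > reflectance_threshold) &
            (red > reflectance_threshold) &
            (nir > reflectance_threshold))
```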

“A database is a collection of information that is organised so that it can easily be accessed, managed and updated” [What is Database? TechTarget Website [online]. http://searchsqlserver.techtarget.com/sDefinition/0,,sid87_gci211895,00.html (Accessed 30 Dec 2008)]. In the context of this system, the database stores images and their associated information. The images stored in the database are the reference images that are used in the image registration and change detection processing blocks. Other information that is useful for the flood monitoring analysis, for example historical flooding data, could also be stored. The reference image contains the area of interest that is captured by the sensed image. The size of the image is pre-defined on the ground. For example, let the area of interest be in the range of 100 km2. The reference image covering this area will be 2,500 × 2,500 pixels, as the DMC multispectral imager has a 32 m ground sampling distance. The image is uploaded with ground control points that will be used to match the sensed image in the image registration processing block.

Databases have not yet been used on board satellites. An on-board hard disc data recorder (HDDR) employing a miniaturised hard drive device could be used to store the database. Hard disc drives are not common on board satellites and are still at an experimental stage. SSTL's Beijing-1 DMC satellite carries a pair of HDDRs that have survived launch and the harsh space environment. Each HDDR comprises a 60 GB pressurised hard drive, making them good candidates for a database implementation. Other than images, the database could include dynamic weather data derived from terrestrial databases, or possibly from meteorological satellites via intersatellite communication. Weather data can be very useful, providing complementary data inputs to the flood detection and cloud detection processing blocks.

The flood detection block is one of the main components of the system. Flood detection and classification using satellite imagery is an active topic of research. Current approaches to this problem can be divided into two categories: change detection based methods and supervised classification based methods. The methods in the first category are widely used for flood detection in satellite imagery. Several flood detection algorithms in this category have been investigated (Yuhaniz et al. 2005a, b). The output of this subsystem is a flood map image, which is used as an input to the fuzzy inference engine.

Utilising bistatically reflected GPS signals from LEO for remote sensing on board satellites (Gleason et al. 2005) is a new area of research. GPS reflections are able to penetrate clouds, and the receivers are cheaper and lighter than traditional SAR imagers. These features are very valuable for the small satellite platform. The potential of GPS reflectometry for water detection (Gleason 2006) is explored in the context of this work. It is proposed that GPS reflectometry signals are used in conjunction with optical images as input data for flood monitoring on board a small satellite. The ability of GPS reflections to penetrate cloud cover will act as a valuable complement and backup for the flood maps produced by the flood detection processing block.

In this research work, we introduce a fuzzy logic based processing block, called the fuzzy inference engine, that receives data input from the flood detection processing block. The cloud detection processing block and the GPS reflectometry subsystem also provide input to the decision-making process. The output of the fuzzy inference engine in Fig. 1 is a control signal in the form of a flooding alert of varying strength. This allows the system to trigger subsequent tasks such as image compression, issuing of a warning alert or scheduling of other imaging operations. For example, if the flooding alert is of high strength, the system would send a high-priority warning alert to the ground station.

3.2 Fuzzy inference engine

Fuzzy inference is the process of formulating an associative mapping from given inputs to an output using fuzzy logic. The mapping then provides a basis from which decisions can be made or patterns discerned (Matlab Fuzzy Logic Toolbox Documentation. http://www.mathworks.com/access/helpdesk/help/toolbox/fuzzy/index.html?/access/helpdesk/help/toolbox/fuzzy/fp351dup8.html). Fuzzy associative maps have been used in many applications including computer vision, data classification and automatic control. A fuzzy inference system has three components: fuzzy membership functions, fuzzy operators and if–then rules. A simple example of using fuzzy inference for image classification is given by Nedeljkovic (2004). Although the use of fuzzy logic in change detection analysis is an active research area, little work has been done with respect to remote sensing change detection systems. The use of a fuzzy inference system to detect land-cover changes is proposed by De Souza et al. (2002). The input to that system is a degree of change for each pair of pixels, representing the absolute value of the difference between two images of the same band.

The fuzzy inference engine takes as inputs the flood maps generated by the flood detection processing block as well as additional information from other sources as shown in Fig. 1 and makes decisions which determine the next action of the system.

One of the reasons for using the fuzzy logic approach is to reduce errors during water detection caused by ambiguous data. Fuzzy associative maps employ fuzzy membership functions instead of crisp data, which provides a better means of dealing with ambiguity. For example, the NIR/Red differencing method uses the ratio of the near-infrared and red band images to detect water (Sheng et al. 2001). Image pixels are classified as water if the ratio of the near-infrared and red band pixel values is lower than a certain threshold. The threshold T_0 that separates water from non-water pixels is located in the valley between the two peaks of the histogram of the NIR band/red band images, as shown in Fig. 2, so a pixel is considered water if NIR/Red ≤ T_0 and land if NIR/Red > T_0. Often the threshold T_0 is not well defined and this leads to ambiguity in the interpretation of water and non-water pixels. It is expected that the use of fuzzy logic will reduce errors caused by a wrongly defined value of T_0.

Fig. 2
figure 2

Histogram of NIR/Red band images
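
A minimal sketch of this crisp thresholding step is given below (Python/NumPy); T_0 is assumed to be supplied, for example chosen on the ground from the histogram valley in Fig. 2, and the small eps term guards against division by zero in dark red-band pixels:

```python
import numpy as np

def water_mask_ratio(nir, red, t0, eps=1e-6):
    """Crisp water mask: a pixel is water where NIR/Red <= T_0 (cf. Fig. 2)."""
    ratio = nir.astype(float) / (red.astype(float) + eps)
    return ratio <= t0
```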

When T_0 is employed as a crisp threshold, some ambiguous pixels would be wrongly classified as water or non-water. With the use of fuzzy logic, however, not only water and non-water pixels but also ambiguous pixels can be considered in the flood detection process. The pixel values can be water, ambiguous or non-water, and the membership function defines how close the ambiguous pixels are to either the water or the non-water pixels. Figure 3 shows the membership function for the image input to the inference engine. The output of the fuzzy inference engine will not be simply flooding or not flooding, but will be expressed in terms of fuzzy sets, for example “no alert”, “medium-strength alert” and “high-strength alert”, depending on the data input, as shown in Fig. 4.

Fig. 3
figure 3

Membership function plot of the inference engine input

Fig. 4
figure 4

Membership function plot of the inference engine output
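
To make the decision step concrete, the toy sketch below evaluates illustrative membership functions over the NIR/Red ratio and combines a crude flooded fraction with a cloud-cover estimate into a single alert strength, using min for the fuzzy AND and weighted-average defuzzification; the break points, rules and output centres are placeholders and do not reproduce the actual membership functions of Figs. 3 and 4 or the rule base behind Table 2.

```python
import numpy as np

def smoothstep_down(x, lo, hi):
    """Left-shoulder membership: 1 below lo, 0 above hi, linear in between."""
    return np.clip((hi - np.asarray(x, dtype=float)) / (hi - lo), 0.0, 1.0)

def smoothstep_up(x, lo, hi):
    """Right-shoulder membership: 0 below lo, 1 above hi, linear in between."""
    return np.clip((np.asarray(x, dtype=float) - lo) / (hi - lo), 0.0, 1.0)

def trimf(x, a, b, c):
    """Triangular membership peaking at b."""
    x = np.asarray(x, dtype=float)
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Illustrative membership functions over the NIR/Red ratio (cf. Fig. 3),
# with placeholder break points around an assumed threshold T_0 = 1.0.
def mu_water(r):     return smoothstep_down(r, 0.8, 1.0)
def mu_ambiguous(r): return trimf(r, 0.8, 1.0, 1.2)
def mu_nonwater(r):  return smoothstep_up(r, 1.0, 1.2)

def flood_alert(ratio_image, cloud_fraction):
    """Toy rule evaluation: flooded fraction and cloud cover -> alert in [0, 1]."""
    flooded = float(np.mean(mu_water(ratio_image)))      # crude degree of flooding
    # Rule strengths (fuzzy AND realised as min)
    high   = min(smoothstep_up(flooded, 0.3, 0.5), 1.0 - cloud_fraction)
    medium = trimf(flooded, 0.1, 0.3, 0.5)
    none   = smoothstep_down(flooded, 0.05, 0.15)
    # Weighted-average defuzzification over the output alert centres (cf. Fig. 4)
    strengths = np.array([none, medium, high], dtype=float)
    centres = np.array([0.0, 0.5, 1.0])                  # no / medium / high alert
    return float((strengths * centres).sum() / max(strengths.sum(), 1e-12))
```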

Different types of imaging data input can be provided to the fuzzy inference engine for the decision-making process, as detailed in Table 2. The fuzzy inference block can accept pixel values based on the NIR/Red differencing method described above, which is the first choice in Table 2. The second choice of imaging data input for the fuzzy inference is an index for each tile. The index is a feature or indicator computed for each image tile, for example obtained as a result of classification of the image tiles. The identified features then become the input for the fuzzy inference engine. This option has the advantage over option 1 that it does not require pixel-to-pixel comparison. The third option is achieved by combining options 1 and 2 with GPS reflectometry signals, which are able to penetrate cloud. A transformed image, for example obtained using the Fourier transform, can also be used as the imaging input to the fuzzy inference engine. The last option for the imaging data input to the fuzzy inference engine in Table 2 is the degree of change, which is the absolute value of the difference between two images of the same band.

Table 2 Types of imaging data input for the fuzzy inference engine

4 Experimental results

Extensive numerical experimentation is carried out in order to evaluate and validate different computational solutions for the implementation of the processing blocks in Fig. 1. This section discusses performance evaluation, the use of GPS reflectometry and accuracy assessment results.

4.1 Performance evaluation

Software modelling and simulation using Matlab is undertaken to select the optimal image registration and flood detection methods for implementation of the corresponding processing blocks in Fig. 1. The candidate methods are evaluated using multispectral image test sets consisting of a before-flooding reference image and a post-flooding image of the same area. The experimental results presented in this section are derived using the image test set described in Table 3. The post-Tsunami flooding test image of North Sumatra, Indonesia, taken on 4 January 2005 by the UK-DMC micro-satellite, is shown in Fig. 5. The size of the image is 1,500 × 2,500 pixels. The test images are split into image tiles of 500 × 500 pixels before performing flood detection.

Table 3 Test multispectral images used for performance evaluation
Fig. 5
figure 5

DMC image of North Sumatra after the Tsunami disaster. The image on the left-hand side is the tile in the bottom left corner of the image on the right (Image courtesy of SSTL)

One of the objectives of the evaluation is to estimate the execution time and memory capacity required by each of the investigated algorithms for a given image size. The image processing software can be executed in the solid state data recorder (SSDR) unit, part of the imaging payload of the DMC satellite platform. The PowerPC based SSDR unit is used for the performance evaluation presented in this section. The PowerPC processor is capable of executing 280 Dhrystone MIPS at 200 MHz and has 1 MB of RAM. The software was executed on a Pentium M 1.3 GHz personal computer and the measured performance was then scaled down to match the flight hardware characteristics. The Dhrystone 2.1 benchmark program was run on the Pentium M processor, resulting in 1,665 Dhrystone MIPS.
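
The scaling itself is a simple ratio; a minimal sketch, assuming execution time scales inversely with the Dhrystone MIPS figures quoted above, is:

```python
# Desktop-to-flight scaling, assuming execution time is inversely
# proportional to the Dhrystone MIPS rating of each processor.
TEST_MIPS = 1665     # Pentium M 1.3 GHz used for the measurements
TARGET_MIPS = 280    # PowerPC in the DMC SSDR

def scale_to_target(seconds_on_desktop):
    """Estimate the on-board execution time from a desktop measurement."""
    return seconds_on_desktop * TEST_MIPS / TARGET_MIPS

# A routine measured at 5 s on the desktop maps to roughly 30 s on board.
print(scale_to_target(5.0))
```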

Figures 6 and 7 show the execution times required by the investigated image registration and flood detection methods, respectively. The results are obtained by applying four registration and six flood detection algorithms to a pair of test image tiles of 500 × 500 pixels each for the two processors: the test processor (1,665 MIPS) and the target processor (280 MIPS).

Fig. 6
figure 6

Estimated processing time of image registration methods. 1 Phase correlation, 2 cross correlation, 3 mutual information and 4 Fourier–Mellin registration

Fig. 7
figure 7

Estimated processing time of flood detection methods. 1 NIR differencing, 2 NDVI differencing, 3 parallelepiped, 4 maximum likelihood, 5 minimum distance, and 6 Mahalanobis distance

The tested image registration methods in Fig. 6 are phase correlation, cross correlation, mutual information and Fourier–Mellin registration (Yuhaniz et al. 2005a, b) as follows:

  1. Phase correlation

    Phase correlation works by computing the Fourier transforms of the input and base images and finding the peak of their inverse cross-power spectrum. It is a fast method; however, it is only useful for registering images with a purely translational misalignment (a sketch is given after this list).

  2. Cross correlation

    Cross correlation is a measure of similarity between pixels of the input and base images, which is used to find the translational shift. It is a slow method and is sensitive to noise.

  3. Mutual information

    This method is based on probability and information theory. It measures the statistical dependency between two data sets and is very useful for registration of multi-modal images.

  4. Fourier–Mellin registration

    This method is an extension of the phase correlation algorithm, adding the capability to detect rotation and scaling misalignments between the input and base images.
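
As an illustration of the phase correlation approach (method 1 above), the minimal NumPy sketch below recovers a purely cyclic translation between two same-size tiles; it is not the implementation evaluated in the paper, and handling rotation or scale would require the Fourier–Mellin extension of method 4:

```python
import numpy as np

def phase_correlation_shift(base, sensed):
    """Estimate (row, col) shift such that sensed ~ np.roll(base, shift).

    The peak of the inverse normalised cross-power spectrum gives the
    translation; windowing and sub-pixel refinement are omitted.
    """
    F_base = np.fft.fft2(base)
    F_sensed = np.fft.fft2(sensed)
    cross_power = F_sensed * np.conj(F_base)
    cross_power /= np.abs(cross_power) + 1e-12        # keep only the phase
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Map peaks beyond the midpoint back to negative shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, base.shape))

# Quick self-check with a synthetic cyclic shift of (7, -12) pixels
tile = np.random.rand(500, 500)
print(phase_correlation_shift(tile, np.roll(tile, (7, -12), axis=(0, 1))))
```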

The tested flood detection methods in Fig. 7 are NIR band differencing, NDVI differencing, and four supervised classification methods: parallelepiped, maximum likelihood, minimum distance and Mahalanobis distance (Yuhaniz et al. 2005a, b) as follows:

  1. NIR band differencing

    This method subtracts the pixels of the NIR bands of the before- and after-flooding images, as water has very low reflectance in this band compared to other land cover types. However, the method is very sensitive to factors such as atmospheric conditions and sun illumination, which makes it difficult to set the threshold that separates water and non-water areas (Nyborg and Sandholt 2001). The flooded pixels are identified as follows:

    $$ \text{NIR}_{\text{A}} - \text{NIR}_{\text{B}} \le T_{\text{C}} $$
    (1)

    where NIR_A is the after-flooding image in the NIR band and NIR_B is the before-flooding image in the NIR band. T_C is the threshold that separates flooding from non-flooding pixels; it is selected interactively from the histogram of NIR_A − NIR_B.

  2. NDVI differencing

    The water pixels are detected based on the normalised difference vegetation index (NDVI). Compared to NIR differencing, this method is more stable with respect to variations in atmospheric conditions, sun reflection and water turbidity (Nyborg and Sandholt 2001). The NDVI is calculated as follows (see also the sketch after this list):

    $$ {\text{NDVI}} = ({\text{NIR}} - {\text{RED}})/({\text{NIR}} + {\text{RED}}) $$
    (2)

    where RED and NIR stand for the spectral reflectance measurements acquired in the red and near-infrared regions, respectively. The water and non-water pixels are separated by thresholding the NDVI of the after- and before-flooding images, as expressed below:

    $$ \text{NDVI}_{\text{A}} - \text{NDVI}_{\text{B}} \le T_{\text{C}} \quad \text{and} \quad \text{NDVI}_{\text{A}} \le T_{\text{W}} $$
    (3)

    where NDVI_A is the NDVI of the after-flooding image and NDVI_B is the NDVI of the before-flooding image. T_C is the threshold that detects the decrease of NDVI caused by the flood and T_W is the threshold that excludes non-water surfaces after the flood.

  3. Parallelepiped classification

    The parallelepiped classifier uses thresholds derived from each class signature, such as the class mean, to determine whether a given pixel falls within the class. The thresholds specify the dimensions (in standard deviation units) of each side of a parallelepiped surrounding the mean of the class in feature space. If the pixel falls inside the parallelepiped, it is assigned to the class. The parallelepiped classifier is typically used when high speed is required.

  4. Maximum likelihood classification

    This method is the most common supervised classifier in remote sensing applications. The water pixels are determined based on the following expression:

    $$ g_{i}(x) > g_{j}(x) \quad \text{for all}\; j \ne i $$
    (4)

    where g_i(x) is the discriminant function for the water class w_i, expressed as:

    $$ g_{i}(x) = \ln p(x \mid w_{i}) + \ln p(w_{i}) $$
    (5)

    where ln is the natural logarithm, p(x | w_i) is the probability density of observing pixel vector x given class w_i (the water class) and p(w_i) is the prior probability of class w_i.

  5. Minimum distance classification

    Similar to the maximum likelihood classification, the water pixels are decided by a discriminant function, but with the following expression:

    $$ d(x, m_{i})^{2} < d(x, m_{j})^{2} \quad \text{for all}\; j \ne i $$
    (6)

    where d(x, m_i)^2 is the squared distance used as the discriminant function for the water class, expressed as:

    $$ d(x, m_{i})^{2} = (x - m_{i})^{t}(x - m_{i}) $$
    (7)

    where m_i is the mean vector of class i and t denotes the vector transpose.

  6. Mahalanobis distance classification

    This method decides whether a pixel is water based on Eq. 6, with the discriminant function

    $$ d(x,m_{i} )^{2} = (x - m_{i} )^{t} C^{ - 1} (x - m_{i} ) $$
    (8)

    where C is the covariance matrix.
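
Returning to the differencing methods 1 and 2 above, the sketch below (a minimal NumPy illustration; the thresholds T_C and T_W are assumed to be supplied from ground-based analysis, as in Eqs. 1 and 3) implements the NDVI differencing test:

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalised difference vegetation index, Eq. (2)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

def flood_map_ndvi(nir_before, red_before, nir_after, red_after, t_c, t_w):
    """Flooded pixels according to Eq. (3): NDVI_A - NDVI_B <= T_C and
    NDVI_A <= T_W (the second test excludes non-water surfaces)."""
    ndvi_before = ndvi(nir_before, red_before)
    ndvi_after = ndvi(nir_after, red_after)
    return (ndvi_after - ndvi_before <= t_c) & (ndvi_after <= t_w)
```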

The supervised classification methods (methods 3–6 above) are applied to a pair of images to detect water, and then pixel-by-pixel image differencing is performed to find the flooded areas. Figure 8 illustrates a flood map, which is the result of performing flood detection on the test image shown in Fig. 5 using the NIR differencing method. The black areas are the areas affected by flooding.
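
A minimal sketch of this two-step procedure for the minimum distance classifier of Eqs. 6 and 7 is given below; the class means are assumed to come from training data, the function names are illustrative, and the Mahalanobis variant of Eq. 8 would only require weighting the difference vectors by the inverse class covariance matrix:

```python
import numpy as np

def minimum_distance_water_mask(pixels, class_means, water_class=0):
    """Assign each pixel to the nearest class mean (Eqs. 6-7) and return
    a boolean mask of the pixels labelled as water.

    pixels      : (N, B) array of N pixels with B spectral bands
    class_means : (K, B) array of K training-class mean vectors
    """
    # Squared Euclidean distance from every pixel to every class mean
    d2 = ((pixels[:, None, :] - class_means[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1) == water_class

def flooded_pixels(water_before, water_after):
    """Pixel-by-pixel differencing of two water masks: flooded pixels are
    water after the event but not before."""
    return water_after & ~water_before
```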

Fig. 8
figure 8

The resultant flood detection analysis of North Sumatra, Indonesia, using NIR differencing

An SSTL DMC image can comprise a maximum of 1,200 image tiles of 500 × 500 pixels. Based on the results in Figs. 6 and 7, the minimum total processing time to detect flooding in a maximum-size image on the PowerPC processor of the DMC SSDR is estimated at 10 h. Such a long processing time is not appropriate for on-board disaster monitoring. This confirms that more powerful computing systems are needed on board EO small satellites. Such a computing system could be realised as a multiprocessor parallel architecture including hardware acceleration of computationally intensive routines on FPGAs.
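
As a rough consistency check using only the figures quoted above, the 10 h total for 1,200 tiles corresponds to roughly 30 s of registration plus flood detection per 500 × 500 tile on the 280 MIPS processor:

    $$ t_{\text{reg}} + t_{\text{det}} \approx \frac{T_{\text{total}}}{N_{\text{tiles}}} = \frac{36{,}000\;\text{s}}{1{,}200} \approx 30\;\text{s per tile} $$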

4.2 Using GPS reflectometry for flood detection

The potential of using GPS reflectometry signals for water detection is illustrated in Fig. 9 (Gleason 2006). As GPS signals reflect off the earth's surface, they are detectable in LEO using a modified GPS receiver. As the reflection track traverses the surface, the signal responds to surface features, including near-surface water, as demonstrated in Fig. 9a. In Fig. 9b, the Missouri River can be seen intersecting the line of reflection points at second 2 (corresponding to the sharp jump in power) and continuing into the city of Omaha. The spikes near the 12th and 15th seconds are probably due to crossings of a loop of the river. It is also probable that the increase in signal power observed over this general region (between seconds 12 and 17) is due to the increased presence of water around the rivers in these areas. However, a more detailed investigation of the ground truth in these areas would be needed to know for certain whether the increased power levels were due to the presence of surface water.

Fig. 9
figure 9

a The path of a reflected GPS signal across 19 s of data. b The peak power returned with estimated height (Image courtesy of GoogleEarth)

4.3 Accuracy evaluation of flood detection methods

Here we present experimental results on the accuracy assessment of the flood detection algorithms. One way of measuring the accuracy of water detection is based on the confusion error matrix in Table 4. The omission error (the complement of the producer's accuracy) measures the error caused by detecting flooded areas as non-flooded, which results in a missed flooding alert. The commission error (the complement of the user's accuracy) measures the error caused by detecting non-flooded areas as flooded, which generates a false alarm. The overall accuracy measures the accuracy of the flood detection method without taking into account the source of error (omission or commission). Accuracy assessment of the flood detection methods is carried out for the North Sumatra test images described in Table 3. The accuracy results are presented in Table 5, which shows that the NIR/Red differencing method with the fuzzy engine provides the best overall accuracy while keeping the omission errors low.
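
For reference, the sketch below (illustrative only, not the evaluation code used in the study) computes these measures from a binary flood map and a reference map, following the layout of Table 4:

```python
import numpy as np

def accuracy_metrics(predicted, reference):
    """Confusion-matrix accuracies for a binary flood map (cf. Table 4).

    predicted, reference : boolean arrays where True means flooded
    """
    tp = np.sum(predicted & reference)          # flooded detected as flooded
    fn = np.sum(~predicted & reference)         # flooded missed (omission)
    fp = np.sum(predicted & ~reference)         # false alarms (commission)
    tn = np.sum(~predicted & ~reference)
    producers_accuracy = tp / (tp + fn)         # 1 - omission error
    users_accuracy = tp / (tp + fp)             # 1 - commission error
    overall_accuracy = (tp + tn) / (tp + fn + fp + tn)
    return producers_accuracy, users_accuracy, overall_accuracy
```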

Table 4 Confusion error matrix
Table 5 Flood detection accuracy assessment for the image test set of North Sumatra, Indonesia

5 Conclusions

This work investigates the feasibility of automatic flood monitoring on board small satellites using optical images. Unlike a conventional flood monitoring system, this approach aims to reduce the response time by processing the multispectral images on board, before they are transmitted to the ground station.

A fuzzy inference engine is introduced to support the decision-making process and improve the flood monitoring performance. It is shown that a fuzzy inference engine can help improve water detection by including uncertain data input in the decision-making process.

Several existing image registration and flood detection methods are selected and tested in order to estimate their expected performance on the computing hardware available on board a small satellite. The evaluation results show that high-performance computing and parallel processing are required in order to meet the increased demands of image processing on board remote sensing small satellites.

A novel solution to flood detection is proposed combining GPS reflectometry data and optical images. The ability of GPS reflections to penetrate cloud cover will act as a valuable complement and backup for the flood maps produced from the optical images.