Keywords

These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Radar is an acronym for radio detection and ranging, which hints at some of the technique’s uses and capabilities (Levanon 1988). Radars operate in the microwave portion of the electromagnetic spectrum, which encompasses wavelengths (λ) from 1 meter (m) to 1 mm (mm), or equivalently, frequencies (f) from 300 megaHertz (MHz) to 300 gigaHertz (GHz). By international convention, the entire radar spectrum is divided into several bands with different designations and uses. Of particular interest here are X-band (f = 8–12 GHz, λ = 2.5–3.75 cm), C-band (f = 4–8 GHz, λ = 3.75–7.5 cm), and L-band (f = 1–2 GHz, λ = 15–30 cm)—the bands used by radar systems in Earth orbit that provide data for our study of how volcanoes deform.

Radar operates by broadcasting a pulse of electromagnetic energy into space. If a radar pulse encounters an object, some of the energy is redirected back toward the radar. The same antenna can be used to transmit the initial pulse and receive the return signal. Precise timing of the delay between the initial and return signals allows determination of the distance from radar to object, and the Doppler frequency shift between the two signals is a measure of the object’s velocity relative to the radar . Thus radar can be used to locate and measure the velocity of such objects as aircraft or automobiles—two common uses of Doppler radar. But radar’s applications extend far beyond air traffic control and speeding tickets. With some mathematical and engineering cleverness, enormous virtual orbiting radars can be mathematically “synthesized” and used to image the entire globe at meter-scale. By combining selected radar images in just the right way, subtle patterns of ground deformation can be revealed and analyzed to study processes occurring deep within the Earth or at the surface. To understand how that’s possible, we first need to know how imaging radars organize the multitude of signals returned from a kilometers-wide swath of Earth’s surface so they can be assembled into a focused image. Then we’ll explain how huge “synthetic aperture” radars that don’t exist in the real world nonetheless can be used to study, among other things, the inner workings of volcanoes in the Aleutians and elsewhere.

1.1 Principles of Interferometric Synthetic Aperture Radar

1.1.1 Imaging Radar

Imaging radar systems operate on the same principles as Doppler radars, but have additional capability to distinguish among return signals from individual resolution elements within a target footprint. Imaging radars are side-looking, i.e., they direct signals to the side of their path across the surface rather than straight down. As a result, the arrival path of the radar signal is oblique to the surface being imaged. Return signals from near-range parts of the target (the part closest to the ground track of the radar) generally arrive back at the radar sooner than return signals from far-range areas. The relationship is affected by scattering from surface topography, i.e., returns from mountain tops arrive sooner than returns from valley bottoms at the same range. For now, the salient point is that the relationship between round-trip travel time and range can be used to organize return signals in the across-track, or range direction. In the along-track or azimuth direction, the Doppler principle comes into play. Signals returned from areas that are ahead of the radar as it travels along its path are shifted to slightly higher frequencies, while returns from trailing areas are shifted to slightly lower frequencies. An imaging radar uses the relationship between return-signal frequency and relative velocity between radar and target to organize return signals in the azimuth direction .

In this way, the returns from each resolution element on the ground can be assigned unique coordinates in range and azimuth. The resulting data can be processed into an image of the target area, which contains information about topography and radar reflective properties of the surface. The resolution of real aperture imaging radar systems depends on, among other factors, the size of the antenna (bigger is better) which, for practical reasons, is limited to a few meters or decimeters. The corresponding ground resolution element for a C-band radar (wavelength λ ~ 5.6 cm) with a 3 m antenna at a range of 850 km (typical values for an orbiting SAR) would be about 16 km, which is too large for most applications. This limitation can be overcome, however, by using mathematics to create a synthetic radar antenna much larger than is practical in the real world .

1.1.2 Synthetic Aperture Radar

Synthetic aperture radar (SAR) is an advanced radar system that utilizes image processing techniques to synthesize a large virtual antenna, which provides much higher spatial resolution than is practical using a real aperture radar. SAR systems take advantage of the fact that each point along the ground swath is illuminated for an extended period of time while the footprint of the radar beam moves across it. Recall from the discussion of real aperture imaging radar that it’s possible to keep track of where each return signal originated in the target area. By doing this continuously, a SAR system collects information from each resolution cell on the surface from the instant t 0 when the cell is first illuminated at the leading edge of the footprint to the time t 1 when it is last illuminated at the trailing edge. Because the radar travels a considerable distance along its trajectory during that time (tens of kilometers in the case of a typical orbiting SAR), it’s as if the cell were being illuminated by a virtual radar with a much larger antenna—one comparable in size to the distance traveled by the real aperture radar during the interval t 1t 0.

Most SAR systems designed for Earth orbit use an antenna 1–4 m wide and 10–15 m long, with a look angle in the range 10–60° to illuminate a footprint 50–150 km wide in the range direction and 5–15 km wide in the azimuth direction. Such a SAR system is capable of producing a ground resolution of 1–10 m in azimuth and 1–20 m in range, which is an improvement by about three orders of magnitude over a comparable real aperture system. Because a SAR actively transmits and receives signals backscattered from the target area, and because radar wavelengths are mostly unaffected by weather clouds, a SAR can operate effectively during day and night under most weather conditions to produce images at times and under conditions that render most optical imaging systems useless for surface observations.

Using a sophisticated image processing technique called SAR processing (Bamler and Hartl 1998; Curlander and McDonough 1991; Henderson and Lewis 1998; Massonnet and Souyris 2008; Rosen et al. 2000), both the intensity and phase of the signal backscattered from each ground resolution element can be calculated and portrayed as part of a complex-valued SAR image. The intensity of a resulting single-look complex (SLC) image (Fig. 1.1a) is controlled primarily by terrain slope, surface roughness, and surface dielectric constants. The phase of the image (Fig. 1.1b) is controlled mainly by two factors: (1) the radar signal’s round-trip travel distance between the SAR and the ground, and (2) interactions between the signal and surface materials. The round-trip distance is proportional to the travel time, which is related to the speed of light, c. The situation is complicated by the fact that c is affected by water molecules in the troposphere and by free electrons in the ionosphere that retard or advance, respectively, the signal’s phase ever so slightly relative to the case in a vacuum. The resulting “path effects” influence the phase of the return signal at the SAR. We’ll return to this topic later. First, let’s explore how SLC images can be combined to form interferograms .

Fig. 1.1
figure 1

a Amplitude component of a single-look-complex (SLC) SAR image acquired on October 4, 1995, by the ERS-1 satellite over Ugashik-Peulik volcanic center, Alaska. b Phase component of the SAR image acquired on October 4, 1995, corresponding to the amplitude image in Fig. 1.1a . c Phase component of an SLC SAR image acquired on October 9, 1997, by the ERS-2 satellite over the Ugashik-Peulik volcanic center, Alaska. The amplitude component is similar to that in Fig. 1.1a and therefore is not shown. The SAR phase values represented in Fig. 1.1b and c appear as random numbers but nonetheless contain useful information, as illustrated in Fig. 1.1d–i. d Original interferogram formed by differencing the phase values of two co-registered SAR images (Fig. 1.1b and c). The resulting interferogram contains fringes produced by the differing viewing geometries, topography, any atmospheric delays, surface deformation, and noise. e Flattened interferogram produced by removing the effect of a flat Earth surface from the original interferogram (Fig. 1.1d) . f Simulated interferogram representing the contribution of topography in the original interferogram (Fig. 1.1d). The perpendicular component of the InSAR baseline is 35 m in this case. g Topography-removed interferogram produced by subtracting the simulated interferogram in Fig. 1.1f from the original interferogram in Fig. 1.1d. The resulting interferogram contains fringes produced by topography, surface deformation, any atmospheric delays, and noise. h A geo-referenced topography-removed interferogram (Fig. 1.1f) overlaid on a shaded relief image produced from a digital elevation model (DEM). The concentric pattern of fringes indicates ~17 cm of uplift centered on the southwest flank of Peulik volcano, Alaska, which occurred during an aseismic inflation episode during 1996–1998 (Lu et al. 2002). i Model interferogram produced using a best-fit inflationary point source at ~6.5-km depth with a volume change of ~0.043 km3 overlaid on the shaded relief image (compare to Fig. 1.1h). Each interferometric fringe (full-color cycle) in Fig. 1.1h and i represents 360° of phase change, or 2.83 cm of range change between the ground and the satellite. Areas of loss of radar coherence are uncolored in Fig. 1.1h and i

1.1.3 Basics of Interferometric SAR (InSAR)

We have seen that it is possible to conjure up a virtual radar antenna with a synthetic aperture of many kilometers and use it to acquire data that can be processed into SAR images with a ground resolution of a few meters. But how is it possible to measure centimeter-scale deformation of the ground surface with such a system? For that, we need the “In” part of InSAR. InSAR images are formed by combining (“interfering”) signals from two spatially or temporally separated SAR antennas. The term interferometry draws its meaning from two root words: interfere and measure. The interaction of electromagnetic waves, referred to as interference, is used to measure precise distances and angles. The technique that makes use of interference of electromagnetic waves that are transmitted and received by a SAR is called interferometric synthetic aperture radar, or InSAR. Very simply, InSAR involves the use of two or more SAR images of the same area—one arbitrarily chosen reference or master image and one or more additional images referred to as slave images—to extract land surface topography and deformation patterns.

For InSAR purposes, the spatial separation between two SAR antennas, or between two vantage points of the same SAR antenna, is called the baseline. The two antennas can be mounted on a single platform for simultaneous interferometry, which is the usual implementation for aircraft and spaceborne systems such as Topographic SAR (TOPSAR) and Shuttle Radar Topography Mission (SRTM) systems (Farr et al. 2007; Zebker et al. 1992) . Alternatively, InSAR images can be formed by using a single antenna on an airborne or spaceborne platform in nearly identical repeating flight lines or orbits for repeat-pass interferometry (Gray and Farris-Manning 1993; Massonnet and Feigl 1998). In this case, even though successive observations of the target area are separated in time, the observations will be highly correlated if the backscattering properties of the surface have not changed in the interim. In this way, InSAR is capable of measuring ground-surface displacements with sub-centimeter precision for X-band and C-band sensors (wavelength λ = 2–8 cm), or few-centimeter precision for L-band sensors (λ = 15–30 cm), in both cases at a spatial resolution of tens of meters over an image swath tens to a few hundred kilometers wide. This is the typical implementation for spaceborne SARs, including the European Space Agency’s ERS-1, ERS-2, and Envisat ; Japan Aerospace Exploration Agency’s JERS-1 and ALOS; Canadian Space Agency’s Radarsat-1 and Radarsat-2 ; German Aerospace Center’s TerraSAR-X and TanDEM-X; and Italian Space Agency’s COSMO-SkyMed satellite constellation (Table 1.1). InSAR images can also be formed from SAR images acquired by different satellites in a tandem orbit configuration. This approach allows InSAR data to be acquired in a very short time interval and therefore is ideal for mapping topography (see Chap. 2). There are several well-known examples. The first is the ESA’s ERS-1 and ERS-2 tandem mission during 1995 and 1996, which resulted in abundant InSAR pairs with temporal separations of 24 h. The second is the ESA’s ERS-2 and Envisat tandem mission, which produces interferometric pairs with a temporal separation of about 28 min. As a consequence of a 31 MHz difference in carrier frequency between ERS-2 and Envisat, cross-platform ERS-2/Envisat InSAR images with relatively long spatial baselines (~2 km) can be used to generate DEMs with sub-meter accuracy (see Chap. 2). A third example is the DLR’s TanDEM-X mission, in which SAR sensors onboard two TerraSAR-X satellites acquire images simultaneously with a typical baseline of 200–500 m. This innovative flight mode enables the production of global DEM with 2 m relative accuracy and 10 m absolute accuracy. A fourth example of a tandem SAR mission is the Italian COSMO-SkyMed satellite constellation, which can acquire InSAR images with temporal separations of 24 h for DEM production.

Table 1.1 SAR satellite specifications

The generation of an interferogram requires two SLC SAR images. Neglecting phase shifts induced by the transmitting/receiving antenna and SAR processing algorithms, the phase value of a pixel in an SLC SAR image (Fig. 1.1b) can be represented as:

$$ \phi_{1} = - \frac{4\pi }{\lambda }r_{1} + \varepsilon_{1} $$
(1.1)

where r 1 (a deterministic variable) is the apparent range distance (including possible atmospheric delay) from the antenna to the ground target, λ is the wavelength of the radar, and ε 1 is the sum of phase shifts due to the interaction between the incident radar wave and scatterers within a given resolution cell. Because the backscattering phase (ε 1) is a stochastic (randomly distributed, unknown) variable, the phase value (ϕ 1) in a single SAR image cannot be used to calculate the range (r 1) and by itself is of no practical use. However, imagine that a second SLC SAR image (with the phase image shown in Fig. 1.1c) is obtained over the same area at a different time with a corresponding phase value of:

$$ \phi_{2} = - \frac{4\pi }{\lambda }r_{2} + \;\varepsilon_{2} $$
(1.2)

Note that phase values in the second SAR image (Fig. 1.1c) cannot provide range information (r 2) either, due to the stochastic nature of the backscattering phase ε 2. But something very useful emerges when two otherwise useless SLC SAR images are combined, as explained below.

An interferogram (Fig. 1.1d) is created by co-registering two SAR images and differencing the corresponding phase values (Fig. 1.1b and c) on a pixel-by-pixel basis. The phase value of the resulting interferogram (Fig. 1.1d) is:

$$ \phi = \phi_{1} - \phi_{2} = - \frac{{4\pi (r_{1} - r_{2} )}}{\lambda } + (\varepsilon_{1} \; - \;\varepsilon_{2} ) $$
(1.3)

The fundamental assumption in repeat-pass InSAR is that the scattering characteristics of the ground surface do not change during the time interval between acquisitions of two SAR images used to produce an interferogram. The degree of change in backscattering characteristics can be quantified by a parameter called interferometric coherence, which is discussed in a later section. Assuming that the interactions between incoming radar waves and surface scatterers remain the same for the two SAR images (i.e., ε 1 = ε 2), the interferometric phase value can be expressed as:

$$ \phi = - \frac{{4\pi (r_{1} - r_{2} )}}{\lambda } $$
(1.4)

Typical values for the range difference (r 1r 2) are from a few meters to several hundred meters. The SAR wavelength (λ) is of the order of several centimeters. Because the measured interferometric phase value (ϕ) is modulated by 2π, ranging from –π to π, there is an ambiguity of many cycles (i.e., numerous 2π values) in the interferometric phase value. Therefore, the phase value of a single pixel in an interferogram is of no practical use. However, the change in range difference, δ(r 1r 2), between two neighboring pixels that are a few meters apart on the ground is usually much smaller than the SAR wavelength. So the phase difference between two nearby pixels, δϕ, can be used to infer the range distance difference (r 1r 2) to sub-wavelength precision. Let’s now examine this relationship quantitatively.

Figure 1.2 shows two neighboring targets, T 1 and T 2, with a height difference h between them. An InSAR system acquires two images of the targets from vantage points A 1 and A 2. We represent the range differences for T 1 and T 2 by (\( \overline{{A_{1} T_{1} }} \)\( \overline{{A_{2} T_{1} }} \)) and (\( \overline{{A_{1} T_{2} }} \)\( \overline{{A_{2} T_{2} }} \)), respectively. If we ignore any ground surface displacement between the times when the two images were acquired, the difference in range difference between T 1 and T 2 can be expressed as:

Fig. 1.2
figure 2

InSAR phase variation between two targets (T 1 and T 2) with a height difference h. The spatial distance between SARs at A 1 and A 2 is called the baseline, with B representing the component perpendicular to the slant range direction. q and s are the distances between T 1 and T 2 that are perpendicular and parallel to the slant range direction, respectively

$$ \begin{aligned} \Delta r & = (\overline{{A_{1} T_{1} }} { - }\overline{{A_{2} T_{1} }} ) { - }(\overline{{A_{1} T_{2} }} { - }\overline{{A_{2} T_{2} }} )\\ & \approx \overline{{A_{1} B_{1} }} { - }\overline{{A_{1} B_{2} }} \\ & \approx B_{ \bot } (\angle \overline{{B_{1} A_{2} B_{2} }} ) \\ & \approx B_{ \bot } (\angle \overline{{T_{1} A_{1} T_{2} }} ) \\ & \approx \frac{{B_{ \bot } q}}{R} \\ \end{aligned} $$
(1.5)

where \( \angle \overline{{B_{1} A_{2} B_{2} }} \) is the angle between \( \overline{{B_{1} A_{2} }} \) and \( \overline{{B_{2} A_{2} }} \), q is the distance between T 1 and T 2 along the direction perpendicular to the slant range, \( B_{ \bot } \)is the perpendicular baseline, and R is the slant range distance. By combining Eqs. 1.4 and 1.5, we obtain the phase difference between the two pixels that contain targets T 1 and T 2 in the interferogram (Ferretti et al. 2007):

$$ \Delta \phi = - \frac{4\pi }{\lambda }\frac{{B_{ \bot } q}}{R} $$
(1.6)

Recognizing that q can be represented by (\( \frac{s}{\tan \theta } + \frac{h}{\sin \theta } \)) (see Fig. 1.2), the phase difference between two pixels expressed by Eq. 1.6 can be divided into contributions due to the slant range difference s and the height difference h:

$$ \Delta \phi = - \frac{4\pi }{\lambda }\frac{{B_{ \bot } s}}{R\tan \theta } - \frac{4\pi }{\lambda }\frac{{B_{ \bot } h}}{R\sin \theta } $$
(1.7)

In a SAR image, s is the slant range difference between two neighboring pixels in the range direction, i.e., the slant range pixel size—a system parameter determined by the radar range sampling frequency. So the phase contribution due to the slant range difference (i.e., the first term on the right side of Eq. 1.7) can be removed from the original interferogram using SAR system parameters (s, R, θ, and B ). The process is called phase flattening and the result is a flattened interferogram (Fig. 1.1e) in which the phase variation (\( \Delta \phi_{flat} \)) can be expressed as:

$$ \Delta \phi_{flat} = - \frac{4\pi }{\lambda }\frac{{B_{ \bot } h}}{R\sin \theta } $$
(1.8)

The phase (or range distance difference) in the original interferogram represented by Eq. 1.4 and exemplified by Fig. 1.1d contains contributions from both the topography and any possible ground surface deformation. In order to derive a deformation image, the topographic contribution needs to be removed from the original interferogram. The most common procedure is to use an existing DEM and the InSAR imaging geometry to produce a synthetic interferogram (a representation of the phase image that would be produced by topography alone) and subtract it from the interferogram to be studied (e.g., Massonnet and Feigl 1998; Rosen et al. 2000). This is the so-called 2-pass InSAR approach. Alternatively, the synthetic interferogram that represents the topographic contribution can come from a different interferogram of the same area. The procedures are then called 3-pass or 4-pass InSAR (Zebker et al. 1994). Because the 2-pass InSAR method is commonly used for deformation mapping, we next present a brief explanation of how to produce a topographic phase image from an existing DEM .

Two steps are required to simulate a topography-only interferogram based on a DEM. First, the DEM needs to be resampled to project heights from a map coordinate into the appropriate radar geometry via geometric simulation of the imaging process. The InSAR imaging geometry is shown in Fig. 1.3. The SAR acquires two images of the same scene from locations A 1 and A 2. The baseline, defined as the vector from A 1 to A 2, has length B and is tilted with respect to the horizontal by angle α. The slant range r from the SAR to a ground target T with an elevation value h is linearly related to the measured phase values in the SAR images by Eqs. 1.1 and 1.2. The look angle from A 1 to target T is θ 1. For each ground resolution cell at ground range r g with elevation h, the slant range value (r 1) should satisfy:

Fig. 1.3
figure 3

InSAR imaging geometry. Two SAR images of the same target area are acquired from vantage points A 1 and A 2. The vector between A 1 and A 2 is called the baseline, which has length B and is tilted with respect to the horizontal by angle α. The baseline B can be represented by a pair of horizontal (B h) and vertical (B v) components, or by a pair of parallel (B //) and perpendicular (B ) components. The range distances from the SAR vantage points to a ground target T with elevation h are r 1 and r 2, respectively. The look angle from A 1 to the ground point T is θ 1

$$ r_{1} = \sqrt {(H + R)^{2} + (R + h)^{2} - 2(H + R)(R + h)\cos (\frac{{r_{g} }}{R})} $$
(1.9)

where H is the satellite altitude above a reference Earth surface, assumed to be a sphere with radius R. The radar slant range and azimuth coordinates are calculated for each point in the DEM. This set of coordinates forms a non-uniformly sampled grid in SAR coordinate space. The DEM height data from the non-uniform grid are then resampled into a uniform grid in SAR coordinate space.

In the second step required to simulate a topography-only interferogram, the precise look angle from A 1 to ground target T at the ground range r g (slant range r 1) and elevation h is calculated as follows:

$$ \theta_{1} = \arccos \left[ {\frac{{(H + R)^{2} + r_{1}^{2} - (R + h)^{2} }}{{2(H + R)r_{1} }}} \right] $$
(1.10)

Because θ1 is known from the imaging geometry of the SAR, the interferometric phase value due to the topographic effect at target T can be calculated as:

$$ \phi_{dem} = - \frac{4\pi }{\lambda }(r_{1} - r_{2} ) = \frac{4\pi }{\lambda }\left(\sqrt {r_{1}^{2} - 2(B_{h} \sin \theta_{1} - B_{v} \cos \theta_{1} )r_{1} + B^{2} } - r_{1} \right) $$
(1.11)

where B h and B v are horizontal and vertical components of the baseline B (Fig. 1.3).

Figure 1.1f shows the simulated topographic effect in the interferogram shown in Fig. 1.1d. The simulated interferogram was produced using an existing DEM and the InSAR imaging geometry for the interferometric pair. Removing topographic effects from the original interferogram results in an interferogram containing information from two sources: (1) any range differences caused by relative ground-surface displacements (deformation) that occurred during the time interval between image acquisitions, and (2) measurement noise (Fig. 1.1g). This information is represented by the phase value:

$$ \phi_{def} = \phi - \phi_{dem} $$
(1.12)

In practice, an ellipsoidal Earth surface characterized by its major axis, e maj , and minor axis, e min , is used instead of a spherical Earth. The radius of the Earth at the imaged area is then:

$$ R = \sqrt {(e_{\hbox{min} } \sin \beta )^{2} + (e_{maj} \cos } \beta )^{2} $$
(1.13)

where β is the latitude of the center of the imaged area.

If h is set to zero, the procedure outlined in Eqs. 1.91.13 will remove the effect of an ellipsoidal Earth surface on the interferogram. This results in a flattened interferogram (Fig. 1.1e) in which the phase values can be approximated by Eq. 1.8. Recognizing that \( R = \frac{H}{\cos \theta } \) (Fig. 1.2), we can rewrite Eq. 1.8 as:

$$ \phi_{flat} = - \frac{4\pi }{\lambda }\frac{{B_{ \bot } }}{{H\tan \theta_{1} }}h + \phi_{def} $$
(1.14)

If ϕ def in Eq. 1.14 is negligible, ϕ flat can be used to calculate the surface height h. This explains how the InSAR technique can be used to produce an accurate, high-resolution DEM for a large region. A near-global DEM was produced by stitching together topography-only interferograms derived in this way from SAR images acquired during the Shuttle Radar Topography Mission (SRTM) (Farr et al. 2007). Note that, because the SRTM DEM comprises ellipsoidal heights rather than orthometric heights, geoid undulation must be considered for some applications—especially for InSAR images composed of many azimuthal image frames (i.e., hundreds of kilometers long in the azimuth direction). For the ERS-1/2 satellites, H is about 800 km, θ 1 is about 23° ± 3°, λ is 5.66 cm, and B should be less than 1,100 m to ensure that interferometric coherence is maintained. In this case, Eq. 1.14 can be approximated as:

$$ \phi_{flat} \approx - \frac{2\pi }{9600}B_{ \bot } h + \phi_{def} $$
(1.15)

For an interferogram with B  = 100 m, 1 m of topographic relief produces a phase value of about 4°. However, producing the same phase value requires only 0.3 mm of surface deformation. So the interferogram phase value is much more sensitive to changes in topography (i.e., surface deformation ϕ def ) than to the topography itself (i.e., h). This explains why repeat-pass InSAR is capable of measuring surface deformation with a theoretical precision of just a few millimeters.

In the 2-pass InSAR technique, any errors in the DEM used to remove topographic effects are mapped into apparent surface deformation. This effect is characterized by a term called the altitude of ambiguity (h a ), which is the amount of topographic error required to generate one interferometric fringe in a topography-removed interferogram (Massonnet and Feigl 1998). Assigning \( \Delta \phi_{flat} \) = 2π in Eq. 1.8, we can derive the altitude of ambiguity h a as a function of viewing-geometey parameters:

$$ h_{a} = - \frac{\lambda R\sin \theta }{{2B_{ \bot } }} = - \frac{\lambda H\tan \theta }{{2B_{ \bot } }} $$
(1.16)

For example, in the case above, the interferogram with B  = 100 m has h a  = 960 m. A 10 m DEM error would manifest itself as 10/960 of a fringe or about 0.3 mm of apparent surface displacement in a topography-removed interferogram. Because the altitude of ambiguity is inversely proportional to the perpendicular baseline B , interferometric pairs with smaller baselines are better suited for deformation analysis and those with larger baselines are a better choice for DEM generation.

The final procedure in 2-pass InSAR is to rectify the SAR images and interferograms into map-coordinate space, which is a backward transformation of Eq. 1.9. The resulting geo-referenced interferogram (Fig. 1.1h) and derived products can be overlaid with other data layers to enhance the utility of the interferograms and facilitate data interpretation (Fig. 1.1i). Figure 1.1h and i show six concentric fringes that represent about 17 cm of range decrease (mostly uplift) centered on the southwest flank of Mount Peulik volcano, Alaska. The volcano inflated aseismically from October 1996 to September 1998, a period that included an intense earthquake swarm that started in May 1998 more than 30 km northwest of Peulik and continued for more than 5 months (Lu et al. 2002) (see Chap. 6, Ugashik-Mount Peulik section).

1.1.4 InSAR Coherence, Accuracy, and Critical Baseline

1.1.4.1 InSAR Coherence and Measurement Accuracy

An InSAR coherence image is a cross-correlation product derived from two co-registered complex-valued (both intensity and phase components) SAR images (Lu and Freymueller 1998; Zebker and Villasenor 1992). It depicts changes in backscattering characteristics on the scale of the radar wavelength. Loss of InSAR coherence is often referred to as decorrelation. Because InSAR coherence is related to the amount of phase error in an interferogram, it determines the accuracy of the deformation image or InSAR-derived DEM map. InSAR coherence is estimated by cross-correlation of the SAR image pair over a small window of pixels (Fujiwara et al. 1998; Lu and Freymueller 1998):

$$ \gamma = \left| {\frac{{\sum {C_{1} C_{{_{2} }}^{*} e^{{ - j\phi_{\det } }} } }}{{\sqrt {\sum {\left| {C_{1} } \right|}^{2} \sum {\left| {C_{2} } \right|}^{2} } }}} \right| $$
(1.17)

where C 1 and C 2 are complex-valued backscattering coefficients, \( C_{2}^{*} \) is the complex conjugate of C 2 , ϕ det is the deterministic phase due to baseline error, topography, or large deformation in the correlation window. The deterministic phase values in the correlation window can be approximated as 2-dimensional linear phases. An InSAR coherence map is generated by computing γ in a moving correlation window over the entire image.

Decorrelation ρ, which is equal to 1-γ, can have several causes: (1) thermal decorrelation is caused by uncorrelated noise sources in radar instruments; (2) geometric decorrelation results from imaging a target from much different look angles; (3) volume decorrelation is caused by volume backscattering effects; and (4) temporal decorrelation can be due to environmental changes over time (Lu and Kwoun 2008; Zebker and Villasenor 1992). SAR image misregistration (see Sect. 1.2.2) and other InSAR processing errors also can reduce the level of InSAR coherence.

Given the coherence value γ calculated from Eq. 1.17, the phase standard deviation can be approximated as:

$$ \delta \phi = \frac{{\sqrt {1 - \gamma^{2} } }}{{\sqrt {2N} \gamma }} $$
(1.18)

where N is the number of looks (i.e., total number of pixels in the correlation window) used in estimating γ. The standard deviation of the phase decreases with increasing coherence, resulting in a more precise InSAR-derived DEM map or deformation image. The multi-look number (N) should be large enough to obtain a realistic estimate of the phase standard deviation. For ERS-1/ERS-2 images, a multi-look factor of 20 (2 pixels in range and 10 pixels in azimuth) is often used. The resulting interferogram (or coherence map) has a spatial resolution of about 40 m. In addition, the multi-looking process increases the signal-to-noise ratio of complex-valued SAR backscattering images or interferograms. For example, the sum of N coherent pixels renders a single multi-looked pixel whose amplitude is about N times larger than the average amplitude of the N constituent pixels. On the other hand, the amplitude of the sum of N decorrelated pixels is only about \( \sqrt N \) times the average amplitude of N constituent pixels. Therefore, the signal-to-noise ratio of the multi-look image is improved relative to the original image by a factor of \( N/\sqrt N = \sqrt N \) (Massonnet and Souyris 2008).

1.1.4.2 InSAR Critical Baseline

For distributed targets, InSAR coherence decreases with increasing perpendicular baseline B . Ignoring temporal and volumetric decorrelations, the critical baseline, B c, is defined as the minimum value of B for which InSAR coherence is totally lost. In essence, the critical baseline is related to the geometric decorrelation that results from imaging a target from very different look angles. Critical baseline is an important concept in InSAR image selection and processing. Next, we’ll explore this concept from four slightly different but related perspectives.

InSAR critical baseline—geometric decorrelation

Zebker and Villasenor (1992) evaluated the degree of geometric decorrelation in an InSAR image by forming the Fourier transform of the SAR impulse response intensity. Assuming that the imaged surface consists of uniformly distributed and uncorrelated scatterers and that the impulse response is approximated as a sinc function in the radar backscattering model, Zebker and Villasenor (1992) derived the following expression for spatial decorrelation ρ spatial (i.e., 1−γ spatial ):

$$ \rho_{spatial} = 1 - \frac{{2B_{ \bot } S_{g} \cos \theta }}{\lambda R} $$
(1.19)

where S g is the ground range resolution and geometrically is equal to \( \frac{{S_{s} }}{\sin \theta } \), where S s is the slant range resolution. Note that S s , a SAR system parameter, is equal to \( \frac{c}{{2B_{w} }} \), where c is the speed of light and B w is the SAR chirp bandwidth (see Glossary).

The value of ρ spatial decreases with increasing B . When ρ spatial reaches zero, complete decorrrelation occurs. This leads to the derivation of critical baseline B c as the following:

$$ B_{c} = \frac{\lambda R}{{2S_{g} \cos \theta }} = \frac{\lambda R\tan \theta }{{2S_{s} }} = \frac{{\lambda RB_{w} \tan \theta }}{c} $$
(1.20)

For an imaged terrain with a slope angle α, the equation above can be generalized as:

$$ B_{c} = \frac{{\lambda RB_{w} \tan (\theta - \alpha )}}{c} $$
(1.21)

InSAR critical baseline—celestial footprint

Another way to conceptualize the critical baseline is by considering each ground resolution element (S g ) to be a radiating antenna. The beam width of an antenna with size S (where S is the projection of S g onto the direction perpendicular to the slant range, i.e., S  = S g cosθ) can be expressed as:

$$ \Delta \theta = \frac{\lambda }{{2S_{ \bot } }} $$
(1.22)

To achieve a coherent InSAR image, the trajectories of the two satellite orbits must lie within the angular beam width defined by Eq. 1.22 (i.e., within the “celestial footprint” of the radiating ground resolution element) so that the speckle (see Glossary) in both images will remain correlated (Gabriel and Goldstein 1988; Prati and Rocca 1990; Vachon et al. 1995). Therefore, the critical baseline can be expressed as:

$$ B_{c} = R\Delta \theta = \frac{\lambda R}{{2S_{ \bot } }} = \frac{\lambda R}{{2S_{g} \cos \theta }} = \frac{\lambda R\tan \theta }{{2S_{s} }} = \frac{{\lambda RB_{w} \tan \theta }}{c} $$
(1.23)

For terrain with a slope angle α, the above equation can be generalized as:

$$ B_{c} = \frac{{\lambda RB_{w} \tan (\theta - \alpha )}}{c} $$
(1.24)

This is the same as Eq. 1.21, which was derived from InSAR decorrelation based on radar backscattering modeling.

InSAR critical baseline—spectral shift

Gatelli et al. (1994) derived an equation for the critical baseline based on the principle of spectral shift of the radar backscattering spectrum. This approach provides insights into the relationship between the interferogram baseline (i.e., variation of radar look angles) and the frequency shift of the backscattering signal. Considering that the spectra of the backscattering returns at different look angles correspond to different portions of the spectrum of the ground reflectivity, the frequency shift of the backscattered signal is related to the variation of radar look angle as follows (Gatelli et al. 1994):

$$ \Delta f = \frac{f}{\tan (\theta - \alpha )}\Delta \theta $$
(1.25)

where f is the central frequency of the radar wave, α is the slope angle of the imaged surface, and Δθ is the difference in radar look angle (Fig. 1.2). Recognizing that (for \( B_{ \bot } \ll R \)), \( \Delta \theta \approx \frac{{B_{ \bot } }}{R} \) (Fig. 1.2), and given that \( f = \frac{c}{\lambda } \), we can rewrite Eq. 1.24 as:

$$ \Delta f = \frac{{cB_{ \bot } }}{\lambda R\tan (\theta - \alpha )} $$
(1.26)

Equations 1.25 and 1.26 state that the backscattering returns contain different spectral portions of the ground reflectivity spectrum due to a change in SAR look angle. The SAR signal has a limited bandwidth B w centered at frequency f, so complete decorrelation occurs when Δf > B w . This leads to the definition of critical baseline as:

$$ B_{c} = \frac{{\lambda RB_{w} \tan (\theta - \alpha )}}{c} $$
(1.27)

which is the same as Eqs. 1.21 and 1.24.

InSAR critical baseline—phase aliasing

We now consider two scatterers within a ground resolution element q on a terrain with slope angle α (Fig. 1.2). The range difference and the corresponding phase difference between these two scatterers can be calculated based on Eqs. 1.5 and 1.6, respectively. From Eq. 1.6, the phase difference Δϕ increases linearly with B . The critical baseline can be defined as the minimum value of B for which the phase difference Δϕ equals 2π, resulting in an aliasing phase (and consequently loss of coherence):

$$ \frac{4\pi }{\lambda }\frac{{B_{c} q}}{R} = 2\pi $$
(1.28)

Recognizing that \( q = \frac{{S_{s} }}{\tan (\theta - \alpha )} \), we can write the following expression for the critical baseline:

$$ B_{c} = \frac{\lambda R}{2q} = \frac{\lambda R\tan (\theta - \alpha )}{{2S_{s} }} = \frac{{\lambda RB_{w} \tan (\theta - \alpha )}}{c} $$
(1.29)

which is the same as the expressions in Eqs. 1.21, 1.24, and 1.27.

The critical baseline is an essential concept in InSAR processing because it helps to guide image selection and processing strategies. Several points are worth mentioning here. First, the critical baseline reaches a maximum at a look angle of about 45° over flat terrain. Therefore, SAR images obtained with a larger look angle (more oblique, e.g., beam mode 6 for Envisat) render a larger critical baseline than those obtained with a smaller look angle (e.g., beam mode 2 for Envisat). This means that, other factors being equal, an interferogram produced from SAR images obtained with a larger look angle will have better coherence than another interferogram with the same baseline but produced from SAR images obtained with a smaller look angle.

A second important point is that the critical baseline decreases with increasing slope angle. Therefore, the critical baseline for a slope facing toward the radar is much less than that for a slope facing away from the radar. Areas where foreshortening occurs quickly lose coherence when the slope angle approaches the radar look angle (i.e., the critical baseline approaches 0). On the other hand, areas with slopes facing away from the radar have higher coherence than other areas (e.g., flat terrain and slopes facing towards the radar) as long as the slope angle is less than the radar look angle (i.e., before the slope becomes a shadowed area). As a consequence, for a right-looking SAR (one that points to the right of its flight path), an east-facing slope generally will render higher coherence in an ascending-track interferogram (SAR orbiting from south to north, looking east) than in a descending-track interferogram of the same area. Conversely, a west-facing slope generally will render higher coherence in a descending-track interferogram.

A third concept that emerges from our discussion of critical baseline is the relationship between baseline length and the degree of geometric decorrelation. Decorrelation is a result of the range spectral shift, which increases linearly with baseline. Filtering out non-overlapping portions of the SAR backscattering spectrum before interferogram generation therefore improves interferogram coherence. Because the range spectral shift is a function of the terrain slope angle, a topography-dependent, common-band filtering algorithm is desirable (Fornaro and Gurarnieri 2002).

Fourth and finally, the range spectral shift due to the baseline separation between SAR images from two SAR systems with slightly different central frequencies can be utilized to compensate for the frequency difference (Guarnieri and Prati 2000). For example, C-band Envisat and ERS-2 SARs were on the same orbital plane with a 35-day repeat for both sensors and a 28-min time lag between them. Because the radar center frequency of Envisat was different from that of ERS-2 by 31 MHz, Envisat SAR images generally cannot be combined with ERS-2 images for repeat-pass cross-platform InSAR processing. Fortunately, the 31 MHz carrier frequency difference can be compensated by a perpendicular baseline of approximately 2 km (Eq. 1.26). Consequently, Envisat and ERS-2 images can be combined to preserve InSAR coherence in spite of a large baseline of about 2 km, which is twice as large as the critical baseline for either an ERS-2 or Envisat (beam mode 2) interferogram (Guarnieri and Prati 2000; Lee et al. 2010). The resulting interferogram is very sensitive to the surface topography and can be used to generate a high-precision DEM. More details on DEM generation from InSAR can be found in Chap. 2.

Further in-depth descriptions of InSAR processing techniques can be found in Zebker et al. (1994), Bamler and Hartl (1998), Henderson and Lewis (1998), Massonnet and Feigl (1998), Rosen et al. (2000), Hanssen (2001), Hensley et al. (2001), and Massonnet and Souyris (2008). Interested readers are encouraged to consult these technical references to more fully explore the concepts introduced here .

1.1.5 InSAR Image Interpretation and Modeling

To understand the causes of surface deformation, a common approach is to use numerical models based on an observed deformation field to infer various physical parameters of the deformation source(s). Because InSAR produces spatially dense maps of the deformation field (albeit only the satellite-to-ground component) rather than the relatively small number of point measurements available from techniques such as GPS or in situ sensors such as strainmeters or tiltmeters, InSAR observations can provide an especially strong constraint on deformation models. Various idealized deformation source geometries are available for volcano studies, including the spherical point pressure source (hereafter referred to as the Mogi source) (Mogi 1958), the dislocation source (sill or dike source) (Okada 1985), the ellipsoid source (Davis 1986; Yang et al. 1988), and the penny-crack source (Fialko et al. 2001). The most widely used model—both because of its simplicity and because it often fits the observed deformation field quite well—is a Mogi source embedded in an elastic homogeneous half-space. The predicted displacement u at the free surface due to a change in volume ΔV or pressure ΔP of an embedded sphere is:

$$ u_{i} (x_{1} - x_{1}^{\prime} ,x_{2} - x_{2}^{\prime} , - x_{3} )\; = \;C\frac{{x_{i} - x_{i}^{\prime} }}{{\left| {R^{3} } \right|}} $$
(1.30)

where \( {x}_{1}^{\prime } \), \( {x}_{2}^{\prime } \), and \( {x}_{3}^{\prime } \) are the horizontal coordinates and depth of the center of the sphere, R is the distance between the center of the sphere and the observation point (x 1, x 2, and 0), and C is a combination of material properties and source strength:

$$ C = \Delta P(1 - v)\frac{{r_{s}^{3} }}{G} = \Delta V\frac{(1 - v)}{\pi } $$
(1.31)

where ΔP and ΔV are the pressure and volume changes in the sphere, respectively, and \( v \) is Poisson’s ratio of the host rock (typical value is 0.25), \( r_{s} \) is the radius of the sphere, and \( G \) is the shear modulus of the host rock (Delaney and McTigue 1994; Johnson 1987).

A nonlinear least-squares inversion approach is often used to optimize the source parameters in Eqs. 1.30 and 1.31 (Cervelli et al. 2001; Press et al. 2007). Modeling the observed interferogram in Fig. 1.1h using a Mogi source results in a best-fit source located at a depth of 6.5 ± 0.2 km—presumably a crustal magma reservoir beneath Mount Peulik volcano. The calculated volume change of the reservoir is 0.043 + 0.002 km3. Figure 1.1i shows the model interferogram based on these best-fit source parameters. The Mogi source fits the observed deformation in Fig. 1.1h very well. Discussions of advanced InSAR techniques and more details on modeling InSAR observations can be found in Chaps. 2 and 3.

1.1.6 InSAR Products

Typical InSAR data processing includes precise registration of an interferometric SAR image pair, interferogram generation, removal of the curved Earth phase trend, adaptive filtering, phase unwrapping, precise estimation of interferometric baseline, generation of a surface deformation image (or DEM map), estimation of interferometric correlation, and rectification of interferometric products. Using a single pair of SAR images as input, a typical InSAR processing chain outputs two SAR intensity images, a deformation or DEM map, and an interferometric correlation image.

1.1.6.1 SAR Intensity Image

SAR intensity images are sensitive to terrain slope, surface roughness, and target dielectric constant. Surface roughness refers to the SAR wavelength-scale variation in surface relief, and the radar dielectric constant is an electrical property of material that influences radar return strength. Therefore, SAR intensity images alone can be used to map hazards-related landscape changes, whether natural or human-caused (e.g., volcanic flows, wildfires, deforestation). Multi-temporal (i.e., repeated or time-sequential) SAR intensity images can be used to monitor the progression of landscape changes due to hazards such as flooding, wildfire, volcanic eruption, earthquake shaking, or landsliding. As an example, Fig. 1.4a and b show two SAR intensity images acquired before and during the February–May 1997 eruption at Mount Okmok, Alaska. A lava flow emplaced during the eruption is clearly delineated in the co-eruption image (Fig. 1.4b). No cloud-free optical satellite image from Landsat or other civilian satellites is available for the entire 2-month-long eruption. This example demonstrates the value of all-weather SAR images for monitoring hazardous events in remote, cloud-prone areas.

Fig. 1.4
figure 4

Examples of typical InSAR products. a ERS-1 SAR intensity image of Mount Okmok, Alaska, before its 1997 eruption. b JERS-1 SAR intensity image acquired during the Februray–May 1997 Okmok eruption. The 1997 lava flows are outlined. c Coherence map from SAR images acquired on July 17 and September 25, 1997. Loss of coherence (colored in pink and purple) is primarily related to the emplacement of Februray–May 1997 lava flow inside the caldera, changes due to ice/snow on the caldera rim, and vegetation and erosion in lowlands and along coasts. d InSAR deformation image produced from two SAR images acquired before and after the 1997 eruption, showing volcano-wide deflation. Each fringe (full color cycle) represents 10 cm of range change between the ground and satellite. Areas that lack interferometric coherence are uncolored. e Synthetic InSAR image from a Mogi source that was derived from modeling the observed deformation in Fig. 1.4d. Each fringe (full color cycle) represents 10 cm of range change between the ground and the satellite. The cross marks the ground location of the Mogi source. Areas that lack interferometric coherence are uncolored. f Thickness of lava flows emplaced inside Okmok Caldera during the 1997 eruption. Flow thickness was derived from the height difference between pre-eruption and post-eruption DEMs that were constructed from InSAR images. The extent of Fig. 1.4f is outlined in Fig. 1.4b

1.1.6.2 InSAR Coherence Image

Loss of InSAR coherence or decorrelation renders an InSAR image useless for measuring ground surface deformation. On one hand, geometric and temporal decorrelation can be mitigated by choosing an image pair with short baseline (similar look angles) and brief temporal separation, respectively. Choosing such a pair is recommended when the goal is to measure deformation. On the other hand, the pattern of decorrelation within an image can provide useful information about surface modifications that occurred between the acquisition times of two SAR images. By choosing image pairs appropriately, time-sequential InSAR coherence maps can be used to map the extent and progression of hazardous events such as lava flows, wildfires, or floods . As an example, Fig. 1.4c shows an InSAR coherence map for Mount Okmok that was derived from SAR images acquired on July 17 and September 25, 1997. The decorrelation patterns (colored in pink and purple) outline the extent of 1997 lava flows, variable snow and ice cover near the caldera rim, and vegetation and landscape erosion along coastal areas. Decorrelation inside the summit caldera is due primarily to post-emplacement deformation of lava flows from the February–May 1997 eruption of Okmok.

1.1.6.3 InSAR Deformation Image

Unlike a SAR intensity image, an InSAR deformation image is derived from phase components of two overlapping SAR images. A SAR is a side-looking sensor, so an InSAR deformation image depicts ground surface displacements in the SAR line-of-sight (LOS) direction, which include both vertical and horizontal components. Typical look angles for satellite-borne SARs are less than 45° from vertical, so LOS displacements in InSAR deformation images are more sensitive to vertical displacement (uplift/subsidence) than horizontal displacement. Here and henceforth we conform to common usage by sometimes using the terms “displacement” and “deformation” interchangeably. Readers should keep in mind that, strictly speaking, displacement refers to a change in position (e.g., LOS displacement of a given resolution element or group of elements in an InSAR image), whereas deformation refers to differential motion among several elements or groups (i.e., strain). Likewise, the terms “uplift” and “inflation” are used interchangeably to mean tumescence of the ground surface, and “subsidence” and “deflation” are used interchangeably to mean de-tumescence of the ground surface. As an example, an InSAR deformation image produced from two SAR images that bracket the 1997 Okmok eruption shows volcano-wide deflation (subsidence) of about 120 cm (Fig. 1.4d).

An interferogram or InSAR-derived surface deformation image is often visualized using a pseudo-color map. Figure 1.5a and b are two synthetic interferograms showing ground uplift and subsidence, respectively. Each fringe, represented by a color band that spans the spectrum from yellow to violet to cyan, or vice versa, corresponds to a phase change through a certain range (often set to 2π) or a LOS ground surface deformation of a certain magnitude (often set to half of the SAR wavelength). Fringes can be thought of as contours of range change, akin to the contours on a topographic map. In most cases, the colors themselves are not meaningful but the change in color represents a certain amount of relative phase change or deformation. Note the progression of colors for an uplift signal is opposite to that due to a subsidence signal (Fig. 1.5). The more fringes there are in a certain area, the more surface deformation occurred there. So fringe density is proportional to strain. An area of uniform color indicates there was no relative LOS change in that area. Because a cycle of colors is used to represent a fringe, a portion of the full color cycle represents a fraction of a fringe. This allows us to represent relative surface displacement (i.e., deformation) to a sub-centimeter level. Color rendering of interferograms is arbitrary and can be confusing: the same color transition can represent two opposite deformation patterns in two interferograms from different researchers. Due to SAR’s side-looking perspective, a symmetric deformation signal is represented by a non-symmetric fringe pattern (Fig. 1.5). Random colors at the pixel scale in an interferogram indicate loss of coherence. For many of the InSAR images in this book, we drape interferometric fringes over shaded-relief images from DEMs and show areas of loss of coherence as uncolored (see, for example, Figs. 1.1 and 1.4). More discussion of the interpretation of ground displacements from fringe patterns can be found in Chap. 2.

Fig. 1.5
figure 5

Representation and visualization of InSAR image and InSAR-derived deformation maps. Each fringe, represented by a cycle of colors (yellow, red, purple, cyan, green, to yellow), corresponds to a 28-mm LOS range change, which is half of ERS/Envisat/Radarsat-1 SAR wavelength. Note the color progression for a subsidence signal (left column) is opposite from that for an uplift signal (right column)

The spatial distribution of surface deformation data from InSAR images can be used to constrain numerical models of subsurface deformation sources. By comparing the deformation patterns predicted by such idealized sources to the actual patterns observed with InSAR, we can identify a best-fitting source model. As an example, the best-fit point pressure source for the observed deformation interferogram in Fig. 1.4d is located beneath the center of Okmok caldera at a depth of ~3 km below sea level (~3.5 km below the caldera floor), and the source volume change associated with the 1997 eruption is −0.047 km3 (see Okmok section, Chap. 6). Figure 1.4e shows the modeled interferogram based on the best-fit spherical point pressure source. The model source fits the observed deformation pattern in Fig. 1.4d remarkably well. Such a result can shed light on the processes responsible for observed surface deformation. In this case, the location of the model source ~3.5 km below the center of the caldera floor is evidence that the deflation process is withdrawal of magma from a storage zone in the upper crust to feed the 1997 eruption.

InSAR deformation images have an advantage for modeling purposes over point measurements made with GPS or strainmeters, for example, because InSAR images provide more complete spatial coverage than is possible with even a dense network of in situ sensors. On the other hand, continuous GPS stations and strainmeters provide better measurement precision and much better temporal resolution than is possible with InSAR images, because the latter is constrained by the orbital repeat times of SAR satellites—typically several days to weeks. In hazardous situations, InSAR offers the additional advantage of not placing field observers or instruments in harm’s way. For hazards monitoring, a combination of periodic InSAR observations and continuous real-time data streams from networks of in situ sensors (e.g., GPS, strainmeters, tiltmeters) is ideal .

1.1.6.4 Digital Elevation Model

As described earlier, the ideal SAR configuration for DEM production is a single-pass (simultaneous) two-antenna system (e.g., SRTM). However, repeat-pass single-antenna InSAR also can be used to produce useful DEMs. Either technique is advantageous in areas where the photogrammetric approach to DEM generation is hindered by persistent clouds or other factors (Lu et al. 2003). There are many sources of error in DEM production from repeat-pass SAR images, e.g., inaccurate determination of the InSAR baseline, atmospheric path-delay anomalies, or possible surface deformation due to tectonic, volcanic, or other sources during the time interval spanned by the images. To generate a high-quality DEM, these errors must be identified and corrected using a multi-interferogram approach (Lu et al. 2003). A data fusion technique such as the wavelet method can be used to combine DEMs from several interferograms with different spatial resolution, coherence, and vertical accuracy to generate the final DEM product (Baek et al. 2005; Ferretti et al. 1999) . One example of the utility of precise InSAR-derived DEMs is illustrated in Fig. 1.4f, which shows the extent and thickness of a lava flow emplaced during the 1997 Okmok eruption. The flow’s 3-dimensional distribution was derived by differencing two DEMs that represent the surface topography before and after the eruption.

1.2 Issues in InSAR Data Processing

Several issues must be addressed during InSAR data processing to ensure the best possible products. Among these are spurious phase anomalies introduced by the SAR processor, coherence improvement, baseline estimation, tropospheric artifacts, and ionospheric artifacts. In the following sections we discuss each of these issues sequentially.

1.2.1 Phase Anomalies Due to SAR Processor

A SAR processor is required to transform a scene of raw SAR data into an SLC image through matched filtering of raw SAR data in both range and azimuth directions with corresponding reference functions (e.g., Curlander and McDonough 1991). However, imperfect geometric calculations can result in a spurious ramping phase in InSAR images. Figure 1.6 shows two InSAR images processed with two different SAR processors. The ramping phase in Fig. 1.6a is likely due to SAR processor error. Note that ramping phase caused by SAR processor error could easily be confused with that caused by baseline error (see Sect. 1.2.3).

Fig. 1.6

Interferograms produced by two different SAR processors using the same pair of SAR images. The ramping fringes (“ripples”) in (a) are likely due to systematic phase error associated with one of the SAR processors. Ramping fringes are absent or nearly so in (b), indicating that substantially less systematic phase error was associated with the SAR processor used to produce that image

1.2.2 InSAR Coherence Improvement

Interferometric coherence is a measure of the correlation of SAR images acquired at different times. It describes the amount of phase error and thus the accuracy of deformation estimates or DEM products. Constructing a coherent interferogram requires that SAR images correlate with each other; that is, the backscattering spectrum must be substantially similar over the observation period. Physically, this translates into a requirement that the ground scattering surface be relatively undisturbed at the scale of the radar wavelength during the time between measurements (Li and Goldstein 1990; Zebker and Villasenor 1992). Comparison of L-band and C-band interferometric coherence suggests that L-band is far superior to C-band for surfaces covered with thick vegetation or loose material that is easily mobilized (e.g., fine ash or pumice fragments) (Lu 2007; Lu et al. 2005a, b). Therefore, the chances of producing coherent interferograms are improved by: (1) using C-band images separated in time by only a few months to a few years in sparsely vegetated terrain, (2) using L-band imagery in areas where the surface is covered with thick vegetation or loose material, and (3) choosing SAR images acquired during local summer in areas that are subject to seasonal snow cover (Fig. 1.7).
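In practice, coherence is estimated from the co-registered single-look complex (SLC) images themselves by computing a normalized cross-correlation magnitude over a small moving window. The Python sketch below shows one common way to do this; the window size and the use of SciPy's uniform filter are our choices, not a prescribed standard.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(slc1, slc2, win=5):
    """Sample coherence of two co-registered complex SLC images over a
    win-by-win moving window (0 = fully decorrelated, 1 = fully coherent)."""
    ifg = slc1 * np.conj(slc2)                     # complex interferogram
    num_re = uniform_filter(ifg.real, win)         # window-averaged numerator
    num_im = uniform_filter(ifg.imag, win)
    p1 = uniform_filter(np.abs(slc1) ** 2, win)    # window-averaged powers
    p2 = uniform_filter(np.abs(slc2) ** 2, win)
    return np.hypot(num_re, num_im) / np.sqrt(p1 * p2 + 1e-30)
```

Pixels with coherence below some empirical cutoff (often ~0.2 to 0.3) are typically masked before phase unwrapping, which is why incoherent areas appear uncolored in figures such as Fig. 1.7.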

Fig. 1.7

a L-band ALOS and b C-band Envisat InSAR images showing ground surface deformation associated with the June 2007 Father’s Day intrusion and eruption along the east rift zone at Kīlauea volcano, Hawai’i. Each fringe (full color cycle) represents a line-of-sight range change of 11.8 cm or 2.83 cm for the ALOS and Envisat interferograms, respectively. InSAR deformation values are draped over a shaded relief map. Areas of coherence loss are uncolored. The C-band image loses coherence in densely vegetated areas more readily than the L-band image because volume backscattering from the vegetation canopy dominates at the shorter C-band wavelength

Another factor that affects InSAR coherence is the accuracy of image co-registration. A stringent prerequisite in InSAR processing is careful co-registration of the reference and slave SAR images and resampling of the slave image to the geometry of the reference image. For conventional InSAR processing, co-registration is done by cross-correlating the reference and slave images at a dense grid of pixel locations and using the results to construct range and azimuth offset polynomials for the entire image. The range and azimuth offset polynomials are expressed as functions of range and azimuth pixel position. A problem arises when, for an interferogram with a large perpendicular baseline, topographic variations introduce additional localized offsets between the reference and slave images (Lu and Dzurisin 2010). The range offset due to topographic relief is linearly dependent on topography and can be approximated as:

$$ \Delta r_{off} = -\frac{B_{\perp}}{H \tan\theta}\,\Delta h $$
(1.32)

where Δr_off is the range offset due to height difference Δh, B⊥ is the perpendicular baseline of the interferogram, H is the altitude of the SAR satellite above Earth, and θ is the SAR look angle. For the ERS and Envisat SAR sensors, normal values for H and θ are about 790 km and 23°, respectively (beam mode IS2 for Envisat). Therefore, a 1 km difference in topography can induce ~1.5 m range offset for an ERS or Envisat interferogram with a perpendicular baseline of 500 m. This offset is about 8 % of the range pixel size. For the ALOS PALSAR sensor, normal values for H and θ are about 700 km and 34°, respectively. The topography-induced range offset for a fine-beam PALSAR interferogram with a perpendicular baseline of 1 km can be as large as ~2.1 m, or about 23 % of the range pixel size. In other words, range offsets due to topographic relief in rugged terrain can be large enough to degrade InSAR coherence if the offsets are not taken into account during image co-registration. Therefore, we recommend using a DEM and the SAR imaging geometry to compute direct functions that map the position of each pixel in the reference image to a corresponding pixel location in the slave image. This can result in significant improvement in coherence for interferograms with relatively large baselines in mountainous areas (Lu and Dzurisin 2010).
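The magnitude of this effect is easy to check numerically. The short sketch below evaluates Eq. (1.32) for the two cases quoted above; the function name is ours, and the values are the nominal ones from the text.

```python
import numpy as np

def topo_range_offset(dh, b_perp, sat_alt, look_angle_deg):
    """Range offset (m) induced by topographic relief dh (m), per Eq. (1.32)."""
    return -b_perp * dh / (sat_alt * np.tan(np.radians(look_angle_deg)))

# ERS/Envisat case from the text: B_perp = 500 m, H = 790 km, theta = 23 deg
print(topo_range_offset(1000.0, 500.0, 790e3, 23.0))   # ~ -1.5 m
# Fine-beam ALOS PALSAR case: B_perp = 1000 m, H = 700 km, theta = 34 deg
print(topo_range_offset(1000.0, 1000.0, 700e3, 34.0))  # ~ -2.1 m
```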

1.2.3 InSAR Baseline Refinement

A significant error source in InSAR deformation mapping is baseline uncertainty due to inaccurate determination of SAR antenna positions at the times of image acquisitions (i.e., errors in orbit determinations). The accuracy of satellite position vectors provided in Radarsat-1 and JERS-1 metadata is much poorer than that for ERS-1, ERS-2, Envisat, ALOS, and TerraSAR-X. Therefore, baseline refinement is particularly important for Radarsat-1 and JERS-1 interferogram processing. Even for ERS-1, ERS-2, and Envisat, for which precise position vectors are available from Delft Institute for Earth-oriented Space Research (DEOS) (http://www.deos.tudelft.nl/ers/precorbs/), baseline errors in interferograms can be significant and baseline refinement during processing is recommended.

Figure 1.8a shows an interferogram of Mount Okmok produced from a pair of ERS-2 images acquired on August 18, 2000, and July 19, 2002, using precise position vectors. Apparent range changes due to baseline errors are obvious, and portrayal of the volcanic deformation field is compromised by the occurrence of more than three spurious fringes outside the 10-km-wide caldera—an excellent example of the need for baseline refinement during InSAR processing. A commonly used method is to determine the baseline vector based on an existing DEM via a least-squares approach (Rosen et al. 1996). For this method, areas of the interferogram that are used to refine the baseline should have negligible deformation or deformation that is well characterized by an independent data source. Starting with the interferogram in Fig. 1.8a, we assumed that deformation well outside the caldera was negligible and used the method of Rosen et al. (1996) to refine the baseline vector and produce the interferogram shown in Fig. 1.8b. A concentric pattern of more than three fringes, corresponding to 8–10 cm of surface displacement (mostly uplift), is centered within the caldera (Fig. 1.8b)—a clear improvement over the image produced without baseline refinement. Alternatively, baseline-induced fringes can be modeled using a first-degree or second-degree polynomial and removed from the contaminated interferogram. Obviously, interferogram phase values in deforming areas should not be used to estimate the polynomial coefficients.
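The polynomial alternative mentioned above amounts to an ordinary least-squares fit over pixels assumed to be free of deformation. A minimal sketch follows, assuming an unwrapped interferogram and a user-supplied mask of stable areas; the function name and interface are illustrative.

```python
import numpy as np

def remove_ramp(phase, mask, degree=1):
    """Fit and remove a polynomial ramp from an unwrapped interferogram.

    phase  : 2-D unwrapped phase (radians); NaN where incoherent
    mask   : boolean array, True where deformation is assumed negligible
    degree : 1 fits a plane; 2 adds quadratic terms
    """
    rows, cols = np.indices(phase.shape)
    valid = mask & np.isfinite(phase)
    terms = [np.ones(valid.sum()), rows[valid], cols[valid]]
    if degree == 2:
        terms += [rows[valid] ** 2, cols[valid] ** 2, (rows * cols)[valid]]
    A = np.column_stack(terms)
    coef, *_ = np.linalg.lstsq(A, phase[valid], rcond=None)  # fit on stable pixels
    full = [np.ones(phase.size), rows.ravel(), cols.ravel()]
    if degree == 2:
        full += [rows.ravel() ** 2, cols.ravel() ** 2, (rows * cols).ravel()]
    ramp = (np.column_stack(full) @ coef).reshape(phase.shape)
    return phase - ramp   # ramp evaluated (and removed) everywhere
```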

Fig. 1.8

Topography-removed interferograms of Mount Okmok, Alaska, (a) before and (b) after baseline refinement. Each interferometric fringe (full-color cycle) represents 2.83 cm of range change between the ground and the satellite

1.2.4 Tropospheric Artifacts

Atmospheric delay anomalies, which are caused by small variations in refractive index along the propagation paths of radar signals, are the most significant source of error in repeat-pass InSAR deformation measurements. Spaceborne SAR sensors such as those aboard ERS-1/-2, JERS-1, Radarsat-1, Envisat, ALOS, and TerraSAR-X orbit at altitudes of 600–800 km, so their signals must propagate twice through the ionosphere and troposphere during transit from SAR to ground and back again. In this and the following section, we discuss the effects of the troposphere and ionosphere on InSAR observations.

The lowest part of Earth’s atmosphere, from the surface to about 20 km altitude, is referred to as the troposphere. It contains approximately 80 % of the atmosphere’s mass and 99 % of its water vapor and aerosols. Differences in the temperature, pressure, and water vapor content of the troposphere at the times of SAR image acquisitions can result in variations in the refractive index and thus in the phase of signals returning to the radar. These phase variations show up in interferograms as fringes that could be mistaken for evidence of surface deformation. Zebker et al. (1997) showed that variations in tropospheric water vapor content contribute most to atmospheric delay anomalies: spatial and temporal changes of 20 % in relative humidity can lead to 10 cm errors in repeat-pass interferometric deformation maps. Note that tropospheric anomalies affect regions ranging from tens of meters to tens of kilometers across, whereas a baseline error contaminates an entire InSAR image (i.e., hundreds to thousands of kilometers).

In cloud-prone and rainy regions, the range change caused by tropospheric delays can be significant and must be considered during deformation analysis. Distinguishing the effects of tropospheric delay anomalies from surface deformation requires multi-temporal interferograms. Figure 1.9a shows a topography-removed interferogram of the southeastern part of Mount Okmok, Alaska, produced from a pair of SAR images acquired in May and July 1997. Apparent range changes as large as ~5 cm (two fringes) can be seen in the southern half of the image between the summit and the coastline. To test our suspicion that the fringes were caused by atmospheric delays rather than ground deformation, we produced two more interferograms for the same area. For one (Fig. 1.9b), we used the same July 1997 image together with an image acquired in September 1997. For another (Fig. 1.9c), we used the images acquired in May 1997 (also used for Fig. 1.9a) and September 1997 (also used for Fig. 1.9b). The interferograms shown in Fig. 1.9a and b, which have the July 1997 image in common, have fringe patterns that are similar in shape but opposite in color progression. Furthermore, the fringe pattern in question is absent from the interferogram shown in Fig. 1.9c, which was produced without the July 1997 image. From these observations we conclude that the fringes in Fig. 1.9a and b most likely were caused by tropospheric delays associated primarily with the July 1997 image. The fringes could in principle record successive episodes of equal but opposite ground deformation, but there is no geologic reason to suspect this and we regard it as highly unlikely.

Fig. 1.9

Topography-removed interferograms of Mount Okmok, Alaska, for the periods a May–July 1997, b July–September 1997, and c May–September 1997, all after the February–May 1997 eruption ended. Fringe patterns in the lower halves of Fig. 1.9a and b are similar in shape but opposite in color progression, suggesting that they were caused by atmospheric-delay anomalies associated with the SAR image acquired in July 1997, which is common to both interferograms. That image was not used to produce the interferogram in Fig. 1.9c, where a similar fringe pattern is absent. Each interferometric fringe (full-color cycle) represents the equivalent of a 2.83 cm range change between the ground and the satellite

Because tropospheric delays sometimes correlate with topography (Fig. 1.10a), it can be difficult to identify and remove atmospheric anomalies unless multi-temporal InSAR images are available. In general, the atmospheric delay phase can be treated as a temporally high-frequency signal and the deformation phase as a low-frequency signal that accumulates during a timespan comparable to that of the interferogram. This is the basis for multi-temporal InSAR processing, which can be used to separate the effects of tropospheric delays and ground surface deformation (see Chap. 3). In some cases, tropospheric delays do not correlate with topography (Fig. 1.10b and c). Most of the atmospheric delays in Fig. 1.10 have relatively low spatial frequency (i.e., wavelengths on the order of 1–10 km). However, tropospheric artifacts sometimes manifest themselves as shorter wavelength signals. For example, the atmospheric delays in Fig. 1.11 have wavelengths as short as tens of meters. SAR images with this type of short-wavelength atmospheric signal should be identified and excluded from further InSAR processing.
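The high-frequency/low-frequency distinction can be exploited with a simple temporal filter. The sketch below smooths a per-pixel phase time series with Gaussian weights in time, treating the smooth part as deformation-like and the residual as atmosphere-like. The window length and weighting scheme are illustrative choices, not the specific algorithm of Chap. 3.

```python
import numpy as np

def split_atmo_defo(ts_phase, t, window_days=180.0):
    """Crudely separate a per-pixel phase time series (radians) into a
    temporally smooth 'deformation-like' part and a high-frequency
    residual that is mostly atmosphere.

    ts_phase : array of shape (n_epochs,), unwrapped phase at one pixel
    t        : acquisition times in days, same length as ts_phase
    """
    smooth = np.empty_like(ts_phase)
    for i, ti in enumerate(t):
        # Gaussian weights centered on each epoch act as a temporal low-pass
        w = np.exp(-0.5 * ((t - ti) / (window_days / 2.0)) ** 2)
        smooth[i] = np.sum(w * ts_phase) / np.sum(w)
    return smooth, ts_phase - smooth   # (deformation-like, atmosphere-like)
```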

Fig. 1.10

Envisat interferograms of Unimak Island, Alaska (yyyymmdd): a 20040813–20051007, b 20040620–20041003, and c 20090719–20090823. Atmospheric delay anomalies in these three interferograms have relatively long wavelengths at spatial scales of ~1–10 km. The atmospheric delay signal correlates with topographic relief in Fig. 1.10a, whereas in Fig. 1.10b and c no such correlation exists

Fig. 1.11

ERS-2 interferogram of Nabro volcano, Eritrea, from images acquired on October 7, 1997, and September 26, 2000, illustrating short-wavelength effects of atmospheric delay anomalies

Finally, we want to caution readers that tropospheric delays can mimic ground surface deformation caused by groundwater movement or landslide motion. Figure 1.12a shows an interferogram in which most of the fringes are due to tropospheric artifacts. Fringes in the upper left part of the interferogram (inset) could be misinterpreted, even by InSAR experts, as ground surface deformation due to landslides or groundwater movement. However, by using multi-temporal interferograms we confirmed that the fringes were caused by changes in atmospheric moisture associated with an intense weather event. The high-backscattering plume in the SAR intensity image (Fig. 1.12c, upper right) indicates that the water surface near these fringes was extremely turbulent at the time, providing additional evidence that the fringes are due to a strong weather event. In short, Figs. 1.9–1.12 demonstrate that tropospheric delay artifacts can manifest as both long- and short-wavelength signals, sometimes correlate with topographic relief, and can mimic ground surface deformation patterns. Therefore, caution is advised when interpreting fringes in cases where only a few InSAR images are available.

Fig. 1.12

a Topography-removed Envisat interferogram of Nabro volcano, Eritrea (African rift valley), from images acquired on October 31, 2008, and July 3, 2009. b Enlarged version of the interferogram showing atmospheric delay anomalies (circled) that could easily be misinterpreted as ground surface deformation due to landslides or groundwater movements. c A SAR intensity image of the same area obtained on October 31, 2008, illustrating the turbulent water surface in the area of the atmospheric fringes. The turbulent water was likely caused by a weather event that induced significant localized atmospheric delay anomalies in the InSAR image

As a general rule, multiple observations from independent interferograms for similar time intervals should be used to verify apparent surface deformation (Lu et al. 2000; Zebker et al. 1997). Because atmospheric artifacts generally do not correlate in time, multi-interferogram InSAR processing can be used to model and reduce atmospheric effects and enhance the signal-to-noise ratio of InSAR deformation signals (see Chap. 3).
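One simple and widely used multi-interferogram technique is stacking: summing several unwrapped interferograms and dividing by the total time span, so that steady deformation accumulates with time while temporally uncorrelated atmospheric phase averages down roughly as 1/√n. A minimal sketch, with our own function name and units, follows.

```python
import numpy as np

def stack_rate(ifgs, spans_yr, wavelength):
    """Estimate a mean LOS deformation rate by stacking interferograms.

    ifgs       : array (n, rows, cols) of unwrapped phases (radians)
    spans_yr   : array (n,) of interferogram time spans in years
    wavelength : radar wavelength in meters

    Steady deformation accumulates in the summed phase, while atmospheric
    phase, being uncorrelated between epochs, is averaged down.
    """
    rate_phase = ifgs.sum(axis=0) / spans_yr.sum()   # rad / yr
    return rate_phase * wavelength / (4.0 * np.pi)   # m / yr along the LOS
```

Stacking assumes the deformation rate is roughly constant over the stacked interval; time-varying signals require the full time-series methods of Chap. 3.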

Other approaches to reducing the effects of atmospheric water-vapor variations on interferograms estimate or measure water-vapor concentrations directly and then remove their effects from InSAR observations. Three methods for obtaining water-vapor information for this purpose have been proposed. The first is to estimate water-vapor concentrations in the target area at the times of SAR image acquisitions using short-term predictions from operational weather models (e.g., Foster et al. 2006). Predicted atmospheric delays from the weather model are used to generate a synthetic interferogram that is subtracted from the observed interferogram, thus reducing atmospheric delay artifacts and improving the ability to identify any remaining ground deformation signal. The problem with this approach is that current weather models have much coarser resolution (a few kilometers) than InSAR measurements (tens of meters). This deficiency can be remedied to some extent by integrating weather models with high-resolution atmospheric measurements, but doing so requires intensive computation.

The second method used to reduce tropospheric anomalies in interferograms is to estimate water-vapor concentrations from continuous Global Positioning System (CGPS) observations in the target area. In this way it is possible to estimate precipitable water vapor content along the satellite-to-ground LOS with an accuracy that corresponds to 1–2 mm of surface displacement (Bevis et al. 1992; Niell et al. 2001). The spatial resolution (i.e., station spacing) of local or regional CGPS networks is typically several kilometers to tens of kilometers, which is sparse relative to the tens-of-meters resolution of SAR images. Therefore, spatial interpolations that take into account the covariance properties of CGPS zenith wet delay (ZWD) measurements and the effect of local topography are required (Jarlemark and Elgered 1998). Jarlemark and Emardson (1998) applied a topography-independent, turbulence-based method to spatially interpolate ZWD values. In a follow-up study, Emardson et al. (2003) found that the spatio-temporal average variance of water vapor content depends not only on the distance between CGPS observations, but also on the height difference between stations (i.e., topography). These and other studies led to a topography-dependent turbulence model for InSAR atmospheric correction using ZWD values from CGPS data (Li et al. 2005).
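Once a differential ZWD map has been interpolated to the SAR grid, converting it to an interferometric phase correction is straightforward. The sketch below uses the simplest possible mapping function (1/cos of the incidence angle); more rigorous mapping functions exist, and the sign convention depends on how the interferogram was formed.

```python
import numpy as np

def zwd_to_los_phase(dzwd, wavelength, inc_angle_deg):
    """Convert a differential zenith wet delay map (m) into the equivalent
    interferometric phase (radians) along the radar line of sight.

    dzwd : ZWD(first epoch) - ZWD(second epoch), interpolated from CGPS (m)
    """
    inc = np.radians(inc_angle_deg)
    los_delay = dzwd / np.cos(inc)                 # simple cosine mapping function
    return 4.0 * np.pi * los_delay / wavelength    # factor 4*pi: two-way path
```

The resulting phase map is subtracted from the interferogram before interpretation, in the same spirit as the weather-model correction described above.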

A third approach to correcting tropospheric delay anomalies in InSAR observations is to utilize water-vapor measurements from optical satellite sensors such as the Moderate Resolution Imaging Spectroradiometer (MODIS), the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), and the European Medium Resolution Imaging Spectrometer (MERIS) (Li et al. 2003). A disadvantage of this method is the requirement for nearly simultaneous acquisitions of SAR and cloud-free optical images (rare in the Aleutians, more common elsewhere).

1.2.5 Ionospheric Artifacts

The ionosphere is the upper part of the atmosphere, extending from about 50 to 1000 km altitude but normally concentrated in a zone 250–400 km high. Energetic radiation from the Sun ionizes molecules in the ionosphere to create a mixture of free electrons, ions, and neutral gases. The electron density is controlled primarily by the intensity of solar activity, geographic location, and time of day (Meyer et al. 2006). The ionosphere is a dispersive medium, meaning that it produces frequency-dependent effects on electromagnetic waves. These effects are more severe for long-wavelength SAR images (e.g., L-band) than for short-wavelength SAR images (e.g., X-band, C-band). The dispersive nature of the ionosphere causes (a) the phase velocity of the radar carrier to be slightly higher than the speed of light in vacuum, and (b) the group velocity of the radar signal envelope to be slightly lower than the speed of light in vacuum. As a result, fluctuations in ionospheric electron density are another cause of anomalies in SAR and InSAR images (Gray et al. 2000; Mattar and Gray 2002; Meyer et al. 2006; Wegmuller et al. 2006; Meyer 2011; Jung et al. 2013). Such fluctuations, which occur over length scales of tens to hundreds of kilometers, can cause geolocation errors in SAR amplitude images, produce azimuth pixel shifts (“azimuth streaking”) that affect InSAR image correlation, and bias interferometric phase values for the affected area.
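The frequency dependence is easy to quantify: to first order, the interferometric phase advance produced by a change in total electron content (TEC) along the path scales as 1/f, which is why L-band systems are affected most. The sketch below evaluates this standard first-order expression; the function name is ours, and the factor of 4π assumes a two-way path.

```python
import math

K = 40.28          # ionospheric dispersion constant, m^3 / s^2
C = 2.998e8        # speed of light, m / s

def iono_phase_advance(dtec_tecu, freq_hz):
    """Two-way interferometric phase advance (radians) caused by a
    differential total electron content of dtec_tecu TEC units."""
    dtec = dtec_tecu * 1e16                      # 1 TECU = 1e16 electrons / m^2
    return -4.0 * math.pi * K * dtec / (C * freq_hz)   # scales as 1/f

# Phase per 1 TECU of differential TEC at L-band versus C-band:
print(iono_phase_advance(1.0, 1.27e9))    # ALOS PALSAR, ~ -13 rad (about 2 fringes)
print(iono_phase_advance(1.0, 5.405e9))   # C-band,      ~ -3 rad (about 0.5 fringe)
```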

Ionospheric influences on InSAR imagery are an active topic of investigation, and several methods have been proposed to reduce the associated artifacts (Bamler and Eineder 2005; Meyer et al. 2006; Jung et al. 2013). The range split-spectrum method, which is based on the difference in path length between two observations made at different wavelengths, is similar to the technique used to reduce ionospheric artifacts during dual-frequency GPS data processing (Rosen et al. 2011). InSAR phase contributions from surface deformation, topography, and the troposphere are non-dispersive, whereas the ionospheric effect is dispersive. As a result, the ionospheric effect can be isolated using observations at two different frequencies. The same technique has been used successfully to identify and remove ionospheric effects from GPS observations (Parkinson et al. 1996), and it holds great promise for improving InSAR results as well. For a single-frequency SAR mission, the range bandwidth must be large enough for the split-spectrum technique to work. For example, the split-spectrum method cannot achieve a desirable accuracy for correcting ionospheric artifacts in L-band ALOS PALSAR interferograms because of PALSAR’s relatively narrow range bandwidth.
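In practice, the full range bandwidth is split into a lower and an upper sub-band, an interferogram is formed and unwrapped for each, and the dispersive and non-dispersive parts are solved for pixel by pixel. A minimal sketch follows, assuming the sub-band phases have already been unwrapped; the closed-form expressions follow directly from modeling the phase as φ(f) = φ_nd·f/f0 + φ_iono·f0/f.

```python
def split_spectrum(phi_low, phi_high, f_low, f_high, f0):
    """Separate an unwrapped interferogram into non-dispersive and
    dispersive (ionospheric) parts from two range sub-band interferograms.

    phi_low, phi_high : unwrapped sub-band phases (radians), arrays or scalars
    f_low, f_high, f0 : sub-band and full-band center frequencies (Hz)
    """
    # Solving the two-equation model phi(f) = phi_nd*f/f0 + phi_iono*f0/f:
    phi_iono = (f_low * f_high) / (f0 * (f_high**2 - f_low**2)) * \
               (phi_low * f_high - phi_high * f_low)
    phi_nondisp = f0 * (phi_high * f_high - phi_low * f_low) / \
                  (f_high**2 - f_low**2)
    return phi_nondisp, phi_iono
```

Because the sub-band separation is small relative to f0, the estimates are noisy and are usually smoothed heavily before the ionospheric part is subtracted.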

A second method for mitigating ionospheric artifacts utilizes the differences in group delay and phase advance caused by the ionosphere to estimate the difference in total electron density between two SAR observations (Meyer et al. 2006). The group delay in the range direction can be obtained by estimating the local range displacement through a correlation technique, while the phase delay (in range) can be calculated from the unwrapped interferogram. This method requires highly accurate measurement of the range displacement between the two SAR images, and also that the integer ambiguity in the unwrapped interferogram be resolved by independent means (Meyer et al. 2006). Currently, the method is not sufficient to correct for ionospheric effects in L-band ALOS PALSAR interferograms due to that system’s limited range resolution.

A third method of correcting ionospheric artifacts in InSAR imagery makes use of the fact that the azimuth gradient of the ionospheric phase distortion is linearly proportional to the azimuth displacement (Meyer et al. 2006; Raucoules and de Michele 2010; Jung et al. 2013). The azimuth displacement of an InSAR image pair can be calculated using image correlation or multi-aperture InSAR techniques (see Chap. 2). This method calculates the azimuth gradient of the ionospheric phase distortion from the azimuth displacement, and then estimates the phase distortion through azimuth integration. It has been successfully implemented in several case studies (Raucoules and de Michele 2010; Jung et al. 2013). However, the method requires excellent coherence of the interferogram and assumes that any ground deformation in the SAR azimuth direction is negligible.
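Conceptually, the correction amounts to scaling the measured azimuth shifts into a phase gradient and integrating along track. The sketch below shows that integration step only; the scale factor linking shift to gradient is sensor-dependent (see Meyer et al. 2006), and the constant of integration must be fixed by independent information, so both appear here as user-supplied assumptions.

```python
import numpy as np

def iono_screen_from_azimuth_shifts(az_shift, az_spacing, scale):
    """Reconstruct an ionospheric phase screen by integrating its azimuth
    gradient, which is proportional to the measured azimuth shifts.

    az_shift   : 2-D azimuth offsets (m), rows = azimuth, cols = range
    az_spacing : azimuth pixel spacing (m)
    scale      : sensor-dependent constant linking shift to phase gradient
                 (rad per m of phase ramp per m of shift); an assumption here
    """
    grad = scale * az_shift                          # d(phi_iono)/d(azimuth)
    screen = np.cumsum(grad, axis=0) * az_spacing    # integrate along azimuth
    screen -= screen.mean()                          # integration constant unknown
    return screen
```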

At present (autumn 2013), there is not a robust, generally applicable technique for removing ionospheric artifacts from InSAR imagery, and the issue remains a popular topic of ongoing research. The practical solution for the time being is to reduce the impact of ionospheric artifacts using a multi-interferogram approach, similar to the procedure used to identify and reduce tropospheric artifacts (see Chap. 3).

Even though both ionospheric and tropospheric artifacts in InSAR imagery are, in general, spatially correlated and temporally uncorrelated, there are some notable differences between the two. First, tropospheric artifacts are non-dispersive (a C-band interferogram includes the same troposphere-induced range delay as an L-band interferogram of the same scene acquired at the same times), whereas ionospheric artifacts are more pronounced in lower-frequency (longer wavelength) SAR systems. Second, an increase in electron density in the ionosphere advances the interferogram phase, while an increase in water vapor in the troposphere delays it (Hanssen 2001; Meyer et al. 2006). Third, ionosphere-induced phase anomalies tend to span a larger area and appear more “streaked” than tropospheric artifacts (Raucoules and de Michele 2010; Meyer 2011; Jung et al. 2013). Finally, tropospheric artifacts can be topography-dependent because water vapor is concentrated at low altitudes, whereas ionospheric effects originate far above the reach of topography.