Introduction

This chapter provides a description of light detection and ranging (LiDAR) as an active remote sensing technique. LiDAR has evolved over the past seven decades, and as a result, there are many different types of LiDAR systems in use today. Systems can be classified based on the application (atmospheric, mapping, bathymetry, navigation), based on the ranging technique (time of flight, triangulation, phase difference), based on the target detection principle (scattering, fluorescence, reflection), or even based on the platform that the system is deployed on (ground based, mobile terrestrial, airborne, spaceborne, marine, submarine). There are many reference works that cover LiDAR systems from alternative viewpoints. For example, Lidar: Range-Resolved Optical Remote Sensing of the Atmosphere (Weitkamp 2005) provides an in-depth review of modern atmospheric LiDAR techniques, while Topographic Laser Ranging and Scanning: Principles and Processing (Shan and Toth 2009) provides a complete review of the main terrestrial mapping LiDAR techniques. In the context of a Handbook of Satellite Applications, this chapter provides a high-level overview of LiDAR systems with a focus on those based on spaceborne platforms and their main applications. The chapter starts with a brief historical timeline of the origins of the LiDAR technique; it is followed by a high-level technical overview of the principles of operation and the hardware that constitutes a generic LiDAR system; and it concludes with descriptions of the main applications of LiDAR technology to and from spaceborne platforms.

Origins of LiDAR Technology

What we know today as LiDAR is the result of the convergence of efforts by different scientific communities to use visible light sources and detectors to resolve technical or scientific issues. LiDAR was pioneered by atmospheric scientists in the 1930s for the determination of atmospheric density profiles, refined as a way to obtain precise and accurate measurements of distances by geodesists and surveyors in the 1940s and 1950s, and taken to interplanetary distances by physicists studying relativistic effects in the 1960s.

Early proposals for the use of high-power searchlights to study atmospheric density and composition were developed by E. G. Synge in 1930 (Synge 1930) and M. A. Tuve et al. in 1935 (Tuve et al. 1935). Early successful measurements using bistatic systems consisting of a high-intensity searchlight and a telescopic photographic station separated by baselines of 2–18 km were conducted by J. Duclaux in 1936 (Hulburt 1937), E.O. Hulburt in 1937 (Hulburt 1937), and E.A. Johnson et al. in 1939 (Johnson et al. 1939). Using long-exposure photography, the setup by Duclaux was able to trace light scattering up to a height of 3.4 km, and the experiments by Hulburt reached heights of up to 28–30 km (Hulburt 1937). The limit of these photographic techniques was set by the saturation of the photographic film and the contrast between the beam intensity and the night sky. An alternative that avoided film saturation was the method proposed by Tuve et al. and first implemented by Johnson et al., which consisted of modulating the intensity of the searchlight and using a photoelectric cell to detect the scattered radiation. The output of the photoelectric cell was amplified by an AC system tuned to the lamp modulating frequency. With this type of electric detection system, Johnson et al. were able to record light scattering to heights of 34 km (Johnson et al. 1939). These early atmospheric LiDAR experiments yielded scattering intensity information as a function of height, but were not concerned with obtaining accurate range measurements. The need to obtain accurate range (distance) measurements using light beams came from the geodetic science community.

LiDAR as a tool to determine accurate range (distance) measurements for geodetic and surveying applications originated in the late 1930s as a technique named electronic distance measurement or EDM. The development of the first EDM instrument began in 1938 when the physicist and geodesist Erik Bergstrand, of the Swedish Geographical Survey Office, began to investigate the possibilities of using a Kerr cell as an electro-optical shutter to modulate a beam of light in an attempt to better measure the speed of light. Bergstrand’s first operational instrument was reported to work in 1941 (Carter 1973). In August 1948, Bergstrand presented a paper at the meeting of the International Association of Geodesy (IAG) held in Oslo, Norway. In that paper, he explained that the process could be reversed and that by measuring the light’s time of flight and using the known speed of light, it was possible to accurately compute the distance between the light source and a retroreflector. Soon after that IAG meeting, Bergstrand licensed the distance measuring concept to the Swedish AGA (Svenska Aktiebolaget Gasaccumulator) company to develop a commercial EDM instrument. AGA produced the first EDM instrument in the early 1950s and marketed it as the Geodimeter, short for geodetic distance meter. The instrument used a Kerr cell to modulate the light and a mercury vapor lamp as the light source. Refinement of the Geodimeter by AGA continued through the 1950s and 1960s (Fernandez-Diaz 2007).

During the 1940s and 1950s, while Bergstrand was developing the EDM technique, atmospheric scientists continued to build upon the early scattering measurements by using pulsed searchlights. These pulsed light sources enabled the researchers to measure the range to the scattering particles using the time-of-flight principle rather than the original triangulation method. In the book Meteorological Instruments, published by W.E.K. Middleton and A.F. Spilhaus in 1953, the acronym LiDAR was coined for this type of time-of-flight technique (Wandinger 2005). Around the same time, a group at Princeton University led by professor R.H. Dicke, working on gravitation research, investigated a concept of using a high-density and high-altitude artificial satellite to measure slow changes in the universal gravitation constant (G) by tracking the satellite orbit using retroreflectors and pulsed searchlights (Bender et al. 1973). This concept incorporated elements of both the atmospheric and geodetic LiDAR research. However, the pulsed light sources and photodetectors available at that time made its implementation impractical. A technological breakthrough that increased the power and intensity of light beams was needed.

The breakthrough came in November 1957, when Gordon Gould, a graduate student at Columbia University, coined the acronym LASER, for light amplification by stimulated emission of radiation, and described the principal components of the laser (Taylor 2000). The conceptual invention of the laser was followed by the first successful implementation by Theodore Maiman and his colleagues at Hughes Aircraft Company, who built the first solid-state pulsed laser using a ruby rod in 1960. That same year, Ali Javan and his colleagues from Bell Laboratories succeeded in building the first gas (HeNe) laser (Javan et al. 1961). Another important advancement was the development of Q-switching for ruby lasers in 1961 by F.J. McClung and R.W. Hellwarth, which enabled the generation of short (nanoseconds) laser pulses that packed relatively large amounts of energy (McClung and Hellwarth 1962). The photons produced by a laser are from a very narrow wave band, have very similar phase and polarization, and travel nearly parallel to one another. These attributes make it relatively simple to create a highly collimated beam of light (its divergence is essentially limited by the aperture of the transmitter and the atmosphere) that yields strong returns from even very distant targets.

In May 1962, L.D. Smullin and G. Fiocco were successful in obtaining ruby laser returns from the bare lunar surface (Smullin and Fiocco 1962) and between June and July 1963 obtained atmospheric returns from heights between 60 and 140 km (Fiocco and Smullin 1963). These experiments ignited an exponential development in LiDAR technology in these fields of research. Within the following decade, atmospheric scientists had demonstrated all the basic atmospheric LiDAR techniques in use today (Wandinger 2005).

The physicists and geodesists working on relativity and gravitation obtained the first ruby laser returns from an artificial satellite (Beacon Explorer-B, Explorer 22) equipped with corner cube reflectors (retroreflectors) on October 31, 1964 (Carter 1973; McGarry and Zagwodzki 2005). This became the origin of what is currently known as satellite laser ranging or SLR, which uses LiDAR to measure ranges from ground stations to satellite-borne retroreflectors with millimeter-level precision and from which it is possible to obtain highly accurate orbits for critical satellites such as GPS, GLONASS, Galileo, Jason, ERS, and others (The International Laser Ranging Service). However, even before the Beacon Explorer was launched, scientists realized that low-orbiting satellites imposed several challenges, such as very short visibility times and Earth’s gravitational perturbations, that would limit the quality of the relativistic experiments. To overcome these limitations, they had proposed the idea of placing retroreflector arrays on the surface of the Moon, which could be used to bounce back a laser beam shot from the Earth. These lunar retroreflector arrays would yield better results than the ones obtained by Smullin and Fiocco in 1962 (Smullin and Fiocco 1962) and by Grasyuk et al. in 1964 (Bender et al. 1973), because they would produce “point” returns, with negligible time spread compared to returns from a patch of lunar topography.

On July 21, 1969, during the Apollo 11 mission, Neil Armstrong oriented and leveled the first lunar retroreflector array (LRRR) on the surface of the Moon. The first successful return signals from the LRRR were obtained on August 1, 1969, at Lick Observatory, and on August 20, 1969, at the McDonald Observatory (Alley et al. 1969). Additional retroreflector arrays were deployed on the Moon by the Apollo 14 and 15 missions, and French-built retroreflector arrays were deployed by the Soviet Lunokhod 1 and 2 rovers (Dickey et al. 1994). To this day, observatories are still bouncing laser pulses off these retroreflectors in a technique called lunar laser ranging (LLR). This has provided numerous contributions to scientific fields such as gravitational physics, relativity, astronomy, lunar science, geodesy, and geodynamics (Dickey et al. 1994).

Down on Earth, during the 1960s, there was also an exponential development of the EDM technique. In 1967, AGA introduced its Geodimeter Model 8, which was its first to use a helium–neon laser and doubled the range of the lamp units from 30 to 60 km. Meanwhile, other companies were working on laser-based EDMs with the ability to determine ranges using weak return signals from natural targets rather than from retroreflectors. Examples of these reflector-less EDMs are the instruments manufactured by Spectra Physics such as the Mark II and Mark III (Geodolite). These, or similar instruments, were used in the mid-1960s as the first airborne LiDAR profilers and even bathymetric LiDAR systems (Fernandez-Diaz 2007). As lasers with higher pulse rates were developed and scanners of different designs were added to distribute measurements over swaths of terrain, these laser profiling systems evolved into the high-resolution airborne mapping LiDAR systems operational today (Carter et al. 2007).

The first spaceborne LiDAR system was flown onboard the ANNA-1B (Army, Navy, NASA, and Air Force) satellite in 1962, which was a joint project between the agencies to test various satellite tracking techniques including interferometry, Doppler, and strobe lights (Simons 1964). ANNA-1B was equipped with two high-intensity optical beacons that when commanded produced a sequence of five flashes separated by 5.6 s. The flashes were recorded against star fields using stellar cameras (e.g., Wild BC-4 and PC-1000) at ground stations of the Minitrack Optical Tracking System (Harris et al. 1966).

The first spaceborne LiDAR based on a laser transmitter was flown during the Apollo 15 mission in July–August 1971. The Apollo 15 laser altimeter, based on a Q-switched ruby laser, was part of the metric camera system but was also capable of operating independently (Robertson and Kaula 1972). Similar laser altimeter systems were flown on the Apollo 16 and 17 missions in 1972, and their data were used, among other things, to determine the lunar shape and infer its structure (Kaula et al. 1974). Between 1972 and the 1990s, there was a hiatus in the deployment of spaceborne LiDAR systems, but since 1990, there has been a continuous progression both in terms of numbers and technological development of the deployed systems. Table 1 presents a summary of past, current, and future space-based LiDAR systems. Their principles of operation and applications are described in the following sections.

Table 1 Spaceborne LiDAR systems

High-Level Technical Overview of LiDAR

In principle, LiDAR consists of sending out optical energy, observing the interactions between the photons and the target, and measuring the distance between the emitter and the target. At the highest level, a LiDAR system consists of three main subsystems: an optical transmitter, an optical receiver/detector, and ranging/timing/control electronics. The designs of these elements vary greatly among systems and depend upon the targeted application. To help illustrate these concepts, Figs. 1 and 2 show a 3D model and optical diagram of the atmospheric scattering LiDAR (CALIOP) onboard the NASA/CNES CALIPSO satellite.

Fig. 1 3D optical model of the CALIOP LiDAR (Image courtesy of NASA)

Fig. 2 Optical diagram of the CALIOP LiDAR

The optical transmitter is composed of a light source, usually a laser system, and optical elements used to modify (focus, collimate, expand, split) the light beam. The optical detector consists of a telescopic-type instrument that collects the backscattered photons, spatial and spectral filters that discriminate the specific wavelengths intended to be detected, and an electronic photodetector that can be as simple as a photomultiplier or photodiode in the case of mapping LiDAR or as elaborate as a spectrometer in the case of fluorescence or Doppler LiDAR. If the transmitter and the detector systems share the same optical elements, i.e., the same optical transmit and receive paths, the system is considered to be monostatic. If the optical transmit and receive paths do not share elements, the system is defined as bistatic. From Figs. 1 and 2, it can be seen that CALIOP is a bistatic system, with a transmitter consisting of two independent lasers located parallel to the receiving telescope. Finally, the ranging/timing electronics enable the LiDAR to determine the distance to the target. In addition, LiDAR systems very often have mechanical, optical, or electronic scanning mechanisms that allow steering the light beam.

The design of a LiDAR system starts with the definition of the purpose or application that the system will serve. The application will dictate which interaction between light and target needs to be detected (scattering, reflection, absorption, etc.) and the most suitable ranging method. The type of interaction between light and target dictates what particular wavelengths can be used and narrows down the light sources that can be selected. From this point, it remains to select the best available photodetector to sense that light–target interaction. To aid the design process, the LiDAR equation is used, which relates the expected received signal strength with sensor parameters such as transmitted optical energy and receiver telescope area, atmospheric parameters such as transmittance and scattering probability at the operating wavelength, and operating conditions such as expected range and target cross section. The following sections provide basic descriptions of the ranging methods, the light–target interaction phenomena, and the light sources and photodetectors that enable the operation of a system. These descriptions cover material that leads to different forms of the LiDAR equation.

Ranging Methods

There are three main methods that can be used to measure the distance (range) between a LiDAR instrument and the target: optical triangulation, phase difference, and time of flight (TOF). It is also possible to employ hybrid approaches combining two of these methods. Each of these ranging methods has its own set of strengths and weaknesses and range of applicability (English et al. 2005).

Optical Triangulation

Optical triangulation was the ranging method used in the early atmospheric LiDAR experiments of the 1930s. As illustrated in Fig. 3, it is based on the geometric principle that knowing three elements of a triangle, at least one of which is a side, it is possible to determine the remaining elements. In the case of the early atmospheric LiDARs, the first known element was the separation between the searchlights and the observing station. This leg of the triangle is known as the baseline. The other two known elements of the triangle were the horizontal angles of the searchlight and photographic station.

Fig. 3 Triangulation ranging principle

Systems based on optical triangulation are ideal for short-range measurements (a few meters), yielding micrometer-level precision at high data rates. However, their accuracy depends on the relation between range and baseline distance, and it degrades rapidly with increasing range (~R²). Triangulation is also limited by its sensitivity to noise from exterior illumination sources (English et al. 2005).
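To make the geometry concrete, the following sketch solves the triangle from a known baseline and the two angles measured at its endpoints using the law of sines; the baseline and angles are illustrative assumptions, not values from the historical experiments.

```python
import math

def triangulation_range(baseline_m, angle_a_rad, angle_b_rad):
    """Range from station A to the target, given the baseline A-B and the
    interior angles measured at A and B toward the target (law of sines)."""
    angle_target = math.pi - angle_a_rad - angle_b_rad   # third angle of the triangle
    # Law of sines: range_from_A / sin(angle at B) = baseline / sin(angle at target)
    return baseline_m * math.sin(angle_b_rad) / math.sin(angle_target)

# Illustrative searchlight geometry: 10 km baseline, elevation angles of 60 and 70 degrees
baseline = 10_000.0
r = triangulation_range(baseline, math.radians(60), math.radians(70))
height = r * math.sin(math.radians(60))   # height of the scattering volume above station A
print(f"range from A: {r/1000:.1f} km, scattering height: {height/1000:.1f} km")
```

For a fixed baseline, the triangle becomes increasingly "thin" as the range grows, so a small angular error maps into a large range error; this is the geometric origin of the rapid accuracy degradation noted above.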

Phase Difference

Phase difference was the ranging method used in early geodetic EDMs such as the Geodimeter, and it is currently used in some ground-based and airborne mapping systems and on one short-range spaceborne imager. This method consists of modulating the intensity of a continuous wave (CW) laser using a superposition of sinusoidal waveforms with different spatial wavelengths. The range is determined by measuring the phase difference and the number of complete cycles between the emitted and returned laser waveforms. The main disadvantage of this method is that phase differences are not unique: there is always an unknown number of complete modulating wave cycles prior to the measured phase difference (phase ambiguity). Compared to the time-of-flight method, the phase difference method provides higher measurement rates. If there is no a priori knowledge of the range (as for geodetic systems), the maximum unambiguous range of this method is half the longest modulating wavelength, and the range resolution is a function of the highest modulating frequency and the phase difference resolution (English et al. 2005).
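A minimal sketch of the phase-difference computation is given below; the modulation frequency and measured phase are assumed values chosen only to illustrate the ambiguity problem.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_phase(phase_rad, mod_freq_hz, n_cycles=0):
    """Two-way phase-difference ranging: range = (N + dphi/2pi) * lambda_mod / 2,
    where N is the (unknown) number of whole modulation cycles."""
    wavelength_mod = C / mod_freq_hz
    return (n_cycles + phase_rad / (2 * math.pi)) * wavelength_mod / 2

f_mod = 15e6                   # 15 MHz intensity modulation -> ~20 m modulation wavelength
ambiguity = (C / f_mod) / 2    # unambiguous range is half the modulation wavelength
phase = math.radians(123.4)    # assumed measured phase difference
print(f"unambiguous range: {ambiguity:.2f} m")
for n in range(3):             # without a priori knowledge, all of these ranges are consistent
    print(f"N={n}: range = {range_from_phase(phase, f_mod, n):.3f} m")
```

Measuring the phase at several modulation frequencies resolves the unknown cycle count N, which is why instruments like the Geodimeter modulated the beam with a superposition of waveforms of different wavelengths.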

Time of Flight (TOF)

The third ranging method uses discrete pulses of light rather than continuously emitting sources. The TOF principle is the simplest, and it consists of measuring the time between when the light pulse is emitted and when a return signal is detected. This two-way travel time (time of flight) is divided in half and multiplied by the speed of light in the respective medium, yielding the range between the instrument and the target. Early LiDARs that used light from lamp sources would create light pulses using optical chopper wheels or capacitive discharge devices (flash lamps). The development of Q-switching by McClung and Hellwarth in 1961 enabled the emission of very energetic laser pulses rather than continuous wave beams. However, even though these pulses last for a relatively short time, generally on the order of a few nanoseconds, at the high speed that light travels, this translates into several centimeters in length (e.g., 1 ns = 30 cm). In order to obtain sub-centimeter accuracy, the recording and analysis of the entire emitted and return waveform must be performed, or a specialized electronic circuit called a constant fraction discriminator (CFD) can be used on the fly to precisely time a specific point on the waveform (generally the half point of the pulse amplitude at its leading edge). Systems that range to specially designed retroreflectors may use mode-locked lasers, which produce very narrow pulses picoseconds in width.
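A short sketch of the time-of-flight conversion, with illustrative numbers, shows why nanosecond-level timing matters:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range(two_way_time_s, c=C):
    """Range = (two-way travel time / 2) * speed of light in the medium."""
    return 0.5 * two_way_time_s * c

# An assumed satellite altimeter geometry: a target roughly 600 km below the instrument
print(f"range: {tof_range(4.003e-3)/1000:.1f} km")        # ~4 ms round trip
# Timing resolution translates directly into range resolution:
print(f"1 ns of timing error ~ {tof_range(1e-9)*100:.0f} cm of range error")
```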

TOF is the most common ranging method in modern LiDAR, because it provides unambiguous range measurements of distances limited only by the dispersion of the laser energy and the sensitivity of the detector. However, the TOF approach is limited in data collection rate by the pulse repetition frequency (PRF), which is the number of laser pulses that can be emitted per second.

Hybrid Systems

Hybrid systems use two of the above ranging methods, combining the unique capabilities of each to overcome the limitations of a single method. For instance, a hybrid system that employs the triangulation and TOF methods can exploit the advantages of TOF at long ranges and the accuracy and speed of a triangulation system at short ranges (English et al. 2005).

Light–Target Interaction Phenomena

Recall that the “D” in LiDAR stands for detection, the detection of return optical energy backscattered from the target. Detection of a target is possible because there is an interaction between the emitted light energy and the target. There are several types of interactions, which usually depend on the relative size of the target and the wavelength of the radiation. The main interactions between light and matter employed by LiDAR technology are described next.

Scattering

Scattering is the physical phenomenon that occurs when electromagnetic radiation changes its original direction of travel due to interactions with matter in the form of atoms or molecules (Fig. 4). If the photon interacts with only one particle, the process is called single scattering. If the photon is scattered several times by different particles, the process is called multiple scattering. These interactions between matter and radiation can occur with or without an apparent transfer of energy. In elastic scattering, the photons maintain their wavelength, thus conserving energy. Examples of elastic scattering include Rayleigh and Mie scattering. Inelastic scattering occurs when part of the photon energy is transferred to the scattering particle, thus changing the photon’s wavelength. Examples of inelastic scattering include Raman and Brillouin scattering. Based on the relative size of the scattering centers with respect to the wavelength of the radiation, scattering can be classified as Rayleigh scattering when the particles are small compared to the wavelength, Mie scattering when the particle size and radiation wavelength are roughly of the same order of magnitude, and geometric scattering when the particles are much larger than the wavelength.
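This size-based classification is often expressed through the dimensionless size parameter x = 2πr/λ. The sketch below applies conventional rule-of-thumb thresholds (the exact boundary values are approximate assumptions, not sharp physical limits) to a few representative targets.

```python
import math

def scattering_regime(particle_radius_m, wavelength_m):
    """Classify the scattering regime from the size parameter x = 2*pi*r / lambda."""
    x = 2 * math.pi * particle_radius_m / wavelength_m
    if x < 0.1:
        return x, "Rayleigh (particle much smaller than the wavelength)"
    elif x < 50:
        return x, "Mie (particle comparable to the wavelength)"
    return x, "geometric (particle much larger than the wavelength)"

wavelength = 532e-9  # a green laser line used by several atmospheric LiDARs
for name, radius in [("air molecule", 0.2e-9), ("aerosol", 0.5e-6), ("terrain feature", 0.1)]:
    x, regime = scattering_regime(radius, wavelength)
    print(f"{name:15s} x = {x:10.3g} -> {regime}")
```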

Fig. 4 Photon and matter interaction – scattering

The backscatter component is the radiation that changes direction by approximately 180°, i.e., reverses direction (Fig. 4). Radars and LiDARs detect the backscatter component of the radiation that was emitted. In atmospheric LiDARs, Mie scattering is used to detect aerosols in the troposphere, while Rayleigh scattering is used to detect molecules in the stratosphere and mesosphere. Mapping LiDARs are based on geometric scattering as the targets are much larger than the optical wavelengths.

Reflection

Reflection is a particular type of geometric scattering that follows specific geometric relationships. There are two limiting theoretical models for reflective surfaces: a specular reflector is one from which incident radiation is reflected in a single direction (like a mirror) following the law of reflection, and a Lambertian reflector spreads the reflection over a wider pattern (Fig. 5). These are two limiting cases, and the actual reflection from most surfaces will fall between these models. Mapping LiDAR detects reflected radiation from varied targets such as the solid rough surface of a planet (Lambertian behavior), diffuse targets like a forest canopy, or mirror-like surfaces such as a calm lake (specular behavior). Examples of LiDARs based on specular reflection are the systems used for satellite or lunar laser ranging (SLR and LLR). To achieve extremely long ranges and millimeter-level accuracy, corner cube reflectors (retroreflectors) are used to reflect the laser beam in almost exactly the opposite direction (within a few seconds of arc) from which it was emitted.

Fig. 5 Specular and Lambertian reflection patterns

Absorption

Absorption is another possible result of the interaction of electromagnetic radiation and matter. For a photon to be absorbed, it has to be of a particular wavelength or energy, and because of the principle of conservation of energy, the absorption causes a change in the energy state of the atom or molecule by either an electronic, vibrational, or rotational transition. Differential absorption LiDAR (DIAL) systems compare the received backscattered signal for two or more different laser wavelengths to determine the differential molecular absorption coefficients. If the differential absorption cross sections for each wavelength are known, the concentration of the gas atoms or molecules can be directly deduced. Atmospheric constituents that can be detected by DIAL include ozone and water vapor. DIAL can also be used for industrial emission monitoring and forest fire detection.
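A hedged sketch of the standard two-wavelength DIAL retrieval over a single range cell is given below; the powers, differential cross section, and cell length are assumed numbers for illustration only, not values from any specific instrument.

```python
import math

def dial_number_density(p_on_r1, p_on_r2, p_off_r1, p_off_r2,
                        delta_sigma_m2, delta_range_m):
    """Mean molecular number density in the range cell [R1, R2] from the on-line
    and off-line backscatter powers (standard two-wavelength DIAL form).
    Only power ratios enter, so system constants cancel out."""
    ratio = (p_on_r1 * p_off_r2) / (p_on_r2 * p_off_r1)
    return math.log(ratio) / (2.0 * delta_sigma_m2 * delta_range_m)

# Illustrative values: a weakly absorbing trace gas over a 500 m range cell
n = dial_number_density(p_on_r1=1.00, p_on_r2=0.80,    # on-line powers at R1, R2 (arbitrary units)
                        p_off_r1=1.00, p_off_r2=0.90,  # off-line powers at R1, R2
                        delta_sigma_m2=2.0e-24,        # assumed differential absorption cross section
                        delta_range_m=500.0)
print(f"mean number density in the cell: {n:.3g} molecules/m^3")
```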

Fluorescence

Fluorescence occurs when a molecule absorbs a photon and after a determined period of time emits another photon of the same or longer wavelength. It is considered resonance fluorescence when the emitted photons have the same wavelength as the absorbed photon and normal fluorescence when the emitted photons have longer wavelengths (lower energy). The process of normal fluorescence occurs in three stages: the excitation of the molecule by the incoming photon, which happens on a timescale of femtoseconds (10⁻¹⁵ s); vibrational relaxation, which brings the molecule to a lower excited state and occurs on a timescale of picoseconds (10⁻¹² s); and the emission of a longer wavelength photon and return of the molecule to the ground state, which occurs over a relatively long period of nanoseconds (10⁻⁹ s). Fluorescence LiDAR usually emits ultraviolet radiation and observes the reemission of photons in the visible range with a spectrometer detector, which records the relative emission at different wavelengths. Applications of fluorescence LiDAR include vegetation studies and the detection of pollutants. For instance, minute amounts of oil in water can be detected because of the UV fluorescence properties of hydrocarbons.

Doppler

The Doppler effect consists of an apparent shift in frequency or wavelength of waves (sound or electromagnetic) as a result of the relative motion between the emitter and the observer. These relative motions can be due to movement of emitter, observer, or medium (in the case of sound waves) or even the simultaneous motion of all three of them. If the relative motion makes the emitter and observer become closer, the wavelength of the wave will appear to get shorter (blue shift), whereas if the distance becomes larger, the wavelength will appear to get longer (red shift). In addition to the well-known frequency shift, the Doppler effect also causes the broadening of spectral line features in a process that is temperature dependent. Turbulence and winds are manifestations of the collective motion of the atmospheric molecules and particles. Light scattered along the line of sight (LOS) of the propagating laser beam will experience Doppler shifts and linewidth broadening due to the relative motion of the atmospheric elements with respect to the LiDAR system and due to changes in atmospheric temperature. Thus, Doppler LiDAR is applied to determine air temperature, wind speeds, and directions. The Doppler shifts are proportional to the ratio of wind speed and the speed of light as

\( \Delta \lambda =-{\lambda}_0\frac{v \cos \left(\theta \right)}{c} \) where Δλ is the wavelength shift, \( \lambda_0 \) is the reference or emitted wavelength, c is the speed of light, and \( v \cos \left(\theta \right) \) is the wind speed component along the LOS. The spectral linewidth broadening is given by

\( {\sigma}_{\nu }=\frac{1}{\lambda_0}\sqrt{\frac{k_B T}{m}} \) where \( \sigma_{\nu} \) is the Doppler-broadened linewidth (expressed in frequency), \( k_B \) is the Boltzmann constant, T is the particle temperature, and m is the particle mass.
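The two expressions above can be evaluated directly; the sketch below uses assumed values (a 10 m/s line-of-sight wind, molecular nitrogen at 250 K, a 1,064 nm transmitter) purely to illustrate the magnitudes involved.

```python
import math

C = 299_792_458.0    # speed of light, m/s
K_B = 1.380649e-23   # Boltzmann constant, J/K

lambda0 = 1064e-9    # emitted wavelength, m
v_los = 10.0         # assumed wind component along the line of sight, m/s

# Doppler wavelength shift (one-way form used in the text above) and the equivalent frequency shift
delta_lambda = -lambda0 * v_los / C
delta_freq = C / lambda0**2 * abs(delta_lambda)

# Thermal (Doppler) linewidth broadening for N2 molecules at 250 K
m_n2 = 28.0 * 1.66054e-27   # molecular mass of N2, kg
T = 250.0                   # assumed temperature, K
sigma_freq = math.sqrt(K_B * T / m_n2) / lambda0

print(f"wind-induced shift : {delta_freq/1e6:.1f} MHz")
print(f"thermal broadening : {sigma_freq/1e6:.0f} MHz")
```

With these assumed values the molecular return is far broader than the wind-induced shift, which is one reason Doppler wind LiDARs rely on very fine spectral discrimination or on the narrower aerosol (Mie) return.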

There are two main ways for measuring the Doppler shift and linewidth broadening using LiDAR: direct detection and coherent (heterodyne) detection (Wandinger 2005). In direct detection Doppler LiDAR, the wavelength shift is determined by a spectrometer instrument which employs narrowband spectral filters and measures the backscattered radiation at each band. Coherent Doppler LiDAR is based on the emission of modulated pulses of single-mode single-frequency laser radiation. The detected backscattered signal is mixed with the signal of a local oscillator, and by detecting the beat frequency, the frequency shift is determined. To determine the sign of the shift, a frequency offset is introduced between the emitted pulse and the local oscillator.

Depolarization

Depolarization is not a LiDAR detection technique per se; however, because the polarization of the laser radiation emitted by a LiDAR is well known, it is possible to measure how much radiation is backscattered with the same polarization and how much at a perpendicular polarization. In atmospheric LiDARs, depolarization provides information about the nature of the scattering particles, as Mie scattering theory indicates that depolarization is caused by nonspherical scatterers. In mapping LiDAR, depolarization can be used to characterize surface roughness.

Light Sources

A light source is a basic part of a LiDAR system. During the early days of LiDAR experimentation, the light sources were mercury or sodium vapor lamps. Currently, the light source will most likely be a laser. Laser is an acronym for light amplification by stimulated emission of radiation. Traditional lasers consist of an optical resonator which contains an optical gain medium. This gain medium, or lasing material, is pumped with optical or electrical energy (as in semiconductor lasers), exciting the electrons in the lasing material to a higher, nonequilibrium level; stimulated emission occurs when an interacting photon causes an electron to drop from the higher level to its ground state, releasing an additional photon at the same wavelength as the interacting photon. If this stimulated emission builds up within the optical resonator to a point where the gain of the process overcomes the cavity losses at a given resonant mode, then lasing is achieved, and a relatively coherent beam of light will be emitted. Coherence refers to the laser beam’s spatial and spectral characteristics; a perfectly coherent laser beam will travel in a single direction (spatial coherence), and its photons would be of a single wavelength, polarization, and phase (spectral coherence). In the real world, lasers are not 100 % coherent: they can emit light from several modes at different wavelengths at the same time, not necessarily with the same polarization, and their beam can diverge beyond the diffraction limit. However, most lasers used in LiDARs are built to be single mode and diffraction limited. Besides the traditional electronic population inversion lasing method, it is possible to generate laser light through other processes such as relativistic free electron beams and by modifying the vibrational and rotational modes of oscillation of molecules. Lasers can produce light not only in the visible spectrum but also in other regions of the spectrum including the infrared, the ultraviolet, and the X-ray regions.

Lasers can be classified based on the lasing medium as solid-state, liquid, and gas lasers. Examples of solid-state lasers include those based on crystalline paramagnetic ions, glass, solid dyes, semiconductors, polymers, and excimers. Liquid lasers can be based on organic dyes, rare earth liquids, polymers, and excimers. Gas lasers include neutral atoms, ionized gases, and molecular gases (Weber 2001). One of the most common lasers used in LiDAR technology is based on the solid-state crystal: neodymium-doped yttrium aluminum garnet (Nd:YAG), which lases at 1,064 nm.

Based on their modes of operation, lasers can be classified as continuous wave lasers if their output power is constant over time (although the intensity of the beam can be modulated) and pulsed lasers if the optical energy is released in sudden bursts. Laser pulses packing a relatively high amount of energy, compared to continuous operation, can be obtained through the Q-switching technique. Pulses obtained through Q-switching are typically in the range of hundreds of picoseconds to tens of nanoseconds in length. Extremely short pulses in the picosecond to femtosecond range containing very little energy can be created using the mode-locking technique.
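The difference between these pulse regimes is easiest to see through peak power, i.e., pulse energy divided by pulse duration; the energies and durations below are representative orders of magnitude assumed for illustration, not the specification of any particular laser.

```python
def peak_power_w(pulse_energy_j, pulse_width_s):
    """Approximate peak power of a laser pulse (energy spread evenly over its duration)."""
    return pulse_energy_j / pulse_width_s

# Representative orders of magnitude (illustrative assumptions):
q_switched = peak_power_w(100e-3, 10e-9)    # ~100 mJ released in ~10 ns
mode_locked = peak_power_w(1e-9, 1e-12)     # ~1 nJ released in ~1 ps
print(f"Q-switched : {q_switched/1e6:.0f} MW peak")
print(f"mode-locked: {mode_locked/1e3:.0f} kW peak")
```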

High Signal-to-Noise Ratio (SNR) and Photon-Counting Detectors

The optical backscattered signal resulting from the interaction between the radiation and the target needs to be detected by the LiDAR system. For this purpose, many different types of photodetectors can be employed, including PN, PIN, and avalanche photodiodes and photomultiplier tubes. The selection of the photodetector is a crucial aspect of the design of a LiDAR system (Kaufmann 2005); factors that must be taken into account in this process are the wavelength, the magnitude (signal strength) and magnitude range (dynamic range) of the radiation to be detected, and the speed at which it needs to be detected. Generic characteristics of photodetectors include their wavelength band of operation (spectral response), their sensitivity (how much electric signal is produced per unit of detected radiation), their noise characteristics (how much electric signal is produced even when no radiation is incident on the detector), response speed (ability to detect distinct events separated by short times), active area, number of elements (single element vs. array of detecting elements), and operating voltage and power consumption.

Independent of the type of photon detector used, there are two main modes of operation depending on the magnitude of the detected signal: high signal-to-noise ratio (SNR) or analog detection and low SNR, also called photon counting or digital detection (Hamamatsu Corporation 2005). In high SNR LiDAR systems, the magnitude of the detected signal is many times larger than the general background noise, including scattered solar radiation and artificial lighting, and the detector thermal noise. High SNR is typical of short-range, high-power systems such as mapping and elastic backscattering LiDARs. In the low SNR domain, the magnitude of the detected signal is very close to the noise level, and in some cases, the detector responds to the excitation of single photon events, and this is why it is also called photon counting. Photon counting is used in extremely long-range systems such as SLR and LLR, for systems where the interaction between the radiation and matter is particularly weak such as in Raman LiDAR or high atmosphere Rayleigh scattering and resonant fluorescence LiDARs (Abshire et al. 2005; Whiteway et al. 2008), water penetrating (bathymetric) LiDAR, and low-power multichannel systems (Cossio et al. 2010).
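The boundary between analog and photon-counting operation can be judged by converting an expected received pulse energy into a photoelectron count; this is a rough sketch with assumed numbers and an assumed quantum efficiency, not a detector design.

```python
H = 6.62607015e-34   # Planck constant, J*s
C = 299_792_458.0    # speed of light, m/s

def detected_photons(received_energy_j, wavelength_m, quantum_efficiency=0.2):
    """Expected number of photoelectrons produced by a given received pulse energy."""
    photon_energy = H * C / wavelength_m
    return quantum_efficiency * received_energy_j / photon_energy

# Illustrative cases at 532 nm:
strong_return = detected_photons(1e-12, 532e-9)   # ~1 pJ back from a nearby hard target
weak_return = detected_photons(1e-18, 532e-9)     # ~1 aJ back from a distant or weak scatterer
print(f"strong return: ~{strong_return:.2e} photoelectrons -> analog (high SNR) regime")
print(f"weak return  : ~{weak_return:.2f} photoelectrons -> photon-counting regime")
```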

The LiDAR Equation

The LiDAR equation is a mathematical formulation that provides an estimate of the received optical signal strength by a system as a function of instrument parameters, atmospheric phenomena, and detection range. The LiDAR equation is used to design systems and to evaluate the performance of existing systems, and it is inverted to determine atmospheric properties from real observations. There are many versions of the equation depending on the type of system it describes. In its most generic form, it is (Wandinger 2005)

$$ {P}_r(R)={K}_s\times G(R)\times {T}^2(R)\times \beta (R) $$

where \( P_r(R) \) is the received power as a function of the range, \( K_s \) is a constant factor dependent upon system parameters such as transmit power and optical efficiency, G(R) is a factor that depends on the geometry of the observation as a function of the range, T(R) is the propagation medium transmission factor, and β(R) is a factor that describes the target backscattering properties. Each of these factors can be expanded and/or adjusted to account for the specifics of each system and application.

For instance, the LiDAR equation for elastic backscattering atmospheric LiDAR, where the targets are atmospheric constituents (atoms or molecules), can be expanded as (Wandinger 2005)

$$ {P}_r(R)=\left[\frac{P_0\,\eta\, c\,\tau\, A}{2}\right]\times \left[\frac{O(R)}{R^2}\right]\times {\left[{e}^{-\int_{0}^{R}\alpha (r)\, dr}\right]}^2\times \beta (R) $$

where \( P_0 \) is the emitted laser power (pulse energy/pulse length), η is the optical efficiency of the system, c is the speed of light in the transmission medium, τ is the laser pulse width, A is the receiving telescope area, O(R) is the fractional overlap area collected by the receiver, and α is the extinction coefficient. In this case, both the atmospheric transmission and scattering coefficient are the properties under study. The scattering coefficient indicates the probability that a photon will be backscattered. The atmospheric transmittance is the exponential integration of the extinction coefficient which is proportional to the amount of scattering material in the atmosphere; it can also be considered as the effective cross-sectional area of particulates per unit volume. The combined expression cτA is considered the scattering volume, which when multiplied by the scattering coefficient β(R) yields the scattering cross section.
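The expanded equation can be evaluated directly. The sketch below assumes a uniform atmosphere (constant extinction and backscatter coefficients) and full overlap, O(R) = 1, so the numbers illustrate the R⁻² and two-way transmission behavior rather than any real instrument; all parameter values are assumptions.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def elastic_lidar_power(range_m, p0_w, eta, tau_s, area_m2,
                        alpha_per_m, beta_per_m_sr, overlap=1.0):
    """Received power for an elastic backscatter LiDAR, assuming a uniform
    atmosphere (constant extinction alpha and backscatter beta) and a given overlap."""
    system = p0_w * eta * C * tau_s * area_m2 / 2.0
    geometry = overlap / range_m**2
    transmission = math.exp(-alpha_per_m * range_m) ** 2   # two-way transmittance
    return system * geometry * transmission * beta_per_m_sr

# Illustrative parameters (assumed, not a real instrument): 100 mJ pulse in 10 ns,
# 50 % optical efficiency, 1 m^2 telescope, weakly scattering atmosphere.
params = dict(p0_w=1e7, eta=0.5, tau_s=10e-9, area_m2=1.0,
              alpha_per_m=1e-5, beta_per_m_sr=1e-6)
for r_km in (1, 5, 10, 20):
    p = elastic_lidar_power(r_km * 1000.0, **params)
    print(f"R = {r_km:2d} km -> received power ~ {p:.3e} W")
```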

For an altimetry or mapping LiDAR, the equation can be expanded as (Bufton 1989)

$$ {P}_r(R)=\left[{P}_0\,\eta\, A\right]\times \left[\frac{1}{R^2}\right]\times {\left[{e}^{-\int_{0}^{R}\alpha (r)\, dr}\right]}^2\times \left[\frac{\rho }{\varOmega}\right] $$

where ρ/Ω is the target backscatter or reflectance per solid angle.

These equations can be expanded even further to account for each interaction that affects the laser beam along its two-way travel from the transmitter to the receiver and, as stated before, need to be adjusted for the particular type of LiDAR system and application.

Comparison of LiDAR to Other Forms of Remote Sensing

Having described LiDAR technology and principles of operation, it is convenient to compare this active optical detection technique against other forms of remote sensing. It is important to remember that every remote sensing technique has its strengths and limitations, and it is crucial to understand the relative advantages and intrinsic limitations of different techniques to determine which is the most appropriate for a given application. The next two sections compare active versus passive remote sensing techniques and LiDAR versus radar.

Advantages and Disadvantages of Active Remote Sensing

Having control of the illumination source creates several advantages for active remote sensing (LiDAR and Radar) over passive techniques. The first advantage is that active systems are independent of day/night conditions. This is particularly true for Radar systems. However, certain types of LiDAR units work better under night conditions, and some can only work at night. Long-wavelength radars (>10 cm) are also independent of weather conditions and can work through clouds and rain.

With passive remote sensing techniques such as multispectral and hyperspectral imaging, most of what can be inferred from the target has to do with the amplitude of the detected signal (relative or absolute reflectance). With active systems, there is full knowledge and sometimes control of the parameters of the illumination signal: amplitude, frequency, phase, and polarization. This control allows researchers to study the effect that the target has on all the parameters of the emitted radiation enabling a more complete characterization of the target. The use of phase information makes it possible to accurately measure sub-wavelength scale changes in ranges, which is applied in deformation mapping using InSAR or millimeter-level ranging with LiDAR. Measuring the change in polarization (depolarization) enables the geometric characterization of the target; it is used in atmospheric LiDAR to determine if the scatterers are spherical or not and in polarimetric SAR to determine the orientation and location of the scattering sources.

Finally, measurements of perceived changes in frequency or wavelength allow the use of Doppler techniques to determine the relative speed of the target moving along the line of sight (LoS) of the LiDAR or Radar. A parameter of the illuminating signal for which there is almost full control is the power (limited by the maximum power output of the source) which can generally be adjusted to a level that optimizes the signal-to-noise ratio (SNR) of the detected return, thereby reducing the sensitivity to background and detector noise compared to passive remote sensing techniques.

Despite the many advantages of the active remote sensing technique, there are some disadvantages with respect to the passive techniques. The main disadvantage is that active sources can only sample relatively small areas at a given time, and to increase the spatial resolution, it is often necessary to reduce the extent of the study area. An additional disadvantage is that active sensors provide very little spectral information, limited to a few wavelengths compared to the hundreds of channels that can be studied with a hyperspectral system.

LiDAR Versus Radar

To compare LiDAR and Radar remote sensing, a good starting point is their respective operational wavelengths. Most operational radars work at wavelengths between 2 and 30 cm (10⁻² m), while LiDARs operate between 300 and 2,000 nm (10⁻⁹ m). On average, this is a five order-of-magnitude difference, and this has many implications for remote sensing applications. The first implication has to do with the interaction between radiation and matter. As explained earlier, scattering is a process determined by the relative size of the particles and the wavelength. In the case of atmospheric constituents, their size is comparable to the wavelengths in the optical range, and this is why it is possible to study atmospheric scattering with LiDAR. It is also possible to measure Doppler shifts and broadening from optical radiation scattered by moving atmospheric particles, which in turn allows for the remote determination of wind velocities and temperature profiles using LiDAR. The Radar wavelengths, on the other hand, are much larger than atmospheric particles and are not affected by atmospheric atomic and molecular constituents. However, short-wavelength (<10 cm) Doppler radar is sensitive to much larger water drops and ice crystals.

Besides the scattering interaction, there is also the possibility of absorption and atmospheric extinction, which is the depletion of transmitted radiation caused by the combination of scattering and absorption. Atmospheric transmission is complementary to extinction. The Earth’s atmosphere is practically transparent to radio waves, but it is relatively opaque in certain optical bands. This is of crucial importance for remote sensing applications from satellite platforms for which the electromagnetic radiation to be detected needs to travel through the Earth’s atmosphere. Therefore, the bands of operation of spaceborne sensors are selected taking into consideration the transparency of the atmosphere. The atmosphere’s transparency at radio wavelengths allows Radar to operate under most weather conditions, which combined with its day and night operability provides a significant advantage over other forms of remote sensing. However, absorption is not entirely an undesirable phenomenon. Absorption at specific wavelengths due to atmospheric molecules is the principle used by differential absorption LiDAR (DIAL) to detect and measure the concentrations of molecules such as ozone and water vapor in the atmosphere.

A final aspect to consider in the comparison between radar and LiDAR is the divergence or spread of a Radar or laser beam. The divergence also relates to the angular resolution of a remote sensing system. Divergence is determined by diffraction at the output aperture from which optical or radio energy is emitted. The Rayleigh criterion provides an estimate of the angular resolution of optical imaging systems or the beam divergence of active systems as

$$ \sin \left(\theta \right)=1.220\frac{\lambda }{D} $$

where θ is the angular resolution or beam divergence in radians, λ is the radiation’s wavelength, and D is the diameter of the aperture (lens or antenna). Considering an average optical wavelength of 1,064 nm and a modest aperture of 1 cm, the diffraction-limited divergence of a laser beam is then 0.13 mrad (130 μrad). For a radio wave at an average wavelength of 10 cm and with an antenna 10 m in diameter, the divergence of the radio beam is 12.2 mrad, almost 100 times wider than the laser beam. In order to have the same divergence as the optical beam, the antenna would have to be almost 940 m in diameter. To overcome this limitation, the synthetic aperture radar (SAR) technique was developed to electronically synthesize a virtual antenna many times larger than the physical antenna, based on the platform motion. The smaller divergence of laser beams implies smaller footprints and better angular and spatial resolutions for LiDARs as compared to Radar.
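The numbers quoted above follow directly from the Rayleigh criterion, as the short check below shows; the apertures and wavelengths are the same illustrative values used in the text.

```python
import math

def divergence_rad(wavelength_m, aperture_m):
    """Diffraction-limited beam divergence / angular resolution (Rayleigh criterion)."""
    return math.asin(1.220 * wavelength_m / aperture_m)

laser = divergence_rad(1064e-9, 0.01)   # 1,064 nm beam from a 1 cm aperture
radar = divergence_rad(0.10, 10.0)      # 10 cm radio wave from a 10 m antenna
print(f"laser divergence: {laser*1e3:.2f} mrad ({laser*1e6:.0f} microrad)")
print(f"radar divergence: {radar*1e3:.1f} mrad")
print(f"antenna needed to match the laser: {1.220*0.10/math.sin(laser):.0f} m")
```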

The trade-off for the higher resolution due to smaller footprints is that LiDARs generally provide smaller spatial coverage. In addition, current spaceborne LiDAR systems for atmospheric and mapping applications operate in single-beam profiling mode, which means that the sampling is performed along a single line with no scanning capabilities. On the other hand, spaceborne Radar systems have multiple beams and the capability to electronically steer the beams in a direction perpendicular to the direction of flight. The larger footprints and scanning capabilities of radar systems allow for larger spatial coverage and a better temporal resolution.

Satellite LiDAR Applications

Geodetic and Geodynamic Applications

Geodesy is the study of the shape, size, orientation, motion, and gravity of the Earth; it also includes the establishment of coordinate reference systems used to uniquely describe the location of any point on the Earth. Geodesy is the discipline that enables many current satellite applications, such as satellite-aided navigation (GPS, GLONASS, and Galileo) and satellite remote sensing mapping, by establishing the geodetic frame of reference on which these systems operate.

The first geodetic observation is credited to Eratosthenes, a Greek philosopher who lived in the third century BC and who concluded that the Earth had a spherical shape and estimated its size. Over the centuries, geodetic instruments and techniques have evolved, but the need to measure angles, distances, and time to determine geographic coordinates and the Earth’s parameters has not changed. This need for accurate distance and time measurements led geodesists to develop electronic distance measurement (EDM) technologies, one of which evolved into modern-day ranging LiDAR. Also, for centuries, geodesists have been performing astronomical observations to derive coordinates and distances between remote stations. They realized that this could also be done by observing man-made airborne objects, and so as technology matured, they started using balloons, airplanes, rockets, and eventually satellites as targets. So it is not surprising that the first spaceborne application of LiDAR technology was developed for geodetic studies.

This was achieved by leaving the active LiDAR equipment (laser transmitter and optical detector) on the ground (Fig. 6) and installing passive elements (retroreflector arrays) on satellites (Fig. 7). This architecture has many advantages, the main one being that the ground-segment technology can continue to improve after the satellite is integrated and launched. Also, the spacecraft infrastructure, being passive, does not require power or maintenance and typically has an extremely long lifetime. The long lifetimes and large number of satellites carrying retroreflectors have allowed the accumulation of over four decades of ranging data.

Fig. 6 NASA MOBLAS-7 mobile SLR system circa 1980 (Courtesy of NASA)

Fig. 7 Laser retroreflector array on COMPASS satellites (Courtesy of Shanghai Astronomical Observatory, Chinese Academy of Sciences)

The first geodetic satellite tracked by LiDAR was the ANNA-1B, launched on October 31, 1962. ANNA-1B carried equipment to test three different satellite tracking techniques, one of which was the use of high-intensity optical beacons (Simons 1964). The beacons operated on command and produced a sequence of five flashes separated by 5.6 s. These flashes were recorded using long-exposure photography; simultaneous observations from different stations allowed the determination of the satellite position (Harris and Berbert 1966). The first geodetic satellite that carried a retroreflector array was the Beacon Explorer-B (designated as the Explorer 22) (Degnan et al. 1994). The Explorer-B was launched on October 9, 1964; it was a 116-pound satellite that in addition to the retroreflector also carried a radio beacon. The satellite was tracked from stations around the world using both radio and LiDAR technology, although the radio equipment was much cheaper than the optical equipment. Because the satellite was magnetically stabilized, the retroreflectors were oriented in such a way that it was only possible to track the satellite from stations in the Northern Hemisphere.

The first laser tracking of the Explorer 22 was carried out on October 31, 1964, by a team from NASA’s Goddard Space Flight Center (GSFC). This was the origin of a geodetic LiDAR technique named satellite laser ranging (SLR). The Explorer 22 was soon joined by more satellites carrying corner cube retroreflectors, including more satellites of the Explorer series: Explorer 27 (launched on April 29, 1965), Explorer 29, also known as GEOS 1 (launched on November 6, 1965), and Explorer 36, or GEOS 2 (launched on January 11, 1968). The Centre National d’Etudes Spatiales (CNES) of France also contributed to SLR by launching a pair of geodetic satellites, the Diadème-1 D1C (February 08, 1967) and the Diadème-2 D1D (February 15, 1967), equipped with dual-frequency Doppler transmitters and retroreflector arrays. The first international SLR campaign occurred in the spring of 1967 with the participation of five laser stations: three operated by CNES and located in France, Algeria, and Greece, one station operated by NASA in Maryland, and one operated by the Smithsonian Astrophysical Observatory (SAO) in New Mexico. Data from this campaign were used to compare SLR to traditional optical observations, and an improvement by a factor of 4 in the accuracy of determined positions was estimated; however, most important was the development of the SAO Standard Earth gravity model (Degnan et al. 1994).

This first international SLR campaign with stations spread across the world helps illustrate the mode of operation of this geodetic LiDAR technique. As shown in Fig. 8, a single satellite can be tracked simultaneously from stations separated by a few meters up to thousands of kilometers, and using triangulation, it is possible to determine the baselines between the stations. Observations from SLR stations are enhanced by colocation with other global space geodetic techniques such as very long baseline interferometry (VLBI), global navigation satellite systems (GNSS), and Doppler orbitography and radiopositioning integrated by satellite (DORIS).

Fig. 8 Simultaneous SLR from three stations at the Goddard Geophysical and Astronomical Observatory (Image courtesy of NASA)

The early geodetic satellites were not optimal for geodesy and relativistic applications because they were launched into low orbits and because they carried a variety of instruments which enlarged their cross section and lowered their density. The satellites’ low orbits and low density limited the visibility times and increased their susceptibility to gravitational perturbations, while the large cross section made them susceptible to atmospheric drag, radiation pressure, and other nonconservative forces. To overcome these limitations, the ultimate Earth satellite, the Moon, was equipped with retroreflectors. As early as 1962, J.E. Faller had proposed the idea of placing a retroreflector on the surface of the Moon, and in 1965, the lunar ranging experiment (LURE) multi-institutional team was formed. Between 1969 and 1973, a total of five retroreflectors were placed on the Moon, three of them by manned Apollo missions (11, 14, and 15) and two French-built retroreflectors carried by the Russian lunar rovers Lunokhod 1 and 2 (Luna 17 and 21 missions) (Bender et al. 1973). These lunar retroreflectors made it possible to range to and track the Moon from stations around the world using a LiDAR technique called lunar laser ranging (LLR).

As a complement to the lunar retroreflectors, several satellites designed exclusively for geodesy using SLR have been launched into relatively high and very stable orbits. These “cannon ball” satellites have high densities and small surface areas covered almost entirely by retroreflectors. The first was the French-built Starlette launched in 1975, followed by the American Laser Geodynamics Satellite (LAGEOS-1) launched in 1976. Other SLR-only satellites include the Japanese Ajisai (launched in 1986), the Soviet Etalon-1 and 2 (launched in 1989), LAGEOS-II (built by the Agenzia Spaziale Italiana and launched in 1992), and the French satellite Stella (launched in 1993).

To this date, more than 130 satellites have been tracked from more than 70 laser stations around the world (Fig. 9) (The International Laser Ranging Service). The massive amount of data collected for almost half a century from SLR and LLR has allowed the accurate determination of the ground station coordinates to the millimeter level and the satellite orbits to the centimeter level. These techniques combined with other space geodetic techniques such as VLBI and GNSS have been applied to scientific issues such as the modeling or establishment of the Earth’s gravity field, reference frame, and orientation parameters, to prove geodynamic theories such as plate tectonics, glacial rebound, and crustal deformation, to test principles of general relativity, and to determine Earth–lunar and solar system celestial mechanics parameters (The International Laser Ranging Service; Degnan et al. 1994). Also the establishment of the terrestrial reference frame (TRF) and Earth orientation parameters (EOP) along with the accurate determination of satellite orbits is crucial for satellite applications such as navigation and Earth observation. Some of these applications are described next.

Fig. 9 Stations of the International Laser Ranging Service (Courtesy of ILRS/NASA)

Observations and Modeling of the Terrestrial Gravity Field

The Earth’s gravity field is a 3D vector field that specifies the acceleration that an object will experience at a given point at or above the Earth’s surface. Its main component, or mean gravity, 9.8 m/s², is the equivalent gravity of a uniform mass distribution and a spherical shape. The next-order deviation from this simplified model is due to the Earth’s rotation and oblate shape. Smaller-order variations are due to mass distribution heterogeneity. In addition to spatial variations, there are temporal variations due to mass redistribution through and among the atmosphere, cryosphere, hydrosphere, and solid Earth.

To study the gravity field, the gravitational potential is modeled by a spherical harmonic series of the form (Heiskanen and Moritz 1967)

$$ U=\frac{GM}{r}\sum_{n=0}^{\infty }\sum_{m=0}^{n}{\left(\frac{r_0}{r}\right)}^{n}\,{\overline{P}}_{nm}\left( \sin \phi \right)\left[{\overline{C}}_{nm} \cos \left(m\lambda \right)+{\overline{S}}_{nm} \sin \left(m\lambda \right)\right] $$

where n is the degree and m is the order, \( {\overline{P}}_{nm} \) are the fully normalized associated Legendre functions, \( r_0 \) is the reference radius, ϕ is the latitude, λ is the longitude, and \( {\overline{C}}_{nm} \) and \( {\overline{S}}_{nm} \) are the series coefficients determined from observational data from a variety of sources. Similar spherical harmonic expansions can be used to describe the shape of planetary bodies.
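To make the evaluation of this series concrete, the following Python sketch sums a small, truncated set of coefficients. It is a minimal illustration only: the function names, the GM and \( r_0 \) constants, and the sample \( {\overline{C}}_{20} \) value are assumptions chosen for the example, not values taken from any gravity model discussed in this chapter.

```python
import numpy as np
from scipy.special import lpmv, factorial

def normalized_legendre(n, m, x):
    """Fully normalized associated Legendre function Pbar_nm(x)
    (geodetic normalization; Condon-Shortley phase removed)."""
    k = 2.0 - (m == 0)
    norm = np.sqrt(k * (2 * n + 1) * factorial(n - m) / factorial(n + m))
    return norm * (-1) ** m * lpmv(m, n, x)

def gravity_potential(r, lat, lon, coeffs, GM=3.986004418e14, r0=6378136.3):
    """Evaluate a truncated spherical-harmonic potential U at geocentric
    radius r (m), latitude lat and longitude lon (rad).
    coeffs maps (n, m) -> (Cbar_nm, Sbar_nm)."""
    series = 0.0
    for (n, m), (C, S) in coeffs.items():
        Pnm = normalized_legendre(n, m, np.sin(lat))
        series += (r0 / r) ** n * Pnm * (C * np.cos(m * lon) + S * np.sin(m * lon))
    return GM / r * series

# Central term plus an illustrative degree-2 zonal (oblateness) coefficient.
coeffs = {(0, 0): (1.0, 0.0), (2, 0): (-4.8417e-4, 0.0)}
print(gravity_potential(r=7.0e6, lat=np.radians(45.0), lon=0.0, coeffs=coeffs))
```

In practice, published models contain thousands of coefficients and are evaluated with recursive algorithms rather than the direct summation shown here.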

Before dedicated gravity satellite missions such as CHAMP (2000), GRACE (2002), and GOCE (2009), global gravity observational data were obtained by tracking satellites using SLR (Degnan et al. 1994). A satellite orbit is determined primarily by the Earth’s gravity field and is affected by nonconservative forces such as drag (atmospheric, thermal, neutral density, and charged particles) and radiation pressure. If the effects of the nonconservative forces can be accounted for, then the differences between the predicted and the determined orbit of a satellite can be attributed to inaccuracies in the gravity model. Data from SLR; from in situ, airborne, and shipborne gravimetry; and from satellite altimetry have been used to produce gravity models until the last decade. However, data from SLR provide the longest baseline for studying temporal variations of the low-order zonal harmonic components of the gravity field (Degnan et al. 1994).

Terrestrial Reference Frame (TRF) and Earth Orientation Parameters (EOP)

Satellite applications require a foundation of permanently operating reference stations to collect the observations required to provide their mapping, positioning, and timing services. This network of stations serves as a terrestrial reference frame, which defines the origin (center of mass) and orientation of the Earth. Earth orientation parameters – universal time (UT1), length of day (LOD), and the coordinates of the pole and celestial pole offsets – describe the irregularities of the Earth’s rotation and the orientation of the axis of rotation relative to inertial space and the celestial reference system. Observations with space geodetic techniques, including SLR, LLR, GPS, and VLBI, provide the data required to define the Earth’s center of mass, UT1, LOD, and polar motion. VLBI is the only technique capable of accurately determining changes in the orientation of the Earth with respect to the crust and to a celestial reference frame composed of natural radio sources (quasars) – the best current approximation of a true inertial reference.

Precision Orbit Determination for Navigation and Earth Observation Missions

Precision orbit determination (POD) is an important aspect of satellite operations, and for some satellites, such as navigation and remote sensing satellites, it is of crucial importance. It is also a technique that improves in a cyclical fashion. In order to obtain a precise orbit, an accurate gravity model is required. Over periods of years, gravity models are improved based on observations of satellite orbits obtained from optical, radar, and SLR tracking. The improved gravity model in turn allows for better orbit determination, and so the cycle continues. In the early years of the space era, satellites were tracked from the ground using optical photographic cameras and basic Doppler radar techniques with accuracies of approximately 10 m for satellites in a 1,000 km altitude orbit (Vetter 2007). The introduction of SLR in 1964 provided an alternative method for satellite tracking with an improved accuracy of a few meters. The ability to track satellites has continued to improve over the years to the millimeter-level accuracy obtainable today (McGarry et al. 2005).

SLR is a more precise technique than radar because it obtains accurate ranges to retroreflector arrays, whose position with respect to the satellite center of mass is well known, whereas radar obtains a range to the center of the satellite radar cross section, whose position relative to the center of mass is known to a lower level of accuracy. Currently, satellites with orbital altitudes below 20,000 km can be continuously tracked using GNSS (or other systems such as NASA’s TDRS) with centimeter-level precision or better. However, for GNSS satellites to provide accurate positioning, timing, and navigation, it is necessary to have accurate knowledge of their own orbits. GNSS satellites are tracked by a variety of means, including optical and radar. Most GLONASS satellites, the two current Galileo spacecraft (GIOVE-A and GIOVE-B), one of the Chinese COMPASS satellites, the Japanese QZS-1, and one GPS satellite (GPS-36) carry retroreflector arrays so that they can be tracked by SLR (GPS-35, decommissioned in April 2009, also carried an array) (The International Laser Ranging Service). Other satellites whose orbits need to be accurately determined for the fulfillment of their scientific objectives are therefore tracked by SLR; these include the gravity mappers GOCE and GRACE; the radar and LiDAR altimeters Cryosat, Jason 1 and 2, and ICESat (decommissioned); and the remote sensing satellites Envisat, ERS-2, TerraSAR-X, and TanDEM-X (The International Laser Ranging Service).

Laser Altimetry and Topographic Mapping

Laser altimetry was the first application of spaceborne LiDAR in which the active equipment was carried by the spacecraft. Laser altimetry originated as an alternative to the more traditional radar altimeter because the large divergence of radio beams makes their footprint on the surface of a planet many times larger than the footprint of a narrower laser beam. In altimetry, a smaller footprint results in a more accurate and representative estimate of height (Bufton 1989). As illustrated in Fig. 10, in nadir-looking satellite LiDAR altimetry, the laser footprint depends on the satellite orbital altitude and laser beam divergence, while the spacing between footprints (spatial resolution) depends on the orbital velocity and the laser pulse repetition frequency (PRF). The accuracy of the derived elevation depends on the precise determination of the spacecraft orbit and attitude.
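As a rough numerical illustration of these relations, the short Python sketch below applies the small-angle footprint estimate and the velocity/PRF spacing estimate; the ground speed used in the example is an assumed value, and the check against the GLAS figures quoted later in the chapter is only approximate.

```python
def altimeter_geometry(altitude_m, divergence_rad, ground_speed_m_s, prf_hz):
    """Small-angle estimates of the laser footprint diameter and the
    along-track spacing between successive footprints."""
    footprint = altitude_m * divergence_rad   # spot diameter on the surface
    spacing = ground_speed_m_s / prf_hz       # ground distance travelled between shots
    return footprint, spacing

# Illustrative check against the GLAS/ICESat values cited later in this chapter:
# 600 km altitude, 110 urad divergence, assumed ~6.9 km/s ground speed, 40 Hz PRF.
d, s = altimeter_geometry(600e3, 110e-6, 6.9e3, 40.0)
print(f"footprint ~ {d:.0f} m, spacing ~ {s:.0f} m")  # ~66 m and ~172 m
```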

Fig. 10
figure 10

Principles of operation of satellite LiDAR altimetry

The first spaceborne altimetry systems were not deployed on Earth observation missions but rather on missions to the Moon and Mars. This was because the Earth’s atmosphere presented a huge challenge as most of the laser energy is scattered by atmospheric constituents on a two-way trip from outside the atmosphere to the ground and back. Table 2 presents a historical evolution of spaceborne LiDAR altimeters and their main technical characteristics. The first laser altimeter system was deployed with the Apollo 15 mission to the Moon in 1971. The altimeter was part of the orbital science investigation and was designed to take an altitude reading for each photograph taken with a mapping metric camera (every 20–28 s), although the altimeter was also able to range independently of the camera (at least every 20 s) (Alley et al. 1969). The metric camera, the altimeter, and two other cameras (panoramic and stellar) were located in the scientific instrument module (SIM) within the Apollo service module. The Apollo laser altimeter was based on a Q-switched ruby laser and a photomultiplier tube detector; the system was also deployed on the Apollo 16 and 17 missions in 1972. At its highest sampling rate of 0.05 Hz, the altimeter sampled the lunar surface height every 30–43 km with a footprint of roughly 30 m in diameter. The main problem with this instrument was its short lifetime; during the Apollo 15 mission, the altimeter showed anomalous operation and stopped working in lunar orbit #38. As a result, only two complete and two partial surface profiles had useful data (Robertson and Kaula 1972). For the Apollo 16 mission, the sampling rate was reduced, and the instrument lifetime was extended to lunar orbit #63, some 2,372 laser pulses, of which 69 % had valid data, yielding five complete lunar surface profiles (Wollenhaupt et al. 1972). For the last Apollo lunar mission, the laser was modified to increase its lifetime, and the altimeter lasted during the entire mission. The laser fired 4,026 pulses and yielded 16 complete lunar surface profiles (Wollenhaupt et al. 1973). Data from all the missions combined yielded 7,080 height points, and from these, a lunar mean radius was determined, and a spherical harmonic representation of the lunar shape was produced completely to the 12th order and degree. However, the coverage was limited to ±26° lunar latitude.

Table 2 Evolution of technical characteristics of spaceborne laser altimeters

LiDAR altimetry returned to the Moon in 1994 onboard the Clementine mission. This instrument had a mass of only 2.4 kg (Smith et al. 1997) (compared to the 22.5 kg of the Apollo altimeter (Robertson and Kaula 1972)), yet it fired around 650,000 laser pulses. Because the system was designed as a military ranging system and not as an altimeter, only 19 % of the fired pulses produced detectable reflections, and of these, only 72,548 were retained as valid surface returns after filtering (Smith et al. 1997). These data covered the lunar surface between 79°S and 81°N latitude, with a minimum along-track resolution of 20 km and an across-track resolution of roughly 60 km. From these data, a spherical harmonic representation of the lunar shape complete to degree and order 72 was produced (Smith et al. 1997). Most recently, the Lunar Reconnaissance Orbiter (LRO), carrying the Lunar Orbiter Laser Altimeter (LOLA) (Ramos-Izquierdo et al. 2009), has been mapping the Moon since September 2009, and as of June 19, 2010, LOLA had collected over two billion elevation measurements using its multichannel technology (Smith et al. 2010).

Besides the Moon, the shape and topography of three other extraterrestrial solar system bodies have been mapped: Mars, Mercury, and the asteroid 433 Eros. The first attempt to use LiDAR to map the Martian topography was the Mars Orbiter Laser Altimeter 1 (MOLA-1), launched onboard the Mars Observer in 1992 (Smith et al. 2001; Garvin et al. 1998). Unfortunately, the Mars Observer was lost on August 21, 1993, a few days before the orbit insertion maneuver. The second attempt was made by MOLA-2 onboard the Mars Global Surveyor; MOLA-2 performed regular mapping operations between February 28, 1999, and June 30, 2001, and within this time frame, approximately 640 million measurements of the Martian surface were collected (Smith et al. 2001; NASA). Figure 11 shows some samples of Mars topography from MOLA-2 data.

Fig. 11
figure 11

Mars topography from MGS – MOLA-2 (Image courtesy of NASA)

Eros was mapped by the laser range finder (Colea et al. 1996) onboard the NEAR-Shoemaker spacecraft. Launched in 1996, NEAR-Shoemaker entered orbit around Eros on February 14, 2000, and landed on the surface of the asteroid on February 12, 2001; the mission was terminated on February 28, 2001. During its mapping mission, the laser range finder collected around 11 million measurements and allowed the best determination of the shape, gravity, and rotational state of any asteroid to date (Zuber et al. 2000; Miller et al. 2002). The most recent extraterrestrial body whose surface has been studied with LiDAR is Mercury. The Mercury surface, space environment, geochemistry, and ranging (MESSENGER) mission was launched on August 3, 2004, carrying the Mercury Laser Altimeter (MLA) (Cavanaugh et al. 2007). After launching from Earth, MESSENGER had to perform six gravity assists to obtain an orbital orientation and velocity suitable for its orbital insertion around Mercury in March 2011. These gravity assists were the result of flybys of planetary bodies: one with Earth (2005), two with Venus (2006 and 2007), and three with Mercury (January and October 2008, September 2009). MLA was activated on the three Mercury flybys, and results have been reported for the second flyby, during which a 3,200-km-long profile along the equatorial region was collected (Zuber et al. 2008). The laser footprint at the surface ranged between 23 and 134 m, while the spacing between footprints varied from 725 to 888 m. Even this modest data profile has improved our knowledge of the shape and topography of the planet and has provided a preview of Mercurian crater morphology.

With regard to planet Earth, there are a few reports that indicate the existence of an altimetry LiDAR system named LORA, which was used to obtain the precise altitude of photographs taken with a large-format camera onboard a Soviet satellite (Werner et al. 1995, 1996). This LiDAR was reported to be operational as early as 1984; however, it has been hard to obtain independent confirmation of these reports. The first confirmed LiDAR returns from the surface of the Earth were obtained in September 1994 during the STS-64 mission, when the LiDAR In-space Technology Experiment (LITE) was flown into space in the cargo bay of the Space Shuttle Discovery (Winker et al. 1996). However, LITE was designed primarily as an experimental atmospheric LiDAR and is discussed at greater length in the next section.

The first LiDAR altimeter designed for Earth observation was the Shuttle Laser Altimeter (SLA) (Garvin et al. 1998). SLA was designed to fit in two hitchhiker canisters mounted on a special bridge structure carried in the Shuttle cargo bay as part of the small self-contained payload program (SSCP), more commonly known as the Getaway Special (GAS). This compact design allowed the SLA to be carried on any shuttle mission on which there was room for the GAS bridge. The SLA design was based on MOLA-1 and was constructed using MOLA spares. One of the GAS canisters housed the optical receiver, which consisted of a 38 cm Cassegrain telescope with a silicon avalanche photodiode detector (Si APD) at its prime focus. It also contained a coaxial transmitter based on a diode-pumped, Q-switched Nd:YAG laser. The second canister contained the flight computer, power electronics, temperature sensors, and ancillary equipment. An upgrade from the MOLA architecture was the inclusion of a waveform recorder which digitized each received pulse in 4 ns samples quantized at 8 bits. The digitizer provided a time of flight redundant with that obtained from the time interval meter (TIM) and allowed the structure of the surface that caused the backscattering to be characterized. SLA was flown twice: the first time during the Endeavour STS-72 mission in January 1996 (Garvin et al. 1998) and the second during the Discovery STS-85 mission in August 1997 (Carabajal et al. 1999). During the STS-72 mission, SLA-01 collected about 82 h of nadir-looking altimetry data, roughly totaling three million observations. The Endeavour orbit for STS-72 had an altitude of 300 km, an inclination of 28.45°, and an average orbital velocity of 7 km/s. The orbit inclination and the nadir-looking orientation of SLA constrained the ranging acquisition to latitudes between 28.45°N and 28.45°S; the laser footprint size determined from the altitude and beam divergence was ~100 m, and the spacing between footprints determined by the combination of the velocity and PRF was ~700 m. After preprocessing and filtering, roughly 475,000 valid returns were obtained from land and 1.1 million from the ocean surface (Garvin et al. 1998).

For the second flight of SLA onboard the Discovery STS-85 mission, the hardware was upgraded to include a variable gain amplifier (VGA) that allowed the detector to adjust to the high dynamic range of the laser returns observed during SLA-01, which had caused the saturation of the waveform recorder. Similar to SLA-01, SLA-02 collected almost 83 h of data, firing close to three million laser pulses (Carabajal et al. 1999). The main difference was that the orbital inclination of STS-85 was 57°, which allowed altimetry sampling up to high latitudes. After preprocessing and filtering, roughly 590,000 valid returns were obtained from land and 1.5 million from the ocean surface. There were plans for two more flights of SLA to keep improving the system by reducing the beam footprint and increasing the PRF. A third flight was planned for late 1998 in partial support of the Shuttle Radar Topography Mission (SRTM). However, no flights of SLA past SLA-02 were executed. Figure 12 shows the ground tracks of collected data from the SLA-01 and SLA-02 experiments. Data from the SLA missions were compared against other ground and sea surface elevation databases (Behn and Zuber 2000; Harding et al. 1999) and were also used to perform accuracy assessments of the later-collected SRTM dataset (Suna et al. 2003).

Fig. 12
figure 12

Ground tracks for the SLA-01 and SLA-02 collections

The lessons learned from the two SLA missions were incorporated into the most recent and advanced spaceborne LiDAR altimeter for Earth observation to date: the Geoscience Laser Altimeter System (GLAS). GLAS was deployed on a dedicated platform: the Ice, Cloud, and land Elevation Satellite, ICESat (Abshire et al. 2005; Schutz et al. 2005). ICESat was launched on January 13, 2003, into a 600 km altitude orbit with a 94° inclination. This orbit has a nadir repetition cycle (within 1 km) of 183 days (or 2,753 revolutions), and the ground track spacing within the cycle is 15 km at the equator and 2.5 km at ±80° latitude.

The GLAS transmitter was powered by three diode-pumped, Q-switched Nd:YAG lasers which operated one at a time (Abshire et al. 2005; Schutz et al. 2005). The lasers produced 5 ns pulses at 40 Hz and 1,064 nm; part of each 1,064 nm pulse was passed through a nonlinear frequency-doubler crystal to obtain a 532 nm pulse. The transmitted pulse energy was 75 mJ at the infrared wavelength and 35 mJ at the green wavelength, with a beam divergence of 110 μrad. The orbital and laser characteristics yielded a footprint of 65 m on the surface, with successive spots spaced 172 m apart. The backscattered radiation was collected by a 100 cm diameter beryllium telescope; the 1,064 nm component was used to detect strong backscattering in analog mode from clouds, water, ice, and land surfaces, while the 532 nm component was used in photon-counting mode to detect scattering from thin high-altitude clouds. The 1,064 nm signal was filtered through an 800 pm spectral filter and detected by a Si APD (there were actually two APDs for redundancy). The APD output was digitized separately at 1 GHz and 2 MHz rates; the 1 GHz rate enabled a range resolution of 15 cm for accurate surface determination, while the 2 MHz rate yielded a 77 m resolution for the detection of thick clouds and aerosols. The 532 nm component was filtered twice, through 370 and 30 pm spectral filters, to limit background light, and the resultant beam was split into eight beamlets that were individually detected by eight Si APD detectors operating in Geiger mode (Abshire et al. 2005; Schutz et al. 2005).
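The relation between digitizer sampling rate and range resolution follows directly from the two-way travel of the pulse. The short sketch below is only an illustrative check of the figures quoted above; the 2 MHz case gives roughly 75 m per sample, close to the 77 m value cited.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_bin(sampling_rate_hz):
    """Two-way range represented by one digitizer sample: dr = c / (2 * f_s)."""
    return C / (2.0 * sampling_rate_hz)

print(range_bin(1e9))  # ~0.15 m -> surface elevation channel
print(range_bin(2e6))  # ~75 m   -> cloud/aerosol channel (close to the 77 m quoted)
```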

To obtain an accurate geolocation of the laser returns, besides the accurate determination of the two-way time of flight, it is necessary to determine the position and attitude of the instrument and the orientation of the fired laser shot. Precise orbit determination (POD) was performed via GPS tracking using two redundant dual-frequency blackjack receivers connected to two separate antennas on the zenith deck of the spacecraft (Schutz et al. 2005). On the nadir deck, a corner cube reflector array allowed the satellite to be tracked using SLR for an accuracy assessment of the GPS-derived orbit (Schutz et al. 2005). There were two attitude determination systems onboard the spacecraft, one for the satellite and one for the sensor optical bench. GLAS’s optical bench attitude was determined to better than 10 μrad with reference to inertial space through a stellar reference system (SRS) based on data acquired from a 10 Hz zenith looking star camera and a precision gyroscope (Schutz et al. 2005). In addition, the far-field pattern of the laser beam for each laser pulse was imaged, and its orientation was determined with respect to the optical bench and inertial space (Schutz et al. 2005). GLAS was designed to perform nadir pointing ranging; however, the spacecraft could be commanded so GLAS could point ±5° off-nadir to acquire targets of opportunity. Figures 13 and 14 show photos of the ICESat satellite integration, which highlight crucial elements of GLAS and the subsystems that enabled precise orbit and attitude determination.
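A highly simplified sketch of this geolocation step is given below. It ignores atmospheric path delay, the offsets between the optical bench, the laser, and the spacecraft center of mass, and all higher-order corrections; every number in it is purely illustrative.

```python
import numpy as np

def geolocate_return(sat_pos_ecef, pointing_ecef, two_way_time_s,
                     c=299_792_458.0):
    """First-order footprint geolocation: start at the satellite position
    (from POD) and travel one-way range along the unit pointing vector
    (from attitude determination plus the laser far-field orientation)."""
    one_way_range = c * two_way_time_s / 2.0
    u = np.asarray(pointing_ecef, dtype=float)
    u /= np.linalg.norm(u)                       # ensure a unit vector
    return np.asarray(sat_pos_ecef, dtype=float) + one_way_range * u

# Toy nadir shot from a 600 km orbit above the +X axis of an Earth-fixed frame.
sat = [6_978_137.0, 0.0, 0.0]                    # ~Earth radius + 600 km, in meters
footprint = geolocate_return(sat, pointing_ecef=[-1.0, 0.0, 0.0],
                             two_way_time_s=2 * 600e3 / 299_792_458.0)
print(footprint)                                  # ~ [6378137, 0, 0]
```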

Fig. 13
figure 13

ICESat’s nadir deck showing (a) receiving telescope, (b) retroreflector array, and (c) telemetry antenna (Image courtesy NASA)

Fig. 14
figure 14

ICESat’s Zenith deck showing (a) the satellite star trackers, (b) telemetry antenna, and (c) GPS antennas (Image courtesy NASA)

The GLAS lasers were expected to last for 3 years of continuous operation. Unfortunately, laser 1 failed prematurely after 37 days. This failure prompted a change in the collection strategy for the mission from continuous collection with an 8-day repeat cycle to a campaign collection mode with a 33-day repeat cycle, resulting in lower temporal and spatial resolution but allowing the measurement of polar ice height over an extended 7-year period. The last GLAS laser ceased operation on October 11, 2009, and the mission was decommissioned on August 14, 2010 (Abshire et al. 2005). In its almost 8 years in space, GLAS fired almost two billion laser pulses (Abdalati et al. 2010). The primary objective of the ICESat mission was the accurate determination of interannual and long-term changes of polar ice volume and mass balance; however, additional applications included the monitoring of land topography, hydrology, vegetation canopy height, cloud heights, and atmospheric aerosol distributions (Abshire et al. 2005). Figure 15 illustrates the use of GLAS data collected between 2003 and 2007 to generate maps of Antarctic and Greenland ice sheet elevation change rates. The images indicate the dynamic thinning of ice sheets in certain areas and the accumulation of ice and snow in others.

Fig. 15
figure 15

ICESat data showing changes in elevation (m/year) in the Greenland and Antarctica ice sheets (Image courtesy of NASA)

To continue the critical measurement of the polar ice sheets, an improved ICESat-2 mission is currently being developed and is scheduled for launch in 2016 (Abdalati et al. 2010). To obtain a denser spatial sampling than that of ICESat-1, a multi-beam approach (Figs. 16 and 17) combined with a higher PRF is under study (Yua et al. 2010). The baseline design consists of a micropulse laser with a PRF of 10 kHz, 0.1 mJ of energy per pulse, and a pulse width of ~1 ns. A diffractive optical element (DOE) splits the beam into nine beamlets with different energies arranged in a 3 × 3 slanted array (Fig. 17). The footprint of each beam is expected to be 10 m in diameter, and because the array is slanted with respect to the flight line, the projection on the ground produces nine parallel tracks grouped in threes. The groups will be spaced 3 km apart, and within each group, the spot separation in the across-track direction will be 50 m (Fig. 17).

Fig. 16
figure 16

Multi-beam LiDAR transmitter concept for the ICESat 2 mission (Image courtesy of NASA)

Fig. 17
figure 17

Layout of the multi-footprint concept for ICESat 2 obtained from the DOE

Besides ICESat-2, the NRC decadal survey recommends two additional LiDAR altimetry missions to be launched before 2020. The most immediate is the Deformation, Ecosystem Structure, and Dynamics of Ice (DESDynI) mission (National Research Council 2007), which will attempt to exploit the synergy between an L-band polarimetric InSAR and a multi-beam LiDAR altimeter. The second mission, recommended for the last half of the decade, is the LiDAR Surface Topography (LIST) mission. The objective of LIST will be to produce a global elevation dataset with a horizontal resolution of 5 m and at least 10 cm vertical precision (National Research Council 2007).

Atmospheric Studies

Obtaining global datasets on atmospheric composition, structure, and circulation is of crucial importance for the development of global climatic models. These datasets are obtained with a myriad of instruments using both direct detection and remote sensing. It is also important to use both bottom-to-top and top-to-bottom approaches. In situ radar and LiDAR provide the bottom-to-top measurements, which are able to detect phenomena in the lower, denser layers of the atmosphere; however, because of the higher density of the lower layers, they are not able to obtain measurements of the thinner upper layers. Spaceborne sensors provide the top-to-bottom view, detecting phenomena in the higher and thinner layers of the atmosphere and generating much needed global coverage not attainable any other way. Spaceborne atmospheric LiDARs have been employed mainly to study the Earth’s atmosphere and, in particular cases, the Martian atmosphere. All other planetary atmospheres in the solar system are too dense to be probed at optical wavelengths.

Although there are some unconfirmed reports that as early as 1984 a Soviet reconnaissance satellite carried a LiDAR to obtain the precise altitude of photographs taken with a large-format camera, and that it was used for early atmospheric observations (Werner et al. 1996; Werner et al. 1995), the first confirmed spaceborne LiDAR built primarily for atmospheric studies flew into space in September 1994. The LiDAR In-space Technology Experiment (LITE) was flown into space in the cargo bay of the Space Shuttle Discovery during the STS-64 mission (Winker et al. 1996). LITE was designed and built based on the experience accumulated over two decades by NASA’s Langley Research Center designing, building, and operating ground-based and airborne atmospheric LiDARs. LITE was designed mainly to detect and measure clouds and aerosols in the troposphere and stratosphere, determine the height of the planetary boundary layer (PBL), and derive temperature and density profiles in the stratosphere at heights between 25 and 40 km. It was also capable of detecting returns from land and sea surfaces, although without the precision of an altimetry system.

As shown in Fig. 18, LITE was designed to fly in the cargo bay of the space shuttle integrated into a Spacelab 3 m pallet. The laser transmitter system was based on two redundant flashlamp-pumped, Q-switched Nd:YAG lasers (Winker et al. 1996). Part of the energy of the 1,064 nm fundamental wavelength was passed through nonlinear frequency-doubling crystals to obtain 532 and 355 nm beams. The pulse repetition frequency (PRF) was 10 Hz, and each pulse had a width of 27 ns; the energy per pulse was 470, 530, and 170 mJ, with divergences of 1.8, 1.1, and 0.9 mrad, for the 1,064, 532, and 355 nm wavelengths, respectively. The laser beams were steered through a two-axis gimbaled prism to maintain optical alignment with the field of view of the receiver. The receiver was based on a 1-m diameter Ritchey–Chrétien telescope, with a rotating wheel with multiple aperture stop settings to configure the instrument for day or night collections. Dichroic beam splitters separated the return signal into the three spectral components, and part of the 532 nm return signal was used to determine and control the boresight alignment between the transmitter and receiver. The three beams were directed through narrowband spectral filters before their respective detectors: photomultiplier tubes (PMT) for the 355 and 532 nm components and an avalanche photodiode (APD) for the 1,064 nm component. The output from the detectors was digitized with 12-bit amplitude at 10 MHz (550 μs).

Fig. 18
figure 18

The LITE LiDAR onboard the Space Shuttle Discovery (Image courtesy of NASA)

The STS-64 mission carrying LITE was launched into a 260 km altitude orbit with a 57° inclination and a 7.4 km/s orbital velocity. These orbital characteristics combined with the optical transmitter specifications yielded footprints 470 and 290 m in diameter for the 1,064 and 532 nm beams, respectively, with footprints spaced every 740 m. During the 11-day mission, LITE was operated roughly 5° off-nadir to avoid saturation from strong specular reflections and acquired a total of 53.6 h of quick-view data (43.5 h of high-rate profiles); almost two million laser pulses were fired (1.16 million from the first laser, 0.77 million from the second) (Winker et al. 1996). These collections provided the first-ever high-resolution transects of atmospheric constituents and cloud structures. LITE data were validated against ground-based and airborne measurements.

After LITE, there were several short-lived spaceborne atmospheric LiDAR experiments, including the Balkan-1 onboard the Spektr module of the Russian MIR space station (launched on May 20, 1995) (Werner et al. 1995), the French-designed and French-built l’Atmosphere par LIdar Sur SAliout (ALISSA) onboard the Priroda module of the MIR space station (launched April 23, 1996) (Chanin et al. 1999), and the Balkan-2 onboard the ALMAZ-1B Earth observation satellite (launched on January 1, 1997) (Matvienko et al. 1994). Currently, the joint NASA and CNES Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) satellite carries the only operational spaceborne terrestrial atmospheric LiDAR: the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP). Launched on April 28, 2006, CALIPSO is managed by NASA’s Langley Research Center due to the center’s overall expertise in atmospheric LiDAR systems. An interesting aspect of the CALIPSO mission is that it is part of the afternoon or “A-Train” satellite constellation, which also includes the Aqua, Aura, PARASOL, and CloudSat satellites. All the satellites follow the same Sun-synchronous orbit (705 km altitude, 98° inclination) and are separated from each other by a few seconds to minutes. The sensor suite carried by the satellites in the constellation enables the first global, near-simultaneous measurements of aerosols, clouds, temperature, relative humidity, and radiative fluxes. CloudSat with its cloud profiling radar (CPR) leads CALIPSO by 10–15 s, which allows for the simultaneous profiling of the same cloud systems with both the radar and the LiDAR. CALIPSO’s attitude is controlled such that the LiDAR points 0.3° ahead of nadir in the along-track direction, to avoid saturating the detector with strong specular returns from calm water bodies. Based on the spacecraft orbital parameters and the transmitter characteristics, the footprint on the ground is 70 m in diameter, and adjacent spots are separated by 333 m in the along-track direction.

Figures 1 and 2 (see the “High-Level Technical Overview of LiDAR” section) illustrate CALIOP’s system design; its transmitter is based on two redundant, diode-pumped, Q-switched Nd:YAG lasers with a PRF of 20.16 Hz and a nominal energy per pulse of 220 mJ (Winker et al. 2004). Part of the energy of the 1,064 nm pulses is passed through a frequency-doubling crystal to produce a 532 nm component. The energy of the transmitted pulses at both wavelengths is nominally 110 mJ and is measured before passing through a beam expander that limits the divergence to 100 μrad. The laser polarization is also controlled to be linear with a purity greater than 99 %. Backscattered photons are collected by a 1-m beryllium mirror telescope; a field stop at the telescope focus limits the receiver field of view and provides a spatial filter that limits background noise. A dichroic beam splitter separates the 1,064 and 532 nm components; the 1,064 nm stream is filtered through a narrowband spectral filter and then directly detected by an avalanche photodiode (APD). The 532 nm stream is passed through a double spectral and etalon filter to limit the background noise. The pure 532 nm component is then passed through a polarization beam splitter to separate the perpendicular and parallel polarization components, and from there it is directed to separate photomultiplier tubes (PMT). For each channel, the output of the detector is amplified by two parallel amplifiers and 14-bit digitizers, which together provide an effective 22-bit dynamic range. This dynamic range covers the expected magnitude range of the backscattering signals from molecules, aerosols, and cloud surfaces encountered in the atmosphere. Data acquisition starts when the laser pulses are estimated to be 115 km above sea level and finishes when they are estimated to be 18.5 km below mean sea level (MSL); the output from the digitizers is sampled and recorded at 10 MHz (15 m range bins).
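The chapter does not give the gain settings or the merging logic of the two parallel amplifier chains, so the following Python sketch is only a plausible illustration: it assumes a 256× gain separation between the two 14-bit channels, chosen so that 14 bits plus 8 bits of gain separation reproduce the roughly 22-bit effective dynamic range quoted above.

```python
import numpy as np

FULL_SCALE = 2**14 - 1   # counts of one 14-bit digitizer
GAIN_RATIO = 2**8        # assumed gain separation between the two chains

def merge_dual_gain(high_gain_counts, low_gain_counts):
    """Combine a high-gain and a low-gain 14-bit sample of the same return
    into one value on a common (high-gain-equivalent) scale."""
    hg = np.asarray(high_gain_counts)
    lg = np.asarray(low_gain_counts)
    saturated = hg >= FULL_SCALE
    return np.where(saturated, lg * GAIN_RATIO, hg)

# A weak molecular return stays on the high-gain channel; a bright cloud
# return saturates it and is recovered from the low-gain channel instead.
print(merge_dual_gain([1200, FULL_SCALE], [5, 900]))  # -> [1200, 230400]
```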

CALIOP data provide thin transects of the Earth’s atmosphere that characterize the vertical distribution of atmospheric aerosols and molecules. They are a valuable complement to other types of meteorological sensors that provide information on the horizontal distribution of clouds and other atmospheric features. Figure 19 shows an example of one such atmospheric transect, collected as CALIPSO was on an ascending pass from South America up to the North Atlantic (Fig. 20). Figure 19 also illustrates the difference in detected scattering between the 532 and 1,064 nm channels; the 532 nm channel is the more sensitive of the two due to its shorter wavelength. Figure 21 illustrates the complementary value of atmospheric LiDAR to other forms of passive remote sensing; overlaid on an AQUA MODIS image is the ground track of the CALIOP profile. Complementing the horizontal cloud structure from the MODIS image, the CALIOP data show the vertical cloud structure, including an ash plume produced by the eruption of the Eyjafjallajökull volcano in May 2010.

Fig. 19
figure 19

Example of an atmospheric backscattering profile detected by CALIOP’s 532 nm parallel + perpendicular and 1,064 nm channels (Image courtesy of NASA)

Fig. 20
figure 20

Ground track for the atmospheric scattering profiles shown in Fig. 19 (Image courtesy of NASA)

Fig. 21
figure 21

Horizontal cloud structure from AQUA MODIS and vertical profile from CALIOP showing the ash plume from the Eyjafjallajökull volcano (Image courtesy of NASA)

An interesting implementation of atmospheric LiDAR occurred in 2008, when a ground-based atmospheric LiDAR was deployed and made successful measurements of the Martian atmosphere for 152 days. The LiDAR was part of the meteorological station (MET) onboard the Phoenix Mars Lander that was launched from the Earth on August 4, 2007, landed on Mars on May 25, 2008, and collected and transmitted scientific information until October 29, 2008 (a total of 152 Martian days) (Whiteway et al. 2011). What is outstanding about this LiDAR system is the degree of miniaturization that was achieved. The entire unit had a total mass of 6 kg. The transmitter was based on a single diode-pumped, Q-switched Nd:YAG laser with a PRF of 100 Hz and a pulse width of 10 ns (Whiteway et al. 2008). Part of the energy of the 1,064 nm pulses was passed through a frequency-doubling crystal to produce a 532 nm component. The pulse energy was 0.3 mJ at 1064 nm and 0.4 mJ at the 532 nm. The divergence of the laser beams was 250 μrad. The backscattered photons were collected by a 10-cm diameter reflective telescope, separated into the two spectral components by a dichroic mirror. The 1,064 beam was filtered through a 2 nm interference filter and detected by a Si APD working in analog mode. The 532 beam was passed through a 1 nm interference filter, limited by a field stop and detected by a PMT, whose signal was collected in both analog and photon-counting modes. The analog output was recorded with a 14-bit amplitude at a 30 MHz sampling frequency (333 μs per bin). Analog detection was used for backscattering below 10 km, while photon counting was used to detect weak signals from backscattering up to 20 km.

A future satellite, ADM-Aeolus, scheduled for launch in 2013, will carry the first atmospheric LiDAR to be used for the remote determination of global wind speed profiles. Along with temperature, pressure, and humidity, wind velocities are the basic variables used to describe the state of the atmosphere, and knowledge of the global circulation is crucial for the improvement of global climate models. The Atmospheric Laser Doppler Instrument (ALADIN) onboard Aeolus is designed and constructed as a direct-detection Doppler LiDAR. The operating principle of the instrument, illustrated in Fig. 22, consists of detecting Mie and Rayleigh scattering by aerosols and atmospheric molecules and using a high-resolution spectrometer to measure the wavelength shift of the backscattered radiation with respect to that emitted by the laser transmitter (Ansmann et al. 2007). The wavelength shift is proportional to the relative velocity along the line of sight (LOS) between the satellite and the scattering particles. By taking into account the spacecraft motion and the Earth’s rotation, it is possible to isolate the wind velocity. The satellite is planned to orbit at a 400 km altitude (7.21 km/s ground speed), and ALADIN will point 35° off-nadir in the across-track direction. The Doppler shift, and thus the wind speed, is to be determined at different ranges (heights) along the LOS, and the horizontal wind component perpendicular to the satellite ground track will be projected from the slanted vector. Mission requirements call for the wind measurements to be averaged across 50 km cells, with averaged measurements obtained about 200 km apart. To achieve this, the LiDAR will operate in a burst mode, transmitting a continuous burst for 7 s every 28 s (Ansmann et al. 2007).
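A minimal sketch of the underlying Doppler relation is shown below. It assumes a purely horizontal wind aligned with the ground projection of the LOS and a spacecraft/Earth-rotation contribution that has already been removed, which is a strong simplification of the actual ALADIN retrieval; the numbers are illustrative only.

```python
import numpy as np

C = 299_792_458.0        # speed of light, m/s
WAVELENGTH = 355e-9      # ALADIN ultraviolet wavelength, m
OFF_NADIR = np.radians(35.0)

def los_velocity_from_shift(freq_shift_hz):
    """Line-of-sight velocity from the Doppler frequency shift,
    using df = 2 * v_los / lambda for the two-way (backscatter) path."""
    return freq_shift_hz * WAVELENGTH / 2.0

def horizontal_wind_from_los(v_los):
    """Project the LOS velocity onto the horizontal, assuming a purely
    horizontal wind aligned with the LOS ground projection."""
    return v_los / np.sin(OFF_NADIR)

# A 1 m/s LOS velocity corresponds to a shift of 2/lambda, about 5.6 MHz at 355 nm.
shift = 2.0 * 1.0 / WAVELENGTH
print(f"{shift / 1e6:.1f} MHz")                                    # ~5.6 MHz
print(horizontal_wind_from_los(los_velocity_from_shift(shift)))    # ~1.74 m/s
```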

Fig. 22
figure 22

Concept for the future ALADIN Doppler wind LiDAR

ALADIN has a monostatic design, i.e., the transmit and receive paths go through the same telescope (Ansmann et al. 2007). The telescope is an afocal Cassegrain design with a diameter of 1.5 m and a field of view of only 12 μrad, which produces a footprint of 12–15 m at the end of the 500 km LOS. The optical transmitter is based on a single-mode, diode-pumped, Q-switched, frequency-tripled Nd:YAG laser. The output laser pulse in the ultraviolet range has a wavelength of 355 nm, a pulse width of 30 ns, an energy per pulse of 120 mJ, and a planned PRF of 100 Hz. The spectrally pure laser pulses are passed through linear and circular polarizers before being expanded through the telescope. The backscattered photons are collected by the telescope and passed through the polarizers; only the parallel-polarized components are accepted and passed through a field stop and a 1 nm spectral filter to limit the effect of background illumination. Once the return beam is spatially and spectrally filtered, it is directed to the spectrometer system, which comprises a Fizeau interferometer that detects the spectrally narrow Mie backscatter peak (channel 1) and two Fabry–Perot etalons (channels 2 and 3) that detect the broad Rayleigh–Brillouin backscatter spectrum. The output of the spectrometers is detected by two accumulation charge-coupled devices (ACCD).

Besides the described spaceborne atmospheric LiDARs, both NASA and ESA are currently contemplating future missions that would incorporate atmospheric LiDAR instruments. Currently in the design phase, the joint ESA-JAXA Earth Clouds, Aerosols, and Radiation Explorer (EarthCARE) is aimed at improving our understanding of the interactions between cloud, radiative, and aerosol processes. EarthCARE proposes a suite of atmospheric instruments which includes a cloud profiling radar (CPR), a multispectral imager (MSI), a broadband radiometer (BBR), and an atmospheric backscattering and depolarization LiDAR (ATLID). ATLID is envisioned to be an ultraviolet high spectral resolution backscattering LiDAR (Le Hors et al. 2008), much like an upgraded version of CALIOP. In the decadal survey, the NRC recommended to NASA the design and implementation of three missions that incorporate atmospheric LiDARs (National Research Council 2007). The most immediate is the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission, which is envisioned to incorporate a multiwavelength LiDAR system. The Aerosol-Cloud-Ecosystems (ACE) mission, with a primary goal of reducing uncertainty about climate forcing from aerosol–cloud interactions and ocean ecosystem carbon dioxide (CO2) uptake, will incorporate an atmospheric backscattering LiDAR, a multiangle polarimeter, and a Doppler radar. Finally, a demonstration mission, 3D-Winds, is recommended; it would incorporate a Doppler LiDAR to map tropospheric winds for weather forecasting and pollution transport modeling.

Guidance, Navigation, Control, and Inspection

The most recent application of LiDAR on spaceborne platforms is for on-orbit operations such as ranging for rendezvous and docking, active imaging for inspection and servicing, and robot vision for autonomous operation. The first use of LiDAR technology for semiautonomous/autonomous vehicle operation was for the Mars Pathfinder microrover “Sojourner,” which landed on Mars on July 4, 1997, and operated until September 27, 1997, when communication with the lander was suddenly lost. The Sojourner microrover was equipped with a stereo-pair imaging system for rover navigation. To aid the camera in proximity operations, a laser triangulation system was included, which consisted of five semiconductor diode laser stripe projectors (JPL 1997). Using preflight calibration tables, the system was able to determine distances from the rover to the projected laser stripes based on the pixel positions at which the laser spots were detected.

The increasing need for active imaging for on-orbit inspection and servicing has created another application of LiDAR technology. This need became especially evident after the tragic loss of the Columbia Shuttle during atmospheric reentry on February 1, 2003. Among the many improvements to the systems and procedures of the space shuttle program made during the return-to-flight effort was the inclusion, on every subsequent mission, of tools for on-orbit inspection. The inspection of critical shuttle areas such as the wing leading edge and thermal protection tiling is now performed with a suite of passive and active imaging systems mounted on the end of a 15 m long boom that serves as an extension to the shuttle remote manipulator system (SRMS). The orbiter boom and sensor system (OBSS) includes three sensors: two of them, the laser dynamic range imager (LDRI) and the intensified television camera (ITVC), are mounted on a pan-and-tilt platform, while the third sensor, the laser camera system (LCS), is rigidly mounted on the side of the boom (Fig. 23) (NASA 2005). LDRI and LCS are active imaging sensors that use nonconventional LiDAR technology for the collection of 3D data.

Fig. 23
figure 23

The shuttle orbiter boom and sensor system (OBSS), inset images show close-ups of the laser dynamic range imager (LDRI) (a) and the laser camera system (LCS) (b) (Images courtesy of NASA)

LDRI was developed by Sandia National Laboratories and uses a combination of phase-difference ranging and video to derive 3D information. LDRI has a laser transmitter based on a continuous wave (CW) diode laser emitting light at 805 nm with a maximum power of 12 W (Smithpeter et al. 2000). The CW amplitude (intensity) is modulated at 3.125 or 140 MHz. In contrast to most LiDAR systems, where the divergence of the laser beam is restricted, the light from the LDRI is expanded and then passed through a diffuser plate to produce a floodlight effect; different plates can yield beam spreads of 10–60°, with 40° being the normally used value. This expanded beam is used to illuminate the target; the backscattered photons are collected by a refractive lens, passed through a narrow 30 nm spectral filter to limit the contribution from external illumination, and then focused on the cathode of an image intensifier tube. The optical gain of the intensifier tube is modulated with the same signal used to modulate the laser output. The output from the image intensifier is coupled by a fiber-optic taper to a CCD detector which is read by a conventional 640 × 480 analog video recorder operating at 30 frames per second. To perform ranging of a given target area, the area is illuminated by the variable-intensity laser for a period during which several video frames are recorded. Assuming that each pixel of the frame images the same target area and that the range remains constant throughout the collected frames, it is possible to derive, for each pixel, the phase difference between the emitted and backscattered radiation by comparing the changes in intensity across several frame captures. Knowing the phase difference, it is possible to determine the range between the sensor and the target on a pixel-by-pixel basis, thus generating intensity and spatial datasets of the illuminated area.
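A minimal sketch of the phase-difference ranging relation implied by these modulation frequencies follows. The per-pixel phase estimation from successive video frames is not reproduced here; the calculation simply shows how the modulation frequency sets the unambiguous range window, and why a coarse and a fine frequency are both useful.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def range_from_phase(phase_rad, mod_freq_hz):
    """Range from the phase difference between transmitted and received
    intensity modulation: R = c * phase / (4 * pi * f_mod)."""
    return C * phase_rad / (4.0 * np.pi * mod_freq_hz)

def unambiguous_range(mod_freq_hz):
    """Largest range measurable without phase wrapping: c / (2 * f_mod)."""
    return C / (2.0 * mod_freq_hz)

# The two LDRI modulation frequencies trade range window for precision:
for f in (3.125e6, 140e6):
    print(f"{f / 1e6:7.3f} MHz -> unambiguous range {unambiguous_range(f):6.2f} m")
# 3.125 MHz -> ~48 m window (coarse); 140 MHz -> ~1.07 m window (fine)
```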

LCS was developed by the Neptec Design Group of Canada and first flew into space on August 10, 2001, as a detailed test objective (DTO) during the STS-105 mission of the Space Shuttle Discovery (STS-105 Shuttle Press Kit 2001). LCS is a hybrid video and imaging LiDAR sensor; the LiDAR sensor is based on the triangulation ranging principle and is capable of imaging a 30° × 30° field of regard (FOR) from a range between 1 and 10 m (Dupuis et al. 2008; Deslauriers et al. 2005). The transmitter of LCS is based on a continuous wavelength-shifted Nd:YAG laser emitting at 1,500 nm. Scanning mirrors/galvanometers are used to steer the laser beam in two dimensions over the FOR to illuminate the target. The reflected photons are captured by a refractive lens, filtered by a narrow band-pass spectral filter, and detected by a linear detector array (LDA). By determining the array coordinates of the pixel that detects the highest intensity signal and knowing the baseline distance and the galvanometer angles, it is possible to determine the range to the target using the triangulation principle with high precision (3 mm at 5 m) and at fast acquisition rates. LCS has been upgraded by Neptec to a hybrid LiDAR design which combines a triangulation LiDAR operating at 1,400 nm with a time-of-flight (TOF) LiDAR operating at 1,540 nm, sharing the same scanning mechanism (Dupuis et al. 2008; English et al. 2005). This upgraded sensor also includes a thermal imager and is designated TriDAR. TriDAR exploits, in a synergistic approach, the advantages of the TOF and triangulation ranging mechanisms, combining the long-range capability and coarser precision of the TOF unit (range <3 km, <25 mm precision) with the sub-centimeter accuracy at short range of the triangulation unit. TriDAR first flew into space as a DTO onboard Discovery during STS-128 in August–September 2009, to demonstrate its capability to perform autonomous acquisition and tracking of the ISS. It also performed real-time docking measurements during the STS-131 mission in April 2010.
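The triangulation relation on which such sensors rely can be sketched as follows; the baseline and angles in the example are illustrative assumptions and do not correspond to the actual LCS geometry, which is not given in this chapter.

```python
import numpy as np

def triangulation_range(baseline_m, laser_angle_rad, detector_angle_rad):
    """Distance from the sensor baseline to the illuminated spot.
    Both angles are measured from the direction perpendicular to the
    baseline, positive toward the other end of the baseline."""
    return baseline_m / (np.tan(laser_angle_rad) + np.tan(detector_angle_rad))

# Toy numbers only: a 0.2 m baseline with the beam steered 10 degrees and the
# return detected 8 degrees off the perpendicular.
z = triangulation_range(0.2, np.radians(10.0), np.radians(8.0))
print(f"{z:.3f} m")   # ~0.63 m
```

Because the baseline is fixed, the range sensitivity degrades rapidly with distance, which is why the triangulation channel is precise only at short range and is complemented by the time-of-flight channel in TriDAR.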

An additional space-based ranging and imaging LiDAR was carried by the Air Force XSS-11 satellite, which operated between 2005 and 2007. The rendezvous laser system (RLS) sensor, also referred to as the Spaceborne Scanning Lidar System (SSLS), was designed and manufactured by Optech and MDA as a system to allow XSS-11 to perform autonomous rendezvous and proximity maneuvers (Nimelman et al. 2006; Dupuis et al. 2008). RLS was a time-of-flight scanning LiDAR with a 20° × 20° field of view, a laser beam divergence of 500 μrad, a maximum range of 5 km, a resolution of 1 cm, and an accuracy of 5 cm. During its 22 months of operations, XSS-11 used RLS to perform rendezvous and proximity operations around its expended Minotaur launch vehicle and with several US-owned dead or inactive resident space objects.

Conclusion

Entire books have been written on the subject of LiDAR remote sensing from specific points of view. This chapter is meant to provide a broad overview of LiDAR technology, highlighting the most common applications from spaceborne platforms. It describes the versatility of LiDAR, not only as a remote sensing technique but also as a method of enabling and supporting other remote sensing techniques and satellite applications. LiDAR, despite originating at roughly the same time as radar, is not yet as mature as radar or other forms of remote sensing. However, the rapid development of its enabling technologies (lasers, photodetectors, positioning, and attitude sensors) as well as of LiDAR data processing algorithms over the last two decades is speeding its maturation. As is the case with any other technology, further technical developments will enable new applications. While there is still much room for the development of LiDAR on its own, a great deal of progress is also expected from a synergistic approach that combines it with other forms of active and passive remote sensing.

Cross-References