
1 Introduction

Remote sensing means acquiring knowledge from a distance. Conceptually, remote sensing is not only the sensing itself but the complete process in which data about the Earth’s surface are recorded by monitoring electromagnetic energy, processed in the laboratory to make them usable for different applications, and the rectified data subsequently analysed in a multidisciplinary approach (Kumar and Kaur 2015). Different scholars have defined remote sensing in their own way. Colwell (1966) used the term ‘remote sensing’ in its broadest sense, which merely means ‘reconnaissance at a distance’. He later (1983) defined it as: ‘The measurement or acquisition of information of some property of an object or phenomenon, by a recording device that is not in physical or intimate contact with the object or phenomenon under study.’ In 1997 he further refined this as ‘The art, science, and technology of obtaining reliable information about physical objects and the environment through the process of recording, measuring, and interpreting imagery and digital representations of energy patterns derived from non-contact sensor systems.’ Lillesand and Kiefer (2000) suggest that ‘[r]emote sensing is the science and art of obtaining information about an object, area, or phenomenon through the analysis of data acquired by a device that is not in contact with the object, area, or phenomenon under investigation.’ Further, Campbell (2002) defines ‘Remote Sensing [as] the practice of deriving information about the earth’s land and water surfaces using images acquired from an overhead perspective, using electromagnetic radiation in one or more regions of the electromagnetic spectrum, reflected or emitted from the earth’s surface.’ According to Jensen (2007), ‘[r]emote sensing is the process of collecting data about objects or landscape features without coming into direct physical contact with them. Finally, remote sensing is a multidisciplinary activity which deals with the inventory, monitoring and assessment of natural resources through the analysis of data obtained by observations from a remote platform’. Remote sensing can therefore be defined as the art, science and technique of collecting reliable information about an object or phenomenon, without being in physical contact with it, by means of a sensor or camera operating over a wide range of electromagnetic energy and mounted on a platform such as a tripod, aircraft, spacecraft or satellite, for multidisciplinary analysis.

2 Elements of Remote Sensing

The remote-sensing process involves several stages before the final data are supplied to the user community. They are as follows (see Fig. 3.1).

Fig. 3.1 Stages of remote sensing

(a) Source of energy: The Sun and the Earth are two natural sources of energy. The Sun is the major source of electromagnetic energy that illuminates the object. Other sources of energy, such as a flash gun, radar and geothermal energy, also interact with objects.

(b) Interaction of electromagnetic radiation with the atmosphere: Electromagnetic energy travels through the varying thicknesses of the atmospheric layers to reach the target, at which point it is reflected back to the sensor, thus travelling through the atmosphere twice.

(c) Interaction of electromagnetic radiation with the Earth’s surface: The energy interacts with surface features such as land, water and vegetation. The interaction varies according to the properties of these features.

(d) Electromagnetic energy received by the sensor: After interacting with the object on the Earth’s surface, the electromagnetic energy is reflected back through the atmosphere. This reflected energy is acquired by the sensor, and the amount received depends upon the object’s behaviour and the atmospheric conditions.

(e) Transmission to the ground station: The electromagnetic energy received by the sensor is converted into signals and transmitted to ground-receiving stations at various locations around the globe. In India, ground stations in Delhi, Lucknow, Hyderabad and several other cities receive these data.

(f) Rectification of data: The received signals are converted into picture elements (pixels) to create image data. These image data, in the raw form supplied by the sensor, are not useful to the common user, as they contain discrepancies in their original form. The data are rectified according to the needs of the user.

(g) Supply to the user: The rectified image data are called imagery and are supplied in either analogue or digital form to users, according to their requirements. The data are finally analysed digitally or visually, and conclusions are drawn that can be applied to various fields or applications.

3 Remote-Sensing Platforms

In remote sensing, platforms play an important role by providing a base, moving or static, on which sensors are mounted. Moving platforms include balloons, kites, pigeons, aircraft and spacecraft/satellites. Static platforms include high-rise buildings, tripods etc., used for collecting ground information (ground truth) or for laboratory simulation for experimental purposes. In general, the platform is a means of holding the sensor aloft (Rees 2001). It is a stage on which to mount the camera or sensor to acquire information about the target under investigation. On the basis of altitude above the Earth’s surface, platforms may be classified as groundborne, airborne or spaceborne (Figs. 3.2 and 3.3).

Fig. 3.2 Types of remote-sensing platform

Fig. 3.3 Remote-sensing platforms above the Earth’s surface (after Rees 2001)

3.1 Groundborne

As the term suggests, groundborne sensors are positioned near the ground; they can be hand-held, mounted on a tripod, positioned on the roof of a building or operated from a moving vehicle in order to collect information. A ground-based remote-sensing system for Earth-resource studies is mainly used for collecting ground data, such as soil samples, during fieldwork, or for laboratory simulation studies carried out before a sensor is mounted on an airborne or spaceborne platform. This can, for example, be a camera or radiometer mounted on a pole, tripod or moving vehicle (Fig. 3.4) to assess the reflectance behaviour of an object, phenomenon or specific crop during a day or season.

Fig. 3.4 Groundborne platform on a moving vehicle

3.2 Airborne

Airborne observation is carried out using aircraft with specific modifications to carry remote sensors. In the past, pigeons, balloons and kites were used for airborne remote sensing. The airborne platform provides flexibility in the choice of altitude and convenience in acquiring data in terms of timing and requirements. Airborne observations are possible from 100 m up to 30–40 km above the ground. The speed of the aircraft can vary between 140 and 600 kmph. The main disadvantage of aircraft as a remote-sensing platform is the short mission duration compared to a spaceborne platform, which means the absence of continuous data, as well as the influence of atmospheric dynamics, which produces geometric distortion in the photographs. An airborne observation is acquired from a much lower altitude than a spaceborne observation, so the spatial coverage of the data is smaller, making it unsuitable for mapping very large areas. On the other hand, it is much more suitable for detailed investigations of smaller areas. Recently, drones have been used as airborne platforms.

3.3 Spaceborne

A spaceborne sensor is mounted on a satellite and deployed with the help of a satellite launch vehicle. Placing a satellite-borne sensor in orbit is more expensive than using an airborne platform, but the benefits include the high speed of the satellite, a large field of view (potential swath) and increased spatial coverage due to continuity of observations of the Earth’s surface. Spaceborne platforms are also not affected by atmospheric dynamics in the way that airborne platforms are. Depending upon the altitude, there are two types of orbit: polar (Fig. 3.5) and geostationary (Fig. 3.6). These are used to place the satellites, depending on the objectives of the mission. The orbital height is calculated by subtracting the radius of the Earth from the orbital radius of the satellite. For example, where the orbital radius of the satellite is 7200 km and the Earth’s radius is 6400 km, the orbital height of the satellite is 800 km (7200 − 6400 = 800 km).

Fig. 3.5 Polar orbits

Fig. 3.6 Geostationary orbits

4 Electromagnetic Spectrum

The arrangement of electromagnetic energy along a scale of wavelengths is called the electromagnetic spectrum (Fig. 3.7). It extends from gamma rays (shortest wavelength) to radio waves (longest wavelength). Further subdivisions of the spectral regions are known as spectral bands. The electromagnetic spectrum plays a vital role by providing atmospheric windows (Table 3.1) for remote-sensing processes for various applications (Table 3.2). Broadly, for remote-sensing purposes, the electromagnetic spectrum is divided into two major parts: the optical region and the microwave region. The optical region of the electromagnetic spectrum refers to that part of the spectrum in which optical laws apply. It ranges from X-rays (0.02 µm) through the visible part of the electromagnetic spectrum to the far infrared (1000 µm) region.

Fig. 3.7 Electromagnetic spectrum

Table 3.1 Atmospheric windows for remote sensing
Table 3.2 Application of spectral region

Gamma rays (shorter than 0.3 Å) and X-rays (0.3–300 Å) (1 ångström = 10⁻¹⁰ m) This region has been used to a lesser extent because of atmospheric opacity. Its use has been limited to low-flying aircraft platforms or to the study of planetary surfaces with no atmosphere (e.g., the Moon). It is used mainly to sense the presence of radioactive materials.

Ultraviolet region (300 Å–0.4 µm) This is used mainly to study planetary atmospheres or surfaces with no atmospheres because of the opacity of gases at these short wavelengths. An ultraviolet spectrometer was mounted on the Voyager spacecraft to determine the composition and structure of the upper atmosphere of Jupiter, Saturn and Uranus.

Visible region (0.4–0.7 µm) This electromagnetic region plays an important role in remote sensing. This spectral band receives maximum illumination from the Sun and its energy is readily detectable by sensors. This is also known as ‘visible light’. It occupies a relatively small portion of the electromagnetic spectrum. This is the only portion of the spectrum that is associated with the concept of colour, i.e. blue, green and red, which are known as primary colours.

Infrared region (0.7–10³ µm) This region is subdivided into three subregions: reflected infrared, thermal infrared and far infrared. Reflected infrared divides into near infrared (NIR), with wavelengths of 0.7–1.4 µm, and short-wave infrared (SWIR), 1.4–3.0 µm. Thermal infrared is divided into mid-wave infrared (MWIR), 3.0–8.0 µm, and long-wave infrared (LWIR), 8.0–15.0 µm. The far infrared subregion covers 15.0–1000 µm (1 mm). In this region, molecular rotation and vibration play important roles. Imagers, spectrometers, radiometers, polarimeters and lasers are used in this region for remote sensing. Thermal infrared gives information about surface temperature.

Microwave region (1 mm–1 m) This covers the region neighbouring the infrared, from a wavelength of 1 mm (300 GHz frequency) up to 1 m. In this region, most of the interactions are governed by molecular rotation, particularly at the shorter wavelengths. This region is mostly used by microwave radiometers/spectrometers and radar systems.

Radio wave region (more than 10 cm) This covers the region of wavelengths longer than 10 cm (frequency less than 3 GHz). This region is used by active radio sensors such as imaging radars, altimeters and sounders, and, to a lesser extent, passive radiometers.
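These regions are specified interchangeably by wavelength and by frequency, which are related through c = λν. A minimal Python sketch of the conversion (the speed-of-light value is approximate; the boundary wavelengths are those quoted above):

```python
# Convert between wavelength and frequency using c = wavelength * frequency.
C = 3.0e8  # approximate speed of light in m/s

def wavelength_to_frequency(wavelength_m):
    """Return frequency in Hz for a wavelength given in metres."""
    return C / wavelength_m

def frequency_to_wavelength(frequency_hz):
    """Return wavelength in metres for a frequency given in Hz."""
    return C / frequency_hz

print(wavelength_to_frequency(1e-3) / 1e9)   # 1 mm  -> ~300 GHz (microwave boundary)
print(wavelength_to_frequency(0.10) / 1e9)   # 10 cm -> ~3 GHz  (radio-wave boundary)
```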

5 Interaction of Electromagnetic Radiation with Earth Features

When electromagnetic energy is incident on any given Earth-surface feature, three fundamental energy interactions occur: reflection, absorption and transmission. The proportions of energy reflected, absorbed and transmitted vary for different Earth features at different wavelengths, depending on their material type and condition. These differences permit us to distinguish different features on an image. Many remote-sensing systems operate in the wavelength regions in which reflected energy predominates, so the reflectance properties of Earth features are the most important for identifying features in an image. A graph of the spectral reflectance of an object as a function of wavelength is termed a spectral reflectance curve (Fig. 3.8).

Fig. 3.8 Spectral reflectance curve

5.1 Interaction of Electromagnetic Radiation with Soil

Various characteristics of the soil, such as moisture content, texture, surface roughness and the presence of minerals and organic matter, interact differently with electromagnetic energy. Coarse, sandy soils are usually well drained, resulting in low moisture content and relatively high reflectance of electromagnetic energy. Poorly drained, fine-textured soil will generally have lower reflectance. In the absence of water, however, the tendency is reversed, with coarse-textured soils appearing darker in the imagery than fine-textured soils. Surface roughness and organic matter content decrease soil reflectance. The presence of iron oxide in a soil will also decrease reflectance in the visible region.

5.2 Interaction of Electromagnetic Radiation with Vegetation

The characteristics of vegetation relevant to reflectance include leaf structure, moisture content and chlorophyll content. Our eyes perceive healthy vegetation as green because plant leaves absorb blue and red energy very strongly and reflect green energy. The blue and red bands constitute the chlorophyll-absorption region. Stress on plants decreases chlorophyll production, resulting in less absorption in these bands. In the near infrared region, the reflectance behaviour is determined by the structure of the leaf. As leaf structure is highly variable between plant species such as banyan, banana etc., the near infrared region can be used to discriminate tree species even if they look the same in the visible region. Near infrared is also used to detect stress on vegetation. Healthy vegetation reflects 40–50% of the energy incident upon it in this region, and very little, less than 5%, is absorbed. Reflectance also increases with the number of leaf layers in a canopy, with maximum reflection reached at about eight leaf layers. Beyond 1.3 μm, energy incident on vegetation is essentially absorbed or reflected, with little or no transmittance. There is strong water absorption at 1.4, 1.9 and 2.7 μm; these are referred to as water absorption bands, and reflectance peaks occur between them at about 1.6 and 2.2 μm.
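The strong red absorption and high near-infrared reflectance described above are commonly exploited through band ratios such as the normalised difference vegetation index (NDVI). A minimal sketch using hypothetical reflectance arrays (the index is a standard technique, not one defined in this chapter):

```python
import numpy as np

# Hypothetical red and near-infrared reflectance arrays (0-1 scale).
red = np.array([[0.05, 0.30], [0.04, 0.25]])   # healthy vegetation absorbs strongly in red
nir = np.array([[0.45, 0.32], [0.50, 0.28]])   # healthy vegetation reflects strongly in NIR

# Normalised difference vegetation index: values near 1 indicate healthy vegetation.
ndvi = (nir - red) / (nir + red)
print(ndvi)
```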

5.3 Interaction of Electromagnetic Radiation with Water

Basically, a water surface acts as a specular reflector, but suspended material in the water and the bottom of a shallow water body interact differently with electromagnetic energy. Clear, deep water acts almost as a blackbody in the near infrared and beyond, absorbing nearly all of the incident energy. Suspended material in water changes the reflectance: reflectance increases with increasing suspended matter. Increases in chlorophyll concentration in the water tend to decrease reflectance in the blue region and increase reflectance in the green region; because of this characteristic, remote-sensing data can be used to monitor and estimate the concentration of algae. Snow has very high reflectance in the visible region, but this drops in the near infrared region. The mid-infrared region is used to distinguish snow from cloud, owing to low reflectance from snow and comparatively high reflectance from cloud.

6 Earth Resource Observation Remote-Sensing Satellites

The imagery produced by the satellite is mainly utilised for the monitoring and assessment of natural and man-made resources through mapping processes. In this regard, the characteristics of the sensor are the most important feature of the satellite, especially its level of resolution. Satellite-based remote sensing has been widely accepted since the launch of the first Earth-resource remote-sensing satellite, Landsat, in 1972, followed by the SPOT series, IRS series etc. These satellites also use different orbits for various objectives.

6.1 Landsat Series

Landsat is a series of satellites launched by the USA, the first to provide digital imagery for image processing. The Earth Resources Technology Satellite (ERTS) was launched on 23 July 1972 and renamed Landsat in 1975. Landsat 1 was the first unmanned satellite specifically designed to acquire data about Earth resources on a repetitive, multispectral basis. It was a sun-synchronous, near-polar orbiting satellite and proved very useful for monitoring and assessing natural and cultural resources on the Earth’s surface. The programme has subsequently launched seven further satellites, all successfully except Landsat 6, which failed to achieve orbit in 1993. Six different sensors have been flown on the Landsat series in different combinations: the Return Beam Vidicon (RBV), Multispectral Scanner System (MSS), Thematic Mapper (TM), Enhanced Thematic Mapper Plus (ETM+), Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS). Landsat 1, 2 and 3 carried the RBV and MSS, providing images of the same area by framing and scanning systems, respectively. Landsat 4 and 5 carried the MSS together with the 7-band TM sensor, which provided an enhanced radiometric resolution of 8 bits. Landsat 7, still in operation, carries the ETM+, and Landsat 8 carries the OLI and TIRS sensors (Tables 3.3 and 3.4).

Table 3.3 Landsat satellite characteristics
Table 3.4 Use of Landsat series satellites

6.2 Commercial Satellites

IKONOS This was the world’s first commercial very high-resolution Earth observation satellite, launched on 24 September 1999 into a 680 km orbit to collect panchromatic and multispectral imagery with 0.80 m and 3.2 m resolution, respectively. The name IKONOS is derived from the Greek word for ‘image’. The sensor has 11-bit radiometric resolution. The revisit time of the satellite is three days, and it provides panchromatic data in the spectral range 0.45–0.90 μm and four multispectral bands: Band 1 Blue (0.445–0.516 µm), Band 2 Green (0.506–0.595 µm), Band 3 Red (0.632–0.698 µm) and Band 4 Near Infrared (0.757–0.853 µm). The imagery from the two sensors (panchromatic and multispectral) can be resolution-merged to create 0.80 m pan-sharpened colour imagery. IKONOS imagery is used to provide large-scale mapping for national security and infrastructure development.

QuickBird This satellite was launched on 18 October 2001 at an orbital altitude of 450 km. After more than 13 years in orbit, the QuickBird mission ended on 27 January 2015. The spatial resolution was 0.61 m in panchromatic mode and 2.4 m in multispectral mode, providing 11-bit data. The revisit time was approximately 3.5 days. The spectral range of the panchromatic imagery was 0.445–0.900 µm, with multispectral imagery in four bands: Band 1 Blue (0.450–0.520 µm), Band 2 Green (0.520–0.600 µm), Band 3 Red (0.630–0.690 µm) and Band 4 Near Infrared (0.760–0.900 µm).

GeoEye-1 This satellite was launched on 6 September 2008 at an orbital altitude of 681 km. The satellite collects images at 0.41 m panchromatic and 1.65 m multispectral resolution. It can collect up to 350,000 km² of pan-sharpened multispectral imagery per day, with a three-day revisit period. The spectral range of the panchromatic imagery is 0.440–0.800 µm, with multispectral imagery in four bands: Band 1 Blue (0.450–0.510 µm), Band 2 Green (0.510–0.580 µm), Band 3 Red (0.655–0.690 µm) and Band 4 Near Infrared (0.780–0.920 µm).

WorldView-1 This satellite was launched in September 2007 at an orbital altitude of 496 km with an average revisit time of 1.7 days. This satellite has only a panchromatic imaging system with very high spatial resolution of 0.5 m at 0.4–0.9 µm with 11-bit data. The satellite is also equipped with state-of-the-art geo-location accuracy capabilities and exhibits stunning agility with rapid targeting and efficient in-track stereo collection.

WorldView-2 This satellite, launched on 8 October 2009, is the first high-resolution 8-band multispectral commercial satellite with an orbital altitude of 770 km, and has an average revisit time of 1.1 days. WorldView-2 provides 46 cm panchromatic resolution and 1.85 m multispectral resolution with 11-bit data. The spectral range of panchromatic imagery is 0.450–0.800 µm and multispectral imagery in eight bands: Band 1 Coastal (0.4–0.45 µm), Band 2 Blue (0.450–0.510 µm), Band 3 Green (0.510–0.580 µm), Band 4 Yellow (0.585–0.625 µm), Band 5 Red (0.630–0.690 µm), Band 6 Red Edge (0.705–0.745 µm), Band 7 Near Infrared 1 (0.770–0.895 µm) and Band 8 Near Infrared 2 (0.860–1.040 µm).

WorldView-3 This is the first multi-payload, super-spectral, high-resolution commercial satellite, launched on 13 August 2014 at an orbital altitude of 617 km. WorldView-3 provides 31 cm panchromatic resolution, 1.24 m multispectral resolution, 3.7 m short-wave infrared resolution and 30 m CAVIS (clouds, aerosols, vapours, ice and snow) resolution. The radiometric resolution is 11 bits for the panchromatic and multispectral bands and 14 bits for the SWIR bands. WorldView-3 has an average revisit time of less than one day (Table 3.5).

Table 3.5 WorldView-3 sensor band

WorldView-4 This satellite, launched on 11 November 2016, carries panchromatic and multispectral (visible and NIR) sensors, with the same orbital altitude and spatial resolution as WorldView-3. Both sensors provide 11-bit pixel information. The swath width at nadir is 13.1 km, with daily revisits. The satellite also has 3200 Gb of solid-state onboard storage capacity.

6.3 SPOT Series

Satellite Pour l’Observation de la Terre (SPOT) is a programme run by the French government in collaboration with Sweden and Belgium. SPOT satellites have been acquiring images of the Earth since 1986. There are seven satellites in this series, of which SPOT 1, 2 and 3 ceased operation in 2003, 2009 and 1996, respectively (http://www.cnes.fr/web/CNES-en/1415-spot.php). SPOT 1–3 carried a High Resolution Visible (HRV) sensor, while SPOT 4 carried a High Resolution Visible and Infrared (HRVIR) sensor. SPOT 5 carries a High Resolution Stereoscopic (HRS) imaging instrument dedicated to acquiring simultaneous stereo pairs over a swath 120 km across and 600 km long. The stereo pairs are acquired in panchromatic mode with a spatial resolution of 10 m. Vegetation instruments were also flown on SPOT 4 (Vegetation 1) and SPOT 5 (Vegetation 2); these are very wide-angle Earth observation instruments with 1 km spatial resolution. They use the same spectral bands as the HRVIR instrument (B2, B3 and Mid-IR) plus an additional band known as B0 (0.43–0.47 μm) for oceanographic applications and atmospheric corrections.

SPOT 6 and 7 are currently operational (Table 3.6). The objectives of the programme are to explore the Earth’s resources, to detect and forecast phenomena involving climatology and oceanography, and to monitor human activities and natural phenomena. The SPOT 6/7 constellation is composed of two twin satellites operating as a true constellation in the same orbit, phased 180° apart (Fig. 3.9). Together with their oblique viewing capability (up to a 45° angle) and exceptional agility, this orbit phasing allows the satellites to revisit any point on the globe daily. The Pléiades twins are very high-resolution (0.5 m) satellites in the same orbit. The phased orbit of the satellite constellation offers 1-day revisit above 40° latitude within a ±30° angle corridor, 2-day revisit between the equator and 40° latitude, and 1-day revisit with two satellites at an increased angle of 45°. The Pléiades constellation also offers a high-resolution stereoscopic capability. This is achieved within the same pass over the area, which enables a homogeneous product to be created quickly and also provides an additional quasi-vertical image (tristereoscopy), allowing the user to obtain an image together with its stereoscopic environment.

Table 3.6 Characteristics of SPOT satellites
Fig. 3.9 SPOT 6/7 and Pléiades 1A/1B constellation (http://www.cnes.fr/web/CNES-en/1415-spot.php)

6.4 Indian Remote-Sensing Satellites

IRS series The Indian Remote Sensing (IRS) satellite system is one of the largest constellations of remote-sensing satellites in operation in the world today (Table 3.7). The IRS programme began with the launch of IRS-1A in 1988 and presently includes more than ten satellites that continue to provide imagery at a variety of spatial resolutions, from better than 1 m up to 500 m.

Table 3.7 Characteristics of Indian remote-sensing satellites

IRS-1A This was the first-generation Indian Remote Sensing satellite, launched on a Vostok rocket on 17 March 1988 from the Baikonur Cosmodrome in Kazakhstan (then part of the USSR). The satellite carried three pushbroom scanners: one LISS-I and two LISS-II. The initial mission life of the satellite was three years, but IRS-1A served for 8 years and 4 months, until July 1996.

IRS-1B This satellite was launched on 29 August 1991 using the same vehicle as IRS-1A. It completed 12 years and 4 months on 20 December 2003. It also carried the same sensor specification as IRS-1A.

IRS-1C This was India’s second-generation operational remote-sensing satellite, launched on 28 December 1995 from Baikonur on a Molniya rocket. It carried payloads with enhanced capabilities, including better spatial resolution, an additional spectral band, a modified revisit period and augmented remote-sensing capabilities. It had three payloads, viz. PAN, LISS-III and WiFS. The mission was completed on 21 September 2007, after 11 years and 8 months of service.

IRS-1D This satellite was launched on 27 September 1997 by the PSLV-C1 (Polar Satellite Launch Vehicle-C1) from the Satish Dhawan Space Centre (SDSC, also known as the Sriharikota Range or SHAR), India. It was a follow-on satellite to IRS-1C and belonged to the second generation of the IRS series. It had similar capabilities to IRS-1C in terms of spatial resolution, spectral bands, stereoscopic imaging, wide-field coverage and revisit capability. The mission was completed in January 2010, after 12 years and 3 months of service.

ResourceSat-1 This satellite was launched on 17 October 2003 to continue the remote-sensing data services provided by IRS-1C and IRS-1D. It was launched on the PSLV-C5 from the SHAR Centre. At the time of launch it was the most advanced remote-sensing satellite built by ISRO, carrying LISS-IV, LISS-III, AWiFS-A and AWiFS-B cameras, with a mission life of five years.

ResourceSat-2 This is a follow-on mission to ResourceSat-1, launched on 20 April 2011 by the PSLV-C16 from the SHAR Centre to provide enhanced multispectral and spatial coverage. Its LISS-IV sensor has a 70 km swath with 10-bit radiometric resolution. It also carries an additional payload, an automatic identification system (AIS) for ship surveillance, to plot the position, speed and other information of ships. ResourceSat-2A was launched by the PSLV-C36 on 7 December 2016 from the SHAR Centre. It is intended to continue the remote-sensing services to global users provided by ResourceSat-1 and ResourceSat-2.

IRS-P3 This experimental Earth observation satellite was launched by the PSLV-D3 on 21 March 1996 from the SHAR Centre. IRS-P3 carried two remote-sensing payloads—a Wide Field Sensor (WiFS) similar to that of IRS-1C for vegetation dynamic studies, with an additional short wave infrared (SWIR) band and a Modular Opto-electronic Scanner (MOS) designed for oceanic applications. It also carried an X-ray astronomy payload and a C-band transponder for radar calibration. The mission was completed during January 2006 after serving 9 years and 10 months.

IRS-P4 (Oceansat-1) This was the first satellite primarily built for ocean applications, launched by the PSLV-C2 from the SHAR Centre on 26 May 1999. This satellite carried an ocean colour monitor (OCM) and a Multi-frequency Scanning Microwave Radiometer (MSMR) for oceanographic studies. Thus, IRS-P4 expanded the capabilities of earlier launched remote-sensing Indian satellite applications to newer areas. It completed its mission on 8 August 2010.

Oceansat-2 This satellite extends the services of the earlier Oceansat satellite and was launched by the PSLV-C14 from the SHAR Centre on 23 September 2009. It carries three payloads: an Ocean Colour Monitor (OCM), a Ku-band pencil-beam scatterometer and a radio occultation sounder for the atmosphere (ROSA). The OCM is an 8-band multispectral camera operating in the visible and near-infrared region. The scatterometer is used to determine ocean-surface wind vectors through estimation of radar backscatter. The ROSA is used to characterise the lower atmosphere and the ionosphere for several scientific studies. The mission life of the satellite was five years, but it is still operational.

TES The Technology Experiment Satellite (TES) was launched by the PSLV-C3 on 22 October 2001. TES is an experimental satellite to demonstrate and validate new technologies in the field of satellites. It carries a panchromatic camera for remote-sensing experiments.

CartoSat-1 This is the first Indian Remote Sensing satellite capable of providing in-orbit stereo images; it was launched on 5 May 2005 by the PSLV-C6 from the SHAR Centre. It has two payloads, the PAN-Fore and PAN-Aft cameras, for cartographic applications to meet global requirements. The spatial resolution of the camera is 2.5 m. The satellite provides stereo pairs, with a 5-day revisit period, required for generating digital elevation models, orthoimages and value-added products for various GIS applications. The mission life was five years and the satellite is still operational.

CartoSat-2 This is an advanced remote-sensing satellite capable of providing scene-specific spot imagery; it was launched on 10 January 2007 into a 630.6 km orbit by the PSLV-C7 from the SHAR Centre. The panchromatic camera (PAN) on board can provide imagery with a spatial resolution better than 1 m and a swath of 9.6 km. It is the nation’s second dedicated mapping satellite, following CartoSat-1 launched in May 2005. The satellite can be steered up to ±45° along-track and ±26° across-track, providing a 5-day revisit period. The data can be used for detailed mapping and other cartographic applications at the cadastral level, for urban and rural infrastructure development and management, and for applications in land information systems (LIS) and GIS. The mission life is five years.

CartoSat-2A This is the third satellite of the CartoSat series launched on 28 April 2008 at 630 km orbital altitude by the PSLV-C9 rocket, from the SHAR Centre. It is a sophisticated and rugged remote-sensing satellite that can provide scene-specific spot imagery. This satellite carries a panchromatic camera (PAN) operating at 0.5–0.85 μm and can be steered up to ±45° along- and across-track to facilitate imaging of any area more frequently. The spatial resolution of this camera is better than 1 m and provides a swath of 9.6 km. Imagery from this satellite is used for cartographic application, as for CartoSat-2.

CartoSat-2B This satellite was launched on 12 July 2010 by the PSLV-C15 from the SHAR Centre at an orbital altitude of 637 km. It is an advanced remote-sensing satellite mainly intended to provide remote-sensing data services for users of multiple spot-scene imagery. The satellite provides better than 1 m spatial resolution and a 10 km swath in the panchromatic band. It can be steered up to ±26° along- and across-track to achieve stereoscopic imagery and provide a 4/5-day revisit capability. The imagery is useful for village-level/cadastral-level resource assessment and mapping, detailed urban and infrastructure planning and development, transportation system planning, preparation of large-scale cartographic maps, and preparation and monitoring of micro-watershed development plans at the village/cadastral level.

CartoSat-2C This was launched by the PSLV-C34 on 22 June 2016 from the SHAR Centre into a 505 km polar sun-synchronous orbit. It carries a multispectral imaging system in addition to the panchromatic imager, a first for the CartoSat series. The panchromatic spectral range is 450–900 nm, reaching a ground resolution of 65 cm when employing apparent velocity reduction for along-track imagery. Four MX detectors with bandpass filters between 450 and 860 nm deliver imagery at a 2 m ground resolution along a 10 km swath.

CartoSat-2D This was launched on 15 February 2017 by the PSLV-C37.

CartoSat-2E This was launched on 23 June 2017 by the PSLV-C38.

CartoSat-2F This was launched on 12 January 2018 from the SHAR Centre by the PSLV-C40 launch.

The imagery sent by the CartoSat series will be useful for utility management, coastal land use and regulation, urban and rural applications and cartographic applications.

RISAT-1/RISAT-2 Radar Imaging Satellite-1 (RISAT-1) is an Indian state-of-the-art satellite for microwave remote sensing, carrying a synthetic aperture radar (SAR) payload operating in the C-band (5.35 GHz), which enables imaging of surface features during both day and night under all weather conditions. The purpose of active microwave remote sensing is to provide cloud penetration and day-and-night imaging. This unique characteristic supports various applications in agriculture, particularly paddy monitoring in the kharif (monsoon) season, and in the management of natural disasters such as floods and cyclones. RISAT-2 is a radar imaging satellite that enhances ISRO’s capability for disaster management applications.

7 Digital Image Processing

The satellite image received from the sensor at the ground station is in a raw format that is not directly useful to the user community and requires a significant amount of processing. The raw image contains geometric and radiometric errors generated during the image acquisition process. The clarity of the raw image is often poor, and noise can further degrade image quality. These problems must be addressed, by rectifying the inherent distortions in the raw image, to obtain better visualisation and positional accuracy. Digital image processing in general involves the use of computers to manipulate digital images in order to improve their quality and/or modify their appearance (Wolf and Dewitt 2000). It is the task of processing and analysing digital data using image-processing algorithms. The operator instructs the computer to perform an interpretation according to certain conditions, which are defined through various algorithms. The main processes are pre-processing, enhancement and image classification.

7.1 Pre-processing

Pre-processing is the process of image rectification and restoration to correct distortion to create a more faithful representation of the original image. Typically, geometric and radiometric correction (as well as noise correction) are applied.

Geometric Correction This involves relating the spatial coordinates in the image to the corresponding spatial coordinates on the Earth’s surface. Geometric correction means the repositioning of the pixels from their original locations in the data array into a specified reference grid. It involves three processes: the selection of a suitable mathematical distortion model, coordinate transformation and resampling (interpolation) (Schowengerdt 2006).

Mathematical Distortion Model Two approaches, the satellite model and the polynomial model, are used to model and correct geometric distortion in the raw image. In the satellite model, information on satellite position, altitude, orbit, scan geometry and the Earth model is used to produce system-corrected products. The residual errors in the system-corrected products can be corrected by polynomial functions and ground control points (GCPs) (Table 3.8).

Table 3.8 Number of GCPs per order of polynomial
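As an illustration of the polynomial approach, a first-order (affine) polynomial can be fitted to a handful of GCPs by least squares. This is a minimal sketch with hypothetical GCP coordinates; operational software would use many more GCPs and often higher-order polynomials:

```python
import numpy as np

# Hypothetical GCPs: (column, row) in the raw image and (easting, northing) on the ground.
img = np.array([[120.0, 340.0], [980.0, 310.0], [500.0, 900.0], [150.0, 850.0]])
gnd = np.array([[403000.0, 2873200.0], [424500.0, 2873800.0],
                [412500.0, 2862000.0], [403750.0, 2863000.0]])

# First-order polynomial: E = a0 + a1*col + a2*row (and likewise for N).
A = np.column_stack([np.ones(len(img)), img[:, 0], img[:, 1]])
coeff_e, *_ = np.linalg.lstsq(A, gnd[:, 0], rcond=None)
coeff_n, *_ = np.linalg.lstsq(A, gnd[:, 1], rcond=None)

def image_to_map(col, row):
    """Transform an image coordinate to map coordinates with the fitted model."""
    return (coeff_e[0] + coeff_e[1] * col + coeff_e[2] * row,
            coeff_n[0] + coeff_n[1] * col + coeff_n[2] * row)

print(image_to_map(120.0, 340.0))   # close to the first GCP (403000, 2873200)
```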

Transformation Model The surface of the Earth (geoid) cannot be represented on a flat surface without shrinking, breaking or stretching it somewhere. It is also not possible to achieve in one map all five properties (true shape, equal area, true distance, true direction and simplicity) required to make a perfect map. It is, however, possible to develop projections which have one or more of these properties, though not all of them (Mishra and Ramesh 2002). Various projections are available for transforming coordinates from the three-dimensional Earth to a two-dimensional map.

Resampling Methods Transforming the raw image into a referenced image leaves empty pixels in the array of the new image; these empty pixels are filled in by an interpolation process known as resampling. There are three methods of resampling: nearest-neighbour, bilinear and cubic convolution (Fig. 3.10). In nearest-neighbour resampling, the pixel value in the original image that is spatially nearest to the calculated position is copied to the corresponding location in the new image; the value of each new pixel in the referenced image is thus that of the nearest pixel in the raw image. The merit of this approach is that little calculation is required to derive the output pixel value and the original pixel values are not altered. Bilinear interpolation uses a 2 × 2 pixel neighbourhood: the value written into the referenced image is a distance-weighted average of the four neighbouring pixel values of the raw image, so new pixel values are introduced and the result is smoothed. Cubic convolution (bicubic interpolation) uses a 4 × 4 pixel neighbourhood. It avoids much of the smoothing incurred with bilinear interpolation and provides a slightly sharper image, at the cost of more computation, introducing new pixel values by interpolating over the nearest 16 pixels.

Fig. 3.10 Type of resampling
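The difference between nearest-neighbour and bilinear resampling can be illustrated for a single output pixel whose calculated position falls between raw-image pixels. A minimal sketch with a hypothetical 2 × 2 raw array and fractional position:

```python
import numpy as np

raw = np.array([[10.0, 20.0],
                [30.0, 40.0]])        # hypothetical 2 x 2 block of raw-image DN values
row, col = 0.25, 0.75                 # calculated (fractional) position in the raw image

# Nearest neighbour: copy the closest raw pixel; no new DN values are created.
nearest = raw[int(round(row)), int(round(col))]

# Bilinear: distance-weighted average of the surrounding 2 x 2 neighbourhood.
r0, c0 = int(np.floor(row)), int(np.floor(col))
dr, dc = row - r0, col - c0
bilinear = (raw[r0, c0] * (1 - dr) * (1 - dc) + raw[r0, c0 + 1] * (1 - dr) * dc +
            raw[r0 + 1, c0] * dr * (1 - dc) + raw[r0 + 1, c0 + 1] * dr * dc)

print(nearest, bilinear)              # 20.0 (original value) and 22.5 (new, smoothed value)
```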

Radiometric Correction Radiometric correction is categorised into two groups: cosmetic and atmospheric. Cosmetic correction involves all those operations aimed at correcting visible errors and noise in the image data. Defects in the data may take the form of line dropouts (periodic or random missing lines), line striping and random or spike noise. Line dropouts occur due to recording problems, when one of the detectors of the sensor gives wrong data or stops functioning. The Landsat Thematic Mapper has 16 detectors in each of its bands except the thermal band; the loss of one detector would result in every sixteenth scan line being a string of zeros, which would plot as a black line on the image. In Landsat MSS, this defect would occur in every sixth line. Line striping is far more common than line dropout and often occurs due to non-identical detector response. Although the detectors of all satellite sensors are carefully calibrated and matched before launch, over time the response of some detectors may drift to higher or lower levels; as a result, every scan line recorded by those detectors is brighter or darker than the other lines. Histogram matching is the most popular method of correcting this effect. Random or spike noise occurs during data transmission: individual pixels acquire values much higher or lower than the surrounding pixels, producing much brighter or darker spots.

All reflected and emitted radiation leaving the Earth’s surface is attenuated, mainly through absorption and scattering by constituents of the atmosphere. These distortions are wavelength dependent and are corrected by atmospheric correction techniques for haze, sun angle and skylight. Thus, radiometric corrections constitute an important step in the pre-processing of remotely sensed data: cosmetic corrections remove sensor-induced defects, while atmospheric corrections reduce the influence of atmospheric and illumination parameters. Atmospheric corrections are particularly important for generating image mosaics and for comparing multi-temporal remote-sensing data.
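As a simple illustration of cosmetic correction, a dropped scan line can be estimated from the lines immediately above and below it. A minimal sketch with a hypothetical band and dropout line:

```python
import numpy as np

band = np.random.randint(0, 256, size=(100, 100)).astype(float)   # hypothetical band
dropped_line = 16                                                  # hypothetical dead scan line
band[dropped_line, :] = 0                                          # dropout recorded as zeros

# Cosmetic correction: estimate the missing line from the lines above and below it.
band[dropped_line, :] = (band[dropped_line - 1, :] + band[dropped_line + 1, :]) / 2.0
```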

7.2 Image Enhancement

Image enhancement involves techniques for increasing the visual distinction between features in a scene. It can be divided into two parts: one operating on individual pixels and enhancing their values without reference to their spatial context, known as radiometric enhancement (e.g. contrast enhancement, histogram equalisation); the other, using spatial information and dealing with the values of neighbouring pixels, known as spatial enhancement (e.g. spatial filtering). An image histogram is simply a graph or table (Fig. 3.11) showing the distribution of pixel values in a digital image: the number of pixels is shown on the y-axis and the DN values on the x-axis. Histogram matching is very important for producing a mosaic of two images.

Fig. 3.11 Histogram without enhancement

Contrast Enhancement This is the conversion of the original digital range into the full range of the display. It is intended only to improve the visual quality of the displayed image, which would otherwise have low visibility because of the limited range of the raw image. There are various reasons for low contrast in the original digital image. Sometimes objects and their background produce similar or uniform responses at the same wavelength, so the scene itself may have a low contrast ratio. Scattering of electromagnetic radiation also reduces contrast at short wavelengths. Contrast refers to the range (or ratio) between the maximum and minimum intensity over the image; the larger the ratio, the easier the image is to interpret.

The sensor is designed to acquire the full range of radiance, from 0 to 255 DN (0 meaning black and 255 white on the grey scale) in 8-bit quantisation. But a scene generally produces a radiance range much narrower than the full range, resulting in low contrast in the displayed image. Radiance values from the ocean, under a low solar elevation angle and at high latitude, are low, while those from snow and sand, under a high solar elevation angle and at low latitude, are high. Contrast enhancement is categorised into linear and non-linear contrast stretches.

Linear Stretch A linear stretch is used when equal weight is given to all DN values (Fig. 3.12). It is used to increase or decrease the contrast of an image: if the DN range of the data is narrower than the display range, a linear stretch increases the contrast of the image; if the DN range is wider than the display range, it decreases the contrast. The human eye cannot usually distinguish more than about 50 grey levels at any one time (Schowengerdt 2006).

Fig. 3.12 Histogram of linear enhancement

$$ {\text{LUT}} = \frac{X - X_{\min}}{X_{\max} - X_{\min}} \times 255 $$

where X is the DN value of a raw-image pixel, Xmin is the minimum DN value of the raw image (79) and Xmax is the maximum DN value of the raw image (158). For example, linearly enhancing a DN value of 100 gives ((100 − 79)/(158 − 79)) × 255 ≈ 68, so the value in the Look Up Table (LUT) is 68. The LUT is used to generate the display of the image, not to alter the image data themselves; the original image data are unchanged.
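A minimal sketch of this linear stretch applied to a whole band, using the DN range 79–158 from the example above (the band array itself is hypothetical):

```python
import numpy as np

band = np.random.randint(79, 159, size=(100, 100))   # hypothetical raw band, DN range 79-158
x_min, x_max = 79, 158                               # minimum and maximum DN of the raw image

# Look-up table mapping every possible 8-bit input DN to a display value of 0-255.
dn = np.arange(256)
lut = np.clip(np.round((dn - x_min) / float(x_max - x_min) * 255), 0, 255).astype(np.uint8)

stretched = lut[band]    # stretched display values; the original image data are unchanged
print(lut[100])          # 68, as in the worked example
```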

Non-linear Stretch or Histogram Equalisation This is used when it is necessary to weight the DN values by their frequency of occurrence. The number of grey levels in the enhanced image is less than in the original image, owing to the grouping of certain adjacent grey values. In histogram equalisation, more contrast is applied to the most frequently occurring DN values of the original image. In this process the dark and bright portions of the original image/histogram are compressed, resulting in a loss of information at both ends (Fig. 3.13).
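A minimal sketch of histogram equalisation based on the cumulative frequency of DN values (the input band is hypothetical):

```python
import numpy as np

band = np.random.randint(60, 180, size=(100, 100))        # hypothetical 8-bit band

hist, _ = np.histogram(band, bins=256, range=(0, 256))    # frequency of each DN value
cdf = hist.cumsum() / hist.sum()                          # cumulative frequency (0-1)

# Each DN is mapped to a display value proportional to its cumulative frequency, so
# heavily populated DN ranges receive most of the output grey levels.
equalised = (cdf[band] * 255).astype(np.uint8)
```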

Fig. 3.13 Histogram equalisation

Gaussian Stretch This enhances the contrast at the tails of the histogram, at the expense of contrast in the middle part of the histogram (grey scale).

Spatial Filtering A digital image is made up of spatial components, pixels, each holding a digital number at a particular location, and contains both high and low spatial frequencies. Contrast enhancement does not alter the image data, merely the way they are displayed. Spatial filtering, on the other hand, does change a pixel’s value according to the values of the pixels in its neighbourhood. Spatial averaging is one function of spatial filtering, used to reduce noise or speckle in the data. Diagrammatically, a spatial filter is a grid of boxes of weights, termed a kernel, convolution matrix or mask (Fig. 3.14); the new value of the central pixel is derived from the pixel values within the surrounding window (3 × 3, 5 × 5, 7 × 7). Spatial filtering is a means of improving the image by suppressing (low-pass filtering) or enhancing (high-pass filtering) certain spatial frequencies, directions and textures (Rosenfeld and Kak 1976). Spatial frequency is the ‘roughness’ of the tonal variations occurring in an image (Fig. 3.15).

Fig. 3.14 Kernels 5 × 5 and 7 × 7

Fig. 3.15 Low- and high-pass filtering

$$ {\text{Image}} = {\text{low pass}} + {\text{high pass}} $$

The low-pass filter passes low frequencies and blocks high frequencies. It preserves the local mean (the sum of its weights is one) and smooths the output layer. Low-pass filters smooth the image, resulting in a blurring effect in the output image; the larger the window (kernel), the smoother the image. This filter is very useful for removing ‘salt and pepper’ noise.

$${\text{Low pass}} = {\text{image}} - {\text{high pass}}$$

The high-pass filter removes the local mean (the sum of its weights is zero) and produces an output which is a measure of the deviation of the input signal from the local mean.

$$ {\text{High pass}} = {\text{image}} - {\text{low pass}} $$
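A minimal sketch of this low-pass/high-pass decomposition, using a 3 × 3 moving-average (mean) filter as the low-pass filter; the band is hypothetical and scipy.ndimage supplies the uniform filter:

```python
import numpy as np
from scipy import ndimage

band = np.random.randint(0, 256, size=(100, 100)).astype(float)   # hypothetical band

low_pass = ndimage.uniform_filter(band, size=3)    # 3 x 3 moving-average (smoothing) filter
high_pass = band - low_pass                        # high pass = image - low pass

# The decomposition satisfies: image = low pass + high pass.
assert np.allclose(band, low_pass + high_pass)
```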

7.3 Image Classification

The rectified image can be used for extracting information, by one of two methods. The first, visual image interpretation, involves interpreting a printout (or screen display) of the rectified image; the second is digital image classification. Digital image classification is the process of generating a thematic categorisation of similar pixels. Image classification (pattern recognition) is the process used to produce thematic maps from imagery (Schowengerdt 2006). It involves the analysis of multispectral image data and the application of statistically based decision rules to determine the land-use/cover identity of each pixel in an image. Typically, multispectral data with 8-bit quantisation (256 levels per band) are used to produce a single-band thematic output divided into several classes. The level of classification attainable depends on the resolution of the sensor used to capture the image. Various classification systems with different levels are available, such as those given by Anderson (Table 3.9) and the National Remote Sensing Centre, Government of India (Table 3.10). It is generally accepted that the Anderson Level I categories can be reliably mapped using Landsat MSS imagery, and Level II categories with TM and SPOT multispectral imagery (Lillesand and Kiefer 1987). Level III and beyond require SPOT, IRS LISS-IV or higher-resolution imagery for classification.

Table 3.9 Land-use and land-cover classification system for use with remote-sensor data (Anderson et al. 1976)
Table 3.10 Land-use/cover scheme used by the National Remote Sensing Centre (2007)

7.4 Elements of Aerial Photo/Image Interpretation

Tone This refers to the relative brightness or tonal variation recorded in a photograph and represents the radiance received by the sensor from the object on the Earth’s surface. Some objects appear darker or lighter than others: light tones represent areas with high radiance and dark tones represent areas with low radiance. The nature of the materials on the Earth’s surface affects the amount of light reflected. The terms light, medium and dark are used to describe tonal variation. For example, areas of laterite soil are dark grey in tone and areas of salt-affected soil are light grey; dry soil has light tones and wet soil has dark tones in photographs.

Colour In multispectral imagery, colour is the most important element to discriminate two features which cannot be easily identified by tonal variation in panchromatic imagery. For example, in a true-colour image healthy vegetation is represented by green but in a panchromatic one, it is represented in greyscale; even in standard false-colour composites, it will be represented by red.

Tone and colour are the basic and primary elements of interpretation.

Size Some features are easily identified by their size, with reference to their length, width, perimeter and area in the context of the scale of the photograph. Size is a relative term which may be small, medium or big, according to the scale of the photograph/imagery. The size of a water body, for example, will help to determine whether it is a small pond or a big lake. National highways can be easily distinguished from smaller roads. Long rivers can be distinguished from smaller tributaries. Residential areas are easily distinguished from industrial areas in the urban environment.

Shape This refers to geometric shapes, e.g. linear, curvilinear, circular, elliptical, radial, square, rectangular, triangular, hexagonal, star, elongated etc. Consolidated agricultural areas tend to have geometric shapes like rectangles and squares. Streams are linear (line) features that can have many bends and curves. Canals, roads, and railway lines tend to have fewer curves than streams. Stadiums may be circular or elliptical shapes. Some objects can be identified almost solely on the basis of their shapes, such as the pyramids in Egypt or the Pentagon building in the USA.

Texture This refers to the roughness and smoothness of the features in aerial photograph/satellite imagery, as well as the arrangement of tonal variation or repetitions of tone and colour. The textural classes may be smooth (uniform, homogeneous), intermediate and rough (coarse, heterogeneous). Grassland appears smoother than forest. Paddy fields appear smoother than sugar-cane fields. Water in a lake or a cemented area appears smoother than ploughed agricultural land.

Size, shape and texture are secondary elements for interpretation.

Pattern Features of the Earth’s surface produce regular, linear, systematic, irregular or random spatial arrangements. These may be natural or man-made features. The difference between planned (systematic) and unplanned cities can be observed. Chandigarh city has a checkerboard pattern while Connaught Place in New Delhi has a radial pattern. The pattern of the drainage may be radial, trellis, dendritic etc. The differences between forests, forest plantations and orchards can also be observed. The patterns formed by the features in photographs/imagery can be used to identify the objects.

Shadow Shadows are clues to identify an object. These are cast by the object on the vertical aerial photograph. They may provide more information than the objects themselves, particularly when determining height. For example, the shadows cast by a hill or mountain may help to identify physiographic information. Objects ranging in size and type from the Qutab Minar and the Eiffel Tower to a bridge or signboard are often very informative. Shadows also help to determine the height of features such as high-rise buildings in the aerial photograph.

Patterns and shadows are tertiary elements of interpretation.

Site or Location This refers to geographical location. This characteristic of photographs/imagery is important in identifying the feature located in a particular area or region such as various vegetation types and landforms. For example, some tree species are found more commonly in one geographic location than in others such as evergreen forests, mangroves etc.; some landforms are found in particular locations such as sand dunes, alluvial fans, river deltas; large circular depressions in the ground are identified as sinkholes in central Florida; some cultural features such as brick-kilns, thermal power plants and nuclear power plants can be determined.

Association Some objects on the Earth’s surface are always found in association with others. These associated features provide clues to the identity of an object: a sugar mill, for example, is associated with surrounding fields of sugar-cane, a molasses tank, a storage godown (warehouse) etc. A vegetated area within an urban setting may be a park. Commercial centres are likely to be located next to major roads, railways or waterways. Industrial areas tend to occur in clusters. Some combinations of structures also help to determine the precise nature of an enterprise: one or two tall chimneys, a large central building, a site along a waterway, cooling towers and piles of solid fuel point to the presence of a thermal power station.

Resolution Resolution of a sensor system may be defined as its capability to discriminate two closely spaced objects from each other. It may be high, medium or low. Small features can be identified from high-resolution imagery. For example cadastral-level or infrastructure mapping needs high-resolution imagery where individual plots or houses can be identified. Regional-level mapping requires comparatively low-resolution imagery.

7.5 Visual Image Interpretation

This is defined as the act of examining an image to identify objects or phenomena and judge their significance. It is based on the interpreter’s ability to extract information visually with the help of various characteristics present in the image, known as the elements of image interpretation (cf. Chap. 4). With the help of one element or a set of elements, the interpreter prepares an interpretation key, a reference that provides the logical rules for identifying features or objects in the image. The quality of the interpretation depends on the quality of the image: its clarity, tonal or colour contrast and sharpness, which enable one object to be distinguished from another. Campbell (1978) defined five categories of image interpretation strategies: field observation, direct recognition, interpretation by inference, probabilistic interpretation and deterministic interpretation. Visual interpretation involves a sequence of activities including detection, recognition and identification, analysis, deduction, classification, idealisation and accuracy determination. Detection involves selectively picking out objects that are directly visible. Recognition and identification involve naming objects. Analysis involves trying to detect their spatial order. Deduction is rather more complex and involves the principle of convergence of evidence to predict the occurrence of certain relationships in the aerial photographs. Classification involves arranging the objects and elements that have been identified into an orderly system. Idealisation uses lines drawn to summarise the spatial distribution of objects. The final stage is accuracy assessment to validate the classification (Curran 1985). Visual interpretation has certain disadvantages; for example, it requires intensive labour for the delineation and evaluation of each theme. Our eyes cannot discriminate features with poor tonal contrast, with the result that the full spectral information is not exploited by the interpreter. Its outputs are also more difficult to incorporate into a GIS environment for further analysis than those of digital classification.

7.6 Digital Image Classification

There are basically two methods of digital image classification, supervised and unsupervised, although hybrid classification combines both. The ultimate aim of digital image classification is to increase the accuracy of the classified image. It is therefore important to select appropriate satellite images and spectral bands for the classification process, because this helps in collecting the training samples and reduces computer processing time. Highly correlated spectral bands increase computing time rather than classification accuracy (Moik 1980). It is preferable to determine the degree of inter-band correlation and to use only wavelength bands that are poorly correlated with each other (Curran 1985).

Supervised Classification This method is used when the interpreter has a priori knowledge of the image area. To obtain meaningful and accurate image classifications, the environmental scientist needs to take the computer operator’s seat and interact with the image data by supervising the classification sequence (Schmidt 1975). A few steps should be followed when applying supervised classification: selection of training samples, evaluation of the selected samples and choice of an appropriate classification algorithm.

Training samples These are sets of pixels selected to represent individual land-use/cover (feature) classes. The samples must be pure and representative of the particular class. The aim of training is to obtain sets of spectral data that can be used to determine decision rules for the classification of each pixel in the whole image data set (Merchant 1982). To a large extent, the ability to perform an accurate classification of a given multispectral image is determined by the extent of overlap between class signatures.

Sample evaluation Accurate classification of an image depends heavily on the training samples, which must therefore be evaluated, for example by signature separability analysis and a contingency matrix. Separability analysis can be performed on the training samples to estimate the expected classification error for various feature and band combinations (Table 3.11). Separability is a statistical measure of distance between two signatures. It can be calculated for any combination of bands used in the classification, enabling bands that do not contribute to the classification results to be ruled out (ERDAS Field Guide 2005). The contingency matrix assesses the purity of the sample pixels of each class; if the sample pixels are pure, the classification results will be more accurate (Table 3.12).

Table 3.11 Report of signature separability
Table 3.12 Contingency matrix (error matrix)
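Common separability measures include transformed divergence and the Jeffries–Matusita distance. Below is a minimal sketch of the Jeffries–Matusita distance between two Gaussian class signatures; the two-band ‘water’ and ‘forest’ signatures are hypothetical values standing in for statistics estimated from training samples.

```python
import numpy as np

def jeffries_matusita(mean1, cov1, mean2, cov2):
    """Jeffries-Matusita distance between two Gaussian class signatures.

    Ranges from 0 (identical signatures) to 2 (fully separable); values
    close to 2 suggest the two training classes can be distinguished.
    """
    m = np.asarray(mean1, float) - np.asarray(mean2, float)
    cov1 = np.asarray(cov1, float)
    cov2 = np.asarray(cov2, float)
    c = (cov1 + cov2) / 2.0
    # Bhattacharyya distance between the two normal distributions
    b = (m @ np.linalg.inv(c) @ m) / 8.0 + 0.5 * np.log(
        np.linalg.det(c) / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2))
    )
    return 2.0 * (1.0 - np.exp(-b))

# Hypothetical class signatures in two bands (e.g. red and near-infrared).
water = (np.array([20.0, 10.0]), np.diag([4.0, 3.0]))
forest = (np.array([30.0, 90.0]), np.diag([6.0, 25.0]))
print(round(jeffries_matusita(*water, *forest), 3))  # close to 2 -> well separated
```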

Classification algorithms There are various algorithms for assigning an unknown pixel to one of the known classes. The most important and frequently used are the minimum distance to mean, parallelepiped and maximum likelihood classifiers.

The minimum distance to mean algorithm is a very simple and fast way to classify an image: the mean digital number (DN) of each training sample is calculated, and every unknown pixel is allotted to the class whose mean DN is nearest. A notable feature of this algorithm is that no pixel remains unclassified, because every pixel is spectrally closer to one sample mean than to the others; if this is undesirable, an upper distance limit can be set so that pixels too far from all class means are left unclassified. Another limitation is that the classifier does not consider the variability of the sample data, which can cause misclassification when classes differ in spectral spread: a pixel belonging to a highly variable class may lie closer to the mean of a more compact class.
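A minimal NumPy sketch of the minimum-distance-to-mean rule follows, including the optional upper distance limit described above; the class means, pixel values and threshold are illustrative, not taken from any particular image.

```python
import numpy as np

def minimum_distance_classify(pixels, class_means, max_distance=None):
    """Assign each pixel to the class with the nearest mean spectral value.

    `pixels` has shape (n_pixels, n_bands); `class_means` has shape
    (n_classes, n_bands). Pixels farther than `max_distance` from every
    class mean are labelled -1 (unclassified).
    """
    # Euclidean distance from every pixel to every class mean
    dists = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    labels = np.argmin(dists, axis=1)
    if max_distance is not None:
        labels[dists.min(axis=1) > max_distance] = -1
    return labels

# Illustrative class means in two bands and a few pixels to classify.
means = np.array([[30.0, 90.0],    # class 0: vegetation
                  [60.0, 40.0]])   # class 1: bare soil
pixels = np.array([[32.0, 85.0], [58.0, 44.0], [200.0, 200.0]])
print(minimum_distance_classify(pixels, means, max_distance=50.0))  # [0, 1, -1]
```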

The parallelepiped or box classifier creates a box around each class in the training sample, and each pixel in the data set is then assigned to the box into which it falls. It is called a parallelepiped because the opposite sides of each box are parallel. The box is constructed from the range of values in the training samples, or from the mean and standard deviation of each class; the range is defined by the lowest and highest DN values in each band and appears as a rectangle in a two-band scatter plot. If the DN values of a pixel do not fall within any box, the pixel is left unclassified. The disadvantage of this algorithm is that the boxes may overlap, so a pixel may fall into two or more parallelepipeds. In such cases there are two options: the overlapping pixel is classified according to the order in which the training samples were taken, or it is classified by a parametric rule; if the pixel cannot satisfy the parametric rule it is classified as unknown.
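A minimal sketch of the box rule under the range-based definition given above; the band minima and maxima are hypothetical stand-ins for training-sample ranges, and overlaps are resolved here simply by the order of the classes, one of the tie-break options just mentioned.

```python
import numpy as np

def parallelepiped_classify(pixels, band_mins, band_maxs):
    """Box (parallelepiped) classifier.

    `band_mins` and `band_maxs` have shape (n_classes, n_bands) and define
    one box per class from the training-sample ranges. A pixel that falls
    in no box is labelled -1 (unclassified); a pixel in several overlapping
    boxes is given to the first matching class.
    """
    labels = np.full(len(pixels), -1)
    for cls, (lo, hi) in enumerate(zip(band_mins, band_maxs)):
        inside = np.all((pixels >= lo) & (pixels <= hi), axis=1)
        labels[inside & (labels == -1)] = cls
    return labels

# Hypothetical two-band ranges derived from training samples.
mins = np.array([[20.0, 70.0], [50.0, 30.0]])
maxs = np.array([[40.0, 110.0], [70.0, 55.0]])
pixels = np.array([[32.0, 85.0], [58.0, 44.0], [10.0, 10.0]])
print(parallelepiped_classify(pixels, mins, maxs))  # [0, 1, -1]
```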

The maximum likelihood classifier considers not only the cluster centre but also its shape, size and orientation (Janssen and Huurneman 2001). It calculates the mean, variance and correlation (covariance) for each class of training samples, on the usually valid assumption that the data for each class are normally distributed (Castleman 1979). With this information the spread of pixels around each mean value can be described using a probability function, resulting in bell-shaped surfaces called probability density functions, one for each spectral category. The classifier delineates ellipsoidal equiprobability contours in the scatter diagram (Lillesand and Kiefer 2000). After the probability for each category has been evaluated, the pixel is assigned to the most likely class, i.e. the one with the highest probability value. This algorithm is generally the most accurate of the three, but because the result relies on extensive statistical computation it also takes the longest to compute.
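A minimal sketch of a Gaussian maximum likelihood decision rule, assuming the class means and covariance matrices have already been estimated from training samples; the signatures and pixel values shown are invented for illustration.

```python
import numpy as np

def maximum_likelihood_classify(pixels, means, covs, priors=None):
    """Gaussian maximum likelihood classifier.

    For each class the log of the multivariate normal density (up to a
    constant) is evaluated at every pixel; the pixel is assigned to the
    class with the highest value. `means` is (n_classes, n_bands) and
    `covs` is (n_classes, n_bands, n_bands). Equal priors are assumed
    unless given.
    """
    n_classes = len(means)
    if priors is None:
        priors = np.full(n_classes, 1.0 / n_classes)
    scores = np.empty((len(pixels), n_classes))
    for k in range(n_classes):
        diff = pixels - means[k]
        inv = np.linalg.inv(covs[k])
        mahal = np.einsum('ij,jk,ik->i', diff, inv, diff)  # squared Mahalanobis distance
        scores[:, k] = (np.log(priors[k])
                        - 0.5 * np.log(np.linalg.det(covs[k]))
                        - 0.5 * mahal)
    return np.argmax(scores, axis=1)

# Hypothetical signatures: class 0 has larger variance in the second band.
means = np.array([[30.0, 90.0], [60.0, 40.0]])
covs = np.array([np.diag([6.0, 25.0]), np.diag([5.0, 5.0])])
pixels = np.array([[35.0, 80.0], [58.0, 44.0]])
print(maximum_likelihood_classify(pixels, means, covs))  # [0, 1]
```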

Post-classification smoothing A majority (low-pass) filter is applied to remove random noise from the classified image: the central pixel value in a moving window is replaced by the most frequently occurring value in that window.
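A minimal sketch of such a majority filter, assuming the classified image is a 2-D NumPy array of integer class labels.

```python
import numpy as np

def majority_filter(classified, size=3):
    """Post-classification smoothing with a moving-window majority (mode) filter.

    Each pixel of the classified image is replaced by the most frequently
    occurring class label inside the size x size window centred on it;
    the image borders are handled by repeating the edge values.
    """
    pad = size // 2
    padded = np.pad(classified, pad, mode='edge')
    out = np.empty_like(classified)
    rows, cols = classified.shape
    for i in range(rows):
        for j in range(cols):
            window = padded[i:i + size, j:j + size]
            values, counts = np.unique(window, return_counts=True)
            out[i, j] = values[np.argmax(counts)]
    return out

# A classified patch with a single isolated 'noise' pixel that the filter removes.
img = np.array([[1, 1, 1, 2],
                [1, 3, 1, 2],
                [1, 1, 2, 2],
                [1, 1, 2, 2]])
print(majority_filter(img))
```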

Unsupervised Classification Unlike supervised classification, unsupervised classification does not use training samples to classify the image; an algorithm instead determines the internal structure of the data. Classes are based on clusters, or natural groupings of data values, known as spectral classes. A cluster is a distinct group of pixels in a localised region of the multidimensional data space. These classes or clusters are initially unlabelled and need to be identified with the help of the interpreter. The main advantage of this approach is that it can reveal distinct spectral classes present in the image that might not be captured by supervised training samples. K-means is one of the most widely used clustering algorithms. In this algorithm the system arbitrarily locates a number of cluster centres (the number of classes required by the user), known as mean vectors, in the multidimensional image data. Each pixel of the image is then assigned to the class whose mean vector is closest, forming the first set of decision boundaries between classes. A new set of mean vectors is calculated from the resulting classes and the pixels are reassigned accordingly. With each iteration the K-means centres tend to gravitate towards concentrations of data, and the iterations continue until there are no significant changes in pixel assignments.
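A minimal sketch of the K-means procedure just described, operating on pixels arranged as an (n_pixels, n_bands) array; the synthetic pixel values and the stopping criterion (no change in assignments) are illustrative choices.

```python
import numpy as np

def kmeans_cluster(pixels, n_clusters, max_iter=100, seed=0):
    """Simple K-means clustering of image pixels in spectral feature space.

    Cluster centres (mean vectors) are initialised from randomly chosen
    pixels and iteratively updated until the assignments stop changing;
    the returned labels are spectral classes that the interpreter must
    still name.
    """
    rng = np.random.default_rng(seed)
    centres = pixels[rng.choice(len(pixels), n_clusters, replace=False)].astype(float)
    labels = np.full(len(pixels), -1, dtype=int)
    for _ in range(max_iter):
        dists = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        new_labels = np.argmin(dists, axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
        for k in range(n_clusters):
            if np.any(labels == k):
                centres[k] = pixels[labels == k].mean(axis=0)
    return labels, centres

# Synthetic two-band pixels drawn from three spectral groupings.
rng = np.random.default_rng(1)
pixels = np.vstack([rng.normal(m, 3, (100, 2)) for m in ([20, 80], [60, 40], [90, 90])])
labels, centres = kmeans_cluster(pixels, n_clusters=3)
print(np.round(centres, 1))   # recovered cluster mean vectors
```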

Hybrid The main objective of image classification is to produce an accurate thematic map from the image. Sometimes neither supervised nor unsupervised classification alone produces the desired level of accuracy: in supervised classification the user may not be able to delineate a particular signature, or the class signatures may not be statistically separable in feature space, while unsupervised classification considers only the internal structure of the data and sometimes produces classes that are not meaningful to the user. A combined approach can overcome the drawbacks of both and produce satisfactory results. In this approach an unsupervised classification is first applied with roughly five to ten times more clusters than the number of classes ultimately required. These clusters are then evaluated against field data, and some are combined or subdivided as a result. Finally, the evaluated classes are used for supervised classification.

7.7 Accuracy Assessment

Once an image has been classified, the accuracy of the classification needs to be assessed before it can be taken to represent true information about an area. It is not possible to verify every pixel of the image, so sampling is used; stratified random sampling is one of the best methods of collecting samples from the classified land-use/cover map for accuracy assessment. The collected samples are verified against fieldwork or other ancillary data, known as reference data. This overall verification of the sample data is known as ground truthing: the acquisition of knowledge about the study area from primary or secondary sources such as fieldwork, analysis of previous images or photographs, personal experience etc. Ground-truth data are considered the most accurate data available about the study area, and should be collected at the same time as the remotely sensed data so that the two correspond as closely as possible (Star and Estes 1990). Both data sets, i.e. the field-verified reference data and the sample data from the classified image, are entered in the error matrix (Table 3.13), also known as a contingency table or confusion matrix. It is a two-dimensional matrix in which the columns represent the reference data and the rows represent the classified data. The correctly classified, field-verified samples of each class lie on the diagonal of the matrix, and all off-diagonal samples are errors, either of omission or of commission. If a field-verified sample does not appear in the correct classified class, it is an error of omission: for example, a sample verified as agriculture that has been classified into another class. A sample classified into a class but found in field verification to belong to another class is an error of commission: for example, a sample classified as agriculture that is verified as belonging to another class.

Table 3.13 Accuracy assessment by error matrix

In the classified image, 505 samples are classified as agriculture. Of these, 480 are confirmed as agriculture by field verification, which means that 25 samples (505 − 480 = 25) are incorrect. These 25 samples (20 actually forest and 5 actually wasteland) are errors of commission, because they have been placed in the wrong class (Table 3.14). The other aspect of the error matrix is the error of omission: 540 samples are verified as agriculture in the field, but only 480 of them are correctly classified as agriculture; the remaining 60 samples (50 classified as forest and 10 as fallow land) are errors of omission.

Table 3.14 Assessment of omission and commission errors

The accuracy of individual categories is calculated by dividing the number of correctly classified pixels in each category by the total number of pixels in the corresponding column or row, giving the producers’ and users’ accuracy respectively (Table 3.15). The producers’ accuracy is the number of correctly classified pixels in a category divided by the total number of verified (reference) pixels in that category. The users’ accuracy is the number of correctly classified pixels divided by the total number of pixels classified into that category. The overall accuracy is the total number of correctly classified pixels divided by the total number of verified pixels.

Table 3.15 Assessment of producers, users and overall accuracy
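A minimal sketch of these calculations from an error matrix, following the convention above that columns hold the reference data and rows hold the classified data. The agriculture row and column reproduce the figures of the worked example; the remaining entries are invented purely for illustration.

```python
import numpy as np

def accuracy_report(error_matrix):
    """Producers', users' and overall accuracy from an error matrix.

    Columns hold field-verified (reference) samples and rows hold
    classified samples, so column totals give producers' accuracy and
    row totals give users' accuracy.
    """
    error_matrix = np.asarray(error_matrix, dtype=float)
    correct = np.diag(error_matrix)
    producers = correct / error_matrix.sum(axis=0)   # per reference (column) total
    users = correct / error_matrix.sum(axis=1)       # per classified (row) total
    overall = correct.sum() / error_matrix.sum()
    return producers, users, overall

# Classes in order: agriculture, forest, wasteland, fallow land.
# Agriculture row total = 505 (20 + 5 commission); column total = 540 (50 + 10 omission).
matrix = [[480,  20,   5,   0],
          [ 50, 300,  10,   5],
          [  0,  15, 200,  10],
          [ 10,   5,   5, 150]]
producers, users, overall = accuracy_report(matrix)
print(np.round(producers, 3))   # agriculture: 480/540 ~ 0.889
print(np.round(users, 3))       # agriculture: 480/505 ~ 0.950
print(round(overall, 3))
```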

8 Remotely Sensed Data and GIS

All remotely sensed data in geo-corrected form, and the classified image, may be used as a base map and a thematic map respectively. The geo-corrected or geo-referenced image is also used as a base map for digitising different layers from imagery or aerial photographs. Multi-temporal satellite imagery can be used for overlay analysis and map updating. Sometimes a classified raster image is converted into vector format for further geographical database creation and analysis.

9 Conclusion

Remote sensing is a technique for acquiring data remotely and requires a multidisciplinary approach. Several mechanisms are involved, from the source of energy to the supply of the end product to the end user. Remote sensing can be conducted from three platforms, groundborne, airborne and spaceborne, and its product may be a photograph or digital imagery. Electromagnetic energy is the medium through which different sensors acquire information about the Earth’s surface. The limited portions of the electromagnetic spectrum that pass through the atmosphere are known as atmospheric windows; within these, the energy interacts with the Earth’s features and is reflected back to the upper atmosphere, where a deployed sensor uses it to form an image. The basic materials of the Earth’s surface are soil, vegetation and water, each of which interacts differently with electromagnetic energy, as shown by its spectral reflectance curve. Various Earth-resource observation satellites are in orbit; Landsat is one of the pioneer satellite observation systems, followed by European and Indian satellites launched to obtain information about the Earth’s surface. The imagery received from a satellite is not directly usable by the user community, so digital image processing is applied for geometric and radiometric correction, as well as enhancement, to increase the interpretability of the imagery before digital classification or visual interpretation with the help of the elements of image interpretation and interpretation keys. The classified or interpreted satellite imagery then needs an accuracy assessment to confirm its correctness, after which it can be used as a thematic map. Classified satellite imagery can also be used in a GIS environment to create a database as a base layer. Thus remote sensing provides both base information and thematic information to integrate with a GIS environment.

Questions

  1. What is meant by remote sensing? Discuss its process.

  2. Discuss the different types of platform in remote sensing.

  3. What is the difference between electromagnetic radiation and the electromagnetic spectrum?

  4. Discuss the electromagnetic spectrum and atmospheric window.

  5. Describe the different Earth-resource satellites orbiting the planet.

  6. What is meant by digital image processing? Discuss the various processes to achieve the end-user product.