
15.1 Introduction

Compressive can be understood as compressed and sensing as sampling. As technology advances, the idea of “doing more with less” becomes essential. To achieve this objective, researchers have spent more than a decade improving and reinventing techniques for digital signal processing, image and video processing, speech processing, sensors, digital data acquisition, digital communication systems, and related areas. The International Data Corporation reports that the amount of data generated worldwide was nearly 1.8 trillion gigabytes in 2011 and 2.8 trillion in 2012, and estimates that by 2020 it will reach approximately 40 trillion gigabytes, or 40 zettabytes. Data are thus growing very fast every year, whereas memory storage grows more slowly, so there is a large gap between data production and data storage. This gap between data production on the one hand and storage and computational power on the other is widening exponentially, and it affects communication rates as well. With the introduction of compressive sensing in 2004 by Emmanuel Candès, Terence Tao, and David Donoho, new techniques for handling sensor data have grown rapidly. The data deluge is a major problem today, and compressive sensing offers a promising solution to it.

CS mainly relies on two principles: sparsity and incoherence. Sparsity pertains to the signals of interest, and incoherence to the sensing modality.

Sparsity enables a signal’s information to be stored in a few samples. It is the inherent property on which CS relies for accurate reconstruction, and by virtue of this property CS can be applied in diverse fields. When compressive sensing is applied to digital images, it alleviates the problems of huge memory storage, long processing time, and high computational cost.

15.2 Historical Background

In 1795, Prony [1] developed a method, now known as Prony’s analysis, to extract useful information from a uniformly sampled signal by building a series of sinusoids or damped complex exponentials. It is used for parameter estimation of the signal’s frequency components, such as the frequency, amplitude, phase, and damping of each component. The method represents the sampled data as a linear combination of exponentials and was devised because it can estimate these frequencies directly [1]. It also handles the underlying nonlinear estimation problem by solving systems of linear equations. To solve for the different exponential components, the squared error of the approximation is computed and minimized. Ideas related to CS appeared in seismology in the 1970s, when Claerbout and Muir proposed absolute-value error criteria in place of least-squares data modeling. An example of the resulting stability is averaging by the median rather than the arithmetic mean.

One of the most famous results in signal processing is the Nyquist–Shannon sampling theorem [2]. It states that a signal or image can be represented and reconstructed exactly if the sampling frequency \(f_{s}\) is greater than or equal to twice the highest frequency \(f_{m}\) present in the signal: \(f_{s} \ge 2f_{m}\).
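To make the criterion concrete, the following minimal sketch (using NumPy; the sampling rate and tone frequencies are illustrative assumptions, not values from the text) shows that a tone above half the sampling rate produces exactly the same samples as a lower-frequency alias, so the two cannot be distinguished after sampling.

```python
import numpy as np

f_s = 8.0            # sampling rate (Hz), assumed for illustration
f1 = 3.0             # a tone that satisfies f_s >= 2*f_m
t = np.arange(16) / f_s

s_low = np.cos(2 * np.pi * f1 * t)
s_high = np.cos(2 * np.pi * (f_s - f1) * t)   # 5 Hz tone, above f_s / 2

# The 5 Hz tone aliases onto the 3 Hz tone: the samples are identical.
print(np.allclose(s_low, s_high))             # True
```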

Candès and Donoho introduced an emerging theory that goes by the name of “compressive sampling” or “compressed sensing,” which suggests that this conventional wisdom is, in many cases, inaccurate. They derived important results on the minimum amount of data needed to reconstruct an image, even when that amount would be deemed insufficient by the Nyquist–Shannon criterion [3, 4]. They explain that signals and images can be reconstructed from far fewer data than conventional practice, which follows the Shannon–Nyquist density sampling theory, would require. Compressive sensing builds upon the fundamental fact that many signals can be represented using only a few nonzero coefficients in a suitable basis or dictionary, and nonlinear optimization can then recover such signals from very few measurements. The theoretical foundation of the digital revolution is the pioneering work of Kotelnikov, Nyquist, Shannon, and Whittaker on sampling continuous-time band-limited signals [2, 5–7]. Their results demonstrate that signals, images, videos, and other data can be exactly recovered from a set of uniformly spaced samples taken at the so-called Nyquist rate of twice the highest frequency present in the signal of interest. Following this discovery, much of signal processing has moved from the analog to the digital domain. Digitization has enabled sensing and processing systems that are more robust, cheaper, and consequently more widely used than their analog counterparts.

Candès and Donoho stated that it is possible to reconstruct images or signals accurately from a number of samples which is far smaller than the desired resolution of the image/signal, e.g., the number of pixels in the image. The field of CS grew out of the work of Candès, Romberg, Tao and of Donoho, who showed that a finite-dimensional signal having a sparse or compressible representation can be recovered from a smaller set of linear, non-adaptive measurements [3, 4, 8–10].

In the 1990s George, Gorodnitsky, and Rao studied sparsity in biomagnetic imaging and other contexts [11–13]. Simultaneously, Bresler, Feng, and Venkataramani proposed a sampling scheme for acquiring certain classes of signals consisting of k components with nonzero bandwidth (as opposed to pure sinusoids) under restrictions on the possible spectral supports, although exact recovery was not guaranteed in general [14–16]. In the early 2000s Blu, Marziliano, and Vetterli developed sampling methods for certain classes of parametric signals that are governed by uniform sampling [10].

15.3 Methodology

In compressive sensing, super-resolution signals can be obtained with the help of only a few sensors. A new data acquisition protocol therefore becomes possible, one that converts an analog signal into a digital signal with fewer sensors. This sampling theory provides fundamental methods for sampling and compressing data concurrently.

Figure 15.1a shows bulky data acquisition, and Fig. 15.1b shows that the dispensable data can be rejected. A signal that has some structure can be compressed without perceptible loss. This work can be carried out in the following steps:

  a. Obtaining the whole signal.

  b. Computing the transform coefficients.

  c. Encoding the largest coefficients and discarding all the others.

Fig. 15.1

a Raw: 15 MB b JPEG: 150 KB. Source First EU-US Frontiers of Engineering Symposium, Cambridge, September 2010

This process of massive data acquisition followed by compression is extremely uneconomical (e.g., JPEG 2000) [3, 4, 8]. Instead of acquiring the data and then compressing it, one can acquire data that is already in compressed form, so nothing needs to be discarded [9]. Compressive sampling suggests ways to economically translate analog data into compressed digital form [17, 18].

Some of the key insights underlying this new theory are given below.

Sparsity From a general viewpoint, sparsity and compressibility have played and continue to play a fundamental role in many fields of science. Sparsity leads to efficient estimation; for example, the quality of estimation by thresholding or shrinkage algorithms depends on the sparsity of the signal we wish to estimate. Sparsity leads to efficient compression; for example, the precision of a transform coder depends on the sparsity of the signal we wish to encode [19].

Mathematically, we have a vector \(f \in R^{n}\) (such as the n-pixel image in Fig. 15.2) which we expand in an orthonormal basis (such as a wavelet basis)

Fig. 15.2

a Original megapixel image with pixel values in the range [0, 255]. b Its wavelet transform coefficients. c The reconstruction obtained by zeroing out all the coefficients except the 25,000 largest. Source Emmanuel Candès et al., IEEE Signal Processing Magazine, page 23, March 2008

\(\psi = [\psi_{1} ,\psi_{2} , \ldots ,\psi_{n} ]\) as follows:

$$f\left( t \right) = \sum\limits_{i = 1}^{n} {x_{i} \psi_{i} (t)}$$
(15.1)

where \(x_{i}\) is the coefficient sequence of f, \(x_{i} = \left\langle {f,\psi_{i} } \right\rangle\). It will be convenient to express f as \(\psi x\) (where \(\psi\) is the n × n matrix with \(\psi_{1} , \ldots ,\psi_{n}\) as columns). The implication of sparsity is now clear: when a signal has a sparse expansion, one can discard the small coefficients without much perceptual loss.
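A minimal numerical sketch of Eq. (15.1) and of discarding the small coefficients is given below. It uses NumPy, a random orthonormal matrix standing in for a wavelet basis, and a synthetic coefficient sequence with decaying magnitudes; all of these choices are illustrative assumptions, not part of the original example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 512, 25

# Random orthonormal basis Psi (columns psi_1, ..., psi_n); a stand-in for a wavelet basis.
Psi, _ = np.linalg.qr(rng.standard_normal((n, n)))

# Compressible coefficient sequence: magnitudes decay like i^(-1.5).
x = rng.choice([-1.0, 1.0], n) * np.arange(1, n + 1) ** -1.5
f = Psi @ x                                   # Eq. (15.1): f = sum_i x_i psi_i

# Analysis side: x_i = <f, psi_i>, i.e., x = Psi^T f by orthonormality.
assert np.allclose(Psi.T @ f, x)

# "Compression": keep only the k largest-magnitude coefficients and reconstruct.
x_k = np.where(np.abs(x) >= np.sort(np.abs(x))[-k], x, 0.0)
rel_err = np.linalg.norm(Psi @ x_k - f) / np.linalg.norm(f)
print(f"kept {k}/{n} coefficients, relative L2 error = {rel_err:.3e}")
```

Because the coefficients decay quickly, keeping only 25 of the 512 coefficients already reconstructs f with a relative error of a few percent, which is exactly the “discard the small coefficients without much perceptual loss” argument above.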

A signal can be expanded as a superposition of spikes, sinusoids, B-splines, wavelets, curvelets, shearlets, alpha molecules, and so on.

In Fig. 15.2, the original megapixel image with pixel values in the range [0, 255] is shown, and its wavelet transform coefficients are arranged in random order for enhanced visibility. Relatively few wavelet coefficients capture most of the signal energy; many such images are highly compressible. Panel (c) shows the reconstruction obtained by zeroing out all the coefficients in the wavelet expansion but the 25,000 largest (pixel values are thresholded to the range [0, 255]). The difference from the original picture is hardly noticeable. As described in “Undersampling and Sparse Signal Recovery” by Candès [4], this image can be perfectly recovered from just 96,000 incoherent measurements.
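As a back-of-the-envelope check (the proportionality constant is omitted, so this is only an order-of-magnitude illustration and not a figure from the cited paper), the common rule of thumb of roughly k·log(n/k) measurements for a k-sparse signal of length n lands close to the number quoted above:

```python
import numpy as np

n, k = 10**6, 25_000              # megapixel image, 25,000 retained wavelet coefficients
m = k * np.log(n / k)             # rule-of-thumb measurement count (constant omitted)
print(round(m))                   # ~92,000, the same ballpark as the 96,000 quoted above
```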

Incoherence extends the duality between time and frequency and expresses the idea that objects having a sparse representation in ψ must be spread out in the domain in which they are acquired, just as a Dirac or a spike in the time domain is spread out in the frequency domain. Put differently, incoherence says that unlike the signal of interest, the sampling/sensing waveforms have an extremely dense representation in ψ. The observation is that one can design efficient sensing or sampling protocols that capture the useful information content embedded in a sparse signal and condense it into a small amount of data. These protocols are nonadaptive and simply require correlating the signal with a small number of fixed waveforms that are incoherent with the sparse basis. The most remarkable thing about these sampling protocols is that they allow a sensor to very efficiently capture the information in a sparse signal without trying to comprehend that signal. Further, there is a way to use numerical optimization to reconstruct the full-length signal from the small amount of collected data. In other words, CS is a very simple and efficient signal acquisition protocol which samples—in a signal independent fashion—at a low rate and later uses computational power for reconstruction from what appears to be an incomplete set of measurements [4, 13].
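The sketch below illustrates this acquisition-plus-optimization pipeline under simplifying assumptions: a small synthetic k-sparse signal, a random Gaussian sensing matrix, and recovery by basis pursuit (ℓ1 minimization) recast as a linear program solved with SciPy. The problem sizes and the solver are illustrative choices, not prescribed by the text.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, k = 128, 40, 5                       # ambient dimension, measurements, sparsity (assumed)

# k-sparse signal and a random Gaussian sensing matrix (incoherent with the spike basis).
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x_true                           # m << n non-adaptive measurements

# Basis pursuit: min ||x||_1 s.t. Phi x = y, written as an LP with x = u - v, u, v >= 0.
c = np.ones(2 * n)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

With m well below n (here 40 measurements for a length-128 signal), the ℓ1 program still recovers the 5-sparse signal essentially exactly, which is the point of the paragraph above.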

To address the logistical and computational challenges involved in dealing with such high-dimensional data, we often depend on compression, which aims at finding the most concise representation of a signal that achieves a target level of acceptable distortion. One of the most popular techniques for signal compression is known as transform coding, and it typically relies on finding a basis or frame that provides sparse or compressible representations for signals in a class of interest [16, 20–26]. Compressed sensing (CS) has emerged as a new framework for signal acquisition and sensor design. CS enables a potentially large reduction in the sampling and computation costs for sensing signals that have a sparse or compressible representation. While the Nyquist–Shannon sampling theorem states that a certain minimum number of samples is required in order to perfectly capture an arbitrary band-limited signal, when the signal is sparse in a known basis we can vastly reduce the number of measurements that need to be stored. Consequently, when sensing sparse signals we might be able to do better than suggested by classical results.

The Restricted Isometry Property (RIP) is a well-known tool for analyzing the performance of CS in acquisition and reconstruction. Acquisition in CS resembles conventional sampling, but its sensing paradigm makes the difference.

If \(X\) is the signal to be sensed, then

$$Y = \varPhi X$$
(15.2)

where \(X \in R^{n}\), \(\varPhi\) is an m × n measurement matrix, and \(Y \in R^{m}\) is the measurement vector. In the conventional sensing paradigm, m must be equal to n, whereas in CS m can be far less than n.
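Computing the true RIP constant of a matrix is intractable, so the following sketch only probes the near-isometry empirically: it draws random k-sparse vectors and records how far \(\left\| {\varPhi x} \right\|_{2}^{2} /\left\| x \right\|_{2}^{2}\) strays from 1 for a Gaussian \(\varPhi\). The dimensions and the Monte Carlo approach are illustrative assumptions, not part of the text.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k, trials = 256, 64, 8, 2000

# Gaussian measurement matrix scaled so that E[||Phi x||^2] = ||x||^2.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

# Empirical proxy for the restricted isometry constant over random k-sparse vectors.
worst = 0.0
for _ in range(trials):
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    ratio = np.linalg.norm(Phi @ x) ** 2 / np.linalg.norm(x) ** 2
    worst = max(worst, abs(ratio - 1.0))
print(f"largest observed deviation of ||Phi x||^2 / ||x||^2 from 1: {worst:.3f}")
```

Even with m = 64 rows for n = 256 columns, the squared norm of every sampled 8-sparse vector is preserved to within a modest deviation, which is the behavior the RIP formalizes.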

Some advances using CS in communication, signal, and video processing are discussed below.

1. Block compressed sensing

Gan [27], in block compressed sensing for natural images, proposed image acquisition in a block-by-block manner. The original image is divided into small blocks (typically B × B) and each block is sampled independently, which leads to faster acquisition. Block CS was recommended to overcome the limitation of conventional imaging systems, which sample the original image into digital format at a rate that is impractical for devices with low power and low image resolution. For an \(I_{r} \times I_{c}\) image with \(N = I_{r} I_{c}\) pixels and n measurements, the image is divided into small blocks that are sampled independently, and block CS can be implemented as a random 2D filter bank [27] (Fig. 15.3).

Fig. 15.3

Filter bank implementation of block CS

Recovery based on the pseudoinverse is sensitive to noise. In the filter bank implementation of block CS, the input image passes through each FIR filter and a rectangular decimation matrix M, which gives the CS samples as output.

A nonlinear signal reconstruction method is used to improve the quality of the reconstructed images. It removes Gaussian noise by hard thresholding, which can be described by three steps: the input signal is transformed to yield coefficients, the largest coefficients are kept while the rest are set to zero, and the inverse transform is then applied to obtain the reconstructed signal. The Wiener filter exploits the statistical properties of the image and can be used to restore images in the presence of both blur and noise; the mean squared error (MSE) is minimized using the orthogonality condition. In minimizing the MSE, the Wiener filter tends to smooth the image more than the human eye would prefer, because the MSE weighs all errors equally regardless of their location in the image; hard thresholding is then applied to cut down the Gaussian noise. Frame expansions are used to recover natural images, and two frame expansions give better reconstruction results: the undecimated wavelet transform (UWT), which provides a full description of the image, and the oversampled lapped transform (OLT), which, as the name suggests, represents structures by overlapped block-by-block processing [27].
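A minimal sketch of the block-by-block acquisition step is given below (it does not implement Gan’s filter bank or reconstruction pipeline). The block size, per-block measurement count, and random Gaussian block operator are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
B, m_B = 16, 64                              # block size and measurements per block (assumed)
img = rng.random((128, 128))                 # stand-in for an I_r x I_c image

# One m_B x B^2 random measurement matrix, reused for every B x B block, as in block CS;
# storage is m_B * B^2 entries instead of a full m x N matrix for the whole image.
Phi_B = rng.standard_normal((m_B, B * B)) / np.sqrt(m_B)

measurements = []
for r in range(0, img.shape[0], B):
    for c in range(0, img.shape[1], B):
        block = img[r:r + B, c:c + B].reshape(-1)   # vectorize the B x B block
        measurements.append(Phi_B @ block)          # y_i = Phi_B x_i
Y = np.stack(measurements)
print("per-block measurements:", Y.shape)           # (number_of_blocks, m_B)
```

Reusing the same small operator for every block is what makes the scheme fast and memory-friendly on low-power devices.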

2. Acquisition and reconstruction using FIR (Finite Impulse Response) filter

Tropp et al. [28] proposed a technique for acquiring and reconstructing signals using FIR filters. The main goal in compressed sensing is to develop a linear measurement operator \(\varPhi :R^{d} \to R^{n}\) and a nonlinear reconstruction algorithm \(A:R^{n} \to R^{d}\) (to recover sparse signals) [27].

In CSA, each m-sparse signal can be represented by 2m real numbers, where m < d, with m the number of nonzero (sparse) components, n the number of measurements, and d the length of the signal. The following equation shows that the reconstruction process is stable for a sparse signal s contaminated by measurement noise v:

$$\left\| {A(\varPhi s + v) - s} \right\|_{2} \le C\left\| v \right\|_{2}$$
(15.3)

The compressive sampling acquisition (CSA) functions are designed only for finite-length signals, and the reconstruction process requires too much space and time. The size of audio signals is reduced by sampling at the Nyquist rate (also used to determine the stability of feedback systems) before applying lossy compression.

The random filtering process was proposed to move beyond Shannon–Nyquist sampling: a compressed version of the digital signal is acquired directly. The approach is also applicable to analog signals and has had a large impact on analog-to-digital converters.

Random filters help acquire a compressed version of digital data through convolution of the signal followed by downsampling, and they lend themselves to implementation in analog hardware such as analog-to-digital converters. Random filtering works well as a measurement process for analog signals and captures all sparse and compressible signals, whereas in CSA analog signals can undergo only limited operations, namely (1) filtering, (2) modulation, and (3) sampling, and the measurement process is not causal.

In the random filtering process of compressed sensing, the signal x is measured by:

  (a) Convolution of the signal x with an FIR filter h having random taps, which is then downsampled to obtain the compressed data y (Fig. 15.4).

    Fig. 15.4

    a Signal acquisition using convolution. b Signal acquisition using FFT

  (b) Using the FFT/IFFT.

Random filters also accelerate the reconstruction and measurement algorithms, can trade longer filters for fewer measurements, are easily implementable in software and hardware, and are applicable to continuous-time signals [28].
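The following sketch illustrates both acquisition routes on a synthetic sparse signal: (a) time-domain convolution with a random-tap FIR filter followed by downsampling, and (b) an FFT-based variant (which realizes circular rather than linear convolution, fine for illustration). The signal length, filter length, and number of measurements are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
d, taps, m = 1024, 31, 128                  # signal length, filter length, measurements (assumed)

# Sparse test signal in the time domain.
x = np.zeros(d)
x[rng.choice(d, 10, replace=False)] = rng.standard_normal(10)

h = rng.choice([-1.0, 1.0], taps)           # FIR filter with random +/-1 taps

# (a) Time-domain convolution followed by downsampling by a factor of d // m.
y_conv = np.convolve(x, h)[:d:d // m]

# (b) Frequency-domain route: multiply FFTs, inverse-transform, then downsample.
y_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, d)).real[:d:d // m]

print(y_conv.shape, y_fft.shape)            # (128,) (128,): m measurements from d samples
```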

3. Temporal Compressive Sensing

Yuan et al. [29] introduced adaptive temporal compressive sensing for video. Video compressive sensing has been developed to capture high-speed videos at a low frame rate by means of temporal compression. The proposed method determines the temporal compression ratio \(N_{F}\) based on the motion of the scene being sensed (Fig. 15.5).

Fig. 15.5

Block matching of a P × P block in frame B with the best-matched block in frame A. Source Xin Yuan et al., ICIP 2013, IEEE [29]

They estimate the motion of objects within the scene by partitioning frame A (e.g., the previous frame) into P × P (pixel) blocks, taking a predefined M × M (pixel) search window, searching all P × P blocks within the M × M window in frame B (e.g., the current frame) around the selected block in frame A, and finally finding the best-matching block in the window according to some metric (e.g., mean squared error); this match is then used to compute the block motion.
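A minimal sketch of this exhaustive block-matching search, run on synthetic frames where frame B is simply a shifted copy of frame A so that the recovered motion can be checked, is given below. The block size P, window size M, and frame contents are illustrative assumptions.

```python
import numpy as np

def best_match(frame_a, frame_b, top, left, P=8, M=24):
    """Find the P x P block inside an M x M search window of frame_b that best matches
    (minimum MSE) the P x P block of frame_a anchored at (top, left)."""
    ref = frame_a[top:top + P, left:left + P]
    half = (M - P) // 2
    best_err, best_pos = np.inf, (top, left)
    for r in range(max(0, top - half), min(frame_b.shape[0] - P, top + half) + 1):
        for c in range(max(0, left - half), min(frame_b.shape[1] - P, left + half) + 1):
            err = np.mean((ref - frame_b[r:r + P, c:c + P]) ** 2)   # MSE metric from the text
            if err < best_err:
                best_err, best_pos = err, (r, c)
    return best_pos, best_err

# Toy frames: frame B is frame A shifted by (2, 3), so that motion should be recovered.
rng = np.random.default_rng(5)
A = rng.random((64, 64))
B = np.roll(A, shift=(2, 3), axis=(0, 1))
print(best_match(A, B, top=24, left=24))    # expect position (26, 27) with zero error
```

The per-block displacement returned by such a search is what the adaptive scheme uses to choose the temporal compression ratio for the next group of frames.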

15.4 Applications

  1. Data acquisition: Compressive sensing reduces the number of measurements. The single-pixel camera invented at Rice University has brought a drastic change to imaging systems.

  2. Medical imaging: CS is widely used in medical imaging, particularly in magnetic resonance imaging. MR images have sparsity properties in the Fourier, wavelet, curvelet, and shearlet domains. CS-based techniques make it easy to exploit this implicit sparsity and reduce the number of measurements without hampering image quality.

  3. Channel estimation: Compressed channel estimation uses nonlinear reconstruction algorithms and gives better results.

  4. Wireless sensor networks: A large number of sensors perform the task of data gathering in a wireless sensor network.

  5. Video scrambling: Block-based CS sampling is applied to quantized coefficients. It improves security and coding efficiency.

Other applications of compressed sensing include coding and information theory, machine learning, hyperspectral imaging, seismic imaging, cognitive radio networks, geophysical data analysis, computational biology, network traffic, remote sensing, radar analysis, robotics and control, A/D conversion, and many more.