
1 Introduction

Conventional digital signal processing is based on the Nyquist criterion, which states that any band-limited signal can be exactly reconstructed if it is sampled at a rate of at least twice the maximum signal frequency [1]. Most signal processing systems follow this criterion, and according to this theory a signal cannot be reconstructed if it is sampled at sub-Nyquist rates. Compressive sensing (CS) is a revolutionary signal processing technique that permits signal acquisition and reconstruction without this limit on the sampling frequency [2]. It reconstructs signals by means of numerical optimization algorithms.

In traditional communication, the data is first sampled and then compressed to reduce storage and transmission costs. Combining these two steps into one, i.e., compressing the signal at the time of sensing itself, leads to a new method called compressive sensing. This reduces the demand for high-speed data acquisition systems and also promotes efficient utilization of resources.

In radar imaging applications, higher frequencies are required for good resolution. Designing Analog-to-Digital Converters (ADCs) at such high sampling rates may not be possible with present-day ADC technology. Different beam-forming methods are used to extract radar images from a radar array. In order to achieve higher-resolution images, a large number of array elements and Ultra Wide Band (UWB) signals are required. This results in huge data collection and storage requirements and high computational complexity. As the processing of the data takes more time, the targets may change their positions, which causes blurred images. Compressive sensing is a promising technology for acquiring reliable, high-resolution radar images with fewer data samples and fewer computations.

In 2004, Emmanuel Candès et al. proposed a new method in which an original image was reconstructed from fewer data samples than the Nyquist criterion requires [3, 4]. Applying CS, researchers at Rice University developed the “single-pixel” camera. Since then, compressive sensing has seen many applications in industry.

Compressive sensing can be applied to signals that have a sparse representation. Most natural signals are sparse in one domain or another and are therefore suitable for compressive sensing. Currently, CS is applied in many fields such as image processing [5], radar signal processing [6], and communications [7]. In CS, a high-dimensional signal is mapped to a low-dimensional space using a measurement matrix. During the reconstruction process, the measurement matrix plays a vital role: the product of the measurement matrix and the basis matrix should satisfy the Restricted Isometry Property (RIP) [2]. A great deal of work is under way to design efficient measurement matrices that lower computational cost and storage requirements.

In the remainder of the paper, the compressive sensing framework is presented in Sect. 2. A Gaussian modulated sinusoidal pulse of 4 GHz frequency is used as the test signal for compressive sensing. Different sensing matrices are introduced in Sect. 3. Reconstruction quality and results are discussed in Sect. 4. The paper concludes by summarizing the significance of the different measurement matrices for compressive sensing.

2 Compressive Sensing

Compressive sensing is a new technology by which one can reconstruct the original signal from a smaller number of random measurements [2, 8]. The CS framework divides into three major stages: (i) sparse representation, (ii) compression, and (iii) reconstruction. The framework applies to a signal only if it is sparse. Most natural and man-made signals are sparse either in their native domain or in a transform domain. The sparse signal is compressed to a lower-dimensional signal using the measurement matrix; this process is called compression. The compressed measurements are then used to recover the original sparse signal with sparse recovery algorithms, which amounts to solving an underdetermined system of linear equations.

CS relies mainly on two principles: sparsity and incoherence. A signal is said to be sparse if most of its components are zero; in other words, the signal has only a few non-zero components in its representation. A signal may be sparse in its native domain or can be made sparse in a transform domain. Coherence is a quality measure of the measurement matrix: lower coherence leads to better performance of the recovery algorithm.

Most natural and man-made signals are compressible, i.e., they have compact representations when expressed in a suitable basis. Let x represent a signal of length N. If the signal \(x\in \mathcal {R}^N\) is assumed to have an S-sparse representation in the complete dictionary \(\varPsi \) (an \({N\,\times \,N}\) matrix), then x can be expressed as

$$\begin{aligned} x=\varPsi \alpha \end{aligned}$$
(1)

where \(\alpha \) is a sparse vector with S non-zero entries. If a signal y is acquired by taking M random measurements, each a linear combination of the entries of x, then it can be written as

$$\begin{aligned} y=\varPhi x=\varPhi \varPsi \alpha \end{aligned}$$
(2)

where \(\varPhi \) is a measurement matrix of dimension \({M\,\times \,N}\). The measurement matrix can be formed from random measurements, from different transformations, or from a combination of the two. If \(M \ll N\), it is not possible to restore the signal exactly from so few measurements by ordinary means: the system of equations is underdetermined, and linear algebra yields infinitely many solutions. However, if the signal is sparse, with only S non-zero positions satisfying \(S<M\), then the one exact solution can be picked from the many by linear programming [9].

Several algorithms exist to solve this sparse recovery problem, among them convex optimization, greedy approaches, and Bayesian methods. In the convex category, a solution is obtained using optimization algorithms such as basis pursuit and gradient descent; these are complex in nature and require long recovery times. Greedy techniques, which are iterative in nature, provide the result faster, whereas Bayesian procedures require prior information about the sparse signal. In general, the existence of a unique solution depends on the measurement matrix. This paper uses the greedy Orthogonal Matching Pursuit (OMP) algorithm [10] for signal reconstruction and analyses the effect of different measurement matrices in compressive sensing.
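Since the original MATLAB code is not reproduced here, the following is a minimal Python/NumPy sketch of OMP; the function name `omp` and the residual-based stopping rule are illustrative choices, and a real-valued measurement matrix is assumed (for the complex partial Fourier matrix of Sect. 3.3, the transpose would become a conjugate transpose).

```python
import numpy as np

def omp(Phi, y, s, tol=1e-6):
    """Minimal OMP sketch: recover an s-sparse x from measurements y = Phi @ x."""
    M, N = Phi.shape
    residual = np.asarray(y, dtype=float)
    support = []                                   # indices of selected columns
    coeffs = np.zeros(0)
    for _ in range(s):
        # Select the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        # Re-estimate all selected coefficients by least squares
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
        if np.linalg.norm(residual) < tol:
            break
    x_hat = np.zeros(N)
    x_hat[support] = coeffs
    return x_hat
```

Given any real \(M\,\times \,N\) matrix \(\varPhi \) from Sect. 3 and measurements \(y=\varPhi x\), `omp(Phi, y, s)` returns an estimate of the S-sparse signal.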

3 Measurement Matrices

The measurement matrix plays a vital role in compressive sensing. In particular, for faithful signal reconstruction the compressed measurements should retain all the significant information of the signal; otherwise it is not possible to reconstruct the original signal. The measurement matrix figures prominently both in signal compression and in reconstruction, so choosing an efficient measurement matrix is essential for CS.

The quality of a measurement matrix is judged by two conditions: coherence and the Restricted Isometry Property (RIP). If the measurement matrix satisfies these two properties, the uniqueness of the reconstructed signal is ensured. The coherence of a matrix measures the maximum correlation between any two of its columns. The smaller the coherence, the better the reconstruction, meaning that perfect recovery of a sparse signal is possible from fewer samples.

Let \(\varPhi \) be an \(M\,\times \,N\) measurement matrix with normalized column vectors \(\varPhi _1, \varPhi _2,\varPhi _3,\ldots ,\varPhi _N\). The mutual coherence constant is defined as

$$\begin{aligned} \mu (\varPhi )=\max \limits _{i \ne j} \frac{|\langle \varPhi _i,\varPhi _j\rangle |}{||\varPhi _i||_2\,||\varPhi _j||_2} \end{aligned}$$
(3)
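As a quick numerical check (not part of the original paper's code), Eq. (3) can be evaluated directly with NumPy:

```python
import numpy as np

def mutual_coherence(Phi):
    """Maximum absolute correlation between distinct normalized columns (Eq. 3)."""
    Phi_n = Phi / np.linalg.norm(Phi, axis=0, keepdims=True)  # unit-norm columns
    G = np.abs(Phi_n.T @ Phi_n)   # Gram matrix of pairwise column correlations
    np.fill_diagonal(G, 0.0)      # exclude the i == j pairs
    return float(G.max())
```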

The Restricted Isometry Property (RIP) is also called the uniform uncertainty principle. It assures the success of sparse recovery algorithms. The restricted isometry constant of order s involves all s-tuples of columns of the measurement matrix, unlike coherence, which considers only pairs of columns. As with coherence, smaller restricted isometry constants are desired. The formal definition of the RIP is as follows.

$$\begin{aligned} (1-\delta _s)||x||^2_2 \le ||\varPhi x||^2_2 \le (1+\delta _s)||x||^2_2 \end{aligned}$$
(4)

The measurement matrix \(\varPhi \) satisfies the RIP if there exists a constant \(\delta _s\) such that the above inequality holds for every s-sparse vector x [11]. The constant \(\delta _s \in [0,1)\) is called the restricted isometry constant of \(\varPhi \) and should be as small as possible.

A number of measurement matrices that satisfy the above properties have already been proposed. They can be broadly partitioned into two classes: random and deterministic.

Random matrices are produced using random functions. They are easy to generate and satisfy the RIP with high probability. They are further divided into two types: structured and unstructured. Structured random matrices are generated by selecting random rows of a structured transform matrix; examples are the partial Hadamard matrix and the random partial Fourier matrix. Unstructured random matrices are generated from a given distribution function; examples are the Gaussian and Bernoulli matrices, generated from the Gaussian and Bernoulli distributions respectively.

Unlike random matrices, deterministic matrices are constructed deterministically so as to satisfy the RIP and coherence properties. These are also of two sorts: semi-deterministic and fully deterministic. Semi-deterministic matrices are generated in two stages: in the first, the entries of the first column are generated randomly from some distribution; in the second, the remaining columns are generated by applying a simple transformation to the first column. Examples are Circulant and Toeplitz matrices. Fully deterministic matrices have a purely deterministic construction; examples include chirp sensing matrices, second-order Reed-Solomon codes, and Quasi-Cyclic Low-Density Parity-Check (QC-LDPC) codes.

This paper selects Gaussian random matrices, random Bernoulli matrices, the random partial Fourier matrix, partial orthogonal random matrices, partial Hadamard matrices, Toeplitz matrices, and chaotic random matrices [12,13,14,15,16] as measurement matrices for radar pulse compression and reconstruction using CS.

3.1 Gaussian Random Measurement Matrix

The probability density function of a Gaussian random variable x is given as

$$\begin{aligned} f(x)=\frac{1}{\sqrt{{2}{\pi }{\sigma ^2}}}{e^{-\frac{(x-\mu )^2}{2\sigma ^2}}} \end{aligned}$$
(5)

where \(\mu \) is the expectation (mean) and \(\sigma ^2\) is the variance of the distribution. The elements \(\varPhi _{i,j}\) of the Gaussian random matrix are independent random variables that follow the Gaussian distribution with mean 0 and variance 1. This can be written as

$$\begin{aligned} \varPhi _{i,j}\sim N(0,1) \end{aligned}$$
(6)

The random matrix \(\varPhi \) (\(M\,\times \,N\)) satisfies the RIP with probability at least \(1-\varepsilon \) provided

$$\begin{aligned} M\ge \frac{Cs}{\varepsilon ^2}\log \Big (\frac{N}{\varepsilon ^2 s}\Big ) \end{aligned}$$
(7)

where C is a positive constant (\(C>0\)), M is the number of measurements taken out of N, the length of the input signal, and s is the sparsity level [17]. This matrix reconstructs the signal accurately and is the most commonly used. Its drawback is that all the elements are random and must be stored, so the matrix requires large storage and has high computational complexity, which makes hardware implementation difficult.
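A NumPy sketch of the construction (the dimensions and seed are illustrative; the \(1/\sqrt{M}\) column scaling in the comment is a common normalization, not stated in the text):

```python
import numpy as np

rng = np.random.default_rng(0)            # fixed seed for reproducibility
M, N = 64, 256                            # illustrative dimensions
Phi_gauss = rng.standard_normal((M, N))   # i.i.d. N(0, 1) entries, Eq. (6)
# Optionally scale by 1/sqrt(M) so columns have unit expected norm:
# Phi_gauss /= np.sqrt(M)
```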

3.2 Random Bernoulli Matrix

Each element of this matrix follows the Bernoulli distribution, a discrete probability distribution and a special case of the binomial distribution. If X is a random variable with this distribution, we have:

$$\begin{aligned} Pr(X=1)=p=1-q=1-Pr(X=0) \end{aligned}$$
(8)

The probability mass function f of this distribution, over the two possible outcomes k, is

$$\begin{aligned} f(k;p)={\left\{ \begin{array}{ll} p, &{} \text {if } k=1\\ 1-p, &{} \text {if } k=0 \end{array}\right. } \end{aligned}$$
(9)

A Bernoulli matrix \(B\in R ^{M\,\times \,N}\) has entries +1 or −1, given by

$$\begin{aligned} \varPhi _{i,j}={\left\{ \begin{array}{ll} +1,&{} \text {with probability } p\\ -1,&{} \text {with probability } 1-p \end{array}\right. } \end{aligned}$$
(10)

where \(p=1/2\), so that each entry takes the values +1 and −1 with equal probability.

The condition for a random Bernoulli matrix to satisfy the RIP is the same as for the Gaussian random matrix [12].
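A one-line NumPy sketch of the construction (dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 64, 256
Phi_bern = rng.choice([-1.0, 1.0], size=(M, N))   # +/-1, each with probability 1/2
```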

3.3 Random Partial Fourier Matrix

The partial Fourier matrix is formed from the Fourier matrix of size \(N\,\times \,N\). First, an \(N\,\times \,N\) Fourier matrix is generated whose entries are given by

$$\begin{aligned} \varPhi _{m,n}=e^{2\pi i m n/N} \end{aligned}$$
(11)

where \(m,n = 1, 2, 3,\ldots ,N\). From this \(N\,\times \,N\) matrix, an \(M \,\times \,N\) measurement matrix is constructed by selecting M rows at random. If \(M\ge C\,s\,\log (N/\varepsilon )\), this matrix satisfies the RIP with probability at least \(1-\varepsilon \) [18].
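A NumPy sketch of this two-step construction (a small N is used for illustration; note the resulting matrix is complex-valued):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 64, 256
n = np.arange(N)
F = np.exp(2j * np.pi * np.outer(n, n) / N)   # full N x N Fourier matrix, Eq. (11)
rows = rng.choice(N, size=M, replace=False)   # M distinct rows chosen at random
Phi_fourier = F[rows, :]                      # M x N partial Fourier matrix
```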

3.4 Partial Orthogonal Random Matrix

A matrix \(\varPhi \) is said to be orthogonal if it satisfies \(\varPhi ^T\varPhi =I\); the columns of \(\varPhi \) then form an orthonormal set of vectors. A partial orthogonal matrix is constructed by generating an \(N\,\times \,N\) orthogonal matrix \(\varPhi \) and selecting M rows from it at random.
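One common way to realize this (an assumption, since the text does not fix how the orthogonal matrix is generated) is to orthogonalize a Gaussian random matrix via QR and keep M random rows:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 64, 256
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))   # random N x N orthogonal matrix
rows = rng.choice(N, size=M, replace=False)
Phi_orth = Q[rows, :]                              # M x N partial orthogonal matrix
```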

3.5 Partial Hadamard Matrix

A Hadamard matrix is a square matrix composed of the elements \(+1\) and \(-1\) that satisfies the orthogonality condition. The partial Hadamard matrix is generated in the same way as the partial orthogonal matrix, except that a Hadamard matrix is generated in place of the orthogonal matrix. This matrix satisfies the RIP with probability at least \(1-\frac{5}{N}-e^{-\beta }\) if \(M\ge C_0(1+\beta )S\log N\), where \(\beta \) and \(C_0\) are constants.
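A SciPy sketch of the construction (N must be a power of two for `scipy.linalg.hadamard`; the dimensions are illustrative):

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
M, N = 64, 256                       # N = 256 is a power of two
H = hadamard(N)                      # N x N +1/-1 matrix with orthogonal columns
rows = rng.choice(N, size=M, replace=False)
Phi_had = H[rows, :].astype(float)   # M x N partial Hadamard matrix
```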

3.6 Toeplitz Matrix

This matrix is generated by successive shifts of a random vector \(t= (t_1, t_2,\ldots ,t_{Q+M-1})\in R^{Q+M-1}\). The vector t is generated using the Bernoulli distribution, so its entries are +1 or −1. The resulting matrix has constant diagonals, i.e. \(t_{m,n}=t_{m+1,n+1}\) (the Toeplitz structure). The matrix is framed in the following form

$$\begin{aligned} \varPhi ={\begin{pmatrix} t_{Q} &{} t_{Q-1} &{} \cdots &{} t_{1}\\ t_{Q+1} &{} t_{Q} &{} \cdots &{} t_{2}\\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ t_{Q+M-1} &{} t_{Q+M-2} &{} \cdots &{} t_{M} \end{pmatrix}} \end{aligned}$$
(12)

After forming the \(N\,\times \,N\) Toeplitz matrix, M rows are selected at random to obtain an \(M\,\times \,N\) measurement matrix; the result satisfies the RIP with constant \(\delta _S< \delta \) with high probability if \(M\ge C_\delta \,S^2 \log (N/S)\), where \(C_\delta \) depends on \(\delta \). The (m, n)th entry of the matrix depends only on the difference \(m-n\), i.e. \(t_{m,n}=t_{m-n}\). This structure reduces the randomness of the elements, which reduces memory requirements and hardware complexity. However, this matrix does not correlate well with all signals and is suited only to some special classes of signals.
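A sketch using `scipy.linalg.toeplitz`, which builds the constant-diagonal matrix from a generating vector; the full \(N\,\times \,N\) matrix is formed first and M rows are then kept, matching the description above (dimensions illustrative):

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
M, N = 64, 256
t = rng.choice([-1.0, 1.0], size=2 * N - 1)   # Bernoulli +/-1 generating vector
T = toeplitz(t[N - 1:], t[N - 1::-1])         # N x N Toeplitz: entry (m,n) = t[N-1+m-n]
rows = rng.choice(N, size=M, replace=False)
Phi_toep = T[rows, :]                         # M x N partial Toeplitz matrix
```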

3.7 Chaotic Random Matrices

The chaotic random matrices can be derived from the logistic map, \(x_{n+1} =\mu x_n (1-x_n)\), where \(\mu \in (0,4]\) and \(x_n\in (0,1)\). For the special case \(\mu =4\), the solution of the system is \(x_n=\frac{1}{2}(1-\cos (2\pi \theta 2^n))\), where \(\theta \in [0,\pi ]\) and \(x_0=\frac{1}{2}(1-\cos (2\pi \theta ))\) [14]. It is well known that chaotic systems can produce very complex sequences. The chaotic matrix is given by

$$\begin{aligned} \varPhi =\sqrt{\frac{2}{M}}{\begin{pmatrix} z_{0} &{} z_{M} &{} \cdots &{} z_{M(N-1)}\\ z_{1} &{} z_{M+1} &{} \cdots &{} z_{M(N-1)+1}\\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ z_{M-1} &{} z_{2M-1} &{} \cdots &{} z_{MN-1} \end{pmatrix}} \end{aligned}$$
(13)

where \(z_k = 1-2x_k\) centers the chaotic sequence and the scalar \(\sqrt{2/M}\) provides normalization. The chaotic matrix satisfies the RIP for a constant \(\delta >0\) with high probability provided \(s\le O(M/\log (N/s))\).
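A sketch of one common logistic-map construction (the seed \(x_0\) and the column-wise fill follow the style of [14] but are assumptions here, since the exact sampling scheme is not spelled out in the text):

```python
import numpy as np

M, N = 64, 256
mu, x = 4.0, 0.3                      # logistic parameter and seed x0 in (0, 1)
seq = np.empty(M * N)
for k in range(M * N):
    x = mu * x * (1.0 - x)            # logistic map: x_{n+1} = mu x_n (1 - x_n)
    seq[k] = x
z = 1.0 - 2.0 * seq                   # center the sequence into [-1, 1]
Phi_chaos = np.sqrt(2.0 / M) * z.reshape(N, M).T   # column-by-column fill, Eq. (13)
```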

Fig. 1. (a) Input Gaussian modulated sinusoidal pulse; (b) reconstructed pulse with 600 samples

4 Simulation and Analysis

Figure 1(a) shows the Gaussian modulated sinusoidal pulse of 4 GHz frequency used as the input signal for compressive sensing. The Gaussian pulse itself serves as a sparse representation because most of its samples are zero. The compression is performed using the different measurement matrices discussed in Sect. 3. The Gaussian pulse is then successfully recovered using the OMP algorithm; the recovered signal is shown in Fig. 1(b). PSNR (Peak Signal-to-Noise Ratio) and recovery time are taken as the key parameters for evaluating measurement matrix performance for faithful signal reconstruction. The simulations are carried out in MATLAB R2015a on an Intel i7 octa-core processor.

Fig. 2. Plot of PSNR for different measurements

Fig. 3. Plot of reconstruction time for different measurements

During simulation, the length of the original signal is taken as 3600 samples. The value of M, the number of compressed measurements taken from the input samples, is varied from 1 to 600 in steps of 30. The Peak Signal-to-Noise Ratio (PSNR) is calculated to quantify the difference between the recovered and original signals using \(PSNR=20 \log {\frac{MAX(x)}{\sqrt{MSE}}}\), with \(MSE =\frac{1}{N}{\sum {(\hat{x}-x)^2}}\), where x and \(\hat{x}\) are the original and recovered signals respectively.
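The PSNR formula translates directly into NumPy (a sketch; `psnr` is an illustrative name):

```python
import numpy as np

def psnr(x, x_hat):
    """PSNR in dB between the original x and the reconstruction x_hat."""
    mse = np.mean((x_hat - x) ** 2)                    # mean squared error
    return 20.0 * np.log10(np.max(np.abs(x)) / np.sqrt(mse))
```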

Figures 2 and 3 show the PSNR and the execution time of the reconstructed signal, respectively, for different lengths of the compressed signal. From the figures it can be concluded that the partial Fourier matrix gives the highest PSNR for almost all measurement counts compared with the other matrices. However, it fails in terms of execution time: as the number of measurements increases, its execution time grows and becomes the highest among the measurement matrices. In terms of signal recovery time, the Bernoulli matrix takes the lowest computation time compared with the other matrices, although it does not offer the best PSNR among them. The Hadamard matrix, on the other hand, performs well in terms of both PSNR and recovery time. Hence, this matrix is preferred as the measurement matrix for signal reconstruction using CS.

5 Conclusion

This paper uses a Gaussian modulated sinusoidal pulse of 4 GHz as the stimulus for compressive sensing. The signal is compressed with the Gaussian random matrix, Bernoulli random matrix, partial orthogonal random matrix, partial Hadamard matrix, random partial Fourier matrix, Toeplitz matrix, and chaotic random matrix. The simulation results show that the partial Fourier matrix performs best in terms of PSNR and the Bernoulli matrix is good for fast signal reconstruction, whereas the Hadamard measurement matrix balances both. The Hadamard matrix can therefore be chosen as the optimum measurement matrix, in terms of PSNR and speed, for compressive sensing.