
1 Introduction

With the recent advances in computer technology and Internet applications, the number of multimedia files has increased dramatically. Despite extraordinary advances in computational power, the acquisition and processing of signals in application areas such as imaging, video and medical imaging therefore continue to pose a tremendous challenge. Compressed Sensing (CS), a recently proposed sampling method introduced in [1–3], can collect compressed data at a sampling rate much lower than that required by Shannon’s sampling theorem by exploiting the compressibility of the signal. Suppose that x ∈ ℝ^{N×1} is a length-N signal. It is said to be K-sparse (or compressible) if x can be well approximated using only K ≪ N coefficients under some linear transform

$$ \boldsymbol{x}=\boldsymbol{\varPsi} \boldsymbol{\omega} $$
(73.1)

where Ψ is the sparsifying transform basis and ω is the coefficient vector with at most K significant (nonzero) entries.

According to the CS theory, such a signal can be acquired through the random linear projection:

$$ \boldsymbol{y}=\boldsymbol{\varPhi} \boldsymbol{x}+\boldsymbol{n}=\boldsymbol{\varPhi} \boldsymbol{\varPsi} \boldsymbol{\omega} +\boldsymbol{n} $$
(73.2)

where y ∈ ℝ^{M×1} is the sampled vector with M ≪ N data points, Φ is an M × N random matrix that must satisfy the Restricted Isometry Property (RIP) [4], and n ∈ ℝ^{M×1} is the measurement noise vector. Finding the sparsest vector ω consistent with Eq. (73.2) is in general an NP-hard problem [4]. Many efficient algorithms have been proposed for this sparse recovery problem; typical examples include basis pursuit (BP), i.e. the ℓ1-minimization approach [5], orthogonal matching pursuit (OMP) [6], and Bayesian algorithms [7].
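As a minimal, illustrative sketch of Eq. (73.2) and of greedy recovery with OMP (this is not the code used in this paper; Ψ is taken as the identity and all parameter values are arbitrary), a few lines of Python with scikit-learn might look as follows:

```python
# Sketch only: sense a synthetic K-sparse signal with a random Gaussian matrix
# and recover it with OMP. Psi is the identity here, so x = omega.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
N, M, K = 256, 80, 10                            # signal length, measurements, sparsity

omega = np.zeros(N)                              # K-sparse coefficient vector
omega[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random Gaussian measurement matrix
y = Phi @ omega + 0.01 * rng.standard_normal(M)  # y = Phi*Psi*omega + n, Eq. (73.2)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=K, fit_intercept=False)
omp.fit(Phi, y)                                  # greedy sparse recovery
omega_hat = omp.coef_

print(np.linalg.norm(omega_hat - omega) / np.linalg.norm(omega))
```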

CS theory provides a promising new way to achieve more efficient data compression than existing methods [8–10]. Reference [9] proposed a new sampling scheme that, at the encoder, samples the edge (high-frequency) part of the image densely and the non-edge part randomly, instead of using a measurement matrix to obtain the lower-dimensional observation directly as in traditional compressed sensing. Reference [10] proposed an improved compressed sensing algorithm based on a single-layer wavelet transform, exploiting the properties of the wavelet sub-bands; however, it does not consider the characteristics of the high-frequency sub-band coefficients. In this paper, we propose an unequal-compressed sensing algorithm that combines compressed sensing theory with the characteristics of the wavelet coefficients, and it achieves better reconstructed image quality at the same compression ratio. After a sparse transform (e.g., the DWT), the coefficients are usually not exactly zero but compressible (most of them are negligible). Therefore, the sparse Bayesian learning (SBL) algorithm is used to solve the reconstruction problem, which recovers the image correctly and efficiently.

The remainder of this paper is organized as follows. Section 73.2 introduces the characteristics of wavelet coefficients. Section 73.3 presents the proposed algorithm for image compression. Experimental results are given in Sect. 73.4, and conclusions and future work are given in Sect. 73.5.

2 Characteristics of Wavelet Coefficients

2.1 Characteristic One

The wavelet transform is widely used as the sparse decomposition in compressed sensing, and in this paper we choose the discrete wavelet transform (DWT) as the sparsifying basis. As is well known, the low-frequency coefficients are much more important than the high-frequency coefficients. Table 73.1 lists the PSNR (in dB) achieved when images decomposed by a four-scale DWT are restored using only the low-frequency coefficients or only the high-frequency coefficients. From Table 73.1 we can see that the low-frequency coefficients contribute far more to the PSNR than the high-frequency coefficients. Therefore, we retain the low-frequency coefficients to guarantee the quality of the restored image, and the high-frequency coefficients are measured by a random Gaussian matrix to achieve compression.

Table 73.1 The PSNR (dB) of restoration from only the low- or only the high-frequency coefficients of a four-scale DWT

Table 73.2 shows the PSNR achieved for the Lena image using a multi-scale DWT. We can see that the smaller the decomposition scale S is, the greater the contribution of the low-frequency coefficients to the PSNR, and the longer the low-frequency coefficient vector (of length N/2^S). Therefore, we can choose a proper decomposition scale S according to the compression ratio.

Table 73.2 The PSNR (dB) of the Lena image under multi-scale DWT
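The experiments behind Tables 73.1 and 73.2 can be reproduced in spirit with a few lines of Python using PyWavelets; the test image, the Haar wavelet ('db1') and the PSNR peak value below are illustrative assumptions of ours, not choices stated in the paper:

```python
# Sketch: restore an image from only the low-frequency (approximation) or only
# the high-frequency (detail) coefficients of an S-scale 2-D DWT and compare PSNR.
import numpy as np
import pywt

def psnr(ref, rec, peak=255.0):
    mse = np.mean((ref.astype(float) - rec.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def restore(img, S, keep="low", wavelet="db1"):
    coeffs = pywt.wavedec2(img, wavelet, level=S)
    cA, details = coeffs[0], coeffs[1:]
    if keep == "low":                      # zero out every detail sub-band
        details = [tuple(np.zeros_like(d) for d in lvl) for lvl in details]
    else:                                  # zero out the approximation band
        cA = np.zeros_like(cA)
    return pywt.waverec2([cA] + details, wavelet)

img = np.random.rand(256, 256) * 255.0     # stand-in for a 256x256 test image
for S in (1, 2, 3, 4):
    print(S, round(psnr(img, restore(img, S, "low")), 2),
             round(psnr(img, restore(img, S, "high")), 2))
```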

2.2 Characteristic Two

In order to reduce the computational complexity and storage space, we treat each column of the image as an N × 1 signal x to be coded alone; it is compressible in the discrete wavelet basis Ψ. The coefficient layout of the two-scale decomposition is shown in Fig. 73.1.

Fig. 73.1 The coefficients of the two-scale decomposition structure

We can see that the wavelet coefficient vector c consists of three parts: cA_2 (the low-frequency coefficients), cD_2 (one high-frequency sub-band) and cD_1 (the other high-frequency sub-band). We take one column of the 256 × 256 Lena image as a 256 × 1 signal and decompose it alone. The resulting high-frequency sub-band coefficients are shown in Fig. 73.2.
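For concreteness, a two-scale 1-D DWT of one 256-sample column produces exactly this layout; the Haar wavelet ('db1') below is an illustrative choice, since the paper does not name the mother wavelet:

```python
# Sketch: coefficient layout of the two-scale decomposition of Fig. 73.1.
import numpy as np
import pywt

x = np.random.rand(256)                    # stand-in for one 256x1 image column
cA2, cD2, cD1 = pywt.wavedec(x, "db1", level=2)
print(len(cA2), len(cD2), len(cD1))        # 64 64 128 -> c = [cA_2, cD_2, cD_1]
```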

Fig. 73.2 The decomposed high-frequency sub-band coefficients

From Fig. 73.2 we can see that the locations of the larger coefficients in cD_1 are almost exactly twice the locations of the larger coefficients in cD_2. We therefore extend cD_2 by zero insertion; the comparison of the extended cD_2 and cD_1 is shown in Fig. 73.3. As can be seen from Fig. 73.3, if the high-frequency coefficient at a certain location in cD_1 is large (or small), then with high probability the coefficient at the same location in the extended cD_2 (denoted ex_cD_2) is also large (or small). Let s_cD_1_cD_2 denote the sum of ex_cD_2 and cD_1; it has only a few large coefficients c_d, and the other coefficients are almost zero. More importantly, the locations of the large coefficients in s_cD_1_cD_2 are almost the same as the locations of the large coefficients in cD_1 and ex_cD_2. Let

Fig. 73.3 The comparison of the extended cD_2 and cD_1

$$ \boldsymbol{s}\kern.15em \_\kern.15em \boldsymbol{c}{\boldsymbol{D}}_1\kern.15em \_\kern.15em \boldsymbol{c}{\boldsymbol{D}}_2=\boldsymbol{ex}\kern.15em \_\kern.15em \boldsymbol{c}{\boldsymbol{D}}_2+\boldsymbol{c}{\boldsymbol{D}}_1 $$

Then

$$ \boldsymbol{c}{\boldsymbol{D}}_1=\boldsymbol{s}\kern.15em \_\kern.15em \boldsymbol{c}{\boldsymbol{D}}_1\kern.15em \_\kern.15em \boldsymbol{c}{\boldsymbol{D}}_2-\boldsymbol{ex}\kern.15em \_\kern.15em \boldsymbol{c}{\boldsymbol{D}}_2 $$

Let c_d denote the few largest coefficients of s_cD_1_cD_2. From ex_cD_2 and c_d we can then recover cD_1 approximately.
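A minimal sketch of this construction, under the same illustrative assumptions as above (Haar wavelet, a hypothetical value of the number of kept coefficients), could look as follows; at the decoder only c_d and ex_cD_2 are available, so cD_1 is approximated by c_d − ex_cD_2:

```python
# Sketch of Characteristic Two: zero-insertion extension of cD_2, the sum
# s_cD_1_cD_2 = ex_cD_2 + cD_1, and approximate recovery of cD_1.
import numpy as np
import pywt

x = np.random.rand(256)                    # stand-in for one 256x1 image column
cA2, cD2, cD1 = pywt.wavedec(x, "db1", level=2)

ex_cD2 = np.zeros(2 * len(cD2))            # extend cD_2 by zero insertion
ex_cD2[::2] = cD2                          # location i in cD_2 maps to ~2i in cD_1

s_cD1_cD2 = ex_cD2 + cD1                   # only a few entries are large

m = 16                                     # hypothetical number of kept coefficients
c_d = np.zeros_like(s_cD1_cD2)
keep = np.argsort(np.abs(s_cD1_cD2))[-m:]  # indices of the m largest magnitudes
c_d[keep] = s_cD1_cD2[keep]

cD1_hat = c_d - ex_cD2                     # approximate recovery of cD_1
print(np.linalg.norm(cD1_hat - cD1) / np.linalg.norm(cD1))
```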

3 The Proposed Algorithm for Image Compression

In the original CS algorithm for image compression, the N × N image is first decomposed by a sparse transform (e.g., the wavelet transform), and then a random Gaussian matrix Φ is employed to measure the wavelet coefficients. At the decoder, the original image is recovered by the SBL algorithm and the inverse wavelet transform. However, as shown above, the high-frequency sub-band coefficients are strongly correlated. We therefore propose an algorithm that combines compressed sensing theory with the characteristics of the wavelet coefficients. The specific steps are as follows (a code sketch of the encoder and decoder is given after the list):

  • Step 1: in order to reduce the computational complexity and storage space, each column of the image X is treated as an N × 1 signal x and decomposed alone; it is compressible under the DWT basis Ψ. We decompose x by an S-scale DWT and retain the low-frequency coefficients cA_S. The decomposition scale S is decided by the compression ratio.

  • Step 2: let s_cD_S_cD_i denote the sum of ex_cD_S (the zero-insertion extension of cD_S to the length of cD_i) and cD_i, i = 1, …, S−1. Retain only the few largest coefficients c_i, i = 1, …, S−1, and let m denote the number of retained largest coefficients in each c_i.

  • Step 3: the \( M\times \frac{N}{2^S}\left(M<N/{2}^S\right) \) random Gaussian matrix Φ is employed to measure cD_S, yielding the sampled vector y = Φ ⋅ cD_S.

  • Step 4: the SBL algorithm is used to recover \( \boldsymbol{c}{\widehat{\boldsymbol{D}}}_S \); then, from \( \boldsymbol{c}{\widehat{\boldsymbol{D}}}_S \) and c_i, we recover \( \boldsymbol{c}{\widehat{\boldsymbol{D}}}_i \), i = 1, …, S−1. Finally, the inverse discrete wavelet transform is used to reconstruct the original signal.
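A hedged end-to-end sketch of these four steps for a single column, written for S = 2 to match Sect. 73.2, is given below; the wavelet, m and M are illustrative values of ours, and OMP is used here merely as a stand-in for the SBL recovery employed in the paper:

```python
# Sketch of the proposed unequal-CS encoder/decoder for one column, S = 2.
import numpy as np
import pywt
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
x = np.random.rand(256)                            # stand-in for one 256x1 column
m, M = 16, 48                                      # hypothetical parameter choices

# --- Encoder ---
cA2, cD2, cD1 = pywt.wavedec(x, "db1", level=2)    # Step 1: keep cA_2 uncoded
ex_cD2 = np.zeros(2 * len(cD2)); ex_cD2[::2] = cD2
s = ex_cD2 + cD1                                   # Step 2: keep m largest entries
c_d = np.zeros_like(s)
keep = np.argsort(np.abs(s))[-m:]
c_d[keep] = s[keep]
Phi = rng.standard_normal((M, len(cD2))) / np.sqrt(M)
y = Phi @ cD2                                      # Step 3: measure cD_2 only
# transmitted: cA2, c_d (values and positions), y

# --- Decoder (Step 4) ---
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=m, fit_intercept=False)
omp.fit(Phi, y)                                    # stand-in for SBL recovery of cD_2
cD2_hat = omp.coef_
ex_hat = np.zeros(2 * len(cD2_hat)); ex_hat[::2] = cD2_hat
cD1_hat = c_d - ex_hat                             # Characteristic Two
x_hat = pywt.waverec([cA2, cD2_hat, cD1_hat], "db1")
print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```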

The compression ratio of the proposed CS method is

$$ \alpha =\frac{N/{2}^S+m\times i+M}{N} $$
(73.3)

where N/2^S is the number of low-frequency coefficients, m × i (with i = S − 1 high-frequency sub-bands) is the total number of retained largest coefficients c_i, M is the dimension of the measurement matrix Φ, and M < N/2^S.
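As a worked instance of Eq. (73.3), using the same illustrative values as in the sketch above (these are not values taken from the paper):

```python
# Worked example of Eq. (73.3): N = 256, S = 2, m = 16, M = 48.
N, S, m, M = 256, 2, 16, 48
alpha = (N / 2**S + m * (S - 1) + M) / N   # (64 + 16 + 48) / 256
print(alpha)                               # 0.5
```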

4 Experimental Results

The performance of the proposed algorithm has been evaluated by simulation. We choose the 256 × 256 Lena, Fruits, Baboon and Boat images as the original signals and vary the compression ratio α from 0.1 to 0.9. First, we determine the scale S of the wavelet transform according to the compression ratio α, which fixes the length N/2^S of the low-frequency coefficients. Second, we choose a proper dimension M for the measurement matrix to guarantee that the high-frequency coefficients cD_S are transmitted reliably; when the result is not an integer, we take ⌊M⌋. Third, we obtain the number of largest coefficients m from formula (73.3); when the result is not an integer, we take ⌊m⌋. In short, the parameters are allocated so that the more important information is transmitted reliably under the fixed compression ratio α; a sketch of this parameter selection is given below.
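One possible way to implement this parameter selection is sketched here; the specific split of the remaining sample budget between M and m is an assumption of ours, since the paper only states the guiding principle:

```python
# Hedged sketch: derive (S, M, m) for one column of length N from a target
# compression ratio alpha, respecting M < N / 2^S (cf. Eq. 73.3).
import math

def choose_parameters(alpha, N=256):
    S = 1
    while N / 2**S >= alpha * N:           # smallest S whose cA_S fits the budget
        S += 1
    budget = alpha * N - N / 2**S          # samples left for y (M) and the c_i (m each)
    M = min(math.floor(0.75 * budget), N // 2**S - 1)   # assumed split, keep M < N/2^S
    m = math.floor((budget - M) / (S - 1)) if S > 1 else 0
    return S, M, m

print(choose_parameters(0.5))              # (2, 48, 16) for alpha = 0.5
```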

For comparison, we also simulate two versions of the original CS algorithm. In the first, the signal is decomposed by a four-scale DWT (in general, the decomposition level for a 256 × 256 image should be at least four to ensure sufficient sparsity), and all of the wavelet coefficients are measured by the random Gaussian matrix; we call this the general CS method, and its compression ratio is α₁ = M/N. In the second, the signal is decomposed by a single-layer DWT, the low-frequency coefficients are retained, and the high-frequency coefficients are measured [10]; we call this the single-layer CS method, and its compression ratio is \( {\alpha}_2=\frac{N/2+M}{N} \), α₂ ≥ 0.5. Neither method takes the characteristics of the high-frequency coefficients into account.

The PSNR of the Lena, Fruits, Baboon and Boat images at different compression ratios using the unequal-CS, general CS and single-layer CS algorithms is shown in Fig. 73.4.

Fig. 73.4 The compared PSNR of (a) Lena, (b) Fruits, (c) Baboon and (d) Boat

The figures show that the proposed unequal-CS method achieves a much higher PSNR than the other two CS methods, and the gap is especially large at low compression ratios. Moreover, as the compression ratio decreases, the PSNR of the proposed method declines much more slowly than that of the other two methods, and the PSNR achieved by the proposed unequal-CS method never falls below 18 dB. This indicates that the proposed method is more stable across compression ratios.

However, Fig. 73.4 also shows that the PSNR of the Baboon image is lower than that of the Lena, Fruits and Boat images under the same conditions, because Baboon contains more large high-frequency coefficients, so more information is lost after measurement. For this type of source, where the high-frequency coefficients are relatively large, the proposed method is less suitable.

5 Conclusion

According to the characteristics of the wavelet coefficients, this paper has proposed an unequal-compressed sensing algorithm that combines compressed sensing theory with the properties of the wavelet sub-bands. Compared with the original CS algorithms, the PSNR of the proposed algorithm is greatly improved, and the method is more stable across compression ratios.