1 Introduction

With the widespread application of imaging spectrometer remote sensing technology, the demand for high-quality hyperspectral images keeps growing, but adverse environmental conditions degrade the contrast and definition of hyperspectral remote sensing images to varying degrees [23, 45]. Image enhancement is therefore a key technology for recovering and preserving the details of hyperspectral images. Existing image enhancement algorithms can be divided into two categories: spatial-domain and transform-domain enhancement [2, 14]. In the spatial domain, histogram equalization (HE) is a traditional enhancement technique [19, 21, 22, 35], but it suffers from over-enhancement and amplified noise; contrast-limited adaptive histogram equalization (CLAHE) effectively enhances the local contrast of an image and alleviates the defects of HE to some extent [9]; other simple and effective spatial-domain enhancement algorithms include dualistic sub-image histogram equalization (DSIHE) [36], unsharp masking [33], and gamma correction [10]. With the rapid development of digital image processing, transform-domain enhancement techniques have also been proposed, such as the wavelet transform [46], curvelet transform [28], and contourlet transform [8]. The contourlet transform provides a flexible multi-resolution, directional and structured decomposition of images [15] and overcomes the representational inefficiency inherent to the wavelet transform, but it lacks shift-invariance. To address this problem, the nonsubsampled contourlet transform (NSCT) was proposed on the basis of contourlet theory [4, 37], and owing to its characteristics it is now widely used in image fusion and image enhancement algorithms [41, 44].

Pu et al. [29] proposed a hyperspectral remote sensing image enhancement method based on NSCT and unsharp masking to address low contrast and noise amplification; it achieves good mean and standard deviation values but may produce over-enhancement. Liu et al. [24] proposed a hyperspectral remote sensing image enhancement approach based on NSCT and a mean filter, which performs well in terms of image definition, contrast and peak signal-to-noise ratio (PSNR), but its computational burden is high. Wu et al. [39] proposed a hyperspectral image enhancement technique based on multi-scale retinex (MSR) in the NSCT domain, which is effective in terms of information entropy and contrast. Li et al. [12] proposed a novel NSCT-based remote sensing image enhancement technique that yields a significant improvement in enhancement quality.

With further research on transform-domain theory, the shearlet transform and the nonsubsampled shearlet transform (NSST) were proposed, and both are widely used in remote sensing image denoising and enhancement [7, 34]. Yang et al. [42] proposed a remote sensing image enhancement algorithm based on the shearlet transform and fuzzy enhancement; experimental results show that it produces a good visual effect and clearly improves the entropy and mean intensity. Wu et al. [40] proposed an adaptive image enhancement technique based on the NSST and a constraint on human-eye perceptual information fidelity, which also achieves good definition and contrast. Lv et al. [27] proposed a hyperspectral remote sensing image enhancement approach based on NSST theory and the guided filter, but it may exhibit over-enhancement because the high-frequency components are over-processed.

Although these algorithms have achieved certain results in image enhancement, the pseudo-Gibbs phenomenon still exists. To address this problem, a novel hyperspectral remote sensing image enhancement method based on NSST is proposed in this paper. We mainly focus on the design of enhancement strategies for both the low-frequency and high-frequency components. The main contributions of the proposed algorithm are outlined as follows.

  1. 1)

    We introduce the guided filter into the field of image enhancement. The guided filter model is used to process the low-frequency component; experiments verify that it achieves a better contrast enhancement effect than traditional bilateral filtering.

  2. 2)

    We present a novel high-frequency denoising strategy based on an improved fuzzy contrast.

  3. 3)

    We propose a new remote sensing image enhancement approach in the NSST domain that combines the enhancement and denoising strategies above. First, the image is decomposed into a low-frequency component and several high-frequency components by the NSST. Second, the guided filter is applied to the low-frequency component to improve contrast, and the improved fuzzy contrast adjusts the coefficients of the high-frequency sub-bands. Third, the inverse NSST (INSST) reconstructs the image, yielding the final enhanced result. Extensive experiments on different remote sensing image enhancement problems, with seven comparison algorithms, demonstrate that the proposed approach performs well in both subjective and objective assessments.

The remainder of this paper is organized as follows. Section 2 introduces the theory of the nonsubsampled shearlet transform. The proposed hyperspectral remote sensing image enhancement method is described in Section 3. Section 4 presents the experimental results and compares them with other state-of-the-art algorithms. Finally, the conclusions are summarized in Section 5.

2 Theoretical analysis

2.1 Non-subsampled shearlet transform

The non-subsampled shearlet transform is constructed by combining geometric and multi-scale analysis in an affine system. For dimension n = 2, the affine system of the shearlet transform is described as follows [6, 38]:

$$ M_{AB}\left(\phi\right)=\left\{\phi_{i,j,k}(x)=\left|\det A\right|^{i/2}\,\phi\!\left(B^{j}A^{i}x-k\right):\ i,j\in \mathbb{Z},\ k\in \mathbb{Z}^{2}\right\} $$
(1)

where ϕ ∈ L2(R2), the space of square-integrable functions, and det denotes the determinant of a matrix. A and B are invertible 2 × 2 matrices with ∣det B∣ = 1. If MAB(ϕ) forms a tight frame, the elements of MAB(ϕ) are called synthetic wavelets. A is the anisotropic expansion matrix, and Ai is associated with scaling transformations; B is the shear matrix, and Bj is associated with area-preserving geometric transformations. The expressions for A and B are as follows [26]:

$$ A=\begin{pmatrix} a & 0 \\ 0 & \sqrt{a} \end{pmatrix},\qquad B=\begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix} $$
(2)

With a = 4 and s = 1, Eq. (2) becomes:

$$ A=\begin{pmatrix} 4 & 0 \\ 0 & 2 \end{pmatrix},\qquad B=\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} $$
(3)
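For concreteness, the following short numpy sketch (illustrative only) builds the matrices of Eq. (3) and forms the composite dilation B^jA^i that appears in the affine system of Eq. (1); the particular index values i = 1, j = 2 are chosen arbitrarily for the example.

```python
import numpy as np

# Anisotropic scaling matrix A (a = 4) and shear matrix B (s = 1) from Eq. (3).
A = np.array([[4.0, 0.0],
              [0.0, 2.0]])
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Composite dilation B^j A^i used in the affine system of Eq. (1),
# here for scale index i = 1 and shear index j = 2.
i, j = 1, 2
M = np.linalg.matrix_power(B, j) @ np.linalg.matrix_power(A, i)
print(M)                       # [[4. 4.], [0. 2.]]
print(np.linalg.det(B))        # 1.0  -> shearing preserves area
print(abs(np.linalg.det(A)))   # 8.0  -> anisotropic scaling
```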

The discretization of the NSST consists of two steps: multi-scale decomposition and directional localization. The multi-scale decomposition is realized by a non-subsampled pyramid filter bank (NSP), and the directional localization is accomplished by a shearlet filter (SF).
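The exact NSP and SF filter banks are beyond a short listing, but the shift-invariant multi-scale splitting performed by the NSP stage can be sketched with an à-trous-style pyramid. The code below is a simplified stand-in under that assumption, not an NSST implementation: the Gaussian smoothing kernel is our own choice, and the directional SF stage is not modelled.

```python
import numpy as np
from scipy import ndimage

def atrous_pyramid(img, levels=3):
    """Undecimated (shift-invariant) multi-scale split, loosely mimicking the NSP stage:
    one low-frequency approximation plus `levels` band-pass (high-frequency) layers.
    The directional SF stage of the NSST is NOT modelled here."""
    low = img.astype(np.float64)
    highs = []
    for lv in range(levels):
        # Smooth with a Gaussian whose support grows with scale (a trous idea).
        smoothed = ndimage.gaussian_filter(low, sigma=2.0 ** lv)
        highs.append(low - smoothed)   # band-pass detail at this scale
        low = smoothed                 # pass the approximation to the next level
    return low, highs

# This simplified pyramid reconstructs exactly: low + sum of details = original.
img = np.random.rand(64, 64)
low, highs = atrous_pyramid(img, levels=3)
assert np.allclose(low + sum(highs), img)
```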

Fig. 1 depicts the frequency-domain decomposition and the frequency-domain support space of the shearlet transform, respectively. The multi-scale and multi-directional decomposition process of the NSST is shown in Fig. 2, where the number of NSP decomposition levels is 3.

Fig. 1
figure 1

a The frequency domain decomposition of shearlet transform; b the frequency domain support space of shearlet transform

Fig. 2
figure 2

The multi-scale and multi-directional decomposition process of NSST

3 Proposed method

3.1 Guided filter in low-frequency sub-band

The guided filter is widely used in image enhancement and edge preservation. In this article, the guidance image, input image and filtered image are denoted by I, p and q, respectively. The guided filter is built on a local linear model, defined as follows [5]:

$$ {q}_i={a}_k{I}_i+{b}_k,\kern0.75em \forall i\in {w}_k $$
(4)

where i indexes the pixels and k indexes the local square window wk of radius r. The coefficients ak and bk are defined as follows [25]:

$$ {a}_k=\frac{\frac{1}{\mid w\mid }{\sum}_{i\in {w}_k}{I}_i{p}_i-{\mu}_k{\overline{p}}_k}{\delta_k^2+n} $$
(5)
$$ {b}_k=\overline{p_k}-{a}_k{\mu}_k $$
(6)

where μk and δk2 denote the mean and variance of the guidance image in window wk, and n is the regularization parameter. The filtered image is computed with the following equation:

$$ q_i=a_{1i}I_i+b_{1i} $$
(7)

where a1i and b1i denote the means of ak and bk over all windows that contain pixel i.

The enhanced image is then obtained by:

$$ G=\beta \left(p-q\right)+q $$
(8)

where β is a parameter controlling the degree of enhancement.
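A minimal numpy/scipy sketch of Eqs. (4)–(8) is given below; the box means stand in for the window averages, and the self-guided setting I = p (the low-frequency sub-band guides itself) together with the radius and regularization values are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, n=0.04):
    """Guided filter of Eqs. (4)-(7): I is the guidance image, p the input image,
    r the window radius, n the regularization term."""
    size = 2 * r + 1
    mean = lambda x: uniform_filter(x, size=size, mode='reflect')
    mu_k    = mean(I)                       # mean of guidance image in each window
    p_bar   = mean(p)                       # mean of input image in each window
    corr_Ip = mean(I * p)
    var_I   = mean(I * I) - mu_k ** 2       # delta_k^2 in Eq. (5)
    a = (corr_Ip - mu_k * p_bar) / (var_I + n)   # Eq. (5)
    b = p_bar - a * mu_k                         # Eq. (6)
    return mean(a) * I + mean(b)                 # Eq. (7): a_1i, b_1i are window means of a, b

def enhance_low_frequency(p, beta=4.0):
    """Eq. (8): boost the detail layer (p - q) and add it back to the filtered base q."""
    q = guided_filter(p, p)        # self-guided filtering of the low-frequency sub-band
    return beta * (p - q) + q

low_band = np.random.rand(128, 128)          # stand-in for the NSST low-frequency component
enhanced_low = enhance_low_frequency(low_band, beta=4.0)
```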

3.2 Improved fuzzy contrast in high-frequency sub-bands

The high-frequency sub-bands contain the detail information of the image, including edges and contours, but they also contain most of the noise. By selecting an appropriate threshold, the noise can be suppressed as far as possible while the loss of detail information is kept small. Therefore, an improved fuzzy contrast is used to adjust the high-frequency coefficients of the image.

In the fuzzy algorithm, the membership function is constructed by the following equation [34]:

$$ {\mu}_{i,j}=T\left({d}_{i,j}\right)={\left[1+\left({d}_{\mathrm{max}}-{d}_{i,j}\right)/{F}_p\right]}^{-{F}_e} $$
(9)

where Fp and Fe are the reciprocal and exponential fuzzy parameters, respectively, which control the uncertainty of the fuzzy plane. T(·) denotes the membership transformation that maps the gray values of the image to the fuzzy domain. di,j is the gray value of the current pixel and dmax is the maximum gray value. Fp and Fe are defined as follows:

$$ {F}_p=\frac{3}{4}\left({d}_{\mathrm{max}}-{d}_{\mathrm{min}}\right) $$
(10)
$$ {F}_e=2 $$
(11)

The generalized contrast enhancement operator is then applied to μi,j, and the corresponding formula is:

$$ \mu_{ij}^{\prime}=\begin{cases} 2^{q-1}\mu_{ij}^{q}, & \mu_{ij}\le a \\ 1-2^{q-1}\left(1-\mu_{ij}\right)^{q}, & \text{otherwise} \end{cases} $$
(12)

where q = 2. The threshold a in Eq. (12) is usually fixed at 0.5, but a fixed value is not always appropriate. In this paper, the Otsu method is applied to select the best threshold. The Otsu algorithm divides the pixels of the image into two classes, C1 and C2, according to a gray threshold T: C1 consists of the pixels with gray values in [0, T], and C2 consists of the pixels with gray values in [T + 1, 255]. The between-class variance of C1 and C2 is defined as follows [34]:

$$ O{(t)}^2={W}_1(t){W}_2(t){\left[{U}_1(t)-{U}_2(t)\right]}^2 $$
(13)

where W1(t) and W2(t) are the fractions of image pixels belonging to C1 and C2, respectively, and U1(t) and U2(t) are their average gray values. T is swept over [0, 255], and the value of T that maximizes the between-class variance is taken as the best threshold a.
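A compact sketch of this threshold selection is given below. Because Eq. (12) thresholds membership values μ ∈ [0, 1] rather than 8-bit gray levels, the sketch first quantizes the membership plane into a histogram, which is an assumption about how the search is carried out.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Select the threshold maximizing the between-class variance of Eq. (13)."""
    hist, edges = np.histogram(values.ravel(), bins=bins, range=(0.0, 1.0))
    prob = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    best_t, best_var = 0.5, -1.0
    for t in range(1, bins):
        w1, w2 = prob[:t].sum(), prob[t:].sum()              # W1(t), W2(t)
        if w1 == 0.0 or w2 == 0.0:
            continue
        u1 = (prob[:t] * centers[:t]).sum() / w1             # U1(t)
        u2 = (prob[t:] * centers[t:]).sum() / w2             # U2(t)
        between = w1 * w2 * (u1 - u2) ** 2                   # O(t)^2 of Eq. (13)
        if between > best_var:
            best_var, best_t = between, edges[t]
    return best_t
```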

Finally, the inverse transformation T−1 is applied to the adjusted membership degree μ′ij, and the enhanced high-frequency coefficient Dij at location (i, j) is obtained as follows:

$$ D_{ij}=T^{-1}\left(\mu_{ij}^{\prime}\right)=d_{\max }-F_p\left(e^{-\ln \left(\mu_{ij}^{\prime}\right)/F_e}-1\right) $$
(14)
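Putting Eqs. (9)–(14) together, the following hedged sketch adjusts one high-frequency sub-band; it reuses otsu_threshold from the previous listing, and the handling of negative coefficients through their magnitudes is an assumption not spelled out in the text.

```python
import numpy as np

def fuzzy_contrast(coeff, q=2):
    """Improved fuzzy contrast of Eqs. (9)-(14) for one high-frequency sub-band.
    Assumption: the mapping acts on coefficient magnitudes and the sign is restored,
    since the paper states the transform in terms of gray values d_{i,j}."""
    sign = np.sign(coeff)
    d = np.abs(coeff).astype(np.float64)
    d_max, d_min = d.max(), d.min()
    Fp = 0.75 * (d_max - d_min)                               # Eq. (10)
    Fe = 2.0                                                  # Eq. (11)
    if Fp == 0.0:                                             # flat sub-band: nothing to adjust
        return coeff
    mu = (1.0 + (d_max - d) / Fp) ** (-Fe)                    # Eq. (9)
    a = otsu_threshold(mu)                                    # best threshold from Eq. (13)
    mu_new = np.where(mu <= a,
                      2.0 ** (q - 1) * mu ** q,               # Eq. (12), mu <= a branch
                      1.0 - 2.0 ** (q - 1) * (1.0 - mu) ** q) # Eq. (12), else branch
    mu_new = np.clip(mu_new, 1e-6, 1.0)                       # numerical guard before the log
    d_new = d_max - Fp * (np.exp(-np.log(mu_new) / Fe) - 1.0) # Eq. (14), inverse mapping
    return sign * d_new
```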

3.3 Steps of the proposed method

  1. Step 1:

    The input image is decomposed into a low-frequency component and some high-frequency components by NSST.

  2. Step 2:

    The guided filter with Eqs. (4)–(8) is used to improve the contrast of the low-frequency component, and the high-frequency components are processed by the improved fuzzy contrast with Eqs. (9)–(14).

  3. Step 3:

    The inverse NSST (INSST) is applied to the processed coefficients to obtain the reconstructed image, and the enhanced image is achieved.

The flow chart of the proposed method is shown in Fig. 3.
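The three steps can be tied together as in the sketch below; nsst_decompose and nsst_reconstruct are hypothetical placeholders for an NSST toolbox (no specific implementation is referenced in the paper), while enhance_low_frequency and fuzzy_contrast are the sketches from Sections 3.1 and 3.2.

```python
def enhance_image(img, beta=4.0):
    """Steps 1-3 of the proposed method (sketch only).
    nsst_decompose / nsst_reconstruct are hypothetical stand-ins for an NSST toolbox:
    the former returns one low-frequency band and a list of directional high-frequency
    sub-bands, the latter inverts the transform (INSST)."""
    low, highs = nsst_decompose(img, levels=4, directions=(1, 2, 4, 8))   # Step 1
    low_enh    = enhance_low_frequency(low, beta=beta)                    # Step 2a, Eqs. (4)-(8)
    highs_enh  = [fuzzy_contrast(h) for h in highs]                       # Step 2b, Eqs. (9)-(14)
    return nsst_reconstruct(low_enh, highs_enh)                           # Step 3, INSST
```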

Fig. 3
figure 3

The flow chart of the proposed approach

4 Results and discussions

In this section, a large number of hyperspectral remote sensing images are processed in Matlab 2016b. To verify the effectiveness of the proposed approach, it is compared with traditional histogram equalization (HE) [47], image enhancement based on NSCT and unsharp masking (NSCT1) [29], remote sensing image enhancement based on NSST and guided filtering (NSST) [27], image enhancement based on a mean filter and unsharp masking in the NSCT domain (NSCT2) [24], linking synaptic computation for image enhancement (LSCN) [43], image enhancement based on the nonsubsampled contourlet transform (NSCT3) [16], and image enhancement based on CLAHE and unsharp masking in the NSCT domain (NSCT4) [17]. We use both visual inspection and objective indicators as evaluation criteria. The objective indicators are entropy (H) [18], measure of enhancement (EME) [20], structural similarity index metric (SSIM) [11, 30], average pixel intensity (API) [27], peak signal-to-noise ratio (PSNR) [1, 13], correlation coefficient (CC) [3], root mean square error (RMSE) [32], relative average spectral error (RASE) [31], and running time (s). The corresponding data are shown in Tables 1, 2, 3, 4 and 5. Higher values of H, EME, SSIM, API, PSNR and CC indicate a better enhancement effect, whereas lower values of RMSE, RASE and running time are better. In this paper, β is set to 4; the number of NSCT decomposition levels is 3, with 8, 16 and 16 directions; the number of NSST decomposition levels is 4, with 1, 2, 4 and 8 directions.

Table 1 The evaluation metric of the eight methods on Image 1
Table 2 The evaluation metric of the eight methods on Image 2
Table 3 The evaluation metric of the eight methods on Image 3
Table 4 The evaluation metric of the eight methods on Image 4
Table 5 The average evaluation metric of the eight methods on 50 images with the size 512 × 512

4.1 Subjective assessment

Fig. 4 shows the enhancement results on Image 1, whose size is 512 × 512. Fig. 4(a) is the original image; Fig. 4(b) shows the image enhanced by HE, which exhibits over-enhancement; Fig. 4(c) presents the result of NSCT1, in which there is obvious distortion; the image enhanced by NSST is shown in Fig. 4(d), and it also exhibits over-enhancement; Fig. 4(e) represents the result of NSCT2, whose contrast is a little low; Fig. 4(f) represents the result of LSCN, whose definition is a little low; Fig. 4(g) shows the image enhanced by NSCT3, in which some blurred areas appear; Fig. 4(h) depicts the result of NSCT4, which makes some regions too dark; Fig. 4(i) is the image enhanced by the proposed algorithm, which shows an obvious enhancement effect.

Fig. 4
figure 4

Comparisons on Image 1. a Original image; b HE; c NSCT1; d NSST; e NSCT2; f LSCN; g NSCT3; h NSCT4; i Proposed method

Fig. 5 shows the enhancement results on Image 2, whose size is 512 × 512. Fig. 5(a) is the original image; Fig. 5(b) depicts the image enhanced by HE, which shows over-enhancement in some regions; Fig. 5(c) is the result of NSCT1, which is too bright; Fig. 5(d) shows the image produced by NSST, whose visual effect is poor; Fig. 5(e) shows the result of NSCT2, which is a little dark; Fig. 5(f) depicts the image produced by LSCN, whose definition is relatively low; the image enhanced by NSCT3 is shown in Fig. 5(g) and is distorted; Fig. 5(h) is the result of NSCT4; the result of the proposed technique is shown in Fig. 5(i), which has moderate brightness and clear edges.

Fig. 5
figure 5

Comparisons on Image 2. a Original image; b HE; c NSCT1; d NSST; e NSCT2; f LSCN; g NSCT3; h NSCT4; i Proposed method

Fig. 6 shows the results on Image 3, whose size is 512 × 512. Fig. 6(a) presents the original image; Fig. 6(b) is the image enhanced by HE, which makes some regions too dark; Fig. 6(c) depicts the result of NSCT1, whose brightness is too high; Fig. 6(d) shows the image enhanced by NSST, in which the detail distortion is severe; Fig. 6(e) presents the result of NSCT2; the image enhanced by LSCN is shown in Fig. 6(f); the results of NSCT3 and NSCT4 are shown in Fig. 6(g)-(h), respectively; Fig. 6(i) depicts the image enhanced by the proposed approach, in which both the objects and the background are clearly visible.

Fig. 6
figure 6

Comparisons on Image 3. a Original image; b HE; c NSCT1; d NSST; e NSCT2; f LSCN; g NSCT3; h NSCT4; i Proposed method

Fig. 7 shows the results on Image 4, whose size is 512 × 512. Fig. 7(a) is the original image; Fig. 7(b) shows the image produced by HE; Fig. 7(c)-(d) depict the results of NSCT1 and NSST, respectively, whose visual effects are rather poor; Fig. 7(e)-(f) are the images enhanced by NSCT2 and LSCN, respectively, and both achieve some enhancement effect; Fig. 7(g) shows the result of NSCT3, but its contrast is low; Fig. 7(h) is the image enhanced by NSCT4, which exhibits over-enhancement; the result of the proposed algorithm is shown in Fig. 7(i), which preserves the image edges well and clearly enhances the contrast.

Fig. 7
figure 7

Comparisons on Image 4. a Original image; b HE; c NSCT1; d NSST; e NSCT2; f LSCN; g NSCT3; h NSCT4; i Proposed method

4.2 Objective assessment

The information entropy (H) measures the amount of information contained in an image and is evaluated as follows:

$$ Entropy(p)=-\sum \limits_{l=0}^{L-1}P(l)\log P(l) $$
(15)

where P(l) is the probability of gray level l in the image and L is the number of gray levels. A larger entropy indicates that more information content is available in the enhanced image.

The measure of enhancement (EME) is described as follows:

$$ EME=\frac{1}{k_1 k_2}\sum \limits_{l=1}^{k_1}\sum \limits_{k=1}^{k_2}20\ln \left[\frac{I_{\max ;k,l}^{w}}{I_{\min ;k,l}^{w}+c}\right] $$
(16)

where the image is divided into k1 × k2 blocks, Iwmax;k,l and Iwmin;k,l are the maximum and minimum intensities in block (k, l), and c = 0.0001 avoids division by zero. A higher EME indicates a stronger enhancement effect.
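The two metrics can be computed as in the following sketch; the base-2 logarithm for the entropy and the 8 × 8 block grid for the EME are assumed choices.

```python
import numpy as np

def entropy(img, levels=256):
    """Information entropy H of Eq. (15) for an 8-bit image (base-2 logarithm assumed)."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def eme(img, k1=8, k2=8, c=1e-4):
    """Measure of enhancement of Eq. (16): split the image into k1 x k2 blocks and
    average the log-ratio of the block maximum to the block minimum."""
    h, w = img.shape
    bh, bw = h // k1, w // k2
    total = 0.0
    for i in range(k1):
        for j in range(k2):
            block = img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].astype(np.float64)
            total += 20.0 * np.log(block.max() / (block.min() + c))
    return total / (k1 * k2)
```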

The structural similarity index metric (SSIM) reflects the degree of distortion of the image, and it is defined as follows:

$$ SSIM\left(x,y\right)=\frac{\left(2{\mu}_x{\mu}_y+{c}_1\right)\left(2{\sigma}_{xy}+{c}_2\right)}{\left({\mu}_x^2+{\mu}_y^2+{c}_1\right)\left({\sigma}_x^2+{\sigma}_y^2+{c}_2\right)} $$
(17)

where μx and μy denote the mean values of x and y, σx2 and σy2 denote their variances, and σxy is their covariance. c1 and c2 are constants, defined as follows:

$$ {c}_1={\left({k}_1D\right)}^2 $$
(18)
$$ {c}_2={\left({k}_2D\right)}^2 $$
(19)

where k1 and k2 are set to 0.01 and 0.03, respectively. D is the dynamic range of pixel values.
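For illustration, Eq. (17) evaluated once over two whole images is sketched below; practical SSIM averages this quantity over local windows (e.g. skimage.metrics.structural_similarity provides such an implementation), so the global version is only a direct reading of the formula.

```python
import numpy as np

def ssim_global(x, y, D=255.0, k1=0.01, k2=0.03):
    """Eq. (17) computed once over the whole image pair (practical SSIM uses local windows)."""
    x = x.astype(np.float64); y = y.astype(np.float64)
    c1, c2 = (k1 * D) ** 2, (k2 * D) ** 2                      # Eqs. (18)-(19)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```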

The average pixel intensity (API) is the mean of the image and serves as an index of its overall brightness; it is defined as follows:

$$ API=\frac{1}{MN}\sum \limits_{i=1}^M\sum \limits_{j=1}^NI\left(i,j\right) $$
(20)

where M and N are the dimensions of the image and I(i, j) is the pixel intensity at (i, j).

The peak signal-to-noise ratio (PSNR) reflects the denoising performance of an algorithm: the higher the PSNR, the better the noise suppression. It is defined as follows:

$$ PSNR=10\times \log \frac{U^2}{(RMSE)^2} $$
(21)

where U denotes the maximum pixel value, and the root mean square error (RMSE) is defined as follows:

$$ RMSE=\sqrt{\frac{1}{M\times N}{\sum}_{x=1}^N{\sum}_{y=1}^M{\left({f}_{x,y}-{h}_{x,y}\right)}^2} $$
(22)

where fx,y and hx,y denote the pixel values of the enhanced and original images, respectively.
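A minimal sketch of Eqs. (21)–(22) follows; U = 255 for 8-bit images and the base-10 logarithm are assumptions consistent with common practice.

```python
import numpy as np

def rmse(f, h):
    """Root mean square error of Eq. (22) between enhanced image f and original image h."""
    return float(np.sqrt(np.mean((f.astype(np.float64) - h.astype(np.float64)) ** 2)))

def psnr(f, h, U=255.0):
    """Peak signal-to-noise ratio of Eq. (21), in dB, assuming an 8-bit pixel range."""
    e = rmse(f, h)
    return float('inf') if e == 0 else 10.0 * np.log10(U ** 2 / e ** 2)
```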

The correlation coefficient (CC) measures the similarity of small-scale structures between the original and enhanced images; values close to 1 indicate that the two images are highly similar. The corresponding equation is defined as follows:

$$ CC=\frac{2{\sum}_{x=0}^{M-1}{\sum}_{y=0}^{N-1}Q\left(x,y\right)P\left(x,y\right)}{{\sum}_{x=0}^{M-1}{\sum}_{y=0}^{N-1}{\left|Q\left(x,y\right)\right|}^2+{\sum}_{x=0}^{M-1}{\sum}_{y=0}^{N-1}{\left|P\left(x,y\right)\right|}^2} $$
(23)

where Q and P denote the enhanced and original images, respectively.

The relative average spectral error (RASE), expressed as a percentage, characterizes the average performance of the method over the given spectral bands, and it can be computed by:

$$ RASE=\frac{100}{M}\sqrt{\frac{1}{N}\sum \limits_{i=1}^N RMSE\left({F}_i\right)} $$
(24)

where M is the mean radiance of the N original images Fi. The lower the error, the better the algorithm.
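Eqs. (23)–(24) can be sketched as below, reusing rmse from the previous listing; treating a list of spectral bands as the N original images Fi is an assumption about how the spectral dimension is handled, and Eq. (24) is implemented as printed (some references square RMSE(Fi) inside the sum).

```python
import numpy as np

def correlation_coefficient(Q, P):
    """CC of Eq. (23) between the enhanced image Q and the original image P."""
    Q = Q.astype(np.float64); P = P.astype(np.float64)
    return 2.0 * np.sum(Q * P) / (np.sum(Q ** 2) + np.sum(P ** 2))

def rase(originals, enhanced):
    """RASE of Eq. (24); originals and enhanced are sequences of the N spectral bands."""
    N = len(originals)
    M = np.mean([f.astype(np.float64).mean() for f in originals])   # mean radiance of the originals
    s = sum(rmse(e, f) for e, f in zip(enhanced, originals))        # as printed in Eq. (24)
    return 100.0 / M * np.sqrt(s / N)
```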

Tables 1, 2, 3 and 4 show the objective metrics of the proposed algorithm and the comparison approaches. Table 1 shows that the proposed technique achieves the best H, SSIM and RASE. Tables 2, 3 and 4 indicate that the metrics computed by the proposed technique also have obvious advantages.

To further confirm the effectiveness of the proposed algorithm, 50 hyperspectral remote sensing images of size 512 × 512 from the USC-SIPI Database are processed, and the average metrics are shown in Table 5. The data show that the proposed algorithm performs well in terms of H, SSIM, PSNR, RMSE and RASE compared with the other seven approaches; the EME and CC of HE are the best, but the corresponding values of the proposed algorithm are still ranked third; the API of NSCT1 is the best, and the API of the proposed technique is ranked third; the running time of HE is the shortest, while the proposed method is ranked second, so its computational efficiency is relatively high. Overall, the results indicate that the proposed algorithm enhances hyperspectral remote sensing images effectively and delivers a satisfactory effect.

5 Conclusions

In this paper, a novel hyperspectral remote sensing image enhancement method based on the NSST is proposed, and it achieves good performance in both subjective and objective terms. We mainly process the decomposition coefficients of the NSST and make full use of the advantages of the guided filter and the fuzzy contrast in image processing. The main steps of the algorithm are as follows: first, the input image is decomposed into one low-frequency component and several high-frequency components by the NSST; second, the guided filter is applied to enhance the contrast of the low-frequency component, and the improved fuzzy contrast is adopted to suppress the noise of the high-frequency components; third, the inverse NSST is used to reconstruct the image and obtain the enhanced result. Simulation results clearly show that the proposed approach outperforms other state-of-the-art techniques in visual quality, and the experimental data compare favorably with previous methods. However, some parameter values in this algorithm are still set empirically, and future research will focus on making the method adaptive.