1 Introduction

With rich spectral information contained in hundreds of contiguous spectral bands from the visible to the near-infrared range (400 nm to 2500 nm) at a spectral resolution of about 10 nm [53], hyperspectral images (HSI) have been applied in fields ranging from aerospace [22], cultivation [61], climate monitoring [24], document verification [45], earth observation [41], food quality control [40], forestry [46], military reconnaissance [27], pharmaceuticals [34], and pollution detection [1] to soil health observation [2]. Beyond these applications, remote sensing [55] is one of the most active areas of hyperspectral (HS) image processing, where researchers develop algorithms for image compression for faster data transmission [21], feature extraction for target detection [16], image denoising [14], image segmentation [43], change detection [60], object classification for land-cover analysis [42], and object recognition [37, 38]. Satellite-based hyperspectral sensors acquire HSI from their remote platforms, process them on board, and transmit them through high-frequency wireless channels [18]. An HSI contains hundreds of spectral frames, with a pixel depth of about 12 to 16 bit/pixel [70]. Consequently, a large amount of memory is required to store even a few HSIs in the onboard sensor memory [4, 5].

Electronic imaging sensors are known to generate large volumes of data [62], so compression is a necessary step to save memory and improve sensor performance. Besides HSI storage, transmission bandwidth, sensor performance, power management, and data transmission time are the major issues that an HSI compression algorithm can address [54, 63].

On the basis of data loss, hyperspectral image compression algorithms (HSICA) are classified into three categories: lossy, lossless, and near-lossless [44, 73]. On the basis of the coding process, HS image compression algorithms are classified into six categories: prediction-based algorithms [72], neural network (NN) based algorithms [39], vector quantization based algorithms [74], machine learning based algorithms [81], transform-based algorithms [6], and hybrid compression algorithms [11].

In prediction-based algorithms, a predictor (spatial, spectral, or spatial-spectral) exploits the correlation between the frames of the HS image and computes a prediction error, which is then encoded by an entropy coder such as Huffman or arithmetic coding [80]. Prediction-based algorithms are data dependent (the compression ratio depends on the image) and work only for lossless compression. Adaptive Differential Pulse-Code Modulation (ADPCM), Variable-Length Coding (VLC), LookUp Tables (LUT), Cluster DPCM, and Context-based Adaptive Lossless Image Coding (CALIC) are promising prediction-based HS image compression algorithms [26, 78].
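As a rough illustration of the idea (not the scheme of any particular reference), the following numpy sketch predicts each band from the previous one and keeps only the residual for entropy coding; the cube shape and function names are hypothetical.

```python
import numpy as np

def spectral_dpcm_residuals(cube):
    """Minimal spectral DPCM: predict each band from the previous one
    and keep the prediction error (residual) for entropy coding.
    cube: integer array of shape (bands, rows, cols)."""
    residuals = np.empty_like(cube)
    residuals[0] = cube[0]                 # first band is sent as-is
    residuals[1:] = cube[1:] - cube[:-1]   # inter-band prediction error
    return residuals

def reconstruct(residuals):
    """Lossless inverse: a cumulative sum along the spectral axis
    restores the original bands exactly."""
    return np.cumsum(residuals, axis=0)

# toy usage with a hypothetical 4-band, 2x2 cube of ~12 bit/pixel data
cube = np.random.randint(0, 4096, size=(4, 2, 2))
assert np.array_equal(reconstruct(spectral_dpcm_residuals(cube)), cube)
```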

Vector quantization (VQ) based compression algorithms construct a codebook (spectral library) and are used when the spectral correlation is stronger than the spatial correlation. VQ is an optimal block coding strategy in which the algorithm compresses spectral blocks. VQ has three stages: codebook generation, encoding, and decoding. The codebook is trained on the input image and contains many codewords for the image blocks. In the encoding stage, the best codeword for each image block is found; all blocks are encoded as codeword indices, and the codebook is transmitted along with them. In the decoding stage, the image is reconstructed with the help of the received codebook [10].
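A minimal sketch of the encoding and decoding stages described above, assuming the image has already been split into spectral blocks and a codebook has been trained; the array shapes and names are illustrative only.

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Assign each spectral block to its nearest codeword (Euclidean
    distance); only the indices and the codebook are transmitted.
    blocks: (num_blocks, block_len); codebook: (num_codewords, block_len)."""
    # squared distances between every block and every codeword
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    """Rebuild the image blocks from the received indices and codebook."""
    return codebook[indices]

# toy usage: 100 blocks of length 8, codebook of 16 codewords (hypothetical sizes)
blocks = np.random.rand(100, 8)
codebook = np.random.rand(16, 8)
idx = vq_encode(blocks, codebook)
approx = vq_decode(idx, codebook)          # lossy reconstruction
```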

Transform-based HS image compression algorithms support both lossy and lossless compression. They apply a mathematical transform (Fourier transform, cosine transform, wavelet transform, Karhunen-Loeve transform, etc.) to map the image data into a domain where it is represented by a few weakly correlated, high-energy coefficients [76]. These algorithms perform remarkably well at low bit rates in lossy compression. Karami et al. [32] proposed a compression algorithm based on the 3D-DCT and sparse Tucker decomposition. Bilgin et al. [12] compressed HS images with 3D-EZW on top of the 3D-DWT. Penna et al. [51, 52] used the Karhunen-Loeve transform (KLT) to decorrelate the spectral dimension of HS images. The KLT combined with JPEG2000 achieved a high coding gain by removing the correlation between the pixels of the HS image [35]. Wang et al. proposed a 3D lapped transform approach for HS image compression [78].
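For illustration, a minimal numpy sketch of spectral decorrelation with the KLT, implemented as an eigen-decomposition of the band covariance matrix; this is a generic PCA-style sketch under assumed names, not the exact procedure of [51, 52].

```python
import numpy as np

def spectral_klt(cube):
    """Karhunen-Loeve transform along the spectral axis: project the
    band vectors onto the eigenvectors of the band covariance matrix,
    so most of the energy concentrates in a few components.
    cube: float array of shape (bands, rows, cols)."""
    bands, rows, cols = cube.shape
    x = cube.reshape(bands, -1)            # one column per pixel
    mean = x.mean(axis=1, keepdims=True)
    cov = np.cov(x - mean)                 # (bands, bands) covariance
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]       # strongest components first
    basis = eigvec[:, order]
    coeffs = basis.T @ (x - mean)          # decorrelated spectral components
    return coeffs.reshape(bands, rows, cols), basis, mean
```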

NN-based algorithms use a multi-layer neural network structure. The complexity of NN-based compression algorithms grows rapidly, but so does the coding gain [31]. Before compression starts, the NN is trained with machine learning algorithms. An NN combined with predictive coding has outstanding prediction capability; the small prediction error increases the coding gain. Autoencoders, recurrent neural networks, feed-forward neural networks, and radial basis function networks are the main types of neural networks used extensively in image compression [56, 57]. Tensor decomposition has also been combined with deep learning in CNN-NTD [39, 64].

Machine learning based HS image compression uses support vector machines, tensor decomposition, and deep learning methodologies [50]. It achieves a coding gain as high as NN-based HS image compression, but the complexity also increases significantly. The wavelet transform is typically applied before the tensor decomposition. Zhang et al. [81] proposed a tensor decomposition approach that obtains a core tensor regarded as representative of the original tensor. Das [20] presented a tensor-based compression algorithm that works for both HS images and video.

Hybrid compression algorithms combine any two of the methods mentioned above. They have higher coding efficiency than the other types of compression algorithms, but this coding gain comes at the cost of high coding complexity. Li et al. [36] presented an algorithm that uses a predictor together with VQ. Báscones et al. [19] used VQ and PCA for spectral decorrelation and then applied JPEG2000 to the obtained principal components for spatial decorrelation.

Although 3D-LMBTC [7] and 3D-ZM-SPECK [9] reduce the demand for coding memory significantly, they increase the complexity of the compression scheme relative to other state-of-the-art HS image compression schemes such as 3D-LSK [49] and 3D-NLS [65]. The proposed HS image compression scheme is a low-memory solution with high coding efficiency and low complexity.

The remainder of this paper is structured as follows. Section 2 discusses related work relevant to 3D-LCBTC, including the dyadic wavelet transform, set partitioned HS image compression algorithms, and 3D-WBTC [6]. Section 3 describes the proposed HS image compression algorithm, 3D-LCBTC. Section 4 presents the experimental results and discussion, and Section 5 concludes the paper.

2 Related work

2.1 Dyadic wavelet transform

The wavelet transform is a widely used mathematical transform for image compression. It is a powerful tool that converts the image into another domain with few high-energy (low-frequency) components. Transform-based image compression algorithms support both lossy (up to the available bit budget) and lossless compression [9, 71], so they are widely used in the compression process. The wavelet transform is applied to an HS image in one of three ways: a 3D wavelet transform on the whole HS image, a 2D wavelet transform applied frame by frame in the spatial domain, or a 2D wavelet transform in the spatial domain followed by a 1D transform in the spectral domain [78].

The performance of the wavelet transform is better than that of other mathematical transforms. A single-level 3D wavelet transform decomposes the HS image into eight non-overlapping coefficient sub-bands, scanned in the order LLL, LHL, HLL, LLH, HHL, HLH, LHH, and HHH, where H denotes high-pass and L low-pass filtering. The coefficients in the LLL sub-band are regarded as the coarsest coefficients. An n-level 3D-DWT is obtained by repeating the decomposition n times on the HS image cube. The inverse transform reconstructs the original HS image by upsampling and filtering the sub-bands [3, 79].
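A short PyWavelets sketch of the single-level and multi-level 3D decompositions described above, assuming a hypothetical random cube; pywt labels the eight sub-bands 'aaa' through 'ddd' (approximation/detail per axis) rather than LLL through HHH, and the wavelet choice here ('bior4.4') is only an example.

```python
import numpy as np
import pywt

# hypothetical 64 x 64 x 64 HS cube; real test images are much larger
cube = np.random.rand(64, 64, 64)

# single-level separable 3D DWT: eight non-overlapping sub-bands
coeffs = pywt.dwtn(cube, 'bior4.4')
print(sorted(coeffs.keys()))          # ['aaa', 'aad', ..., 'ddd']; 'aaa' ~ LLL

# an n-level dyadic decomposition repeats the split on the 'aaa' sub-band
multi = pywt.wavedecn(cube, 'bior4.4', level=2)

# the inverse transform reconstructs the cube (up to floating-point error)
rec = pywt.waverecn(multi, 'bior4.4')
assert np.allclose(rec, cube)
```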

2.2 3D set partitioned wavelet transform based HS image

3D set partitioned wavelet transform based HS image compression algorithms have advantages over other transform-based compression algorithms, such as embeddedness, low computational complexity, low coding memory requirement, and high coding efficiency [5]. According to the set partitioning rule, these algorithms are further subdivided into three types.

  1. Zero Block Cube Compression Algorithms: These algorithms partition the HS image cube into contiguous block cubes. A zero block cube is a block cube containing no significant coefficient with respect to the current threshold. 3D-SPECK [67], 3D-LSK [49], 3D-SPEZBC [28], and 3D-ZM-SPECK [9] are well-known HSICAs in this category.

  2. Zero Tree Compression Algorithms: These algorithms partition the HS image by grouping the wavelet coefficients corresponding to the same spatial location into spatial orientation trees (SOT) [6, 23]. A zero tree is an SOT containing no significant coefficient with respect to the current threshold. 3D-SPIHT [68], DDWT-BISK [13], and 3D-NLS [65] are well-known HSICAs in this category.

  3. Zero Block Cube Tree Compression Algorithms: These algorithms combine the useful features of both zero block cube and zero tree compression algorithms. They partition the HS image into contiguous block cubes, and block cube trees are then formed with roots in the topmost sub-band in a zero tree fashion. 3D-WBTC [6] and 3D-LMBTC [7] are well-known HSICAs in this category. They perform remarkably well at low bit rates because they can accumulate more insignificant coefficients than 3D-SPIHT [68] and 3D-NLS [65].

Tang et al. proposed 3D-SPECK [67], which extends SPECK from grayscale images. 3D-LSK [49] is a listless version of 3D-SPECK that uses a small coding memory to track the significance of the partitioned sets and coefficients. 3D-SPIHT [68] and its listless version 3D-NLS [65] are also state-of-the-art set partitioned HS image compression algorithms.

3D-WBTC [6] utilizes a block cube tree structure to achieve high coding efficiency at low bit rates, but its performance degrades at high bit rates because of its complex structure and the many read/write operations on its linked lists. 3D-LMBTC [7] is a listless version of 3D-WBTC [6] that reduces the complexity and coding memory. 3D-ZM-SPECK [9] is a special case of 3D-SPECK [67] that follows the same partitioning rule but does not use any linked lists or state tables. The memory demand of 3D-ZM-SPECK [9] is reduced significantly, but its complexity is higher than that of 3D-LSK [49] because each set or coefficient is tested at every bit plane.

Listless HSICAs such as 3D-LSK, 3D-NLS, 3D-LMBTC, and 3D-ZM-SPECK do not use linked lists to track the significance of sets or coefficients; instead, they use state tables or markers. Linked lists are data dependent, and their many memory read/write operations increase the complexity of the compression process [9]. The coding memory of a listless HSICA is fixed and depends only on the size and pixel depth of the HS image.

Table 1 gives a comparative analysis of some state-of-the-art 3D wavelet based set partitioned HS image compression schemes.

Table 1 Wavelet transform based set partitioned HS image compression algorithm

2.3 3D wavelet block tree coding (3D-WBTC)

3D-WBTC [6] is a block cube tree based set partitioned HS image compression algorithm that exploits the redundancies within the sub-bands. It combines the useful features of 3D-SPECK [67] and 3D-SPIHT [68]. 3D-WBTC [6] is based on spatial orientation trees (SOT) in which a node is a block cube of m × n × p coefficients rather than a single coefficient as in 3D-SPIHT [68]. Each SOT has a root node in the LLL band of the transformed HS image. By forming block cube trees, eight SOTs of 3D-SPIHT are merged into a single SOT of 3D-WBTC [6]. A set of descendant block cubes is referred to as a type 'A' block cube tree and a set of grand-descendant block cubes as a type 'B' block cube tree.

3D-WBTC [6] uses three linked lists to store the significance information of the partitioned sets and coefficients: the list of insignificant blocks (LIB), the list of insignificant block sets (LIBS), and the list of significant pixels (LSP). At initialization, the block cubes of the topmost LLL band are added to the LIB and their descendants to the LIBS, while the LSP starts empty. Each block cube in the LIB has eight offspring block cubes at the same spatial orientation in the higher-frequency sub-bands. Each bit plane starts with a sorting pass followed by a refinement pass.

In the sorting pass, the coefficients are encoded from the topmost bit plane toward the least significant bit plane. If a block cube is insignificant with respect to the current threshold, a '0' is generated for the whole block cube; the block cube remains in the LIB and is tested again at the succeeding threshold. If a block cube is significant with respect to the current threshold, a '1' is generated and the block cube is partitioned into eight equal smaller block cubes through octa-tree partitioning; the parent block cube is removed from the LIB. The octa-tree partitioning continues until the coefficient level (block cube size of one) is reached. If a coefficient is significant with respect to the current threshold, a '1' (significance) is generated together with its sign bit and the coefficient moves to the LSP; otherwise the coefficient moves to the LIB. An insignificant block cube and its descendants remain in the LIBS. A significant type 'A' block cube set is partitioned into a type 'B' block cube set and eight offspring block cubes; the type 'B' set is added to the end of the LIBS while the eight offspring block cubes are tested against the current threshold. A significant type 'B' block cube set is partitioned into type 'A' block cube sets, which are added to the end of the LIBS. This process continues until all block cube sets are encoded. After the sorting pass, the refinement pass is initiated for the current threshold and one refinement bit is generated for each previously significant coefficient. The threshold is then halved and the process runs until the bit budget is exhausted.
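To make the octa-tree step concrete, the sketch below (hypothetical helper names, not the authors' code) shows how a block cube is split into eight equal sub-cubes and how its significance against the current threshold can be tested.

```python
import numpy as np

def octa_split(origin, size):
    """Split a block cube, given by its top corner (s, t, u) and edge
    length, into its eight equal sub-cubes, as in the octa-tree
    partitioning step of the sorting pass."""
    s, t, u = origin
    h = size // 2
    corners = [(s + ds, t + dt, u + du)
               for ds in (0, h) for dt in (0, h) for du in (0, h)]
    return corners, h

def is_significant(block, threshold):
    """A block cube is significant if any coefficient magnitude
    reaches the current threshold (block: numpy sub-array)."""
    return bool((np.abs(block) >= threshold).any())

# usage: an 8x8x8 block cube splits into eight 4x4x4 sub-cubes
corners, half = octa_split((0, 0, 0), 8)
```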

3 3D-low complexity block tree coding (3D-LCBTC)

3D-LCBTC is a light-weight version of 3D-WBTC [6] with high coding gain, low coding memory requirement, and low complexity. It is a zero block cube tree based set partitioned hyperspectral image compression algorithm with the same partitioning rule as 3D-WBTC [6]. Like 3D-WBTC [6], 3D-LCBTC considers a block cube as a node of the spatial orientation tree; in this way, a large number of insignificant descendants can be represented together, so zero block cube tree HS image compression schemes achieve a higher gain at low bit rates. Each node block cube in the topmost LLL band has seven offspring nodes in the highest decomposition level sub-bands. 3D-LCBTC uses a fixed block cube size (2 × 2 × 2) rather than changing the block cube size as in 3D-WBTC [6]; the fixed block cube size reduces the complexity of the proposed compression scheme significantly. Unlike other set partitioned compression schemes, 3D-LCBTC executes the refinement pass before the sorting pass.

The 3D-LCBTC uses two state tables and two linked list for the tracking of the partitioned block cube or coefficients. Detail of the state table and linked list is given in Table 2.

Table 2 Associated state tables and linked lists used in 3D-LCBTC

The BCSM and DSM are used to store the significance information of each node. 3D-LCBTC consists of two passes: an initialization pass and bit plane passes. Each bit plane pass consists of a sorting pass (SP) and a refinement pass (RP). Unlike 3D-SPIHT [68] or 3D-WBTC [6], 3D-LCBTC executes the refinement pass before the sorting pass. The transform coefficients that became significant at an earlier bit plane are encoded in the refinement pass, where one refinement bit is generated per coefficient. The transform coefficients or block cubes that have not become significant at a previous threshold are encoded in the sorting pass. The BCSM gives the significance information of each block cube of the HS image, while the DSM gives the information about the associated descendants. A block cube 'a' is significant if BCSM(a) = 1, while the significance of the descendants of each node is stored in the DSM. When all descendants of node 'a' are significant at the current bit plane, the whole block cube tree originating from node 'a' is significant, represented as DSM(a) = 1; such a block cube tree is encoded in the refinement pass instead of the sorting pass. Two associated lists, LCBC and LPBC, are used in the sorting pass.

Initialization: The encoding process starts from the topmost (most significant) bit plane 'n' and proceeds toward the lower bit planes while the bit budget is available. The 3D transformed HS image is converted to a 1D array Ci through Morton mapping [9]. The highest bit plane index n is calculated by Eq. 1 and the magnitude of the threshold T by Eq. 2.

$$ n=\left\lfloor {\log}_2\left[\max \left\{\left|{C}_i\right|\right\}\right]\right\rfloor $$
(1)
$$ T={2}^{\mathrm{n}} $$
(2)

The block cubes in the LLL band of the transformed HS image are assigned '1' in the BCSM, while all other block cubes are assigned '0'. All entries of the DSM are initialized to '0'. The associated lists LCBC and LPBC are initialized as empty.
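A minimal sketch of this initialization, assuming the transform coefficients have already been Morton-mapped into a 1D array so that the topmost LLL band occupies the lowest indices; the function and variable names are hypothetical.

```python
import numpy as np

def initial_threshold(coeffs):
    """Top bit plane n and threshold T of Eqs. (1)-(2):
    n = floor(log2(max |C_i|)), T = 2**n."""
    n = int(np.floor(np.log2(np.max(np.abs(coeffs)))))
    return n, 2 ** n

def init_state_tables(num_coeffs, num_lll_block_cubes):
    """BCSM holds one flag per 2x2x2 block cube; the block cubes of the
    topmost LLL band start as '1', everything else as '0'.  DSM holds
    one flag per block cube tree and starts as all zeros."""
    bcsm = np.zeros(num_coeffs // 8, dtype=np.uint8)
    bcsm[:num_lll_block_cubes] = 1         # LLL band block cubes
    dsm = np.zeros(num_coeffs // 64, dtype=np.uint8)
    lcbc, lpbc = [], []                    # linked lists start empty
    return bcsm, dsm, lcbc, lpbc
```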

Refinement Pass: The refinement pass is executed for those block cubes that were significant at a previous bit plane; such block cubes carry the tag '1' in the BCSM. Each coefficient of a significant block cube is encoded in the RP, and a single refinement bit is sent for each coefficient that was previously significant. If a coefficient becomes significant for the first time (its block cube being already significant), the sign bit is sent along with the significance bit. The block cubes are encoded in a breadth-first order so that the more significant coefficients are encoded first, since in the 3D wavelet transformed HS image most of the information is concentrated in the higher decomposition level sub-bands.

Sorting Pass: The block cubes not handled in the refinement pass are encoded in the sorting pass. Each block cube tree has its root in the LLL band, and the sorting pass proceeds block cube tree by block cube tree: the next block cube tree is encoded only after the previous one has been encoded completely. Following this sequence, the list memory requirement is reduced significantly, and fewer read/write operations are needed, which further reduces the complexity of the proposed compression algorithm.

The significance of a block cube 'B' and of a block cube tree 'BT' (rooted at node 'a') is calculated with Eq. 3 and Eq. 4, respectively.

$$ S_n(B)=\begin{cases}1 & \text{if}\ \ T\le \max_{i\in B}\left|C_i\right| < 2T\\ 0 & \text{if}\ \ \max_{i\in B}\left|C_i\right| < T\\ \phi & \text{if}\ \ \max_{i\in B}\left|C_i\right| \ge 2T\end{cases} $$
(3)
$$ S_n(BT)=\begin{cases}1 & \text{if}\ \ T\le \max_{i\in BT}\left|C_i\right| < 2T\\ 0 & \text{if}\ \ \max_{i\in BT}\left|C_i\right| < T\end{cases} $$
(4)
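The following sketch shows one possible reading of Eqs. (3)-(4), with the 'phi' branch of Eq. (3) interpreted as a block cube that already became significant at an earlier (larger) threshold and is therefore skipped in the sorting pass; names are hypothetical.

```python
import numpy as np

def block_significance(block, T):
    """Significance of a block cube against the current threshold T
    (Eq. 3): 1 if its maximum magnitude lies in [T, 2T), 0 if it is
    below T, and None (the 'phi' case) if it already became
    significant at an earlier, larger threshold."""
    m = np.max(np.abs(block))
    if m < T:
        return 0
    if m < 2 * T:
        return 1
    return None

def tree_significance(tree_coeffs, T):
    """Significance of a block cube tree (Eq. 4): 1 if the newly
    significant magnitude range [T, 2T) is reached, 0 otherwise."""
    m = np.max(np.abs(tree_coeffs))
    return 1 if T <= m < 2 * T else 0
```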

The sorting pass uses the two linked lists LCBC and LPBC. The LCBC is used to encode the coefficients of the block cubes of the current block cube tree. The LPBC is processed after all entries in the LCBC have been processed and is used to update the state tables.

When a block cube at node 'a' together with its associated block cube tree is insignificant with respect to the current threshold, only a '0' bit is sent to the output. If the block cube itself is significant, a '1' is sent to the output and all eight of its coefficients are encoded. If all descendant block cubes are insignificant with respect to the current threshold, a '0' is sent to the output; otherwise '1' followed by '0' is sent and the offspring block cubes are appended to the end of the LCBC. Each offspring block cube that is significant with respect to the current threshold is signalled by a '1' and its eight coefficients are encoded. If a block cube at node 'a' and its offspring block cubes are significant but the remaining descendant block cubes are insignificant with respect to the current threshold, only a '0' is sent for those descendants; otherwise a '1' is generated, the offspring block cubes are added to the LCBC, and the parent block cube moves to the LPBC.

After a bit plane is completed, the threshold is halved and the next bit plane pass is executed. This process is repeated while the bit budget is available or until the desired bit rate is achieved. The decoder follows the same procedure as the encoder, except for the additional step of significance testing of the sets that contain refinement bits.

The pseudo code of 3D-LCBTC is given in Table 3. The Encode function encodes the block cube at node 'a' of the transformed HS image; 'p' is the address of a coefficient and I(p) is its value.

Table 3 Pseudo code of the proposed 3D-LCBTC encoding algorithm

Let Bs,t,u(y) be a root block cube present in the LLL band of the transformed HS image, where (s, t, u) is the address of the coefficient at the top-left corner of the block cube and 'y' is the edge length of the block cube. Os,t,u(y) is the set of offspring block cubes of the root block cube Bs,t,u(y). The offspring block cubes are defined as follows (Fig. 1).

Fig. 1

A block cube tree structure

Bs,t,u(y): a root block cube of size y × y × y containing the wavelet transform coefficients

$$ \left[\, C_{j,k,l}\ |\ s\le j\le (s+y);\ t\le k\le (t+y);\ u\le l\le (u+y)\,\right] $$

Os,t,u(y): the set of offspring block cubes of the root block cube Bs,t,u(y),

$$ O_{s,t,u}(y)=\left[\, B_{2s,2t,2u}(y),\ B_{2s+y,2t,2u}(y),\ B_{2s,2t+y,2u}(y),\ B_{2s,2t,2u+y}(y),\ B_{2s+y,2t+y,2u}(y),\ B_{2s,2t+y,2u+y}(y),\ B_{2s+y,2t,2u+y}(y),\ B_{2s+y,2t+y,2u+y}(y)\,\right] $$

Ds,t,u(y): the set of all descendant block cubes of the root block cube Bs,t,u(y)
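For illustration, a small helper (hypothetical, not from the paper) that lists the offspring block cube addresses according to the definition of Os,t,u(y) above.

```python
def offspring_block_cubes(s, t, u, y):
    """Offspring of the root block cube B_{s,t,u}(y): eight block cubes
    of the same edge length at the corresponding locations in the next
    finer sub-band, following the definition of O_{s,t,u}(y)."""
    return [(2 * s + ds, 2 * t + dt, 2 * u + du)
            for ds in (0, y) for dt in (0, y) for du in (0, y)]

# usage: offspring corners of the 2x2x2 root block cube at (0, 0, 0)
print(offspring_block_cubes(0, 0, 0, 2))
```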

The coding memory MEMLCBTC used by 3D-LCBTC is the memory used by the associated lists (LCBC and LPBC) and the state tables (BCSM and DSM).

The BCSM stores the status of each block cube. Since the block cube size in 3D-LCBTC is fixed (2 × 2 × 2), the size of the BCSM is MNP/8 bits for an HS image of size M × N × P. The DSM stores the status of the descendant block cubes; at the lowest decomposition level of the wavelet transform (L = 1) there are no descendants, so no DSM entries are required there, and the size of the DSM is MNP/64 bits. The size of the state tables is fixed throughout the coding process and depends only on the size of the test HS image, so the memory used by the state tables is fixed. The total memory (in bits) MEMST required by the state tables is given by

$$ {MEM}_{ST}=\left[\frac{MNP}{8}+\frac{MNP}{64}\right]=\left[\frac{9\ast MNP}{64}\right] $$
(5)

3D-LCBTC uses two linked lists, LCBC and LPBC, and encodes one block cube tree at a time, so only the nodes of the current tree are present in the lists at any moment. The number of nodes depends on the number of levels of the dyadic wavelet transform. The entries of the linked lists change continuously, so the linked list memory is referred to as dynamic memory. Experimentally, the maximum numbers of entries in the LCBC and LPBC lists, MEMLCBC and MEMLPBC, are calculated as

$$ {MEM}_{LCBC}=\left[8+7\left(L-2\right)\right] $$
(6)
$$ {MEM}_{LPBC}\kern0.5em =\kern0.75em 1+\sum \limits_{n=1}^{L-2}{8}^n $$
(7)

Each entry of the linked lists is an address of log2(MNP) bits.

So, the total memory MEMLL required by the linked lists is

$$ {MEM}_{LL}=\kern0.5em {\log}_2(MNP)\ast \left[9+7\left(L-2\right)+\sum \limits_{n=1}^{L-2}{8}^n\ \right] $$
(8)

The coding memory MEMLCBTC required by the 3D-LCBTC encoder is given as

$$ {MEM}_{LCBTC}=\left[\frac{9}{64} MNP+\kern0.5em {\log}_2(MNP)\ast \left\{9+7\left(L-2\right)+\sum \limits_{n=1}^{L-2}{8}^n\ \right\}\ \right] $$
(9)
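A small helper, under the assumptions of Eqs. (5)-(9), that evaluates the total coding memory; the 256 × 256 × 256 cube used in the example is only an illustration, chosen because for it the state-table term alone comes to 288 KB, matching the figure reported in Section 4.2.

```python
from math import log2

def mem_lcbtc_bits(M, N, P, L):
    """Total 3D-LCBTC coding memory in bits (Eq. 9) for an M x N x P
    hyperspectral image and an L-level wavelet transform: fixed state
    tables (BCSM + DSM) plus the two small linked lists."""
    state_tables = 9 * M * N * P / 64                            # Eq. (5)
    list_entries = 9 + 7 * (L - 2) + sum(8 ** n for n in range(1, L - 1))
    linked_lists = log2(M * N * P) * list_entries                # Eq. (8)
    return state_tables + linked_lists

# example: 256 x 256 x 256 cube, five-level transform;
# the state-table term alone is 9 * 256**3 / 64 bits = 288 KB
bits = mem_lcbtc_bits(256, 256, 256, 5)
print(bits / 8 / 1024, "KB")
```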

4 Result and analysis

The proposed HSICA is compared with the existing state-of-the-art HSICAs 3D-SPECK [67], 3D-SPIHT [68], 3D-WBTC [6], 3D-LSK [49], 3D-NLS [65], 3D-LMBTC [7], and 3D-ZM-SPECK [9]. The performance of the proposed compression algorithm 3D-LCBTC is measured in terms of coding efficiency, coding memory, and computational complexity, and it is tested on four publicly available HS images: Washington DC Mall, Urban, Jasper Ridge, and Cuprite. A detailed description of these HS images is presented in Table 4.

Table 4 Detail of HS images used for analysis

A five-level dyadic wavelet transform is used to transform the HS images, and the wavelet coefficients are quantized to the nearest integer. Morton mapping (linear indexing) is used to convert the transform coefficients from a 3D matrix to a 1D array [8]. The coefficients are then encoded with the state-of-the-art set partitioned HS image compression algorithms 3D-SPECK [67], 3D-SPIHT [68], 3D-WBTC [6], 3D-LSK [49], 3D-NLS [65], 3D-LMBTC [7], and 3D-ZM-SPECK [9], along with the proposed 3D-LCBTC. All experiments are performed on an Intel Core i3 CPU @ 1.6 GHz with 8 GB RAM and a 64-bit Windows 8.1 operating system. The HS images are cropped from the top-left corner to a cube, and zero padding is applied where required. Coding efficiency is reported in decibels (dB) as Peak Signal to Noise Ratio (PSNR) and Bjøntegaard delta PSNR (BD-PSNR), while coding memory and coding complexity are reported in kilobytes (KB) and seconds (s), respectively.

4.1 Coding efficiency

The metrics used to evaluate coding efficiency are the Peak Signal to Noise Ratio (PSNR), the Bjøntegaard delta PSNR (BD-PSNR), the Structural Similarity Index (SSIM), and the Feature Similarity Index (FSIM) [9].

The Peak Signal to Noise Ratio is the measurement of the rate-distortion (RD) performance. Mathematically, the PSNR is calculated as in Eq. 10 [58].

$$ PSNR=10\ {\log}_{10}\left(\frac{{\mathit{\operatorname{MAX}}}_a\ast {\mathit{\operatorname{MAX}}}_a\ }{MSE}\right) $$
(10)

MAXa is the maximum value of the signal. The Mean Square Error (MSE) is calculated as in Eq. 11.

$$ MSE=\frac{1}{N_{pix}}\ \sum \limits_i\sum \limits_j\sum \limits_k{\left[f\left(i,j,k\right)-\kern0.5em g\left(i,j,k\right)\right]}^2 $$
(11)

Npix is the total number of pixels over all frames of the HS image. The reconstructed HS image is denoted g(i,j,k) and the original HS image f(i,j,k).
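A direct numpy implementation of Eqs. (10)-(11) for reference (the function name is hypothetical; max_val is the peak signal value of the pixel depth in use, e.g. 2**16 - 1 for 16 bit/pixel data).

```python
import numpy as np

def psnr(original, reconstructed, max_val):
    """PSNR in dB between the original and reconstructed HS cubes
    (Eqs. 10-11)."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)               # mean over all pixels of all frames
    if mse == 0:
        return float('inf')                # perfect reconstruction
    return 10 * np.log10(max_val ** 2 / mse)
```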

The compression ratio (CR) is the ratio of the number of bits of the original HS image to the number of bits of the coded bitstream [59]. Mathematically, it is formulated in Eq. 12.

$$ \mathrm{Compression}\ \mathrm{Ratio}\ \left(\mathrm{CR}\right)=\left[\frac{\mathrm{Original}\ \mathrm{HS}\ \mathrm{image}\ \mathrm{size}\ \mathrm{in}\ \mathrm{bits}}{\mathrm{Decoded}\ \mathrm{bits}\mathrm{tream}\ \mathrm{size}\ \mathrm{in}\ \mathrm{bits}}\right] $$
(12)

The mathematical relationship between the bit rate, in bits per pixel per band (bpppb), and the compression ratio for an HS image is given in Eq. 13 [9].

$$ \mathrm{Bit\ Rate}\ \left(\mathrm{bpppb}\right)=\left[\frac{\mathrm{Pixel}\ \mathrm{depth}}{\mathrm{Compression}\ \mathrm{Ratio}}\right] $$
(13)
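A one-line helper reflecting Eq. (13) (hypothetical name), with a worked example: 16 bit/pixel data compressed at CR = 16 corresponds to 1 bpppb.

```python
def bit_rate_bpppb(pixel_depth, compression_ratio):
    """Bit rate in bits per pixel per band for a given compression
    ratio (Eq. 13): the original pixel depth divided by CR."""
    return pixel_depth / compression_ratio

print(bit_rate_bpppb(16, 16))   # -> 1.0 bpppb
```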

A high compression ratio together with a good PSNR indicates the robustness of a compression algorithm.

3D-LCBTC is a zero block cube tree based set partitioned HS image compression scheme with the same set partitioning rules as 3D-WBTC [6]. Table 5 gives a comparative analysis of the coding efficiency (PSNR) of 3D-LCBTC against seven state-of-the-art set partitioned HS image compression algorithms. From Table 5, it is clear that 3D-LCBTC outperforms the other compression algorithms at high bit rates (from 0.2 to 1 bpppb). It is observed from Table 5 that the difference between the PSNR of the proposed 3D-LCBTC and that of 3D-WBTC [6] lies in the range of −0.49 dB to 0.45 dB for the Washington DC Mall HS image, −0.27 dB to 0.08 dB for the Cuprite HS image, −0.03 dB to 0.34 dB for the Urban HS image, and −0.09 dB to 0.22 dB for the Jasper Ridge HS image. Similarly, the difference between the PSNR of the proposed 3D-LCBTC and 3D-LMBTC [7] lies in the range of −0.36 dB to 0.89 dB for Washington DC Mall, −0.11 dB to 0.68 dB for Cuprite, −0.01 dB to 1.24 dB for Urban, and −0.11 dB to 0.98 dB for Jasper Ridge. The difference between the PSNR of the proposed 3D-LCBTC and 3D-ZM-SPECK [9] lies in the range of −0.46 dB to 0.48 dB for Washington DC Mall, −0.30 dB to 0.30 dB for Cuprite, −0.02 dB to 0.64 dB for Urban, and −0.03 dB to 0.75 dB for Jasper Ridge. For perfect reconstruction of an HS image after compression, the PSNR would have to be infinite [33], but image degradation at a PSNR of 40 dB or higher is invisible to the human eye [69].

Table 5 Comparison of Coding Efficiency (PSNR) between 3D-LCBTC and other compression algorithms for four HS images at fifteen different bit rates

The Structural Similarity Index Measure (SSIM) is an image quality metric that quantifies the similarity between two images (the original HS image and the reconstructed HS image). It is noticed from Table 6 that the SSIM index is similar for all HS image compression algorithms. Mathematically, SSIM [47] is calculated with Eq. 14.

$$ \mathrm{SSIM}\left(\mathrm{x},\mathrm{y}\right)=\frac{\left(2{\upmu}_{\mathrm{x}}{\upmu}_{\mathrm{y}}+{\mathrm{C}}_1\right)\left(2{\upsigma}_{\mathrm{x}\mathrm{y}}+{\mathrm{C}}_2\right)}{\left({\upmu}_{\mathrm{x}}^2+{\upmu}_{\mathrm{y}}^2+{\mathrm{C}}_1\right)\left({\upsigma}_{\mathrm{x}}^2+{\upsigma}_{\mathrm{y}}^2+{\mathrm{C}}_2\right)} $$
(14)
Table 6 Comparison of Structural Similarity (SSIM) index between 3D-LCBTC and other compression algorithms for two HS images at fifteen different bit rates

The input HS image is denoted 'x' and the reconstructed image after compression 'y'. The means of the input HS image 'x' and the reconstructed HS image 'y' are μx and μy, their variances are \( {\sigma}_x^2 \) and \( {\sigma}_y^2 \), and their covariance is σxy. C1 and C2 are correction factors.
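A minimal numpy sketch of Eq. (14) computed globally over the whole cube (in practice SSIM is usually averaged over local sliding windows); the constants C1 = (0.01·MAX)² and C2 = (0.03·MAX)² are common defaults assumed here, not values taken from the paper.

```python
import numpy as np

def ssim_global(x, y, max_val):
    """Global SSIM of Eq. (14) between the original cube x and the
    reconstructed cube y; max_val is the peak signal value."""
    c1 = (0.01 * max_val) ** 2             # assumed stabilising constants
    c2 = (0.03 * max_val) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```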

The Feature Similarity (FSIM) index maps the features and measures the similarity between the input HS image and the reconstructed HS image [66]. It is observed from Table 7 that the FSIM index is similar for all HS image compression algorithms.

Table 7 Comparison of Feature-Similarity (FSIM) index between 3D-LCBTC and other compression algorithms for two HS images at fifteen different bit rates

The Bjøntegaard metric, known as BD-PSNR [25], is calculated for all four HS images under test. It is noticed from Table 8 that 3D-LCBTC outperforms 3D-SPIHT [68], 3D-NLS [65], and 3D-LMBTC [7]. 3D-LCBTC also shows superior performance with respect to the other compression algorithms for the two HS images Urban and Jasper Ridge.

Table 8 Bjøntegaard Delta PSNR gain of 3D-LCBTC with other HS image compression algorithms for 15 different bit rates

4.2 Coding memory

3D-LCBTC uses both state tables and linked lists to track the significant block cubes and coefficients, combining the best features of listless and list-based algorithms. The coding memory is calculated from Eq. 9 and is fixed throughout the compression process; it is evaluated at the point where the memory demand is highest. The state tables require 288 KB of memory and the linked lists need 12.59 KB, and the total coding memory is the fixed sum of these two. Table 9 gives a comparative analysis of the coding memory used by the different compression algorithms. Due to the two separate state tables (BCSM and DSM), the coding memory requirement of 3D-LCBTC is higher than that of 3D-LMBTC [7] and 3D-ZM-SPECK [9], but lower than that of 3D-NLS [65] and 3D-LSK [49].

3D-ZM-SPECK [9] is a novel implementation of 3D-SPECK [67] that does not require any coding memory, as the refinement pass is merged with the sorting pass. 3D-LMBTC [7] requires only four types of markers, so its coding memory requirement is also very low. 3D-LSK [49] and 3D-NLS [65] use 4 bits/coefficient and 8 bits/coefficient markers, respectively. 3D-LCBTC outperforms the list-based set partitioned compression algorithms at higher bit rates (from 0.1 bpppb onward) (Table 9).

Table 9 Comparison of coding memory between 3D-LCBTC and other compression algorithms for four HS images at fifteen different bit rates

4.3 Coding complexity

The coding complexity of an image compression algorithm is measured by the time required for the encoding (compression) and decoding (reconstruction) processes. The encoding time is always greater than the decoding time because the encoder takes additional time to find the set sizes and test the sets at each threshold. Table 10 shows the encoding time and Table 11 the decoding time. It is observed from the results that 3D-LCBTC outperforms all other compression algorithms at high bit rates (0.1 bpppb and above), except for the Cuprite HS image at 1 bpppb. At low bit rates, 3D-LCBTC outperforms the other compression algorithms except 3D-LSK [49]. The performance of 3D-LCBTC is superior to that of 3D-LMBTC [7] and 3D-WBTC [6]; the low complexity is due to the fixed block cube size, whereas in 3D-LMBTC [7] and 3D-WBTC [6] the block cube size keeps changing according to the significance of the block cube and block cube tree. The state tables used in 3D-LCBTC are small compared with those of 3D-NLS [65]; accessing memory many times loads the sensor and increases the complexity of the compression algorithm. The coding complexity of the listless compression algorithms (3D-LSK, 3D-NLS, 3D-LMBTC, 3D-ZM-SPECK) is always lower than that of the list-based compression algorithms (3D-SPECK, 3D-SPIHT, 3D-WBTC). 3D-LSK [49] uses a 4 bits/coefficient marker and 3D-NLS [65] an 8 bits/coefficient marker, while 3D-LMBTC [7] uses four types of markers (2 bits/coefficient). The coding complexity of 3D-ZM-SPECK [9] is higher because the algorithm has to test the sets and coefficients at every threshold.

Table 10 Comparison of encoding time (coding complexity) between 3D-LCBTC and other compression algorithms for four HS images at fifteen different bit rates
Table 11 Comparison of decoding time (coding complexity) between 3D-LCBTC and other compression algorithms for four HS images at fifteen different bit rates

4.4 Effect of block cube size on the performance of 3D-LCBTC

To analyze the impact of the block cube size on the performance of the proposed HS image compression algorithm, the coding efficiency (PSNR and SSIM), coding memory, and coding complexity (encoding and decoding time) are evaluated for different block cube sizes.

It is noticed from Table 12 that increasing the block cube size moderately improves the coding gain at low bit rates, but at high bit rates the gain stays almost the same or degrades. The coding gain decreases because a larger block cube increases the number of bits needed to locate the significant coefficients and significant block cubes; the extra searching for significant coefficients also slightly increases the coding complexity (due to additional memory read/write operations). However, the coding memory requirement decreases significantly with larger block cubes, since they limit the number of entries in the lists.

Table 12 Effect of different block cube size for the proposed HS image compression algorithm on PSNR, SSIM, encoding time & decoding time for the Washington DC Mall HS image

The visual representation (original and reconstructed) of four bands (25, 50, 100, and 175) of the Urban and Washington DC Mall HS images at CR = 16 is shown in Figs. 2 and 3: Fig. 2 shows the four frames of the Urban HS image and Fig. 3 the corresponding frames of the Washington DC Mall HS image.

Fig. 2

Original Urban HS image (a) Frame 25 (b) Frame 50 (c) Frame 100 (d) Frame 175 Reconstructed Urban HS image with CR 16 (e) Frame 25 (f) Frame 50 (g) Frame 100 (h) Frame 175

Fig. 3

Original Washington DC MALL HS image (a) Frame 25 (b) Frame 50 (c) Frame 100 (d) Frame 175 Reconstructed Washington DC MALL HS image with CR 16 (e) Frame 25 (f) Frame 50 (g) Frame 100 (h) Frame 175

5 Conclusion

The onboard HS image sensor has limited resources to process HS images and transmit them to the earth station. This paper proposes a novel algorithm that outperforms other state-of-the-art wavelet-based HS image compression algorithms in coding efficiency and coding complexity. 3D-LCBTC is a compression algorithm that uses both state tables and linked lists. Its coding memory requirement is higher than that of 3D-LMBTC because of the fixed block cube size, which increases the demand for state table memory. The proposed HS image compression algorithm performs efficient compression without affecting the performance of succeeding applications.