Abstract
The complexity of onboard hyperspectral image sensors is a challenging issue. Existing hyperspectral image compression algorithms play a great role in reducing data transmission bandwidth, data processing time, processing power and coding memory. Many wavelet-transform-based set-partitioned hyperspectral image compression algorithms proposed in the past support both lossy and lossless compression. These algorithms use lists or state tables to keep track of significant and insignificant sets or coefficients. 3D wavelet block tree coding (3D-WBTC) achieves superior coding performance by exploiting both inter-sub-band and intra-sub-band redundancy. The proposed 3D low-complexity block tree coding (3D-LCBTC) is a novel implementation of 3D-WBTC that uses two state tables and very small linked lists. 3D-LCBTC uses a depth-first search approach, which reduces the complexity of the compression process significantly. The proposed compression algorithm is therefore a suitable candidate for resource-constrained onboard hyperspectral image sensors.
1 Introduction
With rich spectral information contained in hundreds of continuous spectral frames from the visible to the near-infrared range (400 nm to 2500 nm) at a spectral resolution of about 10 nm [53], HyperSpectral Images (HSI) have been applied in multiple domains ranging from aerospace [22], cultivation [61], climate monitoring [24], document verification [45], earth observation [41], food quality control [40], forestry [46], military reconnaissance [27], pharmaceuticals [34] and pollution detection [1] to soil health observation [2]. Apart from these applications, remote sensing [55] is one of the most active areas of hyperspectral (HS) image processing, where researchers develop algorithms for image compression for faster data transmission [21], feature extraction for target detection [16], image denoising [14], image segmentation [43], change detection [60], object classification for land-cover analysis [42], object recognition [37, 38], etc. Satellite-based hyperspectral image sensors acquire the HSI from their remote platform, process it, and transmit it through high-frequency wireless channels [18]. An HSI contains hundreds of frequency frames, and its pixel depth is about 12 to 16 bits per pixel [70]. Consequently, a large amount of memory is required to store even a few HSIs in the onboard sensor memory [4, 5].
It is well known that these electronic sensors generate a large amount of data [62], so compression is an essential step to save memory and improve sensor performance. Besides HSI storage, transmission bandwidth, sensor performance, power management and data transmission time are the major issues that an HSI compression algorithm can address [54, 63].
On the basis of data loss, hyperspectral image compression algorithms (HSICA) are classified into three categories: lossy, lossless and near-lossless [44, 73]. On the basis of the coding process, they are classified into six categories: prediction-based algorithms [72], neural network (NN)-based algorithms [39], vector quantization-based algorithms [74], machine learning-based algorithms [81], transform-based algorithms [6] and hybrid compression algorithms [11].
In prediction-based algorithms, a predictor (spatial, spectral or spatial-spectral) exploits the correlation between the frames of the HS image and computes a prediction error. The prediction error is encoded by entropy coding methods such as Huffman coding or arithmetic coding [80]. Prediction-based algorithms are data dependent (the compression ratio depends on the image), and they work with lossless compression only. Adaptive Differential Pulse-Code Modulation (ADPCM), Variable-Length Coding (VLC), LookUp Tables (LUT), Cluster DPCM and Context-based Adaptive Lossless Image Coding (CALIC) are promising prediction-based HS image compression algorithms [26, 78].
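As a minimal sketch of this idea (not any of the cited algorithms), a spectral DPCM predictor follows one pixel across frames and keeps only the prediction error for entropy coding; the function names and the toy pixel trace below are illustrative assumptions:

```python
def dpcm_residuals(trace):
    # Spectral DPCM sketch: predict each frame's pixel from the previous
    # frame and keep only the (typically small) prediction error.
    res = [trace[0]]                       # first frame is sent as-is
    for prev, cur in zip(trace, trace[1:]):
        res.append(cur - prev)
    return res

def reconstruct(res):
    # Lossless inverse: a cumulative sum restores the original trace.
    out = [res[0]]
    for r in res[1:]:
        out.append(out[-1] + r)
    return out

trace = [100, 103, 101, 106]               # one pixel across four frames
res = dpcm_residuals(trace)
print(res, reconstruct(res) == trace)      # [100, 3, -2, 5] True
```

The small residuals cluster around zero, which is exactly what makes the subsequent Huffman or arithmetic coding effective.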
Vector quantization (VQ)-based compression algorithms construct a codebook (spectral library) and are used when the spectral correlation is stronger than the spatial correlation. VQ is an optimal block coding strategy in which the algorithm compresses spectral blocks. VQ has three stages: codebook generation, encoding and decoding. The codebook is trained on the input image and contains many codewords for multiple blocks. In the encoding stage, the best codeword for each image block is found. All blocks are encoded as codeword indices, and the codebook is transmitted along with them. In the decoding stage, the image is reconstructed with the help of the received codebook [10].
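The encoding-stage codeword search can be sketched as a nearest-neighbour lookup over the codebook; the two-sample codebook and block below are toy assumptions, not data from any cited scheme:

```python
def nearest_codeword(block, codebook):
    # Encoding step of VQ: return the index of the codeword that
    # minimizes the squared error to the given block.
    def dist(cw):
        return sum((b - c) ** 2 for b, c in zip(block, cw))
    return min(range(len(codebook)), key=lambda i: dist(codebook[i]))

codebook = [[0, 0], [10, 10], [20, 20]]    # toy spectral codewords
idx = nearest_codeword([9, 11], codebook)
print(idx)   # 1: [9, 11] is closest to [10, 10]
```

Only the index (here, 1) is transmitted per block, which is where the compression comes from.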
Transform-based HS image compression algorithms work with both lossy and lossless compression. These algorithms use a mathematical transform (Fourier transform, cosine transform, wavelet transform, Karhunen-Loeve transform, etc.) to map the image data into a domain where they are represented by a few less-correlated, high-energy coefficients [76]. These algorithms perform remarkably well at low bit rates during lossy compression. Karami et al. [32] proposed a compression algorithm based on the 3D-DCT and sparse Tucker decomposition. Bilgin et al. [12] compressed HS images through 3D-EZW with the 3D-DWT. Penna et al. [51, 52] used the Karhunen-Loeve Transform (KLT) to decorrelate the spectral dimension of HS images. The KLT combined with JPEG2000 achieved a high coding gain by removing the correlation between the pixels of the HS image [35]. Wang et al. proposed a 3D lapped transform approach for HS image compression [78].
NN-based algorithms use multi-layer neural network structures. The complexity of NN-based compression increases rapidly, but so does the coding gain [31]. Before the compression process starts, the NN is trained with the help of machine learning algorithms. An NN combined with predictive coding has outstanding prediction capability, and thanks to the small prediction error the coding gain increases. Autoencoder networks, recurrent neural networks, feed-forward neural networks and radial basis function networks are the major types of neural networks used extensively in image compression [56, 57]. Tensor decomposition has also been combined with deep learning techniques in CNN-NTD [39, 64].
Machine-learning-based HS image compression uses support vector machines, tensor decomposition and deep learning methodologies [50]. Although it achieves a coding gain as high as NN-based compression, its complexity also increases significantly. The wavelet transform is applied before the tensor decomposition. Zhang et al. [81] proposed a tensor decomposition approach which obtains a core tensor that is considered representative of the original tensor. Das [20] presented a tensor-based compression algorithm that works with both HS images and video.
Hybrid compression algorithms combine any two of the methods mentioned above. They have higher coding efficiency than the other types of compression algorithms, but this coding gain is obtained at the cost of high coding complexity. Li et al. [36] presented an algorithm that combines a predictor with VQ. Báscones et al. [19] used VQ and PCA for spectral decorrelation and then applied JPEG2000 to the obtained principal components for spatial decorrelation.
Although 3D-LMBTC [7] and 3D-ZM-SPECK [9] reduce the demand for coding memory significantly, they increase the complexity of the compression scheme with respect to other state-of-the-art HS image compression schemes such as 3D-LSK [49] and 3D-NLS [65]. The proposed HS image compression scheme is a low-memory solution with high coding efficiency and low complexity.
The remainder of this paper is structured as follows. Section 2 discusses related work for 3D-LCBTC, including the dyadic wavelet transform, set-partitioned HS image compression algorithms and 3D-WBTC [6]. Details of the proposed HS image compression algorithm, 3D-LCBTC, are described in Section 3. Experimental results and discussion are provided in Section 4, while the conclusion is given in Section 5.
2 Related work
2.1 Dyadic wavelet transform
The wavelet transform is a widely used mathematical transform for image compression. It is a powerful tool to convert an image into another domain with a few high-energy (low-frequency) components. Transform-based image compression algorithms work with both lossy (until the bit budget is exhausted) and lossless compression [9, 71], so they are widely used in the compression process. The wavelet transform can be applied to an HS image in three ways: apply a 3D wavelet transform to the whole HS image; apply a 2D wavelet transform frame by frame in the spatial domain; or apply a 2D wavelet transform in the spatial domain followed by a 1D transform in the spectral domain [78].
The performance of the wavelet transform is better than that of other mathematical transforms. A single-level 3D wavelet transform decomposes the HS image into eight non-overlapping coefficient sub-bands, scanned in the order LLL, LHL, HLL, LLH, HHL, HLH, LHH and HHH, where H denotes high-pass and L denotes low-pass filtering. The transform coefficients in the LLL sub-band are considered the coarsest coefficients. An n-level 3D-DWT is obtained by repeating the process n times on the LLL sub-band of the HS image cube. The inverse transform reconstructs the original HS image by upsampling and filtering the sub-bands [3, 79].
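The eight-sub-band decomposition can be illustrated with a single separable level; the Haar kernel used here is an assumption chosen for brevity (this section does not fix a particular filter), and all function names are hypothetical:

```python
import numpy as np

def haar_split(c, axis):
    # Single-level Haar analysis along one axis:
    # low band = pairwise averages, high band = pairwise differences.
    even = np.take(c, range(0, c.shape[axis], 2), axis=axis)
    odd  = np.take(c, range(1, c.shape[axis], 2), axis=axis)
    return (even + odd) / 2.0, (even - odd) / 2.0

def dwt3d_level(cube):
    # One dyadic level of a separable 3D transform: filter each axis in
    # turn, yielding the eight sub-bands keyed by their L/H labels.
    bands = {"": cube}
    for axis in range(3):
        new = {}
        for key, data in bands.items():
            lo, hi = haar_split(data, axis)
            new[key + "L"] = lo
            new[key + "H"] = hi
        bands = new
    return bands

cube = np.arange(4 * 4 * 4, dtype=float).reshape(4, 4, 4)
subbands = dwt3d_level(cube)
print(sorted(subbands))   # eight sub-bands, LLL through HHH
```

Repeating `dwt3d_level` on the returned LLL sub-band gives the next dyadic decomposition level.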
2.2 3D set-partitioned wavelet-transform-based HS image compression
3D set-partitioned wavelet-transform-based HS image compression algorithms have advantages over other transform-based compression algorithms, such as embeddedness, low computational complexity, low coding memory requirement and high coding efficiency [5]. These algorithms are further subdivided into three types according to the set partitioning rule.
1. Zero block cube compression algorithms: These algorithms partition the HS image cube into contiguous block cubes. A zero block cube is a block cube in which no coefficient is significant with respect to the current threshold. 3D-SPECK [67], 3D-LSK [49], 3D-SPEZBC [28] and 3D-ZM-SPECK [9] are well-known HSICAs in this category.
2. Zero tree compression algorithms: These algorithms partition the HS image by grouping the wavelet coefficients corresponding to the same spatial location into spatial orientation trees (SOT) [6, 23]. A zero tree is an SOT containing no coefficient significant with respect to the current threshold. 3D-SPIHT [68], DDWT-BISK [13] and 3D-NLS [65] are well-known HSICAs in this category.
3. Zero block cube tree compression algorithms: These algorithms combine the useful features of both zero block cube and zero tree compression algorithms. They partition the HS image into contiguous block cubes, after which block cube trees are formed with roots in the topmost sub-band in a zero-tree fashion. 3D-WBTC [6] and 3D-LMBTC [7] are well-known HSICAs in this category; they perform remarkably well at low bit rates because they can accumulate more insignificant coefficients than 3D-SPIHT [68] and 3D-NLS [65].
Tang et al. proposed 3D-SPECK [67], an extension of SPECK for gray-scale images. 3D-LSK [49] is a listless version of 3D-SPECK which uses a small coding memory to track the significance of the partitioned sets or coefficients. 3D-SPIHT [68] and its listless version 3D-NLS [65] are also state-of-the-art set-partitioned HS image compression algorithms.
3D-WBTC [6] utilizes a block cube tree structure to achieve high coding efficiency at low bit rates, but at high bit rates its performance degrades due to its complex structure and multiple read/write operations on the linked lists. 3D-LMBTC [7] is a listless version of 3D-WBTC [6] which reduces complexity and coding memory. 3D-ZM-SPECK [9] is a special case of 3D-SPECK [67] which follows the same partitioning rule but uses neither linked lists nor state tables. Its memory demand is reduced significantly, but its complexity is higher than that of 3D-LSK [49] because the sets and coefficients must be tested at every bit plane.
Listless HSICAs such as 3D-LSK, 3D-NLS, 3D-LMBTC and 3D-ZM-SPECK do not use linked lists to track the significance of sets or coefficients; instead they use state tables or markers. List memory is data dependent, and multiple memory read/write operations increase the complexity of the compression process [9]. The coding memory of a listless HSICA is fixed and depends only on the size and pixel depth of the HS image.
Table 1 gives a comparative analysis of some state-of-the-art 3D wavelet-based set-partitioned HS image compression schemes.
2.3 3D wavelet block tree coding (3D-WBTC)
3D-WBTC [6] is a block-cube-tree-based set-partitioned HS image compression algorithm which exploits inter- and intra-sub-band redundancy. It combines the useful features of 3D-SPECK [67] and 3D-SPIHT [68]. 3D-WBTC is based on spatial orientation trees (SOT) in which a node is a block cube of m × n × p coefficients rather than a single coefficient as in 3D-SPIHT [68]. Each SOT has a root node in the LLL band of the transformed HS image. By creating the block cube tree, eight of 3D-SPIHT's SOTs are combined into a single SOT of 3D-WBTC [6]. A set of descendant block cubes is referred to as a type 'A' block cube tree, and a set of grand-descendant block cubes as a type 'B' block cube tree.
3D-WBTC [6] uses three linked lists to store significance information about the partitioned sets and coefficients: the list of insignificant blocks (LIB), the list of insignificant block sets (LIBS) and the list of significant pixels (LSP). The algorithm is initialized with the block cubes of the topmost LLL band added to the LIB and their descendant sets added to the LIBS, while the LSP starts empty. Each block cube in the LIB has eight offspring block cubes at the same spatial orientation in the higher-frequency sub-bands. Each bit plane starts with a sorting pass followed by a refinement pass.
In the sorting pass, the coefficients are encoded from the topmost bit plane toward the least significant bit plane. If a block cube is insignificant with respect to the current threshold, a '0' is generated for the whole block cube, which remains in the LIB and is tested again at the succeeding threshold. If a block cube is significant, a '1' is generated, the block cube is partitioned into eight equal smaller block cubes through octree partitioning, and the parent block cube is removed from the LIB. This octree partitioning continues until the coefficient level (block cube size of one) is reached. If a coefficient is significant with respect to the current threshold, a '1' (significance bit) is generated together with a sign bit and the coefficient moves to the LSP; otherwise it moves to the LIB. An insignificant block cube and its descendants remain in the LIBS. A significant type 'A' block cube set is partitioned into a type 'B' block cube set and eight offspring block cubes; the type 'B' set is added to the end of the LIBS while the eight offspring block cubes are tested against the current threshold. A significant type 'B' block cube set is partitioned into type 'A' block cube sets, which are added to the end of the LIBS. This process continues until all block cube sets are encoded. After the sorting pass, the refinement pass is initiated for the current threshold and one refinement bit is generated for each previously significant coefficient. The threshold is then halved, and the process runs until the bit budget is exhausted.
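The octree significance coding above can be sketched for a single bit plane as follows. This is a simplified illustration only: it recurses immediately instead of deferring insignificant nodes to the LIB/LIBS for later thresholds, and all names are hypothetical:

```python
def significant(cube, T):
    # A set is significant if any coefficient magnitude reaches the threshold.
    return any(abs(v) >= T for v in cube.values())

def octa_partition(origin, size):
    # Split a block cube at `origin` with edge `size` into eight sub-cube origins.
    h = size // 2
    return [(origin[0] + dx, origin[1] + dy, origin[2] + dz)
            for dx in (0, h) for dy in (0, h) for dz in (0, h)], h

def encode_cube(coeffs, origin, size, T, bits):
    # One bit plane of octree significance coding, down to single coefficients.
    cube = {p: v for p, v in coeffs.items()
            if all(origin[a] <= p[a] < origin[a] + size for a in range(3))}
    if not significant(cube, T):
        bits.append(0)                 # a single '0' covers the whole cube
        return
    bits.append(1)
    if size == 1:
        bits.append(0 if cube[origin] >= 0 else 1)   # sign bit
        return
    children, h = octa_partition(origin, size)
    for child in children:
        encode_cube(coeffs, child, h, T, bits)

coeffs = {(x, y, z): 0 for x in range(2) for y in range(2) for z in range(2)}
coeffs[(0, 0, 0)] = 5                  # single significant coefficient
bits = []
encode_cube(coeffs, (0, 0, 0), 2, T=4, bits=bits)
print(bits)   # [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
```

The seven trailing zeros show how one bit per insignificant sub-cube keeps the low-bit-rate output compact.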
3 3D-low complexity block tree coding (3D-LCBTC)
3D-LCBTC is a lightweight version of 3D-WBTC [6] with high coding gain, low coding memory requirement and low complexity. It is a zero-block-cube-tree-based set-partitioned hyperspectral image compression algorithm with the same partitioning rule as 3D-WBTC [6]. Like 3D-WBTC [6], 3D-LCBTC considers a block cube as a node in the spatial orientation tree; through this, a large number of insignificant descendants can be represented by a single bit. Hence, zero-block-cube-tree HS image compression schemes achieve higher gains at low bit rates. A node block cube in the topmost LLL band has seven offspring nodes in the highest-decomposition-level sub-bands. 3D-LCBTC uses a fixed block cube size (2 × 2 × 2) rather than changing the block cube size as in 3D-WBTC [6]; the fixed size reduces the complexity of the proposed compression scheme significantly. Unlike other set-partitioned compression schemes, 3D-LCBTC executes the refinement pass before the sorting pass.
3D-LCBTC uses two state tables and two linked lists to track the partitioned block cubes and coefficients. Details of the state tables and linked lists are given in Table 2.
The BCSM and DSM are used to store the significance information about each node. 3D-LCBTC consists of two passes, an initialization pass and a bit-plane pass, and each bit-plane pass consists of a sorting pass (SP) and a refinement pass (RP). Unlike 3D-SPIHT [68] or 3D-WBTC [6], 3D-LCBTC executes the refinement pass before the sorting pass. Transform coefficients that became significant at a previous bit plane are encoded in the refinement pass, where one refinement bit is generated per coefficient. Transform coefficients and block cubes that have not yet become significant at a previous threshold are encoded in the sorting pass. The BCSM holds the significance of each block cube in the HS image, while the DSM holds the significance of the associated descendants. A block cube 'a' is significant if BCSM(a) = 1. When all descendants under node 'a' are significant at the current bit plane, the whole block cube tree originating from node 'a' is significant, represented as DSM(a) = 1; such a block cube tree is encoded in the refinement pass instead of the sorting pass. Two associated lists, the LCBC and the LPBC, are used in the sorting pass.
Initialization: The encoding process starts from the topmost (most significant) bit plane 'n' and proceeds toward the lower bit planes while the bit budget lasts. The 3D transformed HS image is converted to a 1D array Ci through Morton mapping [9]. The highest bit plane index (n) is calculated by Eq. 1 and the magnitude of the threshold (T) by Eq. 2.
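Eq. 1 and Eq. 2 are not reproduced here, but in bit-plane coders of this family they are conventionally n = ⌊log2(max|Ci|)⌋ and T = 2^n; under that assumption, the initialization can be sketched as:

```python
import math

def initial_threshold(coeffs):
    # Presumed Eq. 1 and Eq. 2: n = floor(log2(max |Ci|)) and T = 2**n,
    # so at least one coefficient is significant at the first bit plane.
    cmax = max(abs(c) for c in coeffs)
    n = int(math.floor(math.log2(cmax)))
    return n, 2 ** n

n, T = initial_threshold([3, -18, 7, 12])
print(n, T)   # 4 16: only |-18| >= 16 at the first pass
```

Halving T after each bit-plane pass then walks the coder down toward the least significant bit plane.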
The block cubes in the LLL band of the transformed HS image are assigned '1' in the BCSM, while all other block cubes are assigned '0'. All entries of the DSM are initialized to '0'. The associated lists LCBC and LPBC are initialized as empty.
Refinement Pass: The refinement pass is executed for those block cubes that were significant at the previous bit plane; these are tagged '1' in the BCSM. Each coefficient of a significant block cube is encoded in the RP, and a single refinement bit is sent for each previously significant coefficient. If a coefficient becomes significant for the first time (its block cube being already significant), the sign bit is also sent with the significance bit. The block cubes are encoded in a breadth-first order in which the more significant coefficients are encoded first, since in a 3D wavelet-transformed HS image most of the information is concentrated in the coarser sub-bands.
Sorting Pass: Block cubes not covered by the refinement pass are encoded in the sorting pass. Each block cube tree has its root in the LLL band, and the sorting pass proceeds tree by tree: the next block cube tree is encoded only when the previous one has been completely encoded. By following this sequence, the list memory requirement is reduced significantly and fewer read/write operations are needed, which further reduces the complexity of the proposed compression algorithm.
The significance of a block cube 'B' and of a block cube tree 'BT' (rooted at node 'a') is calculated with Eq. 3 and Eq. 4, respectively.
The sorting pass uses the two linked lists LCBC and LPBC. The LCBC is used to encode the coefficients present in the block cube tree. The LPBC is processed after all entries in the LCBC have been processed and is used to update the state tables.
When a block cube at node 'a' together with its associated block cube tree is insignificant with respect to the current threshold, only a '0' bit is sent to the output. If the block cube is significant, a '1' is sent and the encoding process is performed for its eight coefficients. If all descendant block cubes are insignificant with respect to the current threshold, a '0' is sent; otherwise a '1' is sent and the offspring block cubes are added to the end of the LCBC. Each offspring block cube that is significant with respect to the current threshold is in turn signalled with a '1' and its eight coefficients are encoded, while an insignificant offspring block cube is signalled with a '0'. Once a block cube and its offspring have been processed, the parent block cube moves to the LPBC so that its state table entries can be updated.
After the completion of a bit plane, the threshold is halved and the next bit-plane pass is executed. This process is repeated while the bit budget is available or until the desired bit rate is achieved. The decoder follows the same procedure as the encoder, except for the additional step of significance testing of those sets that contain refinement bits.
The pseudo-code for 3D-LCBTC is given in Table 3. The Encode function encodes the block cube at node 'a' of the transformed HS image; 'p' is the address of a coefficient and I(p) is its value.
Consider B_{s,t,u}(y), a root block cube in the LLL band of the transformed HS image, where (s, t, u) is the address of the coefficient at the top-left corner of the block cube and 'y' is the block cube dimension. O_{s,t,u}(y) (written O(y) for short) is the set of offspring block cubes of the root block cube B_{s,t,u}(y). The offspring block cubes are defined as follows (Fig. 1):

B_{s,t,u}(y): a root block cube of size y × y × y containing the wavelet transform coefficients [ C_{j,k,l} | s ≤ j < s + y; t ≤ k < t + y; u ≤ l < u + y ]

O_{s,t,u}(y) = [ B_{2s,2t,2u}(y), B_{2s+y,2t,2u}(y), B_{2s,2t+y,2u}(y), B_{2s,2t,2u+y}(y), B_{2s+y,2t+y,2u}(y), B_{2s,2t+y,2u+y}(y), B_{2s+y,2t,2u+y}(y), B_{2s+y,2t+y,2u+y}(y) ]

D_{s,t,u}(y): the set of all descendant block cubes of the root block cube B_{s,t,u}(y)
The coding memory MEM_LCBTC used by 3D-LCBTC is the memory used by the associated lists (LCBC and LPBC) and the state tables (BCSM and DSM).
The BCSM stores the status of each block cube. Since the block cube size in 3D-LCBTC is fixed at 2 × 2 × 2, the size of the BCSM is MNP/8 bits for an HS image of size M × N × P. The DSM stores the status of the descendant block cubes; at the lowest decomposition level of the wavelet transform (L = 1) no descendants are present, so no DSM entries are required there and the size of the DSM is MNP/64 bits. The size of the state tables is fixed throughout the coding process and depends only on the size of the test HS image, so the memory used by the state tables is fixed. The total memory (in bits) MEM_ST required by the state tables is given by
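Assuming MEM_ST = MNP/8 + MNP/64 bits as stated above, the state-table memory can be sketched as follows; the 256 × 256 × 256 test size is an assumption that reproduces the 288 KB figure reported in Section 4.2:

```python
def state_table_bits(M, N, P):
    # BCSM: one bit per fixed 2x2x2 block cube -> MNP/8 bits.
    # DSM: no entries at the finest level, so MNP/64 bits suffice.
    return M * N * P // 8 + M * N * P // 64

kb = state_table_bits(256, 256, 256) / 8 / 1024
print(kb)   # 288.0 KB for a 256 x 256 x 256 cropped cube
```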
3D-LCBTC uses two linked lists, the LCBC and the LPBC, and encodes one block cube tree at a time. Common nodes are present in both lists at any given time, and the number of nodes depends on the level of the dyadic wavelet transform. The entries in the linked lists change continuously, so the linked-list memory is referred to as dynamic memory. Experimentally, the maximum numbers of entries in the LCBC list (MEM_LCBC) and the LPBC list (MEM_LPBC) are calculated as
Each entry in the linked lists consists of the address bits, whose number is log2(MNP).
So the total memory MEM_LL required by the linked lists is
The coding memory MEM_LCBTC required by the 3D-LCBTC encoder is given as
4 Results and analysis
The proposed HSICA is compared with the existing state-of-the-art HSICAs 3D-SPECK [67], 3D-SPIHT [68], 3D-WBTC [6], 3D-LSK [49], 3D-NLS [65], 3D-LMBTC [7] and 3D-ZM-SPECK [9]. The performance of the proposed 3D-LCBTC is measured in terms of coding efficiency, coding memory and computational complexity, and it is tested on four publicly available HS images: Washington DC Mall, Urban, Jasper Ridge and Cuprite. A detailed description of these HS images is presented in Table 4.
A five-level dyadic wavelet transform is used to transform each HS image, and the wavelet transform coefficients are quantized to the nearest integer. Morton mapping (linear indexing) is used to convert the transform coefficients from a 3D matrix to a 1D array [8]. The coefficients are then encoded with the state-of-the-art set-partitioned HS image compression algorithms 3D-SPECK [67], 3D-SPIHT [68], 3D-WBTC [6], 3D-LSK [49], 3D-NLS [65], 3D-LMBTC [7] and 3D-ZM-SPECK [9] as well as the proposed 3D-LCBTC. All experiments were performed on an Intel Core i3 CPU @ 1.6 GHz with 8 GB RAM running 64-bit Windows 8.1. The HS images are cropped from the top-left corner to the size of a cube, with zero padding where required. Coding efficiency is reported in decibels (dB) for the Peak Signal-to-Noise Ratio (PSNR) and the Bjøntegaard delta PSNR (BD-PSNR), while coding memory and coding complexity are reported in kilobytes (KB) and seconds (s), respectively.
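A bit-interleaving sketch of the Morton (Z-order) mapping for 3D coordinates is shown below; the 10-bit coordinate width is an illustrative assumption:

```python
def part1by2(x):
    # Spread the bits of x into every third bit position.
    r = 0
    for i in range(10):            # supports coordinates up to 1023
        r |= ((x >> i) & 1) << (3 * i)
    return r

def morton3(x, y, z):
    # Interleave the bits of (x, y, z) into one Morton (Z-order) index,
    # so each 2x2x2 block cube occupies a contiguous 1D range.
    return part1by2(x) | (part1by2(y) << 1) | (part1by2(z) << 2)

print([morton3(1, 0, 0), morton3(0, 1, 0), morton3(0, 0, 1), morton3(1, 1, 1)])
# [1, 2, 4, 7]: the eight corners of a 2x2x2 block map to indices 0..7
```

This locality is what lets a block-cube coder address each 2 × 2 × 2 node as one contiguous run of the 1D array.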
4.1 Coding efficiency
The metrics used to evaluate coding efficiency are the Peak Signal-to-Noise Ratio (PSNR), the Bjøntegaard delta PSNR (BD-PSNR), the Structural Similarity Index (SSIM) and the Feature Similarity Index (FSIM) [9].
The Peak Signal-to-Noise Ratio measures rate-distortion (RD) performance. Mathematically, the PSNR is calculated as in Eq. 10 [58].
MAXa is the maximum value of the signal. The MSE (Mean Squared Error) is calculated with Eq. 11.
Npix is the total number of pixels in all frames of the HS image. The reconstructed HS image is defined as g(i, j, k), while the original HS image is defined as f(i, j, k).
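Under the standard definitions presumably used in Eq. 10 and Eq. 11, PSNR and MSE can be sketched as follows; the four-sample traces are toy assumptions:

```python
import math

def mse(f, g):
    # Presumed Eq. 11: mean squared error over all Npix pixel positions
    # of the original f and the reconstruction g.
    return sum((a - b) ** 2 for a, b in zip(f, g)) / len(f)

def psnr(f, g, max_val):
    # Presumed Eq. 10: 10*log10(MAX^2 / MSE) in dB;
    # infinite for a perfect reconstruction.
    e = mse(f, g)
    return float("inf") if e == 0 else 10 * math.log10(max_val ** 2 / e)

f = [10, 12, 14, 16]               # toy original trace
g = [10, 13, 14, 15]               # toy reconstruction
print(mse(f, g), round(psnr(f, g, (1 << 16) - 1), 2))
```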
The compression ratio (CR) is the ratio of the number of bits used by the original HS image to the number of bits used by the compressed HS image [59]. Mathematically, it is formulated as in Eq. 12.
The mathematical relationship between the bit rate (in bits per pixel per band, bpppb) and the compression ratio for an HS image is defined in Eq. 13 [9].
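Assuming the usual relation CR = pixel depth / bit rate for Eq. 13, a one-line sketch is:

```python
def compression_ratio(pixel_depth_bits, bpppb):
    # Presumed Eq. 13: CR = (bits/pixel in the original) / (bits/pixel coded).
    return pixel_depth_bits / bpppb

print(compression_ratio(16, 0.5))   # 32.0: a 16-bit image coded at 0.5 bpppb
```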
A high compression ratio together with a good PSNR shows the robustness of a compression algorithm.
3D-LCBTC is a zero-block-cube-tree-based set-partitioned HS image compression scheme with the same set partitioning rules as 3D-WBTC [6]. Table 5 gives a comparative analysis of the coding efficiency (PSNR) of 3D-LCBTC against the seven state-of-the-art set-partitioned HS image compression algorithms. From Table 5, it is clear that 3D-LCBTC outperforms the other compression algorithms at high bit rates (0.2 to 1 bpppb). The difference between the PSNR of the proposed 3D-LCBTC and that of 3D-WBTC [6] lies in the range −0.49 dB to 0.45 dB for the Washington DC Mall HS image, −0.27 dB to 0.08 dB for Cuprite, −0.03 dB to 0.34 dB for Urban and −0.09 dB to 0.22 dB for Jasper Ridge. Similarly, the difference with respect to 3D-LMBTC [7] lies in the range −0.36 dB to 0.89 dB for Washington DC Mall, −0.11 dB to 0.68 dB for Cuprite, −0.01 dB to 1.24 dB for Urban and −0.11 dB to 0.98 dB for Jasper Ridge, and the difference with respect to 3D-ZM-SPECK [9] lies in the range −0.46 dB to 0.48 dB for Washington DC Mall, −0.30 dB to 0.30 dB for Cuprite, −0.02 dB to 0.64 dB for Urban and −0.03 dB to 0.75 dB for Jasper Ridge. For perfect reconstruction of an HS image after compression, the PSNR should be infinite [33], but image degradation at a PSNR of 40 dB or higher is invisible to the human eye [69].
The Structural Similarity Index Measure (SSIM) is an image quality metric that calculates the similarity between two images (the original and the reconstructed HS image). It can be noticed from Table 6 that the SSIM index is similar for all HS image compression algorithms. Mathematically, SSIM [47] is calculated with Eq. 14.
The input HS image is represented as 'x' and the reconstructed image after the compression process as 'y'. The means of 'x' and 'y' are μx and μy, their variances are σx² and σy², and their covariance is σxy. C1 and C2 are correction factors.
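A single-window sketch of Eq. 14 follows; the default constants C1 = (0.01 × 255)² and C2 = (0.03 × 255)² are the conventional values for 8-bit data and are an assumption here:

```python
def ssim(x, y, c1=6.5025, c2=58.5225):
    # Single-window SSIM (presumed Eq. 14):
    # (2*mu_x*mu_y + C1)(2*sigma_xy + C2) /
    # ((mu_x^2 + mu_y^2 + C1)(sigma_x^2 + sigma_y^2 + C2))
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    num = (2 * mx * my + c1) * (2 * cov + c2)
    den = (mx * mx + my * my + c1) * (vx + vy + c2)
    return num / den

print(ssim([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))   # 1.0 for identical windows
```

Practical SSIM implementations average this quantity over local windows of the full image; the single-window form above is enough to show the roles of μ, σ² and σxy.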
The Feature Similarity (FSIM) index maps features and measures the similarity between the input and reconstructed HS images [66]. It can be observed from Table 7 that the FSIM index is similar for all HS image compression algorithms.
The Bjøntegaard metric, known as BD-PSNR [25], is calculated for all four HS images under test. It can be noticed from Table 8 that 3D-LCBTC outperforms 3D-SPIHT [68], 3D-NLS [65] and 3D-LMBTC [7], and that it shows superior performance with respect to all other compression algorithms for the Urban and Jasper Ridge HS images.
4.2 Coding memory
3D-LCBTC uses both state tables and linked lists to track the significant block cubes and coefficients, utilizing the best features of listless and list-based algorithms. The coding memory is calculated from Eq. 9 and is fixed throughout the compression process; it is evaluated at the point where the memory demand is highest. The state tables require 288 KB of memory and the linked lists need 12.59 KB, and the total coding memory is the fixed sum of the two. Table 9 gives a comparative analysis of the coding memory used by the different compression algorithms. Due to the two separate state tables (BCSM and DSM), the coding memory requirement is higher than that of 3D-LMBTC [7] and 3D-ZM-SPECK [9], but lower than that of 3D-NLS [65] and 3D-LSK [49].
3D-ZM-SPECK [9] is a novel implementation of 3D-SPECK [67] that requires almost no coding memory because the refinement pass is merged with the sorting pass. 3D-LMBTC [7] requires only four types of markers, so its coding memory requirement is also very small. 3D-LSK [49] and 3D-NLS [65] use 4-bit/coefficient and 8-bit/coefficient markers, respectively. 3D-LCBTC outperforms the list-based set-partitioned compression algorithms at bit rates above 0.1 bpppb.
4.3 Coding complexity
The coding complexity of an image compression algorithm is measured by the time required for the encoding process (compression) and the decoding process (reconstruction). The encoding time is always greater than the decoding time because the encoder must additionally determine the set sizes and test the sets against each threshold. Table 10 reports the encoding times and Table 11 the decoding times. The results show that 3D-LCBTC outperforms all the other compression algorithms at the high bit rate (bpppb = 0.1), except for the Cuprite HS image at 1 bpppb. At low bit rates, 3D-LCBTC outperforms the other compression algorithms except 3D-LSK [44]. The 3D-LCBTC also performs better than 3D-LMBTC [7] and 3D-WBTC [6]: its lower complexity stems from the fixed block cube size, whereas in 3D-LMBTC [7] and 3D-WBTC [6] the block cube size keeps changing according to the significance of the block cube and block cube tree. The state tables used in 3D-LCBTC are small compared with those of 3D-NLS [56]; accessing memory multiple times loads the sensor and increases the complexity of the compression algorithm. The coding complexity of the listless compression algorithms (3D-LSK, 3D-NLS, 3D-LMBTC, 3D-ZM-SPECK) is always lower than that of the list-based compression algorithms (3D-SPECK, 3D-SPIHT, 3D-WBTC). The 3D-LSK [44] uses a 4-bit per-coefficient marker and 3D-NLS [56] an 8-bit per-coefficient marker, while 3D-LMBTC [7] uses four marker types (2 bits per coefficient). The coding complexity of 3D-ZM-SPECK [9] is higher because the algorithm has to test the sets or coefficients against every threshold.
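Encoding and decoding times of this kind are typically measured with a simple wall-clock harness around one encode/decode cycle per rate point. The sketch below is a generic harness with hypothetical `encode`/`decode` callables, not the paper's actual benchmarking setup.

```python
import time

def measure_codec_times(encode, decode, image, rate):
    """Wall-clock timing of one encode/decode cycle at a given
    bit rate (bpppb). encode/decode are stand-in callables."""
    t0 = time.perf_counter()
    stream = encode(image, rate)      # sorting + refinement passes
    enc_time = time.perf_counter() - t0
    t0 = time.perf_counter()
    decode(stream)                    # reconstruction
    dec_time = time.perf_counter() - t0
    return enc_time, dec_time

# Usage with trivial stand-in codec functions:
enc = lambda img, r: img
dec = lambda s: s
t_enc, t_dec = measure_codec_times(enc, dec, [1, 2, 3], 0.1)
```

In practice each (algorithm, image, rate) cell of Tables 10 and 11 would be one such measurement, ideally averaged over several runs to reduce timer noise.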
4.4 Effect of block cube size on the performance of 3D-LCBTC
To analyze the impact of the block cube size on the performance of the proposed HS image compression algorithm, the coding efficiency (PSNR and SSIM), coding memory, and coding complexity (encoding and decoding times) are examined.
Table 12 shows that increasing the block cube size moderately improves the coding gain at low bit rates, while at high bit rates the gain stays almost the same or degrades. The coding gain falls because a larger block cube increases the number of bits needed to locate the significant coefficients or significant block cubes, and the extra searching slightly increases the coding complexity (due to additional memory read/write operations). However, the coding memory requirement drops significantly because the large block cube limits the number of entries in the output list.
The visual representations (original and reconstructed) of four bands (25, 50, 100 and 175) at CR = 16 are shown in Fig. 2 for the Urban HS image and in Fig. 3 for the Washington DC Mall HS image.
5 Conclusion
The onboard HS image sensor has limited resources to process HS images and transmit them to the earth station. This paper proposes a novel algorithm that outperforms other state-of-the-art wavelet-based HS image compression algorithms in coding efficiency and coding complexity. The 3D-LCBTC uses both state tables and linked lists. Its coding memory requirement is higher than that of 3D-LMBTC because the fixed size of the block cube increases the demand for state table memory. The proposed HS image compression algorithm performs efficient compression without affecting the performance of succeeding applications.
References
Achard V, Foucher PY, Dubucq D (2021) Hydrocarbon pollution detection and mapping based on the combination of various hyperspectral imaging processing tools. Remote Sens 13(5):1020. https://doi.org/10.3390/rs13051020
Anand R, Veni S, Aravinth J (2017) Big data challenges in airborne hyperspectral image for urban landuse classification. In 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI): 1808–1814. https://doi.org/10.1109/ICACCI.2017.8126107
Bairagi VK, Sapkal AM, Gaikwad MS (2013) The role of transforms in image compression. Journal of The Institution of Engineers (India): Series B 94(2):135–140. https://doi.org/10.1007/s40031-013-0049-9
Bajpai S, Singh HV, Kidwai NR (2017) Feature extraction & classification of hyperspectral images using singular spectrum analysis & multinomial logistic regression classifiers. In IEEE International Conference on Multimedia, Signal Processing and Communication Technologies (IMPACT) Aligarh, India: 97–100. https://doi.org/10.1109/MSPCT.2017.8363982
Bajpai S, Singh HV, Kidwai NR (2019) 3D modified wavelet block tree coding for hyperspectral images. Indonesian Journal of Electrical Engineering and Computer Science (IJEECS) 15(2):1001–1008. https://doi.org/10.11591/ijeecs.v15.i2.pp1001-1008
Bajpai S, Kidwai NR, Singh HV (2019) 3D wavelet block tree coding for hyperspectral images. International Journal of Innovative Technology and Exploring Engineering 8(6C):64–68
Bajpai S, Kidwai NR, Singh HV, Singh AK (2019) Low memory block tree coding for hyperspectral images. Multimed Tools Appl 78(19):27193–27209. https://doi.org/10.1007/s11042-019-07797-6
Bajpai S, Kidwai NR, Chandel VS (2020) Low memory wavelet based hyperspectral image coding using 2D dyadic wavelet transform, 11(6):25–33. https://doi.org/10.34218/IJEET.11.6.2020.003
Bajpai S, Kidwai NR, Singh HV, Singh AK (2022) A low complexity hyperspectral image compression through 3D set partitioned embedded zero block coding. Multimed Tools Appl 81:841–872. https://doi.org/10.1007/s11042-021-11456-0
Báscones D, González C, Mozos D (2020) An FPGA accelerator for real-time lossy compression of hyperspectral images. Remote Sens 12(16):2563. https://doi.org/10.3390/rs12162563
Ben S, Parvathy VS, Laxmi Lydia E, Rani P, Polkowski Z, Shankar K (2020) Optimal deep learning based image compression technique for data transmission on industrial Internet of things applications. Transactions on Emerging Telecommunications Technologies, e3976. https://doi.org/10.1002/ett.3976
Bilgin A, Zweig G, Marcellin MW (2000) Three-dimensional image compression with integer wavelet transforms. Appl Opt 39(11):1799–1814. https://doi.org/10.1364/AO.39.001799
Boettcher JB, Du Q, Fowler JE (2007) Hyperspectral image compression with the 3D dual-tree wavelet transform. IEEE International Geoscience and Remote Sensing Symposium: 1033-1036. https://doi.org/10.1109/IGARSS.2007.4422977
Chen Y, Huang TZ, He W, Zhao XL, Zhang H, Zeng J (2021). Hyperspectral image Denoising using factor group sparsity-regularized nonconvex low-rank approximation. IEEE Trans Geosci Remote Sens https://doi.org/10.1109/TGRS.2021.3110769.
Cheng KJ, Dill J (2014) Lossless to lossy dual-tree BEZW compression for hyperspectral images. IEEE Trans Geosci Remote Sens 52(9):5765–5770. https://doi.org/10.1109/TGRS.2013.2292366
Cheng T, Wang B (2021) Decomposition model with background dictionary learning for hyperspectral target detection. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 14:1872–1884. https://doi.org/10.1109/JSTARS.2021.3049843
Christophe E, Mailhes C, Duhamel P (2008) Hyperspectral image compression: adapting SPIHT and EZW to anisotropic 3-D wavelet coding. IEEE Trans Image Process 17(12):2334–2346. https://doi.org/10.1109/TIP.2008.2005824
Chutia D, Bhattacharyya DK, Sarma KK, Kalita R, Sudhakar S (2016) Hyperspectral remote sensing classifications: a perspective survey. Trans GIS 20(4):463–490. https://doi.org/10.1111/tgis.12164
Daniel B, González C, Mozos D (2018) Hyperspectral image compression using vector quantization, PCA and JPEG2000. Remote Sens 10(6):907. https://doi.org/10.3390/rs10060907
Das S (2021) Hyperspectral image, video compression using sparse tucker tensor decomposition. IET Image Process 15(4):964–973. https://doi.org/10.1049/ipr2.12077
Datta A, Ghosh S, Ghosh A (2017) Supervised feature extraction of hyperspectral images using partitioned maximum margin criterion. IEEE Geosci Remote Sens Lett 14(1):82–86. https://doi.org/10.1109/LGRS.2016.2628078
Dmitriev EV, Kozoderov VV, Dementyev AO, Safonova AN (2018) Combining classifiers in the problem of thematic processing of hyperspectral aerospace images. Optoelectronics, Instrumentation and Data Processing 54(3):213–221. https://doi.org/10.3103/S8756699018030019
Dragotti PL, Poggi G, Ragozini ARP (2000) Compression of multispectral images by three-dimensional SPIHT algorithm. IEEE Trans Geosci Remote Sens 38(1):416–428. https://doi.org/10.1109/36.823937
Dussarrat P, Theodore B, Coppens D, Standfuss C, Tournier B (2021) Introduction to the ringing effect in satellite hyperspectral atmospheric spectrometry. Atmospheric Measurement Techniques Discussions: 1–12. https://doi.org/10.5194/amt-2021-121
Gnutti A, Guerrini F, Adami N, Migliorati P, Leonardi R (2021) A wavelet filter comparison on multiple datasets for signal compression and denoising. Multidim Syst Sign Process 32(2):791–820. https://doi.org/10.1007/s11045-020-00753-w
Goetz AF (2009) Three decades of hyperspectral remote sensing of the earth: a personal view. Remote Sens Environ 113(1):S5–S16. https://doi.org/10.1016/j.rse.2007.12.014
Gross W, Queck F, Vögtli M, Schreiner S, Kuester J, Böhler J, Middelmann W (2021) A multi-temporal hyperspectral target detection experiment: evaluation of military setups. In Target and Background Signatures VII 11865:38–48. https://doi.org/10.1117/12.2597991
Hou Y, Liu G (2007) 3D set partitioned embedded zero block coding algorithm for hyperspectral image compression. Remote Sensing and GIS Data Processing and Applications; and Innovative Multispectral Technology and Applications. International Society for Optics and Photonics 6790:679056. https://doi.org/10.1117/12.750975
Hou Y, Liu G (2008) Hyperspectral image lossy-to-lossless compression using the 3D embedded zeroblock coding algorithm. International Workshop on Earth Observation and Remote Sensing Applications: 1–6. https://doi.org/10.1109/EORSA.2008.4620308
Hou Y, Liu G (2008) Lossy-to-lossless compression of hyperspectral image using the improved AT-3D SPIHT algorithm. International Conference on Computer Science and Software Engineering 2:963–966. https://doi.org/10.1109/CSSE.2008.1351
Jiang Z, Pan WD, Shen H (2020) Spatially and spectrally concatenated neural networks for efficient lossless compression of hyperspectral imagery. Journal of Imaging 6(6):38. https://doi.org/10.3390/jimaging6060038
Karami A, Yazdi M, Asli, AZ (2010) Hyperspectral image compression based on tucker decomposition and discrete cosine transform. In 2010 2nd international conference on image processing theory, Tools and Applications: 122-125. https://doi.org/10.1109/IPTA.2010.5586739
Kidwai NR, Khan E, Reisslein M (2016) ZM-SPECK: a fast and memoryless image coder for multimedia sensor networks. IEEE Sensors J 16(8):2575–2587. https://doi.org/10.1109/JSEN.2016.2519600
Laureen C, Sacré P-Y, Dispas A, De Bleye C, Fillet M, Ruckebusch C, Hubert P, Ziemons E (2021) Pixel-based Raman hyperspectral identification of complex pharmaceutical formulations. Anal Chim Acta 1155:338361. https://doi.org/10.1016/j.aca.2021.338361
Lee HS, Younan NH, King RL (2002) Hyperspectral image cube compression combining JPEG-2000 and spectral decorrelation. IEEE International Geoscience and Remote Sensing Symposium 6:3317–3319. https://doi.org/10.1109/IGARSS.2002.1027168
Li R, Pan Z, Wang Y (2019) The linear prediction vector quantization for hyperspectral image compression. Multimed Tools Appl 78(9):11701–11718. https://doi.org/10.1007/s11042-018-6724-8
Liu R, Cai W, Li G, Ning X, Jiang Y (2021). Hybrid dilated convolution guided feature filtering and enhancement strategy for hyperspectral image classification. IEEE Geoscience and Remote Sensing Letters: 1–5. https://doi.org/10.1109/LGRS.2021.3100407
Liu R, Ning X, Cai W, Li G (2021) Multiscale dense cross-attention mechanism with covariance pooling for hyperspectral image scene classification. Mob Inf Syst 2021:1–15. https://doi.org/10.1155/2021/9962057
Luo Y, Qin J, Xiang X, Tan Y, Liu Q, Xiang L (2020) Coverless real-time image information hiding based on image block matching and dense convolutional network. J Real-Time Image Proc 17(1):125–135. https://doi.org/10.1007/s11554-019-00917-3
Medus LD, Saban M, Francés-Víllora JV, Bataller-Mompeán M, Rosado-Muñoz A (2021) Hyperspectral image classification using CNN: application to industrial food packaging. Food Control 125:107962. https://doi.org/10.1016/j.foodcont.2021.107962
Mishra MK, Gupta A, John J, Shukla BP, Dennison P, Srivastava SS, Kaushik NK, Misra A, Dhar D (2019) Retrieval of atmospheric parameters and data-processing algorithms for AVIRIS-NG Indian campaign data. Current Science 116(7):1089–1100. https://doi.org/10.18520/cs/v116/i7/1089-1100
Mitran T, Sreenivas K, Janakirama Suresh KG, Sujatha G, Ravisankar T (2021) Spatial prediction of calcium carbonate and clay content in soils using airborne hyperspectral data. Journal of the Indian Society of Remote Sensing 49:1–12. https://doi.org/10.1007/s12524-021-01415-5
Miyoshi GT, Imai NN, Tommaselli AMG, Honkavaara E, Näsi R, Moriya ÉAS (2018) Radiometric block adjustment of hyperspectral image blocks in the Brazilian environment. Int J Remote Sens 39(15–16):4910–4930. https://doi.org/10.1080/01431161.2018.1425570
Mohan BK, Porwal A (2015) Hyperspectral image processing and analysis. Curr Sci 108(5):833–841
Morales A, Ferrer MA, Diaz-Cabrera M, Carmona C, Thomas GL (2014). The use of hyperspectral analysis for ink identification in handwritten documents. In 2014 International Carnahan Conference on Security Technology: 1-5. https://doi.org/10.1109/CCST.2014.6986980
Munmun B, Kumar SA, Praise SD (2021) Two-level band selection framework for hyperspectral image classification. Journal of the Indian Society of Remote Sensing 49(4):843–856. https://doi.org/10.1007/s12524-020-01262-w
Nadia Z, Lahdir M, Helbert D (2019) Support vector regression-based 3D-wavelet texture learning for hyperspectral image compression. Vis Comput 36(7):1473–1490. https://doi.org/10.1007/s00371-019-01753-z
Nagendran R, Vasuki A (2020) Hyperspectral image compression using hybrid transform with different wavelet-based transform coding. Int J Wavelets Multiresolut Inf Process 18(01):1941008. https://doi.org/10.1142/S021969131941008X
Ngadiran R, Boussakta S, Sharif B, Bouridane A (2010) Efficient implementation of 3D listless SPECK. IEEE international conference on computer and communication engineering, 1–4. https://doi.org/10.1109/ICCCE.2010.5556843
Paul A, Kundu A, Chaki N, Dutta D, Jha CS (2021). Wavelet enabled convolutional autoencoder based deep neural network for hyperspectral image denoising. Multimedia tools and applications: 1-27. https://doi.org/10.1007/s11042-021-11689-z
Penna B, Tillo T, Magli E, Olmo G (2006). A new low complexity KLT for lossy hyperspectral data compression. In 2006 IEEE International Symposium on Geoscience and Remote Sensing: 3525-3528. https://doi.org/10.1109/IGARSS.2006.904
Penna B, Tillo T, Magli E, Olmo G (2007) Transform coding techniques for lossy hyperspectral data compression. IEEE Trans Geosci Remote Sens 45(5):1408–1421. https://doi.org/10.1109/TGRS.2007.894565
Plaza A, Benediktsson JA, Boardman JW, Brazile J, Bruzzone L, Camps-Valls G, Chanussot J, Fauvel M, Gamba P, Gualtieri A, Marconcini M (2009) Recent advances in techniques for hyperspectral image processing. Remote Sens Environ 113:S110–S122. https://doi.org/10.1016/j.rse.2007.07.028
Raikwar SC, Tapaswi S, Chakraborty S (2021) Bounding function for fast computation of transmission in single image dehazing. Multimed Tools Appl 81:1–24. https://doi.org/10.1007/s11042-021-11752-9
Ramakrishnan D, Bharti R (2015) Hyperspectral remote sensing and geological applications. Curr Sci 108(5):879–891
Ren W, Zhang J, Ma L, Pan J, Cao X, Zuo W, Liu W, Yang MH (2018). Deep non-blind deconvolution via generalized low-rank approximation. Advances in neural information processing systems: 297-307
Ren W, Pan J, Zhang H, Cao X, Yang MH (2020) Single image dehazing via multi-scale convolutional neural networks with holistic edges. Int J Comput Vis 128(1):240–259. https://doi.org/10.1007/s11263-019-01235-8
Rupali B (2018) Enhanced encrypted reversible data hiding algorithm with minimum distortion through homomorphic encryption. Journal of Electronic Imaging 27(2):023017. https://doi.org/10.1117/1.JEI.27.2.023017
Rupali B (2021) An improved reversible and secure patient data hiding algorithm for telemedicine applications. J Ambient Intell Humaniz Comput 12(2):2915–2929. https://doi.org/10.1007/s12652-020-02449-2
Saha S, Kondmann L, Zhu XX (2021) Deep no learning approach for unsupervised change detection in hyperspectral images. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences 3:311–316. https://doi.org/10.5194/isprs-annals-V-3-2021-311-2021
Sahoo RN, Ray SS, Manjunath KR (2015) Hyperspectral remote sensing of agriculture. Curr Sci 108(5):848–859
Sharma D, Prajapati YK, Tripathi R (2018) Spectrally efficient 1.55 Tb/s Nyquist- WDM superchannel with mixed line rate approach using 27.75 Gbaud PM-QPSK and PM-16QAM. Optical Engineering 57(7):076102. https://doi.org/10.1117/1.OE.57.7.076102
Sharma D, Prajapati YK, Tripathi R (2018) Success journey of coherent PM-QPSK technique with its variants: a survey. IETE Tech Rev 37(1):36–55. https://doi.org/10.1080/02564602.2018.1557569
Subrahmanyam KV, Kumar KK, Reddy NN (2019) New insights into the convective system characteristics over the Indian summer monsoon region using space-based passive and active remote sensing techniques. IETE Tech Rev 37(2):211–219. https://doi.org/10.1080/02564602.2019.1593890
Sudha VK, Sudhakar R (2013) 3D listless embedded block coding algorithm for compression of volumetric medical images. J Sci Ind Res 72:735–748
Suresh KR, Manimegalai P (2019) Near lossless image compression using parallel fractal texture identification. Biomedical Signal Processing and Control 58:101862. https://doi.org/10.1016/j.bspc.2020.101862
Tang X, Pearlman WA (2004) Lossy-to-lossless block-based compression of hyperspectral volumetric data. IEEE International Conference on Image Processing, Singapore 5:3283–3286. https://doi.org/10.1109/ICIP.2004.1421815
Tang X, Pearlman WA (2006) Three-dimensional wavelet-based compression of hyperspectral images. In hyperspectral data compression springer, Boston, MA: 273-308. https://doi.org/10.1007/0-387-28600-4_10
Tausif M, Kidwai NR, Khan E, Reisslein M (2015) FrWF-based LMBTC: memory-efficient image coding for visual sensors. IEEE Sensors J 15(11):6218–6228. https://doi.org/10.1109/JSEN.2015.2456332
Uddin MP, Mamun MA, Hossain MA (2021) PCA-based feature reduction for hyperspectral remote sensing image classification. IETE Tech Rev 38(4):377–396. https://doi.org/10.1080/02564602.2020.1740615
UmaMaheswari S, SrinivasaRaghavan V (2021) Lossless medical image compression algorithm using tetrolet transformation. J Ambient Intell Humaniz Comput 12(3):4127–4135. https://doi.org/10.1007/s12652-020-01792-8
Valsesia D, Magli E (2017) Fast and lightweight rate control for onboard predictive coding of hyperspectral images. IEEE Geosci Remote Sens Lett 14(3):394–398. https://doi.org/10.1109/LGRS.2016.2644726
Vura S, Patil P, Patil SB (2021) A study of different compression algorithms for multispectral images. Materials Today: Proceedings. https://doi.org/10.1016/j.matpr.2021.06.175
Wang X, Tao J, Shen Y, Qin M, Song C (2018) Distributed source coding of hyperspectral images based on three-dimensional wavelet. J Indian Soc Remote Sens 46(4):667–673. https://doi.org/10.1007/s12524-017-0735-1
Wei P, Yi Zou, Lu AO (2008). A compression algorithm of hyperspectral remote sensing image based on 3-D wavelet transform and fractal. 3rd International Conference on Intelligent System and Knowledge Engineering 1: 1237–1241. https://doi.org/10.1109/ISKE.2008.4731119
Wildenstein D, George AD (2021) Towards intelligent compression of hyperspectral imagery. In 2021 IEEE International Conference on Electronics, Computing and Communication Technologies: 1–6. https://doi.org/10.1109/CONECCT52877.2021.9622585
Wu J, Wu Z, Wu C (2006) Lossy to lossless compressions of hyperspectral images using three-dimensional set partitioning algorithm. Opt Eng 45(2):027005. https://doi.org/10.1117/1.2173996
Yaman D, Kumar V, Singh RS (2020) Comprehensive review of hyperspectral image compression algorithms. Opt Eng 59(9):090902. https://doi.org/10.1117/1.OE.59.9.090902
Yaman D, Kumar V, Singh RS (2021) Parallel lossless HSI compression based on RLS filter. Journal of Parallel and Distributed Computing 150:60–68. https://doi.org/10.1016/j.jpdc.2020.12.004
Yaman D, Singh RS, Parwani K, Lunagariya S, Kumar V (2021) Convolution neural network based lossy compression of hyperspectral images. Signal Process Image Commun 95:116255. https://doi.org/10.1016/j.image.2021.116255
Zhang L, Zhang L, Tao D, Huang X, Du B (2015) Compression of hyperspectral remote sensing images by tensor approach. Neurocomputing 147:358–363. https://doi.org/10.1016/j.neucom.2014.06.052
Acknowledgements
I am sincerely thankful to the anonymous reviewers for their critical comments and suggestions to improve the quality of the paper.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
Ethics declarations
Conflict of interest
The author declares that there are no conflicts of interest.
Cite this article
Bajpai, S. Low complexity block tree coding for hyperspectral image sensors. Multimed Tools Appl 81, 33205–33232 (2022). https://doi.org/10.1007/s11042-022-13057-x