Abstract
The hyperspectral image provides rich spectral information, which facilitates multiple applications. With the rapid advancement of the spatial and spectral resolution of optical instruments, the image data size has increased many fold. This calls for a compression algorithm with low coding complexity, low coding memory demand and high coding efficiency. In recent years, many coding algorithms have been proposed. The wavelet transform-based set-partitioned hyperspectral compression algorithms have superior coding performance. These algorithms employ linked lists or state tables to track the significance/insignificance of the partitioned sets/coefficients. The proposed algorithm uses the pyramid hierarchy property of the wavelet transform. Markers are used to track the significance/insignificance of each pyramid level. A single pyramid level contains many sets. In the proposed compression algorithm, an insignificant pyramid level comprising multiple sets is represented by a single bit, whereas in 3D Set Partitioned Embedded bloCK (3D-SPECK) and 3D-Listless SPECK (3D-LSK) a single bit represents only one insignificant set. Consequently, the proposed algorithm requires fewer bits than other wavelet transform compression algorithms at the high bit planes. The simulation results show that the proposed compression algorithm has high coding efficiency with very low coding complexity and a moderate coding memory requirement. The reduced coding complexity improves the performance of the image sensor and lowers the power consumption. Thus, the proposed compression algorithm has great potential for low-resource onboard hyperspectral imaging systems.
1 Introduction
The hyperspectral (HS) image from spaceborne spectrometers is 3D volumetric data that contains abundant spatial and spectral information spanning the visible near-infrared (400 to 1000 nm) and short-wave infrared (1000 to 2500 nm) regions of the electromagnetic (EM) spectrum for a single scene [1, 2]. Due to its high spectral resolution, the HS image is used in numerous applications such as precision farming [3], aerospace [4], medical surgery [5], drug sample verification [6], corrosion detection [7], document validation [8], food grain quality [9], mineral detection and exploration [10], urban planning [11], soil quality measurement [12], analysis of land use and land cover [13], meteorological condition monitoring [14], semiconductor device metrology [15], astrionics [16], monsoon monitoring [17], and military surveillance [18]. Remote sensing (RS) [19] is one of the fast-growing fields of HS imaging, in which researchers develop algorithms for compression [20], object classification [21], feature extraction [22], target detection [23], band selection [24], denoising [25], change detection estimation [26], feature reduction [27], dimensionality reduction [28], segmentation [29], image unmixing [30], etc. The HS images for remote sensing applications are acquired from onboard HS image sensors [31]. The memory required to store one HS image is approximately 150 MB [32]. Thus, HS image compression becomes a necessary step before the HS image is transmitted to the earth station for further processing, to save memory storage, transmission bandwidth, data transmission time, and processing power [33,34,35,36]. Besides the above-mentioned advantages, a HyperSpectral Image Compression Algorithm (HSICA) also reduces computational complexity, which improves HS image sensor performance [37].
HyperSpectral Image Compression Algorithms (HSICAs) can be classified on the basis of data loss or the coding process [38]. Based on data loss, HSICAs can be divided into three sub-categories: lossless, lossy and near-lossless HS image compression [39]. In lossless compression, there is no data loss and the reconstructed HS image is identical to the original. Near-lossless compression loses some data, but the reconstructed HS image is nearly identical to the original HS image (before the compression process) [40]. Lossy compression loses image data but achieves a much higher compression ratio than the other two types, at the cost of reconstruction quality. The peak signal-to-noise ratio (PSNR) should be ‘∞’ for ideal reconstruction of the HS image after the compression process [41]. On the other hand, human observers are almost unable to detect HS image degradations when the PSNR is at least 40 dB [42]. The best lossless HSICA has a compression ratio (CR) of 4, which is insufficient [43]. Hence, lossy HSICAs are needed for the compression of HS images.
On the basis of the coding process, HSICAs can be further divided into nine sub-categories: transform coding (TC) [44], predictive coding (PC) [45], vector quantization (VQ) [46], compressive sensing (CS) [47, 48], sparse representation (SR) [49, 50], tensor decomposition (TD) [51], neural network (NN)-based [52], machine learning (ML)-based [53], and hybrid compression algorithms [54].
A TC-based HSICA uses a mathematical transform (Fourier transform, cosine transform, wavelet transform, Karhunen–Loeve transform, 3D dual-tree transform, lapped transform) applied along all three dimensions to convert the HS image from the time domain to the frequency domain [55]. The mathematical transform removes the unwanted redundancy (spatial and spectral correlation) in the HS image. The wavelet transform performs better than the other mathematical transforms because it offers simultaneous localization in the time and frequency domains. TC-based HSICAs can also be combined with other types of compression algorithms to achieve compression (hybrid type) [38].
The 3D set-partitioned embedded zero block coding, 3D embedded zeroblock coding algorithm, improved AT-3D SPIHT algorithm, JPEG-2000 with spectral decorrelation, distributed source coding, 3D wavelet-fractal coding, adapting SPIHT, lapped transform and Tucker decomposition (LT-TD), and spatial-orientation tree wavelet (STW) are the state-of-the-art TC-based HSICAs [39, 56,57,58,59,60,61,62,63].
Although listless HSICAs have low coding complexity and a constant coding memory requirement, the 3D-LMBTC [61] and 3D-ZM-SPECK [63] have small coding memory requirements but high coding complexity. The 3D-LSK [59] and 3D-NLS [60] have low coding complexity with a high coding memory requirement. The 3D-LCBTC [62] is a special case of 3D-WBTC [58], which uses two small lists, LCBC & LPBC, and two state marker tables, BCSM & DSM [62]. The 3D-LCBTC [62] has a higher coding memory requirement than 3D-LMBTC [61] and 3D-ZM-SPECK [63]. The proposed HSICA, 3D-Listless Block Cube Set Partitioning Coding (3D-LBCSPC), uses the pyramid hierarchy property of the wavelet transform and has high coding efficiency with a fixed coding memory. 3D-LBCSPC follows the same partitioning rule as 3D-SPECK [56]. It also reduces the coding complexity, which makes it an appropriate choice for resource-constrained HS image sensors.
2 Related work
2.1 Set-partitioned hyperspectral image compression algorithms
Set-partitioned HS image compression algorithms use a set structure to represent a large number of insignificant coefficients. A set-partitioned HS image compression algorithm has several properties, such as low coding memory requirement, low coding complexity, high coding efficiency and embeddedness, which make it a perfect choice for the compression of HS images [40, 64]. Set-partitioned HSICAs can be classified into four types: list-based set-partitioned HSICA [58], listless set-partitioned HSICA [40], list & state table-based set-partitioned HSICA [62] and array-based set-partitioned HSICA [43].
-
1.
List-based set-partitioned HSICA: This type of HSICA uses linked lists for tracking the partitioned sets or coefficients. The 3D-SPIHT [57], 3D-SPECK [56] and 3D-WBTC [58] are the major compression algorithms under this category. The 3D-SPIHT & 3D-WBTC use three lists, while 3D-SPECK uses two linked lists for tracking the sets. As the bit rate grows, the size of the lists grows rapidly, which also increases the coding complexity [28]. Thus, these HSICAs are not the best solution at high bit rates.
-
2.
Listless set-partitioned HSICA: This type of HSICA uses a state table or markers for tracking the partitioned sets or coefficients. The 3D-LSK [59], 3D-NLS [60], 3D-LMBTC [61] and 3D-ZM-SPECK [63] are the major compression algorithms under this category. The demand for coding memory is constant: it depends only on the dimensions of the HS image and not on the bit rate. Due to the state tables/markers, it has very low coding complexity [63]. But the reduced coding complexity and coding memory come at the cost of reduced coding efficiency. This type of algorithm has slightly lower coding efficiency if the bit budget is exhausted in the middle of a bit plane [61].
-
3.
List & state table-based set-partitioned HSICA: The 3D-LCBTC [62] is a compression algorithm of this type, which uses two lists and two state tables to track the partitioned sets or coefficients. 3D-LCBTC is less complex than other state-of-the-art HSICAs with at-par coding efficiency [62].
-
4.
Array-based set-partitioned HSICA: The 3D-BPEC is a type of HSICA which uses six arrays to track the partitioned sets or coefficients. It has slightly lower complexity than list-based HSICAs [43].
3 3D-Listless block cube set-partitioning coding (3D-LBCSPC)
The proposed 3D-LBCSPC is a lightweight listless version of 3D-SPECK [56] with low coding complexity, a low coding memory requirement and high coding efficiency at low bit rates. 3D-LBCSPC also outperforms the other wavelet transform-based listless HSICAs, 3D-LSK [59] and 3D-ZM-SPECK [63], which follow the same partitioning rules as 3D-SPECK [56]. 3D-LBCSPC exploits the property of the 3D dyadic wavelet transform in which a large number of insignificant coefficients are represented as a single ‘0’ at the high bit planes. At the highest bit plane, 3D-LBCSPC needs three to six times fewer bits than its peer compression algorithms. Thus, it excels at low bit rates.
3.1 State markers
3D-LBCSPC uses three types of state table markers for tracking the significance of the partitioned block cube sets or coefficients: two fixed markers (α[η] and β[η]) and one variable marker (γ[η]). The numeric value of the fixed markers remains fixed during the compression process, while the variable marker changes its value according to the partitioning rule or bit plane. For the fixed markers, η denotes the leading indices of the wavelet transform levels, while for the variable marker η indexes all wavelet coefficients of the transformed HS image.
The numeric values of the markers α[η] and γ[η] depend on the level of the wavelet transform. For an HS image of size ‘N × N × N’ with ‘L’ levels of wavelet transform, the initial and final values of the markers α[η] and γ[η] are given in Eq. 1 and Eq. 2:
The value of β[η] is fixed at the leading indices of each wavelet transform orientation. Like 3D-LSK [59], each marker in the proposed HSICA occupies 0.5 byte per coefficient.
The α[η] marker tracks the wavelet pyramid level rather than a partitioned sub-band. This gives a great advantage at low bit rates, where most transform coefficients are insignificant against the current threshold. If a pyramid level is found insignificant against the current threshold, a single bit ‘0’ is used to represent the whole pyramid level. In 3D-LSK [59] and 3D-SPECK [56], seven ‘0’ bits are used, one each for the LLH, LHL, LHH, HLL, HLH, HHL and HHH sub-bands. The β[η] marker is used to skip multiple wavelet pyramid levels at the top bit planes instead of skipping a single pyramid level. The γ[η] marker is used to track the sets partitioned within a pyramid-level sub-band.
The 3D-LBCSPC uses three different symbols to characterize a single coefficient, which are as follows.
IC | The coefficient was insignificant up to the last bit plane and has not yet been tested for the current bit plane |
NC | The coefficient is significant at the current bit plane |
SC | The coefficient became significant at a previous bit plane and will be refined in the current bit plane |
The working of the static markers (α[η] and β[η]) and the dynamic marker (γ[η]) for the wavelet pyramid level ‘L’ (for the static markers) and ‘L−1’ (for the dynamic marker) is described below. It can be generalized in the same way to the other levels of the transformed HS image. The markers are defined as in Tables 1, 2, and 3.
3.2 Proposed algorithm
The HS image is transformed (L levels) with the dyadic wavelet transform. The transformed HS image coefficients are quantized to the nearest integer. The transformed HS image cube is converted to a 1D (linear) array through Morton mapping. The low-resolution sub-bands are present at the start of the array, while the high-resolution sub-bands are present at the end of the array.
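Morton mapping interleaves the bits of the three coordinates so that each 2 × 2 × 2 block, and recursively each sub-band octant, occupies a contiguous run of the 1D array. A minimal sketch of such a 3D Z-order index (our own helper, not the paper's code):

```python
def morton_index(x, y, z, bits=8):
    """Interleave the bits of (x, y, z) to get the 3D Z-order (Morton) index."""
    idx = 0
    for b in range(bits):
        idx |= ((x >> b) & 1) << (3 * b)        # x bit goes to position 3b
        idx |= ((y >> b) & 1) << (3 * b + 1)    # y bit goes to position 3b + 1
        idx |= ((z >> b) & 1) << (3 * b + 2)    # z bit goes to position 3b + 2
    return idx
```

For example, the 8 × 8 × 8 low-resolution corner of the cube maps onto the first 512 entries of the array, which is why the low-resolution sub-bands appear at the start of the linear array.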
The proposed HSICA consists of two stages: initialization and bit plane passes. Each bit plane pass has three sub-passes: the insignificant coefficient pass (ICP), the insignificant set pass (ISP) and the refinement pass (RP). Further, the ISP can be divided into the insignificant wavelet level (IWL) pass and the insignificant group of wavelet levels (IGWL) pass.
3.2.1 Initialization pass
The encoding process of the proposed HSICA starts from the topmost bit plane ‘n’ and moves toward the lower bit planes until the bit budget is exhausted. The initial threshold ‘T’ is shown in Eq. 3.
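Consistent with the SPECK-family convention for the topmost bit plane, Eq. 3 takes the form (c[η] denotes the transform coefficients):

```latex
n = \left\lfloor \log_2 \left( \max_{\eta} \lvert c[\eta] \rvert \right) \right\rfloor,
\qquad
T = 2^{\,n} \quad \text{(3)}
```

The encoder then halves the threshold after every full bit plane pass, so the bit plane index decreases from n toward 0.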
The static marker (α [η]) and the dynamic marker (γ [η]) are initialized as follows.
- α[1, 65, 129, 193, 257, 321, 385, 449] = γ[1, 65, 129, 193, 257, 321, 385, 449] = 3 for the LLL5 sub-band
- α[513, 1025, 1537, 2049, 2561, 3073, 3585] = γ[513, 1025, 1537, 2049, 2561, 3073, 3585] = 4 for the starting nodes of the LLH5, LHL5, LHH5, HLL5, HLH5, HHL5, and HHH5 sub-bands
- α[4097, 8193, 12289, 16385, 20481, 24577, 28673] = γ[4097, 8193, 12289, 16385, 20481, 24577, 28673] = 5 for the starting nodes of the LLH4, LHL4, LHH4, HLL4, HLH4, HHL4, and HHH4 sub-bands
- α[32769, 65537, 98305, 131073, 163841, 196609, 229377] = γ[32769, 65537, 98305, 131073, 163841, 196609, 229377] = 6 for the starting nodes of the LLH3, LHL3, LHH3, HLL3, HLH3, HHL3, and HHH3 sub-bands
- α[262145, 524289, 786433, 1048577, 1310721, 1572865, 1835009] = γ[262145, 524289, 786433, 1048577, 1310721, 1572865, 1835009] = 7 for the starting nodes of the LLH2, LHL2, LHH2, HLL2, HLH2, HHL2, and HHH2 sub-bands
- α[2097153, 4194305, 6291457, 8388609, 10485761, 12582913, 14680065] = γ[2097153, 4194305, 6291457, 8388609, 10485761, 12582913, 14680065] = 8 for the starting nodes of the LLH1, LHL1, LHH1, HLL1, HLH1, HHL1, and HHH1 sub-bands
- β[513, 4097, 32769, 262145, 2097153] = 9
- γ[η] is initialized to a value greater than 8 for all remaining η (i.e., all indices other than 1, 65, 129, 193, …, 12582913, 14680065)
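The starting indices above follow directly from the Morton ordering: for a 256³ cube with a five-level transform, the LLL5 band occupies the first 8³ = 512 entries, each of the seven detail sub-bands at a given level occupies one contiguous block of the same size, and each finer level's blocks are 8× larger. A small sketch (our own helper, not the paper's code) reproduces the table:

```python
def marker_start_indices(n=256, levels=5):
    """Starting indices (1-based) of each sub-band in the Morton-ordered
    1D array for an n^3 cube with `levels` dyadic wavelet levels."""
    starts = {}
    size = (n >> levels) ** 3            # LLL block: 8^3 = 512 coefficients
    starts['LLL%d' % levels] = 1
    offset = size
    for lev in range(levels, 0, -1):
        # seven detail sub-bands (LLH..HHH) at this level, each `size` long
        starts['level%d' % lev] = [offset + 1 + k * size for k in range(7)]
        offset += 7 * size
        size *= 8                         # the next (finer) level is 8x larger
    return starts
```

Running `marker_start_indices(256, 5)` yields exactly the α/γ index lists shown above, ending at 14680065 for the HHH1 sub-band.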
3.2.2 Insignificant coefficient pass (ICP)
The insignificant coefficient pass (ICP) tests the coefficients that were insignificant in the previous bit plane or pass against the threshold of the current bit plane.
3.2.3 Insignificant set pass (ISP)
The insignificant set pass is the combination of two sub-passes, namely the insignificant wavelet level (IWL) pass and the insignificant group of wavelet levels (IGWL) pass. The IWL pass tests a specific wavelet pyramid level for insignificance against the current threshold, while the IGWL pass tests multiple wavelet pyramid levels for insignificance against the current threshold. These passes are conducted through the static markers. As the compression algorithm moves from the higher bit planes to the lower bit planes, these passes are skipped, since most transform coefficients are significant at the lower bit planes.
3.2.4 Refinement pass (RP)
The refinement pass sends the refinement bits for those coefficients that became significant in a previous bit plane.
The algorithm starts from the top bit plane 'n' with all three types of markers initialized according to the coefficient locations in the 1D array defined in Tables 1, 2, and 3 for an HS image cube of size 256 × 256 × 256. The five-level dyadic wavelet transform is used to transform the HS image. 3D-LBCSPC follows the same partitioning rule as 3D-SPECK, but its significance testing of the coefficients is slightly different from 3D-SPECK and 3D-LSK. Instead of testing individual block cubes, 3D-LBCSPC tests the whole wavelet orientation. In the best case, one insignificant wavelet orientation contains at most seven insignificant block cubes. In this way, at the top bit planes, where there are many insignificant coefficients, 3D-LBCSPC generates one bit to represent an insignificant wavelet orientation, while 3D-SPECK generates seven bits for the same set of coefficients. Identification of the wavelet orientation is performed through the markers, as a marker is present at the first index of each block cube or wavelet orientation. For a significant wavelet orientation, 3D-LBCSPC executes the same process as 3D-SPECK and generates a bitstream of the same length. The orientation is partitioned until the coefficient level is reached. For any significant block cube, the significance of the block cube is sent and the block cube is partitioned into equal block cubes. For any coefficient significant at the current bit plane, the significance bit with the sign bit is sent to the output. Thus, at the top bit planes, 3D-LBCSPC saves a lot of bits, so its coding efficiency is high at low bit rates, while at high bit rates it is almost the same as the other zero block cube set-partitioned HSICAs. The pseudo-code for 3D-LBCSPC is given in Table 4.
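The bit saving described above can be sketched with a toy significance test (a simplified stand-in for the coder's set test; the helper and variable names are our own):

```python
import numpy as np

def significant(coeffs, threshold):
    """SPECK-family significance test: does any |c| reach the threshold 2^n?"""
    return np.max(np.abs(coeffs)) >= threshold

rng = np.random.default_rng(0)
# seven detail sub-bands of one pyramid level, all insignificant at T = 64
subbands = [rng.integers(-3, 4, size=(8, 8, 8)) for _ in range(7)]
T = 64

# 3D-SPECK / 3D-LSK style: one '0' bit per insignificant sub-band set -> 7 bits
bits_per_set = [0 if not significant(sb, T) else 1 for sb in subbands]

# 3D-LBCSPC style: one '0' bit for the whole insignificant pyramid level -> 1 bit
level_bit = 0 if not any(significant(sb, T) for sb in subbands) else 1
```

Here the whole-level test emits a single '0' where the per-set coders emit seven, which is exactly the saving exploited at the top bit planes.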
4 Results and discussion
The proposed HSICA 3D-LBCSPC and the other wavelet transform-based set-partitioned HSICAs 3D-SPECK (HSICA 1) [56], 3D-SPIHT (HSICA 2) [57], 3D-WBTC (HSICA 3) [58], 3D-LSK (HSICA 4) [59], 3D-NLS (HSICA 5) [60], 3D-LMBTC (HSICA 6) [61], 3D-LCBTC (HSICA 7) [62] and 3D-ZM-SPECK (HSICA 8) [63] are implemented and validated on an Intel Core i3 central processing unit @ 1.6 GHz (64 bit) with 8 GB of RAM. Four HS images are employed in this manuscript to determine the performance of the HSICAs: Washington DC Mall (Hyperspectral Image I), Yellowstone Scene 0 (Hyperspectral Image II), Yellowstone Scene 3 (Hyperspectral Image III), and Yellowstone Scene 18 (Hyperspectral Image IV) [65]. The "Yellowstone" data set (spatial dimension 512 by 680, spectral dimension 224, uncalibrated 16 bits/pixel) is captured by the AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) sensor, and "Washington DC Mall" (spatial dimension 1280 by 307, spectral dimension 191, pixel depth 14 bits per pixel) is captured by the HYDICE (Hyperspectral Digital Imagery Collection Experiment) sensor. The Washington DC Mall HS image contains man-made structures, while the Yellowstone HS images cover natural areas. The HS images are cropped from the top-left corner to the size of a cube, and zero padding is done where required. The five-level dyadic wavelet transform is applied to each HS image, and the transform coefficients are quantized to the nearest integer. The 3D transformed image cube is converted to a 1D array through Morton mapping (a linear indexing scheme) [62, 66].
The performance of the HSICAs is evaluated on coding efficiency (peak signal-to-noise ratio, mean square error, structural similarity index and feature-similarity index), coding memory and coding complexity (execution time required to generate the encoded bitstream and execution time required to decode the received bitstream) [40, 67, 68]. The peak signal-to-noise ratio (PSNR) is measured in decibels (dB), coding memory in kilobytes (KB) and megabytes (MB), and encoding and decoding times in seconds. The mean square error, structural similarity (SSIM) index and feature-similarity (FSIM) index are unitless metrics [38, 69,70,71,72].
4.1 Coding efficiency
PSNR is mainly used to quantify the reconstruction quality of HS images affected by lossy compression. PSNR is mathematically shown in Eq. 5 [70]
The maximum value of the image signal is represented as MAXa. The mean square error (MSE) is calculated in Eq. 6
A(x,y,z) is the original (uncompressed) HS image and B(x,y,z) is the reconstructed (compressed) HS image. ‘N’ is each dimension of the HS image. The compression ratio (CR) is a unitless parameter defined as the ratio between the bits required to represent the original image and the bits required to represent the reconstructed image. Mathematically, it is defined as in Eq. 7
The bit rate associated with the compression process is defined as in Eq. 8
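For completeness, the standard forms of these metrics, consistent with the definitions in the surrounding text, are:

```latex
\mathrm{PSNR} = 10 \log_{10}\!\frac{\mathrm{MAX}_a^{2}}{\mathrm{MSE}} \quad \text{(5)}

\mathrm{MSE} = \frac{1}{N^{3}} \sum_{x=1}^{N}\sum_{y=1}^{N}\sum_{z=1}^{N}
\bigl[A(x,y,z) - B(x,y,z)\bigr]^{2} \quad \text{(6)}

\mathrm{CR} = \frac{\text{bits for the original image}}{\text{bits for the reconstructed image}} \quad \text{(7)}

\text{bit rate (bpppb)} = \frac{\text{total encoded bits}}{N^{3}} = \frac{\text{pixel depth}}{\mathrm{CR}} \quad \text{(8)}
```

Here bpppb (bits per pixel per band) normalizes the encoded bitstream length by the number of coefficients in the cube.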
3D-LBCSPC has the same partitioning rule as 3D-LSK [59] and 3D-SPECK [56] (zero block cube-based set-partitioned HSICAs). We observe from Table 5 (PSNR) that 3D-LBCSPC outperforms the other HSICAs at low bit rates (0.1 bpppb or less). It is also observed from Table 6 that 3D-LBCSPC codes more significant bits than the other HSICAs, which increases the PSNR of the proposed HSICA. 3D-LBCSPC uses the wavelet orientation property, in which a single bit defines the seven insignificant sub-bands present in the same wavelet orientation plane, while for 3D-LSK [59], 3D-ZM-SPECK [63] and 3D-SPECK [56], each insignificant sub-band is defined by a separate bit at the high bit planes. Hence, a large number of bits are saved at the high bit planes, and at high bit rates the performance of 3D-LBCSPC is nearly the same as that of its peer algorithms. It is noticed from Table 5 that the variation between the PSNR of the proposed 3D-LBCSPC and 3D-SPECK [56] lies in the range of − 0.28 dB to 0.14 dB for Hyperspectral Image I, − 0.01 dB to 0.11 dB for Hyperspectral Image II, − 0.19 dB to 0.18 dB for Hyperspectral Image III, and − 0.27 dB to 0.14 dB for Hyperspectral Image IV. Similarly, the variation between 3D-LBCSPC and 3D-LSK [59] lies in the range of 0.15 dB to 0.73 dB for Hyperspectral Image I, 0.03 dB to 0.36 dB for Hyperspectral Image II, 0.13 dB to 0.39 dB for Hyperspectral Image III, and − 0.1 dB to 0.28 dB for Hyperspectral Image IV. In the same way, the variation between 3D-LBCSPC and 3D-ZM-SPECK [63] lies in the range of 0.07 dB to 0.64 dB for Hyperspectral Image I, 0.05 dB to 0.37 dB for Hyperspectral Image II, 0.05 dB to 0.54 dB for Hyperspectral Image III, and − 0.02 dB to 0.33 dB for Hyperspectral Image IV. For an ideal HS image reconstruction after compression, the MSE should be ‘0’ and the PSNR should be ‘\(\infty\)’ [38, 42].
Table 6 gives a detailed view of the HS image quality (HSIQ) in terms of coding efficiency (PSNR) together with the refinement coefficients (RC) and newly significant coefficients (NSC) for each bit rate.
The Bjontegaard metric, or BD-PSNR, is used to compare the rate-distortion performance of two different HSICAs on the same HS image over a range of bit rates (bpppb) [40]. Table 7 gives the numeric values of the BD-PSNR over seven different bit rates.
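As an illustration, BD-PSNR is commonly computed by fitting a cubic polynomial to PSNR as a function of log bit rate for each coder and averaging the gap over the overlapping rate range. A sketch under those assumptions (function and variable names are our own):

```python
import numpy as np

def bd_psnr(rates_a, psnr_a, rates_b, psnr_b):
    """Average PSNR gain (dB) of coder B over coder A across the
    overlapping bit-rate range, following Bjontegaard's method."""
    la, lb = np.log10(rates_a), np.log10(rates_b)
    # cubic fit of PSNR versus log10(bit rate)
    pa, pb = np.polyfit(la, psnr_a, 3), np.polyfit(lb, psnr_b, 3)
    lo, hi = max(la.min(), lb.min()), min(la.max(), lb.max())
    # integrate both fits over the common interval and average the difference
    ia, ib = np.polyint(pa), np.polyint(pb)
    area_a = np.polyval(ia, hi) - np.polyval(ia, lo)
    area_b = np.polyval(ib, hi) - np.polyval(ib, lo)
    return (area_b - area_a) / (hi - lo)
```

A positive value means coder B delivers higher PSNR than coder A on average over the tested rates.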
4.2 Coding memory
3D-LBCSPC uses markers to track the significance of coefficients or partitioned block cube sets. The memory required by the dynamic marker γ[η] is ‘RCW’ when each marker entry is one byte (‘R’, ‘C’ and ‘W’ represent the three dimensions of the transformed HS image). Similarly, the static markers α[η] and β[η] require memories of ‘7L + 8’ and ‘L’ entries, respectively. The coding memory required by the sub-band coefficients is ‘\(I{\mathcal{P}}\)’ (‘\(P\)’ is the size of a sub-band coefficient and ‘I’ is the length of the sub-band).
This gives the total memory required by the coding (encoding and decoding) process.
Note that the static markers are not updated; they serve as references for the dynamic marker to determine the significance of a wavelet transform level or a new sub-band. The numeric value of the coding memory is calculated with the help of Eq. 9
The coding memory required by 3D-LSK is given in Eq. 10
So, the memory requirement of 3D-LBCSPC is slightly higher than that of 3D-LSK, by ‘8(L + 1)’ bytes. For five levels of wavelet transform, only 48 bytes of extra memory are required (less than 1 KB of coding memory). It is clear from Table 8 that 3D-LBCSPC requires more coding memory than 3D-LCBTC, 3D-LMBTC and 3D-ZM-SPECK [61,62,63], but it outperforms 3D-NLS [60]. It also requires less coding memory at high bit rates (greater than 0.25 bpppb) than the list-based HSICAs 3D-SPECK, 3D-SPIHT and 3D-WBTC [56,57,58].
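To make the memory counts concrete, a small sketch (assuming one byte per marker entry, as in the setup of Eq. 9; the helper names are our own) reproduces the 48-byte overhead over 3D-LSK:

```python
def lbcspc_marker_memory(r, c, w, levels):
    """Marker memory (bytes) for 3D-LBCSPC, one byte per marker entry."""
    gamma = r * c * w          # dynamic marker: one entry per coefficient
    alpha = 7 * levels + 8     # static marker on the leading sub-band indices
    beta = levels              # static marker on the leading level indices
    return gamma + alpha + beta

def lsk_marker_memory(r, c, w):
    """3D-LSK keeps one marker entry per coefficient (Eq. 10)."""
    return r * c * w

# Extra memory of 3D-LBCSPC over 3D-LSK for a 256^3 cube with L = 5:
extra = lbcspc_marker_memory(256, 256, 256, 5) - lsk_marker_memory(256, 256, 256)
# extra = (7L + 8) + L = 8(L + 1) = 48 bytes
```

The dominant γ[η] term is identical for both coders, so the overhead reduces to the two static markers, 8(L + 1) bytes in total.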
4.3 Coding complexity
The coding complexity is the time required by an HSICA to encode the input HS image and to decode the received bitstream to reconstruct the HS image [62]. It is noticed in Tables 9 and 10 that the encoding time is greater than the decoding time. From Tables 9 and 10, the proposed HSICA outperforms the other HSICAs and has the lowest coding time for all bit rates. The complexity is reduced because the proposed HSICA uses the markers to define the wavelet level. If the whole wavelet level is insignificant, this saves output bits and also reduces the number of computational operations (logical and arithmetic).
For an insignificant wavelet level, the proposed HSICA requires only one significance test, while the other compression algorithms require at least seven significance tests to cover the whole wavelet level. Thus, it has very low complexity at low bit rates and moderate performance at high bit rates.
5 Conclusion
Coding complexity is a big issue for HS image sensors. High coding complexity degrades the performance of the sensor, and more power is consumed by the sensor due to the large number of computations. Hence, resource-constrained HS image sensors need compression algorithms with low coding complexity, low coding memory requirements and at-par coding efficiency. The proposed HSICA 3D-LBCSPC is a low-complexity compression algorithm that utilizes the wavelet orientation property. It also requires a low, fixed coding memory. 3D-LBCSPC gives the best coding efficiency at low bit rates, while at high bit rates it performs at par with the other compression algorithms. It also supports both lossy and lossless compression. Thus, 3D-LBCSPC is an optimum choice for low-resource HS image sensors.
Data availability
No dataset was generated in this research.
References
Sivakumar, C., Chaudhry, M.M., Paliwal, J.: Classification of pulse flours using near-infrared hyperspectral imaging. LWT. 15(154), 112799 (2022). https://doi.org/10.1016/j.lwt.2021.112799
Zabalza, J., Murray, P., Bennett, S., Campbell, A., Marshall, S., Ren, J., Yan, Y., Bernard, R., Hepworth, S., Malone, S., Cockbain, N.: Hyperspectral imaging based corrosion detection in nuclear packages. IEEE Sens. J. 23(1), 25607–25617 (2023). https://doi.org/10.1109/JSEN.2023.3312938
Sahoo, R.N., Rejith, R.G., Gakhar, S., Ranjan, R., Meena, M.C., Dey, A., Mukherjee, J., Dhakar, R., Meena, A., Daas, A., Babu, S.: Drone remote sensing of wheat N using hyperspectral sensor and machine learning. Precis. Agric. (2023). https://doi.org/10.1007/s11119-023-10089-7
Sarinova, A., Lisnevskyi, R., Biloshchytskyi, A., and Akizhanova, A.: The Lossless Compression Algorithm of Hyperspectral Aerospace Images with Correlation and Bands Grouping. 2022 International Conference on Smart Information Systems and Technologies (SIST). IEEE, pp. 1-5 (2022). https://doi.org/10.1109/SIST54437.2022.9945821.
Yoon, J.: Hyperspectral imaging for clinical applications. BioChip J. 16(1), 1–12 (2022). https://doi.org/10.1007/s13206-021-00041-0
Shinde, S.R., Bhavsar, K., Kimbahune, S., Khandelwal, S., Ghose, A., & Pal, A. Detection of Counterfeit Medicines using Hyperspectral Sensing. 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). IEEE, pp. 6155–6158, (2020). https://doi.org/10.1109/EMBC44109.2020.9176419.
Keane, A., Murray, P., Zabalza, J., Di Buono, A., Cockbain, N., Bernard, R.: Hyperspectral imaging analysis of corrosion products on metals in the UV range. Hyperspect. Imaging Appl. II(12338), 44–53 (2023). https://doi.org/10.1117/12.2647429
Zaman, Z., Ahmed, S.B., Malik, M.I.: Analysis of hyperspectral data to develop an approach for document images. Sensors. 23(15), 6845 (2023). https://doi.org/10.3390/s23156845
Aviara, N.A., Liberty, J.T., Olatunbosun, O.S., Shoyombo, H.A., Oyeniyi, S.K.: Potential application of hyperspectral imaging in food grain quality inspection, evaluation and control during bulk storage. J. Agric. Food Res. 8, 100288 (2022). https://doi.org/10.1016/j.jafr.2022.100288
Deepa, C., Shetty, A., Narasimhadhan, A.V.: Performance evaluation of dimensionality reduction techniques on hyperspectral data for mineral exploration. Earth Sci. Inform. 16(1), 25–36 (2023). https://doi.org/10.1007/s12145-023-00956-2
Nisha, A., and Anitha, A.: Current Advances in Hyperspectral Remote Sensing in Urban Planning. 2022 Third International Conference on Intelligent Computing Instrumentation and Control Technologies (ICICICT). IEEE, pp. 94–98, (2022). https://doi.org/10.1109/ICICICT54557.2022.9917771.
Pande, C.B., Moharir, K.N.: Application of hyperspectral remote sensing role in precision farming and sustainable agriculture under climate change: A review. Climate Change Impacts Nat. Resour. Ecosyst. Agric. Syst. 14, 503–520 (2023). https://doi.org/10.1007/978-3-031-19059-9_21
Moharram, M.A., Sundaram, D.M.: Dimensionality reduction strategies for land use land cover classification based on airborne hyperspectral imagery: a survey. Environ. Sci. Pollut. Res. 30(3), 5580–5602 (2023). https://doi.org/10.1007/s11356-022-24202-2
Zhang, Q., Smith, W., Sr., Shao, M.: The potential of monitoring carbon dioxide emission in a geostationary view with the GIIRS meteorological hyperspectral infrared sounder. Remote Sens. 15(4), 886 (2023). https://doi.org/10.3390/rs15040886
Jun, S., Choi, W., Kim, D., Park, H., Kyeon, D., Lee, K., Jeon, Y.J., Lee, C., Kim, K., Ha, J. and Ryu, S.: Semiconductor Device Metrology for Detecting Defective Chip Due to High-Aspect Ratio-Based Structures using Hyperspectral Imaging and Deep Learning. Metrology, Inspection, and Process Control XXXVII. Vol. 12496. SPIE (2023). https://doi.org/10.1117/12.2657062.
Thangavel, K., Spiller, D., Sabatini, R., Amici, S., Sasidharan, S.T., Fayek, H., Marzocca, P.: Autonomous satellite wildfire detection using hyperspectral imagery and neural networks: a case study on Australian wildfire. Remote Sens. 15(3), 720 (2023). https://doi.org/10.3390/rs15030720
Naik, B.B., Naveen, H.R., Sreenivas, G., Choudary, K.K., Devkumar, D., Adinarayana, J.: Identification of water and nitrogen stress indicative spectral bands using hyperspectral remote sensing in maize during post-monsoon season. J. Indian Soc. Remote Sens. 48, 1787–1795 (2020). https://doi.org/10.1007/s12524-020-01200-w
Shimoni, M., Haelterman, R., Perneel, C.: Hyperspectral imaging for military and security applications: Combining myriad processing and sensing techniques. IEEE Geosci. Remote Sens. Mag. 7(2), 101–117 (2019). https://doi.org/10.1109/MGRS.2019.2902525
Bajpai, S., Singh, H.V., Kidwai, N.R.: Feature extraction & classification of hyperspectral images using singular spectrum analysis & multinomial logistic regression classifiers. 2017 International Conference on Multimedia, Signal Processing and Communication Technologies (IMPACT). IEEE, pp. 97–100 (2017). https://doi.org/10.1109/MSPCT.2017.8363982
Chandra, H., Bajpai, S.: Listless block cube tree coding for low resource hyperspectral image compression sensors. 2022 5th International Conference on Multimedia, Signal Processing and Communication Technologies (IMPACT), pp. 1–5 (2022). https://doi.org/10.1109/IMPACT55510.2022.10029076
Ramamurthy, M., Robinson, Y.H., Vimal, S., Suresh, A.: Auto encoder based dimensionality reduction and classification using convolutional neural networks for hyperspectral images. Microprocess. Microsyst. 79, 103280 (2020). https://doi.org/10.1016/j.micpro.2020.103280
Zabalza, J., Ren, J., Wang, Z., Marshall, S., Wang, J.: Singular spectrum analysis for effective feature extraction in hyperspectral imaging. Geosci. Remote Sens. Lett. 11(11), 1886–1890 (2014). https://doi.org/10.1109/LGRS.2014.2312754
Sneha, K.A.: Hyperspectral imaging and target detection algorithms: a review. Multimed. Tools Appl. 81(30), 44141–44206 (2022). https://doi.org/10.1007/s11042-022-13235-x
Das, S., Bhattacharya, S., Routray, A., Kani Deb, A.: Band selection of hyperspectral image by sparse manifold clustering. IET Image Proc. 13(10), 1625–1635 (2019). https://doi.org/10.1049/iet-ipr.2018.5423
Zhang, J., Cai, Z., Chen, F., Zeng, D.: Hyperspectral image denoising via adversarial learning. Remote Sens. 14(8), 1790 (2022). https://doi.org/10.3390/rs14081790
Luo, F., Zhou, T., Liu, J., Guo, T., Gong, X., Ren, J.: Multiscale diff-changed feature fusion network for hyperspectral image change detection. IEEE Trans. Geosci. Remote Sens. 61, 1–13 (2023). https://doi.org/10.1109/TGRS.2023.3241097
Luo, F., Zou, Z., Liu, J., Lin, Z.: Dimensionality reduction and classification of hyperspectral image via multistructure unified discriminative embedding. IEEE Trans. Geosci. Remote Sens. 60, 1–16 (2021). https://doi.org/10.1109/TGRS.2021.3128764
Uddin, M.P., Mamun, M.A., Hossain, M.A.: PCA-based feature reduction for hyperspectral remote sensing image classification. IETE Tech. Rev. 38(4), 377–396 (2021). https://doi.org/10.1080/02564602.2020.1740615
Grewal, R., Kasana, S.S., Kasana, G.: Hyperspectral image segmentation: a comprehensive survey. Multimed. Tools Appl. 82(14), 20819–20872 (2023). https://doi.org/10.1007/s11042-022-13959-w
Das, S., Ghosal, S.: Unmixing aware compression of hyperspectral image by rank aware orthogonal parallel factorization decomposition. J. Appl. Remote. Sens. 17(4), 046509 (2023). https://doi.org/10.1117/1.JRS.17.046509
Dahiya, N., Singh, S., Gupta, S.: Comparative analysis and implication of Hyperion hyperspectral and landsat-8 multispectral dataset in land classification. J. Indian Soc. Remote Sens. 51, 2201–2213 (2023)
Bajpai, S., Sharma, D., Alam, M., Chandel, V.S., Pandey, A.K., Tripathi, S.L.: Curvelet transform based compression algorithm for low resource hyperspectral image sensors. J. Elect. Comput. Eng. 2023, 1–18 (2023). https://doi.org/10.1155/2023/8961271
Bajpai, S., Kidwai, N.R.: Fractional wavelet filter based low memory coding for hyperspectral image sensors. Multimed. Tools Appl. (2023). https://doi.org/10.1007/s11042-023-16528-x
Sharma, D., Prajapati, Y.K., Tripathi, R.: Success journey of coherent PM-QPSK technique with its variants: a survey. IETE Tech. Rev. 37(1), 36–55 (2020). https://doi.org/10.1080/02564602.2018.1557569
Jaiswal, G., Rani, R., Mangotra, H., Sharma, A.: Integration of hyperspectral imaging and autoencoders: Benefits, applications, hyperparameter tunning and challenges. Comput. Sci. Rev. 50, 100584 (2023). https://doi.org/10.1016/j.cosrev.2023.100584
Dua, Y., Singh, R.S., Kumar, V.: Compression of multi-temporal hyperspectral images based on RLS filter. Vis. Comput. 38(1), 65–75 (2022). https://doi.org/10.1007/s00371-020-02000-6
Chandra, H., Bajpai, S., Alam, M., Chandel, V.S., Pandey, A.K., Pandey, D.: 3D-Memory efficient listless set partitioning in hierarchical trees for hyperspectral image sensors. J. Intell. Fuzzy Syst. 45(6), 11163–11187 (2023). https://doi.org/10.3233/JIFS-231684
Bajpai, S.: Low complexity image coding technique for hyperspectral image sensors. Multimed. Tools Appl. 82(20), 31233–31258 (2023). https://doi.org/10.1007/s11042-023-14738-x
Dua, Y., Kumar, V., Singh, R.S.: Comprehensive review of hyperspectral image compression algorithms. Opt. Eng. 59(9), 090902 (2020). https://doi.org/10.1117/1.OE.59.9.090902
Bajpai, S.: Low complexity and low memory compression algorithm for hyperspectral image sensors. Wireless Pers. Commun. 131(2), 805–833 (2023). https://doi.org/10.1007/s11277-023-10455-8
Kidwai, N.R., Khan, E., Reisslein, M.: ZM-SPECK: A fast and memoryless image coder for multimedia sensor networks. IEEE Sens. J. 16(8), 2575–2587 (2016). https://doi.org/10.1109/JSEN.2016.2519600
Tausif, M., Khan, E., Pinheiro, A.: Computationally efficient wavelet-based low memory image coder for WMSNs/IoT. Multidimens. Syst. Signal Process. 18, 1–24 (2023). https://doi.org/10.1007/s11045-023-00878-8
Chandra, H., Bajpai, S.: 3D-block partitioning embedded coding for hyperspectral image sensors. 2023 International Conference on Power, Instrumentation, Energy and Control (PIECON), pp. 1–5 (2023). https://doi.org/10.1109/PIECON56912.2023.10085841
Nagendran, R., Vasuki, A.: Hyperspectral image compression using hybrid transform with different wavelet-based transform coding. Int. J. Wavelets Multiresolut. Inf. Process. 18(1), 1941008 (2020). https://doi.org/10.1142/S021969131941008X
Valsesia, D., Magli, E.: A novel rate control algorithm for onboard predictive coding of multispectral and hyperspectral images. IEEE Trans. Geosci. Remote Sens. 52(10), 6341–6355 (2014). https://doi.org/10.1109/TGRS.2013.2296329
Li, R., Pan, Z., Wang, Y.: The linear prediction vector quantization for hyperspectral image compression. Multimed. Tools Appl. 78, 11701–11718 (2019). https://doi.org/10.1007/s11042-018-6724-8
Gunasheela, K.S., Prasantha, H.S.: Compressive sensing approach to satellite hyperspectral image compression. Inf. Commun. Technol. Intell. Syst. (2019). https://doi.org/10.1007/978-981-13-1742-2_49
Xu, K., Liu, B., Nian, Y., He, M., Wan, J.: Distributed lossy compression for hyperspectral images based on multilevel coset codes. Int. J. Wavelets Multiresol. Inform. Process. 15(02), 1750012 (2017)
Fu, W., Li, S., Fang, L., Benediktsson, J.A.: Adaptive spectral–spatial compression of hyperspectral image with sparse representation. IEEE Trans. Geosci. Remote Sens. 55(2), 671–682 (2016). https://doi.org/10.1109/TGRS.2016.2613848
Fu, C., Yi, Y., Luo, F.: Hyperspectral image compression based on simultaneous sparse representation and general-pixels. Pattern Recogn. Lett. 116, 65–71 (2018). https://doi.org/10.1016/j.patrec.2018.09.013
Das, S.: Hyperspectral image, video compression using sparse tucker tensor decomposition. IET Image Proc. 15(4), 964–973 (2021). https://doi.org/10.1049/ipr2.12077
Dua, Y., Singh, R.S., Parwani, K., Lunagariya, S., Kumar, V.: Convolution neural network based lossy compression of hyperspectral images. Signal Process. Image Commun. 95, 116255 (2021). https://doi.org/10.1016/j.image.2021.116255
Sujitha, B., Parvathy, V.S., Lydia, E.L., Rani, P., Polkowski, Z., Shankar, K.: Optimal deep learning based image compression technique for data transmission on industrial Internet of things applications. Trans. Emerg. Telecommun. Technol. 32(7), e3976 (2021). https://doi.org/10.1002/ett.3976
Báscones, D., González, C., Mozos, D.: Hyperspectral image compression using vector quantization, PCA and JPEG2000. Remote Sens. 10(6), 907 (2018). https://doi.org/10.3390/rs10060907
Bairagi, V.K., Sapkal, A.M., Gaikwad, M.S.: The role of transforms in image compression. J. Inst. Eng. INDIA Series B 94, 135–140 (2013). https://doi.org/10.1007/s40031-013-0049-9
Tang, X., Pearlman, W.A.: Lossy-to-lossless block-based compression of hyperspectral volumetric data. 2004 International Conference on Image Processing, Vol. 5, pp. 3283–3286, IEEE (2004). https://doi.org/10.1109/ICIP.2004.1421815
Tang, X., Pearlman, W.A.: Three-dimensional wavelet-based compression of hyperspectral images. Hyperspectral Data Compression. Springer US, Boston, MA, pp. 273–308 (2006). https://doi.org/10.1007/0-387-28600-4_10
Bajpai, S., Kidwai, N.R., Singh, H.V.: 3D wavelet block tree coding for hyperspectral images. Int. J. Innov. Technol. Explor. Eng. IJITEE. 8(6C), 64–68 (2019)
Ngadiran, R., Boussakta, S., Sharif, B., Bouridane, A.: Efficient implementation of 3D listless SPECK. International Conference on Computer and Communication Engineering (ICCCE'10), IEEE, pp. 1–4 (2010). https://doi.org/10.1109/ICCCE.2010.5556843
Sudha, V.K., Sudhakar, R.: 3D listless embedded block coding algorithm for compression of volumetric medical images. J. Sci. Ind. Res. 72, 735–748 (2013)
Bajpai, S., Kidwai, N.R., Singh, H.V., Singh, A.K.: Low memory block tree coding for hyperspectral images. Multimed. Tools Appl. 78(19), 27193–27209 (2019). https://doi.org/10.1007/s11042-019-07797-6
Bajpai, S.: Low complexity block tree coding for hyperspectral image sensors. Multimed. Tools Appl. 81(23), 33205–33232 (2022). https://doi.org/10.1007/s11042-022-13057-x
Bajpai, S., Kidwai, N.R., Singh, H.V., Singh, A.K.: A low complexity hyperspectral image compression through 3D set partitioned embedded zero block coding. Multimed. Tools Appl. 81(1), 841–872 (2022). https://doi.org/10.1007/s11042-021-11456-0
Bajpai, S., Singh, H.V., Kidwai, N.R.: 3D modified wavelet block tree coding for hyperspectral images. Indones. J. Elect. Eng. Comput. Sci. IJEECS. 15(2), 1001–1008 (2019). https://doi.org/10.11591/ijeecs.v15.i2.pp1001-1008
Kiely, A.B., Klimesh, M.A.: Exploiting calibration-induced artifacts in lossless compression of hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 47(8), 2672–2678 (2009). https://doi.org/10.1109/TGRS.2009.2015291
Anand, A., Kumar, S.A.: A comprehensive study of deep learning-based covert communication. ACM Trans. Multimed. Comput. Commun. Appl. TOMM. 18(2), 1–9 (2022). https://doi.org/10.1145/3508365
Tang, X., Pearlman, W.A., Modestino, J.W.: Hyperspectral image compression using three-dimensional wavelet coding. Image and Video Communications and Processing 2003, Vol. 5022. SPIE (2003). https://doi.org/10.1117/12.476516
Raja, S.P.: Wavelet-based image compression encoding techniques—a complete performance analysis. Int. J. Image Graph. 20(02), 2050008 (2020). https://doi.org/10.1142/S0219467820500084
Hernández-Cabronero, M., Kiely, A.B., Klimesh, M., Blanes, I., Ligo, J., Magli, E., Serra-Sagrista, J.: The CCSDS 123.0-B-2 low-complexity lossless and near-lossless multispectral and hyperspectral image compression standard: a comprehensive review. IEEE Geosci. Remote Sens. Mag. 9(4), 102–119 (2021). https://doi.org/10.1109/MGRS.2020.3048443
Bhardwaj, R.: Hiding patient information in medical images: an encrypted dual image reversible and secure patient data hiding algorithm for E-healthcare. Multimed. Tools Appl. 81(1), 1125–1152 (2022). https://doi.org/10.1007/s11042-021-11445-3
Zikiou, N., Lahdir, M., Helbert, D.: Support vector regression-based 3D-wavelet texture learning for hyperspectral image compression. Vis. Comput. 36(7), 1473–1490 (2020). https://doi.org/10.1007/s00371-019-01753-z
Setiadi, D.R.: PSNR vs SSIM: imperceptibility quality assessment for image steganography. Multimed. Tools Appl. 80(6), 8423–8444 (2021). https://doi.org/10.1007/s11042-020-10035-z
Acknowledgements
I sincerely thank the anonymous reviewers for their critical comments and suggestions, which improved the quality of the paper. The MCN for this manuscript is IU/R&D/2023-MCN002303.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
Contributions
Shrish Bajpai developed the algorithm, performed the simulations, and prepared and reviewed the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Consent for publication
The author agreed to the final approval of the version to be published.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Bajpai, S. 3D-listless block cube set-partitioning coding for resource constraint hyperspectral image sensors. SIViP 18, 3163–3178 (2024). https://doi.org/10.1007/s11760-023-02979-0