Abstract
DICOM images play an important role in everyday life. Healthcare is one of the most fundamental needs of human beings, and the biomedical domain has adopted imaging systems to identify, analyze and detect diseases. These images carry dense embedded data and occupy huge amounts of memory, so techniques are needed that reduce the memory required to store DICOM images. Transmission of such large data also consumes more bandwidth and thereby increases communication cost. DICOM image compression is the solution proposed to manage the space requirement of DICOM images without degrading image quality. Several techniques have been presented for compressing images, but achieving the desired performance remains a challenging task. In this paper, an image compression scheme is presented using the wavelet transform with a lifting scheme, Linear Predictive Coding and Huffman coding. The outcome of the proposed approach is compared with various existing techniques. The experimental analysis shows that the proposed approach achieves better performance in terms of histogram, PSNR, MSE and SSIM.
1 Introduction
Nowadays, the world is drastically shifting from traditional offline processes to recent online trends. In particular, with the advancement of studies in the medical field, its classifications and sub-classified domains are obtaining their own identities. People all over the world use biomedical technical facilities to the maximum extent to obtain better diagnosis and thereby medication. Biomedical images are the richest source for disease diagnosis. Medical images are generated by radiographs, ultrasound, X-ray, Magnetic Resonance Imaging (MRI), Computed Tomography (CT), photoacoustic imaging, and many other modalities [1].
The enhancement of disease diagnosis through medical imaging has resulted in an increase in digital data in the form of CT, X-ray and ultrasound images. These are high-quality, high-dimensional images that assist in efficient diagnosis, and they require huge storage space. For example, a thorax CT scan containing 25 slices, each with a thickness of 10 mm, requires from 600 MB up to a gigabyte of storage [2]. Currently, telemedicine and e-health applications need these images to be stored and transmitted over the network [3]. Research has contributed to the e-health domain by adopting data compression [4].
Image compression has two categories: lossy and lossless [4, 5]. Both schemes reconstruct the data, but lossy compression results in information loss, whereas lossless compression reconstructs the image to match the original exactly. Lossy compression is achieved by adopting the Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT), vector quantization and many more. Lossless compression adopts decorrelation-based methods such as SPIHT, EZW and EBCOT, and entropy-coding-based schemes such as RLC, Huffman, and LZW coding [7].
The rest of the paper is organized as follows: Sect. 2 describes available image compression techniques, Sect. 3 explains the format of the DICOM image, Sect. 4 explains the proposed DICOM image compression model, Sect. 5 analyses the experimental results, and Sect. 6 provides the conclusion and future scope of the work.
2 Literature Survey
This section presents a brief study of existing biomedical image compression schemes.
2.1 Image Compression Techniques
Harpreet Kaur et al. [8] proposed lossless compression of DICOM images using a genetic algorithm. This method identifies the region of interest (ROI), performs segmentation, applies Huffman coding for encoding and then implements compression. It adopts a genetic algorithm for ROI identification and the segmentation process; selection of the ROI depends entirely on the number of pixels in the image. The experimental results demonstrate better performance, with compressed images occupying less storage space.
Amit Kumar Shakya et al. [9] proposed a method supporting both lossy and lossless image compression. Mathematical operations are performed on the image matrix to identify the region of interest, and high-degree compression is performed on the non-ROI region. The novelty of the proposed approach is its flexibility in image size: a polygonal image compression scheme is proposed that works on square and rectangular images of different sizes according to the requirements of the user. The proposed method provides a reliable compression mechanism without losing the required data.
A. Umamageswari et al. [10] adopted a lossless JPEG 2000 image compression technique. The proposed method uses a coding method called Embedded Block Coding with Optimized Truncation (EBCOT). The Discrete Wavelet Transform with the LeGall 5/3 filter is used for compression, computed using the lifting scheme. The transformation is performed without quantization so as to retain the sensitive information in the DICOM image. This lossless method provides a saving of 30%.
Romi Fadillah Rahmat et al. [11] proposed a Huffman coding technique based on the bit frequency distribution (BFD) as an alternative to standard lossless JPEG 2000 compression for DICOM files in open PACS settings. It computes the bit frequency ratio and creates a prefix tree comprising code words, which are used as replacement codes for certain pixel values. Bit padding is then performed to maintain the standard file size. The proposed method generates space savings of 72.98% with a compression ratio of 1:3.7010.
Bruylants et al. [12] reported that JPEG 2000 is a promising technique for DICOM image compression; however, its performance can be improved for volumetric medical image compression. In order to take advantage of JPEG 2000, the authors presented a generic coding framework. This framework supports the JP3D volumetric extension along with different types of wavelet transforms and intra-band prediction modes.
Zuo et al. [13] combined lossy and lossless image compression. Lossy compression is poorly suited to medical images but has a high compression ratio, while lossless compression protects the data but has a low compression ratio. The authors therefore adopted both schemes in an ROI-based approach: lossless compression is applied to the ROI region and lossy compression to the non-ROI region.
Along with the compression problem, computational complexity and memory requirements have to be addressed. This complexity was identified by Lone et al. [14], who found that lossless compression occupies a large amount of memory to encode and decode the image data. Considering these issues, the authors proposed the Wavelet Block Tree Coding (WBTC) algorithm, which uses a spatial-orientation block-tree coding approach. Here, the image is processed through block-tree coding by dividing it into 2×2 blocks, and redundancy information in the sub-bands is obtained.
To improve the performance of lossless compression, a novel scheme was proposed by Song et al. [15] based on irregular segmentation and region-based prediction. The approach has two phases. The first phase adopts a geometry-adaptive quadtree partitioning scheme for adaptive irregular segmentation. In the second phase, least-squares predictors are used to operate on sub-blocks and different regions. This scheme also improves reconstruction by utilizing spatial correlation and local structure similarity.
Geetha et al. [16] reported that vector quantization (VQ) is widely adopted for image compression. Linde–Buzo–Gray (LBG) is the most widely used type of VQ; it compresses the image by constructing a locally optimal codebook. In this work, the authors treated codebook construction as an optimization problem, which is solved using a bio-inspired optimization technique.
3 Background
3.1 Introduction to DICOM Image
The American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA) together developed the DICOM standard [9]. It is the most widely accepted image format for communication. A DICOM image has a built-in format capable of storing patient details, 3D information, slice thickness, image exposure parameters and the actual organ size in the image. DICOM images possess a high degree of embedded information, which supports storing, maintaining and retrieving images (Fig. 1).
A DICOM image is a two-dimensional function f(x, y), where x and y are spatial coordinates and f is the amplitude at any pair of coordinates (x, y). A digital image is composed of a finite number of pixels arranged in a matrix of M rows and N columns.
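This representation can be illustrated with a small array (a synthetic example; real DICOM pixel data would typically be read with a library such as pydicom):

```python
import numpy as np

# A grayscale medical image is a 2-D function f(x, y) sampled into an
# M x N matrix of pixel intensities.
M, N = 4, 4
image = np.arange(M * N, dtype=np.uint16).reshape(M, N)

print(image.shape)   # (4, 4): M rows, N columns
print(image[2, 3])   # amplitude f at row 2, column 3 -> 11
```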
3.2 Image Compression Model
The general functions involved in DICOM image compression are depicted in Fig. 2. The image to be compressed is transformed into a domain suitable for decomposition. After the region of interest is found, the pixel values are quantized. The identified pixels in the ROI are then coded using the adopted coding technique. The coding phase generates the compressed image.
4 Proposed Architecture
A new model is proposed for DICOM image compression using a hybrid technique composed of Linear Predictive Coding, the Discrete Wavelet Transform with a lifting scheme, and Huffman coding. The flowchart of the proposed DICOM image compression model is illustrated in Fig. 3. The internal loop of the proposed model encompasses the Linear Predictive Coding, lifting-based Discrete Wavelet Transform and Huffman coding operations.
The steps involved in the compression process are:
- The input image is processed by Linear Predictive Coding (LPC), which produces a coded image.
- The discrete wavelet transform is applied to the coded image, dividing it into the four sub-bands LL, LH, HL and HH and thereby generating the region of interested pixels.
- The decomposed image is processed through Huffman coding, where zigzag scanning is applied to generate the coded (compressed) image. The compressed image is decoded through Huffman decoding, inverse DWT and inverse LPC to reconstruct the image.
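The encoding steps above can be sketched as follows, assuming a simple first-order horizontal predictor for the LPC stage and an unnormalized one-level Haar transform for the DWT stage (all function names are illustrative, not the authors' implementation):

```python
import numpy as np

def lpc_encode(img):
    # Predict each pixel from its left neighbour; keep the residual.
    res = img.astype(np.int32).copy()
    res[:, 1:] -= img[:, :-1].astype(np.int32)
    return res

def lpc_decode(res):
    # Invert the prediction by cumulative summation along each row.
    return np.cumsum(res, axis=1)

def haar_dwt2(x):
    # One decomposition level producing the LL, LH, HL, HH sub-bands
    # (unnormalized sums and differences of 2x2 neighbourhoods).
    a = x[0::2, :] + x[1::2, :]
    d = x[0::2, :] - x[1::2, :]
    LL = a[:, 0::2] + a[:, 1::2]
    LH = a[:, 0::2] - a[:, 1::2]
    HL = d[:, 0::2] + d[:, 1::2]
    HH = d[:, 0::2] - d[:, 1::2]
    return LL, LH, HL, HH

img = np.array([[10, 12, 11, 13],
                [10, 12, 11, 13],
                [20, 22, 21, 23],
                [20, 22, 21, 23]], dtype=np.uint8)

residual = lpc_encode(img)
assert np.array_equal(lpc_decode(residual), img)  # LPC stage is lossless
LL, LH, HL, HH = haar_dwt2(residual)
print(LL.shape)  # each sub-band is a quarter of the image: (2, 2)
```

The sub-band coefficients would then be fed to the Huffman coder described below in Sect. 4.1.2.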
4.1 Implementation
4.1.1 Wavelet Compression
In this subsection, the discrete wavelet transform used in the image compression phase is described. The wavelet decomposes the input signal into approximation, horizontal, vertical and diagonal components and thereby contributes to the compression operation. It performs three operations in a loop, namely transformation, quantization and coding. Telemedicine adopts wavelets in all its sub-domains as an efficient means of DICOM image formatting. The processes of wavelets are explained below.
Transformation:
This first phase represents the image in the domain necessary for processing; it performs the conversion between image domains.

Quantization:

Visually significant and visually insignificant pixels are quantized with larger or smaller numbers of bits according to their importance in the image area.
Entropy Coding:
The non-uniform distribution of symbols during quantization is exploited during the entropy coding.
The wavelet transform decomposes the image into four sub-bands: LL, LH, HL and HH. The DWT-based scheme uses the Haar filter in the lifting scheme with type-I filters. The type-I filter coefficients are given as:
where h1 denotes the filter coefficient for the prediction phase and h2 denotes the filter coefficient for the update phase of the lifting scheme. The first phase of the wavelet transform consists of the forward lifting phase.
Figure 4 shows the architecture of the forward lifting scheme, which contains the split, predict and update operations.
This scheme splits the input signal into two parts using a split function. These samples are denoted as the even and odd samples, x_e[n] = x[2n] and x_o[n] = x[2n + 1].
The odd samples are predicted from the even samples with the help of their correlation. The difference between the original and predicted samples is known as the wavelet coefficient, and this overall process is called the lifting scheme.
In the next step, the update phase is applied, where the even sample values are updated based on the wavelet coefficients. This generates the scaling coefficients, which are passed to the next step for further processing. This is expressed as follows:
where P denotes the predict operation and U denotes the update operation.
After these steps, the odd elements are replaced by the differences and the even elements are replaced by the averages. This approach yields integer coefficients, which makes the transform reversible.
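The split, predict and update steps above can be sketched for a 1-D signal as follows, using integer Haar lifting so the transform is exactly reversible (a minimal illustration; function names are not from the original work):

```python
def forward_lifting(x):
    even, odd = x[0::2], x[1::2]                 # split
    d = [o - e for o, e in zip(odd, even)]       # predict: wavelet coefficients
    s = [e + dv // 2 for e, dv in zip(even, d)]  # update: scaling coefficients
    return s, d

def inverse_lifting(s, d):
    even = [sv - dv // 2 for sv, dv in zip(s, d)]  # undo update
    odd = [dv + e for dv, e in zip(d, even)]       # undo predict
    x = [None] * (2 * len(s))                      # merge interleaves samples
    x[0::2], x[1::2] = even, odd
    return x

signal = [12, 14, 20, 24, 30, 28]
s, d = forward_lifting(signal)
assert inverse_lifting(s, d) == signal  # perfect (lossless) reconstruction
print(s, d)  # [13, 22, 29] [2, 4, -2]
```

Because only integer additions, subtractions and shifts are used, no rounding error accumulates, which is what makes the scheme suitable for lossless medical image coding.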
Similarly, the reverse lifting scheme is applied to reconstruct the original signal; that is, the inverse wavelet transform is applied. The reverse operation has three steps: update, predict and merge. Figure 5 shows the architecture of the reverse lifting scheme.
These operations are given as:
In this process, the samples are downsampled by a factor of two at each stage and the final step produces a single output. The input signal is processed through the low-pass and high-pass filters simultaneously, and the output data is downsampled in each stage. The complete process is repeated until a single output is generated by combining the four output bands.
4.1.2 Huffman Coding
In this section, Huffman coding for image compression is explained. Huffman coding is a widely adopted technique for lossless data compression. This scheme assigns variable-length codes to the input data, where the length depends on the frequencies of the characters in the input data stream [4]. Characters with a high frequency of occurrence are assigned short codes, and characters with low occurrence are assigned longer codes. These variable-length codes are known as prefix codes: no assigned code is the prefix of the code of any other character. This ensures there is no ambiguity while decoding the data. The Huffman coding scheme contains two main steps: construction of the tree and assignment of the prefix codes to the characters. The general architecture [11] of Huffman coding is demonstrated in Fig. 6.
Huffman Encoding Phase:
In this phase, the Huffman tree is constructed from the unique characters and their frequencies of occurrence. Huffman coding is a well-known greedy algorithm for lossless compression. It uses variable-length prefix codes, where the length of a character's code depends on its frequency of occurrence: the most frequent characters are assigned short codes and the less frequent characters are assigned longer codes. First, a leaf node is built for each unique character, and a min-heap of all leaf nodes is constructed to serve as a priority queue; the character with the least occurrence sits at the root of the heap. Next, the two nodes with minimum frequency are extracted from the heap and assigned as the left and right children of a new internal node, which is added back to the heap. This process is repeated until the heap contains only one node, the root of the Huffman tree.
Prefix Rule:
- It is used by Huffman coding to assign bit codes to characters.
- It prevents ambiguities during the decoding phase.
- It ensures that no assigned code is the prefix of any other code, so each code identifies exactly one character.
Huffman Coding follows two major steps:
By using the input characters and their frequencies of occurrence, the Huffman tree is constructed.
The Huffman tree is traversed to assign codes to the characters.
4.1.3 Huffman Decoding
The Huffman tree contains the complete code information for the characters. At the receiver end, the receiver traverses this tree bit by bit: when the traversal moves to the left child, a "0" is consumed from the code stream, otherwise a "1". Compression is attained by assigning short bit codes to the most frequent symbols and longer bit codes to the least frequent symbols; decompression reverses the operation.
The procedure for obtaining the Huffman code is shown in Fig. 6. The probabilities of s6 and s7 have the lowest values, so they are combined to form a new probability of value 0.07, after which the probabilities are re-sorted. The process is repeated until the final stage, where two probabilities remain (in this example 0.5 and 0.4). The bit assignment is then performed in the backward direction, as shown in Fig. 7 [4].
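The tree-construction and code-assignment steps described above can be sketched as follows (a minimal illustration of the greedy algorithm, not the authors' implementation):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    freq = Counter(data)
    # Each heap entry: (frequency, insertion-order tie-breaker, symbol or subtree).
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # two least-frequent nodes
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, i, (left, right)))
        i += 1
    codes = {}
    def walk(node, code):
        if isinstance(node, tuple):          # internal node
            walk(node[0], code + "0")        # left edge -> "0"
            walk(node[1], code + "1")        # right edge -> "1"
        else:
            codes[node] = code or "0"
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("aaaabbc")
# Most frequent symbol gets the shortest code; no code is a prefix of another.
assert len(codes["a"]) <= len(codes["b"]) <= len(codes["c"])
print(codes)
```

Decoding walks the same tree bit by bit, going left on "0" and right on "1", emitting a symbol at each leaf.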
5 Results and Discussion
In this section, the experimental analysis of the proposed combined image compression approach is presented. The experiments are conducted using MATLAB and Python tools installed on the Windows platform. The approach is tested on biomedical images covering different modalities: ultrasound, CT and MRI. For each modality, five images are considered. Ultrasound images are obtained from https://www.ultrasoundcases.info/cases/abdomen-and-retroperitoneum/, and CT and MRI images are obtained from https://www.osirix-viewer.com/. In order to evaluate the performance of the proposed approach, several analyses are performed, including histogram analysis, autocorrelation of adjacent pixels, information entropy, NPCR and UACI analysis, PSNR, and MSE.
5.1 Comparative Analysis for Compression
5.1.1 Histogram Analysis
In this section, the histogram analysis of the proposed approach for different images is presented. The histogram of an image is a graphical representation of the tone distribution in the digital image. For encrypted images, similarity between histograms indicates better cryptography, whereas the histograms of different original images differ from each other. Comparing the histogram amplitudes of the original and reconstructed images shows any deviation in image quality introduced by the reconstruction. Figure 7 depicts the histogram analysis for image compression.
The performance of the proposed approach is measured in terms of PSNR, MSE, and SSIM. These parameters are computed as described in Table 1.
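For reference, the MSE and PSNR metrics can be computed as follows, assuming 8-bit images (MAX = 255); SSIM is typically obtained from a library such as scikit-image's structural_similarity (this sketch is illustrative, not the authors' code):

```python
import numpy as np

def mse(original, reconstructed):
    # Mean squared error between original and reconstructed pixels.
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, reconstructed, peak=255.0):
    # Peak signal-to-noise ratio in dB; infinite for identical images.
    err = mse(original, reconstructed)
    if err == 0:
        return float("inf")  # perfectly lossless reconstruction
    return 10.0 * np.log10(peak ** 2 / err)

a = np.full((8, 8), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] += 2  # one pixel differs by 2 -> MSE = 4/64 = 0.0625
print(round(mse(a, b), 4), round(psnr(a, b), 2))
```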
The outcome of the proposed approach is compared with existing schemes; Table 2 shows the comparative analysis for image compression.
The comparative analysis shows that the proposed approach achieves better performance than the existing techniques in terms of PSNR, MSE and SSIM.
6 Conclusion
This work focuses on biomedical imaging, observing that telemedicine diagnosis systems are now widely adopted. In these systems, data is transmitted to remote locations, which consumes considerable bandwidth, and the medical images require huge storage space; during transmission, maintaining security is also a prime concern. Hence, this work presents a combined approach to data compression to reduce the storage requirement. The compression scheme is based on a hybrid approach of predictive coding, Huffman coding and a DWT framework. The paper presents an extensive experimental analysis and compares the outcome of the proposed approach with existing schemes, showing that the proposed approach achieves better performance. However, the approach has been tested only on 2D biomedical images; 3D and multispectral image processing remain challenging tasks that can be addressed in future research.
References
Gonde, A.B., Patil, P.W., Galshetwar, G.M., Waghmare, L.M.: Volumetric local directional triplet patterns for biomedical image retrieval. In: 2017 Fourth International Conference on Image Information Processing (ICIIP), pp. 1–6. IEEE, December 2017
Liu, F., Hernandez-Cabronero, M., Sanchez, V., Marcellin, M.W., Bilgin, A.: The current role of image compression standards in medical imaging. Information 8(4), 131 (2017)
Amri, H., Khalfallah, A., Gargouri, M., Nebhani, N., Lapayre, J.C., Bouhlel, M.S.: Medical image compression approach based on image resizing, digital watermarking and lossless compression. J. Sig. Process. Syst. 87(2), 203–214 (2017)
Hussain, A.J., Al-Fayadh, A., Radi, N.: Image compression techniques: a survey in lossless and lossy algorithms. Neurocomputing 300, 44–69 (2018)
Kumari, M., Gupta, S., Sardana, P.: A survey of image encryption algorithms. 3DResearch 8(4), 37 (2017)
Rahman, M., Hamada, M.: Lossless image compression techniques: a state-of-the-art survey. Symmetry 11(10), 1274 (2019)
Uthayakumar, J., Vengattaraman, T., Dhavachelvan, P.: A survey on data compression techniques: From the perspective of data quality, coding schemes, data type and applications. J. King Saud Univ.-Comput. Inf. Sci. (2018)
Kaur, H., Kaur, R., Kumar, N.: Lossless compression of DICOM images using genetic algorithm. In: 2015 1st International Conference on Next Generation Computing Technologies (NGCT), pp. 985–989 (2015). https://doi.org/10.1109/NGCT.2015.7375268
Shakya, A.K., Ramola, A., Pandey, D.C.: Polygonal region of interest based compression of DICOM images. In: 2017 International Conference on Computing, Communication and Automation (ICCCA), pp. 1035–1040 (2017). https://doi.org/10.1109/CCAA.2017.8229993
Umamageswari, A., Suresh, G.R.: Security in medical image communication with Arnold's cat map method and reversible watermarking. In: 2013 International Conference on Circuits, Power and Computing Technologies (ICCPCT), pp. 1116–1121 (2013). https://doi.org/10.1109/ICCPCT.2013.6528904
Rahmat, R.F.: Analysis of DICOM image compression alternative using Huffman coding. J. Healthcare Eng. 2019, Article ID 5810540, 11 pages (2019). https://doi.org/10.1155/2019/5810540
Bruylants, T., Munteanu, A., Schelkens, P.: Wavelet based volumetric medical image compression. Signal Process. Image Commun. 31, 112–133 (2015)
Zuo, Z., Lan, X., Deng, L., Yao, S., Wang, X.: An improved medical image compression technique with lossless region of interest. Optik 126(21), 2825–2831 (2015)
Lone, M.R.: A high speed and memory efficient algorithm for perceptually-lossless volumetric medical image compression. J. King Saud Univ.-Comput. Inf. Sci. (2020)
Song, X., Huang, Q., Chang, S., He, J., Wang, H.: Lossless medical image compression using geometry-adaptive partitioning and least square-based prediction. Med. Biol. Eng. Comput. 56(6), 957–966 (2017). https://doi.org/10.1007/s11517-017-1741-8
Geetha, K., Anitha, V., Elhoseny, M., Kathiresan, S., Shamsolmoali, P., Selim, M.M.: An evolutionary lion optimization algorithm-based image compression technique for biomedical applications. Expert. Syst. 38(1), e12508 (2021)
Raja, S.P.: Joint medical image compression–encryption in the cloud using multiscale transform-based image compression encoding techniques. Sādhanā 44(2), 1 (2019). https://doi.org/10.1007/s12046-018-1013-9
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Latha, H.R., Rama Prasath, A. (2022). ICPCH: A Hybrid Approach for Lossless Dicom Image Compression Using Combined Approach of Linear Predictive Coding and Huffman Coding with Wavelets. In: Guru, D.S., Y. H., S.K., K., B., Agrawal, R.K., Ichino, M. (eds) Cognition and Recognition. ICCR 2021. Communications in Computer and Information Science, vol 1697. Springer, Cham. https://doi.org/10.1007/978-3-031-22405-8_21