
1 Introduction

The enormous quantity of data produced as medical images demands extensive storage capacity, processing, and analysis, and such data are difficult to transfer. Even with the latest developments in storage systems, digital communication requires storage capacity and transmission bandwidth that often exceed the capabilities of available technologies. It is therefore advantageous to represent an image with fewer bits and to transfer only the compressed information; the original image is reconstructed whenever it is needed. Image compression produces a smaller graphics file without significant degradation in image quality, and decompression converts the compressed data back into the original image or the best possible approximation of it. A digital image is a two-dimensional array of picture elements, called pixels, each of which represents the intensity at a specific point in the image. Modern medical imaging generates images in digital form for easy access, storage for future retrieval, and transmission from one location to another. Because these imaging techniques produce a high volume of data, compression becomes mandatory to reduce storage requirements and transmission time. Digital images can be classified into different types, e.g., binary, grayscale, color, false color, multispectral, and thematic [1, 2].

1.1 Image Compression

The process of representing an image with fewer bits by removing its redundancies is called compression (Gonzalez and Woods 2002). Compression performance is described in terms of the compression ratio (CR) or the number of bits per pixel (bpp), termed the bit rate. CR and bit rate are determined using the following formulas [3]:

$$ \mathrm{CR}=\frac{\mathrm{Original}\ \mathrm{image}\ \mathrm{size}\ \mathrm{in}\ \mathrm{bits}}{\mathrm{Compressed}\ \mathrm{image}\ \mathrm{size}\ \mathrm{in}\ \mathrm{bits}} $$
$$ \mathrm{Bit}\ \mathrm{rate}\ \left(\mathrm{bpp}\right)=\frac{\mathrm{Number}\ \mathrm{of}\ \mathrm{bits}\ \mathrm{in}\ \mathrm{the}\ \mathrm{compressed}\ \mathrm{image}}{\mathrm{Number}\ \mathrm{of}\ \mathrm{pixels}\ \mathrm{in}\ \mathrm{the}\ \mathrm{image}} $$
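As a concrete illustration of these two definitions, the sketch below computes CR and bit rate for a hypothetical 512 × 512, 8-bit image; the compressed size is a made-up example value, not a measured result.

```python
def compression_metrics(orig_bits: int, comp_bits: int, num_pixels: int):
    """Return (compression ratio, bit rate in bits per pixel)."""
    cr = orig_bits / comp_bits       # CR = original size / compressed size
    bpp = comp_bits / num_pixels     # bit rate = compressed bits per pixel
    return cr, bpp

# Example: a 512 x 512 grayscale image stored at 8 bits/pixel,
# compressed to 32 768 bytes (illustrative numbers only).
rows, cols, bits_per_pixel = 512, 512, 8
orig_bits = rows * cols * bits_per_pixel
comp_bits = 32_768 * 8
cr, bpp = compression_metrics(orig_bits, comp_bits, rows * cols)
print(f"CR = {cr:.1f}:1, bit rate = {bpp:.2f} bpp")   # CR = 8.0:1, 1.00 bpp
```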

In general, three types of redundancy can be identified [4]:

  • Coding redundancy

  • Inter-pixel redundancy

  • Psycho-visual redundancy

1.2 Coding Redundancy

In images, some gray values appear more frequently than others. By assigning fewer bits to the more probable gray values and more bits to the less probable ones, coding redundancy can be effectively reduced. Variable-length coding is a commonly used technique that exploits coding redundancy to remove redundant data from the image [5]. The most accurate and popular variable-length coding techniques are Huffman coding and arithmetic coding.

1.3 Inter-pixel Redundancy

Neighboring pixels in an image are highly correlated, so the value of a pixel can largely be inferred from its neighbors. Inter-pixel redundancy is reduced by removing this correlation between pixels, for example through predictive or transform coding [6].

1.4 Psycho-visual Redundancy

The human eye does not have equal sensitivity to all visual information. Certain information is less significant than other information in normal visual processing; such information is termed psycho-visually redundant. It can be eliminated without noticeably changing the visual quality of the image, since it is not essential for normal visual perception. The elimination of psycho-visually redundant data is referred to as quantization, since it results in a loss of quantitative information [7].

1.5 Image Compression Model

The image compression system is presented in Fig. 1. The source encoder, shown in Fig. 1a, reduces the redundancies of the input image. The mapper transforms the input image into an array of coefficients to reduce inter-pixel redundancy; this is a reversible process. The quantizer then reduces the accuracy of the mapper output, and the encoder generates a fixed- or variable-length code to represent the quantizer output.

Fig. 1 Block diagram of image compression. (a) Source encoder. (b) Source decoder

The source decoder is presented in Fig. 1b. It contains two blocks, namely the decoder and the inverse mapper, which perform the inverse operations of the encoder and the mapper, respectively. The reconstructed image may or may not be an exact copy of the input image [8, 9].

1.6 Classification of Image Compression

Broadly, image compression is divided into two types: lossless image compression and lossy image compression. In lossless compression, the reconstructed image is identical to the original image. The degree of compression achieved can be expressed by the CR; the CR reported for lossless techniques is typically around 2:1 to 3:1. The compression ratio of lossy compression is always higher than that of lossless techniques, but the reconstructed image contains degradations relative to the original image. A lossy compression method is called visually lossless when the loss of information caused by the compression is invisible to an observer.

1.7 Quality Measures for Image Compression

Quality measures are quantitative measures of the characteristics or properties of the result; they are the measurement tools that determine the quality of the outcome and how well an algorithm achieves the intended result. The quality measures used for evaluating compression are the peak signal-to-noise ratio (PSNR), compression ratio (CR), mean square error (MSE), and bits per pixel (bpp). PSNR and compression ratio are useful for assessing compression and data transmission, while the mean square error is useful for visualizing the error. PSNR estimates the quality of a reconstructed image compared with the original image; the basic idea is to compute a single number that reflects the quality of the compressed image. Conventional PSNR measures may not agree with human subjective perception, and several research groups are working on perceptual measures, but PSNR remains widely used because it is easier to compute. Note also that a higher value of such measures does not always mean better quality.

The mean square error (MSE) of the reconstructed image is computed as follows:

$$ \mathrm{MSE}=\frac{1}{n}\sum \limits_{i=1}^n{\left({y}_i-{\tilde{y}}_i\right)}^2 $$
(1)

where the sum in Eq. (1) runs over all n pixels of the image, y_i is the original pixel value, and ỹ_i is the reconstructed pixel value. The PSNR relates the MSE to the maximum amplitude of the original image. PSNR is measured in decibels and is defined as

$$ \mathrm{PSNR}=10{\log}_{10}\left[\frac{\max {\left(r\left(x,y\right)\right)}^2}{\frac{1}{n_x{n}_y}\sum_{x=0}^{n_x-1}\sum_{y=0}^{n_y-1}{\left[r\left(x,y\right)-t\left(x,y\right)\right]}^2}\right] $$
(2)

where in Eq. (2) r(x, y) is the original image, t(x, y) is the reconstructed image, n_x and n_y are the image dimensions, and the maximum possible intensity is 255 for an 8-bit grayscale image. In image compression, acceptable PSNR values lie between 30 dB and 50 dB; higher is better [11].
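To make Eqs. (1) and (2) concrete, the following minimal sketch computes MSE and PSNR for an 8-bit grayscale image and its reconstruction using NumPy; the random test arrays are purely illustrative.

```python
import numpy as np

def mse(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Mean square error over all pixels, Eq. (1)."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB, Eq. (2), for an 8-bit image."""
    err = mse(original, reconstructed)
    if err == 0:
        return float("inf")             # identical images
    return 10.0 * np.log10(peak ** 2 / err)

# Illustrative usage with a random "image" and a slightly perturbed reconstruction.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(512, 512), dtype=np.uint8)
rec = np.clip(img.astype(np.int16) + rng.integers(-3, 4, img.shape), 0, 255).astype(np.uint8)
print(f"MSE = {mse(img, rec):.2f}, PSNR = {psnr(img, rec):.2f} dB")
```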

1.8 Lossless Compression

In lossless compression techniques, the reconstructed image after compression is exactly equal to the original image. Lossless compression is generally obtained by coding techniques. Entropy coding encodes the actual sequence of symbols with the fewest bits needed to represent them, using the probabilities of the symbols; compression is achieved by assigning variable-size codes to symbols, with shorter codewords given to the more probable symbols. Huffman coding and arithmetic coding are the best-known entropy coding methods, and lossless compression schemes are commonly realized using them. Huffman coding is the most widely chosen prefix code. It assigns a set of prefix codes to symbols based on their probabilities: symbols that occur more frequently receive shorter codewords than symbols that occur less frequently, and two symbols having codewords of the same maximum length occur only rarely. Huffman coding is inefficient when the alphabet size is small and the symbol probabilities are highly skewed. Arithmetic coding is more efficient when the alphabet size is small or the symbol probabilities are very skewed. Generating codewords for sequences of symbols is more efficient than generating a separate codeword for each symbol in a string; a single arithmetic code can be generated for a particular sequence without generating codewords for all sequences of that length, which is generally impossible with Huffman codes. One tag value is assigned to a block of symbols, and this value is uniquely decodable. Arithmetic coding therefore gives higher compression ratios than Huffman coding. Run-length encoding is the simplest compression technique; it is beneficial when the data to be compressed contain long runs of repeated characters or symbols [12].
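As a simple illustration of the run-length idea mentioned above, the sketch below encodes a byte string as (value, run length) pairs and decodes it back; it is a minimal demonstration under assumed toy data, not the coder used in this work.

```python
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    """Encode a byte string as (value, run_length) pairs."""
    runs = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1                      # extend the current run
        runs.append((data[i], j - i))
        i = j
    return runs

def rle_decode(runs) -> bytes:
    """Expand (value, run_length) pairs back into the original bytes."""
    return bytes(value for value, count in runs for _ in range(count))

sample = b"AAAAABBBCCCCCCCCD"
encoded = rle_encode(sample)
assert rle_decode(encoded) == sample
print(encoded)   # [(65, 5), (66, 3), (67, 8), (68, 1)]
```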

1.9 Lossy Compression

Lossy compression can be implemented by transform and encoding methods: the transform decomposes the image, and the encoder removes the repeated data. This approach gives a high amount of compression.

1.10 Medical Image Compression

Several researchers have demonstrated novel advances in the field of medical image compression in both the lossless and lossy categories [13]. Lossless compression can achieve a compression ratio of about 3:1 while restoring the image without any loss of information. Because digital medical images occupy a large amount of storage space, most research is focused on lossy compression, which removes irrelevant information while preserving all relevant and essential image information. These methods involve different levels of wavelet decomposition: a higher level of decomposition captures more detail and can yield larger compression ratios, but it leads to high computational complexity and results in energy loss. The energy retained is greater when the image is decomposed to fewer levels, but the compression achieved is smaller [14, 15].

As summarized in Table 1, the development of medical image compression algorithms has concentrated mainly on space reduction and not much on the characterization of the images after compression [16].

Table 1 Image compression algorithms

2 Wavelet Transform

The wavelet transform plays an important role in image compression because it decomposes the image into subbands. Many wavelet families are available, such as the Haar, Daubechies, Symlet, Coiflet, and biorthogonal wavelets, and a suitable wavelet can be selected for a particular application based on its characteristics. In this work, the wavelet transform is used to decompose the images. The simplest wavelet is the Haar wavelet, the first member of the Daubechies family, defined as follows [17,18,19]:

$$ \psi (t)=\begin{cases}1 & 0\le t<1/2,\\ -1 & 1/2\le t<1,\\ 0 & \mathrm{otherwise}.\end{cases} $$
(3)

Its scaling function ϕ(t) can be described as

$$ \phi (t)=\begin{cases}1 & 0\le t<1,\\ 0 & \mathrm{otherwise}.\end{cases} $$
(4)
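A minimal sketch of the Haar analysis step is shown below, assuming the PyWavelets library (pywt) is available; it applies a single-level 2-D Haar DWT to a small block and verifies that the inverse transform restores it exactly.

```python
import numpy as np
import pywt

# Single-level 2-D Haar decomposition of a small test block.
block = np.arange(16, dtype=np.float64).reshape(4, 4)
LL, (LH, HL, HH) = pywt.dwt2(block, "haar")    # approximation + 3 detail subbands

# The transform is reversible: the inverse DWT restores the block exactly.
restored = pywt.idwt2((LL, (LH, HL, HH)), "haar")
print(np.allclose(restored, block))            # True
print(LL)                                      # low-frequency (averaged) content
```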

2.1 Wavelet Transform-Based Compression

In lossy compression, the reconstructed image after compression is an approximation of the original image. A lossy compression method is termed visually lossless when the loss of information caused by the compression is imperceptible to an observer. Lossy compression is often classified into two categories, namely spatial-domain techniques and transform-domain techniques. In spatial-domain techniques, the image pixels are operated on directly, whereas in transform-domain techniques the image pixels are converted into a new set of values, the transform coefficients, for further processing. Predictive coding is a well-known spatial-domain technique that works directly on the image pixels. Transform coding is the most widely used technique in lossy compression: an image is compressed by converting the correlated pixels into a new representation (the transform domain) in which they are decorrelated. The transform coefficients are independent of one another, and most of the energy is packed into a few coefficients. The transform coefficients are quantized to reduce the number of bits needed for the image, and the quantized nonzero coefficients are then encoded; quantization is a many-to-one mapping. The quantized coefficients are further compressed using entropy coding techniques, giving a flexible and better overall compression. The transform used in the transform domain is a linear transform, which provides a more effective and direct method of compression. Lossy compression thus consists of three parts. The first part is a transform that reduces the inter-pixel redundancy of the image. A quantizer is then applied to eliminate psycho-visual redundancy and represent the data with fewer bits. The quantized values are finally encoded efficiently to gain further compression from the coding redundancy. In lossy compression, the loss of information is due to the quantization of the transform coefficients. Quantization can be regarded as the process of partitioning the coefficient range into a number of intervals and representing each interval with a single value; a quantizer substantially reduces the number of bits needed to store the transformed coefficients by reducing the precision of these values. Scalar quantization (SQ) is performed on each individual coefficient, and vector quantization (VQ) on a group of coefficients. Many researchers have attempted to improve compression schemes using sophisticated vector quantization, but set partitioning in hierarchical trees achieved better results using uniform scalar quantization. In this compression, uniform scalar quantization is therefore used to improve compression efficiency [20, 21].
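The three-stage transform–quantize–encode pipeline described above can be sketched as follows. This is an illustrative simplification, assuming pywt, a single decomposition level, a hypothetical uniform quantization step of 16, and a simple nonzero count standing in for a real entropy coder.

```python
import numpy as np
import pywt

def lossy_pipeline(image: np.ndarray, wavelet: str = "bior1.1", step: float = 16.0):
    """Transform -> uniform scalar quantization -> dequantize -> inverse transform."""
    # 1. Transform: decorrelate pixels into subband coefficients.
    coeffs = pywt.wavedec2(image.astype(np.float64), wavelet, level=1)
    arr, slices = pywt.coeffs_to_array(coeffs)

    # 2. Uniform scalar quantization (the only lossy step).
    q = np.round(arr / step).astype(np.int32)

    # 3. A real coder would entropy-code q; here we only note its sparsity.
    nonzero = int(np.count_nonzero(q))

    # Decoder side: dequantize and invert the transform.
    rec_coeffs = pywt.array_to_coeffs(q.astype(np.float64) * step, slices,
                                      output_format="wavedec2")
    reconstructed = pywt.waverec2(rec_coeffs, wavelet)
    return reconstructed[:image.shape[0], :image.shape[1]], nonzero

# Illustrative usage on a synthetic gradient "image".
img = np.tile(np.arange(256, dtype=np.float64), (256, 1))
rec, nonzero = lossy_pipeline(img)
print("nonzero quantized coefficients:", nonzero)
print("max reconstruction error:", np.max(np.abs(rec - img)))
```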

2.2 Significance of Wavelet Analysis

It should be emphasized that the Fourier transform involves averaging of the signal over time, which results in a loss of detailed temporal information, and it has a fixed resolution at all frequencies. In contrast, wavelet analysis transforms an image from the spatial domain into a frequency domain with different resolutions at different frequencies; in other words, it provides a multi-resolution approach to image analysis. In the wavelet-based approach, the higher the signal frequency, the finer the resolution, and vice versa. The wavelet approach yields a time-scale decomposition of the signal using a translation (time) parameter and a scale parameter. There are two approaches: the continuous wavelet transform (CWT) and the discrete wavelet transform (DWT). In both the CWT and the DWT the translation parameter is discrete, whereas the scale parameter is allowed to vary continuously in the CWT [22] but is discrete in the DWT. To overcome these limitations in compression, several approaches based on time-frequency localization have been proposed, such as envelope analysis, the Gabor windowed Fourier transform (GWFT), and wavelet analysis. The Fourier transform (FT) is widely used in image processing, but since it does not provide time localization it is rarely suitable for non-stationary processes and is less useful for analyzing non-stationary data, where there is no repetition within the sampled region [23]. Likewise, one of the limitations of the fast Fourier transform (FFT) in image analysis is the absence of temporal information. The short-time Fourier transform (STFT) localizes time by a moving time window, but its fixed window width limits the high-frequency range. Wavelet transforms allow the components of a non-stationary signal to be analyzed, allow filters to be constructed for both stationary and non-stationary signals, and use a window whose bandwidth varies in proportion to the frequency of the wavelet. The wavelet transform breaks the image down into different scales in the time domain, while the Fourier transform represents an image as the sum of sinusoidal functions of single frequencies. The wavelet transform extracts image features and non-stationary disturbance features over the whole spectrum without requiring a dominant waveband. A set of wavelets defines a basis from which an orthogonal decomposition of the original image can be made, analogous to Fourier analysis. Orthogonal wavelet transforms capture independent information: when a full decomposition of the image is performed, the number of wavelet coefficients equals the number of pixels of the original image, and the coefficients can be recombined to reconstruct the original image. It is therefore believed that wavelet analysis can play a major role in compression research and in diagnostic tools. The wavelet transform is a widely adopted method for compression; the basic compression scheme is implemented in the following order: decorrelation, quantization, and encoding. The DCT and DWT are well-known transforms used to decorrelate the pixels. The wavelet transform decomposes the image into different frequency subbands, namely lower-frequency subbands and higher-frequency subbands, by which the smooth variations and the details of the image can be separated. Most of the energy is compacted into the lower-frequency subbands.
The majority of the coefficients in the higher-frequency subbands are small or zero, tend to be clustered, and are located in the same relative spatial positions within the subbands. Hence compression methods based on wavelet transforms are effective in providing high rates of compression while maintaining good image quality, and they are superior to DCT-based techniques. In the DCT, most of the energy is compacted into the lower-frequency coefficients before quantization [23]; most of the higher-frequency coefficients become small or zero and tend to be clustered. The DCT is performed on 8 × 8 non-overlapping blocks, and the DCT coefficients of each block in the image are quantized; at higher compression ratios, however, blocking artifacts become visible with the JPEG method. Each level of wavelet decomposition produces low-frequency components (the approximation subband LL) and high-frequency components (the three detail subbands LH, HL, and HH) using low-pass and high-pass filters (hL(k) and hH(k), respectively). The LL subband can be further decomposed for the next level of decomposition. As the level of decomposition increases, the finer details are captured more effectively. The image details are packed into a small number of coefficients, which are reduced further by quantization; the error or loss of information is due to the quantization step. This results in a reduction of the bits with different probabilities and entropy. Figures 2 and 3 show the original image and the 1-level wavelet decomposition of a grayscale CT lung image of size 512 × 512 [24, 25].

Fig. 2 Original image (512 × 512)

Fig. 3 DWT-based decomposed image (level 1)
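The claim that most of the energy is packed into the low-frequency subband can be checked numerically. The sketch below, assuming pywt and a grayscale image array, reports the fraction of total coefficient energy held by each subband after a 1-level DWT; the synthetic test image is illustrative, and a CT slice loaded as a 2-D array would be used the same way.

```python
import numpy as np
import pywt

def subband_energy(img: np.ndarray, wavelet: str = "haar"):
    """Fraction of total coefficient energy in each 1-level subband."""
    LL, (LH, HL, HH) = pywt.dwt2(img.astype(np.float64), wavelet)
    bands = {"LL": LL, "LH": LH, "HL": HL, "HH": HH}
    total = sum(np.sum(b ** 2) for b in bands.values())
    return {name: float(np.sum(b ** 2) / total) for name, b in bands.items()}

# Illustrative smooth test image.
x = np.linspace(0, 1, 512)
img = np.outer(x, x) * 255.0
print(subband_energy(img))   # LL carries nearly all of the energy
```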

2.3 Selection of Decomposition Level Based on Quality of Image Compression

In practice, it is important to choose an appropriate number of decomposition levels based on the nature of the image or on a suitable criterion. In this study, the maximum value of the compression quality has been considered as the criterion for selecting the decomposition level. The application of the wavelet transform to analyze or break down the image is called decomposition. Wavelets use two types of filters: the wavelet transform is computed by recursively averaging and differencing coefficients through a filter bank, which consists of a low-pass filter (lpf) followed by a high-pass filter (hpf). The output of each filter is downsampled by two, and each of the two resulting output images can be further transformed [26, 27]. This process can be repeated recursively several times, leading to a tree structure called the decomposition tree. Wavelet decomposition produces a family of hierarchically organized decompositions: it breaks the image down into a hierarchical set of approximations and details, where each level in the hierarchy corresponds to a dyadic scale. The choice of a suitable depth of the hierarchy depends on the image. At each level j, an approximation image Aj and detail images Dj are produced. Figure 4 presents a graphical representation of this hierarchical three-level decomposition [29, 30].

Fig. 4 Graphical representation of three-level decomposition
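A sketch of the level-selection criterion described above is given below: for each candidate decomposition level the image is decomposed, coarsely quantized, reconstructed, and the PSNR recorded, and the level giving the highest PSNR is kept. The wavelet name, quantization step, and synthetic test image are illustrative assumptions, not the exact settings of this work.

```python
import numpy as np
import pywt

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 255.0) -> float:
    err = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)

def best_level(img: np.ndarray, wavelet: str = "bior1.1",
               max_level: int = 4, step: float = 16.0):
    """Decompose/quantize/reconstruct at each level and keep the best PSNR."""
    results = {}
    for level in range(1, max_level + 1):
        coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        arr = np.round(arr / step) * step                  # uniform scalar quantization
        rec = pywt.waverec2(
            pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), wavelet)
        results[level] = psnr(img, rec[:img.shape[0], :img.shape[1]])
    best = max(results, key=results.get)
    return best, results

# Illustrative usage with a synthetic image.
img = np.tile(np.arange(256, dtype=np.float64), (256, 1))
print(best_level(img))
```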

In wavelet-based image coding, various types of orthogonal and biorthogonal filters have been designed by researchers for compression. The choice of wavelet filter plays a crucial role in achieving efficient compression performance, since no single filter performs best for all types of images. The Haar wavelet is not suitable for compression because of its discontinuity, and it yields the worst compression performance. The Daubechies wavelet is a continuous, orthogonal, compactly supported wavelet, but it is not symmetric. The prevailing compression methods use biorthogonal wavelets rather than orthogonal ones. The Daubechies, Symlet, and Coiflet filters have the distinctive properties of greater energy conservation, more vanishing moments, regularity, and asymmetry compared with biorthogonal filters. The second-order wavelets were chosen as mother wavelets because of their advantages in capturing the local behavior of two-dimensional images. The biorthogonal wavelet of order 1.1, the Symlet wavelet of order 2, and the Coiflet wavelet of order 2 are chosen as the mother wavelets for compression. The wavelet transform is employed in compression to decompose the images into low-frequency and high-frequency coefficients. The various mother wavelets, namely the Symlet, Coiflet, and biorthogonal wavelets, are utilized in this work; their compression effectiveness is evaluated, and the optimum mother wavelet is chosen from the results.
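The filter properties discussed above (orthogonality, symmetry, vanishing moments, filter length) can be inspected directly. The sketch below prints them for the Haar, bior1.1, sym2, and coif2 wavelets mentioned in this work; the attribute names follow the PyWavelets API, which is assumed to be available.

```python
import pywt

for name in ["haar", "bior1.1", "sym2", "coif2"]:
    w = pywt.Wavelet(name)
    print(f"{name:8s} orthogonal={w.orthogonal} biorthogonal={w.biorthogonal} "
          f"symmetry={w.symmetry} vanishing_moments_psi={w.vanishing_moments_psi} "
          f"filter_length={w.dec_len}")
```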

3 Encoding

3.1 Encoding the Images

Encoding is applied to the images after they have been decomposed by the wavelet transform. The following encoding methods are applied, and their performance is measured.

3.2 Types of Encoding

3.2.1 Embedded Zerotree Wavelet (EZW)

The EZW coder is a simple yet remarkably effective image compression algorithm, having the property that the bits in the bit stream are generated in order of significance, yielding a fully embedded code. The embedded code represents a sequence of binary decisions that distinguish an image from the "null" image. The algorithm applies a spatial orientation tree structure, from which significant coefficients can be extracted in the wavelet domain. The EZW encoder does not actually compress the image itself; it orders the wavelet coefficients so that they can be compressed in the most efficient way.

3.2.2 SPIHT

This algorithm adopts a spatial orientation tree structure, from which significant coefficients can be extracted in the wavelet domain.

The SPIHT algorithm is unique in that it does not directly transmit the contents of the sets, the pixel values, or the pixel coordinates.

The SPIHT coder consists of a sequence of sorting and refinement passes applied with decreasing magnitude thresholds. In the sorting pass, coefficients whose magnitudes are greater than or equal to the current threshold are labeled as significant, and insignificant otherwise. When a coefficient is first labeled as significant, its sign is immediately output: if the sign is positive, the SPIHT coder outputs "1"; otherwise it transmits "0" to the bit stream. When the insignificant nodes are coded, the SPIHT coder scans the coefficients in a fixed order, which saves a large number of bits by partitioning the nodes into subsets that contain many insignificant coefficients for the current magnitude threshold. After all the coefficients have been scanned in the sorting pass, the SPIHT coder performs the refinement pass and halves the quantization threshold for the next pass, until the magnitude threshold equals 0.
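The sorting-pass structure can be illustrated in a highly simplified form. The sketch below shows only the threshold-halving significance test shared by the EZW/SPIHT family, with no set partitioning, spatial orientation trees, or refinement bits, so it is not the full SPIHT algorithm; the Laplacian-distributed test coefficients merely stand in for real DWT coefficients.

```python
import numpy as np

def significance_passes(coeffs: np.ndarray, num_passes: int = 6):
    """Count newly significant coefficients at each halved threshold."""
    mags = np.abs(coeffs.ravel())
    # Initial threshold: largest power of two not exceeding the maximum magnitude.
    T = 2 ** int(np.floor(np.log2(mags.max())))
    already = np.zeros(mags.shape, dtype=bool)
    history = []
    for _ in range(num_passes):
        newly = (mags >= T) & ~already          # sorting pass: significance test
        history.append((T, int(newly.sum())))   # refinement of old ones omitted here
        already |= newly
        T /= 2                                  # halve the threshold for the next pass
    return history

rng = np.random.default_rng(1)
fake_coeffs = rng.laplace(scale=8.0, size=(64, 64))   # stand-in for DWT coefficients
for T, count in significance_passes(fake_coeffs):
    print(f"threshold {T:6.2f}: {count} coefficients become significant")
```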

3.2.3 Spatial Orientation Tree Wavelet

The STW coder is similar to SPIHT but differs in how it encodes the zerotree information. The locations of the transformed values undergo state transitions from one threshold to the next.

3.2.4 Wavelet Difference Reduction

The WDR method gives a lower PSNR. The SPIHT encoding method codes the individual bits of the wavelet transform coefficients in a bit-plane sequence after decomposing the image; it is therefore capable of achieving higher compression at higher decomposition levels compared with the other encoding methods.

4 Lossless Compression

In lossless compression techniques, there is no loss of information in the reconstructed image after compression, so high quality is achieved. The compression is performed purely by encoding methods, and the amount of compression is smaller than that of lossy methods.

Lossless image compression techniques can be implemented using Huffman coding and arithmetic coding.

4.1 Huffman Coding

Huffman coding is the most widely chosen prefix coding technique. It allocates a set of prefix codes to symbols based on their probabilities: symbols that occur more frequently receive shorter codewords than symbols that occur less frequently, and two symbols having codewords of the same maximum length rarely occur. Huffman coding is inefficient when the alphabet size is small and the probability distribution of the symbols is highly skewed.
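A minimal Huffman code construction is sketched below using Python's heapq; it builds prefix codewords from symbol frequencies and is intended only to illustrate that frequent symbols receive shorter codes, with the message chosen arbitrarily.

```python
import heapq
from collections import Counter

def huffman_codes(data: str) -> dict[str, str]:
    """Build prefix codewords; more frequent symbols get shorter codes."""
    freqs = Counter(data)
    if len(freqs) == 1:                      # degenerate one-symbol alphabet
        return {next(iter(freqs)): "0"}
    # Heap entries: [frequency, tie-breaker, {symbol: codeword-so-far}]
    heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)             # two lightest subtrees
        hi = heapq.heappop(heap)
        for s in lo[2]:
            lo[2][s] = "0" + lo[2][s]        # prepend a bit for each subtree
        for s in hi[2]:
            hi[2][s] = "1" + hi[2][s]
        heapq.heappush(heap, [lo[0] + hi[0], counter, {**lo[2], **hi[2]}])
        counter += 1
    return heap[0][2]

message = "AAAAABBBCCD"
codes = huffman_codes(message)
encoded = "".join(codes[ch] for ch in message)
print(codes)                                  # 'A' gets the shortest codeword
print(len(encoded), "bits vs", 8 * len(message), "bits uncoded")
```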

4.2 Arithmetic Coding

Arithmetic coding is more efficient when the alphabet size is small or the symbol probabilities are highly skewed. Generating codewords for sequences of symbols is more efficient than generating a separate codeword for each symbol in a sequence: a single arithmetic code can be obtained for a particular sequence without generating codewords for all sequences of that length, which is generally not possible with Huffman codes. One tag value is assigned to a block of symbols, and this value is uniquely decodable. Arithmetic coding is thus a form of variable-length entropy encoding: when a string is converted to an arithmetic code, frequently used characters are stored with fewer bits. Arithmetic coding encodes the entire message into a single number, a fraction n where 0.0 ≤ n < 1.0.
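The idea of mapping a whole message to a single fraction in [0, 1) can be sketched with floating-point arithmetic. This toy encoder is only adequate for short strings (practical coders use integer renormalization), and the symbol probabilities are simply estimated from the message itself.

```python
from collections import Counter

def arithmetic_encode(message: str) -> float:
    """Map a short message to one fraction n with 0.0 <= n < 1.0 (toy float version)."""
    freqs = Counter(message)
    total = len(message)
    # Cumulative probability interval [low, high) for each symbol.
    cum, intervals = 0.0, {}
    for sym, f in sorted(freqs.items()):
        p = f / total
        intervals[sym] = (cum, cum + p)
        cum += p
    low, high = 0.0, 1.0
    for sym in message:
        s_low, s_high = intervals[sym]
        span = high - low
        high = low + span * s_high            # narrow the interval to the
        low = low + span * s_low              # subinterval of this symbol
    return (low + high) / 2                   # any number inside [low, high) works

print(arithmetic_encode("ABBA"))              # a single fraction encodes the message
```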

Figures 5a–d and 6a–d show the compressed images of a cancer-affected CT lung sagittal view using the biorthogonal wavelet. The numerical results in Table 3.15 indicate that SPIHT yields better results compared with the other compression methods.

Fig. 5 (a–d) Compressed images of CT cancer lung sagittal view using bio-orthogonal wavelet with EZW

Fig. 6 (a) Compressed image of CT cancer lung sagittal view using bio-orthogonal wavelet with SPIHT (CR = 85.24%, PSNR = 41.84 dB), (b) compressed image using bio-orthogonal with SPIHT (CR = 19.26%, PSNR = 44.24 dB), (c) compressed image using bio-orthogonal with SPIHT (CR = 5.9%, PSNR = 36.95 dB), (d) compressed image using bio-orthogonal with SPIHT (CR = 1.71%, PSNR = 29.10 dB)

From the table, it can be inferred that the PSNR is affected by a large margin as the CR increases. The choice of wavelet plays an important part in achieving efficient compression performance because no filter performs best for all images. The main objective of this work is to achieve a high compression ratio, which is obtained at a higher level of decomposition; however, the number of filter banks used is greater at higher decomposition levels, and some information is lost.

5 Conclusion

In the lossy compression experiments, the decomposition levels and vanishing moments were varied across the different compression algorithms. It was observed that all the mother wavelets performed well at the first level of decomposition, irrespective of their type and the image format. Increasing the decomposition level produced a lower PSNR value and a higher compression ratio, irrespective of the wavelet type used for compression. The bits per pixel follow the same trend as the PSNR value, since the two are related. The minimum errors were obtained at decomposition level one.