1 Introduction

Compressing digital images is considered one of the most important research fields in image processing because of the benefits it yields [1]: digital image compression reduces the size of images in their many forms [10, 12]. Because of rapid technological development, captured images have become very large [41], so it has become necessary to search for new compression techniques capable of compressing these images in order to provide enough storage space and to facilitate their transfer and transmission via the Internet [2, 35].

There are many techniques used to compress images [26, 27]. However, there is a need to search for new methods that provide higher compression rates commensurate with the type of images and the purpose for which they are used [46], since the compression rate changes from one technique to another and from one image to another, depending on that purpose [17]. Accordingly, compression techniques for preserving digital image data fall into two classes [11, 13, 47].

The first type is lossless compression: no data is lost from the images after decompression, so it is usually used when no data may be discarded during transmission operations [14, 43]. However, the compression rate of this type is meager. The second type is lossy compression: unlike lossless techniques, some data is lost when decompressing [19], but far more storage space is saved than with the lossless type [44]. Therefore, lossy compression is often used in digital cameras to save storage space, a need driven by the enormous technological developments and advancements in imaging [38,39,40]. The goal of every type of compression is to reduce the images’ size as far as possible without distorting the images significantly, while maintaining image quality in terms of accuracy and efficiency. Hence, there is high demand for more ways and methods to compress images.

This paper addresses one of the most critical fields in the modern world. Digital images are produced in massive quantities because almost everyone has a smartphone or a digital camera, and everyone wants to share or display these images. Image quality is rising daily, led by the unprecedented race between smartphone manufacturers to provide the most distinctive and accurate images. High-quality cameras that deliver near-perfect images created the need for a compression method that helps transfer such images quickly, without much storage and with the best possible quality. The proposed digital image compression method is based on dividing the image into blocks of different sizes: two-by-two, four-by-four, and eight-by-eight. The blocks then pass through several stages of simple mathematical operations applied to the values in each block, after which the Huffman coding technique compresses the result while preserving the image quality as far as possible. The main goal of the proposed method is therefore to achieve a high compression rate by taking advantage of lossy compression while maintaining good quality through the lossless stage, i.e. benefiting from the best of both types. This paper also tested how the compression rate is affected when a new way of dividing the image into blocks is implemented, thus deciding whether the proposed method is efficient and effective. The proposed method obtained better-compressed images than the comparative methods when two-by-two blocks were used.

The contributions of the paper are given as follows.

  • Develop a new near-lossless digital image compression technique.

  • Improve the compression rate along with the quality using a new technique.

  • Validate and compare the performance of the developed techniques with previously available ones.

The rest of this paper is organized as follows. Section 2 presents the related works that have been published in the literature. Section 3 presents the preliminaries of the basic methods used. Section 4 describes the proposed image compression method. Section 5 presents the experimental results and discussion. Finally, the conclusions and future work directions are given in Section 6.

2 Related works

In this section, a review of related and similar previous works is given.

A new practical learned lossless image compression system, L3C, is proposed in [28]; it outperforms the popular engineered codecs PNG, WebP, and JPEG2000. The system models the image distribution jointly with learned auxiliary representations in RGB space, and it requires only three forward passes to predict all pixel probabilities instead of one pass per pixel. As a result, when L3C is compared with the fastest PixelCNN variant (MultiscalePixelCNN), L3C gives a speedup of two orders of magnitude, and the learned auxiliary representation proves crucial, outperforming predefined representations such as an RGB pyramid.

The paper in [5] presented a technique that processes digital images before any lossless compression method is executed in order to improve the achieved compression ratio. The method divides the image into 2 × 2 blocks and subtracts the minimum value from the pixels of each row and column. The efficiency of the technique has been demonstrated on images of different sizes and types. Furthermore, the decompressed images were evaluated using compression measures such as mean squared error (MSE), mean absolute error (MAE), and peak signal-to-noise ratio (PSNR).

The authors of [8] created an end-to-end trainable model for image compression based on variational auto-encoders. The model integrates a hyperprior to capture spatial dependencies in the latent representation. This hyperprior relates to side information, a notion that applies to all modern image codecs but is largely unexplored in image compression using artificial neural networks (ANNs); the model trains a complex prior jointly with the underlying auto-encoder. It therefore leads to state-of-the-art image compression when visual quality is measured using the popular multi-scale structural similarity index, and yields rate-distortion performance surpassing known ANN-based methods when evaluated with a more traditional metric based on squared error.

[42] proposed a new, simple algorithm for optimizing auto-encoders for lossy image compression that is more flexible than existing codecs. The end-to-end trained architecture was demonstrated to achieve competitive performance on high-resolution images. Furthermore, the approach offers an effective way of dealing with non-differentiability when training auto-encoders for lossy compression and performs better than JPEG 2000 in terms of the structural similarity index and mean opinion scores.

The aim of [30] is to introduce a novel lossy technique called RIFD for image compression. The technique exploits the redundancy and similarity between neighboring pixels by rounding the pixels’ intensities followed by a dividing process, which decreases the range of the intensities and increases their redundancy. The algorithm can be applied alone or followed by any lossless compression algorithm; thus, RIFD is useful for natural images of high bit depths as well as colored images, and it shows excellent performance when the Huffman algorithm follows RIFD.

The paper [21] presented a discussion of three digital image compression techniques: Huffman coding, DWT, and fractal coding. The images resulting from the three techniques were compared against each other in terms of compression ratio and PSNR. The comparison shows that fractal coding is an excellent compression technique with respect to both measures. The same comparison can be made with other methods, such as neural networks and fuzzy logic.

A hybrid of DCT and fractal image compression techniques was proposed in [33] and implemented using Matlab. The proposed hybrid coding scheme was evaluated using color images, and the given image is encoded by means of the Huffman encoding technique. The results show the effectiveness of the proposed scheme in compressing color images: when matched against JPEG at image quality levels 14, 12, 10, 5, and 3, the authors concluded that the proposed technique successfully compressed the images with high PSNR values.

The authors of [34] produced an approach for recognizing two-dimensional objects by combining digital image processing and geometric logic to recognize breast cancer cells in a given tissue sample. The method converts the three-dimensional RGB image into a two-dimensional black-and-white image, applies color pixel classification for object-background separation, calculates object metrics using area-based filtering, and uses a bounding box. The algorithm was developed and simulated using MATLAB. The results were 99% accurate using a set of 180 images of the three primary colors (red, green, and blue), and four basic 2D geometric shapes were used for analysis.

[7] present a multi-scale data-compression and non-separable 2D error-control algorithm based on Harten’s interpolatory framework for multiresolution, which gives a specific estimate of the precise error between the original and decoded images. The proposed algorithm does not rely on a tensor-product strategy to compress two-dimensional images. After compressing the data with this non-separable multi-scale transformation, the user obtains the exact values of the RMSE and PSNR before the decoding process occurs. As a result, the proposed algorithm can be used to obtain lossless and near-lossless image compression.

A simple and effective method to compress images is proposed in [37]. The method succeeded in reducing the size of images while keeping their quality. It is based on the Discrete Wavelet Transform (DWT), which underlies the immensely popular JPEG 2000 standard and is used to transform the original image. After quantization and thresholding of the DWT coefficients, run-length coding and Huffman coding schemes are used to encode the image. The run-length encoder provides a lossless representation of the data with a reduced number of bits, and the Huffman encoder makes the compressed data ready for transmission in the form of a bit stream. Finally, many beneficial studies on images can be found in [1, 6, 16, 18, 25].

3 Preliminaries

This section covers the main concepts of digital image processing and digital image compression, together with an explanation of some measures of digital image compression used in the rest of the paper.

3.1 Digital image processing

Most people worldwide are visual beings; they rely on their vision to gather and process information. Some people say they will not believe something until they see it, and as the famous proverb says, “a picture is worth a thousand words”. The demand for images has therefore grown dramatically, and with the massive surge of new, cheap technologies for capturing pictures, the need for processing such images became a necessity [29].

Digital image processing is defined as the science of applying a group of techniques and algorithms to a digital image in order to process, analyze, enhance, extract information from, or optimize image features such as sharpness and contrast, using a digital computer [36]. This science started back in the 1960s, when the main goal was to enhance the quality of images that were captured in poor condition, such as images collected of the moon: these were corrected, after calculating the sun’s position at the time the image was taken, to adjust lighting and other features. Such processing was costly and time-consuming. In the 1970s all of that changed thanks to the production of cheaper, more specialized hardware, and processing has only become cheaper, faster, and more widely available since, as all personal smart devices nowadays have the capability to modify and process images [45]. The process of image processing generally has three steps:

  • Importing the image through image acquisition tools;

  • Manipulating and analyzing the acquired image;

  • The result is an altered image.

An image is defined as a two-dimensional function f(x, y), where x and y are the spatial (plane) coordinates. The amplitude of f at any pair of coordinates (x, y) is called the intensity of the image at that point; if x, y, and the amplitude values of f are finite and discrete quantities, we call the image a digital image. A digital image is composed of a finite number of elements called pixels, each of which has a particular location and value. The accuracy of an image and its number of pixels go hand in hand: when one goes up, so does the other. In other words, a digital image is a 2-D array of pixels. The following figure is an example of a digital image array (matrix) (Fig. 1).

Fig. 1
figure 1

2-D Image and matrix
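As a minimal illustration of this definition (a sketch assuming NumPy is available; the values below are arbitrary and not taken from Fig. 1), a grayscale digital image is just a 2-D array of intensities:

```python
import numpy as np

# A toy 4x4 grayscale "image": each entry is the intensity f(x, y) of one pixel.
image = np.array([
    [ 12,  50,  50,  12],
    [ 50, 200, 200,  50],
    [ 50, 200, 200,  50],
    [ 12,  50,  50,  12],
], dtype=np.uint8)

print(image.shape)   # (4, 4): the image has 4 rows and 4 columns of pixels
print(image[1, 2])   # 200: intensity of the pixel in row 1, column 2
```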

There are three types of digital images [9].

1. Binary Images: Black and white images with a pixel value of 0 or 1.

2. Grayscale Images: black, white, and all the gray shades in between, with pixel values between 0 and 255; each pixel is represented by eight bits to express the gray scale.

3. Color Images: images with colors, where each pixel consists of three eight-bit parts representing the intensities of the red, green, and blue base colors. The source of digital images is the electromagnetic (EM) energy spectrum, which is divided into the bands shown in Fig. 2 below.

Fig. 2
figure 2

Electromagnetic energy spectrum

3.2 Digital image compression

Images are produced nowadays in incredibly high numbers and with ever larger size and resolution. With advanced technologies and trends such as sharing images and videos instantly, the need to reduce an image’s size has become crucial: images take up much storage, and the bigger the image, the slower it is transferred. Scientists therefore developed methods to reduce that size, namely image compression.

Digital Image Compression has multiple applications, which vary from image compression for personal use to compressing more essential images such as medical images. Digital Image Compression aims to save a lot of memory space and is therefore extensively used for compression of photos, technical drawings, medical imaging, artworks, maps, and others. Images condensed in size by Digital Image Compression can be quickly sent, uploaded, or downloaded in less time, making the sharing of images a lot easier and faster. When choosing a compression algorithm, the following must be considered [15]:

  1. Efficiency: one must use an algorithm that best suits the type of image to be compressed.

  2. Lossless or lossy: an algorithm is chosen based on how much loss of image quality is acceptable; for example, a medical image must be recovered without any data loss, so a lossless method would be used.

  3. Compression rate: a higher compression rate means higher efficiency.

  4. Complexity/time: an algorithm is chosen based on the application in which the image will be used or transmitted; some applications require low computational complexity and fast processing, while others do not.

There are multiple benefits gained from image compression [3]: transmitting an image costs less, because the cost is related to the time spent on transmission; computing power is reduced, and therefore saved, because the smaller an image is, the less power its transmission needs; transmission errors are fewer, because fewer bits are transferred; and encoding and compressing the image helps maintain a reliable level of transmission.

Compression techniques are given as the following general outline:

1. Identifying all similar colored pixels by the same color name, code, and the number of pixels; by doing this, one pixel can match hundreds or thousands of pixels within the image.

2. Create and represent the image using mathematical wavelets.

3. Image would be split into multiple parts, each distinguishable using a fractal. Figure 3 represents the general steps used in image compression [31].

Fig. 3
figure 3

Image compression steps

3.2.1 Lossless image compression

As the name suggests, lossless image compression does not lose any quality within the image; therefore, this type of compression is applied to essential types of images such as medical images, which require high quality and extreme accuracy. Image formats that use lossless image compression are RAW, BMP, GIF, and PNG. The following techniques are included in lossless compression [28, 32]:

  • Huffman encoding

Huffman coding is an algorithm based on the frequency of occurrence of the symbols in the file being compressed and on statistical coding, meaning that the probability of a symbol has a direct bearing on the length of its representation [22]. The more likely the occurrence of a symbol, the shorter its bit-level representation will be. Usually, specific characters are used more than others in any given file. Huffman compression is a variable-length coding system that allocates shorter codes to the most frequently used characters and longer codes to the least frequently used characters in order to reduce the size of the files being compressed and transferred.
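As a concrete illustration, the following is a minimal Huffman-coding sketch in Python (a standard textbook construction using heapq, not the exact coder used later in this paper): the more frequent a symbol, the shorter its code.

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table: frequent symbols get shorter bit strings."""
    freq = Counter(data)
    if len(freq) == 1:                       # degenerate case: one distinct symbol
        return {sym: "0" for sym in freq}
    # Each heap entry: (frequency, tie-breaker, {symbol: code-so-far})
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)      # two least frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

text = "AAAABBBCCD"
codes = huffman_codes(text)
encoded = "".join(codes[s] for s in text)
print(codes)                                 # 'A' (most frequent) gets the shortest code
print(len(encoded), "bits instead of", 8 * len(text))
```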

  • Run length encoding

Run-length encoding (RLE) is a simple type of data compression in which runs of data (that is, sequences in which the same data value appears in many successive data elements) are stored as a single data value and count instead of as the original run [4]. It is most useful on data that contains many such runs; on files that do not, it is of little benefit and may even increase the file size. It is therefore considered appropriate for palette-based images, which means it does not work well on continuous-tone images such as photographs.
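A minimal run-length encoding sketch (illustrative only, using Python's itertools.groupby) shows why RLE pays off on rows with long runs of identical pixels while remaining a lossless round trip:

```python
from itertools import groupby

def rle_encode(pixels):
    """Store each run as a (value, count) pair instead of repeating the value."""
    return [(value, len(list(run))) for value, run in groupby(pixels)]

def rle_decode(pairs):
    return [value for value, count in pairs for _ in range(count)]

row = [255, 255, 255, 255, 0, 0, 7]      # a run-heavy image row
pairs = rle_encode(row)
print(pairs)                              # [(255, 4), (0, 2), (7, 1)]
assert rle_decode(pairs) == row           # lossless round trip
```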

  • LZW coding

LZW coding is generally used for lossless text compression and was invented in [20]. It is considered very easy to use and is broadly applied for UNIX file compression; the method encodes a series of characters with unique codes using a table-based lookup algorithm. The first 256 8-bit codes, 0–255, are entered in the table as initial entries because an image contains pixel values from 0 to 255. The following codes, from 256 to 4095, are inserted at the bottom of the table. This algorithm works best for text compression and performs well on highly redundant data files, such as tabulated numbers and computer source code; on the other hand, it does not particularly suit other types of data.
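A compact LZW-encoder sketch is given below (illustrative only; the dictionary starts with the 256 single-byte codes as described above, and the 4095-code cap and the decoder are omitted for brevity):

```python
def lzw_encode(data: bytes):
    """Emit one code per longest dictionary match, growing the dictionary as we go."""
    table = {bytes([i]): i for i in range(256)}   # codes 0-255: the single bytes
    next_code = 256                               # newly learned sequences start at 256
    current = b""
    out = []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in table:
            current = candidate                   # keep extending the current match
        else:
            out.append(table[current])            # emit code for the longest match
            table[candidate] = next_code          # learn the new sequence
            next_code += 1
            current = bytes([byte])
    if current:
        out.append(table[current])
    return out

print(lzw_encode(b"ABABABAB"))   # repeated patterns collapse into few codes
```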

3.2.2 Lossy image compression

Lossy image compression removes some of the data from the image, which minimizes the image’s size. This slight loss is almost invisible and hard to recognize, but it comes at the expense of the image’s quality. With this technique the compression rate is higher than in lossless compression, meaning the resulting image is smaller, but the process cannot be reversed; it is therefore always recommended to keep a backup of the original image. Image formats that use lossy image compression are GIF and JPEG [23, 24].

4 Procedures and methodology

The Huffman technique is chosen in this paper since it is lossless: it maintains the image quality and accuracy. In the proposed approach, Huffman coding is applied after several pre-processing steps. In this section, the mechanism of this technique is explained in detail. In the next section, many different types of images are applied to the proposed technique, which we call the Near-Lossless Image Compression Technique; the same images are then compressed with the plain Huffman technique and the results are compared with each other. The compression rate (CR) is explained and discussed, and image quality is compared according to the following criteria: peak signal-to-noise ratio (PSNR) and mean squared error (MSE).

4.1 Set of benchmark images

There is a set of standard images for evaluating new algorithms; it can be obtained from the Signal and Image Processing Institute of the University of Southern California (“USC-SIPI Image Database”, 2018). Research results are generally compared in a quantitative or visual way, and there are many measuring tools for evaluating results in digital image science. There are many types of digital images, so called because they can be represented digitally; in this paper, two types of digital images were chosen:

  • The gray scale image

The grayscale image is the image with gradations between black and white. The matrix values of these images are between 0 and 255, where the value 0 represents black and the value 255 represents white; the remaining numbers are gray hues between dark gray and light gray.

  • The color image

The color image is composed of three matrices, each of which contains the hues of one channel; the colors of the three arrays are RGB (Red, Green, and Blue), and the combination of the three colors gives us the rest of the colors that we see on digital screens. Color digital images are commonly used on computer monitors.
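The two image types can be illustrated with a small sketch (assuming NumPy; the tiny 2 × 2 arrays are hypothetical examples): a grayscale image is a single matrix, while a color image stacks three matrices, one per RGB channel.

```python
import numpy as np

# Grayscale: one intensity matrix, 0 = black, 255 = white.
gray = np.array([[0, 128],
                 [200, 255]], dtype=np.uint8)

# Color: rows x columns x (R, G, B), i.e. three stacked matrices.
color = np.zeros((2, 2, 3), dtype=np.uint8)
color[0, 0] = (255, 0, 0)        # a pure red pixel
color[1, 1] = (255, 255, 255)    # white = full red, green and blue

print(gray.shape)    # (2, 2)    -> a single matrix
print(color.shape)   # (2, 2, 3) -> three matrices: red, green, blue
```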

4.2 Technologies used to compress digital images

Digital image compression is divided into two types, lossy and lossless, which differ in terms of compression ratio and amount of distortion. Each has its advantages: lossy compression achieves significant compression ratios but causes data loss, whereas lossless compression preserves the data and quality but achieves only small compression ratios.

4.2.1 Lossless part

The Huffman technique compresses images relatively little, but it does not affect the accuracy and quality of the image; it was therefore chosen in this proposal, with the aim of increasing the compression rate with minimal effect on image resolution and quality by dividing the image into blocks, which helps increase the amount of compression. The proposed technique is explained in detail below and tested on a variety of color and grayscale images, with the goal of increasing the compression rate while maintaining the image quality and accuracy as far as possible (Fig. 4).

Fig. 4
figure 4

Near-Lossless Image Compression Technique

4.2.2 Enhancement of Huffman by splitting the image into blocks

The main idea of this proposal is to take several steps before applying the Huffman technique in order to increase the amount of compression while maintaining image quality, as shown by the model in Fig. 5 and in Algorithm 1. Figure 5 is an illustration of the image components, where the value of each pixel shows the color gamut of the image.

Fig. 5
figure 5

Cropped portrait of Lena

Fig. 6
figure 6

Implementation steps for the Near-Lossless Image Compression Technique

figure a

4.2.3 Near-lossless image compression technique implementation

Figure 6(A) displays the original image data. Figure 6(B) shows the result after finding the lowest value in the block and subtracting it from all the values in (A). Figure 6(C) shows the data after subtracting one from the odd values in block (B). Figure 6(D) shows the data of (C) after dividing all the numbers by two. Figure 6(E) shows the output of applying Huffman coding to (D).
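A minimal Python/NumPy sketch of steps (A)-(D) is given below. It assumes, as indicated by the algorithm in Section 4.2.4, that the minimum of each block is kept in a side dictionary for reconstruction; the Huffman stage (E) and the block partitioning are omitted, and the code is an illustration rather than the authors' MATLAB implementation.

```python
import numpy as np

def preprocess_block(block):
    """Steps (A)-(D): subtract the block minimum, force even values, halve them.

    Returns the reduced block (to be Huffman-coded) and the minimum that must be
    stored in the side dictionary so the block can be reconstructed later.
    """
    block = block.astype(np.int32)
    minimum = int(block.min())
    reduced = block - minimum              # (B) subtract the lowest value in the block
    reduced = reduced - (reduced % 2)      # (C) subtract one from the odd values
    reduced = reduced // 2                 # (D) divide all values by two
    return reduced.astype(np.uint8), minimum

def restore_block(reduced, minimum):
    """Reverse of the pre-processing: multiply by two and add the stored minimum."""
    return reduced.astype(np.int32) * 2 + minimum

block = np.array([[100, 101],
                  [103, 106]])
reduced, m = preprocess_block(block)
print(reduced, m)                          # [[0 0] [1 3]] with minimum 100
print(restore_block(reduced, m))           # [[100 100] [102 106]]
```

The round trip shows why the technique is near-lossless: forcing the values to be even before halving loses at most one gray level per pixel.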

After performing these pre-processing steps, the Huffman technique is applied to make the compression rate higher.

4.2.4 Algorithm: Near-lossless image compression technique steps

  1. Read the image file.

  2. Reverse the Huffman compression.

  3. Split the image into 8 × 8 blocks.

  4. Multiply all values in each block by two.

  5. Add the value stored in the dictionary file to all block values.
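These reconstruction steps can be sketched as follows (a minimal NumPy sketch; `decoded` is assumed to be the 2-D array obtained after reversing the Huffman coding, `minima` the per-block minimum values saved during compression, both hypothetical names, and the image dimensions are assumed to be multiples of the block size):

```python
import numpy as np

def decompress(decoded, minima, block_size=8):
    """Steps 3-5: walk the 8x8 blocks, multiply by two, add back the stored minimum."""
    restored = decoded.astype(np.int32)
    rows, cols = restored.shape
    for i, r in enumerate(range(0, rows, block_size)):
        for j, c in enumerate(range(0, cols, block_size)):
            block = restored[r:r + block_size, c:c + block_size]
            restored[r:r + block_size, c:c + block_size] = block * 2 + minima[i][j]
    return np.clip(restored, 0, 255).astype(np.uint8)
```

Here the `minima` array plays the role of the dictionary file mentioned in step 5.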

5 Experimental results and discussion

This section illustrates the results of the proposed Near-Lossless Image Compression Technique and its contribution to knowledge by comparing it with the Huffman coding algorithm in terms of image quality and size, using two image sets (colored and grayscale images). The Matlab programming language was used to develop the proposed algorithm, as described in Fig. 5, and the resulting software was then used to test the images.

Matlab is a software environment that uses a fourth-generation programming language and supports numerical analysis, including the development and running of algorithms. It allows working on arrays and setting up user interfaces, and it is one of the most prominent programs used in the digital image field. There are many versions of Matlab; we used version 9.4.0 because it has features that are not available in previous versions, for example graphical controls.

5.1 Experimental settings

A set of standard images was used, and the performance of the proposed technique was measured based on the results. Standard images such as baboon, arctic hare, boat, cameraman, and Lena were used, at bit depths of 8, 16, and 24 bits for grayscale and color images. The standard images were obtained from the available databases (Image Database, 2018).

Choosing the optimal block size, i.e. the one that achieves the best compression rates with the least image distortion, is necessary to enhance the algorithm’s performance. The algorithm was tested using 2 × 2, 4 × 4, and 8 × 8 blocks separately for the colored and grayscale images, and the results for the three block sizes were analyzed to find the block size that achieves the best performance.

5.2 Results and discussion

We used two standard metrics: mean squared error (MSE) and peak signal-to-noise ratio (PSNR). Since the proposed algorithm is a near-lossless technique, the decompressed image should have only a tiny distortion, one that cannot be seen by the human visual system. As described before, the Huffman algorithm is a lossless technique, which means that its MSE is zero and its PSNR is infinite.
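Both metrics can be computed directly (a minimal NumPy sketch for 8-bit images, where the peak signal value is 255):

```python
import numpy as np

def mse(original, decompressed):
    """Mean squared error between the original and the decompressed image."""
    diff = original.astype(np.float64) - decompressed.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, decompressed, max_value=255.0):
    """Peak signal-to-noise ratio in dB; infinite when the images are identical."""
    error = mse(original, decompressed)
    if error == 0:
        return float("inf")          # lossless case, e.g. plain Huffman coding
    return 10.0 * np.log10(max_value ** 2 / error)
```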

We compared the proposed algorithm and the Huffman algorithm in terms of image quality using MSE and PSNR. To better understand the results, we represent the MSE results by bar charts, where each column represents the MSE value for one image. The best image quality is achieved using the 2 × 2 block size. The Huffman algorithm’s MSE values are zero for all the images, since it is a lossless compression technique (Table 1).

Table 1 MSE results for the colored images using different block sizes, compared against the Huffman algorithm’s MSE results

The MSE values for colored and grayscale images are small, as given in Table 2, which indicates that the proposed algorithm is a near-lossless technique: the decompressed images have a low rate of distortion, and this distortion is not recognizable by the human visual system. The best image quality is achieved using the 2 × 2 block size.

Table 2 MSE results for the grayscale images using different block sizes, compared against the Huffman algorithm’s MSE results

Table 3 lists the PSNR results for the colored images using different block sizes, compared against the Huffman algorithm’s PSNR results. It is clear that the proposed method obtained better results than the other compared methods. To better understand the results, we represent the PSNR results by bar charts, where each column represents the PSNR value for one image. The best image quality is achieved using the 2 × 2 block size. The Huffman algorithm’s PSNR values are infinite for all the images, since the PSNR is calculated by dividing the squared peak signal value by the MSE, and the MSE is zero in this case.

Table 3 PSNR results for the colored images for the proposed technique and the Huffman algorithm

Table 4 presents the PSNR results for the grayscale images. The best image quality is achieved using the 2 × 2 block size. The PSNR values for colored and grayscale images indicate that the proposed algorithm is a near-lossless technique: the decompressed images have a small distortion rate, and this distortion is not recognizable by the human visual system. Moreover, the proposed algorithm achieves a promising performance compared to other well-known algorithms.

Table 4 PSNR results for the grayscale images for the proposed technique and the Huffman algorithm

In this part, a comparison is conducted between the proposed algorithm and the Huffman algorithm in terms of image size, as shown in Table 5. The compression rate (CR) is measured to find each algorithm’s percentage of saved bits. The CR is obtained by dividing the compressed output image size by the input image size; the best CR values are those close to zero, since the compressed images should be much smaller than the originals. It can be seen from the data that the 8 × 8 block achieved the best compression rates among all the block sizes and saved 28% more than the Huffman algorithm.
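For reference, the CR used here can be computed as follows (a minimal sketch; the file paths are hypothetical):

```python
import os

def compression_rate(original_path, compressed_path):
    """CR = compressed file size / original file size (lower is better)."""
    return os.path.getsize(compressed_path) / os.path.getsize(original_path)

# Example: a CR of 0.48 means the compressed image needs 48% of the original
# storage, i.e. a saving of 52%.
```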

Table 5 CR results for the colored images for the proposed technique using different block sizes, and for the Huffman algorithm

It can be seen from Table 6 that, for the grayscale images, the 4 × 4 block achieved the best compression rates among all the block sizes, with a CR 3% lower than that of the 8 × 8 blocks, and saved 19% more than the Huffman algorithm; this reflects the ability of the proposed method to give better results. Table 6 also shows that the proposed algorithm enhances the Huffman storage saving by 16% when using 8 × 8 blocks. Finally, it is clear from Table 7 that the proposed method with two-by-two blocks is the best method according to the Friedman ranking test. In Fig. 7, the average CR results are given for the various techniques.

Table 6 CR results for the grayscale images for the proposed technique using different block sizes, and for the Huffman algorithm
Table 7 The Friedman ranking test
Fig. 7
figure 7

The average CR results for various techniques

Fig. 8
figure 8

Baboon colored image compressed on Huffman

Fig. 9
figure 9

Baboon color image compressed on Near-Lossless Image Compression Technique Block (2 * 2)

Fig. 10
figure 10

Baboon color image compressed on Near-Lossless Image Compression Technique Block (4 * 4)

Fig. 11
figure 11

Baboon color image compressed on Near-Lossless Image Compression Technique Block (8 * 8)

Fig. 12
figure 12

Mosque grayscale image compressed on Huffman

Fig. 13
figure 13

Mosque grayscale compressed on Near-Lossless Image Compression Technique Block (2 * 2)

Fig. 14
figure 14

Mosque grayscale compressed on Near-Lossless Image Compression Technique Block (4*4)

Fig. 15
figure 15

Mosque grayscale compressed on Near-Lossless Image Compression Technique Block (8*8)

Figures 8, 9, 10, 11, 12, 13, 14 and 15 show several examples of original and decompressed images, for colored and grayscale images, using the various compression block sizes. These confirm the performance of the proposed algorithm on the decompressed images.

6 Conclusions and future work

This paper presents a new technique for compressing color and grayscale digital images and demonstrates that the Huffman compression rate increases when the image is divided into blocks of different sizes. The technique was applied to images of different types and sizes with different bit depths.

Huffman’s lossless technique differs from the proposed technique in that the proposed technique is near-lossless. It is a new compression technique that performs some simple steps on the blocks before they are compressed with Huffman coding; the Near-Lossless Image Compression Technique thus focuses on increasing the compression rate of the lossless Huffman technique. After several experiments in which sets of gray and colored images were compressed with Huffman coding, the same sets of images were compressed with the Near-Lossless Image Compression Technique, and the results were compared with each other. The results showed an improvement in compression rates, with a very slight loss of image resolution and minimal deformation introduced by the near-lossless technique. For example, when comparing the CR values of the 2 × 2 block of the Near-Lossless Image Compression Technique with the lossless Huffman technique, (0.72 − 0.83) = −11% for grayscale images and (0.74 − 0.94) = −20% for color images; this means that the 2 × 2 block of the Near-Lossless Image Compression Technique does not improve the compression over plain Huffman coding.

When comparing the CR values of the 4 × 4 block of the Near-Lossless Image Compression Technique with the lossless Huffman technique, (0.72 − 0.53) = 18% for grayscale images and (0.74 − 0.54) = 20% for color images; this means that the compression results using the 4 × 4 block of the Near-Lossless Image Compression Technique are reasonably good. When comparing the CR values of the 8 × 8 block with the lossless Huffman technique, (0.72 − 0.56) = 16% for grayscale images and (0.74 − 0.48) = 26% for color images; this means that compression using the 8 × 8 block of the Near-Lossless Image Compression Technique gives excellent results, especially on color images.
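For clarity, the percentages quoted above are simply the difference between the Huffman CR and the CR of the proposed technique, using the values reported above, e.g. for the color images:

```latex
\begin{align*}
\text{saving} &= CR_{\text{Huffman}} - CR_{\text{proposed}} \\
8 \times 8 \text{ blocks (color)}&: \quad 0.74 - 0.48 = 0.26 \approx 26\% \\
2 \times 2 \text{ blocks (color)}&: \quad 0.74 - 0.94 = -0.20 \approx -20\% \ \text{(no saving over Huffman)}
\end{align*}
```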

Following this experiment, several directions are suggested for further work in digital image compression: improving the compression rate of this technique by applying the same pre-processing steps before other lossless techniques, proposing logical steps similar to this technique and trying them with Huffman coding, and adapting a new optimization algorithm to solve these problems in the future.