Introduction

Recent progress in information and communication technology has resulted in the creation of massive quantities of data. As a consequence, data transmission and storage rates have increased, and security and privacy concerns have emerged. Security requirements have driven the development of encryption approaches, while limited storage and transmission resources have driven the development of compression approaches. Multimedia data transmitted over the internet are subject to bandwidth constraints and security threats; image encryption and image compression approaches are developed to overcome these drawbacks.

Recent advances in technology have created a scenario in which large volumes of information are transmitted effectively through public networks. Although public transmission channels are relied upon to carry data, the data remain vulnerable: transmitted information may be accessed and modified by unauthorized users or by malware such as ransomware [1]. Hence, data are transmitted in a secured form using encryption approaches. Encryption enhances the security of digital images and is significant in the transmission of military images, telemedicine imaging, and video conferencing [2]. In image encryption, the actual image is converted into a meaningless form of data, and the receiver decrypts the image using a unique key, which ensures the protection of the transmitted image.

The propagation of multimedia images over the network is growing day by day, and the rate of data growth is higher than the rate of technology growth. The generation of huge volumes of data demands high storage and transmission capability. To address these issues, image compression approaches are introduced. Image compression reduces the size in bytes of a multimedia file without degrading its quality, thereby reducing the cost of transmission and storage [3], and it eliminates data irrelevancy and redundancy.

The remainder of this paper is organized as follows: the classification and overall schema of image encryption approaches are detailed in "Classification of Image Encryption Technique"; the performance evaluation metrics for image encryption approaches are described in "Assessment of Performance Evaluation Metrics Based on Image Quality"; the classification of image compression approaches is described in "Classification of Data Compression Techniques"; the performance evaluation metrics for image compression techniques are elucidated in "Assessment of Image Compression Evaluation Metrics"; and "Conclusion" concludes with the scope of image encryption and compression approaches.

Classification of Image Encryption Technique

Image encryption converts the actual information into a meaningless structure before transmission over the public network. Image encryption algorithms play a significant role in protecting images [4]. In steganography, confidential information is inserted into digital media such as images, audio files, and videos to conceal its existence; information hiding is thus achieved, and the existence of the data is recognized only by the intended receiver. In cryptography, by contrast, the data are displayed in an obscured form rather than being hidden. In the digital watermarking scheme, digital information is embedded with a distinctive identifiable signal called a watermark [5]. To validate the digital information, the watermark is retrieved at the recipient end. The digital information secured using the watermarking approach can be an image, video, text, or audio file. Whenever illegal usage of a watermarked image is identified, the embedded watermark is retrieved to validate ownership claims.

Framework of Image Encryption Approach

The general framework of the image encryption approach is illustrated in Fig. 1. The plain image acts as the input image; once encrypted, it is termed the ciphered image. The plain and cipher images are denoted as PI and CI, respectively. The process of encryption is presented as:

$${\text{CI}} = {\text{En}}\_{\text{Fn}}_{{{\text{En}}\_{\text{K}}}} \left( {{\text{PI}}} \right),$$

where En_Fn() is the encryption function applied to the image PI using the encryption key (En_K). Similarly, at the recipient side, the decryption function (De_Fn()) and decryption key (De_K) are applied to the encrypted image to retrieve the original image, denoted as:

$${\text{PI}} = {\text{De}}\_{\text{Fn}}_{{{\text{De}}\_{\text{K}}}} \left( {{\text{CI}}} \right).$$
Fig. 1 Outline of image encryption approach

The image encryption approaches are classified as symmetric and asymmetric. In symmetric encryption, the encryption and decryption keys are identical, i.e., En_K = De_K, and the key is kept confidential during the data transmission process. When the keys differ, i.e., \(\mathrm{En}\_\mathrm{K}\ne \mathrm{De}\_\mathrm{K}\), the scheme is termed asymmetric encryption; in this case, De_K is kept private and En_K is made public. A toy sketch of the symmetric case is given below.
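As a minimal illustration of the symmetric case (En_K = De_K), the following Python sketch XORs an image with a pseudo-random keystream derived from a shared seed, which plays the role of the shared key. The function and variable names are illustrative assumptions, and this toy construction is not a secure cipher.

```python
import numpy as np

def xor_stream_cipher(image, key_seed):
    """Toy symmetric cipher: XOR the image with a key-seeded pseudo-random
    keystream. Encryption and decryption use the same key (En_K = De_K);
    applying the function twice with the same seed recovers the plain image.
    This is only an illustration, not a secure encryption scheme."""
    rng = np.random.default_rng(key_seed)                     # keystream generator seeded by the shared key
    keystream = rng.integers(0, 256, size=image.shape, dtype=np.uint8)
    return np.bitwise_xor(image, keystream)                   # CI = En_Fn_{En_K}(PI) and PI = De_Fn_{De_K}(CI)

plain = np.random.randint(0, 256, (4, 4), dtype=np.uint8)     # stand-in for a plain image PI
cipher = xor_stream_cipher(plain, key_seed=2024)              # ciphered image CI
restored = xor_stream_cipher(cipher, key_seed=2024)           # same key recovers PI
assert np.array_equal(plain, restored)
```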

Image Encryption Approach

The image encryption approaches are categorized into four major types which are compressive sensing, optical, transform, and spatial domain as illustrated in Fig. 2.

Fig. 2 Categorization of image encryption approach

The increasing usage of multimedia content and its applications has necessitated security systems to protect confidential information. Encrypting information is a prominent way to protect transmitted data and ensure security. Image encryption is therefore intended to protect images, and this security requirement has driven the development of various encryption approaches. The normal image is encrypted and transmitted, and the encrypted image can be accessed only by authorized parties, which assures the security of the information. The categories of encryption approaches are shown in Fig. 2, and their significance is summarized in Table 1.

Table 1 Significance of image encryption approach

Assessment of Performance Evaluation Metrics Based on Image Quality

The effectiveness of an image encryption approach is measured with the assistance of evaluation metrics. The varied characteristics of an image encryption approach are investigated using these parameters.

Differential Analysis

Differential attacks are analyzed using two parameters: the unified average changing intensity (UACI) and the number of pixel change rate (NPCR). A differential attack tests the sensitivity of the algorithm to trivial alterations in the plain image: the attacker alters the plain image, encrypts both images with the same secret key, and examines the relationship between the resulting cipher images.

Unified Average Changing Intensity (UACI)

UACI estimates the average intensity of difference between the cipher images of two plain images that differ in a single pixel [22, 23]. It is determined as:

$${\text{UACI}} = \frac{{\sum\nolimits_{l,m} {\left| {B\left( {l,m} \right) - B^{\prime}\left( {l,m} \right)} \right|} }}{{255 \times {\text{Wi}} \times {\text{Hi}}}} \times 100,$$

where \(B\left(l,m\right)\) represents the cipher image of the original plain image and \({B}^{\prime}(l,m)\) represents the cipher image of the altered plain image.

Number of Pixel Change Rate (NPCR)

The value of NPCR is estimated as,

$$\mathrm{NPCR}=\frac{{\sum }_{l,m}D(l,m)}{\mathrm{Wi}\times \mathrm{Hi}}\times 100.$$

Here,

$$D(l,m)=\left\{\begin{array}{c}0 \, {\text{if}} \, B(l,m)={B}^{\prime}(l,m)\\ 1 \, {\text{if}} \, B(l,m)\ne {B}^{\prime}(l,m)\end{array}\right.,$$

where Wi and Hi represent the width and height of the images, respectively, and D(l, m) indicates whether the corresponding pixels of the two cipher images differ. The range of NPCR is [0, 100], and for a well-encrypted image the NPCR value should be close to 100. Both UACI and NPCR should be high for a strong cipher; ideally NPCR approaches 100% and UACI approaches roughly 33% for 8-bit images [24].
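The two metrics can be computed directly from the formulas above. The following NumPy sketch is illustrative; array names and shapes are assumptions.

```python
import numpy as np

def npcr_uaci(cipher_a, cipher_b):
    """Compute NPCR and UACI (both in percent) between two cipher images
    produced from plain images that differ in a single pixel."""
    a = cipher_a.astype(np.float64)
    b = cipher_b.astype(np.float64)
    d = (a != b).astype(np.float64)                 # D(l, m): 1 where the pixels differ
    npcr = d.mean() * 100                           # sum(D) / (Wi * Hi) * 100
    uaci = (np.abs(a - b).mean() / 255) * 100       # mean |B - B'| / 255 * 100
    return npcr, uaci

c1 = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
c2 = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
print(npcr_uaci(c1, c2))   # for a good cipher, NPCR is near 99.6 and UACI near 33.46
```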

Statistical Analysis

Statistical analysis can be used to attack encryption approaches. Correlation analysis (CA) and histogram analysis (HA) are applied to the adjacent pixels of the image to verify the robustness of the encryption approach against statistical attack.

Correlation Coefficient (CC)

The correlation coefficient is applied to measure the similarity between corresponding pixels of the encrypted and original images. Adjacent pixels of the original image are strongly correlated in the vertical, diagonal, and horizontal directions. A good image encryption approach minimizes this correlation in the ciphered image [15]. The value of the correlation coefficient is estimated as [25]:

$$R_{{\left( {i,j} \right)}} = \frac{{{\text{Ci}}\left( {i,j} \right)}}{{\sqrt {{\text{De}}\left( i \right) \cdot {\text{De}}\left( j \right)} }}.$$

Here,

$${\mathrm{Ci}}_{\left(i,j\right)}=\frac{{\sum }_{x=1}^{\mathrm{Ke}}\left({i}_{x}-{B}_{\left(i\right)}\right)\left({j}_{x}-{B}_{\left(j\right)}\right)}{\mathrm{Ke}},$$
$$\mathrm{De}\left(i\right)=\frac{1}{\mathrm{Ke}}\sum_{x=1}^{\mathrm{Ke}}{\left({i}_{x}-{B}_{\left(i\right)}\right)}^{2},$$
$$\mathrm{De}\left(j\right)=\frac{1}{\mathrm{Ke}}\sum_{x=1}^{\mathrm{Ke}}{\left({j}_{x}-{B}_{\left(j\right)}\right)}^{2},$$

where Ci(i, j) denotes the covariance between the samples i and j, which are the values of adjacent pixel pairs of the image, and Ke is the number of sampled pixel pairs (i_x, j_x). De(i) and De(j) are the variances of i and j, and B(i) and B(j) denote their mean values. The range of R(i, j) is [−1, 1], and for an encrypted image it should be close to 0.
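A common way to evaluate this metric is to sample adjacent pixel pairs in a chosen direction and compute their correlation coefficient. The sketch below uses np.corrcoef, which implements the same Ci/De relation; the sampling parameters are illustrative assumptions.

```python
import numpy as np

def adjacent_correlation(img, direction="horizontal", pairs=2000, seed=0):
    """Correlation coefficient R of randomly sampled adjacent pixel pairs."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    if direction == "horizontal":
        rows = rng.integers(0, h, pairs); cols = rng.integers(0, w - 1, pairs)
        i, j = img[rows, cols], img[rows, cols + 1]
    elif direction == "vertical":
        rows = rng.integers(0, h - 1, pairs); cols = rng.integers(0, w, pairs)
        i, j = img[rows, cols], img[rows + 1, cols]
    else:  # diagonal
        rows = rng.integers(0, h - 1, pairs); cols = rng.integers(0, w - 1, pairs)
        i, j = img[rows, cols], img[rows + 1, cols + 1]
    return np.corrcoef(i.astype(float), j.astype(float))[0, 1]

cipher = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
print(adjacent_correlation(cipher))   # close to 0 for a well-encrypted image
```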

Histogram Analysis (HA)

Histogram analysis exposes how pixel values are distributed across the image. The histogram of the encrypted image differs entirely from that of the original image: the original image has a non-uniform histogram, while the encrypted image has a uniform one [2], i.e., the pixel values are equally distributed over the intensity range.
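A minimal sketch of histogram analysis, assuming an 8-bit grayscale image stored as a NumPy array; a near-uniform spread of the 256 bin counts is the behaviour expected of a good cipher image.

```python
import numpy as np

def gray_histogram(img):
    """Histogram of an 8-bit grayscale image (256 bins)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    return hist

cipher = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
hist = gray_histogram(cipher)
print(hist.min(), hist.max())   # a narrow spread indicates a near-uniform histogram
```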

Information Entropy (IE)

IE is used to estimate the average information content, in bits, carried per pixel of the multimedia content. Every pixel in the multimedia content can take different values and thus carries information. The entropy value of an image therefore measures how close the pixel distribution is to a uniform distribution [26]. It is estimated as:

$$E\left(S\right)=-\sum_{l} \left(\mathrm{Po}({s}_{l})\times {\log}_{2}\mathrm{Po}({s}_{l}) \right),$$

where E(S) denotes the entropy of the source image S and Po(s_l) represents the occurrence probability of the symbol s_l. The range of IE is [0, 8] for an 8-bit image, and the entropy of a well-encrypted 8-bit image is close to 8.
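The entropy formula above can be evaluated from the image histogram. The following sketch assumes an 8-bit grayscale image and is only illustrative.

```python
import numpy as np

def information_entropy(img):
    """Shannon entropy E(S) of an 8-bit image, in bits per pixel."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    prob = hist / hist.sum()                    # occurrence probability Po(s_l)
    prob = prob[prob > 0]                       # skip zero-probability levels (0 * log 0 = 0)
    return -np.sum(prob * np.log2(prob))

cipher = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
print(information_entropy(cipher))   # close to 8 for a well-encrypted 8-bit image
```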

Key Analysis (KA)

A major part of an encryption algorithm is devoted to generating the security key, and the effectiveness of the algorithm relies on the strength of that key, which also provides resistance against a variety of security attacks. The desirable properties of security keys are high sensitivity and a huge keyspace. If the key size is huge, the decryption process becomes tedious for the attacker, and high key sensitivity makes the image unrecoverable with even a slightly incorrect key [14].

Noise Attack (NA)

Attackers may introduce noise into the encrypted image, which may destroy the information needed to recover the image. The receiver may not be able to recover the image after such an intrusion, and an efficient approach should withstand this attack. The attacker may introduce Poisson, Gaussian, or additive noise into the encrypted multimedia content [2].
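A noise attack can be simulated by perturbing the cipher image before decryption, for example with additive Gaussian noise as in the hedged sketch below; the noise level sigma is an illustrative assumption.

```python
import numpy as np

def add_gaussian_noise(cipher, sigma=10.0, seed=0):
    """Simulate a noise attack by adding Gaussian noise to a cipher image;
    a robust scheme should still yield a recognizable decrypted image."""
    rng = np.random.default_rng(seed)
    noisy = cipher.astype(np.float64) + rng.normal(0, sigma, cipher.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```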

Execution Time (ET)

The time required to perform the encryption process is termed the execution time (ET); it combines compile time and run time [14]. A minimal ET indicates an effective approach, and ET is measured in minutes, seconds, or milliseconds.

Bit Correct Ratio (BCR)

The BCR is applied to measure the variation between the original and decrypted multimedia files; it determines the exactness of the decrypted image [27]. It is estimated as:

$$\mathrm{BCR}=\left(1-\frac{{\sum }_{i,j}^{X\times Y}O\left(i,j\right)\oplus \mathrm{De}\left(i,j\right)}{X\times Y}\right),$$

where i and j are the coordinates of a pixel in images of dimension \(X\times Y\), O is the original image, De is the decrypted image, and \(\oplus\) is the XOR operation. The value of BCR lies in the range [0, 1].
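One common bit-level reading of the formula counts the mismatched bits of the XORed images, as in the sketch below; interpreting the sum at the bit level is an assumption on our part.

```python
import numpy as np

def bit_correct_ratio(original, decrypted):
    """Bit Correct Ratio: 1 minus the fraction of differing bits between
    the original and the decrypted 8-bit images (bit-level interpretation)."""
    xor = np.bitwise_xor(original, decrypted)        # O(i, j) XOR De(i, j)
    wrong_bits = np.unpackbits(xor).sum()            # count of mismatched bits
    total_bits = original.size * 8
    return 1 - wrong_bits / total_bits

o = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(bit_correct_ratio(o, o.copy()))   # 1.0 when decryption is exact
```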

Mean Squared Error (MSE)

The MSE is used to evaluate the correctness of the pixel values; the difference between corresponding pixels is the error value [28]. It is estimated as:

$$\mathrm{MSE}=\frac{1}{XY}\sum_{m=1}^{X}\sum_{n=1}^{Y}{ \left(O\left(m,n\right)-R\left(m,n\right) \right)}^{2},$$

where O(m, n) and R(m, n) denote the original and the reconstructed (decrypted) images of size \(X\times Y\), respectively.

Peak Signal-to-Noise Ratio (PSNR)

The quality of the image is estimated by the PSNR value between the decrypted and original images [29]. It is estimated as:

$$\mathrm{PSNR}=10{\log}_{10}\frac{{({2}^{n}-1)}^{2}}{\mathrm{MSE}},$$

where n denotes the number of bits per pixel; PSNR is expressed in decibels (dB). The PSNR value should be high, and its range is \([0,\infty )\).
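Both MSE and PSNR follow directly from their formulas; the sketch below assumes 8-bit images and reports infinity when the images are identical.

```python
import numpy as np

def mse(original, restored):
    """Mean squared error between two images."""
    diff = original.astype(np.float64) - restored.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, restored, bits_per_pixel=8):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    error = mse(original, restored)
    if error == 0:
        return float("inf")
    peak = (2 ** bits_per_pixel - 1) ** 2
    return 10 * np.log10(peak / error)

o = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
r = np.clip(o.astype(int) + 3, 0, 255).astype(np.uint8)   # illustrative slightly distorted copy
print(mse(o, r), psnr(o, r))
```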

Signals to Distortion Ratio (SDR)

The SDR estimates the amount of distortion [30]. It is estimated as:

$$\mathrm{SDR}=10{\log}_{10}\frac{{\sum }_{m,n}O{(m,n)}^{2}}{{\sum }_{m,n}{\left(O\left(m,n\right)-\mathrm{De}(m,n)\right)}^{2}},$$

where De(m, n) and O(m, n) denote the decrypted and original images of dimension \(X\times Y\), respectively. SDR is expressed in decibels and its range is \([0,\infty )\). For an effective encryption algorithm, the SDR value should be minimal.
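A direct transcription of the SDR formula is shown below; it assumes the two arrays have the same shape and are not identical (otherwise the denominator is zero).

```python
import numpy as np

def sdr(original, decrypted):
    """Signal-to-distortion ratio in dB."""
    o = original.astype(np.float64)
    d = decrypted.astype(np.float64)
    return 10 * np.log10(np.sum(o ** 2) / np.sum((o - d) ** 2))
```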

Structural SIMilarity Index (SSIM)

The SSIM exposes the similarity between the decrypted and original images. It is a quality assessment measure estimated over numerous windows of the same size in the two images [31]. It is estimated as:

$$\mathrm{SSIM}=\frac{\left(2{\mu }_{I}{\mu }_{\mathrm{De}}+{\mathrm{CI}}_{1}\right)\left(2{\sigma }_{I\mathrm{De}}+{\mathrm{CI}}_{2}\right)}{\left({\mu }_{I}^{2}+{\mu }_{\mathrm{De}}^{2}+{\mathrm{CI}}_{1}\right)\left({\sigma }_{I}^{2}+{\sigma }_{\mathrm{De}}^{2}+{\mathrm{CI}}_{2}\right)},$$

where \({\mu }_{I}\) denotes the mean of the input image (I) and \({\mu }_{\mathrm{De}}\) the mean of the decrypted image (De). The variances of I and De are \({\sigma }_{I}^{2}\) and \({\sigma }_{\mathrm{De}}^{2}\), respectively, and \({\sigma }_{I\mathrm{De}}\) signifies the covariance of I and De. CI1 and CI2 are regularization constants, commonly set to (0.01P)² and (0.03P)², respectively, where P is the dynamic range of the pixel values. The SSIM value lies in the range [−1, 1].
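Standard SSIM averages the formula over many local windows; the simplified sketch below evaluates it once over the whole image, which is an assumption made for brevity.

```python
import numpy as np

def ssim_global(img, dec, dynamic_range=255):
    """Single-window (whole-image) SSIM following the formula above, using the
    usual (0.01P)^2 and (0.03P)^2 regularization constants."""
    x = img.astype(np.float64); y = dec.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    c1 = (0.01 * dynamic_range) ** 2
    c2 = (0.03 * dynamic_range) ** 2
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```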

Root Mean Squared Error (RMSE)

The RMSE is the square root of the MSE and expresses the error in the same units as the pixel values [32]. It is estimated as:

$$\mathrm{RMSE}=\sqrt{\frac{{\sum }_{l=1}^{X}{\sum }_{m=1}^{Y}{\left[O \left(l,m \right)-\mathrm{De} \left(l,m \right)\right]}^{2}}{XY}},$$

where l and m are the pixel coordinates of images of size \(X\times Y\), and O and De denote the original and decrypted images, respectively. The range of RMSE is \([0,\infty )\).

Mean Absolute Error (MAE)

The variation between the original and decrypted images is estimated by the MAE value [33]. It is estimated as:

$$\mathrm{MAE}=\frac{1}{XY}{\sum }_{l=1}^{X}{\sum }_{m=1}^{Y}{\left|O\left(l,m\right)-\mathrm{De} \left(l,m \right)\right|},$$

where \(O\left(l,m\right)\) denotes the original image and \(\mathrm{De} \left(l,m \right)\) the decrypted image, with pixel coordinates l, m and dimension \(X\times Y\). The range of MAE is [0, 2^num − 1], where num is the number of bits per pixel, and the value should be maximal.
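RMSE and MAE are straightforward to compute from their formulas, as in the brief sketch below.

```python
import numpy as np

def rmse(original, decrypted):
    """Root mean squared error."""
    diff = original.astype(np.float64) - decrypted.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

def mae(original, decrypted):
    """Mean absolute error."""
    diff = original.astype(np.float64) - decrypted.astype(np.float64)
    return np.mean(np.abs(diff))
```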

Signal to Noise Ratio (SNR)

The efficiency of the algorithm is estimated quantitatively by the SNR value [34]. It is estimated as:

$$\mathrm{SNR}=\frac{{\sum }_{x,y}{\left[O(x,y)\right]}^{2}}{{\sum }_{x,y}{\left[O(x,y)-\mathrm{De}(x,y)\right]}^{2}},$$

where \(O\left(x,y\right)\) denotes the original image and \(\mathrm{De}\left(x,y\right)\) the decrypted image, with pixel coordinates x and y. The SNR range is \([0,\infty )\), and the SNR value should be maximal.
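A direct transcription of the SNR formula (as a plain ratio; some works additionally report 10·log10 of this value in dB):

```python
import numpy as np

def snr(original, decrypted):
    """Signal-to-noise ratio as the plain ratio defined above."""
    o = original.astype(np.float64)
    d = decrypted.astype(np.float64)
    return np.sum(o ** 2) / np.sum((o - d) ** 2)
```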

Classification of Data Compression Techniques

The transmission of multimedia content over the network is growing, and the rate of data growth is higher than the rate at which technology expands [35, 36]. The generation of huge amounts of data necessitates high storage and transmission capability with high bandwidth. To address these shortcomings in transmission technology, image compression approaches are introduced; they are developed according to the requirements and the transmission conditions [37]. The categories of image compression approaches are illustrated in Fig. 3.

Fig. 3 Categorization of image compression approach

Image compression approaches minimize the size in bytes of multimedia content without degrading the quality of the multimedia file. Image compression decreases the cost of transmission and the required storage capacity. Generally, data irrelevancy and redundancy are eliminated by the image compression approach. The process of image compression is separated into two stages, namely modeling and coding. In the first stage, the multimedia content is analyzed for redundant content, which is exploited to establish an effective model.

In the subsequent stage, the difference between the newly created model and the original data is treated as residual data.

The residual values form the input to the coding stage and are encoded by an encoding approach, as sketched below. There are numerous methods for characterizing data, and these diverse descriptions have led to the establishment of various compression schemes. The significance of the compression approaches is depicted in Table 2.
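As a minimal sketch of the model-then-code idea, the example below predicts each sample from its left neighbour and keeps only the prediction residuals, which an entropy coder would subsequently encode; the simple predictor and the 1-D input are simplifying assumptions.

```python
import numpy as np

def predictive_residuals(row):
    """Model step: predict each sample from its left neighbour (first sample
    predicted as 0) and keep only the residuals for the coding step."""
    residuals = np.diff(row.astype(np.int16), prepend=0)   # residual = actual - predicted
    return residuals

def reconstruct(residuals):
    """Inverse step: the decoder rebuilds the data from the residuals."""
    return np.cumsum(residuals).astype(np.uint8)

row = np.array([100, 101, 103, 103, 104, 200, 201], dtype=np.uint8)
res = predictive_residuals(row)
assert np.array_equal(reconstruct(res), row)    # lossless: the residuals carry all information
```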

Table 2 Image compression approaches and its significant insights

Assessment of Image Compression Evaluation Metrics

Information theory offers a framework for the design of lossless compression schemes. In information theory, the entropy of a random variable is built from a quantity called self-information [74]. If a random experiment results in an event L having probability P(L), the self-information of L is given by the equation below:

$$i\left(L\right)={- {\log}}_{b}P\left(L\right).$$

For two events L and M, the self-information associated with their joint occurrence LM is represented as:

$$i\left(LM\right)={\log}_{b}\frac{1}{P(LM)}.$$

When L and M are independent, P(LM) = P(L)P(M), and the self-information associated with L and M becomes:

$$i\left(LM\right)={\log}_{b}\frac{1}{P\left(L\right)P(M)}={\log}_{b}\frac{1}{P\left(L\right)}+{\log}_{b}\frac{1}{P(M)}=i\left(L\right)+i\left(M\right).$$

The performance of a compression algorithm is examined from various aspects: computational complexity, speed, memory, quality of the reconstructed data, and the amount of compression achieved. The general measure used to estimate the effectiveness of an algorithm is the compression ratio (CR), determined as the ratio of the total number of bits needed to store the uncompressed data to the number needed to store the compressed data:

$$\mathrm{CR}=\frac{\mathrm{count \, of \, bits \, in \, uncompressed \, information}}{\mathrm{count \, of \, bits \, in \, compressed \, information}}.$$

A related measure is the bit rate in bits per bit (bpb), the average number of bits required to store one bit of the original information after compression. For images, the bit rate is expressed as bits per pixel (bpp), whereas text compression approaches use bits per character (bpc), the number of bits necessary to represent a character. Another measure, space saving, quantifies the reduction in file size relative to the uncompressed size and is estimated as follows:

$$\mathrm{Space} \, \mathrm{saving}=1-\frac{\mathrm{count \, of \, bits \, in \, compressed \, information}}{\mathrm{count \, of \, bits \, in \, uncompressed \, information}}.$$

Consider, for instance, a file of 21 MB compressed to 3 MB: the space saving is 1 − 3/21 ≈ 0.86, meaning that about 86% of the storage space is saved by the compression approach. The gain in compression is estimated as:

$$\mathrm{Compression} \, \mathrm{gain}={100{\log}}_{e}\frac{\mathrm{original} \, \mathrm{information}}{\mathrm{compressed} \, \mathrm{information}}.$$
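The three measures above can be computed from the bit counts alone, as in the sketch below, which also reproduces the 21 MB to 3 MB example from the text.

```python
import math

def compression_metrics(uncompressed_bits, compressed_bits):
    """Compression ratio, space saving, and compression gain from the formulas
    above (the gain uses the natural logarithm, as in the text)."""
    cr = uncompressed_bits / compressed_bits
    space_saving = 1 - compressed_bits / uncompressed_bits
    gain = 100 * math.log(uncompressed_bits / compressed_bits)
    return cr, space_saving, gain

# 21 MB compressed to 3 MB: CR = 7, space saving ~0.857 (about 86%), gain ~195
print(compression_metrics(21 * 8 * 1024 ** 2, 3 * 8 * 1024 ** 2))
```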

The speed of compression is estimated in cycles per byte (CPB), the number of machine cycles required to compress one byte of data. The compression factor (CF) and CR values estimate the performance of lossless compression approaches. For lossy compression, evaluation measures are needed to estimate the fidelity, distortion, and quality of the reconstructed data. The difference between the reconstructed and the original multimedia content is defined as distortion. The general metric used for evaluating distortion is PSNR; it is a dimensionless quantity expressed in decibels (dB). For the original samples (Li) and the reconstructed samples (Mi), it is estimated as:

$$\mathrm{PSNR}={20{\log}}_{10}\frac{\max\left|{L}_{i}\right|}{\mathrm{RMSE}},$$

where RMSE (root mean square error) represents the square root of mean square error (MSE) and it is estimated by:

$$\mathrm{MSE}=\frac{1}{\mathrm{no}}{\sum }_{i=1}^{\mathrm{no}}{\left({L}_{i}-{M}_{i}\right)}^{2}.$$

When the original and reconstructed images are identical, the RMSE is zero and the PSNR is infinite. An image with better similarity yields a higher PSNR value and a lower RMSE value. The SNR value is used to estimate the error rate in the signal:

$$\mathrm{SNR}={20{\log}}_{10}\frac{\sqrt{\frac{1}{\mathrm{no}}{\sum }_{i=1}^{\mathrm{no}}{L}_{i}^{2}}}{\mathrm{RMSE}}.$$

In addition, distortion can be quantified by the squared difference between the input and output signals, i.e., the mean square error. No single compression quality metric can assess every kind of signal. To evaluate image compression approaches, metrics such as PSNR, CR, MSE, RMSE, SSIM, and MS-SSIM are used.

Conclusion

Image encryption and compression approaches play a prominent role in handling security concerns and the huge amount of information generated in the digital world. Several compression and encryption approaches have been established to process numerous forms of data such as videos, audio, images, and texts. This paper outlines the various approaches and assessment metrics of image encryption and compression methods, and elaborates their significance along with reviews of existing image encryption and compression algorithms. The performance of a developed algorithm is evaluated using the assessment metrics; these metrics vary with the data and the algorithm, and the various metrics have been described.