1 Introduction

Due to the rapid development of network-related technologies over the last few decades, digital data has been used heavily for information exchange (Chen et al. 2018; Suneja et al. 2019). Digital images, an integral component of this data, carry their own security concerns. To address these, a number of image encryption algorithms have been proposed, the conventional ones being Rivest–Shamir–Adleman (RSA), the international data encryption algorithm (IDEA) and the data encryption standard (DES). However, intrinsic properties of images such as strong redundancy, high correlation among pixels, and bulky data capacity have rendered these standard encryption schemes obsolete (Enayatifar et al. 2017; Li et al. 2007; Solak et al. 2010; Solak and Çokal 2011; Wang et al. 2015a, b, c).

A colossal interest has emerged in the study of chaotic systems for image encryption, as these systems operate on a phase space of real numbers, are sensitive to initial conditions, and are stochastic or random in nature (Bisht et al. 2019a, b; Sneha et al. 2019). The similarity between chaos and cryptography has further catalysed the use of chaos theory in encryption: while cryptography involves rounds and a secret key, chaos makes use of iterations and control parameters. Image encryption can generally be divided into two stages, diffusion and permutation. Diffusion is needed to render the statistics of the encrypted data independent of the original data. Permutation is a prerequisite for augmenting the complexity between the key and the image pixels, and this randomized complexity can be obtained using chaotic systems. In 1990, an approach for controlling chaotic systems was proposed by Ott et al. (1990). Later, Fridrich (1998) devised chaos-based encryption, and since then various studies have used chaos theory to reduce the redundancies of the encrypted image (Chen et al. 2004; Masuda et al. 2006). Yet, some common attacks are not resisted by chaos-based algorithms (Zhang et al. 2012; Rhouma and Safya 2008). The spatial bit-level permutation explored in Liu and Wang (2011), obtained by dividing color images into grayscale matrices, improved the encryption algorithm, and a more recent algorithm (Zhang and Wang 2015) exploits spatiotemporal chaos to achieve higher efficiency. The proposed algorithm engages ILM, a high-dimensional chaotic system, to obtain bit-level permutations of the individual R, G and B matrices, leading to higher security and good encryption results.

Adleman (1994) pioneered DNA computing, and DNA has since proved to be an effective biological tool for improving the security of encryption (Liu et al. 2012a, b; Zhang et al. 2010a, b, 2013). The fundamental nature of DNA as a carrier of information has led researchers to propose varied encryption algorithms combining DNA with chaos (Xiao et al. 2006; Zhang and Fu 2012). Moreover, because of its huge storage, massive parallelism, and super-low power consumption, DNA encoding proves to be secure and efficacious (Head et al. 2000; Zheng et al. 2009). Combining DNA with a high-dimensional chaotic system such as ILM results in a robust encryption system. DNA XOR, DNA XNOR and DNA addition are a few of the operations that can be applied alongside chaos (Liu et al. 2012a, b). The main limitation of DNA encoding is the independence of the key-stream creation from the cipher text (Zhang 2015). This inadequacy of DNA paves the way for optimization of the encrypted images.

Inspired by the evolution of natural species, EA has been widely applied to solve optimization problems in diverse areas of engineering. Storn (1995) introduced DE, a stochastic population-based search technique. Today, DE is fruitfully applied in numerous fields such as communication (Storn 1996), pattern recognition (Ilonen et al. 2003), and mechanical engineering (Joshi and Sanderson 1999). To ensure effective functioning of the algorithm, appropriate evolutionary operators and effective encoding schemes need to be determined (Qin et al. 2009; Tuson and Ross 1998; Gómez et al. 2003; Julstrom 1995). In the proposed algorithm, the candidates can replace the parents, or the initial population, depending upon their fitness values. An optimized sequence is thus produced, which can then be utilized to achieve an efficacious encryption scheme. In this paper, entropy has been employed as the fitness function. Generally, images that show high entropy are considered to be efficiently encrypted, and DE operates on this basis.

In 2010, a novel technique for secret key generation through a 128-bit MD5 hash of mouse positions was introduced (Liu and Wang 2010). That paper achieves greater security through the larger key space produced by the one-time keys. In the proposed algorithm, to improve upon the security of encryption, SHA-256 is applied (Guesmi et al. 2016). The SHA-256 function produces a 256-bit hash from a 120-bit input, which expands the key-space to \(2^{256}\). A small variation in the input bits results in a very large variation in the output, the avalanche effect, which reduces the feasibility of brute-force attacks. There have been several new advances in the field of chaos-based encryption systems. Chaos systems based on mathematical models such as the perceptron allow the input parameters or weights to be altered dynamically (Wang et al. 2010). A more recent algorithm (Wang et al. 2019) shows the power of fast encryption in real-time systems. Incorporating parallel computation into the present algorithm would further improve its execution speed.

The main background and motivation behind the proposed work are the techniques proposed in Abdullah et al. (2012), Suri and Vijay (2017) and Enayatifar et al. (2014). In 2012, Abdullah et al. were the first to combine the logistic map (LM) with a genetic algorithm (GA) to introduce an optimized and more secure image encryption approach. Later, in 2014, Enayatifar et al. (2014) extended the work by combining DNA with LM and GA to make the algorithm more secure and optimized. Recently, Suri and Vijay (2017) extended these two works by using a weighted GA (a bi-objective GA approach) with LM and DNA to address the issue of objective selection during GA-based optimization. The approach implemented in this paper targets the weaknesses of LM by using a better and more efficient chaotic map, ILM, combines it with DNA, and uses differential evolution to obtain faster and optimized results. The evaluated parameters show that the conflation of ILM-based permutation, DNA diffusion and DE-based optimization produces an optimized encrypted image, secure for transmission, with high dynamicity due to the utilization of evolutionary algorithms.

The remaining paper is divided in the following manner. Section 2 enumerates the fundamentals that form the crucial elements of the algorithm. Section 3 discusses the proposed algorithm. Analysis is included in Sect. 4, and the final conclusion is presented in Sect. 5.

2 Preliminaries

This section elucidates the fundamental techniques used in our proposed algorithm.

2.1 Intertwining logistic map (ILM)

The classic chaotic system LM is the simplest of all and is popular for its dynamic behaviour (Fridrich 1998; Zhang et al. 2010a, b). The mathematical expression for the LM function is defined as:

$$p_{k + 1} = \mu \times p_{k} \left( {1 - p_{k} } \right)$$
(1)

where \(p_{k }\) denotes the kth value of the sequence and lies in the range (0,1]. The control parameter \(\mu\) is kept between 3.57 and 4 to obtain a completely chaotic sequence.
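As a concrete illustration, Eq. (1) can be iterated in a few lines of code (a minimal sketch; the function name and parameter values are our own choices, not taken from the paper):

```python
def logistic_map(p0, mu, n):
    """Iterate the logistic map p_{k+1} = mu * p_k * (1 - p_k)."""
    seq, p = [], p0
    for _ in range(n):
        p = mu * p * (1 - p)
        seq.append(p)
    return seq

# With mu close to 4 the orbit is chaotic and every value stays in (0, 1).
chaotic = logistic_map(0.3, 3.99, 10)
```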

Despite exhibiting randomness, the one-dimensional LM is sensitive to only one control parameter and has a smaller key-space. The authors of Alvarez and Li (2006) extended this one-dimensional LM to a two-dimensional chaotic function that is mathematically expressed as:

$$p_{k + 1} = \lambda_{1} \times p_{k} \left( {1 - p_{k} } \right) + \delta_{1} \times q_{k}^{2}$$
(2)
$$q_{k + 1} = \lambda_{2} \times q_{k} \left( {1 - q_{k} } \right) + \delta_{2} \left( {p_{k}^{2} + p_{k} q_{k} } \right)$$
(3)

where \(p\) and \(q\) are the two chaotic sequences, lying in the range (0,1], generated using the above two-dimensional chaotic function. The parameters \(\lambda\) and \(\delta\) are taken as 2.75 < \(\lambda_{1}\) ≤ 3.4, 2.75 < \(\lambda_{2}\) ≤ 3.45, 0.15 < \(\delta_{1}\) ≤ 0.21 and 0.13 < \(\delta_{2}\) ≤ 0.15 to obtain a completely chaotic sequence. In Khade and Narnaware (2012), a three-dimensional LM was proposed by extending the function of the two-dimensional LM, mathematically expressed as:

$$p_{k + 1} = \lambda \times p_{k} \left( {1 - p_{k} } \right) + \delta q_{k}^{2} p_{k} + \mu r_{k}^{3}$$
(4)
$$q_{k + 1} = \lambda \times q_{k} \left( {1 - q_{k} } \right) + \delta r_{k}^{2} q_{k} + \mu p_{k}^{3}$$
(5)
$$r_{k + 1} = \lambda \times r_{k} \left( {1 - r_{k} } \right) + \delta p_{k}^{2} r_{k} + \mu q_{k}^{3}$$
(6)

The parameters \(\lambda\), \(\delta\) and \(\mu\) are taken as 0.53 < \(\lambda\) < 3.81, 0 < \(\delta\) < 0.022, and 0 < \(\mu\) < 0.015, respectively. The initial values \(p_{0}\), \(q_{0}\) and \(r_{0}\) are kept in the range (0,1] to represent a non-linear chaotic system.

Wang and Xu (2014) designed an intertwining relation between three different LM sequences, in which the value of each sequence depends upon the other two (Khade and Narnaware 2012; Kumar et al. 2016):

$$p_{k + 1} = \left[ {\lambda \times \alpha \times q_{k} \times \left( {1 - p_{k} } \right) + r_{k} } \right]Mod1$$
(7)
$$q_{k + 1} = \left[ {\lambda \times \beta \times q_{k} + r_{k} \times \left( {1 + p_{k + 1}^{2} } \right)} \right]Mod1$$
(8)
$$r_{k + 1} = \left[ {\lambda \times \left( {q_{k + 1} + p_{k + 1} + \gamma } \right) \times \sin \left( {r_{k} } \right)} \right]Mod1 .$$
(9)

where \(\lambda\) takes values between 0 and 3.9999, \(\alpha\) > 33.5, \(\beta\) > 37.9 and \(\gamma\) > 35.7.
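The intertwining relation of Eqs. (7)-(9) can be sketched as follows. Note the sequential update: \(q_{k+1}\) uses the freshly computed \(p_{k+1}\), and \(r_{k+1}\) uses both new values. The parameter values below are illustrative choices within the stated ranges, not the paper's key:

```python
import math

def intertwining_logistic_map(p0, q0, r0, lam, alpha, beta, gamma, n):
    """Iterate Eqs. (7)-(9); each new value depends on the other two sequences."""
    ps, qs, rs = [], [], []
    p, q, r = p0, q0, r0
    for _ in range(n):
        p = (lam * alpha * q * (1 - p) + r) % 1          # Eq. (7)
        q = (lam * beta * q + r * (1 + p ** 2)) % 1      # Eq. (8), uses new p
        r = (lam * (q + p + gamma) * math.sin(r)) % 1    # Eq. (9), uses new p, q
        ps.append(p); qs.append(q); rs.append(r)
    return ps, qs, rs

# Illustrative parameters within the ranges given above.
ps, qs, rs = intertwining_logistic_map(0.1, 0.2, 0.3, 3.77, 33.6, 38.0, 35.8, 100)
```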

ILM chaotic sequences showcase a uniform distribution compared to the LM sequence (Suri and Vijay 2019). Hence, the disadvantages of the one-dimensional LM, such as blank windows, stable windows and irregular distribution of the iterated sequences, are overcome by ILM (Chen et al. 2011). Figure 1, taken from Wang and Xu (2014), compares the Lyapunov exponents of LM with those of ILM. It can clearly be seen that the Lyapunov exponents of ILM are all above zero, reinforcing the dynamical nature of ILM. Consequently, ILM is used in the proposed approach for scrambling the image pixels (Fig. 1).

Fig. 1

Lyapunov exponents of LM and ILM (Wang and Xu 2014)

2.2 Deoxyribonucleic acid (DNA)

In 1994, the first analysis of DNA computing was performed by Adleman. A (adenine), C (cytosine), G (guanine) and T (thymine) are the four nucleic acid bases that comprise a DNA sequence. It can be inferred from the Watson–Crick relationship that adenine always pairs with thymine, and guanine always pairs with cytosine, to form complements. DNA can be applied in encryption using the binary system (Wang et al. 2015a, b, c; Enayatifar et al. 2015; Zhang et al. 2016). Tables 1 and 2 show the DNA encoding–decoding rules and the DNA XOR operation, respectively.

Table 1 DNA encoding–decoding rules
Table 2 DNA XOR operation

2.2.1 DNA rules

There are eight different ways of assigning two-bit values to all the four nucleic acids. Table 1 defines the assignment based on the rule number.

2.2.2 DNA XOR operation

When two nucleic acid bases undergo the XOR operation, it is termed DNA XOR. Following the properties of binary XOR applied to the base codes, Table 2 shows the result of performing the operation.
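A sketch of DNA encoding, DNA XOR and decoding on 8-bit pixels is given below. The particular assignment A=00, C=01, G=10, T=11 is an illustrative pick of one of the eight rules, not necessarily the rule numbering of Table 1:

```python
# One illustrative encoding rule (complements A-T and C-G get complementary bits).
ENCODE = {'00': 'A', '01': 'C', '10': 'G', '11': 'T'}
DECODE = {v: k for k, v in ENCODE.items()}

def pixel_to_dna(pixel):
    """Encode an 8-bit pixel value as four DNA bases."""
    bits = format(pixel, '08b')
    return ''.join(ENCODE[bits[i:i + 2]] for i in range(0, 8, 2))

def dna_xor(a, b):
    """DNA XOR: XOR the 2-bit codes of corresponding bases."""
    return ''.join(ENCODE[format(int(DECODE[x], 2) ^ int(DECODE[y], 2), '02b')]
                   for x, y in zip(a, b))

def dna_to_pixel(dna):
    """Decode four DNA bases back to an 8-bit pixel value."""
    return int(''.join(DECODE[base] for base in dna), 2)
```

Because XOR is an involution, applying `dna_xor` with the same mask twice restores the original pixel, which is what makes the diffusion step reversible during decryption.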

2.3 Differential evolution (DE)

DE is a widely used EA in a broad range of scientific applications. Its high speed and low resource utilization make it a potential optimization tool for cryptosystems. DE differs from conventional EAs in its greedy approach to the selection of candidates. It aims at transforming the initial population \(P\) into an optimum solution. Each vector in the initial population is multi-dimensional; the number of dimensions chosen to obtain the optimal solution depends upon the application. For image encryption, the number of dimensions is taken to be the size of the image. The population size, NP, determines the number of vectors and is a critical parameter for DE optimization. DE, like other EAs, involves three operations: mutation, crossover and selection. Mutation generates the mutant (biologically referred to as the offspring) by making alterations to the parents. Crossover recombines the offspring and the parent to produce the candidate vector; the interpolation of the two is determined by the crossover rate, CR. The selection operation then chooses which of the offspring and the parent will survive. All three operations are reiterated for the evolution of the optimum solution. Figure 2 exhibits the flow of the DE algorithm used in this paper.

Fig. 2

Differential evolution (DE) algorithm

2.3.1 Mutation

The genetic operator mutation is used to produce the offspring \(O\) from the parent vector \(P\) in the population, for each iteration \(i\) and each dimension \(j\). Given below are a few strategies through which the mutation process is carried out.

  1. DE/best/1

    $$O_{i,j} = P_{best,j} + F\left( {P_{r1\left( i \right),j} - P_{r2\left( i \right),j} } \right)$$
    (10)
  2. DE/rand/1

    $$O_{i,j} = P_{r1\left( i \right),j} + F\left( {P_{r2\left( i \right),j} - P_{r3\left( i \right),j} } \right)$$
    (11)
  3. DE/rand-to-best/1

    $$O_{i,j} = P_{i,j} + F\left( {P_{best,j} - P_{i,j} } \right) + F\left( {P_{r1\left( i \right),j} - P_{r2\left( i \right),j} } \right)$$
    (12)
  4. DE/best/2

    $$O_{i,j} = P_{best,j} + F\left( {P_{r1\left( i \right),j} - P_{r2\left( i \right),j} } \right) + F\left( {P_{r3\left( i \right),j} - P_{r4\left( i \right),j} } \right)$$
    (13)
  5. DE/rand/2

    $$O_{i,j} = P_{r1\left( i \right),j} + F\left( {P_{r2\left( i \right),j} - P_{r3\left( i \right),j} } \right) + F\left( {P_{r4\left( i \right),j} - P_{r5\left( i \right),j} } \right)$$
    (14)

As seen above, a scaling factor \(F\) is required in the mutation process. Generally, a range of [0.4, 1] is viewed as effective for better mutant generation (Qin et al. 2009). The random vectors \(P_{rx\left( i \right),j}\) are distinct from both \(P_{i,j}\) and \(P_{best,j}\). The proposed approach uses the DE/rand-to-best/1 method for generating the corresponding mutant vectors.
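The DE/rand-to-best/1 strategy of Eq. (12) can be sketched for a single population vector as follows (a simplified illustration; here only the index \(i\) itself is excluded when drawing the two random vectors):

```python
import random

def mutate_rand_to_best_1(pop, best, i, F=0.5):
    """DE/rand-to-best/1 (Eq. 12): perturb vector i toward the best vector,
    plus a scaled difference of two distinct random vectors r1, r2 != i."""
    r1, r2 = random.sample([k for k in range(len(pop)) if k != i], 2)
    return [p + F * (b - p) + F * (x - y)
            for p, b, x, y in zip(pop[i], best, pop[r1], pop[r2])]
```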

2.3.2 Crossover

The mutation process produces the offspring vector from the parent vector. The crossover process then interpolates the two to generate a new candidate vector. For adequate crossover to take place, an appropriate CR is required, which is taken as 0.8 in this case. Three specific types of crossover exist, as mentioned below.

  1. Single point crossover

    In this type of crossover, the parent vector and the offspring vector are each divided into two halves. The candidate vector is formed by fusing the first half from one vector and the second half from the other.

  2. Two-point crossover

    Two-point crossover operation divides the parent vector and the offspring vector in three parts by earmarking two points for division. The new candidate vector is formed by taking each of the three parts from any of the two vectors.

  3. Multi-point crossover

    Similarly, in this operation, the two vectors are divided into multiple parts at multiple points, and the candidate vector is generated by taking each part from either of the aforementioned vectors.

The multi-point approach to the crossover operation gives a better mix of the two vectors. Hence, the multi-point crossover operation is employed along with a CR value of 0.8 to produce an effective candidate.

$$C_{i,j} = \left\{ {\begin{array}{*{20}c} {O_{i,j} , x_{i,j} \le CR} \\ {P_{i,j} , x_{i,j} > CR} \\ \end{array} } \right. .$$
(15)

Here, \(x_{i,j}\) refers to a random value in the range [0,1) drawn for the corresponding dimension of each candidate vector.
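Eq. (15) amounts to a per-dimension (multi-point) crossover, which might be sketched as:

```python
import random

def crossover(parent, offspring, CR=0.8):
    """Eq. (15): take the offspring component when the random draw x <= CR,
    otherwise keep the parent's component."""
    return [o if random.random() <= CR else p
            for p, o in zip(parent, offspring)]
```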

2.3.3 Selection

The final operation, applied after mutation and crossover, is selection. As the name suggests, this operation selects the vector that will survive into future iterations. The selection is made on the basis of the fitness function; here, the entropy \(f_{x}\) is used as the primary fitness function. If the entropy of the parent vector is higher, the parent vector is retained for the next iteration; otherwise, the candidate vector becomes the next-iteration parent vector.

$$P_{i,j + 1} = \left\{ {\begin{array}{*{20}c} {C_{i,j} ,\quad f_{{C_{i,j} }} \ge f_{{P_{i,j} }} } \\ {P_{i,j} ,\quad f_{{C_{i,j} }} < f_{{P_{i,j} }} } \\ \end{array} } \right.$$
(16)
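The selection rule of Eq. (16) is then a single comparison of fitness values; a minimal sketch, where any scalar fitness function (the paper uses image entropy) can be passed in:

```python
def select(parent, candidate, fitness):
    """Eq. (16): keep the candidate if its fitness is at least the parent's;
    otherwise retain the parent for the next iteration."""
    return candidate if fitness(candidate) >= fitness(parent) else parent
```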

3 Proposed algorithm of image encryption

This section walks through the encryption process from the very beginning: secret key generation through SHA-256, followed by DE optimization that involves chaos-based permutation and DNA diffusion. The entire flow of the image encryption algorithm is shown in Fig. 3.

Fig. 3

Block diagram of proposed approach

3.1 Color image input

To simplify the encryption process, a plain color image is broken down into three two-dimensional pixel matrices, R (red), G (green) and B (blue), which are further converted to one-dimensional matrices. Table 3 gives the pseudo code for performing this input conversion.

Table 3 Image input

3.2 Secret key generation using SHA-2

To have a larger key space and better key sensitivity, the secure hash algorithm (SHA-2) is used in the second step of the proposed approach to generate the seed values for the secret key. To generate these seed values, a stochastically produced 120-bit initial secret key is fed to the SHA-2 function. For the three dimensions of an image, three chaotic sequences are generated using three separate seeds. These seed values are generated using the pseudo code shown in Table 4.

Table 4 SHA-2 function to generate seed values
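The seed derivation might be sketched as below. How the 256-bit digest is mapped to three real-valued seeds is our assumption for illustration; the paper's exact extraction formula is the one in Table 4:

```python
import hashlib
import secrets

def generate_seeds(initial_key=None):
    """Derive three seeds in [0, 1) from the SHA-256 hash of a 120-bit key,
    one seed per colour channel (R, G, B)."""
    if initial_key is None:
        initial_key = secrets.token_bytes(15)      # 120 stochastic bits
    digest = hashlib.sha256(initial_key).digest()  # 256-bit hash
    # Map three 64-bit chunks of the digest to reals in [0, 1).
    return [int.from_bytes(digest[i * 8:(i + 1) * 8], 'big') / 2 ** 64
            for i in range(3)]
```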

3.3 First permutation

Using the three seed values generated in the second step, the third step of the proposed technique iterates the ILM function. The one-dimensional R, G and B matrices are then shuffled using the three ILM-generated sequences. Table 5 gives the pseudo code for this ILM-based shuffling process.

Table 5 First permutation
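A common way to shuffle a channel with a chaotic sequence is to reorder the pixels by the sort order of the sequence values; the sketch below illustrates this idea and its inverse for decryption (the exact shuffling in Table 5 may differ):

```python
def permute(channel, chaotic_seq):
    """Reorder a 1-D pixel list by the sort order of an ILM sequence."""
    order = sorted(range(len(channel)), key=lambda i: chaotic_seq[i])
    return [channel[i] for i in order], order

def inverse_permute(shuffled, order):
    """Undo the shuffle during decryption."""
    original = [0] * len(shuffled)
    for dst, src in enumerate(order):
        original[src] = shuffled[dst]
    return original
```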

3.4 Optimization through DE

In this step, the optimized mask sequence is obtained through DE. First, the population vectors are randomly initialized and the fitness value of each is stored. In each iteration, every vector of the population undergoes the mutation, crossover and selection processes. Finally, the vector with the best fitness value forms the optimized mask sequence. Table 6 shows the pseudo code for the same.

Table 6 Optimized mask DNA through DE

3.5 Final encryption

The final step combines the steps of Sects. 3.1–3.4 to generate the final cipher image. The seed obtained through SHA-256 is used to generate the three ILM sequences, which shuffle the plain image. The optimized mask sequence is obtained through DE and converted to DNA format along with the shuffled image. The two then undergo diffusion through a DNA XOR operation, and the result is DNA-decoded to form the encrypted image. The pseudo code for the entire process is given in Table 7 and the entire flowchart is shown in Fig. 4.

Table 7 ILM-DE encryption
Fig. 4

Flowchart of the proposed algorithm

4 Simulation results

This section describes the experimental setup used to implement the proposed encryption technique, along with the different evaluation parameters used to test its encryption efficiency. An efficacious image encryption method should not only show resistance against differential and statistical attacks, but should also withstand brute-force attacks, assessed through key sensitivity and key space analyses. Hence, the proposed method has been evaluated against the following parameters so that the purpose of an efficient encryption technique can be achieved.

4.1 Experimental setup

For the experimental setup, Python 2.7 has been used with the PyCharm 2.3 IDE on a Windows 10 PC with an Intel Core i3 processor clocked at 1.7 GHz, 4 GB RAM and a 500 GB hard disk. Sample standard color images such as Lena, Bungee and Baboon of sizes 64 \(\times\) 64, 128 \(\times\) 128, 256 \(\times\) 256 and 512 \(\times\) 512 are taken as the input image data-set for conducting the experiments.

4.2 Key space analysis

This parameter determines the resisting ability of an encryption algorithm towards brute-force attacks. It measures the key sample space from which the encryption key is selected. Hence, to reduce the feasibility of brute force, the key sample space should be made very large. The proposed work uses the SHA-2 function, which generates a key-space of size \(2^{256}\), considered large enough to resist brute-force attacks (Alvarez and Li 2006).

4.3 Key sensitivity analysis

An efficient encryption technique is sensitive to minute changes in the secret key used. This avalanche effect is necessary for key sensitivity, as it produces an output entirely different from the previous one. To evaluate this parameter, the sample Bungee image of Fig. 5a is encrypted using a 360-bit secret key, and Fig. 5b shows the encrypted output for this key.

Fig. 5

a Plain bungee image. b Encrypted image

Then, a one-bit change is made in the original image and the altered image is re-encrypted using the same secret key. Figure 6c shows the re-encryption result, and Fig. 6d shows the difference between the two cipher images of Fig. 6b, c of Bungee. It can clearly be observed from the evaluation that the proposed technique is sensitive to the secret key and is able to resist exhaustive attacks. This also shows the sensitivity of the encryption algorithm to changes in the plain image, as the two encrypted images in Fig. 6b, c are generated from plain images differing in only one bit and yet are highly independent of each other, as illustrated in Fig. 6d. Apart from that, both resultant encrypted images show the expected values of the analysis parameters discussed later in the paper, further illustrating the efficiency and sensitivity of the encryption algorithm.

Fig. 6

a Sample bungee image. b Encrypted image of sample bungee image. c Re-encrypted image of one bit-altered sample bungee image. d Difference image of encrypted and re-encrypted image

4.4 Differential attack

To perform this attack, the attacker produces two encrypted images by making trivial changes in the original image: one encrypted image is generated from the original image and the second from the changed original image. An effort is then made to establish a correlation by comparing the encrypted images with the original image. Two parameters are used for testing resistance to differential attack: the unified average changing intensity (UACI), which is the percentage of the average change in intensity of corresponding pixels, and the number of pixel change rate (NPCR), which signifies the percentage of pixels that differ between the two encrypted images. These are mathematically expressed as:

$$UACI = \frac{1}{P \times Q}\mathop \sum \limits_{j = 1}^{P} \mathop \sum \limits_{k = 1}^{Q} \frac{{\left| {{\text{c}}_{1} \left( {{\text{j}},{\text{k}}} \right) - {\text{c}}_{2} \left( {{\text{j}},{\text{k}}} \right)} \right|}}{255} \times 100{\text{\% }}$$
(17)
$$NPCR = \frac{1}{P \times Q}\mathop \sum \limits_{j = 1}^{P} \mathop \sum \limits_{k = 1}^{Q} {\text{D}}\left( {{\text{j}},{\text{k}}} \right) \times 100{\text{\% }}$$
(18)

where \({\text{D}}\left( {{\text{j}},{\text{k}}} \right)\) is given as

$$D\left( {j,k} \right) = \left\{ {\begin{array}{*{20}l} {1,\quad {\text{c}}_{1} \left( {{\text{j}},{\text{k}}} \right) \ne {\text{c}}_{2} \left( {j,k} \right)} \\ {0,\quad otherwise} \\ \end{array} } \right.$$
(19)

where the two encrypted images are denoted by \({\text{c}}_{1}\) and \({\text{c}}_{2}\), and \(c\left( {j,k} \right)\) denotes the pixel value at index \(\left( {j,k} \right)\) in the corresponding image.
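Eqs. (17)-(19) translate directly into code; a sketch for two equal-sized cipher images given as flat lists of 8-bit pixel values:

```python
def npcr_uaci(c1, c2):
    """Return (NPCR %, UACI %) per Eqs. (17)-(19) for two flat cipher images."""
    n = len(c1)
    diff = sum(1 for a, b in zip(c1, c2) if a != b)          # D(j, k) count
    intensity = sum(abs(a - b) for a, b in zip(c1, c2)) / 255.0
    return diff / n * 100, intensity / n * 100
```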

The evaluated values of UACI and NPCR are shown in Table 8. It can be observed from the results that the proposed encryption method is very close to the ideal values, which are above 99% for NPCR and about 33% for UACI. Thus, it can be concluded from the obtained values of these two parameters that the proposed technique effectively resists differential and plain-text attacks.

Table 8 Analysis parameters tabulated for color data-set images

Table 9 shows an in-depth comparison of the basic LM + DNA + DE and ILM + DNA + DE techniques on the basis of the parameters NPCR, UACI, CC and entropy for Fig. 5a. High NPCR and UACI combined with low CC and an entropy closer to eight demonstrate the better chaotic efficiency of ILM as compared to LM.

Table 9 Comparison with earlier proposed methods

4.5 Histogram analysis

The histogram delineates the frequency of the pixel distribution throughout the image and is an integral statistical feature: it plots the frequency of each pixel value in the image. Technically, a cipher image should have a flat histogram, in contrast to the steep slopes of a plain-image histogram. This increases the level of randomness and makes it difficult to extract information from the image.

The results of the histogram evaluation are shown in Table 10. It can be observed that images encrypted using the proposed method have a uniform distribution, in comparison to the source images, which have an irregular or non-uniform distribution. Hence, this demonstrates the resistance of the proposed method against statistical attack.

Table 10 Histogram analysis

Variance analysis is a quantitative analysis used for evaluating the uniformity of encrypted images; it is a mathematical counterpart of histogram analysis. The value of the variance is inversely proportional to the uniformity of the encrypted image, i.e., the lower the variance, the more uniform the ciphered image (Zhang and Wang 2014). The variance is evaluated as:

$$var\left( X \right) = \frac{1}{{n^{2} }}\mathop \sum \limits_{j = 1}^{n} \mathop \sum \limits_{k = 1}^{n} \frac{1}{2}\left( {x_{j} - x_{k} } \right)^{2}$$
(20)

where X is the vector of histogram values, X = {x1, x2,…, x256}, and xi is the number of pixels with gray value equal to i. For a color image, the variance is calculated by averaging the variances of the three matrices corresponding to the R, G and B components.
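Eq. (20) can be computed directly from the histogram bins; a sketch with the O(n²) double sum written out literally:

```python
def histogram_variance(hist):
    """Variance of a histogram per Eq. (20): (1/n^2) * sum_j sum_k (x_j - x_k)^2 / 2."""
    n = len(hist)
    return sum((xj - xk) ** 2 for xj in hist for xk in hist) / (2 * n * n)
```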

There are two ways of analysing the variance. The first is comparing the variance of the ciphered image with that of the plain image. The variance of the color Bungee image was evaluated as 742,453.35, whereas the ciphered image has a variance of 6497.44, showing far greater uniformity in the ciphered image than in the plain image.

The second method compares the variance values of multiple ciphered images obtained by encrypting the same Bungee image with different encryption keys. All values were observed in the range 6300–6600, depicting the consistency of the algorithm in producing uniform ciphered images and rendering statistical attacks useless against the proposed algorithm.

4.6 Correlation coefficient (CC) analysis

The term correlation is used to establish and quantify the linear association between two adjacent image pixels. Pixels of a plain or original image have a high correlation, whereas an encrypted image should have a low CC value. The correlation coefficient \(r_{xy}\) is given by the formula:

$$r_{xy} = \frac{{cov\left( {x,y} \right)}}{{\sqrt {D\left( x \right)} \sqrt {D\left( y \right)} }}$$
(21)

where

$$cov\left( {x,y} \right) = \frac{1}{S}\mathop \sum \limits_{i = 1}^{S} \left( {x_{i} - E\left( x \right)} \right)\left( {y_{i} - E\left( y \right)} \right)$$
(22)
$$D\left( x \right) = \frac{1}{S}\mathop \sum \limits_{i = 1}^{S} \left( {x_{i} - E\left( x \right)} \right)^{2} .$$
(23)
$$E\left( x \right) = \frac{1}{S}\mathop \sum \limits_{i = 1}^{S} x_{i}$$
(24)

where \(x\) and \(y\) denote the values of two adjacent pixels and \(S\) denotes the number of randomly selected pixel pairs \(\left( {x,y} \right)\). The expectation and variance of \(x\) are denoted by \(E\left( x \right)\) and \(D\left( x \right)\), respectively.
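Eqs. (21)-(24) for a set of sampled adjacent-pixel pairs can be sketched as:

```python
import math

def correlation_coefficient(x, y):
    """r_xy of Eq. (21) for paired adjacent-pixel samples x and y."""
    s = len(x)
    ex, ey = sum(x) / s, sum(y) / s                               # Eq. (24)
    cov = sum((a - ex) * (b - ey) for a, b in zip(x, y)) / s      # Eq. (22)
    dx = sum((a - ex) ** 2 for a in x) / s                        # Eq. (23)
    dy = sum((b - ey) ** 2 for b in y) / s
    return cov / (math.sqrt(dx) * math.sqrt(dy))                  # Eq. (21)
```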

The results in Table 8 show the horizontal, vertical and diagonal CC values obtained using the proposed method. The CC has been calculated by randomly choosing 1000 pairs of adjacent pixels from the image and using these duplets to compute the coefficient value. The obtained results show that the CC values of the encrypted images are very low, i.e. close to zero.

4.7 Resistance attack analysis

Sections 4.1 to 4.6 describe metrics that help assess resistance to the classical attacks (Wang et al. 2012; Bisht et al. 2018; Jaroli et al. 2018), which are based on the assumption that the cryptanalyst knows the mechanism of the cryptosystem thoroughly, barring the initial seed. The four classical types of attack are mentioned below:

  • Ciphertext only where the attacker has the knowledge of a couple of cipher texts.

  • Known plaintext where the attacker has the knowledge of the plaintext and the corresponding cipher text.

  • Chosen plaintext where the attacker has selective access of encryption system from which the corresponding ciphertext can be extracted from the chosen plaintext.

  • Chosen ciphertext, where the attacker has selective access to the decryption system, from which the corresponding plaintext can be extracted for a chosen cipher text.

Since the proposed algorithm has a good key space and the chaos system is sensitive to the initial seed, the algorithm is resistant against the chosen plaintext attack, which is one of the most common attacks. The DNA diffusion and DE-based optimization, along with ILM and SHA-256, make the cryptosystem more secure and resistant towards the aforementioned attacks.

4.8 Information analysis

The information analysis parameter termed entropy is used to quantify the level of randomness or uncertainty. Low entropy signifies less ergodicity, and high entropy signifies an increased level of randomness (Bisht et al. 2019a, b). The ideal entropy value for an 8-bit image is eight, and entropy is numerically defined as:

$$H\left( m \right) = \mathop \sum \limits_{i = 0}^{{2^{N} - 1}} P\left( {m_{i} } \right)\log_{2} \frac{1}{{P\left( {m_{i} } \right)}}$$
(25)

where \(N\) denotes the number of bits per gray level, the total number of symbols is denoted by \(M\) (= \(2^{N}\)), the variable \(m_{i} \in M\), and \(P\left( {m_{i} } \right)\) denotes the probability of occurrence of level \(m_{i}\) in the image.
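Eq. (25) for a list of 8-bit pixel values can be sketched as follows (gray levels with zero probability contribute nothing to the sum and are simply omitted):

```python
import math
from collections import Counter

def shannon_entropy(pixels):
    """Information entropy H(m) of Eq. (25); log2(1/P) is written as log2(n/c)."""
    n = len(pixels)
    counts = Counter(pixels)
    return sum((c / n) * math.log2(n / c) for c in counts.values())
```

A perfectly uniform 8-bit image (every level equally likely) attains the ideal value of eight.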

Table 8 shows the information entropy values of R, G, B components for the encrypted color Bungee image. The results justify the good information entropy values of the encrypted images.

4.9 Contrast analysis

This parameter is used to compute the intensity difference between successive pixels of an image (Khan et al. 2015, 2017). In other words, it enables a user to distinguish between the various entities existing in the image. Hence, this parameter mainly emphasizes the intensity computation of a pixel, performed over the full image. The mathematical expression for the contrast parameter is (Khan et al. 2017):

$$C = \sum\nolimits_{i,j = 1}^{N} {|i - j|^{2} p(i,j)}$$
(26)

where the gray-level co-occurrence matrix (GLCM) is denoted by \(p\left( {i, j} \right)\), and the number of its rows and columns is denoted by \(N\). The evaluated results for the input image data set using the proposed method are shown in Table 8.

4.10 Grayscale and binary image analysis

A grayscale image, commonly known as a black-and-white image, contains only one component, where each pixel depicts the intensity of light. Unlike a grayscale image, which has a range of 256 possible pixel values, a binary image offers a choice of only two pixel values (typically black and white); like a grayscale image, however, it is also a type of digital image. Owing to its flexible architecture, the proposed encryption technique works for all three types of images (grayscale, binary and color).
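This flexibility can be pictured as a thin dispatch layer that reduces every supported input to a list of 8-bit channels before the common permutation/diffusion pipeline runs. The helper below is hypothetical; the function name and the binary-scaling convention are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def to_channels(img):
    """Reduce any supported image to a list of 8-bit channels
    (hypothetical pre-processing; the same encryption pipeline
    would then be applied to each channel independently)."""
    if img.ndim == 2:                    # grayscale or binary input
        channel = img.astype(np.uint8)
        if channel.max() <= 1:           # binary: rescale {0,1} to {0,255}
            channel = channel * 255
        return [channel]
    # color: split into R, G, B channels
    return [img[..., k].astype(np.uint8) for k in range(img.shape[-1])]

binary = np.array([[0, 1], [1, 0]], dtype=np.uint8)
color = np.zeros((2, 2, 3), dtype=np.uint8)
print(len(to_channels(binary)), len(to_channels(color)))  # 1 3
```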

4.11 Time comparison

The run times of the two EA algorithms, i.e., GA and DE, are compared by executing the programs on the experimental setup described in Sect. 4.1. Both algorithms depend on two input parameters: the population size and the number of iterations. The larger the population size or the number of iterations, the longer the run time for either algorithm. Although the run time increases with these parameters, the time taken by GA always remains many times higher than that of DE. The time-based comparison between the two algorithms is shown in Fig. 7.

Fig. 7 Time comparison between DE and GA for different populations and iterations
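The scaling trend in Fig. 7 can be reproduced in miniature by timing a self-contained DE loop across population sizes. The sketch below is a toy DE/rand/1/bin minimizing a sphere function; all parameter values (F, CR, bounds, iteration count) are illustrative assumptions, not the paper's settings:

```python
import time
import numpy as np

def de_minimize(f, bounds, pop_size, iters, seed=0):
    """Minimal DE/rand/1/bin loop (illustrative parameters only)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    F, CR = 0.8, 0.9                      # mutation factor, crossover rate
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            trial = np.where(rng.random(dim) < CR, mutant, pop[i])
            f_trial = f(trial)
            if f_trial < fit[i]:          # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    return fit.min()

sphere = lambda x: float(np.sum(x * x))
for pop_size in (20, 40, 80):             # run time grows with population size
    t0 = time.perf_counter()
    best = de_minimize(sphere, [(-5, 5)] * 4, pop_size=pop_size, iters=50)
    print(pop_size, f"{time.perf_counter() - t0:.3f}s", f"best={best:.2e}")
```

Each generation costs O(population size × dimension) function evaluations, which is why run time rises with both tuning parameters for DE and GA alike.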

4.12 Comparison with existing approaches

In recent years, researchers have not only used multi-dimensional chaotic maps instead of one-dimensional ones, but have also combined chaotic maps with techniques such as DNA coding, optimization methods and cellular automata to build more efficient and secure image encryption schemes. The work proposed in this paper is an example of one such technique. Table 12 compares the proposed technique with some earlier image encryption algorithms that used LM, DNA and genetic algorithms (Guesmi et al. 2016; Abdullah et al. 2012; Suri and Vijay 2017; Wang and Xu 2014). It also compares the approach with encryption techniques that manipulate the bits of pixels in their algorithms. One such technique combines ILM with reversible cellular automata (RCA), in which only the higher 4-bit part of each pixel is encrypted (Wang and Luan 2013). Another encrypts the image data through a cyclic shift of the bits within each pixel (Wang et al. 2015a, b, c). Motivated by the results of these methods, the proposed algorithm contributes in two ways. First, it incorporates the significant contributions of the aforementioned methods. Second, and most importantly, it improves upon them by engaging an evolutionary algorithm that provides an augmented key space, high randomness and a fast process. Thus, the proposed approach combines the best of these elements, providing efficacious image encryption (Table 12).

Table 11 Analysis parameters tabulated for grayscale and binary images
Table 12 Comparison with earlier proposed methods

5 Conclusion and future work

An algorithm has been proposed to provide an efficacious, optimized approach to image encryption. The approach integrates SHA-256 for generating the seed, ILM for permuting the image pixels using the location map, and DNA diffusion. Moreover, the algorithm is optimized with the help of DE, which produces a mask sequence that is converted to DNA and utilized in the DNA diffusion process. This optimization plays a crucial role in providing efficient encryption: high entropy values and low CC values directly indicate better results for an optimized encryption. The results of DE optimization are also compared with those of GA optimization. Theoretical analysis and experimental results confirm that the algorithm using DE achieves better encoding efficiency than GA, and that encryption using DE is faster than encryption using GA. Hence, DE can be used to obtain a quicker and more secure encryption process.