1 Introduction

With the swift growth of the internet and technology, digital images have become vulnerable to fraudulent manipulation. Digital images are the simplest and most commonly used media for sharing and transferring information, and they serve in various fields as visual proof. Hence, manipulation detection in digital images has become the need of the hour [38]. Investigators have proposed several methods in the last decade that can be categorized into two major types: active and passive (blind) methods [37]. Active methods are employed when some information about the image, such as an embedded watermark or digital signature, is available beforehand [13]; image watermarking and digital signatures are the techniques that come under active forgery detection. Passive methods, on the other hand, work on the intrinsic changes made in the image, which are used to corroborate its genuineness. Passive methods have attracted much more attention in recent times, as they are more practical than active methods: in a practical scenario, only a single image is handed over to a forensic team, with no prior information. In such cases, passive blind methods are more appropriate. Two types of forgery commonly operate on digital images: copy-paste and splicing (image composites). For uncovering copy-paste forgery, researchers have proposed many algorithms that follow a simple procedure of feature extraction followed by feature vector matching [1, 10, 22, 36]. Image splicing, on the other hand, introduces intrinsic changes in the image that in turn create inconsistencies. The major goal of any forgery detection method is to determine such inconsistencies.

In the last couple of years, various methods for detecting image composites, or splicing, have also been proposed. Farid [11] proposed a method to unveil artificial higher-order correlations introduced by the forgery operation, by analyzing the interactions between wave patterns in the frequency domain. A 266-dimensional feature vector was created using the wavelet transform and Markov features, and resulted in a success rate of 91.8% on the Columbia dataset developed in the Digital Video and Multimedia (DVMM) laboratory at Columbia University [31]. The method used statistical features based on one-dimensional and two-dimensional moments, improving on a similar methodology used in steganalysis [30]. Hsu and Chang [17] proposed a novel method based on the camera response function and geometric invariants in a semi-automatic routine; image segmentation was later incorporated to make it fully automatic [18]. Dong et al. [6] suggested a run-length-based method to distinguish between original and fake images, which was later surpassed by He et al. [15]. Chen et al. [4] reached a detection rate of 82.32% on the Columbia DVMM splicing assessment dataset by comparing the weighted alignment of the Fourier transform components of the image with the summation of the components, using a Support Vector Machine (SVM) classifier. The Gray Level Co-occurrence Matrix (GLCM) of the edge image has also been used on colored images to detect splicing forgery [39], where the edges of the chrominance component of the image were modeled as a Markov chain. The significant limitation of that method is that only a single color channel is exploited for further processing. The feature dimensionality is kept low, and an accuracy of 95.6% was attained on CASIA v2.0. It has further been noticed that CASIA v2.0 has some drawbacks, and the detection rate with Markov features deteriorated to 78% after rectification of the dataset [32].

It has been observed from the literature that Markov features can unveil the dependencies among adjoining pixels whenever there is any small change owing to a splicing operation [9, 14, 16, 40]. Markov features are extracted using the transitional probabilities between the current and future states of the image blocks; the key strength of this transition probability matrix is its ability to indicate the correlation between pixels. Hence, a method based on Markov features computed over the color characteristics of an image is proposed. Quaternions in the discrete cosine transform are used in various fields [12, 29] owing to their suitability for processing color images. In image tampering detection, most procedures transform the color image into a grayscale image before the actual processing; such algorithms do not take the color characteristics of the image into account. However, these color characteristics are important for distinguishing forged images from original ones, since the splicing operation modifies the color characteristics of an image. To make use of the information from all three channels of the image, the concept of quaternions can be used. Quaternions help remove redundancy and can provide a quantitative analysis of the image. By employing quaternions, the detection accuracy of composite image forgery detection algorithms can be improved by exploiting the color distortion. Thus, an algorithm for uncovering forgery in digital images using quaternions in the frequency domain is proposed in this article, with very convincing results. The following are the two major reasons for proposing an approach based on Block-QDCT (BQDCT):

  1.

    Diverse regions in the image have distinctive content and textural information. We propose to apply overlapping Block-QDCT on the input image; this avoids the inconsistencies produced across diverse regions of a natural image during regular DCT.

  2.

    There are continuity, consistency, smoothness, periodicity and regularity between the pixels of an unaltered image [21], and this relationship decays gradually over a distance of around 12 to 18 pixels, so the procedure of Markov-based Block-QDCT can diminish the impact of distant pixels on the proposed statistical features.

Further, classification plays a major role in various digital image processing tasks. Data-driven approaches such as machine learning and deep learning have demonstrated remarkable results in most image processing and classification problems [3, 23, 28]. Different classifiers can be fused together such that the outcome of the fusion is more accurate than any single classifier, including the best one [5, 24]. The elementary notion behind fusing the results of various classifiers is that the final classification decision should not depend on the result of a single classifier; instead, all the classifiers contribute to the decision making. In this paper, majority voting is used to process the outcomes of the classifiers. The contributions of the article are as follows:

  1.

    It proposes a novel forgery detection approach that extracts features from color images using Markov-based transitional probabilities on quaternions in the frequency domain, i.e. the discrete cosine transform.

  2.

    Markov-based features help in detecting the intrinsic changes that occur due to forgery operations.

  3.

    The extracted features are classified using a fusion of various classifiers to obtain the best results.

  4.

    Evaluation of the proposed approach against eight state-of-the-art manipulation detection approaches is presented on three benchmark datasets.

  5.

    Seven different performance metrics are calculated on the three benchmark datasets, establishing that the proposed approach either beats the existing methods or gives promising outcomes in terms of accuracy.

The remaining article is structured as follows. Section 2 explains the proposed methodology. Section 3 gives the details about the experimental outcomes obtained from the simulations. At last, concluding notes and findings of the experimental results are given in Section 4.

2 Proposed methodology

The block diagram of the proposed methodology is given in Fig. 1. Intra-block and inter-block differences in the vertical, horizontal, and diagonal directions of the quaternion discrete cosine transform of an RGB image are utilized to formulate the feature vector. The correlations between and within blocks are also considered along the major-diagonal and anti-diagonal quaternion discrete cosine transform (DCT) coefficients. The final feature vector is created by calculating the correlations along every such direction over the DCT coefficients of the image. The feature vector is then fed to various combinations of machine learning classifiers to distinguish between authentic and fake images.

Fig. 1

Block diagram of proposed methodology

2.1 Constructing the quaternions

The notion of quaternions is widely used in pure and applied mathematics. Quaternions are employed in three-dimensional computer graphics, texture analysis and computer vision applications. In color image processing, quaternions have proved very efficient: they account for all three color channels, so a colored image can be treated holistically as a vector field. A quaternion is an extension of a complex number, a quadruple with one real part and three imaginary parts, of the form shown in (1).

$$ a+ bi+ cj+ dk $$
(1)

where a, b, c and d are real numbers, a is the real (non-imaginary) part of the quaternion, and i, j and k are the fundamental quaternion units, which satisfy the Hamilton rules [2]:

$$ ij=k,\ jk=i,\ ki=j,\ ji=-k,\ kj=-i,\ ik=-j $$
(2)

These rules imply that i² = j² = k² = ijk = −1. Quaternion multiplication is non-commutative; other basics of quaternions can be found in [2]. A quaternion can be created from two complex numbers with the help of the Cayley-Dickson construction [20]. All three quaternion units are orthogonal to each other. Let m, n ∈ C with m = a + bi, n = c + di and a, b, c, d ∈ R; then

$$ q=m+ nj,q\ \varepsilon\ Q,{j}^2=-1 $$
(3)

Hence,

$$ q=a+ bi+\left(c+ di\right)j=a+ bi+ cj+ di j=a+ bi+ cj+ dk $$
(4)

The transformation given in (4) is used to create the quaternion from two complex numbers; quaternions are therefore also called hypercomplex numbers [20]. More details on constructing a quaternion can be found in [20, 27]. Assume μ1 and μ2 are two mutually perpendicular unit quaternion axes; then q can be decomposed into two complex coordinates along the directions of μ1 and μ2.

$$ q={m}^{\prime }+{n}^{\prime }{\mu}_2,{\mu}_2^2=-1 $$
(5)

where m′ = a′ + b′μ1 and n′ = c′ + d′μ1 with a′, b′, c′, d′ ∈ R; then q = a′ + b′μ1 + c′μ2 + d′μ3, where μ3 = μ1μ2 is perpendicular to both μ1 and μ2. For the problem addressed in this article, the image coordinates (b, c, d) are transformed into the coordinates (a′, b′, c′, d′) under the three axes μ1, μ2 and μ3.
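The construction in (1)-(4) can be illustrated with a short, self-contained Python sketch (our own illustrative code, not part of the proposed implementation): a quaternion is stored as a tuple (a, b, c, d), built from two complex numbers per the Cayley-Dickson construction, and the Hamilton product of equation (2) exhibits the non-commutativity of multiplication.

```python
def quaternion_from_complex(m: complex, n: complex):
    """Build q = m + n*j from two complex numbers (Cayley-Dickson).

    With m = a + bi and n = c + di, equation (4) gives
    q = a + b*i + c*j + d*k, returned here as the tuple (a, b, c, d).
    """
    return (m.real, m.imag, n.real, n.imag)

def quaternion_multiply(p, q):
    """Hamilton product of two quaternions given as (a, b, c, d) tuples.

    Implements the rules of equation (2): ij = k, jk = i, ki = j and
    their anti-commutative counterparts, so p*q != q*p in general.
    """
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

# i * j = k, while j * i = -k: multiplication is non-commutative.
i = (0.0, 1.0, 0.0, 0.0)
j = (0.0, 0.0, 1.0, 0.0)
print(quaternion_multiply(i, j))  # (0.0, 0.0, 0.0, 1.0)  -> k
print(quaternion_multiply(j, i))  # (0.0, 0.0, 0.0, -1.0) -> -k
```

Note that `quaternion_multiply(i, i)` likewise returns (−1, 0, 0, 0), confirming i² = −1 as stated above.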

2.2 Discrete cosine transform on quaternions

In the literature, most techniques based on the discrete cosine transform separate the color channels of the image: for example, the image is converted to the YCbCr color space and only the Y component is used in the detection procedure [19, 21]. This does not exploit the correlation between the color channels. QDCT, on the other hand, handles all three channels simultaneously, so colored images are processed in an integrated manner. An RGB image can be denoted by a quaternion matrix as shown in (6).

$$ {f}_q\left(m,n\right)={f}_r\left(m,n\right)i+{f}_g\left(m,n\right)j+{f}_b\left(m,n\right)k $$
(6)

where fr(m, n), fg(m, n) and fb(m, n) are the red, green and blue color components of the image [19]. In the proposed approach, the forward quaternion discrete cosine transform (FQDCT) is used. Since quaternion multiplication is non-commutative, the FQDCT comes in two forms: left-handed and right-handed.
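Equation (6) amounts to encoding each pixel as a pure quaternion, with a zero real part and the R, G, B values in the three imaginary components. A minimal NumPy sketch of this representation (illustrative only; the function name and the (m, n, 4) array layout are our assumptions):

```python
import numpy as np

def rgb_to_quaternion(img: np.ndarray) -> np.ndarray:
    """Encode an RGB image as a pure quaternion matrix, following (6).

    `img` has shape (m, n, 3). The result has shape (m, n, 4), holding
    (a, b, c, d) per pixel: the real part a is zero, and the R, G, B
    channels occupy the i, j, k imaginary components respectively.
    """
    m, n, _ = img.shape
    q = np.zeros((m, n, 4), dtype=np.float64)
    q[..., 1] = img[..., 0]  # red   -> i component
    q[..., 2] = img[..., 1]  # green -> j component
    q[..., 3] = img[..., 2]  # blue  -> k component
    return q
```

The real part being zero is what makes the quaternion "pure"; the FQDCT would then operate on this (m, n, 4) field as a whole rather than channel by channel.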

2.3 Markov features of each block from discrete cosine transform on quaternions

The steps to calculate the Markov features are provided in [16], where Markov chain features in the DWT and DCT domains performed considerably well. In our method, the primary feature-extraction stage differs from [16]. The original color images are split into sliding blocks of size 8 × 8; each block obtained after segmentation is still a colored sub-image. A quaternion is formed from the R, G and B components of the sub-image, and the DCT is applied to the resulting quaternion matrix. The transform coefficients are assembled in a matrix and their magnitude (the square root of the sum of the squared real and imaginary parts) is computed. The final matrix is then obtained by arranging the blocks according to their block locations. To compute Markov features from the QDCT coefficients, the coefficients are rounded and their absolute values are taken for further processing. Next, compute the vertical, horizontal, major-diagonal and anti-diagonal differences within blocks (Qv, Qh, Qd and Q-d) by applying (7) to (10).

$$ {Q}_v\ \left(u,v\right)=Q\ \left(u,v\right)\hbox{--} Q\ \left(u,v+1\right) $$
(7)
$$ {Q}_h\ \left(u,v\right)=Q\ \left(u,v\right)\hbox{--} Q\ \left(u+1,v\right) $$
(8)
$$ {Q}_d\ \left(u,v\right)=Q\ \left(u,v\right)\hbox{--} Q\ \left(u+1,v+1\right) $$
(9)
$$ {Q}_{-d}\ \left(u,v\right)=Q\ \left(u+1,v\right)\hbox{--} Q\ \left(u,v+1\right) $$
(10)

where Q(u, v) is the matrix of rounded QDCT coefficients. Similarly, compute the vertical, horizontal, major-diagonal and anti-diagonal differences between the blocks, i.e. the inter-block differences (Rv, Rh, Rd and R-d), using (11) to (14).

$$ {R}_v\ \left(u,v\right)=Q\ \left(u,v\right)\hbox{--} Q\ \left(u,v+8\right) $$
(11)
$$ {R}_h\ \left(u,v\right)=Q\ \left(u,v\right)\hbox{--} Q\ \left(u+8,v\right) $$
(12)
$$ {R}_d\ \left(u,v\right)=Q\ \left(u,v\right)\hbox{--} Q\ \left(u+8,v+8\right) $$
(13)
$$ {R}_{-d}\ \left(u,v\right)=Q\ \left(u+8,v\right)\hbox{--} Q\ \left(u,v+8\right) $$
(14)

The difference values calculated in (7) to (14) are integers spanning a broad range, so rounding and thresholding are applied. A positive integer threshold T is chosen: if a value in any of the difference arrays obtained in (7) to (14) is greater than T or less than -T after rounding, it is replaced with T or -T respectively, using (15).

$$ {A}_{new}=\kern0.5em \left\{\begin{array}{c}-T,\kern0.75em {A}_{old}<-T\\ {}T,\kern0.75em {A}_{old}>T\\ {}{A}_{old}, otherwise.\end{array}\right. $$
(15)
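The differences of (7) to (14) and the clipping of (15) map directly onto NumPy slicing; the sketch below is our own illustration (not the authors' released code), and cropping all four arrays to a common shape at the borders is our simplification.

```python
import numpy as np

def difference_arrays(Q: np.ndarray, step: int):
    """Directional difference arrays of a rounded QDCT coefficient matrix.

    step=1 gives the intra-block differences of (7)-(10); step=8 gives the
    inter-block differences of (11)-(14). All four arrays are cropped to a
    common shape so they can be stacked later.
    """
    v = Q[:-step, :-step] - Q[:-step, step:]   # (7)/(11): shift along columns
    h = Q[:-step, :-step] - Q[step:, :-step]   # (8)/(12): shift along rows
    d = Q[:-step, :-step] - Q[step:, step:]    # (9)/(13): main diagonal
    ad = Q[step:, :-step] - Q[:-step, step:]   # (10)/(14): anti-diagonal
    return v, h, d, ad

def threshold(A: np.ndarray, T: int = 4) -> np.ndarray:
    """Clip a difference array to the range [-T, T], as in (15)."""
    return np.clip(A, -T, T)
```

With `step=8` the same function yields the inter-block arrays Rv, Rh, Rd and R-d, which is why the eight equations collapse into one routine.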

Finally, calculate the transitional probabilities of the above obtained inter-block and intra-block matrices using the equations given in (16) to (23).

$$ PM{1}_h\left(i,j\right)=\frac{\sum_{u=1}^{m-1}{\sum}_{v=1}^{n-1}\delta \left({Q}_h\left(u,v\right)=i,{Q}_h\left(u+1,v\right)=j\right)}{\sum_{u=1}^{m-1}{\sum}_{v=1}^{n-1}\delta \left({Q}_h\left(u,v\right)=i\right)} $$
(16)
$$ PM{1}_v\left(i,j\right)=\frac{\sum_{u=1}^{m-1}{\sum}_{v=1}^{n-1}\delta \left({Q}_h\left(u,v\right)=i,{Q}_h\left(u,v+1\right)=j\right)}{\sum_{u=1}^{m-1}{\sum}_{v=1}^{n-1}\delta \left({Q}_h\left(u,v\right)=i\right)} $$
(17)
$$ PM{2}_h\left(i,j\right)=\frac{\sum_{u=1}^{m-1}{\sum}_{\mathrm{v}=1}^{n-1}\delta \left({Q}_v\left(u,v\right)=i,{Q}_v\left(u+1,v\right)=j\right)}{\sum_{u=1}^{m-1}{\sum}_{v=1}^{n-1}\delta \left({Q}_v\left(u,v\right)=i\right)} $$
(18)
$$ PM{2}_v\left(i,j\right)=\frac{\sum_{u=1}^{m-1}{\sum}_{v=1}^{n-1}\delta \left({Q}_v\left(u,v\right)=i,{Q}_v\left(u,v+1\right)=j\right)}{\sum \limits_{u=1}^{m-1}\sum \limits_{v=1}^{n-1}\delta \left({Q}_v\left(u,v\right)=i\right)} $$
(19)
$$ PM{3}_d\left(i,j\right)=\frac{\sum_{u=1}^{m-16}{\sum}_{v=1}^{n-16}\delta \left({R}_d\left(u,v\right)=i,{R}_d\left(u+8,v+8\right)=j\right)}{\sum_{u=1}^{m-16}{\sum}_{v=1}^{n-16}\delta \left({R}_d\left(u,v\right)=i\right)} $$
(20)
$$ PM{3}_{-d}\left(i,j\right)=\frac{\sum_{u=1}^{m-16}{\sum}_{v=1}^{n-16}\delta \left({R}_{-d}\left(u+8,v\right)=i,{R}_{-d}\left(u,v+8\right)=j\right)}{\sum_{u=1}^{m-16}{\sum}_{v=1}^{n-16}\delta \left({R}_{-d}\left(u+8,v\right)=i\right)} $$
(21)
$$ PM{3}_h\left(i,j\right)=\frac{\sum_{u=1}^{m-8}{\sum}_{v=1}^{n-8}\delta \left({R}_v\left(u,v\right)=i,{R}_v\left(u+8,v\right)=j\right)}{\sum_{u=1}^{m-8}{\sum}_{v=1}^{n-8}\delta \left({R}_v\left(u,v\right)=i\right)} $$
(22)
$$ P\mathrm{M}{3}_v\left(i,j\right)=\frac{\sum_{u=1}^{m-8}{\sum}_{v=1}^{n-8}\delta \left({R}_h\left(u,v\right)=i,{R}_h\left(u,v+8\right)=j\right)}{\sum_{u=1}^{m-8}{\sum}_{v=1}^{n-8}\delta \left({R}_h\left(u,v\right)=i\right)} $$
(23)

where i, j ∈ {−T, −T + 1, …, 0, …, T − 1, T}, and m and n denote the number of rows and columns, respectively, of the input image. δ(·) = 1 only if all of its arguments hold; otherwise δ(·) = 0. The final feature vector has (2T + 1) × (2T + 1) × 8 dimensions. In our experiments, T is set to 4, so a 648-dimensional feature vector is obtained. The experiments were also run with thresholds 2, 3 and 5: at thresholds 2 and 3 the accuracy decreases, while for thresholds greater than 4 there is no significant change in accuracy but the computational expense increases. Hence, T = 4 is chosen for further experimental work.
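All of (16) to (23) share one pattern: count how often value j follows value i at a fixed offset, normalized by the occurrences of i. A hedged NumPy sketch of that shared pattern (our own illustration; the function signature and the cropping convention are assumptions):

```python
import numpy as np

def transition_matrix(D: np.ndarray, du: int, dv: int, T: int = 4) -> np.ndarray:
    """Markov transition-probability matrix of a thresholded difference array.

    Estimates P(D[u+du, v+dv] = j | D[u, v] = i) for i, j in [-T, T],
    the pattern shared by equations (16)-(23). Returns a
    (2T+1) x (2T+1) matrix whose non-empty rows sum to 1.
    """
    src = D[:D.shape[0] - du, :D.shape[1] - dv]  # current states
    dst = D[du:, dv:]                            # states at the given offset
    P = np.zeros((2 * T + 1, 2 * T + 1))
    for i in range(-T, T + 1):
        mask = (src == i)
        count = mask.sum()
        if count == 0:
            continue  # value i never occurs; leave the row at zero
        for j in range(-T, T + 1):
            P[i + T, j + T] = np.logical_and(mask, dst == j).sum() / count
    return P
```

Calling this for the eight direction/offset combinations and flattening the resulting matrices yields the (2T + 1)² × 8 = 648-dimensional feature vector for T = 4.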

2.4 Classification

The features are labeled as forged or authentic and are fed to classifiers fused in various combinations. Decision-level fusion is performed on the outputs of the classifiers, as shown in Fig. 2. The majority voting algorithm is used, in which the decisions of the classifiers are kept as-is; no probabilistic modeling of the outputs is performed, which makes combining the classifiers in this manner simple and inexpensive [5]. In decision-level fusion, each classifier makes its decision independently. If there are N classifiers for a two-class classification problem, the ensemble output is accurate if at least ⌊N/2⌋ + 1 classifiers select the right class. Assuming each classifier has a probability Pi of making a right decision, the probability that the ensemble gives a right decision follows a binomial distribution. For the experimental work, 80% of the data is used for training and the remaining 20% for testing. The pseudo-code for the ensemble classification is shown in Algorithm 1.
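A minimal sketch of the majority-voting rule described above (illustrative only; this is not the paper's exact pseudo-code, and the tie-breaking choice for even N is our assumption):

```python
import numpy as np

def majority_vote(predictions: np.ndarray) -> np.ndarray:
    """Decision-level fusion of binary classifier outputs by majority voting.

    `predictions` has shape (N, S): N classifiers, S samples, labels in
    {0, 1} (authentic / forged). A sample is labelled forged when at
    least floor(N/2) + 1 classifiers say so; with an even N, ties
    therefore default to authentic (an assumption of this sketch).
    """
    n_classifiers = predictions.shape[0]
    votes = predictions.sum(axis=0)          # forged votes per sample
    return (votes >= n_classifiers // 2 + 1).astype(int)

# Three classifiers, four samples: the fused label follows the majority.
preds = np.array([[1, 0, 1, 0],
                  [1, 1, 0, 0],
                  [0, 1, 1, 0]])
print(majority_vote(preds))  # [1 1 1 0]
```

Keeping the raw 0/1 decisions, rather than calibrated probabilities, is what makes this fusion cheap compared with probabilistic combination schemes.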

Algorithm 1

Majority voting based Ensemble method.

Fig. 2

Fusion of classifiers

3 Experimental outcomes

The experiments were performed on a Windows 10 64-bit personal computer with 16 GB of RAM and an Intel(R) Core i7 processor. The code is written in Python, and the CUDA API is used to speed up feature extraction. The details of the datasets and the various experiments carried out to validate the accuracy of the proposed method are given below:

3.1 Datasets used

Three freely available standard datasets for image tampering detection have been used to assess the proposed approach: the Copy-Move Forgery Detection (CoMoFoD) dataset [34], and the CASIA v1.0 and CASIA v2.0 datasets [7]. The two CASIA datasets are provided by the Chinese Academy of Sciences. Several manipulations are performed on the images, viz. rotation, translation, scaling, distortion and various combinations of such geometrical operations. Various post-processing methods, such as blurring, noise addition, color reduction and JPEG compression, are also applied to the images of these datasets. The particulars of the datasets are given in Table 1.

Table 1 Details of datasets used for evaluation

3.2 Performance metrics

The performance of a detection method is established by how accurately it classifies images into the authentic and manipulated classes. Different performance metrics are calculated to validate the proposed approach; all are described below:

Accuracy: It is the ratio of correctly identified images to the total number of identified images. It is calculated using (24).

$$ Accuracy=\frac{TP+ TN}{TP+ FP+ TN+ FN}\times 100 $$
(24)

where TP is True Positives, TN is True Negatives, FP is False Positives and FN is False Negatives.

Precision: Precision is the number of image feature vectors correctly categorized as forged divided by the total number of image feature vectors labelled as forged, as given by (25).

$$ Precision=\frac{TP}{TP+ FP} $$
(25)

Root Mean Squared Error (RMSE): To identify the classifier ensemble with the lowest error rate, the RMSE is calculated as shown in (26). For binary classification problems, RMSE sheds light on how far the classification model is from giving the correct answer.

$$ RMSE=\sqrt{\sum \limits_{i=1}^n\frac{{\left({\hat{\mathrm{y}}}_i-{y}_i\right)}^2}{n}} $$
(26)

where \( {\hat{y}}_i \) is the predicted outcome, yi is the actual outcome and n is the total number of images tested using a particular classifier.

True Positive Rate (TPR): TPR reflects the classification model's ability to detect image features belonging to the forged class; hence, the higher the TPR, the better the classification model. It is calculated using (27).

$$ TPR=\frac{TP}{TP+ FN} $$
(27)

False Rejection Rate (FRR): FRR determines the frequency at which the classification model mistakenly classifies a forged image as authentic. It is calculated using (28).

$$ FRR=\frac{FN}{FN+ TP} $$
(28)

F1-score: It is calculated using (29); its highest possible value is 1, which indicates a perfect classification model. Unlike accuracy, it reflects the balance between recall and precision by taking into account both false negatives and false positives.

$$ F1- score=\frac{2 TP}{2 TP+ FP+ FN} $$
(29)

Area Under the Curve (AUC): AUC measures the performance of a classification model across numerous threshold settings. The Receiver Operating Characteristic (ROC) curve is obtained by plotting the TPR against the false positive rate (FPR) at various thresholds; the area under the ROC curve thus quantifies how correctly the model classifies forged images as forged and original images as original.
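The metrics in (24), (25), (27), (28) and (29) follow directly from the four confusion-matrix counts; a small illustrative Python helper (the function name and dictionary layout are our own):

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Metrics of Section 3.2 from the entries of a binary confusion matrix."""
    return {
        "accuracy": 100.0 * (tp + tn) / (tp + tn + fp + fn),  # (24), in %
        "precision": tp / (tp + fp),                          # (25)
        "tpr": tp / (tp + fn),                                # (27)
        "frr": fn / (fn + tp),                                # (28)
        "f1": 2 * tp / (2 * tp + fp + fn),                    # (29)
    }

# e.g. 90 forged images caught, 10 missed; 85 authentic kept, 15 flagged:
m = classification_metrics(tp=90, tn=85, fp=15, fn=10)
print(round(m["accuracy"], 1))  # 87.5
```

Note that TPR and FRR are complementary (TPR + FRR = 1), which the helper makes easy to verify.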

3.3 Classification

Various experiments were performed on the three benchmark datasets. Firstly, individual classifier models, viz. k-Nearest Neighbor (k-NN), linear Support Vector Machine (SVM), Decision Tree, Multi-Layer Perceptron (MLP) and Random Forest, were constructed and tested on features from the above-mentioned datasets; the results are shown in Table 2. The metric values from this experiment serve to highlight the improvements achieved by the proposed ensemble classifiers. For k-NN, 5 nearest neighbors are considered with the Euclidean distance metric. For the linear SVM, a linear kernel function is used with the kernel scale set to 1. For the Decision Tree model, the maximum number of splits is 100 and Gini's diversity index is the split criterion. The MLP is constructed with 20 hidden neurons, a sigmoid activation function in the hidden layer and a softmax function at the output layer. The Random Forest classifier uses 50 trees. To further test the features extracted from these datasets, four ensemble combinations of the classifiers mentioned above are constructed; their details are given in Table 3. Ensemble classifiers are beneficial as they yield lower error and less over-fitting. The performance metrics discussed in the previous section are used for evaluation. The experimental results on the CASIA v2, CASIA v1 and CoMoFoD datasets are given in Tables 4, 5 and 6, respectively. Accuracy, RMSE and AUC for the different datasets are shown graphically in Figs. 3, 4 and 5, respectively. The results show that the ensemble classifiers outperform the individual classifiers on all three benchmark datasets.

Table 2 Performance of individual classifier on benchmark datasets
Table 3 Different fusion combinations used for experiments
Table 4 Results of decision fusion on CASIA V2 Dataset
Table 5 Results of decision fusion on CASIA V1 dataset
Table 6 Results of decision fusion on CoMoFoD dataset
Fig. 3

Accuracy using different ensemble method

Fig. 4

RMSE using different ensemble method

Fig. 5

Area Under the Curve using different ensemble method

3.4 Evaluation in comparison with state-of-art approaches

The results obtained from the experiments are compared and analyzed against state-of-the-art methods on the various datasets. The method proposed in this paper outperforms [16, 19] in terms of accuracy on the CASIA v1.0 dataset, as shown in Table 7: the method in [16] processed color images and [19] used the chrominance component of the YCbCr model, whereas our method uses all three color channels of an RGB image. The comparison with previous algorithms on CASIA v2.0 is shown in Table 8. The results obtained on the CoMoFoD dataset are also compared with state-of-the-art methods, as shown in Table 9. In [8], the statistical properties of the AC coefficients of a JPEG-compressed image, together with the standard deviation of the non-zero DCT coefficients, give an accuracy of 98.0%. In [25], a histogram of oriented Gabor magnitudes is used to extract features, and the compact features are fed to an SVM for classification.

Table 7 Comparison of accuracy on CASIA version 1.0 dataset
Table 8 Comparison of accuracy on CASIA version 2.0 dataset
Table 9 Comparison of accuracy on CoMoFoD

4 Conclusion

In most detection algorithms proposed in recent times, the colored input image is converted to a grayscale image before any processing, so the color statistics of the image are disregarded. In the suggested method, a digital color image is processed as a quaternion, taking the three color channels as the imaginary components of the quaternion. The detection rate is improved by using the property of quaternions to eliminate redundancy and to represent the digital image in its entirety as a vector field. An algorithm based on transition probabilities over the quaternion DCT coefficients of the image is put forward in this article. The crux of the article is the passive detection of forged color images by extending transition-probability features to the frequency domain, unveiling the dependencies among adjoining pixels wherever there is even a tiny change in the pixels due to malicious tampering. The obtained feature set is fed to an ensemble of classifiers, and majority voting is used to obtain the final classification output. The performance of the approach is presented in terms of seven different performance metrics. The experiments are performed on three benchmark datasets, viz. the CoMoFoD, CASIA v1.0 and CASIA v2.0 datasets. The proposed approach beats existing approaches on the CoMoFoD dataset and yields competitive outcomes on the CASIA v2.0 dataset.

In the future, the Markov-based features will be tested along with deep features extracted from different pre-trained architectures in order to improve the detection accuracy for forged images. Through fusion, different kinds of image characteristics can be included in the analysis, which further helps increase the efficiency of the approach.