Abstract
Reversible data hiding based on prediction methods is a data hiding technique in which secret bits can be efficiently hidden in cover images. In this paper, we propose a reversible data hiding method based on multiple prediction methods and local complexity. At each level of the data hiding algorithm, we evaluate four prediction methods to decide which one should be used to embed secret messages, and we propose two tactics for this evaluation and selection. When a prediction method is chosen to perform the shifting and embedding process, a threshold based on local complexity determines which pixels join the process: a pixel participates only if its local complexity is smaller than the threshold. Consequently, more pixels avoid pixel shifting, which results in stego-images with lower distortion. The experimental results show that our embedding capacity and image quality are superior to those of other approaches.
1 Introduction
Reversible data hiding is an application of data hiding that can not only embed secret messages into images, but also restore the original images after the secret messages have been extracted [5, 24]. Thus far, reversible data hiding has been used in applications such as military maps, medical images [1, 20], and legal texts.
Several reversible data hiding schemes have been proposed [2, 8–18, 21–23, 25–27, 29–34]. Reversible data hiding (RDH) techniques can be divided into three domains: the spatial domain [1, 2, 8–18, 21–23, 25–27, 29–34], the frequency domain [3], and the compression domain [28]. In the spatial domain, RDH techniques fall into three categories: DE-based (difference-expansion-based) [2, 14, 26], HS-based (histogram-shifting-based) [22, 32], and pixel-value-ordering-based [29]. Building on these studies, many works have used a variety of techniques to enhance the capacity and/or quality of difference-expansion-based or histogram-based reversible data hiding methods. Tian (2003) presented a reversible data hiding method based on difference expansion [26]. He partitioned an image into pairs of pixel values, selected expandable difference numbers for difference expansion, and embedded a payload that included an authentication hash. In 2004, Alattar improved upon Tian’s method by utilizing a quad-based algorithm to enlarge the data embedding capacity [2]. Subsequently, some authors used prediction and selection methods to enhance the difference-expansion-based approach. For example, Lee et al. (2010) predicted a pixel from its left-hand and upper adjacent pixels [13], and Li et al. (2011) proposed a method of selecting pixels according to the characteristics of their local neighborhoods [16]. In a different direction, Ni et al. (2006) presented a reversible data hiding method based on the histogram [22]. Their method guaranteed that the change in each pixel of the stego-image remained within ±1, which ensured that the peak signal-to-noise ratio (PSNR) of the stego-image would be at least 48 dB.
However, because Ni et al.’s method (2006) used the pixel values of the original image to create the histogram, the peak points of the histogram were not high enough. Therefore, many researchers applied prediction methods to improve the shape of the histogram, for example, Lin et al.’s difference-image scheme [18], Tsai et al.’s predictive coding scheme [27], and Yang and Tsai’s interleaving prediction scheme [31]. Li et al. surveyed many such methods and provided a general framework for histogram-based reversible data hiding covering both data embedding and data extraction [15]; however, the testability of this framework remained a minor problem.
Resisting steganalysis is also an important concern for information hiding algorithms. Several detection approaches have been proposed, such as the spatial rich model (SRM) [7] and maxSRMd2 [4]. Further methods, such as syndrome-trellis codes (STCs) and matrix embedding, have been proposed to minimize the embedding distortion and evade the detection of hidden information [6, 19].
In this paper, we propose a reversible data hiding scheme based on four prediction methods and local complexity. Secret data is embedded into the cover image level by level [18]. At each level of the data hiding algorithm, an interleaving grouping approach divides the cover image into four groups [31]. Next, one of the four prediction methods is chosen and applied to each group within a level. In processing the pixels of each group, the predicted error is computed between the current pixel value and its predicted value. Then, the local complexity, i.e., the pixel variance, is analyzed to determine whether or not the predicted error joins the process of pixel shifting and data embedding. After pixel shifting and data embedding have been executed within one level, the difference between a cover pixel and its stego-pixel remains within ±1.
The rest of this paper is organized as follows. Section 2 introduces the interleaving prediction method proposed by Yang and Tsai in 2010 [31], the predicted-error expansion method proposed by Lee et al. in 2010 [13], and the general framework proposed by Li et al. in 2013 [15]. In Section 3, our reversible data hiding scheme is presented. The experimental results of our scheme are demonstrated in Section 4. Finally, the conclusion is given in Section 5.
2 Related work
2.1 Yang-Tsai’s interleaving prediction method
Yang and Tsai proposed a reversible data hiding scheme based on interleaving prediction and histogram shifting [31]. The interleaving prediction method is used to raise the height of the peak point in the histogram. The idea resembles a chessboard, as shown in Fig. 1, and contains two stages. In the first stage, the white pixels are processed and predicted from their neighboring black pixels. Then, in the second stage, the black pixels are processed and predicted from their neighboring white pixels.
The detailed steps of the first stage are as follows. Let P i,j be a white pixel, where (i, j) is its location, and let D i,j be the predicted error between P i,j and its predicted value. All predicted errors are collected to generate a histogram HS(D). Then, the following steps are executed, in which each D i,j is changed into a new value D’ i,j .
-
Step 1:
Find two pairs of peak and zero points (Peak 1, Zero 1) and (Peak 2, Zero 2) from the histogram HS(D), such that Zero 2<Peak 2<Peak 1<Zero 1.
-
Step 2:
Shift the value of the predicted error D i,j by 1 in the following cases.
-
Case A:
Shift all values in the range [Zero 2 +1, Peak 2 -1] to the left by 1 unit. That is, D’ i,j is set to D i,j -1 when D i,j ∈ [Zero 2 +1, Peak 2 -1].
-
Case B:
Shift all values in the range [Peak 1 +1, Zero 1 -1] to the right by 1 unit. That is, D’ i,j is set to D i,j +1 when D i,j ∈ [Peak 1 +1, Zero 1 -1].
-
Step 3:
Conceal a secret bit when the predicted error D i,j is equal to Peak 1 or Peak 2, according to the following two cases.
-
Case A:
If the to-be-embedded bit is 0, the predicted error D i,j is unchanged; that is, D’ i,j is set to D i,j .
-
Case B:
If the to-be-embedded bit is 1,
$$ D'_{i,j}=\begin{cases}D_{i,j}+1, & \text{if } D_{i,j}=Peak_1\\ D_{i,j}-1, & \text{if } D_{i,j}=Peak_2\end{cases} $$
-
Step 4
Transform all the predicted errors D’ i,j into pixel values by running the inverse interleaving prediction. Then, output the stego-image.
The output stego-image is used as the input image of the second stage, which processes all black pixels. After the second stage is finished, another two pairs of peak and zero points (Peak 1, Zero 1) and (Peak 2, Zero 2) are created and the final stego-image is obtained.
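The two-peak shifting and embedding of Steps 1-3 can be sketched as follows. This is an illustrative Python fragment (the function name is ours, not from the paper) operating on a one-dimensional array of predicted errors; it assumes Zero 2 < Peak 2 < Peak 1 < Zero 1 as in Step 1.

```python
import numpy as np

def shift_and_embed(errors, bits, peak1, zero1, peak2, zero2):
    """Shift predicted errors and embed bits at the two peak points.

    Assumes zero2 < peak2 < peak1 < zero1 (Step 1 of Yang-Tsai).
    `errors` is a 1-D array of predicted errors; `bits` is a sequence of 0/1.
    Returns the modified errors.
    """
    out = errors.copy()
    it = iter(bits)
    for k, d in enumerate(errors):
        if peak1 < d < zero1:      # Step 2, Case B: shift right by 1
            out[k] = d + 1
        elif zero2 < d < peak2:    # Step 2, Case A: shift left by 1
            out[k] = d - 1
        elif d == peak1:           # Step 3: embed a bit at Peak 1
            out[k] = d + next(it)
        elif d == peak2:           # Step 3: embed a bit at Peak 2
            out[k] = d - next(it)
    return out
```

Errors strictly between the two peaks are left untouched, so decoding can unambiguously separate embedded and shifted values.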
2.2 Lee et al.’s prediction-error expansion method
Lee et al.’s method belongs to the difference expansion category [13]. It takes the concept of the predicted error and a threshold TH into consideration. For a current pixel P i,j , its left and upper adjacent pixels are used to create a predicted value \( {\widehat{P}}_{i,j} \), and the predicted error between P i,j and \( {\widehat{P}}_{i,j} \) is calculated. To preserve reversibility, the first row and first column of the image are not used for data hiding, and the image is processed from left to right and from top to bottom. The detailed embedding algorithm of Lee et al.’s approach is described below.
Assume that the left and up adjacent pixels of the current pixel P i,j are (P 1, P 2). The predicted value \( {\widehat{P}}_{i,j} \) is defined as Eq. (1).
The predicted error is estimated by d = \( \left|{\widehat{P}}_{i,j}-{P}_{i,j}\right| \). A bit s is embedded into pixel P i,j according to the following cases, where P’ i,j is the resulting stego-pixel.
-
Case A:
If d ≤ TH, bit s can be hidden by the following function:
$$ P'_{i,j}=\begin{cases}\widehat{P}_{i,j}+2\times d+s, & \text{if } \widehat{P}_{i,j}\le P_{i,j}\\ \widehat{P}_{i,j}-2\times d+s, & \text{otherwise}\end{cases} $$(2) -
Case B:
If d > TH, the current pixel cannot tolerate embedding a secret bit, so the pixel is processed by the following function:
$$ P'_{i,j}=\begin{cases}P_{i,j}+(TH+1), & \text{if } \widehat{P}_{i,j}\le P_{i,j}\\ P_{i,j}-TH, & \text{otherwise}\end{cases} $$(3)
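Eqs. (2) and (3) can be condensed into a small sketch; the function name is ours, and overflow/underflow handling is omitted for brevity.

```python
def pee_embed(p, p_hat, s, TH):
    """Prediction-error expansion embedding in the style of Lee et al.'s
    Eqs. (2)-(3). p is the pixel, p_hat its predicted value, s the bit,
    TH the threshold. Sketch only: no overflow handling."""
    d = abs(p_hat - p)
    if d <= TH:                    # Case A: expandable, embed bit s
        return p_hat + 2 * d + s if p_hat <= p else p_hat - 2 * d + s
    # Case B: not expandable, shift only (no bit embedded)
    return p + (TH + 1) if p_hat <= p else p - TH
```

The asymmetric shift in Case B (TH+1 versus TH) keeps the shifted values disjoint from the expanded ones, which is what makes extraction unambiguous.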
2.3 Li et al.’s general framework method
Li et al. proposed a general framework for constructing histogram-based reversible data hiding in 2013 [15]. With this framework, one can obtain a reversible data hiding algorithm simply by designing the so-called shifting and embedding functions. One of their embedding algorithms is described below.
For a 3 × 3 block x = (x 1, x 2,…, x 9) shown in Fig. 2, they took the following linear predictor with non-uniform weights to predict x 5.
The prediction-error is denoted as e 5 = \( {x}_5-{\widehat{x}}_5 \).
Utilizing the smooth pixels for reversible data embedding while ignoring the noisy ones significantly reduces the embedding distortion. They therefore took the following function
to measure the local complexity of pixel x 5 and used an integer-valued parameter s to select smooth pixels.
For an integer threshold t > 0, take \( {t}_l=\left\lfloor \frac{t}{2}\right\rfloor \) and \( {t}_r=\left\lceil \frac{t}{2}\right\rceil \). They then defined as following:
-
(1)
S = {x ∈ Z^9 : −t l ≤ e 5 < t r , C(x) < s} and T = Z^9 − S.
-
(2)
For x ∈ T, \( g(x)=\begin{cases}(x_1,\cdots,x_4,x_5+t_r,x_6,\cdots,x_9), & \text{if } e_5\ge t_r \text{ and } C(x)<s\\ (x_1,\cdots,x_4,x_5-t_l,x_6,\cdots,x_9), & \text{if } e_5<-t_l \text{ and } C(x)<s\\ x, & \text{if } C(x)\ge s\end{cases} \)
-
(3)
For x ∈ S and m ∈ {0,1}, fm(x) = (x 1,…, x 4, x 5 + ⌊e 5⌋ + m, x 6,…, x 9).
In the above definitions, S is the set of blocks each of which is used to embed a secret bit m, and T is the set of blocks each of which undergoes a shifting operation or is left unchanged.
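The classification into S and T and the corresponding mappings can be sketched as below. This is an illustrative fragment (names are ours); we assume the left shift uses t_l, which keeps the shifted range contiguous with the embedded range and hence keeps the mapping invertible.

```python
def classify_and_map(x, e5, C, t, s, m=None):
    """Apply the shifting function g or embedding function f_m to a block,
    following Li et al.'s framework. x is a list of 9 pixels (x[4] is the
    center), e5 its prediction error, C the local complexity C(x), t the
    integer threshold, s the complexity threshold, m the bit to embed
    (used only when the block falls in S). Illustrative sketch; the
    predictor and C(x) are computed elsewhere."""
    tl, tr = t // 2, -(-t // 2)        # floor(t/2) and ceil(t/2)
    y = list(x)
    if -tl <= e5 < tr and C < s:       # block in S: embed the bit m
        y[4] = x[4] + e5 + m
    elif e5 >= tr and C < s:           # block in T: shift right by t_r
        y[4] = x[4] + tr
    elif e5 < -tl and C < s:           # block in T: shift left by t_l
        y[4] = x[4] - tl
    return y                           # C(x) >= s: block left unchanged
```
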
2.4 Our proposed methods
Our method takes four different prediction methods and their corresponding variance calculations into consideration. First, the cover image is divided into four groups -- Group1, Group2, Group3, and Group4 -- using an interleaving grouping method. Figure 3 shows a grouping result, wherein each cell indicates a pixel and the number in the cell indicates its group number. Each stage of a level then processes one of the four groups in turn. At each stage, the four prediction methods are evaluated, and the one with the largest capacity or the highest efficiency ratio is selected. Using the chosen prediction method, every pixel's predicted error and variance value are computed from its neighboring pixels. A threshold TH, determined from the variance values, decides whether a pixel joins the shifting and embedding process, as described in the following subsections.
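Assuming the common 2 × 2 interleaving tiling (Group1 at even row and even column; the labeling in Fig. 3 may differ), the group number of a pixel follows directly from its coordinate parities:

```python
def group_number(i, j):
    """Interleaving grouping: assign pixel (i, j) to one of four groups by
    row/column parity (assumed 2x2 tiling, Group1 at even row and even
    column). Under this tiling, a Group1 pixel's horizontal neighbors fall
    in Group2, its vertical neighbors in Group3, and its diagonal
    neighbors in Group4."""
    return 1 + (i % 2) * 2 + (j % 2)
```
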
2.5 Data hiding algorithm
We take the first stage, which processes Group1, to describe our data hiding method. As shown in the gray part of Fig. 3, a Group1 pixel, say P i,j , is surrounded by eight neighboring pixels that belong to the other groups. The four prediction errors and four variance values of the four prediction methods are calculated as follows.
-
(1)
Chessboard prediction: For each pixel P i,j in Group1, its neighboring Group2 and Group3 pixels are chosen to predict P i,j . The predicted value P̂ i,j is the averaged gray value of the chosen pixels. The predicted error D i,j is defined as P i,j – P̂ i,j . Besides, a variance value V i,j is defined as the variance of the chosen pixels.
-
(2)
Edge prediction: For each pixel P i,j in Group1, the difference of its neighboring Group2 pixels and the difference of its neighboring Group3 pixels are calculated. The two pixels with the smaller difference value are chosen to predict P i,j . Then, the predicted error D i,j and the variance value V i,j are calculated.
-
(3)
Squared prediction: For each pixel P i,j in Group1, P i,j is predicted by the averaged gray value of its neighboring Group2, Group3, and Group4 pixels. Then, the predicted error D i,j and the variance value V i,j are calculated.
-
(4)
Max-min-omitted prediction: For each pixel P i,j in Group1, its neighboring Group2 and Group3 pixels excluding the two pixels with the maximal value and the minimal value are chosen to predict P i,j . Then, the predicted error D i,j and the variance value V i,j are calculated.
2.5.1 Algorithm data-embedding-of-the-first-stage
Input: Cover image I, secret message S, embedding cost C, tactic A or tactic B.
Output: Stego Image I’, the couple data (Peak 1, Zero 1) and (Peak 2, Zero 2), a variance threshold TH, prediction method M.
-
Step 1:
For each pixel P i,j in Group1, a predicted error D i,j is calculated by each of the following four prediction methods M 1, M 2, M 3, and M 4:
-
M 1: Chessboard prediction
-
Case A:
If P i,j has only two neighboring pixels P 1 and P 2 belonging to Group2 and Group3, the predicted error D i,j is given as
$$ {D}_{i,j}={P}_{i,j}-\left\lfloor \frac{P_1+{P}_2}{2}\right\rfloor . $$(6) -
Case B:
If P i,j has only three neighboring pixels P 1, P 2 and P 3 belonging to Group2 and Group3, the predicted error D i,j is given as
$$ {D}_{i,j}={P}_{i,j}-\left\lfloor \frac{P_1+{P}_2+{P}_3}{3}\right\rfloor . $$(7) -
Case C:
If P i,j has four neighboring pixels P 1, P 2, P 3, and P 4 belonging to Group2 and Group3, the predicted error D i,j is given as
$$ {D}_{i,j}={P}_{i,j}-\left\lfloor \frac{P_1+{P}_2+{P}_3+{P}_4}{4}\right\rfloor . $$(8)
-
M 2: Edge prediction
-
Case A:
The same as Eq. (6).
-
Case B:
The same as Eq. (7).
-
Case C:
If P i,j has two neighboring Group2 pixels P 1 and P 2 and two neighboring Group3 pixels P 3 and P 4, the predicted error D i,j is given as
If |P 1-P 2| > |P 3-P 4|, \( {D}_{i,j}={P}_{i,j}-\left\lfloor \frac{P_3+{P}_4}{2}\right\rfloor \);
else,
$$ {D}_{i,j}={P}_{i,j}-\left\lfloor \frac{P_1+{P}_2}{2}\right\rfloor . $$(9)
-
M 3: Squared prediction
-
Case A:
If P i,j has only three neighboring pixels P 1, P 2, and P 3, the predicted error D i,j is given as
$$ {D}_{i,j}={P}_{i,j}-\left\lfloor \frac{P_1+{P}_2+{P}_3}{3}\right\rfloor . $$(10) -
Case B:
If P i,j has only five neighboring pixels P 1, P 2, P 3, P 4, and P 5, the predicted error D i,j is given as
$$ {D}_{i,j}={P}_{i,j}-\left\lfloor \frac{P_1+{P}_2+{P}_3+{P}_4+{P}_5}{5}\right\rfloor . $$(11) -
Case C:
If P i,j has eight neighboring pixels P 1, P 2, P 3, P 4, P 5, P 6, P 7, and P 8, the predicted error D i,j is given as
$$ {D}_{i,j}={P}_{i,j}-\left\lfloor \frac{P_1+{P}_2+{P}_3+{P}_4+{P}_5+{P}_6+{P}_7+{P}_8}{8}\right\rfloor . $$(12)
-
M 4: Max-min-omitted prediction
-
Step 2:
For each prediction method M i , a histogram is created from its predicted errors, and two pairs of peak and zero points (Peak 1i , Zero 1i ) and (Peak 2i , Zero 2i ) are found, with Zero 2i <Peak 2i <Peak 1i <Zero 1i , i = 1, 2, 3, 4.
-
Step 3:
Let the numbers of pixels falling at Peak 1i , at Peak 2i , in [Peak 1i +1, Zero 1i ], and in [Zero 2i , Peak 2i -1] be J 1i , J 2i , I 1i , and I 2i , respectively. Select a prediction method M according to the following two cases.
-
Case A:
If tactic A is input, the selected prediction method M is defined as
$$ M= \arg \underset{M_i,i=1,\ldots 4}{ \max}\left\{{J}_{1i}+{J}_{2i}\right\}. $$(14) -
Case B:
If tactic B is input, the selected prediction method M is the one with the maximal efficiency ratio, where the efficiency ratio R i is defined as
$$ {R}_i=\frac{J_{1i}+{J}_{2i}}{I_{1i}+{I}_{2i}},i=1,\ 2,\ 3,\ 4. $$(15)
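The two tactics of Step 3 amount to an argmax over per-method statistics; a sketch with our own function name:

```python
def select_method(stats, tactic):
    """Select a prediction method from per-method statistics.

    stats: list of tuples (J1, J2, I1, I2), one per method M1..M4, where
    J1/J2 count pixels at the two peak points and I1/I2 count pixels in
    the two to-be-shifted ranges (Step 3).
    Tactic 'A' maximizes the capacity J1 + J2 (Eq. (14)); tactic 'B'
    maximizes the efficiency ratio (J1 + J2) / (I1 + I2) (Eq. (15)).
    Returns the 0-based index of the chosen method.
    """
    if tactic == 'A':
        scores = [J1 + J2 for (J1, J2, I1, I2) in stats]
    else:
        scores = [(J1 + J2) / (I1 + I2) for (J1, J2, I1, I2) in stats]
    return max(range(len(scores)), key=scores.__getitem__)
```

Tactic A favors raw capacity, while tactic B favors peak pixels relative to the pixels that must be shifted, i.e., capacity per unit of shifting distortion.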
-
Step 4:
The selected method M is used to process Group1 and embed the secret data. For method M, the predicted errors D i,j and the two pairs of peak and zero points (Peak 1, Zero 1) and (Peak 2 , Zero 2), with Zero 2 <Peak 2 <Peak 1 <Zero 1, have already been calculated. Additionally, for each pixel P i,j in Group1, a variance value V i,j is calculated by the selected prediction method. The variance calculations of the four prediction methods are similar; only the chessboard prediction's calculation is shown below:
-
Case A:
If P i,j has only two neighboring pixels P 1 and P 2 belonging to Group2 and Group3, the variance value V i,j is given as
$$ {V}_{i,j}=2\cdot \left[{\left({P}_1-\left\lfloor \frac{P_1+{P}_2}{2}\right\rfloor \right)}^2+{\left({P}_2-\left\lfloor \frac{P_1+{P}_2}{2}\right\rfloor \right)}^2\right]. $$(16) -
Case B:
If P i,j has only three neighboring pixels P 1, P 2, and P 3 belonging to Group2 and Group3, the variance value V i,j is given as
$$ {V}_{i,j}=\left\lfloor \frac{4}{3}\cdot \left[{\left({P}_1-\left\lfloor \frac{P_1+{P}_2+{P}_3}{3}\right\rfloor \right)}^2+{\left({P}_2-\left\lfloor \frac{P_1+{P}_2+{P}_3}{3}\right\rfloor \right)}^2+{\left({P}_3-\left\lfloor \frac{P_1+{P}_2+{P}_3}{3}\right\rfloor \right)}^2\right]\right\rfloor . $$(17) -
Case C:
If P i,j has four neighboring pixels P 1, P 2, P 3, and P 4 belonging to Group2 and Group3, the variance value V i,j is given as
$$ {V}_{i,j}=\kern0.5em {\left({P}_1-\kern0.5em \left\lfloor \frac{P_1+{P}_2+{P}_3+{P}_4}{4}\right\rfloor \right)}^2+\kern0.5em {\left({P}_2-\kern0.5em \left\lfloor \frac{P_1+{P}_2+{P}_3+{P}_4}{4}\right\rfloor \right)}^2+\kern0.5em {\left({P}_3-\kern0.5em \left\lfloor \frac{P_1+{P}_2+{P}_3+{P}_4}{4}\right\rfloor \right)}^2+\kern0.5em {\left({P}_4-\kern0.5em \left\lfloor \frac{P_1+{P}_2+{P}_3+{P}_4}{4}\right\rfloor \right)}^2. $$(18)
-
Step 5:
Two arrays V peak and V shift are created, whose rth elements are defined as
$$ V_{peak}[r]=\left|\left\{P_{i,j} \mid P_{i,j}\in Group1,\ D_{i,j}=Peak_1\ \text{or}\ Peak_2,\ V_{i,j}=r\right\}\right| $$(19)$$ V_{shift}[r]=\left|\left\{P_{i,j} \mid P_{i,j}\in Group1,\ D_{i,j}\in[Peak_1+1,Zero_1]\ \text{or}\ [Zero_2,Peak_2-1],\ V_{i,j}=r\right\}\right| $$(20)where r is a non-negative integer and |·| denotes the size of the set.
-
Step 6:
Using the embedding cost C, TH is the maximal non-negative value satisfying
$$ \frac{V_{shift}\left[r\right]}{V_{peak}\left[r\right]}<C,0\le r\le TH. $$(21) -
Step 7:
Use TH to decide whether a Group1 pixel P i,j joins the shifting and embedding process by setting the following flag:
$$ Flag_{i,j}=\begin{cases}1, & \text{if } V_{i,j}\le TH\\ 0, & \text{if } V_{i,j}>TH\end{cases} $$If Flag i,j = 0, the pixel P i,j in Group1 does not join the shifting and embedding process; if Flag i,j = 1, it does.
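Steps 6 and 7 can be sketched as a scan over the variance bins; this fragment (names ours) returns None when no r satisfies Eq. (21), the case discussed after the algorithm. We treat an empty peak bin as failing the condition, which is our assumption.

```python
def find_threshold(v_shift, v_peak, C):
    """Find the variance threshold TH: the largest value such that
    V_shift[r] / V_peak[r] < C holds for EVERY r in [0, TH] (Eq. (21)).
    Returns None when even r = 0 fails (no embedding at this stage).
    Assumption: a bin with V_peak[r] == 0 fails the condition."""
    TH = None
    for r in range(len(v_peak)):
        if v_peak[r] > 0 and v_shift[r] / v_peak[r] < C:
            TH = r
        else:
            break          # the condition must hold for all r <= TH
    return TH
```

A pixel's flag is then simply `1 if V <= TH else 0`, matching the flag definition above.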
-
Step 8:
Run the following cases for each predicted error D i,j and its variance V i,j .
-
Case A:
If Flag i,j = 1 and the predicted error D i,j is equal to Peak 1 or Peak 2 , fetch a secret bit from S and apply one of the following two cases:
-
Case A1:
If to-be-embedded-bit is 0, D’ i,j is set to D i,j .
-
Case A2:
If to-be-embedded-bit is 1,
$$ \begin{cases}\text{If } D_{i,j}=Peak_1, & D'_{i,j}=D_{i,j}+1\\ \text{If } D_{i,j}=Peak_2, & D'_{i,j}=D_{i,j}-1.\end{cases} $$
-
Case B:
If Flag i,j = 1 and the predicted error D i,j falls into the range of [Zero 2 + 1, Peak 2 -1] or [Peak 1 + 1, Zero 1 -1], shift predicted error D i,j by one unit as follows:
$$ \begin{cases}\text{If } D_{i,j}\in[Peak_1+1,Zero_1-1], & D'_{i,j}=D_{i,j}+1\\ \text{If } D_{i,j}\in[Zero_2+1,Peak_2-1], & D'_{i,j}=D_{i,j}-1.\end{cases} $$ -
Case C:
If Flag i,j = 0, do nothing, that is, D’ i,j is set to D i,j .
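Per pixel, Step 8 reduces to the following mapping of a predicted error to its stego value; a sketch with our own function name.

```python
def embed_error(D, flag, bits, peak1, zero1, peak2, zero2):
    """Step 8: map one predicted error D to D' given its flag.
    `bits` is an iterator over the secret bitstream; a bit is consumed
    only in Case A. Assumes zero2 < peak2 < peak1 < zero1."""
    if flag == 0:                      # Case C: excluded by the threshold
        return D
    if D in (peak1, peak2):            # Case A: embed one secret bit
        b = next(bits)
        return D + b if D == peak1 else D - b
    if peak1 < D < zero1:              # Case B: shift right
        return D + 1
    if zero2 < D < peak2:              # Case B: shift left
        return D - 1
    return D                           # errors between the peaks stay put
```
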
-
Step 9:
Transform each predicted error D’ i,j into pixel value P’ i,j by the inverse of the prediction method M. Finally, all pixel values P’ i,j form the stego-image I’.
-
Step 10:
Output two pairs (Peak 1, Zero 1) and (Peak 2, Zero 2), a variance threshold TH, and a prediction method M.
The above algorithm, which processes Group1, constitutes the first stage of level-one data hiding. To reduce the time complexity, the second, third, and fourth stages use the same selected method M as the first stage. In Step 6, if no value of r satisfies Eq. (21), TH does not exist; in that case, no secret data is embedded in this stage. To avoid the situation in which no data can be embedded in the following levels, we increase the embedding cost C by a fixed value whenever a new level is processed. Note that the parameters output in Step 10 can be treated as secret data and embedded in the next stage; thus, only the parameters output by the last stage cannot be embedded.
2.6 Data extracting and recovering algorithm
The data extracting and recovering algorithm is similar to the data embedding algorithm. The following shows the detailed steps of the data extracting and recovering algorithm for the last stage.
2.6.1 Algorithm Extracting-and-Recovering-of-the-Last-Stage
Input: Stego image I’, two pairs (Peak 1, Zero 1) and (Peak 2, Zero 2), a variance threshold TH, a prediction method M.
Output: Cover image I, secret message S.
-
Step 1:
For each pixel P’ i,j of stego-image I’, compute its predicted error D’ i,j and variance value V’ i,j by the prediction method M.
-
Step 2:
For each predicted error D’ i,j and its variance value V’ i,j , run the following data extracting and pixel recovering process.
If V’ i,j > TH, do nothing, that is, D i,j = D’ i,j .
If V’ i,j≤ TH, do the following cases.
-
Case A:
If the predicted error D’ i,j is equal to Peak 1 or Peak 2, extract a secret bit 0 and recover the predicted error as D i,j = D’ i,j .
-
Case B:
If D’ i,j = Peak 1 + 1, extract a secret bit 1 and recover the predicted error as D i,j = D’ i,j -1.
If D’ i,j = Peak 2-1, extract a secret bit 1 and recover the predicted error as D i,j = D’ i,j + 1.
-
Case C:
If D’ i,j falls into the range [Zero 2, Peak 2 -2], recover the predicted error as D i,j = D’ i,j + 1.
If D’ i,j falls into the range [Peak 1 + 2, Zero 1], recover the predicted error as D i,j = D’ i,j -1.
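Step 2 of the extraction can be sketched as the inverse mapping; the function (name ours) returns the recovered error together with the extracted bit, or None when the pixel carried no bit.

```python
def extract_error(Dp, V, TH, peak1, zero1, peak2, zero2):
    """Step 2 of extraction: recover (D, bit) from a stego predicted
    error D'. V is the pixel's variance, TH the threshold; bit is None
    when no secret bit was embedded at this pixel."""
    if V > TH:                         # pixel did not join the process
        return Dp, None
    if Dp in (peak1, peak2):           # Case A: bit 0, error unchanged
        return Dp, 0
    if Dp == peak1 + 1:                # Case B: bit 1, un-shift
        return Dp - 1, 1
    if Dp == peak2 - 1:
        return Dp + 1, 1
    if zero2 <= Dp <= peak2 - 2:       # Case C: un-shift the left range
        return Dp + 1, None
    if peak1 + 2 <= Dp <= zero1:       # Case C: un-shift the right range
        return Dp - 1, None
    return Dp, None
```

For instance, an error embedded as Peak 1 with bit 1 becomes Peak 1 + 1 and is recovered here as (Peak 1, 1), inverting Step 8 of the embedding algorithm exactly.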
-
Step 3:
Transform all predicted errors D i,j into pixels P i,j to form the cover image I.
-
Step 4:
Output the cover image I and all extracted message S.
Note that the variance value V’ i,j in the data extracting and recovering process is the same as the variance value V i,j in the corresponding data hiding process. The reason is that both V i,j and V’ i,j are calculated from neighboring pixels, which remain unchanged after the corresponding data hiding process is finished. Therefore, V’ i,j can be used to judge whether D’ i,j joins the data extracting and pixel recovering process.
2.7 Overflow/Underflow
Overflow and underflow occur when the shifting and embedding process causes a pixel’s value to be over 255 or under 0. To cope with this issue, we pre-modify the pixels that are susceptible to overflow or underflow, and use a location map to record this information. This information can be seen as a part of the secret data, given that it is embedded into the cover image.
In our histogram-based reversible data hiding method, each pixel will change by ±1, at most, at every level. Therefore, before embedding secret data into the image at a given level, we pre-transform pixels with values of 0 and 255 into pixels with values of 1 and 254, respectively, to prevent overflow and underflow as follows:
To make those modifications recoverable, a location map is created before the modifications are made as follows:
where k is an index over the recorded pixels. Therefore, in each level, the size of the location map is k bits if there are k pixels with values equal to 0, 1, 254, or 255.
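The pre-modification and location-map construction can be sketched as follows; the exact map encoding (one bit per pixel whose value lies in {0, 1, 254, 255}, with 1 meaning "was modified") is our assumption.

```python
def premodify(pixels):
    """Pre-modify boundary pixels before one embedding level: 0 -> 1 and
    255 -> 254. One location-map bit is recorded for every pixel whose
    value is in {0, 1, 254, 255}: 1 if the pixel was modified (was 0 or
    255), 0 if it was originally 1 or 254. Encoding is an assumption."""
    out, location_map = [], []
    for p in pixels:
        if p in (0, 1, 254, 255):
            location_map.append(1 if p in (0, 255) else 0)
        if p == 0:
            out.append(1)
        elif p == 255:
            out.append(254)
        else:
            out.append(p)
    return out, location_map
```

At extraction time, the map disambiguates a recovered value of 1 (originally 0 or originally 1) and of 254 (originally 255 or originally 254).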
3 Experimental results
In this section, we present the results on embedding capacity and image quality to demonstrate the performance of our proposed scheme. In our experiments, ten gray images of size 512 × 512, depicted in Fig. 4, are used, and the secret message is randomly generated. To estimate the image quality, we apply the peak-signal-to-noise-ratio (PSNR) function defined in Eq. (22). To estimate the embedding capacity, the embedded ratio ER (bpp) is adopted, where ER = total embedded bits / size of cover image. To run the program efficiently, the embedding cost C is only input at the first level; the C value is then increased by 0.5 at each of the following levels. In addition, for each level, the four stages use the same prediction method, which is chosen by the first stage.
where MSE is the mean square error between the cover image and the stego-image.
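The PSNR of Eq. (22) for 8-bit images can be computed as:

```python
import numpy as np

def psnr(cover, stego):
    """PSNR in dB for 8-bit images: PSNR = 10 * log10(255^2 / MSE),
    where MSE is the mean square error between cover and stego image."""
    mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
```

A uniform ±1 change of every pixel gives MSE = 1 and hence PSNR ≈ 48.13 dB, which is the 48 dB floor cited for Ni et al.'s histogram method.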
Tables 1 and 2 show the experimental results using tactics A and B, respectively. Because every level tries to choose a better prediction method, our approach can embed more secret data at every level. Moreover, because the prediction methods have different characteristics, the scheme avoids applying the same prediction method at every level, which would change some regions of pixels too much.
Table 3 shows a comparison between experimental results using the methods proposed by Lin et al. [18], Hsiao et al. [17], Lin and Hsueh [30], Yang and Tsai [31], and our approach. From Table 3, one can see that our approach clearly has more capacity when the PSNR values are similar. We have embedded at least 44 % (≅ 99.81 % − 55.72 %) more capacity than the other methods because our approach chooses the more efficient prediction method at every level.
Table 4 shows a comparison between the methods proposed by Hu et al. [10], Luo et al. [11], Li et al. [16], Hong [21], Li et al. [15], and our proposed scheme. Our scheme demonstrates better image quality than the other methods when the size of the embedded payload is 20,000 bits, because our approach prohibits some prediction errors from entering the pixel shifting process, and because we use four different prediction methods. The threshold strategy (TH) in our proposed scheme efficiently eliminates the distortion caused by pixel shifting, which enables superior PSNR performance compared to previous studies. Figure 5 shows a comparison of PSNR values between our approach and that of Yang-Tsai [31] over various embedded capacities. The average PSNR value of Yang-Tsai's method is 52.52 dB, while that of our tactic A approach is approximately 58.88 dB; tactic B yields approximately 61.26 dB. These improve upon Yang-Tsai's method by 6.36 and 8.74 dB, respectively.
Importantly, some overhead is needed in our approach. Recording a peak point, a zero point, the TH value, and the prediction method M requires 9, 9, 8, and 2 bits, respectively; with two peak-zero pairs, a total of 2 × (9 + 9) + 8 + 2 = 46 bits is needed for a given stage. These overhead bits can be embedded into the image when the next stage is processed. Accordingly, if the data hiding program is executed for 20 levels (80 stages), the required overhead is only 3,680 bits, which is very small compared to the embedding capacity.
In addition, as shown in Table 5, we also compare our results with the earlier scheme of Lee et al. [13]. From this table, it is obvious that the average stego-image qualities of Lena and Baboon in our scheme are higher than those of Lee et al.'s scheme. As shown in Fig. 6, our scheme achieves better image quality than the previous works of Yang and Tsai [31], Lin et al. [18], Hwang et al. [12], and Weng et al. [29] at the same embedding capacities.
Figures 7 and 8 show the original images and the stego-images for comparison when the capacities approach 300,000 bits using tactic A. It is visually difficult to distinguish the original image from the stego-image for both Lena and Baboon.
As shown in Tables 6 and 7, we used different C values to embed the secret bits into the image. The experimental results show that smaller C values yield larger capacities at the same PSNR value. Note that a smaller C value imposes a stricter sieving condition: when C is smaller, less capacity is obtained at the low levels, but the image is kept at high quality; therefore, better capacity and PSNR are obtained at the high levels.
Table 8 shows the numbers of overflow and underflow occurrences when 20 levels are executed. Compared to the embedding capacity of our approach, this is relatively little data, which can easily be embedded into the image.
4 Conclusions
We propose a reversible data hiding method based on four candidate prediction methods and local complexity in order to enhance stego-image quality. We evaluate the four prediction methods by their capacities or efficiency ratios to decide which prediction method to use. Additionally, we use a variance strategy to determine a threshold TH that selects which pixels join the process of pixel shifting and data concealing. This variance strategy efficiently improves upon histogram-based approaches to obtain high-quality images. The experimental results show that our method achieves larger embedding capacity and better image quality than previous works.
References
Al-Qershi QM, Khoo BE (2011) High capacity data hiding schemes for medical images based on difference expansion. J Syst Softw 31(4):787–794
Alattar AM (2004) Reversible watermark using the difference expansion of a generalized integer transform. IEEE Trans Image Process 13(8):1147–1156
Chang CC, Pai PY, Yeh CM, Chan YK (2010) A high payload frequency-based reversible data hiding method. Inform Sci 180(11):2286–2298
Denemark T, Sedighi V, Holub V, Cogranne R, Fridrich J (2014) Select-channel-aware rich model for steganalysis of digital images. Proc IEEE Int Work Inf Forensics Secur 48–53
Feng JB, Lin IC, Tsai CS, Chu YP (2006) Reversible watermarking: current status and key issues. Int J Netw Secur 2(3):161–170
Filler T, Fridrich J (2011) Minimizing additive distortion in steganography using syndrome-trellis codes. IEEE Trans Inf Forensics Secur 6(3):920–935
Fridrich J, Kodovsky J (2012) Rich models for steganalysis of digital images. IEEE Trans Inf Forensics Secur 7(3):868–882
Fu DS, Jing ZJ, Zhao SG, Fan J (2014) Reversible data hiding based on prediction-error histogram shifting and EMD mechanism. AEU Int J Electron Commun 68(10):933–974
Hong W (2012) Adaptive reversible data hiding method based on error energy control and histogram shifting. Opt Commun 285(2):101–108
Hsiao JY, Chan KF, Chang JM (2009) Block-based reversible data embedding. Signal Process 89(4):556–569
Hu Y, Lee HK, Li J (2009) DE-based reversible data hiding with improved overflow location map. IEEE Trans Circuits Syst Video Technol 19(2):250–260
Hwang HJ, Kim HJ, Sachnev V, Joo SH (2010) Reversible watermarking method using optimal histogram pair shifting based on prediction and sorting. KSII Trans Internet Inf Syst 4(4):555–670
Lee CF, Chen HL, Tso HK (2010) Embedding capacity raising in reversible data hiding based on prediction of difference expansion. J Syst Softw 83(10):1864–1872
Lee CC, Wu HC, Tsai CS, Chu YP (2008) Adaptive lossless steganography with centralized difference expansion. Pattern Recogn 41(6):2097–2106
Li X, Li B, Yang B, Zeng T (2013) General framework to histogram-shifting-based reversible data hiding. IEEE Trans Image Process 22(6)
Li X, Yang B, Zeng T (2011) Efficient reversible watermarking based on adaptive prediction-error expansion and pixel selection. IEEE Trans Image Process 20(12):3524–3533
Lin CC, Hsueh NL (2008) A lossless data hiding scheme based on three-pixel block differences. Pattern Recogn 41(4):1415–1425
Lin CC, Tai WL, Chang CC (2008) Multilevel reversible data hiding based on histogram modification of difference images. Pattern Recogn 41(12):3582–3591
Liu W, Liu G, Dai Y (2014) Efficient alternative form of syndrome-trellis codes for large relative payloads. J Comput Inf Syst 10(21):9037–9044
Lou DC, Hu MC, Liu CL (2010) Multiple-layer data hiding scheme for medical images. Comput Stand Interfaces 31(2):329–335
Luo L, Chen Z, Chen M, Zeng X, Xiong Z (2010) Reversible image watermarking using interpolation technique. IEEE Trans Inf Forensic Secur 5(1):187–193
Ni Z, Shi YQ, Ansari N, Su W (2006) Reversible data hiding. IEEE Trans Circuits Syst Video Technol 16(3):354–362
Pan Z, Hu S, Ma X, Wang L (2015) Reversible data hiding based on local histogram shifting with multilayer embedding. J Vis Commun Image Represent 31(8):64–74
Shi YQ, Ni Z, Zou D, Liang C, Xuan G (2004) Lossless data hiding: fundamentals, algorithms, and applications. Proc IEEE ISCAS 33–36
Tai WL, Yeh CM, Chang CC (2009) Reversible data hiding based on histogram modification of pixel differences. IEEE Trans Circuits Syst Video Technol 19(6):906–910
Tian J (2003) Reversible data embedding using a difference expansion. IEEE Trans Circuits Syst Video Technol 13(8):890–896
Tsai P, Hu YC, Yeh HL (2009) Reversible image hiding scheme using predictive coding. Signal Process 89(6):1129–1143
Tu TY, Wang CH (2015) Reversible data hiding with high payload based on referred frequency for VQ compressed codes index. Signal Process 108:278–287
Wang X, Ding J, Pei Q (2015) A novel reversible data hiding based on pixel value ordering and dynamic pixel block. Inform Sci 310(20):16–35
Weng CY, Yang CH, Fan CI, Liu KL, Sun HM (2013) Histogram-based reversible information hiding improved by prediction with the variance to enhance image quality. In The 8th Asia Joint Conference on Information Security, Seoul, Korea, July 25–26
Yang CH, Tsai MH (2010) Improving histogram-based reversible data hiding by interleaving prediction. IET Image Process 4(4):223–234
Yousefi S, Rabiee H, Yousefi E, Ghanbari M (2007) Reversible data hiding using histogram sorting and integer transform. Proc IEEE DEST 487–490
Zhao A, Luo H, Lu ZM, Pan JS (2011) Reversible data hiding based on multilevel histogram modification and sequential recovery. AEU Int J Electron Commun 65(10):814–826
Zhao Z, Luo H, Lu ZM, Pan JS (2011) Reversible data hiding based on multilevel histogram modification and sequential recovery. AEU Int J Electron Commun 65:814–826
Acknowledgement
This research was partially supported by the Ministry of Science and Technology of the Republic of China under the Grants NSC 101-2221-E-153-002-MY2, MOST 103-2221-E-153-005, and MOST 105-2221-E-153-010.
Cite this article
Yang, CH., Weng, CY., Lin, YK. et al. High-fidelity lossless data hiding based on predictors selection. Multimed Tools Appl 76, 23699–23720 (2017). https://doi.org/10.1007/s11042-016-4133-4