Reversible Data Hiding in Encrypted Images Using Median Edge Detector and Two’s Complement

Abstract: With the rapid development of cloud storage, an increasing number of users store their images in the cloud. These images may contain business secrets or personal information, such as engineering design drawings and commercial contracts. Thus, users encrypt images before they are uploaded. However, cloud servers have to hide secret data in encrypted images to enable the retrieval and verification of massive numbers of encrypted images. To ensure that both the secret data and the original images can be extracted and recovered losslessly, researchers have proposed a class of methods known as reversible data hiding in encrypted images (RDHEI). In this paper, a new RDHEI method using the median edge detector (MED) and two's complement is proposed. The MED prediction method is used to generate the predicted values of the original pixels and calculate the prediction errors. Adaptive-length two's complement is used to encode most of the prediction errors. To reserve room, the two's complement codes are embedded into the pixels. To record the unlabeled pixels, a label map is generated and embedded into the image. After the image has been encrypted, data can be embedded into it. The experimental results indicate that the proposed method achieves average embedding rates of 2.58 bpp, 3.04 bpp, and 2.94 bpp on three datasets, i.e., UCID, BOSSbase, and BOWS-2, outperforming previous work.


Introduction
Data-hiding technology [1] plays a significant role in image fields such as identification, annotation, and copyright. However, the traditional data-hiding method [2] causes permanent distortion of the original images. In the judicial, medical, and military fields, each bit of an image is essential, and any distortion in images is unacceptable. This has led to an interest in the reversible data-hiding (RDH) method. RDH methods [3][4][5] can hide data and achieve the lossless recovery of the original images. However, the early RDH methods could not achieve a good performance. In the past decade, some effective RDH methods have been designed to achieve a better embedding rate and can be divided into three fundamental categories: lossless compression [6][7][8], difference expansion [9][10][11], and histogram shifting [12][13][14]. Combining the advantages of the above three different methods, researchers have proposed the combined strategies [15][16][17].
At present, cloud storage has become a popular service, especially for images, which need large storage space [18]. In the cloud scenario, RDH is an important method for cloud servers to manage outsourced images [19,20]. However, some cloud servers cannot be trusted, and storing images on the cloud may lead to privacy leakage [21]. To protect privacy, users encrypt their images before uploading them. Thus, it is vital to enable the cloud server to efficiently manage the encrypted images while allowing the user to recover the original image losslessly [22]. Under such demands, the method of reversible data hiding in encrypted images (RDHEI) has attracted considerable interest from researchers. One representative approach is a multi-MSB prediction and Huffman coding method, whose authors compared the original pixel and the predicted value from the MSBs to the LSBs and defined the length of the matching bits as the pixel's label. To reserve room, after the image was encrypted, they used Huffman coding to compress the labels and embedded the Huffman codes into the encrypted pixels. According to the Huffman code of each encrypted pixel, the data hider can embed data.
In 2019, based on prediction errors rather than MSBs, Yi and Zhou [36] proposed a new RRBE method. First, they proposed a parametric binary tree labeling (PBTL) scheme with two selectable parameters α and β. Through the parameters and a binary tree, n binary codes of category G_1 and one binary code of category G_2 can be generated. Then, they divided the image into s × s non-overlapping blocks and defined the first pixel in each block as the reference pixel. They used the reference pixel to predict the remaining pixels in each block and calculated the prediction errors between the predicted values and the original values. They coded the n most frequent prediction errors with G_1 codes and the remaining prediction errors with the G_2 code. After the image was encrypted, they labeled each pixel with the binary code of its prediction error. Finally, in each pixel labeled by a G_1 code, the bits not occupied by the binary code can be used to embed data. In the method of [36], the reference pixels used by the prediction method occupy a certain proportion of the image and cannot be utilized. Therefore, in 2019, Wu et al. [37] improved the method of [36] by using the median edge detector (MED) predictor and achieved an average ER as large as 2.5 bpp.
Although the methods proposed in [36,37] achieved good improvements in the ER, they do not make full use of the prediction errors. In addition, in these methods, the content owner must share auxiliary information with the data hider, such as the pixel labels or a location map, thereby leaking information about the original image to the data hider. Thus, to increase the ER and reduce the risk of sharing auxiliary information, a new RRBE method based on the prediction errors is proposed in this paper. The contributions of this paper are summarized as follows:

1. The proposed method achieves a higher ER than previous related methods. Two's complement is used to encode the prediction errors, making full use of spatial correlation, so more pixels are used to reserve room in the original image. Meanwhile, a label map is generated to record the overflowed pixels rather than embedding codes in these pixels, and compressing the label map further reduces the room occupied by the auxiliary information. The experimental results show that the ERs of the proposed method are better than those of previous methods.

2. The proposed method is more secure. In previous related methods, the auxiliary information is shared with the data hider for hiding data. Through the shared auxiliary information, a dishonest data hider can parse out the original image's spatial information, which may cause leakage of the content. To solve this problem, an MSB rearrangement method is proposed to form a regular reserved room; the label map is then embedded into the regular reserved room and encrypted. In addition, two parameters are set for hiding data, so the data hider cannot obtain any spatial information about the image. Thus, the proposed method reduces the risk of sharing auxiliary information.
The rest of the paper is organized as follows. Section 2 describes the proposed RDHEI method. The experimental results are discussed in Section 3. Section 4 concludes this paper.

Proposed Method
This section describes the proposed method using the MED and two's complement. There are three phases in our method: (1) the content owner processes the original image to reserve room and encrypts the image to protect the content; (2) the data hider embeds secret data in the encrypted image; (3) the receiver extracts the data and recovers the original images.
In the first phase, the content owner performs two's complement generation and labeling methods in the original image to reserve room. Then, the content owner generates the label map as auxiliary information and embeds it in the image. Finally, the content owner encrypts the processed image using an encryption key K e . In the second phase, after using a data hiding key K d , the data hider embeds the secret data in the reserved room of the encrypted image. In the third phase, according to different keys, the receiver can extract the secret data or recover the original image losslessly from the marked encrypted image. An overview of the proposed method is shown in Figure 1. In addition, the main notations of this paper are listed in Table 1.
For an original image of size H × W with pixels in the range of [0, 255], a prediction process is performed to generate the prediction errors. First, the MED prediction method is used to obtain the predicted value of each original pixel. In the MED, the pixels in the first row and first column are recorded as reference pixels. A schematic diagram of the MED is shown in Figure 2, where p_1, p_2, and p_3 are the three original pixels surrounding the currently predicted pixel p_o(i, j). The predicted value v(i, j) of each remaining original pixel is calculated by:

v(i, j) = min(p_1, p_2), if p_3 ≥ max(p_1, p_2); max(p_1, p_2), if p_3 ≤ min(p_1, p_2); p_1 + p_2 − p_3, otherwise.

Finally, each prediction error e(i, j) between the original pixel p_o(i, j) and the predicted value v(i, j) is calculated by:

e(i, j) = p_o(i, j) − v(i, j).

Due to the spatial correlation of natural images, the distribution of the prediction errors e(i, j) (2 ≤ i ≤ H, 2 ≤ j ≤ W) nearly follows a Laplace distribution with location parameter zero. Therefore, the center bins of the prediction error distribution can be recorded through two's complement, and two's complements of different lengths can encode a variable number of bins of the distribution.
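The MED prediction step can be sketched in a few lines of Python; the neighbor roles (p1 = left, p2 = upper, p3 = upper-left, as in Figure 2) and the function names are our illustrative choices:

```python
def med_predict(p1, p2, p3):
    # Median edge detector: p1 = left, p2 = upper, p3 = upper-left neighbor.
    if p3 >= max(p1, p2):
        return min(p1, p2)      # horizontal/vertical edge above or left
    elif p3 <= min(p1, p2):
        return max(p1, p2)
    else:
        return p1 + p2 - p3     # smooth region: planar prediction

def prediction_error(pixel, p1, p2, p3):
    # e(i, j) = p_o(i, j) - v(i, j)
    return pixel - med_predict(p1, p2, p3)
```

For example, `med_predict(100, 120, 130)` returns 100 because the upper-left neighbor dominates both others, signaling an edge.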
For 8-bit-depth pixels, it is assumed that the length of the two's complement is denoted by α (1 ≤ α ≤ 7); the interval of prediction errors that can be encoded by an α-bit two's complement is [−2^(α−1), 2^(α−1) − 1], which is defined as U. Thus, an appropriate value of α makes the interval U contain the center bins of the distribution of the prediction errors. After determining α, each pixel p_o(i, j) (2 ≤ i ≤ H, 2 ≤ j ≤ W) can be divided into two categories according to its prediction error and U: (1) labeled pixels and (2) unlabeled pixels. A pixel whose prediction error belongs to U is classified as a labeled pixel, and a pixel whose prediction error falls outside U is classified as an unlabeled pixel. The meaning of the labels becomes clear in the following steps.
For the labeled pixels, their prediction errors can be encoded with α-bit two's complement. Based on this, a two's complement labeling method is proposed to reserve room. First, each labeled pixel p_o(i, j) is converted into an 8-bit binary sequence, where each bit p_o(i, j)_k of the binary sequence is calculated as:

p_o(i, j)_k = ⌊p_o(i, j)/2^k⌋ mod 2, k = 0, 1, ..., 7,

where ⌊·⌋ is the floor function. Then, the prediction error e(i, j) of the corresponding labeled pixel is encoded by the α-bit two's complement, and each bit e(i, j)_k is calculated by:

e(i, j)_k = ⌊(e(i, j) mod 2^α)/2^k⌋ mod 2, k = 0, 1, ..., α − 1.

Finally, as shown in Figure 3, e(i, j)_k (k = 0, 1, ..., α − 1) is embedded into the α LSBs p_o(i, j)_k (k = 0, 1, ..., α − 1) by bit substitution to ensure that the original pixel can be recovered. After the prediction errors are embedded, the labeled pixels can be recovered from their α-bit two's complements. Therefore, the front (8 − α)-bit MSBs of the labeled pixels can be used to embed data. On the other hand, the prediction errors of the unlabeled pixels cannot be encoded by α-bit two's complement; in other words, these pixels cannot be labeled with their prediction errors and could not be recovered after data embedding. Thus, the unlabeled pixels must not be modified.
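The encoding and labeling step above can be sketched as follows; the helper names are ours, and the α-bit two's complement of e is computed as e mod 2^α:

```python
def twos_complement_bits(e, alpha):
    # alpha-bit two's complement of e, valid for e in U = [-2^(alpha-1), 2^(alpha-1)-1]
    assert -(1 << (alpha - 1)) <= e <= (1 << (alpha - 1)) - 1
    return e % (1 << alpha)          # e.g. e = -1, alpha = 3 -> 0b111 = 7

def label_pixel(pixel, e, alpha):
    # Replace the alpha LSBs of the labeled pixel with the code of its prediction error.
    code = twos_complement_bits(e, alpha)
    return (pixel & ~((1 << alpha) - 1)) | code
```

For instance, with α = 3 a pixel 181 (0b10110101) with prediction error −1 becomes 183 (0b10110111): the three LSBs now hold the code 111.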
Through the two's complement labeling method, a labeled image I_l is generated. An example of the two's complement labeling process on a part of Lena is shown in Figure 4, where α = 3. Figure 4a shows the binary sequences of the original pixels, and the gray blocks are reference pixels. Figure 4b shows the prediction errors of the original pixels, and the blue binary sequences are 3-bit two's complements. Some prediction errors, such as +5 and −5, cannot be converted to 3-bit two's complement because they are outside the interval [−4, 3]. The labeled image is shown in Figure 4c.

Label Map Generation and Embedding
The previous subsection describes the different operations performed on each pixel according to its category (labeled or unlabeled). However, as shown in Figure 4, the two's complement codes of the labeled pixels and the original LSBs of the unlabeled pixels are indistinguishable in form. Thus, it is impossible to determine the category of each pixel from its LSBs alone, and data cannot yet be embedded in the labeled pixels. To identify each pixel's category, a label map is used for the labeled image.
A bitmap of the same size as the labeled image is generated and defined as the label map M; each coordinate of M corresponds to the same coordinate of I_l. The values of M are set according to the pixel categories of I_l: the values at the locations of the labeled pixels are set to 0, and the values corresponding to the unlabeled pixels are set to 1. Because of the spatial correlation of images, the label map contains a large number of 0s and a small number of 1s. Based on this, the extended run-length coding method [34] is used to compress M losslessly, and the bitstream obtained after compression is defined as B_m.
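To illustrate why such a skewed bitmap compresses well, the sketch below collects runs of identical bits; it is a plain run-length pass for intuition only, not the exact extended run-length scheme of [34]:

```python
def run_lengths(bits):
    # Turn a 0/1 sequence into [value, run length] pairs; long runs of 0s
    # (the common case in the label map) collapse into single pairs.
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return runs
```

A label map row such as `[0, 0, 0, 1, 0, 0]` becomes three pairs, and a mostly-zero map of thousands of bits becomes only a handful of pairs.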
After the compressed label map is obtained, B_m must be embedded into the reserved room as auxiliary information. However, at the stage of image recovery, bitstream B_m can be extracted only after the pixels have been divided into labeled and unlabeled pixels, while this identification information is itself derived from B_m; this creates a circular dependency. To solve this problem, an MSB rearrangement method is proposed to embed the label map. First, the (8 − α)-bit MSBs of the unlabeled pixels are extracted to generate a bitstream B_u. Except for the reference pixels, the (8 − α)-bit MSBs of every pixel in the labeled image then form the reserved room; therefore, a regular reserved room is obtained. Then, to ensure that the unlabeled pixels can be recovered, bitstream B_u is spliced to the tail of bitstream B_m to obtain a long bitstream. Thus, the MSBs of consecutive pixels can store the long bitstream regardless of whether each pixel is labeled or unlabeled. Finally, image I_l is scanned from top to bottom and from left to right, and the long bitstream is embedded into the (8 − α)-bit MSBs of each pixel by bit substitution. Note that the reference pixels are not used to embed the long bitstream. After the embedding of the long bitstream is completed, the coordinate of the last pixel of the embedding area is obtained; to extract the long bitstream from the image, this coordinate is stored and defined as parameter C_p, for example, C_p = (123, 45). To cut the long bitstream back into B_m and B_u, the length of B_m is defined as parameter L_m, for example, L_m = 162,341.
Additionally, for the subsequent operations of data hiding and image recovery, the parameters α, C_p, and L_m are embedded into the image. To ensure that the parameters can be extracted correctly, they are stored in the reference pixels, and each parameter is represented by a fixed-length binary number. Therefore, a fixed number of bits in the reference pixels is used: L_1 bits are used to embed α and C_p, and L_2 bits are used to embed L_m. The values of L_1 and L_2 are calculated by:

L_1 = ⌈log2 7⌉ + ⌈log2 H⌉ + ⌈log2 W⌉, L_2 = ⌈log2 (H × W)⌉,

where ⌈·⌉ is the ceiling function (α ∈ [1, 7] requires ⌈log2 7⌉ bits, the coordinate C_p requires ⌈log2 H⌉ + ⌈log2 W⌉ bits, and L_m is at most H × W). The three parameters are divided into two parts because the parameter L_m is not needed for data hiding: the data hider embeds data using the parameters α and C_p, while the receiver needs α, C_p, and L_m to recover the original image. Thus, by not sharing the parameter L_m, the label map is protected from being retrieved by the data hider. In addition, to recover the reference pixels, the L_1 + L_2 bits that are replaced by these parameters are extracted as bitstream B_r, and B_r is appended to the tail of the long bitstream (B_m + B_u) in the same way. The resulting total bitstream is defined as B_t. After B_r is embedded, the coordinate C_p of the last pixel of the embedding area changes; thus, the corresponding bits of C_p in the reference pixels are updated to the new value.
Through the label map embedding method, the processed image I_p is generated. In the processed image, the front (8 − α)-bit MSBs of the pixels starting from coordinate C_p represent the reserved room in which data can be embedded. An example of I_p is shown in Figure 5, where α = 3. In Figure 5, the red bits in the reference pixels are the parameters. The 3-bit LSBs of each pixel have different meanings: the blue bits are two's complement codes, and the gray bits are original bits. The yellow bits form the total bitstream. In addition, the coordinate of the pixel enclosed by the dashed box is recorded as C_p, and the bits marked 'xxx' are the reserved room.

Generation of an Encrypted Image
During this phase, to protect the content of the original image, the processed image I_p is encrypted by the following method. First, an H × W pseudorandom matrix R is generated by a chaotic encryption system with an image encryption key K_e; each value in R is denoted r(i, j) (1 ≤ i ≤ H, 1 ≤ j ≤ W). Then, each pixel p_p(i, j) in I_p is encrypted with r(i, j) as follows:

p_e(i, j) = p_p(i, j) ⊕ r(i, j),

where p_e(i, j) is the encrypted pixel and the symbol ⊕ represents the bitwise exclusive-or operation. Finally, the encrypted image I_e is obtained. Note that the L_1 bits in the reference pixels are not encrypted because these bits are shared with the data hider.
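The XOR encryption step can be sketched as below. For illustration, a seeded standard PRNG stands in for the paper's chaotic encryption system; the round trip works because XOR is its own inverse:

```python
import random

def keystream(key, n):
    # Stand-in for the chaotic system of the paper: a PRNG seeded with K_e.
    rng = random.Random(key)
    return [rng.randrange(256) for _ in range(n)]

def xor_cipher(pixels, key):
    # p_e(i, j) = p_p(i, j) XOR r(i, j); applying it twice decrypts.
    ks = keystream(key, len(pixels))
    return [p ^ r for p, r in zip(pixels, ks)]
```

Encrypting and then decrypting with the same key recovers the pixels exactly, which is what makes lossless recovery possible downstream.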

Data Hiding in the Encrypted Image
To allow the data hider to embed data in the encrypted image, the L_1 bits of the encrypted image containing the reserved-room information are kept as shared plaintext bits. Thus, after receiving the encrypted image I_e, the data hider can embed secret data into the reserved room. To further enhance security, the data are encrypted by using a data encryption key K_d. First, the L_1 bits are extracted from the fixed reference pixels, and the parameters α and C_p are recovered from these bits. According to the parameters, the effective payload can be calculated. Assuming that N_r is the number of reference pixels and N_p is the number of pixels that cannot be embedded, the payload (in bits) is calculated by:

EC = (H × W − N_r − N_p) × (8 − α).

Meanwhile, the reserved room can be identified, which consists of the (8 − α)-bit MSBs of each pixel after coordinate C_p in the image. Secondly, the image is scanned starting from coordinate C_p, and the front (8 − α)-bit MSBs of each pixel are modified to embed the encrypted secret data. Finally, the marked encrypted image I_m is generated. The detailed procedure of data hiding is presented in Algorithm 1.
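Under the assumption that every pixel after C_p contributes its (8 − α) MSBs, the payload computation is a one-liner; the function name is ours:

```python
def payload_bits(H, W, N_r, N_p, alpha):
    # Reserved room: each embeddable pixel contributes its (8 - alpha) MSBs.
    return (H * W - N_r - N_p) * (8 - alpha)
```

For a 512 × 512 image, the reference pixels are the first row plus the rest of the first column, so N_r = 512 + 511 = 1023.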

Algorithm 1 Data Hiding Algorithm.
Input: Encrypted image I_e, secret data D, data encryption key K_d
Output: Marked encrypted image I_m
  Get the encrypted secret data D_e by using key K_d
  Extract the fixed-length L_1 bits from the reference pixels of I_e
  Extract parameter α and coordinate C_p from the L_1 bits
  Get the first embeddable pixel p_e(i, j) ((i, j) = C_p)
  while there is still encrypted data that has not been embedded do
      Convert the current pixel p_e(i, j) into 8-bit binary form p_e(i, j)_k (k = 0, 1, ..., 7)
      Extract the front (8 − α) bits from D_e and embed them into p_e(i, j)_k (k = α, ..., 7)
      Get the next pixel
  end while
  Get the marked encrypted image I_m
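Algorithm 1 can be sketched in Python as follows; for simplicity the pixels are taken as a flat row-major list and the index of C_p is passed as `start` (both are our simplifications):

```python
def embed_data(pixels, start, alpha, data_bits):
    # Replace the (8 - alpha) MSBs of successive pixels (from index `start`)
    # with the encrypted data bits; the alpha LSBs are left untouched.
    out = list(pixels)
    k = 8 - alpha                 # data bits carried per pixel
    i, pos = start, 0
    while pos < len(data_bits):
        chunk = data_bits[pos:pos + k]
        chunk += [0] * (k - len(chunk))       # zero-pad the final chunk
        msb = 0
        for b in chunk:
            msb = (msb << 1) | b
        out[i] = (msb << alpha) | (out[i] & ((1 << alpha) - 1))
        i += 1
        pos += k
    return out
```

With α = 3 each pixel carries 5 data bits; embedding `[1, 0, 1, 0, 1]` into the pixel 255 yields (0b10101 << 3) | 0b111 = 175.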

Data Extraction and Image Recovery
There are two cases during the decoding phase, depending on which key the receiver holds: (1) the data hiding key K_d or (2) the image encryption key K_e. According to the key, the receiver can extract the secret data or recover the original image, respectively.

Data Extraction
The secret data can be extracted from the marked encrypted image I_m if the receiver holds K_d. First, the receiver obtains the parameters α and C_p from the L_1 bits in the reference pixels and identifies the reserved room based on these parameters. Then, the secret data are extracted from the (8 − α)-bit MSBs of the corresponding pixels. Finally, the secret data are decrypted by using K_d.

Image Recovery
The receiver can recover the original image when holding K_e. First, the marked encrypted image I_m is decrypted using K_e to obtain the processed image I_p; the L_1 bits in the reference pixels are not encrypted, so they need no decryption. Then, the parameters α, C_p, and L_m are extracted from the L_1 and L_2 bits. According to α and C_p, the (8 − α)-bit MSBs of the pixels are sequentially extracted to recover the total bitstream B_t. Based on L_m, the bitstream of the compressed label map B_m can be extracted from B_t, and decompressing B_m yields the original label map M. Based on M, the receiver scans image I_p and writes the remaining bits of B_t back into the (8 − α)-bit MSBs of the unlabeled pixels through MSB replacement, in order. After all the unlabeled pixels have been recovered, the fixed-length bits remaining in the total bitstream are the original bits that were replaced by the parameters, so the receiver can recover the reference pixels. Therefore, the labeled image I_l is recovered.
After the reference pixels and the unlabeled pixels are recovered, the receiver scans the image I l , and if the pixel is labeled, the original pixel p o (i, j) is equal to the sum of the predicted value v(i, j) and the prediction error e(i, j), where v(i, j) is calculated by the MED predictor and e(i, j) is converted from the α-bit two's complement in the current pixel. On the other hand, for the unlabeled pixel, the original pixel is the same as the current pixel. Thereby, the original image is recovered losslessly.
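The recovery of a labeled pixel can be sketched as follows: the α LSBs are read back as a signed α-bit two's complement value and added to the MED prediction (function names are ours):

```python
def decode_error(pixel, alpha):
    # Read the alpha LSBs and interpret them as a signed two's complement value.
    code = pixel & ((1 << alpha) - 1)
    if code >= (1 << (alpha - 1)):   # sign bit set -> negative error
        code -= (1 << alpha)
    return code

def recover_pixel(pixel, predicted, alpha):
    # Labeled pixel: p_o(i, j) = v(i, j) + e(i, j)
    return predicted + decode_error(pixel, alpha)
```

With α = 3, a labeled pixel 183 (LSBs 111, i.e. −1) and prediction 182 recover the original value 181, matching the labeling example earlier.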

Experimental Results and Analysis
In this section, six test images of size 512 × 512 are used to analyze the different aspects of the proposed method's performance. As shown in Figure 6, the test images are Lena, F16, Baboon, Tiffany, Airplane, and Man. The proposed method was implemented in Python 3.7, and the experimental environment was an Intel(R) Core(TM) i5-8265U CPU @ 1.60 GHz (Intel, Santa Clara, CA, USA) on a Windows 10 PC with 8.0 GB RAM and an NVIDIA GeForce MX350 graphics card (NVIDIA, Santa Clara, CA, USA). Comparisons between the proposed method and state-of-the-art methods are made in this section. The results are described below.

Performance and Security Analysis
In the RDHEI method, the privacy of the original image is important to the content owner. To test the security of our method, Lena is used as an example with parameter α = 4. Figure 7 shows the results of the experiment at different stages. Figure 7a shows the original Lena, and Figure 7b shows the processed Lena. The top area of the processed Lena is scrambled because the total bitstream B_t is embedded into the consecutive MSBs of those pixels; on the other hand, the two's complements are labeled in the pixels' LSBs, so the remaining area of the processed Lena is similar to the original Lena. Figure 7c shows the encrypted Lena. Obviously, no information about the original image can be obtained from Figure 7c, which indicates that the proposed method is secure. In addition, Figure 7d shows the recovered Lena after decryption and recovery. The proposed method is completely reversible: the peak signal-to-noise ratio (PSNR) between Figure 7a,d is +∞ dB, and in the experimental results, the embedded data are extracted without error.

To further verify the security of the proposed method, a statistical analysis was performed on Lena at the different stages in Figure 7. Figure 8 shows the results, where Figure 8a-d are the corresponding histograms of Figure 7a-d. The histogram of the processed Lena (Figure 8b) retains a certain correlation with that of the original Lena (Figure 8a). After encryption, Figure 8c shows a uniform distribution of the pixels of the encrypted Lena. Thus, it is impossible to obtain the original content of Lena through statistical analysis, which means that the proposed method achieves a high level of security. Indeed, the histogram in Figure 8d confirms that the recovered image is lossless. In the methods of [36,37], the labels of all pixels are shared with the data hider as auxiliary information.
However, each prediction error has a one-to-one correspondence with a label, and the number of prediction errors equals the number of labels. Statistical analysis therefore reveals a strong correlation between the histogram of the labels and the histogram of the prediction errors, so the auxiliary information in these methods can leak information about the original image to the data hider. In the proposed method, by contrast, the data hider embeds data using only two parameters, and the two's complements labeled in the image are encrypted, so the data hider cannot obtain the label of any pixel. Thus, the proposed method achieves higher security.

Parameter and Capacity Analysis
In the proposed method, α-bit two's complement is used to encode the prediction errors. As mentioned before, the range of prediction errors that can be coded varies with α. To analyze the influence of α, three aspects are considered: the number of labeled pixels, the compression rate of the label map, and the ER. For each test image, α is varied from 1 to 7. The experimental results and analysis are as follows. Table 2 shows the number of labeled pixels in each test image when α takes different values. To better display the data of Table 2, Figure 9 shows how the number of labeled pixels varies with α. It can be observed from Figure 9 that the number of labeled pixels increases as α increases and is close to the total number of pixels in each image when α = 7. For example, the number of labeled pixels in Tiffany is 261,080 when α = 7, while the total number of pixels except the reference pixels is 261,121. In the proposed method, a pixel's prediction error is more likely to fall into the interval represented by the α-bit two's complement when α is large. Thus, the larger the value of α, the greater the number of labeled pixels.
Moreover, to make the distribution of the prediction errors steeper, the MED prediction method is used in this paper. Table 3 shows the compression rate of the label map in each image for different values of α. Similarly, to better visualize the change in the compression rate, the data in Table 3 are converted into the broken lines in Figure 10. As shown in Figure 10, the label map's compression rate first decreases and then increases as α increases, because the proportion of 0s in the label map changes. When α is small, the proportion of 0s in the label map is small (few labeled pixels), but the proportion of 1s is correspondingly large, which can still be compressed; therefore, the label map has a certain compression rate. For example, the compression rate of Baboon's label map can reach up to 44.27%. Then, the proportion of 0s in the label map increases with α, making the numbers of 0s and 1s nearly equal, and the compression rate of the label map is reduced. However, as the proportion of 0s gradually approaches the maximum with increasing α, the label map can again be compressed well. For example, the compression rate of the label map for five of the test images is around 99% when α = 7.
In the proposed method, a suitable α produces a large number of labeled pixels to reserve room. Thus, the extended run-length coding [34] is well suited to compressing the label map. The ER is the most important indicator for measuring an RDHEI method. To obtain the maximum ER of the test images, the ERs under all values of α are calculated, as shown in Table 4. Figure 11 visually displays the data in Table 4. It is obvious that the ER first increases and then decreases as α increases from 1 to 7. The ER of each test image is low when α is small, because the number of labeled pixels and the compression rate of the label map are low. As α increases (α > 3), the number of labeled pixels and the compression rate of the label map gradually increase, so the ER gradually increases. However, the room reserved in each labeled pixel is (8 − α) bits, so the room reserved per labeled pixel decreases as α increases. When α exceeds a certain threshold, the room lost in the previously labeled pixels outweighs the room gained, and the ER gradually becomes smaller. To maximize the ER, a suitable α must balance the number of labeled pixels, the compression rate of the label map, and the room reserved per labeled pixel. In conclusion, α has an important influence on the ER, and the appropriate α must be chosen for each image according to its characteristics. Based on the above experimental results and analysis, the proposed method achieves the maximum ER when α = 4.

Comparisons with State-of-the-Art Methods
To verify the superiority of the proposed method, it is compared with four previous RDHEI methods, i.e., those of [33,34,36,37]. To achieve good performance, multiple bit planes are used for the method of [33]. In the method of [34], the length of the codewords and the block size are set to 3 and 4 × 4, respectively. In the method of [36], parameters α and β are selected as 5 and 2, respectively, while the block size is set to 3 × 3. Similarly, in the improved method of [37], parameters α and β are set to 5 and 3. In the proposed method, the most suitable value of parameter α is chosen for each image. Figure 12 compares the maximum ERs of the proposed method with those of the four methods on the six test images. It can be observed that the ER of each image obtained by the proposed method is higher than that of the other methods. Furthermore, the ERs on F16 and Airplane of the other methods do not exceed 3 bpp, but the ERs of the proposed method reach 3.1 bpp and 3.5 bpp, respectively. Obviously, the proposed method significantly improves the ER on smooth images. Meanwhile, in these comparative methods, the ER of Baboon is very low, but the proposed method achieves an ER of 1.2 bpp on Baboon, indicating that our method also performs well on rough images. To reduce the influence of the test images on the outcome of the comparison, the proposed method is applied to three datasets with α set to 4, i.e., UCID [40], BOSSbase, and BOWS-2; the results are shown in Table 5. From the results, it can be seen that the maximum ERs on the different datasets are close to 4 bpp, while the minimum ERs are greater than 0 bpp. These experimental results confirm the universality of the proposed method. Moreover, we compare the average ERs on the different datasets between the proposed method and the four prior works, as shown in Figure 13. It can be observed that the proposed method has higher average ERs on the datasets than the other methods.
In particular, in the dataset BOSSbase, the average ER of our method exceeds 3 bpp. From the above comparison, it is obvious that the proposed method can achieve a higher ER than existing state-of-the-art methods.

Conclusions
In this paper, we propose a new RDHEI method based on MED and two's complement. First, the MED prediction method is used to predict the pixels in the image; then, the prediction errors between the original pixels and the predicted values are obtained. Based on the distribution of the prediction errors, an appropriate interval is encoded by α-bit two's complement; according to the experimental results and analysis, the appropriate value of α is around 4. To reserve room in the image, the two's complement of each prediction error is embedded into the pixel's LSBs. Then, a label map is generated to distinguish labeled and unlabeled pixels. To ensure that the original image can be restored losslessly, we use an MSB rearrangement method to embed the label map in the image as auxiliary information. After encrypting the processed image, secret data can be embedded into the encrypted image. During the decoding phase, the secret data can be extracted correctly with the data hiding key; meanwhile, using the image encryption key, the original image can be recovered losslessly from the marked encrypted image, as shown in the experimental results. Thus, the proposed method satisfies reversibility. Moreover, the experimental comparisons show that the proposed method achieves a higher ER than previous methods.
In the future, we plan to increase the ER in two ways. On the one hand, an efficient coding scheme can be used to label more prediction errors. On the other hand, the auxiliary information can be reduced by improving the compression method.