Article

High-Capacity and High-Quality Reversible Data Hiding Method Using Recurrent Round-Trip Embedding Strategy in the Quotient Image

Department of Information Management, Chaoyang University of Technology, Taichung 41349, Taiwan
*
Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(21), 10157; https://doi.org/10.3390/app112110157
Submission received: 6 September 2021 / Revised: 27 October 2021 / Accepted: 27 October 2021 / Published: 29 October 2021

Abstract

In previous research, scholars have focused on improving information hiding algorithms so as to achieve the largest embedding capacity and better image quality while still restoring the original image. This research proposes a new robust and reversible information hiding method, recurrent robust reversible data hiding (triple-RDH), with a recurrent round-trip embedding strategy. We embed the secret message in a quotient image to increase robustness. The pixel value is split into two parts, HiSB and LoSB. A recurrent round-trip embedding strategy (referred to as double R-TES) is designed to adjust the predictor and the recursive parameter values so that a pixel carrying secret data bits can first be shifted to the right and then shifted to the left, leaving the pixel value unchanged; the embedding capacity can therefore be increased repeatedly. Experimental results show that the proposed triple-RDH method can increase the embedding capacity up to 310,732 bits while maintaining a certain level of image quality. Compared with existing prediction error expansion (PEE) methods, the triple-RDH method not only has high capacity but is also robust against unintentional image processing attacks. It can also trade off capacity against image quality according to the needs of the application, performing adjustable embedding.

1. Introduction

With the rapid development of the Internet of Things (IoT), advanced high-end computing equipment, and the popularization of communication technology, many applications have been developed on IoT-connected platforms, such as cloud-based vehicle routing [1], medical and healthcare services [2], freight transportation systems [3], and blockchain-based IoT [4]. The automobile has been hailed as the fourth C, following the 3C (computer, communication, and consumer electronics) industries. In recent years, as technology companies have accelerated their investment in the Internet of Vehicles (IoV), the industry has set off its biggest change in a century. With the increasing popularity of connected medical devices, companies from the information and communications technology (ICT) and financial services industries are taking the lead, with healthcare and life sciences close behind. The emerging Internet of Medical Things (IoMT) is becoming a form of medical care. This major change not only simplifies the clinical workflow, helping an infected patient to identify symptoms [5] and receive treatment rapidly, but also helps to realize remote care.
Blockchain technology, a solution for ensuring trust relationships, has widespread applications relating to the Internet of Things (IoT) and the Industrial IoT (IIoT) [6]. Blockchain and the IIoT have been gaining enormous attention in areas beyond blockchain's cryptocurrency roots since around 2014: blockchain and cybersecurity, blockchain and finance, blockchain and anti-counterfeit labels, blockchain and smart contracts [7], etc.
People transmit a large amount of media content over the Internet to achieve real-time interaction. At the same time, data exposed on these public platforms run unforeseen risks: they easily attract the attention of interested parties and may even be maliciously attacked, causing files to be tampered with or destroyed before they are spread. Therefore, preventing malicious tampering with and misappropriation of secret information is an urgent task.
With the rapid growth in the number of IoT applications, the amount of data that needs to be transmitted also grows rapidly across many activities, including the environmental conditions that must be monitored and controlled at a distance. For users, the environments of various IoT applications contain a great deal of key and sensitive information. Therefore, higher requirements are imposed on protecting the secure transmission of multimedia data to its destination, and information hiding was born in response to this demand. Information hiding technology, also known as steganography, uses the content of digital media as a cover in which to embed the secret message to be conveyed. To protect secret messages and increase security, it is now used in various applications, for example, identity verification for protecting information content and intellectual property rights [8,9,10]. These technologies hide secret information in multimedia content; during transmission, it is not easy to detect that the content carries a secret message. The receiving party only needs to know how to extract the hidden information so that the complete message can be obtained safely. According to the application, two categories are distinguished: digital watermarking and information hiding. Digital watermarking is mainly used to protect intellectual property rights and verify the integrity of information; information hiding conceals the secret message in the embedded content so that it is not easily discovered, while the restored content retains a considerable degree of image quality.
In the past few decades, reversible data hiding (RDH), also known as lossless data hiding, has gradually become a very active research topic in the field of data hiding. In reversible data hiding, after the confidential information is retrieved, the original images, such as medical or military images, are not allowed to degrade at all, but its applications have now been extended. Through cloud technology, IoT applications, blockchain, artificial intelligence (AI) technology, and related algorithms [1,2,3,4,5,6,7], RDH can be used as a tool for many tasks involving reversible image operations. After operating on the image to reach the desired target, all the feature values needed to restore the original image can be derived from the target image. These feature values are auxiliary parameters, which are then embedded in the target image through reversible data embedding technology.
The three main methods are difference expansion (DE) [11], histogram shifting (HS) [12,13,14], and prediction error expansion (PEE) [15,16,17]. The histogram shifting (HS) method [12] proposed by Ni et al. in 2006 modifies the histogram of the image to embed the secret message; the histogram counts the occurrences of all pixel values in the gray-scale image. The pixel value that appears most often is the peak point, and a pixel value that does not appear is the zero point. The pixel values between them are moved one unit toward the zero point to create space, and each pixel at the peak point can then hide 1 bit of secret data. Because each pixel value is adjusted by at most 1 in the HS method, it offers high image quality, but the embedding capacity is relatively low. Other scholars have successively proposed related techniques based on the HS method, including generalized histogram shifting [18], two-dimensional histogram shifting (TDHS) [19], and adaptive embedding [20].
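The HS idea described above can be sketched in a few lines. The following is a minimal illustration, not the implementation of [12]; the function name and the choice of peak/zero points are hypothetical, and it assumes the zero point lies above the peak point:

```python
def hs_embed(pixels, peak, zero, bits):
    """Histogram-shifting embed: shift values strictly between the peak
    and zero points up by 1 to create space, then hide one bit in each
    peak-valued pixel (minimal sketch, hypothetical helper)."""
    out, k = [], 0
    for p in pixels:
        if peak < p < zero:          # create the empty bin next to the peak
            out.append(p + 1)
        elif p == peak and k < len(bits):
            out.append(p + bits[k])  # a peak pixel carries 1 secret bit
            k += 1
        else:
            out.append(p)
    return out, k

# Example: peak = 3, zero point = 6 (the value 6 is absent from the image)
stego, used = hs_embed([3, 3, 4, 5], peak=3, zero=6, bits=[1, 0])
```

Each pixel moves by at most 1, which is why HS keeps image quality high while limiting capacity to the number of peak-valued pixels.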
Another mainstream RDH method, first proposed by Thodi and Rodriguez, is prediction error expansion (PEE) [15], which embeds data in the differences between pixel values and their predictions, so its embedding capacity is up to twice that of general DE. Additionally, PEE has a more complex predictor than DE, so it produces prediction errors that are smaller than the pixel differences of DE. To reduce excessive distortion after embedding the secret message, Hu et al. proposed an improved method [21] to reduce the size of the location map. In 2014, Peng et al. proposed an improved PVO-based reversible data hiding (IPVO) method [22] based on pixel-value ordering (PVO) and prediction-error expansion (PEE). The IPVO method makes full use of image redundancy, hiding information at prediction errors of 0 and 1 (or −1) to achieve excellent embedding performance. In 2020, Kumar and Jung proposed a pairwise IPVO method [23] that hides data in smoother blocks and used a bin reservation strategy [24] combined with PEE technology to increase the capacity embedded in the carrier; however, that method has lower image quality under low hidden payloads. Also in 2020, Li et al. proposed an improved prediction-error expansion (I-PEE) scheme [25] to achieve higher embedding capacity and good image quality.
In the spatial domain, exquisite pictures are usually transmitted or stored in the JPEG compression format. In view of this, in 2017, Wang et al. proposed a robust significant-bit-difference expansion (SBDE) method [26], which uses the higher significant bits (HiSB) of the pixel values as the cover image, represented by $I_{HiSB}$. The reason is that general image processing or attacks change the lower significant bits (LoSB); therefore, hiding the data in $I_{HiSB}$ better maintains the integrity of the data content and increases the robustness of the hiding method. In 2020, Kumar and Jung [27] also proposed the two-layer embedding (TLE) method, which hides two secret data bits in the original pixels. Kumar and Jung's TLE strategy, built on the HS style of PEE, uses sorted predictors and repeated embedding that keeps the pixel value unchanged to increase the embedding capacity. The TLE method effectively solves the low-capacity problem of the SBDE method, but there is still room for improvement. Building on the idea of keeping pixel values constant, our method uses a recurrent round-trip embedding strategy for high-capacity, high-quality reversible information hiding in quotient images to hide more secret information while maintaining good image quality. The main contributions of our method are as follows:
(1)
Our method is called recurrent robust reversible data hiding (triple-RDH). The secret message is embedded back and forth in a recursive round-trip way, which effectively increases the embedding capacity.
(2)
Our method hides the secret message on the quotient images to resist malicious attacks and increase the robustness.
(3)
In addition, our method makes full use of the similarity and correlation of adjacent quotient pixels to maintain stable image quality while increasing the embedding capacity.
(4)
We also split the quotient image into gray pixels and white pixels, using different recursive parameters to adjust the embedding capacity and image quality.
(5)
Therefore, the recurrent round-trip embedding strategy (double R-TES embedding strategy) achieves better performance than the TLE and SBDE methods in terms of embedding capacity and image quality.
This article reviews Kumar and Jung's method in Section 2; the proposed method is described in Section 3; Section 4 presents the experimental results; and the conclusions are summarized in Section 5.

2. Related Work

In the spatial domain, all pixels on the first bit plane correspond to their least significant bit (LSB) values, and all pixels on the eighth bit plane correspond to their most significant bit (MSB) values. That is, in the eight-level bit-plane decomposition of an image, intensity values 0–127 have an MSB of 0 and intensity values 128–255 have an MSB of 1, yielding a binary image for each plane. The image formed by the higher bit planes is called the higher-significant-bit image and is represented by $I_{HiSB}$; the image formed by the lower bit planes is called the lower-significant-bit image and is represented by $I_{LoSB}$. The LoSBs contain the least significant $n$ bits; the HiSBs contain the higher significant $(8 - n)$ bits. For example, consider a gray-scale pixel value of $155 = 10011011_2$. Assuming $n = 3$, the value of this pixel in the $I_{LoSB}$ image is $011_2 = 3$, and its value in the $I_{HiSB}$ image is $10011_2 = 19$. Most information hiding techniques in the spatial domain directly change the LSBs of the pixel values to achieve the embedding of secret data.
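The HiSB/LoSB split and its inverse can be sketched as follows (a minimal illustration; the helper names are hypothetical):

```python
def split_pixel(x, n=3):
    """Split an 8-bit pixel into its HiSB quotient and LoSB remainder."""
    return x >> n, x & ((1 << n) - 1)   # (x div 2^n, x mod 2^n)

def merge_pixel(hi, lo, n=3):
    """Recombine the HiSB and LoSB parts into the original pixel."""
    return (hi << n) | lo

# 155 = 10011011b -> hi = 10011b = 19, lo = 011b = 3 (the example above)
hi, lo = split_pixel(155)
```

Because the split is a pure quotient/remainder decomposition, merging the two parts always recovers the original pixel exactly, which is what makes the scheme reversible.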
Both the significant-bit-difference expansion (SBDE) method proposed by Wang et al. [26] and the two-layer embedding (TLE) method proposed by Kumar and Jung in 2020 [27] utilize the HiSBs to achieve information embedding and make the hiding methods more robust. Both the SBDE and TLE methods require that the distortion of the $I_{HiSB}$ image not be too large, so they only embed the secret message in embedding areas with low complexity. The measure of complexity is the variance of the pixel values in the $I_{HiSB}$ image: the smaller the variance, the higher the smoothness, which means embedding secret information in that area causes less distortion and therefore preserves the visual quality of the image. However, the SBDE method hides the data in the bit difference, which results in severe image distortion; the TLE method therefore improves the hiding performance in both image quality and embedding capacity.

2.1. Embedding Method of TLE

The TLE method first divides the pixels of an $I_{HiSB}$ image into white and gray cells in a chessboard pattern, as shown in Figure 1a. The secret information is repeatedly embedded first into all the gray pixel values and then into all the white pixel values. Figure 1a also shows that during the embedding process of the TLE method, the pixel values in the first column, first row, last column, and last row of the $I_{HiSB}$ image are not changed. The TLE embedding procedure is as follows.
Input: a gray-scale image $I$.
Output: a gray-scale stego image $I'$.
Step 1. The binary representation of each pixel value in the gray-scale image $I$ is $(b_7, b_6, b_5, b_4, b_3, b_2, b_1, b_0)$, $b_i \in \{0, 1\}$. The cover image is decomposed into an image $I_{HiSB}$ and an image $I_{LoSB}$, where the value of each pixel in the $I_{HiSB}$ image is $\{x = (b_7, \ldots, b_{n+2}, b_{n+1}, b_n) \mid 2^n \le x \le 2^8 - 1\}$.
Step 2. For convenience of description, let the $I_{HiSB}$ image be represented by the gray-scale pixels $(x_{i,j})$ in Figure 1b; the $I_{HiSB}$ image is used as the carrier for embedding the binary secret message. Let $x_c = x_{i,j}$; in Figure 1b, the white pixel values above, below, left, and right of the pixel $x_c$ are $(x_{i-1,j}, x_{i+1,j}, x_{i,j-1}, x_{i,j+1})$. For convenience of explanation, let $(x_u, x_d, x_l, x_r) = (x_{i-1,j}, x_{i+1,j}, x_{i,j-1}, x_{i,j+1})$. The vector $(x_u, x_d, x_l, x_r)$ is sorted in ascending order of pixel value, and the result is $(x_{\pi_1}, x_{\pi_2}, x_{\pi_3}, x_{\pi_4})$.
Step 3. The local complexity of the pixel $x_c$ is defined as the variance $\sigma_c^2$ of its four surrounding white pixel values $(x_{\pi_1}, x_{\pi_2}, x_{\pi_3}, x_{\pi_4})$, as shown in Equation (1):

$$\sigma_c^2 = \frac{1}{4}\sum_{t=1}^{4}\left(\mu_c - x_{\pi_t}\right)^2, \quad (1)$$

where $\mu_c = \frac{1}{4}\sum_{t=1}^{4} x_{\pi_t}$ is the average value of the white pixel values around the pixel $x_c$.
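Equation (1) can be computed directly; the following is a minimal sketch with a hypothetical helper name, using the neighbor values (19, 19, 20, 22) that appear in the worked example of Section 2.2:

```python
def local_complexity(neighbors):
    """Variance of the four neighboring pixel values (Equation (1))."""
    mu = sum(neighbors) / 4                       # mean of the neighbors
    return sum((mu - x) ** 2 for x in neighbors) / 4

# Neighbors (19, 19, 20, 22) have mean 20, so the variance is
# ((20-19)^2 + (20-19)^2 + (20-20)^2 + (20-22)^2) / 4 = 1.5
sigma2 = local_complexity([19, 19, 20, 22])
```

A small variance marks a smooth region, which the TLE method treats as a low-distortion place to embed.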
Step 4. Calculation of the predictors: Table 1 lists the three predictor methods suggested by Kumar and Jung. Each method yields a pair of predictors $(p_1, p_2)$ used to compute the prediction errors. Predictor method 1 uses the largest and smallest pixel values in the ordered vector $(x_{\pi_1}, x_{\pi_2}, x_{\pi_3}, x_{\pi_4})$, i.e., the pair of extreme-valued neighbors of pixel $x_c$: the extreme neighbors $x_{\pi_1}$ and $x_{\pi_4}$ serve as predictors $p_1$ and $p_2$. Predictor method 2 is the most comprehensive, using all four white neighbors of pixel $x_c$; each predictor $p_1$ or $p_2$ uses the average of three of the white neighbors to make its prediction. The design of predictor method 3 places its predictors between the extreme predictor method 1 and the comprehensive predictor method 2.
Step 5. Two-layer embedding (TLE) strategy: each pixel in the image $I_{HiSB}$ can embed secret data twice. In the first-layer embedding, the TLE strategy uses Equation (2) to embed the secret bit $s_1$ into the pixel $x_c$: first the prediction error $e_1 = x_c - p_1$ is calculated, and then the secret bit $s_1 \in \{0, 1\}$ is embedded as in Equation (2):

$$x_c^1 = \begin{cases} x_c + s_1, & \text{if } e_1 = 1 \\ x_c + 1, & \text{if } e_1 > 1 \\ x_c, & \text{if } e_1 < 1 \end{cases} \quad (2)$$

where $x_c^1$ is the intermediate value after the first-layer embedding.
Step 6. In the second embedding layer, the TLE method calculates the second prediction error $e_2 = x_c^1 - p_2$ and then uses Equation (3) to hide the secret bit $s_2 \in \{0, 1\}$ in $x_c^1$:

$$x_c^2 = \begin{cases} x_c^1 - s_2, & \text{if } e_2 = -1 \\ x_c^1 - 1, & \text{if } e_2 < -1 \\ x_c^1, & \text{if } e_2 > -1 \end{cases} \quad (3)$$

where $x_c^2$ is the stego pixel value after the second-layer embedding.
The TLE method targets the prediction error $e_1 = 1$: in that case, the pixel value $x_c$ of the original $I_{HiSB}$ image is increased by $s_1$. If the prediction error $e_2 = -1$, then the pixel value of the $I_{HiSB}$ image becomes the intermediate value $x_c^1$ from the first embedding layer minus $s_2$.
In the TLE strategy, the predictor $p_2$ is designed to always be greater than the predictor $p_1$, so the predictor $p_2$ used in the second-layer embedding can be modified to $p_2'$ by Equation (4):

$$p_2' = \begin{cases} p_2 + 1, & \text{if } p_2 = p_1 \\ p_2, & \text{otherwise} \end{cases} \quad (4)$$
Step 7. The pixel of $I_{HiSB}$ is converted back to binary, i.e., the original $(b_7, \ldots, b_{n+2}, b_{n+1}, b_n)$, and the pixel of $I_{LoSB}$ is converted to binary, i.e., the original $(b_{n-1}, \ldots, b_0)$; the two are combined into a stego pixel, until all pixels have been combined into a gray-scale stego image $I'$.
Step 8. Steps 2 to 7 are repeated to perform the TLE embedding method on the white pixels of $I_{HiSB}$.
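Steps 5 and 6 for a single pixel can be sketched as follows. This is a minimal illustration of Equations (2)–(4), assuming the predictors $(p_1, p_2)$ have already been computed; the function name is hypothetical, and the bookkeeping of which bits are actually consumed is omitted:

```python
def tle_embed_pixel(xc, p1, p2, s1, s2):
    """Two-layer embedding of Equations (2)-(4) for one HiSB pixel."""
    if p2 == p1:                 # Equation (4): keep p2 strictly above p1
        p2 += 1
    e1 = xc - p1                 # first-layer prediction error
    if e1 == 1:
        xc1 = xc + s1            # Equation (2): embed s1
    elif e1 > 1:
        xc1 = xc + 1             # shift only, no bit carried
    else:
        xc1 = xc
    e2 = xc1 - p2                # second-layer prediction error
    if e2 == -1:
        return xc1 - s2          # Equation (3): embed s2
    elif e2 < -1:
        return xc1 - 1           # shift only, no bit carried
    return xc1

# Section 2.2 example: xc = 20, (p1, p2) = (19, 22), s1 = s2 = 1 -> stego 20
```

Note that when both layers fire, the additions and subtractions can cancel, leaving the stego pixel equal to the cover pixel while still carrying two bits.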
Some auxiliary information is needed. The auxiliary information includes the following:
Location map (LM): To avoid the overflow/underflow problem and to losslessly extract the secret message and restore the image, Equation (5) is used to pre-process the image and generate a location map (LM), where $MAX = 2^{8-n} - 1$ and $MIN = 0$. If $x_{i,j}$ equals MAX or MIN, the entry $LM_{i,j}$ is set to 0; otherwise, $LM_{i,j}$ is set to 1:

$$x_{i,j} = \begin{cases} x_{i,j} - 1, & \text{if } x_{i,j} = MAX \\ x_{i,j} + 1, & \text{if } x_{i,j} = MIN \\ x_{i,j}, & \text{otherwise} \end{cases} \quad \text{and} \quad LM_{i,j} = \begin{cases} 0, & \text{if } x_{i,j} = MAX \text{ or } MIN \\ 1, & \text{otherwise} \end{cases} \quad (5)$$
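The per-pixel pre-processing of Equation (5) can be sketched as below (a hypothetical helper; with $n = 3$, $MAX = 2^{8-3} - 1 = 31$ and $MIN = 0$):

```python
def preprocess(x, n=3):
    """Equation (5): pull boundary HiSB values inward and record LM."""
    MAX, MIN = (1 << (8 - n)) - 1, 0
    if x == MAX:
        return x - 1, 0   # LM = 0 marks a modified boundary pixel
    if x == MIN:
        return x + 1, 0
    return x, 1           # LM = 1: pixel left untouched
```

The location map is exactly what lets the receiver tell a boundary pixel that was pulled inward apart from a pixel that was already at the inner value.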

2.2. Embedding Example

The following example illustrates the TLE method. Assume a gray-scale image $I$ with 3 × 3 pixels whose values are shown in Figure 2. First, $n = 3$ is used to divide the cover image $I$ into two planes, namely the $I_{HiSB}$ image and the $I_{LoSB}$ image. The $I_{HiSB}$ image is then used as the secret carrier. Take the pixel value $x_c$ of the $I_{HiSB}$ image, which is 20, as the carrier. The embedding process proceeds as follows:
The TLE method sorts the up, down, left, and right pixel values of the pixel $x_c$ to obtain $(x_{\pi_1}, x_{\pi_2}, x_{\pi_3}, x_{\pi_4}) = (19, 19, 20, 22)$, as shown in Figure 3.
The TLE strategy first selects which predictor method to use and then judges whether the secret bit $s_1$ can be embedded in $x_c$ according to Equation (2). Then, according to Equation (3), the TLE strategy judges whether the second-layer secret bit $s_2$ can be embedded.
Assume this example uses predictor method 1, that is, $(p_1, p_2) = (x_{\pi_1}, x_{\pi_4}) = (19, 22)$. Suppose further that $s_1 = 1$ and $s_2 = 1$. The prediction error $e_1 = x_c - p_1 = 20 - 19 = 1$ is calculated according to Equation (2); therefore, $x_c^1 = x_c + s_1 = 20 + 1 = 21$, which completes the first-layer embedding. Then, the prediction error $e_2 = x_c^1 - p_2 = 21 - 22 = -1$ is calculated according to Equation (3), giving $x_c^2 = x_c^1 - s_2 = 21 - 1 = 20$, which completes the second-layer embedding. The embedded $x_c^2$ is combined with the pixel value of $I_{LoSB}$ to complete the embedding process, as shown in Figure 4.
The TLE method thus has the possibility of keeping the pixel value unchanged after embedding. This not only reduces image distortion but also increases the embedding capacity; the corresponding extraction algorithm then recovers the secret message to achieve the purpose of information hiding.

3. Proposed Method

The pixel values of the $I_{HiSB}$ image exhibit strong correlation among neighboring pixels; that is, $x_c$ is very similar to $p_1$ and $p_2$. Once the condition that $p_1$ is always smaller than $p_2$ is established and the pixel $x_c$ is squeezed between the two predictors $(p_1, p_2)$, there is a high probability that $e_1 = x_c - p_1 \in \{0, 1\}$ or $e_1 = x_c - p_2 \in \{-1, 0\}$. For $e_1 = 1$, the pixel $x_c$ can carry one secret bit $s_1$, and then if $e_2 = x_c^1 - p_2 = -1$, it can carry another secret bit $s_2$. Even if two secret bits are carried, because $p_1 \le x_c^2 \le p_2$, the stego pixel value does not change much, and good image quality can be maintained. Accordingly, the embedding behaves somewhat like a repeated oscillation: embedding 1 bit of data moves the original $I_{HiSB}$ pixel $x_c$ toward the predictor $p_2$, and embedding another bit moves it back toward the predictor $p_1$. The proposed method exploits this repeated oscillation effect to increase the embedding capacity without increasing the distortion of the image. This research proposes an embedding strategy, called the recurrent round-trip embedding strategy (referred to as the double R-TES strategy), in which the pixel $x_c$, after embedding secret data, moves back and forth between the two predictors $(p_1, p_2)$. The double R-TES strategy performs $t$ rounds of outbound/backbound embedding, so the pixel $x_c$ can carry up to $2t$ bits of data. At the same time, $t$ location map bits are derived to record the embedding so that the secret message can be completely extracted and the pixel values of the $I_{HiSB}$ image can be exactly recovered.
In the proposed information hiding scheme, the image $I$ is divided into two parts: an un-embeddable part and an embeddable part. The un-embeddable part consists of the pixel values in the first column, first row, last column, and last row. The remaining pixels of image $I$ can be used as cover pixels to carry the secret data bits. As in the TLE method, we first divide the pixels of the quotient image into white and gray cells in a chessboard pattern, as shown in Figure 5, and the secret information is first repeatedly embedded into the gray pixel values. Following the same embedding procedure, the secret data are then repeatedly embedded into the white pixel values.

3.1. Recurrent Robust Reversible Data Hiding (Triple-RDH) Embedding Method

Input: a gray-scale cover image $I$.
Output: a gray-scale stego image $I'$.
Step 1. For the pixels in the embeddable area, the image $I$ is split into a quotient image (i.e., the higher-significant-bit-plane image $I_{HiSB}$) and a remainder image (i.e., the lower-significant-bit-plane image $I_{LoSB}$); that is, each embeddable pixel $x_{i,j}$ is divided into a quotient image pixel $x_{i,j}^q$ and a remainder image pixel $x_{i,j}^r$ using the quotient and remainder operations, respectively, by Equation (6):

$$x_{i,j}^q = x_{i,j} \ \text{div} \ 2^n; \quad x_{i,j}^r = x_{i,j} \ \text{mod} \ 2^n \quad (6)$$

for $i = 1, 2, \ldots, H$ and $j = 1, 2, \ldots, W$, where $x_{i,j} \in I$, $x_{i,j}^q \in I_{HiSB}$, and $x_{i,j}^r \in I_{LoSB}$; 'div' and 'mod' are the integer division and modulo operations, respectively. Each quotient pixel $x_{i,j}^q$ of the image $I_{HiSB}$ is labeled gray or white in a chessboard pattern.
Step 2. For each pixel $x_{i,j}^q \in I_{HiSB}$ on the chessboard image $I_{HiSB}$, let $x_c = x_{i,j}^q$. The pixels above, below, left, and right of $x_c$ have the opposite color of $x_{i,j}^q$; their values are $x_{i-1,j}^q, x_{i+1,j}^q, x_{i,j-1}^q, x_{i,j+1}^q \in I_{HiSB}$. If $x_c = x_{i,j}^q$ is gray, then $(x_{i-1,j}^q, x_{i+1,j}^q, x_{i,j-1}^q, x_{i,j+1}^q)$ are the four white neighboring pixels of $x_{i,j}^q$; if $x_c = x_{i,j}^q$ is white, they are its four gray neighboring pixels.
Step 3. The sort() function is used to sort $(x_{i-1,j}^q, x_{i+1,j}^q, x_{i,j-1}^q, x_{i,j+1}^q)$ in ascending order, and the result is $(x_{\pi_1}, x_{\pi_2}, x_{\pi_3}, x_{\pi_4})$:

$$(x_{\pi_1}, x_{\pi_2}, x_{\pi_3}, x_{\pi_4}) = \text{sort}(x_{i-1,j}^q, x_{i+1,j}^q, x_{i,j-1}^q, x_{i,j+1}^q) \quad (7)$$

The sort() mapping $\pi: \{1, 2, \ldots, n\} \to \{1, 2, \ldots, n\}$ is a unique one-to-one mapping. In the case of equal values, pixels are ordered such that $\pi(k) < \pi(l)$ if $x_{\pi_k} = x_{\pi_l}$ and $k < l$.
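The tie-breaking rule above corresponds to a stable ascending sort, which any stable sorting routine realizes directly. A small illustration with hypothetical neighbor values:

```python
# sorted() in Python is stable, so equal values keep their original order,
# matching the rule pi(k) < pi(l) when x_pi_k = x_pi_l and k < l.
neighbors = [20, 19, 22, 19]          # (x_u, x_d, x_l, x_r), hypothetical values
order = sorted(range(4), key=lambda k: neighbors[k])   # the permutation pi
ranked = [neighbors[k] for k in order]                 # (x_pi_1, ..., x_pi_4)
```

The two equal values 19 (positions 1 and 3) appear in the permutation in their original order, so the mapping $\pi$ is uniquely determined.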
Step 4. The local complexity of pixel $x_c$ is defined as the variance $\sigma_c^2$ of the four surrounding pixel values $(x_{\pi_1}, x_{\pi_2}, x_{\pi_3}, x_{\pi_4})$, as shown in Equation (8):

$$\sigma_c^2 = \frac{1}{4}\sum_{k=1}^{4}\left(\mu_c - x_{\pi_k}\right)^2, \quad (8)$$

where $\mu_c = \frac{1}{4}\sum_{k=1}^{4} x_{\pi_k}$ is the average value of the four pixel values surrounding pixel $x_c$.
Step 5. The recurrent round-trip embedding strategy (referred to as the double R-TES strategy) is applied. For convenience of explanation, assume that $x_c$ is a gray pixel at this time, with $e_{out} = x_c - p_1$ and $e_{back} = x_c^{out} - p_2$:

$$x_c^{out} = \begin{cases} x_c + s_{out}, & \text{if } e_{out} = pp_1 \\ x_c + 1, & \text{if } e_{out} > pp_1 \\ x_c, & \text{if } e_{out} < pp_1 \end{cases} \quad (9)$$

$$x_c^{back} = \begin{cases} x_c^{out} - s_{in}, & \text{if } e_{back} = pp_2 \\ x_c^{out} - 1, & \text{if } e_{back} < pp_2 \\ x_c^{out}, & \text{if } e_{back} > pp_2 \end{cases} \quad (10)$$
Step 6. $pp_1$ and $pp_2$ are two predetermined peak points. According to the experimental statistics of this research, setting $pp_1 = 1$ and $pp_2 = -1$ makes the embedding most effective.
Step 7. Next, determine whether $x_c^{back}$ can be used as a carrier to embed further information: compare the gray pixel $x_c$ of the original cover image with the gray pixel $x_c^{back}$ that has been embedded back and forth $t_g$ times (that is, at most $2t_g$ bits of information have been hidden). If $x_c^{back} = x_c$, the process goes back to Step 6, and the recurrent round-trip embedding strategy can be applied again so that $x_c^{back}$ serves as the carrier and has the opportunity to embed two more secret data bits. If $x_c^{back} \ne x_c$, go to Step 8.
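Steps 5–7 for one gray pixel can be sketched as a loop. This is a minimal sketch under the stated assumptions $pp_1 = 1$ and $pp_2 = -1$, with precomputed predictors; the function name is hypothetical, and the location map bookkeeping is omitted:

```python
def rtes_embed_pixel(xc, p1, p2, bits, t_max):
    """Recurrent round-trip embedding (Equations (9) and (10))."""
    x, k = xc, 0
    for _ in range(t_max):
        e_out = x - p1                       # outbound error, pp1 = 1
        if e_out == 1 and k < len(bits):
            x = x + bits[k]; k += 1          # embed an outbound bit
        elif e_out > 1:
            x = x + 1
        e_back = x - p2                      # backbound error, pp2 = -1
        if e_back == -1 and k < len(bits):
            x = x - bits[k]; k += 1          # embed a backbound bit
        elif e_back < -1:
            x = x - 1
        if x != xc:                          # pixel moved away: stop recursing
            break
    return x, k

# Section 3.2 example: xc = 20, (p1, p2) = (19, 22), bits "1110", t = 2
stego, used = rtes_embed_pixel(20, 19, 22, [1, 1, 1, 0], 2)   # -> (21, 4)
```

As long as each round-trip returns the pixel to its cover value, another round can be run, which is how a single pixel accumulates $2t$ bits.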
Step 8. When the last embeddable gray pixel $x_c = x_{i,j}^q$ of $I_{HiSB}$, i.e., $i = H - 1$ and $j = W - 1$, has been processed, the procedure proceeds in a zig-zag scanning manner to the first embeddable white pixel $x_{i,j}^q$, i.e., $i = 2$ and $j = 3$, of $I_{HiSB}$, until all white pixels have been processed to embed the secret data.
Step 9. The final step is to merge the quotient image pixels ($8 - n$ HiSBs) and the remainder image pixels ($n$ LoSBs) back into the gray-scale stego pixels by Equation (11) and return the stego image $I'$:

$$x_c' = x_c^q \ \text{mul} \ 2^n + x_{i,j}^r, \quad (11)$$

where 'mul' stands for the multiplication operator.
This method generates some extra information: the location map $LM_{i,j}^t$ and the parameters used for hiding data bits. The lossless compression technique of arithmetic coding is used to compress the location map to a length $l_{CLM} \ll t \times H \times W$, and the least significant bit (LSB) substitution method is used to hide it, together with the other additional information, at the end of the image. The location map $LM_{i,j}^t$ serves the following two cases:
CASE 1: When the recursive parameter $t = 1$, the location map $LM_{i,j}^1$ is used to avoid overflow/underflow problems and to losslessly extract the secret message and restore the image:

$$x_{i,j} = \begin{cases} x_{i,j} - 1, & \text{if } x_{i,j} = MAX \\ x_{i,j} + 1, & \text{if } x_{i,j} = MIN \\ x_{i,j}, & \text{otherwise} \end{cases} \quad \text{and} \quad LM_{i,j}^1 = \begin{cases} 0, & \text{if } x_{i,j} = MAX \text{ or } MIN \\ 1, & \text{otherwise} \end{cases} \quad (12)$$
CASE 2: When the recursive parameter $t > 1$, the location map $LM_{i,j}^t$ allows the receiving end to recover the image smoothly during extraction; for every pixel, the number of times the triple-RDH embedding has been carried out on it must be recorded. Each round that hides 2 bits of data is recorded with $LM_{i,j}^t = 1$; otherwise, $LM_{i,j}^t = 0$.
In addition to the location map, the parameters $(n, t_g, t_w, S_{end})$ are required as additional information, where $n \in \{1, 2, 3, 4\}$, encoded with two bits, and $t_g, t_w \in \{1, 2, 3, 4\}$, encoded with four bits for the pair $(t_g, t_w)$. The index value $S_{end}$ is recorded to mark the boundary between the extra information and the secret message $S$; the position of the last embedded pixel value is found when extracting the extra information.
The extra information items and their required sizes are shown in Table 2. The overhead information is embedded at the end of the cover image by exploiting the LSB method. The size of the extra information is $6 + \log_2(H \times W) + \sum_k (t_g + t_w)\, l_{clm}^k$ bits, and the LSBs that have been replaced by the extra information are represented as $H_{LSB}$.

3.2. Embedding Example

We take a cover image with 12 pixels, as shown in Figure 6, to illustrate the data embedding procedure. The cover image is split into two images, $I_{HiSB}$ and $I_{LoSB}$, through Equation (6). The quotient pixels in the $I_{HiSB}$ image are divided into gray and white pixels in a chessboard pattern. Suppose there is a secret bitstream $S = s_1 s_2 s_3 s_4 s_5 s_6 s_7 s_8$ = "1110 1110" and predictor method 1 is used in this example.
First, the recurrent round-trip embedding strategy (double R-TES) for the gray pixel is used to calculate the prediction error $e_{out} = x_{2,2} - p_1 = 20 - 19 = 1$. According to Equation (9), the first secret bit $s_1 = 1$ is taken from the secret message $S$, and $x_{2,2}^{out} = 20 + 1 = 21$ is calculated, completing the embedding of the outbound value. The backbound error $e_{back} = x_{2,2}^{out} - p_2 = 21 - 22 = -1$ is then calculated. According to Equation (10), the secret bit $s_2 = 1$ is embedded to obtain the backbound value $x_{2,2}^{back} = 21 - 1 = 20$. It is observed that $x_{2,2}^{back} = x_{2,2}$, so according to the double R-TES embedding strategy, the gray pixel $x_{2,2}$ can continue embedding more secret data bits. At this point, the recursive parameter is $t_g = 2$. According to Equation (9), the prediction error $e_{out} = x_{2,2}^{back} - p_1 = 20 - 19 = 1$, and $s_3 = 1$ is taken from the message $S$ again, giving $x_{2,2}^{out} = 20 + 1 = 21$. According to Equation (10), $e_{back} = x_{2,2}^{out} - p_2 = 21 - 22 = -1$. After $s_4 = 0$ is embedded into $x_{2,2}^{out}$, the result $x_{2,2}^{back} = 21 - 0 = 21$ is obtained, completing the backbound embedding.
Since x_2,2^back ≠ x_2,2 and, in this example, only the two pixels x_2,2 and x_2,3 are embeddable, Step 8 is executed to start embedding the message into the other embeddable white pixel x_2,3. Table 3 shows the embedding process in which the proposed triple-RDH method embeds the remaining data s_5 s_6 s_7 s_8 into the white pixel.

3.3. Extracting the Message and Restoring the Original Image

At this stage, the receiver uses the instructions in the additional information to extract the secret message embedded in the stego image I′ by reversing the embedding procedure, thereby recovering the original image I.
As in the embedding procedure of the recurrent robust reversible data hiding (triple-RDH) method, the stego image (I′) is divided into embeddable and un-embeddable areas: the pixel values in the first column, first row, last column, and last row of the stego image are un-embeddable. For each embeddable stego pixel y_i, the quotient operation is y_i^q = y_i div 2^n.
As mentioned above, since the hiding procedure must be reversed to extract the secret message and recover the original pixel values, the message bits are first extracted from the white pixels with the recursive parameter t_w until all secret bits hidden in the white pixels are retrieved and the white pixel values are recovered; then the recursive parameter t_g is used to extract the message from the gray pixels and recover their values. In the following extraction equations, we take the recursive parameters t_w = 2 and t_g = 2 as an illustration.
The receiver will use the following Equations (13) and (14) in the information extraction and recovery procedure of the pixel value. When t w = 2 , the white pixel value will be processed first.
x_c^back = y_i^q + 1 and s_4 = none, if e_back < −2
x_c^back = y_i^q + 1 and s_4 = 1,    if e_back = −2
x_c^back = y_i^q     and s_4 = 0,    if e_back = −1
x_c^back = y_i^q     and s_4 = none, if e_back > −1        (13)

x_c^out = x_c^back     and s_3 = none, if e_out < 1
x_c^out = x_c^back     and s_3 = 0,    if e_out = 1
x_c^out = x_c^back − 1 and s_3 = 1,    if e_out = 2
x_c^out = x_c^back − 1 and s_3 = none, if e_out > 2        (14)
Then, when t_w = 1, the message extraction and pixel-recovery procedure uses the following Equations (15) and (16) to recover the white pixel values and extract the secret message until all white pixel values are restored and the secret bits are extracted.
x_c^back = x_c^out + 1 and s_2 = none, if e_back < −2
x_c^back = x_c^out + 1 and s_2 = 1,    if e_back = −2
x_c^back = x_c^out     and s_2 = 0,    if e_back = −1
x_c^back = x_c^out     and s_2 = none, if e_back > −1        (15)

x_c = x_c^back     and s_1 = none, if e_out < 1
x_c = x_c^back     and s_1 = 0,    if e_out = 1
x_c = x_c^back − 1 and s_1 = 1,    if e_out = 2
x_c = x_c^back − 1 and s_1 = none, if e_out > 2        (16)
After the extraction and recovery of the white pixel values are completed, the same procedure is applied to the gray pixels: Equations (13) and (14) are used with t_g = 2, and then Equations (15) and (16) are used with t_g = 1, to restore the gray pixel values and extract the remaining secret bits in sequence.
After extracting the hidden information, the location map (LM) is separated and decompressed into the original form, and the obtained image is reprocessed using Equation (17) to obtain the cover image.
x_c = x_c + 1, if x_c = MAX − 1 and LM(i, j) = 0
x_c = x_c − 1, if x_c = MIN + 1 and LM(i, j) = 0
x_c = x_c,     otherwise        (17)
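Equation (17) can be sketched as a small post-processing step. Note that, following the notation in the text, LM(i, j) = 0 here marks a pixel that was pre-shifted to avoid overflow/underflow; the helper name and the list-of-lists image representation are our assumptions.

```python
def restore_boundaries(img, lm, max_val=255, min_val=0):
    """Undo the overflow/underflow pre-shift using the location map.

    Sketch of Equation (17): pixels at MAX-1 or MIN+1 whose location
    map entry is 0 are shifted back to MAX or MIN, respectively;
    all other pixels are left unchanged.
    """
    out = [row[:] for row in img]
    for i, row in enumerate(out):
        for j, x in enumerate(row):
            if x == max_val - 1 and lm[i][j] == 0:
                out[i][j] = x + 1    # restore saturated maximum
            elif x == min_val + 1 and lm[i][j] == 0:
                out[i][j] = x - 1    # restore saturated minimum
    return out

# Toy row: 254 was pre-shifted (LM = 0), 100 is untouched (LM = 1).
print(restore_boundaries([[254, 100]], [[0, 1]]))  # [[255, 100]]
```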

3.4. Extraction Example

The extraction process is exactly the reverse of the embedding process, so the message extraction and pixel-recovery procedures only need to be performed in reverse order. The following is an example of the extraction process:
First, take the stego image in Figure 7 and use Equation (6) to split it into two images, I_HiSB and I_LoSB. The embeddable pixels in the image I_HiSB are divided into gray pixels and white pixels in a chessboard manner. Extraction starts with the white pixel when t_w = 2: the backbound prediction error is e_back = x_2,3^back − p_2 = 21 − 22 = −1; according to Equation (13), s_8 = 0 and x_2,3^back is unchanged, so x_2,3^out = 21. Next, the outbound prediction error is e_out = x_2,3^out − p_1 = 21 − 19 = 2; according to Equation (14), s_7 = 1, and 1 is subtracted from x_2,3^out to restore the pixel value, so x_2,3^back = 21 − 1 = 20. This completes the extraction for the outbound trip. Then, when the recursive parameter t_w = 1, the backbound prediction error is calculated again as e_back = x_2,3^back − p_2 = 20 − 22 = −2; according to Equation (15), s_6 = 1, and 1 is added to x_2,3^back to restore the pixel value, so x_2,3^out = 20 + 1 = 21. Finally, the outbound prediction error is e_out = x_2,3^out − p_1 = 21 − 19 = 2; according to Equation (16), s_5 = 1, and 1 is subtracted from x_2,3^out to restore the pixel value, so x_2,3 = 21 − 1 = 20. The secret bitstream extracted from the white pixel is thus s_5 s_6 s_7 s_8 = “1110”, and the pixel value is also restored.
In this example, only the two pixels x_2,2 and x_2,3 are embeddable, so the next step is to extract the information from the gray pixel x_2,2 with the recursive parameter t_g = 2. To present the extraction process concisely, Table 4 shows the changes in the gray pixel values during extraction of the secret bitstream s_1 s_2 s_3 s_4 = “1110”. Figure 7 illustrates the whole extraction and recovery process.
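The extraction walk-through above can be mirrored by inverting the two trips in the opposite order (backbound first, then outbound). The following sketch is our illustration of Equations (13)–(16) for a single pixel; the helper name `round_trip_extract` and the peak-point defaults pp1 = 1, pp2 = −1 are assumptions matching the example.

```python
def round_trip_extract(y, p1, p2, pp1=1, pp2=-1):
    """Inverse of one round trip: recover the pixel and up to two bits.

    Illustrative sketch mirroring Equations (13)-(16): the backbound
    bit is recovered first, then the outbound bit. Returns
    (restored_pixel, [s_out, s_back]); None marks shift-only cases.
    """
    s_back = None
    e_back = y - p2
    if e_back == pp2:             # e_back = -1: bit 0, no change
        s_back = 0
    elif e_back == pp2 - 1:       # e_back = -2: bit 1, undo the -1
        s_back, y = 1, y + 1
    elif e_back < pp2 - 1:        # undo the left shift
        y += 1
    s_out = None
    e_out = y - p1
    if e_out == pp1:              # e_out = 1: bit 0, no change
        s_out = 0
    elif e_out == pp1 + 1:        # e_out = 2: bit 1, undo the +1
        s_out, y = 1, y - 1
    elif e_out > pp1 + 1:         # undo the right shift
        y -= 1
    return y, [s_out, s_back]

# White-pixel example: stego value 21, p1 = 19, p2 = 22, t_w = 2 rounds.
y, bits = 21, []
for _ in range(2):
    y, sb = round_trip_extract(y, 19, 22)
    bits = sb + bits              # later rounds recover earlier bits
print(y, bits)  # 20 [1, 1, 1, 0]
```

Each call recovers the most recently embedded pair of bits first, so the bits from later rounds are prepended, restoring the original order s_5 s_6 s_7 s_8 = "1110".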

4. Experimental Results

Our proposed recurrent robust reversible data hiding (triple-RDH) method was implemented on an Intel Core i7 machine with a solid-state disk and 16 GB of memory, running Windows 10, with MATLAB as the development tool. Ten 512 × 512 gray-scale images (“Lena”, “Baboon”, “Barbara”, “Boat”, “Elaine”, “F16”, “House”, “Pepper”, “Sailboat”, and “Tiffany”) are used as the test images, as shown in Figure 8. The secret data are a binary string generated by a random function.
The following two evaluation metrics are used in our experiments: embedding capacity (EC) and peak signal-to-noise ratio (PSNR). EC represents the amount of secret data that can be embedded in the image, usually measured in bits; the maximum embedding capacity is the largest amount of data that can be embedded in an image. The PSNR measures, in decibels, the peak signal-to-noise ratio between two images; the higher the PSNR, the harder the stego image is to distinguish from the original. The PSNR is calculated as PSNR = 10 × log10(MAX² / MSE), where MAX is the maximum possible pixel value of the image. The method in this article mainly operates on gray-scale images, so MAX is 255. MSE is the mean square error, calculated as MSE = (1 / (H × W)) Σ_{i=0}^{H−1} Σ_{j=0}^{W−1} (I(i, j) − I′(i, j))², where H is the height of the image, W is the width, (i, j) is the pixel position, I is the cover image, and I′ is the stego image.
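The PSNR metric can be computed directly from the definitions above. A minimal sketch (using NumPy; the function name is ours):

```python
import numpy as np

def psnr(cover, stego, max_val=255):
    """PSNR between a cover image and its stego version, in dB."""
    cover = np.asarray(cover, dtype=np.float64)
    stego = np.asarray(stego, dtype=np.float64)
    mse = np.mean((cover - stego) ** 2)     # mean square error
    if mse == 0:
        return float("inf")                 # identical images
    return 10 * np.log10(max_val ** 2 / mse)

# Toy 2x2 example: every pixel off by 1 -> MSE = 1, PSNR ~ 48.13 dB.
c = [[100, 101], [102, 103]]
s = [[101, 102], [103, 104]]
print(round(psnr(c, s), 2))  # 48.13
```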
Figure 9 presents the experimental results of our proposed recurrent robust reversible data hiding (triple-RDH) method. Among the predictor methods, predictor method 2 shows better experimental results than the others, so the proposed triple-RDH method uses predictor method 2 in all experiments. Additionally, we compare with SBDE (significant-bit-difference expansion) [26], TLE (two-layer embedding) [27], and IPVO (improved pixel-value ordering) [22], because these three methods and the proposed triple-RDH method are all based on the prediction error expansion (PEE) technique.
Figure 9 clearly shows that the EC of our proposed method is significantly larger than those of the other three methods. In terms of PSNR, except for the IPVO method, the proposed triple-RDH method is also slightly higher than the TLE and SBDE methods.
Take the “Lena” image in Figure 9a and Table 5 as an example: the maximum EC of the TLE method is 214,116 bits, that of the SBDE method is 179,793 bits, and that of the IPVO method is 38,755 bits, whereas the maximum EC of our proposed triple-RDH method reaches 310,732 bits, which is 1.45 times and 1.73 times that of the TLE and SBDE methods, respectively, and about eight times that of the IPVO method.
When the “Lena” image in Figure 9a has an EC of 20,000 bits, the PSNR values of the TLE method, the SBDE method, the IPVO method, and the proposed triple-RDH method are 43.72, 42.41, 54.91, and 43.94 dB, respectively. When the embedded information is increased to 40,000 bits, the PSNR values of the TLE method, the SBDE method, and our proposed triple-RDH method are 40.52, 39.19, and 40.81 dB, respectively; at this point, the IPVO method is not capable of hiding 40,000 bits in the “Lena” image. If the EC is further increased to 60,000 bits, the PSNR values of the TLE method, the SBDE method, and our proposed triple-RDH method are 38.55, 37.28, and 38.98 dB, respectively. Our triple-RDH method is therefore higher than the TLE and SBDE methods in terms of PSNR, and while the EC of the IPVO method cannot reach 40,000 bits, our method can easily embed more than 60,000 bits, giving it a great advantage in embedding capacity.
Consider a more complex image, such as the “Baboon” image in Figure 9b and Table 5. The maximum EC of the TLE method is 117,025 bits, that of the SBDE method is 105,149 bits, and that of the IPVO method is 13,736 bits, while the maximum EC of our proposed triple-RDH method is 171,742 bits, much higher than the other three methods: 1.46 times and 1.63 times that of the TLE and SBDE methods, respectively, and about 12 times that of the IPVO method.
When 20,000 bits are embedded in the “Baboon” image in Figure 9b, the PSNR value of the TLE method is 39.1 dB, that of the SBDE method is 36.39 dB, and that of our proposed triple-RDH method is 39.8 dB; the IPVO method is incapable of carrying 20,000 bits. When the EC is increased to 40,000 bits, the PSNR values of the TLE method, the SBDE method, and the proposed triple-RDH method are 36.73, 34.42, and 37.6 dB, respectively. When the EC is increased to 60,000 bits, the corresponding PSNR values are 35.01, 33.16, and 36.08 dB. Again, the PSNR value of our proposed method is higher than those of the TLE and SBDE methods. For the “Baboon” image, the EC of the IPVO method is less than 20,000 bits, so its PSNR cannot be compared.
Consider the average maximum EC and image quality (PSNR) of the ten test images in Figure 9a–j and Table 5. The average maximum EC of the TLE method is 179,587 bits, that of the SBDE method is 149,298 bits, and that of the IPVO method is 33,465 bits, while the average maximum EC of our proposed triple-RDH method is 261,466 bits. In every image in Figure 9, the difference in embedding capacity is obvious: our method has a great advantage in embedding capacity, and its PSNR curve tends to flatten as the embedding capacity grows. In particular, the PSNR of the IPVO method presents a steeper downward trend than those of the other methods.
In Figure 9a–j, when the EC is 20,000 bits, the average PSNR values of the TLE, SBDE, IPVO, and proposed triple-RDH methods are 42.39, 40.31, 54.34, and 42.89 dB, respectively. When the EC is 40,000 bits, the average PSNR values are 39.24, 37.34, 52.71, and 39.89 dB, respectively. When the EC is 60,000 bits, the average PSNR values of the TLE, SBDE, and proposed triple-RDH methods are 37.35, 35.47, and 38.04 dB, respectively; the IPVO method is incapable of hiding this amount. The PSNR value of the proposed triple-RDH method is slightly higher than the average PSNR values of the TLE and SBDE methods.
From Table 5, our proposed triple-RDH method achieves a higher EC with predictor method 2 than with predictor methods 1 and 3. In the “Lena” image, the EC with predictor method 2 is 4931 bits higher than with predictor method 1 and 7714 bits higher than with predictor method 3. Table 5 also compares our proposed triple-RDH method and the TLE method on the different predictor methods in terms of the EC and PSNR metrics. The maximum EC of our proposed method is higher than that of the TLE method for every predictor method. At the maximum EC, the PSNR value of our proposed triple-RDH method is only slightly lower than that of the TLE method: by 1.31 dB with predictor method 2, by 1.12 dB with predictor method 3, and by 1.1 dB with predictor method 1.
Table 6 shows the results in terms of EC and PSNR using our triple-RDH method for different n and different peak points. It can be observed from the experimental results that the PSNR can reach 46.51 dB when n = 1. When n = 4, the EC can be as high as 378,971 bits, but the PSNR drops to 24.85 dB; accordingly, we do not recommend setting n to 4. With n = 3, the EC can reach 324,055 bits while the PSNR is maintained at 31.11 dB, achieving high capacity with good image quality, so we recommend setting n to 3. Among the peak points, the EC with (1, −1) is higher than those with (0, −1) and (1, 0) in most cases, so we recommend the peak point (1, −1) as the best choice. The peak points (0, −1) and (1, 0) perform differently on complex and smooth images. For the smooth image “F16”, when using predictor method 2 and n = 3, the difference in EC between peak points (1, −1) and (0, −1) is 6644 bits, and between (1, −1) and (1, 0) it is 58,464 bits. For the complex image “Baboon”, the difference in EC between peak points (1, −1) and (0, −1) is 25,045 bits, and between (1, −1) and (1, 0) it is 20,092 bits.
From the results, we find that using (1, 0) as the peak point is more advantageous than (0, −1) for a complex image, whereas for a smooth image, (0, −1) yields a better embedding capacity. Therefore, different peak points can be chosen according to whether the carrier is a complex or a smooth image, and according to the actual application requirements, to balance embedding capacity and image quality.
Observing the results in Table 6, the performance is best when n = 3, pp_1 = 1, and pp_2 = −1, so we set n = 3 and the peak point to (1, −1) as the experimental parameters for Table 7. Table 7 compares the performance of our triple-RDH method when setting t_g and t_w from 1 to 5 for both the complex image “Baboon” and the smooth image “F16”. From t_g = 1, t_w = 1 to t_g = 2, t_w = 2, the embedding capacity EC increases by 0.46 times; from t_g = 2, t_w = 2 to t_g = 3, t_w = 3, by 0.15 times; from t_g = 3, t_w = 3 to t_g = 4, t_w = 4, by 0.06 times; and from t_g = 4, t_w = 4 to t_g = 5, t_w = 5, by 0.02 times. The higher the t_g and t_w values, the smaller the additional EC, so the benefit diminishes. Accordingly, we recommend the parameters t_g = 4 and t_w = 4.
The recursive parameters t_g and t_w can also be adjusted to support different embedding needs. We experimented with t_g ∈ {1, 2, 3, 4, 5} and t_w ∈ {1, 2, 3, 4, 5} and observed that the proposed triple-RDH method is stable in terms of image quality, maintaining the PSNR above 30 dB. Different combinations of t_g and t_w allow adaptation to different situations. Table 8 shows the performance of our triple-RDH method for t_g ∈ {1, 2, 3, 4, 5} and t_w ∈ {1, 2, 3, 4, 5}. Adjusting t_g and t_w enables finer trade-offs between embedding capacity and image quality, providing more embedding options for various application needs.

5. Conclusions

In this research, we propose a recurrent robust reversible data hiding (triple-RDH) method. The triple-RDH method embeds the secret message into the quotient pixel values of the gray-scale image, which can resist unintentional attacks and increases robustness. We developed a recurrent round-trip embedding strategy (referred to as double R-TES) that shifts a pixel value to the right and then to the left after embedding the secret bits; the resulting pixel invariance enables repeated embedding and effectively increases the embedding capacity. The triple-RDH method divides the quotient image into gray pixels and white pixels and adjusts the recursive parameters t_g and t_w to effectively improve the embedding capacity while maintaining good image quality.
The experimental results show that the proposed method has a great advantage in embedding capacity, and its PSNR value tends to flatten as the embedding capacity increases. The average maximum ECs of the TLE, SBDE, IPVO, and proposed triple-RDH methods are 179,587, 149,298, 33,465, and 261,466 bits, respectively. The results show that our triple-RDH method is superior to the TLE and SBDE methods in terms of both embedding capacity and image quality; its average maximum embedding capacity of 261,466 bits is about eight times that of the IPVO method. It can also be seen from Figure 9a–j that the average PSNR value of the proposed triple-RDH method is higher than those of the TLE and SBDE methods.
Clearly, the image quality of the IPVO method is excellent among RDH methods: when 20,000 bits are hidden, the PSNR of the “Lena” image for the IPVO method is 54.91 dB. This matches the related literature, which emphasizes that the IPVO method focuses on good image quality but has limited embedding ability. After hiding 20,000 bits, our proposed method reaches a PSNR value of 43.94 dB and an SSIM value of 0.9779. Although this PSNR is about 10 dB lower than that of the IPVO method, the high SSIM value indicates that the stego image closely matches the original image, so a third party will not be aware of any distortion and will not suspect the existence of secret information. Therefore, the proposed data hiding method helps to transmit data seamlessly and securely through communication channels.

Author Contributions

Conceptualization, C.-F.L.; methodology, C.-F.L.; software, H.-Z.W.; validation, C.-F.L. and H.-Z.W.; formal analysis, C.-F.L.; investigation, C.-F.L. and H.-Z.W.; resources, C.-F.L.; data curation, H.-Z.W.; writing—original draft preparation, H.-Z.W.; writing—review and editing, C.-F.L.; visualization, C.-F.L.; supervision, C.-F.L.; project administration, C.-F.L.; funding acquisition, C.-F.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by the Ministry of Science and Technology, Taiwan, Republic of China, under Grant MOST 109-2221-E-324-022.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We are grateful to all the reviewers for their valuable comments, which improved the quality of this research work.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Gayialis, S.P.; Konstantakopoulos, G.D.; Papadopoulos, G.A.; Kechagias, E.; Ponis, S.T. Developing an advanced cloud-based vehicle routing and scheduling system for urban freight transportation. IFIP Int. Conf. Adv. Prod. Manag. Syst. 2018, 536, 190–197.
2. Khanna, A.; Kaur, S. Internet of Things (IoT), applications and challenges: A comprehensive review. Wirel. Pers. Commun. 2020, 114, 1687–1762.
3. Kechagias, E.P.; Gayialis, S.P.; Konstantakopoulos, G.D.; Papadopoulos, G.A. An application of an urban freight transportation system for reduced environmental emissions. Systems 2020, 8, 49.
4. Lockl, J.; Schlatt, V.; Schweizer, A.; Urbach, N.; Harth, N. Toward trust in Internet of Things ecosystems: Design principles for blockchain-based IoT applications. IEEE Trans. Eng. Manag. 2020, 67, 1256–1270.
5. Singh, R.P.; Javaid, M.; Haleem, A.; Suman, R. Internet of things (IoT) applications to fight against COVID-19 pandemic. Diabetes Metab. Syndr. Clin. Res. Rev. 2020, 14, 521–524.
6. Choo, K.K.R.; Yan, Z.; Meng, W. Blockchain in industrial IoT applications: Security and privacy advances, challenges, and opportunities. IEEE Trans. Ind. Inform. 2020, 16, 4119–4121.
7. Gayialis, S.P.; Kechagias, E.P.; Konstantakopoulos, G.D.; Papadopoulos, G.A.; Tatsiopoulos, I.P. An approach for creating a blockchain platform for labeling and tracing wines and spirits. In IFIP International Conference on Advances in Production Management Systems (APMS 2021); Springer: Cham, Switzerland, 2021; Volume 633, pp. 81–89.
8. Shen, J.J.; Lee, C.F.; Hsu, F.W.; Agrawal, S.A. Self-embedding fragile image authentication based on singular value decomposition. Multimed. Tools Appl. 2020, 79.
9. Kamran, A.K.; Malik, S.A. A high capacity reversible watermarking approach for authenticating images: Exploiting down-sampling, histogram processing, and block selection. Inf. Sci. 2014, 256, 162–183.
10. Chen, X.; Sun, X.; Sun, H.; Zhou, Z.; Zhang, J. Reversible watermarking method based on asymmetric-histogram shifting of prediction errors. J. Syst. Softw. 2013, 86, 2620–2626.
11. Tian, J. Reversible data embedding using a difference expansion. IEEE Trans. Circuits Syst. Video Technol. 2003, 13, 890–896.
12. Ni, Z.C.; Shi, Y.Q.; Ansari, N.; Su, W. Reversible data hiding. IEEE Trans. Circuits Syst. Video Technol. 2006, 16, 354–362.
13. Lee, C.F.; Shen, J.J.; Wu, Y.J.; Agrawal, S. PVO-based reversible data hiding exploiting two-layer embedding for enhancing image fidelity. Symmetry 2020, 12, 1164.
14. Lee, C.F.; Shen, J.J.; Agrawal, S.; Tseng, Y.J.; Kao, Y.C. A generalized pixel value ordering data hiding with adaptive embedding capability. J. Supercomput. 2020, 76, 2683–2714.
15. Thodi, D.M.; Rodriguez, J.J. Expansion embedding techniques for reversible watermarking. IEEE Trans. Image Process. 2007, 16, 721–730.
16. Lee, C.F.; Weng, C.Y.; Kao, C.Y. Reversible data hiding using Lagrange interpolation for prediction-error expansion embedding. Soft Comput. 2019, 23, 9719–9731.
17. Ou, B.; Li, X.L.; Zhao, Y.; Ni, R.R.; Shi, Y.Q. Pairwise prediction-error expansion for efficient reversible data hiding. IEEE Trans. Image Process. 2013, 22, 5010–5021.
18. Li, X.L.; Li, B.; Yang, B.; Zeng, T. General framework to histogram-shifting-based reversible data hiding. IEEE Trans. Image Process. 2013, 22, 2181–2191.
19. Li, X.L.; Zhang, W.M.; Gui, X.L.; Yang, B. A novel reversible data hiding scheme based on two-dimensional difference-histogram modification. IEEE Trans. Inf. Forensics Secur. 2013, 8, 1091–1100.
20. Hong, W. Adaptive reversible data hiding method based on error energy control and histogram shifting. Opt. Commun. 2012, 285, 101–108.
21. Hu, Y.J.; Lee, H.K.; Li, J.W. DE-based reversible data hiding with improved overflow location map. IEEE Trans. Circuits Syst. Video Technol. 2009, 19, 250–260.
22. Peng, F.; Li, X.; Yang, B. Improved PVO-based reversible data hiding. Digit. Signal Process. 2014, 25, 255–265.
23. Kumar, R.; Jung, K.H. Enhanced pairwise IPVO-based reversible data hiding scheme using rhombus context. Inf. Sci. 2020, 536, 101–119.
24. Kumar, R.; Kumar, N.; Jung, K.H. I-PVO based high capacity reversible data hiding using bin reservation strategy. Multimed. Tools Appl. 2020, 79, 22635–22651.
25. Li, S.; Hu, L.; Sun, C.; Chi, L.; Li, T.; Li, H. A reversible data hiding algorithm based on prediction error with large amounts of data hiding in spatial domain. IEEE Access 2020, 8, 214732–214741.
26. Wang, W.; Ye, J.; Wang, T.; Wang, W. Reversible data hiding scheme based on significant-bit-difference expansion. IET Image Process. 2017, 11, 1002–1014.
27. Kumar, R.; Jung, K.H. Robust reversible data hiding scheme based on two-layer embedding strategy. Inf. Sci. 2020, 512, 96–107.
Figure 1. Image scanning and embedding mode. (a) A zig-zag scan; (b) the gray pixel x_c and its surrounding four white pixels.
Figure 2. Pixel is divided into two planes, the I_HiSB image and the I_LoSB image.
Figure 3. Sorted pixel values.
Figure 4. Stego image I′ after embedding.
Figure 5. Image I is divided into I_HiSB and I_LoSB.
Figure 6. The example of the secret data embedding process.
Figure 7. Example illustration of message extraction and pixel recovery.
Figure 8. Ten test images: (a) Lena; (b) Baboon; (c) Barbara; (d) Boat; (e) Elaine; (f) F16; (g) House; (h) Pepper; (i) Sailboat; (j) Tiffany.
Figure 9. Comparison of PSNRs and ECs between the proposed triple-RDH, TLE, SBDE, and IPVO methods using ten test images: (a) Lena; (b) Baboon; (c) Barbara; (d) Boat; (e) Elaine; (f) F16; (g) House; (h) Pepper; (i) Sailboat; (j) Tiffany.
Table 1. Three different predictor methods.

Predictor | Method 1 | Method 2 | Method 3
p_1 | x_π1 | (x_π1 + x_π2 + x_π3) / 3 | (x_π1 + x_π2) / 2
p_2 | x_π4 | (x_π2 + x_π3 + x_π4) / 3 | (x_π3 + x_π4) / 2
Table 2. The extra information items and required bit sizes.

Extra Information Item | Required Bit Size (bits)
n | 2
t_g | 2
t_w | 2
Length of LM_(i,j)^t, where t ∈ {t_g, t_w} | ⌈log2(H × W)⌉ × (t_g + t_w)
Compressed location maps | l_clm^k, where k = 1, 2, …, (t_g + t_w)
Index value S_end | ⌈log2(H × W)⌉
Table 3. White pixel value and the changes of various values in the embedding procedure.

Recursive parameter t_w | Original pixel | Predictor | e | S | Stego pixel
t_w = 1 | x_2,3 = 20 | p_1 = 19 | e_out = 1 | s_5 = 1 | x_2,3^out = 21
t_w = 1 | x_2,3^out = 21 | p_2 = 22 | e_back = −1 | s_6 = 1 | x_2,3^back = 20
t_w = 2 | x_2,3^back = 20 | p_1 = 19 | e_out = 1 | s_7 = 1 | x_2,3^out = 21
t_w = 2 | x_2,3^out = 21 | p_2 = 22 | e_back = −1 | s_8 = 0 | x_2,3^back = 21
Table 4. Gray pixel value and changes in various values in the extraction process.

Parameter t_g | Stego pixel | Predictor | Prediction error e | Secret bit | Restored pixel
t_g = 2 | x_2,2^back = 21 | p_2 = 22 | e_back = −1 | s_4 = 0 | x_2,2^out = 21
t_g = 2 | x_2,2^out = 21 | p_1 = 19 | e_out = 2 | s_3 = 1 | x_2,2^back = 20
t_g = 1 | x_2,2^back = 20 | p_2 = 22 | e_back = −2 | s_2 = 1 | x_2,2^out = 21
t_g = 1 | x_2,2^out = 21 | p_1 = 19 | e_out = 2 | s_1 = 1 | x_2,2^back = 20
Table 5. The comparison of embedding EC and PSNR values by the proposed triple-RDH method and the TLE method with 10 standard images on the three predictor methods.

Image | Method | Predictor method 1 (EC / PSNR) | Predictor method 2 (EC / PSNR) | Predictor method 3 (EC / PSNR)
Lena | proposed | 305,801 / 31.28 | 310,732 / 31.04 | 303,018 / 31.15
Lena | TLE | 203,243 / 32.38 | 214,166 / 32.35 | 207,599 / 32.27
Baboon | proposed | 159,911 / 31.92 | 171,742 / 30.61 | 164,421 / 31.10
Baboon | TLE | 105,668 / 32.53 | 117,025 / 31.16 | 111,388 / 31.65
Barbara | proposed | 234,763 / 31.78 | 241,413 / 30.91 | 232,491 / 31.30
Barbara | TLE | 155,623 / 32.71 | 165,488 / 31.82 | 158,707 / 32.21
Boat | proposed | 244,807 / 31.34 | 258,190 / 30.92 | 246,237 / 31.02
Boat | TLE | 161,138 / 32.16 | 178,000 / 31.93 | 169,069 / 31.91
Elaine | proposed | 229,595 / 30.94 | 234,097 / 30.79 | 221,368 / 30.76
Elaine | TLE | 151,255 / 31.63 | 161,605 / 31.67 | 151,892 / 31.50
F16 | proposed | 317,728 / 31.43 | 324,187 / 31.12 | 316,406 / 31.23
F16 | TLE | 211,024 / 32.67 | 221,563 / 32.48 | 215,128 / 32.51
House | proposed | 277,561 / 31.60 | 300,106 / 31.01 | 289,155 / 31.18
House | TLE | 184,380 / 32.68 | 203,994 / 32.17 | 195,964 / 32.29
Pepper | proposed | 226,675 / 31.78 | 227,657 / 31.64 | 220,866 / 31.65
Pepper | TLE | 149,081 / 32.66 | 157,962 / 32.72 | 151,723 / 32.60
Sailboat | proposed | 228,441 / 31.40 | 240,060 / 30.82 | 228,850 / 31.00
Sailboat | TLE | 149,924 / 32.20 | 165,010 / 31.71 | 156,430 / 31.81
Tiffany | proposed | 310,258 / 31.23 | 306,476 / 31.13 | 301,025 / 31.12
Tiffany | TLE | 205,499 / 32.33 | 211,059 / 32.42 | 206,334 / 32.29
Table 6. Comparison of the performance of our triple-RDH method on images “Baboon” and “F16” with different n and different peak points.
Table 6. Comparison of the performance of our triple-RDH method on images “Baboon” and “F16” with different n and different peak points.
| n | Peak points (pp_1, pp_2) | Baboon (1) | Baboon (2) | Baboon (3) | F16 (1) | F16 (2) | F16 (3) |
|---|---|---|---|---|---|---|---|
| 1 | (0, −1) | 38,239 / 46.34 | 53,738 / 42.94 | 49,517 / 44.49 | 151,726 / 44.29 | 171,459 / 42.84 | 158,831 / 43.49 |
| 1 | (1, −1) | 43,496 / 45.93 | 56,084 / 42.78 | 52,291 / 44.17 | 182,186 / 44.20 | 204,695 / 42.99 | 190,126 / 43.51 |
| 1 | (1, 0) | 39,552 / 46.51 | 55,228 / 42.95 | 50,813 / 44.58 | 162,201 / 44.85 | 179,161 / 42.88 | 175,669 / 43.76 |
| 2 | (0, −1) | 75,795 / 38.97 | 91,105 / 36.50 | 87,790 / 37.60 | 223,235 / 37.55 | 245,731 / 36.74 | 229,137 / 37.11 |
| 2 | (1, −1) | 89,083 / 38.47 | 103,796 / 36.37 | 97,735 / 37.28 | 260,492 / 37.61 | 273,462 / 36.96 | 260,942 / 37.24 |
| 2 | (1, 0) | 82,864 / 39.08 | 97,122 / 36.43 | 94,684 / 37.63 | 224,517 / 37.90 | 231,181 / 36.69 | 229,808 / 37.24 |
| 3 | (0, −1) | 132,560 / 32.17 | 147,408 / 30.51 | 140,467 / 31.19 | 299,410 / 31.25 | 317,411 / 30.79 | 307,975 / 31.01 |
| 3 | (1, −1) | 159,957 / 31.92 | 172,453 / 30.60 | 164,474 / 31.09 | 317,223 / 31.43 | 324,055 / 31.11 | 316,166 / 31.23 |
| 3 | (1, 0) | 144,993 / 32.60 | 152,361 / 30.52 | 155,691 / 31.40 | 264,640 / 31.43 | 265,591 / 30.70 | 263,638 / 31.02 |
| 4 | (0, −1) | 206,421 / 25.32 | 218,495 / 24.51 | 212,185 / 24.79 | 365,758 / 25.50 | 378,971 / 24.85 | 370,502 / 24.91 |
| 4 | (1, −1) | 243,744 / 25.27 | 248,011 / 24.79 | 241,591 / 24.88 | 356,482 / 25.28 | 354,835 / 25.18 | 353,490 / 25.19 |
| 4 | (1, 0) | 215,380 / 25.71 | 206,349 / 24.49 | 216,826 / 24.96 | 285,726 / 25.04 | 285,603 / 24.66 | 284,403 / 24.80 |

Cells show EC / PSNR; columns (1)–(3) are the three prediction methods.
Table 7. Comparison of the performance of our triple-RDH method on images “Baboon” and “F16” for parameters t_g and t_w between 1 and 5 (n = 3, pp_1 = 1, and pp_2 = −1).
| t_g = t_w | Baboon (1) | Baboon (2) | Baboon (3) | F16 (1) | F16 (2) | F16 (3) |
|---|---|---|---|---|---|---|
| 1 | 106,143 / 32.52 | 117,132 / 31.15 | 111,459 / 31.64 | 210,873 / 32.67 | 221,529 / 32.47 | 215,176 / 32.52 |
| 2 | 160,076 / 31.91 | 172,179 / 30.59 | 163,764 / 31.09 | 318,078 / 31.43 | 324,075 / 31.11 | 316,520 / 31.23 |
| 3 | 186,306 / 31.63 | 198,483 / 30.35 | 190,475 / 30.84 | 371,144 / 30.93 | 371,924 / 30.58 | 364,017 / 30.72 |
| 4 | 200,415 / 31.50 | 211,894 / 30.24 | 202,242 / 30.72 | 398,247 / 30.70 | 396,637 / 30.34 | 387,276 / 30.50 |
| 5 | 207,407 / 31.44 | 218,109 / 30.19 | 208,425 / 30.67 | 411,064 / 30.59 | 407,421 / 30.23 | 398,069 / 30.39 |

Cells show EC / PSNR; columns (1)–(3) are the three prediction methods.
Table 8. Comparison of the performance of our proposed triple-RDH method on images “Baboon” and “F16” at different t_g and t_w.
| t_g | t_w | Peak points (pp_1, pp_2) | Baboon (1) | Baboon (2) | Baboon (3) | F16 (1) | F16 (2) | F16 (3) |
|---|---|---|---|---|---|---|---|---|
| 1 | 2 | (0, −1) | 114,221 / 32.39 | 123,009 / 30.66 | 119,452 / 31.39 | 273,719 / 31.73 | 287,504 / 31.23 | 276,489 / 31.44 |
| 1 | 2 | (1, −1) | 129,752 / 32.23 | 144,328 / 30.89 | 137,158 / 31.39 | 260,717 / 32.08 | 274,653 / 31.79 | 266,160 / 31.90 |
| 1 | 2 | (1, 0) | 121,766 / 32.88 | 127,426 / 30.70 | 128,033 / 31.62 | 234,892 / 31.62 | 241,015 / 30.93 | 233,343 / 31.24 |
| 2 | 1 | (0, −1) | 110,146 / 32.42 | 122,187 / 30.68 | 116,161 / 31.42 | 252,431 / 31.63 | 267,317 / 31.15 | 256,918 / 31.36 |
| 2 | 1 | (1, −1) | 136,015 / 32.20 | 146,229 / 30.82 | 140,050 / 31.31 | 267,379 / 31.92 | 273,505 / 31.65 | 267,766 / 31.72 |
| 2 | 1 | (1, 0) | 122,946 / 32.92 | 128,197 / 30.69 | 131,342 / 31.64 | 206,993 / 31.97 | 209,135 / 31.08 | 207,411 / 31.45 |
| 2 | 3 | (0, −1) | 143,335 / 32.04 | 159,536 / 30.43 | 152,424 / 31.09 | 322,162 / 31.07 | 343,836 / 30.63 | 329,865 / 30.82 |
| 2 | 3 | (1, −1) | 172,414 / 31.78 | 184,952 / 30.49 | 176,295 / 30.98 | 343,244 / 31.21 | 348,791 / 30.86 | 340,575 / 31.00 |
| 2 | 3 | (1, 0) | 156,359 / 32.45 | 163,833 / 30.44 | 167,152 / 31.28 | 293,373 / 31.17 | 295,526 / 30.52 | 292,592 / 30.82 |
| 1 | 3 | (0, −1) | 125,784 / 32.25 | 136,709 / 30.58 | 131,170 / 31.27 | 300,577 / 31.50 | 319,756 / 31.02 | 305,728 / 31.23 |
| 1 | 3 | (1, −1) | 143,024 / 32.09 | 158,251 / 30.78 | 150,021 / 31.27 | 284,780 / 31.80 | 300,710 / 31.48 | 291,746 / 31.62 |
| 1 | 3 | (1, 0) | 133,009 / 32.71 | 140,006 / 30.62 | 140,789 / 31.50 | 263,493 / 31.36 | 270,658 / 30.73 | 261,587 / 31.03 |
| 3 | 2 | (0, −1) | 140,918 / 32.05 | 158,546 / 30.43 | 150,620 / 31.11 | 299,203 / 30.99 | 322,981 / 30.57 | 308,862 / 30.77 |
| 3 | 2 | (1, −1) | 175,074 / 31.77 | 185,855 / 30.46 | 178,001 / 30.95 | 344,914 / 31.14 | 348,416 / 30.80 | 339,293 / 30.92 |
| 3 | 2 | (1, 0) | 157,615 / 32.47 | 164,565 / 30.44 | 167,854 / 31.29 | 278,804 / 31.33 | 278,686 / 30.59 | 278,283 / 30.91 |
| 3 | 1 | (0, −1) | 119,150 / 32.29 | 133,650 / 30.59 | 126,823 / 31.32 | 262,015 / 31.29 | 280,221 / 30.85 | 266,717 / 31.05 |
| 3 | 1 | (1, −1) | 150,696 / 32.04 | 185,855 / 30.46 | 153,783 / 31.16 | 294,802 / 31.60 | 348,416 / 30.80 | 292,279 / 31.36 |
| 3 | 1 | (1, 0) | 136,198 / 32.79 | 140,598 / 30.59 | 144,281 / 31.54 | 221,731 / 31.87 | 222,742 / 30.95 | 223,373 / 31.35 |
| 3 | 4 | (0, −1) | 157,176 / 31.88 | 177,775 / 30.31 | 168,978 / 30.95 | 328,666 / 30.79 | 355,041 / 30.37 | 338,219 / 30.56 |
| 3 | 4 | (1, −1) | 193,719 / 31.57 | 205,158 / 30.30 | 195,802 / 30.79 | 382,974 / 30.82 | 385,282 / 30.47 | 376,798 / 30.63 |
| 3 | 4 | (1, 0) | 173,325 / 32.26 | 181,706 / 30.32 | 185,661 / 31.12 | 321,907 / 30.96 | 319,497 / 30.33 | 321,295 / 30.62 |
| 2 | 4 | (0, −1) | 149,039 / 31.98 | 165,904 / 30.38 | 158,966 / 31.04 | 333,078 / 30.98 | 357,560 / 30.55 | 344,960 / 30.75 |
| 2 | 4 | (1, −1) | 178,448 / 31.72 | 191,616 / 30.44 | 183,144 / 30.93 | 355,695 / 31.09 | 360,770 / 30.74 | 353,382 / 30.89 |
| 2 | 4 | (1, 0) | 162,231 / 32.38 | 170,262 / 30.40 | 173,095 / 31.22 | 307,735 / 31.05 | 308,782 / 30.43 | 305,342 / 30.72 |
| 1 | 4 | (0, −1) | 131,090 / 32.18 | 142,329 / 30.53 | 138,028 / 31.22 | 315,250 / 31.38 | 332,695 / 30.92 | 319,337 / 31.12 |
| 1 | 4 | (1, −1) | 148,628 / 32.01 | 165,179 / 30.72 | 155,996 / 31.20 | 297,655 / 31.67 | 315,272 / 31.34 | 305,027 / 31.49 |
| 1 | 4 | (1, 0) | 138,383 / 32.63 | 145,793 / 30.58 | 146,114 / 31.44 | 278,808 / 31.24 | 285,525 / 30.63 | 276,122 / 30.92 |
| 4 | 3 | (0, −1) | 155,323 / 31.88 | 177,209 / 30.31 | 167,573 / 30.96 | 312,792 / 30.75 | 338,752 / 30.33 | 321,217 / 30.53 |
| 4 | 3 | (1, −1) | 194,319 / 31.57 | 205,815 / 30.29 | 155,996 / 31.20 | 384,742 / 30.80 | 384,454 / 30.44 | 375,482 / 30.59 |
| 4 | 3 | (1, 0) | 173,906 / 32.27 | 182,204 / 30.31 | 186,736 / 31.13 | 314,792 / 31.03 | 311,560 / 30.36 | 313,693 / 30.67 |
| 4 | 2 | (0, −1) | 144,608 / 31.99 | 164,394 / 30.39 | 155,312 / 31.06 | 296,330 / 30.87 | 319,950 / 30.45 | 303,670 / 30.64 |
| 4 | 2 | (1, −1) | 182,767 / 31.69 | 193,153 / 30.38 | 184,619 / 30.87 | 359,826 / 31.01 | 360,376 / 30.66 | 352,957 / 30.79 |
| 4 | 2 | (1, 0) | 163,126 / 32.41 | 169,937 / 30.39 | 174,574 / 31.24 | 286,440 / 31.28 | 284,476 / 30.54 | 286,408 / 30.86 |
| 4 | 1 | (0, −1) | 123,573 / 32.23 | 140,094 / 30.56 | 131,949 / 31.27 | 264,052 / 31.11 | 282,491 / 30.69 | 268,270 / 30.88 |
| 4 | 1 | (1, −1) | 157,764 / 31.97 | 167,622 / 30.59 | 160,783 / 31.08 | 308,918 / 31.45 | 312,717 / 31.12 | 305,214 / 31.20 |
| 4 | 1 | (1, 0) | 142,535 / 32.71 | 147,253 / 30.55 | 151,792 / 31.48 | 229,205 / 31.81 | 229,536 / 30.90 | 230,930 / 31.29 |
| 4 | 5 | (0, −1) | 162,353 / 31.80 | 185,889 / 30.25 | 176,632 / 30.89 | 323,676 / 30.66 | 353,258 / 30.25 | 334,174 / 30.44 |
| 4 | 5 | (1, −1) | 203,535 / 31.47 | 215,198 / 30.22 | 205,500 / 30.70 | 402,975 / 30.65 | 403,061 / 30.29 | 391,913 / 30.45 |
| 4 | 5 | (1, 0) | 182,841 / 32.17 | 190,764 / 30.26 | 196,411 / 31.04 | 334,225 / 30.86 | 333,388 / 30.24 | 334,629 / 30.52 |
| 3 | 5 | (0, −1) | 159,100 / 31.85 | 180,546 / 30.29 | 171,691 / 30.93 | 333,799 / 30.75 | 360,502 / 30.34 | 340,972 / 30.53 |
| 3 | 5 | (1, −1) | 196,451 / 31.54 | 207,781 / 30.28 | 198,411 / 30.77 | 389,442 / 30.77 | 391,529 / 30.42 | 381,520 / 30.58 |
| 3 | 5 | (1, 0) | 176,189 / 32.22 | 185,496 / 30.30 | 190,046 / 31.09 | 328,271 / 30.90 | 328,430 / 30.29 | 326,735 / 30.57 |
| 2 | 5 | (0, −1) | 151,105 / 31.94 | 169,020 / 30.36 | 162,449 / 31.01 | 341,693 / 30.94 | 364,113 / 30.51 | 349,404 / 30.71 |
| 2 | 5 | (1, −1) | 181,392 / 31.69 | 194,656 / 30.41 | 186,108 / 30.90 | 361,374 / 31.04 | 368,271 / 30.69 | 358,085 / 30.84 |
| 2 | 5 | (1, 0) | 165,171 / 32.35 | 172,976 / 30.38 | 176,525 / 31.20 | 313,688 / 31.00 | 315,702 / 30.39 | 312,233 / 30.67 |
| 1 | 5 | (0, −1) | 135,250 / 32.15 | 145,500 / 30.52 | 141,655 / 31.19 | 321,700 / 31.32 | 339,978 / 30.87 | 327,717 / 31.07 |
| 1 | 5 | (1, −1) | 150,612 / 31.98 | 168,157 / 30.69 | 159,808 / 31.18 | 305,028 / 31.61 | 320,638 / 31.26 | 310,635 / 31.42 |
| 1 | 5 | (1, 0) | 141,550 / 32.59 | 148,807 / 30.56 | 149,963 / 31.41 | 286,359 / 31.18 | 293,556 / 30.59 | 283,917 / 30.87 |
| 5 | 4 | (0, −1) | 163,065 / 31.80 | 186,049 / 30.25 | 176,718 / 30.89 | 316,859 / 30.65 | 342,725 / 30.23 | 324,630 / 30.42 |
| 5 | 4 | (1, −1) | 204,274 / 31.47 | 216,085 / 30.21 | 205,847 / 30.70 | 403,770 / 30.64 | 401,821 / 30.28 | 392,413 / 30.43 |
| 5 | 4 | (1, 0) | 181,962 / 32.17 | 189,604 / 30.25 | 195,475 / 31.04 | 333,495 / 30.89 | 327,842 / 30.26 | 329,758 / 30.54 |
| 5 | 3 | (0, −1) | 157,266 / 31.86 | 179,024 / 30.29 | 169,648 / 30.94 | 308,403 / 30.70 | 332,445 / 30.28 | 315,229 / 30.47 |
| 5 | 3 | (1, −1) | 198,597 / 31.53 | 208,791 / 30.25 | 199,749 / 30.74 | 392,371 / 30.74 | 390,188 / 30.38 | 381,143 / 30.53 |
| 5 | 3 | (1, 0) | 177,491 / 32.24 | 184,844 / 30.29 | 190,476 / 31.10 | 317,090 / 31.01 | 314,685 / 30.34 | 317,057 / 30.64 |
| 5 | 2 | (0, −1) | 146,955 / 31.97 | 167,599 / 30.37 | 158,470 / 31.04 | 295,263 / 30.80 | 317,391 / 30.39 | 299,394 / 30.57 |
| 5 | 2 | (1, −1) | 185,901 / 31.66 | 196,020 / 30.35 | 187,364 / 30.84 | 366,340 / 30.94 | 365,507 / 30.59 | 357,333 / 30.72 |
| 5 | 2 | (1, 0) | 167,329 / 32.38 | 172,638 / 30.37 | 178,067 / 31.22 | 289,228 / 31.25 | 287,084 / 30.51 | 290,362 / 30.84 |
| 5 | 1 | (0, −1) | 126,650 / 32.20 | 142,387 / 30.54 | 133,534 / 31.24 | 262,231 / 31.02 | 282,416 / 30.61 | 269,139 / 30.79 |
| 5 | 1 | (1, −1) | 161,553 / 31.93 | 170,810 / 30.55 | 164,508 / 31.05 | 315,443 / 31.38 | 319,562 / 31.04 | 311,896 / 31.13 |
| 5 | 1 | (1, 0) | 145,881 / 32.68 | 150,170 / 30.52 | 154,283 / 31.45 | 232,878 / 31.77 | 232,546 / 30.86 | 234,572 / 31.26 |

Cells show EC / PSNR; columns (1)–(3) are the three prediction methods.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Lee, C.-F.; Wu, H.-Z. High-Capacity and High-Quality Reversible Data Hiding Method Using Recurrent Round-Trip Embedding Strategy in the Quotient Image. Appl. Sci. 2021, 11, 10157. https://doi.org/10.3390/app112110157
