Applied Sciences
  • Article
  • Open Access

2 April 2021

Color Image Self-Recovery and Tampering Detection Scheme Based on Fragile Watermarking with High Recovery Capability

Instituto Politecnico Nacional, ESIME Culhuacan, Santa Ana 1000, Mexico-City 04440, Mexico
This article belongs to the Section Computing and Artificial Intelligence

Abstract

In this paper, a fragile watermarking scheme for color image authentication and self-recovery under high tampering rates is proposed. The original image is sub-sampled and divided into non-overlapping blocks, and a watermark used for recovery purposes is generated for each block. Additionally, for each recovery watermark, the bitwise exclusive OR (XOR) operation is applied to obtain a single bit for the block authentication procedure. The embedding and extraction processes can be implemented in three variants (1-LSB, 2-LSB or 3-LSB) to mitigate the tampering coincidence problem (TCP): three, six or nine copies of the generated watermarks are embedded according to the chosen variant. Additionally, the embedding stage includes a bit adjustment phase, increasing the watermarked image quality. During a post-processing step, a particular procedure detects the regions affected by the TCP in each recovery watermark, and a single faithful image used for recovery is generated. In addition, an inpainting algorithm fills the blocks that could not be recovered, significantly increasing the quality of the recovered image. Simulation results show that the proposed framework provides higher quality for the watermarked images and an efficient ability to reconstruct tampered image regions at extremely high tampering rates (up to 90%). The novel self-recovery scheme demonstrates superior performance in reconstructing altered image regions, in terms of both objective criteria values and subjective visual perception via the human visual system, against other state-of-the-art approaches.

1. Introduction

Currently, the development of authentication and reconstruction techniques for digital images has been the focus of extensive research due to the accelerated growth of image editing software, which can be used to tamper with digital images in multiple ways. These authentication and reconstruction techniques are used to detect tampered regions in images where, in the case of alteration, a recovery process should be applied to retrieve the original content. These schemes are helpful in different applications, in which undetected modifications of digital images may have serious consequences, e.g., legal proceedings, where a digital image can be used as legal evidence. Therefore, detection and recovery of tampered content in digital images have become issues of outstanding importance.
In recent years, watermarking techniques have been used to authenticate and recover tampered information in digital images [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24]. Watermarking techniques can be classified into three types [1,2]: fragile, semi-fragile and robust. Fragile watermarking [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,23,24] does not withstand intentional or unintentional attacks; any modification destroys the watermark. On the other hand, these techniques offer a high payload capacity and are mainly used for authentication [3,4,5,6,7], which explains the several frameworks for self-recovery of tampered image regions that have appeared recently [1,2,8,9,10,11,12,13,14,15,16,17,18,19,23,24]. Semi-fragile watermarking techniques are commonly used for copyright protection [20,21,22] and recovery schemes [25,26,27,28,29,30]. These techniques are designed to resist non-intentional manipulations caused by traditional image processing operations such as JPEG compression and scaling, but they remain fragile against intentional manipulations like tampering, resulting in a lower recovery rate compared to strategies based on fragile watermarking. Finally, robust watermarking techniques [31,32,33] are mainly used for copyright protection because they withstand both intentional and non-intentional attacks. Their main disadvantage is a reduced payload capacity in comparison with fragile and semi-fragile watermarking techniques.
Self-recovery techniques based on watermarking first divide an image into small blocks; for each block, a watermark for content recovery is generated and embedded into a different block. During the recovery process, tampered blocks are reconstructed from the extracted recovery watermark. This step fails when the block containing the recovery watermark has itself been tampered with, making it impossible to reconstruct the corresponding block; this is the so-called tampering coincidence problem.
Considering the approaches mentioned above for authentication and tamper detection, which are based on watermarking, the following properties are required for efficient implementation:
(a)
A minimum number of bits used for recovery and tamper detection: the recovery bits should be embedded redundantly, thus avoiding the tampering coincidence problem.
(b)
Watermark imperceptibility: the embedded recovery and authentication bits must not affect the visual quality of a watermarked image.
(c)
Precise tamper detection: a majority of intentional modifications should be accurately detected.
(d)
Precise recovery of tampered regions: the reconstructed image must demonstrate acceptable visual and objective quality in the reconstructed areas.
In this paper, a self-recovery framework for high tampering rates (denoted as SR-HTR) is designed according to the previously presented properties required for an efficient authentication and tamper detection scheme in color images. This novel fragile scheme provides a high payload capacity that can be used for the authentication and recovery processes. To minimize the negative influence of the tampering coincidence problem, the proposed algorithm generates 15AB/16 recovery bits in total, A × B being the size of the image. Additionally, AB/16 bits are produced to detect tampered pixels in any area of the image. Thus, the designed framework can embed three, six or nine copies of the recovery and authentication watermarks, achieving high recovery and tamper detection capability. The framework can be implemented using several variants for embedding the recovery and authentication watermarks, namely the least-significant bit (LSB) methods 1-LSB, 2-LSB and 3-LSB, each of which provides different advantages that are analyzed below.
For the 2-LSB and 3-LSB embedding processes, a bit adjustment stage is performed [34] on the watermarked pixels, increasing the protected images’ objective quality. A hierarchical algorithm in tamper detection is employed to achieve higher tamper detection accuracy. Additionally, an inpainting process is used to resolve the tampering coincidence problem by regenerating the eliminated blocks.
For evaluating the quality of the results obtained in the numerous experiments, the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) criteria are employed in this study. Moreover, we use a variation of the PSNR criterion, denoted as PSNR-HVS-M, which considers the human visual system (HVS) and visual masking (M). This criterion employs the contrast sensitivity function (CSF) and maintains a close relationship with discrete cosine transform (DCT) basis functions [35]. Additionally, it has demonstrated good correspondence with human subjective visual perception. Consequently, it could be useful for the justification of the good performance of our novel system.
The rest of this paper is organized as follows. Firstly, Section 2 presents a review of related works. Secondly, Section 3 describes the proposed SR-HTR, followed by Section 4, which explains the experimental setup. Section 5 shows the proposed method’s analysis when the embedding and extraction processes in 1-LSB, 2-LSB and 3-LSB are used. Section 6 presents the experimental results obtained by the proposed framework and their performance comparison against state-of-the-art techniques. Finally, the study’s conclusion is drawn in Section 7.

3. Designed Scheme

The designed method is divided into two stages. The first stage consists of the image protection process, which allows the insertion of multiple copies (three, six or nine) of the reconstruction watermark. This algorithm is presented in Figure 1. The second stage contains the authentication and reconstruction process, which uses a hierarchical authentication and an inpainting method to enhance the reconstruction’s performance. The diagram of this stage can be observed in Figure 2.
Figure 1. Protection method diagram.
Figure 2. Reconstruction and authentication method diagram.

3.1. Image Protection

The image protection stage is described in this section. This process follows two steps: watermark generation and insertion into the carrier image. To explain these steps, let us denote the original image as Ih, of size A × B, in the RGB color space.

3.1.1. Watermarking Generation for Reconstruction and Authentication Purposes

The original image has three channels according to its color space. Consequently, Pseudocode 1 is applied to each channel, generating three watermarks for reconstruction and three for authentication. The reconstruction watermarks are denoted wr, wg and wb, and the authentication watermarks autr, autg and autb, for the channels R, G and B, respectively.
Pseudocode 1 Recovery and authentication watermark generation
Input: Image to be processed Ih
[A, B] = size(Ih)
iReference = imageResize(Ih, 0.25)            ▷ image subsampling
iReference = bitAND(iReference, 248)          ▷ replace 3 LSBs by 0
recoveryW = []                                ▷ recovery watermark
autentW = []                                  ▷ authentication watermark
For i = 1 to A/4 do
    For j = 1 to B/4 do
        tmpW = get5MSB(iReference(i, j))      ▷ extract 5 MSBs
        recoveryW = concat(recoveryW, tmpW)   ▷ concatenation process
        aut = XOR(XOR(tmpW(1), tmpW(2)), XOR(tmpW(3), tmpW(4)))
        aut = XOR(aut, tmpW(5))
        autentW = concat(autentW, aut)        ▷ concatenation process
    End for
End for
Output: Recovery watermark recoveryW; Authentication watermark autentW
To generate the watermarks, a subsampling process with a factor of 0.25 is first performed, significantly reducing the number of bits representing each channel of the original image. Since the 5 MSBs of each subsampled pixel are extracted, 5AB/16 bits are available for the reconstruction watermark of each channel, and AB/16 bits are used for its authentication watermark. The authentication bit generation consists of applying the bitwise XOR operation to the 5 MSBs, producing a single bit for each 4 × 4 pixel block.
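The generation procedure of Pseudocode 1 can be sketched in Python/NumPy as follows. This is a minimal illustration, not the authors' code: plain 4× decimation stands in for the paper's imageResize subsampling, which may instead use interpolation.

```python
import numpy as np

def generate_watermarks(channel):
    """Sketch of Pseudocode 1 for one channel (uint8 values, 0-255).

    Returns the recovery watermark (5 bits per subsampled pixel) and
    the authentication watermark (1 XOR parity bit per pixel)."""
    ref = channel[::4, ::4] & 248           # 0.25 subsampling; zero the 3 LSBs
    recovery = []
    autent = []
    for px in ref.flatten():
        bits = [(int(px) >> k) & 1 for k in (7, 6, 5, 4, 3)]  # 5 MSBs
        recovery.extend(bits)
        autent.append(bits[0] ^ bits[1] ^ bits[2] ^ bits[3] ^ bits[4])
    return np.array(recovery, dtype=np.uint8), np.array(autent, dtype=np.uint8)
```

For an A × B channel this yields 5AB/16 recovery bits and AB/16 authentication bits, matching the counts stated above.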

3.1.2. Watermark Embedding

The embedding process is described in Pseudocode 2, where wr, wg, wb and one authentication watermark (autr, autg or autb) are embedded in a selected channel of Ih. Firstly, the reconstruction watermarks are randomly permuted using a seed derived from the user key, which must be different for each processed RGB channel and bit plane.
The watermark embedding process employs the 1-LSB method to embed the three reconstruction watermarks and a single authentication watermark. This is possible because each reconstruction watermark contains 5AB/16 bits and a single authentication watermark contains AB/16 bits, i.e., 3 · 5AB/16 + AB/16 = AB bits in total, exactly one bit per pixel of the channel.
Finally, a bit adjustment process is applied, in which each pixel at the (i, j)-th position of each RGB channel of the watermarked image, denoted as Ihw, is compared with the pixel at the same position of Ih. The objective of this comparison is to modify the intensity values of Ihw to enhance its objective quality. This process depends on the total number of marked bit planes. For each watermarked pixel, the following equation is used:
Ihw(i, j) = Ih(i, j) − 1,  if |v| = 2^N − 1 and LSB_{N+1}(Ih(i, j)) = 1;
Ihw(i, j) = Ih(i, j) + 1,  if |v| = 2^N − 1 and LSB_{N+1}(Ih(i, j)) = 0,   (1)
where N represents the watermarked bit plane and v = Ihw(i, j) − Ih(i, j), for 1 ≤ i ≤ A and 1 ≤ j ≤ B. This equation is applied only for values N ≥ 2.
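Equation (1) can be illustrated for a single pixel pair as follows. This is a hedged sketch of the bit-adjustment idea, not the authors' implementation: when N-LSB embedding produces the worst-case error 2^N − 1, stepping the original value by ±1 (guided by its (N+1)-th LSB) yields a pixel with the same embedded N LSBs but an error of only 1.

```python
def bit_adjust(orig, marked, n):
    """Sketch of the bit adjustment of Equation (1) for one pixel.

    orig, marked: original and watermarked intensities (0-255);
    n: number of watermarked bit planes (effective for n >= 2)."""
    v = int(marked) - int(orig)
    if abs(v) == 2 ** n - 1:
        lsb_n1 = (int(orig) >> n) & 1     # the (N+1)-th least-significant bit
        return orig - 1 if lsb_n1 == 1 else orig + 1
    return marked
```

For example, embedding the 3-LSB pattern 000 into orig = 7 gives marked = 0 (error 7); the adjustment returns 8, which carries the same three LSBs with an error of only 1.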
Pseudocode 2 Watermark embedding
Input: Image (single channel) to process Ih; Seed S; Bit plane bitPlane; Recovery watermarks wr, wg and wb; Authentication watermark aut
[A, B] = size(Ih)
rng(S)                                    ▷ control random number generator
numRand = randperm(AB/16)                 ▷ random permutation of integers in range [1, AB/16]
num = 1
For i = 1 step 4 to A − 3 do
    For j = 1 step 4 to B − 3 do
        subindex = 5 * numRand[num]
        index = subindex − 4 : subindex   ▷ numbers from (subindex − 4) to subindex
        tmpW = concat(wr[index], wg[index], wb[index], aut[num])   ▷ tmpW contains 16 bits
        Ihw[i : i+3, j : j+3] = embed(Ih[i : i+3, j : j+3], bitPlane, tmpW)   ▷ LSB embedding of tmpW in Ih at bitPlane
        num = num + 1
    End for
End for
Apply Equation (1), where N = bitPlane
Output: Watermarked image Ihw
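As an illustration, the embed/extract pair used in Pseudocodes 2 and 3 might look as follows in Python. This is a sketch; the bit ordering inside the 4 × 4 block (row by row here) is an assumption, since the paper does not specify it.

```python
import numpy as np

def embed_block(block, bit_plane, bits16):
    """Write 16 watermark bits into bit plane bit_plane (1 = LSB)
    of a 4x4 pixel block, row by row (ordering assumed)."""
    shift = bit_plane - 1
    flat = block.astype(np.int64).flatten()
    for k, b in enumerate(bits16):
        flat[k] = (flat[k] & ~(1 << shift)) | (int(b) << shift)
    return flat.reshape(4, 4).astype(block.dtype)

def extract_block(block, bit_plane):
    """Inverse operation: read the 16 bits back from the same plane."""
    shift = bit_plane - 1
    return [(int(p) >> shift) & 1 for p in block.flatten()]
```

Running the pair on any 4 × 4 block recovers the embedded 16-bit sequence exactly, which is what the extraction stage of Section 3.2.1 relies on.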

3.2. Authentication and Reconstruction

This section describes the authentication and reconstruction process for a given tampered image Ihw of size A × B in the RGB color space. To accomplish this reconstruction, four steps are used: extraction of the watermarks from the image, authentication of the content, post-processing to identify the blocks affected by the tampering coincidence problem together with an inpainting process to fill these regions, and reconstruction of the tampered image.

3.2.1. Watermark Extraction

The extraction of the three reconstruction watermarks is performed as follows:
auxVal_i = Σ_{j=1}^{5} 2^{8−j} · wtm_{5(i−1)+j},  for all i such that 1 ≤ i ≤ AB/16,   (2)
where wtm is the vector form of a watermark wr, wg or wb. The process to acquire all the watermarks is detailed in Pseudocode 3, which employs the function vec2mat(auxVal, A/4, B/4) to transform auxVal into a matrix of size A/4 × B/4.
This pseudocode is applied to each image channel, obtaining three reconstruction images in the RGB color space (iRGB_R, iRGB_G, iRGB_B) and three authentication watermarks (aut_R, aut_G, aut_B) used to authenticate each RGB channel of Ihw.
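The reconstruction of pixel values from the extracted 5-bit groups, i.e., Equation (2) followed by vec2mat, can be sketched as below. The final imresize upscaling of Pseudocode 3 is omitted here; only the subsampled reference channel is rebuilt.

```python
import numpy as np

def bits_to_reference(wtm, A, B):
    """Sketch of Equation (2) plus vec2mat: rebuild the subsampled
    reference channel from a flat watermark of 5 bits per pixel.
    Bit j (1..5) is placed at weight 2^(8-j), i.e., in the 5 MSBs."""
    n = (A * B) // 16
    vals = np.zeros(n, dtype=np.uint8)
    for i in range(n):
        for j in range(1, 6):
            vals[i] |= int(wtm[5 * i + j - 1]) << (8 - j)
    return vals.reshape(A // 4, B // 4)   # vec2mat(auxVal, A/4, B/4)
```

Note that the recovered values are multiples of 8, consistent with the three LSBs having been zeroed during watermark generation.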
Pseudocode 3 Extraction of image watermarks
Input: Watermarked image channel Ihw; Seed S; Bit plane bitPlane
[A, B] = size(Ihw)
rng(S)                                    ▷ control random number generator
numRand = randperm(AB/16)                 ▷ random permutation of integers in range [1, AB/16]
wr = []                                   ▷ empty list of size 5AB/16
wg = []                                   ▷ empty list of size 5AB/16
wb = []                                   ▷ empty list of size 5AB/16
aut = []                                  ▷ empty list of size AB/16
num = 1
For i = 1 step 4 to A − 3 do
    For j = 1 step 4 to B − 3 do
        subindex = 5 * numRand[num]
        index = subindex − 4 : subindex   ▷ numbers from (subindex − 4) to subindex
        tmpW = extract(Ihw[i : i+3, j : j+3], bitPlane)   ▷ extract 16 bits from bit plane bitPlane
        wr[index] = tmpW[1 : 5]           ▷ undo the concatenation of Pseudocode 2
        wg[index] = tmpW[6 : 10]
        wb[index] = tmpW[11 : 15]
        aut[num] = tmpW[16]
        num = num + 1
    End for
End for
Apply Equation (2), where wtm = wr
imgWtm = vec2mat(auxVal, A/4, B/4)        ▷ vector to matrix conversion
iRGB(:, :, 1) = imresize(imgWtm, 4)
Apply Equation (2), where wtm = wg
imgWtm = vec2mat(auxVal, A/4, B/4)        ▷ vector to matrix conversion
iRGB(:, :, 2) = imresize(imgWtm, 4)
Apply Equation (2), where wtm = wb
imgWtm = vec2mat(auxVal, A/4, B/4)        ▷ vector to matrix conversion
iRGB(:, :, 3) = imresize(imgWtm, 4)
Output: Recovery image iRGB; Authentication watermark aut

3.2.2. Authentication

This step applies the previously described Pseudocode 1 to each channel of Ihw to generate the bit sequences autentW_R, autentW_G and autentW_B. Each bit sequence is compared with the corresponding authentication watermark aut_R, aut_G or aut_B obtained from Pseudocode 3 using the following equation:
autentImg(i, j) = 255, if autentW_{j+(i−1)B/4} ≠ aut_{j+(i−1)B/4}; 0, otherwise,   (3)
for all i and j subject to 1 ≤ i ≤ A/4 and 1 ≤ j ≤ B/4. Each autentImg is then interpolated to size A × B.
Once the previous steps have been performed, three reconstruction images (iRGB_R, iRGB_G and iRGB_B) and three authentication images (autentImg_R, autentImg_G and autentImg_B) are generated for each RGB channel and each LSB plane. A general authentication image is then computed by applying the bitwise OR operator to the authentication images, i.e., iAutent(i, j) = autentImg_R(i, j) OR autentImg_G(i, j) OR autentImg_B(i, j), for all i, j such that 1 ≤ i ≤ A, 1 ≤ j ≤ B. A point worth mentioning is that this operation is valid only for the 1-LSB embedding method. If the 2-LSB method is utilized, six watermarks for authentication and six for reconstruction should be generated. Analogously, the 3-LSB process requires nine watermarks of each kind. Therefore, iAutent is obtained by applying the bitwise OR operation to the six or nine authentication images for the 2-LSB or 3-LSB method, respectively. Finally, the first-level hierarchical authentication is performed on iAutent, improving the tamper detection accuracy.
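The block-wise comparison of Equation (3) can be sketched as follows. Nearest-neighbour upscaling via np.kron is an assumption here; the paper only states that the map is interpolated to size A × B.

```python
import numpy as np

def authenticate(autentW, aut, A, B):
    """Sketch of Equation (3): flag each 4x4 block whose recomputed
    parity bit (autentW) differs from the extracted bit (aut).
    Both inputs are flat sequences of AB/16 bits; the output is an
    A x B map with 255 on tampered blocks and 0 elsewhere."""
    diff = (np.asarray(autentW) != np.asarray(aut)).astype(np.uint8) * 255
    block_map = diff.reshape(A // 4, B // 4)
    # expand each block flag to its 4x4 pixel footprint
    return np.kron(block_map, np.ones((4, 4), dtype=np.uint8))
```

The three per-channel maps would then be merged with a bitwise OR, e.g. iAutent = np.bitwise_or(np.bitwise_or(imgR, imgG), imgB).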

3.2.3. Post-Processing and Recovery

A binary image is generated at this stage, representing the blocks affected by the tampering coincidence problem in each reconstruction image generated by Pseudocode 3 ( i R G B R , i R G B G , i R G B B ). Each authentication image generated in Section 3.2.2 ( a u t e n t I m g R , a u t e n t I m g G , a u t e n t I m g B ) is used for this process.
Pseudocode 4 presents the image generation process as follows:
autent(numRand[j + (i − 1)B/4]) = 255, if autentImg(i, j) = 255; 0, otherwise,   (4)
for all i and j subject to 1 ≤ i ≤ A/4 and 1 ≤ j ≤ B/4. Finally, the generated image is interpolated to size A × B. Applying Pseudocode 4 to the three authentication images produces three such images, denoted as TCP_R, TCP_G and TCP_B.
If a large portion of the original image is altered, the information used for reconstruction can be overwritten even though it is embedded redundantly. Therefore, maps indicating the tampering coincidence problem are computed as binary images (Pseudocode 4). The AND operator is then applied between these maps, resulting in a binary image denoted as iTCP, i.e., iTCP(i, j) = TCP_R(i, j) · TCP_G(i, j) · TCP_B(i, j), for all i, j such that 1 ≤ i ≤ A, 1 ≤ j ≤ B. This image marks the regions affected by the tampering coincidence problem.
Pseudocode 4 Detection of tampering coincidence problem
Input: Authentication image autentImg; Seed S
[A, B] = size(autentImg)
rng(S)                                    ▷ control random number generator
numRand = randperm(AB/16)                 ▷ random permutation of integers in range [1, AB/16]
autentImg = imresize(autentImg, 0.25)
Apply Equation (4)
TCP = vec2mat(autent, A/4, B/4)           ▷ vector to matrix conversion
TCP = imresize(TCP, 4)
Output: Tampering coincidence problem image TCP
Subsequently, Equation (5) is applied to the reconstruction images obtained by Pseudocode 3 and their corresponding binary images given by Pseudocode 4, resulting in a single reconstruction image named iR:
iR(i, j) = [ Σ_{α=1}^{n} (1 − TCP_α(i, j)) · Iw_α(i, j) ] / [ Σ_{α=1}^{n} (1 − TCP_α(i, j)) ],  for all i, j : 1 ≤ i ≤ A, 1 ≤ j ≤ B,   (5)
where α indexes the processed copies, Iw_α represents a recovery image and TCP_α is the binary image indicating the regions of Iw_α affected by the tampering coincidence problem: a value of “1” indicates that the (i, j)-th position was affected by this problem, while a value of “0” indicates that the position is authentic. Equation (5) is evaluated with TCP = [TCP_R, TCP_G, TCP_B] and Iw = [iRGB_R, iRGB_G, iRGB_B].
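The pixel-wise fusion of Equation (5) can be sketched as follows. This assumes the convention that TCP = 1 marks positions affected by the tampering coincidence problem, which matches the weighting (1 − TCP); positions flagged in every copy (denominator 0) are left at 0 for the later inpainting step.

```python
import numpy as np

def fuse_recovery(copies, tcps):
    """Sketch of Equation (5): weighted average of the recovery copies,
    counting only positions whose TCP flag is 0 (trustworthy)."""
    num = np.zeros(copies[0].shape, dtype=np.float64)
    den = np.zeros_like(num)
    for iw, tcp in zip(copies, tcps):
        w = 1.0 - np.asarray(tcp, dtype=np.float64)   # weight 1 where usable
        num += w * np.asarray(iw, dtype=np.float64)
        den += w
    out = np.zeros_like(num)
    ok = den > 0
    out[ok] = num[ok] / den[ok]   # positions flagged in all copies stay 0
    return out
```

These zero-denominator positions are exactly the regions marked by iTCP and later filled by the inpainting process.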
The previous sequence is valid when the 1-LSB method is selected for embedding. If the scheme is used in its 2-LSB or 3-LSB variant, further copies of the watermarks are extracted from the additional bit planes, while the reconstruction method still requires single instances of iR and iTCP. Consequently, the following equations for the N-LSB method must be applied:
iTCP = iTCP_1, if N = 1; iTCP_1 · iTCP_2, if N = 2; iTCP_1 · iTCP_2 · iTCP_3, if N = 3,   (6)
iR = iR_1, if N = 1; Equation (5) with Iw = [iR_1, iR_2] and TCP = [iTCP_1, iTCP_2], if N = 2; Equation (5) with Iw = [iR_1, iR_2, iR_3] and TCP = [iTCP_1, iTCP_2, iTCP_3], if N = 3.   (7)
Finally, Pseudocode 5 applies an inpainting method to the given iR and iTCP images; the output image is named iRecovery. The inpainting method processes the iR and iTCP images in overlapping blocks of 3 × 3 pixels, using the following equations for each block:
iR(i, j) = [ Σ_{a=1}^{3} Σ_{b=1}^{3} (wIo ∘ (1 − wTCP))_{a,b} ] / [ Σ_{a=1}^{3} Σ_{b=1}^{3} (1 − wTCP_{a,b}) ],  if wTCP_{2,2} = 1 and Σ_{a=1}^{3} Σ_{b=1}^{3} (1 − wTCP_{a,b}) > 1,   (8)
iTCP(i, j) = 0,  if wTCP_{2,2} = 1 and Σ_{a=1}^{3} Σ_{b=1}^{3} (1 − wTCP_{a,b}) > 1,   (9)
where wTCP = iTCP[i−1 : i+1, j−1 : j+1] is a block of the iTCP image, wIo = iR[i−1 : i+1, j−1 : j+1] is a block of the iR image, ∘ denotes element-wise multiplication, and i, j satisfy 2 ≤ i ≤ A + 1 and 2 ≤ j ≤ B + 1 (indices refer to the padded images of Pseudocode 5).
Finally, the tampered zones of the Ihw image are reconstructed by means of the following equation:
Ihw(i, j) = Ihw(i, j) · (1 − iAutent(i, j)) + iRecovery(i, j) · iAutent(i, j),  1 ≤ i ≤ A, 1 ≤ j ≤ B,   (10)
where iAutent is taken as a binary {0, 1} map.
Pseudocode 5 Inpainting application
Input: Image to be processed iR; Binary image iTCP
[A, B] = size(iR)
While Σ_{i=1}^{A} Σ_{j=1}^{B} iTCP(i, j) ≠ 0 do
    iR = padReplicate(iR)                 ▷ border replication, matrix of size (A+2) × (B+2)
    iTCP = padReplicate(iTCP)             ▷ border replication, matrix of size (A+2) × (B+2)
    iTCP = iTCP > 127
    Apply Equations (8) and (9)
    iTCP = iTCP * 255
    iR = iR[2 : A+1, 2 : B+1]             ▷ remove padding, matrix of size A × B
    iTCP = iTCP[2 : A+1, 2 : B+1]         ▷ remove padding, matrix of size A × B
End While
iRecovery = iR
Output: Image after inpainting process iRecovery
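A compact sketch of this iterative inpainting (Pseudocode 5 with Equations (8) and (9)) follows. It works directly on a 0/1 flag map instead of the 0/255 images of the pseudocode, and it assumes at least one authentic pixel exists; otherwise the while loop, like the pseudocode's, would not terminate.

```python
import numpy as np

def inpaint(iR, iTCP):
    """Iteratively fill flagged pixels with the mean of their
    authentic 3x3 neighbours, using replicated borders."""
    iR = np.asarray(iR, dtype=np.float64).copy()
    tcp = (np.asarray(iTCP) > 127).astype(np.float64)
    A, B = iR.shape
    while tcp.sum() > 0:
        pR = np.pad(iR, 1, mode='edge')   # border replication, (A+2) x (B+2)
        pT = np.pad(tcp, 1, mode='edge')
        newR, newT = iR.copy(), tcp.copy()
        for i in range(A):
            for j in range(B):
                wT = pT[i:i + 3, j:j + 3]
                wI = pR[i:i + 3, j:j + 3]
                good = (1.0 - wT).sum()   # authentic pixels in the window
                if tcp[i, j] == 1.0 and good > 1.0:   # Equations (8), (9)
                    newR[i, j] = (wI * (1.0 - wT)).sum() / good
                    newT[i, j] = 0.0
        iR, tcp = newR, newT
    return iR
```

Each pass shrinks the flagged region from its boundary inward, so even large connected holes are filled after enough iterations.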

3.3. Implementation of the Algorithms

The proposed SR-HTR method allows the insertion of N copies of the three reconstruction watermarks for the N-LSB method, limited by 1 ≤ N ≤ 3. However, the algorithm implementation changes depending on the parameter N, as can be observed in Pseudocodes 6 and 7.
Pseudocode 6 Image protection
Input: Image (RGB) to be protected Ih; LSB plane N
wr, autr = Pseudocode 1 (Ih(:, :, 1))
wg, autg = Pseudocode 1 (Ih(:, :, 2))
wb, autb = Pseudocode 1 (Ih(:, :, 3))
Ihw = Ih
For i = 1 step 1 to N do
    Ihw(:, :, 1) = Pseudocode 2 (Ihw(:, :, 1), 10 + i, i, wr, wg, wb, autr)
    Ihw(:, :, 2) = Pseudocode 2 (Ihw(:, :, 2), 20 + i, i, wr, wg, wb, autg)
    Ihw(:, :, 3) = Pseudocode 2 (Ihw(:, :, 3), 30 + i, i, wr, wg, wb, autb)
End For
Output: Watermarked image Ihw
Pseudocode 7 Image authentication and recovery
Input: Suspicious image (RGB) Ihw; LSB plane N
For i = 1 step 1 to N do
    iRGB_R_i, aut_R_i = Pseudocode 3 (Ihw(:, :, 1), 10 + i, i)
    iRGB_G_i, aut_G_i = Pseudocode 3 (Ihw(:, :, 2), 20 + i, i)
    iRGB_B_i, aut_B_i = Pseudocode 3 (Ihw(:, :, 3), 30 + i, i)
End For
~, autentW_R = Pseudocode 1 (Ihw(:, :, 1))
~, autentW_G = Pseudocode 1 (Ihw(:, :, 2))
~, autentW_B = Pseudocode 1 (Ihw(:, :, 3))
[A, B] = size(Ihw(:, :, 1))
iAutent = zeros(A, B)
For i = 1 step 1 to N do
    Apply Equation (3), where autentW = autentW_R and aut = aut_R_i
    autentImg_R_i = imresize(autentImg, 4)
    Apply Equation (3), where autentW = autentW_G and aut = aut_G_i
    autentImg_G_i = imresize(autentImg, 4)
    Apply Equation (3), where autentW = autentW_B and aut = aut_B_i
    autentImg_B_i = imresize(autentImg, 4)
    A_i = autentImg_R_i + autentImg_G_i + autentImg_B_i      ▷ OR operation
    iAutent = iAutent + A_i                                  ▷ OR operation
End For
iAutent = hierarchical_authentication(iAutent)
For i = 1 step 1 to N do
    TCP_R_i = Pseudocode 4 (autentImg_R_i, 10 + i)
    TCP_G_i = Pseudocode 4 (autentImg_G_i, 20 + i)
    TCP_B_i = Pseudocode 4 (autentImg_B_i, 30 + i)
    iR_i = Equation (5), where Iw = [iRGB_R_i, iRGB_G_i, iRGB_B_i] and TCP = [TCP_R_i, TCP_G_i, TCP_B_i]
    iTCP_i = TCP_R_i · TCP_G_i · TCP_B_i                     ▷ AND operation
End For
Apply Equations (6) and (7)
iRecovery = Pseudocode 5 (iR, iTCP)
Apply Equation (10)
Output: Restored image Ihw

4. Experimental Setup

Images of sizes 512 × 768 and 768 × 512 from the Kodak database [40], which consists of 24 images, were used for experimentation. These images are labeled as Kodak-n, where n is the image identifier. Some of these images are shown in Figure 3.
Figure 3. Images employed from Kodak database [40], (a) Kodak-1, (b) Kodak-3, (c) Kodak-11, (d) Kodak-14, (e) Kodak-4, (f) Kodak-10, (g) Kodak-17, (h) Kodak-18.
The performance criteria to evaluate the quality of watermarked and recovered images obtained by the proposed framework are: PSNR, SSIM and PSNR-HVS-M [35]. The PSNR metric is defined as follows:
PSNR = 10 log10( MAX² / MSE ),
MSE = (1 / XY) Σ_{i=0}^{X−1} Σ_{j=0}^{Y−1} [ I(i, j) − I′(i, j) ]²,
where I represents the original image, I′ corresponds to the modified image, the arguments i, j give the pixel position and X, Y are the numbers of rows and columns, respectively. The SSIM metric is computed using the following equation:
SSIM(x, y) = (2 μ_x μ_y + C1)(2 σ_xy + C2) / [ (μ_x² + μ_y² + C1)(σ_x² + σ_y² + C2) ],
where μ and σ denote the mean and standard deviation of images x and y, σ_xy is the covariance between x and y, and the constants C1, C2 are C1 = (0.01 L)², C2 = (0.03 L)², with L = 255 [41].
Furthermore, Precision and Recall metrics were utilized to measure the alteration detection performance of the proposed method. These metrics are based on the numbers of true positives ( T P ), false positives ( F P ) and false negatives ( F N ) among all pixels:
Precision = TP / (TP + FP),
Recall = TP / (TP + FN).
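For reference, the PSNR/MSE and Precision/Recall criteria above can be sketched as follows (SSIM and PSNR-HVS-M are omitted for brevity; in practice library implementations would be used for those).

```python
import numpy as np

def psnr(orig, modified, max_val=255.0):
    """PSNR via the MSE definition given above."""
    diff = np.asarray(orig, dtype=np.float64) - np.asarray(modified, dtype=np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float('inf')   # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def precision_recall(detected, truth):
    """Precision and Recall on boolean tamper-detection maps."""
    detected = np.asarray(detected, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.logical_and(detected, truth).sum()
    fp = np.logical_and(detected, ~truth).sum()
    fn = np.logical_and(~detected, truth).sum()
    return tp / (tp + fp), tp / (tp + fn)
```

Here `detected` is the binary map produced by the authentication stage and `truth` is the ground-truth tampering mask used in the experiments.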

5. Analysis of 1-LSB, 2-LSB and 3-LSB Schemes in Embedding Stage

A comparison between the different LSB variants of the designed framework was performed. The watermarked images were compared with the original ones, and the results are reported in Table 1, which shows the average values of the objective quality measures PSNR, SSIM and PSNR-HVS-M for each variant. Furthermore, the comparison was also performed using the bit adjustment for 2-LSB and 3-LSB methods. It can be observed that a minimal enhancement is generally achieved using this adjustment.
Table 1. Least-significant bit (LSB) embedding analysis in terms of the objective quality.
During the imperceptibility evaluation of the three variants (1-LSB, 2-LSB or 3-LSB) shown in Table 1, we can observe that the 1-LSB and the 3-LSB variants have the best and the worst performance, respectively, in terms of objective quality evaluation. However, a point worth mentioning is that for the worst case (3-LSB), the embedding of nine copies of the watermark does not produce any recognizable visual modification in the watermarked image.
To measure the authentication and reconstruction process’s performance, experiments changing the tampering rate from 10% to 90% in the image were carried out, adding pseudo-random noise to the image, as can be observed in Figure 4.
Figure 4. Tampered images for Kodak-23, (a) 20%, (b) 30%, (c) 40%, (d) 50%, (e) 60%, (f) 70%, (g) 80%, (h) 90%.
The authentication stage evaluation is reported in Table 2, where it can be observed that the 1-LSB method achieves the best Precision results. In terms of Recall, performance is better when this measure is close to one. The 2-LSB and 3-LSB variants reach the same high Recall value, even though the 3-LSB variant embeds more copies. Additionally, the probability that the 1-LSB variant extracts copies incorrectly is higher than for the other variants, generating more errors during the authentication process. Another point worth mentioning is that the 2-LSB variant achieves the best Recall performance while avoiding the tampering coincidence problem and embedding only six copies of the recovery watermark.
Table 2. Precision and Recall obtained from the authentication process.
Finally, the average values of the quality measures PSNR, SSIM and PSNR-HVS-M for the reconstructed images compared with the original ones are shown in Table 3. A significant increase in the values with the 2-LSB method can be recognized. Although the 3-LSB implementation also showed a sharp increment compared with 1-LSB, its results are slightly lower than those of the 2-LSB method. In contrast, the PSNR-HVS-M values of the 3-LSB method are slightly higher than those of 2-LSB. However, the 2-LSB results are still acceptable and, in general, its PSNR and SSIM values are superior to the other implementations in this study. Consequently, the 2-LSB variant was selected for the proposed SR-HTR method, considering the results given in Table 1, Table 2 and Table 3.
Table 3. PSNR, SSIM and PSNR-HVS-M from the reconstruction process.

6. Experimental Results and Discussion

Since the 2-LSB method maintains a balance among watermarked image quality, authentication and reconstruction performance, it was selected to insert the watermarks. In this section, the proposed method’s performance with the 2-LSB variant is detailed and compared with other state-of-the-art methods.

6.1. Watermarked Image Quality

Table 4 shows the results after the watermark embedding, with and without the bit adjustment stage. It can be observed that the bit adjustment markedly raised the PSNR and SSIM values. Nonetheless, the best PSNR-HVS-M values fluctuated between both implementations.
Table 4. Objective quality metrics PSNR, SSIM and PSNR-HVS-M for watermarked images.

6.2. Analysis of Tampering Detection

In order to measure the performance of the proposed method in change detection, different modifications were applied to the test watermarked images. Afterward, the authentication process was executed using the six authentication watermarks and the hierarchical authentication method to discard most of the detection errors, specifically the false negatives.
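A simplified sketch of the neighborhood-based pass of such a hierarchical check is given below. The majority rule over the 8-neighborhood is a hypothetical simplification for illustration, not the paper's exact criterion; it shows how blocks missed by the bit comparison (false negatives) can be recovered from their surroundings.

```python
def refine_tamper_map(tamper, width, height):
    """Hierarchical second pass: a block whose neighbours are mostly
    tampered is also marked tampered, reducing false negatives.
    `tamper` is a height x width map of 0/1 flags (1 = tampered)."""
    refined = [row[:] for row in tamper]
    for y in range(height):
        for x in range(width):
            if tamper[y][x]:
                continue                    # already flagged
            neigh = [tamper[j][i]
                     for j in range(max(0, y - 1), min(height, y + 2))
                     for i in range(max(0, x - 1), min(width, x + 2))
                     if (i, j) != (x, y)]
            if sum(neigh) > len(neigh) // 2:
                refined[y][x] = 1           # majority of neighbours tampered
    return refined
```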
The tampering detection process evaluation consists of two alteration schemes: the first one was used to estimate the ability to detect alteration rates between 10% and 90% by adding a regular square area of pseudo-random noise to the watermarked image, as displayed in Figure 4. In the second scheme, the alterations were performed by modifying one or multiple irregular areas in the watermarked image using Adobe Photoshop, so that the alterations preserve the structure and natural appearance of the image and remain plausible. Figure 5 illustrates the watermarked images with irregular alterations covering 46.41% of Kodak-1, 34.34% of Kodak-3, 25.06% of Kodak-11 and 40.08% of Kodak-14.
Figure 5. Multiple and irregular alterations, (a) Kodak-1 original, (b) Kodak-3 original, (c) Kodak-11 original, (d) Kodak-14 original, (e) Kodak-1 altered, (f) Kodak-3 altered, (g) Kodak-11 altered, (h) Kodak-14 altered.
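The first alteration scheme can be sketched as follows. The centered placement of the noise square and the seeding are assumptions for illustration only; the experiments do not specify where the square is placed.

```python
import random

def tamper_square(img, width, height, rate, seed=0):
    """Overwrite a centred square covering roughly `rate` of the image
    with pseudo-random noise, mimicking the first alteration scheme
    (sketch; the square's placement is an assumption)."""
    rng = random.Random(seed)
    side = int((rate * width * height) ** 0.5)   # square of the target area
    x0, y0 = (width - side) // 2, (height - side) // 2
    out = [row[:] for row in img]
    for y in range(y0, y0 + side):
        for x in range(x0, x0 + side):
            out[y][x] = rng.randrange(256)       # pseudo-random intensity
    return out
```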
The Precision results for change detection using the first scheme are shown in Table 5. It can be noticed that Precision improves as the alteration rate increases. The Recall metric resulted in values of 1.0 for all the alteration rates between 10% and 80%, and 0.9999 for a change of 90% of the image. This is due to the hierarchical authentication method.
Table 5. Precision values for the detection of different alteration rates between 10% and 90%.
The evaluation of the second scheme, which uses multiple and irregular alterations in the images, is displayed in Figure 6, where images in Figure 6a–d represent the ground truth of the alterations, and images in Figure 6e–h are the results of the change detection obtained by SR-HTR. The pairs of (Precision, Recall) values are: (0.9190, 0.9995) for Kodak-1, (0.9416, 1.0) for Kodak-3, (0.9214, 1.0) for Kodak-11 and (0.9382, 0.9998) for Kodak-14.
Figure 6. Multiple and irregular alterations, (a) Kodak-1 ground truth, (b) Kodak-3 ground truth, (c) Kodak-11 ground truth, (d) Kodak-14 ground truth, (e) detection for Kodak-1, (f) detection for Kodak-3, (g) detection for Kodak-11, (h) detection for Kodak-14.
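The Precision and Recall values reported here can be computed directly from the ground-truth and detected tamper maps, e.g.:

```python
def precision_recall(ground_truth, detected):
    """Precision and Recall of a binary tamper-detection map against
    the ground-truth alteration map (1 = tampered, per block/pixel)."""
    tp = sum(1 for g, d in zip(ground_truth, detected) if g and d)
    fp = sum(1 for g, d in zip(ground_truth, detected) if not g and d)
    fn = sum(1 for g, d in zip(ground_truth, detected) if g and not d)
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    return precision, recall
```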

6.3. Evaluation of the Reconstruction under Different Tampering Rates

In this section, the results for the evaluation of the reconstruction process are reported. The reconstruction of the test images was evaluated using both alteration schemes. For the first scheme, illustrated in Figure 4, the six embedded copies of the reconstruction watermark are utilized for recovery. The inpainting method was performed to regenerate the information affected by the tampering coincidence problem.
Figure 7 shows the reconstructed images for Kodak-23. As can be noticed, the reconstruction quality drops markedly as the alteration rate increases. Nevertheless, the original content of the image can still be clearly distinguished.
Figure 7. Quality measures values (PSNR/SSIM/PSNR-HVS-M) for Kodak-23 reconstruction given different alteration rates: (a) 20%, 37.91/0.9762/34.69; (b) 30%, 31.99/0.9552/28.82; (c) 40%, 31.26/0.9401/27.96; (d) 50%, 30.66/0.9250/27.47; (e) 60%, 29.96/0.9079/26.68; (f) 70%, 28.21/0.8739/24.69; (g) 80%, 27.25/0.8418/23.28; (h) 90%, 25.21/0.7854/20.47.
Additionally, it can be observed that a granulated effect appears as the alteration rate increases. This effect is produced by the inpainting process, which fills the areas affected by the tampering coincidence problem with the intensity values of neighboring pixels detected as authentic.
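A crude neighbour-propagation version of this filling idea is sketched below. The paper uses a dedicated inpainting algorithm; this simplification only illustrates why a granulated look appears when large unrecovered areas are filled from their authentic neighbours.

```python
def inpaint_blocks(img, valid, width, height):
    """Fill each invalid pixel with the average of its valid
    4-neighbours, iterating until everything is filled.
    Assumes at least one authentic (valid) pixel exists."""
    img = [row[:] for row in img]
    valid = [row[:] for row in valid]
    while not all(all(row) for row in valid):
        for y in range(height):
            for x in range(width):
                if valid[y][x]:
                    continue
                neigh = [img[j][i]
                         for i, j in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
                         if 0 <= i < width and 0 <= j < height and valid[j][i]]
                if neigh:
                    img[y][x] = sum(neigh) // len(neigh)
                    valid[y][x] = 1         # propagate outward next passes
    return img
```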
Finally, this process was executed on all the test images using the first alteration scheme. Figure 8, Figure 9 and Figure 10 report the PSNR, SSIM and PSNR-HVS-M results after the reconstruction process, respectively. These graphics show an average decrease of 12.72 dB for PSNR and 14.41 dB for PSNR-HVS-M when reconstructing modification rates between 10% and 90% of the image, and an average reduction of 0.3516 in the SSIM values. The results can be considered acceptable given the high alteration rates to which the images are subjected. A point worth mentioning is that other works usually test their methods only up to a 50% modification rate, since they do not fully consider the consequences of the tampering coincidence problem.
Figure 8. PSNR of the reconstructed images for different alteration rates.
Figure 9. SSIM of the reconstructed images for different alteration rates.
Figure 10. PSNR-HVS-M of the reconstructed images for different alteration rates.

6.4. Evaluation of the Image Reconstruction under Multiple and Irregular Attacks

The results for the reconstruction of the images modified with the second alteration scheme, given in Figure 5, are presented in this section. Figure 11 shows a satisfactory visual quality for the reconstructed images, in which the original content that had been substituted with other information is recovered.
Figure 11. Reconstructed images from multiple and irregular alterations (PSNR/SSIM/PSNR-HVS-M), (a) Kodak-1 (24.53/0.7417/21.94), (b) Kodak-3 (31.82/0.8959/29.27), (c) Kodak-11 (28.12/0.8670/26.08), (d) Kodak-14 (26.38/0.8263/23.41).

6.5. Comparison with State-of-the-Art Schemes

The proposed SR-HTR method was compared in terms of performance with other state-of-the-art methods. As previously mentioned, the design of SR-HTR aims to achieve high performance when there is a high rate of modification in a watermarked image. This is accomplished by the insertion of redundancy in the authentication and reconstruction bits. Therefore, the comparison is performed in terms of objective quality in the following areas: watermarked image visualization, change detection rates and reconstruction image visualization.
The first evaluation consists of the comparison between the watermarked image and the original one. Table 6 shows the average values of PSNR, SSIM and PSNR-HVS-M for the set of test images employed.
Table 6. Quality comparison using PSNR, SSIM and PSNR-HVS-M between watermarked images and original ones.
The novel SR-HTR method and Molina [36] present the best quality results in PSNR and SSIM. This is due to the bit adjustment stage implemented in 2-LSB for both methods. For the PSNR-HVS-M metric, the highest values belong to [23,24], followed by the proposed method, [36], and then by [2,10]. A point worth mentioning is that the bit adjustment of SR-HTR and [36] negatively affects the PSNR-HVS-M results due to the characteristics of this metric, as described in Table 1 and Table 4. Furthermore, it is essential to notice that the methods [23,24,36] employ 2-LSB insertion, and the procedures [2,10] use 3-LSB insertion. Additionally, methods [23,24] do not exploit the total insertion capacity to embed enough redundant information in the watermarks, leading to lower performance in the reconstruction stage.
The second evaluation is related to change detection. For this test, Precision and Recall metrics were employed to compare the performance of the alteration schemes previously described.
The Precision results of this evaluation using the first alteration scheme are shown in Table 7. The best results belong to [2,10]. Compared with the other methods, these methods possess an authentication scheme that generates multiple authentication bits for each block of n × n pixels. Consequently, there is a higher likelihood of detecting an alteration in a block because the number of bits to be compared increases. In contrast, the proposed method was designed to generate a single authentication bit for each block of 4 × 4 pixels, and this significantly reduces the detection capability.
Table 7. Comparison of Precision for different alteration rates between 10% and 90%.
Figure 12 illustrates the images utilized for the Precision and Recall comparison with the second alteration scheme, which considers the modification of multiple and irregular areas. Moreover, Table 8 shows the average Precision values of SR-HTR and the other state-of-the-art methods for the attacks given in Figure 12.
Figure 12. Multiple and irregular alterations, (a) Kodak-4 original, (b) Kodak-10 original, (c) Kodak-17 original, (d) Kodak-18 original, (e) Kodak-4 modified, (f) Kodak-10 modified, (g) Kodak-17 modified, (h) Kodak-18 modified, (i) Kodak-4 ground truth, (j) Kodak-10 ground truth, (k) Kodak-17 ground truth, (l) Kodak-18 ground truth.
Table 8. Comparison of Precision for multiple and irregular modification detection.
Again, it can be observed that methods [2,10] achieved a higher performance in this test by employing a larger number of authentication bits. These results are followed by the method [24], then the designed SR-HTR, [36] and finally [23]. Nonetheless, the proposed SR-HTR method accomplished Precision values higher than 0.9, further demonstrating that it maintains a better balance between authentication and reconstruction capabilities.
Regarding the Recall measure, all the methods presented a value of 1.0 for both alteration schemes, including the novel SR-HTR method using the hierarchical authentication method.
To perform a visual comparison, the image Kodak-15 was employed. Its modifications can be observed in Figure 13. This comparison is illustrated in Table 9 and Table 10, where Table 9 presents the results for the alteration rates between 20% and 50%, and Table 10 shows the results for alteration rates between 60% and 90%.
Figure 13. Altered images for Kodak-15, (a) 20%, (b) 30%, (c) 40%, (d) 50%, (e) 60%, (f) 70%, (g) 80%, (h) 90%.
Table 9. Visual quality comparison for the reconstruction of Kodak-15 using an alteration rate between 20% and 50%.
Table 10. Visual quality comparison for the reconstruction of Kodak-15 using an alteration rate between 60% and 90%.
It can be noticed in Table 9 that the designed SR-HTR method presents a better visual quality for alteration rates between 20% and 50%, followed by the procedure [36] that inserts three versions of the reconstruction watermark, and then by [10] that inserts only two. The methods [2,23,24] reconstruct the original content but with a significant loss of quality in the reconstructed image due to the insertion of a single instance of the recovery watermark.
The results given in Table 10 show a dramatic fall in quality for the state-of-the-art methods, while SR-HTR maintains acceptable visual quality up to a 90% modification of the Kodak-15 image. The second-best results belong to [36], which degrades the image quality but still visibly preserves the original content. This decrease in quality for the state-of-the-art methods is due to the low level of redundancy inserted for the reconstruction watermark, which corresponds to the insertion of up to two copies, as in [10,23]. However, the framework [23] presents a larger number of contiguous pixels without reconstruction compared with [10] because it uses the SPIHT compression algorithm with 32 × 32 pixel blocks to generate the reconstruction bits. This method is sensitive to noise applied to the blocks; therefore, even a minimal modification of the reconstruction bits causes the decompression algorithm to produce wrong values for the whole block.
The values of PSNR, SSIM and PSNR-HVS-M for the visual results shown in Table 9 and Table 10 are reported in Table 11. It can be observed that for alteration rates greater than 30%, the proposed SR-HTR method presents a better reconstruction performance in terms of PSNR and PSNR-HVS-M, whereas the process [36] provides a higher reconstruction performance for alteration rates lower than 30%. Regarding SSIM, the best results for the 20% and 30% rates belong to [23] due to the SPIHT algorithm utilized in the generation of the reconstruction bits, which allows a higher quality in reconstructing the blocks. Nonetheless, the SR-HTR method maintains an acceptable performance when more than 40% of the image is modified. It is important to emphasize that the image reconstructed through SR-HTR is not significantly affected by the growth in the alteration rate.
Table 11. Comparison of objective quality metrics for the reconstruction of Kodak-15 using alteration rates between 20% and 90%.
The results displayed in Table 12, Table 13 and Table 14 represent the average values of PSNR, SSIM and PSNR-HVS-M, respectively, for reconstructing the test images using the first alteration scheme. These results reflect the evaluation shown in Table 9, Table 10 and Table 11, where the proposed method demonstrates more balanced results between change detection and the reconstruction process. Although a lower performance in change detection was obtained (as shown in Table 7 and Table 8), the quality of the reconstructed images drastically increased, since the proposed method generates an authentication bit for each set of 15 reconstruction bits. The SR-HTR method aims to obtain a better performance in reconstructing large areas of modified pixels without considerably affecting the change detection process or the quality of the watermarked image, as one can see in Table 6. Subsequently, the method [36] achieves the second-best performance due to its insertion process that uses three copies of the watermarks. Methods [10,23] are placed in the third and fourth positions because of their intolerance to high alteration rates and the insertion of only two copies of the reconstruction bits. Finally, methods [2,24], which insert only a single version of the reconstruction bits, have demonstrated a lower performance due to their inability to solve the tampering coincidence problem.
Table 12. Comparison of average PSNR for the reconstruction of the test images with alteration rates between 10% and 90%.
Table 13. Comparison of average SSIM for the reconstruction of the test images with alteration rates between 10% and 90%.
Table 14. Comparison of average PSNR-HVS-M for the reconstruction of the test images with alteration rates between 10% and 90%.
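The single authentication bit per 4 × 4 block mentioned above is obtained as the bitwise XOR of the block's 15 recovery-watermark bits; a minimal sketch:

```python
from functools import reduce

def authentication_bit(recovery_bits):
    """Single authentication bit for a 4x4 block: the XOR (parity)
    of the block's 15 recovery-watermark bits."""
    assert len(recovery_bits) == 15
    return reduce(lambda a, b: a ^ b, recovery_bits)
```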
The last evaluation is the comparison of the reconstruction proficiency for multiple and irregular alterations. This test was performed using the modifications illustrated in Figure 12; their corresponding reconstructed images are shown in Table 15. The images’ alteration rates are 85.65% for Kodak-4, 26.42% for Kodak-10, 74.36% for Kodak-17 and 30.97% for Kodak-18. The PSNR, SSIM and PSNR-HVS-M values of the reconstructed images compared with the original ones are reported in Table 16.
Table 15. Visual comparison of the reconstructed images for multiple and irregular alterations.
Table 16. PSNR, SSIM and PSNR-HVS-M for reconstructed images presented in Table 15.
On the one hand, it can be observed that the proposed SR-HTR method presents the best objective quality for each reconstructed image. On the other hand, the method given by [23] resulted in better SSIM values for Kodak-10 because the alteration rate is low. However, the visual quality of the reconstructed Kodak-10 shown in Table 15 for the proposed method is still admissible.
As can be noticed in Table 9, Table 10, Table 11, Table 12, Table 13, Table 14, Table 15 and Table 16, the novel SR-HTR method demonstrated an excellent performance compared with the other state-of-the-art schemes in terms of regular attacks (as seen in Table 9, Table 10, Table 11, Table 12, Table 13 and Table 14) and multiple and irregular attacks (shown in Table 15 and Table 16).

7. Conclusions

This paper proposes a novel fragile scheme named SR-HTR based on watermarking for color image authentication and self-recovery with high tampering rates. The image protection and extraction method can be implemented in three different variants (1-LSB, 2-LSB or 3-LSB), where it is possible to embed multiple copies (three, six or nine, respectively) of the recovery watermarks, and thus to increase the robustness of the scheme to the tampering coincidence problem. The evaluation of the three embedding–extraction variants of the novel method was carried out. The embedding watermark scheme uses a pseudo-random sequence to embed the recovery watermarks in different blocks to increase the watermarked image’s objective quality.
During the evaluation of the results, various alterations were investigated at different tampering rates (from 10% to 90%) with irregular and multiple alterations. Finally, during the recovery process, the 2-LSB embedding scheme was selected as a balanced solution between the quality of the watermarked image and the recovered images’ quality for different alterations.
The experimental results have shown good quality for the watermarked images obtained by the novel SR-HTR framework compared with state-of-the-art methods. Additionally, the novel scheme has demonstrated a good performance during the detection of regular, irregular and multiple alterations, resulting in Precision and Recall metrics higher than 0.9.
The designed SR-HTR has shown excellent performance in reconstructing the alterations at different tampering rates (from 10% to 90%), which is superior to other state-of-the-art methods. Additionally, in cases of multiple and irregular alterations, the novel color image authentication and self-recovery framework has shown an excellent performance, maintaining high objective criteria values as well as great visual perception via the HVS.
As future work, further investigations should be performed to resist intentional attacks such as cropping, scaling and rotation, as well as unintentional attacks such as JPEG compression. Additionally, for fast processing in real-time environments, we will consider the design of parallel fragile watermarking schemes implemented on graphics processing units (GPUs) or multicore central processing units (CPUs). We will also explore schemes based on adversarial examples, which have demonstrated remarkable performance in solving similar problems in the image and audio domains [42], thereby addressing a weakness of deep neural networks (DNNs): if an image is transformed slightly, it can be incorrectly classified by a DNN even when the changes are small and unnoticeable to the human eye [43].

Author Contributions

Conceptualization, R.R.-R., J.M.-G. and B.P.G.-S.; Formal analysis, C.C.-R.; Investigation, R.R.-R.; Methodology, R.R.-R., V.P. and J.M.-G.; Software, C.C.-R. and B.P.G.-S.; Visualization, C.C.-R. and J.M.-G.; Writing—Original draft, B.P.G.-S. and J.M.-G.; Writing—Review and editing, V.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Acknowledgments

The authors would like to thank the Instituto Politecnico Nacional (Mexico), Comision de Operacion y Fomento de Actividades Academicas (COFAA) of IPN and the Consejo Nacional de Ciencia y Tecnologia (Mexico) for their support in this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, C.; Wang, Y.; Ma, B.; Zhang, Z. A novel self-recovery fragile watermarking scheme based on dual-redundant-ring structure. Comput. Electr. Eng. 2011, 37, 927–940. [Google Scholar] [CrossRef]
  2. Singh, D.; Singh, S.K. Effective self-embedding watermarking scheme for image tampered detection and localization with recovery capability. J. Vis. Commun. Image Represent. 2016, 38, 775–789. [Google Scholar] [CrossRef]
  3. Wu, W.-C.; Lin, Z.-W. SVD-based self-embedding image authentication scheme using quick response code features. J. Vis. Commun. Image Represent. 2016, 38, 18–28. [Google Scholar] [CrossRef]
  4. Qi, X.; Xin, X. A quantization-based semi-fragile watermarking scheme for image content authentication. J. Vis. Commun. Image Represent. 2011, 22, 187–200. [Google Scholar] [CrossRef]
  5. Chang, C.-C.; Chen, K.-N.; Lee, C.-F.; Liu, L.-J. A secure fragile watermarking scheme based on chaos-and-hamming code. J. Syst. Softw. 2011, 84, 1462–1470. [Google Scholar] [CrossRef]
  6. Liu, S.-H.; Yao, H.-X.; Gao, W.; Liu, Y.-L. An image fragile watermark scheme based on chaotic image pattern and pixel-pairs. Appl. Math. Comput. 2007, 185, 869–882. [Google Scholar] [CrossRef]
  7. Chaluvadi, S.B.; Prasad, M.V.N.K. Efficient image tamper detection and recovery technique using dual watermark. In 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC); Institute of Electrical and Electronics Engineers (IEEE): New York, NY, USA, 2009; pp. 993–998. [Google Scholar]
  8. Zhang, X.; Qian, Z.; Ren, Y.; Feng, G. Watermarking with Flexible Self-Recovery Quality Based on Compressive Sensing and Compositive Reconstruction. IEEE Trans. Inf. Forensics Secur. 2011, 6, 1223–1232. [Google Scholar] [CrossRef]
  9. He, H.; Chen, F.; Tai, H.-M.; Kalker, T.; Zhang, J. Performance Analysis of a Block-Neighborhood-Based Self-Recovery Fragile Watermarking Scheme. IEEE Trans. Inf. Forensics Secur. 2011, 7, 185–196. [Google Scholar] [CrossRef]
  10. Tong, X.; Liu, Y.; Zhang, M.; Chen, Y. A novel chaos-based fragile watermarking for image tampering detection and self-recovery. Signal Process. Image Commun. 2013, 28, 301–308. [Google Scholar] [CrossRef]
  11. Qian, Z.; Feng, G. Inpainting Assisted Self Recovery With Decreased Embedding Data. IEEE Signal Process. Lett. 2010, 17, 929–932. [Google Scholar] [CrossRef]
  12. Qin, C.; Chang, C.-C.; Chen, K.-N. Adaptive self-recovery for tampered images based on VQ indexing and inpainting. Signal Process. 2013, 93, 933–946. [Google Scholar] [CrossRef]
  13. Li, C.; Wang, Y.; Ma, B.; Zhang, Z. Tamper detection and self-recovery of biometric images using salient region-based authentication watermarking scheme. Comput. Stand. Interfaces 2012, 34, 367–379. [Google Scholar] [CrossRef]
  14. He, H.-J.; Zhang, J.-S.; Tai, H.-M. Self-recovery Fragile Watermarking Using Block-Neighborhood Tampering Characterization. In Computer Vision; Springer International Publishing: Berlin/Heidelberg, Germany, 2009; Volume 5806, pp. 132–145. [Google Scholar]
  15. Dadkhah, S.; Manaf, A.A.; Hori, Y.; Hassanien, A.E.; Sadeghi, S. An effective SVD-based image tampering detection and self-recovery using active watermarking. Signal Process. Image Commun. 2014, 29, 1197–1210. [Google Scholar] [CrossRef]
  16. Zhang, X.; Wang, S.; Qian, Z.; Feng, G. Reference Sharing Mechanism for Watermark Self-Embedding. IEEE Trans. Image Process. 2011, 20, 485–495. [Google Scholar] [CrossRef] [PubMed]
  17. Lee, T.-Y.; Lin, S.D. Dual watermark for image tamper detection and recovery. Pattern Recognit. 2008, 41, 3497–3506. [Google Scholar] [CrossRef]
  18. Zhang, X.; Wang, S.; Qian, Z.; Feng, G. Self-embedding watermark with flexible restoration quality. Multimed. Tools Appl. 2011, 54, 385–395. [Google Scholar] [CrossRef]
  19. Bravo-Solorio, S.; Nandi, A.K. Secure fragile watermarking method for image authentication with improved tampering localisation and self-recovery capabilities. Signal Process. 2011, 91, 728–739. [Google Scholar] [CrossRef]
  20. Wang, C.-P.; Wang, X.-Y.; Xia, Z.-Q.; Zhang, C.; Chen, X.-J. Geometrically resilient color image zero-watermarking algorithm based on quaternion Exponent moments. J. Vis. Commun. Image Represent. 2016, 41, 247–259. [Google Scholar] [CrossRef]
  21. Hsu, L.-Y.; Hu, H.-T. Blind image watermarking via exploitation of inter-block prediction and visibility threshold in DCT domain. J. Vis. Commun. Image Represent. 2015, 32, 130–143. [Google Scholar] [CrossRef]
  22. Munoz-Ramirez, D.O.; Reyes-Reyes, R.; Ponomaryov, V.; Cruz-Ramos, C. Invisible digital color watermarking technique in anaglyph 3D images. In Proceedings of the 2015 12th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE), Mexico City, Mexico, 28–30 October 2015; pp. 1–6. [Google Scholar]
  23. Fan, M.; Wang, H. An enhanced fragile watermarking scheme to digital image protection and self-recovery. Signal Process. Image Commun. 2018, 66, 19–29. [Google Scholar] [CrossRef]
  24. Tai, W.-L.; Liao, Z.-J. Image self-recovery with watermark self-embedding. Signal Process. Image Commun. 2018, 65, 11–25. [Google Scholar] [CrossRef]
  25. Chamlawi, R.; Khan, A.; Usman, I. Authentication and recovery of images using multiple watermarks. Comput. Electr. Eng. 2010, 36, 578–584. [Google Scholar] [CrossRef]
  26. Chamlawi, R.; Khan, A. Digital image authentication and recovery: Employing integer transform based information embedding and extraction. Inf. Sci. 2010, 180, 4909–4928. [Google Scholar] [CrossRef]
  27. Chamlawi, R.; Khan, A.; Idris, A. Wavelet Based Image Authentication and Recovery. J. Comput. Sci. Technol. 2007, 22, 795–804. [Google Scholar] [CrossRef]
  28. Qi, X.; Xin, X. A singular-value-based semi-fragile watermarking scheme for image content authentication with tamper localization. J. Vis. Commun. Image Represent. 2015, 30, 312–327. [Google Scholar] [CrossRef]
  29. Preda, R.O. Semi-fragile watermarking for image authentication with sensitive tamper localization in the wavelet domain. Measurement 2013, 46, 367–373. [Google Scholar] [CrossRef]
  30. Molina-Garcia, J.; Reyes-Reyes, R.; Ponomaryov, V.; Cruz-Ramos, C. Watermarking algorithm for authentication and self-recovery of tampered images using DWT. In Proceedings of the 2016 9th International Kharkiv Symposium on Physics and Engineering of Microwaves, Millimeter and Submillimeter Waves (MSMW), Kharkiv, Ukraine, 20–24 June 2016; pp. 1–4. [Google Scholar] [CrossRef]
  31. Horng, S.-J.; Rosiyadi, D.; Li, T.; Takao, T.; Guo, M.; Khan, M.K. A blind image copyright protection scheme for e-government. J. Vis. Commun. Image Represent. 2013, 24, 1099–1105. [Google Scholar] [CrossRef]
  32. Wang, X.-Y.; Liu, Y.-N.; Han, M.-M.; Yang, H.-Y. Local quaternion PHT based robust color image watermarking algorithm. J. Vis. Commun. Image Represent. 2016, 38, 678–694. [Google Scholar] [CrossRef]
  33. Dutta, T.; Gupta, H.P. A robust watermarking framework for High Efficiency Video Coding (HEVC)–Encoded video with blind extraction process. J. Vis. Commun. Image Represent. 2016, 38, 29–44. [Google Scholar] [CrossRef]
  34. Wang, R.-Z.; Lin, C.-F.; Lin, J.-C. Hiding data in images by optimal moderately-significant-bit replacement. Electron. Lett. 2000, 36, 2069. [Google Scholar] [CrossRef]
  35. Ponomarenko, N.; Silvestri, F.; Egiazarian, K.; Carli, M.; Astola, J.; Lukin, V. On between coefficient contrast masking of DCT basis functions, CD-ROM. In Proceedings of the Third International Workshop on Video Processing and Quality Metrics for Consumer Electronics, Scottsdale, AZ, USA, 25–26 January 2007. [Google Scholar]
  36. Molina-Garcia, J.; Garcia-Salgado, B.P.; Ponomaryov, V.; Reyes-Reyes, R.; Sadovnychiy, S.; Cruz-Ramos, C. An effective fragile watermarking scheme for color image tampering detection and self-recovery. Signal Process. Image Commun. 2020, 81, 115725. [Google Scholar] [CrossRef]
  37. Lin, C.-C.; He, S.-L.; Chang, C.-C. Pixel Pair-Wise Fragile Image Watermarking Based on HC-Based Absolute Moment Block Truncation Coding. Electronics 2021, 10, 690. [Google Scholar] [CrossRef]
  38. Kim, C.; Yang, C.-N. Self-Embedding Fragile Watermarking Scheme to Detect Image Tampering Using AMBTC and OPAP Approaches. Appl. Sci. 2021, 11, 1146. [Google Scholar] [CrossRef]
  39. Lee, C.-F.; Shen, J.-J.; Chen, Z.-R.; Agrawal, S. Self-Embedding Authentication Watermarking with Effective Tampered Location Detection and High-Quality Image Recovery. Sensors 2019, 19, 2267. [Google Scholar] [CrossRef]
  40. Kodak Photo CD, Photo Sampler. Available online: http://www.math.purdue.edu/~lucier/PHOTO_CD/ (accessed on 1 March 2021).
  41. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  42. Kwon, H.; Yoon, H.; Park, K.-W. Acoustic-decoy: Detection of adversarial examples through audio modification on speech recognition system. Neurocomputing 2020, 417, 357–370. [Google Scholar] [CrossRef]
  43. Kwon, H.; Kim, Y.; Yoon, H.; Choi, D. Random Untargeted Adversarial Example on Deep Neural Network. Symmetry 2018, 10, 738. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
