Article

Spatial-Perceptual Embedding with Robust Just Noticeable Difference Model for Color Image Watermarking

1 School of Information and Engineering, Shandong Normal University, Jinan 250014, China
2 School of Mechanical and Electrical Engineering, Shandong Management University, Jinan 250014, China
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(9), 1506; https://doi.org/10.3390/math8091506
Submission received: 30 July 2020 / Revised: 2 September 2020 / Accepted: 2 September 2020 / Published: 4 September 2020
(This article belongs to the Special Issue Computing Methods in Steganography and Multimedia Security)

Abstract

In the robust image watermarking framework, watermarks are usually embedded in the direct current (DC) coefficients of the discrete cosine transform (DCT) domain, since the DC coefficients have a larger perceptual capacity than any alternating current (AC) coefficient. However, DC coefficients are often excluded from watermark embedding to avoid block artifacts in watermarked images. Studies on human vision suggest that exploiting perceptual characteristics can achieve better image fidelity. With this perspective, we propose a novel spatial–perceptual embedding for color image watermarking under the guidance of a robust just-noticeable difference (JND) model. A logarithmic transform function is used for quantization embedding, and an adaptive quantization step is modeled by incorporating partial AC coefficients. The novelty and effectiveness of the proposed framework are supported by the JND perceptual guidance applied to spatial pixels. Experiments validate that the proposed watermarking algorithm achieves significantly better performance.

1. Introduction

Nowadays, with the rapid development of information technology, the storage, replication, and dissemination of digital multimedia have become easier, and multimedia copyright protection has become a problem of great importance. As a branch of information-hiding technology, digital watermarking plays a significant role in a wide range of applications, including content protection, digital rights management (DRM), and media system monitoring [1,2,3]. Digital watermarking protects copyright by hiding watermark information in the original digital content. In recent years, digital watermarking, especially image watermarking, has attracted wide attention from researchers in the field of data hiding [4,5,6].
Digital watermarking technology, especially robust watermarking, has been widely used and developed for more than twenty years [7,8]. According to the processing domain of the original image, existing robust watermarking techniques can be divided into two categories: spatial domain watermarking and transform domain watermarking. Spatial domain watermarking embeds the watermark information by directly modifying image pixel values. Transform domain watermarking converts the image into a frequency domain space and embeds the watermark information by modifying the frequency domain coefficients. In general, image watermarking algorithms are implemented in the new domain through transform methods such as the discrete cosine transform (DCT) [9,10,11], the discrete Fourier transform (DFT) [12,13], and the discrete wavelet transform (DWT) [14,15].
Nowadays, JPEG images are widely used and disseminated. Since the DCT offers excellent energy compaction for highly correlated image data, many watermarking algorithms in the DCT domain have been proposed. Lin et al. [16] embedded watermark information into a selected image block by varying the value of a low-frequency DCT coefficient; although the method resists JPEG compression well, the watermarked image suffers some visual distortion. Das et al. [11] proposed a method that changes DCT coefficients using the relationship between blocks to embed watermark information, but its resistance to noise attacks needs improvement. In [17], it was shown that it is feasible to embed the watermark into the direct current (DC) component of the DCT domain. Recently, Su proposed a series of methods [18,19,20] based on the combination of spatial embedding and DC quantization, where each pixel value in the spatial domain is modified directly according to the DC modification caused by watermark embedding. The watermarking energy in the frequency domain is evenly distributed over the pixel values in the spatial domain; that is, for a certain image block, the amount of modification to each pixel value is the same. Therefore, this approach has been criticized for not correlating well with perceived quality [21,22,23,24].
Since the human visual system's perception of images is uneven and non-linear, it cannot perceive certain changes in an image. Therefore, much work has been done to understand the human visual system (HVS) and apply it to image watermarking. The just-noticeable difference (JND) refers to the maximum distortion that cannot be perceived by the human eye, so it is very important and advantageous to incorporate JND into watermark-embedding algorithms [25,26,27,28,29,30,31]. Ma et al. [32] proposed a watermarking method that uses Watson's JND model to compute the quantization step size. More recently, Wang et al. [9] used another robust JND model to improve a perceptual watermarking algorithm. Moreover, in the watermarking framework, traditional dither modulation (DM), a blind method that can be formulated as information transmission over a channel with side information at the encoder, is used for watermark embedding due to its implementation and computational flexibility. It is well known that quantization methods with a fixed quantization step are very sensitive to the volumetric attack [18,19].
In order to fix the problems discussed above, a robust DCT-domain JND model is presented and transformed to the spatial domain. It can estimate the explicit JND threshold for the individual pixel. Furthermore, the logarithmic function is applied for the DC value, and an optimum quantization step is derived based on different block types. Simulation results show that the proposed spatial–perceptual embedding scheme can obtain a better perceptual quality and overcome the drawbacks of volumetric attack sensitivity. It also has an enhanced performance against common attacks. The main contributions of this paper include the following:
1. The logarithmic function is applied to the DC value to resist the sensitivity to the volumetric attack. Moreover, an optimal quantization step is designed with a partial alternating current (AC) value-based block classification method. For each block type, a different masking weight is applied;
2. An improved DCT-based JND model is considered to estimate the explicit thresholds from the quantization-based watermark embedding for DC coefficients. Consequently, the pixel value can be updated with better perceptual performance;
3. As the partial AC values are introduced to classify the block types and watermark embedding can alter the original image, the corresponding values in the proposed DCT-based JND profile would be set to zero, which can compensate for the robustness.
The rest of the paper is organized as follows: In Section 2, we introduce the related work on the JND model. In Section 3, a new perceptual watermarking strategy for image watermarking is proposed. The experimental results are shown and discussed in Section 4. Section 5 contains the concluding remarks.

2. Related Work

2.1. JND Modeling

JND, which refers to the minimum visibility threshold of the HVS, is determined by the underlying physiological and psychophysical mechanisms. The existing JND models are divided into two categories according to the calculated domain for the JND threshold. One category is the pixel-wise domain, which can directly calculate the JND threshold for an image pixel [22,33]. Another category is the subband-domain, for example, the DCT domain [21,24].
It is noted that the Contrast Sensitivity Function (CSF), Luminance Adaptation (LA), and Contrast Masking (CM) are important contributing factors for JND in images. Pixel domain JND models are obtained directly in the pixel domain and only consider the LA and CM effects. As the HVS highly depends upon contrast sensitivity, existing DCT-based JND models estimate contrast sensitivity using the CSF in the DCT domain. Unlike pixel domain JND models, DCT domain JND models incorporate the spatial CSF and are therefore more consistent with the HVS. In addition, the DCT-based JND model is more robust when applied to the proposed watermarking algorithm than the pixel-based JND model, as can be clearly seen in Section 2.1.4. Recently, a sophisticated perceptual DCT-based JND model [24], which builds a regularity-based CM factor, was proposed. The corresponding product form for the k-th block of size 8 × 8 can be expressed as
$$T_{JND}^{k}(x,y) = T_{base}^{k}(x,y) \cdot F_{LA}^{k}(x,y) \cdot F_{CM}^{k}(x,y), \qquad (1)$$
where $T_{JND}^{k}(x,y)$ is the JND threshold for the k-th block, $T_{base}^{k}(x,y)$ is the base threshold based on the spatial CSF for the k-th block, and the modulation factors are the LA factor $F_{LA}^{k}(x,y)$ and the CM factor $F_{CM}^{k}(x,y)$. $x$ and $y$ are the indices ($x, y = 0, 1, \ldots, 7$) in a block.

2.1.1. The Baseline CSF Threshold

In addition to considering the spatial frequency effect for CSF value, we also need to consider several factors such as the oblique effect factor and the spatial summation effect factor [34]. Thus, for k-th DCT block, the corresponding baseline sensitivity can be given as
$$T_{base}^{k}(x,y) = \frac{s}{A(\omega_{x,y})} \cdot \frac{G}{\phi_x \, \phi_y \, (L_{\max} - L_{\min})} \cdot \frac{1}{r + (1-r)\cos^{2}\varphi_{x,y}}, \qquad (2)$$
where $s$ is a parameter set to 0.25 in [34], $G$ is the number of gray levels, set to 255, and $L_{\max}$ and $L_{\min}$ are the display luminance values corresponding to the maximum and minimum gray levels. $\phi_x$ and $\phi_y$ are the DCT normalization factors, which can be obtained by Equations (3) and (4)
$$\phi_x = \begin{cases} \sqrt{1/8}, & \text{if } x = 0 \\ \sqrt{2/8}, & \text{if } 1 \le x \le 7 \end{cases} \qquad (3)$$
$$\phi_y = \begin{cases} \sqrt{1/8}, & \text{if } y = 0 \\ \sqrt{2/8}, & \text{if } 1 \le y \le 7 \end{cases} \qquad (4)$$
$A(\omega)$ is the CSF from [35], which can be expressed as
$$A(\omega) = a \cdot (b + c \cdot \omega) \cdot \exp\!\left(-(c \cdot \omega)^{d}\right), \qquad (5)$$
where $\omega$ (cycles/degree) is the spatial frequency, and the empirical constants $a$, $b$, $c$, and $d$ are set to 2.6, 0.0192, 0.114, and 1.1 [24], respectively. The spatial CSF curve is shown in Figure 1. The spatial CSF describes the sensitivity of the HVS to spatial frequency; pixel-based JND models cannot incorporate the CSF because the spatial CSF can be estimated only in the frequency domain. DCT-based JND models can incorporate the spatial CSF effect of the HVS into JND modeling, and thus usually show better performance than pixel-domain JND models.
For the $(x,y)$ sub-band in the DCT block, the corresponding frequency $\omega_{x,y}$ can be calculated from the horizontal/vertical length of a pixel in degrees of visual angle
$$\omega_{x,y} = \frac{\sqrt{x^{2} + y^{2}}}{2M\theta}, \qquad (6)$$
$$\theta = \arctan\!\left(\frac{1}{2 \cdot R_{vh} \cdot h}\right), \qquad (7)$$
where $M$ is the size of the non-overlapped DCT block ($M$ is 8 in this case), $\theta$ is the horizontal/vertical visual angle of a pixel, $R_{vh}$ is the ratio of the viewing distance to the screen height, and $h$ is the number of pixels in the screen height. The term $1/\left(r + (1-r)\cos^{2}\varphi_{x,y}\right)$ accounts for the oblique effect, and $r$ is set to 0.6 in [36]. $\varphi_{x,y}$ represents the direction angle of the corresponding DCT component, which is expressed as
$$\varphi_{x,y} = \arcsin\!\left(\frac{2 \cdot \omega_{x,0} \cdot \omega_{0,y}}{\omega_{x,y}^{2}}\right). \qquad (8)$$
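For illustration, the following is a minimal Python sketch of the baseline threshold computation in Equations (2)–(8). The CSF constants, $s$, $r$, and $G$ come from the text above; the display parameters (L_max, L_min, R_vh, h) are illustrative assumptions, since the paper does not report the exact viewing conditions.

```python
import numpy as np

def base_threshold(M=8, s=0.25, r=0.6, G=255,
                   L_max=175.0, L_min=0.0, R_vh=3.0, h=512):
    """Sketch of the baseline CSF threshold T_base for one 8x8 DCT block.

    L_max, L_min, R_vh and h are illustrative display assumptions; the paper
    only fixes s, r, G and the CSF constants a, b, c, d.
    """
    a, b, c, d = 2.6, 0.0192, 0.114, 1.1                       # CSF constants (Eq. 5)
    theta = np.arctan(1.0 / (2.0 * R_vh * h))                  # visual angle of a pixel (Eq. 7)
    phi = np.array([np.sqrt(1 / 8)] + [np.sqrt(2 / 8)] * 7)    # DCT normalization (Eqs. 3-4)

    T = np.zeros((M, M))
    for x in range(M):
        for y in range(M):
            w = np.sqrt(x ** 2 + y ** 2) / (2 * M * theta)     # spatial frequency (Eq. 6)
            A = a * (b + c * w) * np.exp(-(c * w) ** d)        # CSF value (Eq. 5)
            if w > 0:
                wx, wy = x / (2 * M * theta), y / (2 * M * theta)
                phi_xy = np.arcsin(np.clip(2 * wx * wy / w ** 2, -1, 1))  # Eq. (8)
            else:
                phi_xy = 0.0
            oblique = 1.0 / (r + (1 - r) * np.cos(phi_xy) ** 2)
            T[x, y] = (s / A) * (G / (phi[x] * phi[y] * (L_max - L_min))) * oblique
    return T
```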

2.1.2. Luminance Adaptation

The luminance masking threshold usually depends on the background brightness of a local region; the brighter the background is, the higher the masking value is. The luminance modulation factor $F_{LA}$, which follows [21], is calculated as
$$F_{LA}^{k}(x,y) = \begin{cases} (60 - \mu_p)/150 + 1, & \mu_p \le 60 \\ 1, & 60 < \mu_p < 170 \\ (\mu_p - 170)/425 + 1, & \mu_p \ge 170 \end{cases} \qquad (9)$$
where $\mu_p$ is the average intensity value of the block.
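As a quick sketch, the piecewise luminance adaptation factor of Equation (9) can be written directly in Python:

```python
def luminance_adaptation(mu_p):
    """Luminance adaptation factor F_LA of Eq. (9) for a block whose
    average intensity mu_p lies in [0, 255]."""
    if mu_p <= 60:
        return (60 - mu_p) / 150 + 1
    elif mu_p < 170:
        return 1.0
    else:
        return (mu_p - 170) / 425 + 1

# Example: a dark block receives a higher masking value than a mid-gray one.
print(luminance_adaptation(30), luminance_adaptation(120), luminance_adaptation(200))
```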

2.1.3. Contrast Masking

As is well known, when the human visual system observes image blocks, different visual masking effects appear according to the type of image block. Therefore, it is necessary to classify image blocks reasonably and calculate their contrast-masking effect. We adopt three AC coefficients to divide the image blocks into three types—smooth, edge, and texture—as detailed in Section 3.3. Combined with the use of DCT coefficients in [24] to classify the direction of image blocks, 11 types of image blocks are obtained. Since the HVS has different sensitivities in different regions, the contrast masking values also vary with the block type. The three masking patterns for the direction of image blocks in [24] are given as
$$M_h(x,y) = 0.8 - \beta \cdot y, \qquad (10)$$
$$M_v(x,y) = 0.8 - \beta \cdot x, \qquad (11)$$
$$M_d(x,y) = 0.8 - \beta \cdot \max(x,y), \qquad (12)$$
where the constant $\beta$ is set to 0.1. The extent of the contrast masking effect [24] is measured as
$$\Gamma = \begin{cases}
1, & \text{for smooth block} \\
1 + M_h, & \text{for horizontal direction of the orderly edge block} \\
1 + M_v, & \text{for vertical direction of the orderly edge block} \\
1 + M_d, & \text{for diagonal direction of the orderly edge block} \\
1 + 0.75, & \text{for } x^{2} + y^{2} \le 16 \text{ in disorderly edge block} \\
1 + 0.25, & \text{for } x^{2} + y^{2} > 16 \text{ in disorderly edge block} \\
2 + M_h, & \text{for horizontal direction of the orderly texture block} \\
2 + M_v, & \text{for vertical direction of the orderly texture block} \\
2 + M_d, & \text{for diagonal direction of the orderly texture block} \\
3 + 1.75, & \text{for } x^{2} + y^{2} \le 16 \text{ in disorderly texture block} \\
3 + 1.25, & \text{for } x^{2} + y^{2} > 16 \text{ in disorderly texture block}
\end{cases} \qquad (13)$$
where the constants 1, 2, and 3 indicate that different block types have different influence weights in the contrast-masking evaluation. The constant 0.75 represents the sensitivity of the HVS to high-frequency information, and the constant 0.25 reflects the sensitivity of the HVS to low-frequency information. These values are determined in [24]. At the same time, the influence of in-band masking and inter-band masking on contrast masking is considered; the final contrast masking factor [37] can be calculated by
$$F_{CM} = \begin{cases}
\Gamma, & \text{for } x^{2} + y^{2} \le 16 \text{ in smooth and orderly edge block} \\
\Gamma \cdot \min\!\left(4, \max\!\left(1, \left(\dfrac{|D^{k}(x,y)|}{T_{base} \cdot F_{LA}}\right)^{\sigma}\right)\right), & \text{others}
\end{cases} \qquad (14)$$
where $D^{k}(x,y)$ is the $(x,y)$-th DCT coefficient in the k-th block, and $\sigma$ is set to 0.36 [37].
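A minimal sketch of the inter-band clamping in Equation (14) is given below, assuming the block-type decision (which fixes Γ and the protected low-frequency mask) has already been made elsewhere; the min/max form follows the reading of Equation (14) above.

```python
import numpy as np

def contrast_masking(D_block, T_base, F_LA, gamma, protected_mask, sigma=0.36):
    """Contrast masking factor F_CM of Eq. (14).

    D_block, T_base, F_LA: 8x8 arrays of DCT coefficients, base thresholds and
    luminance adaptation factors; gamma: 8x8 array of block-type weights (Eq. 13);
    protected_mask: boolean 8x8 array, True where the coefficient belongs to the
    low-frequency part of a smooth or orderly edge block.
    """
    ratio = np.abs(D_block) / (T_base * F_LA)
    clamped = gamma * np.minimum(4.0, np.maximum(1.0, ratio ** sigma))
    return np.where(protected_mask, gamma, clamped)
```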

2.1.4. Robust JND Model

With all the considerations mentioned above, a sophisticated perceptual JND model is obtained, which not only perceives a variety of pixel changes in the spatial block [33] but also keeps the partial AC coefficients invariant under watermark embedding. Because block classification operates on partial AC values, if a pixel domain JND model is applied to the watermarking algorithm, the block classification will be inaccurate when extracting the watermark information. Although this is also the case for the DCT domain JND model, the corresponding AC coefficient values in $T_{JND}^{k}(x,y)$ can be set to zero, which compensates for the robustness. Therefore, the JND threshold for a DCT subband is improved in Equation (15) as
$$T_{JND}^{k*}(x,y) = \begin{cases} 0, & \text{for } x + y \in \{0, 2\} \text{ in the block} \\ T_{JND}^{k}(x,y), & \text{others} \end{cases} \qquad (15)$$
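The zeroing in Equation (15) is a simple masking step; a sketch in Python, assuming 8 × 8 JND blocks stored as NumPy arrays:

```python
import numpy as np

def robust_jnd(T_jnd):
    """Apply Eq. (15): zero the positions with x + y in {0, 2} of an 8x8 JND
    block so that the coefficients used for block classification are not
    disturbed by the spatially spread watermark energy."""
    T = T_jnd.copy()
    for x in range(8):
        for y in range(8):
            if x + y in (0, 2):
                T[x, y] = 0.0
    return T
```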

3. Proposed Watermarking Scheme

3.1. Watermark Embedding Scheme

Because of the high correlation between the three channels of the RGB space, there is a lot of redundant information between the color channels, so the YCbCr space is used to embed the watermark information. For an RGB color image, we convert it from the RGB color space into the YCbCr color space. The Y channel, which is most consistent with the HVS, is selected for embedding the watermark information. Different from Su's embedding algorithm [18], we use a logarithmic DM embedding algorithm. The flow diagram of the watermarking scheme is given in Figure 2. Firstly, the RGB color image I is converted into a YCbCr image by Equation (16)
$$\begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix} = \begin{bmatrix} 0.2990 & 0.5870 & 0.1140 \\ -0.1690 & -0.3310 & 0.5000 \\ 0.5000 & -0.4190 & -0.0810 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} + \begin{bmatrix} 0 \\ 128 \\ 128 \end{bmatrix}. \qquad (16)$$
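For reference, a minimal NumPy sketch of the conversion in Equation (16), operating on an H × W × 3 RGB array, is:

```python
import numpy as np

def rgb_to_ycbcr(img_rgb):
    """Convert an HxWx3 RGB image to YCbCr with the matrix of Eq. (16)."""
    M = np.array([[ 0.2990,  0.5870,  0.1140],
                  [-0.1690, -0.3310,  0.5000],
                  [ 0.5000, -0.4190, -0.0810]])
    offset = np.array([0.0, 128.0, 128.0])
    return img_rgb.astype(np.float64) @ M.T + offset
```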
It has been demonstrated that the DC component can be used to carry watermarks for better robustness of the watermarking scheme [17], since common image-processing procedures by which watermarked images may be attacked, such as noise, compression, and filtering, change the DC components less than the AC components. The Y channel is divided into non-overlapped blocks of size 8 × 8. For the k-th block, the DC coefficient $A^{k}$ can be obtained by the DCT. The DCT, as the transform used in the JPEG compression standard, has been widely used in watermarking schemes. Generally, an image in the spatial domain can be converted into the DCT domain by the 2D block DCT, and the image in the DCT domain can be inverted back to the original image by the inverse 2D DCT. For the k-th image block $R^{k}(i,j)$ ($i, j = 0, 1, \ldots, 7$), the 2D DCT is given as follows
$$D^{k}(x,y) = \phi_x \, \phi_y \sum_{i=0}^{7} \sum_{j=0}^{7} R^{k}(i,j) \cos\frac{\pi (2i+1) x}{2 \times 8} \cos\frac{\pi (2j+1) y}{2 \times 8}, \qquad (17)$$
where $x$ and $y$ are the horizontal and vertical frequency indices ($x, y = 0, 1, \ldots, 7$), $\phi_x$ and $\phi_y$ are the DCT normalization factors obtained by Equations (3) and (4), $R^{k}(i,j)$ is the original pixel of the k-th image block, $(i,j)$ are the spatial indices ($i, j = 0, 1, \ldots, 7$) in the block, $D^{k}(x,y)$ are the DCT coefficients of the image block $R^{k}(i,j)$, and $D^{k}(0,0)$ is the DC coefficient $A^{k}$.
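The per-block transform of Equation (17) corresponds to the orthonormal 2D DCT-II; a sketch using SciPy, assuming 8 × 8 blocks as NumPy arrays:

```python
import numpy as np
from scipy.fft import dctn, idctn

def block_dct(R_k):
    """Orthonormal 8x8 2D DCT of one image block (Eq. 17); D[0, 0] is the
    DC coefficient A_k."""
    return dctn(R_k.astype(np.float64), norm='ortho')

def block_idct(D_k):
    """Inverse 2D DCT back to the pixel domain."""
    return idctn(D_k, norm='ortho')
```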
Then, $A^{k}$ is transformed according to the following novel logarithmic function
$$Y^{k} = \frac{\ln\!\left(1 + \mu \dfrac{A^{k}}{C_0}\right)}{\ln(1 + \mu)}, \quad \mu > 0, \; C_0 > 0, \qquad (18)$$
where $A^{k}$ is the DC coefficient of the k-th image block, $C_0$ is the mean intensity of the whole image, and $\mu$ is a parameter, set to 0.02 here. The transformed signal $Y^{k}$ is then quantized for watermark embedding by dither modulation (DM) according to the watermark bit m, as follows
$$Y_w^{k} = Q\!\left(Y^{k}, \Delta_k, m, d_m\right) = \Delta_k \cdot \mathrm{round}\!\left(\frac{Y^{k} + d_m}{\Delta_k}\right) - d_m, \quad m \in \{0, 1\}, \qquad (19)$$
where $\Delta_k$ is the adaptive quantization step for the k-th image block, and $d_m$ is the dither signal corresponding to the watermark bit m.
The watermarked DC coefficient $A_w^{k}$ is obtained by applying the inverse transform to the quantized data $Y_w^{k}$, as follows
$$A_w^{k} = \frac{C_0}{\mu}\left((1 + \mu)^{Y_w^{k}} - 1\right). \qquad (20)$$
Thus, the modification $E^{k}$ in Equation (21), obtained by the logarithmic DM for the k-th block, represents the energy of the watermark information,
$$E^{k} = A_w^{k} - A^{k}, \qquad (21)$$
where $A^{k}$ is the DC coefficient of the k-th image block and $A_w^{k}$ is the watermarked DC coefficient after modification.
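The embedding of one bit into one DC coefficient (Equations (18)–(21)) can be sketched as follows; the concrete dither design $d_1 = d_0 + \Delta_k/2$ is our assumption for illustration, since the paper does not spell out how the dither values are chosen.

```python
import numpy as np

def embed_dc(A_k, C0, delta_k, m, d0=0.0, mu=0.02):
    """Logarithmic dither modulation of one DC coefficient (Eqs. 18-21).

    Returns the watermarked DC value A_wk and the modification E_k.
    The dither pair (d0, d0 + delta_k/2) is an illustrative assumption.
    """
    dm = d0 + delta_k / 2.0 if m == 1 else d0
    Y = np.log(1 + mu * A_k / C0) / np.log(1 + mu)        # Eq. (18)
    Yw = delta_k * np.round((Y + dm) / delta_k) - dm       # Eq. (19)
    A_wk = (C0 / mu) * ((1 + mu) ** Yw - 1)                # Eq. (20)
    return A_wk, A_wk - A_k                                # Eq. (21)
```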
The total modification $M \cdot E^{k}$ for the k-th block can be distributed over the spatial block [18]. In [18], the pixel values have the same modification amount within each block, which ignores the perception of the HVS for different pixel values. The smooth, edge, and texture areas of the image are not well considered, and there is no good correlation with the HVS, which inevitably leads to distortion in the spatial domain. JND gives us a promising way to guide the pixel changes. Consequently, the perceptual pixel changes are more consistent with the HVS, and the watermarked pixel is obtained according to JND perceptual guidance instead of uniform changes to individual pixels. According to the inverse transformation from the DCT domain to the pixel domain, we can allocate the energy onto each pixel with this cross-domain JND operation. Consequently, $M \cdot E^{k}$ can be distributed over all pixels in the k-th block under the guidance of the cross-domain JND, as in Equation (22)
$$R^{k*}(i,j) = R^{k}(i,j) + \frac{\mathrm{IDCT}\!\left(T_{JND}^{k*}(i,j)\right)}{\mathrm{sum}\!\left(\mathrm{IDCT}\!\left(T_{JND}^{k*}(i,j)\right)\right)} \cdot \left(M \cdot E^{k}\right), \qquad (22)$$
where $R^{k*}(i,j)$ is the pixel of the watermarked image block in the spatial domain after watermark embedding, $R^{k}(i,j)$ is the original pixel of the k-th image block, and $(i,j)$ are the spatial indices ($i, j = 0, 1, \ldots, 7$) in the block. $\mathrm{sum}\!\left(\mathrm{IDCT}\!\left(T_{JND}^{k*}(i,j)\right)\right)$ is the total JND threshold of the k-th block in the spatial domain.
Repeating the same operation for all non-overlapped blocks, the watermark information is embedded in the Y channel. Then, the watermarked YCbCr image is transformed back to the RGB color watermarked image $I_w$ as follows
$$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 1 & 0.0000 & 1.4030 \\ 1 & -0.3440 & -0.7140 \\ 1 & 1.7730 & 0.0000 \end{bmatrix} \times \begin{bmatrix} Y \\ C_b - 128 \\ C_r - 128 \end{bmatrix}. \qquad (23)$$
The main steps of the watermark embedding scheme are described in Algorithm 1.
Algorithm 1 Watermark Embedding
Input: The host image I; watermark message m;
Output: Watermarked image I_w;
1: The RGB color image is transformed to the YCbCr color space by Equation (16). The Y channel is regarded as the watermark embedding channel;
2: Divide the Y channel image into 8 × 8 non-overlapped blocks, and perform the DCT transform for each block;
3: for all blocks do
4:  Use the three AC coefficients to obtain the adaptive quantization step by Equations (26)–(29);
5:  Estimate the perceptual JND factors including the spatial CSF effect, luminance adaptation and contrast masking by Equations (2), (9) and (14), respectively;
6:  The final robust JND model can be calculated by Equation (15);
7:  Obtain the DC coefficient by Equation (17). One part of the watermark message m is embedded into the DC coefficient by Equations (18)–(20);
8:  Use the JND model to guide the amount of modification to each pixel in each image block by Equation (22);
9:  Generate the modified block B*;
10: end for
11: Generate the watermarked image Y by collecting all the modified blocks B*;
12: Generate the watermarked color image by concatenating the modified Y with the Cb and Cr image channels and then convert the color space from YCbCr to RGB by Equation (23);
13: return Watermarked image I_w;

3.2. Watermark Extracting Scheme

Firstly, the RGB color watermarked image $I_w$ is converted into a YCbCr image by Equation (16), and the watermarked channel Y is divided into non-overlapped 8 × 8 pixel blocks. In the extraction process, for the k-th block, the DC coefficient $A_w^{k}$ can be obtained by Equation (17). Then, the DC coefficient is transformed to the signal $Y_w^{k}$ by the logarithmic function
$$Y_w^{k} = \frac{\ln\!\left(1 + \mu \dfrac{A_w^{k}}{C_{w0}}\right)}{\ln(1 + \mu)}, \quad \mu > 0, \; C_{w0} > 0, \qquad (24)$$
where $A_w^{k}$ is the DC coefficient of the k-th watermarked block, $C_{w0}$ is the mean intensity of the whole watermarked image, and $\mu$ is a parameter, set to 0.02 here. Then, the watermark bit can be detected from the transformed signal $Y_w^{k}$ according to the minimum distance as follows
$$m' = \arg\min_{m \in \{0,1\}} \left| Y_w^{k} - Q\!\left(Y_w^{k}, \Delta, m, d_m\right) \right|, \qquad (25)$$
where $m'$ is the extracted watermark bit of the block, $\Delta$ is the adaptive quantization step, and $d_m$ is the dither signal corresponding to the watermark bit m. Repeating the same operation for all non-overlapped blocks, the watermark information is extracted. The main steps of the watermark extracting scheme are described in Algorithm 2.
Algorithm 2 Watermark Extracting
Input: The received watermarked image I_w;
Output: Watermark message m';
1: Transform the color image from the RGB color space to the YCbCr color space by Equation (16). Select the channel Y as the main host image;
2: Divide the host image into 8 × 8 non-overlapped blocks;
3: for all blocks do
4:  Use the three AC coefficients to obtain the adaptive quantization step by Equations (26)–(29);
5:  Obtain the DC coefficient by the DCT and transform it by the logarithmic function in Equation (24);
6:  One part of the watermark message is extracted by Equation (25);
7: end for
8: return Watermark message m';
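Putting Equations (24) and (25) together, the per-block bit decision can be sketched as below; as in the embedding sketch, the dither pair $(d_0, d_0 + \Delta/2)$ is an assumption made for illustration.

```python
import numpy as np

def extract_bit(A_wk, C_w0, delta, d0=0.0, mu=0.02):
    """Detect one watermark bit from a watermarked DC coefficient by the
    minimum quantization distance (Eqs. 24-25)."""
    Yw = np.log(1 + mu * A_wk / C_w0) / np.log(1 + mu)     # Eq. (24)
    dists = []
    for dm in (d0, d0 + delta / 2.0):                      # candidate bits m = 0, 1
        q = delta * np.round((Yw + dm) / delta) - dm
        dists.append(abs(Yw - q))
    return int(np.argmin(dists))                           # Eq. (25)
```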

3.3. Adaptive Quantization Step

Human eyes are usually very sensitive to distortion in smooth areas or around edges, so the quantization step should be weighted for smooth and edge blocks. The Canny operator is usually used to classify image blocks, but in a watermarking system, the block classification obtained from the (possibly attacked) watermarked image can differ from that of the unattacked host image blocks. Based on the above considerations, an adaptive quantization step for different block types is introduced. Here, the image block types are measured by partial AC coefficients: three AC coefficients are selected for the edge strength, and the corresponding edge strength is used to calculate an edge density value to classify the image blocks accurately, as in [26]. Thus, the new edge strength of a block is defined as
$$S_{AC}^{k} = \mathrm{sum}\!\left(B^{k}(x,y)\right), \quad \text{for } x + y \in \{0, 2\} \text{ in the block}, \qquad (26)$$
where $B^{k}(x,y)$ represents the AC coefficients, which can be calculated by Equation (17). Then, the new edge density can be given as
$$u_e = \ln\!\left(1 + 0.47 \cdot \frac{S_{AC}^{k} - S_{AC\min}}{\left(S_{AC\max} - S_{AC\min}\right) + c}\right), \qquad (27)$$
where $c$ is set to $10^{-8}$ to avoid the case where the denominator equals zero. The normalized processing in Equation (27) can resist the volumetric attack, as the edge density remains unchanged under this attack. In this regard, the image block types are defined as
$$Type = \begin{cases} \text{Smooth}, & u_e \le 0.1 \\ \text{Edge}, & 0.1 < u_e \le 0.2 \\ \text{Texture}, & u_e > 0.2 \end{cases} \qquad (28)$$
Therefore, we can obtain the adaptive quantization step for each block type. The adjustment equation with a different masking weight for the k-th image block can be expressed as
$$\Delta_k = \Delta \cdot \begin{cases} 1, & \text{for Smooth block} \\ 1.15, & \text{for Edge block} \\ 1.25, & \text{for Texture block} \end{cases} \qquad (29)$$
where $\Delta$ is the fixed quantization step that controls the image quality, and the weighted $\Delta_k$ is the adaptive quantization step for the k-th image block, as used in Equation (19). The different weight factors not only express better perceptual quality but also make the quantization step robust to the volumetric attack.
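A compact sketch of the per-block step adaptation in Equations (26)–(29) follows; the global minimum and maximum edge strengths are assumed to have been computed over all blocks beforehand.

```python
import numpy as np

def adaptive_step(S_ac, S_min, S_max, delta, c=1e-8):
    """Adaptive quantization step of Eq. (29), derived from the block edge
    strength S_ac (Eq. 26) normalized to the edge density u_e (Eq. 27)."""
    u_e = np.log(1 + 0.47 * (S_ac - S_min) / ((S_max - S_min) + c))  # Eq. (27)
    if u_e <= 0.1:
        weight = 1.0      # smooth block
    elif u_e <= 0.2:
        weight = 1.15     # edge block
    else:
        weight = 1.25     # texture block
    return delta * weight                                            # Eq. (29)
```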

4. Experiments and Results Analysis

In this section, we present and discuss the experimental results. To evaluate the performance of the proposed watermarking scheme, we perform experiments with original code in MATLAB R2016b on a 64-bit Windows 10 operating system with 16 GB of memory and a 3.20 GHz Intel processor.
In this experiment, the 24-bit color RGB images with the size of 512 × 512 are selected as the host images from the CVG-UGR [38] image database, as shown in Figure 3. The original watermark is the binary image with the size 64 × 64 , as shown in Figure 4.
To evaluate the imperceptibility of the proposed method, the peak signal-to-noise ratio (PSNR) and the visual-saliency-based index (VSI) are utilized as performance metrics. The PSNR, defined below, is used to measure the similarity between the host image and the watermarked image
$$PSNR = 10 \log_{10}\!\left(\frac{255^{2}}{MSE}\right),$$
where MSE is the mean square error between the original color image and the watermarked image.
As an image quality assessment metric, the VSI shows good visual performance in measuring the quality difference between an original image and a distorted image. Thus, we use the VSI between the original image and the watermarked image
$$VSI = \frac{\sum_{x \in \Omega} S(x) \cdot V_M(x)}{\sum_{x \in \Omega} V_M(x)}, \quad V_M(x) = \max\left(V_1(x), V_2(x)\right),$$
where $V_1$ and $V_2$ represent the visual saliency (VS) maps extracted from the original image $I$ and the watermarked image $I_W$, and $S$ is the local similarity of $I$ and $I_W$, as described in [39].
To test the robustness of the watermarking method, the bit error rate (BER) is computed for comparison. A BER close to 0 indicates that the watermarking algorithm is robust to the attack. The BER is defined as
$$BER = \frac{\sum \left(m \oplus m'\right)}{Area},$$
where $m$ is the original watermark, $m'$ is the extracted watermark, $\oplus$ denotes the bit-wise exclusive OR, and $Area$ is the size of the watermark image.
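For completeness, the two scalar metrics can be computed with a few lines of NumPy (a sketch of the standard definitions used above):

```python
import numpy as np

def psnr(original, watermarked):
    """Peak signal-to-noise ratio between the original and watermarked images."""
    mse = np.mean((original.astype(np.float64) - watermarked.astype(np.float64)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

def ber(watermark, extracted):
    """Bit error rate between the original and extracted binary watermarks."""
    w = np.asarray(watermark, dtype=bool)
    e = np.asarray(extracted, dtype=bool)
    return np.count_nonzero(w ^ e) / w.size
```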

4.1. Comparison with Different JND Models within Watermarking Framework

To evaluate the robustness of our proposed robust JND model in the watermarking framework, the existing JND models—Bae's model [21] and Zhang's model [40]—were used to guide the spatial perceptual embedding for comparison. These two JND models underwent the same AC-coefficient zeroing and cross-domain operations. The testing images were standard color images with a dimension of 512 × 512, as shown in Figure 3. The watermark information was embedded into the Y channel. A binary watermark with the size 64 × 64 was embedded into the cover images, as shown in Figure 4. With the watermarked image quality fixed to PSNR = 42 dB, we tested the robustness of the different JND models within the watermarking framework.
To compare the robustness of the proposed JND model and the other DCT domain JND models within the watermarking framework, different kinds of attacks were applied: Gaussian Noise (GN) with zero mean and different variances, JPEG compression with a quality factor varying from 40 to 50, Salt and Pepper noise (SPN) with different factors, Rotation (RO) by 60°, and Gaussian filtering (GF) with a 3 × 3 window. Table 1 shows the average BER values of watermarked images attacked by Gaussian Noise, JPEG compression, Salt and Pepper noise, Rotation, and Gaussian filtering. When the watermarked images were attacked by Gaussian Noise and Salt and Pepper noise, the BER of the proposed JND model was significantly lower than that of the other two JND models, which indicates that the proposed JND model performs much better under noise attacks. As shown in Table 1, when the JPEG compression quality is 40, the BER values of the proposed JND model are about 0.87% on average. When the watermarked images are contaminated by Rotation of 60°, the average BER values of the three JND models are presented in Table 1; it can be clearly seen that the proposed model always has the lowest BER. The watermarked images were also attacked by a Gaussian filter with a 3 × 3 window; the BER obtained by our proposed method did not exceed 3.5%. Overall, the watermarking framework based on the proposed JND model has excellent robustness performance.

4.2. Imperceptibility Test for Watermarking Scheme

PSNR and VSI, which are consistent with the HVS, are used as objective metrics on the watermarked images to test the invisibility of the proposed method. Because the visual quality of a watermarked image can be changed by the quantization steps, the PSNR value between the watermarked image and the original image was fixed at 42 dB. The VSI and BER values obtained by our proposed method are shown in Figure 5.
As is well known, the closer the VSI value is to 1, the better the image quality. When VSI = 0.9920, the difference between the original image and the watermarked image cannot be seen by the HVS. In Figure 5, the VSI values from our method are all close to 1, indicating better visual quality. Moreover, it can be seen from the BER values that the watermark can be entirely extracted by our method when no attack is applied.

4.3. Robust Test for Watermarking Scheme

4.3.1. Comparison with Individual Quantization-Based Watermarking Methods

As seen in Figure 5, the proposed image watermarking method has good visual perceptual quality with respect to the HVS. Moreover, robustness is an important metric for a watermarking scheme. In this section, in order to show the robustness of the proposed method, some basic attacks are selected, and we compare it with other quantization-based methods [26,32,41,42]. To ensure the fairness of the experiment, the Y channel of the color image was used as the embedding position, and we fixed the watermarked image quality at PSNR = 42 dB by adjusting the quantization steps. To better present the robustness of the proposed method, we show the average BER values of the testing watermarked images under some common attacks.
Adding noise is the most common image processing operation used to verify robustness. In this paper, we chose the Gaussian noise attack and the Salt and Pepper noise attack to test the robustness of the proposed method. Gaussian white noise with a mean of 0 and different standard deviations was used to attack the watermarked image. Table 2 gives the average BER values for Gaussian noise with different factors; it can be clearly seen that our method has lower BER values under Gaussian noise. Table 2 also shows the average BER values after adding Salt and Pepper noise of different quantities. Although the BER of our method is higher than that of [26] when the factor is 0.0005, it is only about 0.05% higher. Comparing the experimental results for the other noise factors, the average BER of our method is lower than those of [26,32,41,42]. As can be seen from Table 2, the proposed scheme has good robustness against noise attacks and exhibits good performance.
Since JPEG is the most popular image format transmitted over the Internet, in this experiment, the watermarked image was compressed with JPEG compression factors from 30 to 100 in steps of 10. Table 3 lists partial experimental results for compression factors from 40 to 60. When the compression factor is 40, the bit error rate obtained by our method is about 2.5% lower than that obtained by [26]. Thus, our method performs better than the others against JPEG compression attacks.
Filtering is one of the classical attacks on a watermarked image, since the watermark can be removed from the watermarked image by a filter. We selected Gaussian filtering (GF) and Median filtering (MF) with a window size of 3 × 3 as the filtering attacks. Table 4 shows the average experimental results under the GF and MF attacks. Overall, the proposed method has the best robustness performance under filtering attacks.
For the rotation attack, the watermarked image is first rotated clockwise by a certain number of degrees and then counter-clockwise by the same number of degrees. In this experiment, the watermarked image was rotated 15°, 30°, and 60° clockwise and then restored to its original orientation counter-clockwise. Table 5 shows the average BER values at different rotation angles; the average BER values of our method do not exceed 0.6%, which proves that our method performs better. Generally, a watermarked image may be contaminated by multiple attacks, so we further compared some combined attacks, as shown in Table 6. The testing images were first compressed with JPEG quality = 50 and then attacked by Gaussian noise. It can be noted that our method shows better robustness performance.

4.3.2. Comparison with Spatial-Uniform Embedding-Based Watermarking Methods

In order to compare the proposed scheme with the existing spatial uniform embedding for transform domain quantization watermarking [18,19], a binary watermark with the size 32 × 32 is used. In [18], a simple preprocessing of the 32 × 32 binary watermark is proposed to obtain the actual 64 × 64 bits, which are embedded into 64 × 64 blocks. Another selective mechanism is used in [19], where 32 × 32 blocks are adaptively selected from the original 128 × 128 blocks. In this section, we compare the proposed scheme with [18,19]. To ensure the fairness of the experiment, the proposed method is compared with the existing image watermarking methods [18,19] at the same image quality, and the Y channel of the color image is utilized to embed the watermark. As in [18], an optimum sub-watermark can be obtained here by combining four sub-watermarks. The BER was computed to make an objective performance comparison.
To test the robustness of the proposed method, the watermarked images are attacked by some common image processing operations (such as adding noise, JPEG compression, filtering attacks, and a combined attack) and geometrical distortions (such as scaling and rotation). We tested the watermarked images under different attacks with different factors. Table 7 lists the partial average BER values for various image attacks with the image quality fixed at PSNR = 42 dB. The BER for Gaussian noise (GN) with variance 0.0025 and the BER for Salt and Pepper noise (SPN) with noise quantity 0.0025 are given in Table 7; it can be clearly seen that our method produces a lower BER than the others for noise attacks. For JPEG compression, we show the results with JPEG factor = 30 in Table 7. Under the filtering attack, our method and [18] both have good robustness performance. The scaling operation, as an image geometric attack, is often used in image processing; the watermarked image is scaled from 25% to 200% with an increment of 25%, and we give the average BER when the watermarked image is reduced to 25% and then restored to the original size. For the volumetric attack (VA), we only give the BER for a factor of 0.5; it can be clearly seen that [18,19] cannot resist this image processing operation. We also give the experimental result of rotating (RO) the watermarked image by 25°: the watermarked image is first rotated clockwise by 25° and then counter-clockwise by the same degree. The BER obtained by our method is about 6% lower than that obtained by [19]. Moreover, the watermarked images were also attacked by the combined attack; the BER of our method does not exceed 0.8% under JPEG = 50 plus GN with 0.002 (J+GN). Thus, the robustness performance of our method is obviously better than that of [18,19].

5. Conclusions

This paper proposes a new spatial–perceptual embedding with the guidance of a robust JND model for color image watermarking. The DC coefficient of each block is used as the cover coefficient for watermark embedding, and the DC coefficients are quantized by DM with an adaptive quantization step obtained from partial AC coefficients. More importantly, the pixel modification amount is shaped by the inverse-DCT-based JND thresholds, so that the watermarked image is more consistent with the perceptual characteristics of the HVS. Experimental results have demonstrated that the proposed scheme is robust against common image processing attacks, such as noise addition, compression, and the volumetric attack. However, the JND model used here only considers the Y channel information, without considering color information; thus, an improved JND model with color information can be proposed in future work. In the future, we will also consider realizing the pixel updating in a watermarking framework together with other image enhancement tasks.

Author Contributions

Conceptualization, K.Z.; Funding acquisition, W.W.; Methodology, K.Z.; Writing—original draft, K.Z.; Supervision, J.L.; Investigation, Y.Z. (Yantong Zhan); Writing—review and editing, W.W. and Y.Z. (Yunming Zhang). All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of China (61601268, 61803237, 61901246, U1736122), the China Postdoctoral Science Foundation (2019TQ0190, 2019M662432), the Natural Science Foundation for Distinguished Young Scholars of Shandong Province (JQ201718), and the Shandong Provincial Key Research and Development Plan (2017CXGC1504).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Cox, I.; Miller, M.; Bloom, J.; Fridrich, J.; Kalker, T. Digital Watermarking and Steganography; Morgan Kaufmann: Burlington, MA, USA, 2007.
2. Wu, X.; Yang, C.N. Invertible secret image sharing with steganography and authentication for AMBTC compressed images. Signal Process. Image Commun. 2019, 78, 437–447.
3. Liu, Y.X.; Yang, C.N.; Sun, Q.D.; Wu, S.Y.; Lin, S.S.; Chou, Y.S. Enhanced embedding capacity for the SMSD-based data-hiding method. Signal Process. Image Commun. 2019, 78, 216–222.
4. Kim, C.; Yang, C.N. Watermark with DSA signature using predictive coding. Multimed. Tools Appl. 2015, 74, 5189–5203.
5. Liu, H.; Xu, B.; Lu, D.; Zhang, G. A path planning approach for crowd evacuation in buildings based on improved artificial bee colony algorithm. Appl. Soft Comput. 2018, 68, 360–376.
6. De Vleeschouwer, C.; Delaigle, J.F.; Macq, B. Invisibility and application functionalities in perceptual watermarking an overview. Proc. IEEE 2002, 90, 64–77.
7. Zong, J.; Meng, L.; Zhang, H.; Wan, W. JND-based Multiple Description Image Coding. KSII Trans. Internet Inf. Syst. 2017, 11, 3935–3949.
8. Fridrich, J. Steganography in Digital Media: Principles, Algorithms, and Applications; Cambridge University Press: Cambridge, UK, 2009.
9. Wang, J.; Wan, W. A novel attention-guided JND Model for improving robust image watermarking. Multimed. Tools Appl. 2020, 79, 24057–24073.
10. Roy, S.; Pal, A.K. A blind DCT based color watermarking algorithm for embedding multiple watermarks. AEU-Int. J. Electron. Commun. 2017, 72, 149–161.
11. Das, C.; Panigrahi, S.; Sharma, V.K.; Mahapatra, K. A novel blind robust image watermarking in DCT domain using inter-block coefficient correlation. AEU-Int. J. Electron. Commun. 2014, 68, 244–253.
12. Ansari, R.; Devanalamath, M.M.; Manikantan, K.; Ramachandran, S. Robust digital image watermarking algorithm in DWT-DFT-SVD domain for color images. In Proceedings of the 2012 IEEE International Conference on Communication, Information & Computing Technology (ICCICT), Mumbai, India, 19–20 October 2012; pp. 1–6.
13. Cedillo-Hernandez, M.; Garcia-Ugalde, F.; Nakano-Miyatake, M.; Perez-Meana, H. Robust watermarking method in DFT domain for effective management of medical imaging. Signal Image Video Process. 2015, 9, 1163–1178.
14. Pradhan, C.; Rath, S.; Bisoi, A.K. Non blind digital watermarking technique using DWT and cross chaos. Procedia Technol. 2012, 6, 897–904.
15. Araghi, T.K.; Abd Manaf, A.; Araghi, S.K. A secure blind discrete wavelet transform based watermarking scheme using two-level singular value decomposition. Expert Syst. Appl. 2018, 112, 208–228.
16. Lin, S.D.; Shie, S.C.; Guo, J.Y. Improving the robustness of DCT-based image watermarking against JPEG compression. Comput. Stand. Interfaces 2010, 32, 54–60.
17. Huang, J.; Shi, Y.Q.; Shi, Y. Embedding image watermarks in DC components. IEEE Trans. Circuits Syst. Video Technol. 2000, 10, 974–979.
18. Su, Q.; Chen, B. Robust color image watermarking technique in the spatial domain. Soft Comput. 2018, 22, 91–106.
19. Su, Q.; Yuan, Z.; Liu, D. An approximate schur decomposition-based spatial domain color image watermarking method. IEEE Access 2018, 7, 4358–4370.
20. Su, Q.; Liu, D.; Yuan, Z.; Wang, G.; Zhang, X.; Chen, B.; Yao, T. New rapid and robust color image watermarking technique in spatial domain. IEEE Access 2019, 7, 30398–30409.
21. Bae, S.H.; Kim, M. A new DCT-based JND model of monochrome images for contrast masking effects with texture complexity and frequency. In Proceedings of the IEEE International Conference on Image Processing, Melbourne, Australia, 15–18 September 2013; Volume 1, pp. 431–434.
22. Wu, J.; Li, L.; Dong, W.; Shi, G.; Lin, W.; Kuo, C.C.J. Enhanced just noticeable difference model for images with pattern complexity. IEEE Trans. Image Process. 2017, 26, 2682–2693.
23. Liu, H.; Liu, B.; Zhang, H.; Li, L.; Qin, X.; Zhang, G. Crowd evacuation simulation approach based on navigation knowledge and two-layer control mechanism. Inf. Sci. 2018, 436, 247–267.
24. Wan, W.; Wu, J.; Xie, X.; Shi, G. A novel just noticeable difference model via orientation regularity in DCT domain. IEEE Access 2017, 5, 22953–22964.
25. Wan, W.; Wang, J.; Li, J.; Meng, L.; Sun, J.; Zhang, H.; Liu, J. Pattern complexity-based JND estimation for quantization watermarking. Pattern Recognit. Lett. 2020, 130, 157–164.
26. Wang, C.X.; Xu, M.; Wan, W.; Wang, J.; Meng, L.; Li, J.; Sun, J. Robust Image Watermarking via Perceptual Structural Regularity-based JND Model. TIIS 2019, 13, 1080–1099.
27. Wan, W.; Wang, J.; Xu, M.; Li, J.; Sun, J.; Zhang, H. Robust image watermarking based on two-layer visual saliency-induced JND profile. IEEE Access 2019, 7, 39826–39841.
28. Tsui, T.K.; Zhang, X.P.; Androutsos, D. Color image watermarking using multidimensional Fourier transforms. IEEE Trans. Inf. Forensics Secur. 2008, 3, 16–28.
29. Hu, H.T.; Chang, J.R. Dual image watermarking by exploiting the properties of selected DCT coefficients with JND modeling. Multimed. Tools Appl. 2018, 77, 26965–26990.
30. Wan, W.; Wang, J.; Li, J.; Sun, J.; Zhang, H.; Liu, J. Hybrid JND model-guided watermarking method for screen content images. Multimed. Tools Appl. 2020, 79, 4907–4930.
31. Wang, J.; Wan, W.B.; Li, X.X.; De Sun, J.; Zhang, H.X. Color image watermarking based on orientation diversity and color complexity. Expert Syst. Appl. 2020, 140, 112868.
32. Ma, L.; Yu, D.; Wei, G.; Tian, J.; Lu, H. Adaptive spread-transform dither modulation using a new perceptual model for color image watermarking. IEICE Trans. Inf. Syst. 2010, 93, 843–857.
33. Wu, J.; Lin, W.; Shi, G.; Wang, X.; Li, F. Pattern masking estimation in image with structural uncertainty. IEEE Trans. Image Process. 2013, 22, 4892–4904.
34. Ahumada, A.J., Jr.; Peterson, H.A. Luminance-model-based DCT quantization for color image compression. In Proceedings of the Human Vision, Visual Processing, and Digital Display III, San Jose, CA, USA, 27 August 1992; Volume 1666, pp. 365–374.
35. Mannos, J.; Sakrison, D. The effects of a visual fidelity criterion of the encoding of images. IEEE Trans. Inf. Theory 1974, 20, 525–536.
36. Peterson, H.A.; Ahumada, A.J., Jr.; Watson, A.B. Improved detection model for DCT coefficient quantization. In Proceedings of the Human Vision, Visual Processing, and Digital Display IV, San Jose, CA, USA, 9 September 1993; Volume 1913, pp. 191–201.
37. Wei, Z.; Ngan, K.N. Spatio-temporal just noticeable distortion profile for grey scale image/video in DCT domain. IEEE Trans. Circuits Syst. Video Technol. 2009, 19, 337–346.
38. CVG-UGR. The CVG-UGR Image Database. 2012. Available online: http://decsai.ugr.es/cvg/dbimagenes/c512.php (accessed on 11 March 2018).
39. Zhang, L.; Shen, Y.; Li, H. VSI: A visual saliency-induced index for perceptual image quality assessment. IEEE Trans. Image Process. 2014, 23, 4270–4281.
40. Zhang, X.; Lin, W.; Xue, P. Improved estimation for just-noticeable visual distortion. Signal Process. 2005, 85, 795–808.
41. Li, Q.; Cox, I.J. Improved spread transform dither modulation using a perceptual model: Robustness to amplitude scaling and JPEG compression. In Proceedings of the 2007 IEEE International Conference on Acoustics, Speech and Signal Processing-ICASSP'07, Honolulu, HI, USA, 15–20 April 2007; Volume 2, pp. 185–188.
42. Li, X.; Liu, J.; Sun, J.; Yang, X.; Liu, W. Step-projection-based spread transform dither modulation. IET Inf. Secur. 2011, 5, 170–180.
Figure 1. The spatial Contrast Sensitivity Function (CSF) curve with the spatial frequency (cycles/degree) [35].
Figure 2. Flow diagram of the proposed watermarking algorithm.
Figure 3. The color host images: (a) Lena, (b) Baboon, (c) Avion, (d) Bardowl, (e) House, (f) Barnfall.
Figure 4. The binary watermark.
Figure 5. The watermarked images' VSI and BER values without attacks. (a,c,e,g) are the watermarked images with their VSI values, and (b,d,f,h) are the extracted watermark information without attack.
Table 1. BER comparison with PSNR = 42 dB under different image attacks.

Attack       | Lena                      | Baboon                    | Avion
             | [21]    [40]    Proposed  | [21]    [40]    Proposed  | [21]    [40]    Proposed
GN(0.0010)   | 0.1235  0.1812  0.0056    | 0.1367  0.1921  0.0300    | 0.1250  0.1175  0.0229
GN(0.0015)   | 0.2048  0.2625  0.0183    | 0.2219  0.2788  0.0767    | 0.2036  0.2654  0.0527
SPN(0.0010)  | 0.0549  0.0566  0.0056    | 0.0557  0.0549  0.0166    | 0.0527  0.0510  0.0159
SPN(0.0015)  | 0.0696  0.0803  0.0063    | 0.0710  0.0764  0.0247    | 0.0676  0.0696  0.0232
JPEG(40)     | 0.1787  0.2588  0.0056    | 0.1641  0.2549  0.0120    | 0.1589  0.2500  0.0085
JPEG(50)     | 0.0767  0.1255  0.0029    | 0.0837  0.1138  0.0090    | 0.0708  0.1230  0.0037
RO(60)       | 0.0154  0.0171  0.0015    | 0.0313  0.0288  0.0073    | 0.0242  0.0237  0.0039
GF(3×3)      | 0.0576  0.0591  0.0120    | 0.0735  0.0828  0.0300    | 0.0615  0.0623  0.0320
Table 2. The average BER comparison with PSNR = 42 dB under noise attacks.

Attack               Parameters   Ref. [26]   Ref. [41]   Ref. [42]   Ref. [32]   Proposed
Gaussian noise       (0.0005)     0.0105      0.1045      0.0181      0.0088      0.0075
                     (0.0010)     0.0349      0.1561      0.0592      0.1444      0.0237
                     (0.0015)     0.0693      0.1856      0.1038      0.1890      0.0562
Salt & Pepper noise  (0.0005)     0.0068      0.0567      0.0074      0.0201      0.0073
                     (0.0010)     0.0162      0.0673      0.0179      0.0382      0.0137
                     (0.0015)     0.0235      0.0762      0.0262      0.0409      0.0228
Table 3. The average BER comparison with PSNR = 42 dB under JPEG compression attacks.

Attack   Parameters   Ref. [26]   Ref. [41]   Ref. [42]   Ref. [32]   Proposed
JPEG     (40)         0.0318      0.1699      0.0544      0.1677      0.0063
         (50)         0.0089      0.1149      0.0203      0.1182      0.0052
         (60)         0.0031      0.0869      0.0088      0.0833      0.0034
Table 4. The average BER comparison with PSNR = 42 dB under filtering attacks.

Attack            Parameters   Ref. [26]   Ref. [41]   Ref. [42]   Ref. [32]   Proposed
Gaussian filter   (3×3)        0.0215      0.0898      0.0576      0.0323      0.0200
Median filter     (3×3)        0.1628      0.1400      0.0817      0.1068      0.0916
Table 5. The average BER comparison with PSNR = 42 dB under Rotation.

Attack     Parameters   Ref. [26]   Ref. [41]   Ref. [42]   Ref. [32]   Proposed
Rotation   (15°)        0.0078      0.0461      0.0075      0.1091      0.0058
           (30°)        0.0052      0.0412      0.0044      0.0204      0.0047
           (60°)        0.0056      0.0426      0.0048      0.0202      0.0041
Table 6. The average BER comparison with PSNR = 42 dB under combined attacks.

Attack                  Parameters    Ref. [26]   Ref. [41]   Ref. [42]   Ref. [32]   Proposed
JPEG + Gaussian noise   (50+0.0005)   0.0391      0.1794      0.0663      0.1628      0.0361
                        (50+0.0010)   0.0770      0.2275      0.1123      0.1990      0.0724
                        (50+0.0015)   0.1152      0.2717      0.1539      0.2259      0.1073
Table 7. Average BER values with PSNR = 42 dB between Ref. [18], Ref. [19] and our proposed scheme under different attacks.

            GN       SPN      JPEG     Scaling   MF       GF       VA       RO       J + GN
            0.0025   0.0025   30       0.25      (3,3)    (3,3)    0.5      (25°)    (50 + 0.002)
Ref. [18]   0.0439   0.0439   0.0059   0.0264    0.0078   0.0000   0.3905   0.0684   0.0479
Ref. [19]   0.0166   0.1387   0.0020   0.2940    0.0264   0.0737   0.3936   0.1680   0.0225
Proposed    0.0128   0.0313   0.0068   0.0254    0.0071   0.0000   0.0000   0.1033   0.0074
