Article

Complex Wavelet-Based Image Watermarking with the Human Visual Saliency Model

1 School of Mathematics and Computer Science, Shangrao Normal University, Shangrao 334001, China
2 School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
3 Department of Network Engineering, Chengdu University of Information Technology, Chengdu 610225, China
* Author to whom correspondence should be addressed.
Electronics 2019, 8(12), 1462; https://doi.org/10.3390/electronics8121462
Submission received: 28 October 2019 / Revised: 24 November 2019 / Accepted: 28 November 2019 / Published: 2 December 2019
(This article belongs to the Section Computer Science & Engineering)

Abstract
Imperceptibility and robustness are two conflicting, but fundamental requirements of any digital image watermarking method. To improve the invisibility and robustness of multiplicative image watermarking, a complex wavelet based watermarking algorithm is proposed by using human visual texture masking and the visual saliency model. First, image blocks with high entropy are selected as the watermark embedding space to achieve imperceptibility. Then, an adaptive multiplicative watermark embedding strength factor is designed by utilizing texture masking and visual saliency to enhance robustness. Furthermore, the complex wavelet coefficients of the low frequency sub-band are modeled by a Gaussian distribution, and a watermark decoding method is proposed based on the maximum likelihood criterion. Finally, the effectiveness of the watermarking is validated through experiments using the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). Simulation results demonstrate the invisibility of the proposed method and its strong robustness against various attacks, including additive noise, image filtering, JPEG compression, amplitude scaling, rotation, and combinational attacks.

1. Introduction

With the growing popularity of big data and multimedia applications, a large amount of digital multimedia data is generated, transmitted, and distributed over the Internet every day. The security of these digital data is a pressing problem. An efficient solution is watermarking technology, which is mainly used for copyright protection, authentication, fingerprinting, etc. [1,2,3]. In general, the main idea of digital watermarking is to embed useful information in a host signal without affecting the perceptual quality of the host signal. For a watermarking method, the three indispensable, but conflicting requirements are robustness, invisibility, and capacity [1]. These requirements constrain one another and must be balanced jointly. For instance, when the imperceptibility of the watermarking is improved, its robustness will be reduced. Therefore, an ideal digital watermarking method should achieve a good balance among these three requirements.
To achieve the above goal, extensive watermarking methods have been proposed in recent years. These methods can be classified in different ways, e.g., spatial domain methods [4] and frequency domain methods [5,6,7,8,9], based on watermark embedding space. Depending on the manner of embedding, the method can be further categorized into additive [10], multiplicative [11,12], and quantization based methods [13,14]. In addition, the watermarking methods can be categorized as blind [11] and non-blind [15] ones based on watermark decoding.
In terms of embedding region, most current watermarking methods focus on the frequency domain [6,7,16], because frequency domain watermarking algorithms are relatively more robust, invisible, and stable, especially the wavelet based watermarking [16]. The reason is that wavelet based watermarking has two obvious advantages. One of the advantages is that the wavelet transform fits well with the human visual system, which can be exploited in the design of an invisible watermarking [17,18,19,20,21]. The other advantage is that the wavelet transform has good multi-scale analytic characteristics, which can be used to develop a robust watermarking method [21,22,23,24]. Subsequently, many watermarking methods that use wavelets have been proposed in the past two decades. In terms of embedding method, the multiplicative watermarking methods are reportedly more robust, and they provide higher imperceptibility than the additive ones [25,26]. The multiplicative watermarking approaches are dependent on image content [25], and more importantly, they have strong robustness. Therefore, multiplicative watermarking methods are preferred for copyright protection [26]. For this reason, the multiplicative embedding approach is adopted in our study.
The wavelet transform has the advantages of the localization of the time frequency and multi-scale analysis, and it is suitable for describing the characteristics of 1D signals. However, when the signal dimension increases, the wavelet transform cannot sufficiently describe the singularity of the signal [27]. Therefore, to capture the direction information of 2D signals, multi-scale geometric analytic techniques for obtaining the intrinsic geometric structure information of images, such as contours and smooth curves, have emerged in recent years. These technologies include the ridgelets [15,28], wave atoms [29], contourlets [27,30], framelets [31], and dual tree-complex wavelet transform (DT-CWT) [32,33].
Current watermarking algorithms, such as the methods in [16,28,30], have achieved satisfying results; however, some problems remain to be solved. Akhaee et al. [16] proposed a robust scaling based watermarking method with a multi-objective optimization approach. Although the balance between invisibility and robustness has been elaborately addressed by this method [16], the cost of multi-objective optimization is high, which hinders its extension to real applications. Despite the success of multi-scale geometric analysis technology in various image watermarking methods [9,30], the time cost of these methods is also high. As a result, designing a simple and effective digital watermarking method that balances robustness and imperceptibility is necessary.
In addressing the above issues, a quantization watermarking approach with the L1 norm function was proposed in our previous work [33]. That work achieved good imperceptibility, but the robustness of the watermark against some attacks remains insufficient. In the present study, a watermarking method that can be easily implemented is developed, and its robustness is boosted based on a visual perception model. In this manner, the fidelity of the image can be improved. The developed method can achieve a good balance between invisibility and robustness, which is beneficial to the practical use of watermarking technology.
DT-CWT is an overcomplete transform, which creates redundant complex wavelet coefficients that can be utilized to embed watermarks. In general, approximate shift invariance is the main feature of DT-CWT. We can use this property to produce a watermark that can be decoded even after the host signal has undergone geometric attacks, such as amplitude scaling and rotation. DT-CWT also has good directional selectivity. Therefore, we propose an image watermarking method using DT-CWT in this paper. First, we segment the original image and choose the image blocks with high entropy. Second, we embed the watermark data into the low frequency complex wavelet coefficients by a visual perceptual model, and we extract the watermark data by using the maximum likelihood estimator (MLE). Finally, we validate the effectiveness of the watermarking algorithm through experimental simulation.
The contributions of the proposed method are twofold. On the one hand, an adaptive watermark embedding method in terms of texture masking and the visual saliency model is developed, which embeds each watermark bit into a set of dual tree complex wavelet coefficients. Using this strategy, the robustness and imperceptibility of the watermark can be well balanced. On the other hand, the low frequency of complex coefficients with high entropy is selected as the watermark embedding space, which can improve the robustness of watermarking against some geometric attacks, such as rotation, scaling, and combinational attack.
The rest of the paper is structured as follows. Section 2 provides the basic concept of DT-CWT. Section 3 introduces the proposed watermarking method, including watermark embedding and watermark decoding. We test and discuss the performance of the proposed watermarking through experiments, and the corresponding findings are discussed in Section 4. The conclusion is presented in Section 5.

2. Dual Tree-Complex Wavelet Transform

DT-CWT, which was initially proposed by Selesnick, Baraniuk, and Kingsbury [32], inherits the characteristics of wavelets; it approximates shift invariance and has good directional selectivity [32]. At each decomposition scale, DT-CWT produces six directional sub-bands oriented at ±15°, ±45°, and ±75°. By contrast, the ordinary wavelet transform has only three directional sub-bands per scale, oriented at 0°, 45°, and 90°. A comparison of the impulse responses of these two wavelet transforms is shown in Figure 1. As mentioned above, DT-CWT can effectively approximate shift invariance. This invariance can be exploited to design watermarking that counters geometric attacks. For instance, if an image block is re-sampled after scaling, DT-CWT generates a set of coefficients that are roughly the same as those of the original patch. This property enables the watermarking to counter scaling attacks. Transforms such as the discrete wavelet transform (DWT), the discrete cosine transform (DCT), and the Fourier transform do not have this property.
For a 1D signal, the two filter trees of DT-CWT produce twice as many wavelet coefficients as the ordinary wavelet transform. Furthermore, a 1D signal can be decomposed by the 1D DT-CWT with shifted and dilated mother wavelet and scaling functions [34], i.e.,
f(x) = \sum_{l \in \mathbb{Z}} s_{j_0,l} \, \varphi_{j_0,l}(x) + \sum_{j \geq j_0} \sum_{l \in \mathbb{Z}} c_{j,l} \, \psi_{j,l}(x),
where \mathbb{Z} denotes the set of integers; j and l refer to the indices of dilations and shifts, respectively; s_{j_0,l} denotes the scaling coefficient; and c_{j,l} is the complex wavelet transform coefficient, with \varphi_{j_0,l}(x) = \varphi_{j_0,l}^{r}(x) + \sqrt{-1} \, \varphi_{j_0,l}^{i}(x) and \psi_{j,l}(x) = \psi_{j,l}^{r}(x) + \sqrt{-1} \, \psi_{j,l}^{i}(x), where the superscripts r and i represent the real and imaginary parts, respectively.
Figure 2 shows the calculation process of the real part and the imaginary part of DT-CWT. For tree a, filters h 0 and h 1 are used to compute the real part. For tree b, filters g 0 and g 1 are utilized to calculate the imaginary part. As shown in Figure 2, the output of the two trees can be interpreted as the real part and the imaginary part of the complex wavelet coefficients. For a 2D signal, a 2D image f ( x , y ) can be decomposed by 2D DT-CWT [34], i.e.,
f(x,y) = \sum_{l \in \mathbb{Z}^2} s_{j_0,l} \, \varphi_{j_0,l}(x,y) + \sum_{\theta \in \Theta} \sum_{j \geq j_0} \sum_{l \in \mathbb{Z}^2} c_{j,l}^{\theta} \, \psi_{j,l}^{\theta}(x,y),
where \theta \in \Theta = \{\pm 15°, \pm 45°, \pm 75°\} denotes the directionality of DT-CWT. At each decomposition scale, the DT-CWT decomposition of f(x,y) results in six complex valued high pass sub-bands, each corresponding to one unique direction \theta.

3. Watermark Embedding and Detection

According to the human visual perception model, human eyes are generally less sensitive to distortions in high entropy image blocks than in smooth ones, because relatively strong edges usually appear in high entropy blocks [35]. Inspired by [35], we propose an image watermarking method using high entropy blocks in this work. The block diagram of the proposed method is illustrated in Figure 3, which consists of watermark encoding and watermark decoding. The main advantage of the proposed method is its simple implementation; moreover, the tradeoff between invisibility and robustness can be resolved by a visual perceptual model. DT-CWT is also adopted in this work to embed the watermark information, which improves the robustness of the watermarking against geometric attacks.

3.1. Watermark Embedding

As shown in Figure 3a, the procedure of watermark embedding involves the following steps:
Step 1: The original image is segmented into blocks, and the first blocks in descending order of estimated entropy (i.e., the highest entropy blocks) are selected for watermarking purposes.
Step 2: DT-CWT is applied to each selected image block, and a single bit of “0” or “1” is embedded in each block by manipulating the complex wavelet coefficients of the low frequency sub-band as follows:
\tilde{y} = x \times (1 + \alpha), \quad \text{for embedding “1”},
\tilde{y} = x \times (1 - \alpha), \quad \text{for embedding “0”},
where x denotes the host coefficients of the low frequency sub-band; \tilde{y} denotes the modified coefficients; and \alpha is the watermark strength factor, whose value is determined by texture masking and visual saliency in Section 3.2.
Step 3: Repeat Step 2 for each image block.
Step 4: The inverse DT-CWT is applied to the watermarked blocks, and the watermarked blocks are combined with the non-watermarked blocks to obtain the whole watermarked image.
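To make the procedure concrete, Steps 1–4 can be sketched in Python. This is an illustrative sketch, not the authors' implementation: for brevity, the pixels of each selected block stand in for its low frequency DT-CWT coefficients (a full implementation would transform each block, scale its low frequency sub-band, and invert the transform), the strength factor is fixed, and all function and variable names are our own.

```python
import numpy as np

def block_entropy(block, bins=256):
    """Shannon entropy (in bits) of an 8 bit image block."""
    hist, _ = np.histogram(block, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def embed_watermark(image, bits, block=16, alpha=0.0153):
    """Sketch of Steps 1-4: select the highest-entropy blocks and
    multiplicatively embed one bit per block (Eqs. (3)-(4))."""
    img = image.astype(np.float64).copy()
    h, w = img.shape
    coords = [(r, c) for r in range(0, h, block) for c in range(0, w, block)]
    # Step 1: sort blocks by entropy, highest first, one block per bit.
    coords.sort(key=lambda rc: block_entropy(
        img[rc[0]:rc[0] + block, rc[1]:rc[1] + block]), reverse=True)
    for (r, c), bit in zip(coords, bits):
        # Steps 2-3: y = x(1 + alpha) embeds "1", y = x(1 - alpha) embeds "0".
        img[r:r + block, c:c + block] *= (1 + alpha) if bit else (1 - alpha)
    # Step 4 (recombination) is implicit: unselected blocks are untouched.
    return img
```

Only the blocks holding watermark bits are modified; the rest of the image passes through unchanged, which is what keeps the distortion localized to busy regions.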

3.2. Visual Saliency Based Watermark Strength Factor

The watermark strength factor \alpha affects imperceptibility. To achieve the transparency of the watermark, two important concepts, texture masking and visual saliency, are used to design the watermark strength factor. The just noticeable difference (JND) threshold is often high in the texture regions of an image [36]. Therefore, a high watermark strength factor can be selected to embed more information in texture regions. In addition, the work in [37] studied a spread transform dither modulation (STDM) watermarking algorithm based on the visual saliency model and achieved good results. Furthermore, Wang et al. studied a JND estimation algorithm based on visual saliency in the wavelet domain [38]. The work in [39] utilized a JND scheme in designing a watermarking method. Besides this, in [40], an adaptive quantization watermarking algorithm was proposed. The term "adaptive" in [40] mainly describes a process, behavior, and/or system that is able to interact with its environment, whereas in our work, "adaptive" describes the embedding strength of the watermark. As a result, the concept of visual saliency [38,41,42] is used to develop an adaptive watermark strength factor in this work. The human eye is inclined to focus on prominent areas, and distortions are more easily hidden in areas far from the salient parts of an image; in such areas, the watermark embedding strength can therefore be increased accordingly. The watermark strength factor can be calculated as follows.
First, on the basis of the texture masking characteristic, the high frequency energy of the i-th image block is calculated by Equation (5), i.e., as the average of the energies of the six high frequency sub-bands:
E_{HF,i} = \frac{1}{6} \left( E_{H,1} + E_{H,2} + \cdots + E_{H,6} \right),
where E_{H,1}, E_{H,2}, \ldots, E_{H,6} denote the energies of the six directional sub-bands oriented at 15°, 45°, 75°, −15°, −45°, and −75°. Subsequently, the average high frequency energy over the N selected image blocks is computed as follows:
E_{HF} = \frac{1}{N} \sum_{i=1}^{N} E_{HF,i},
where E_{HF} is the average energy of all image blocks. The watermark strength factor should increase with increasing E_{HF}. Hence, the high frequency portion of the watermark strength factor, \alpha_1, can be computed by employing the relationship proposed in [36] as follows:
\alpha_1 = a - c \times \exp\left( -\xi \cdot E_{HF} \right),
where the values of a, c, and \xi are set to 1.023, 0.02, and 3.5 \times 10^{-5}, respectively.
According to [42], the final strength factor \alpha can be computed by exploiting visual saliency. First, the saliency distance D is calculated for each image block, and the maximum saliency distance D_{\max} is determined over all image blocks. The visual saliency based strength factor \alpha_2 can then be represented as \alpha_2 = 1 + \delta \cdot D, where \delta = 0.02 / D_{\max}.
In summary, the final watermark strength factor α can be calculated as:
\alpha = \left( a - c \times \exp\left( -\xi \cdot E_{HF} \right) \right) \times \left( 1 + \frac{0.02}{D_{\max}} \cdot D \right) - \alpha_0,
where \alpha_0 denotes a positive constant, which is subtracted to control the degree of image distortion after watermark embedding. In this work, the value of \alpha_0 is set to 1.0.
On the basis of the above analysis, texture masking and visual saliency can be utilized to calculate the strength factor. This strength factor can adaptively change with the change of image texture and the degree of saliency. In this manner, the strength of the embedding can be controlled more appropriately, thus further improving watermarking performance.
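A minimal numerical sketch of Equations (7) and (8), assuming the constants a, c, and \xi quoted above; the per block saliency distance D and the maximum distance D_{\max} are treated as precomputed inputs, and the function name is our own:

```python
import numpy as np

# Constants of Eq. (7) as quoted in the text.
A, C, XI = 1.023, 0.02, 3.5e-5

def strength_factor(e_hf, d, d_max, alpha0=1.0):
    """Adaptive strength factor of Eq. (8): the texture masking term
    alpha_1 (Eq. (7)) scaled by the saliency term alpha_2, minus alpha0."""
    alpha1 = A - C * np.exp(-XI * e_hf)   # texture masking, Eq. (7)
    alpha2 = 1.0 + (0.02 / d_max) * d     # visual saliency, alpha_2 = 1 + delta*D
    return alpha1 * alpha2 - alpha0
```

As expected, the factor grows with both the high frequency energy and the saliency distance, so more heavily textured and less scrutinized blocks receive a stronger watermark.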

3.3. Watermark Detection

The effect of attacks at the receiver can be modeled simply as additive white Gaussian noise (AWGN) [16]. Furthermore, the complex wavelet coefficients of the low frequency sub-band can be modeled by a Gaussian distribution. The distributions of the coefficients carrying watermark bit "1" or "0" can be represented as follows:
y_i|1 = (1+\alpha) \cdot x_i + n_i, \qquad y_i|1 \sim \mathcal{N}\left( (1+\alpha)\mu, \, \sigma_{y|1}^2 \right),
y_i|0 = (1-\alpha) \cdot x_i + n_i, \qquad y_i|0 \sim \mathcal{N}\left( (1-\alpha)\mu, \, \sigma_{y|0}^2 \right),
where \sigma_{y|1}^2 = (1+\alpha)^2 \sigma^2 + \sigma_n^2 and \sigma_{y|0}^2 = (1-\alpha)^2 \sigma^2 + \sigma_n^2, in which \mu and \sigma^2 denote the mean and variance of the host coefficients x_i, and \sigma_n^2 is the variance of the noise in the related sub-band coefficients.
Complex wavelet coefficients are assumed to be independent and identically distributed in this work. Therefore, the distribution of these coefficients in a specific block with N coefficients y 1 , y 2 , , y N for embedding “1” is:
f(y_1, y_2, \ldots, y_N | 1) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi} \, \sigma_{y|1}} \exp\left( -\frac{\left( y_i - (1+\alpha)\mu \right)^2}{2 \sigma_{y|1}^2} \right).
Similarly, to embed “0”, we have:
f(y_1, y_2, \ldots, y_N | 0) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi} \, \sigma_{y|0}} \exp\left( -\frac{\left( y_i - (1-\alpha)\mu \right)^2}{2 \sigma_{y|0}^2} \right).
According to the ML decision criterion, the watermark extraction process can be written as follows:
f(y_1, y_2, \ldots, y_N | 1) > f(y_1, y_2, \ldots, y_N | 0) \;\Rightarrow\; \hat{w} = 1,
f(y_1, y_2, \ldots, y_N | 1) < f(y_1, y_2, \ldots, y_N | 0) \;\Rightarrow\; \hat{w} = 0.
Thus, by substituting (11) and (12) in (13) and (14), we have:
\prod_{i=1}^{N} \frac{1}{\sqrt{2\pi} \, \sigma_{y|1}} \exp\left( -\frac{(y_i - (1+\alpha)\mu)^2}{2 \sigma_{y|1}^2} \right) > \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi} \, \sigma_{y|0}} \exp\left( -\frac{(y_i - (1-\alpha)\mu)^2}{2 \sigma_{y|0}^2} \right) \;\Rightarrow\; \hat{w} = 1,
\prod_{i=1}^{N} \frac{1}{\sqrt{2\pi} \, \sigma_{y|1}} \exp\left( -\frac{(y_i - (1+\alpha)\mu)^2}{2 \sigma_{y|1}^2} \right) < \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi} \, \sigma_{y|0}} \exp\left( -\frac{(y_i - (1-\alpha)\mu)^2}{2 \sigma_{y|0}^2} \right) \;\Rightarrow\; \hat{w} = 0.
Taking the logarithm of both sides and simplifying, we obtain:
\omega_1 \sum_{i=1}^{N} y_i^2 + \omega_2 \sum_{i=1}^{N} y_i > \tau \;\Rightarrow\; \hat{w} = 1,
\omega_1 \sum_{i=1}^{N} y_i^2 + \omega_2 \sum_{i=1}^{N} y_i < \tau \;\Rightarrow\; \hat{w} = 0,
where \omega_1 = \frac{1}{\sigma_{y|0}^2} - \frac{1}{\sigma_{y|1}^2} and \omega_2 = 2\mu \left( \frac{1+\alpha}{\sigma_{y|1}^2} - \frac{1-\alpha}{\sigma_{y|0}^2} \right).
Therefore, the watermark detection threshold can be expressed as:
\tau = 2N \ln \frac{\sigma_{y|1}}{\sigma_{y|0}} - N \mu^2 \left( \frac{(1-\alpha)^2}{\sigma_{y|0}^2} - \frac{(1+\alpha)^2}{\sigma_{y|1}^2} \right).
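Under this Gaussian model, the decision reduces to comparing a quadratic statistic against the threshold \tau. The following sketch re-derives \omega_1, \omega_2, and \tau directly from the two likelihoods; the function name and parameterization are our own:

```python
import numpy as np

def detect_bit(y, mu, sigma2, sigma_n2, alpha):
    """ML watermark decision for one block of low frequency coefficients y.

    mu, sigma2 : mean and variance of the host coefficients
    sigma_n2   : variance of the channel noise
    """
    y = np.asarray(y, dtype=float)
    N = y.size
    s1 = (1 + alpha) ** 2 * sigma2 + sigma_n2   # sigma_{y|1}^2
    s0 = (1 - alpha) ** 2 * sigma2 + sigma_n2   # sigma_{y|0}^2
    w1 = 1.0 / s0 - 1.0 / s1
    w2 = 2 * mu * ((1 + alpha) / s1 - (1 - alpha) / s0)
    # Threshold: 2N*ln(sigma1/sigma0) equals N*ln(s1/s0), plus the mean terms.
    tau = N * np.log(s1 / s0) + N * mu ** 2 * (
        (1 + alpha) ** 2 / s1 - (1 - alpha) ** 2 / s0)
    stat = w1 * np.sum(y ** 2) + w2 * np.sum(y)
    return 1 if stat > tau else 0
```

In practice, \mu, \sigma^2, and \sigma_n^2 would be estimated from the received coefficients before applying the rule.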

4. Experimental Results

4.1. Imperceptibility of Watermarking

To assess the performance of the proposed watermarking method, experiments were conducted using real images. In this study, we used eight natural images (Barbara, Boat, Bridge, Elaine, Lena, Man, Mandrill, and Peppers), each with a size of 512 × 512. The host images and their watermarked versions with 16 × 16 blocks and a 128 bit message are shown in Figure 4. Throughout the experiments, a three level DT-CWT was used to decompose each selected block, with the near-symmetric 13,19 tap filters and the Q-shift 14,14 tap filters. The watermark strength factor \alpha was set to 0.0153 according to Equation (8) in Section 3.2. For each image in Figure 4, the top image is the host image, the middle image is the watermarked image, and the bottom image is the difference between the host image and the watermarked version.
As shown in Figure 4, the watermark imperceptibility requirement is satisfied. The proposed watermarking method provides an image dependent watermark whose strong components lie in the highly textured parts of the image, which are barely noticeable to human eyes. This scheme allows a high watermark strength factor to be set while the visual quality of the watermarked image is kept at an acceptable level. Moreover, the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) [43] were used to evaluate the performance of the proposed watermarking method in an objective manner. The results are shown in Table 1, in which the watermarking evaluation results are satisfactory. Therefore, the embedded watermarks were perceptually invisible.
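For reference, PSNR over 8 bit images can be computed from its standard definition. This helper is ours, not code from the paper; SSIM is normally taken from a library implementation such as the one accompanying [43]:

```python
import numpy as np

def psnr(original, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    diff = np.asarray(original, float) - np.asarray(distorted, float)
    mse = np.mean(diff ** 2)
    # Identical images have zero MSE, so the ratio diverges.
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```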

4.2. Error Probability Analysis

The error probability in the presence of AWGN is derived as follows. An error occurs whenever watermark bit "1" is embedded into the host image while "0" is extracted at the decoder end, and vice versa. The error probability of the watermarking comprises these two error events.
According to Equation (17), the error probability of embedding watermark information “1” was:
f_{e|1} = f\left( \omega_1 \sum_{i=1}^{N} y_i^2 + \omega_2 \sum_{i=1}^{N} y_i < \tau \,\middle|\, 1 \right),
where \tau, \omega_1, and \omega_2 are as defined in Section 3.3. According to previous results [16], f_{e|1} can be written as:
f_{e|1} = \frac{1}{\kappa_1 \kappa_2 \sqrt{2\pi N \sigma_{y|1}^2}} \int_{0}^{\infty} \gamma\left( \frac{N}{2}, \frac{x}{2} \right) \exp\left( -\frac{\left( (\tilde{\tau} - \kappa_1 x)/\kappa_2 - N(1+\alpha)\mu \right)^2}{2N \sigma_{y|1}^2} \right) dx,
where \kappa_1 = \omega_1 \sigma_{y|1}^2, \kappa_2 = 2(1+\alpha)\mu \, \omega_1 + \omega_2, \tilde{\tau} = \tau + \omega_1 N (1+\alpha)^2 \mu^2, and \gamma\left( \frac{N}{2}, \frac{x}{2} \right) denotes the incomplete Gamma function.
Meanwhile, the error probability of embedding watermark information “0” is:
f_{e|0} = f\left( \omega_1 \sum_{i=1}^{N} y_i^2 + \omega_2 \sum_{i=1}^{N} y_i > \tau \,\middle|\, 0 \right).
According to [16], f e | 0 can be further written as:
f_{e|0} = \frac{1}{c_1 c_2 \sqrt{2\pi N \sigma_{y|0}^2}} \int_{0}^{\infty} \gamma\left( \frac{N}{2}, \frac{x}{2} \right) \zeta(x) \, dx,
where c_1 = \omega_1 \sigma_{y|0}^2, c_2 = 2(1-\alpha)\mu \, \omega_1 + \omega_2, \tilde{\tau} = \tau + \omega_1 N (1-\alpha)^2 \mu^2, \sigma_{y|0}^2 = (1-\alpha)^2 \sigma^2 + \sigma_n^2, and \zeta(x) = \exp\left( -\frac{\left( (\tilde{\tau} - c_1 x)/c_2 - N(1-\alpha)\mu \right)^2}{2N \sigma_{y|0}^2} \right).
The data bits “0” and “1” were assumed to be inserted in the original image with equal probabilities. On the basis of Equations (21) and (23), the total error probability can be written as:
F_e = 0.5 f_{e|1} + 0.5 f_{e|0}.
Figure 5 shows the error probability F_e versus the noise variance for the Lena image under the AWGN attack. As shown in Figure 5, although F_e increased with increasing noise variance, the change in the error probability remained small as the noise attack strength increased.
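The trend in Figure 5 can be reproduced qualitatively with a small Monte Carlo simulation of the Gaussian model: synthetic host coefficients are multiplicatively watermarked, corrupted by AWGN, and decoded with the ML rule of Section 3.3. This is our own illustrative sketch with arbitrary parameters, not the paper's experiment:

```python
import numpy as np

def simulate_ber(alpha=0.1, mu=10.0, sigma2=4.0, noise_var=1.0,
                 n_blocks=2000, N=64, seed=0):
    """Monte Carlo estimate of the decoding error probability under AWGN."""
    rng = np.random.default_rng(seed)
    s1 = (1 + alpha) ** 2 * sigma2 + noise_var   # sigma_{y|1}^2
    s0 = (1 - alpha) ** 2 * sigma2 + noise_var   # sigma_{y|0}^2
    w1 = 1.0 / s0 - 1.0 / s1
    w2 = 2 * mu * ((1 + alpha) / s1 - (1 - alpha) / s0)
    tau = N * np.log(s1 / s0) + N * mu ** 2 * (
        (1 + alpha) ** 2 / s1 - (1 - alpha) ** 2 / s0)
    errors = 0
    for _ in range(n_blocks):
        bit = int(rng.integers(0, 2))
        x = rng.normal(mu, np.sqrt(sigma2), N)          # host coefficients
        y = x * ((1 + alpha) if bit else (1 - alpha))   # multiplicative embedding
        y = y + rng.normal(0.0, np.sqrt(noise_var), N)  # AWGN attack
        stat = w1 * np.sum(y ** 2) + w2 * np.sum(y)
        errors += int(int(stat > tau) != bit)
    return errors / n_blocks
```

Raising `noise_var` increases the estimated error rate, but slowly, which is consistent with the behavior reported for Figure 5.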

4.3. Performance under Attacks

To test robustness, several common attacks, as described in [16,28], were applied to the images watermarked by the proposed method. These attacks included common image processing attacks and geometric distortion attacks. In this study, the bit error rate (BER) was used to evaluate the robustness of the watermarking under several intentional or unintentional attacks. To save space, the robustness of the proposed method was investigated under AWGN, median filtering, Gaussian filtering, JPEG compression, scaling, rotation, and combinational attacks on the eight well known images (i.e., Barbara, Boat, Bridge, Elaine, Lena, Man, Mandrill, and Peppers).
(1) AWGN attack:
Figure 6 shows the results of BER of various test images against AWGN attacks. When the noise variance was less than 10, BER was near zero. When noise variance was less than 33, the corresponding BER was near 0.1. Therefore, the proposed watermarking had good robustness against AWGN attacks.
(2) JPEG compression attack:
Figure 7 shows the BER results against JPEG compression attacks, in which the proposed watermarking demonstrates robustness even when the quality factor is very low: at a quality factor of five, the BER remained below 0.3. When the JPEG compression quality factor was greater than 25, the BER tended to zero. Therefore, the proposed watermarking algorithm was robust against JPEG compression attacks.
(3) Scaling attack:
Table 2 shows the BER results under amplitude scaling attacks. The watermarking algorithm was robust to most scaling attacks. However, Table 2 also shows that when the scaling factor was equal to 0.7 or 1.1, BER had a comparatively large value. The reason is not yet clear, and this issue will be explored in our future work.
(4) Rotation attack:
Table 3 shows the BER results of various test images against rotation attacks. The rotation angle ranged over [−10°, 10°]. When the rotation angle was within [−5°, 5°], the BER remained within [0, 0.07]. Among the values listed in Table 3, the maximum BER was 0.1016, whereas most of the other BER values were small. The BER did not increase markedly as the angle increased. Thus, the proposed embedding approach was robust against rotation attacks.
(5) Gaussian filtering and median filtering attacks:
Table 4 shows the BER results against Gaussian filtering and median filtering attacks. The window sizes of Gaussian filtering and median filtering were 3 × 3 , 5 × 5 , and 7 × 7 . The proposed method was highly robust against Gaussian filtering attacks. When the window size of the median filtering was 3 × 3 , the performance of the proposed scheme was robust. However, with the increase of the window size of the median filtering, the robustness of the watermarking was decreased.
(6) Combinational attack:
Table 5 shows the BER results under the combinational attack, i.e., various distortions combined with JPEG compression at a quality factor of 20. The robustness of the proposed method against this kind of combinational attack was satisfactory. The BER results against the Gaussian noise attack combined with the JPEG compression attack for different images are shown in Figure 8, which further confirm that the proposed scheme has good robustness against this kind of combinational attack.

4.4. Comparison with Other Methods

In this part of the study, our method is compared with its most related competitors, particularly the methods reported in [16,28,44,45]. The methods in [16,44] were chosen on the basis of their similarity to the proposed watermarking; for example, these methods all embed the watermark into the high entropy blocks of the original image. To ensure fairness, the message lengths and PSNR values used in our experiments were the same as those in the other works. The results in Table 6 correspond to the same watermark length of 256 bits embedded into the Barbara, Boat, and Peppers images, with the PSNR of the watermarked image at 45 dB. The results of our method were better than those of the other works for most attacks. For instance, for geometric attacks, such as scaling and rotation, the proposed method outperformed the above three watermarking methods. The main reason is that DT-CWT can effectively approximate shift invariance under geometric transformations. However, the proposed method was slightly less effective than the methods in [28,44] under AWGN attacks. This problem will be investigated thoroughly by studying the statistical properties of DT-CWT coefficients and noise in our future work.
Table 7 and Table 8 show the BER results against median filtering attack and JPEG compression attack by using the proposed method and those methods in [28,44], in which the watermark length was 256 bits and the PSNR of the watermarked image was 45 dB, respectively. As shown in both tables, the proposed method had better results than the methods in [28,44].
Table 9 shows the BER results under scaling attacks with a watermark length of 100 bits and a PSNR of the watermarked image of 45 dB. The proposed method outperformed the methods in [44,45] for most scaling attacks. As described in item (3) of Section 4.3 (i.e., "scaling attack"), when the scaling factor was equal to 0.7 or 1.1, the robustness of the watermarking decreased dramatically. We will investigate this issue in our future work.

5. Conclusions

An image watermarking method was proposed in this study by using DT-CWT and the multiplicative strategy. In this approach, after partitioning a host image into non-overlapping blocks, the high entropy image blocks were selected for watermark embedding. Watermark data extraction was performed by using the MLE decision criterion, and the embedding factor was computed to improve the robustness of the watermarking by using the texture masking and visual saliency scheme. The performance of the proposed scheme was evaluated in terms of image quality and robustness. The experimental results demonstrated the effectiveness of the proposed method.

Author Contributions

J.L. conceived of the idea, designed the experiments, and wrote the manuscript; Y.R. helped to revise this manuscript; Y.H. helped to test and analyze the experimental data.

Funding

This work was supported by the Science and Technology Foundation of Jiangxi Provincial Education Department (Grant No. GJJ170922) and the Natural Science Foundation of Jiangxi (No. 20192BAB207013).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Asikuzzaman, M.; Pickering, M.R. An overview of digital video watermarking. IEEE Trans. Circuit. Syst. Video Technol. 2018, 28, 2131–2153. [Google Scholar] [CrossRef]
  2. Wang, Y.G.; Zhou, G.P.; Shi, Y.Q. Transportation spherical watermarking. IEEE Trans. Image Process. 2018, 27, 2063–2077. [Google Scholar] [CrossRef] [PubMed]
  3. Jang, B.-G.; Lee, S.-H.; Lee, Y.-S.; Kwon, K.-R. Biological viral infection watermarking architecture of MPEG/H.264/AVC/HEVC. Electronics 2019, 8, 889.
  4. Guo, Y.F.; Au, O.C.; Wang, R.; Fang, L.; Cao, X.C. Halftone image watermarking by content aware double-sided embedding error diffusion. IEEE Trans. Image Process. 2018, 27, 3387–3402.
  5. Ou, B.; Li, L.; Zhao, Y.; Ni, R. Efficient color image reversible data hiding based on channel-dependent payload partition and adaptive embedding. Signal Process. 2015, 108, 642–657.
  6. Cox, I.J.; Kilian, J.; Leighton, F.T.; Shamoon, T. Secure spread spectrum watermarking for multimedia. IEEE Trans. Image Process. 1997, 6, 1673–1687.
  7. Liu, X.; Han, G.; Wu, J.; Shao, Z.; Coatrieux, G.; Shu, H. Fractional Krawtchouk transform with an application to image watermarking. IEEE Trans. Signal Process. 2017, 65, 1894–1908.
  8. Ahmaderaghi, B.; Kurugollu, F.; Rincon, J.M.D.; Bouridane, A. Blind image watermark detection algorithm based on discrete shearlet transform using statistical decision theory. IEEE Trans. Comput. Imaging 2018, 4, 46–59.
  9. Singh, D.; Singh, S.K. DCT based efficient fragile watermarking scheme for image authentication and restoration. Multimed. Tools Appl. 2017, 76, 953–977.
  10. Rahman, M.M.; Ahmad, M.O.; Swamy, M.N.S. A new statistical detector for DWT-based additive image watermarking using the Gaussian-Hermite expansion. IEEE Trans. Image Process. 2009, 18, 1782–1796.
  11. Sadreazami, H.; Ahmad, M.O.; Swamy, M.N.S. A study of multiplicative watermark detection in the contourlet domain using alpha-stable distributions. IEEE Trans. Image Process. 2014, 23, 4348–4360.
  12. Amini, M.; Ahmad, M.O.; Swamy, M.N.S. A robust multibit multiplicative watermark decoder using a vector-based hidden Markov model in wavelet domain. IEEE Trans. Circuits Syst. Video Technol. 2018, 28, 402–413.
  13. Hwang, M.J.; Lee, J.S.; Lee, M.S.; Kang, H.G. SVD-based adaptive QIM watermarking on stereo audio signals. IEEE Trans. Multimed. 2018, 20, 45–54.
  14. Zareian, M.; Tohidypour, H.R. A novel gain invariant quantization-based watermarking approach. IEEE Trans. Inf. Forensics Secur. 2014, 9, 1804–1813.
  15. Sadreazami, H.; Amini, A. A robust spread spectrum based image watermarking in ridgelet domain. Int. J. Electron. Commun. 2012, 66, 364–371.
  16. Akhaee, M.A.; Sahraeian, S.M.E.; Sankur, B.; Marvasti, F. Robust scaling-based image watermarking using maximum-likelihood decoder with optimum strength factor. IEEE Trans. Multimed. 2009, 11, 822–833.
  17. Liu, J.H.; She, K. A hybrid approach of DWT and DCT for rational dither modulation watermarking. Circuits Syst. Signal Process. 2012, 31, 797–811.
  18. You, X.; Du, L.; Cheung, Y.M.; Chen, Q.H. A blind watermarking scheme using new nontensor product wavelet filter banks. IEEE Trans. Image Process. 2010, 19, 3271–3284.
  19. Ramanjaneyulu, K.; Rajarajeswari, K. Wavelet-based oblivious image watermarking scheme using genetic algorithm. IET Image Process. 2012, 6, 364–373.
  20. Barni, M.; Bartolini, F.; Piva, A. Improved wavelet-based watermarking through pixel-wise masking. IEEE Trans. Image Process. 2001, 10, 783–791.
  21. Bhowmik, D.; Abhayaratne, C. Quality scalability aware watermarking for visual content. IEEE Trans. Image Process. 2016, 25, 5158–5172.
  22. Wang, J.; Lian, S.; Shi, Y.Q. Hybrid multiplicative multi-watermarking in DWT domain. Multidimens. Syst. Signal Process. 2017, 28, 617–636.
  23. Xiong, L.; Xu, Z.; Shi, Y.Q. An integer wavelet transform based scheme for reversible data hiding in encrypted images. Multidimens. Syst. Signal Process. 2018, 29, 1191–1202.
  24. Yu, X.Y.; Wang, C.Y.; Zhou, X. A hybrid transforms-based robust video zero-watermarking algorithm for resisting high efficiency video coding compression. IEEE Access 2019, 7, 115708–115724.
  25. Langelaar, G.C.; Setyawan, I.; Lagendijk, R.L. Watermarking digital image and video data: A state-of-the-art overview. IEEE Signal Process. Mag. 2000, 17, 20–46.
  26. Cheng, Q.; Huang, T.S. Robust optimum detection of transform domain multiplicative watermarks. IEEE Trans. Signal Process. 2003, 51, 906–924.
  27. Do, M.N.; Vetterli, M. The contourlet transform: An efficient directional multiresolution image representation. IEEE Trans. Image Process. 2005, 14, 2091–2106.
  28. Kalantari, N.K.; Ahadi, S.M.; Vafadust, M. A robust image watermarking in the ridgelet domain using universally optimum decoder. IEEE Trans. Circuits Syst. Video Technol. 2010, 20, 396–406.
  29. Demanet, L.; Ying, L.X. Wave atoms and time upscaling of wave equations. Numer. Math. 2009, 113, 1–71.
  30. Sadreazami, H.; Ahmad, M.O.; Swamy, M.N.S. Multiplicative watermark decoder in contourlet domain using the normal inverse Gaussian distribution. IEEE Trans. Multimed. 2016, 18, 196–207.
  31. Wang, F.; Zhao, X.L.; Ng, M. Multiplicative noise and blur removal by framelet decomposition and L1-based L-curve method. IEEE Trans. Image Process. 2016, 25, 4222–4232.
  32. Selesnick, I.W.; Baraniuk, R.G.; Kingsbury, N.G. The dual-tree complex wavelet transform: A coherent framework for multiscale signal and image processing. IEEE Signal Process. Mag. 2005, 22, 123–151.
  33. Liu, J.H.; Xu, Y.Y.; Wang, S.; Zhu, C. Complex wavelet domain image watermarking algorithm using L1 norm function-based quantization. Circuits Syst. Signal Process. 2018, 37, 1268–1286.
  34. Celik, T.; Ma, K.K. Unsupervised change detection for satellite images using dual-tree complex wavelet transform. IEEE Trans. Geosci. Remote Sens. 2010, 48, 1199–1210.
  35. Watson, A.B.; Yang, G.Y.; Solomon, J.A.; Villasenor, J. Visibility of wavelet quantization noise. IEEE Trans. Image Process. 1997, 6, 1164–1175.
  36. Yang, X.; Lin, W.; Lu, Z.; Ong, E.; Yao, S. Motion-compensated residue preprocessing in video coding based on just-noticeable-distortion profile. IEEE Trans. Circuits Syst. Video Technol. 2005, 15, 742–752.
  37. Wang, C.; Zhang, T.; Wan, W.; Han, X.; Xu, M. A novel STDM watermarking using visual saliency-based JND model. Information 2017, 8, 103.
  38. Wang, C.; Han, X.; Wan, W.; Li, J.; Sun, J.; Xu, M. Visual saliency based just noticeable difference estimation in DWT domain. Information 2018, 9, 178.
  39. Akhaee, M.A.; Sahraeian, S.M.; Marvasti, F. Contourlet-based image watermarking using optimum detector in a noisy environment. IEEE Trans. Image Process. 2010, 19, 967–980.
  40. Papakostas, G.A.; Tsougenis, E.D.; Koulouriotis, D.E. Fuzzy knowledge-based adaptive image watermarking by the method of moments. Complex Intell. Syst. 2016, 2, 205–220.
  41. Borji, A.; Itti, L. State-of-the-art in visual attention modeling. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 185–207.
  42. Borji, A.; Sihite, D.N.; Itti, L. Quantitative analysis of human-model agreement in visual saliency modeling: A comparative study. IEEE Trans. Image Process. 2013, 22, 55–69.
  43. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
  44. Yadav, N.; Singh, K. Robust image-adaptive watermarking using an adjustable dynamic strength factor. Signal Image Video Process. 2015, 9, 1531–1542.
  45. Tsougenis, E.D.; Papakostas, G.A.; Koulouriotis, D.E.; Tourassis, V.D. Towards adaptivity of image watermarking in polar harmonic transforms domain. Opt. Laser Technol. 2013, 54, 84–97.
Figure 1. Impulse responses of the reconstruction filters in two transforms. (a) DWT. (b) DT-complex wavelet transform (CWT).
Figure 2. Two-level 1D dual tree complex wavelet transform.
Figure 3. Block diagram of the proposed watermarking. (a) Embedding. (b) Detection.
Figure 4. Host, watermarked, and difference images: Barbara, Boat, Bridge, Elaine, Lena, Man, Mandrill, and Peppers.
Figure 5. Error probability ( F e ) versus noise variance for AWGN attack.
Figure 6. AWGN attack with different noise variances.
Figure 7. JPEG attack for various test images with different quality factors.
Figure 8. Results for JPEG compression (quality factor = 20 % ) attack combined with AWGN attack for various test images with different noise variances.
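Figures 5–8 report robustness against additive white Gaussian noise (AWGN) and JPEG compression at varying noise variances and quality factors. As an illustrative sketch of the AWGN attack on an 8-bit grayscale image (this is not the authors' test harness; the helper name `awgn_attack` is ours):

```python
import numpy as np

def awgn_attack(image, sigma, seed=None):
    """AWGN attack: add zero-mean Gaussian noise with standard deviation
    sigma, then clip back to the 8 bit grayscale range [0, 255]."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, np.shape(image))
    return np.clip(np.asarray(image, dtype=np.float64) + noise, 0.0, 255.0)

# Attack a flat 64x64 test patch with sigma = 20 (one of the levels in Table 6).
img = np.full((64, 64), 100.0)
attacked = awgn_attack(img, sigma=20, seed=1)
print(attacked.shape)  # same shape as the host image
```

The watermarked image would then be decoded from `attacked` and the error probability measured against the embedded bits, which is what the curves in Figures 5 and 6 summarize.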
Table 1. Evaluation results with different watermark lengths.

Image    | Watermark length 128: PSNR (dB) | SSIM   | Watermark length 1024: PSNR (dB) | SSIM
Barbara  | 51.1151 | 0.9999 | 42.4725 | 0.9959
Boat     | 51.6759 | 0.9998 | 41.8977 | 0.9952
Bridge   | 51.4628 | 0.9999 | 42.7279 | 0.9979
Elaine   | 51.1990 | 0.9995 | 41.4759 | 0.9938
Lena     | 52.2745 | 0.9998 | 46.0169 | 0.9931
Man      | 53.0835 | 0.9999 | 48.8653 | 0.9995
Mandrill | 53.0835 | 0.9999 | 48.8653 | 0.9995
Peppers  | 51.5919 | 0.9996 | 42.2739 | 0.9939
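The PSNR figures in Table 1 follow the standard definition for 8-bit images; a minimal numpy sketch of that computation is below (SSIM [43] additionally requires the luminance, contrast, and structure terms over a sliding window, so only PSNR is shown; the toy images are ours, not the test set):

```python
import numpy as np

def psnr(original, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized images."""
    original = np.asarray(original, dtype=np.float64)
    distorted = np.asarray(distorted, dtype=np.float64)
    mse = np.mean((original - distorted) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: a flat 8x8 image with one pixel perturbed by 8 grey levels,
# so MSE = 64 / 64 = 1 and PSNR = 20 * log10(255) ≈ 48.13 dB.
host = np.full((8, 8), 128.0)
marked = host.copy()
marked[0, 0] += 8.0
print(round(psnr(host, marked), 2))  # → 48.13
```

Higher watermark payload (1024 vs. 128 bits) perturbs more coefficients, which is why the PSNR column for length 1024 is uniformly lower.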
Table 2. BER results of the extracted watermark under scaling attack.

Image    | 0.5    | 0.7    | 0.75 | 0.8    | 0.9    | 1.1    | 1.2    | 1.3    | 1.4    | 1.5 | 2.0  (scaling factor)
Barbara  | 0.0086 | 0.1868 | 0 | 0.0789 | 0.0164 | 0.1422 | 0.0820 | 0.0531 | 0 | 0 | 0
Boat     | 0 | 0.3141 | 0 | 0.1984 | 0.0336 | 0.2781 | 0.1883 | 0.0945 | 0 | 0 | 0
Bridge   | 0 | 0.1547 | 0 | 0.0719 | 0.0086 | 0.1250 | 0.0828 | 0.0336 | 0 | 0 | 0
Elaine   | 0 | 0.2750 | 0 | 0.1430 | 0.0383 | 0.2297 | 0.1406 | 0.0672 | 0.0047 | 0 | 0
Lena     | 0 | 0.2210 | 0 | 0.1289 | 0.0195 | 0.1828 | 0.1250 | 0.0680 | 0.0047 | 0 | 0
Man      | 0 | 0.2617 | 0 | 0.1273 | 0.0234 | 0.2336 | 0.1289 | 0.0586 | 0.0063 | 0 | 0
Mandrill | 0.0016 | 0.1172 | 0 | 0.0195 | 0 | 0.0836 | 0.0234 | 0.0047 | 0 | 0 | 0
Peppers  | 0 | 0.2688 | 0 | 0.1563 | 0.0258 | 0.2398 | 0.1555 | 0.0570 | 0.0078 | 0 | 0
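The BER values in Tables 2–9 are the fraction of watermark bits decoded incorrectly after an attack. A minimal sketch of the computation (the helper name `bit_error_rate` and the random toy watermark are ours):

```python
import numpy as np

def bit_error_rate(embedded, extracted):
    """Fraction of watermark bits recovered incorrectly."""
    embedded = np.asarray(embedded)
    extracted = np.asarray(extracted)
    return np.mean(embedded != extracted)

rng = np.random.default_rng(0)
w = rng.integers(0, 2, size=1024)           # 1024-bit watermark, as in Table 1
w_attacked = w.copy()
flip = rng.choice(1024, size=8, replace=False)
w_attacked[flip] ^= 1                       # an attack flips 8 of the 1024 bits
print(bit_error_rate(w, w_attacked))        # 8/1024 of the bits are wrong
```

A BER of 0 therefore means the watermark was recovered perfectly, as in several scaling-factor columns of Table 2.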
Table 3. BER results of the extracted watermark under rotation attack.

Image    | −10°   | −5°    | −2°    | −1°    | −0.5°  | 0.5°   | 1°     | 2°     | 5°     | 10°  (rotation angle)
Barbara  | 0.0825 | 0.0422 | 0.0422 | 0.0383 | 0.0336 | 0.0186 | 0.0188 | 0.0188 | 0.0297 | 0.0398
Boat     | 0.0320 | 0.0328 | 0.0289 | 0.0266 | 0.0242 | 0 | 0 | 0.0031 | 0.0078 | 0.0078
Bridge   | 0.0430 | 0.0328 | 0.0266 | 0.0266 | 0.0266 | 0.0258 | 0.0320 | 0.0344 | 0.0430 | 0.0594
Elaine   | 0.0727 | 0.0344 | 0.0164 | 0.0164 | 0.0164 | 0.0172 | 0.0172 | 0.0172 | 0.0352 | 0.0531
Lena     | 0.0422 | 0.0188 | 0.0133 | 0.0133 | 0.0133 | 0.0078 | 0.0078 | 0.0078 | 0.0133 | 0.0273
Man      | 0.0320 | 0.0227 | 0.0109 | 0.0109 | 0.0109 | 0.0031 | 0.0031 | 0.0070 | 0.0227 | 0.0328
Mandrill | 0.1016 | 0.0789 | 0.0617 | 0.0617 | 0.0617 | 0.0359 | 0.0391 | 0.0391 | 0.0476 | 0.0820
Peppers  | 0.0398 | 0.0289 | 0.0242 | 0.0195 | 0.0164 | 0.0312 | 0.0359 | 0.0359 | 0.0523 | 0.0516
Table 4. BER results of the extracted watermark under Gaussian filtering and median filtering attacks.

Image    | Gaussian 3×3 | Gaussian 5×5 | Gaussian 7×7 | Median 3×3 | Median 5×5 | Median 7×7
Barbara  | 0.0375 | 0.0535 | 0.0535 | 0.0562 | 0.1734 | 0.2757
Boat     | 0.0176 | 0.0320 | 0.0340 | 0.0671 | 0.1902 | 0.2820
Bridge   | 0.0512 | 0.0664 | 0.0672 | 0.1316 | 0.2668 | 0.3422
Elaine   | 0.0227 | 0.0316 | 0.0332 | 0.0020 | 0.0309 | 0.1129
Lena     | 0.0301 | 0.0410 | 0.0430 | 0.0652 | 0.1836 | 0.2566
Man      | 0.0531 | 0.0754 | 0.0785 | 0.0715 | 0.2020 | 0.2793
Mandrill | 0.0535 | 0.0750 | 0.0750 | 0.2035 | 0.3074 | 0.3555
Peppers  | 0.0297 | 0.0449 | 0.0469 | 0.0227 | 0.0668 | 0.1707
Table 5. BER results under JPEG compression (quality factor = 20%) combined with other attacks.

Attack | Barbara | Boat | Peppers | Elaine | Bridge | Mandrill
JPEG and Scaling (0.75) | 0 | 0.0078 | 0 | 0.0078 | 0 | 0.0078
JPEG and Scaling (0.90) | 0.0187 | 0.0078 | 0 | 0.0178 | 0.0078 | 0.0239
JPEG and Scaling (1.10) | 0.0781 | 0.0356 | 0.0313 | 0.0664 | 0.1253 | 0.0907
JPEG and Gaussian filtering (3×3) | 0.0586 | 0.0313 | 0.0295 | 0.0352 | 0.0519 | 0.0643
JPEG and Gaussian filtering (5×5) | 0.0586 | 0.0508 | 0.0364 | 0.0469 | 0.0741 | 0.0852
JPEG and Median filtering (3×3) | 0.0591 | 0.0710 | 0.0390 | 0.1168 | 0.0078 | 0.2015
JPEG and Median filtering (5×5) | 0.1476 | 0.1592 | 0.1187 | 0.2375 | 0.0352 | 0.2848
Table 6. BER (%) results of the extracted watermark under common attacks.

Test image Barbara
Method         | AWGN (σn = 20) | Median (3×3) | Gaussian (3×3) | JPEG (10%) | Scaling (0.9) | Rotation (−5°) | Rotation
Akhaee [16]    | 6.33 | 7.06 | 3.16 | 7.80 | 2.79 | 3.92 | 2.36
Kalantari [28] | 0.19 | 20.47 | 5.29 | 7.03 | 6.11 | 9.04 | 10.09
Yadav [44]     | 0 | 0.78 | 0.39 | 11.72 | 0.32 | 6.27 | 7.35
Proposed       | 2.15 | 2.73 | 1.17 | 2.73 | 0 | 3.52 | 2.34

Test image Boat
Method         | AWGN (σn = 20) | Median (3×3) | Gaussian (3×3) | JPEG (10%) | Scaling (0.9) | Rotation (−5°) | Rotation
Akhaee [16]    | 9.54 | 11.11 | 5.46 | 16.39 | 4.46 | 3.09 | 0.35
Kalantari [28] | 0 | 14.38 | 8.31 | 9.64 | 8.23 | 8.98 | 9.40
Yadav [44]     | 2.34 | 1.95 | 1.17 | 9.37 | 0.16 | 5.74 | 6.03
Proposed       | 2.58 | 2.73 | 0 | 2.73 | 0.78 | 1.95 | 0.78

Test image Peppers
Method         | AWGN (σn = 20) | Median (3×3) | Gaussian (3×3) | JPEG (10%) | Scaling (0.9) | Rotation (−5°) | Rotation
Akhaee [16]    | 11.23 | 2.78 | 3.26 | 17.25 | 1.27 | 3.40 | 4.80
Kalantari [28] | 0 | 3.44 | 6.19 | 8.07 | 7.22 | 9.15 | 11.34
Yadav [44]     | 1.56 | 0.78 | 1.95 | 10.93 | 0.24 | 6.90 | 8.85
Proposed       | 2.81 | 0.78 | 1.17 | 4.30 | 0.39 | 2.73 | 3.52
Table 7. BER (%) comparison of the recovered watermark under median filtering attack.

Image   | Kalantari [28]: 3×3 | 5×5 | 7×7 | Proposed: 3×3 | 5×5 | 7×7  (median filtering window)
Lena    | 10.16 | 21.48 | 28.52 | 2.73 | 13.67 | 18.75
Peppers | 3.44 | 14.45 | 25.07 | 0.78 | 3.91 | 14.84
Boat    | 14.38 | 26.10 | 37.89 | 2.73 | 13.28 | 22.66
Table 8. BER (%) comparison of the recovered watermark under JPEG compression attack.

Method         | Image   | 10    | 20   | 30   | 40   | 50  (JPEG compression quality factor, %)
Kalantari [28] | Lena    | 9.24 | 1.74 | 0.78 | 0.13 | 0
Yadav [44]     | Lena    | 18.75 | 8.98 | 4.30 | 0.78 | 0
Proposed       | Lena    | 6.25 | 0 | 0 | 0 | 0
Kalantari [28] | Peppers | 8.07 | 2.21 | 0.78 | 0.26 | 0
Yadav [44]     | Peppers | 10.94 | 3.91 | 1.17 | 0.78 | 0.39
Proposed       | Peppers | 4.30 | 0.39 | 0.39 | 0 | 0
Kalantari [28] | Boat    | 9.64 | 1.56 | 0.39 | 0 | 0
Yadav [44]     | Boat    | 9.37 | 6.25 | 1.72 | 0.39 | 0
Proposed       | Boat    | 2.73 | 0 | 0.39 | 0 | 0
Table 9. BER (%) comparison of the recovered watermark under scaling attack.

Method          | Image   | 0.5  | 0.7  | 0.9  | 1.1  | 1.3  | 1.5  (scaling factor)
Yadav [44]      | Lena    | 1.0 | 20.0 | 10.0 | 6.0 | 0 | 0
Tsougenis [45]  | Lena    | 5.0 | 3.0 | 4.0 | 8.0 | 8.0 | 12.0
Proposed method | Lena    | 0 | 11.7 | 1.90 | 7.7 | 7.0 | 0
Yadav [44]      | Peppers | 0 | 13.0 | 6.0 | 1.0 | 0 | 0
Tsougenis [45]  | Peppers | 13.0 | 5.0 | 8.0 | 17.0 | 17.0 | 20.0
Proposed method | Peppers | 0 | 11.4 | 3.0 | 10.0 | 0 | 0

Liu, J.; Rao, Y.; Huang, Y. Complex Wavelet-Based Image Watermarking with the Human Visual Saliency Model. Electronics 2019, 8, 1462. https://doi.org/10.3390/electronics8121462