Article

A Logarithmic Quantization-Based Image Watermarking Using Information Entropy in the Wavelet Domain

School of Mathematics and Computer Science, ShangRao Normal University, Shangrao 334001, China
* Author to whom correspondence should be addressed.
Entropy 2018, 20(12), 945; https://doi.org/10.3390/e20120945
Submission received: 19 November 2018 / Revised: 3 December 2018 / Accepted: 5 December 2018 / Published: 8 December 2018

Abstract

Conventional quantization-based watermarking can be easily estimated by averaging over a set of watermarked signals, because it relies on a uniform quantization approach. Moreover, the conventional quantization-based method neglects the visual perceptual characteristics of the host signal, so perceptible distortions may be introduced in some parts of the host signal. In this paper, inspired by Watson's entropy masking model and logarithmic quantization index modulation (LQIM), a logarithmic quantization-based image watermarking method is developed using the wavelet transform. The novel method improves the robustness of watermarking through a logarithmic quantization strategy that embeds the watermark data into the image blocks with high entropy. The main significance of this work is that the trade-off between invisibility and robustness is addressed simply by the logarithmic quantization approach, which combines the entropy masking model with a distortion-compensated scheme in the watermark embedding method. In this manner, the optimal quantization parameter obtained by minimizing the quantization distortion function effectively controls the watermark strength. For watermark decoding, we model the wavelet coefficients of the image with the generalized Gaussian distribution (GGD) and calculate the bit error probability of the proposed method. The performance of the proposed method is analyzed and verified by simulations on real images. Experimental results demonstrate that the proposed method is imperceptible and strongly robust against attacks including JPEG compression, additive white Gaussian noise (AWGN), Gaussian filtering, salt-and-pepper noise, scaling and rotation.

1. Introduction

With the wide application of big data and other multimedia information technologies, massive amounts of multimedia data are generated and distributed over the Internet every day. This facilitates people's daily work and life, but the security of these multimedia products is becoming increasingly important and has been studied for the past twenty years. One effective approach is digital watermarking, which has been widely researched in the field of multimedia information security for applications such as data authentication, fingerprinting and broadcast monitoring [1,2,3]. Currently, most image watermarking algorithms focus on imperceptibility and robustness.
Generally, watermark embedding methods can be divided into two categories according to the embedding space used: spatial domain methods and transform domain methods. Spatial domain-based watermarking mainly embeds the watermark data by modifying the pixels of the image, while transform domain-based watermarking embeds it in the coefficients of a suitable transform, such as the Fourier transform [4,5], the discrete cosine transform [6,7] and wavelets [8,9,10,11]. According to the embedding strategy, watermarking methods can also be classified into additive [12], multiplicative [13,14] and quantization-based [15,16,17,18,19] methods. Therefore, choosing an appropriate embedding space and strategy is very important in the design of a watermarking algorithm.
Among current quantization-based watermarking algorithms, the most typical is the uniform quantization index modulation (UQIM) method presented in [16]. UQIM has been influential because it achieves a good distortion-robustness trade-off. Furthermore, UQIM is a blind watermarking scheme, since the host image is not needed during watermark detection. In this method, watermark information is embedded by quantizing features of the host image with a set of quantizers, each associated with a different message. Although UQIM is simple and easy to implement, it has the disadvantage of being sensitive to amplitude scaling attacks. Moreover, UQIM ignores visual perceptual characteristics and is therefore prone to introducing perceptible distortions in some parts of the host signal. Several previous works have addressed these problems. The method in [17] proposes a gain-invariant adaptive quantizer based on the rational dither modulation (RDM) strategy in both watermark encoding and decoding; experiments show that it resists scaling attacks well. The method in [18] introduces adaptive QIM (AQIM) watermarking based on a modified Watson visual perceptual model, which exploits an adaptive quantization step size to improve image fidelity and resistance to scaling attacks.
To further improve the robustness of quantization-based watermarking, Kalantari et al. [19] introduced logarithmic quantization index modulation (LQIM) watermarking, which offers perceptual advantages based on the μ-law concept. They first transform the host signal into the logarithmic domain, then use uniform quantization to embed the watermark data, and extract the watermark with a Euclidean distance decoder. LQIM is desirable from a perceptual perspective, since small quantization step sizes are devoted to smaller amplitudes and larger step sizes to larger amplitudes. However, the visual perception model of the image itself is not considered, so perceptible distortions may still be introduced. Recently, [20] proposed a gain-invariant quantization watermarking method based on a division function strategy; since the division function has no effect on the watermark decoding process, the method in [20] is invariant to gain attacks, but its performance against geometric attacks still needs improvement. Besides this, Carpentieri et al. [21] proposed a novel data-hiding method based on the modification of prediction errors (MPE) technique. They developed a new one-pass framework suitable for hyperspectral images collected through remote sensing facilities, and experimental results demonstrated its effectiveness. Moreover, [22] presents a watermarking method using a 4D hyperchaotic system with coherent superposition and modified equal modulus decomposition; simulations validated its robustness against noise, occlusion and special attacks. More importantly, that paper opens a new area of research, as a hybrid multi-resolution wavelet transform is used, where different combinations of transforms can be explored.
It is clear that the embedding space is important to watermark embedding as mentioned above. As reported in [4,5,6,7,8,9,10,11], most watermarking algorithms focus on the frequency domain due to its good tradeoff between robustness and invisibility. It is well known that wavelet-based watermarking methods have the advantages of multi-scale and multi-resolution characteristics. Therefore, we design the watermarking method in the wavelet transform domain.
We proposed a preliminary version of parts of this work in [23]. There are substantial differences between this paper and the conference version, whose overall algorithm is much less elaborate. Motivated by LQIM [19], we propose an improved logarithmic domain-based image watermarking method in this paper. To obtain a good tradeoff between the invisibility and the robustness of the watermarking, we embed the watermark data into the high-entropy regions of the image in the logarithmic domain. For watermark detection, we model the wavelet coefficients of the image with the generalized Gaussian distribution (GGD). Lastly, we evaluate and discuss the performance of the proposed watermarking through experiments.
Although the proposed method follows the framework of [19], it makes several significant contributions. First, we embed stronger watermark data into the complex texture regions of the image, so the perceptual quality of the watermarked image is kept at an acceptable level. Second, although the embedding methods of [19] and the proposed approach may appear mathematically equivalent, they differ in important ways: [19] uses uniform quantization to quantize the transformed coefficients, whereas we use a distortion-compensated quantization, with an optimization strategy applied to obtain the optimal quantization step size; the quantization scalar factor is determined through this optimization. Overall, the proposed scheme is slightly more robust than [19] against some common distortions.
The rest of this paper is organized as follows. Section 2 introduces the proposed logarithmic quantization-based watermarking. Section 3 derives the optimal values of the quantization parameters. Experimental results on the imperceptibility and robustness of the proposed watermarking against common attacks are given in Section 4. Finally, Section 5 concludes the paper.

2. Improved LQIM-Based Watermarking

It is well known that rational exploitation of the human visual system helps to achieve watermark invisibility. Therefore, based on the entropy masking of the visual perception model [24], we embed the watermark information into the high-entropy blocks of an image. Figure 1 shows the flow chart of the proposed watermarking. Specifically, the proposed method proceeds as follows.
Step 1: Apply a pseudo-random noise (PN) generator to produce a binary watermark sequence. Let $b_i \in \{-1, 1\}$ denote the binary watermark signal.
Step 2: Divide the host image into non-overlapping $L \times L$ blocks, sort the blocks in descending order of entropy, and select the first $k$ high-entropy blocks as the embedding space. The selection threshold is set to the average entropy of all blocks. The entropy of a block is computed as
$$H = -\sum_{i=1}^{n} p_i \log p_i,$$
where $p_i$ denotes the probability of gray level $i$ appearing in the image block and $\sum_{i=1}^{n} p_i = 1$.
Step 3: Use the wavelet transform to decompose each selected image block; the wavelet coefficients of the mid-frequency sub-bands are then used for embedding the watermark data.
Step 4: Let $[x_1, x_2, \ldots, x_{\bar N}]$ be the set of selected mid-frequency wavelet coefficients in each block. We use the logarithmic function to transform this set of coefficients:
$$c = \frac{\ln\left(1 + \mu |x| / X_s\right)}{\ln(1 + \mu)}, \quad \mu > 0, \; X_s > 0,$$
where the parameter $\mu$ defines the compression level and $X_s$ scales the original signal. The calculation of $\mu$ is described in Section 3.1. Based on [19], the optimal value of $X_s$ spreads most of the original image samples into the range $[0, 1]$. Let $c = [c_1, c_2, \ldots, c_{\bar N}]$ be the set of transformed coefficients obtained by Equation (2). We then adopt a distortion-compensated quantization index modulation (DC-QIM) scheme to quantize the transformed signal $c_i$ for watermarking:
$$z_i = Q_{b_i}(c_i) + (1 - \alpha)\left(c_i - Q_{b_i}(c_i)\right), \quad i = 1, 2, \ldots, \bar N,$$
where $Q_{b_i}(c_i) = \operatorname{round}\!\left(\dfrac{c_i + b_i \Delta/4}{\Delta}\right)\Delta - \dfrac{b_i \Delta}{4}$ is the dithered quantizer associated with the watermark bit $b_i \in \{-1, 1\}$, $\Delta$ denotes the quantization step size, and $\alpha$ is the quantization scalar factor discussed in Section 3.2. When $\alpha = 1$, DC-QIM reduces to the standard quantization index modulation scheme.
Step 5: Embed the watermark signal into the selected wavelet coefficients. The watermarked signal $y_i$ is obtained by
$$y_i = \operatorname{sgn}(z_i)\,\frac{X_s}{\mu}\left[(1 + \mu)^{|z_i|} - 1\right],$$
where $\operatorname{sgn}(\cdot)$ represents the sign function, $z_i$ is the quantized signal in the transformed domain, and $y_i$ represents the watermarked coefficient.
Step 6: Use the inverse wavelet transform to reconstruct the watermarked image block.
Step 7: Repeat Steps 3–6 for all selected blocks. Finally, combine the watermarked blocks with the non-watermarked blocks to obtain the complete watermarked image. A code sketch of the embedding procedure is given below.
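The following Python sketch outlines Steps 1–7, assuming NumPy and PyWavelets. The wavelet name ('bior4.4' for the 9/7 biorthogonal filters), the periodization mode, the $\Delta/4$ dither inside the quantizer, and the way the host coefficient sign is restored after Step 5 are illustrative assumptions for this sketch, not the authors' implementation (the experiments were run in MATLAB).

```python
import numpy as np
import pywt


def block_entropy(block):
    """Shannon entropy of the gray-level histogram of one block (Eq. (1))."""
    hist, _ = np.histogram(block, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))


def log_compress(x, mu, Xs):
    """Logarithmic transform of Eq. (2), applied to the coefficient magnitudes."""
    return np.log(1.0 + mu * np.abs(x) / Xs) / np.log(1.0 + mu)


def log_expand(z, mu, Xs):
    """Magnitude part of the inverse transform of Eq. (4)."""
    return (Xs / mu) * ((1.0 + mu) ** np.abs(z) - 1.0)


def dcqim(c, bits, delta, alpha):
    """DC-QIM of Eq. (3) with a +/- delta/4 dither per watermark bit (assumed)."""
    d = bits * delta / 4.0
    q = np.round((c + d) / delta) * delta - d        # Q_b(c)
    return q + (1.0 - alpha) * (c - q)               # z = Q_b(c) + (1 - alpha)(c - Q_b(c))


def embed(image, bits, L=32, k=32, mu=7.52, Xs=200.0, delta=0.07, alpha=0.75):
    """Embed +/-1 bits (length k * 2 * (L // 4) ** 2, i.e. 4096 with the defaults)."""
    img = image.astype(float)
    rows, cols = img.shape
    # Step 2: rank the non-overlapping L x L blocks by entropy and keep the k largest.
    coords = [(r, c) for r in range(0, rows, L) for c in range(0, cols, L)]
    coords.sort(key=lambda rc: block_entropy(img[rc[0]:rc[0] + L, rc[1]:rc[1] + L]),
                reverse=True)
    pos = 0
    for r, c in coords[:k]:
        block = img[r:r + L, c:c + L]
        # Step 3: two-level decomposition; coeffs[1] holds the level-2 detail sub-bands.
        coeffs = pywt.wavedec2(block, 'bior4.4', mode='periodization', level=2)
        cH, cV, cD = coeffs[1]
        marked = []
        for sub in (cH, cV):                          # horizontal and vertical details
            b = np.asarray(bits[pos:pos + sub.size], dtype=float)
            pos += sub.size
            sign = np.where(sub.ravel() < 0, -1.0, 1.0)
            comp = log_compress(sub.ravel(), mu, Xs)  # Step 4: compress ...
            z = dcqim(comp, b, delta, alpha)          # ... and quantize
            y = sign * log_expand(z, mu, Xs)          # Step 5: expand, restore host sign
            marked.append(y.reshape(sub.shape))
        coeffs[1] = (marked[0], marked[1], cD)
        # Step 6: the inverse wavelet transform rebuilds the watermarked block.
        img[r:r + L, c:c + L] = pywt.waverec2(coeffs, 'bior4.4', mode='periodization')
    return img
```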
For watermark extraction, we use the Euclidean distance decoder in this work. Specifically, we apply the proposed embedding method to embed zero and one into the received signal $r$ in the logarithmic domain, which yields $r_0$ and $r_1$, respectively. We then extract the watermark bit as
$$\hat m = \arg\min_{i \in \{0, 1\}} \left\| r - r_i \right\|^2,$$
where $\hat m$ represents the extracted watermark bit.
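A matching sketch of the Euclidean distance decoder of Equation (5), under the same assumed parameters ($\mu$, $X_s$, $\Delta$) and $\Delta/4$ dither as the embedding sketch above; here the two candidate bits are re-embedded with plain QIM (i.e., $\alpha = 1$) and the closer one is kept.

```python
import numpy as np


def decode_bits(received, mu=7.52, Xs=200.0, delta=0.07):
    """Euclidean-distance decoding of Eq. (5) for a vector of received coefficients."""
    # Map the received coefficients back to the logarithmic domain (Eq. (2)).
    r = np.log(1.0 + mu * np.abs(np.ravel(received)) / Xs) / np.log(1.0 + mu)
    bits_hat = np.empty(r.size, dtype=int)
    for i, ri in enumerate(r):
        dists = {}
        for b in (-1, 1):                                 # candidate watermark bits
            d = b * delta / 4.0
            rb = np.round((ri + d) / delta) * delta - d   # re-embed assuming bit b
            dists[b] = (ri - rb) ** 2                     # squared Euclidean distance
        bits_hat[i] = min(dists, key=dists.get)           # keep the closer candidate
    return bits_hat
```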

3. Quantization Parameter Discussion and Error Probability Analysis

In this section, we compute the optimal parameter $\mu$ and the quantization scalar factor $\alpha$. The optimal $\mu$ is found by minimizing the quantization distortion, following [19], while $\alpha$ is determined from the distortion-compensation interference and the noise interference. In addition, the watermark error probability is analyzed in terms of the generalized Gaussian distribution (GGD) model in Section 3.3.

3.1. Optimal Parameter μ

In order to obtain the optimal value of $\mu$, we minimize the quantization distortion (the watermark power) in this subsection. Let $w$ denote the quantization noise and $E[(x_w - x)^2]$ the watermark power in the logarithmic transform domain. According to [19], $(x_w - x)$ can be written as
$$x_w - x = \frac{s_q - s}{s}\,x = \left(\frac{s_q}{s} - 1\right)x,$$
where $s_q = \frac{X_s}{\mu}\left[(1 + \mu)^{c + w} - 1\right]$, $w$ denotes the quantization noise, $c$ denotes the quantized signal, and $s = \sqrt{\frac{1}{N}\sum_{i=1}^{N} x_i^2}$ represents the normalized magnitude used to embed one bit into the vector $X = [x_1, x_2, \ldots, x_N]$. By adding and subtracting $(1 + \mu)^w$ inside the bracket of $s_q$, we have
$$\begin{aligned} s_q &= \frac{X_s}{\mu}\left[(1+\mu)^{c+w} + (1+\mu)^{w} - (1+\mu)^{w} - 1\right] \\ &= \frac{X_s}{\mu}\left[(1+\mu)^{w}\left((1+\mu)^{c} - 1\right) + (1+\mu)^{w} - 1\right] \\ &= \frac{X_s}{\mu}\left[(1+\mu)^{w} - 1\right] + \frac{X_s}{\mu}\left[(1+\mu)^{c} - 1\right](1+\mu)^{w} \\ &= \frac{X_s}{\mu}\left[(1+\mu)^{w} - 1\right] + s\,(1+\mu)^{w}. \end{aligned}$$
Thus, $(x_w - x)$ can be further written as
$$x_w - x = \left[\frac{X_s}{\mu s}\left((1 + \mu)^w - 1\right) + (1 + \mu)^w - 1\right]x.$$
Simplifying the above equation, we have
$$x_w - x = \left(1 + \frac{X_s}{\mu s}\right)\left[(1 + \mu)^w - 1\right]x.$$
According to Equation (9), replacing $s$ with $|x|$ when $s$ is a scalar, we have
$$x_w - x = \left(x + \operatorname{sgn}(x)\,\frac{X_s}{\mu}\right)\left[(1 + \mu)^w - 1\right],$$
where $\operatorname{sgn}(x) = x / |x|$. Thus, $E[(x_w - x)^2]$ can be expressed as
$$E\left[(x_w - x)^2\right] = E\left[\left(x + \operatorname{sgn}(x)\,\frac{X_s}{\mu}\right)^2\right] E\left[\left((1 + \mu)^w - 1\right)^2\right].$$
Following [19], we assume that the two terms in Equation (11) are independent of each other. The first term can then be written as
$$E\left[\left(x + \operatorname{sgn}(x)\,\frac{X_s}{\mu}\right)^2\right] = E[x^2] + \frac{2 E[|x|] X_s}{\mu} + \frac{X_s^2}{\mu^2}.$$
Next, we calculate and minimize the watermark power using the method of the vector LQIM in [19]. Based on the obtained watermark power, the document-to-watermark ratio (DWR) can be computed as
$$DWR = \frac{E[x^2]}{E\left[(x_w - x)^2\right]} = \left[\left(1 + \frac{2 E[|x|] X_s}{E[x^2]\,\mu} + \frac{X_s^2}{E[x^2]\,\mu^2}\right)\frac{1}{\Delta}\int_{-\Delta/2}^{\Delta/2}\left((1 + \mu)^w - 1\right)^2 dw\right]^{-1}.$$
Applying the Taylor series expansion to $(1 + \mu)^w$, we can write
$$(1 + \mu)^w = 1 + \ln(1 + \mu)\,w + O(w^2),$$
where the higher-order terms are neglected. With this approximation, the expectation $E\left[\left((1 + \mu)^w - 1\right)^2\right]$ in Equation (11) can be rewritten as
$$E\left[\left((1 + \mu)^w - 1\right)^2\right] = \frac{\ln^2(1 + \mu)\,\Delta^2}{12}.$$
Using the above simplification and denoting the optimal parameter of LQIM by $\mu_{opt}$, it can be obtained as
$$\mu_{opt} = \arg\min_{\mu \in (0, \infty)} \left(1 + \frac{2 E[|x|] X_s}{E[x^2]\,\mu} + \frac{X_s^2}{E[x^2]\,\mu^2}\right)\ln^2(1 + \mu).$$
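As a sketch, Equation (16) can be solved numerically. In the snippet below the statistics $E[|x|]$ and $E[x^2]$ are estimated from the selected wavelet coefficients, and the bounded search interval is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import minimize_scalar


def optimal_mu(coeffs, Xs=200.0):
    """Numerically minimize the objective of Eq. (16) over mu."""
    e_abs = np.mean(np.abs(coeffs))     # E[|x|]
    e_sq = np.mean(np.square(coeffs))   # E[x^2]

    def objective(mu):
        return (1.0 + 2.0 * e_abs * Xs / (e_sq * mu)
                + Xs ** 2 / (e_sq * mu ** 2)) * np.log(1.0 + mu) ** 2

    # The bracket (1e-3, 1e3) is an illustrative assumption for the bounded search.
    return minimize_scalar(objective, bounds=(1e-3, 1e3), method='bounded').x
```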

3.2. Quantization Scalar Factor α

First, we assume that the received image is contaminated by zero-mean additive white Gaussian noise (AWGN). The total interference energy comes from both the distortion-compensation interference and the noise interference, which are independent [16]. Thus, the interference function $f$ is defined as
$$f = E\left[\left(\varepsilon - (1 - \alpha)\left(Q(x; m, \Delta/\alpha) - x\right)\right)^2\right],$$
where $\varepsilon$ is Gaussian noise with $\varepsilon \sim N(0, \sigma_\varepsilon^2)$ and $\sigma_\varepsilon^2$ denotes the noise variance. Equation (17) can then be written as
$$f = \sigma_\varepsilon^2 + (1 - \alpha)^2 D / \alpha^2,$$
where $D$ is the expected embedding distortion, defined as $D = E\left[\frac{1}{\bar N}\|y - x\|^2\right]$ according to [16], $y$ denotes the watermarked signal, $x$ denotes the host signal, and $E[\cdot]$ represents the mathematical expectation. One optimality criterion for choosing $\alpha$ is to maximize the distortion-to-interference ratio (DIR):
$$\mathrm{DIR}(\alpha) = D_1^2 / \left(\alpha^2 \sigma_\varepsilon^2 + (1 - \alpha)^2 D\right),$$
where $D_1$ is the minimum distance. Let $\varphi(\alpha) = \alpha^2 \sigma_\varepsilon^2 + (1 - \alpha)^2 D$ and set its derivative with respect to $\alpha$ to zero:
$$\partial \varphi(\alpha) / \partial \alpha = 2\left(D + \sigma_\varepsilon^2\right)\alpha - 2D = 0.$$
Therefore, the optimal $\alpha$ is obtained as
$$\alpha_{opt} = D / \left(D + \sigma_\varepsilon^2\right) = 1 / \left(1 + 1/DNR\right),$$
where $DNR$ is the distortion-to-noise ratio, $DNR = D / \sigma_\varepsilon^2$ (often reported in decibels as $10\log_{10}(D/\sigma_\varepsilon^2)$).
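Equation (21) is straightforward to evaluate once $D$ and $\sigma_\varepsilon^2$ are known or estimated; the numbers in the comment below are purely illustrative, not values taken from the paper.

```python
def optimal_alpha(D, sigma_eps_sq):
    """Eq. (21): alpha_opt = D / (D + sigma_eps^2) = 1 / (1 + 1/DNR)."""
    dnr = D / sigma_eps_sq               # distortion-to-noise ratio
    return 1.0 / (1.0 + 1.0 / dnr)


# Illustrative numbers only: D = 2.25 and sigma_eps^2 = 0.75 give DNR = 3
# and hence alpha_opt = 0.75, the scalar factor used in the experiments.
print(optimal_alpha(2.25, 0.75))         # -> 0.75
```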

3.3. Derivation of Error Probability

Roughly speaking, the distribution of the wavelet coefficients of an image is highly non-Gaussian. Therefore, we utilize the generalized Gaussian distribution (GGD) [6,19] to model the image wavelet coefficients in this work. For simplicity, the host coefficients are modeled by the GGD, which is defined as
$$p_x(x; \tilde\mu, \alpha, \beta) = \frac{\beta}{2\,\alpha\,\Gamma(1/\beta)}\, e^{-\left(\frac{|x - \tilde\mu|}{\alpha}\right)^{\beta}},$$
where $\tilde\mu$ denotes the mean of the distribution, $\alpha$ represents the scale parameter, $\beta$ denotes the shape parameter, and $\Gamma(\cdot)$ is the Gamma function. When $\beta = 1$, the GGD corresponds to a Laplacian distribution, while $\beta = 2$ corresponds to a Gaussian distribution.
Figure 2 shows the histograms of the wavelet coefficients; as can be seen, they are highly non-Gaussian. Moreover, Figure 3 shows the histogram of the wavelet coefficients together with a plot of the fitted GGD. From Figure 3, we can see that the fit is quite good. Therefore, we can use the two parameters of the GGD to model the wavelet coefficients. For the derivation of the watermark error probability, we assume that the interference channel is AWGN. A detection error occurs when noise causes the received signal to fall into a wrong decision region. Following [19], the error probability of the watermark is defined as
$$p_i = \sum_{i=-\infty}^{\infty} o_i \sum_{m=-\infty}^{\infty} \int_{T_{i+2m}}^{T_{i+1+2m}} \frac{1}{\sqrt{2\pi}\,\sigma_n}\, e^{-\frac{\left(n - C_{i/2}\right)^2}{2\sigma_n^2}}\, dn,$$
where $\sigma_n^2$ is the noise variance and $T_i$ is defined as
$$T_i = \frac{C_{i/2} + C_{(i+1)/2}}{2},$$
where $o_i$ is the probability of occurrence of the host signal in the interval $[C_{(i-1)/2}, C_{(i+1)/2}]$; assuming equal probabilities for the $-1$ and $1$ bits, it is given by
$$o_i = \frac{1}{2}\int_{C_{(i-1)/2}}^{C_{(i+1)/2}} \frac{\beta}{2\,\alpha\,\Gamma(1/\beta)} \exp\left(-\left(\frac{|x|}{\alpha}\right)^{\beta}\right) dx,$$
where $C_i$ is defined as
$$C_i = \operatorname{sgn}(i)\,\frac{X_s}{\mu}\left[(1 + \mu)^{|i\Delta|} - 1\right].$$
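The error probability of Equations (23)–(26) can be evaluated numerically. The sketch below follows our reconstruction of these equations under stated assumptions: the infinite sums are truncated at plus/minus n_terms, the GGD is taken as zero-mean and evaluated with SciPy's gennorm (scale $\alpha_g$, shape $\beta_g$), and all parameter values are placeholders.

```python
import numpy as np
from scipy.stats import gennorm, norm


def C(i, mu, Xs, delta):
    """Reconstruction points of Eq. (26)."""
    return np.sign(i) * (Xs / mu) * ((1.0 + mu) ** np.abs(i * delta) - 1.0)


def error_probability(mu, Xs, delta, alpha_g, beta_g, sigma_n, n_terms=20):
    """Numerical evaluation of Eqs. (23)-(25) with truncated sums (a sketch)."""
    T = lambda j: 0.5 * (C(j / 2.0, mu, Xs, delta) + C((j + 1) / 2.0, mu, Xs, delta))
    p = 0.0
    for i in range(-n_terms, n_terms + 1):
        # o_i: host probability mass in [C_{(i-1)/2}, C_{(i+1)/2}] under the GGD, Eq. (25).
        lo, hi = C((i - 1) / 2.0, mu, Xs, delta), C((i + 1) / 2.0, mu, Xs, delta)
        o_i = 0.5 * (gennorm.cdf(hi, beta_g, scale=alpha_g)
                     - gennorm.cdf(lo, beta_g, scale=alpha_g))
        # Inner sum over m: Gaussian noise mass falling into the wrong decision regions.
        center = C(i / 2.0, mu, Xs, delta)
        mass = 0.0
        for m in range(-n_terms, n_terms + 1):
            a, b = T(i + 2 * m), T(i + 1 + 2 * m)
            mass += (norm.cdf(b, loc=center, scale=sigma_n)
                     - norm.cdf(a, loc=center, scale=sigma_n))
        p += o_i * mass
    return p
```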

4. Experimental Results and Analysis

To evaluate the performance of the proposed watermarking method and the validity of the analytical derivations, the proposed algorithm is simulated on several benchmark images: Lena, Barbara, Boat, Mandrill, Flintstones and Einstein. First, we conduct experiments on these images to show the imperceptibility of the watermarking. Second, we perform several robustness experiments to show the perceptual advantages of the proposed watermarking in comparison with previous quantization-based algorithms. Finally, to further verify the detection performance of the proposed method, the watermark error probability is examined under AWGN and JPEG compression attacks.

4.1. Imperceptibility Performance Test

In this section, we perform the imperceptibility test on the above six images. For watermark embedding, the original images are first segmented into non-overlapping $L \times L$ blocks, where $L$ can be set to 8, 16, 32 or 64. The first $k$ high-entropy image blocks are chosen as the embedding space. Each selected block is decomposed with the 9/7 biorthogonal filters using three levels of decomposition, and the two mid-frequency sub-bands are quantized using the logarithmic quantization strategy; these mid-frequency sub-bands comprise the horizontal and vertical detail coefficients. In this experiment, the block size is $32 \times 32$ and the number of selected blocks is 64. Since the mid-frequency wavelet coefficients of the second level are quantized, this results in embedding 8192 bits in a $512 \times 512$ image.
The invisibility results are shown in Figure 4; the watermark invisibility is satisfactory. Moreover, to assess the performance of the proposed method objectively, we also evaluate it using the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) [25]. The PSNR and SSIM results are shown in Table 1. It can be seen that the proposed method has good invisibility in the absence of attacks. Furthermore, we perform a histogram test to measure the difference between the host image and the watermarked image. As shown in Figure 5, the histogram of the original image agrees closely with that of the watermarked image.
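For reference, the PSNR and SSIM values reported in Table 1 can be reproduced with any standard implementation; a minimal sketch using scikit-image (an assumption about tooling, not the authors' code) is:

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def quality(original, watermarked):
    """PSNR (dB) and SSIM between an 8-bit host image and its watermarked version."""
    psnr = peak_signal_noise_ratio(original, watermarked, data_range=255)
    ssim = structural_similarity(original, watermarked, data_range=255)
    return psnr, ssim
```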

4.2. Robustness Performance Test

In order to evaluate the robustness of the proposed method, several common image processing and geometric distortion attacks are applied to the watermarked images, including AWGN, JPEG compression, scaling, median filtering and rotation. The robustness of the proposed watermarking under these attacks is investigated using the six well-known images Lena, Barbara, Boat, Mandrill, Flintstones and Einstein, all of size $512 \times 512$. Lastly, we use the bit error rate (BER) to evaluate the watermark robustness under several intentional attacks.
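For completeness, the BER is simply the fraction of wrongly decoded watermark bits, expressed as a percentage; a one-line sketch:

```python
import numpy as np


def ber(embedded, extracted):
    """Bit error rate in percent between the embedded and extracted bit sequences."""
    return 100.0 * np.mean(np.asarray(embedded) != np.asarray(extracted))
```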
Furthermore, to show the perceptual advantages of the proposed method, we compare it with previous quantization-based algorithms on the benchmark test images. Comparisons with UQIM [16], AQIM [18], LQIM [19] and reference [20] are performed under these attacks. All watermarking methods use the same watermark length and PSNR value: the watermark length is 8192 bits and the PSNR for all images is 45 dB. The quantization step size is selected as 0.07, 3.0, 1.50, 0.65 and 0.09 for the proposed method, UQIM [16], AQIM [18], LQIM [19] and reference [20], respectively. For the proposed method we further set $X_s = 200$, $\mu_{opt} = 7.52$ and the quantization scalar factor to 0.75.
Table 2 shows the comparison of UQIM [16], AQIM [18], LQIM [19] and reference [20] under AWGN, median filtering, JPEG compression, amplitude scaling and rotation attacks. From Table 2, we can see that the proposed method outperforms UQIM, AQIM, LQIM and reference [20]. The main reasons are as follows. Among these quantization-based methods, UQIM [16] uses a uniform quantizer for watermark embedding, which may reduce the fidelity of the host image. AQIM [18] utilizes a perceptual model to build an adaptive quantizer, which improves the robustness of the watermarking; however, discrepancies between the quantization step sizes estimated at the embedder and at the decoder reduce the robustness of AQIM [18]. By exploiting Watson's entropy masking and the optimal quantization parameter, the proposed method achieves a lower BER than LQIM [19] and, in most cases, reference [20].
To further verify the watermark performance under AWGN, JPEG compression, amplitude scaling and rotation attacks, simulations are also performed on the six benchmark test images, and the average results are depicted in Figures 6–9. Here, the optimal quantization scalar factor is set to 0.75 and the image block size is $32 \times 32$. Different from Table 2, the watermark length is 4096 bits in Figures 6–9: we choose 32 image blocks with high entropy value as the watermark embedding space, and the two sets of second-level mid-frequency wavelet coefficients are quantized for watermark embedding, resulting in 4096 embedded bits per image. The other parameters are chosen as mentioned above. The results are averaged over the six images.
Figure 6 shows the results of the proposed method in comparison with UQIM [16], AQIM [18], LQIM [19] and reference [20] under AWGN attack. The simulation results in Figure 6 show that the proposed approach outperforms these previous quantization-based methods. Among the attacks considered, JPEG compression is the most common image distortion in practice.
Therefore, we also evaluate all methods against JPEG compression in Figure 7. As seen, the overall performance of the proposed algorithm is satisfactory; however, it is slightly worse than [20] under strong JPEG compression, and this issue will be investigated in our future work. Figure 8 and Figure 9 illustrate the robustness of the proposed method in comparison with the above quantization-based watermarking under amplitude scaling and rotation attacks, respectively. As can be seen from these two figures, the performance of the proposed watermarking at different scaling factors and rotation angles is better than that of the compared methods.
Besides this, the computational time of the proposed watermarking method with different images is presented in Table 3, and we compare it with previous quantization-based algorithms in terms of the computational time. Note that all the results are implemented in MATLAB R2016a. As shown in Table 3, the proposed watermarking algorithm has high computational efficiency.
In summary, the robustness of the proposed method outperforms that of the compared methods. The main factors are as follows. First, the high-entropy image regions are selected as the watermark embedding space, which improves the invisibility of the watermarking. Moreover, the optimal quantization scalar factor is used to control the perceptual distortion of watermark embedding, which reduces the effect of embedding distortion on the watermarked image. In addition, we exploit the logarithmic quantization strategy in designing the watermarking, which improves its robustness.

4.3. Discussion of Error Probability

Figure 10 and Figure 11 show the watermark error probability under AWGN and JPEG compression attacks for the Lena image, respectively. The error probability is derived according to the method described in Section 3.3 and is calculated by $P_e = \frac{1}{k}\sum_{i=1}^{k} p_i$, where $k$ is the total number of selected high-entropy blocks in the watermarking system and $p_i$ is computed by applying Equation (23) described in Section 3.3.
From Figure 10 and Figure 11, it can be seen that the proposed method has slightly better performance than UQIM [16], AQIM [18], LQIM [19] and reference [20]. This further validates the effectiveness of the proposed algorithm.
Meanwhile, it also shows that the GGD model can describe the non-Gaussian property of the wavelet coefficients well. In brief, the main reasons are summarized as follows. First, the watermark signal is embedded into the high-entropy regions of the host image; with this strategy, the imperceptibility of the watermarking system is improved effectively. Second, we use the distortion-compensated approach to achieve quantization-based watermark embedding in the proposed method, and an optimization strategy is applied to obtain the optimal quantization step size; in this way, the quantization distortion of the host signal is reduced and the robustness of the watermarking is improved. Finally, thanks to the good fit of the generalized Gaussian distribution model to the non-Gaussian wavelet coefficients, the detection performance of the watermarking is enhanced effectively.
As reported in Table 2 and Figures 6–11, the robustness of the proposed watermarking method outperforms the other quantization-based methods mentioned above. The main factors are summarized as follows. First, the high-entropy image regions are selected as the watermark embedding space, which improves the invisibility of the watermarking; moreover, the optimal quantization scalar factor is used to control the perceptual distortion of watermark embedding, which reduces the effect of embedding distortion on the watermarked image. Second, we exploit the logarithmic quantization strategy in designing the watermarking and find the optimal parameter by minimizing the quantization distortion, which improves the robustness. Finally, we use the generalized Gaussian distribution model for the wavelet coefficients, which improves the detection performance of the proposed watermarking.
However, the performance of the proposed method is slightly worse than [20] under strong JPEG compression. Furthermore, the proposed watermarking performs weakly against some geometric distortion attacks, such as complex affine transformations, cropping, synchronization attacks and local random bending attacks. To address these problems, we will consider more advanced methods and techniques in our future work, including group component analysis [26], sparse Bayesian learning [27,28,29] and deep convolutional neural networks [30].

5. Conclusions

The wavelet transform has been successfully applied in many image processing areas, such as digital watermarking, JPEG compression and image restoration. In this work, we developed a modified logarithmic quantization-based watermarking method based on information entropy in the wavelet domain. By using information entropy, the invisibility of the watermark can be improved effectively. Furthermore, an optimization strategy is applied to obtain the optimal quantization step size. The robustness of the proposed watermarking is demonstrated through a series of experiments. In terms of watermark decoding, we apply the generalized Gaussian distribution model to describe the distribution of the wavelet coefficients; simulation results show the effectiveness of the watermark detection. Future work will include investigating novel data-hiding algorithms using other technologies such as sparse representation and deep convolutional neural networks.

Author Contributions

J.L. conceived the idea and designed the experiments; S.W. helped to revise the manuscript; X.X. helped to analyze the experimental data.

Funding

This work was supported by the Science and Technology Foundation of Jiangxi Provincial Education Department (Grant No. GJJ170922).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bhowmik, D.; Abhayaratne, C. Quality scalability aware watermarking for visual content. IEEE Trans. Image Proc. 2016, 25, 5158–5172. [Google Scholar] [CrossRef] [PubMed]
  2. Zhou, J.; Sun, W.; Dong, L.; Liu, X.; Au, O.C.; Tang, Y.Y. Secure reversible image data-hiding over encrypted domain via key modulation. IEEE Trans. Circuits Syst. Video Technol. 2016, 26, 441–452. [Google Scholar] [CrossRef]
  3. Sadreazami, H.; Ahmad, M.O.; Swamy, M.S. A robust multiplicative watermark detector for color images in sparse domain. IEEE Trans. Circuits Syst. II Exp. Briefs 2015, 62, 1159–1163. [Google Scholar] [CrossRef]
  4. Tsui, T.K.; Zhang, X.P.; Androutsos, D. Color image watermarking using multidimensional Fourier transforms. IEEE Trans. Inf. Forensics Secur. 2008, 3, 16–28. [Google Scholar] [CrossRef]
  5. Urvoy, M.; Goudia, D.; Autrusseau, F. Perceptual DFT watermarking with improved detection and robustness to geometrical distortions. IEEE Trans. Inf. Forensics Secur. 2014, 9, 1108–1119. [Google Scholar] [CrossRef]
  6. Hernandez, J.R.; Amado, M.; Perez-Gonzalez, F. DCT-domain watermarking techniques for still images: Detector performance analysis and a new structure. IEEE Trans. Image Proc. 2000, 9, 55–68. [Google Scholar] [CrossRef]
  7. Zhang, X.; Xiao, Y.; Zhao, Z. Self-embedding fragile watermarking based on DCT and fast fractal coding. Multimed. Tools. Appl. 2015, 74, 5767–5786. [Google Scholar] [CrossRef]
  8. Sadreazami, H.; Ahmad, M.O.; Swamy, M.S. A study of multiplicative watermark detection in the contourlet domain using alpha-stable distributions. IEEE Trans. Image Proc. 2014, 23, 4348–4360. [Google Scholar] [CrossRef]
  9. Li, C.; Zhang, Z.; Wang, Y.; Ma, B.; Huang, D. Dither modulation of significant amplitude difference for wavelet based robust watermarking. Neurocomputing 2015, 166, 404–415. [Google Scholar] [CrossRef]
  10. Singh, D.; Singh, S.K. DWT-SVD and DCT based robust and blind watermarking scheme for copyright protection. Multimed. Tools. Appl. 2017, 76, 13001–13024. [Google Scholar] [CrossRef]
  11. Mehta, R.; Rajpal, N.; Vishwakarma, V.P. A robust and efficient image watermarking scheme based on Lagrangian SVR and lifting wavelet transform. Int J. Mach Learn. Cybern. 2017, 8, 379–395. [Google Scholar] [CrossRef]
  12. Rahman, S.M.; Ahmad, M.O.; Swamy, M. A new statistical detector for DWT-based additive image watermarking using the Gauss–Hermite expansion. IEEE Trans. Image Proc. 2009, 18, 1782–1796. [Google Scholar] [CrossRef] [PubMed]
  13. Cheng, Q.; Huang, T.S. Robust optimum detection of transform domain multiplicative watermarks. IEEE Trans. Image Proc. 2003, 51, 906–924. [Google Scholar] [CrossRef]
  14. Sadreazami, H.; Ahmad, M.O.; Swamy, M. Multiplicative watermark decoder in contourlet domain using the normal inverse Gaussian distribution. IEEE Trans. Multimed. 2016, 18, 196–207. [Google Scholar] [CrossRef]
  15. Qin, C.; Chang, C.C.; Chiu, Y.P. A novel joint data-hiding and compression scheme based on SMVQ and image inpainting. IEEE Trans. Image Proc. 2014, 23, 969–978. [Google Scholar]
  16. Chen, B.; Wornell, G.W. Quantization index modulation: A class of provably good methods for digital watermarking and information embedding. IEEE Trans. Inf. Theory 2001, 47, 1423–1443. [Google Scholar] [CrossRef]
  17. Hu, H.T.; Hsu, L.Y. A DWT-based rational dither modulation scheme for effective blind audio watermarking. Circuits Syst. Signal Proc. 2016, 35, 553–572. [Google Scholar] [CrossRef]
  18. Li, Q.; Cox, I.J. Using perceptual models to improve fidelity and provide resistance to valumetric scaling for quantization index modulation watermarking. IEEE Trans. Inf. Forensics Secur. 2007, 2, 127–139. [Google Scholar] [CrossRef]
  19. Kalantari, N.K.; Ahadi, S.M. A logarithmic quantization index modulation for perceptually better data-hiding. IEEE Trans. Image Proc. 2010, 19, 1504–1517. [Google Scholar] [CrossRef]
  20. Zareian, M.; Tohidypour, H.R. A novel gain invariant quantization-based watermarking approach. IEEE Trans. Inf. Forensics Secur. 2014, 9, 1804–1813. [Google Scholar] [CrossRef]
  21. Carpentieri, B.; Castiglione, A.; De Santis, A.; Palmieri, F.; Pizzolante, R. One-pass lossless data-hiding and compression of remote sensing data. Future Gener. Comput. Syst. 2019, 90, 222–239. [Google Scholar] [CrossRef]
  22. Rakheja, P.; Vig, R.; Singh, P. Optical asymmetric watermarking using 4D hyperchaotic system and modified equal modulus decomposition in hybrid multi resolution wavelet domain. Optik 2019, 176, 425–437. [Google Scholar] [CrossRef]
  23. Liu, J.; Ye, P. A Logarithmic Quantization Index Modulation data-hiding Using the Wavelet Transform. In Proceedings of the 2013 IEEE Third International Conference on Instrumentation, Measurement, Computer, Communication and Control (IMCCC), Shenyang, China, 21–23 September 2013; pp. 1181–1184. [Google Scholar]
  24. Watson, A.B.; Yang, G.Y.; Solomon, J.A.; Villasenor, J. Visibility of wavelet quantization noise. IEEE Trans. Image Proc. 1997, 6, 1164–1175. [Google Scholar] [CrossRef] [Green Version]
  25. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Proc. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  26. Zhou, G.; Cichocki, A.; Zhang, Y.; Mandic, D.P. Group component analysis for multiblock data: Common and individual feature extraction. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 2426–2439. [Google Scholar] [CrossRef] [PubMed]
  27. Jin, Z.; Zhou, G.; Gao, D.; Zhang, Y. EEG classification using sparse Bayesian extreme learning machine for brain–computer interface. Neural Comput. Appl. 2018, 1–9. [Google Scholar] [CrossRef]
  28. Zhang, Y.; Zhou, G.; Jin, J.; Zhao, Q.; Wang, X.; Cichocki, A. Sparse Bayesian classification of EEG for brain–computer interface. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 2256–2267. [Google Scholar] [CrossRef]
  29. Zhang, Y.; Zhou, G.; Jin, J.; Zhang, Y.; Wang, X.; Cichocki, A. Sparse Bayesian multiway canonical correlation analysis for EEG pattern recognition. Neurocomputing 2017, 225, 103–110. [Google Scholar] [CrossRef]
  30. Liu, N.; Wan, L.; Zhang, Y.; Zhou, T.; Huo, H.; Fang, T. Exploiting convolutional neural networks with deeply local description for remote sensing image classification. IEEE Access 2018, 6, 11215–11228. [Google Scholar] [CrossRef]
Figure 1. Flow chart of the proposed watermarking method.
Figure 2. Histograms of the horizontal part and vertical part of the Lena image. The kurtosis of the two distributions is (a) 20.2610 and (b) 25.9780; for the Gaussian distribution the kurtosis is 3, and therefore the wavelet transform coefficients are highly non-Gaussian.
Figure 3. Wavelet sub-band coefficient histogram fitted with a generalized Gaussian distribution for Lena image, where mu represents the mean, alpha and beta represent the scale parameter and the shape parameter, respectively. (a) horizontal part (b) vertical part.
Figure 4. Original and watermarked images using the proposed method for Lena, Barbara, Boat, Mandrill, Flintstones and Einstein. For each image, the top one is the original image, the bottom one is the watermarked image.
Figure 5. Histograms of the original image and the watermarked image. (a) Lena (b) Barbara (c) Boat (d) Mandrill (e) Flintstones (f) Einstein.
Figure 6. BER (%) of watermark extraction under AWGN attack for various noise variances. The results are averaged over six well-known images; 4096 bits have been embedded in each image for all methods.
Figure 7. BER (%) versus JPEG quality factor under JPEG compression attack. The results are averaged over six well-known images; 4096 bits have been embedded in each image for all methods.
Figure 8. BER (%) versus scaling factor under scaling attack. The results are averaged over six well-known images; 4096 bits have been embedded in each image for all methods. Note that for UQIM, AQIM and LQIM the BER is 0 when the scaling factor equals 1.0; these points are therefore not plotted.
Figure 9. BER (%) versus rotation angle under rotation attack. The results are averaged over six well-known images; 4096 bits have been embedded in each image for all methods.
Figure 10. Probability of error under AWGN attack with different noise variance.
Figure 11. Probability of error under JPEG compression attack with different quality factor.
Table 1. Performance evaluation results with block size of 32 × 32 .
| Image | PSNR (dB), watermark length 4096 | SSIM, watermark length 4096 | PSNR (dB), watermark length 8192 | SSIM, watermark length 8192 |
| Lena | 49.4738 | 0.9991 | 45.4725 | 0.9873 |
| Barbara | 50.1853 | 0.9986 | 46.1507 | 0.9882 |
| Boat | 49.9362 | 0.9994 | 45.7279 | 0.9891 |
| Mandrill | 49.0115 | 0.9978 | 46.5381 | 0.9876 |
| Flintstones | 49.8023 | 0.9979 | 46.6269 | 0.9931 |
| Einstein | 50.1209 | 0.9998 | 46.7546 | 0.9970 |
Table 2. BER ( % ) results of extracted watermark under common attacks.
| Image | Method | Noise var. 10 | Med. 3 × 3 | JPEG 30% | Scal. 0.75 | Rot. 10° |
| Lena | UQIM | 41.2045 | 15.6564 | 50.0913 | 24.4673 | 35.3694 |
| Lena | AQIM | 36.6534 | 13.2963 | 45.1478 | 7.5864 | 31.4782 |
| Lena | LQIM | 35.1021 | 11.5905 | 23.356 | 4.1789 | 27.0649 |
| Lena | [20] | 29.4536 | 3.9442 | 14.2872 | 2.0793 | 29.9434 |
| Lena | Proposed | 26.5327 | 3.0067 | 15.3684 | 1.9032 | 22.1481 |
| Barbara | UQIM | 40.7069 | 16.2378 | 49.7886 | 24.8723 | 37.0023 |
| Barbara | AQIM | 37.7734 | 13.8965 | 45.3522 | 8.5662 | 32.129 |
| Barbara | LQIM | 34.6308 | 10.9732 | 24.1125 | 4.3215 | 26.5674 |
| Barbara | [20] | 29.1096 | 2.6455 | 12.5789 | 2.3459 | 30.1073 |
| Barbara | Proposed | 25.4373 | 2.8529 | 14.0924 | 3.1145 | 25.0805 |
| Boat | UQIM | 42.0832 | 14.7317 | 50.3342 | 23.5439 | 36.5547 |
| Boat | AQIM | 36.1566 | 12.7064 | 46.0105 | 6.1148 | 32.1982 |
| Boat | LQIM | 34.8204 | 10.4285 | 24.3584 | 3.0934 | 29.0478 |
| Boat | [20] | 28.1759 | 2.4936 | 13.9037 | 1.0978 | 29.478 |
| Boat | Proposed | 26.9086 | 2.5298 | 14.8045 | 1.3086 | 23.8009 |
| Mandrill | UQIM | 41.7883 | 16.0907 | 49.2336 | 23.576 | 35.9014 |
| Mandrill | AQIM | 35.2824 | 13.214 | 44.7608 | 7.9004 | 31.5773 |
| Mandrill | LQIM | 33.1708 | 10.2786 | 23.8714 | 4.8232 | 24.1335 |
| Mandrill | [20] | 28.5065 | 2.9115 | 13.0139 | 3.1175 | 25.1052 |
| Mandrill | Proposed | 25.0421 | 1.8973 | 15.217 | 2.804 | 23.4468 |
| Flintstones | UQIM | 41.1378 | 15.4687 | 50.0127 | 23.757 | 36.8089 |
| Flintstones | AQIM | 37.4346 | 13.3658 | 45.6648 | 7.1624 | 32.3427 |
| Flintstones | LQIM | 34.0708 | 11.7741 | 23.787 | 3.3931 | 27.2319 |
| Flintstones | [20] | 30.0012 | 3.0539 | 14.1002 | 2.0085 | 28.1004 |
| Flintstones | Proposed | 26.2431 | 2.6963 | 16.9593 | 1.7016 | 24.4546 |
| Einstein | UQIM | 40.4682 | 14.9035 | 50.2443 | 23.9342 | 36.4002 |
| Einstein | AQIM | 36.1015 | 13.2006 | 44.9782 | 7.0743 | 31.6768 |
| Einstein | LQIM | 33.8854 | 10.6433 | 24.4105 | 3.5503 | 27.5649 |
| Einstein | [20] | 30.7238 | 3.214 | 13.5108 | 2.843 | 28.2404 |
| Einstein | Proposed | 27.1107 | 2.7436 | 15.8173 | 1.9061 | 25.2115 |
Table 3. Computational time of several watermarking methods with different image (unit: s).
| Image | UQIM | AQIM | LQIM | [20] | Proposed |
| Lena | 2.9095 | 3.5426 | 3.4029 | 3.0930 | 2.7687 |
| Barbara | 2.8317 | 3.4185 | 3.2598 | 3.2521 | 2.8124 |
| Boat | 2.7848 | 3.3021 | 3.3116 | 3.1942 | 2.9236 |
| Mandrill | 2.8560 | 3.1435 | 3.2079 | 2.8964 | 2.6411 |
| Flintstones | 2.9084 | 3.2028 | 3.3018 | 3.2026 | 2.8455 |
| Einstein | 2.8867 | 3.1329 | 3.2465 | 3.1708 | 2.6903 |
| Average | 2.8628 | 3.2904 | 3.2884 | 3.1348 | 2.7802 |
