QuatJND: A Robust Quaternion JND Model for Color Image Watermarking

Robust quantization watermarking with a perceptual just-noticeable-difference (JND) model has proved highly successful for image copyright protection. However, existing methods generally either process each color channel separately or apply a traditional monochromatic model to the vector formed by the three color channels, and therefore cannot fully exploit the high correlation among the RGB channels. In this paper, we propose QuatJND, a robust quaternion JND model for color image watermarking. In contrast to existing perceptual JND models, QuatJND integrates the quaternion representation domain and colorfulness simultaneously, and it incorporates a pattern-guided contrast masking effect in the quaternion domain. Furthermore, in order to utilize color information efficiently, we develop a robust quantization watermarking framework that exploits the color properties of the quaternion DCT coefficients in QuatJND, in which the quantization step of each quaternion DCT block is chosen optimally. Experimental results show that our method achieves good robustness together with better visual quality.


Introduction
The protection of digital images is one of the urgent security issues that needs to be solved nowadays, and digital image watermarking technology provides an effective solution. Digital image watermarking embeds watermark information into multimedia carriers without degrading their perceived quality while at the same time resisting common attacks. The technology must satisfy robustness, imperceptibility and watermark capacity requirements [1]. In past decades, digital image watermarking has been widely studied for grayscale images, whereas color images have received much less attention even though they constitute most of the displayed multimedia content. Color information is also viewed as a significant feature in many fields of image processing. If handled correctly, color information leads to more effective watermarking schemes, especially for achieving a good trade-off between imperceptibility and robustness [2]. Exploiting color information in digital image watermarking has therefore become a hot research topic.
At present, most color image watermarking algorithms extract the luminance information of color images or process only a single color channel, for example: (1) the color image is transformed from the RGB color space to the YCbCr (or YUV) color space, and the luminance component Y is selected to embed the watermark; (2) exploiting the insensitivity of the human visual system (HVS) to changes in the blue component, the watermark is embedded by modifying the blue component of the color image [3]; (3) the three color channels are processed separately, and watermark embedding is carried out on each of the three color components independently. Therefore, how to make better use of the correlation among the three channels of a color image is an issue that cannot be ignored.
In order to realize a better trade-off between robustness and invisibility, the watermark strength can be controlled by the JND, which is the maximum distortion not perceived by the HVS. The most well-known JND model was proposed by Watson et al. [4]; it consists of a sensitivity function and two masking components based on luminance and contrast masking. Lihong et al. [5] proposed robust algorithms that incorporate Watson's model to compute the quantization steps, and the perceptual model was shown to yield a significant improvement in robustness against common attacks. In the past few years, JND modeling has been a research focus because of its excellent performance in digital image analysis; examples include Kim's model [6], Zhang's model [7] and Wan's model [8]. Building on these developments, several JND-model-based watermarking algorithms have been proposed [9][10][11]. In addition, visual saliency (VS) has also been used to refine JND metrics. However, these existing JND models process each color channel separately or apply a traditional monochromatic model to the vector formed by the three color channels, and thus cannot fully exploit the high correlation among the RGB channels. To account for this, a quaternion perceptual JND model is needed.
Quaternions, which have been increasingly used in color image processing over the past two decades, offer a way to achieve this goal. They represent an image by encoding its three color channels in the imaginary parts of quaternion numbers. Compared with traditional color image processing techniques, the main advantage of such a representation is that a color image can be processed holistically as a vector field, exploiting the correlation among the three color components, a property that benefits color image watermarking as well [12].
Recently, many algorithms have been proposed for color image watermarking based on the Quaternion Discrete Fourier Transform (QDFT). Bas et al. [13] first proposed a non-blind color image watermarking algorithm in the QDFT domain using quantization index modulation, but the algorithm has a low peak signal-to-noise ratio (PSNR) and poor resistance to attacks. Ma et al. [14] proposed a watermarking scheme for color images based on local quaternion Fourier spectral analysis (LQFSA); they introduced an invariant feature transform (IFT) and a geometric correction scheme to enhance robustness against geometric attacks. Jiang et al. [15] pointed out that Bas et al. [13] did not consider that the real part of the quaternion matrices produced by the inverse QDFT should be equal to zero, a problem that can lead to a loss of watermark energy. They selected the real part of the QDFT coefficient matrices to insert the watermark and modified the real-part coefficients symmetrically. Based on this symmetric-distortion constraint, Chen et al. [16] provided a full 4-D quaternion discrete Fourier transform watermarking framework and showed the overall gain in imperceptibility, capacity and robustness it achieves compared with other quaternion-Fourier-transform-based algorithms.
Furthermore, other quaternion-based algorithms have been proposed, such as those using the Quaternion Singular Value Decomposition (QSVD). In [17], a blind color image watermarking algorithm based on QSVD is proposed, in which QSVD and rotation are employed for watermark embedding and extraction. Liu et al. [18] first performed QSVD to obtain the U matrix, and then inserted the watermark into optimally selected coefficients of the quaternion elements in the first column of U to enhance invisibility. Recently, because the Discrete Cosine Transform (DCT) is compatible with the JPEG image compression standard, watermarking algorithms in the QDCT domain have received considerable attention [19]. It is therefore meaningful to study how to introduce the Quaternion Discrete Cosine Transform (QDCT) into watermarking algorithms.
In this paper, a robust quaternion JND model for color image watermarking (QuatJND) is proposed, together with a novel and efficient robust quantization watermarking framework for color images that exploits the QuatJND model in the quaternion DCT domain. In our method, the watermark is embedded into the QDCT domain via spread transform dither modulation (STDM). First, colorfulness, computed in the QDCT domain, is introduced as a new impact factor of the QuatJND model. Then, the QuatJND model is used to derive the optimum quantization step for embedding.
In summary, our main contributions are as follows:
1) We propose a perceptual unit pure quaternion for the QDCT watermarking scheme, which improves the performance of the proposed scheme.
2) A quaternion perceptual JND model (QuatJND) is computed in the QDCT domain.
3) Color information and a pattern-guided contrast masking effect in the quaternion domain are incorporated into the QuatJND model.
4) A logarithmic STDM watermarking scheme incorporating the QuatJND model is proposed. The proposed scheme achieves better performance in terms of Peak Signal-to-Noise Ratio (PSNR) and Quaternion Structural Similarity Index (QSSIM).
The rest of this paper is organized as follows. Section 2 introduces the basic definitions, including quaternions and the QDCT of color images. Section 3 presents the QuatJND model used in the scheme and the colorfulness masking effect in the quaternion DCT domain, and then describes the proposed watermarking scheme based on QDCT combined with the QuatJND model. Experimental results and comparisons are provided in Section 4 to demonstrate the superior performance of the proposed scheme. Finally, conclusions are drawn in Section 5.

Quaternion DCT Definition
Quaternions were introduced by the mathematician Hamilton in 1843 [20]. For ease of reading, the main abbreviations and symbols used in this paper are listed in Table 1. A quaternion is an extension of the real and complex numbers; it has one real part and three imaginary parts:

q = a + bi + cj + dk,

where a, b, c, d ∈ R, and i, j, k are three imaginary units obeying

i^2 = j^2 = k^2 = ijk = −1, ij = −ji = k, jk = −kj = i, ki = −ik = j.

If the real part a = 0, q is called a pure quaternion. Pei et al. [21] first applied quaternions to color images and proposed the quaternion model of a color image, which treats the three color components R, G, B as the three imaginary parts of a quaternion. Let f(x, y) be an RGB image with the quaternion representation (QR); then each pixel can be represented as a pure quaternion

f(x, y) = f_R(x, y) i + f_G(x, y) j + f_B(x, y) k,

where f_R(x, y), f_G(x, y) and f_B(x, y) are the pixel values of the R, G and B color components at position (x, y), respectively. Because quaternion multiplication is non-commutative, the QDCT has two forms, a left-handed and a right-handed one [19]. Without loss of generality, only the left-handed QDCT is considered in this paper. For an M × N block it satisfies

C(p, s) = α(p) α(s) Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} µ · f(x, y) · cos[π(2x+1)p / (2M)] · cos[π(2y+1)s / (2N)],

where α(p) = √(1/M) for p = 0 and √(2/M) otherwise (α(s) is defined analogously with N), and µ is a unit pure quaternion satisfying µ^2 = −1. Corresponding to the QDCT, the inverse Quaternion Discrete Cosine Transform (IQDCT) of f(x, y) is defined analogously, with the coefficients left-multiplied by µ^{−1} = −µ.
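The quaternion algebra above is easy to verify in code. The following minimal sketch (pure Python, no dependencies) implements the Hamilton product and the pure-quaternion pixel representation; `pixel_to_quaternion` is a hypothetical helper name, not from the paper.

```python
def qmul(p, q):
    """Hamilton product of quaternions (a, b, c, d) meaning a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)

# The defining rules: i^2 = -1 and ij = k, ji = -k (non-commutative).
assert qmul(i, i) == (-1, 0, 0, 0)
assert qmul(i, j) == (0, 0, 0, 1)    # ij = k
assert qmul(j, i) == (0, 0, 0, -1)   # ji = -k

def pixel_to_quaternion(r, g, b):
    """Encode an RGB pixel as the pure quaternion r*i + g*j + b*k (real part 0)."""
    return (0, r, g, b)
```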
In order to reduce the complex computations and to make full use of the existing real-valued DCT codes, this subsection describes the relationship between QDCT and DCT. This relationship can provide not only an efficient computation approach for QDCT but also an approach to analyse the constraints for the watermark embedding.
Similarly, applying the IQDCT, we obtain the reconstructed image, where IDCT(·) denotes the conventional inverse discrete cosine transform. For a color image signal, it follows from Eq. (12) that the IQDCT result must be a pure quaternion matrix after some QDCT coefficients have been modified to insert the watermark. Otherwise, taking only the three imaginary parts of this quaternion matrix to obtain the watermarked image would discard non-null real-part data and result in a loss of watermark energy. Based on the relationships in Eq. (10) and Eq. (12), and depending on the unit pure quaternion used, one can identify the constraint that must be respected when modifying QDCT coefficients so as to avoid watermark energy loss: after the watermark embedding process, f should remain a pure quaternion, i.e. f_0 = 0, where 0 is the zero matrix. The real part of the IQDCT coefficient matrix can be obtained from Eq. (13) as Eq. (15). In order to respect the constraint of Eq. (14), we note from Eq. (15) that f_0 does not depend on the component C_0(p, s). So, if we modify C_0(p, s) to insert the watermark, the precondition f_0 = 0 remains satisfied.
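The relationship between the QDCT and per-channel real DCTs can be sketched concretely. Assuming the left-handed convention above with µ = ξi + ηj + γk, the real part C_0 of µ·(iR + jG + kB) reduces to −(ξR + ηG + γB), so C_0 is a weighted combination of the per-channel real DCTs; the overall sign may differ between conventions, and `qdct_real_part` is an illustrative helper, not the paper's code.

```python
import math

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix T, so that DCT2(X) = T X T^T."""
    T = []
    for p in range(n):
        alpha = math.sqrt(1.0 / n) if p == 0 else math.sqrt(2.0 / n)
        T.append([alpha * math.cos((2 * x + 1) * p * math.pi / (2 * n))
                  for x in range(n)])
    return T

def dct2(block):
    """2-D DCT-II of an n x n list-of-lists, written out with plain lists."""
    n = len(block)
    T = dct_matrix(n)
    tmp = [[sum(T[p][x] * block[x][y] for x in range(n)) for y in range(n)]
           for p in range(n)]                       # rows: T @ X
    return [[sum(tmp[p][y] * T[s][y] for y in range(n)) for s in range(n)]
            for p in range(n)]                      # cols: (T @ X) @ T^T

def qdct_real_part(r, g, b, mu=(0.4472, 0.8780, 0.1705)):
    """C_0 part of the left-handed QDCT as a weighted sum of channel DCTs."""
    xi, eta, gamma = mu
    Dr, Dg, Db = dct2(r), dct2(g), dct2(b)
    n = len(r)
    return [[-(xi * Dr[p][s] + eta * Dg[p][s] + gamma * Db[p][s])
             for s in range(n)] for p in range(n)]

# For a constant grey block only the Q-DC coefficient is non-zero.
const = [[128.0] * 8 for _ in range(8)]
C0 = qdct_real_part(const, const, const)
```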

Perceptual Unit Pure Quaternion
To avoid watermark energy loss, the real part C_0(p, s) after QDCT is selected to embed the watermark. It can be seen from Eq. (16) that, for different unit pure quaternions, the C_0(p, s) transform coefficients differ, and so do the corresponding coefficient-modification schemes for embedding the watermark. Hence, the choice of unit pure quaternion and its weights affects the performance of the watermarking algorithm.
DCT(f_R(x, y)), DCT(f_G(x, y)) and DCT(f_B(x, y)) are the conventional DCT matrices of the red, green and blue channels, respectively. Therefore, the C_0(p, s) part can be regarded as a weighted aggregate of the R, G and B color components. Although embedding the watermark into C_0(p, s) changes the distribution of its values, for the whole image in the spatial domain the differences spread across the three color components R, G and B. During the QDCT transformation, the unit pure quaternion µ = (i + j + k)/√3 causes the same amount of change in the three color components. However, since the human eye's sensitivity to R, G and B differs, this uniform change degrades the invisibility of the watermarking method. In order to improve the invisibility of the watermarking scheme, we propose the perceptual unit pure quaternion. While exploring the weights ξ, η and γ, we noted that Zhu et al. [22] pointed out that the RGB input signal can be converted into the YCbCr signal to remove the redundancies across the three color channels, with good experimental results. The luminance component Y can be represented by the R, G and B components with weights 0.299, 0.587 and 0.114, respectively. Moreover, some color image watermarking algorithms in YCbCr (or YUV) space [23,24] modify the luminance component to inject the watermark, and their experimental results show good invisibility.
Therefore, to obtain good imperceptibility of the watermarked image, the weights of the unit pure quaternion are set according to the relative luminance contributions of the R, G and B channels, 0.299, 0.587 and 0.114, respectively, subject to the constraint µ^2 = −1. With the weight ratio ξ* : η* : γ* = 0.299 : 0.587 : 0.114 substituted into Eq. (18), normalization to unit length gives ξ* = 0.4472, η* = 0.8780 and γ* = 0.1705, so the perceptual unit pure quaternion is µ = 0.4472i + 0.8780j + 0.1705k. The experimental results in Section 4.3.1 show that this perceptual unit pure quaternion yields better performance.
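The normalization above is a one-liner; the following sketch reproduces the reported weights from the luminance ratio 0.299 : 0.587 : 0.114.

```python
import math

# Normalize the luminance weights to obtain the perceptual unit pure
# quaternion mu = xi*i + eta*j + gamma*k with mu^2 = -1.
w = (0.299, 0.587, 0.114)
norm = math.sqrt(sum(x * x for x in w))
xi, eta, gamma = (x / norm for x in w)

# Matches the values reported in the text to four decimal places.
assert abs(xi - 0.4472) < 1e-4
assert abs(eta - 0.8780) < 1e-4
assert abs(gamma - 0.1705) < 1e-4
# mu is a unit pure quaternion: xi^2 + eta^2 + gamma^2 = 1.
assert abs(xi**2 + eta**2 + gamma**2 - 1.0) < 1e-12
```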

Proposed Quaternionic JND Model
For an image, a high-precision perceptual JND profile usually accounts for several perceptual effects, including the spatial contrast sensitivity function (CSF), the luminance adaptation (LA) effect and the contrast masking (CM) effect. For color images, color sensitivity must also be considered in a perceptual JND profile. The JND in the QDCT domain is typically expressed as the product of a base threshold and several modulation factors. In this paper, the real part C_0(p, s) after QDCT is selected to embed the watermark. To obtain the JND threshold of the modified coefficients in C_0(p, s), a novel contrast masking effect considering colorfulness is introduced in this section, where the parameter t is the index of a QDCT block and (m, n) is the position of a coefficient within the block. τ accounts for the summation effect of individual JND thresholds over a spatial neighborhood of the visual system and is set to 0.14. N is the dimension of the QDCT (8 in this case). J_q_base is the base CSF threshold, M_q_LA models the LA effect, M_q_CM models the CM effect [8,9,25,26], and M_q_COL is a factor reflecting colorfulness.

Spatial CSF in Quaternion Domain
J_q_base is the quaternion-domain JND value for the component C_0(p, s) generated by the spatial CSF on a uniform background image [6]; it is obtained by considering the oblique effect in the QDCT domain, where J_q_d(ω_{m,n}) and J_q_v(ω_{m,n}) are formulated from the QDCT coefficients. Here ω_{m,n} is the spatial frequency in cycles per degree (cpd) of the (m, n)-th QDCT coefficient, θ denotes the horizontal/vertical length of a pixel in degrees of visual angle, R_VH is the ratio of the viewing distance to the screen height, and H is the number of pixels in the screen height. ϕ_{m,n} stands for the direction angle of the corresponding QDCT component.

Luminance Adaptation in Quaternion Domain
A luminance adaptation factor M_q_LA that employs both the spatial frequency ω_{m,n} in cycles per degree (cpd) and the average intensity µ_la of the block can be formulated as in the equations below, where M_{0,1} and M_{0,9} are set empirically, ω_{m,n} is expressed as in Eq. (22), and the average intensity µ_la of the t-th block is obtained from C_0(0, 0), the QDCT coefficient at position (0, 0) of the t-th C_0 block, called the Q-DC (quaternion DC) coefficient. E_d denotes the maximum directional energy of the image block in Eq. (28), and C is a fixed constant approximately equal to E_d that ensures the invariance and stability of µ_la. Therefore, the proposed formula can resist a fixed-gain attack, since it varies linearly with amplitude changes.

Pattern Guided Contrast Masking in Quaternion Domain
M_q_CM is modeled to boost J_q_base according to the local spatial texture complexity (i.e., smoothness, edge or texture). In its formulation, g(ω_{m,n}) is modeled as a gamma-pdf-shaped function, and µ_cm represents the contrast masking effect of the t-th QDCT block. In this paper, both pattern complexity and luminance contrast are considered in constructing the contrast masking effect, and µ_cm is defined in terms of C_p and C_l, the pattern complexity and the luminance contrast of the t-th QDCT block, respectively. The pattern complexity measurement proposed by Wan et al. [9] is the ratio of the maximum directional energy to the DC coefficient of each 8 × 8 block; it measures energy in different directions while remaining insensitive to the changes caused by the watermarking process. However, that method ignores the relationship between the directional energy of a DCT block and that of its neighboring blocks. Therefore, we propose a new pattern complexity representation that combines the directional energy within a QDCT block with the directional energy of its neighboring QDCT blocks, which represents the complexity relationships of image patterns more effectively.
First, we choose a 3 × 3 neighborhood for each 8 × 8 QDCT block. If the direction of the maximum directional energy of a neighboring block is the same as that of the current QDCT block, the neighboring block is marked. The ratio of the number of marked neighboring blocks to the total number of neighboring blocks is taken as the pattern complexity C_p, represented by Eq. (33), where D_i in Eq. (34) indicates the correlation between the image block and its i-th neighbor, and n is the number of neighboring blocks of the t-th QDCT block.
where E_d is the maximum directional energy of the t-th QDCT block and E_{d,i} (i = 1, 2, …, n) are the maximum directional energies of its neighboring blocks.
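The neighborhood-based pattern complexity described above can be sketched directly: a neighbor is "marked" when its dominant energy direction coincides with that of the center block, and C_p is the fraction of marked neighbors. The direction labels here are illustrative inputs, not the paper's exact directional-energy computation.

```python
def pattern_complexity(center_dir, neighbor_dirs):
    """C_p = (# neighbors sharing the center block's dominant direction) / n."""
    if not neighbor_dirs:
        return 0.0
    marked = sum(1 for d in neighbor_dirs if d == center_dir)
    return marked / len(neighbor_dirs)

# A center block dominated by the horizontal direction, with 8 neighbors
# (3x3 neighborhood) of which 6 share that dominant direction:
cp = pattern_complexity("horizontal",
                        ["horizontal"] * 6 + ["vertical", "diagonal"])
assert abs(cp - 0.75) < 1e-12
```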
Since the pattern complexity of irregular regions in an image is stronger, the diminishing effect of C_p follows a non-linear transducer. The luminance contrast C_l can be obtained from the Q-AC coefficients C_0(0, 1), C_0(1, 0) and C_0(1, 1), where (·) denotes a normalization operation. Following a logarithmic form, the increasing effect of C_l can be represented as

µ(C_l) = ln(1 + 0.47 · C_l). (37)

Figure 1 shows µ_cm for three types of image blocks: smooth, edge and texture. The yellow image block is smooth, with µ_cm less than 0.15. The blue image block is an edge block, whose µ_cm lies between 0.15 and 0.2. An image block with µ_cm greater than 0.2 is a texture block, such as the green image block in Figure 1.
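The logarithmic term of Eq. (37) and the block classification thresholds quoted for Figure 1 (smooth < 0.15, edge in [0.15, 0.2], texture > 0.2) can be sketched as follows; the boundary handling at exactly 0.15 and 0.2 is an assumption, since the text only gives strict inequalities.

```python
import math

def luminance_contrast_boost(c_l):
    """Increasing effect of the luminance contrast C_l, Eq. (37)."""
    return math.log(1 + 0.47 * c_l)

def classify_block(mu_cm):
    """Map a contrast-masking value mu_cm to the block types of Figure 1."""
    if mu_cm < 0.15:
        return "smooth"
    if mu_cm <= 0.2:
        return "edge"
    return "texture"

assert classify_block(0.10) == "smooth"
assert classify_block(0.18) == "edge"
assert classify_block(0.25) == "texture"
assert abs(luminance_contrast_boost(0.0)) < 1e-12  # ln(1) = 0
```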

Colorfulness Masking in Quaternion Domain
In this part, we propose a new masking function that considers the colorfulness masking effect from the C_1, C_2 and C_3 parts. For color images, when human eyes observe different colors, the interaction between those colors interferes with color judgment. Colorfulness is the attribute of chrominance information that humans perceive. Hasler and Susstrunk [28] showed that colorfulness can be represented effectively by combinations of image statistics (variance and mean values). Panetta et al. [29] pointed out that, like the human visual system (HVS), human eyes capture color information in opponent color spaces such as red-green (R-G) and yellow-blue (Y-B). In short, colorfulness can be formulated using image statistics in opponent color spaces.
In this paper, we select the C_1, C_2 and C_3 parts after QDCT to calculate each image block's colorfulness. In the QDCT domain, the coefficients are first transformed into the opponent red-green and yellow-blue color spaces. Then, for an 8 × 8 QDCT block, the image colorfulness Q_c is defined in terms of σ²_{K1}, σ²_{K2}, µ_{K1} and µ_{K2}, the variances and mean values along the two opponent color axes, which can be expressed by the coefficients of the QDCT block. Figure 2 shows a comparison of colorfulness metrics. Figures 2(a) and (b) are from the TID2008 database [30]; their colorfulness values are 0.9462 and 0.4563, respectively. The results indicate that the colorfulness metric correlates well with human color perception. Inspired by this, a factor derived from colorfulness is used to make the JND a better match for human perception. The colorfulness masking factor M_q_COL is then defined accordingly, where (·) denotes a normalization operation.
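A spatial-domain sketch of the Hasler-Susstrunk-style statistic that Q_c builds on [28] is shown below: variances and means along the opponent red-green and yellow-blue axes. The paper computes these from QDCT coefficients; this version works directly on pixels for illustration only, and the 0.3 weight follows the original colorfulness metric rather than the paper.

```python
import math

def colorfulness(pixels):
    """Opponent-axis colorfulness of a block given as a list of (r, g, b)."""
    rg = [r - g for r, g, b in pixels]                 # red-green axis
    yb = [0.5 * (r + g) - b for r, g, b in pixels]     # yellow-blue axis
    n = len(pixels)
    mu_rg, mu_yb = sum(rg) / n, sum(yb) / n
    var_rg = sum((x - mu_rg) ** 2 for x in rg) / n
    var_yb = sum((x - mu_yb) ** 2 for x in yb) / n
    return (math.sqrt(var_rg + var_yb)
            + 0.3 * math.sqrt(mu_rg ** 2 + mu_yb ** 2))

# A grey block has zero opponent-color statistics, hence zero colorfulness;
# a saturated red/blue mix scores strictly higher.
grey = [(128, 128, 128)] * 64
vivid = [(255, 0, 0), (0, 0, 255)] * 32
assert colorfulness(grey) == 0.0
assert colorfulness(vivid) > colorfulness(grey)
```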

QuatJND-Based Watermarking
In this section, the flowchart of the proposed watermarking scheme based on the QuatJND model is briefly introduced.

Adaptive Quantization Step
In this paper, some of the QDCT coefficients are selected as the host vector X, and the maximum imperceptible change along a random direction v can be given as X^T v. To ensure independence between the quantization compensation and the original signal in the watermarking process, the host vector is first transformed into the logarithmic domain.
where v is the random projection vector, E_d is the maximum directional energy of the image block in Eq. (28), used to resist linear (gain) variations, and z serves as a secret key.
With this arrangement, the transformed vector Y is quantized into Y_w according to the watermark bit, where d_m is the dither signal corresponding to the message bit w, and the proposed JND model is used as the slack S to calculate the adaptive quantization step ∆. Thus, when the image is scaled by a fixed gain, the watermarked coefficients and the estimated quantization step ∆ remain stable.
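The dither-modulation quantizer and the minimum-distance detector at the heart of STDM schemes like the one above can be sketched as follows. This is a simplified scalar version: the paper additionally projects onto the random vector v, works in the logarithmic domain, and derives ∆ from the QuatJND slack, all of which are omitted here for clarity.

```python
def stdm_embed(y, bit, delta):
    """Quantize the projected value y onto the lattice coset of `bit`."""
    d = (delta / 2.0) * bit        # dither d_m: 0 for bit 0, delta/2 for bit 1
    return delta * round((y - d) / delta) + d

def stdm_detect(y_received, delta):
    """Minimum-distance detector: pick the coset whose lattice point is closest."""
    dists = []
    for bit in (0, 1):
        d = (delta / 2.0) * bit
        q = delta * round((y_received - d) / delta) + d
        dists.append(abs(y_received - q))
    return 0 if dists[0] <= dists[1] else 1

delta = 4.0
yw = stdm_embed(10.3, 1, delta)
assert stdm_detect(yw, delta) == 1
# Detection survives additive noise smaller than delta/4:
assert stdm_detect(yw + 0.9, delta) == 1
assert stdm_detect(stdm_embed(10.3, 0, delta), delta) == 0
```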

Watermark Embedding Procedure
The proposed watermarking scheme consists of two parts: the embedding procedure and the extraction procedure. Figure 3 illustrates the embedding steps, taking the Lena image as an example. The watermark embedding procedure is as follows.
Step 1: The original image is first divided into non-overlapping blocks of size 8 × 8, and each block is converted to the quaternion representation by Eq. (4).
Step 2: Apply the QDCT with the perceptual unit pure quaternion µ to each block; the QDCT spectrum coefficients are obtained by Eq. (11).
Step 5: The final QuatJND value of each block, combined with colorfulness masking, is determined by Eq. (19). The QuatJND value serves as the perceptual redundancy vector S.
Step 6: The C_0 coefficients from the fourth to the tenth, except the fifth, in zigzag-scan order are selected to form the host vector X. The host vector X and the perceptual redundancy vector S are used to obtain the transformed vector Y and the adaptive quantization step ∆.
Step 7: One bit of the watermark message w, after an Arnold transformation, is embedded into the transformed vector Y as follows. Step 8: Transform the modulated coefficients Y_w back to obtain the watermarked coefficients X_w.
Step 9: Finally, the inverse QDCT on each block is performed, and then the watermarked image is obtained.
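The host-vector selection in Step 6 can be sketched as below: zigzag-scan the 8 × 8 C_0 block and keep the 4th through 10th coefficients, skipping the 5th (1-based positions, matching the text). The zigzag convention used here is the common JPEG one; the paper does not specify its exact scan direction, so treat this as one plausible choice.

```python
def zigzag_indices(n=8):
    """(row, col) pairs of an n x n block in JPEG-style zigzag order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))

def host_vector(block):
    """Keep zigzag positions 4..10 except the 5th (1-based), per Step 6."""
    idx = zigzag_indices(len(block))
    keep = [4, 6, 7, 8, 9, 10]
    return [block[r][c] for (r, c) in (idx[p - 1] for p in keep)]

# Example on a block whose entry at (r, c) is r*8 + c:
block = [[r * 8 + c for c in range(8)] for r in range(8)]
x = host_vector(block)
assert len(x) == 6
assert x == [16, 2, 3, 10, 17, 24]
```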

Watermark Extracting Procedure
The extraction algorithm is the inverse of the embedding procedure. Figure 4 illustrates the extraction steps of the watermarking scheme. The watermark extraction procedure is as follows.
Step 1: The watermarked image is first divided into non-overlapping blocks of size 8 × 8, and each block is converted to the quaternion representation by Eq. (4).
Step 2: Apply the QDCT with the perceptual unit pure quaternion µ to each block; the QDCT spectrum coefficients are obtained by Eq. (11).
Step 5: The final QuatJND value of each block, combined with colorfulness masking, is determined by Eq. (19). The QuatJND value serves as the perceptual redundancy vector S′.
Step 6: The C_0 coefficients from the fourth to the tenth, except the fifth, in zigzag-scan order are selected to form the host vector X′. The host vector X′ and the perceptual redundancy vector S′ are used to obtain the transformed vector Y′ and the adaptive quantization step ∆′.
Step 7: The watermark bit is detected by the minimum-distance detector as follows. Step 8: The final watermark image is obtained by the inverse Arnold transform.

Experimental Results and Comparisons
In this section, we show and discuss the experimental results. To evaluate the effectiveness and robustness of the proposed scheme, experiments were performed using original code in MATLAB (MathWorks, Natick, USA) R2019a on a 64-bit Windows 10 system with 16 GB memory and a 3.40 GHz Intel(R) Core(TM) i7-6700 CPU (Intel, Santa Clara, USA).

Performance Metrics
In the experiments, two objective criteria, the Peak Signal-to-Noise Ratio (PSNR) and the Quaternion Structural Similarity Index (QSSIM), are used to measure fidelity. The Bit Error Rate (BER) is computed to evaluate the robustness of the algorithms.
(1) Peak Signal-to-Noise Ratio (PSNR). PSNR provides an objective standard for measuring image distortion or noise level. In this experiment, PSNR is used to evaluate the quality of the watermarked image relative to the original image, i.e. the invisibility of the embedded watermark; it is expressed in dB (decibels). The larger the PSNR between the two images, the better the invisibility of the watermarking scheme. For a host color image I of size M × N and its watermarked version I′, the PSNR is defined as

PSNR = 10 log_10 (255² / MSE), MSE = (1 / (3MN)) Σ_{x,y,c} (I_c(x, y) − I′_c(x, y))²,

where c ranges over the three color channels.
(2) Quaternion Structural Similarity Index (QSSIM). Kolaman et al. [31] developed a visual quality metric better suited to evaluating color images, named quaternion SSIM (QSSIM). The QSSIM value lies in [0, 1]; the closer it is to 1, the better the image's visual quality. QSSIM is defined by Eq. (49), which has the same form as SSIM but with quaternion subparts.
where qI and qI′ are the quaternion representations (QR) of image I and its watermarked version I′, respectively; µ_qI and µ_qI′ are their means; σ²_qI and σ²_qI′ are their variances; and σ_{qI,qI′} is their covariance.
(3) Bit Error Rate (BER). The BER is used to evaluate the quality of the extracted binary watermark w′ compared with its original version w, both of M_w × N_w pixels. The BER between w and w′ is the number of erroneous bits divided by the total number of bits M_w × N_w.
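The two scalar metrics above can be sketched directly; `psnr` treats the image as a flat list of per-channel values, which matches the 3MN averaging in the PSNR definition.

```python
import math

def psnr(original, watermarked, peak=255.0):
    """PSNR in dB; inputs are flat lists of per-channel pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(original, watermarked)) / len(original)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak * peak / mse)

def ber(w, w_extracted):
    """Fraction of mismatching bits between two binary watermarks."""
    errors = sum(1 for a, b in zip(w, w_extracted) if a != b)
    return errors / len(w)

assert ber([0, 1, 1, 0], [0, 1, 0, 0]) == 0.25
# A uniform error of 1 grey level gives MSE = 1, i.e. PSNR = 10*log10(255^2):
p = psnr([10, 20, 30], [11, 21, 31])
assert abs(p - 10 * math.log10(255 ** 2)) < 1e-9
```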

Imperceptibility
To verify the performance of the proposed color image watermarking algorithm, 109 color images available from the Computer Vision Group at the University of Granada (http://decsai.ugr.es/cvg/dbimagenes/, accessed on 21 September 2020) were considered. A binary watermark "SDNU" of 4096 bits (64 × 64) is embedded into the original cover images, as shown in Figure 5. Eight standard images, 'Lena', 'Avion', 'Baboon', 'House', 'Athens', 'Sailboat', 'Butrfly' and 'Goldgate', each of size 512 × 512, were used as testing images, as shown in Figure 6. To evaluate the invisibility of the embedded watermark, the watermark in Figure 5 was embedded into the host images in Figure 6(a)-(h), respectively, and the proposed scheme was compared with popular watermarking schemes: QDFT [16], QSVD [32], CIW-OCM by Wang et al. [10] (color image watermarking based on orientation diversity and color complexity), RIW-SJM by Wang et al. [11] (robust image watermarking via a perceptual structural regularity-based JND model), and Su [33]. First of all, a good watermarking scheme must show satisfactory invisibility. Figure 7 gives the visual quality scores of the watermarked images. The tested images in Figure 7 are first constrained to the same PSNR of 42 dB, which we ensure by adjusting the embedding strength factor, and the QSSIM values are then compared; the higher the QSSIM value, the more completely the details and structure of the image are preserved. The average QSSIM values of the compared algorithms are 0.9850, 0.9886, 0.9794, 0.9814 and 0.9864, respectively, while that of the proposed scheme is 0.9810. Although the proposed scheme is not the best among the compared schemes, its QSSIM values are close to the others on average: with the same PSNR guaranteed, the QSSIM of our scheme is comparable to that of the other schemes.
This is because, in order to balance imperceptibility and robustness, our scheme satisfies imperceptibility while calculating the perceptual redundancy of the image more accurately, which permits larger modifications of the image. Thus the algorithm in this paper obtains better robustness while satisfying imperceptibility, as the robustness tests in Section 4.3 below also demonstrate. (Figure 7 caption: QDFT [16], QSVD [32], CIW-OCM [10], Su [33], RIW-SJM [11], with PSNR = 42 dB.)
To show that the proposed image watermarking scheme produces a high watermark quality and that the watermark can be extracted correctly in the absence of attacks, the test images were watermarked at a uniform fidelity of PSNR = 42 dB, and the bit error rate (BER) was computed for objective performance evaluation. Figure 8 shows the cover images, watermarked images and extracted watermarks. It is noticeable that the proposed method provides good visual quality of the extracted watermark image.

Evaluation of Different Unit Pure Quaternions
In order to show that the perceptual unit pure quaternion of Section 3.1 produces a better watermark quality, we compare the robustness results with different unit pure quaternions: µ1 = (−2j + 8k)/√68 [13], µ2 = (j − k)/√2 [34], and µ3 = (i + j + k)/√3 [20]. It should be noted that µ3 is the most common unit pure quaternion in the quaternion-based image processing literature. Table 2 shows the performance for different µ. From the results, the perceptual unit quaternion has a lower BER under JPEG compression, which reflects an advantage of the QDCT transform itself: its compatibility with the JPEG compression standard. Although the perceptual unit quaternion is not the best under Gaussian noise and filtering, it still achieves a low BER and shows good robustness. Overall, the perceptual unit pure quaternion µ performs better against common signal attacks, especially JPEG attacks.

Evaluation of Different JND Models within QDCT Watermarking Algorithm
This experiment compares the performance of different JND models used within the proposed QDCT watermarking algorithm. To verify the robustness of the proposed QuatJND-guided watermarking scheme, it is compared with schemes using different JND models: Watson's model [4], Kim's model [6] and Zhang's model [7].
In this experiment, we recomputed the features of Watson's, Kim's and Zhang's models in the quaternion DCT domain. For example, in Kim's model we used the C0 coefficients to calculate the base threshold, luminance adaptation, and contrast masking in the quaternion domain. The test images are first watermarked and constrained to the same PSNR of 42 dB, and the average BER values are compared. As shown in Table 3, the proposed model always has the lowest BER across the different noise intensities, indicating that it performs much better than the others. For JPEG compression, the four JND-guided watermarking algorithms behave differently: at JPEG quality 30, the average BER of Watson's, Kim's, Zhang's and the QuatJND model is 0.0828, 0.1144, 0.0775, and 0.0331, respectively, and from Figure 9 (c) the extracted watermark can still be clearly identified. Under the Median and Gaussian filtering attacks, the BER of the proposed model with a (3,3) median filter is 4.5% higher than Kim's model, but as Figure 10 (b) shows, the extracted watermark can still be correctly recognized. In summary, the proposed QuatJND model performs excellently in the quaternion domain.

Evaluation of Watermarking Algorithms in Different Domains
This experiment compares the performance of different watermarking algorithms in the DCT domain and the spatial domain. To verify the effectiveness of the quaternion DCT and the advantage of the quaternion representation, the proposed scheme is compared with CIW-OCM [10], RIW-SJM [11] and Su [33].
(1) Under common attacks
During image transmission, the watermarked image is easily and inevitably degraded by common attacks such as Gaussian noise, Salt and Pepper noise, JPEG compression and Amplitude scaling. Table 4 lists the average robustness results for the eight test images under the different watermarking schemes and various attacks: Gaussian noise with zero mean and variances 0.0003, 0.0008 and 0.0012; Salt and Pepper noise with densities 0.004, 0.008 and 0.015; JPEG compression with quality factors 30, 50 and 80; Amplitude scaling with factors 0.3, 1.2 and 1.5.
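Three of the four attacks above can be reproduced with a few lines of numpy; the sketch below is a generic illustration of such an attack battery (parameter conventions are assumptions, and JPEG compression is omitted because it requires an image codec such as Pillow):

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_noise(img, variance):
    """Add zero-mean Gaussian noise; variance is on a [0, 1] intensity scale."""
    noisy = img / 255.0 + rng.normal(0.0, np.sqrt(variance), img.shape)
    return np.clip(noisy * 255.0, 0, 255).astype(np.uint8)

def salt_and_pepper(img, density):
    """Set a `density` fraction of pixels to 0 or 255 with equal probability."""
    out = img.copy()
    mask = rng.random(img.shape) < density
    out[mask] = rng.choice([0, 255], size=int(mask.sum()))
    return out

def amplitude_scaling(img, factor):
    """Multiply all pixel values by `factor`, clipping to the valid range."""
    return np.clip(img.astype(np.float64) * factor, 0, 255).astype(np.uint8)
```

The robustness protocol then applies each attack at the listed parameter settings to the watermarked image and measures the BER of the extracted watermark.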
First, as Table 4 shows, the proposed scheme achieves the minimum bit error rate after the Gaussian noise and Salt and Pepper noise attacks, with an average BER at least 0.3% lower than CIW-OCM [10]. As the Salt and Pepper noise density increases, however, Su [33] shows a lower BER than ours at density 0.015. For the traditional JPEG compression attacks, our scheme gives similar results when the JPEG quality is greater than 50, which are 0.1%-0.4% lower than CIW-OCM [10]; on average, the proposed model has the best performance against JPEG compression. Finally, when the watermarked image is distorted by the Amplitude scaling attack, the proposed model is not the best performer, but its results are close to the other schemes on average. From Figure 9 (d), the extracted watermark can be clearly identified when the Amplitude scaling factor is 1.5, which satisfies the robustness requirement against Amplitude scaling attacks.

(2) Under filtering attacks
Filtering attacks such as Median filtering and Gaussian filtering are commonly used against watermarked images and can destroy the visual perception of the extracted watermark, so the resistance of the watermarking model to filtering must be considered. Table 5 and Figure 10 (a) and (b) present the filtering comparison results. For Median filtering with a (3,3) window, the BER of the proposed model is 1% lower than RIW-SJM [11]. For Gaussian filtering, the proposed model has the lowest BER of all the models, which ensures that the extracted watermark image remains highly recognizable.

(3) Under cropping attacks
In practice, the watermarked image may also be contaminated by other attacks such as Cropping and geometric attacks.
Here, image rotation is considered as a geometric attack, which changes both the image pixel values and the image size. We first compare the robustness results after the Cropping attacks in Table 6. As shown in Table 6, the proposed model obtains the lowest BER among all the algorithms, which means that the proposed method provides good visual quality for the extracted watermark image after different types of cropping attacks.

(4) Under rotation attacks
To verify that the proposed image watermarking scheme is robust to geometric attacks, we test its robustness after image Rotation. In this experiment, a forward rotation transformation is first applied to the watermarked image, and the result is then corrected by the inverse rotation transformation. More precisely, the watermarked image is rotated clockwise by 30, 60, 90 and 120 degrees, and then rotated counter-clockwise by the same angle before extracting the watermark.
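The rotate-then-inverse-rotate protocol can be sketched for the lossless case of 90-degree multiples, where `np.rot90` introduces no interpolation (for arbitrary angles such as 30 or 60 degrees, an interpolating routine like `scipy.ndimage.rotate` is needed and the round trip is lossy; the function names below are illustrative, not the paper's):

```python
import numpy as np

def rotation_attack_90(img, quarter_turns):
    """Forward rotation attack: rotate by multiples of 90 degrees (lossless)."""
    return np.rot90(img, k=quarter_turns)

def rotation_correction_90(img, quarter_turns):
    """Inverse rotation applied before watermark extraction."""
    return np.rot90(img, k=-quarter_turns)
```

After correction, the watermark is extracted from the recovered image and its BER against the embedded bits is reported.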
The robustness against image rotation is listed in Table 7 and Figure 10 (c) and (d). The results show that our method has the lowest BER among all the methods: for rotation angles of 30, 60, 90 and 120 degrees, the BER obtained by our method does not exceed 0.2%, which demonstrates significant robustness to image rotation.

Evaluation of Different Quaternion Watermarking Schemes
This experiment compares the performance of different quaternion watermarking algorithms. To verify the robustness of the proposed scheme in the quaternion DCT domain, it is compared with QDFT [16] and QSVD [32]. Table 8 shows the BER values of watermarked images attacked by Gaussian Noise (GN), JPEG compression, Salt and Pepper noise (SPN), Median filtering (MF), Gaussian filtering (GF) and Amplitude Scaling (AS). As the robustness results show, both QSVD [32] and QDFT [16] perform worse than the proposed scheme under the traditional JPEG compression attack; the likely reason is that the QDCT domain used by the proposed scheme enhances its resistance to JPEG attacks. When the watermarked image is distorted by the Amplitude Scaling attack, the proposed scheme performs better than all the other schemes except QSVD [32]. QSVD [32] inserts the watermark by modulating the coefficients f1,1 and f2,1 of the quaternion elements in the U matrix, and Amplitude Scaling has a minimal effect on the relative relationship between f1,1 and f2,1, so QSVD [32] shows superior performance under this attack. Table 9 compares the average BER values of our scheme and the other methods for different image attacks at a fixed image quality, QSSIM = 0.9820. Although QDFT [16] is more robust to Gaussian noise and Salt and Pepper noise, it performs poorly under JPEG compression. For JPEG compression, Table 9 shows that our method has a lower BER than the other watermarking algorithms when the JPEG quality factor is 30 or 50. In addition, the robustness of our method is clearly better than the others under combined attacks in which JPEG compression is applied first, followed by Gaussian noise or Salt and Pepper noise.
Overall, the watermarking framework based on the QuatJND model in the QDCT domain has better robustness than the other methods in most cases. Tables 3 to 9 list the robustness after single image attacks. However, in the actual digital signal transmission process the watermarked image may be degraded by multiple attacks simultaneously, so we further compare the robustness results after various combined attacks in Figure 12 and Figure 13. Figure 12 shows the BER after JPEG compression followed by Gaussian noise, Salt and Pepper noise, Gaussian filtering and Median filtering attacks. Figure 13 shows the BER after Gaussian noise followed by Amplitude Scaling, Cropping and image Rotation. From the results in Figure 12 and Figure 13, the human eye can still recognize the extracted watermark information after the different combined attacks.
In summary, our method shows good robustness after combined image attacks, which means that it can achieve good image copyright protection in practical applications.
On the whole, the existing quaternion watermarking algorithms QSVD [32] and QDFT [16], the DCT-domain watermarking algorithms CIW-OCM [10] and RIW-SJM [11], and Su [33], an improved watermarking algorithm based on the Schur decomposition, all show good invisibility for the watermarked images in the results of Figure 7, but they have poorer robustness under some attacks and cannot achieve a good trade-off between invisibility and robustness. Although CIW-OCM [10] and RIW-SJM [11] achieve a good trade-off by using JND models, they neglect the correlation among the three color components. The proposed model exploits the correlation among the three color channels and uses the QuatJND model to obtain the optimum quantization step, and the results show that our scheme is more robust than the others.

Conclusions
In this paper, we proposed a robust quaternion JND model for color image watermarking (QuatJND). We first obtain the quaternion DCT coefficients with the perceptual unit pure quaternion, then compute the QuatJND model from these coefficients, taking the color information into account. A logarithmic STDM scheme is further proposed based on QuatJND. Our scheme is evaluated under different types of attacks, including Gaussian noise, JPEG compression, Gaussian filtering, Median filtering, and geometric attacks such as image rotation and cropping, and robustness results are also provided under combined attacks. Experimental results show that our scheme is more robust than existing techniques. Color is a very important component of images, and the color information can be further exploited to enhance the accuracy of the QuatJND model; for example, the cross-masking effect between the luminance and color components can be analyzed to enhance the imperceptibility of watermarked images in future research. Meanwhile, deep learning methods that extract image features more effectively could be used to build a more accurate JND model and further improve robustness under various attacks.

Conflicts of Interest:
The authors declare no conflict of interest.