Article

Robust Watermarking Algorithm Based on QGT and Neighborhood Coefficient Statistical Features

1 School of Computer Science and Engineering, Hunan University of Science and Technology, Xiangtan 411201, China
2 Software Engineering Institute, Hunan Software Vocational and Technical University, Xiangtan 411100, China
* Author to whom correspondence should be addressed.
Electronics 2025, 14(22), 4494; https://doi.org/10.3390/electronics14224494
Submission received: 11 October 2025 / Revised: 12 November 2025 / Accepted: 13 November 2025 / Published: 18 November 2025

Abstract

The exponential advancement of Internet of Things and artificial intelligence technologies has significantly accelerated digital content generation and dissemination, intensifying challenges in copyright protection, identity theft, and privacy breaches. Traditional digital watermarking techniques, constrained by vulnerabilities to geometric attacks and perceptual distortions, fail to meet the demands of modern complex application scenarios. To address these limitations, this paper proposes a robust watermarking algorithm based on the quaternion Gyrator transform and neighborhood coefficient statistical features, designed to enhance copyright protection efficacy. The methodology involves three key innovations: (1) the host image is partitioned into non-overlapping sub-blocks, with an inhomogeneity metric calculated from local texture and edge characteristics to optimize the embedding sequence; (2) the quaternion Gyrator transform is applied to each sub-block, with the real component of the transformed coefficients used as the feature carrier, harnessing the geometric invariance of quaternion transformations to mitigate distortions induced by rotational attacks; (3) an Improved Uniform Log-Polar Mapping algorithm is integrated to embed synchronization markers, reinforcing resistance to geometric attacks by preserving structural consistency under affine transformations. Prior to embedding, dynamic statistical analysis of neighborhood coefficients adjusts the watermark intensity, ensuring compatibility with the masking properties of the human visual system. Experimental results demonstrate dual advantages: the proposed method attains a PSNR of 41.4921 dB, showing good invisibility, while the average NC value remains at around 0.9, demonstrating good robustness. The effectiveness and practicability of the algorithm in complex attack environments are thereby verified.

1. Introduction

Within the theoretical framework of the information society, the deep integration of the internet and mobile communication technologies has propelled the comprehensive digitization of information dissemination and transmission. Empirical studies indicate that digital images, as quintessential multi-dimensional data carriers, now achieve three-dimensional spatial expansion in transmission channels, enhancing information representation efficiency by 42 percent compared with traditional models while maintaining a data fidelity rate of 98.2 percent [1]. This technological empowerment, analyzed through Lessig’s digital rights management framework, manifests as a double-edged sword: it democratizes information accessibility by eliminating geographical and temporal constraints, yet simultaneously introduces systemic security risks [2]. Such duality underscores the urgent need for adaptive regulatory frameworks to reconcile innovation incentives with intellectual property safeguards in hyperconnected ecosystems [3].
To mitigate the malicious exploitation of digital images and safeguard their intellectual property rights, digital watermarking technology has been established as an effective copyright protection mechanism. By embedding imperceptible identifiers into digital content through information embedding principles, this technology enables robust traceability authentication for unauthorized usage [4,5]. Empirical studies demonstrate that such imperceptible markers achieve an 87.6 percent success rate in copyright tracking across diverse application scenarios. Digital watermarks can be broadly categorized into spatial-domain watermarks and frequency-domain watermarks. Spatial-domain approaches, while computationally simpler, exhibit inferior robustness compared with frequency-domain methods [6,7]. For instance, Deep Shikha Chopra et al. proposed a spatial-domain technique where the least significant bits (LSBs) of pixel values are directly replaced with watermark data. Another widely adopted spatial-domain method is Local Binary Pattern (LBP) [8,9]. However, such methods exhibit critical vulnerabilities in maintaining watermark integrity under adversarial conditions.
Frequency-domain watermarking demonstrates superior attack resistance and imperceptibility compared with spatial-domain approaches [10], as its energy is stochastically distributed across spectral components and tightly integrated with image texture features, making precise localization and removal by adversaries highly challenging [11]. The human visual system's lower sensitivity to spectral noise further ensures covert watermark preservation. For instance, Bao et al. [12] embedded watermarks by transforming the U component of the YUV color space into the Radon domain, performing 2D-DCT on selected blocks, and modifying fixed mid-frequency coefficients. Wang et al. [13] enhanced imperceptibility through periodic watermark encryption and sub-block mapping mechanisms. Kumari et al. [14] optimized embedding positions via DWT-SVD decomposition combined with a tunicate swarm optimization algorithm (TSA), significantly improving robustness against geometric distortions. Woo et al. [15] mitigated rounding errors in DWT using intelligent optimization techniques, achieving higher imperceptibility. While Zhang et al. [16] leveraged the uniqueness of the DFT DC component for blind extraction in the spatial domain, DFT-based methods remain vulnerable to coordinate-level attacks in vector map applications due to localized sensitivity. To address this, Qu et al. [17] proposed a hybrid DFT-SVD framework that embeds watermarks into invariant feature sequences derived from geometric transformations, thereby resisting affine attacks. These hybrid strategies underscore the necessity of integrating multiple frequency-domain transformations to balance robustness and adaptability in complex application scenarios [18].
Frequency-domain algorithms typically process grayscale images directly. For color images, conventional approaches decompose them into three separate channels (e.g., R, G, B) for independent watermark embedding, which inevitably disrupts inter-channel correlations and compromises color integrity [19]. To preserve these intrinsic relationships, quaternion-based methodologies emerge as an optimal solution [20]. By representing color images as pure quaternion matrices, where the three imaginary components encode the red, green, and blue channels, quaternion frameworks enable holistic processing of color data, preserving both chromatic coherence and spatial-textural interdependencies [21]. This unified representation not only retains color fidelity but also enhances watermark robustness by leveraging the algebraic properties of quaternions, such as rotational invariance and multi-channel synchronization [22], which are critical for resisting geometric distortions and channel-specific attacks. Bas et al. [23] proposed a method integrating the quaternion Fourier transform (QFT) with quantization index modulation, optimizing embedding parameters through performance analysis across diverse color image filtering processes while leveraging quaternion algebra to preserve holistic chromatic features. Wang et al. [24] developed a quaternion polar harmonic transform (QPHT)-based framework that extracts stable structural feature points from color images, generating geometrically invariant regions for watermark embedding in frequency-domain coefficients correlated with color components, thereby maintaining chromatic integrity under affine transformations. Yan et al. [25] introduced a quaternion-based image hashing technique that eliminates geometric distortion effects through quaternion spectral transformations, exhibiting exceptional resilience against common signal processing attacks. Chen et al. [26] designed a QSVD-based algorithm incorporating key-dependent coefficient selection and cross-correlation optimization, achieving high-fidelity real-time embedding with minimal perceptual distortion. Zhang et al. [27] implemented quaternion Householder transformations for color image watermarking, demonstrating robustness against compression and noise injection. Gong et al. [28] enhanced geometric attack resistance by combining quaternion fractional-order orthogonal Fourier-Mellin moments (QFrOOFMM) with least squares support vector regression (LS-SVR) for distortion parameter estimation, enabling precise watermark recovery. Ouyang et al. [29] further integrated improved uniform log-polar mapping (IULPM) with the quaternion discrete Fourier transform (QDFT), significantly boosting resistance to scaling and rotation attacks.
This study proposes a robust watermarking algorithm based on QGT and neighborhood coefficient statistical features, addressing existing challenges by reducing computational complexity while enhancing security through visual capacity value optimization and watermark scrambling. The primary contributions are as follows:
  • A watermark embedding framework integrating the QGT: neighborhood coefficient statistics determine the final watermark value, the authentication watermark is embedded in the real component of the quaternion coefficients, and the synchronization watermark is then embedded with the IULPM method to complete the overall embedding. This design strengthens security and resistance to geometric attacks; meanwhile, the DQ-based embedding order not only maintains visual fidelity but also enhances the security of the watermark.
  • An adaptive embedding strategy prioritizing sub-blocks with higher visual capacity values to optimize imperceptibility, coupled with a support vector machine (SVM)-based extraction mechanism. The SVM models nonlinear relationships between watermark values, spatial positions, and neighborhood coefficients, enabling accurate watermark recovery under adversarial conditions. Experimental validation confirms superior resistance to compression, noise, and geometric distortions compared with conventional spatial-domain and frequency-domain methods.
The rest of the paper is organized as follows. Section 2 briefly summarizes the approach used in the suggested scenario; Section 3 gives a detailed description of the suggested image watermarking solution; Section 4 introduces the empirical results, and the performance of this paper’s solution is presented in comparison with the state-of-the-art solutions reported in publications. Section 5 summarizes the paper and suggests some future directions for development.

2. Background

In this section, some related techniques are introduced, including the concepts and related work on Arnold scrambling, Gyrator transform, quaternions and their related theories, inhomogeneity of blocks, etc.

2.1. Scrambling of Image Watermarking

Digital watermarking technology is usually inseparable from encryption technology [30]. Generally, in digital watermarking, some information is encrypted to ensure the security of the watermark information. Under normal circumstances, the encryption of watermarks only requires simple operations. However, if the watermark information is relatively complex, more appropriate encryption techniques will be needed. For example, Gao et al. [31] used a combination of chaotic mapping and neural networks to encrypt color images, which not only has excellent security but also reduces the time complexity. Since the watermark information in this paper is only a binary image and does not require complex operations, we only used the Arnold transform as the encryption operation of the watermark.
The Arnold scrambling transformation, also known as cat mapping, is widely adopted as a pre-processing technique in information hiding [32] to enhance watermark security by converting meaningful watermark images into noise-like patterns prior to embedding. This method scrambles pixel positions through iterative coordinate transformations governed by the formula
$$\begin{bmatrix} u' \\ v' \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} \bmod P$$
where $u, v, u', v' \in \{0, 1, \ldots, P-1\}$, $P$ is the side length of the binary watermark image to be processed, $(u, v)^{T}$ is a two-row, one-column vector giving the pixel position of the watermark image before scrambling, and $(u', v')^{T}$ is the pixel position after scrambling.
The Arnold scrambling transformation relocates pixel coordinates from $(u, v)$ to $(u', v')$ in digital watermarking, achieving information concealment through iterative position permutations. The number of scrambling iterations $K$ serves as a private key that enhances cryptographic security by obfuscating spatial correlations. For watermark recovery, the inverse Arnold transformation is defined as
$$\begin{bmatrix} u \\ v \end{bmatrix} = \left( \begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix} \begin{bmatrix} u' \\ v' \end{bmatrix} + \begin{bmatrix} P \\ P \end{bmatrix} \right) \bmod P$$
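To make the scrambling concrete, the forward map and its inverse can be sketched in pure Python (an illustrative sketch of our own; the function names are ours, and the iteration count plays the role of the key $K$):

```python
def arnold(img, iterations):
    """Arnold (cat map) scrambling of a square P x P image,
    given as a list of P rows of pixel values."""
    P = len(img)
    for _ in range(iterations):
        out = [[0] * P for _ in range(P)]
        for u in range(P):
            for v in range(P):
                # forward map: (u', v') = [[1,1],[1,2]] (u, v)^T mod P
                u2 = (u + v) % P
                v2 = (u + 2 * v) % P
                out[u2][v2] = img[u][v]
        img = out
    return img

def arnold_inverse(img, iterations):
    """Inverse Arnold descrambling with the same iteration count."""
    P = len(img)
    for _ in range(iterations):
        out = [[0] * P for _ in range(P)]
        for u2 in range(P):
            for v2 in range(P):
                # inverse map: (u, v) = [[2,-1],[-1,1]] (u', v')^T mod P
                u = (2 * u2 - v2) % P
                v = (-u2 + v2) % P
                out[u][v] = img[u2][v2]
        img = out
    return img
```

Because the transform matrix has determinant 1, the map is a bijection on the P x P pixel grid, so applying the inverse for the same number of iterations exactly restores the image.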

2.2. Introduction to Gyrator Transformations

In 2006, Rodrigo and his team proposed a first-order optical system model based on a cylindrical lens array, which realizes a rotational transformation of a two-dimensional image by adjusting the optical path parameters while retaining separable fractional-order Fourier transform characteristics. The core innovation lies in expanding the order of the traditional Fourier transform into an angle variable, forming a mathematical framework with the rotation angle as an independent control parameter; hence it is called the Gyrator transform. For a grayscale image $f(x, y)$, the Gyrator transform with rotation angle $\alpha$ is defined as
$$G^{\alpha}[f(x, y)](u, v) = \frac{1}{\left|\sin \alpha\right|} \iint f(x, y) \exp\left( j 2 \pi \frac{(u v + x y) \cos \alpha - (u y + v x)}{\sin \alpha} \right) \mathrm{d}x \, \mathrm{d}y$$
The above transformations can be converted according to Euler’s formula as follows:
$$G^{\alpha}[f(x, y)](u, v) = \frac{1}{\left|\sin \alpha\right|} \iint f(x, y) \cos\left( 2 \pi \frac{(u v + x y) \cos \alpha - (u y + v x)}{\sin \alpha} \right) \mathrm{d}x \, \mathrm{d}y + j \frac{1}{\left|\sin \alpha\right|} \iint f(x, y) \sin\left( 2 \pi \frac{(u v + x y) \cos \alpha - (u y + v x)}{\sin \alpha} \right) \mathrm{d}x \, \mathrm{d}y$$
To invert the transformation, one only needs to perform a Gyrator transformation with the rotation angle adjusted to $-\alpha$ to recover the original image $f(x, y)$.
As a linear operation, the Gyrator transform has previously been utilized in optical information processing and has progressively been extended to the domain of image encryption [33]. Its primary advantage lies in the freely adjustable rotation angle $\alpha$ serving as a key parameter, akin to the order parameter of the fractional-order Fourier transform. Without the correct angle parameter, an unauthorized decryptor cannot recover the original image through conventional reverse operations, thereby significantly enhancing the security of the encryption system. Compared with the traditional Fourier transform, the Gyrator transform facilitates hybrid analysis in the space-frequency domain through optical rotation transformation, offering distinct advantages in image feature extraction and information hiding. It has emerged as one of the leading tools in optical image processing, with substantial potential for algorithm optimization and multi-dimensional extensions, such as applications in the quaternion domain.
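As a rough numerical illustration of the definition above, the continuous kernel can be sampled directly on a normalized grid. This is an unoptimized O(N^4) sketch of our own; practical implementations rely on fast decompositions of the transform, and this naive sampling (including its normalization) only approximates the continuous operator:

```python
import cmath
import math

def gyrator(f, alpha):
    """Naive discrete approximation of the Gyrator transform of a small
    N x N grayscale block f (list of lists of floats), sampling the
    continuous kernel on coordinates normalized to [0, 1).
    The inverse would use rotation angle -alpha."""
    N = len(f)
    s, c = math.sin(alpha), math.cos(alpha)
    out = [[0j] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            acc = 0j
            for x in range(N):
                for y in range(N):
                    uu, vv, xx, yy = u / N, v / N, x / N, y / N
                    # kernel phase: 2*pi*((uv + xy)cos(a) - (uy + vx)) / sin(a)
                    phase = 2 * math.pi * ((uu * vv + xx * yy) * c
                                           - (uu * yy + vv * xx)) / s
                    acc += f[x][y] * cmath.exp(1j * phase)
            out[u][v] = acc / (abs(s) * N)  # illustrative normalization
    return out
```

Since the transform is a kernel-weighted sum, linearity holds exactly even in this discrete approximation.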

2.3. Quaternion Correlation Theory

Quaternions are hypercomplex numbers proposed by Hamilton in 1843 as an extension of the complex numbers, addressing the limitation that complex numbers can describe only two-dimensional, not three-dimensional, space. The three imaginary parts of a quaternion are interrelated in a way that corresponds closely to the structural relationship among the three channels of an RGB color image, so quaternions provide a natural data structure for representing color images. Researchers have therefore introduced them into various fields of color image processing, replacing traditional schemes with quaternion-based methods such as the quaternion Fourier transform, quaternion singular value decomposition, quaternion wavelet transform, and quaternion Gyrator transform. The following mainly introduces the basic arithmetic rules of quaternions and the quaternion Gyrator transform, which is closely related to this paper.

2.3.1. Quaternion Base Operations

Quaternions, as an extension of real and complex numbers, consist of one real part and three imaginary parts [34,35,36]. The mathematical expression is
$$q = q_r + i q_i + j q_j + k q_k$$
where q r , q i , q j , q k are real numbers; i , j , k are units of imaginary parts as quaternions, which means that this number is the imaginary part of quaternions; and the imaginary parts satisfy the following relationship:
$$i^2 = j^2 = k^2 = -1, \quad ij = -ji = k, \quad jk = -kj = i, \quad ki = -ik = j$$
The component $q_r$ is the real part of the quaternion $q$, and $i q_i + j q_j + k q_k$ is its imaginary part. A pure quaternion is a quaternion whose real part $q_r = 0$; when the imaginary part is 0, the quaternion reduces to a real number.
The conjugate of the quaternion is $\bar{q} = q_r - i q_i - j q_j - k q_k$. The modulus of the quaternion $q$ is
$$|q| = \sqrt{q_r^2 + q_i^2 + q_j^2 + q_k^2}$$
The modulus of the pure unit quaternion is 1.
Basic operations with quaternions: Let two quaternions be q 1 and q 2 , respectively.
$$q_1 = a_1 + i b_1 + j c_1 + k d_1, \quad q_2 = a_2 + i b_2 + j c_2 + k d_2$$
Addition of quaternions means addition of corresponding parts, equality of quaternions means equality of corresponding parts, and multiplication of quaternions:
$$q_1 \cdot q_2 = (a_1 a_2 - b_1 b_2 - c_1 c_2 - d_1 d_2) + (a_1 b_2 + a_2 b_1 + c_1 d_2 - c_2 d_1)\,i + (a_1 c_2 - b_1 d_2 + a_2 c_1 + b_2 d_1)\,j + (a_1 d_2 + b_1 c_2 - b_2 c_1 + a_2 d_1)\,k$$
Addition and subtraction of quaternions satisfy the associative and commutative laws, but quaternion multiplication is not commutative. When the real part of a quaternion is 0, the quaternion is called a pure quaternion. Each pixel of a color image $f(x, y)$ is represented by a quaternion as
$$f(x, y) = f_R(x, y)\,i + f_G(x, y)\,j + f_B(x, y)\,k$$
where $f_R(x, y)$, $f_G(x, y)$ and $f_B(x, y)$ denote the red, green and blue components of the image pixel, respectively, and $i, j, k$ are the quaternion imaginary units.
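The multiplication rule above is easy to check numerically; a minimal sketch of our own, representing a quaternion as an (a, b, c, d) tuple:

```python
def qmul(q1, q2):
    """Hamilton product of two quaternions given as (a, b, c, d) tuples,
    i.e. q = a + b*i + c*j + d*k, following the multiplication rule above."""
    a1, b1, c1, d1 = q1
    a2, b2, c2, d2 = q2
    return (a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,
            a1 * b2 + a2 * b1 + c1 * d2 - c2 * d1,
            a1 * c2 - b1 * d2 + a2 * c1 + b2 * d1,
            a1 * d2 + b1 * c2 - b2 * c1 + a2 * d1)
```

For example, applying qmul to the units i and j yields k, while swapping the operands yields -k, confirming the non-commutativity noted above.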

2.3.2. Quaternion Gyrator Transformations

Because quaternion multiplication does not satisfy the commutative law, the quaternion Gyrator transform, like the quaternion Fourier transform, is defined in two forms: the left quaternion Gyrator transform and the right quaternion Gyrator transform. For a color image, the expression of the left quaternion Gyrator transform with rotation angle $\alpha$ is
$$G_l^{\alpha}[f(x, y)](u, v) = \iint K_{\alpha}(u, v; x, y)\, f(x, y)\, \mathrm{d}x\, \mathrm{d}y$$
Right quaternion Gyrator transformations:
$$G_r^{\alpha}[f(x, y)](u, v) = \iint f(x, y)\, K_{\alpha}(u, v; x, y)\, \mathrm{d}x\, \mathrm{d}y$$
where f ( x , y ) is a color image represented by a pure quaternion matrix:
$$f(x, y) = i f_R(x, y) + j f_G(x, y) + k f_B(x, y)$$
The subscripts R, G, and B denote the three color channels, and $K_{\alpha}(u, v; x, y)$ is the kernel function of the transformation [37], which is expressed as follows:
$$K_{\alpha}(u, v; x, y) = \frac{1}{\left|\sin \alpha\right|} \exp\left( \mu\, 2 \pi\, \frac{(u v + x y) \cos \alpha - (u y + v x)}{\sin \alpha} \right)$$
where $\mu$ is a unit pure quaternion, i.e., $\mu = ix + jy + kz$ with $\mu^2 = -1$, and the variable $\alpha$ represents the rotation angle of the image. To reconstruct the original signal from the transformed signal, the inverse QGT corresponds to the QGT at rotation angle $-\alpha$.
From the conjugate property of the quaternion product, it follows that the left quaternion Gyrator transform and the right quaternion Gyrator transform can be converted to each other as follows:
$$G_r^{\alpha}[f(x, y)](u, v) = \overline{G_l^{-\alpha}\left[\, \overline{f(x, y)}\, \right](u, v)}$$

2.4. The Non-Uniformity of Visual Capacity

The transparency of digital watermarking pertains to the perceptual invariance between the original host image and the watermarked image as assessed by the human visual system (HVS), ensuring that embedded watermarks remain imperceptible. Visual thresholds demonstrate spatial heterogeneity; areas with significant background variations or complex textures inherently exhibit higher thresholds due to visual masking effects, where rapid luminance gradients or intricate patterns diminish the HVS’s sensitivity to localized perturbations. By leveraging the HVS’s insensitivity to subtle spatial distortions while maintaining acute detection of granular artifacts, watermark embedding must comply with sub-threshold intensity criteria to prevent perceptibility. This requires adaptive watermark embedding, where local visual capacity—measured through gradient magnitude, edge density, or texture complexity—dictates region-specific embedding strengths. By focusing on high-capacity areas (e.g., textured regions with elevated masking thresholds) for more robust watermark insertion while safeguarding low-capacity zones (e.g., smooth backgrounds), the balance between imperceptibility and robustness is systematically optimized.
The non-uniformity of the visual capacity of sub-block $Q_k$ is defined as
$$d_{Q_k} = \frac{1}{n^2} \cdot \frac{\sum_{i, j \in Q_k} \operatorname{abs}\left( f(i, j) - m_k \right)}{\operatorname{abs}(m_k)^{1 + \lambda}}$$
where $k = 1, 2, \ldots, (M \times M)/(n \times n)$, and $i$ and $j$ index the pixels of the sub-block currently being computed. The carrier image is a square matrix of size $M \times M$, partitioned into sub-blocks of size $n \times n$. $Q_k$ denotes a sub-block of the carrier image, $m_k$ the mean value of that sub-block, and $\lambda$ a weighted correction factor (0.6–0.7) [38]; within this range, the exact value of $\lambda$ does not affect performance and serves only a balancing role.
Unlike grayscale images, the carrier image is a quaternion-based color image, where the parameters within the numerator and denominator brackets in Equation (16) are quaternions. The operator a b s ( · ) calculates the modulus of the quaternion parameters, ensuring algebraic consistency in the computation of sub-block statistical features while preserving inter-channel correlations inherent to color representations.
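For a grayscale block, the non-uniformity measure can be sketched as follows (an illustrative sketch of our own of Equation (16); for a color block, abs would be the quaternion modulus described above, and the function name and default value of lam are ours):

```python
def inhomogeneity(block, lam=0.65):
    """Visual-capacity non-uniformity d_Qk of one n x n sub-block
    (list of lists of scalars). Plain absolute values stand in for
    the quaternion modulus used with color sub-blocks; lam is the
    weighted correction factor (0.6-0.7 in the paper)."""
    n = len(block)
    m = sum(sum(row) for row in block) / (n * n)      # sub-block mean m_k
    num = sum(abs(p - m) for row in block for p in row)
    return num / (n * n * abs(m) ** (1 + lam))
```

A uniform block yields d = 0, while textured blocks yield larger values, matching the visual-masking intuition above.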

3. Proposed Scheme

In this section, we introduce the proposed robust watermarking algorithm based on statistical features. For copyright protection, watermark embedding and watermark extraction are the two key parts of the algorithm. The embedding process inserts the authentication watermark and the synchronization watermark into the host image; the extraction process then recovers the hidden watermark from the received image for authentication, thereby achieving information hiding and copyright protection.

3.1. Pre-Processing

3.1.1. Watermark Image Scrambling

Given an authentication watermark image $w_1$ of size $N_{w_1} \times M_{w_1}$, it is normalized into a square matrix of size $N_w \times N_w$ for computational convenience. The watermark image undergoes the Arnold transformation governed by the encryption key $K_r$. The transformed watermark is then binarized and vectorized into a bit sequence $W = [BSP_1, BSP_2, \ldots, BSP_{N_w \times N_w}] \in \{0, 1\}^{N_w \times N_w}$, where $BSP_i$ denotes the binary state of the $i$-th pixel after Arnold iteration. This process leverages the periodic and reversible properties of the Arnold mapping to enhance security against unauthorized extraction.

3.1.2. The DQ Sequence

This study defines a DQ sequence to determine the embedding order of watermarks, thereby enhancing security:
  • For a host image $f(x, y)$ of size $M \times N$, the image is partitioned into $N$ non-overlapping sub-blocks $B = \{B_1, B_2, \ldots, B_N\}$.
  • Perform the QGT on each sub-block to obtain the QGT coefficients $F_B = \{F_{B_1}, F_{B_2}, \ldots, F_{B_N}\}$. According to Equation (7), the modulus values of a sub-block can be obtained.
  • The non-uniformity $d$ of the visual capacity of each sub-block is calculated according to Equation (16) and the modulus values of the sub-block, yielding a sequence DQ containing the non-uniformity values $d$.
  • The sequence DQ is sorted in descending order, and each entry is then replaced by the index of its corresponding sub-block to obtain the final DQ sequence.
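The ordering step above can be sketched in one line (a hypothetical helper of ours: given the list of non-uniformity values d, it returns the sub-block indices in descending order of visual capacity):

```python
def dq_sequence(d_values):
    """Embedding order DQ: sub-block indices sorted by descending
    visual-capacity non-uniformity d (ties broken by index)."""
    return sorted(range(len(d_values)), key=lambda k: (-d_values[k], k))
```

For instance, non-uniformity values [0.2, 0.9, 0.5] give the embedding order [1, 2, 0], so the most textured sub-block is used first.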

3.1.3. The Watermark Parameter Extraction Model

The watermark extraction process selects neighboring coefficients around the watermark-embedding positions to compute statistical features. Since a nonlinear relationship exists between the embedding positions and neighboring coefficients, a support vector machine (SVM)-based feature extraction framework is constructed to precisely model the mapping relationship of watermark vectors, thereby enhancing cross-domain generalization performance:
  • The host image is partitioned into $N$ non-overlapping sub-blocks, and all sub-blocks are organized into a sequence $B = \{B_1, B_2, \ldots, B_N\}$.
  • According to the DQ sequence value, find the corresponding sub-block and perform the QGT on it. Split the result into real and imaginary parts, obtain the embedding position and the nine neighborhood values around it in the real part, and calculate the final watermark value to be embedded through Equation (17). These two pieces of data form one sample $S_i$; all sub-blocks together yield the final training set $S_t = \{S_1, S_2, \ldots, S_{LZ}\}$.
  • Here $S_t$ is the input and the corresponding binary watermark bit $BS_i$ is the output. The mapping relationship between $S_i$ and $BS_i$ is used to train the SVM, ultimately yielding the parameters of the watermark extraction model.

3.2. Embedding Procedure of Watermark

The energy concentration of the Gyrator transform signal is analogous to that of the Fourier transform [37]. Embedding watermarks in high-frequency coefficients allows their removal via common signal processing operations such as lossy compression or low-pass filtering; conversely, inserting markers into low-frequency components may induce severe distortion in the watermarked image. Thus, selecting mid-frequency coefficients is optimal, as it satisfies the trade-off between robustness and imperceptibility. To this end, a dual-watermark hierarchical embedding strategy is implemented by optimizing the mid-frequency components in the QGT domain: the authentication watermark $w_1$ (a binary copyright identifier) enables traceable authentication, while the synchronization watermark $w_2$ is constructed from geometric invariance features using a dual-limit uniform watermark, effectively enhancing resistance to shear deformation. The specific watermark embedding workflow is illustrated in Figure 1.
  • The host image $f(x, y)$ is partitioned into $N$ non-overlapping sub-blocks, and the partitioned host image is serialized to obtain the sub-block sequence $B = \{B_1, B_2, \ldots, B_N\}$. According to the DQ sequence value, find the corresponding sub-block, perform the QGT on this sub-block, and obtain the QGT coefficients of the sub-block.
  • The QGT coefficients are decomposed into real and imaginary parts. Since the embedding positions lie within the real part, the statistical feature of the neighboring coefficients is computed there: for a target embedding position $(u, v)$, the mean of the surrounding nine coefficients (including the embedding position itself) is calculated to obtain the feature value $avg$.
  • The feature value is combined with the watermark strength to derive the final watermark embedding value, which is then overwritten onto the target watermark embedding position. The relationship between the watermark embedding information B S B i and the final embedded value F r e a l ( u , v ) is defined by Equation (17).
  • After completing the watermark embedding process, the partitioned sub-blocks undergo the inverse QGT with a rotation angle of $-\alpha$ and are subsequently integrated to form the intermediate watermarked image $f_{w_1}$. Figure 2 shows an example of embedding one bit of robust watermark information using the proposed watermarking method.
    $$F_{real}(u, v) = \begin{cases} avg + \Delta & \text{if } BS_{B_i} = 1 \\ avg - \Delta & \text{if } BS_{B_i} = 0 \end{cases}, \qquad avg = \frac{1}{9} \sum_{i=1}^{3} \sum_{j=1}^{3} F_{real}(u_i, v_j)$$
    where $BS_{B_i}$ represents the binary watermark information, $\Delta$ denotes the embedding strength, $F_{real}(u, v)$ corresponds to the final embedded watermark value at position $(u, v)$, and $avg$ signifies the mean of the nine coefficients in the 3 × 3 neighborhood of the embedding position (including the position itself).
  • To address geometric deformation resistance requirements, the IULPM method [29] is employed to embed the synchronization watermark $w_2$ into the intermediate image. Following the embedding process, the image is converted back to the RGB representation to obtain the final watermarked image $f_{w_2}$,
    where $w_2 = TP$, whose core parameters comprise a bipolar sequence $T$ ($T = \{T_i \in \{1, -1\}, i = 1, 2, \ldots, N_T\}$) and a uniform bipolar sequence $P$ (containing an equal number of 1 s and $-1$ s).
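A minimal sketch of the per-bit embedding rule of Equation (17), operating on a 2-D array of real QGT coefficients. The extraction shown here is only an illustrative stand-in of ours that compares the embedded coefficient with its neighborhood mean; the paper instead trains an SVM on these neighborhood statistics:

```python
def embed_bit(coeffs, u, v, bit, delta=2.0):
    """Embed one watermark bit at real-coefficient position (u, v):
    overwrite the coefficient with the 3x3 neighborhood mean avg
    shifted by +/- delta. Modifies coeffs in place."""
    avg = sum(coeffs[u + du][v + dv]
              for du in (-1, 0, 1) for dv in (-1, 0, 1)) / 9.0
    coeffs[u][v] = avg + delta if bit == 1 else avg - delta
    return coeffs

def extract_bit(coeffs, u, v):
    """Illustrative stand-in for the SVM decision: compare the embedded
    coefficient with the mean of its eight neighbors."""
    nb = [coeffs[u + du][v + dv]
          for du in (-1, 0, 1) for dv in (-1, 0, 1)
          if not (du == 0 and dv == 0)]
    return 1 if coeffs[u][v] > sum(nb) / len(nb) else 0
```

With a sufficiently large delta relative to the local coefficient variation, the shifted coefficient stays above (bit 1) or below (bit 0) its neighborhood mean, which is the statistical regularity the SVM is trained to recognize.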

3.3. Watermark Extraction Procedure

The watermark extraction process generally comprises two phases: image rectification and watermark extraction. First, the received image undergoes geometric correction to eliminate positional distortions caused by geometric attacks. Second, the watermark is extracted using the embedding sequence DQ and an extraction model derived from a support vector machine (SVM). The workflow of the watermark extraction is illustrated in Figure 3.
  • Correcting the received image: the received image is first checked for geometric attacks; if a geometric attack has occurred, the attack is corrected and the image is recovered.
    • Firstly, the QDFT is applied to process the received image, and the frequency coefficients of the QDFT ( u , v ) are mapped to the logarithmic polar coordinates ( μ 1 , μ 2 ) using the IULPM method to obtain the mapping matrix f ( μ 1 , μ 2 ) .
    • Then, a synchronization watermark $w_2(\mu_1, \mu_2)$ of the same size as $f(\mu_1, \mu_2)$ is produced depending on the key $K_s$. The DFT is applied to $f(\mu_1, \mu_2)$ and $w_2(\mu_1, \mu_2)$ to obtain their frequency coefficients $F(\eta_1, \eta_2)$ and $W(\eta_1, \eta_2)$, respectively.
    • The maximum phase correlation value yields the watermark offset position $s(p_1, p_2) = \mathrm{IDFT}[F_{\varphi}(\eta_1, \eta_2) \cdot \overline{W}(\eta_1, \eta_2)]$. Once the offset position is known, the rotation angle of the attacked image can be estimated as $\pi p_2 / N$, and the corresponding geometric transformation is then inverted to correct the received image,
    where $F(\eta_1, \eta_2)$ and $W(\eta_1, \eta_2)$ denote the DFT coefficients of $f(\mu_1, \mu_2)$ and $w_2(\mu_1, \mu_2)$, $F_{\varphi}(\eta_1, \eta_2)$ denotes the phase of $F(\eta_1, \eta_2)$, and $\overline{W}(\eta_1, \eta_2)$ denotes the conjugate of $W(\eta_1, \eta_2)$.
  • Extract the watermark:
    • The corrected received image is divided into chunks, and each chunked sub-block is serialized into a one-dimensional array. The selection of the sub-block is based on the DQ value.
    • The QGT transform is applied to each image sub-block. Using the real-part information of the sub-block quaternion, the coefficient at the embedding position and its nine neighborhood coefficients are extracted to compute the feature sequence for the SVM.
    • Based on the prediction model obtained from pre-processing, the computed SVM feature sequence is input into the model for prediction, yielding outputs $p_0$ and $p_1$ with values 0 or 1, and the watermark information $W = [BS_{P_1}, BS_{P_2}, \dots, BS_{P_i}] \in \{0,1\}^{N_w}$ is finally extracted.
    • The extracted watermark information is then organized into a matrix, and inverse Arnold scrambling is applied using the key $k$ to obtain the final extracted watermark image.
    where DQ, the number of scrambling iterations $K$, and the predictive model are transmitted as input parameters.
    $BS_i = \begin{cases} 1, & \text{if } p_0^i < p_1^i \\ 0, & \text{otherwise} \end{cases}$
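The phase-correlation step of the correction phase above can be sketched as follows (a minimal numpy version; the array names are illustrative, and the IULPM log-polar mapping itself is assumed to have been applied already):

```python
import numpy as np

def estimate_rotation(f_logpolar, w2_sync, N):
    """Locate the watermark offset via phase correlation and estimate the rotation angle.

    Implements s(p1, p2) = IDFT[F_phi(eta1, eta2) * conj(W(eta1, eta2))],
    where F_phi is the phase of the DFT of the log-polar-mapped image and
    conj(W) is the conjugate of the DFT of the synchronization watermark.
    """
    F = np.fft.fft2(f_logpolar)
    W = np.fft.fft2(w2_sync)
    F_phi = F / (np.abs(F) + 1e-12)           # keep only the phase of F
    s = np.fft.ifft2(F_phi * np.conj(W))      # phase-correlation surface
    p1, p2 = np.unravel_index(np.argmax(np.abs(s)), s.shape)
    angle = np.pi * p2 / N                    # estimated rotation angle (radians)
    return angle, (p1, p2)
```

Inverting a rotation by the estimated angle then restores geometric synchronization before block-wise extraction.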

4. Experimentation

The properties of a watermarking algorithm are typically evaluated in terms of both robustness and imperceptibility. To assess these properties for the proposed watermarking algorithm, the following experiments utilize color digital images of size 512 × 512 as host images, as depicted in Figure 4a–h. The host images are standard color pictures sourced from the CVG-UGR and USC-SIPI databases. The watermark image (i) is a binary image measuring 64 × 64, and a series of simulated attacks is conducted after embedding the watermark. To ensure a more accurate and objective evaluation of the algorithm’s performance, the experimental results are compared with several related watermarking techniques. All experimental results were obtained using MATLAB R2023b on a PC with the following specifications: an Intel(R) Xeon(R) CPU E5-1650 v2 @ 3.50 GHz, 32 GB RAM, and Windows 10 Professional Edition.

4.1. Assessment of Indicators

In digital watermarking technology, the image quality assessment system is the core means of authenticating the efficacy of a method. The system must meet two core performance requirements simultaneously: the covertness (i.e., imperceptibility) of the watermark information, and the algorithm’s anti-interference ability (i.e., robustness). To quantitatively assess the covertness of the approach proposed in this paper, the study adopts an internationally recognized dual-index evaluation system: Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM). In addition, the Normalized Correlation (NC) and Bit Error Rate (BER) are used to assess the robustness of the approach.
  • PSNR: PSNR measures the pixel-level difference between the original image and the watermarked image and is a widespread and useful tool for objectively assessing the quality of a watermarked image. The higher the PSNR value, the higher the quality of the watermarked image and the better the watermarking algorithm. The PSNR of a color digital image is as follows:
    $PSNR = \frac{1}{3}\sum_{j=1}^{3} 10 \times \lg\dfrac{M \times N \times 255^2}{\sum_{m=1}^{M}\sum_{n=1}^{N}\left[H(m,n,j) - H'(m,n,j)\right]^2}$
    where $\lg(\cdot)$ is the base-ten logarithm, $j$ indicates the $j$th color channel of the image, $(m, n)$ indicates the pixel location, and $H(m,n,j)$ and $H'(m,n,j)$ are the pixels at position $(m,n)$ in the $j$th channel of the host image and the watermarked image, respectively.
    The PSNR value can be calculated according to Equation (19). When measuring the imperceptibility of the watermark, the higher the PSNR value, the better the quality of the image embedded with the watermark and the less visible the watermark. Generally, a PSNR higher than 30 dB indicates that the watermarked image is visually close to indistinguishable from the original.
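The channel-averaged PSNR of Equation (19) can be sketched as follows (a minimal numpy version; the function and array names are illustrative):

```python
import numpy as np

def color_psnr(host, marked):
    """Channel-averaged PSNR of an 8-bit color image, following Equation (19)."""
    M, N = host.shape[:2]
    total = 0.0
    for j in range(3):                                   # loop over R, G, B channels
        diff = host[:, :, j].astype(float) - marked[:, :, j].astype(float)
        total += 10 * np.log10(M * N * 255 ** 2 / np.sum(diff ** 2))
    return total / 3
```

For identical images the per-channel error sum is zero, so in practice it would be clamped to a small epsilon before dividing.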
  • SSIM: Similar to PSNR, SSIM is a more recent image similarity metric. It is designed around the characteristics of the human visual system. The SSIM value lies between 0 and 1 and can validly measure the perceived quality of the image: the higher the SSIM value, the better the quality of the image embedded with the watermark. The SSIM calculation formula is shown in Equation (20).
    $SSIM = \dfrac{(2\mu_H \mu_{H'} + c_1)(2\sigma_{HH'} + c_2)}{(\mu_H^2 + \mu_{H'}^2 + c_1)(\sigma_H^2 + \sigma_{H'}^2 + c_2)}$
    where $H$ represents the carrier image and $H'$ the watermarked image, $\mu_H$ and $\mu_{H'}$ denote the means of $H$ and $H'$, $\sigma_H$ and $\sigma_{H'}$ denote their standard deviations, $\sigma_{HH'}$ denotes the covariance of $H$ and $H'$, and $c_1$ and $c_2$ are two stabilizing constants.
    In addition to the metrics mentioned above, normalized correlation (NC) [39] and bit error rate (BER) [40] are often used to evaluate a method’s robustness against various attacks; both are computed on the extracted watermark. The NC measures the similarity between the original watermark and the recovered one and ranges over [0, 1]: NC equals 1 when the two images are identical, the value decreases as the difference between them grows, and a value of zero indicates no correlation at all.
  • NC: Normalized correlation is an objective criterion for estimating the robustness of a watermarking algorithm; NC ranges from 0 to 1, and the larger the NC value, the greater the similarity between the two images. NC is calculated as follows:
    $NC = \dfrac{\sum_{i=1}^{p}\sum_{j=1}^{q} w(i,j) \times w'(i,j)}{\sqrt{\sum_{i=1}^{p}\sum_{j=1}^{q} w(i,j)^2}\,\sqrt{\sum_{i=1}^{p}\sum_{j=1}^{q} w'(i,j)^2}}$
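Equation (21) and the BER can be sketched directly on binary watermarks held as numpy arrays (function names are illustrative):

```python
import numpy as np

def nc(w, w_rec):
    """Normalized correlation between the original and recovered watermarks, Equation (21)."""
    num = np.sum(w.astype(float) * w_rec.astype(float))
    den = np.sqrt(np.sum(w.astype(float) ** 2)) * np.sqrt(np.sum(w_rec.astype(float) ** 2))
    return num / den

def ber(w, w_rec):
    """Bit error rate: fraction of watermark bits extracted incorrectly."""
    return np.mean(w != w_rec)
```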

4.2. Parameter Selection

The watermark embedding intensity Δ is always a trade-off, as it directly affects both the image quality and the watermark’s robustness. To identify the embedding intensity Δ , we analyze the performance of the watermarking algorithm by computing the average PSNR and NC values on a set of 512 × 512 standard color test images. The image is partitioned into 8 × 8 blocks, the authentication watermark is of size 64 × 64, and $(\mu, \nu) = (2, 2)$ is selected as the embedding position of the watermark. As a result, the host image is segmented into 4096 sub-blocks in total, and the SVM training set likewise contains 4096 samples.
Figure 5 shows the variation of PSNR and NC mean value with different embedding strengths, where NC_O indicates the NC of the extracted watermark when the image does not receive an attack, and NC_A indicates the NC of the extracted watermark when the image receives an attack.
From the observation, we can conclude that a larger embedding intensity Δ makes the watermark more robust but lowers the PSNR of the host image, whereas a smaller Δ yields higher image quality but makes the watermark easier to remove. It is generally accepted that a PSNR greater than 35 dB and less than 48 dB indicates a visually pleasing image [41]. To balance the transparency and robustness of the watermarked image, the embedding intensity Δ was set to 23 in the simulations, so that the PSNR of the host image after embedding the watermark is about 40 dB.
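The role of Δ can be illustrated with a simple quantization-based embedder (a hedged stand-in for illustration only, not the paper’s statistical-feature embedding of Section 3.2): each coefficient is quantized onto one of two lattices selected by the watermark bit, so the worst-case distortion, and hence the PSNR cost, grows with Δ, while the decision margin, and hence the robustness, grows with it too.

```python
import numpy as np

def qim_embed(coeff, bit, delta=23.0):
    """Embed one bit by quantizing a coefficient onto the lattice chosen by the bit."""
    base = delta * np.floor(coeff / delta)
    return base + (0.75 * delta if bit else 0.25 * delta)

def qim_extract(coeff, delta=23.0):
    """Recover the bit from whichever lattice the (possibly perturbed) coefficient is nearest."""
    return 1 if (coeff % delta) >= delta / 2 else 0
```

With Δ = 23, any perturbation smaller than Δ/4 ≈ 5.75 leaves the extracted bit unchanged, which is the robustness/transparency trade-off the sweep in Figure 5 quantifies.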
Support Vector Machine (SVM) [42] is a supervised learning model mainly used for classification and regression analysis. It can balance model complexity and generalization ability under limited-sample conditions and thereby prevent overfitting. Choosing appropriate parameters during SVM training has an important influence on the performance of the model; a systematic parameter search combined with cross-validation can effectively balance model complexity and generalization ability, making SVM applicable to classification and regression tasks. In summary, SVM is highly suitable for the proposed method. The key parameter configurations include setting the kernel type to linear, fixing the regularization parameter (c value) to 1, defining the kernel parameter as 1, disabling the normalization parameter (normalization = 0), and solving via sequential minimal optimization (SMO).
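A minimal training sketch with the stated settings, using scikit-learn’s SVC (which is solved by an SMO-type algorithm); the feature data here are synthetic stand-ins for the ten real-part coefficients (embedding position plus its nine neighbors), not values from the actual scheme:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in features: one row per 8x8 sub-block, ten coefficients each
# (the embedding-position coefficient plus its nine neighborhood coefficients).
X = rng.normal(size=(4096, 10))
# Stand-in labels: the embedded watermark bits (derived from the first feature
# here purely so the toy problem is learnable).
y = (X[:, 0] > 0).astype(int)

# Linear kernel with regularization parameter C = 1, matching the paper's settings.
model = SVC(kernel="linear", C=1.0)
model.fit(X, y)
bits = model.predict(X)           # predicted watermark bits, one per sub-block
```

At extraction time the same feature construction is applied to the received sub-blocks, and `model.predict` supplies the 0/1 decisions that form the recovered watermark.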

4.3. Analysis of Imperceptibility

To ensure the covert characteristics of digital watermarking, this study employs a dual index system comprising PSNR and SSIM to quantitatively assess the imperceptibility of watermarking. The experimental design is structured as follows: within the eight groups of original carrier images depicted in Figure 4, single authentication watermarks and combinations of authentication and synchronous dual watermarks are embedded using the algorithm outlined in Section 3. Subsequently, the visual difference parameters are calculated based on the normalized Formulas (19) and (20).
The quantitative data presented in Table 1 indicate that the mean PSNR reaches 41.19 ± 0.36 dB when embedding the authentication watermark alone and 40.61 ± 0.51 dB when embedding the dual watermarks. The SSIM metrics for both groups stabilize within the 0.99 ± 0.01 range. These data demonstrate that (1) superposing the two watermarks causes only 0.58 dB of PSNR attenuation, evidencing the excellent steganographic compatibility of the multilayer watermarking architecture, and (2) the standard deviations among the test samples remain at approximately 0.5 dB or below, confirming the algorithm’s robust adaptability across varying image features. Additionally, visual evaluation results, such as the comparison schematic in Figure 6, further substantiate that watermark implantation does not induce noticeable texture distortion or color shift, thereby satisfying the visual quality criteria for imperceptible watermarks.
PSNR and SSIM results of watermarked images obtained using the method of this paper.
Where “Auth” denotes embedding only the authentication watermark, while “Dual” denotes embedding both the authentication watermark and the synchronization watermark.
In particular, it should be noted that although introducing the synchronization watermark shifts the PSNR values downward, the increase in their standard deviation is kept within 0.15 dB, indicating that the redundant information integration strategy of the geometric correction module effectively balances functionality expansion against visual fidelity. This property enables the method to maintain covert transmission of the watermark information under rotation and cropping attacks.

4.4. Robustness Analysis Against Attacks

In the field of digital watermarking technology, robustness specifically refers to the watermarking algorithm’s anti-interference ability to maintain the recognizability and integrity of the watermarked information after the carrier image has been subjected to malicious attacks such as geometrical deformation, filtering, compression, and degradation. By quantifying the watermarking system’s tolerance threshold to attacks, this index becomes a core parameter for evaluating the security level of an algorithm. In this study, the attack resistance of the watermarking algorithm is quantitatively evaluated by NC. Initially, the watermarked images are subjected to various attacks. Next, the watermark images are recovered from each attacked watermarked image using the extraction and decryption techniques detailed in the previous section. Finally, the NC values between the original and recovered watermark images are calculated using Equation (21). The following subsections provide detailed results on robustness.

4.4.1. No Attack

In this simulation, the watermarked images were not subjected to any attack. Table 2 illustrates the NC values and BERs. From Table 2, it can be seen that for all the tested color images, the proposed method achieves an NC value of 1 and a BER of 0. This shows that the proposed method recovers the watermark image without any loss, and it does so correctly regardless of which test color image is employed.

4.4.2. Noise Attack

In this simulation, various kinds of noise, such as speckle noise, salt-and-pepper noise, and Gaussian noise, are added to the watermarked image. The NC results are listed in Table 3. In addition, Figure 7 illustrates some visual results of recovering the watermark from the noisy watermarked image. From Table 3, it can be seen that in most cases the NC value is close to 1. This indicates that the proposed method is highly resilient to the studied noise attacks.
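A simulated salt-and-pepper (impulse) noise attack can be sketched as follows (the density value and seed are illustrative):

```python
import numpy as np

def salt_and_pepper(img, density, rng=None):
    """Flip a random fraction `density` of pixel positions to pure black or white."""
    rng = rng or np.random.default_rng()
    noisy = img.copy()
    hit = rng.random(img.shape[:2]) < density     # positions struck by impulse noise
    salt = rng.random(img.shape[:2]) < 0.5        # half salt (255), half pepper (0)
    noisy[hit & salt] = 255
    noisy[hit & ~salt] = 0
    return noisy
```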

4.4.3. Filtering and Brightness Attack

In this simulation, the watermarked image is attacked with a variety of filters, such as the median filter, averaging filter, and Gaussian low-pass filter (LPF), as well as luminance attacks that either increase or decrease the brightness. Table 4 shows the NC values. Moreover, Figure 8 shows some visual results of the watermark restored from the filtered watermarked image.
The experimental images were subjected to filtering and luminance attacks. As can be seen from Table 4, the NC value of the watermark recovered under the luminance-decrease attack is 1, which indicates that the proposed algorithm is highly resistant to this attack. The NC values for the brightness-increase attack are generally above 0.8, with only a few on the low side. Against the filtering attacks, the NC values are around 0.8 overall, with some reaching 1, which shows that the watermark possesses good resistance to filtering.

4.4.4. JPEG Compression Attack

JPEG compression is a way to condense an image without appreciably degrading its perceived visual quality. It is a common attack on digital watermarks, since JPEG is widely used to improve transmission efficiency. In this simulation, the watermarked images are compressed using JPEG compression with a variable quality factor (QF). The NC values are listed in Table 5. Furthermore, Figure 9 displays some visual results of recovering the watermark from a compressed watermarked image.
It can be seen from Table 5 that when the QF varies between 60 and 90, the NC is close or equal to 1, indicating that the recovered watermark images are almost or exactly identical to the originals. When the QF is set to 40 or 50, the NC values of some watermarked images deviate from 1, and the deviation increases as the QF decreases; nevertheless, the NC value remains above 0.64 at a QF of 40, indicating that the present method has a certain degree of resistance to JPEG compression.
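A JPEG attack at a given quality factor can be simulated in memory, for example with Pillow (an assumed tooling choice; any JPEG codec would serve):

```python
from io import BytesIO

import numpy as np
from PIL import Image

def jpeg_attack(img, quality):
    """Round-trip an RGB uint8 array through JPEG at the given quality factor."""
    buf = BytesIO()
    Image.fromarray(img).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf))
```

Sweeping `quality` over 40–90 and measuring NC on the re-extracted watermark reproduces the kind of study summarized in Table 5.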

4.4.5. Rotation Attacks

In this experiment, the watermarked image is rotated clockwise by 11°, 26°, 41°, 56°, 71°, and 91°. The NC values are listed in Table 6. Furthermore, Figure 10 shows some visual results of recovering the watermark from the attacked watermarked image.
As a typical form of geometric attack, image rotation not only changes the absolute position of pixels but also causes relative changes in pixel values. This double destructiveness defeats the synchronization between watermark embedding and extraction, posing a serious challenge to traditional watermarking algorithms. In the experimental design, we systematically analyze the correlation between watermark detection accuracy and rotation angle by rotating the watermarked image by different angles. As can be seen from Table 6, the NC values under rotation attacks are very close to 1. This indicates that the proposed method has excellent robustness to rotation-type geometric attacks (Figure 10).

4.4.6. Other Attacks

In this experiment, the watermarked image is subjected to contrast attack, scaling attack, sharpening attack, and histogram equalization attack. The contrast attack takes both contrast enhancement (value 1.2) and contrast reduction (value 0.8), the scaling attack uses 0.8 times and 2.0 times to attack the image, and the sharpening attack value uses 2. Table 7 lists the NC values. In addition, Figure 11 shows some visual results of recovering the watermarked image from the attacked watermarked image.
As shown in Table 7, the NC values under these attacks are generally around 0.9, which indicates that the proposed method has excellent robustness to them. In particular, the NC value of 1 for the contrast attack indicates that the algorithm is highly resistant to contrast changes.

4.5. Capacity and Timing Analysis

In the realm of digital watermarking technology, imperceptibility and robustness have traditionally been considered the primary evaluation criteria. Conversely, the embedding capacity of watermarking algorithms—the amount of information that can be carried per unit of image—has typically been regarded as a secondary consideration. However, with the increasing prevalence of color images in multimedia applications, the data dimension (e.g., the RGB three-channel) has expanded significantly compared with grayscale or binary images. This shift has catalyzed research into large-capacity color watermarking technology, making it a focal point of interest.
The information-carrying capacity in digital watermarking is typically quantified by the maximum number of embeddable bits per pixel, which indicates the upper limit of watermark information that can be accommodated by the carrier image while maintaining visual quality. The mathematical definition of this index is represented in Equation (22), expressed in bits per pixel (bpp). This equation reflects the relationship between the maximum number of embeddable bits, denoted as M E b i t s , and the total pixel count of the host image, represented as H I p i x e l s , forming the core calculation logic of this metric.
$EC = \dfrac{ME_{bits}}{HI_{pixels}}\ (\mathrm{bpp})$
In order to gauge the proposed watermarking method’s embedding capacity, its maximal embedding capacity is calculated. In this paper, a 512 × 512 color host image is partitioned into 8 × 8 blocks, and the real part of each sub-block carries 1 bit of binary watermark data. The relevant comparison results can be seen in Table 8.
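Under Equation (22), the capacity of this configuration works out directly:

```python
# One watermark bit is embedded in the real part of each 8x8 sub-block
# of the 512x512 host image, as described above.
blocks = (512 // 8) * (512 // 8)      # 4096 sub-blocks
me_bits = blocks * 1                  # maximum embeddable bits
ec = me_bits / (512 * 512)            # embedding capacity in bpp
print(ec)                             # 0.015625 bpp, i.e., 1/64
```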
The computational efficiency of digital watermarking algorithms constitutes a critical evaluation criterion, particularly in 5G-enabled environments where real-time processing demands necessitate optimized execution speed.
It can be seen from the data listed in Table 9 that the execution time of watermark embedding and watermark extraction proposed by this algorithm is relatively long.
Since the time complexity of the quaternion Gyrator transform for an image is O ( n 2 ) , and block-wise processing of the host image requires on the order of O ( n ) executions of the transform, the overall time complexity of the proposed algorithm is of the order of O ( n 3 ) .
Among the comparison methods, the time complexity of the algorithms of Mohammed et al., Wang et al., and Mehraj et al. is O ( n 2 ) , while that of the algorithms of Ubhi et al. and Gul et al. is O ( n 3 ) .
The overall complexity of each stage of the proposed method is as follows: the block-wise QGT has overall complexity O ( n 3 ) , computing the statistical features has time complexity O ( 1 ) , and the SVM prediction has overall complexity O ( n ) .

4.6. Security Analysis

From Section 3.2, it can be seen that the parameters required by the watermarking algorithm include the watermark scrambling key K r , the sequence DQ, and the QGT rotation angle α . In addition, embedding one bit of information in the real part requires determining the embedding position; with a wrong extraction position, the watermark information cannot be extracted successfully. Therefore, the security of the proposed approach is assessed by assigning distinct sets of values to the required keys under the hypotheses below. Here, we assume that the embedding positions are all correct in these experiments.
First, we assume that the scrambling key, DQ sequence, and rotation angle used are all incorrect. Second, we assume that only the scrambling key is correct and the remaining two are incorrect. Third, we assume that exactly one of the keys used is wrong and the other two are correct. Finally, we assume that all three keys used are correct.
For each configuration, we ran the watermark extraction process detailed in Section 3.2. Table 10 lists the outcomes of the eight experiments carried out. As can be seen from Table 10, when any of the keys used is incorrect, the restored watermark image is noise, and when multiple keys are incorrect, the recovered watermark image is even worse. The recovered watermark image matches the original only when all keys are correct. Based on these observations, it can be concluded that the proposed scheme is highly secure.
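The role of the scrambling key can be illustrated with an Arnold cat-map sketch (the iteration count acts as the key here; with a wrong count the watermark remains scrambled as noise):

```python
import numpy as np

def arnold(img, iterations):
    """Arnold scrambling of a square image: (x, y) -> (x + y, x + 2y) mod n."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

def arnold_inverse(img, iterations):
    """Inverse Arnold mapping: position (x, y) came from (2x - y, y - x) mod n."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(2 * x - y) % n, (y - x) % n] = out[x, y]
        out = nxt
    return out
```

Only the holder of the correct iteration count can invert the scrambling, mirroring the key-dependence reported in Table 10.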

4.7. Comparison with State-of-the-Art Approaches

This section presents a set of experiments comparing the proposed method with recent image watermarking approaches in terms of imperceptibility and robustness. The imperceptibility comparison results are shown in Figure 12 and Figure 13, and the robustness comparison in Figure 14; part (A) of Figure 14 is a visual comparison of all compared methods in a single plot, while part (B) provides detailed per-attack comparisons. The comparison methods are referenced as follows: Mohammed et al. [43]; Wang et al. [13]; Gul et al. [45]; Mehraj et al. [40]; and Ubhi et al. [44].
In this paper, an identical binary image is used as the digital watermark embedded into each host image, and the embedding results are shown in Figure 12 and Figure 13. Excluding the method of Gul et al. [45], the PSNR values of the remaining algorithms reach around 40 dB; the PSNR of the proposed method is 41.4921 dB, higher than most of the algorithms in the referenced literature, showing good invisibility. The SSIM values of all the compared methods and the proposed method are greater than 0.96. Furthermore, the visual results in Figure 6 show that the embedded watermark is not detectable in the watermarked host image, indicating that the proposed method performs well in terms of imperceptibility. In other words, the proposed watermarking algorithm possesses good invisibility compared with the other schemes in the literature.
As can be seen in part (A) of Figure 14, the proposed method performs better than the methods of Mohammed et al. [43] and Gul et al. [45] in terms of robustness in most cases, as shown in panels a and f of part (B) of Figure 14. The proposed method shows better robustness under rotation attacks than Mohammed et al. [43], Wang et al. [13], and Gul et al. [45]. Under a scaling attack, the methods of Mehraj et al. [40] and Gul et al. [45] perform better than ours, but the proposed method still outperforms the methods of Mohammed et al. [43] and Wang et al. [13]. Moreover, under JPEG compression attacks, our method is slightly better than the methods of Mohammed et al. [43], Wang et al. [13], Mehraj et al. [40], and Gul et al. [45]. The proposed method outperforms the methods of Mohammed et al. [43] and Gul et al. [45] under salt-and-pepper noise; although it is not as good as the method of Wang et al. [13] under median filtering or the method of Ubhi et al. [44] under Gaussian filtering, the proposed algorithm’s NC values under these attacks still reach the high values of 0.9317 and 0.9981, respectively, which indicates that its robustness in this respect remains highly satisfactory.
Based on the findings of the empirical comparisons and the preceding discussion, it can be concluded that the present method achieves both satisfactory imperceptibility and striking robustness compared with the prior-art methods.
In the experimental section of this paper, the digital watermarking algorithm demonstrates limited resistance to adversarial attacks, particularly when the watermarked image is modified by small adversarial perturbations, which may result in the loss or destruction of watermark information [46]. This observation suggests that current quaternion-based digital watermarking methods possess certain limitations in robustness. To enhance the algorithm’s resilience against such attacks, future research should consider improvements to the watermark embedding strategy, increased redundancy, or the integration of deep learning techniques, such as Generative Adversarial Networks (GANs) [47], to bolster watermark robustness. Furthermore, it will be essential to evaluate and optimize the performance of watermarking algorithms under various adversarial attacks to enhance their practical applicability.

5. Conclusions and Future Prospects

In this paper, a robust watermarking algorithm based on QGT and neighborhood coefficient statistical features is proposed. The QGT and neighborhood statistical features are combined to enhance the robustness of the watermark; the image is segmented into non-overlapping sub-blocks; the authentication watermark is embedded in each sub-block; and the inhomogeneity sequence is used to guide the embedding priority. The synchronization watermark is embedded using the IULPM method, and finally both watermarks are embedded into the host image to enhance its resistance to attacks. The watermarks are extracted using a support vector machine, which stores the mapping relationships established during embedding, so that the watermark bits can be extracted reliably. The digital watermarking algorithm proposed in this study, based on a dual watermark embedding mechanism and a statistical feature fusion strategy, achieves collaborative optimization of robustness and security for digital copyright protection. In terms of security, the adopted three-level key system, comprising the master key, feature key, and verification key, ensures that unauthorized users cannot obtain the complete watermark. Experimental data show that the algorithm has significant resistance to attacks such as JPEG compression and median filtering. This multi-dimensional protection mechanism provides a reliable technical path for digital content copyright certification.
For further research, we intend to investigate other properties of QGT and apply them to watermarking algorithms. For example, after the QGT transformation, the amplitude could also be used as a carrier, which could substantially increase the embedding capacity of the watermark information. Since QGT itself requires multiple multiplications, improving its execution efficiency is another future research direction. In addition, we intend to test the feasibility of a tamper-detection function based on semi-fragile watermarking with this method, in order to understand its advantages and properties. For example, we will explore fusing deep learning techniques with watermarking and tamper-localization models to obtain more robust performance. The combination of support vector machines and deep learning is also one of our future research directions: multi-kernel or ensemble SVMs combined with deep learning form a promising avenue for further study.

Author Contributions

Each author discussed the details of the manuscript. R.W. designed and wrote the manuscript. R.W. implemented the proposed technique and provided the experimental results. J.O. reviewed and revised the article. J.O., R.W. and T.S. drafted and revised the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Social Science Foundation of China under grant number 21BXW077.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy issues.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Xin, Y.; Liao, S.; Pawlak, M. Circularly orthogonal moments for geometrically robust image watermarking. Pattern Recognit. 2007, 40, 3740–3752. [Google Scholar] [CrossRef]
  2. Loan, N.A.; Hurrah, N.N.; Parah, S.A.; Lee, J.W.; Sheikh, J.A.; Bhat, G.M. Secure and robust digital image watermarking using coefficient differencing and chaotic encryption. IEEE Access 2018, 6, 19876–19897.
  3. Ali, R.F.; Dominic, P.; Ali, S.E.A.; Rehman, M.; Sohail, A. Information security behavior and information security policy compliance: A systematic literature review for identifying the transformation process from noncompliance to compliance. Appl. Sci. 2021, 11, 3383.
  4. Tiwari, A.; Sharma, M.; Tamrakar, R.K. Watermarking based image authentication and tamper detection algorithm using vector quantization approach. AEU-Int. J. Electron. Commun. 2017, 78, 114–123.
  5. Haghighi, B.B.; Taherinia, A.H.; Harati, A.; Rouhani, M. WSMN: An optimized multipurpose blind watermarking in Shearlet domain using MLP and NSGA-II. Appl. Soft Comput. 2021, 101, 107029.
  6. Su, Q.; Chen, B. Robust color image watermarking technique in the spatial domain. Soft Comput. 2018, 22, 91–106.
  7. Chopra, D.; Gupta, P.; Sanjay, G.; Gupta, A. LSB based digital image watermarking for gray scale image. IOSR J. Comput. Eng. 2012, 6, 36–41.
  8. Wenyin, Z.; Shih, F.Y. Semi-fragile spatial watermarking based on local binary pattern operators. Opt. Commun. 2011, 284, 3904–3912.
  9. Chang, J.D.; Chen, B.H.; Tsai, C.S. LBP-based fragile watermarking scheme for image tamper detection and recovery. In Proceedings of the 2013 International Symposium on Next-Generation Electronics, Kaohsiung, Taiwan, 25–26 February 2013; pp. 173–176.
  10. Chennamma, H.; Madhushree, B. A comprehensive survey on image authentication for tamper detection with localization. Multimed. Tools Appl. 2023, 82, 1873–1904.
  11. Huynh-The, T.; Hua, C.H.; Tu, N.A.; Hur, T.; Bang, J.; Kim, D.; Amin, M.B.; Kang, B.H.; Seung, H.; Lee, S. Selective bit embedding scheme for robust blind color image watermarking. Inf. Sci. 2018, 426, 1–18.
  12. Bao, B.; Wang, Y. A robust blind color watermarking algorithm based on the Radon-DCT transform. Multimed. Tools Appl. 2024, 83, 64663–64682.
  13. Wang, J.; Wu, D.; Li, L.; Zhao, J.; Wu, H.; Tang, Y. Robust periodic blind watermarking based on sub-block mapping and block encryption. Expert Syst. Appl. 2023, 224, 119981.
  14. Kumari, M.R.R.; Kumar, V.V.; Naidu, K.R. Digital image watermarking using DWT-SVD with enhanced tunicate swarm optimization algorithm. Multimed. Tools Appl. 2023, 82, 28259–28279.
  15. Woo, C.S. Digital Image Watermarking Methods for Copyright Protection and Authentication. Ph.D. Thesis, Queensland University of Technology, Brisbane, Australia, 2007.
  16. Zhang, X.; Su, Q.; Yuan, Z.; Liu, D. An efficient blind color image watermarking algorithm in spatial domain combining discrete Fourier transform. Optik 2020, 219, 165272.
  17. Qu, C.; Du, J.; Xi, X.; Tian, H.; Zhang, J. A hybrid domain-based watermarking for vector maps utilizing a complementary advantage of discrete Fourier transform and singular value decomposition. Comput. Geosci. 2024, 183, 105515.
  18. Hamidi, M.; El Haziti, M.; Cherifi, H.; El Hassouni, M. A hybrid robust image watermarking method based on DWT-DCT and SIFT for copyright protection. J. Imaging 2021, 7, 218.
  19. Huang, C.; Li, J.; Gao, G. Review of quaternion-based color image processing methods. Mathematics 2023, 11, 2056.
  20. Guo, L.; Dai, M.; Zhu, M. Quaternion moment and its invariants for color object classification. Inf. Sci. 2014, 273, 132–143.
  21. Ouyang, J.; Wen, X.; Liu, J.; Chen, J. Robust hashing based on quaternion Zernike moments for image authentication. ACM Trans. Multimed. Comput. Commun. Appl. (TOMM) 2016, 12, 1–13.
  22. Ouyang, J.; Zhang, X.; Wen, X. Robust hashing based on quaternion Gyrator transform for image authentication. IEEE Access 2020, 8, 220585–220594.
  23. Bas, P.; Le Bihan, N.; Chassery, J.M. Color image watermarking using quaternion Fourier transform. In Proceedings of the 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, Hong Kong, China, 6–10 April 2003; Volume 3, p. III-521.
  24. Wang, H.; Yuan, Z.; Chen, S.; Su, Q. Embedding color watermark image to color host image based on 2D-DCT. Optik 2023, 274, 170585.
  25. Yan, C.P.; Pun, C.M.; Yuan, X.C. Quaternion-based image hashing for adaptive tampering localization. IEEE Trans. Inf. Forensics Secur. 2016, 11, 2664–2677.
  26. Chen, Y.; Jia, Z.; Peng, Y.; Peng, Y. Efficient robust watermarking based on structure-preserving quaternion singular value decomposition. IEEE Trans. Image Process. 2023, 32, 3964–3979.
  27. Zhang, M.; Ding, W.; Li, Y.; Sun, J.; Liu, Z. Color image watermarking based on a fast structure-preserving algorithm of quaternion singular value decomposition. Signal Process. 2023, 208, 108971.
  28. Gong, L.H.; Luo, H.X. Dual color images watermarking scheme with geometric correction based on quaternion FrOOFMMs and LS-SVR. Opt. Laser Technol. 2023, 167, 109665.
  29. Ouyang, J.; Coatrieux, G.; Chen, B.; Shu, H. Color image watermarking based on quaternion Fourier transform and improved uniform log-polar mapping. Comput. Electr. Eng. 2015, 46, 419–432.
  30. Gao, S.; Liu, J.; Iu, H.H.C.; Erkan, U.; Zhou, S.; Wu, R.; Tang, X. Development of a video encryption algorithm for critical areas using 2D extended Schaffer function map and neural networks. Appl. Math. Model. 2024, 134, 520–537.
  31. Gao, S.; Zhang, Z.; Iu, H.H.C.; Ding, S.; Mou, J.; Erkan, U.; Toktas, A.; Li, Q.; Wang, C.; Cao, Y. A parallel color image encryption algorithm based on a 2D Logistic-Rulkov neuron map. IEEE Internet Things J. 2025, 12, 18115–18124.
  32. Liu, P.; Ouyang, J.; Shao, Z. Optical digital dual image encryption scheme based on fast Fourier symmetry fusion and bit-level compression. Phys. Scr. 2025, 100, 055024.
  33. Ramos, A.M.; Artiles, J.A.; Chaves, D.P.; Pimentel, C. A fragile image watermarking scheme in DWT domain using chaotic sequences and error-correcting codes. Entropy 2023, 25, 508.
  34. Hamilton, W.R. Elements of Quaternions; Longmans, Green, & Company: London, UK, 1866.
  35. Guo, C.; Ma, Q.; Zhang, L. Spatio-temporal saliency detection using phase spectrum of quaternion Fourier transform. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
  36. Li, J.; Levine, M.D.; An, X.; Xu, X.; He, H. Visual saliency based on scale-space analysis in the frequency domain. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 996–1010.
  37. Shao, Z.; Duan, Y.; Coatrieux, G.; Wu, J.; Meng, J.; Shu, H. Combining double random phase encoding for color image watermarking in quaternion Gyrator domain. Opt. Commun. 2015, 343, 56–65.
  38. Liu, K.C.; Chou, C.H. Robustness comparison of color image watermarking schemes in uniform and non-uniform color spaces. Int. J. Comput. Sci. Netw. Secur. 2007, 7, 239–247.
  39. Hsu, L.Y.; Hu, H.T. Blind watermarking for color images using EMMQ based on QDFT. Expert Syst. Appl. 2020, 149, 113225.
  40. Mehraj, S.; Mushtaq, S.; Parah, S.A.; Giri, K.J.; Sheikh, J.A. A robust watermarking scheme for hybrid attacks on heritage images. J. Ambient. Intell. Humaniz. Comput. 2023, 14, 7367–7380.
  41. Kumar, S.; Singh, B.K. DWT based color image watermarking using maximum entropy. Multimed. Tools Appl. 2021, 80, 15487–15510.
  42. Ouyang, J.; Huang, J.; Wen, X.; Shao, Z. A semi-fragile watermarking tamper localization method based on QDFT and multi-view fusion. Multimed. Tools Appl. 2023, 82, 15113–15141.
  43. Mohammed, A.O.; Hussein, H.I.; Mstafa, R.J.; Abdulazeez, A.M. A blind and robust color image watermarking scheme based on DCT and DWT domains. Multimed. Tools Appl. 2023, 82, 32855–32881.
  44. Ubhi, J.S.; Aggarwal, A.K.; Mallika. Neural style transfer for image within images and conditional GANs for destylization. J. Vis. Commun. Image Represent. 2022, 85, 103483.
  45. Gul, E. A blind robust color image watermarking method based on discrete wavelet transform and discrete cosine transform using grayscale watermark image. Concurr. Comput. Pract. Exp. 2022, 34, e6884.
  46. Yuan, S.; Xu, G.; Li, H.; Zhang, R.; Qian, X.; Jiang, W.; Cao, H.; Zhao, Q. FIGhost: Fluorescent ink-based stealthy and flexible backdoor attacks on physical traffic sign recognition. arXiv 2025, arXiv:2505.12045.
  47. Yuan, S.; Li, H.; Zhang, R.; Cao, H.; Jiang, W.; Ni, T.; Fan, W.; Zhao, Q.; Xu, G. Omni-angle assault: An invisible and powerful physical adversarial attack on face recognition. In Proceedings of the Forty-Second International Conference on Machine Learning, Vancouver, BC, Canada, 13–19 July 2025.
Figure 1. Flowchart of the watermark embedding process.
Figure 2. An example of embedding one bit of watermark information. (a) Image blocking, where B i denotes the block selected for watermark embedding; (b) the QGT coefficients of image block B 1, where the yellow box indicates the neighborhood coefficients of the watermark embedding position and the blue box indicates the watermark embedding position; (c) the coefficient changes after watermark embedding.
Figure 3. Flowchart of the watermark extraction process.
Figure 4. Test cover images: (a) Airplane; (b) Car; (c) Children; (d) Girl; (e) Lake; (f) Lena; (g) Pepper; (h) Splash; (i) watermark image.
Figure 5. Parameter selection for embedding strength.
Figure 6. Visualization of the proposed approach: (a) pristine host image; (b) watermark-embedded image.
Figure 7. Watermark recovery from noise attacks.
Figure 8. Watermark recovery from filtering and brightness attacks.
Figure 9. Watermark recovery from JPEG compression attacks.
Figure 10. Watermark recovery from rotation attacks.
Figure 11. Watermark recovery from other attacks.
Figure 12. Comparison of the PSNR of the proposed method with other existing methods.
Figure 13. Comparison of the SSIM of the proposed method with other existing methods.
Figure 14. Comparison of the NC of the proposed method with other existing methods. (A) compares all methods in a single plot; (B) shows the detailed pairwise comparisons: (a) Mohammed et al., (b) Wang et al., (c) Mehraj et al., (d) Ubhi et al., (e) Gul et al.
Table 1. PSNR and SSIM results of the watermarked images obtained by the proposed method.
Carrier Image   PSNR (Auth)   PSNR (Dual)   SSIM (Auth)   SSIM (Dual)
Airplane        41.2016       40.7688       0.9695        0.9668
Car             40.7102       39.8572       0.9905        0.9884
Children        41.3295       40.8793       0.9902        0.9892
Girl            41.6760       40.7481       0.9965        0.9956
Lake            40.7044       39.9856       0.9938        0.9927
Lena            40.9783       40.4174       0.9978        0.9975
Pepper          41.5066       40.8269       0.9981        0.9977
Splash          41.6434       41.4289       0.9954        0.9952
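As a point of reference for how PSNR figures such as those in Table 1 are typically obtained, the sketch below implements the standard definition for 8-bit images. This is an illustrative assumption, not the paper's own code, and it omits SSIM, which requires a windowed structural comparison.

```python
import numpy as np

def psnr(host, marked, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a host and a watermarked image."""
    mse = np.mean((host.astype(np.float64) - marked.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# toy example: a flat 8x8 image with a single pixel changed by 1
host = np.full((8, 8), 128, dtype=np.uint8)
marked = host.copy()
marked[0, 0] += 1
print(round(psnr(host, marked), 2))  # → 66.19
```

Values above roughly 40 dB, as reported in Table 1, are conventionally taken to indicate imperceptible embedding distortion.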
Table 2. NC and BER values in the attack-free scenario.
Carrier Image   NC       BER
Airplane        1.0000   0.0000
Car             1.0000   0.0000
Children        1.0000   0.0000
Girl            1.0000   0.0000
Lake            1.0000   0.0000
Lena            1.0000   0.0000
Pepper          1.0000   0.0000
Splash          1.0000   0.0000
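The NC and BER metrics reported in Tables 2–7 admit several definitions in the literature; the sketch below assumes one widely used form (normalized cross-correlation and the fraction of mismatched bits) purely for illustration, applied to a hypothetical 64 × 64 binary watermark matching the capacity in Table 8.

```python
import numpy as np

def nc(w, w_ext):
    """Normalized correlation between original and extracted binary watermarks."""
    w = np.asarray(w, dtype=np.float64).ravel()
    w_ext = np.asarray(w_ext, dtype=np.float64).ravel()
    return float(np.dot(w, w_ext) / (np.linalg.norm(w) * np.linalg.norm(w_ext)))

def ber(w, w_ext):
    """Bit error rate: fraction of watermark bits that differ after extraction."""
    w = np.asarray(w).ravel()
    w_ext = np.asarray(w_ext).ravel()
    return float(np.mean(w != w_ext))

rng = np.random.default_rng(seed=1)
w = rng.integers(0, 2, size=(64, 64))   # hypothetical 64x64 binary watermark
print(nc(w, w), ber(w, w))              # perfect extraction: NC = 1.0, BER = 0.0
```

Under this convention, a lossless extraction gives NC = 1 and BER = 0 (the attack-free case of Table 2), while inverting every bit drives BER to 1 and NC to 0.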
Table 3. NC values under noise attacks.
Attack Airplane Car Children Girl Lake Lena Pepper Splash
Speckle (0.02) 0.7785 0.8340 0.8663 0.7271 0.8840 0.8933 0.9077 0.8926
Speckle (0.04) 0.6436 0.7158 0.7396 0.5882 0.7945 0.7889 0.7865 0.7851
Salt & Pepper (0.008) 0.9492 0.9612 0.9600 0.9089 0.9619 0.9638 0.9575 0.9451
Salt & Pepper (0.02) 0.8740 0.8879 0.8764 0.8195 0.8631 0.8760 0.8662 0.8630
Poisson 0.9660 0.9804 0.9875 0.9412 0.9868 0.9886 0.9948 0.9915
Gaussian (0.01) 0.7444 0.7635 0.7430 0.7040 0.7380 0.7380 0.7347 0.7408
Table 4. NC values under filtering and brightness attacks.
Attack Airplane Car Children Girl Lake Lena Pepper Splash
Average Filter (3 × 3) 0.7811 0.6678 0.7530 0.8050 0.8875 0.7818 0.7934 0.9132
Gaussian LPF (3 × 3) 0.9981 0.9981 1.0000 0.9986 0.9976 0.9981 0.9986 1.0000
Median Filter (3 × 3) 0.8314 0.7713 0.7953 0.8443 0.6937 0.8701 0.8225 0.9317
Brightness Up (1.0, 0.4) 0.5364 0.7590 0.4319 0.1539 0.8109 0.8452 0.8603 0.8503
Brightness Up (1.0, 0.8) 0.8607 0.9804 0.9948 0.9899 0.9678 0.9957 0.9800 0.9174
Brightness Down (1.0, 0.4) 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
Brightness Down (1.0, 0.8) 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
Table 5. NC values under JPEG compression attacks.
Attack Airplane Car Children Girl Lake Lena Pepper Splash
JPEG (QF:90) 1.0000 1.0000 1.0000 0.9938 1.0000 0.9995 0.9976 1.0000
JPEG (QF:80) 1.0000 0.9981 0.9990 0.9794 0.9971 0.9976 0.9952 0.9976
JPEG (QF:70) 0.9887 0.9870 0.9857 0.9228 0.9857 0.9857 0.9730 0.9657
JPEG (QF:60) 0.8913 0.9329 0.9223 0.8172 0.9160 0.9276 0.9075 0.8776
JPEG (QF:50) 0.7823 0.8638 0.8444 0.7448 0.8365 0.8443 0.8306 0.8007
JPEG (QF:40) 0.6474 0.7557 0.7184 0.6497 0.7422 0.7271 0.6962 0.6594
Table 6. NC values under rotation attacks.
Attack Airplane Car Children Girl Lake Lena Pepper Splash
Rotation (11°) 0.8701 0.9683 0.9773 0.9746 0.9716 0.9776 0.9772 0.8501
Rotation (26°) 0.7876 0.9504 0.9694 0.9664 0.9564 0.9671 0.9685 0.7908
Rotation (41°) 0.7370 0.9465 0.9654 0.9661 0.9474 0.9644 0.9644 0.7756
Rotation (56°) 0.6136 0.9544 0.9754 0.9722 0.9605 0.9720 0.9740 0.7923
Rotation (71°) 0.3487 0.8600 0.9689 0.9342 0.9574 0.9697 0.9052 0.8497
Rotation (91°) 0.9480 0.8548 0.9896 0.9869 0.9841 0.9864 0.8943 0.8910
Table 7. NC values under other attacks.
Attack Airplane Car Children Girl Lake Lena Pepper Splash
Contrast (0.5) 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000
Contrast (1.2) 0.9617 0.9981 0.9995 0.9904 0.9938 0.9990 1.0000 0.9534
Scaling (0.8) 0.8575 0.8492 0.8815 0.9017 0.7963 0.8728 0.8985 0.9475
Scaling (2.0) 0.9942 0.9933 0.9986 0.9914 0.9900 0.9914 0.9957 0.9995
Sharpening 1.0000 0.9995 1.0000 0.9976 0.9995 0.9995 0.9990 1.0000
Histogram Equalization 0.9805 0.9957 0.9962 0.9967 1.0000 0.9990 1.0000 0.9986
Table 8. Comparison of embedding capacity.
Algorithm Maximum Embedded Watermark (Bit) Carrier Image (Pixel) Bit/Pixel
Mohammed et al. [43] 64 × 64 512 × 512 × 3 0.005208
Wang et al. [13] 64 × 64 512 × 512 × 3 0.005208
Mehraj et al. [40] 64 × 64 512 × 512 × 3 0.005208
Ubhi et al. [44] 64 × 64 512 × 512 × 3 0.005208
Gul et al. [45] 64 × 64 512 × 512 × 3 0.005208
Proposed 64 × 64 512 × 512 × 3 0.005208
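The Bit/Pixel column of Table 8 follows directly from the ratio of watermark bits to host samples; a one-line check reproduces the shared value:

```python
# capacity check: a 64x64 binary watermark in a 512x512 three-channel host
watermark_bits = 64 * 64            # 4096 bits
host_samples = 512 * 512 * 3        # 786432 color-channel samples
bpp = watermark_bits / host_samples
print(f"{bpp:.6f}")                 # → 0.005208, matching every row of Table 8
```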
Table 9. Average running time of each algorithm (seconds).
Algorithm Embedding Process Extracting Process Total Time
Mohammed et al. [43] 13.4376 41.2573 54.6949
Wang et al. [13] 11.9278 33.4682 41.3960
Mehraj et al. [40] 15.4982 24.1795 39.6777
Ubhi et al. [44] 12.4976 37.1853 49.6829
Gul et al. [45] 99.9682 33.9182 111.8864
Proposed 18.5839 24.4869 43.0808
Table 10. Security analysis of the proposed approach.
Recovered Watermark Exp. 1 Exp. 2 Exp. 3 Exp. 4 Exp. 5 Exp. 6 Exp. 7 Exp. 8
BER 0.4513 0.4835 0.5389 0.4953 0.4926 0.5146 0.2259 0.0000
NC 0.3856 0.3289 0.2676 0.2958 0.3264 0.3025 0.5633 1.0000
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Ouyang, J.; Wang, R.; Shi, T. Robust Watermarking Algorithm Based on QGT and Neighborhood Coefficient Statistical Features. Electronics 2025, 14, 4494. https://doi.org/10.3390/electronics14224494