Article

Enhancing Security in Augmented Reality Through Hash-Based Data Hiding and Hierarchical Authentication Techniques

1 Department of Computer Science and Information Engineering, National Chin-Yi University of Technology, Taichung 411, Taiwan
2 Prospective Technology of Electrical Engineering and Computer Science, National Chin-Yi University of Technology, Taichung 411, Taiwan
3 Department of Automatic Control Engineering, Feng Chia University, No. 100 Wenhua Road, Xitun District, Taichung 407, Taiwan
4 Department of Electrical and Computer Engineering, Tamkang University, Tamsui, New Taipei City 251, Taiwan
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(7), 1027; https://doi.org/10.3390/sym17071027
Submission received: 22 April 2025 / Revised: 3 June 2025 / Accepted: 18 June 2025 / Published: 30 June 2025
(This article belongs to the Section Computer)

Abstract

With the increasing integration of augmented reality (AR) in various applications, ensuring secure access and content authenticity has become a critical challenge. This paper proposes an innovative and robust authentication framework for protecting AR multimedia content through a hash-based data-hiding technique. Leveraging the Discrete Wavelet Transform (DWT) in the YCbCr color space, the method embeds multiple cryptographic hash signatures directly into the AR visual data. This design not only utilizes the symmetric property between two consecutive AR contents but also allows users to verify the connectivity between two AR digital contents by checking the embedded hash values. These embedded signatures support hierarchical, multi-level authentication, verifying not only the integrity and authenticity of individual AR objects but also their contextual relationships within the AR environment. The proposed system exhibits exceptional resilience to tampering, effectively identifying whether two consecutive e-pages in the AR content have been altered, while preserving high perceptual quality with PSNR values above 45 dB and SSIM scores consistently exceeding 0.98. This work presents a practical, real-time solution for enhancing AR content security, contributing significantly to the advancement of secure multimedia systems in next-generation interactive platforms.

1. Introduction

The rapid digital advancement of augmented reality (AR) has catalyzed the development of immersive experiences across multiple domains, including education, healthcare, and entertainment. Within the gaming industry, AR significantly enhances user interaction by overlaying digital elements onto the physical environment, thereby transforming traditional gameplay into a highly dynamic and engaging experience. In the healthcare sector, AR facilitates surgical assistance and medical training through real-time data visualization and interactive simulations, contributing to improved accuracy and educational outcomes. Similarly, educational institutions are increasingly adopting AR technologies to promote interactive learning, enabling students to grasp complex concepts through immersive visual representations. The entertainment industry also leverages AR to produce compelling content that merges virtual and real-world elements, thereby redefining user interaction paradigms [1].
As AR technologies become more deeply integrated into daily life, the necessity for robust mechanisms to guarantee the authenticity, security, and integrity of AR content grows correspondingly. The seamless blending of virtual and physical environments introduces new challenges, particularly in protecting digital assets from tampering, unauthorized access, and intellectual property infringement. Despite AR’s transformative capabilities, ensuring the authentication and integrity of AR content remains a critical and underexplored issue that necessitates further research and innovation.
Recent years have seen growing academic interest in techniques aimed at securing AR content, with data-hiding schemes emerging as a particularly promising approach. In 2016, Gaebel et al. [2] introduced the “Looks Good To Me” (LGTM) protocol, which utilizes AR headset hardware and contextual awareness to embed human trust into digital authentication. This protocol combines localization-based wireless communication and facial recognition to achieve real-time identity verification. Building upon this, Wazir et al. [3] proposed a graphical authentication system in 2020, wherein users generate doodle-based passwords within a 3D AR interface. The system authenticates users by analyzing the spatial characteristics—such as coordinates and size—of the five most recent doodles drawn via smartphone.
Chang et al. [4] proposed a cryptographic hash-based data-hiding scheme aimed at enhancing content authenticity and integrity. Their approach involves embedding authentication data within digital media using cryptographic hash functions, rendering it resistant to tampering and unauthorized extraction. In the same year, Shrimali et al. [5] introduced an efficient hash-based data-hiding technique utilizing the SAJ encryption–decryption mechanism. Their method embeds hashed secret data [6,7] derived from a secret key into various frames of a video stream to enhance data integrity and minimize payload size. Similarly, Liu et al. [8] developed a secure image data-hiding framework incorporating encryption and error correction techniques that demonstrated high resilience against tampering and unauthorized access. In 2022, Stephenson et al. [9] conducted a comprehensive evaluation of AR/VR authentication techniques, addressing dimensions such as usability, accessibility, and security. While these prior works have significantly advanced user authentication within AR contexts, a substantial gap persists concerning the protection of AR digital content itself—especially in interactive environments where virtual objects adapt dynamically to user input. Existing approaches have not sufficiently addressed mechanisms for ensuring the security and integrity of such mutable AR elements.
To bridge this gap, this paper proposes a secure authentication framework for AR content based on hash-based data hiding. The proposed method embeds hash values of content regions into e-pages using the Discrete Wavelet Transform (DWT), thereby ensuring data integrity while preserving visual imperceptibility. Departing from conventional approaches, this framework employs a hierarchical embedding strategy, enabling the creation of a verifiable chain of trust through linked e-pages in AR environments. This work offers three primary contributions: (1) A novel hash-based data-hiding technique that utilizes DWT to embed authentication information into AR content. (2) A hierarchical authentication mechanism that verifies content integrity through interlinked e-pages, supporting traceable verification in AR applications. (3) A robustness evaluation of the proposed scheme against tampering, measured via visual quality metrics, accompanied by a computational cost analysis to assess real-time performance feasibility.
The remainder of this paper is structured as follows: Section 2 reviews the related literature on AR content authentication and data-hiding techniques. Section 3 presents the proposed hash-based authentication framework in detail. Section 4 reports experimental results and discusses their implications. Finally, Section 5 concludes the study and outlines directions for future research.

2. Related Work

Recent developments in multimedia security have significantly enhanced the protection of digital content, with notable applications in fields such as augmented reality (AR) and medical image processing. Among the various security techniques, data-hiding and cryptographic methods have proven essential for ensuring content integrity, confidentiality, and resilience against tampering. In particular, hash-based data hiding has garnered considerable attention due to its effectiveness in generating unique, reversible authentication signatures while maintaining minimal perceptual distortion. The subsequent sections review key approaches that inform and support the design of the proposed authentication framework.

2.1. Hash-Based Data Hiding in Multimedia Security

Cryptographic hash functions are integral to ensuring data integrity and protecting intellectual property in digital environments. Chang et al. [4] introduced a data-hiding scheme based on one-way hash functions within the VQ-compressed domain to enhance both security and content integrity. Their approach embeds secret messages into cover images using a reversible data-hiding method that leverages cryptographic hash properties. Additionally, they incorporated the concept of Side-Matching Vector Quantization (SMVQ) to expand the embeddable indices in the VQ index table. A digest generated from the secret message using a hash function is embedded into the image, enabling verification of the message’s integrity during data extraction.
Shrimali et al. [5] proposed a fast and secure hash-based data-hiding technique employing the SAJ encryption–decryption scheme. Their method uses cryptographic hash functions to generate a secure digest of the secret data with a secret key, which is then embedded across various frames of a video. The approach prioritizes enhanced security, improved reliability, and low data payload size. Through the comparative evaluation of techniques including codeword substitution, steganography, watermarking, least significant bit (LSB) substitution, and traditional cryptographic methods, their technique demonstrates improved efficiency. Liu et al. [8] further advanced hash-based data hiding for image content, focusing on robustness against common forms of attack such as noise addition, compression, and tampering. Collectively, these studies illustrate the effectiveness of hash-based authentication mechanisms in securing multimedia content.

2.2. Traditional Digital Signature and Data-Hiding Techniques for Content Authentication

Traditional digital signature schemes, as discussed by Schneier et al. [10], rely on cryptographic hash functions to ensure authenticity, integrity, and security in a wide range of cryptographic protocols. These one-way hash functions generate a fixed-size digest from input data and serve critical roles in digital signatures, message authentication codes (MACs), and password storage.
Wu and Liu [11] developed an LSB substitution-based data-hiding method for embedding data within images and videos. Their technique enhances imperceptibility, increases embedding capacity, and improves robustness against attacks. Lin et al. [12] proposed an efficient data-hiding approach for tamper detection and recovery by embedding 2-bit authentication and 6-bit recovery data in each 2 × 2 image block. Their hierarchical framework improved tamper localization accuracy and demonstrated resilience against collage and vector quantization (VQ) attacks. Zhang et al. [13] introduced a self-embedding scheme using a reference-sharing mechanism, embedding reference data from different image regions to enhance both tamper detection and content recovery. Following the attacks examined by Chang et al. [14], Chang and Tai [15] improved the accuracy of tamper detection with a block-based hiding approach using 2-LSB embedding. While their method showed resilience against collage, VQ, and constant-average attacks, its localization accuracy was limited by the use of only 2-bit authentication data per block.
Sarreshtedari and Akhaee [16] developed a watermark-based data-hiding technique aimed at digital image protection and self-recovery. Their method treated the original image as source data and protected the output bitstream using suitable channel coding. The total watermark bit budget was divided into three categories: (1) source encoder output bits, (2) channel code parity bits, and (3) check bits. The check bits were used for tamper detection and supported channel decoder recovery of the source-encoded image. Qin et al. [17] proposed a self-embedding data-hiding scheme that integrated reference data interleaving with adaptive LSB layer selection, thereby improving embedding efficiency and recovery accuracy. Lin et al. [18] presented a hybrid authentication scheme employing watermark-based data hiding for AMBTC-compressed images. Their method encodes each image block into a compressed trio consisting of lower-level quantization, higher-level quantization, and a bitmap (BM). Authentication data is embedded either in the BM or in the quantization levels, depending on the block’s texture characteristics.
In follow-up research, Lin et al. [19] proposed a reversible data-hiding scheme that determines embeddable blocks within AMBTC-compressed images. They created four disjoint sets of selected blocks and embedded authentication data using various combinations of mean and standard deviation values. Tai and Liao [20] introduced a self-embedding watermarking method utilizing wavelet transforms to improve both image authentication and tampered region recovery. Later, Yu et al. [21] developed an adaptive bit-plane hiding scheme that uses a parity-check matrix to improve security and embedding efficiency. Nazir et al. [22] proposed a blind watermarking technique for RGB image authentication by integrating Singular Value Decomposition (SVD) and Discrete Wavelet Transform (DWT). Their method applied encryption to SVD components and incorporated a logistic map combined with a hyperchaotic system, enhancing resistance to attacks while minimizing false-positive rates.
In addition to conventional image authentication methods, recent studies have explored advanced data-hiding techniques for enhanced robustness and broader applicability. Li et al. [23] extended watermarking approaches to the domain of deep neural network (DNN) ownership verification. Their framework effectively mitigated linear functionality equivalence attacks and strengthened the resilience of existing white-box watermarking schemes. Tang et al. [24] proposed a two-stage reversible data-hiding method that embeds a robust watermark into selected Pseudo-Zernike moments, achieving a well-balanced trade-off between imperceptibility and robustness. Anand and Singh [25] employed a dual watermarking strategy for CT scan image authentication, embedding both Electronic Patient Record (EPR) text and medical images as watermarks to enhance security and data fidelity. Chang et al. [26] introduced a novel data-hiding technique based on a turtle shell pattern, where secret digits are embedded within pairs of cover image pixels, improving both visual quality and embedding capacity. Building on this concept, Chen et al. [27] incorporated the turtle shell method into AMBTC-compressed codes to embed authentication data more effectively, further advancing the security of compressed image content.

2.3. Secure Authentication in Augmented Reality (AR) Content

Lee et al. [28] developed a tracking system for augmented reality (AR) environments by embedding imperceptible markers within AR content. These markers, detectable through standard camera systems, served primarily as activation triggers rather than mechanisms for copyright protection. In a related vein, Li et al. [29] proposed an AR-based data-hiding framework that integrated deep learning techniques to facilitate covert communication. By leveraging the spatial characteristics of AR environments, their approach achieved high-capacity data embedding; however, the focus remained on secure data transmission rather than content authentication or copyright enforcement.
Lin et al. [30] introduced an authentication framework for AR content based on data hiding, with an emphasis on verifying the integrity of AR objects in interactive contexts. Their method embedded authentication data derived from specific AR content features, such as AR object attributes or defined e-book page regions into the transform domain using Discrete Wavelet Transform (DWT) techniques. This embedding was strategically performed in visually insignificant regions of the AR e-book to ensure robustness against visual distortions. The technique facilitated the effective detection of various tampering activities, including tone alterations, unauthorized insertions, and content replacements. During authentication, the embedded data is extracted and compared against freshly computed values from the current AR scene, with discrepancies indicating potential manipulation or unauthorized changes.
Bhattacharya et al. [31] examined the potential of blockchain technology to reduce asset replication risks in IoT-enabled smart manufacturing systems by establishing secure digital identities. While blockchain has gained traction in enterprise asset management, its adoption in AR-based electronic publishing remains nascent. The effective application of blockchain for AR content authentication would necessitate a robust identification scheme for AR objects; without such a framework, blockchain mechanisms offer limited utility for AR authentication.
Deshmukh and Bhagyashri [32] proposed a hash-based least significant bit (LSB) technique for video steganography that enhances confidentiality while mitigating susceptibility to steganalysis. This approach was extended by Manjula and Ajit [33], who embedded hash values in the LSBs of spatial-domain images to improve security in image steganography. Similarly, Dasgupta et al. [34] combined LSB embedding with SHA-256 hashing to authenticate video frames, ensuring imperceptibility alongside robust verification. Xiong et al. [35] advanced this concept through a DWT-based secret image-sharing scheme augmented with hash-based verification. Kunhu et al. [36] introduced a multi-watermarking system for medical image authentication that integrated DWT and cryptographic hash functions. Further enhancements were presented by Mahmood and Huang [37], who combined DWT with Singular Value Decomposition (SVD) and encryption techniques to enable secure embedding of sensitive information in cloud-based applications.
Despite these developments, most existing approaches lack a hierarchical authentication structure capable of linking content across multiple AR e-book pages, which poses significant challenges in ensuring content integrity at scale. The framework proposed in this study addresses this limitation by embedding hash values in a hierarchical and structured manner, thereby enabling seamless authentication and effective tampering detection across interconnected e-pages. Additionally, the proposed method is computationally efficient, making it well suited for real-time AR applications while preserving the visual and structural quality of digital content.

3. Proposed Method

The protection of multimedia digital content necessitates the deployment of robust security mechanisms to guarantee its authenticity, integrity, and resistance to tampering. This section presents a secure authentication framework tailored for augmented reality (AR) digital content, employing a hash-based data-hiding technique. The proposed approach integrates the Discrete Wavelet Transform (DWT) within the YCbCr color space to embed multiple hash signatures into both AR trigger areas and corresponding digital content regions. These embedded hash values function as authentication signatures, enabling hierarchical verification at multiple levels, including the content area (denoted as CA_E), which represents the AR trigger zones, and the associated electronic pages (e-pages) of the AR content. In this context, an "AR page" is defined as an "e-page," denoted by E_i, referring to each page that displays AR content. In other words, AR content consists of several e-pages, and each e-page is linked to the others according to a script pre-defined by the AR creator or owner, as shown in Figure 1.
The proposed framework adopts a structured embedding process, illustrated in Figure 2, in which distinct hash values are embedded at multiple levels to enable verification of both content areas and inter-page linkages within the AR digital content. By operating on the luminance (Y) channel of the YCbCr color space, the embedding strategy achieves low perceptual distortion while preserving robust security and tamper resistance. Furthermore, the framework strengthens authentication by embedding hash values that link consecutive e-pages, thereby constructing a verifiable chain of trust throughout the AR content sequence, as also depicted in Figure 2. To support clarity and consistency in the description of the proposed method, the key symbols and notations employed are defined in Table 1. These notations serve to facilitate a comprehensive understanding of the hierarchical authentication mechanism applied across interconnected e-pages.
The core components of the framework, including key-based hash generation and hash-based data embedding, are explained in detail in Section 3.1. Hash-based data extraction and authentication are explained in Section 3.2.

3.1. Hash-Based Data-Embedding Phase

The region of interest (ROI) corresponds to the content area of a given e-page, denoted as CA_E. Each e-page E is associated with a specific CA_E, while the remaining portion of the page is referred to as the region of non-interest (RONI). For ease of reference in subsequent discussions, we denote individual e-pages in a sequence as E_1 to E_n. Likewise, the content area CA_E, located within the ROI of an e-page, functions as the trigger area(s) that activate augmented reality (AR) objects, as illustrated in Figure 3. To uniquely identify each CA_E, a hash value is generated using Algorithm 1 with a predetermined 32-byte key. This hash is then embedded in a designated section of the corresponding e-page. Within our framework, each e-page E serves as a unit of digital multimedia content and is hierarchically structured from E_1 to E_n. The key used to generate the hash facilitates the relocation of the ROI across different e-pages, as shown in Figure 3.
This key is securely shared by the content owner with authorized recipients to enable reliable data authentication. During verification, Algorithm 1 is employed to compute the key-based hash of the extracted CA_E and compare it with the embedded hash. If the values match, the integrity of the content is confirmed. After the embedding process, the modified e-page is referred to as a stego e-page, abbreviated as SE.
Figure 2 illustrates a secure methodology for hierarchically embedding hash values derived from the content area CA_E1 of the first e-page E_1, as well as the linkage hash H_SE1 obtained from the corresponding stego e-page SE_1. In this framework, CA_E1, representing the ROI on E_1, is first hashed to produce H_s. This hash is then embedded into the RONI of E_1, resulting in the generation of SE_1. Subsequently, the linkage hash H_SE1, computed from SE_1, is embedded into the RONI of the next e-page in the sequence, E_j. This hierarchical embedding process continues iteratively, establishing a secure linkage from the previous e-page E_i to the current page E_j (where i < j) across the entire sequence from E_1 to E_n. This multi-level embedding strategy ensures both the integrity and authenticity of the content throughout the e-page sequence. The detailed process of the proposed framework is depicted in Figure 4 and described as follows:
The framework begins with the first e-page, E_1, which contains a region of interest (ROI) denoted as CA_E1. This content area plays a crucial role in the embedding process, as it is subject to later verification for both integrity and authenticity. Concurrently, any augmented reality (AR) content associated with this e-page, such as a 3D object, is linked to CA_E1 and subsequently processed.
Using Algorithm 1, a hash value is generated for CA_E1. This hash is then embedded into the RONI of E_1, resulting in the creation of the stego e-page, denoted as SE_1. Next, SE_1 is processed again via Algorithm 1 to compute its linkage hash, H_SE1. To establish a sequential connection between two consecutive pages (i.e., E_1 and E_2), this linkage hash is embedded into the RONI of the next e-page in the sequence, E_2. This process not only utilizes the symmetric property between two consecutive AR contents but also links E_i and E_j, where i < j, i = 1, and j = 2, forming the foundation for a hierarchical and verifiable chain of content authentication.
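To make the embedding order concrete, the following Python sketch outlines this chained construction. It is illustrative only: hash_content_area, hash_page, and embed_in_roni are assumed helper names standing in for the key-based hashing (Algorithm 1) and the DWT-based embedding presented later in this section, not functions defined by the paper.

```python
def build_authenticated_chain(e_pages, content_areas, key):
    # Sketch of the hierarchical embedding order; all helper names are assumptions.
    stego_pages = []
    pending_linkage_hash = None                       # H_SE(z-1), once available
    for page, ca in zip(e_pages, content_areas):
        payload = [hash_content_area(ca, key)]        # H_s of CA_Ez (Algorithm 1)
        if pending_linkage_hash is not None:
            payload.append(pending_linkage_hash)      # linkage hash of the previous stego page
        stego = embed_in_roni(page, payload)          # DWT-based hiding in the RONI
        pending_linkage_hash = hash_page(stego, key)  # H_SEz, embedded into the next e-page
        stego_pages.append(stego)
    return stego_pages
```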
In Algorithm 1, the hash value is calculated as follows:
Algorithm 1: Key-based hash algorithm.
Input: E_z (1 ≤ z ≤ n): sequence of e-pages of size u × v; (x, y): coordinates of CA_Ez
Output: H_s (1 ≤ z ≤ n): hierarchical hash value of CA_Ez; H_SEz (1 ≤ z ≤ n): hierarchical linkage hash of SE_z
Step 1: Select CA_Ez from E_z and split CA_Ez(i, j) into three color channels
for each e-page z from 1 to n do:
    CA_Ez = E_z[y : y + v, x : x + u]
    for each pixel (i, j) in E_z do:
        R(i, j) = E_z(x + i, y + j, 0)
        G(i, j) = E_z(x + i, y + j, 1)
        B(i, j) = E_z(x + i, y + j, 2)
        CA_Ez(i, j) = (R(i, j), G(i, j), B(i, j))
    end for
    if z = 1 do:
        for each pixel (i, j) in CA_E1 do:
            R(i, j) = CA_E1(i, j, 0)
            G(i, j) = CA_E1(i, j, 1)
            B(i, j) = CA_E1(i, j, 2)
            Step 1.1: Normalize the pixel values of each channel
                R_norm(i, j) = R(i, j) / 255
                G_norm(i, j) = G(i, j) / 255
                B_norm(i, j) = B(i, j) / 255
            Step 1.2: Set the initial H accumulators for each channel
                H_R = 0, H_G = 0, H_B = 0
            Step 1.3: Compute the H values for each channel
                WM(i, j) = (i + 1) × (j + 1)
                H_R = H_R + R_norm(i, j) × WM(i, j)
                H_G = H_G + G_norm(i, j) × WM(i, j)
                H_B = H_B + B_norm(i, j) × WM(i, j)
            Step 1.4: To limit the size of H, apply modulo M to each channel hash accumulator, where M = 2^8 keeps the hash within the 8-bit integer range
                H_R = H_R mod M
                H_G = H_G mod M
                H_B = H_B mod M
            Step 1.5: Concatenate H_R, H_G, and H_B, derived from the normalized pixel values of the three channels, to form the final H_s
                H = H_R ∥ H_G ∥ H_B
                H_s = SHA-256(H)
        end for
    else do:
        Step 1.6: Compute the linkage hash H_SEz corresponding to SE_z
    end if
    z = z + 1
end for
Step 2: Output the hierarchical hash H_s (1 ≤ z ≤ n) and H_SEz (1 ≤ z ≤ n)
end
The following example demonstrates the key-based hashing procedure outlined in Algorithm 1. In this case, we assume that E_1 is an image of dimensions 8 × 8 × 3 pixels, as illustrated in Figure 5. The region of interest, CA_E1, corresponds to a 2 × 2 × 3 pixel block within this image, highlighted by a bold black border in Figure 5. All subsequent processing steps are confined to this designated 2 × 2 block. In Step 1 of Algorithm 1, the region CA_E1(i, j) is first separated into its three constituent color channels: red (R), green (G), and blue (B). As described in Step 1.1, each color channel is then normalized by dividing pixel intensity values by 255, producing values in the range [0.0, 1.0]. For instance, if the R channel pixel at position (0, 0) has a value of 150, the normalized value is R_norm(0, 0) = 150 / 255 = 0.5882. The normalized results for the pixels within the black-framed region are presented in Figure 6, corresponding to the pixel values shown in Figure 5.
In Step 1.2, the hash accumulator is initialized independently for each channel. Subsequently, as described in Step 1.3, the hash values for each channel are computed by iterating over all pixels within CA_E1, specifically from i = 0 to w − 1 and j = 0 to h − 1. For example, consider the R channel at pixel location (0, 0), where i = 0, j = 0, the normalized value is R_norm(0, 0) = 0.58823, and the corresponding weight matrix value is WM(0, 0) = 1. The hash contribution at this pixel is computed as H_R(0, 0) = H_R + R_norm(0, 0) × WM(0, 0) = 0 + 0.58823 × 1 = 0.5882. This value is then used to update the hash accumulator for the R channel. The same procedure is applied to the G and B channels to compute H_G and H_B, respectively. This operation is repeated across all pixels in the block using the corresponding values from the weight matrix (WM) shown in Figure 7. It is important to note that the weight matrix is used to scale the normalized values R_norm, G_norm, and B_norm, which are derived from CA_Ez of size w × h × 3, and the linkage hash derived from SE_z of size u × v × 3, during the hash computation in Step 1.3. This scaling ensures that the resulting hash values of CA_Ez and the linkage hash of SE_z are compatible with the C_LL coefficients. As a result, both robustness and imperceptibility are maintained, as demonstrated in Figure 8.
In Step 1.5, the hash values H_R, H_G, and H_B from the R, G, and B channels are concatenated to derive the final hash value, denoted as H_s. Specifically, the accumulated values are approximated as H_R = 3.4431 ≈ 3, H_G = 4.7294 ≈ 5, and H_B = 3.2784 ≈ 3. Concatenating the rounded channel values yields H = H_R ∥ H_G ∥ H_B = 3 ∥ 5 ∥ 3 = 353. This concatenated value H is subsequently processed using the SHA-256 algorithm, yielding the final hash value H_s of CA_E1: H_s = SHA-256(353) = "459535faa370a3b5f8b87203b089623c7aeb9325abf241ec8a685b9c325047a3". Once the CA_Ez of each e-page has been processed sequentially, Algorithm 2 is executed to perform data embedding.
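A minimal Python sketch of this channel-wise accumulation is shown below. The function name, the use of NumPy, and the exact string encoding of the concatenated value H before hashing are our assumptions; the paper specifies the arithmetic but not an implementation.

```python
import hashlib
import numpy as np

def key_based_hash(ca_block: np.ndarray, M: int = 2 ** 8) -> str:
    # ca_block: w x h x 3 RGB content area (CA_Ez), dtype uint8
    norm = ca_block.astype(np.float64) / 255.0                         # Step 1.1: normalize each channel
    h, w, _ = norm.shape
    i_idx, j_idx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    wm = (i_idx + 1) * (j_idx + 1)                                     # weight matrix WM(i, j) = (i+1)(j+1)
    accumulators = [(norm[:, :, c] * wm).sum() % M for c in range(3)]  # Steps 1.2-1.4
    concatenated = "".join(str(int(round(a))) for a in accumulators)   # Step 1.5: H = H_R || H_G || H_B
    return hashlib.sha256(concatenated.encode()).hexdigest()           # H_s = SHA-256(H)

# For the 2 x 2 toy example above, the concatenated value is "353",
# and H_s is the SHA-256 digest of that string.
```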
In Algorithm 2, the hash-based data-hiding strategy is employed to conceal the H_s derived from CA_Ez (obtained via Algorithm 1) in the RONI of E_z. It is important to note that the same procedure is used to embed H_SEz, derived from SE_z, into the RONI of E_j. The details of Algorithm 2 are outlined as follows:
Algorithm 2: Hash-based data embedding.
Input: E_z (1 ≤ z ≤ n): sequence of e-pages of size u × v; H_s (1 ≤ z ≤ n): sequence of hash values of CA_Ez; α: blending factor; n: number of hierarchical e-pages
Output: SE_z (1 ≤ z ≤ n): stego e-pages with hierarchical hash embedding
Step 1: Convert E_z from RGB to YCbCr color space and separate the luminance (Y) component E_z,Y for data hiding; also compute Cb and Cr.
for each e-page z from 1 to n do:
    if z = 1 do:
        E_1,Y = 0.299 × R + 0.587 × G + 0.114 × B
        Cb = −0.168736 × R − 0.331264 × G + 0.5 × B + 128
        Cr = 0.5 × R − 0.460000 × G − 0.040000 × B + 128
        Step 1.1: Normalize H_s obtained from Algorithm 1 to match the intensity range of the LL subband coefficients of E_1 for smoother embedding:
            H_sn = ((H_s − min(H_s)) / (max(H_s) − min(H_s))) × (max(C_LL) − min(C_LL)) + min(C_LL)
        Step 1.2: Apply a DWT to E_1,Y to decompose it into the C_LL, C_LH, C_HL, and C_HH subbands; select the LL subband C_LL for data embedding.
        for each pixel (i, j) in E_1,Y do:
            for each row pair (E_1,Y(i, 2j), E_1,Y(i, 2j + 1)) do:
                Average_i = (E_1,Y(i, 2j) + E_1,Y(i, 2j + 1)) / 2
                Detail_i = (E_1,Y(i, 2j) − E_1,Y(i, 2j + 1)) / 2
            for each column pair (E_1,Y(2i, j), E_1,Y(2i + 1, j)) do:
                Average_j = (E_1,Y(2i, j) + E_1,Y(2i + 1, j)) / 2
                Detail_j = (E_1,Y(2i, j) − E_1,Y(2i + 1, j)) / 2
            Step 1.2.1: Divide C_LL into blocks of size u × v and calculate the mean intensity μ_LL and standard deviation σ_LL of each block to adaptively control the embedding strength, where ε is a small constant to prevent division by zero.
            for each block k in C_LL do:
                β = σ_LL / (μ_LL + ε)
                Step 1.2.2: Embed H_sn into C_LL by modifying each block C_LL,k using the adaptive blending factor α and weight β:
                    C′_LL,k(i, j) = C_LL,k(i, j) + α × β × H_sn
            end for
        end for
        Step 1.3: Apply the inverse DWT to the modified C′_LL,k, together with the original C_LH, C_HL, and C_HH subbands, to reconstruct the Y component E′_1,Y with embedded data:
            E′_1,Y(2i, 2j) = C′_LL,k(i, j) + C_LH(i, j) + C_HL(i, j) + C_HH(i, j)
            E′_1,Y(2i, 2j + 1) = C′_LL,k(i, j) + C_LH(i, j) − C_HL(i, j) − C_HH(i, j)
            E′_1,Y(2i + 1, 2j) = C′_LL,k(i, j) − C_LH(i, j) + C_HL(i, j) − C_HH(i, j)
            E′_1,Y(2i + 1, 2j + 1) = C′_LL,k(i, j) − C_LH(i, j) − C_HL(i, j) + C_HH(i, j)
        Step 1.4: Combine E′_1,Y with the original Cb and Cr channels, then convert back to RGB to obtain the stego e-page SE_1:
            R = E′_1,Y + 1.402 × (Cr − 128)
            G = E′_1,Y − 0.344136 × (Cb − 128) − 0.714136 × (Cr − 128)
            B = E′_1,Y + 1.772 × (Cb − 128)
    end if
Step 2: Repeat Step 1 through Step 1.4 until all e-pages have been processed sequentially by updating z = z + 1.
end for
Step 3: Output SE_z (1 ≤ z ≤ n) in hierarchical order with the hashes embedded.
end
A detailed example is provided to illustrate the data-embedding process. Initially, the RGB pixels of e-page 1 are converted into the YCbCr color space, and the luminance component E_1,Y is extracted, as depicted in Figure 9, for demonstration purposes. Only E_1,Y is used in this example; however, the same procedure can be extended to the corresponding Cb and Cr components, which can subsequently be employed for data embedding.
In Step 1 of Algorithm 2, a comprehensive example demonstrates the conversion of E_1 from the RGB color space to the YCbCr color space, followed by the extraction of the luminance component E_1,Y for data hiding with minimal distortion. The luminance value for the pixel at position (0, 0) in E_1 is computed as E_1,Y(0, 0) = 0.299 × 150 + 0.587 × 100 + 0.114 × 50 ≈ 109. The resulting luminance component E_1,Y for E_1 is shown in Figure 10. Similarly, the corresponding chrominance components are calculated as Cb = −0.168736 × 150 − 0.331264 × 100 + 0.5 × 50 + 128 ≈ 95 and Cr = 0.5 × 150 − 0.460000 × 100 − 0.040000 × 50 + 128 = 155. These calculations yield the final values of the Cb and Cr components, which, along with the luminance component E_1,Y, are used in the subsequent data-embedding operations.
The linkage hash is derived to establish a hierarchical connection between consecutive e-pages. This linkage ensures the authentication and integrity of the e-page content by securing and verifying the relationship between the hierarchical e-pages, thereby enabling multi-level authentication. The hash value and the linkage hash are subsequently normalized to match the intensity range of C_LL. In Step 1.1 of Algorithm 2, the hash value H_s obtained through Algorithm 1 is normalized to align with the intensity range of C_LL derived from the luminance component E_1,Y, facilitating smoother data embedding.
For the purpose of this example, we assume that CA_E1 is an RGB region extracted as a 2 × 2 block for each color channel. This region, of dimensions 2 × 2 × 3 pixels, is enclosed by a bold black border in Figure 6. To process the hash value H_s, we begin by splitting its hexadecimal string into pairs of two digits, each pair representing one byte (eight bits). The resulting array H_s[i] is indexed from 0 to 31. In Step 1.2, the DWT is applied to E_1,Y to decompose it into the C_LL, C_LH, C_HL, and C_HH subbands, of which C_LL is selected for embedding the hash because it is the most robust location in the frequency domain, as illustrated in Figure 11 and Figure 12.
Next, each hexadecimal pair of H_s[i] is converted into its decimal equivalent; for instance, H_s[0] = 45₁₆ = 4 × 16¹ + 5 × 16⁰ = 69, H_s[1] = 95₁₆ = 149, H_s[2] = 35₁₆ = 53, and H_s[3] = fa₁₆ = 15 × 16¹ + 10 × 16⁰ = 250. This process continues for all hexadecimal pairs in the string, yielding the byte sequence H_s = "69, 149, 53, 250, 163, 112, 163, 181, 248, 184, 114, 3, 176, 137, 35, 199, 174, 185, 50, 90, 191, 36, 30, 200, 166, 133, 185, 156, 50, 80, 47, 163" for CA_E1 obtained via Algorithm 1. Subsequently, each value H_s[i] is normalized to obtain the corresponding normalized hash value H_sn[i], where max(H_s) = 250, min(H_s) = 3, max(C_LL) = 255, and min(C_LL) = 180. For example, the first value is normalized as H_sn[0] = ((69 − 3) / (250 − 3)) × (255 − 180) + 180 ≈ 200. This process is repeated for all H_s[i], resulting in the final normalized hash values H_sn = "200, 227, 237, 259, 232, 215, 232, 237, 258, 238, 216, 181, 236, 223, 191, 243, 235, 238, 196, 208, 240, 191, 190, 243, 232, 222, 238, 229, 196, 205, 195, 232". Thus, the normalized hash value H_sn is obtained and is ready for data embedding in the RONI as part of the multi-level authentication process in Algorithm 2.
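The byte-splitting and min-max normalization above can be sketched in a few lines of Python; the function name and return type are illustrative assumptions.

```python
def normalize_hash_bytes(hs_hex: str, c_ll_min: float, c_ll_max: float) -> list:
    # Split the SHA-256 hex digest into 32 bytes, then map them into [min(C_LL), max(C_LL)].
    hs = [int(hs_hex[k:k + 2], 16) for k in range(0, len(hs_hex), 2)]
    lo, hi = min(hs), max(hs)
    return [((v - lo) / (hi - lo)) * (c_ll_max - c_ll_min) + c_ll_min for v in hs]

digest = "459535faa370a3b5f8b87203b089623c7aeb9325abf241ec8a685b9c325047a3"
print(round(normalize_hash_bytes(digest, 180, 255)[0]))  # ~200 for the first byte (0x45 = 69)
```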
The normalized hash values H_sn are embedded into the C_LL subband by modifying each coefficient of C_LL using the adaptive blending (α) and weight (β) parameters, where C_LL is divided into blocks of size 2 × 2, as shown in Figure 13. These parameters balance the trade-off between imperceptibility and robustness of the embedded data. Specifically, the α parameter acts as a scaling factor, governing the strength (and thus the visibility) of the embedded bitstream within the e-page; it ensures that the hidden data does not significantly distort the visual quality of the stego e-page. The β parameter, in turn, serves as a block-adaptive weight derived from the local intensity statistics of each block.
To illustrate the data-embedding process, we provide an example in which the hash values are embedded into the first block coefficient of C_LL, denoted C_LL(0, 0), with α = 0.01 and β = 0.128 for block 1, as shown in Figure 13. For block 1 (depicted in Figure 13a), the mean intensity μ_LL, standard deviation σ_LL, and weight β are calculated as follows:
μ_LL = (233 + 180 + 255 + 252) / 4 = 230,
σ_LL = sqrt(((233 − 230)² + (180 − 230)² + (255 − 230)² + (252 − 230)²) / 4) ≈ 30,
β = 30 / (230 + 10⁻⁵) ≈ 0.128.
Once the values of μ_LL, σ_LL, and β are determined, the first block coefficient C_LL(0, 0) is adjusted by the following equation to embed the hash value, giving the modified block 1 of C_LL; the same operation is repeated for blocks 2 to 4, as illustrated in Figure 14:
C′_LL,k(0, 0) = 233 + 0.01 × 0.128 × 200 = 233.256 ≈ 233.
Additionally, the inverse DWT (IDWT) is applied to the modified C_LL,k (i.e., C′_LL,k), as shown in Figure 15, along with the original C_LH, C_HL, and C_HH subbands, to reconstruct E′_1,Y using the equations in Step 1.3, where Average_i represents the low-pass filter used to reconstruct the approximate low-frequency coefficients and Detail_i represents the high-pass filter used to retain the high-frequency details. Finally, E′_1,Y is combined with the corresponding Cb and Cr channels and converted back to RGB to obtain SE_1, as shown in Figure 16. As an example, for pixel (0, 2) in Figure 16, E′_1,Y(0, 2) = 95, Cb(0, 2) = 214, and Cr(0, 2) = 108. Applying the equations in Step 1.4 to convert YCbCr back to RGB for each channel yields R(0, 2) = 1.164 × 95 − 16 + 1.402 × (108 − 128) ≈ 67, G(0, 2) = 1.164 × 95 − 16 − 0.344136 × (214 − 128) − 0.714136 × (108 − 128) ≈ 79, and B(0, 2) = 1.164 × 95 − 16 + 1.772 × (214 − 128) ≈ 247; hence the reconstructed pixel (0, 2) of SE_1 is (R, G, B) = (67, 79, 247).
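The complete single-page embedding pipeline of Algorithm 2 can be approximated by the NumPy sketch below. It uses a one-level Haar-style DWT that matches the reconstruction equations of Step 1.3; the block size, the cyclic reuse of the 32 normalized hash values across blocks, the assumption of even image dimensions, and all function names are our own choices rather than details fixed by the paper.

```python
import numpy as np

def haar_dwt2(y):
    # One-level averaging/differencing transform consistent with Step 1.3.
    a = (y[:, 0::2] + y[:, 1::2]) / 2.0
    d = (y[:, 0::2] - y[:, 1::2]) / 2.0
    return ((a[0::2, :] + a[1::2, :]) / 2.0,   # C_LL
            (a[0::2, :] - a[1::2, :]) / 2.0,   # C_LH
            (d[0::2, :] + d[1::2, :]) / 2.0,   # C_HL
            (d[0::2, :] - d[1::2, :]) / 2.0)   # C_HH

def haar_idwt2(ll, lh, hl, hh):
    # Inverse transform, mirroring the four reconstruction equations of Step 1.3.
    y = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    y[0::2, 0::2] = ll + lh + hl + hh
    y[0::2, 1::2] = ll + lh - hl - hh
    y[1::2, 0::2] = ll - lh + hl - hh
    y[1::2, 1::2] = ll - lh - hl + hh
    return y

def embed_hash(e_page_rgb, hs_norm, alpha=0.01, block=2, eps=1e-5):
    # e_page_rgb: uint8 array of even height and width; hs_norm: the 32 normalized hash values.
    rgb = e_page_rgb.astype(np.float64)
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    Y = 0.299 * R + 0.587 * G + 0.114 * B                     # luminance channel used for hiding
    Cb = -0.168736 * R - 0.331264 * G + 0.5 * B + 128
    Cr = 0.5 * R - 0.46 * G - 0.04 * B + 128                  # coefficients as given in Algorithm 2
    ll, lh, hl, hh = haar_dwt2(Y)
    k = 0
    for r in range(0, ll.shape[0] - block + 1, block):        # Steps 1.2.1-1.2.2: blockwise embedding
        for c in range(0, ll.shape[1] - block + 1, block):
            blk = ll[r:r + block, c:c + block]
            beta = blk.std() / (blk.mean() + eps)             # adaptive weight beta
            ll[r:r + block, c:c + block] = blk + alpha * beta * hs_norm[k % len(hs_norm)]
            k += 1
    Y2 = haar_idwt2(ll, lh, hl, hh)                           # Step 1.3: inverse DWT
    R2 = Y2 + 1.402 * (Cr - 128)                              # Step 1.4: YCbCr back to RGB
    G2 = Y2 - 0.344136 * (Cb - 128) - 0.714136 * (Cr - 128)
    B2 = Y2 + 1.772 * (Cb - 128)
    return np.clip(np.stack([R2, G2, B2], axis=-1), 0, 255).astype(np.uint8)
```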

3.2. Hash-Based Data Extraction and Authentication

This sub-section presents the process of securely extracting the embedded hash value and linkage hash from the preceding stego e-pages sequentially and verifying them to ensure digital content authenticity. The extracted hash is compared with the recomputed hash from the corresponding content area or precedent stego e-page to detect any unauthorized modifications. By validating the integrity of the embedded data through this hash-based comparison, the proposed method effectively supports reliable authentication and tamper detection within augmented reality digital content.
Figure 17 illustrates the hash-based data extraction and authentication framework, which verifies the integrity of the hierarchical e-page and its associated AR digital content by extracting and comparing embedded hash values. This ensures that no unauthorized modifications have occurred.

3.2.1. Hash-Based Data Extraction

The extraction process is designed to closely mirror the embedding procedure, thereby enabling the accurate retrieval of the hash from the frequency domain of the Y component (denoted as E_1,Y) within the C_LL subband. During the rendering or playback of AR digital content, the embedded hash is extracted, and a new hash is computed from the corresponding CA_SE of the stego e-page where the associated 3D object is displayed. These two hash values, the extracted and the newly generated, are then compared. A match between them confirms the authenticity of the digital content, including the stego e-page SE and CA_SE, whereas a mismatch indicates potential tampering or unauthorized modification.
To facilitate this verification, a hash-based data extraction strategy is employed to retrieve the hash value from the RONI of SE_1. The extraction process begins with the stego e-page SE_1 and the known blending factor α. Initially, SE_1 is converted from the RGB color space to YCbCr, and the luminance (Y) component, E′_1,Y, is isolated. A DWT is then applied to E′_1,Y to obtain the C′_LL subband, which contains the embedded hash information. For hash retrieval, both C′_LL and the original C_LL are partitioned into blocks of dimensions u × v. For each block, the mean intensity μ_LL and the standard deviation σ_LL are computed to determine the corresponding weight β. The embedded hash H_sn is extracted by reversing the embedding process outlined in Algorithm 2, thereby ensuring precise recovery of the originally embedded data. These partial hash values are then aggregated to reconstruct the complete hash H_sn, followed by a de-normalization step.
Subsequently, the reconstructed hash H_s′ is compared against the original hash H_s to verify content authenticity and integrity. The proposed authentication framework employs the SHA-256 cryptographic hash function, which offers robust security properties such as determinism, pre-image resistance, second pre-image resistance, and collision resistance. These properties ensure that identical inputs yield identical hash outputs. Therefore, if the e-page has been altered, the recalculated hash will differ from the extracted one, prompting the system to flag the content as unauthenticated.
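Under the assumption that the verifier can reproduce the original C_LL blocks, the reversal of the additive embedding can be sketched as follows; the function name and the per-block averaging used to recover each normalized value are our reading of the scheme, not a verbatim reproduction of the authors' implementation.

```python
def extract_hash_values(stego_ll, original_ll, alpha=0.01, block=2, eps=1e-5, n_values=32):
    # Reverse of C'_LL = C_LL + alpha * beta * H_sn, block by block.
    recovered = []
    for r in range(0, original_ll.shape[0] - block + 1, block):
        for c in range(0, original_ll.shape[1] - block + 1, block):
            blk = original_ll[r:r + block, c:c + block]
            beta = blk.std() / (blk.mean() + eps)
            delta = (stego_ll[r:r + block, c:c + block] - blk).mean()
            recovered.append(delta / (alpha * beta))        # recovered H_sn value for this block
            if len(recovered) == n_values:
                return recovered                            # de-normalization follows separately
    return recovered
```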

3.2.2. Multi-Level Data Authentication

The proposed framework incorporates a mechanism for detecting tampered regions within electronic pages (e-pages). In instances where the hash comparison fails, the system facilitates the localization of the tampered region and alerts the user to possible unauthorized modifications. The integrity of e-pages is maintained through linkage data authentication, which verifies embedded hash values sequentially across a series of e-pages denoted E_1, E_2, and so on. The process commences with the stego e-page SE_1 and its corresponding e-page E_2, which contains the embedded linkage hash H_SE1 of SE_1. Initially, the content area CA_SE1 is extracted from SE_1, and its hash H_c,s1 is computed using Algorithm 1. This computed hash is then compared with the previously extracted embedded hash H_s′. A match confirms the integrity of SE_1.
Subsequently, a DWT is applied to E_2, decomposing it into its subbands C_LL, C_LH, C_HL, and C_HH. The embedded linkage hash H_SE1′ is then extracted from a designated region within the C_LL subband of E_2. Concurrently, the hash H_SE1 is recomputed from SE_1 using Algorithm 1 to serve as the expected linkage hash for E_2. If the extracted and recomputed hashes match, the linkage between E_1 and E_2 is successfully verified. To further authenticate the content of E_2, the content area CA_SE2 is extracted from SE_2, and its corresponding hash H_s2 is computed via Algorithm 1. This newly computed hash is compared with the embedded hash H_s2′; a successful match validates the integrity of the content area in E_2. The hash H_SE2, generated from SE_2, is then used as the reference for verifying the subsequent e-page (i.e., E_j with j = 3).
This sequential authentication approach ensures that each e-page not only maintains its own data integrity but also remains properly linked within the series E_i and E_j, where i < j, thereby preserving the security of the augmented reality (AR) digital content chain. By verifying hash values at each stage, the framework guarantees the authenticity and continuity of AR multimedia content. Any discrepancy in hash values is treated as evidence of tampering, thereby flagging a potential breach of security.
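The sequential check described above can be summarized by the following sketch, where extract_embedded_hashes, hash_content_area, content_area_of, and hash_page are assumed helpers corresponding to Sections 3.1 and 3.2.1, not functions specified by the paper.

```python
def verify_chain(stego_pages, key):
    # Walk the e-page sequence, checking each content area and each linkage hash.
    expected_linkage = None
    for idx, page in enumerate(stego_pages, start=1):
        content_hash, linkage_hash = extract_embedded_hashes(page)      # extracted from the RONI
        if hash_content_area(content_area_of(page), key) != content_hash:
            return f"e-page {idx}: content area tampered"
        if expected_linkage is not None and linkage_hash != expected_linkage:
            return f"e-page {idx}: broken linkage to e-page {idx - 1}"
        expected_linkage = hash_page(page, key)                         # reference for the next page
    return "chain verified"
```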

4. Experimental Results and Discussion

This section provides a comprehensive analysis of the experimental results obtained from the conducted evaluations. The assessment focuses on three key aspects—image quality, robustness against various attacks, and authentication accuracy—in order to validate the effectiveness and reliability of the proposed framework.

4.1. Image Quality

To evaluate the effectiveness of the proposed framework, a series of experiments was conducted on a diverse set of e-pages, including static images, 3D models, and video clips. The tested e-pages, along with their parameter variations (e.g., block size of 64 × 64, α = 0.01, and β ranging from 0.0229 to 0.9019), are illustrated in Figure 18. Furthermore, Figure 19, Figure 20, Figure 21, Figure 22, Figure 23 and Figure 24 present the results of tests performed on two representative e-pages subjected to six distinct types of attacks. Additional experiments were carried out using the prototype system described in Section 3, with detailed results for the test e-pages under various attack scenarios provided in Section 4.2.
The visual quality of the stego e-page (SE), which carries the authentication data corresponding to the content area, is quantitatively assessed using the Peak Signal-to-Noise Ratio (PSNR) and Mean Square Error (MSE), defined in Equations (1) and (2), where u × v denotes the dimensions of both the stego e-page SE(i, j) and the original e-page E(i, j).
MSE = (1 / (u × v)) Σ_{i=1}^{u} Σ_{j=1}^{v} (SE(i, j) − E(i, j))²,    (1)
PSNR is defined as follows in Equation (2):
PSNR = 10 log₁₀ (255² / MSE),    (2)
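For reference, Equations (1) and (2) translate directly into a few lines of NumPy; the function name is illustrative.

```python
import numpy as np

def psnr(stego: np.ndarray, original: np.ndarray) -> float:
    # Equations (1) and (2): mean squared error over all pixels, then PSNR in dB.
    mse = np.mean((stego.astype(np.float64) - original.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
```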
In addition, the robustness of the proposed framework was evaluated under common image processing attacks, including JPEG compression, Gaussian noise, and scaling.

4.2. Robustness Against Attacks

To further evaluate the performance of our proposed framework on various attacks, two linked e-pages were prepared as shown in Figure 18. Additionally, six common image-processing attacks were applied to e-page 1, as depicted in Figure 19. It is noted that a Gaussian attack involves introducing statistically distributed noise into an image to simulate real-world signal degradation, and higher σ values (e.g., 10%) result in greater image degradation and lower PSNR compared with lower σ values (e.g., 5%). A JPEG attack involves applying lossy image compression, where the quality factor Q controls the extent of compression—lower Q values (e.g., 50) result in more severe compression artifacts and reduced image quality, while higher Q values (e.g., 60) preserve more detail. A Type-1 cropping attack replaces the original image with a forged one, effectively simulating unauthorized content substitution. In contrast, a Type-2 cropping attack duplicates an existing image within the same e-page but places it in a different position, mimicking internal content manipulation.
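The noise and compression attacks can be reproduced with standard tooling, as in the sketch below; interpreting σ as a percentage of the 0-255 intensity range and the use of Pillow for the JPEG round trip are our assumptions, not details specified in the paper.

```python
import io
import numpy as np
from PIL import Image

def gaussian_attack(img: np.ndarray, sigma_percent: float) -> np.ndarray:
    # Additive Gaussian noise with sigma expressed as a percentage of the 0-255 range.
    noise = np.random.normal(0.0, sigma_percent / 100.0 * 255.0, img.shape)
    return np.clip(img.astype(np.float64) + noise, 0, 255).astype(np.uint8)

def jpeg_attack(img: np.ndarray, quality: int) -> np.ndarray:
    # Round-trip JPEG compression with quality factor Q (e.g., 50 or 60).
    buf = io.BytesIO()
    Image.fromarray(img).save(buf, format="JPEG", quality=quality)
    return np.array(Image.open(buf))
```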
Table 2 summarizes the image quality metrics and Structural Similarity Index Measure (SSIM) for e-page 1 under six different attack conditions as well as without any attacks.
In the absence of any attacks, the framework achieves a high Peak Signal-to-Noise Ratio (PSNR) of 45.80 dB, indicating excellent visual fidelity, along with an SSIM of 0.99, reflecting a strong structural similarity to the original content. Under Gaussian noise—with standard deviation σ ranging from 5% to 10% to simulate varying noise levels—the PSNR decreases moderately to 20.38 dB, while the SSIM remains above 0.60, thereby demonstrating the system’s resilience against noise interference. Under JPEG compression attacks with quality factors Q between 50 and 60, a slight reduction in image quality is observed, with PSNR values remaining above 31.17 dB and SSIM exceeding 0.93. These results confirm the framework’s effectiveness in preserving content integrity despite compression and noise-related distortions.
In contrast, when stego e-page 1 undergoes “Type-1” and “Type-2” cropping attacks—where portions of the e-page content are replaced with the forged images highlighted in red—the PSNR drops below 19 dB, indicating significant content alteration, as illustrated in Figure 19. Corresponding histograms provided in Figure 20 further corroborate the presence of modifications under all six attack scenarios.
Figure 18. (a) Original e-page 1; (b) original e-page 2.
Figure 19. Stego e-page 1 under six attacks.
Figure 20. Comparing the histogram shifting of stego e-page 1 under six attacks.

4.3. Authentication Evaluation

Table 3 presents the performance evaluation results for stego e-page 2. As indicated in the table, the embedded hash demonstrates strong robustness against various attacks while preserving high visual quality. The proposed framework effectively ensures the authenticity of AR digital content without compromising usability.
Although the PSNR values for stego e-page 2 are generally lower than those observed for e-page 1, the framework continues to exhibit resilience. Under no-attack conditions, stego e-page 2 achieves a PSNR of 41.27 dB and an SSIM of 0.99, indicating excellent image fidelity. When subjected to Gaussian noise attacks with a standard deviation of 5% or 10%, the PSNR decreases to 26.22 dB, while the SSIM remains above 0.65, demonstrating robustness to noise perturbations. Under JPEG compression attacks with quality factors Q of 50 or 60, which balance compression and visual quality, the system attains a PSNR of 31.69 dB and an SSIM exceeding 0.94. These results confirm the framework’s effectiveness in maintaining content integrity under common compression scenarios. In contrast, when stego e-page 2 undergoes “Type-1” and “Type-2” cropping attacks—where portions of the e-page are replaced by the forged images highlighted in red—the PSNR drops below 21 dB and the SSIM falls below 0.66. These significant reductions indicate severe content modification, as illustrated in Figure 21.
Figure 21. Stego e-page 2 (a) under six attacks (bg).
The corresponding histograms presented in Figure 22 further illustrate the effects of the six attacks on the stego e-page. Figure 22a depicts the original stego page without any attack, characterized by distinct peak distributions across the RGB channel histograms. Figure 22b,c show slight variations in the histograms following Gaussian noise attacks (σ = 5% and 10%, respectively), indicating minor perturbations. Figure 22d,e illustrate changes in pixel value distributions resulting from JPEG compression with quality factors of Q = 50 and 60, respectively. In contrast, Figure 22f,g reveal markedly altered histogram patterns corresponding to the two replacement attacks, highlighting substantial content modifications.
Typical examples of successful and failed system verifications are illustrated in Figure 23 and Figure 24, respectively. Figure 23a,b depict successfully verified stego e-pages 1 and 2, each displaying a green “Verified” indicator at the bottom of the page. This confirmation verifies data integrity and triggers the corresponding AR object rendering. Figure 24 presents six distinct verification failure scenarios: Figure 24a,c show pages 1 and 2 subjected to Gaussian noise attacks (σ = 5%), highlighting how noise interference results in authentication failure; Figure 24b,d demonstrate the effects of JPEG compression attacks (Q = 50 and 60) on pages 1 and 2, illustrating the compromise of data integrity due to compression; finally, Figure 24e,f illustrate verification failures caused by two different replacement attacks.
Figure 22. Comparison of histogram shifting of stego e-page 2 under six attacks.
Figure 23. (a) Verified stego e-page 1; (b) verified stego e-page 2.
Figure 24. (a) Unverified stego e-page 1 (Gaussian noise (σ = 5%) attack); (b) unverified stego e-page 1 (JPEG compression (Q = 50 and 60) attack); (c) unverified stego e-page 2 (Gaussian noise (σ = 5%) attack); (d) unverified stego e-page 2 (JPEG compression (Q = 50) attack); (e) unverified stego e-page 2 (replaced attack); (f) unverified stego e-page 2 (replaced attack).
Based on the experimental results, it is confirmed that the proposed framework provides a robust mechanism for detecting tampered or replaced stego e-pages by comparing extracted hashes from the Region of Non-Interest (RONI) with newly recalculated hashes. When the two hash values match, the authentication result displays the message “Verified,” triggering the rendering of the corresponding augmented reality (AR) object on the relevant e-page, as demonstrated in Figure 23a,b. Conversely, if authentication fails due to a hash mismatch, a warning message “Unverified” is shown, as illustrated in Figure 24a–f.
This security feature ensures that only authentic and unmodified stego e-pages can activate the intended AR experiences, thereby effectively protecting against various forms of tampering and manipulation. The experimental findings further demonstrate that the system accurately distinguishes between original, untampered pages and those subjected to multiple types of attacks, providing reliable data integrity assurance and robust authentication for AR applications.

4.4. Computational Performance

The extraction process reliably retrieves accurate hash values, thereby ensuring the authenticity of the digital content. Furthermore, the embedding and extraction procedures exhibit low computational complexity, rendering the proposed framework suitable for real-time augmented reality (AR) applications.
Table 4 summarizes the performance of the proposed DWT-based embedding method applied in the YCbCr color space. The system attains high Peak Signal-to-Noise Ratio (PSNR) values of 45.80 dB and 41.26 dB for stego e-pages 1 and 2, respectively, indicating minimal perceptual distortion. Both stego e-pages achieve a similarity detection rate of 99.9%, confirming the accurate extraction and verification of the embedded data. Additionally, the embedding, extraction, and similarity detection procedures demonstrate efficient execution times, highlighting the suitability of the proposed method for real-time AR applications. The overall time complexity of the proposed hash-based data-hiding framework, which employs DWT over the Y channel of an e-page transformed to YCbCr, is linear with respect to the number of pixels in the e-page, denoted as n = H × W, where H and W denote the image height and width. The embedding process involves several operations: (1) hash generation from the content area (CA) of size u × v in the RGB domain, which produces a fixed-length SHA-256 output; (2) RGB-to-YCbCr conversion of the e-page, with a time complexity of O(n), as each of the n pixels undergoes a matrix transformation; (3) Y channel extraction, which requires scanning all n pixels to isolate luminance values, resulting in O(n); (4) 2D DWT on the Y channel, which processes the entire n = H × W luminance plane, with a complexity of O(n); (5) hash-to-binary conversion, which is fixed at 256 bits and thus has constant time complexity O(1); (6) embedding the hash bits into selected DWT subband coefficients (typically 256 bits), resulting in O(1); (7) inverse DWT, which operates on the full Y channel, requiring O(n); and (8) YCbCr-to-RGB reconstruction, a pixel-wise conversion, which also takes O(n). Therefore, the total time complexity of the data embedding process is O(n) + O(u × v). The extraction process follows a similar linear pattern, with the additional task of extracting the hidden hash values; consequently, its total time complexity is also O(n) + O(u × v).
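The eight embedding steps enumerated above can be traced in the following sketch. It is a minimal illustration, not the authors' code: the wavelet choice (Haar), the additive strength alpha, and the use of the first 256 LL coefficients are assumptions standing in for the paper's block selection and the α/β scaling of Equation (1); note also that OpenCV labels the color space YCrCb rather than YCbCr.

```python
# Minimal sketch of the O(n) embedding pipeline: hash -> YCbCr -> DWT(Y) ->
# embed 256 bits in LL -> inverse DWT -> RGB. Assumptions: Haar wavelet,
# additive embedding with strength alpha, first 256 LL coefficients.
import hashlib
import numpy as np
import cv2
import pywt

def embed_hash_dwt(e_page_rgb: np.ndarray, content_area: np.ndarray,
                   alpha: float = 0.01) -> np.ndarray:
    # (1) Fixed-length SHA-256 hash of the content area (CA), O(u*v).
    digest = hashlib.sha256(content_area.tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))   # 256 bits

    # (2)-(3) RGB -> YCbCr (OpenCV's YCrCb ordering) and Y extraction, O(n).
    ycrcb = cv2.cvtColor(e_page_rgb, cv2.COLOR_RGB2YCrCb).astype(np.float64)
    y = ycrcb[:, :, 0]

    # (4) One-level 2D DWT of the luminance plane, O(n).
    LL, (LH, HL, HH) = pywt.dwt2(y, 'haar')

    # (5)-(6) Map the 256 hash bits onto the first 256 LL coefficients, O(1).
    flat = LL.reshape(-1)                        # view into LL
    step = alpha * np.abs(flat[:bits.size]).mean()
    flat[:bits.size] += step * (2.0 * bits - 1.0)

    # (7) Inverse DWT back to the full Y channel, O(n).
    y_stego = pywt.idwt2((LL, (LH, HL, HH)), 'haar')[:y.shape[0], :y.shape[1]]

    # (8) Recombine with the original chroma channels and return to RGB, O(n).
    ycrcb[:, :, 0] = np.clip(y_stego, 0, 255)
    return cv2.cvtColor(ycrcb.astype(np.uint8), cv2.COLOR_YCrCb2RGB)
```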
The performance evaluations validate the effectiveness of our proposed authentication framework in maintaining visual quality, providing robustness against various attacks, and demonstrating real-time efficiency. These results highlight its reliability as a solution for securing AR digital content. For instance, the proposed framework could be integrated into the asset import stage of Unreal Engine, serving as an additional verification layer and offering an alternative method to ensure both the integrity and source reliability of AR content.

4.5. Comparisons with Other Works

Since the proposed authentication framework integrates a data-hiding strategy and a hash function to achieve both content authentication and linkage authentication for AR content, its features are demonstrated through comparison. Specifically, three existing schemes—either employing hash-based data hiding for images or utilizing alternative hiding strategies designed for AR content—are discussed alongside our framework in Table 5.
As shown in Table 5, both Chang et al. [4] and Liu et al. [8] employed hash functions in their respective data-hiding strategies. However, only Liu et al. evaluated the robustness of their approach against JPEG compression, with quality factors (Q) ranging from 20 to 80. In contrast, the scheme by Chang et al. was designed specifically for VQ-index tables and, therefore, did not report the image quality after embedding the secret data. Lin et al.’s method [30], like our proposed scheme, focused on content authentication for AR content; however, their approach addressed only a single e-page and did not consider the linkage between consecutive e-pages. Although our framework was not explicitly tested against luminance and color saturation attacks, these types of distortions alter pixel values and thereby change the input to the hash function. Owing to the properties of the SHA-256 algorithm, any variation in the input produces a different hash value. As a result, the extracted message differs from the recalculated hash value, and the mismatch causes authentication to fail. Regarding image quality, PSNR was not considered in Chang et al.’s scheme [4], as their carrier medium is the VQ-index table rather than a standard image. The remaining three schemes, including ours, achieve various levels of PSNR, yet the differences between the original and stego content remain largely imperceptible to the human visual system. The PSNR achieved by our scheme is lower than that of Lin et al.’s scheme [30], primarily because a larger amount of data is embedded, comprising both the hash value of the current e-page and the linkage hash value of the preceding e-page. Nevertheless, this limitation presents an opportunity for future improvement.
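The sensitivity argument above can be illustrated with a few lines of Python; this is only a demonstration of the SHA-256 avalanche property, not part of the proposed scheme. Changing a single pixel value (for example, a one-level luminance increase) yields a completely different digest, so the recalculated hash can no longer match the extracted one.

```python
# Demonstration of why pixel-level distortions break authentication:
# SHA-256 produces an unrelated digest when even one pixel changes.
import hashlib
import numpy as np

rng = np.random.default_rng(0)
page = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)   # toy "e-page"

original_hash = hashlib.sha256(page.tobytes()).hexdigest()

brightened = page.copy()
brightened[0, 0, 0] = min(255, int(brightened[0, 0, 0]) + 1)  # +1 on one channel of one pixel
modified_hash = hashlib.sha256(brightened.tobytes()).hexdigest()

print(original_hash == modified_hash)   # False: authentication would fail
```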

5. Conclusions

In this paper, we propose a secure authentication framework for augmented reality (AR) digital content that exploits the symmetry between consecutive AR frames and employs a hash-based data-hiding technique integrating the YCbCr color space with the Discrete Wavelet Transform (DWT). This framework demonstrates high performance in terms of visual quality, resilience against various attacks, authentication accuracy, and real-time efficiency. Experimental results validate the system’s robustness against common distortions, including Gaussian noise, cropping, and JPEG compression, with PSNR values exceeding 26 dB in many attacked cases. Furthermore, the framework consistently maintains high visual fidelity, achieving PSNR values above 45 dB and SSIM scores greater than 0.98. The proposed hash-based method achieves 99.9% authentication accuracy in detecting tampered AR content.
The rapid execution times for embedding, extraction, and similarity detection highlight the framework’s suitability for real-time AR applications. By embedding multiple hash signatures into multimedia content, the system ensures the integrity and authenticity of AR objects, their digital media components, and the hierarchical relationships among them. This enables robust detection of tampering and unauthorized modifications. In future work, we aim to extend the proposed framework to more complex augmented reality environments and enhance its resilience against advanced attack vectors, all while preserving competitive image quality. Overall, the proposed method makes a significant contribution to the secure and reliable management of digital multimedia content within AR systems.

Author Contributions

Conceptualization, C.-C.L., A.N., C.-C.C. and S.-H.L.; methodology, C.-C.L. and C.-C.C.; software, A.N.; validation, C.-C.L.; formal analysis, C.-C.L. and C.-C.C.; investigation, C.-C.L.; resources, C.-C.C.; data curation, C.-C.L.; writing—original draft preparation, C.-C.L.; writing—review and editing, C.-C.L., S.-H.L. and C.-C.C.; visualization, S.-H.L.; supervision, C.-C.L.; project administration, C.-C.L. and C.-C.C.; funding acquisition, C.-C.L. and C.-C.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science and Technology Council, R.O.C., under grants NSTC 113-2221-E-035-014-, NSTC 113-2221-E-035-047-, NSTC 114-2622-8-005-001-TE1, and NSTC 113-2218-E-035-002-.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Feiner, S.; Macintyre, B.; Seligmann, D. Knowledge-Based Augmented Reality. Commun. ACM 1993, 36, 53–62.
2. Gaebel, E.; Zhang, N.; Lou, W.; Hou, Y.T. Looks Good to Me: Authentication for Augmented Reality. In Proceedings of the 6th International Workshop on Trustworthy Embedded Devices, Hofburg Palace, Vienna, Austria, 28 October 2016.
3. Wazir, W.; Khattak, H.A.; Almogren, A.; Khan, M.A.; Din, I.U. Doodle-Based Authentication Technique Using Augmented Reality. IEEE Access 2020, 8, 4022–4034.
4. Chang, C.H.; Lee, C.Y.; Chen, C.C.; Wang, Z.H. A Data-Hiding Scheme Based on One-Way Hash Function. Int. J. Multimed. Intell. Secur. 2010, 1, 285–297.
5. Shrimali, S.; Kumar, A.; Singh, K.J. Fast Hash Based High Secure Hiding Technique for Digital Data Security. Electron. Gov. Int. J. 2020, 16, 326–340.
6. Suman, R.R.; Mondal, B.; Mandal, T. A Secure Encryption Scheme Using a Composite Logistic Sine Map (CLSM) and SHA-256. Multimed. Tools Appl. 2022, 81, 27089–27110.
7. Gueron, S.; Johnson, S.; Walker, J. SHA-512/256. In Proceedings of the 2011 Eighth International Conference on Information Technology: New Generations, Las Vegas, NV, USA, 11–13 April 2011.
8. Liu, N.; Amin, P.; Subbalakshmi, K. Security and Robustness Enhancement for Image Data Hiding. IEEE Trans. Multimed. 2020, 22, 1802–1810.
9. Stephenson, S.; Pal, B.; Fan, S.; Fernandes, E.; Zhao, Y.; Chatterjee, R. SoK: Authentication in Augmented and Virtual Reality. In Proceedings of the IEEE Symposium on Security and Privacy, San Francisco, CA, USA, 23–26 May 2022.
10. Schneier, B. Applied Cryptography: Protocols, Algorithms and Source Code in C; Wiley: Hoboken, NJ, USA, 2015.
11. Wu, M.; Liu, B. Data Hiding in Image and Video: Part I Fundamental Issues and Solutions. IEEE Trans. Image Process. 2003, 12, 685–695.
12. Lin, P.L.; Hsieh, C.K.; Huang, P.W. A Hierarchical Digital Watermarking Method for Image Tamper Detection and Recovery. Pattern Recognit. 2005, 38, 2519–2529.
13. Zhang, X.; Wang, S.; Qian, Z.; Feng, G. Reference Sharing Mechanism for Watermark Self-Embedding. IEEE Trans. Image Process. 2011, 20, 485–495.
14. Chang, C.C.; Fan, Y.H.; Tai, W.L. Four-Scanning Attack on Hierarchical Digital Watermarking Method for Image Tamper Detection and Recovery. Pattern Recognit. 2008, 41, 654–661.
15. Chang, Y.F.; Tai, W.L. A Block-Based Watermarking Scheme for Image Tamper Detection and Self-Recovery. Opto-Electron. Rev. 2013, 21, 182–190.
16. Sarreshtedari, S.; Akhaee, M.A. A Source-Channel Coding Approach to Digital Image Protection and Self-Recovery. IEEE Trans. Image Process. 2015, 24, 2266–2277.
17. Qin, C.; Wang, H.; Zhang, X.; Sun, X. Self-Embedding Fragile Watermarking Based on Reference-Data Interleaving and Adaptive Selection of Embedding Mode. Inf. Sci. 2016, 373, 233–250.
18. Lin, C.C.; Huang, Y.; Tai, W.L. A Novel Hybrid Image Authentication Scheme Based on Absolute Moment Block Truncation Coding. Multimed. Tools Appl. 2017, 76, 463–488.
19. Lin, C.C.; Liu, X.L.; Tai, W.L.; Yuan, S.M. A Novel Reversible Data Hiding Scheme Based on AMBTC Compression Technique. Multimed. Tools Appl. 2015, 74, 3823–3842.
20. Tai, W.L.; Liao, Z.J. Image Self-Recovery with Watermark Self-Embedding. Signal Process. Image Commun. 2018, 65, 11–25.
21. Yu, Z.; Lin, C.C.; Chang, C.C. ABMC-DH: An Adaptive Bit-Plane Data Hiding Method Based on Matrix Coding. IEEE Access 2020, 8, 27634–27648.
22. Nazir, H.; Ullah, M.S.; Qadri, S.S.; Arshad, H.; Husnain, M.; Razzaq, A.; Nawaz, S.A. Protection-Enhanced Watermarking Scheme Combined with Non-Linear Systems. IEEE Access 2023, 11, 33725–33740.
23. Li, F.Q.; Wang, S.L.; Liew, A.W.C. Linear Functionality Equivalence Attack Against Deep Neural Network Watermarks and A Defense Method by Neuron Mapping. IEEE Trans. Inf. Forensics Secur. 2023, 18, 1963–1977.
24. Tang, Y.; Wang, S.; Wang, C.; Xiang, S.; Cheung, Y.M. A Highly Robust Reversible Watermarking Scheme Using Embedding Optimization and Rounded Error Compensation. IEEE Trans. Circuits Syst. Video Technol. 2023, 33, 1593–1609.
25. Anand, A.; Singh, A.K. Dual Watermarking for Security of COVID-19 Patient Record. IEEE Trans. Dependable Secur. Comput. 2023, 20, 859–866.
26. Chang, C.C.; Liu, Y.; Nguyen, T.S. A Novel Turtle Shell-Based Scheme for Data Hiding. In Proceedings of the 2014 Tenth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, Kitakyushu, Japan, 27–29 August 2014.
27. Chen, C.C.; Chang, C.H.; Lin, C.C.; Su, G.D. TSIA: A Novel Image Authentication Scheme for AMBTC-Based Compressed Images Using Turtle Shell Based Reference Matrix. IEEE Access 2019, 7, 149515–149526.
28. Lee, H.R.; Shin, J.S.; Hwang, C.J. Invisible Marker Tracking System Using Image Watermarking for Augmented Reality. In Proceedings of the 2007 Digest of Technical Papers International Conference on Consumer Electronics, Las Vegas, NV, USA, 10–14 January 2007.
29. Li, C.; Sun, X.; Li, Y. Information Hiding Based on Augmented Reality. Math. Biosci. Eng. 2019, 16, 4777–4787.
30. Lin, C.C.; Nshimiyimana, A.; SaberiKamarposhti, M.; Elbasi, E. Authentication Framework for Augmented Reality with Data-Hiding Technique. Symmetry 2024, 16, 1253.
31. Bhattacharya, P.; Saraswat, D.; Dave, A.; Acharya, M.; Tanwar, S.; Sharma, G.; Davidson, I.E. Coalition of 6G and Blockchain in AR/VR Space: Challenges and Future Directions. IEEE Access 2021, 9, 168455–168484.
32. Deshmukh, P.R.; Bhagyashri, R. Hash Based Least Significant Bit Technique for Video Steganography. Int. J. Eng. Res. Appl. 2014, 4, 44–49.
33. Manjula, G.R.; Ajit, D. A Novel Hash Based Least Significant Bit (2-3-3) Image Steganography in Spatial Domain. Int. J. Secur. Privacy Trust Manag. 2015, 4, 11–20.
34. Dasgupta, K.; Mandal, J.K.; Dutta, P. Hash-Based Least Significant Bit Technique for Video Steganography (HLSB). Int. J. Secur. Privacy Trust Manag. 2012, 1, 1–11.
35. Xiong, L.; Zhong, X.; Yang, C.N. DWT-SISA: A Secure and Effective Discrete Wavelet Transform-Based Secret Image Sharing with Authentication. Signal Process. 2020, 173, 107571.
36. Kunhu, A.; Taher, F.; Al-Ahmad, H. A New Multi Watermarking Algorithm for Medical Images Using DWT and Hash Functions. In Proceedings of the 11th International Conference on Innovations in Information Technology, Dubai, United Arab Emirates, 1–3 November 2015.
37. Mahmood, G.S.; Huang, D.J. PSO-Based Steganography Scheme Using DWT-SVD and Cryptography Techniques for Cloud Data Confidentiality and Integrity. Comput. J. 2019, 30, 31–45.
Figure 1. AR e-book structure.
Figure 2. The proposed chain structure.
Figure 3. Key selection for C A E .
Figure 4. Hash-based data embedding framework.
Figure 5. (a) R channel pixel values in E 1 ; (b) G channel pixel values in E 1 ; and (c) B channel pixel values in E 1 .
Figure 6. (a) Normalized R channel pixel values ( R n o r m ) in C A E 1 ; (b) normalized G channel pixel values ( G n o r m ) in C A E 1 ; and (c) normalized B channel pixel values ( B n o r m ) in C A E 1 .
Figure 7. Example of weight matrix ( W M ( i , j ) ).
Figure 8. (a) H R of C A E 1 ; (b) H G of C A E 1 ; and (c) H B of C A E 1 .
Figure 9. Example for converting E 1 from RGB to YCbCr color space.
Figure 10. E 1 , Y channel of E 1 .
Figure 11. Example for hash-based data embedding.
Figure 12. (a) C L L in E 1 , Y ; (b) C L H in E 1 , Y ; and (c) C H L in E 1 , Y ; (d) C H H in E 1 , Y .
Figure 13. (a) Block 1 in C L L ; (b) block 2 in C L L ; and (c) block 3 in C L L ; (d) block 4 in C L L .
Figure 14. (a) Modified block 1 in C L L ; (b) Modified block 2 in C L L ; and (c) Modified block 3 in C L L , (d) Modified block 4 in C L L .
Figure 15. Modified C L L of E 1 , Y channel.
Figure 16. Example for converting E 1 , Y combined with the original Cb and Cr channels from YCbCr back to the RGB color space.
Figure 17. Hash-based data extraction and authentication framework.
Table 1. Summary of notations and definitions used in our authentication framework.
Notations/Terminology | Definitions
E 1 The e-page 1 (electronic page) in the document, represented as an RGB image
E 2 The e-page 2 in the document, also represented as an RGB image
E i The previous e-page in hierarchical order
E j The next e-page in hierarchical order
C A E Content area extracted from E , with dimensions u × v × 3
R ,   G ,   B Red, green, and blue channels, each of size u × v
R n o r m , G n o r m , B n o r m Normalized pixel values of R ,   G ,   B channel (each divided by 255, ranging within [0, 1])
H R , H G , H B Normalized pixel values accumulators for R ,   G ,   B channels, initialized to 0
H s The final hash value, obtained by concatenating H R , H G , and H B through the proposed hash value generation algorithm
M Modulo value to limit hash size, where M = 2 8 = 256
R ( i , j ) , G ( i , j ) , B ( i , j ) Pixel values of the R ,   G ,   B channels at coordinates of ( i , j )
R n o r m ( i , j ) , G n o r m ( i , j ) , B n o r m ( i , j ) Normalized pixel values for R ,   G ,   B at coordinates of ( i , j )
The concatenation operator used to combine H R , H G , and H B
m o d Modulo operation to limit hash size (applied to H R , H G , and H B )
× Multiplication operator used for weighting pixel contributions
β A weight factor or scaling parameter applied in hash computation to adjust pixel contributions when embedding H s , n into C L L
S E 1 Stego e-page 1 (i.e., the modified E 1 ) after embedding H s , n into C L L using DWT-based data hiding
S E 2 Stego e-page 2 (i.e., the modified E 2 ) after embedding H s , n and H S E 1 into C L L using DWT-based data hiding
C L L The LL subband coefficients obtained after applying DWT to the luminance component ( E 1 , Y ) of E 1
αBlending factor that controls the strength of embedding
ϵA small constant used to avoid division by zero in β calculations
µ L L Mean intensity of a block in C L L
σ L L The standard deviation of a block in C L L
C L L , k A block of coefficients from C L L , where each block is of size u × v
C L L ,   k Modified C L L   block after embedding H s n
H s n Normalized hash H s scaled to match the intensity range of C L L using Equation (1)
Max( C L L ), Min( C L L ) Maximum and minimum intensity values in C L L
Max( H s ), Min( H s ) Maximum and minimum values of the hash H s
H s Extracted hash value from S E 1
H S E 1 Hash value of stego e-page 1 ( S E 1 )
H S E 1 Extracted linkage hash value from e-page 2 ( E 2 )
C S E 1 The region of interest extracted from S E 1   corresponding to C A E
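The per-channel accumulators in Table 1 can be illustrated with a short sketch. The fragment below is an assumption-laden illustration only: it treats H R, H G, and H B as weighted sums of the normalized pixel values reduced modulo M = 256 and concatenates them into H s, which is consistent with the definitions above but is not copied from the paper's hash-generation equations (the weight matrix WM and the role of β are simplified).

```python
# Illustrative sketch of the Table 1 quantities (not the authors' equations):
# each channel accumulator is a weighted sum of normalized pixel values
# reduced modulo M = 256; H_s concatenates the three results.
import numpy as np

M = 256  # modulo value limiting the hash size

def channel_accumulator(channel: np.ndarray, weight_matrix: np.ndarray) -> int:
    normalized = channel.astype(np.float64) / 255.0            # R_norm / G_norm / B_norm
    weighted_sum = float(np.sum(normalized * weight_matrix))   # contributions weighted by WM(i, j)
    return int(weighted_sum) % M

def generate_hash(content_area: np.ndarray, weight_matrix: np.ndarray) -> str:
    # content_area has shape (u, v, 3); the channels are processed independently.
    h_r = channel_accumulator(content_area[:, :, 0], weight_matrix)
    h_g = channel_accumulator(content_area[:, :, 1], weight_matrix)
    h_b = channel_accumulator(content_area[:, :, 2], weight_matrix)
    return f"{h_r:02x}{h_g:02x}{h_b:02x}"                      # H_s = H_R || H_G || H_B
```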
Table 2. Test results of our framework performance under various attacks on stego page 1.
Attack Type | PSNR (dB) | SSIM
No attack | 45.80 | 0.99
Gaussian noise (σ = 5%) | 26.05 | 0.70
Gaussian noise (σ = 10%) | 20.38 | 0.60
JPEG compression (Q = 50) | 31.17 | 0.93
JPEG compression (Q = 60) | 31.90 | 0.94
Type-1 cropping attack | 19.25 | 0.55
Type-2 cropping attack | 17.14 | 0.40
Table 3. Performance evaluation on stego e-page 2 under various attacks.
Attack Type | PSNR (dB) | SSIM
No attack | 41.27 | 0.99
Gaussian noise (σ = 5%) | 26.22 | 0.75
Gaussian noise (σ = 10%) | 21.02 | 0.65
JPEG compression (Q = 50) | 31.40 | 0.94
JPEG compression (Q = 60) | 31.69 | 0.95
Type-1 cropping attack | 17.61 | 0.50
Type-2 cropping attack | 21.62 | 0.66
Table 4. Performance evaluation of DWT-based embedding in the YCbCr color space.
Stego e-Page | Attacks/No Attack | PSNR (dB) | SSIM | Embedding Execution Time (s) | Extraction Execution Time (s)
Stego e-page 1 | No attack | 45.80 | 0.99 | 0.11 | 0.05
 | Gaussian noise attack (σ = 5%) | 26.05 | 0.70 | 0.11 | 0.05
 | Gaussian noise attack (σ = 10%) | 20.38 | 0.60 | 0.11 | 0.05
 | JPEG compression attack (Q = 50) | 31.17 | 0.93 | 0.11 | 0.05
 | JPEG compression attack (Q = 60) | 31.90 | 0.94 | 0.12 | 0.06
 | Type-1 cropping attack | 19.26 | 0.55 | 0.10 | 0.05
 | Type-2 cropping attack | 17.14 | 0.40 | 0.09 | 0.04
Stego e-page 2 | No attack | 41.26 | 0.99 | 0.10 | 0.05
 | Gaussian noise attack (σ = 5%) | 26.22 | 0.75 | 0.10 | 0.05
 | Gaussian noise attack (σ = 10%) | 21.02 | 0.65 | 0.09 | 0.04
 | JPEG compression attack (Q = 50) | 31.40 | 0.94 | 0.09 | 0.04
 | JPEG compression attack (Q = 60) | 31.69 | 0.95 | 0.09 | 0.05
 | Type-1 cropping attack | 17.61 | 0.50 | 0.09 | 0.04
 | Type-2 cropping attack | 21.62 | 0.66 | 0.10 | 0.05
Table 5. Comparisons with three existing works.
Criteria | Chang et al. [4] | Liu et al. [8] | Lin et al. [30] | Ours
Hiding strategy | Hash function and side-match VQ | Hash-based randomized embedding | LSB/DWT hiding | DWT hiding
Carrier type | Image, VQ-index table | Image, DCT coefficients | AR content | AR content
Content authentication | - | Yes | Yes | Yes
Linkage authentication | - | - | - | Yes
Attacks | - | JPEG attack | Luminance attack, Color saturation attack, Replacement attack | JPEG attack, Cropping attack (types 1 and 2)
Tamper detection/similarity | - | - | Yes, similarity (70.55–100%) | Yes
Average PSNR | - | 42.11 dB | [59.58 dB, 59.87 dB] | [41.26 dB, 45.80 dB]
Note: “-” indicates the corresponding criterion has not been discussed in the work.
