Mathematics
  • Article
  • Open Access

27 April 2024

Embedding Secret Data in a Vector Quantization Codebook Using a Novel Thresholding Scheme

1 Department of Information Engineering and Computer Science, Feng Chia University, Taichung 40724, Taiwan
2 Information and Communication Security Research Center, Feng Chia University, Taichung 40724, Taiwan
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Advances in Mathematical Cryptography and Information Security toward Industry 5.0

Abstract

In recent decades, information security has become increasingly valued, including many aspects of privacy protection, copyright protection, and digital forensics. Therefore, many data hiding schemes have been proposed and applied to various carriers such as text, images, audio, and videos. Vector Quantization (VQ) compression is a well-known method for compressing images. In previous research, most methods related to VQ compressed images have focused on hiding information in index tables, while only a few of the latest studies have explored embedding data in codebooks. We propose a data hiding scheme for VQ codebooks. With our approach, a sender XORs most of the pixel values in a codebook and then applies a threshold to control data embedding. The auxiliary information generated during this process is embedded alongside secret data in the index reordering phase. Upon receiving the stego codebook and the reordered index table, the recipient can extract the data and reconstruct the VQ-compressed image using the reverse process. Experimental results demonstrate that our scheme significantly improves embedding capacity compared to the most recent codebook-based methods. Specifically, we observe an improvement rate of 223.66% in a small codebook of size 64 and an improvement rate of 85.19% in a codebook of size 1024.

1. Introduction

With the widespread usage of digital media in daily life, protecting the security and privacy of data has become increasingly important. Artificial intelligence-based watermarking [], cryptography systems [], and data hiding [,,,,,] have become effective means of safeguarding confidential data and personal privacy at a time when AI is rapidly evolving. These new techniques are used in many fields, such as privacy protection, copyright protection, and digital forensics.
Vector Quantization (VQ) [] is a data compression technology. The basic principle of VQ image compression is to divide an image into non-overlapping pixel blocks and treat each block as a vector. An algorithm such as the Linde–Buzo–Gray (LBG) algorithm [] or one of its improved variants [] is used to train a codebook from the images. Each pixel block is then mapped to the closest codeword in the codebook, and the indices of these codewords are stored as an index table. During decompression, the VQ-compressed image is reconstructed from the stored codebook and the index table. In recent years, data hiding in the compression domain of VQ-compressed images has been an active area of research; this technique not only achieves image compression but also embeds data into the compression domain to enhance data security. Most of this research hides data within the index tables [,,,,], while some recent studies explored hiding data within the codebooks [,]. In 2023, Liu et al. [] proposed a novel approach that embeds and extracts data by sorting and reordering the codewords of a VQ codebook; with a codebook of size 64, they achieved a bit rate of 0.2578 bits per pixel (bpp). Building upon this work, in 2024, Chang et al. [] introduced a method that embeds data through pairwise adjustments of pixels within codewords; with a codebook of size 64, their method achieved a higher bit rate of 0.5820 bpp, indicating improved data capacity.
The key carrier of the proposed scheme is a VQ codebook, which is not a common medium for data embedding due to its small size. Even though its embedding capacity is relatively small compared to that of a cover image, finding ways to improve this capacity is of real interest. We therefore propose a scheme that utilizes a threshold. First, codewords are preprocessed to reduce their pixel values through XOR operations. Then, data embedding is performed by looking up a rule table. Compared with previous methods, our scheme not only significantly improves the embedding capacity but also adapts to different situations by adjusting the threshold to meet different requirements.
This particular study makes several significant contributions, as follows:
  • Compared with the most recent comparable method, the embedding capacity is significantly improved. On a codebook of size 64, the embedding bit rate is 1.8838 bpp, an improvement rate as high as 223.66%. Even for a codebook of size 1024, the improvement rate reaches 85.19%.
  • Our proposed scheme provides a threshold that can be adjusted to suit various requirements, reflecting the flexibility of our approach.
  • Our proposed scheme can losslessly reconstruct a VQ-compressed image. A PSNR of +∞ between a VQ-compressed image and the VQ-compressed image reconstructed using the original codebook indicates that the two images are exactly the same.
In Section 2, we introduce methods related to our proposed scheme. Section 3 describes the details of data embedding and the reverse process of our proposed scheme. Section 4 presents experimental results demonstrating the good performance of our scheme. Finally, Section 5 concludes this paper.

3. Proposed Scheme

In this section, we introduce a novel data hiding scheme in VQ codebooks. The preprocessing phase, which reduces the pixel values of codewords through XOR operations, is described in Section 3.1. Section 3.2 details the data embedding phase, in which we utilize an adjustable threshold to flexibly accommodate different requirements. Section 3.3 outlines the data extraction and image recovery phase after a receiver receives the stego codebook and the reordered index table. Figure 5 shows the flow of our proposed scheme.
Figure 5. Flow of Our Proposed Scheme.

3.1. Preprocessing Phase

The sender begins by training a codebook from the original images using the LBG algorithm []. This process partitions the original images into 4 × 4 blocks and iteratively refines a codebook to accurately represent these blocks. It yields the VQ codebook CB and the corresponding index tables IT of the VQ-compressed images. Since most pixel values within a codeword are very similar, we use XOR to condense these values and thereby achieve better embedding results. In Equation (5), i denotes the current codeword and j the position of pixel p within it; we preserve the first pixel of codeword i as the reference pixel and leave its value unchanged, while XORing the remaining pixels with the first one to derive the preprocessed pixel values, denoted pp. Figure 6 presents an example with four codewords of a codebook; as required, most pixel values become very small after XORing with the first pixel.
$$ pp_{i,j} = \begin{cases} p_{i,j}, & \text{if } j = 1 \\ p_{i,j} \oplus p_{i,1}, & \text{otherwise} \end{cases} \tag{5} $$
Figure 6. An example of the preprocessing phase in a segment of the codebook.
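As a concrete illustration, the XOR preprocessing of Equation (5) can be sketched as follows (the pixel values are illustrative, not taken from Figure 6):

```python
# Sketch of the preprocessing step (Equation (5)): keep the first pixel of a
# codeword as the reference and XOR every other pixel with it.

def preprocess(codeword):
    ref = codeword[0]                        # reference pixel (j = 1), unchanged
    return [ref] + [p ^ ref for p in codeword[1:]]

cw = [150, 151, 148, 150]
pp = preprocess(cw)   # [150, 1, 2, 0] -- similar pixels collapse to small values
```

Because XOR is its own inverse, applying the same operation again restores the original codeword, which is what the post-processing phase in Section 3.3 relies on.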

3.2. Data Embedding Phase

Our scheme supports adjustable embedding rules by setting different thresholds to flexibly adapt to various application scenarios. We utilize encoding rules similar to prefix codes to establish correspondence between the data and the label. Figure 7 illustrates encoding rules for different thresholds, where t represents the threshold, l represents the label, and d represents the data.
Figure 7. Encoding Rules for Different Thresholds.
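Since the exact rule table of Figure 7 is not reproduced in the text, the following sketch assumes a prefix-code mapping for t = 3 that is consistent with the worked example in Section 3.2 (data 00 → label 0, data 01 → label 1); mapping the single bit 1 to label 2 is our assumption. For t = 8, every label would carry a fixed 3-bit pattern.

```python
# Assumed prefix code for t = 3: no codeword is a prefix of another, so the
# data stream can be parsed unambiguously. The "1" -> 2 entry is hypothetical.
RULES_T3 = {"00": 0, "01": 1, "1": 2}

def next_label(bitstream, rules):
    """Consume the shortest prefix of `bitstream` found in `rules`;
    return (label, remaining bits)."""
    for length in range(1, max(len(c) for c in rules) + 1):
        chunk = bitstream[:length]
        if chunk in rules:
            return rules[chunk], bitstream[length:]
    raise ValueError("bitstream does not match any encoding rule")

label, rest = next_label("0011", RULES_T3)   # label 0, "11" left to embed
```

The prefix property is what lets the receiver map an extracted label back to a unique bit pattern without any length markers.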
To ensure successful recovery of a codeword, the first pixel within it, acting as the reference, must remain unchanged. Before embedding data via sp_{i,j} = pp_{i,j} × t + l, it is essential to first check whether the embedded pixel value sp_{i,j} would overflow. If an overflow would occur, embedding is not allowed and the pixel value remains unchanged; otherwise, embedding proceeds. Equation (6) encapsulates this process, where sp is the stego pixel, pp the preprocessed pixel, i and j the pixel's position in the codebook, and t the specified threshold.
$$ sp_{i,j} = \begin{cases} pp_{i,j}, & \text{if } j = 1 \text{ or } pp_{i,j} \times t + l > 255 \\ pp_{i,j} \times t + l, & \text{otherwise} \end{cases} \tag{6} $$
While calculating sp, we must check whether the output sp conforms to the input rules of pp to ensure correct recovery. Specifically, every non-reference pixel (j ≠ 1) must be checked. If sp_{i,j} × t ≤ 255, the pixel is naturally known to be embedded, and no additional auxiliary information is needed. If sp_{i,j} × t > 255, the further check shown in Equation (7) is required: if the pixel is not embeddable, the indicator ind_k = 0 is saved as auxiliary information; if it is embeddable, ind_k = 1, where k indexes the current indicator and is incremented each time an indicator is added. Finally, we obtain a binary sequence composed of the indicators. Because most preprocessed pixel values in the codebook are close to each other, the sequence contains many consecutive 0s or consecutive 1s. We therefore apply the ERLE algorithm [] to compress this auxiliary information, conserving capacity and leaving more room for embedded data.
$$ ind_k = \begin{cases} 0, & \text{if } pp_{i,j} \times t + l > 255 \\ 1, & \text{otherwise} \end{cases} \tag{7} $$
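The ERLE algorithm of [] is not detailed here; a plain run-length encoding sketch illustrates why long runs of identical indicator bits compress well:

```python
# The indicator sequence contains long runs of identical bits, so run-length
# style coding shrinks it substantially. This is ordinary RLE, shown only to
# illustrate the idea behind the ERLE compression used by the scheme.

def rle(bits):
    """Encode a bit string as (bit, run length) pairs."""
    runs, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))
        i = j
    return runs

runs = rle("111100000001")   # [('1', 4), ('0', 7), ('1', 1)]
```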
Figure 8 provides a codeword example that covers all embedding cases when t = 3. The first pixel (j = 1), circled in yellow, serves as the reference pixel and remains unchanged. The pixel circled in orange is non-embeddable because its value would overflow after embedding; to avoid confusion during the data extraction and image recovery phases, the auxiliary information ind_k = 0 must be recorded for this case. Pixels circled in light green and dark green are embeddable, with Equation (6) giving the embedding formula. For the light green row, when t = 3 and the embedded data d is 00, the corresponding label is l = 0 according to Figure 7; therefore, the embedded pixel value is sp = 60 × 3 + 0 = 180. Similarly, in the dark green row, the label is l = 1 when the embedded data d is 01, so sp = 8 × 3 + 1 = 25. Since 180 × 3 = 540 > 255 in the light green row, the output must be checked against the input rules, and ind_k = 1 is required as auxiliary information to prevent confusion with the non-embeddable case. In contrast, 25 × 3 = 75 ≤ 255 in the dark green row, so the embedded pixel can be recognized directly without any auxiliary information.
Figure 8. An example of the data embedding phase in a codeword ( t = 3 ).
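The two embedded rows of Figure 8 can be checked numerically (the non-embeddable value 120 below is illustrative, not taken from the figure):

```python
# Worked embedding cases for t = 3: sp = pp * t + l when embedding is allowed,
# otherwise the pixel is kept unchanged.

t = 3
# light-green case: pp = 60, data 00 -> label 0
sp1 = 60 * t + 0      # 180; since 180 * 3 = 540 > 255, indicator ind = 1 is recorded
# dark-green case: pp = 8, data 01 -> label 1
sp2 = 8 * t + 1       # 25; since 25 * 3 = 75 <= 255, no indicator is needed
# non-embeddable case (illustrative): pp = 120 gives 120 * 3 + l > 255,
# so the pixel stays 120 and ind = 0 is recorded
```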
After our scheme completes embedding, the stego codebook is sorted using Liu et al.'s method [], and more data can then be embedded through VQ index reordering. An important technical detail should be noted. During embedding, we first retain the reference pixels and record indicators for non-embeddable pixels. We then embed into the embeddable pixels that require an indicator (light green in Figure 8). At this point, part of the secret data has been embedded and all indicators have been obtained. After compressing the indicators with the ERLE algorithm, we embed the auxiliary information as data into the pixels that are directly known to be embeddable (dark green in Figure 8). If the capacity cannot accommodate all auxiliary information, the remainder can also be embedded via VQ reordering; if capacity remains, additional secret data can be embedded. Finally, the sender transmits the stego codebook and the reordered index table to the receiver.
Algorithm 1 describes the process of our proposed scheme in the data embedding phase, outlining how data is encoded and embedded into the stego data.
Algorithm 1 Data Embedding
Input: Preprocessed codebook PCB, index table IT, secret S, and threshold t.
Output: Stego codebook SCB and reordered index table RIT.
Step 1: Preserve reference pixels and pixels causing overflow. Hide secret data in pixels flagged by indicators according to the encoding rules. Record all indicators.
k = 1;
for i ∈ {1, 2, …, size(PCB)}
   for j ∈ {1, 2, …, 16}
      Obtain the label l according to the encoding rules.
      if j = 1
         sp_{i,j} = pp_{i,j};
      else if (pp_{i,j} × t + l) > 255       // overflow: not embeddable
         sp_{i,j} = pp_{i,j};
         if sp_{i,j} × t > 255               // ambiguous at the decoder
            ind_k = 0;
            k = k + 1;
         end if
      else if (pp_{i,j} × t + l) × t > 255   // embeddable but ambiguous at the decoder
         sp_{i,j} = pp_{i,j} × t + l;
         ind_k = 1;
         k = k + 1;
      end if                                 // remaining pixels are handled in Step 3
   end for
end for
Step 2Compress the indicator sequence using the ERLE algorithm to obtain compressed auxiliary information.
Step 3: Similar to Step 1, but here l first carries the auxiliary information; any remaining space is used to embed the secret.
for i ∈ {1, 2, …, size(PCB)}
   for j ∈ {1, 2, …, 16}
      Obtain the label l according to the encoding rules.
      if j = 1 or (pp_{i,j} × t + l) > 255
         sp_{i,j} = pp_{i,j};                // already set in Step 1
      else if (pp_{i,j} × t + l) × t ≤ 255   // directly embeddable
         sp_{i,j} = pp_{i,j} × t + l;
      end if
   end for
end for
Step 4: Utilize reordering for additional data embedding. Embed any auxiliary information not fully embedded in Step 3, and use the available space to embed secret data.
Step 5: Output the stego codebook SCB and the reordered index table RIT.
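The core of Algorithm 1, Step 1, can be sketched as runnable code for a single preprocessed codeword. The t = 3 rule table is an assumption consistent with the worked example (00 → 0, 01 → 1; the entry 1 → 2 is hypothetical), and the pixel values are illustrative:

```python
T = 3
RULES = {"00": 0, "01": 1, "1": 2}   # assumed prefix code for t = 3

def next_label(bits):
    """Match the shortest prefix of `bits` found in RULES; return (label, rest)."""
    for n in (1, 2):
        if bits[:n] in RULES:
            return RULES[bits[:n]], bits[n:]
    raise ValueError("no matching rule")

def embed_step1(pp, bits):
    """Algorithm 1, Step 1, for one preprocessed codeword.
    Pixels left as None are 'directly embeddable' and deferred to Step 3."""
    sp, ind = [pp[0]], []                  # reference pixel (j = 1) kept
    for p in pp[1:]:
        l, rest = next_label(bits)
        if p * T + l > 255:                # overflow: pixel kept, not embeddable
            sp.append(p)
            if p * T > 255:                # ambiguous at the decoder -> indicator 0
                ind.append(0)
        elif (p * T + l) * T > 255:        # embedded, but ambiguous at the decoder
            sp.append(p * T + l)
            ind.append(1)
            bits = rest                    # the label's data bits are consumed
        else:
            sp.append(None)                # directly embeddable: handled in Step 3
    return sp, ind, bits

sp, ind, rest = embed_step1([150, 60, 8, 120], "0001")
```

In this trace, pixel 60 embeds data 00 (sp = 180, indicator 1), pixel 8 is deferred to Step 3, and pixel 120 is kept with indicator 0.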

3.3. Data Extraction and Image Recovery Phase

When the receiver has the stego codebook and the reordered index table, the stego codebook must first be reordered to extract data, which may include both secret data and auxiliary information. Data is then extracted from the pixel values that can be directly identified as embedded, which again may include both secret data and auxiliary information. After all auxiliary information has been extracted, ERLE decompression is applied to retrieve the indicators. Subsequently, every non-reference pixel satisfying sp_{i,j} × t > 255 is examined: if ind_k = 0, the pixel remains unchanged; if ind_k = 1, data extraction is performed. The mathematical formulas for extraction are given in Equations (8) and (9). Reference and non-embedded pixels remain unchanged; otherwise, the pixel is an embedded one, and the extraction rpp_{i,j} = ⌊sp_{i,j} ÷ t⌋ is executed, where rpp represents a pixel of the reordered preprocessed codebook. The label l = sp_{i,j} − rpp_{i,j} × t can then be calculated, and the data is extracted by referring to the encoding rules in Figure 7.
$$ rpp_{i,j} = \begin{cases} sp_{i,j}, & \text{if } j = 1 \text{ or } (sp_{i,j} \times t > 255 \text{ and } ind_k = 0) \\ \lfloor sp_{i,j} \div t \rfloor, & \text{otherwise} \end{cases} \tag{8} $$
$$ l = sp_{i,j} - rpp_{i,j} \times t \tag{9} $$
Figure 9 provides an example of the data extraction phase in a codeword when t = 3. The reference pixels, circled in yellow, remain unchanged. The other pixels are then checked: if sp_{i,j} × t > 255, the indicator is examined. If ind_k = 0, the pixel is non-embedded and remains unchanged (orange in the figure). If ind_k = 1, data is embedded. For the pixel circled in light green, first calculate rpp_{i,j} = ⌊180 ÷ 3⌋ = 60, then compute the label l = 180 − 60 × 3 = 0, and refer to the encoding rules to extract the data 00. The same applies to pixels directly identified as embedded (dark green): rpp_{i,j} = ⌊25 ÷ 3⌋ = 8 and l = 25 − 8 × 3 = 1, so according to the encoding rules the extracted data is 01.
Figure 9. An example of the data extraction phase in a codeword ( t = 3 ).
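The two extraction cases of Figure 9 can be verified numerically:

```python
# Worked extraction cases for t = 3: rpp = floor(sp / t) and l = sp - rpp * t
# recover the preprocessed pixel and the label (Equations (8) and (9)).

t = 3
sp1 = 180
rpp1 = sp1 // t            # 60
l1 = sp1 - rpp1 * t        # 0 -> data 00 under the assumed rules
sp2 = 25
rpp2 = sp2 // t            # 8
l2 = sp2 - rpp2 * t        # 1 -> data 01
```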
After the data is extracted, we obtain the reordered preprocessed codebook. We then only need to perform post-processing to restore the pixels. As shown in Equation (10), the reference pixel remains unchanged, while each remaining pixel is XORed with the first pixel to obtain the recovered pixel, rp. After processing all pixels, the receiver obtains the recovered codebook and uses it, together with the reordered index table, to reconstruct the VQ-compressed image.
$$ rp_{i,j} = \begin{cases} rpp_{i,j}, & \text{if } j = 1 \\ rpp_{i,j} \oplus rpp_{i,1}, & \text{otherwise} \end{cases} \tag{10} $$
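Because XOR is its own inverse, Equation (10) exactly undoes the preprocessing of Equation (5); a minimal check with illustrative values:

```python
# Post-processing (Equation (10)): XOR each non-reference pixel with the
# reference pixel again to restore the original codeword.

def postprocess(rpp):
    ref = rpp[0]
    return [ref] + [p ^ ref for p in rpp[1:]]

original = [150, 151, 148, 150]
pp = [150, 1, 2, 0]                 # preprocessed form of `original`
assert postprocess(pp) == original  # lossless recovery
```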
Algorithm 2 describes the process of our proposed scheme in the data extraction phase, outlining how data is decoded and extracted from the stego data and how the codebook is recovered.
Algorithm 2 Data Extraction
Input: Stego codebook SCB, reordered index table RIT, and threshold t.
Output: Secret S and recovered codebook RCB.
Step 1: Use the reordering of the stego codebook SCB to extract data.
Step 2: Keep reference pixels unchanged; recover the pixels directly known to be embedded and extract their data.
for i ∈ {1, 2, …, size(SCB)}
   for j ∈ {1, 2, …, 16}
      if j = 1
         rpp_{i,j} = sp_{i,j};
      else if sp_{i,j} × t ≤ 255
         rpp_{i,j} = ⌊sp_{i,j} ÷ t⌋;
         l = sp_{i,j} − rpp_{i,j} × t;
         Obtain the data according to the encoding rules.
      end if
   end for
end for
Step 3: Similar to Step 2, but when sp_{i,j} × t > 255, the indicator must be consulted.
k = 1;
for i ∈ {1, 2, …, size(SCB)}
   for j ∈ {1, 2, …, 16}
      if j ≠ 1 and sp_{i,j} × t > 255
         if ind_k = 0                        // non-embedded pixel
            rpp_{i,j} = sp_{i,j};
         else                                // embedded pixel
            rpp_{i,j} = ⌊sp_{i,j} ÷ t⌋;
            l = sp_{i,j} − rpp_{i,j} × t;
            Obtain the data according to the encoding rules.
         end if
         k = k + 1;
      end if
   end for
end for
Obtain the secret S and the reordered preprocessed codebook RPCB.
Step 4: Post-process the reordered preprocessed codebook RPCB.
for i ∈ {1, 2, …, size(SCB)}
   for j ∈ {1, 2, …, 16}
      if j = 1
         rp_{i,j} = rpp_{i,j};
      else
         rp_{i,j} = rpp_{i,j} ⊕ rpp_{i,1};
      end if
   end for
end for
Obtain the recovered codebook RCB.
Step 5: Output the secret S and the recovered codebook RCB.
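A simplified end-to-end sketch for one codeword, restricted to the directly embeddable case ((pp × t + l) × t ≤ 255) so that the indicator machinery can be omitted. The t = 3 rule table is an assumption consistent with the worked examples, and the pixel values are chosen so every non-reference pixel qualifies:

```python
T = 3
RULES = {"00": 0, "01": 1, "1": 2}          # assumed prefix code for t = 3
INV = {v: k for k, v in RULES.items()}

def next_label(bits):
    for n in (1, 2):
        if bits[:n] in RULES:
            return RULES[bits[:n]], bits[n:]
    raise ValueError("no matching rule")

def embed(codeword, bits):
    """Preprocess (Eq. (5)) then embed (Eq. (6)); all pixels here are
    directly embeddable by construction."""
    ref = codeword[0]
    pp = [p ^ ref for p in codeword[1:]]
    sp = [ref]
    for p in pp:
        l, bits = next_label(bits)
        sp.append(p * T + l)
    return sp

def extract(sp):
    """Extract labels (Eqs. (8)-(9)) and post-process (Eq. (10))."""
    ref, bits, rpp = sp[0], "", [sp[0]]
    for v in sp[1:]:
        q = v // T
        bits += INV[v - q * T]
        rpp.append(q)
    recovered = [ref] + [p ^ ref for p in rpp[1:]]
    return recovered, bits

stego = embed([150, 151, 148, 150], "00011")
recovered, data = extract(stego)
```

The round trip recovers both the secret bits and the original codeword exactly, matching the lossless-reconstruction claim of Section 1.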

4. Experimental Results

This section shows the experimental results of our proposed scheme. Figure 10 displays six grayscale images of size 512 × 512 used as test images to train a VQ codebook. Our experimental environment runs on a Windows 11 operating system with an AMD Ryzen 7 5800H CPU with Radeon Graphics, 16 GB of memory, and MATLAB R2023b software.
Figure 10. Six 512 × 512 test images: (a) Egretta, (b) Elaine, (c) peppers, (d) Tiffany, (e) woodland, (f) Zelda.
Table 1 presents comparisons of PSNR values between a reconstructed image and its original image (OI), as well as between the reconstructed image and its VQ image (VI) reconstructed using the original codebook. The table clearly shows that the PSNR values for all VI images are +∞, confirming that our proposed scheme reconstructs VQ-compressed images losslessly.
Table 1. PSNR comparisons of reconstructed images with their original images and reconstructed images with the VQ images reconstructed using the original codebook.
Our scheme supports adjustable thresholds. Table 2 shows the embedding capacities (ECs) for codebook sizes 64, 128, 256, 512, and 1024 with thresholds from 2 to 8. It is worth noting that for codebook sizes 512 and 1024, the ECs at t = 5 decrease compared with those at t = 4. The main reason is that a larger t reduces the number of embeddable pixels even though the number of bits embedded per pixel may increase; the combination of these factors drives the changes in the ECs. We also observe that t = 8 yields the largest embedding capacity regardless of codebook size, in line with the methodology of our proposed scheme: as Figure 7 shows, when t = 8 every label embeds 3 bits, so a high embedding capacity can be achieved. With a codebook of size 1024, the embedding capacity reaches 27,425 bits.
Table 2. Embedding capacity comparison under different codebook sizes and different thresholds.
Table 3 compares our scheme with t = 8 against those of Liu et al. [] and Chang et al. []. We compared the embedding capacity (EC) and embedding rate (ER) for codebooks of sizes 64, 128, 256, 512, and 1024. The ER is expressed in bits per pixel (bpp) and is calculated as the total number of embedded bits divided by the total number of pixels in the codebook, as shown in Equation (11). The table shows that the embedding capacity of our scheme far exceeds that of the other two schemes for every codebook size. For a codebook of size 64, the embedding rate reaches 1.8838 bpp; compared with the latest scheme by Chang et al., the improvement rate is as high as 223.66%. Even for a codebook of size 1024, the improvement rate reaches 85.19%, fully reflecting the superior performance of our scheme.
$$ ER = \frac{EC}{\text{total number of pixels}} \tag{11} $$
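The reported figures can be checked against Equation (11) (a codeword holds 4 × 4 = 16 pixels):

```python
# Sanity check of the reported numbers: ER for the size-1024 codebook
# (27,425 embedded bits) and the improvement over Chang et al.'s 0.5820 bpp
# at codebook size 64.

er_1024 = 27425 / (1024 * 16)                 # ~1.6739 bpp
improvement = (1.8838 / 0.5820 - 1) * 100     # ~223.7%, matching the reported 223.66%
```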
Table 3. Comparison with other data embedding schemes in the codebook.
We randomly sampled 12 images from the USC-SIPI image database [], comprising images of various sizes (256 × 256, 512 × 512, and 1024 × 1024). These images were utilized to train codebooks sized 64, 128, 256, 512, and 1024. Figure 11 illustrates the comparisons of ECs with other codebook-based data embedding schemes.
Figure 11. Comparisons with other data embedding schemes [,] in the codebook trained by 12 random images from the USC-SIPI image database.

5. Conclusions

Our proposed scheme introduces a novel approach for embedding data in VQ codebooks. We achieve this by XORing most pixel values in the codebook and applying a threshold to regulate data embedding. Through this method, we efficiently integrate both secret data and auxiliary information during the index reordering step of embedding. Upon receiving the stego codebook and the reordered index table, the recipient can losslessly extract the data and reconstruct the VQ-compressed image using the reverse process. Experimental validation demonstrates a substantial enhancement in embedding capacity compared to state-of-the-art codebook-based methods. Specifically, we achieve an embedding rate of 1.8838 bpp, an improvement rate of 223.66%, on small codebooks of size 64, and a notable improvement rate of 85.19% on larger codebooks of size 1024. These findings highlight the superior performance and efficacy of our proposed scheme when using VQ codebooks to embed data. Future research directions may include further study of carrier characteristics, higher embedding capacity, and extensions toward verifiability. These endeavors hold significant implications for privacy protection, copyright preservation, and digital forensics.

Author Contributions

Conceptualization, Y.L., J.-C.L. and C.-C.C. (Chin-Chen Chang); methodology, Y.L., J.-C.L. and C.-C.C. (Chin-Chen Chang); software, Y.L.; validation, Y.L.; writing—original draft preparation, Y.L. and J.-C.L.; writing—review and editing, Y.L., J.-C.L., C.-C.C. (Chin-Chen Chang) and C.-C.C. (Ching-Chun Chang). All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Sahu, A.K.; Umachandran, K.; Biradar, V.D.; Comfort, O.; Sri Vigna Hema, V.; Odimegwu, F.; Saifullah, M.A. A study on content tampering in multimedia watermarking. SN Comput. Sci. 2023, 4, 222. [Google Scholar] [CrossRef]
  2. Ramesh, R.K.; Dodmane, R.; Shetty, S.; Aithal, G.; Sahu, M.; Sahu, A.K. A Novel and Secure Fake-Modulus Based Rabin-3 Cryptosystem. Cryptography 2023, 7, 44. [Google Scholar] [CrossRef]
  3. Ni, Z.; Shi, Y.Q.; Ansari, N.; Su, W. Reversible data hiding. IEEE Trans. Circuits Syst. Video Technol. 2006, 16, 354–362. [Google Scholar]
  4. Chang, C.C.; Kieu, T.D.; Chou, Y.C. A high payload steganographic scheme based on (7, 4) hamming code for digital images. In Proceedings of the 2008 International Symposium on Electronic Commerce and Security, Guangzhou, China, 3–5 August 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 16–21. [Google Scholar]
  5. Zhang, X. Reversible data hiding in encrypted image. IEEE Signal Process. Lett. 2011, 18, 255–258. [Google Scholar] [CrossRef]
  6. Huang, C.T.; Weng, C.Y.; Shongwe, N.S. Capacity-Raising Reversible Data Hiding Using Empirical Plus–Minus One in Dual Images. Mathematics 2023, 11, 1764. [Google Scholar] [CrossRef]
  7. Zhang, Q.; Chen, K. Reversible Data Hiding in Encrypted Images Based on Two-Round Image Interpolation. Mathematics 2023, 12, 32. [Google Scholar] [CrossRef]
  8. He, D.; Cai, Z. Reversible Data Hiding for Color Images Using Channel Reference Mapping and Adaptive Pixel Prediction. Mathematics 2024, 12, 517. [Google Scholar] [CrossRef]
  9. Gray, R. Vector quantization. IEEE Assp Mag. 1984, 1, 4–29. [Google Scholar] [CrossRef]
  10. Linde, Y.; Buzo, A.; Gray, R. An algorithm for vector quantizer design. IEEE Trans. Commun. 1980, 28, 84–95. [Google Scholar] [CrossRef]
  11. Chang, C.C.; Hu, Y.C. A fast LBG codebook training algorithm for vector quantization. IEEE Trans. Consum. Electron. 1998, 44, 1201–1208. [Google Scholar] [CrossRef]
  12. Hsieh, C.H.; Tsai, J.C. Lossless compression of VQ index with search-order coding. IEEE Trans. Image Process. 1996, 5, 1579–1582. [Google Scholar] [CrossRef] [PubMed]
  13. Yang, C.H.; Lin, Y.C. Reversible data hiding of a VQ index table based on referred counts. J. Vis. Commun. Image Represent. 2009, 20, 399–407. [Google Scholar] [CrossRef]
  14. Lee, J.D.; Chiou, Y.H.; Guo, J.M. Lossless data hiding for VQ indices based on neighboring correlation. Inf. Sci. 2013, 221, 419–438. [Google Scholar] [CrossRef]
  15. Qin, C.; Hu, Y.C. Reversible data hiding in VQ index table with lossless coding and adaptive switching mechanism. Signal Process. 2016, 129, 48–55. [Google Scholar] [CrossRef]
  16. Rahmani, P.; Dastghaibyfard, G. Two reversible data hiding schemes for VQ-compressed images based on index coding. IET Image Process. 2018, 12, 1195–1203. [Google Scholar] [CrossRef]
  17. Liu, J.C.; Chang, C.C.; Lin, C.C. Hiding Information in a Well-Trained Vector Quantization Codebook. In Proceedings of the 2023 6th International Conference on Signal Processing and Machine Learning, Tianjin, China, 14–16 July 2023; pp. 287–292. [Google Scholar]
  18. Chang, C.C.; Liu, J.C.; Chang, C.C.; Lin, Y. Hiding Information in a Reordered Codebook Using Pairwise Adjustments in Codewords. In Proceedings of the 2024 5th International Conference on Computer Vision and Computational Intelligence, Bangkok, Thailand, 29–31 January 2024. [Google Scholar]
  19. Chen, K.; Chang, C.C. High-capacity reversible data hiding in encrypted images based on extended run-length coding and block-based MSB plane rearrangement. J. Vis. Commun. Image Represent. 2019, 58, 334–344. [Google Scholar] [CrossRef]
  20. Weber, A.G. The USC-SIPI Image Database: Version 5. 2006. Available online: http://sipi.usc.edu/database/ (accessed on 11 April 2024).
