Enhancement of the Generation Quality of Generative Linguistic Steganographic Texts by a Character-Based Diffusion Embedding Algorithm (CDEA)
Abstract
1. Introduction
Contributions
- We proposed CDEA, which improves the quality of steganographic text while maintaining a high embedding capacity.
- We mitigated word selection imbalance by utilizing character-level frequency patterns and a grouping mechanism based on power-law distributions.
- We conducted a quantitative analysis of common embedding algorithms, including perfect binary trees, Huffman coding, arithmetic coding, and distribution-copy methods, under consistent conditions to evaluate their impact on steganographic text quality.
- The experimental results demonstrated that combining CDEA with XLNet significantly enhances the perceptual imperceptibility of generated steganographic text.
2. Related Works
2.1. Generative Text Steganography
2.2. Deep Learning-Based Text Steganalysis
3. Methods and Metrics
3.1. XLNet
3.1.1. Segmented Recurrence
3.1.2. Dual-Stream Attention
3.2. Flesch Reading Ease
3.3. Gunning Fog Index
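Sections 3.2 and 3.3 rely on two standard readability formulas. For reference, a minimal sketch of both scores (textbook definitions, not code from this paper):

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease; higher scores mean easier text (standard formula)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def gunning_fog_index(words: int, sentences: int, complex_words: int) -> float:
    """Gunning Fog Index; estimates the years of schooling needed to follow
    the text. 'Complex' words have three or more syllables (standard formula)."""
    return 0.4 * ((words / sentences) + 100.0 * (complex_words / words))

# e.g., a 100-word, 5-sentence passage with 130 syllables and 8 complex words:
# flesch_reading_ease(100, 5, 130) ≈ 76.56; gunning_fog_index(100, 5, 8) = 11.2
```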
4. Proposed Approach
4.1. CDEA
4.2. Steganographic Mechanism
Algorithm 1: Single-step steganographic text generation

Input: sensitive information SI, communication history data CHD
Output: steganographic text ST
01  Mapping Table (MT) ← CDEA
02  Initialize list[]
03  threshold ← 1
04  for s in SI do
05      // item == [prefix, suffix]
06      item ← Encoding(s, MT)
07      if item[0] > threshold then
08          modulus ← int(len(MT[item[0]]) / 2)
09          // item == [prefix, suffix, cb]
10          if item[1] > modulus then
11              item[1] ← item[1] % modulus
12              item.append(1)
13          else
14              item.append(0)
15          end if
16      end if
17      list.add(item)
18  end for
19  CHD_initial ← CHD
20  while len(list) > 0 do
21      // All_Word[j] == (word, probability)
22      All_Word ← XLNet(CHD)
23      Candidate_Pool ← top-k and top-p(All_Word)
24      temp ← Candidate_Pool[list[0]]
25      list ← list[1:]
26      CHD ← CHD.append(temp)
27  end while
28  ST ← CHD.remove(CHD_initial)
29  Return ST
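To make the embedding loop concrete, here is a minimal Python sketch of Algorithm 1. The helpers `cdea_encode` (the CDEA mapping-table lookup), `next_distribution` (the XLNet next-word distribution), and `build_candidate_pool` (top-k/top-p truncation) are hypothetical stand-ins, and indexing the pool by the suffix alone simplifies how [prefix, suffix, cb] selects a word in the full scheme:

```python
# Minimal sketch of Algorithm 1 (hypothetical helper names, not the authors' code).
from typing import Callable, List, Tuple

THRESHOLD = 1  # groups above this index are halved and tagged with a confirmation bit

def encode_secret(secret: str,
                  mapping_table: dict,
                  cdea_encode: Callable[[str], Tuple[int, int]]) -> List[list]:
    """Turn each secret character into [prefix, suffix] or [prefix, suffix, cb]."""
    items = []
    for ch in secret:
        prefix, suffix = cdea_encode(ch)        # CDEA lookup via the mapping table
        item = [prefix, suffix]
        if prefix > THRESHOLD:
            modulus = len(mapping_table[prefix]) // 2
            if suffix > modulus:                # fold the upper half of the group
                item[1] = suffix % modulus
                item.append(1)                  # cb = 1: suffix came from the upper half
            else:
                item.append(0)                  # cb = 0: suffix already in range
        items.append(item)
    return items

def generate_stego_text(items: List[list],
                        history: List[str],
                        next_distribution: Callable,
                        build_candidate_pool: Callable) -> List[str]:
    """Pick one word from the candidate pool per encoded item, autoregressively."""
    seed_len = len(history)
    while items:
        probs = next_distribution(history)      # e.g., XLNet next-word probabilities
        pool = build_candidate_pool(probs)      # top-k / top-p truncated candidates
        item = items.pop(0)
        history = history + [pool[item[1]]]     # simplified: index the pool by suffix
    return history[seed_len:]                   # drop the initial CHD, keep ST
```

The confirmation bit cb records whether a suffix was folded out of the upper half of its group, so the extractor can undo the modulus step when recovering the secret.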
4.3. Discussions
5. Experiments
5.1. Evaluation Metrics
5.2. Comparative Analysis
5.2.1. Perceptual Imperceptibility
5.2.2. Statistical Imperceptibility
5.3. Ablation
5.3.1. Perceptual Imperceptibility in Ablation
5.3.2. Statistical Imperceptibility in Ablation
5.4. Safety Proof
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Meaning |
|---|---|
| CDEA | Character-based Diffusion Embedding Algorithm |
| cb | Confirmation Bit |
| AI | Artificial Intelligence |
| NLP | Natural Language Processing |
| VAE | Variational AutoEncoders |
| GAN | Generative Adversarial Networks |
| CS | Character Sequence |
| CHD | Communication History Data |
| CP | Candidate Pool |
| MT | Mapping Table |
| | Diversity |
| | Perplexity |
| | Grammaticality Score |
| | Descriptiveness |
| | Kullback–Leibler (KL) Divergence |
| PBT | Perfect Binary Trees |
| HC | Huffman Coding |
| AC | Arithmetic Coding |
Appendix A
Method | Sample |
---|---|
CDEA | “I was in a position of power where there had been a tremendous number of people who had no experience with electricity,” recalled the director, the director of photography. “It had to be a man with a strong desire and knowledge of how electricity works to get the best of people on the job, to make sure that they did the job. The only thing you can do is to keep it going.” |
Discop [36] | You have that others do well and do not even take in the administrative feet; for instance, one holds startled and skeptical-witnessed various-the-themes, as auditorium manager Freddie Lasker considers one ‘classic example of how city officials react in a dramatic fashion,’ like many World War I-era locations where Columbus—with its mix of German, Austrian, and Scottish immigrants, alongside non-German foreigners and settled Irish—saw minimal Jewish recruitment despite New York’s 150-year Jewish presence, while ethnic German influences led to Italian surnames and epithets reflecting local culture |
HC [46] | documentary titled Secret Story-Game-Second Strip—although comedy does make claims—titled Second Strip—feature fictional TV featuring David Campbell originally titled Episode—Larry Liar starring David Roberts whom Howard originally titled Al Liar starring Robert P. Smith creator Howard Terry Carell himself previously portrayed Joe Porky starring Sam Roberts despite having written prior titles fictional version himself playing Tom Liar himself earlier aired TV plays titled His Son—You Don Will Live—playing himself starring Robert |
AC [48] | non structured loosely structured sequence according similar terms specified today among comedy films popular genre written towards Western genres performed similarly titled respectively adaptations films published towards western genres written toward Western genres written toward western genres those previously popular throughout Spaghetti genres written towards Western genres produced among genres whose novels feature darker genres involved non melodrama plot genre audiences executed directly towards Western genres involving genres important characters written specifically directly toward Western genres? Like Rosemary |
PBT [14] | increasingly stringent controls regulating events caused throughout nature itself—see role changes section!—leading me somewhat less pleased today how you’re using action again without killing humans despite controlling incidents throughout Earth mythology according controls within itself without resorting strictly using rules enforced across mankind itself—e.g., murder outside themselves even less popular attacks within Europe within India according controls—giving power solely control across humankind overall according how regulations regulate certain types—particularly human behavior using methods similar compared |
References
- Bhattacharjya, A.; Zhong, X.; Wang, J. Strong, efficient and reliable personal messaging peer to peer architecture based on hybrid RSA. In Proceedings of the International Conference on Internet of Things and Cloud Computing (ICC 2016), The Møller Centre-Churchill College, Cambridge, UK, 22–23 March 2016; ISBN 978-1-4503-4063-2/16/03.
- Kumar, J.R.H.; Bhargavramu, N.; Durga, L.S.N.; Nimmagadda, D.; Bhattacharjya, A. Blockchain Based Traceability in Computer Peripherals in Universities Scenarios. In Proceedings of the 2023 3rd International Conference on Electronic and Electrical Engineering and Intelligent System (ICE3IS), Yogyakarta, Indonesia, 9–10 August 2023.
- Cachin, C. An Information-Theoretic Model for Steganography; International Workshop on Information Hiding; Springer: Berlin/Heidelberg, Germany, 1998.
- Wu, N.; Shang, P.; Fan, J.; Yang, Z.; Ma, W.; Liu, Z. Research on coverless text steganography based on single bit rules. J. Phys. Conf. Ser. 2019, 1237, 022077.
- Luo, Y.; Huang, Y.; Li, F.; Chang, C. Text steganography based on ci-poetry generation using Markov chain model. KSII Trans. Internet Inf. Syst. (TIIS) 2016, 10, 4568–4584.
- Luo, Y.; Huang, Y. Text steganography with high embedding rate: Using recurrent neural networks to generate Chinese classic poetry. In Proceedings of the 5th ACM Workshop on Information Hiding and Multimedia Security, Philadelphia, PA, USA, 20–21 June 2017.
- Tong, Y.; Liu, Y.; Wang, J.; Xin, G. Text steganography on RNN-generated lyrics. Math. Biosci. Eng. 2019, 16, 5451–5463.
- Kingma, D.P.; Welling, M. Auto-encoding variational bayes. arXiv 2013, arXiv:1312.6114.
- Creswell, A.; White, T.; Dumoulin, V.; Arulkumaran, K.; Sengupta, B.; Bharath, A.A. Generative adversarial networks: An overview. IEEE Signal Process. Mag. 2018, 35, 53–65.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is all you need. In Advances in Neural Information Processing Systems; Curran Associates Inc.: Red Hook, NY, USA, 2017; Volume 30, pp. 6000–6010.
- Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805.
- Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I. Improving Language Understanding by Generative Pre-Training. 2018, Volume 3. Available online: https://www.mikecaptain.com/resources/pdf/GPT-1.pdf (accessed on 4 January 2025).
- Ziegler, Z.M.; Deng, Y.; Rush, A.M. Neural linguistic steganography. arXiv 2019, arXiv:1909.01496.
- Yang, Z.-L.; Guo, X.-Q.; Chen, Z.-M.; Huang, Y.-F.; Zhang, Y.-J. RNN-stega: Linguistic steganography based on recurrent neural networks. IEEE Trans. Inf. Forensics Secur. 2018, 14, 1280–1295.
- Dai, F.Z.; Cai, Z. Towards near-imperceptible steganographic text. arXiv 2019, arXiv:1907.06679.
- Fang, T.; Jaggi, M.; Argyraki, K. Generating steganographic text with LSTMs. arXiv 2017, arXiv:1705.10742.
- Xiang, L.; Yang, S.; Liu, Y.; Li, Q.; Zhu, C. Novel linguistic steganography based on character-level text generation. Mathematics 2020, 8, 1558.
- Adeeb, O.F.A.; Kabudian, S.J. Arabic text steganography based on deep learning methods. IEEE Access 2022, 10, 94403–94416.
- Yang, Z.-L.; Zhang, S.-Y.; Hu, Y.-T.; Hu, Z.-W.; Huang, Y.-F. VAE-Stega: Linguistic steganography based on variational auto-encoder. IEEE Trans. Inf. Forensics Secur. 2020, 16, 880–895.
- Zhou, X.; Peng, W.; Yang, B.; Wen, J.; Xue, Y.; Zhong, P. Linguistic steganography based on adaptive probability distribution. IEEE Trans. Dependable Secur. Comput. 2021, 19, 2982–2997.
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. Commun. ACM 2014, 63, 139–144.
- Yi, B.; Wu, H.; Feng, G.; Zhang, X. ALiSa: Acrostic linguistic steganography based on BERT and Gibbs sampling. IEEE Signal Process. Lett. 2022, 29, 687–691.
- Cao, Y.; Zhou, Z.; Chakraborty, C.; Wang, M.; Wu, Q.M.J.; Sun, X.; Yu, K. Generative steganography based on long readable text generation. IEEE Trans. Comput. Soc. Syst. 2022, 11, 4584–4594.
- Yan, R.; Yang, Y.; Song, T. A secure and disambiguating approach for generative linguistic steganography. IEEE Signal Process. Lett. 2023, 30, 1047–1051.
- Ding, C.; Fu, Z.; Yang, Z.; Yu, Q.; Li, D.; Huang, Y. Context-aware Linguistic Steganography Model Based on Neural Machine Translation. IEEE/ACM Trans. Audio Speech Lang. Process. 2023, 32, 868–878.
- Rajba, P.; Keller, J.; Mazurczyk, W. Proof-of-work based new encoding scheme for information hiding purposes. In Proceedings of the 18th International Conference on Availability, Reliability and Security, Benevento, Italy, 29 August–1 September 2023.
- Yu, L.; Lu, Y.; Yan, X.; Wang, X. Generative Text Steganography via Multiple Social Network Channels Based on Transformers. In Proceedings of the CCF International Conference on Natural Language Processing and Chinese Computing, Guilin, China, 24–25 September 2022; Springer International Publishing: Cham, Switzerland, 2022.
- Huang, C.; Yang, Z.; Hu, Z.; Yang, J.; Qi, H.; Zhang, J.; Zheng, L. DNA Synthetic Steganography Based on Conditional Probability Adaptive Coding. IEEE Trans. Inf. Forensics Secur. 2023, 18, 4747–4759.
- Li, Y.; Zhang, R.; Liu, J.; Lei, Q. A Semantic Controllable Long Text Steganography Framework Based on LLM Prompt Engineering and Knowledge Graph. IEEE Signal Process. Lett. 2024, 31, 2610–2614.
- Wu, J.; Wu, Z.; Xue, Y.; Wen, J.; Peng, W. Generative text steganography with large language model. In Proceedings of the 32nd ACM International Conference on Multimedia, Melbourne, VIC, Australia, 28 October–1 November 2024.
- Lin, K.; Luo, Y.; Zhang, Z.; Ping, L. Zero-shot Generative Linguistic Steganography. arXiv 2024, arXiv:2403.10856.
- Sun, B.; Li, Y.; Zhang, J.; Xu, H.; Ma, X.; Xia, P. Topic Controlled Steganography via Graph-to-Text Generation. CMES-Computer Model. Eng. Sci. 2023, 136, 157–176.
- Pang, K. FreStega: A Plug-and-Play Method for Boosting Imperceptibility and Capacity in Generative Linguistic Steganography for Real-World Scenarios. arXiv 2024, arXiv:2412.19652.
- Huang, Y.-S.; Just, P.; Narayanan, K.; Tian, C. OD-Stega: LLM-Based Near-Imperceptible Steganography via Optimized Distributions. arXiv 2024, arXiv:2410.04328.
- Bai, M.; Yang, J.; Pang, K.; Huang, Y.; Gao, Y. Semantic Steganography: A Framework for Robust and High-Capacity Information Hiding using Large Language Models. arXiv 2024, arXiv:2412.11043.
- Ding, J.; Chen, K.; Wang, Y.; Zhao, N.; Zhang, W.; Yu, N. Discop: Provably Secure Steganography in Practice Based on Distribution Copies. In Proceedings of the 2023 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 21–25 May 2023; pp. 2238–2255.
- Wen, J.; Zhou, X.; Zhong, P.; Xue, Y. Convolutional neural network based text steganalysis. IEEE Signal Process. Lett. 2019, 26, 460–464.
- Yang, Z.; Wang, K.; Li, J.; Huang, Y.; Zhang, Y.-J. TS-RNN: Text steganalysis based on recurrent neural networks. IEEE Signal Process. Lett. 2019, 26, 1743–1747.
- Yang, Z.; Huang, Y.; Zhang, Y.-J. A fast and efficient text steganalysis method. IEEE Signal Process. Lett. 2019, 26, 627–631.
- Niu, Y.; Wen, J.; Zhong, P.; Xue, Y. A hybrid R-BILSTM-C neural network based text steganalysis. IEEE Signal Process. Lett. 2019, 26, 1907–1911.
- Peng, W.; Li, S.; Qian, Z.; Zhang, X. Text steganalysis based on hierarchical supervised learning and dual attention mechanism. IEEE/ACM Trans. Audio Speech Lang. Process. 2023, 31, 3513–3526.
- Huang, K.; Zhang, Z.; Wei, Y.; Zhang, T.; Yang, Z.; Zhou, L. GSDFuse: Capturing Cognitive Inconsistencies from Multi-Dimensional Weak Signals in Social Media Steganalysis. arXiv 2025, arXiv:2505.17085.
- Yang, Z.; Dai, Z.; Yang, Y.; Carbonell, J.; Salakhutdinov, R.; Le, Q.V. Xlnet: Generalized autoregressive pretraining for language understanding. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019.
- Dai, Z.; Yang, Z.; Yang, Y.; Carbonell, J.; Le, Q.; Salakhutdinov, R. Transformer-xl: Attentive language models beyond a fixed-length context. arXiv 2019, arXiv:1901.02860.
- Wikipedia Contributors. Letter Frequency. Wikipedia, The Free Encyclopedia. Available online: https://en.wikipedia.org/wiki/Letter_frequency (accessed on 22 August 2025).
- Huffman, D.A. A method for the construction of minimum-redundancy codes. Proc. IRE 1952, 40, 1098–1101.
- Graves, A. Long short-term memory. In Supervised Sequence Labelling with Recurrent Neural Networks; Springer: Berlin/Heidelberg, Germany, 2012; pp. 37–45.
- Langdon, G.; Rissanen, J. A simple general binary source code (Corresp.). IEEE Trans. Inf. Theory 1982, 28, 800–803.
Character Order

| Order | Characters (in order) |
|---|---|
| 1–8 | Space e t a o i n s |
| 9–16 | h r d l c u m w |
| 17–24 | f g y p b v k j |
| 25–32 | x q z 0 1 2 3 4 |
| 33–40 | 5 6 7 8 9 . , " |
| 41–48 | ' ! ? ; : ( ) [ |
| 49–56 | ] { } @ # $ % ^ |
| 57–64 | & * _ - + = ` ~ |
| 65–72 | / \ \| < > E T A |
| 73–80 | O I N S H R L D |
| 81–88 | C U M W F G P Y |
| 89–96 | B K V J X Q Z NUL |
| 97–104 | SOH STX ETX EOT ENQ ACK BEL BS |
| 105–112 | HT LF VT FF CR SO SI DLE |
| 113–120 | DC1 DC2 DC3 DC4 NAK SYN ETB CAN |
| 121–128 | EM SUB ESC FS GS RS US DEL |
Group No. | 0 | 1 | 2 | 3 | 4 | 5 |
---|---|---|---|---|---|---|
Elements within a group | {1–2} | {3–6} | {7–14} | {15–30} | {31–62} | {63–128} |
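Reading the two tables together, group g covers ranks [2^(g+1) − 1, 2^(g+2) − 2], with the last group stretched to rank 128 so all 128 characters are covered; group sizes therefore roughly double (2, 4, 8, 16, 32, 66), following the power-law shape of character frequencies. A minimal sketch of this rank-to-group mapping (my reading of the tables, not the authors' code):

```python
def group_of(rank: int) -> tuple:
    """Map a 1-based character rank (Character Order table) to
    (group_no, offset_within_group) per the grouping table above."""
    assert 1 <= rank <= 128, "ranks cover the 128 characters in the order table"
    for g in range(6):
        lo = 2 ** (g + 1) - 1                      # group starts: 1, 3, 7, 15, 31, 63
        hi = 128 if g == 5 else 2 ** (g + 2) - 2   # group ends:   2, 6, 14, 30, 62, 128
        if lo <= rank <= hi:
            return g, rank - lo + 1                # offset is 1-based within the group
    raise ValueError(rank)

# e.g., group_of(1) == (0, 1) for Space; group_of(5) == (1, 3) for 'o';
# group_of(128) == (5, 66) for DEL.
```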
| Method | CP Build Strategies | Size of CP | bpw |
|---|---|---|---|
| Our | top-k and top-p | 33 | 3.8 ± 0.2 |
| Discop [36] | | 32 | |
| RNN-stega [14] | | 16 / 32 | |
| VAE-stega [19] | | 16 / 32 | |
| LLM-stega [35] | | 32 | |
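The candidate pool in our scheme is built by jointly truncating XLNet's next-word distribution with top-k and top-p (Algorithm 1). A minimal sketch of that truncation follows; k = 33 matches the pool size in the table above, while p = 0.95 is an assumed value, not one reported here:

```python
import numpy as np

def build_candidate_pool(probs: np.ndarray, k: int = 33, p: float = 0.95) -> np.ndarray:
    """Indices of the candidate pool: intersection of the top-k and top-p
    (nucleus) truncations of a next-word distribution. Assumed parameters."""
    order = np.argsort(probs)[::-1]                    # tokens by descending probability
    cumulative = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cumulative, p)) + 1   # shortest prefix with mass >= p
    n = min(k, cutoff)                                 # both truncations keep a prefix
    return order[:n]                                   # of the same sorted order
```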
Category | Configuration/Version | Category | Configuration/Version |
---|---|---|---|
GPU | NVIDIA A40, 46 GB | Operating system | Ubuntu 22.04 |
CUDA | 12.2 | PyTorch | 2.6.0+cu118 |
CPU | Intel Xeon | Transformers | 4.42.4 |
RAM | 128 GB | pandas | 2.2.2 |
| Method | Generation Time (s) | Extraction Time (s) | GPU (Allocation/Reservation) | CPU |
|---|---|---|---|---|
| Our | 2.95 | 2.00 | 1406.1 MB / 1452.0 MB | 814.2 MB |
| Discop [36] | 3.54 | 2.77 | 1397.9 MB / 1506.0 MB | 2241.3 MB |
| RNN-stega [14] | 3.07 | 2.54 | 155.8 MB / 250.0 MB | 532.7 MB |
| VAE-stega [19] | 2.99 | 2.35 | 293.8 MB / 300.0 MB | 473.9 MB |
| LLM-stega [35] | 2.82 | 4.18 | 634.3 MB / 734.0 MB | 2404.5 MB |
| Method | bpw | | | | |
|---|---|---|---|---|---|
| Our-movie review | 3.8 ± 0.2 | 5.54 | 2.33 | 0.54 | 0.33 |
| Discop [36] | | 7.20 | 2.78 | 0.57 | 0.38 |
| RNN-stega [14] | | 33.36 | 8.20 | 0.45 | 0.29 |
| RNN-stega [14] | | 52.54 | 8.82 | 0.56 | 0.29 |
| VAE-stega [19] | | 22.67 | 4.17 | 0.75 | 0.38 |
| VAE-stega [19] | | 30.56 | 4.63 | 0.78 | 0.37 |
| Our-news | 3.9 ± 0.2 | 5.72 | 2.41 | 0.58 | 0.35 |
| Our-fairy tales | | 6.21 | 2.54 | 0.56 | 0.32 |
| LLM-stega [35] | | 8.32 | 3.57 | 0.53 | 0.28 |
Method | Time Complexity | Space Complexity |
---|---|---|
Our | ||
Discop [36] | ||
HC [47] | ||
AC [48] | ||
PBT [14] |
| Method | bpw | | | | |
|---|---|---|---|---|---|
| Our-3.8 | 3.8 | 5.54 | 2.33 | 0.54 | 0.33 |
| Discop-3.7 [36] | 3.7 | 7.20 | 2.78 | 0.57 | 0.38 |
| HC-4.3 [46] | 4.3 | 22.92 | 7.19 | 0.50 | 0.70 |
| AC-4.3 [48] | 4.3 | 20.40 | 6.50 | 0.47 | 0.70 |
| PBT-3.9 [14] | 3.9 | 27.31 | 7.78 | 0.66 | 0.58 |
| Method | CNN [37] | | | | TS-RNN [38] | | | | FCN [39] | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Metrics (%) | Acc | P | R | F1 | Acc | P | R | F1 | Acc | P | R | F1 |
| Our-3.8 | 92.35 | 89.74 | 95.55 | 92.36 | 75.82 | 79.27 | 66.65 | 71.63 | 65.60 | 59.87 | 97.69 | 73.76 |
| Discop-3.7 [36] | 85.72 | 83.37 | 89.91 | 85.97 | 71.23 | 73.12 | 67.99 | 68.43 | 85.59 | 77.81 | 99.00 | 87.02 |
| HC-4.3 [47] | 99.80 | 99.79 | 99.79 | 99.79 | 95.67 | 93.51 | 97.93 | 95.60 | 68.31 | 61.58 | 98.76 | 75.52 |
| AC-4.3 [48] | 99.80 | 99.86 | 99.72 | 99.79 | 95.12 | 93.46 | 96.74 | 94.99 | 71.41 | 72.88 | 83.66 | 73.02 |
| PBT-3.9 [14] | 99.78 | 99.93 | 99.61 | 99.77 | 94.45 | 91.17 | 98.18 | 94.46 | 69.84 | 62.75 | 99.97 | 76.75 |
| Method | RBILSTM [40] | | | | HiDuNet [41] | | | | GSDF [42] | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Metrics (%) | Acc | P | R | F1 | Acc | P | R | F1 | Acc | P | R | F1 |
| Our-3.8 | 98.53 | 98.01 | 98.95 | 98.47 | 90.62 | 88.04 | 94.33 | 90.60 | 72.33 | 70.06 | 78.00 | 73.82 |
| Discop-3.7 [36] | 92.48 | 89.90 | 95.97 | 92.46 | 92.00 | 88.68 | 96.55 | 91.97 | 52.67 | 51.74 | 98.03 | 67.73 |
| HC-4.3 [47] | 99.82 | 99.72 | 99.89 | 99.81 | 96.00 | 94.31 | 98.02 | 95.99 | 87.17 | 89.20 | 84.77 | 86.93 |
| AC-4.3 [48] | 99.73 | 99.79 | 99.65 | 99.72 | 95.00 | 93.36 | 97.04 | 94.99 | 88.33 | 88.57 | 89.14 | 88.85 |
| PBT-3.9 [14] | 99.83 | 99.79 | 99.86 | 99.82 | 96.62 | 94.79 | 98.76 | 96.62 | 91.78 | 90.12 | 95.09 | 92.54 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).