LLM-Guided Weighted Contrastive Learning with Topic-Aware Masking for Efficient Domain Adaptation: A Case Study on Pulp-Era Science Fiction
Abstract
1. Introduction
- Domain-aware weighted contrastive learning framework: Introduction of a systematic topic-modeling-based masking approach combined with graduated LLM similarity feedback that significantly outperforms generic masking strategies while remaining computationally efficient, requiring only 2 training epochs versus the 10–40 epochs typical of conventional approaches;
- SF-ProbeEval benchmark: Development of the first specialized benchmark for early 20th-century science fiction text understanding, comprising five linguistically targeted probing tasks that address the gap in domain-specific evaluation frameworks for historical literary corpora;
- Comprehensive empirical validation: Rigorous experimental evaluation demonstrating consistent improvements across multiple linguistic metrics with practical applicability for specialized text processing scenarios where traditional fine-tuning is resource-intensive or impractical.
2. Related Works
2.1. Mathematical Formulation of Domain Adaptation
2.2. Domain Adaptation for Specialized Text Collections
2.3. Contrastive Learning and Domain-Aware Augmentation
3. Methodology
3.1. Problem Formulation
3.2. Domain-Specific Dataset Construction
3.3. Language Model Architectures
3.4. LLM-Guided Contrastive Pair Generation
3.4.1. Domain-Aware Masking Strategy
- Single-keyword masking: targets individual domain-critical terms.
- Multiple-keyword masking: masks several domain-critical terms in the same sentence at once, producing broader contextual variation.
- Partial-keyword masking: masks one component of compound terms (e.g., “space-ships” → “<mask>-ships”); a minimal sketch of all three strategies follows this list.
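A minimal Python sketch of the three strategies is given below; the function name, the whole-word regex matching, and the cap on how many keywords are masked are illustrative assumptions rather than the paper's released implementation.

```python
import random
import re

MASK = "<mask>"

def mask_sentence(sentence: str, topic_keywords: list[str], strategy: str = "single") -> str:
    """Apply one of the three topic-aware masking strategies.

    Illustrative sketch only: keywords come from the BERTopic topic model, and
    matching is done with case-insensitive whole-word regexes.
    """
    # Keep only topic keywords that actually occur in this sentence.
    present = [kw for kw in topic_keywords
               if re.search(rf"\b{re.escape(kw)}\b", sentence, re.IGNORECASE)]
    if not present:
        return sentence  # nothing domain-critical to mask

    if strategy == "partial":
        # Partial-keyword masking: mask one component of a compound term,
        # e.g. "space-ships" -> "<mask>-ships".
        compound = next((kw for kw in present if "-" in kw), None)
        if compound is None:
            return sentence
        _, _, tail = compound.partition("-")
        return re.sub(re.escape(compound), f"{MASK}-{tail}",
                      sentence, count=1, flags=re.IGNORECASE)

    if strategy == "single":
        # Single-keyword masking: target one domain-critical term.
        targets = [random.choice(present)]
    elif strategy == "multiple":
        # Multiple-keyword masking: mask several terms for broader variation
        # (the cap of 3 is an illustrative choice, not the paper's setting).
        targets = random.sample(present, k=min(len(present), 3))
    else:
        raise ValueError(f"unknown strategy: {strategy}")

    masked = sentence
    for kw in targets:
        masked = re.sub(rf"\b{re.escape(kw)}\b", MASK, masked,
                        count=1, flags=re.IGNORECASE)
    return masked

# Toy usage with hypothetical topic keywords:
print(mask_sentence("The orbit of this planet was assuredly interior to the orbit of the earth.",
                    ["orbit", "planet", "satellite"], strategy="multiple"))
```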
3.4.2. LLM-Based Sentence Generation and Scoring
“Your task is to rewrite the following early 20th-century science fiction sentence by replacing each <mask> token with the most appropriate and coherent word to complete the sentence grammatically while preserving the style and meaning. If you cannot fill in the blanks, just say None as your answer.”
“Your task is to rate the similarity of two sentences drawn from early 20th-century science fiction prose. Return a single floating-point score between 0 and 5 inclusive. Consider four aspects when judging similarity: 1. preservation of the core meaning, 2. retention or plausible substitution of SF devices, 3. consistency of temporal, spatial, and technological background, 4. fidelity to key SF topic words.”
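The following sketch shows how these two prompts could drive pair generation and scoring with the OpenAI Python client; the helper names and message layout are assumptions, while the model, temperature, and token limits follow Appendix A.1.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The two prompts quoted above, stored verbatim (abbreviated here).
INFILL_PROMPT = "Your task is to rewrite the following early 20th-century science fiction sentence ..."
SCORE_PROMPT = "Your task is to rate the similarity of two sentences drawn from early 20th-century science fiction prose ..."

def infill(masked_sentence: str) -> str | None:
    """Ask the LLM to fill every <mask> token; returns None if it declines."""
    resp = client.chat.completions.create(
        model="gpt-4.1-mini", temperature=0.1, max_tokens=150,
        messages=[{"role": "system", "content": INFILL_PROMPT},
                  {"role": "user", "content": masked_sentence}],
    )
    text = resp.choices[0].message.content.strip()
    return None if text.lower() == "none" else text

def score(original: str, infilled: str) -> float:
    """Ask the LLM for a 0-5 similarity score between the original and infilled sentence."""
    resp = client.chat.completions.create(
        model="gpt-4.1-mini", temperature=0.1, max_tokens=10,
        messages=[{"role": "system", "content": SCORE_PROMPT},
                  {"role": "user", "content": f"Sentence 1: {original}\nSentence 2: {infilled}"}],
    )
    return max(0.0, min(5.0, float(resp.choices[0].message.content.strip())))
```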
3.5. Weighted Contrastive Learning Framework
- T denotes the sentence length (number of tokens);
- x_t represents the t-th token in the original sentence x;
- x′_t represents the t-th token in the LLM-generated sentence x′;
- 1[·] is the indicator function that returns 1 if the condition is true, 0 otherwise;
- D(x′, h)_t is the discriminator’s predicted probability that the token at position t has been replaced;
- h is the sentence-level embedding that provides contextual information to the discriminator (the objective these symbols define is reconstructed below).
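The equation these symbols define did not survive extraction. The LaTeX block below is a plausible reconstruction, assuming the conditional replaced-token-detection (RTD) objective of ELECTRA/DiffCSE that the notation suggests, together with one illustrative way the graduated LLM similarity score could weight the contrastive term; the exact weighting used in the paper may differ.

```latex
% Plausible RTD reconstruction (assumption): the discriminator D, conditioned on
% the sentence embedding h, predicts whether each token of the LLM-generated
% sentence x' differs from the original sentence x.
\mathcal{L}_{\mathrm{RTD}}
  = -\sum_{t=1}^{T}\Big[\,
      \mathbb{1}\big(x'_t \neq x_t\big)\,\log D(x', h)_t
    + \mathbb{1}\big(x'_t = x_t\big)\,\log\big(1 - D(x', h)_t\big)
    \Big]

% Illustrative weighted contrastive term (assumption): the LLM similarity score
% s_i \in [0, 5] for pair (x_i, x'_i) is rescaled to w_i = s_i / 5 and scales the
% InfoNCE term for that pair; sim(.,.) is cosine similarity and \tau a temperature.
\mathcal{L}_{\mathrm{con}}
  = -\sum_{i=1}^{N} w_i \,
    \log \frac{\exp\big(\mathrm{sim}(h_i, h'_i)/\tau\big)}
              {\sum_{j=1}^{N} \exp\big(\mathrm{sim}(h_i, h'_j)/\tau\big)}
```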
3.6. Implementation Details
4. Experimental Evaluation
4.1. SF-ProbeEval: A Domain-Specific Probing Benchmark
4.2. Performance Analysis
4.3. Embedding Quality Assessment
4.4. Ablation Study
5. Discussion
5.1. Effectiveness of AI-Generated Feedback
5.2. Computational Efficiency and Resource Constraints
5.3. Framework Design and Transferability
5.4. Limitations and Future Research Directions
6. Conclusions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A. Methodological Details
Appendix A.1. GPT-4.1-mini Prompts and API Configuration
- Model: GPT-4.1-mini.
- Temperature: 0.1 (for consistency).
- Max tokens: 150 for infilling, 10 for scoring.
- Retry attempts: 3 for failed requests.
- API calls: sequential processing with 1 s delays (a minimal retry wrapper under these settings is sketched below).
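A thin wrapper implementing the retry and delay settings above might look like the following sketch; the wrapper name and the breadth of the exception handling are assumptions.

```python
import time
from openai import OpenAI, OpenAIError

client = OpenAI()

def call_with_retries(messages: list[dict], max_tokens: int) -> str | None:
    """Sequential GPT-4.1-mini call with temperature 0.1, up to 3 attempts,
    and a 1 s delay between requests, per the configuration above."""
    for _attempt in range(3):
        try:
            resp = client.chat.completions.create(
                model="gpt-4.1-mini",
                temperature=0.1,
                max_tokens=max_tokens,  # 150 for infilling, 10 for scoring
                messages=messages,
            )
            time.sleep(1)  # keep requests sequential with a 1 s spacing
            return resp.choices[0].message.content
        except OpenAIError:
            time.sleep(1)  # brief pause before retrying a failed request
    return None  # give up after 3 failed attempts
```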
Original Sentence | Infilled Sentence | Score |
---|---|---|
We established ourselves with our apparatus in a building which he rented and went ahead with our experiments | We established ourselves with our equipment in a building which he rented and went ahead with our experiments | 5 |
They wouldn’t have the information to give the Hans, nor would they be capable of imparting it. | They wouldn’t have the technology to give the Hans, nor would they be capable of imparting it. | 4 |
The gas for this purpose is drawn from a hole tapped through the cliff. | The pipe for this purpose is drawn from a hole tapped through the cliff. | 3 |
Suppose a great star from outside should come into the solar system? | Suppose a great ship from outside should come into the harbor? | 2 |
Appendix A.2. Masking Strategy Examples
- Original: “At least, I have employed a ray destructive to haemoglobin—the red blood cells.”
- Masked: “At least, I have employed a <mask> destructive to haemoglobin—the red blood cells.”
- Original: “The orbit of this planet was assuredly interior to the orbit of the earth, because it accompanied the sun in its apparent motion; yet it was neither Mercury nor Venus, because neither one nor the other of these has any satellite at all.”
- Masked: “The <mask> of this <mask> was assuredly interior to the orbit of the earth, because it accompanied the sun in its apparent motion; yet it was neither Mercury nor Venus, because neither one nor the other of these has any <mask> at all.”
- Original: “in four great cones, or space-ships, to establish themselves upon earth.”
- Masked: “in four great cones, or <mask>-ships, to establish themselves upon earth.”
Appendix B. Extended Related Work Details
Appendix B.1. Evolution of Contrastive Learning Methods
Appendix B.2. Mathematical Formulations
Appendix C. Corpus Analysis Results
Appendix C.1. Complete Topic Classification
Topic ID | Label |
---|---|
0 | Asian Servant Stereotype |
1 | Bombardment Warfare |
2 | Interplanetary Diplomatic Council |
3 | Court Rituals |
4 | Fourth-Dimensional Geometry |
5 | Ornate Costume Descriptions |
6 | Optical Invisibility Experiment |
7 | Green Prism Miracle |
8 | Opulent Interior Architecture |
9 | Comic Planetary Voyage |
10 | Reckless Automobile Chase |
11 | Antitoxin Research |
12 | Clinical Hospital Drama |
13 | Shining One Cult |
14 | Lunar Surface Exploration |
15 | Neptunian Cataclysm |
16 | Scientific Crime Trial |
17 | Civilization Retrospective |
18 | Close-Quarters Combat |
19 | Industrial Capitalism |
20 | University Academia |
21 | Philosophy of Discovery |
22 | Carnivorous Creatures |
23 | Rocket Propulsion Engineering |
24 | Atomic Particle Physics |
25 | Aerial Fleet Warfare |
26 | Mountain Expedition |
27 | Luminous Vortex Phenomena |
28 | Polar Sea Voyage |
29 | Global Geography Overview |
Appendix C.2. Domain Keywords by Topic
Topic | Top Keywords |
---|---|
Neptunian Cataclysm | neptune, astronomical, planet, saturn, jupiter, solar system, comet, toward sun, uranus, astronomer, sky, orbit, telescope, sunward, moon |
Scientific Crime Trial | crime, trust, priestley, professor fleckner, detective, crime, district attorney, fleckner, prison, mystery, professor kempton, chandler, prisoner, judge, lawyer, murderer |
Carnivorous Creatures | prey, spider, insect, caterpillar, carnivorous, tribesman, bee, claw, fern, wasp, monster, spear, edible, mushroom, beetle |
Rocket Propulsion Engineering | gyroscope, rocket tube, power unit, apparatus, ship, velocity, generator, speed, pilot, acceleration, cylinder, unit, torpedo, mechanism, control room |
Court Rituals | servant, slave, majesty, ceremony, noble, lord, queen, prayer, master, royal, traitor, high priest, robe, monarch, sanctuary |
Appendix D. Statistical Significance Testing Results
Task | N | BERT acc | RoBERTa acc |
---|---|---|---|
BShift | 600 | −0.10% | +0.17% |
SOMO | 600 | +0.49% *** | +0.78% *** |
Coord_Inv | 594 | −0.09% | −0.05% |
TreeDepth | 360 | +0.00% | +0.00% |
WordContents | 350 | +0.00% | +0.00% |
Micro (BERT) | 2504 | +0.72% *** | |
Micro (RoBERTa) | 2504 | | +0.98% *** |
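This excerpt does not reproduce the test behind the significance markers above; one standard choice for comparing two classifiers on the same probing examples is McNemar's test on paired per-example correctness, sketched below as an assumption rather than the paper's procedure.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def paired_significance(correct_base: np.ndarray, correct_adapted: np.ndarray) -> float:
    """McNemar's exact test on paired 0/1 correctness arrays
    (e.g., N = 600 examples for BShift)."""
    both      = int(np.sum((correct_base == 1) & (correct_adapted == 1)))
    only_base = int(np.sum((correct_base == 1) & (correct_adapted == 0)))
    only_new  = int(np.sum((correct_base == 0) & (correct_adapted == 1)))
    neither   = int(np.sum((correct_base == 0) & (correct_adapted == 0)))
    table = [[both, only_base], [only_new, neither]]
    return mcnemar(table, exact=True).pvalue
```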
Appendix E. Validation Against Human Annotation
Appendix E.1. Expert Annotation Protocol and Agreement Analysis
- ICC(2,k): 0.975 (average measures).
- Krippendorff’s α: 0.929 (interval), 0.918 (ordinal), 0.640 (nominal) (a computation sketch follows below).
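Both agreement statistics can be recomputed from a raters-by-items score matrix with off-the-shelf packages; the sketch below uses the krippendorff and pingouin libraries with toy scores, which are illustrative assumptions rather than the study's annotation data.

```python
import numpy as np
import pandas as pd
import krippendorff    # pip install krippendorff
import pingouin as pg  # pip install pingouin

# Toy example: 3 expert annotators x 5 sentence pairs (not the study's data).
ratings = np.array([[5, 4, 3, 2, 1],
                    [5, 4, 2, 2, 1],
                    [4, 4, 3, 1, 2]], dtype=float)

# Krippendorff's alpha at the interval level (rows = raters, columns = items).
alpha_interval = krippendorff.alpha(reliability_data=ratings,
                                    level_of_measurement="interval")

# ICC(2,k) via pingouin, which expects long-format data.
n_raters, n_items = ratings.shape
long = pd.DataFrame({
    "item": np.tile(np.arange(n_items), n_raters),
    "rater": np.repeat(np.arange(n_raters), n_items),
    "score": ratings.ravel(),
})
icc_table = pg.intraclass_corr(data=long, targets="item", raters="rater", ratings="score")
icc2k = icc_table.loc[icc_table["Type"] == "ICC2k", "ICC"].item()
print(f"alpha (interval) = {alpha_interval:.3f}, ICC(2,k) = {icc2k:.3f}")
```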
Appendix E.2. GPT vs. Expert Score Comparison
Metric | Value [95% CI] |
---|---|
Pearson correlation (r) | 0.607 [0.522, 0.680] *** |
Spearman correlation (ρ) | 0.638 [0.561, 0.704] *** |
Mean Absolute Error | 0.987 |
Root Mean Squared Error | 1.343 |
Mean bias (GPT − Expert) | −0.347 |
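These comparison metrics follow directly from paired GPT and expert score lists; the sketch below uses SciPy and NumPy with toy values in place of the study's annotations. Confidence intervals for r and ρ are typically obtained with a Fisher z transform or by bootstrap resampling of the pairs.

```python
import numpy as np
from scipy import stats

# Toy paired scores (illustrative only, not the study's data).
gpt    = np.array([4.0, 3.5, 2.0, 5.0, 1.5, 3.0])
expert = np.array([4.3, 3.0, 2.7, 4.8, 2.0, 3.5])

r, r_p     = stats.pearsonr(gpt, expert)      # Pearson correlation and p-value
rho, rho_p = stats.spearmanr(gpt, expert)     # Spearman rank correlation
mae  = np.mean(np.abs(gpt - expert))          # Mean Absolute Error
rmse = np.sqrt(np.mean((gpt - expert) ** 2))  # Root Mean Squared Error
bias = np.mean(gpt - expert)                  # Mean bias (GPT - Expert)
```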
Appendix E.3. Sample Size Adequacy and Confidence Intervals
References
- Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, MN, USA, 2–7 June 2019; Burstein, J., Doran, C., Solorio, T., Eds.; pp. 4171–4186. [Google Scholar] [CrossRef]
- Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; Stoyanov, V. Roberta: A robustly optimized bert pretraining approach. arXiv 2019, arXiv:1907.11692. [Google Scholar]
- Gururangan, S.; Marasović, A.; Swayamdipta, S.; Lo, K.; Beltagy, I.; Downey, D.; Smith, N.A. Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks. arXiv 2020, arXiv:2004.10964. [Google Scholar]
- Qiu, X.; Sun, T.; Xu, Y.; Shao, Y.; Dai, N.; Huang, X. Pre-trained models for natural language processing: A survey. Sci. China Technol. Sci. 2020, 63, 1872–1897. [Google Scholar] [CrossRef]
- Ling, C.; Zhao, X.; Lu, J.; Deng, C.; Zheng, C.; Wang, J.; Chowdhury, T.; Li, Y.; Cui, H.; Zhang, X.; et al. Domain specialization as the key to make large language models disruptive: A comprehensive survey. arXiv 2023, arXiv:2305.18703. [Google Scholar]
- Li, B.; Zhou, H.; He, J.; Wang, M.; Yang, Y.; Li, L. On the sentence embeddings from pre-trained language models. arXiv 2020, arXiv:2011.05864. [Google Scholar] [CrossRef]
- Manjavacas, E.; Fonteyn, L. Adapting vs. pre-training language models for historical languages. J. Data Min. Digit. Humanit. 2022, NLP4DH. [Google Scholar] [CrossRef]
- Clark, K.; Luong, M.T.; Le, Q.V.; Manning, C.D. Electra: Pre-training text encoders as discriminators rather than generators. arXiv 2020, arXiv:2003.10555. [Google Scholar]
- Sun, Y.; Wang, S.; Feng, S.; Ding, S.; Pang, C.; Shang, J.; Liu, J.; Chen, X.; Zhao, Y.; Lu, Y.; et al. Ernie 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation. arXiv 2021, arXiv:2107.02137. [Google Scholar]
- Gao, T.; Yao, X.; Chen, D. SimCSE: Simple Contrastive Learning of Sentence Embeddings. arXiv 2021, arXiv:2104.08821. [Google Scholar]
- Yan, Y.; Li, R.; Wang, S.; Zhang, F.; Wu, W.; Xu, W. Consert: A contrastive framework for self-supervised sentence representation transfer. arXiv 2021, arXiv:2105.11741. [Google Scholar]
- Giorgi, J.; Nitski, O.; Wang, B.; Bader, G. Declutr: Deep contrastive learning for unsupervised textual representations. arXiv 2020, arXiv:2006.03659. [Google Scholar]
- Chuang, Y.S.; Dangovski, R.; Luo, H.; Zhang, Y.; Chang, S.; Soljačić, M.; Li, S.W.; Yih, W.T.; Kim, Y.; Glass, J. DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings. arXiv 2022, arXiv:2204.10298. [Google Scholar]
- Cheng, Q.; Yang, X.; Sun, T.; Li, L.; Qiu, X. Improving Contrastive Learning of Sentence Embeddings from AI Feedback. In Proceedings of the Findings of the Association for Computational Linguistics: ACL 2023, Toronto, ON, Canada, 9–14 July 2023; Rogers, A., Boyd-Graber, J., Okazaki, N., Eds.; pp. 11122–11138. [Google Scholar] [CrossRef]
- Zhang, Y.; He, R.; Liu, Z.; Lim, K.H.; Bing, L. An unsupervised sentence embedding method by mutual information maximization. arXiv 2020, arXiv:2009.12061. [Google Scholar]
- Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language models are few-shot learners. Adv. Neural Inf. Process. Syst. 2020, 33, 1877–1901. [Google Scholar]
- Ouyang, L.; Wu, J.; Jiang, X.; Almeida, D.; Wainwright, C.; Mishkin, P.; Zhang, C.; Agarwal, S.; Slama, K.; Ray, A.; et al. Training language models to follow instructions with human feedback. Adv. Neural Inf. Process. Syst. 2022, 35, 27730–27744. [Google Scholar]
- Zhou, K.; Zhang, B.; Zhao, W.X.; Wen, J.R. Debiased contrastive learning of unsupervised sentence representations. arXiv 2022, arXiv:2205.00656. [Google Scholar] [CrossRef]
- Kim, Y.; Oh, D.; Huang, H.H. SynCSE: Syntax Graph-based Contrastive Learning of Sentence Embeddings. Expert Syst. Appl. 2025, 287, 128047. [Google Scholar] [CrossRef]
- Grootendorst, M. BERTopic: Neural topic modeling with a class-based TF-IDF procedure. arXiv 2022, arXiv:2203.05794. [Google Scholar]
- Zhao, H.; Des Combes, R.T.; Zhang, K.; Gordon, G. On learning invariant representations for domain adaptation. In Proceedings of the 36th International Conference on Machine Learning (ICML 2019), Long Beach, CA, USA, 9–15 June 2019; Chaudhuri, K., Salakhutdinov, R., Eds.; Proceedings of Machine Learning Research: Cambridge, MA, USA, 2019; Volume 97, pp. 7523–7532. [Google Scholar]
- Zhang, Y.; Liu, T.; Long, M.; Jordan, M. Bridging theory and algorithm for domain adaptation. In Proceedings of the 36th International Conference on Machine Learning (ICML 2019), Long Beach, CA, USA, 9–15 June 2019; Chaudhuri, K., Salakhutdinov, R., Eds.; Proceedings of Machine Learning Research: Cambridge, MA, USA, 2019; Volume 97, pp. 7404–7413. [Google Scholar]
- Acuna, D.; Zhang, G.; Law, M.T.; Fidler, S. f-domain adversarial learning: Theory and algorithms. In Proceedings of the 38th International Conference on Machine Learning (ICML 2021), Virtual, 18–24 July 2021; Proceedings of Machine Learning Research: Cambridge, MA, USA, 2021; Volume 139, pp. 66–75. [Google Scholar]
- He, Y.; Wang, H.; Li, B.; Zhao, H. Gradual domain adaptation: Theory and algorithms. J. Mach. Learn. Res. 2024, 25, 1–40. [Google Scholar]
- Pham, T.H.; Wang, Y.; Yin, C.; Zhang, X.; Zhang, P. Open-Set Heterogeneous Domain Adaptation: Theoretical Analysis and Algorithm. arXiv 2024, arXiv:2412.13036. [Google Scholar] [CrossRef]
- Wang, Z.; Mao, Y. Information-theoretic analysis of unsupervised domain adaptation. arXiv 2022, arXiv:2210.00706. [Google Scholar]
- Chen, Y.; Li, S.; Li, Y.; Atari, M. Surveying the Dead Minds: Historical-Psychological Text Analysis with Contextualized Construct Representation (CCR) for Classical Chinese. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, Miami, FL, USA, 12–16 November 2024; Al-Onaizan, Y., Bansal, M., Chen, Y.N., Eds.; pp. 2597–2615. [Google Scholar] [CrossRef]
- Wan, Z.; Zhang, Y.; Wang, Y.; Cheng, F.; Kurohashi, S. Reformulating Domain Adaptation of Large Language Models as Adapt-Retrieve-Revise: A Case Study on Chinese Legal Domain. In Proceedings of the Findings of the Association for Computational Linguistics: ACL 2024, Bangkok, Thailand, 11–16 August 2024; Ku, L.W., Martins, A., Srikumar, V., Eds.; pp. 5030–5041. [Google Scholar] [CrossRef]
- Manjavacas Arevalo, E.; Fonteyn, L. MacBERTh: Development and Evaluation of a Historically Pre-trained Language Model for English (1450–1950). In Proceedings of the Workshop on Natural Language Processing for Digital Humanities, Silchar, India, 16–19 December 2021; Hämäläinen, M., Alnajjar, K., Partanen, N., Rueter, J., Eds.; pp. 23–36. [Google Scholar]
- Xu, J.; Shao, W.; Chen, L.; Liu, L. SimCSE++: Improving contrastive learning for sentence embeddings from two perspectives. arXiv 2023, arXiv:2305.13192. [Google Scholar]
- Jiang, T.; Jiao, J.; Huang, S.; Zhang, Z.; Wang, D.; Zhuang, F.; Wei, F.; Huang, H.; Deng, D.; Zhang, Q. Promptbert: Improving bert sentence embeddings with prompts. arXiv 2022, arXiv:2201.04337. [Google Scholar] [CrossRef]
- Su, J.; Cao, J.; Liu, W.; Ou, Y. Whitening sentence representations for better semantics and faster retrieval. arXiv 2021, arXiv:2103.15316. [Google Scholar] [CrossRef]
- Wu, X.; Gao, C.; Zang, L.; Han, J.; Wang, Z.; Hu, S. Esimcse: Enhanced sample building method for contrastive learning of unsupervised sentence embedding. arXiv 2021, arXiv:2109.04380. [Google Scholar]
- Schick, T.; Schütze, H. Generating datasets with pretrained language models. arXiv 2021, arXiv:2104.07540. [Google Scholar] [CrossRef]
- Meng, Y.; Huang, J.; Zhang, Y.; Han, J. Generating training data with language models: Towards zero-shot language understanding. Adv. Neural Inf. Process. Syst. 2022, 35, 462–477. [Google Scholar]
- Bai, Y.; Kadavath, S.; Kundu, S.; Askell, A.; Kernion, J.; Jones, A.; Chen, A.; Goldie, A.; Mirhoseini, A.; McKinnon, C.; et al. Constitutional ai: Harmlessness from ai feedback. arXiv 2022, arXiv:2212.08073. [Google Scholar] [CrossRef]
- Maaten, L.V.D.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
- Kiros, R.; Zhu, Y.; Salakhutdinov, R.R.; Zemel, R.; Urtasun, R.; Torralba, A.; Fidler, S. Skip-thought vectors. In Proceedings of the Advances in Neural Information Processing Systems 28 (NeurIPS 2015), Montreal, QC, Canada, 7–12 December 2015; pp. 3294–3302. [Google Scholar]
- Logeswaran, L.; Lee, H. An efficient framework for learning sentence representations. arXiv 2018, arXiv:1803.02893. [Google Scholar] [CrossRef]
- Kim, T.; Yoo, K.M.; Lee, S.G. Self-guided contrastive learning for BERT sentence representations. arXiv 2021, arXiv:2106.07345. [Google Scholar] [CrossRef]
- Carlsson, F.; Gogoulou, E.; Ylipää, E.; Cuba Gyllensten, A.; Sahlgren, M. Semantic re-tuning with contrastive tension. In Proceedings of the International Conference on Learning Representations (ICLR 2021), Online, 3–7 May 2021. Poster presentation. [Google Scholar]
- Cohen, J. Statistical Power Analysis for the Behavioral Sciences; Routledge: New York, NY, USA, 2013. [Google Scholar]
Encoder | Learning Rate | Masking Ratio | λ | Epochs | Batch Size |
---|---|---|---|---|---|
BERT-base | 7 | 0.30 | 0.005 | 2 | 64 |
RoBERTa-base | 1 | 0.20 | 0.005 | 2 | 64 |
Task | Description |
---|---|
Word Contents | Identify science fiction terminology and archaic vocabulary, evaluating adaptation to period-specific lexicons including scientific devices, astronomical terms, and technological concepts from pulp-era narratives. |
Tree Depth | Predict syntactic complexity levels in vintage prose, assessing understanding of elaborate sentence constructions characteristic of 1920s–1930s science fiction writing styles. |
BShift (Bigram Shift) | Detect local syntactic perturbations in period-appropriate word sequences, measuring sensitivity to historical word order patterns and archaic grammatical structures. |
SOMO (Semantic Odd Man Out) | Identify semantic anomalies within science fiction contexts, evaluating understanding of genre-specific relationships including scientific speculation and technological innovation concepts. |
Coord_Inv (Coordinate Inversion) | Detect structural modifications in complex vintage sentences, assessing comprehension of elaborate discourse patterns typical of early science fiction literary style. |
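The five tasks above follow the SentEval probing format; the usual evaluation protocol trains a lightweight classifier on frozen sentence embeddings, as in the sketch below. The choice of a logistic-regression probe, [CLS] pooling, and the bert-base-uncased checkpoint are illustrative assumptions, not necessarily the paper's exact setup.

```python
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")  # or the domain-adapted checkpoint
encoder.eval()

@torch.no_grad()
def embed(sentences: list[str]) -> np.ndarray:
    """Frozen sentence embeddings via [CLS] pooling (mean pooling is a common alternative)."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    return encoder(**batch).last_hidden_state[:, 0].cpu().numpy()

def probe_accuracy(train_x, train_y, test_x, test_y) -> float:
    """Train a linear probe on frozen embeddings and report test accuracy for one task."""
    probe = LogisticRegression(max_iter=1000)
    probe.fit(embed(train_x), train_y)
    return accuracy_score(test_y, probe.predict(embed(test_x)))
```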
Model | Method | Word Contents (%) | Tree Depth (%) | BShift (%) | SOMO (%) | Coord_Inv (%) | Avg (%) |
---|---|---|---|---|---|---|---|
BERT | BERT-base | 47.14 | 19.72 | 67.50 | 63.33 | 79.80 | 55.50 |
BERT | SimCSE | 71.43 | 17.78 | 64.17 | 59.50 | 69.87 | 56.55 |
BERT | DiffCSE | 73.71 | 13.33 | 62.67 | 61.67 | 67.17 | 55.71 |
BERT | Proposed | 79.71 | 18.33 | 69.67 | 67.00 | 80.64 | 63.07 |
RoBERTa | RoBERTa-base | 56.00 | 20.00 | 67.17 | 55.17 | 72.90 | 54.25 |
RoBERTa | SimCSE | 74.86 | 13.89 | 48.33 | 60.67 | 48.99 | 49.35 |
RoBERTa | DiffCSE | 78.29 | 15.28 | 53.33 | 58.33 | 51.52 | 51.35 |
RoBERTa | Proposed | 80.86 | 13.33 | 66.67 | 56.50 | 74.07 | 58.29 |
Model | Method | Word Contents (%) | Tree Depth (%) | BShift (%) | SOMO (%) | Coord_Inv (%) | Avg (%) |
---|---|---|---|---|---|---|---|
BERT | SimCSE | 69.43 | 16.94 | 64.17 | 61.33 | 75.93 | 57.56 |
BERT | DiffCSE | 66.57 | 18.61 | 70.17 | 66.00 | 78.96 | 60.06 |
BERT | Proposed | 79.71 | 18.33 | 69.67 | 67.00 | 80.64 | 63.07 |
RoBERTa | SimCSE | 71.14 | 20.83 | 47.50 | 47.00 | 59.43 | 49.18 |
RoBERTa | DiffCSE | 72.00 | 18.89 | 52.00 | 48.17 | 66.16 | 51.44 |
RoBERTa | Proposed | 80.86 | 13.33 | 66.67 | 56.50 | 74.07 | 58.29 |