Enhancing Text-to-SPARQL Generation via In-Context Learning with Example Selection Strategies
Abstract
1. Introduction
2. Related Works
3. Methodology
3.1. Structural Classification of RDF Triples
3.2. System Framework
3.3. Few-Shot Prompt Design
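A few-shot prompt concatenates the k selected (question, SPARQL) pairs ahead of the target question. Below is a minimal template sketch; the instruction wording and field labels are illustrative assumptions, not the paper's exact prompt.

```python
# Hypothetical few-shot prompt template for text-to-SPARQL generation.
# The instruction sentence and the "Question:"/"SPARQL:" labels are
# illustrative assumptions; the paper's exact prompt is not shown here.
def build_prompt(examples: list[dict], question: str) -> str:
    parts = ["Translate the natural-language question into a SPARQL query.\n"]
    for ex in examples:
        # Each in-context example pairs a question with its gold SPARQL query.
        parts.append(f"Question: {ex['question']}\nSPARQL: {ex['sparql']}\n")
    # The target question ends the prompt so the model completes the query.
    parts.append(f"Question: {question}\nSPARQL:")
    return "\n".join(parts)
```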
3.4. Example Selection
- FR: Examples are sampled uniformly at random from the entire training set.
- SS: The input question and all training questions are encoded into vectors, and the examples most similar to the input under cosine similarity are selected.
- STR: The RDF triples classifier first assigns the target question to a structural type, and examples are then sampled at random from within that category.
- STSS: The target question is assigned to a structural type, and the most semantically similar examples are selected from within that category, combining STR's filtering with SS's ranking (a sketch of all four strategies follows this list).
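The four strategies differ only in how the candidate pool is formed (the full training set versus one structural category) and how candidates are ranked (random sampling versus cosine similarity). A minimal sketch, assuming a sentence-transformers encoder, a hypothetical classify_structure() stub in place of the paper's RDF triples classifier, and training records stored as dicts with question, sparql, and structure fields:

```python
# Minimal sketch of the four example-selection strategies (FR, SS, STR, STSS).
# Assumptions not taken from the paper: the embedding model name, the
# classify_structure() stub, and the shape of the training records.
import random
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def classify_structure(question: str) -> str:
    """Stand-in for the RDF triples structural classifier (hypothetical)."""
    return "multi-triple" if " and " in question else "single-triple"

def cosine_top_k(query_vec: np.ndarray, matrix: np.ndarray, k: int) -> np.ndarray:
    # Cosine similarity reduces to a dot product on L2-normalized vectors.
    sims = matrix @ query_vec
    return np.argsort(-sims)[:k]

def select_examples(question, train, train_vecs, strategy="STSS", k=3):
    """train_vecs: L2-normalized embeddings of the training questions."""
    if strategy == "FR":  # random over the full training set
        return random.sample(train, k)
    if strategy == "SS":  # rank the full set by cosine similarity
        q = encoder.encode(question, normalize_embeddings=True)
        return [train[i] for i in cosine_top_k(q, train_vecs, k)]
    # STR / STSS: restrict the pool to the question's structural type first.
    s = classify_structure(question)
    pool = [i for i, ex in enumerate(train) if ex["structure"] == s]
    if strategy == "STR":  # random within the structural type
        return [train[i] for i in random.sample(pool, k)]
    q = encoder.encode(question, normalize_embeddings=True)  # STSS
    idx = cosine_top_k(q, train_vecs[pool], k)
    return [train[pool[i]] for i in idx]
```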
4. Experiments
4.1. Experimental Results Without Fine-Tuning
4.2. Experimental Results with Fine-Tuning
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References



**Table 1.** Experimental results without fine-tuning.

| Method | Setting | BLEU Score | F1 Score |
|---|---|---|---|
| FR | 0-shot | 0.0529 | 0.1566 |
| FR | 1-shot | 0.1219 | 0.2372 |
| FR | 3-shot | 0.1496 | 0.2423 |
| SS | 1-shot | 0.2382 | 0.3563 |
| SS | 3-shot | 0.3777 | 0.4913 |
| STR | 1-shot | 0.1346 | 0.2505 |
| STR | 3-shot | 0.2227 | 0.3296 |
| STSS | 1-shot | 0.2329 | 0.3497 |
| STSS | 3-shot | 0.3980 | 0.5120 |
**Table 2.** Experimental results with fine-tuning.

| Method | Setting | BLEU Score | F1 Score |
|---|---|---|---|
| FR | 0-shot | 0.8748 | 0.9158 |
| FR | 1-shot | 0.8816 | 0.9188 |
| FR | 3-shot | 0.8807 | 0.9198 |
| SS | 1-shot | 0.8766 | 0.9177 |
| SS | 3-shot | 0.8860 | 0.9224 |
| STR | 1-shot | 0.8765 | 0.9175 |
| STR | 3-shot | 0.8756 | 0.9156 |
| STSS | 1-shot | 0.8763 | 0.9178 |
| STSS | 3-shot | 0.8783 | 0.9194 |
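Both tables report BLEU and F1 between generated and gold SPARQL queries. A hedged sketch of one common way to compute such scores over token sequences, assuming sentence-level BLEU with smoothing (NLTK) and token-overlap F1; the paper's exact metric definitions may differ:

```python
# Hedged sketch of BLEU and F1 over SPARQL token sequences. Assumes NLTK
# sentence-level BLEU and token-overlap F1; the paper may compute both
# metrics differently (e.g., corpus BLEU or answer-level F1).
from collections import Counter
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu(gold: str, pred: str) -> float:
    # Smoothing avoids zero scores when short queries miss higher-order n-grams.
    smooth = SmoothingFunction().method1
    return sentence_bleu([gold.split()], pred.split(), smoothing_function=smooth)

def token_f1(gold: str, pred: str) -> float:
    g, p = Counter(gold.split()), Counter(pred.split())
    overlap = sum((g & p).values())  # per-token overlap count
    if overlap == 0:
        return 0.0
    precision = overlap / sum(p.values())
    recall = overlap / sum(g.values())
    return 2 * precision * recall / (precision + recall)

# Example: one wrong token out of eight yields a token-level F1 of 0.875.
print(token_f1("SELECT ?x WHERE { ?x a dbo:City }",
               "SELECT ?x WHERE { ?x a dbo:Town }"))
```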

