Editorial

Techniques and Applications of Natural Language Processing

by Rajvardhan Patil 1,* and Venkat Gudivada 2
1 School of Computing, Grand Valley State University, Allendale, MI 49401, USA
2 Department of Computer Science, East Carolina University, Greenville, NC 27858, USA
* Author to whom correspondence should be addressed.
Appl. Sci. 2026, 16(4), 1726; https://doi.org/10.3390/app16041726
Submission received: 9 January 2026 / Accepted: 4 February 2026 / Published: 9 February 2026
(This article belongs to the Special Issue Techniques and Applications of Natural Language Processing)

1. Introduction

Natural language processing (NLP) has undergone rapid and transformative progress in recent years, driven largely by advances in deep learning and the emergence of large language models (LLMs). These models have enabled substantial improvements in language understanding and generation, supporting applications such as conversational systems, question answering, information retrieval, summarization, and decision support [1]. As a result, NLP technologies are increasingly being deployed in real-world, high-impact domains.
Despite these advances, significant challenges remain. Issues such as hallucination, bias, scalability, privacy preservation, multilingual and low-resource language support, and ethical deployment continue to limit the reliability and trustworthiness of NLP systems [2]. Addressing these challenges requires not only improvements in model architectures but also careful consideration of domain adaptation, evaluation rigor, human–AI interaction, and responsible AI practices.
In this context, the Special Issue “Techniques and Applications of Natural Language Processing” was launched to present recent advances in both foundational NLP methods and applied systems. This Special Issue brings together eight peer-reviewed contributions that collectively highlight current research trends, methodological innovations, and practical applications across diverse languages and domains. The included articles reflect the growing emphasis on robustness, efficiency, multilinguality, and real-world applicability in modern NLP research.

2. An Overview of Published Articles

This Special Issue comprises eight contributions that address complementary aspects of NLP research and applications.
Contribution (1), by Padilla Cuevas et al., presents MédicoBERT, a domain-specific language model for Spanish medical NLP tasks. The study demonstrates how systematic hyperparameter optimization can improve question-answering performance, highlighting the importance of tailored language models for healthcare applications.
Contribution (2), by Um and Kim, investigates performance enhancement in transformer-based language models through the integration of neural topic attention. The proposed approach enriches contextual representations and shows improved performance across NLP tasks, illustrating how architectural modifications can complement standard transformer designs.
Contribution (3), by Albahli, introduces an advanced framework for Arabic named entity recognition that effectively addresses morphological richness and nested entities. The work contributes to NLP research for morphologically complex languages and demonstrates improved entity recognition accuracy in Arabic text.
Contribution (4), by Patil et al., analyzes the performance of the LLaMA3 model on classification tasks using parameter-efficient fine-tuning techniques, including LoRA and QLoRA. The study provides insights into accuracy–efficiency trade-offs when adapting large language models for downstream tasks.
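To make the idea behind parameter-efficient fine-tuning concrete, the sketch below implements the core LoRA mechanism (a frozen weight matrix plus a trainable low-rank update) in plain NumPy. It is an illustrative toy, not code from the study: the shapes, names, and scaling follow the common LoRA formulation, and the dimensions are chosen only for demonstration.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """Linear layer with frozen weight W plus a trainable low-rank
    update (B @ A), scaled by alpha / rank, as in LoRA."""
    rank = A.shape[0]
    return x @ W.T + (alpha / rank) * (x @ A.T @ B.T)

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 4, 2
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

x = rng.standard_normal((3, d_in))
# With B initialized to zero, the adapted layer exactly reproduces the
# base layer, so fine-tuning starts from the pretrained behavior.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)
# Only A and B (24 parameters here) are trained, versus 32 in W itself;
# at realistic model sizes this gap is what makes LoRA and QLoRA cheap.
```

QLoRA extends this idea by additionally quantizing the frozen weights (e.g., to 4 bits) while keeping the low-rank adapters in higher precision, which is where the accuracy–efficiency trade-offs examined in the study arise.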
Contribution (5), by Viveros-Muñoz et al., explores whether the grammatical structure of prompts influences the responses generated by generative AI systems. Through an exploratory analysis in Spanish, the authors demonstrate that prompt formulation can significantly affect model outputs, underscoring the importance of prompt engineering.
Contribution (6), by Carrasco-Sáez et al., examines higher education students’ prompting techniques and their impact on ChatGPT’s performance. The findings highlight how user expertise and prompting strategies shape the effectiveness of human–AI interaction in educational settings.
Contribution (7), by Geng et al., focuses on speech recognition for the low-resource Tongan language. The authors propose a transfer learning approach based on layer-wise fine-tuning and lexicon parameter enhancement, achieving improved recognition performance and contributing to multilingual and low-resource NLP research.
Contribution (8), by Pan et al., investigates the use of LLMs for summarizing Spanish-language news articles. The study evaluates coherence and quality in generated summaries, demonstrating the potential and limitations of LLM-based summarization in real-world information processing scenarios.

3. Conclusions

The contributions presented in this Special Issue collectively illustrate the evolving landscape of NLP research, emphasizing both methodological innovation and practical deployment.

3.1. Key Outcomes and Advances

Across the eight articles, several key themes emerge. First, domain adaptation and parameter-efficient fine-tuning are shown to be effective strategies for improving model performance while managing computational costs. Second, architectural enhancements and prompt engineering play a critical role in shaping model behavior and output quality. Third, multilingual and low-resource language support remains a central research focus, with several contributions addressing Spanish, Arabic, and Tongan language processing. Finally, the growing importance of human–AI interaction and evaluation rigor is evident in studies examining prompting strategies and educational use cases.

3.2. Future Directions and Open Challenges

While the works in this Special Issue demonstrate notable progress, several challenges remain. Future research must continue to prioritize efficiency, interpretability, robustness, and responsible AI practices to ensure trustworthy deployment in real-world settings [3], particularly in sensitive and high-impact domains. Hybrid approaches that integrate learning-based models with external knowledge sources and structured representations represent a promising direction. Additionally, sustained efforts are needed to support low-resource languages and multilingual settings and to foster collaboration between NLP researchers and domain experts. Addressing these challenges will be essential for ensuring that NLP technologies are reliable, trustworthy, and socially beneficial.

Author Contributions

Conceptualization, R.P. and V.G.; writing—original draft preparation, R.P.; writing—review and editing, R.P. and V.G.; supervision and Special Issue coordination, R.P. and V.G. All authors have read and agreed to the published version of the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

List of Contributions

  • Padilla Cuevas, J.; Reyes-Ortiz, J.; Cuevas-Rasgado, A.; Mora-Gutiérrez, R.; Bravo, M. MédicoBERT: A Medical Language Model for Spanish Natural Language Processing Tasks with a Question-Answering Application Using Hyperparameter Optimization. Appl. Sci. 2024, 14, 7031. https://doi.org/10.3390/app14167031.
  • Um, T.; Kim, N. A Study on Performance Enhancement by Integrating Neural Topic Attention with Transformer-Based Language Model. Appl. Sci. 2024, 14, 7898. https://doi.org/10.3390/app14177898.
  • Albahli, S. An Advanced Natural Language Processing Framework for Arabic Named Entity Recognition: A Novel Approach to Handling Morphological Richness and Nested Entities. Appl. Sci. 2025, 15, 3073. https://doi.org/10.3390/app15063073.
  • Patil, R.; Khot, P.; Gudivada, V. Analyzing LLAMA3 Performance on Classification Task Using LoRA and QLoRA Techniques. Appl. Sci. 2025, 15, 3087. https://doi.org/10.3390/app15063087.
  • Viveros-Muñoz, R.; Carrasco-Sáez, J.; Contreras-Saavedra, C.; San-Martín-Quiroga, S. Does the Grammatical Structure of Prompts Influence the Responses of Generative Artificial Intelligence? Appl. Sci. 2025, 15, 3882. https://doi.org/10.3390/app15073882.
  • Carrasco-Sáez, J.; Contreras-Saavedra, C.; San-Martín-Quiroga, S.; Viveros-Muñoz, R. Analyzing Higher Education Students’ Prompting Techniques and Their Impact on ChatGPT’s Performance. Appl. Sci. 2025, 15, 7651. https://doi.org/10.3390/app15147651.
  • Geng, J.; Jia, D.; Li, Z.; He, Z.; Wu, N.; Zhang, W.; Cui, R. Tongan Speech Recognition Based on Layer-Wise Fine-Tuning Transfer Learning and Lexicon Parameter Enhancement. Appl. Sci. 2025, 15, 11412. https://doi.org/10.3390/app152111412.
  • Pan, R.; Bernal-Beltrán, T.; Salas-Zárate, M.; Paredes-Valverde, M.; García-Díaz, J.; Valencia-García, R. Can LLMs Generate Coherent Summaries? Leveraging LLM Summarization for Spanish-Language News Articles. Appl. Sci. 2025, 15, 11834. https://doi.org/10.3390/app152111834.

References

  1. Brown, T.B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language Models Are Few-Shot Learners. Adv. Neural Inf. Process. Syst. 2020, 33, 1877–1901.
  2. Joshi, P.; Santy, S.; Budhiraja, A.; Bali, K.; Choudhury, M. The State and Fate of Linguistic Diversity and Inclusion in the NLP World. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, 5–10 July 2020; pp. 6282–6293.
  3. Mitchell, M.; Wu, S.; Zaldivar, A.; Barnes, P.; Vasserman, L.; Hutchinson, B.; Spitzer, E.; Raji, I.D.; Gebru, T. Model Cards for Model Reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT*), Atlanta, GA, USA, 29–31 January 2019; pp. 220–229.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
