Machine Learning and Natural Language Processing

A special issue of Future Internet (ISSN 1999-5903). This special issue belongs to the section "Big Data and Augmented Intelligence".

Deadline for manuscript submissions: 31 December 2025

Special Issue Editors


Guest Editor
Department of Mathematical Sciences, Eastern New Mexico University, Portales, NM 88130, USA
Interests: recommender systems; applied machine learning; data mining; natural language processing

Guest Editor
Department of Mathematical Sciences, Eastern New Mexico University, Portales, NM 88130, USA
Interests: edge computing; IoT; deep learning; sentiment analysis

Guest Editor
Department of Computer Science, New Mexico State University, Las Cruces, NM 88001, USA
Interests: data mining; big data; applied machine learning

Special Issue Information

Dear Colleagues,

The Internet is evolving rapidly, driven by the demand for intelligent communication systems. At the core of this transformation is the synergy between Machine Learning (ML) and Natural Language Processing (NLP). NLP enables computers to understand human language, while ML enhances its accuracy and efficiency through data-driven learning. This combination powers advanced applications such as language translation, sentiment analysis, and text generation. This Special Issue of Future Internet explores how ML-driven NLP is shaping next-generation network and communication technologies, enabling more efficient, secure, and user-friendly experiences in an increasingly interconnected world.

This Special Issue invites high-quality original research papers, case studies, and surveys addressing the following and related topics:

  • Deep Learning for real-time language translation for enhanced communication;
  • Privacy-preserving NLP for secure communication channels;
  • NLP-driven network traffic analysis for anomaly and threat detection;
  • Federated learning for decentralized language model training in networked environments;
  • Sentiment analysis in social media networks for crisis response and public safety;
  • Energy-efficient NLP algorithms for edge computing devices in IoT networks;
  • Cross-lingual models for global IoT communication systems and interoperability;
  • AI-powered chatbots and virtual assistants in telecommunication customer service and network management;
  • Ethical challenges in network-deployed NLP systems;
  • Transformers and Large Language Models (LLMs) for digital communication and network optimization;
  • Automated text summarization for information retrieval and knowledge management in networks;
  • NLP for personalized learning and feedback in online education environments;
  • NLP for network log analysis and automated troubleshooting;
  • Security and privacy in ML-based NLP applications for networked systems.

Dr. Edgar Ceh Varela
Dr. Sarbagya Shakya
Prof. Dr. Huiping Cao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • deep learning
  • natural language processing
  • intelligent networks
  • network communication

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

21 pages, 1078 KiB  
Article
Mitigating Quantization Errors Due to Activation Spikes in Gated Linear Unit-Based Large Language Models
by Jaewoo Yang, Hayun Kim, Junyung Ji and Younghoon Kim
Future Internet 2025, 17(4), 185; https://doi.org/10.3390/fi17040185 - 21 Apr 2025
Abstract
Modern large language models (LLMs) achieve state-of-the-art performance through architectural advancements but require high computational costs for inference. Post-training quantization is a widely adopted approach to reduce these costs by quantizing weights and activations to lower precision, such as INT8. However, we identify a critical challenge in activation quantization for GLU (Gated Linear Unit) variants, which are commonly used in the feed-forward networks of modern LLMs like the LLaMA family. Specifically, severe local quantization errors arise due to excessively large activation magnitudes, which we refer to as activation spikes, leading to significant degradation in model performance. Our analysis reveals a systematic pattern of these spikes: they predominantly occur in the FFN (feed-forward network) layers at the early and late layers of the model and are concentrated on a small subset of tokens rather than being uniformly distributed across a token sequence. To mitigate this issue, we propose two empirical methods: Quantization-free Module (QFeM) and Quantization-free Prefix (QFeP), which isolate activation spikes during quantization. Extensive experiments demonstrated that our methods effectively improve activation quantization, particularly in coarse-grained quantization schemes, enhancing the performance of LLMs with GLU variants and addressing the limitations of existing quantization techniques. The code for implementing our methods and reproducing the experiments is publicly available in our GitHub repository.
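The core problem the abstract describes, where a single outsized activation inflates the quantization scale and destroys precision for every other value, can be illustrated numerically. The sketch below is an editorial illustration of symmetric per-tensor INT8 quantization, not the paper's QFeM/QFeP methods; the spike magnitude and tensor values are invented for demonstration.

```python
import numpy as np

def quantize_int8(x, scale):
    """Symmetric per-tensor INT8 quantization: scale, round, clamp, dequantize."""
    q = np.clip(np.round(x / scale), -127, 127)
    return q * scale

# "Normal" activations plus one activation spike: a single value with an
# excessively large magnitude, as the abstract describes for certain tokens.
normal = np.linspace(-1.0, 1.0, 100)
spiked = np.concatenate([normal, [500.0]])

# The per-tensor scale is set by the maximum absolute value, so the spike
# inflates the scale and most normal values round to zero.
scale_spiked = np.abs(spiked).max() / 127
err_spiked = np.abs(quantize_int8(normal, scale_spiked) - normal).mean()

# If the spike is isolated from quantization (the motivation behind the
# paper's quantization-free handling), the scale fits the normal range.
scale_clean = np.abs(normal).max() / 127
err_clean = np.abs(quantize_int8(normal, scale_clean) - normal).mean()

print(f"mean error with spike in range: {err_spiked:.4f}")
print(f"mean error with spike isolated: {err_clean:.4f}")
```

With the spike included, the scale grows by roughly 500x and the mean reconstruction error on the normal values is orders of magnitude larger, which is the "severe local quantization error" the paper targets.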
(This article belongs to the Special Issue Machine Learning and Natural Language Processing)
