

Cutting-Edge Neural Networks for NLP (Natural Language Processing)

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 30 May 2025 | Viewed by 1267

Special Issue Editor


Guest Editor
School of Information Technology and Electronic Engineering, The University of Queensland, Brisbane, QLD 4072, Australia
Interests: big data; natural language processing; knowledge graph; Internet of Things

Special Issue Information

Dear Colleagues,

This Special Issue on "Cutting-Edge Neural Networks for NLP (Natural Language Processing)" aims to explore the latest advancements in neural network architectures and their transformative impact on the field of NLP. The scope includes, but is not limited to, the following topics:

1) Advanced Architectures: Examination of state-of-the-art neural network models such as transformers, BERT, GPT, and their variants, as well as proposals for new architectures in areas such as self-supervised or environment-aware continual learning; machine consciousness learning, including emotion and sentiment representations; and first-principles learning. This includes innovations in model design, training techniques, and performance enhancements.

2) Application and Performance: Analysis of how these cutting-edge models improve performance across various NLP tasks like machine translation, sentiment analysis, text summarization, and personalized question answering. Studies highlighting benchmarks and real-world applications are encouraged.

3) Multimodal and Multilingual Approaches: Exploration of models that integrate text with other data types (e.g., images, audio, and time series) and those that handle multiple languages, emphasizing improvements in comprehension and generation across diverse languages and contexts.

4) Ethical and Practical Considerations: Discussions on the ethical implications, biases, and societal impact of deploying advanced neural networks in NLP. This includes addressing challenges related to fairness, truth discovery, accountability, and transparency.

5) Future Directions and Emerging Trends: Insights into upcoming trends in NLP, potential breakthroughs, and the future trajectory of neural networks in processing natural language.

This Special Issue seeks contributions from researchers, practitioners, and industry experts to present novel research findings, case studies, and reviews that push the boundaries of current NLP capabilities.

Prof. Dr. Xue Li
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, use the online submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • advanced architectures
  • application and performance
  • multimodal and multilingual approaches
  • ethical and practical considerations

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the journal's website.

Published Papers (1 paper)


Research

18 pages, 2087 KiB  
Article
Meta-Data-Guided Robust Deep Neural Network Classification with Noisy Label
by Jie Lu, Yufeng Wang, Aiju Shi, Jianhua Ma and Qun Jin
Appl. Sci. 2025, 15(4), 2080; https://doi.org/10.3390/app15042080 - 16 Feb 2025
Viewed by 586
Abstract
Deep neural network (DNN)-based classifiers have been successfully applied in many fields. Unfortunately, the labels of real-world training data are commonly noisy, i.e., the labels of a large percentage of training samples are wrong, which degrades the performance of a trained DNN classifier during inference. Training a DNN classifier that is robust to noisy labels is therefore a practical challenge. To address this issue, our work designs an effective architecture for training a robust DNN classifier with noisy labels, named a cross dual-branch network guided by meta-data on a single side (CoNet-MS), in which a small amount of clean data, i.e., meta-data, is used to guide the training of the DNN classifier. Specifically, the contributions of our work are threefold. First, based on the small-loss principle, each branch, using the base classifier as a neural network module, infers a subset of samples with pseudo-clean labels, which are then used for training the other branch through a cross structure that alleviates the cumulative impact of mis-inference. Second, a meta-guided module is designed and inserted into a single branch, e.g., the upper branch, which dynamically adjusts, for each training sample, the ratio in the loss function between the observed label and the pseudo-label output by the classifier. The asymmetric dual-branch design makes the two classifiers diverge, which helps them filter different types of noisy labels and avoid the confirmation bias of self-training. Finally, thorough experiments demonstrate that a classifier trained with our proposal is more robust: across multiple datasets, noise ratios, and noise types, CoNet-MS outperforms other learning-with-noisy-labels (LNL) classifiers, including the state-of-the-art meta-data-based LNL classifier.
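The three mechanisms named in the abstract (small-loss filtering, cross selection between the two branches, and mixing observed labels with classifier pseudo-labels) can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation: the function names, the fixed mixing weight `alpha`, and the `keep_ratio` parameter are assumptions; in CoNet-MS the mixing ratio would be set per sample by the meta-guided module trained on clean meta-data.

```python
import numpy as np

def small_loss_selection(losses, keep_ratio):
    """Small-loss principle: samples a network fits with low loss are
    more likely to carry clean labels. Returns the indices of the
    keep_ratio fraction of samples with the smallest losses."""
    k = max(1, int(len(losses) * keep_ratio))
    return np.argsort(losses)[:k]

def cross_select(losses_a, losses_b, keep_ratio):
    """Cross structure: each branch selects pseudo-clean samples FOR
    the other branch, so a branch's selection errors do not feed back
    into its own training (alleviating cumulative mis-inference)."""
    train_b = small_loss_selection(losses_a, keep_ratio)  # A selects for B
    train_a = small_loss_selection(losses_b, keep_ratio)  # B selects for A
    return train_a, train_b

def meta_guided_target(observed_onehot, pseudo_probs, alpha):
    """Convex mix of the observed (possibly noisy) one-hot label and
    the classifier's pseudo-label distribution; here alpha is a fixed
    scalar for illustration only."""
    return alpha * observed_onehot + (1.0 - alpha) * pseudo_probs

# Toy usage: branch A's losses pick training samples for branch B.
losses_a = np.array([0.5, 0.1, 0.9, 0.2])
losses_b = np.array([0.3, 1.2, 0.1, 0.8])
idx_a, idx_b = cross_select(losses_a, losses_b, keep_ratio=0.5)
target = meta_guided_target(np.array([1.0, 0.0]), np.array([0.2, 0.8]), 0.5)
```

The asymmetry in the paper comes from inserting the meta-guided module into only one branch, so the two classifiers stay diverse and filter different noise types.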
(This article belongs to the Special Issue Cutting-Edge Neural Networks for NLP (Natural Language Processing))
