Application of Neural Networks and Deep Learning

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "E1: Mathematics and Computer Science".

Deadline for manuscript submissions: 30 April 2025 | Viewed by 2552

Special Issue Editors


Dr. Yundi Chu
Guest Editor
College of Artificial Intelligence and Automation, Hohai University, Nanjing 210098, China
Interests: neural networks; deep learning; reinforcement learning

Dr. Shixi Hou
Guest Editor
College of Artificial Intelligence and Automation, Hohai University, Nanjing 210098, China
Interests: intelligent information processing and intelligent control; advanced control theory and application

Special Issue Information

Dear Colleagues,

A neural network is a computational model inspired by the structure and function of biological nervous systems, capable of recognizing patterns and regularities through learning. Deep learning is a neural-network-based machine learning technique that uses multiple layers of nonlinear transformations to learn high-level representations for tasks such as classification, regression, and generation. In recent years, neural networks and deep learning have found widespread application in both academia and industry.

This Special Issue, entitled “Application of Neural Networks and Deep Learning”, presents the latest research and developments in neural networks and deep learning for a range of engineering, science, and management applications. It aims to serve as a forum for exchanging ideas and knowledge among researchers and practitioners worldwide and to inspire new research directions and applications in this field. Both original research articles and review papers are welcome. Topics of interest include, but are not limited to, the following:

  • Fuzzy neural network control;
  • Intelligent control;
  • Reinforcement learning;
  • Deep learning.

Dr. Yundi Chu
Dr. Shixi Hou
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • fuzzy neural network control
  • intelligent control
  • reinforcement learning
  • deep learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

30 pages, 12978 KiB  
Article
A Framework for Breast Cancer Classification with Deep Features and Modified Grey Wolf Optimization
by Fathimathul Rajeena P.P and Sara Tehsin
Mathematics 2025, 13(8), 1236; https://doi.org/10.3390/math13081236 - 9 Apr 2025
Viewed by 301
Abstract
Breast cancer is the most common cancer in women, with 287,800 new cases and 43,200 deaths in 2022 across the United States. Early mammographic image analysis and processing reduce mortality and enable efficient treatment. Several deep-learning-based mammography classification methods have been developed, but existing models generally perform poorly due to low-contrast images and irrelevant information in publicly available breast cancer datasets. Pre-trained convolutional neural network models trained on generic datasets tend to extract irrelevant features when applied to domain-specific classification tasks, highlighting the need for a feature selection mechanism that transforms high-dimensional data into a more discriminative feature space. This work introduces an effective multi-step pipeline to overcome these limitations. In preprocessing, mammographic images are haze-reduced using an adaptive transformation, normalized using a cropping algorithm, and balanced using rotation, flipping, and noise addition. A 32-layer convolutional neural model inspired by YOLO, U-Net, and ResNet is designed to extract highly discriminative features for breast cancer classification. A modified Grey Wolf Optimization algorithm with three significant adjustments improves feature selection and redundancy removal over the original approach. The robustness and efficacy of the proposed model were validated by its consistently high performance across multiple benchmark mammogram datasets, demonstrating strong generalization for both binary and multiclass breast cancer classification.
(This article belongs to the Special Issue Application of Neural Networks and Deep Learning)
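
As a rough illustration of the wrapper-style feature selection the abstract describes, the sketch below implements a plain binary Grey Wolf Optimizer over a feature mask. The k-NN fitness function, sigmoid transfer, and all hyper-parameters are illustrative assumptions; this is not the authors' modified algorithm or their deep-feature pipeline.

```python
# Minimal sketch of binary Grey Wolf Optimization for feature selection.
# NOT the paper's modified algorithm; fitness function, transfer function,
# and hyper-parameters are illustrative assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y):
    """Score a feature subset: cross-validated accuracy, lightly penalized by subset size."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask == 1], y, cv=3).mean()
    return 0.99 * acc + 0.01 * (1 - mask.mean())

def binary_gwo(X, y, n_wolves=10, n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    wolves = rng.integers(0, 2, size=(n_wolves, dim))       # random 0/1 feature masks
    scores = np.array([fitness(w, X, y) for w in wolves])
    for t in range(n_iter):
        a = 2 - 2 * t / n_iter                               # linearly decreasing coefficient
        order = np.argsort(scores)[::-1]
        alpha, beta, delta = wolves[order[:3]]               # three best wolves lead the pack
        for i in range(n_wolves):
            pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])
                pos += leader - A * D                        # candidate pulled toward this leader
            pos /= 3                                         # average of the three guided moves
            prob = 1.0 / (1.0 + np.exp(-10 * (pos - 0.5)))   # sigmoid transfer to (0, 1)
            wolves[i] = (rng.random(dim) < prob).astype(int)
            scores[i] = fitness(wolves[i], X, y)
    best = np.argmax(scores)
    return wolves[best], scores[best]

# Example with made-up data:
# X = np.random.default_rng(1).normal(size=(200, 30))
# y = np.random.default_rng(2).integers(0, 2, 200)
# mask, score = binary_gwo(X, y)
```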

16 pages, 3654 KiB  
Article
Deep Learning-Based In Situ Micrograph Synthesis and Augmentation for Crystallization Process Image Analysis
by Muyang Li, Tuo Yao, Jian Liu, Ziyi Liu, Zhenguo Gao and Junbo Gong
Mathematics 2024, 12(22), 3448; https://doi.org/10.3390/math12223448 - 5 Nov 2024
Viewed by 1184
Abstract
Deep learning-based in situ imaging and analysis for crystallization processes are essential for optimizing product quality, reducing experimental costs through real-time monitoring, and controlling the process. However, training accurate models requires large, high-quality annotated datasets, which are time consuming to build. Therefore, we propose a novel methodology that applies image synthesis neural networks to generate virtual, information-rich images, enabling efficient and rapid dataset expansion while reducing annotation costs. Experiments were conducted on the L-alanine crystallization process to obtain process images and to validate the proposed workflow. Aided by interpolation augmentation and data warping augmentation to enhance data richness, the proposed method used only 25% of the training annotations yet consistently segmented crystallization process images at a level comparable to models trained with 100% of the annotations, achieving an average precision of nearly 98%. Additionally, analysis based on the Kullback–Leibler divergence showed that the proposed method performs well in extracting in situ information on aspect ratios and crystal size distributions during the crystallization process. Its ability to leverage expert labels with four-fold greater efficiency holds great potential for advancing various applications in crystallization processes.
(This article belongs to the Special Issue Application of Neural Networks and Deep Learning)
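
For readers curious about the Kullback–Leibler analysis mentioned in the abstract, the following is a minimal sketch of how the divergence between real and synthetic crystal-size distributions could be computed from histogrammed measurements. The lognormal sample data, bin count, and smoothing constant are assumptions for illustration, not values or code from the paper.

```python
# Minimal sketch: KL divergence between histogrammed crystal-size distributions,
# as one way to check whether synthetic micrograph statistics match real ones.
# All numbers below are made up for illustration.
import numpy as np
from scipy.stats import entropy

def size_kl_divergence(real_sizes, synth_sizes, bins=50, eps=1e-9):
    """KL(real || synthetic) between crystal-size histograms over a shared range."""
    lo = min(real_sizes.min(), synth_sizes.min())
    hi = max(real_sizes.max(), synth_sizes.max())
    p, _ = np.histogram(real_sizes, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(synth_sizes, bins=bins, range=(lo, hi), density=True)
    p, q = p + eps, q + eps                    # avoid zero bins
    return entropy(p, q)                       # scipy's entropy(p, q) computes KL(p || q)

# Example with hypothetical size measurements (micrometres):
real = np.random.default_rng(0).lognormal(mean=3.0, sigma=0.4, size=2000)
synthetic = np.random.default_rng(1).lognormal(mean=3.05, sigma=0.42, size=2000)
print(f"KL divergence: {size_kl_divergence(real, synthetic):.4f}")
```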

17 pages, 6806 KiB  
Article
Siamese-Derived Attention Dense Network for Seismic Impedance Inversion
by Jiang Wu
Mathematics 2024, 12(18), 2824; https://doi.org/10.3390/math12182824 - 12 Sep 2024
Viewed by 737
Abstract
Seismic impedance inversion is essential for high-resolution stratigraphic analysis, so improving the accuracy of the inversion model while maintaining its efficiency is crucial for practical implementation. Recently, deep learning-based approaches have proven superior at capturing complex relationships between different data domains. In this paper, a Siamese-derived attention-dense network (SADN) is proposed, which incorporates a prediction module and a Siamese module. In the prediction module, DenseNet serves as the backbone, and a channel attention mechanism is integrated into it to increase the weighting of features highly correlated with seismic impedance inversion; a bottleneck structure is employed to reduce computational cost. In the Siamese module, a weight-shared DenseNet computes the distribution similarity between the predicted and actual impedance, effectively regularizing the similarity between the inverted seismic impedance and the recorded ground truth. Qualitative and quantitative results demonstrate the advantage of the SADN over commonly used networks for seismic impedance inversion.
(This article belongs to the Special Issue Application of Neural Networks and Deep Learning)
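
To make the attention and bottleneck ideas in the abstract concrete, the snippet below is a minimal sketch of a squeeze-and-excitation style channel-attention block inside a bottlenecked, densely connected 1D layer. The layer sizes, growth rate, and reduction ratio are assumptions for illustration, not the SADN's actual architecture.

```python
# Minimal sketch: channel attention inside a bottlenecked dense layer (PyTorch).
# Shapes and hyper-parameters are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool1d(1)           # squeeze: global average over the trace
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                             # excitation: per-channel weights in (0, 1)
        )

    def forward(self, x):                             # x: (batch, channels, length)
        w = self.fc(self.pool(x).squeeze(-1))         # (batch, channels)
        return x * w.unsqueeze(-1)                    # reweight channels

class DenseLayerWithAttention(nn.Module):
    def __init__(self, in_channels, growth=16, bottleneck=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(in_channels, bottleneck, kernel_size=1),   # bottleneck cuts compute
            nn.ReLU(inplace=True),
            nn.Conv1d(bottleneck, growth, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.attn = ChannelAttention(growth)

    def forward(self, x):
        out = self.attn(self.body(x))
        return torch.cat([x, out], dim=1)             # dense connectivity: concatenate features

# Usage: a batch of seismic traces shaped (batch, 1, samples)
layer = DenseLayerWithAttention(in_channels=1)
print(layer(torch.randn(4, 1, 256)).shape)            # torch.Size([4, 17, 256])
```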
