Advance in Neural Networks and Visual Learning

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "E1: Mathematics and Computer Science".

Deadline for manuscript submissions: 31 July 2026 | Viewed by 978

Special Issue Editors

Guest Editor
School of Information Science and Engineering, Shandong University, Qingdao 266237, China
Interests: image processing; computer vision; machine learning; artificial intelligence

Guest Editor
College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, China
Interests: image processing and computer vision; machine learning and artificial intelligence

Guest Editor
School of Information Science and Engineering, Shandong University, Qingdao 266237, China
Interests: image processing; pattern recognition and machine vision

Special Issue Information

Dear Colleagues,

This Special Issue focuses on advances in neural networks and visual learning, highlighting recent progress in visual analysis driven by deep learning. Visual analysis, spanning image and video processing systems, underpins fields such as the Internet of Things, autonomous navigation, intelligent robots, and smart healthcare. However, manual image and video analysis can be time-consuming, costly, and prone to human error. With the emergence of artificial neural networks, many of these challenges can be addressed by automating the tasks involved in visual learning and analysis.

Therefore, this Special Issue, entitled “Advance in Neural Networks and Visual Learning”, invites leading researchers and developers from both academia and industry to discuss and present their latest research and innovations on the theory, algorithms, and system technologies that could substantially enhance existing artificial neural networks for visual learning. We encourage prospective authors to submit high-quality research related to this subject, including new theoretical methods, innovative applications, and system prototypes.

Dr. Lei Chen
Dr. Peng Zhang
Prof. Dr. Xianye Ben
Dr. Mingqiang Yang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • computer vision
  • image processing
  • pattern recognition
  • machine learning
  • deep learning
  • artificial neural networks
  • natural language processing
  • robotics

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research


19 pages, 3910 KB  
Article
Defect Detection Algorithm of Galvanized Sheet Based on S-C-B-YOLO
by Yicheng Liu, Gaoxia Fan, Hanquan Zhang and Dong Xiao
Mathematics 2026, 14(1), 110; https://doi.org/10.3390/math14010110 - 28 Dec 2025
Viewed by 258
Abstract
Galvanized steel sheets are vital anti-corrosion materials, yet their surface quality is prone to defects that impact performance. Manual inspection is inefficient, while conventional machine vision struggles with complex, small-scale defects in industrial settings. Although deep learning offers promising solutions, standard object detection models like YOLOv5 (which is short for ‘You Only Look Once’) exhibit limitations in handling the subtle textures, scale variations, and reflective surfaces characteristic of galvanized sheet defects. To address these challenges, this paper proposes S-C-B-YOLO, an enhanced detection model based on YOLOv5. First, a Squeeze-and-Excitation (SE) attention mechanism is integrated into the deep layers of the backbone network to adaptively recalibrate channel-wise features, improving focus on defect-relevant information. Second, a Transformer block is combined with a C3 module to form a C3TR module, enhancing the model’s ability to capture global contextual relationships for irregular defects. Finally, the original path aggregation network (PANet) is replaced with a bidirectional feature pyramid network (Bi-FPN) to facilitate more efficient multi-scale feature fusion, significantly boosting sensitivity to small defects. Extensive experiments on a dedicated galvanized sheet defect dataset show that S-C-B-YOLO achieves a mean average precision (mAP@0.5) of 92.6% and an inference speed of 62 FPS, outperforming several baseline models including YOLOv3, YOLOv7, and Faster R-CNN. The proposed model demonstrates a favorable balance between accuracy and speed, offering a robust and practical solution for automated, real-time defect inspection in galvanized steel production.
(This article belongs to the Special Issue Advance in Neural Networks and Visual Learning)
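
For readers unfamiliar with the channel-attention component named in the abstract, the sketch below shows a generic Squeeze-and-Excitation (SE) block in PyTorch. It is an illustrative reimplementation of the standard SE design, not the authors' code; the channel count, reduction ratio, and placement within the YOLOv5 backbone are assumptions for demonstration.

```python
# Minimal sketch of a Squeeze-and-Excitation (SE) channel-attention block,
# the kind of module the abstract describes inserting into the deep layers
# of the YOLOv5 backbone. Sizes and the reduction ratio are assumptions.
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Recalibrates channel responses of a convolutional feature map."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)          # global spatial average pool
        self.excite = nn.Sequential(                    # bottleneck MLP over channels
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),                               # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.squeeze(x).view(b, c)                  # (B, C): one scalar per channel
        w = self.excite(w).view(b, c, 1, 1)             # channel attention weights
        return x * w                                    # reweight the original features


if __name__ == "__main__":
    feats = torch.randn(2, 256, 20, 20)                 # e.g. a deep backbone feature map
    print(SEBlock(256)(feats).shape)                    # torch.Size([2, 256, 20, 20])
```

The squeeze-excite-reweight pattern shown here is what lets a backbone emphasize defect-relevant channels at negligible extra computational cost.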

Review


40 pages, 5732 KB  
Review
From Context to Human: A Review of VLM Contextualization in the Recognition of Human States in Visual Data
by Corneliu Florea, Constantin-Bogdan Popescu, Andrei Racovițeanu, Andreea Nițu and Laura Florea
Mathematics 2026, 14(1), 175; https://doi.org/10.3390/math14010175 - 2 Jan 2026
Viewed by 429
Abstract
This paper presents a narrative review of the contextualization and contribution offered by vision–language models (VLMs) for human-centric understanding in images. Starting from the correlation between humans and their context (background) and by incorporating VLM-generated embeddings into recognition architectures, recent solutions have advanced the recognition of human actions, the detection and classification of violent behavior, and inference of human emotions from body posture and facial expression. While powerful and general, VLMs may also introduce biases that can be reflected in the overall performance. Unlike prior reviews that focus on a single task or generic image captioning, this review jointly examines multiple human-centric problems in VLM-based approaches. The study begins by describing the key elements of VLMs (including architectural foundations, pre-training techniques, and cross-modal fusion strategies) and explains why they are suitable for contextualization. In addition to highlighting the improvements brought by VLMs, it critically discusses their limitations (including human-related biases) and presents a mathematical perspective and strategies for mitigating them. This review aims to consolidate the technical landscape of VLM-based contextualization for human state recognition and detection. It aims to serve as a foundational reference for researchers seeking to control the power of language-guided VLMs in recognizing human states correlated with contextual cues.
(This article belongs to the Special Issue Advance in Neural Networks and Visual Learning)
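
As a concrete illustration of the kind of VLM-based contextualization the review surveys, the sketch below scores context-aware text prompts against an image with a CLIP-style vision–language model through Hugging Face Transformers. The checkpoint name, the prompts, and the placeholder image are assumptions chosen for demonstration and are not drawn from the reviewed papers.

```python
# Minimal sketch: zero-shot scoring of human-state prompts against an image
# with a CLIP-style VLM. Prompts couple a human state with its surrounding
# context (background), the pairing the review focuses on. All labels and the
# checkpoint are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompts = [
    "a photo of a person relaxing in a calm setting",
    "a photo of a person arguing in a tense crowd",
    "a photo of a person exercising outdoors",
]

image = Image.new("RGB", (224, 224))  # placeholder; load a real frame in practice

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarities; softmax turns them into a
# distribution over the candidate human-state descriptions.
probs = outputs.logits_per_image.softmax(dim=-1)
for prompt, p in zip(prompts, probs[0].tolist()):
    print(f"{p:.3f}  {prompt}")
```

In the architectures the review covers, embeddings like these are typically fused with a task-specific recognition head rather than used for zero-shot classification alone; the sketch only shows the language-guided similarity step.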