
AI-Based Computer Vision Sensors & Systems—2nd Edition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 30 June 2026 | Viewed by 1584

Special Issue Editors

Prof. Dr. Xuefeng Liang
Guest Editor
School of Artificial Intelligence, Xidian University, Xi'an, China
Interests: visual cognitive computing; computer vision; visual big data mining; intelligent algorithms

Dr. Guangyu Chen
Guest Editor Assistant
Research Institute of Electrical Communication, Tohoku University, Sendai, Miyagi, Japan
Interests: spatial mechanisms of human visual attention; size tuning; cognitive science; LLM for psychology; explainable human–AI interaction systems

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) in computer vision sensors and systems is a specialized field that encompasses both current and historical AI advancements, as well as their potential impact and future prospects within sensor technology and its applications. This Special Issue explores the innovative landscape of AI-based computer vision sensors and systems, emphasizing their transformative potential across a variety of applications. These technologies harness advanced imaging techniques to facilitate real-time analysis and intelligent decision-making. We invite researchers to submit original articles investigating the use of RGB cameras, depth cameras (e.g., LiDAR), and thermal cameras in conjunction with image processing units (GPUs, TPUs, FPGAs) and object detection frameworks (e.g., YOLO, SSD, Faster R-CNN) in areas such as environmental monitoring, healthcare imaging, autonomous navigation, and security systems. This Issue aims to highlight innovative methodologies that enhance object detection, gesture recognition, and real-time analytics, ultimately advancing the capabilities of computer vision.
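
As a concrete illustration of the tooling this call envisions, the sketch below runs a pretrained object detector on a single RGB frame. It assumes the open-source Ultralytics YOLO package and a hypothetical image file ("street.jpg"); it is a minimal example of the framework family named above, not a pipeline prescribed by this Special Issue.

```python
# Minimal sketch: off-the-shelf object detection on one RGB frame.
# Assumes the open-source `ultralytics` package (pip install ultralytics);
# "street.jpg" is a hypothetical input image.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")     # small pretrained YOLOv8 checkpoint
results = model("street.jpg")  # run inference on one image

for r in results:
    for box in r.boxes:
        label = model.names[int(box.cls)]                 # predicted class name
        print(label, float(box.conf), box.xyxy.tolist())  # confidence, corners
```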

Prof. Dr. Xuefeng Liang
Guest Editor

Dr. Guangyu Chen
Guest Editor Assistant

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • RGB cameras
  • depth cameras (LiDAR)
  • thermal cameras
  • image processing units (GPUs, TPUs, FPGAs)
  • YOLO (You Only Look Once)
  • gesture recognition systems
  • autonomous navigation systems
  • augmented reality (AR)
  • industrial automation
  • smart surveillance systems

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

18 pages, 5377 KB  
Article
M3ENet: A Multi-Modal Fusion Network for Efficient Micro-Expression Recognition
by Ke Zhao, Xuanyu Liu and Guangqian Yang
Sensors 2025, 25(20), 6276; https://doi.org/10.3390/s25206276 - 10 Oct 2025
Viewed by 413
Abstract
Micro-expression recognition (MER) aims to detect brief and subtle facial movements that reveal suppressed emotions, discerning authentic emotional responses in scenarios such as visitor experience analysis in museum settings. However, it remains a highly challenging task due to the fleeting duration, low intensity, and limited availability of annotated data. Most existing approaches rely solely on either appearance or motion cues, thereby restricting their ability to fully capture expressive information. To overcome these limitations, we propose a lightweight multi-modal fusion network, termed M3ENet, which integrates both motion and appearance cues through early-stage feature fusion. Specifically, our model extracts horizontal, vertical, and strain-based optical flow between the onset and apex frames, alongside RGB images from the onset, apex, and offset frames. These inputs are processed by two modality-specific subnetworks, whose features are fused to exploit complementary information for robust classification. To improve generalization in low-data regimes, we employ targeted data augmentation and adopt focal loss to mitigate class imbalance. Extensive experiments on five benchmark datasets, including CASME I, CASME II, CAS(ME)², SAMM, and MMEW, demonstrate that M3ENet achieves state-of-the-art performance with high efficiency. Ablation studies and Grad-CAM visualizations further confirm the effectiveness and interpretability of the proposed architecture.
(This article belongs to the Special Issue AI-Based Computer Vision Sensors & Systems—2nd Edition)
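
To make the early-stage, two-branch fusion idea in this abstract concrete, here is a minimal PyTorch sketch with one motion branch (optical-flow channels) and one appearance branch (stacked RGB frames), fused before classification and trained with a focal loss. All layer sizes, channel counts, the five-class head, and the focal-loss setting are illustrative assumptions, not the authors' configuration.

```python
# Minimal two-branch early-fusion sketch in the spirit of the abstract above.
# Layer sizes, channel counts, and the 5-class head are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def branch(in_ch: int) -> nn.Sequential:
    """Small modality-specific CNN producing a 64-d feature vector."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class FusionNet(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.motion = branch(3)      # horizontal/vertical/strain optical flow
        self.appearance = branch(9)  # RGB of onset, apex, offset frames stacked
        self.head = nn.Linear(64 + 64, num_classes)

    def forward(self, flow, rgb):
        # Early-stage fusion: concatenate branch features, then classify.
        fused = torch.cat([self.motion(flow), self.appearance(rgb)], dim=1)
        return self.head(fused)

def focal_loss(logits, target, gamma: float = 2.0):
    """Focal loss: down-weights easy examples to counter class imbalance."""
    log_pt = F.log_softmax(logits, dim=1).gather(1, target.unsqueeze(1)).squeeze(1)
    return (-(1 - log_pt.exp()) ** gamma * log_pt).mean()

# Toy forward/backward pass on random 64x64 inputs.
net = FusionNet()
flow, rgb = torch.randn(4, 3, 64, 64), torch.randn(4, 9, 64, 64)
focal_loss(net(flow, rgb), torch.tensor([0, 1, 2, 3])).backward()
```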

26 pages, 5861 KB  
Article
Robust Industrial Surface Defect Detection Using Statistical Feature Extraction and Capsule Network Architectures
by Azeddine Mjahad and Alfredo Rosado-Muñoz
Sensors 2025, 25(19), 6063; https://doi.org/10.3390/s25196063 - 2 Oct 2025
Viewed by 356
Abstract
Automated quality control is critical in modern manufacturing, especially for metallic cast components, where fast and accurate surface defect detection is required. This study evaluates classical machine learning (ML) algorithms using extracted statistical parameters, as well as deep learning (DL) architectures, including ResNet50, Capsule Networks, and a 3D Convolutional Neural Network (CNN3D) operating on 3D image inputs. On the original dataset, ML models with the selected parameters achieved high performance: Random Forest (RF) reached 99.4 ± 0.2% precision and 99.4 ± 0.2% sensitivity, and Gradient Boosting (GB) 96.0 ± 0.2% precision and 96.0 ± 0.2% sensitivity. ResNet50 trained with the extracted parameters reached 98.0 ± 1.5% accuracy and a 98.2 ± 1.7% F1-score. Capsule-based architectures achieved the best results, with ConvCapsuleLayer reaching 98.7 ± 0.2% accuracy, 100.0 ± 0.0% precision for the normal class, and a 98.9 ± 0.2% F1-score for the affected class. CNN3D applied to 3D image inputs reached 88.61 ± 1.01% accuracy and a 90.14 ± 0.95% F1-score. On the expanded dataset with ML and PCA-selected features, Random Forest achieved 99.4 ± 0.2% precision and 99.4 ± 0.2% sensitivity, K-Nearest Neighbors 99.2 ± 0.0% precision and 99.2 ± 0.0% sensitivity, and SVM 99.2 ± 0.0% precision and 99.2 ± 0.0% sensitivity, demonstrating consistently high performance. All models were evaluated using repeated train-test splits to average standard metrics (accuracy, precision, recall, F1-score), and processing times were measured, showing very low per-image execution times (as low as 3.69 × 10⁻⁴ s/image), supporting potential real-time industrial application. These results indicate that combining statistical descriptors with ML and DL architectures provides a robust and scalable solution for automated, non-destructive surface defect detection, with high accuracy and reliability across both the original and expanded datasets.
(This article belongs to the Special Issue AI-Based Computer Vision Sensors & Systems—2nd Edition)
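
As a rough illustration of the statistical-descriptor-plus-classifier direction evaluated above, the sketch below computes simple intensity statistics per image, trains a Random Forest, and averages an F1-score over repeated train-test splits. The specific descriptors, split settings, and synthetic stand-in data are assumptions, not the study's protocol.

```python
# Sketch: statistical features per image + Random Forest, scored over
# repeated train-test splits. Feature set and split settings are assumptions.
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def stat_features(img: np.ndarray) -> np.ndarray:
    """Simple statistical descriptors of the pixel-intensity distribution."""
    flat = img.ravel().astype(np.float64)
    return np.array([flat.mean(), flat.std(), skew(flat), kurtosis(flat),
                     np.percentile(flat, 25), np.percentile(flat, 75)])

# Synthetic stand-in: 200 grayscale "surface" images with binary labels.
rng = np.random.default_rng(0)
images = rng.normal(size=(200, 64, 64))
labels = rng.integers(0, 2, size=200)
X = np.stack([stat_features(im) for im in images])

scores = []
for seed in range(10):  # repeated splits, as the evaluation above describes
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.3, random_state=seed, stratify=labels)
    clf = RandomForestClassifier(n_estimators=200, random_state=seed)
    scores.append(f1_score(y_te, clf.fit(X_tr, y_tr).predict(X_te)))
print(f"F1 over 10 splits: {np.mean(scores):.3f} ± {np.std(scores):.3f}")
```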

19 pages, 3920 KB  
Article
HCDFI-YOLOv8: A Transmission Line Ice Cover Detection Model Based on Improved YOLOv8 in Complex Environmental Contexts
by Lipeng Kang, Feng Xing, Tao Zhong and Caiyan Qin
Sensors 2025, 25(17), 5421; https://doi.org/10.3390/s25175421 - 2 Sep 2025
Viewed by 540
Abstract
When unmanned aerial vehicles (UAVs) perform transmission line ice cover detection, variable shooting angles and complex background environments often lead to poor ice-cover recognition accuracy and difficulty in accurately identifying targets. To address these issues, this study proposes an improved icing detection model based on You Only Look Once version 8, termed HCDFI-YOLOv8. First, a cross-dense hybrid (CDH) parallel heterogeneous convolutional module is proposed, which not only improves the detection accuracy of the model but also effectively alleviates the surge in floating-point operations that accompanies model improvement. Second, deep and shallow features are weighted and fused using an improved CSPDarknet53 to 2-Stage FPN_Dynamic Feature Fusion (C2f_DFF) module to reduce feature loss in the neck network. Third, the detection head is optimized with the feature adaptive spatial feature fusion (FASFF) head module to enhance the model's ability to extract features at different scales. Finally, a new inner-complete intersection over union (Inner_CIoU) loss function is introduced to address the limitations of the CIoU loss used in the original YOLOv8. Experimental results demonstrate that the proposed HCDFI-YOLOv8 model achieves a 2.7% improvement in mAP@0.5 and a 2.5% improvement in mAP@0.5:0.95 compared with standard YOLOv8. Among twelve models evaluated for icing detection, the proposed model delivers the highest overall detection accuracy. These results verify the accuracy of the HCDFI-YOLOv8 model in complex transmission line environments and provide effective technical support for transmission line ice cover detection.
(This article belongs to the Special Issue AI-Based Computer Vision Sensors & Systems—2nd Edition)
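
To unpack the Inner-CIoU idea mentioned in this abstract, the numpy sketch below combines CIoU's centre-distance and aspect-ratio penalties with an IoU term computed on auxiliary boxes rescaled by a ratio about their centres, following the general Inner-IoU construction. The ratio value and box encoding are assumptions; this is not the authors' exact formulation.

```python
# Sketch of an Inner-CIoU-style box loss. Boxes are (cx, cy, w, h);
# `ratio` rescales both boxes before the IoU term (the "inner" part).
import numpy as np

def inner_iou(b1, b2, ratio=0.75):
    """IoU of two boxes after scaling widths/heights by `ratio`."""
    (cx1, cy1, w1, h1), (cx2, cy2, w2, h2) = b1, b2
    w1, h1, w2, h2 = w1 * ratio, h1 * ratio, w2 * ratio, h2 * ratio
    ix = max(min(cx1 + w1 / 2, cx2 + w2 / 2) - max(cx1 - w1 / 2, cx2 - w2 / 2), 0.0)
    iy = max(min(cy1 + h1 / 2, cy2 + h2 / 2) - max(cy1 - h1 / 2, cy2 - h2 / 2), 0.0)
    inter = ix * iy
    return inter / (w1 * h1 + w2 * h2 - inter + 1e-9)

def inner_ciou_loss(pred, target, ratio=0.75):
    """1 - InnerIoU + centre-distance penalty + aspect-ratio penalty (CIoU-style)."""
    (cx1, cy1, w1, h1), (cx2, cy2, w2, h2) = pred, target
    iou = inner_iou(pred, target, ratio)
    # Squared centre distance over squared diagonal of the smallest enclosing box.
    cw = max(cx1 + w1 / 2, cx2 + w2 / 2) - min(cx1 - w1 / 2, cx2 - w2 / 2)
    ch = max(cy1 + h1 / 2, cy2 + h2 / 2) - min(cy1 - h1 / 2, cy2 - h2 / 2)
    rho2 = (cx1 - cx2) ** 2 + (cy1 - cy2) ** 2
    # Aspect-ratio consistency term and its adaptive weight, as in CIoU.
    v = (4 / np.pi**2) * (np.arctan(w2 / h2) - np.arctan(w1 / h1)) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return 1 - iou + rho2 / (cw**2 + ch**2 + 1e-9) + alpha * v

print(inner_ciou_loss((50, 50, 20, 10), (52, 49, 18, 12)))  # toy boxes
```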