Deep Learning for Perception and Recognition Based on Sensor Data: Methods and Applications, 2nd Edition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: 31 August 2026

Special Issue Editors


Dr. Gaochang Wu
Guest Editor
State Key Laboratory of Synthetical Automation for Process Industries, Northeastern University, Shenyang, China
Interests: multimodal perception and recognition; light field processing; computer vision in industrial scenarios

Dr. Zizhu Fan
Guest Editor
College of Computer Science and Technology, Shanghai Electric Power University, Shanghai, China
Interests: pattern recognition and machine learning; interpretable artificial intelligence; computer vision; smart grid

Dr. Dong Pan
Guest Editor
School of Automation, Central South University, Changsha 410083, China
Interests: infrared thermography; temperature measurement; deep learning; vision-based measurement; object detection; information fusion

Special Issue Information

Dear Colleagues,

The rapid advancement of deep learning technology has brought about transformative breakthroughs in interpreting data from a multitude of sensors, revolutionizing perception and recognition systems across a wide range of applications. In addition to driving innovation in industrial sectors, it has opened up significant opportunities in fields such as intelligent transportation, smart cities, healthcare, and robotics, where perception is fundamentally built upon heterogeneous sensor streams from cameras, LiDAR, radar, inertial measurement units (IMUs), and beyond.

Deep learning significantly enhances the accuracy and robustness of sensor-based perception and recognition systems through hierarchical feature extraction in multilayer neural networks, achieving remarkable results in areas such as sensor fusion, vision-based inspection, spectral data analysis, and point cloud processing. By training on large volumes of sensor data, deep learning algorithms are able to automatically learn complex feature representations and efficiently recognize patterns directly from raw or minimally processed sensor inputs.
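To make this concrete, here is a minimal, illustrative sketch of learning directly from raw sensor input: a small 1D convolutional network that maps windows of six-channel IMU samples to activity classes. It is a generic example rather than a method from this Special Issue, and all layer sizes, the window length, and the class count are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn

# Illustrative only: a tiny 1D CNN that learns hierarchical features
# directly from raw IMU windows (6 channels: 3-axis accelerometer +
# 3-axis gyroscope). All sizes are arbitrary assumptions.
class RawIMUClassifier(nn.Module):
    def __init__(self, in_channels: int = 6, num_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3),  # low-level motion patterns
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),           # mid-level composite features
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                               # global temporal summary
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time), raw or minimally processed samples
        return self.classifier(self.features(x).squeeze(-1))

model = RawIMUClassifier()
window = torch.randn(8, 6, 128)   # batch of 8 windows, 128 samples each
print(model(window).shape)        # torch.Size([8, 5])
```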

As application scenarios grow more complex and sensor data become increasingly diverse (e.g., multimodal, high-dimensional, and streaming), deep learning models continue to face significant challenges in solving real-world perception and recognition problems. These challenges include ensuring model generalization when dealing with inherently noisy, imbalanced, or limited sensor data; enhancing performance through self-supervised, few-shot, or transfer learning in cases of insufficient labeled sensor data; seamlessly integrating information across different sensor scales, dimensions, and modalities; and developing explainable and trustworthy perception and recognition systems for high-risk applications.
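As one concrete illustration of the cross-modal integration challenge, the sketch below shows the simplest common pattern, late fusion: each modality gets its own encoder, and the embeddings are concatenated before a joint head. This is a generic textbook design, not a method from this issue, and every module size is an assumption.

```python
import torch
import torch.nn as nn

# Generic late-fusion sketch: separate encoders per modality, concatenated
# embeddings, one shared head. Sizes are illustrative assumptions.
class LateFusionNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.image_encoder = nn.Sequential(         # camera branch
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (B, 16)
        )
        self.imu_encoder = nn.Sequential(           # inertial branch
            nn.Conv1d(6, 16, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),  # -> (B, 16)
        )
        self.head = nn.Linear(16 + 16, num_classes)

    def forward(self, image: torch.Tensor, imu: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.image_encoder(image), self.imu_encoder(imu)], dim=1)
        return self.head(fused)

net = LateFusionNet()
logits = net(torch.randn(4, 3, 64, 64), torch.randn(4, 6, 128))
print(logits.shape)  # torch.Size([4, 10])
```

More advanced designs replace the concatenation with attention-based interaction between the branches, which is where much current research effort lies.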

This Special Issue seeks to highlight advanced research in deep learning for sensor-based perception and recognition. Submitted papers should clearly present novel contributions, whether in general methodologies or innovative applications, addressing any of the following or related topics:

  • Deep Learning for Novel Sensor Modalities
  • Multimodal and Cross-Modal Sensor Fusion
  • Efficient Deep Learning Models for Edge and Sensor Networks
  • Self-Supervised/Semi-Supervised Learning for Sensor Data
  • Point Cloud and 3D Range Data Processing
  • Time-Series and Signal Processing for Sensors
  • Explainable AI (XAI) for Sensor-Based Systems
  • Hardware-Software Co-Design for Sensor Intelligence
  • Multi-Sensor Perception for Robotics: Vision and Beyond
  • Intelligent Decision-Making Based on Sensor-Driven Perception

Dr. Gaochang Wu
Dr. Zizhu Fan
Dr. Dong Pan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image processing
  • visual perception
  • multimodal learning
  • multimodal fusion
  • pattern analysis
  • knowledge system
  • explainable machine learning
  • robot vision

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (3 papers)


Research

24 pages, 4607 KB  
Article
Cross-Modal Interaction Fusion-Based Uncertainty-Aware Prediction Method for Industrial Froth Flotation Concentrate Grade by Using a Hybrid SKNet-ViT Framework
by Fanlei Lu, Weihua Gui, Yulong Wang, Jiayi Zhou and Xiaoli Wang
Sensors 2026, 26(1), 150; https://doi.org/10.3390/s26010150 - 25 Dec 2025
Abstract
In froth flotation, the features of froth images provide important information for predicting the concentrate grade. However, the froth structure is influenced by multiple factors, such as air flowrate, slurry level, ore properties, and reagents, which leads to highly complex and dynamic changes in the image features. Additionally, issues such as the immeasurability of ore properties and measurement errors introduce significant uncertainties, including aleatoric uncertainty (intrinsic variability from ore fluctuations and sensor noise) and epistemic uncertainty (incomplete feature representation and local data heterogeneity), as well as generalization challenges for prediction models. This paper proposes an uncertainty quantification regression framework based on cross-modal interaction fusion, which integrates the complementary advantages of Selective Kernel Networks (SKNet) and Vision Transformers (ViT). By designing a cross-modal interaction module, the method achieves deep fusion of local and global features, reducing the epistemic uncertainty caused by incomplete feature expression in single models. Meanwhile, adaptive calibrated quantile regression, which uses an exponential moving average (EMA) to track real-time coverage and adjust parameters dynamically, optimizes prediction interval coverage, addressing the inability of static quantile regression to adapt to aleatoric uncertainty. A localized conformal prediction module further enhances sensitivity to local data distributions, avoiding the limitation of global conformal methods, which ignore local heterogeneity. Experimental results demonstrate that this method significantly improves the robustness of uncertainty estimation while maintaining high prediction accuracy, providing strong support for intelligent optimization and decision-making in industrial flotation processes.
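The EMA-based coverage tracking mentioned in the abstract can be illustrated with a short, self-contained sketch. This is a generic adaptive-interval loop in the same spirit, not the authors' implementation; the residual distribution, update rule, and constants are all assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): adapt a prediction
# interval's half-width so that its EMA-smoothed empirical coverage
# tracks a target coverage level.
rng = np.random.default_rng(0)
target_coverage = 0.9
ema_decay = 0.95     # assumed smoothing factor for the coverage EMA
step_size = 0.05     # assumed adaptation rate for the interval width
half_width = 1.0     # current half-width of the interval around y_pred
ema_coverage = target_coverage

for t in range(5000):
    y_pred = 0.0
    y_true = rng.normal(0.0, 1.0)  # stand-in for a stream of residuals
    covered = float(abs(y_true - y_pred) <= half_width)
    ema_coverage = ema_decay * ema_coverage + (1 - ema_decay) * covered
    # Below-target coverage widens the interval; above-target shrinks it.
    half_width = max(half_width + step_size * (target_coverage - ema_coverage), 1e-3)

print(f"EMA coverage: {ema_coverage:.3f}, half-width: {half_width:.3f}")
# For N(0, 1) residuals and a 90% target, the half-width should settle
# near the 95th-percentile z-value of about 1.645.
```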

33 pages, 1798 KB  
Article
Analyzing Parameter-Efficient Convolutional Neural Network Architectures for Visual Classification
by Nazmul Shahadat and Anthony S. Maida
Sensors 2025, 25(24), 7663; https://doi.org/10.3390/s25247663 - 17 Dec 2025
Abstract
Advances in visual recognition have relied on increasingly deep and wide convolutional neural networks (CNNs), which often introduce substantial computational and memory costs. This review summarizes recent progress in parameter-efficient CNN design across three directions: hypercomplex representations with cross-channel weight sharing, axial attention mechanisms, and real-valued architectures using separable convolutions. We highlight how these approaches reduce parameter counts while maintaining or improving accuracy. We further analyze our contributions within this landscape. Full hypercomplex neural networks (FHNNs) employ hypercomplex layers throughout the architecture to reduce latency and parameters, while representational axial attention models (RepAA) extend this efficiency by generating additional feature representations. To mitigate the remaining overhead of spatial hypercomplex operations, we introduce separable hypercomplex networks (SHNNs), which factorize quaternion convolutions into sequential vectormap operations, lowering parameters by approximately 50%. Finally, we compare these models with popular efficient architectures, such as MobileNets and SqueezeNets, and demonstrate that our residual one-dimensional convolutional networks (RCNs) achieve competitive performance in image classification and super-resolution with significantly fewer parameters. This review highlights emerging strategies for reducing computational overhead in CNNs and outlines directions for future research.
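As a generic illustration of why factorized convolutions shrink models, the snippet below counts parameters for a standard 3x3 convolution versus its depthwise-separable counterpart. Note this is ordinary channel-wise separability, offered only as an analogy; it is not the paper's quaternion/vectormap factorization.

```python
import torch.nn as nn

def n_params(module: nn.Module) -> int:
    return sum(p.numel() for p in module.parameters())

# Standard 3x3 convolution mapping 128 -> 256 channels.
standard = nn.Conv2d(128, 256, kernel_size=3, padding=1)

# Depthwise-separable factorization of the same mapping: a per-channel
# spatial 3x3 convolution followed by a 1x1 pointwise channel mixer.
separable = nn.Sequential(
    nn.Conv2d(128, 128, kernel_size=3, padding=1, groups=128),  # depthwise
    nn.Conv2d(128, 256, kernel_size=1),                         # pointwise
)

print(f"standard:  {n_params(standard):,}")   # 295,168
print(f"separable: {n_params(separable):,}")  # 34,304
```

The roughly 8.6x reduction here comes from decoupling spatial filtering from channel mixing; hypercomplex weight sharing achieves its savings through a related reuse of parameters across channel groups.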

15 pages, 12206 KB  
Article
LMS-Res-YOLO: Lightweight and Multi-Scale Cucumber Detection Model with Residual Blocks
by Bo Li, Guangjin Zhong and Wei Ke
Sensors 2025, 25(23), 7305; https://doi.org/10.3390/s25237305 - 1 Dec 2025
Cited by 1
Abstract
Efficient cucumber detection in greenhouse environments is crucial for agricultural automation, yet challenges like background interference, target occlusion, and resource constraints of edge devices hinder existing solutions. This paper proposes LMS-Res-YOLO, a lightweight multi-scale cucumber detection model with three key innovations: (1) A plug-and-play HEU module (High-Efficiency Unit with residual blocks) that enhances multi-scale feature representation while reducing computational redundancy. (2) A DE-HEAD (Decoupled and Efficient detection HEAD) that reduces the number of model parameters, floating-point operations (FLOPs), and model size. (3) Integration of KernelWarehouse dynamic convolution (KWConv) to balance parameter efficiency and feature expression. Experimental results demonstrate that our model achieves 97.9% mAP@0.5 (0.7% improvement over benchmark model YOLOv8_n), 87.8% mAP@0.5:0.95 (2.3% improvement), and a 95.9% F1-score (0.7% improvement), while reducing FLOPs by 33.3% and parameters by 19.3%. The model shows superior performance in challenging cucumber detection scenarios, with potential applications in edge devices.
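A decoupled detection head in the general YOLOX style, where classification and box regression run through separate branches instead of one shared stack, can be sketched as follows. This is a common pattern offered as an assumed illustration, not the paper's actual DE-HEAD design.

```python
import torch
import torch.nn as nn

# Generic decoupled detection head (YOLOX-style pattern): separate
# branches for class scores and box regression. Channel counts are
# illustrative assumptions, not values from the paper.
class DecoupledHead(nn.Module):
    def __init__(self, in_channels: int = 256, num_classes: int = 1, num_anchors: int = 1):
        super().__init__()
        self.stem = nn.Conv2d(in_channels, 128, kernel_size=1)   # channel reduction
        self.cls_branch = nn.Sequential(
            nn.Conv2d(128, 128, 3, padding=1), nn.SiLU(),
            nn.Conv2d(128, num_anchors * num_classes, 1),        # class scores
        )
        self.reg_branch = nn.Sequential(
            nn.Conv2d(128, 128, 3, padding=1), nn.SiLU(),
            nn.Conv2d(128, num_anchors * 4, 1),                  # box offsets
        )

    def forward(self, feat: torch.Tensor):
        x = self.stem(feat)
        return self.cls_branch(x), self.reg_branch(x)

head = DecoupledHead()
cls_out, reg_out = head(torch.randn(2, 256, 20, 20))
print(cls_out.shape, reg_out.shape)  # (2, 1, 20, 20) (2, 4, 20, 20)
```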