Multisensor Data Fusion and Its Applications in Object Detection and Tracking

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: 20 February 2026 | Viewed by 1712

Special Issue Editors


Prof. Dr. Huafeng Li
Guest Editor
Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
Interests: computer vision; image processing; pattern recognition; machine learning

Dr. Xiaosong Li
Guest Editor
School of Physics and Optoelectronic Engineering, Foshan University, Foshan 528228, China
Interests: image processing; pattern recognition; data fusion; optical metrology

Dr. Yafei Zhang
Guest Editor
Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
Interests: image processing; pattern recognition; machine learning

Special Issue Information

Dear Colleagues,

Multisensor data fusion integrates information from multiple sensors to improve object detection and tracking performance. By combining data from various sources, it mitigates limitations such as noise, blind spots, and environmental interference, making systems more reliable and accurate. Multisensor data fusion is widely applied in autonomous vehicles, robotics, surveillance, and aerospace, where precise, real-time object detection and tracking are critical. The technique leverages data from different types of sensors, such as radar, LiDAR, cameras, and thermal sensors, to build a comprehensive understanding of the environment. Recent advances in data fusion algorithms, machine learning, and signal processing have expanded the potential of multisensor data fusion, enabling more efficient processing of sensor data and improving the robustness of object tracking systems. This Special Issue delves into innovative methodologies, frameworks, and real-world applications of multisensor data fusion, highlighting its significance in enhancing the detection and tracking capabilities of modern systems.

Prof. Dr. Huafeng Li
Dr. Xiaosong Li
Dr. Yafei Zhang
Guest Editors
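
To make the fusion principle above concrete, here is a minimal sketch of its simplest statistical form: inverse-variance weighting of two noisy measurements of the same quantity, such as a range reported by both a radar and a LiDAR. The sensor names and noise figures below are illustrative assumptions, not drawn from any particular system discussed in this Special Issue.

```python
# Minimal sketch: fusing two noisy measurements of the same quantity by
# inverse-variance weighting, so the fused estimate is more certain than
# either sensor alone. Sensor names and noise levels are assumptions.
import numpy as np

def fuse_measurements(z1, var1, z2, var2):
    """Inverse-variance weighted fusion of two scalar measurements."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    z_fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    var_fused = 1.0 / (w1 + w2)  # always <= min(var1, var2)
    return z_fused, var_fused

# Example: a noisier radar range estimate and a more precise LiDAR estimate
radar_range, radar_var = 10.3, 0.25   # metres, variance in m^2
lidar_range, lidar_var = 10.05, 0.04

fused, fused_var = fuse_measurements(radar_range, radar_var,
                                     lidar_range, lidar_var)
print(f"fused range = {fused:.3f} m, variance = {fused_var:.3f} m^2")
```

The fused variance is always at most the smaller of the two input variances, which is the statistical basis of the robustness gains that fusion offers over any single sensor.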

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • multisensor fusion
  • multisource image fusion
  • data fusion algorithms
  • object detection
  • object tracking
  • autonomous systems
  • machine learning
  • sensor integration
  • robotics perception
  • signal processing for sensor fusion

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)

Research

19 pages, 1640 KiB  
Article
Cross-Modal Object Detection Based on Content-Guided Feature Fusion and Self-Calibration
by Liyang Ning, Xuxun Liu, Luoyu Zhou and Xueyu Zou
Sensors 2025, 25(11), 3392; https://doi.org/10.3390/s25113392 - 28 May 2025
Viewed by 183
Abstract
Traditional transformers suffer from limitations in local attention, which can result in inadequate feature representation and reduced detection accuracy in cross-modal object detection tasks. Additionally, deep features are prone to degradation through multiple convolutional layers, leading to the loss of detailed information. To address these issues, we propose a dual-backbone cross-modal object detection model based on YOLOv8n. First, we introduce a parallel network in the backbone to enable the model to process information from different modalities simultaneously. Second, we design a content-guided fusion module (CGF) in the feature extraction network, leveraging both transformer and convolution operations to capture global and local information, thereby enhancing the model’s ability to focus on detailed object features. Finally, we propose an adaptive calibration fusion module (ACF) in the neck to fuse shallow and deep features, supplementing fine-grained details and improving the model’s detection capability in complex environments. Experimental results show that on the LLVIP dataset, our model achieves mAP50 of 96.4 and mAP95 of 63.8; on the M3FD dataset, it achieves mAP50 of 83.7 and mAP95 of 56.6. Our model outperforms baseline models and other state-of-the-art methods in detection accuracy, demonstrating robust performance for cross-modal object detection tasks across various environments.
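
As a rough illustration of the kind of block this abstract describes, the sketch below combines transformer-style global attention with a convolutional local branch and mixes them with a learned, content-dependent gate. Every name, shape, and design choice here is an assumption made for illustration; this is not the authors' CGF implementation.

```python
# Speculative sketch of a fusion block mixing global attention with local
# convolution, in the spirit of content-guided fusion. Not the paper's code.
import torch
import torch.nn as nn

class ContentGuidedFusionSketch(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Global branch: self-attention over flattened spatial positions.
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        # Local branch: depthwise conv captures neighbourhood detail.
        self.local = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        # Learned gate decides, per position, how to mix global and local cues.
        self.gate = nn.Sequential(nn.Conv2d(2 * channels, channels, 1), nn.Sigmoid())

    def forward(self, rgb_feat: torch.Tensor, ir_feat: torch.Tensor) -> torch.Tensor:
        x = rgb_feat + ir_feat                      # naive cross-modal merge
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)       # (B, H*W, C)
        g, _ = self.attn(tokens, tokens, tokens)    # global context
        g = g.transpose(1, 2).reshape(b, c, h, w)
        loc = self.local(x)                         # local detail
        a = self.gate(torch.cat([g, loc], dim=1))   # content-dependent mix
        return a * g + (1 - a) * loc

# Example usage with toy feature maps from two modalities
rgb = torch.randn(1, 64, 32, 32)
ir = torch.randn(1, 64, 32, 32)
fused = ContentGuidedFusionSketch(64)(rgb, ir)
print(fused.shape)  # torch.Size([1, 64, 32, 32])
```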

21 pages, 3625 KiB  
Article
Multimodal Material Classification Using Visual Attention
by Mohadeseh Maleki, Ghazal Rouhafzay and Ana-Maria Cretu
Sensors 2024, 24(23), 7664; https://doi.org/10.3390/s24237664 - 29 Nov 2024
Viewed by 1135
Abstract
The material of an object is an inherent property that can be perceived through various sensory modalities, yet the integration of multisensory information substantially improves the accuracy of these perceptions. For example, differentiating between a ceramic and a plastic cup with similar visual properties may be difficult when relying solely on visual cues. However, the integration of touch and audio feedback when interacting with these objects can significantly clarify these distinctions. Similarly, combining audio and touch exploration with visual guidance can optimize the sensory examination process. In this study, we introduce a multisensory approach for categorizing object materials by integrating visual, audio, and touch perceptions. The main contribution of this paper is the exploration of a computational model of visual attention that directs the sampling of touch and audio data. We conducted experiments using a subset of 63 household objects from a publicly available dataset, the ObjectFolder dataset. Our findings indicate that incorporating a visual attention model enhances the ability to generalize material classifications to new objects and achieves superior performance compared to a baseline approach, where data are gathered through random interactions with an object’s surface.
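
To illustrate the general idea of attention-directed sampling (not the authors' model), the sketch below substitutes a crude gradient-magnitude saliency map for a learned visual attention model and picks the k most salient pixels as candidate touch/audio probe locations, rather than probing the surface at random.

```python
# Minimal sketch: let a (stand-in) visual saliency map choose where to take
# touch/audio samples on an object's surface. All names are illustrative.
import numpy as np

def saliency_map(image: np.ndarray) -> np.ndarray:
    """Crude saliency: gradient magnitude of a grayscale image."""
    gy, gx = np.gradient(image.astype(float))
    return np.hypot(gx, gy)

def select_probe_points(image: np.ndarray, k: int = 5) -> np.ndarray:
    """Return (row, col) coordinates of the k most salient pixels."""
    s = saliency_map(image)
    flat_idx = np.argpartition(s.ravel(), -k)[-k:]
    return np.column_stack(np.unravel_index(flat_idx, s.shape))

# Example: probe the 5 most salient locations of a synthetic 64x64 image
rng = np.random.default_rng(0)
img = rng.random((64, 64))
img[20:40, 20:40] += 2.0  # a bright patch creates strong edges to attend to
print(select_probe_points(img))
```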
