Multi-Sensor Data Fusion

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 20 November 2025 | Viewed by 10434

Special Issue Editor


Prof. Dr. Ning Xiong
Guest Editor
Division of Intelligent Future Technologies, Mälardalen University, 721 23 Västerås, Sweden
Interests: machine learning; evolutionary computing; multi-sensor data fusion; reasoning with uncertainty; and their applications in practical scenarios

Special Issue Information

Dear Colleagues,

Multi-sensor data fusion aims to synergistically combine data and information from multiple sensors and sources to achieve more accurate and specific inference than that which could be obtained by using a single sensor alone. It is a multidisciplinary area wherein various mathematical, signal processing, and artificial intelligence techniques are employed to analyze data and extract useful, coherent information about the underlying situation. In recent decades, research in sensor fusion has proliferated and become promising for many technical and engineering applications such as robotics and autonomous systems, process control, automated manufacturing, remote sensing, and sensor networks.

This Special Issue will showcase recent advances in the field of multi-sensor data and information fusion.

We expect that papers will tackle at least one of the following three aspects: architecture, methods and algorithms, and applications. Both fundamental papers exploring new methodologies and applied papers demonstrating new applications of sensor fusion are welcome. Topics of interest for this Special Issue include, but are not limited to, the following:

  • New system architectures and design paradigms;
  • Advanced signal and image processing;
  • Machine learning for sensor fusion;
  • Adaptive fusion techniques;
  • Multi-modal sensor fusion;
  • Uncertainty handling in the fusion process;
  • Resource management and fusion process refinement;
  • Evaluation of fusion performance;
  • Applications to real-world problems.

Prof. Dr. Ning Xiong
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensor fusion
  • fusion system architecture
  • signal
  • image
  • machine learning
  • multi-modal fusion
  • uncertainty
  • resource management
  • fusion performance

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research

22 pages, 5557 KiB  
Article
Flight Trajectory Prediction Based on Automatic Dependent Surveillance-Broadcast Data Fusion with Interacting Multiple Model and Informer Framework
by Fan Li, Xuezhi Xu, Rihan Wang, Mingyuan Ma and Zijing Dong
Sensors 2025, 25(8), 2531; https://doi.org/10.3390/s25082531 - 17 Apr 2025
Viewed by 190
Abstract
Aircraft trajectory prediction is challenging because the flight process involves uncertain kinematic motion and varying dynamics, characterized by intricate temporal dependencies in the flight surveillance data. To address these challenges, this study proposes a novel hybrid prediction framework, the IMM-Informer, which integrates an interacting multiple model (IMM) approach with the deep learning-based Informer model. The IMM processes flight tracks with multiple typical motion models to produce the initial state predictions. Within the Informer framework, the encoder captures the temporal features with the ProbSparse self-attention mechanism, and the decoder generates trajectory deviation predictions. A final fusion step combines the IMM estimates with the Informer correction outputs, leveraging their respective strengths to achieve accurate and robust predictions. Experiments are conducted using real flight surveillance data received from automatic dependent surveillance-broadcast (ADS-B) sensors to validate the effectiveness of the proposed method. The results demonstrate that the IMM-Informer framework achieves notable prediction-error reductions and significantly outperforms standalone sequence prediction network models in accuracy. Full article
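Conceptually, the core of such a framework is the IMM combination step: several motion-model filters run in parallel, and their outputs are mixed according to model probabilities propagated through a Markov switching matrix. A minimal sketch of that step, with scalar states and hypothetical values for illustration (not the authors' implementation):

```python
# IMM combination step (sketch): mix per-model predictions by model
# probabilities that are propagated through a transition matrix and
# updated with each model's measurement likelihood.

def imm_combine(predictions, mu, likelihoods, transition):
    """predictions: per-model state predictions (scalar positions here);
    mu: prior model probabilities; likelihoods: p(z | model);
    transition: Markov model-switching matrix (rows = 'from' model)."""
    n = len(mu)
    # Predicted model probabilities after a possible model switch.
    mu_pred = [sum(transition[i][j] * mu[i] for i in range(n)) for j in range(n)]
    # Bayesian update with each model's measurement likelihood.
    post = [mu_pred[j] * likelihoods[j] for j in range(n)]
    norm = sum(post)
    mu_new = [p / norm for p in post]
    # Probability-weighted mixture of the per-model predictions.
    fused = sum(m * x for m, x in zip(mu_new, predictions))
    return fused, mu_new

# Constant-velocity vs. constant-acceleration model; the measurement
# likelihood currently favors the first model, so the mix leans toward it.
fused, mu = imm_combine(
    predictions=[100.0, 104.0],
    mu=[0.7, 0.3],
    likelihoods=[0.9, 0.2],
    transition=[[0.95, 0.05], [0.05, 0.95]],
)
```

In the paper, a learned Informer correction then refines such a mixed prediction; only the classical IMM mixing is shown here.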
(This article belongs to the Special Issue Multi-Sensor Data Fusion)

23 pages, 3173 KiB  
Article
A New Association Approach for Multi-Sensor Air Traffic Surveillance Data Based on Deep Neural Networks
by Joaquin Vico Navarro, Juan Vicente Balbastre Tejedor and Juan Antonio Vila Carbó
Sensors 2025, 25(3), 931; https://doi.org/10.3390/s25030931 - 4 Feb 2025
Viewed by 963
Abstract
Air Traffic Services play a crucial role in the safety, security, and efficiency of air transportation. The International Civil Aviation Organization (ICAO) performance-based surveillance concept requires monitoring the actual performance of the surveillance systems underpinning these services. This assessment is usually based on the analysis of data gathered during the normal operation of the surveillance systems, also known as opportunity traffic. Processing opportunity traffic requires data association to identify and assign the sensor detections to a flight. Current techniques for association require expert knowledge of the flight dynamics of the target aircraft and have issues with high-manoeuvrability targets like military aircraft and Unmanned Aircraft (UA). This paper addresses the data association problem through the use of the Multi-Sensor Intelligent Data Association (M-SIOTA) algorithm based on Deep Neural Networks (DNNs). This is an innovative perspective on the data association of multi-sensor surveillance through the lens of machine learning. This approach enables data processing without assuming any dynamics model, so it is applicable to any aircraft class or airspace structure. The proposed algorithm is trained and validated using several surveillance datasets corresponding to various phases of flight and surveillance sensor mixes. Results show improvements in association performance in the different scenarios. Full article
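For context, the conventional baseline that learned association methods aim to improve on is gated nearest-neighbour assignment, which presumes a dynamics model good enough to predict track positions. A minimal sketch with hypothetical 2D coordinates (greedy, not globally optimal, and not the authors' M-SIOTA algorithm):

```python
# Gated nearest-neighbour data association (sketch): pair each detection
# with the closest predicted track position inside a validation gate.
import math

def associate(detections, track_predictions, gate=5.0):
    """Greedily pair detections with track predictions within a gate radius.
    Returns {detection_index: track_index}; ungated detections stay unassigned."""
    pairs = []
    for d, (dx, dy) in enumerate(detections):
        for t, (tx, ty) in enumerate(track_predictions):
            dist = math.hypot(dx - tx, dy - ty)
            if dist <= gate:
                pairs.append((dist, d, t))
    pairs.sort()  # closest candidate pairs first
    assigned, used_d, used_t = {}, set(), set()
    for dist, d, t in pairs:
        if d not in used_d and t not in used_t:
            assigned[d] = t
            used_d.add(d)
            used_t.add(t)
    return assigned

tracks = [(0.0, 0.0), (50.0, 50.0)]
dets = [(1.0, 1.5), (49.0, 51.0), (200.0, 200.0)]  # last point is clutter
result = associate(dets, tracks)  # det 0 -> track 0, det 1 -> track 1
```

A DNN-based approach replaces the hand-tuned gate and distance score with a learned association score, which is what removes the dependence on an explicit dynamics model.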
(This article belongs to the Special Issue Multi-Sensor Data Fusion)

21 pages, 2583 KiB  
Article
MDAR: A Multiscale Features-Based Network for Remotely Measuring Human Heart Rate Utilizing Dual-Branch Architecture and Alternating Frame Shifts in Facial Videos
by Linhua Zhang, Jinchang Ren, Shuang Zhao and Peng Wu
Sensors 2024, 24(21), 6791; https://doi.org/10.3390/s24216791 - 22 Oct 2024
Viewed by 1066
Abstract
Remote photoplethysmography (rPPG) refers to a non-contact technique that measures heart rate by analyzing the subtle signal changes of facial blood flow captured by video sensors. It is widely used in contactless medical monitoring, remote health management, and activity monitoring, providing a more convenient and non-invasive way to monitor heart health. However, factors such as ambient light variations, facial movements, and differences in light absorption and reflection pose challenges to deep learning-based methods. To address these difficulties, we put forward a heart rate measurement network based on multiscale features. In this study, we designed and implemented a dual-branch signal processing framework that combines static and dynamic features, proposing a novel and efficient method for feature fusion that enhances the robustness and reliability of the signal. Furthermore, we proposed an alternate time-shift module to enhance the model's temporal depth. To integrate the features extracted at different scales, we utilized a multiscale feature fusion method, enabling the model to accurately capture subtle changes in blood flow. We conducted cross-validation on three public datasets: UBFC-rPPG, PURE, and MMPD. The results demonstrate that MDAR not only ensures fast inference speed but also significantly improves performance. The two main indicators, MAE and MAPE, achieved improvements of at least 30.6% and 30.2%, respectively, surpassing state-of-the-art methods. These conclusions highlight the potential advantages of MDAR for practical applications. Full article
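The final stage of any rPPG pipeline, once a blood-volume pulse signal has been extracted from the face, is spectral heart-rate estimation. A self-contained sketch on a synthetic pulse (illustrative only; the paper's contribution lies in extracting a clean signal, not in this last step):

```python
# Heart rate as the dominant frequency of the pulse signal within the
# physiological band (roughly 0.7-4.0 Hz, i.e. 42-240 bpm).
import math

def heart_rate_bpm(signal, fs, lo=0.7, hi=4.0):
    """Dominant frequency of `signal` (sampled at fs Hz) within [lo, hi] Hz,
    found with a direct DFT and returned in beats per minute."""
    n = len(signal)
    mean = sum(signal) / n
    best_f, best_p = 0.0, -1.0
    for k in range(1, n // 2):
        f = k * fs / n
        if not (lo <= f <= hi):
            continue
        re = sum((signal[t] - mean) * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum((signal[t] - mean) * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = re * re + im * im
        if power > best_p:
            best_f, best_p = f, power
    return 60.0 * best_f

# Synthetic 1.2 Hz pulse (72 bpm) sampled at 30 fps for 10 s; 1.2 Hz
# falls exactly on a DFT bin, so the estimate is exact here.
fs = 30.0
sig = [math.sin(2 * math.pi * 1.2 * t / fs) for t in range(int(fs * 10.0))]
bpm = heart_rate_bpm(sig, fs)
```

Real pipelines typically use an FFT with windowing; the direct DFT above keeps the sketch dependency-free.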
(This article belongs to the Special Issue Multi-Sensor Data Fusion)

38 pages, 16115 KiB  
Article
Neural Approach to Coordinate Transformation for LiDAR–Camera Data Fusion in Coastal Observation
by Ilona Garczyńska-Cyprysiak, Witold Kazimierski and Marta Włodarczyk-Sielicka
Sensors 2024, 24(20), 6766; https://doi.org/10.3390/s24206766 - 21 Oct 2024
Viewed by 1908
Abstract
The paper presents research related to coastal observation using a camera and LiDAR (Light Detection and Ranging) mounted on an unmanned surface vehicle (USV). Fusion of data from these two sensors can provide wider and more accurate information about shore features, exploiting the synergy effect and combining the advantages of both systems. Fusion is used in autonomous cars and robots, despite many challenges related to spatiotemporal alignment or sensor calibration. Measurements from various sensors with different timestamps have to be aligned, and the measurement systems need to be calibrated to avoid errors related to offsets. When using data from unstable, moving platforms, such as surface vehicles, it is more difficult to match sensors in time and space, and thus data acquired from different devices will be subject to some misalignment. In this article, we try to overcome these problems by proposing the use of a point matching algorithm for coordinate transformation of data from both systems. The essence of the paper is to verify algorithms based on selected basic neural networks, namely the multilayer perceptron (MLP), the radial basis function network (RBF), and the general regression neural network (GRNN), for the alignment process. They are tested with real data recorded from the USV and verified against numerical methods commonly used for coordinate transformation. The results show that the proposed approach can be an effective alternative to numerical calculations, thanks to improvements in the process. The image data can provide information for identifying characteristic objects, and the accuracies obtained for platform dynamics in the water environment are satisfactory (root mean square error, RMSE, smaller than 1 m in many cases). The networks provided outstanding results for the training set; however, they did not perform as well as expected in terms of the generalization capability of the model. This leads to the conclusion that processing algorithms cannot overcome the limitations of matching point accuracy. Further research will extend the approach to include information on the position and direction of the vessel. Full article
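As a point of reference, the numerical coordinate-transformation methods such networks are verified against are typically similarity (Helmert-type) transformations fitted by least squares to matched point pairs. A minimal closed-form sketch for the 2D case (synthetic coordinates, not the USV data):

```python
# 2D four-parameter (Helmert) similarity transform fitted in closed form:
# u = a*x - b*y + tx,  v = b*x + a*y + ty, least squares over point pairs.

def fit_helmert(src, dst):
    """Fit (a, b, tx, ty) from matched source/destination points."""
    n = len(src)
    mx = sum(p[0] for p in src) / n
    my = sum(p[1] for p in src) / n
    ux = sum(p[0] for p in dst) / n
    uy = sum(p[1] for p in dst) / n
    sa = sb = ss = 0.0
    for (x, y), (u, v) in zip(src, dst):
        xc, yc, uc, vc = x - mx, y - my, u - ux, v - uy
        sa += xc * uc + yc * vc      # numerator of the scale/rotation term a
        sb += xc * vc - yc * uc      # numerator of the rotation term b
        ss += xc * xc + yc * yc      # normalizer
    a, b = sa / ss, sb / ss
    tx = ux - a * mx + b * my        # translation from the centroids
    ty = uy - b * mx - a * my
    return a, b, tx, ty

def apply(params, p):
    a, b, tx, ty = params
    x, y = p
    return a * x - b * y + tx, b * x + a * y + ty

# Points related by a pure translation of (5, -3); the fit recovers it.
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
dst = [(5.0, -3.0), (6.0, -3.0), (5.0, -2.0)]
params = fit_helmert(src, dst)
```

A neural alternative such as the MLP in the paper can absorb distortions this rigid model cannot, which is exactly the trade-off the article investigates.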
(This article belongs to the Special Issue Multi-Sensor Data Fusion)

20 pages, 3825 KiB  
Article
A Lightweight Pathological Gait Recognition Approach Based on a New Gait Template in Side-View and Improved Attention Mechanism
by Congcong Li, Bin Wang, Yifan Li and Bo Liu
Sensors 2024, 24(17), 5574; https://doi.org/10.3390/s24175574 - 28 Aug 2024
Viewed by 1036
Abstract
As people age, abnormal gait recognition becomes a critical problem in the field of healthcare. Currently, some algorithms can classify gaits with different pathologies, but they cannot guarantee high accuracy while keeping the model lightweight. To address these issues, this paper proposes a lightweight network (NSVGT-ICBAM-FACN) based on the new side-view gait template (NSVGT), improved convolutional block attention module (ICBAM), and transfer learning, which fuses convolutional features containing high-level information with attention features containing semantic information of interest to achieve robust pathological gait recognition. The NSVGT contains different levels of information, such as gait shape, gait dynamics, and the energy distribution at different parts of the body, which integrates and compensates for the strengths and limitations of each feature, making gait characterization more robust. The ICBAM employs parallel concatenation and depthwise separable convolution (DSC). The former strengthens the interaction between features; the latter improves the efficiency of processing gait information. In the classification head, we employ DSC instead of global average pooling. This method preserves the spatial information and learns the weights of different locations, which solves the problem of the corner points and center points in the feature map having the same weight. The classification accuracies of the model on the self-constructed dataset and the GAIT-IST dataset are 98.43% and 98.69%, which are 0.77% and 0.59% higher than those of the SOTA model, respectively. The experiments demonstrate that the method achieves a good balance between light weight and performance. Full article
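The gait-template idea underlying the NSVGT extends the classic Gait Energy Image (GEI): a per-pixel average of aligned binary silhouettes over one gait cycle. A toy GEI sketch on tiny synthetic silhouettes (illustrative; the NSVGT itself fuses several richer representations):

```python
# Gait Energy Image (sketch): average a sequence of aligned binary
# silhouette frames. Pixels near 1 are static body parts; intermediate
# values encode where and how much the body moves during the cycle.

def gait_energy_image(frames):
    """Per-pixel mean over equally sized binary silhouette frames."""
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / n for c in range(cols)]
            for r in range(rows)]

# Two 2x3 'silhouettes': the torso column is static, the leg pixels swing.
frames = [
    [[0, 1, 0],
     [1, 1, 0]],
    [[0, 1, 0],
     [0, 1, 1]],
]
gei = gait_energy_image(frames)
# gei == [[0.0, 1.0, 0.0], [0.5, 1.0, 0.5]]
```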
(This article belongs to the Special Issue Multi-Sensor Data Fusion)

20 pages, 3546 KiB  
Article
Research into the Applications of a Multi-Scale Feature Fusion Model in the Recognition of Abnormal Human Behavior
by Congcong Li, Yifan Li, Bin Wang and Yuting Zhang
Sensors 2024, 24(15), 5064; https://doi.org/10.3390/s24155064 - 5 Aug 2024
Cited by 1 | Viewed by 1189
Abstract
Due to the increasing severity of population aging in modern society, the accurate and timely identification of, and response to, sudden abnormal behaviors of the elderly have become an urgent and important issue. In current research on computer vision-based abnormal behavior recognition, most algorithms show poor generalization and recognition abilities in practical applications, as well as issues with recognizing single actions. To address these problems, an MSCS–DenseNet–LSTM model based on a multi-scale attention mechanism is proposed. This model integrates the MSCS (Multi-Scale Convolutional Structure) module into the initial convolutional layer of the DenseNet model to form a multi-scale convolution structure. It introduces the improved Inception X module into the Dense Block to form an Inception Dense structure and gradually performs feature fusion through each Dense Block module. The CBAM attention mechanism module is added to the dual-layer LSTM to enhance the model's generalization ability while ensuring the accurate recognition of abnormal actions. Furthermore, to address the issue of single-action abnormal behavior datasets, the RGB image dataset RIDS and the contour image dataset CIDS, each containing various abnormal behaviors, were constructed. The experimental results validate that the proposed MSCS–DenseNet–LSTM model achieved an accuracy, sensitivity, and specificity of 98.80%, 98.75%, and 98.82% on one dataset and 98.30%, 98.28%, and 98.38% on the other. Full article
(This article belongs to the Special Issue Multi-Sensor Data Fusion)

19 pages, 5402 KiB  
Article
Multi-Sensor Adaptive Weighted Data Fusion Based on Biased Estimation
by Mingwei Qiu and Bo Liu
Sensors 2024, 24(11), 3275; https://doi.org/10.3390/s24113275 - 21 May 2024
Cited by 1 | Viewed by 1259
Abstract
In order to avoid the loss of optimality of the optimal weighting factor in some cases and to further reduce the estimation error of an unbiased estimator, a multi-sensor adaptive weighted data fusion algorithm based on biased estimation is proposed. First, it is proven that a biased estimator can further reduce the estimation error of an unbiased estimator, and the reasons for the loss of optimality of the optimal weighting factor are analyzed. Second, a method is proposed for constructing a biased estimate from an unbiased estimate and calculating the optimal weighting factor from the estimation error. Finally, the performance of least squares estimation data fusion, batch estimation data fusion, and biased estimation data fusion is compared through simulation tests, and the results show that biased estimation data fusion has a clear advantage in accuracy, stability, and noise resistance. Full article
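The conventional unbiased scheme this paper refines can be sketched as follows: estimate each sensor's noise variance from its own readings and fuse the sensor means with inverse-variance weights. The sketch below uses synthetic readings; the paper's actual contribution, constructing a biased estimate, is not reproduced here:

```python
# Adaptive inverse-variance weighted fusion (unbiased baseline, sketch).
# Each sensor's variance is estimated from its readings, then the sensor
# means are fused with weights w_i = (1/s_i^2) / sum_j (1/s_j^2).

def adaptive_weighted_fusion(readings_per_sensor):
    """Fuse per-sensor means with weights inverse to the estimated
    sample variances, so steadier sensors dominate the result."""
    means, variances = [], []
    for readings in readings_per_sensor:
        n = len(readings)
        m = sum(readings) / n
        s2 = sum((r - m) ** 2 for r in readings) / (n - 1)  # unbiased variance
        means.append(m)
        variances.append(s2)
    inv = [1.0 / v for v in variances]
    total = sum(inv)
    return sum(w * m for w, m in zip(inv, means)) / total

# A steady sensor and a noisy sensor observing the same constant quantity;
# the fused value lands close to the steady sensor's mean of 10.0.
steady = [10.1, 9.9, 10.0, 10.05, 9.95]
noisy = [11.0, 9.0, 10.5, 8.5, 12.0]
fused = adaptive_weighted_fusion([steady, noisy])
```

This weighting is optimal only when the variance estimates are accurate, which is exactly the case the paper's biased-estimation construction targets.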
(This article belongs to the Special Issue Multi-Sensor Data Fusion)

28 pages, 36717 KiB  
Article
Multi-Sensor Image and Range-Based Techniques for the Geometric Documentation and the Photorealistic 3D Modeling of Complex Architectural Monuments
by Alexandra Tsiachta, Panagiotis Argyrou, Ioannis Tsougas, Maria Kladou, Panagiotis Ravanidis, Dimitris Kaimaris, Charalampos Georgiadis, Olga Georgoula and Petros Patias
Sensors 2024, 24(9), 2671; https://doi.org/10.3390/s24092671 - 23 Apr 2024
Cited by 4 | Viewed by 1911
Abstract
The selection of the optimal methodology for the 3D geometric documentation of cultural heritage is a subject of high concern in contemporary scientific research. As a matter of fact, it requires a multi-source data acquisition process and the fusion of datasets from different sensors. This paper aims to demonstrate the workflow for the proper implementation and integration of geodetic, photogrammetric and laser scanning techniques so that high-quality photorealistic 3D models and other documentation products can be generated for a complicated, large-dimensional architectural monument and its surroundings. As a case study, we present the monitoring of the Mehmet Bey Mosque, which is a landmark in the city of Serres and a significant remaining sample of the Ottoman architecture in Greece. The surveying campaign was conducted in the context of the 2022–2023 annual workshop of the Interdepartmental Program of Postgraduate Studies “Protection Conservation and Restoration of Cultural Monuments” of the Aristotle University of Thessaloniki, and it served as a geometric background for interdisciplinary cooperation and decision-making on the monument restoration process. The results of our study encourage the fusion of terrestrial laser scanning and photogrammetric datasets for the 3D modeling of the mosque, as they supplement each other as regards geometry and texture. Full article
(This article belongs to the Special Issue Multi-Sensor Data Fusion)
