Search Results (6)

Search Parameters:
Keywords = multi-source sensory information fusion

27 pages, 3904 KiB  
Article
IFGAN—A Novel Image Fusion Model to Fuse 3D Point Cloud Sensory Data
by Henry Alexander Ignatious, Hesham El-Sayed and Salah Bouktif
J. Sens. Actuator Netw. 2024, 13(1), 15; https://doi.org/10.3390/jsan13010015 - 7 Feb 2024
Cited by 5 | Viewed by 2631
Abstract
To enhance the level of autonomy in driving, it is crucial to ensure optimal execution of critical maneuvers in all situations. However, numerous accidents involving autonomous vehicles (AVs) developed by major automobile manufacturers in recent years have been attributed to poor decision making caused by insufficient perception of environmental information. AVs employ diverse sensors in today’s technology-driven settings to gather this information. However, due to technical and natural factors, the data collected by these sensors may be incomplete or ambiguous, leading to misinterpretation by AVs and resulting in fatal accidents. Furthermore, environmental information obtained from multiple sources in the vehicular environment often exhibits multimodal characteristics. To address this limitation, effective preprocessing of raw sensory data becomes essential, involving two crucial tasks: data cleaning and data fusion. In this context, we propose a comprehensive data fusion engine that categorizes various sensory data formats and appropriately merges them to enhance accuracy. Specifically, we suggest a general framework to combine audio, visual, and textual data, building upon our previous research on an innovative hybrid image fusion model that fused multispectral image data. However, this previous model faced challenges when fusing 3D point cloud data and handling large volumes of sensory data. To overcome these challenges, our study introduces a novel image fusion model called Image Fusion Generative Adversarial Network (IFGAN), which incorporates a multi-scale attention mechanism into both the generator and discriminator of a Generative Adversarial Network (GAN). The primary objective of image fusion is to merge complementary data from various perspectives of the same scene to enhance the clarity and detail of the final image. The multi-scale attention mechanism serves two purposes: first, it captures comprehensive spatial information so that the generator can focus on foreground and background target information in the sensory data; second, it constrains the discriminator to concentrate on attention regions rather than the entire input image. The proposed model also integrates the color information retention concept from the previously proposed image fusion model. In addition, we propose simple and efficient models for extracting salient image features. We evaluate the proposed models using various standard metrics and compare them with existing popular models. The results demonstrate that our proposed image fusion model outperforms the other models.
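The abstract's central mechanism is a multi-scale spatial attention block shared by the generator and the discriminator. The sketch below is only an illustration of that general idea, not the authors' IFGAN implementation: the scales, layer sizes, and the pooling/upsampling choices are assumptions.

```python
# Hypothetical multi-scale spatial attention block (PyTorch), illustrating the
# mechanism the abstract describes; not the authors' IFGAN code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleSpatialAttention(nn.Module):
    def __init__(self, channels: int, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        # one 1x1 convolution per scale to score spatial locations
        self.score = nn.ModuleList([nn.Conv2d(channels, 1, kernel_size=1) for _ in scales])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        maps = []
        for s, conv in zip(self.scales, self.score):
            pooled = F.avg_pool2d(x, kernel_size=s) if s > 1 else x
            att = conv(pooled)                                 # coarse attention logits
            maps.append(F.interpolate(att, size=(h, w), mode="bilinear",
                                      align_corners=False))    # back to full resolution
        att = torch.sigmoid(torch.stack(maps).mean(dim=0))     # fuse scales into one map
        return x * att                                         # re-weighted features

# usage: out = MultiScaleSpatialAttention(64)(torch.randn(1, 64, 128, 128))
```

In a GAN of the kind described, a block like this would sit in both the generator and the discriminator so that fusion concentrates on salient foreground and background regions rather than the whole image.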

20 pages, 1614 KiB  
Article
A Collaborative Multi-Granularity Architecture for Multi-Source IoT Sensor Data in Air Quality Evaluations
by Wantong Li, Chao Zhang, Yifan Cui and Jiale Shi
Electronics 2023, 12(11), 2380; https://doi.org/10.3390/electronics12112380 - 24 May 2023
Cited by 6 | Viewed by 1593
Abstract
Air pollution (AP) is a significant environmental issue that poses a potential threat to human health. Its adverse effects on human health are diverse, ranging from sensory discomfort to acute physiological reactions. As such, air quality evaluation (AQE) serves as a crucial process that involves the collection of samples from the environment and their analysis to measure AP levels. With the proliferation of Internet of Things (IoT) devices and sensors, real-time and continuous measurement of air pollutants in urban environments has become possible. However, the data obtained from multiple sources of IoT sensors can be uncertain and inaccurate, posing challenges in effectively utilizing and fusing this data. Meanwhile, differences in opinions among decision-makers regarding AQE can affect the outcome of the final decision. To tackle these challenges, this paper systematically investigates a novel multi-attribute group decision-making (MAGDM) approach based on hesitant trapezoidal fuzzy (HTrF) information and discusses its application to AQE. First, by combining HTrF sets (HTrFSs) with multi-granulation rough sets (MGRSs), a new rough set model, named HTrF MGRSs, on a two-universe model is proposed. Second, the definition and properties of the presented model are studied. Third, a decision-making approach based on the background of AQE is constructed by utilizing decision-making index sets (DMISs). Lastly, the validity and feasibility of the constructed approach are demonstrated via a case study conducted in the AQE setting using experimental and comparative analyses. The outcomes of the experiment demonstrate that the presented architecture is able to handle multi-source IoT sensor data (MSIoTSD), providing a sensible conclusion for AQE. In summary, the MAGDM method presented in this article is a promising scheme for solving decision-making problems, where HTrFSs possess excellent information description capabilities and can adequately describe indecision and uncertainty information. Meanwhile, MGRSs serve as an outstanding information fusion tool that can improve the quality and level of decision-making. DMISs are better able to analyze and evaluate information and reduce the impact of disagreement on decision outcomes. The proposed architecture, therefore, provides a viable solution for MSIoTSD facing uncertainty or hesitancy in the AQE environment.
(This article belongs to the Special Issue Advances in Intelligent Data Analysis and Its Applications)
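Purely as an illustration of the data structure the abstract builds on, the snippet below represents a hesitant trapezoidal fuzzy evaluation and scores it with the standard expected value (a + b + c + d)/4 of a trapezoidal fuzzy number. The multi-granulation rough approximations and decision-making index sets of the paper are not reproduced, and the sample values are invented.

```python
# Hypothetical sketch: hesitant trapezoidal fuzzy evaluations reduced to a crisp score.
from dataclasses import dataclass
from statistics import mean

@dataclass
class TrapezoidalFN:
    """Trapezoidal fuzzy number with a <= b <= c <= d."""
    a: float
    b: float
    c: float
    d: float

    def expected(self) -> float:
        # standard expected value of a trapezoidal fuzzy number
        return (self.a + self.b + self.c + self.d) / 4.0

def hesitant_score(hesitant_element) -> float:
    """Score a hesitant element (a set of candidate trapezoidal fuzzy numbers)
    by averaging the expected values of its candidates."""
    return mean(t.expected() for t in hesitant_element)

# invented evaluation of one monitoring site by a hesitant decision-maker
site = [TrapezoidalFN(0.5, 0.6, 0.7, 0.8), TrapezoidalFN(0.4, 0.5, 0.6, 0.7)]
print(hesitant_score(site))  # 0.6
```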

24 pages, 19389 KiB  
Article
Intelligent Parking Control Method Based on Multi-Source Sensory Information Fusion and End-to-End Deep Learning
by Zhenpeng Ma, Haobin Jiang, Shidian Ma and Yue Li
Appl. Sci. 2023, 13(8), 5003; https://doi.org/10.3390/app13085003 - 16 Apr 2023
Cited by 4 | Viewed by 2354
Abstract
To address the challenges of poor intelligent parking performance and reduced efficiency in complex environments, this study presents an end-to-end intelligent parking control model based on a Convolutional Neural Network–Long Short-Term Memory (CNN-LSTM) architecture incorporating multi-source sensory information fusion to improve decision-making and adaptability. The model can produce real-time intelligent parking control decisions by extracting spatiotemporal features, including comprehensive 360-degree panoramic images and ultrasonic sensor distance measurements. To enhance the coverage of real-world environments in the dataset, a data collection platform was developed, leveraging the PreScan simulation platform in conjunction with actual parking environments. Consequently, a comprehensive parking environment dataset comprising various types was constructed. A deep learning model was devised to improve horizontal and vertical control in intelligent parking systems, integrating Convolutional Neural Networks and Long Short-Term Memory in a parallel configuration. By meticulously accounting for parking environment characteristics, sliding window parameters were optimized, and transfer learning was employed for secondary model training to strengthen prediction accuracy. To verify the system’s robustness, simulation tests were performed. The final results from the real-environment experiments revealed that the end-to-end intelligent parking model substantially surpassed existing approaches, improving parking efficiency and effectiveness in complex contexts.
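To make the parallel CNN/LSTM idea concrete, here is a minimal sketch of a controller in which a CNN branch encodes the panoramic view while an LSTM branch summarizes a window of ultrasonic distance readings, with the fused features driving a small control head. This is not the paper's network: the channel counts, the twelve-sensor ultrasonic vector, and the two-value control output are assumptions.

```python
# Hypothetical parallel CNN/LSTM parking controller sketch (PyTorch).
import torch
import torch.nn as nn

class ParkingCNNLSTM(nn.Module):
    def __init__(self, n_ultrasonic: int = 12, hidden: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(                                # image branch
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten())          # -> 32 * 4 * 4 = 512
        self.lstm = nn.LSTM(n_ultrasonic, hidden, batch_first=True)  # distance branch
        self.head = nn.Linear(512 + hidden, 2)                   # e.g., steering and speed

    def forward(self, image: torch.Tensor, ultrasonic_seq: torch.Tensor) -> torch.Tensor:
        # image: (B, 3, H, W) panoramic view; ultrasonic_seq: (B, T, n_ultrasonic)
        img_feat = self.cnn(image)
        _, (h_n, _) = self.lstm(ultrasonic_seq)                  # last hidden state of the LSTM
        return self.head(torch.cat([img_feat, h_n[-1]], dim=-1))

# usage: ParkingCNNLSTM()(torch.randn(2, 3, 128, 128), torch.randn(2, 10, 12))
```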

21 pages, 4188 KiB  
Article
A Multi-Sensor Data-Fusion Method Based on Cloud Model and Improved Evidence Theory
by Xinjian Xiang, Kehan Li, Bingqiang Huang and Ying Cao
Sensors 2022, 22(15), 5902; https://doi.org/10.3390/s22155902 - 7 Aug 2022
Cited by 13 | Viewed by 3354
Abstract
Heterogeneous multi-sensor devices are essential components of information-aware systems. Because of the ambiguous and contradictory nature of multi-sensor data, a data-fusion method based on the cloud model and improved evidence theory is proposed. To complete the conversion from quantitative to qualitative data, the cloud model is employed to construct the basic probability assignment (BPA) function of the evidence corresponding to each data source. To address the issue that traditional evidence theory produces results that do not correspond to the facts when fusing conflicting evidence, the three measures of the Jousselme distance, cosine similarity, and the Jaccard coefficient are combined to measure the similarity of the evidence. The Hellinger distance of the interval is used to calculate the credibility of the evidence. The similarity and credibility are combined to improve the evidence, and the fusion is performed according to Dempster’s rule to finally obtain the results. The numerical example results show that the proposed improved evidence theory method has better convergence and focus, and the confidence in the correct proposition is up to 100%. Applying the proposed multi-sensor data-fusion method to early indoor fire detection, the method improves the accuracy by 0.9–6.4% and reduces the false alarm rate by 0.7–10.2% compared with traditional and other improved evidence theories, proving its validity and feasibility and providing a useful reference for multi-sensor information fusion.
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)
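The fusion step the abstract describes ultimately rests on Dempster's rule applied to (similarity- and credibility-weighted) basic probability assignments. The cloud-model BPA construction and the three similarity measures are not reproduced here; the sketch below only shows the classic combination rule itself, with invented masses for a two-hypothesis fire-detection example.

```python
# Minimal Dempster's rule of combination; mass functions map frozensets of
# hypotheses to masses. Illustrative only; the paper's weighting steps are omitted.
def dempster_combine(m1, m2):
    fused, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb                   # mass assigned to the empty set
    return {h: mass / (1.0 - conflict) for h, mass in fused.items()}

# invented BPAs from two sensors over the frame {fire, normal}
s1 = {frozenset({"fire"}): 0.7, frozenset({"normal"}): 0.2, frozenset({"fire", "normal"}): 0.1}
s2 = {frozenset({"fire"}): 0.6, frozenset({"normal"}): 0.3, frozenset({"fire", "normal"}): 0.1}
print(dempster_combine(s1, s2))   # the "fire" hypothesis dominates after fusion
```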

23 pages, 12682 KiB  
Article
Sensor Modeling for Underwater Localization Using a Particle Filter
by Humberto Martínez-Barberá, Pablo Bernal-Polo and David Herrero-Pérez
Sensors 2021, 21(4), 1549; https://doi.org/10.3390/s21041549 - 23 Feb 2021
Cited by 15 | Viewed by 4888
Abstract
This paper presents a framework for processing, modeling, and fusing underwater sensor signals to provide a reliable perception for underwater localization in structured environments. Submerged sensory information is often affected by diverse sources of uncertainty that can degrade positioning and tracking. By adopting uncertain modeling and multi-sensor fusion techniques, the framework can maintain a coherent representation of the environment, filtering out outliers, inconsistencies in sequential observations, and information that is useless for positioning purposes. We evaluate the framework using cameras and range sensors for modeling uncertain features that represent the environment around the vehicle. We locate the underwater vehicle using a Sequential Monte Carlo (SMC) method initialized from the GPS location obtained on the surface. The experimental results show that the framework provides the localization system with a reliable environment representation during underwater navigation in real-world scenarios. In addition, they show the improvement in localization compared with position estimation using reliable dead-reckoning systems.
(This article belongs to the Special Issue Intelligent Sensing Systems for Vehicle)
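As a heavily simplified illustration of the Sequential Monte Carlo loop the abstract refers to, the step below localizes a planar vehicle against a single known landmark using a range measurement; the noise levels, the landmark, and the resampling threshold are assumptions, and the paper's camera and range feature models are not reproduced.

```python
# Hypothetical single predict/weight/resample step of a planar SMC localizer.
import numpy as np

def particle_filter_step(particles, weights, control, range_meas, beacon, sigma=0.3):
    # particles: (N, 2) positions; control: (dx, dy) odometry; beacon: (2,) known landmark
    n = len(particles)
    # predict: apply dead-reckoning motion plus process noise
    particles = particles + control + np.random.normal(0.0, 0.1, particles.shape)
    # update: weight each particle by the likelihood of the measured range
    predicted = np.linalg.norm(particles - beacon, axis=1)
    weights = weights * np.exp(-0.5 * ((range_meas - predicted) / sigma) ** 2)
    weights = weights / weights.sum()
    # resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = np.random.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```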

29 pages, 9193 KiB  
Article
The Joint Adaptive Kalman Filter (JAKF) for Vehicle Motion State Estimation
by Siwei Gao, Yanheng Liu, Jian Wang, Weiwen Deng and Heekuck Oh
Sensors 2016, 16(7), 1103; https://doi.org/10.3390/s16071103 - 16 Jul 2016
Cited by 12 | Viewed by 8158
Abstract
This paper proposes a multi-sensory Joint Adaptive Kalman Filter (JAKF) that extends innovation-based adaptive estimation (IAE) to estimate the motion state of the moving vehicles ahead. JAKF treats Lidar and Radar data as the sources of the local filters, which aim to adaptively adjust the measurement noise variance-covariance (V-C) matrix ‘R’ and the system noise V-C matrix ‘Q’. Then, the global filter uses R to calculate the information allocation factor ‘β’ for data fusion. Finally, the global filter completes optimal data fusion and feeds back to the local filters to improve their measurement accuracy. Extensive simulation and experimental results show that the JAKF has better adaptive ability and fault tolerance. JAKF bridges the accuracy differences among the various sensors to improve the overall filtering effectiveness. If any sensor breaks down, the filtered results of JAKF can still maintain a stable convergence rate. Moreover, the JAKF outperforms the conventional Kalman filter (CKF) and the innovation-based adaptive Kalman filter (IAKF) with respect to the accuracy of displacement, velocity, and acceleration.
(This article belongs to the Special Issue Advances in Multi-Sensor Information Fusion: Theory and Applications)
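The core of the local filters described above is innovation-based adaptive estimation: the measurement noise variance R is re-estimated from a window of recent innovations instead of being fixed. The scalar sketch below shows only that idea under a constant-state model with invented defaults; the paper's multi-sensor formulation, global filter, and allocation factor β are not reproduced.

```python
# Hypothetical scalar Kalman filter with innovation-based adaptive estimation of R.
from collections import deque

class AdaptiveKalman1D:
    def __init__(self, q: float = 1e-3, r: float = 1.0, window: int = 20):
        self.x, self.p = 0.0, 1.0          # state estimate and its variance
        self.q, self.r = q, r              # process and measurement noise variances
        self.innovations = deque(maxlen=window)

    def step(self, z: float) -> float:
        self.p += self.q                                    # predict (constant-state model)
        nu = z - self.x                                     # innovation
        self.innovations.append(nu)
        c_nu = sum(v * v for v in self.innovations) / len(self.innovations)
        self.r = max(c_nu - self.p, 1e-6)                   # IAE update: R is about C_nu - P
        k = self.p / (self.p + self.r)                      # Kalman gain
        self.x += k * nu
        self.p *= (1.0 - k)
        return self.x
```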