
Deep Learning for Multi-Sensor Fusion

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: closed (30 September 2019) | Viewed by 38114

Special Issue Editors


Prof. Sylvie Le Hégarat-Mascle
Guest Editor
SATIE Laboratory, University Paris-Saclay, Rue Noezlin, 91405 Orsay CEDEX, France
Interests: uncertain reasoning; pattern recognition; computer vision; machine learning for classification; remote sensing

Dr. Emanuel Aldea
Guest Editor
SATIE Laboratory, University Paris-Saclay, Rue Noezlin, 91405 Orsay CEDEX, France
Interests: deep learning; 3D vision; robotic navigation and localization; multiple camera systems

Dr. François Brémond
Guest Editor
INRIA Stars Team, 2004 Route des Lucioles, BP 93, 06902 Sophia Antipolis Cedex, France
Interests: activity recognition; action localization; people detection and tracking; human behavior analysis

Special Issue Information

Dear Colleagues,

With the advent of deep learning, architectures with billions of parameters, trained on vast collections of data, have become more and more prevalent. Deep architectures are now recognized as outperforming classical approaches, provided that the required training datasets are available or can be synthesized. At the same time, the development of sensor technologies has led to a diversity of information sources now available to robotic systems and inference algorithms. Extensive research efforts have been devoted over the last decades to designing combinations of these different pieces of information, e.g., by modelling source features, prior knowledge, and decisions under uncertain reasoning. Yet, although some works have shown examples where deep architectures succeeded in learning the optimal multi-source combination, important advances are still required to understand how to design such powerful architectures, how to train them on multi-sensor or multi-source data, and how to make machine learning interact with source models and prior knowledge, from the training process to the final decision.

The aim of this Special Issue is to highlight innovative developments addressing the current challenges in processing multi-sensor or multi-source data, particularly the design of resilient architectures. We especially welcome contributions that provide insights into the key mechanisms underlying the good behaviour and robustness of the proposed methods.

Topics include but are not limited to the following:

  • Multi-source based learning with domain-specific prior knowledge constraints;
  • Learning in the presence of imperfect data and/or imprecise ground truth;
  • Hierarchical learning for integrating additional sources effectively;
  • Autonomous navigation based on multi-sensor fusion, with a special focus on robustness to sensor failure;
  • Video and audio modalities for expression and activity recognition, or for behaviour disorder detection;
  • Data fusion for remote sensing and aerial photography;
  • Multimodal biometric systems.
Prof. Sylvie Le Hégarat-Mascle
Dr. Emanuel Aldea
Dr. François Brémond
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Deep neural architectures
  • Machine learning
  • Data fusion
  • Multi-sensor
  • Imperfect data
  • Image modality

Published Papers (5 papers)


Research


23 pages, 8971 KiB  
Article
DeepLocate: Smartphone Based Indoor Localization with a Deep Neural Network Ensemble Classifier
by Imran Ashraf, Soojung Hur, Sangjoon Park and Yongwan Park
Sensors 2020, 20(1), 133; https://doi.org/10.3390/s20010133 - 24 Dec 2019
Cited by 36 | Viewed by 3692
Abstract
The quickly growing area of location-based services has led to increased demand for indoor positioning and localization. Wi-Fi fingerprint-based localization is undoubtedly one of the most promising indoor localization techniques, yet variation in received signal strength is a major obstacle to accurate localization. Magnetic field-based localization has emerged as a new player and proven to be a promising indoor localization technology. However, one of its major limitations is the degradation in localization accuracy when different smartphones are used: localization performance varies across devices even with the same localization technique. This research leverages a deep neural network-based ensemble classifier to perform indoor localization with heterogeneous devices. The chief aim is to devise an approach that achieves similar localization accuracy across various smartphones. Features extracted from the magnetic data of a Galaxy S8 are fed into neural networks (NNs) for training. Experiments are performed with Galaxy S8, LG G6, LG G7, and Galaxy A8 smartphones to investigate the impact of device dependence on localization accuracy. The results demonstrate that NNs can play a significant role in mitigating the impact of device heterogeneity and increasing indoor localization accuracy. The proposed approach achieves a localization accuracy of 2.64 m at the 50th percentile on four different devices. The mean error is 2.23 m, 2.52 m, 2.59 m, and 2.78 m for the Galaxy S8, LG G6, LG G7, and Galaxy A8, respectively. Experiments on a publicly available magnetic dataset collected with a Sony Xperia M2 show a mean error of 2.84 m with a standard deviation of 2.24 m, while the 50th-percentile error is 2.33 m. Furthermore, the impact of various device attitudes on localization accuracy is investigated. Full article
(This article belongs to the Special Issue Deep Learning for Multi-Sensor Fusion)
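As a minimal sketch of the ensemble idea described in the abstract, the snippet below soft-votes the softmax outputs of several small networks trained on magnetic feature vectors. The class layout, layer sizes, and helper names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FingerprintNet(nn.Module):
    """Small MLP mapping a magnetic feature vector to a location-cell class."""
    def __init__(self, n_features: int, n_cells: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_cells),
        )

    def forward(self, x):
        return self.net(x)

def ensemble_predict(models, features):
    """Average the softmax outputs of all ensemble members (soft voting)."""
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(m(features), dim=-1) for m in models]
        )
    # Mean over members, then pick the most likely location cell per sample.
    return probs.mean(dim=0).argmax(dim=-1)
```

Soft voting of this kind is one plausible way an NN ensemble can smooth out device-specific magnetometer biases, since each member errs differently on unseen hardware.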

25 pages, 2294 KiB  
Article
Augmenting Deep Learning Performance in an Evidential Multiple Classifier System
by Jennifer Vandoni, Sylvie Le Hégarat-Mascle and Emanuel Aldea
Sensors 2019, 19(21), 4664; https://doi.org/10.3390/s19214664 - 27 Oct 2019
Viewed by 2462
Abstract
The main objective of this work is to study the applicability of ensemble methods in the context of deep learning with limited amounts of labeled data. We exploit an ensemble of neural networks derived using Monte Carlo dropout, along with an ensemble of SVM classifiers which owes its effectiveness to the hand-crafted features used as inputs and to an active learning procedure. In order to leverage each classifier’s respective strengths, we combine them in an evidential framework, which specifically models their imprecision and uncertainty. The application we consider to illustrate the interest of our Multiple Classifier System is pedestrian detection in high-density crowds, which is ideally suited given its difficulty, the cost of labeling, and the intrinsic imprecision of the annotation data. We show that the fusion resulting from the effective modeling of uncertainty allows for performance improvement and, at the same time, for a deeper interpretation of the result in terms of the commitment of the decision. Full article
(This article belongs to the Special Issue Deep Learning for Multi-Sensor Fusion)
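The abstract's first ingredient, an ensemble derived from a single network via Monte Carlo dropout, can be sketched as below: dropout stays active at inference time, and each stochastic forward pass acts as one ensemble member. The model, dropout rate, and pass count are assumptions for illustration, not the authors' exact setup.

```python
import torch
import torch.nn as nn

# Placeholder detector head: 128-dim feature in, 2 classes out
# (e.g., pedestrian vs. background).
model = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(256, 2),
)

def mc_dropout_predict(model, x, n_passes: int = 20):
    model.train()  # keep dropout layers stochastic at inference time
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_passes)]
        )
    # The mean gives the prediction; the variance is a simple uncertainty
    # proxy that an evidential combination rule can then exploit.
    return probs.mean(dim=0), probs.var(dim=0)
```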

13 pages, 2977 KiB  
Article
Fusion of Video and Inertial Sensing for Deep Learning–Based Human Action Recognition
by Haoran Wei, Roozbeh Jafari and Nasser Kehtarnavaz
Sensors 2019, 19(17), 3680; https://doi.org/10.3390/s19173680 - 24 Aug 2019
Cited by 50 | Viewed by 5107
Abstract
This paper presents the simultaneous utilization of video images and inertial signals, captured at the same time by a video camera and a wearable inertial sensor, within a fusion framework in order to achieve more robust human action recognition than when each sensing modality is used individually. The data captured by these sensors are turned into 3D video images and 2D inertial images, which are then fed into a 3D convolutional neural network and a 2D convolutional neural network, respectively, for recognizing actions. Two types of fusion are considered: decision-level fusion and feature-level fusion. Experiments are conducted using the publicly available UTD-MHAD dataset, in which simultaneous video images and inertial signals are captured for a total of 27 actions. The results indicate that both the decision-level and feature-level fusion approaches generate higher recognition accuracies than when each sensing modality is used individually. The highest accuracy, 95.6%, is obtained with the decision-level fusion approach. Full article
(This article belongs to the Special Issue Deep Learning for Multi-Sensor Fusion)
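A hedged sketch of the decision-level fusion strategy compared in the paper is shown below, with placeholder backbones (a 3D CNN for video clips, a 2D CNN for inertial images) and the 27-class output taken from the abstract; the real architectures and weights are the authors' own.

```python
import torch
import torch.nn as nn

# Tiny stand-in backbones; real ones would be much deeper.
video_net = nn.Sequential(
    nn.Conv3d(3, 8, 3), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
    nn.Flatten(), nn.Linear(8, 27),
)
inertial_net = nn.Sequential(
    nn.Conv2d(1, 8, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, 27),
)

def decision_level_fusion(video_clip, inertial_image):
    """Average the two classifiers' class probabilities (late fusion)."""
    scores = (torch.softmax(video_net(video_clip), dim=-1)
              + torch.softmax(inertial_net(inertial_image), dim=-1)) / 2
    return scores.argmax(dim=-1)

# Feature-level fusion would instead concatenate intermediate features
# from both streams and feed them to a single shared classification head.
```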

37 pages, 7838 KiB  
Article
A Novel Deep Learning Method for Intelligent Fault Diagnosis of Rotating Machinery Based on Improved CNN-SVM and Multichannel Data Fusion
by Wenfeng Gong, Hui Chen, Zehui Zhang, Meiling Zhang, Ruihan Wang, Cong Guan and Qin Wang
Sensors 2019, 19(7), 1693; https://doi.org/10.3390/s19071693 - 9 Apr 2019
Cited by 169 | Viewed by 10143
Abstract
Intelligent fault diagnosis methods based on deep learning have become a research hotspot in the fault diagnosis field. Automatically and accurately identifying incipient micro-faults in rotating machinery, especially their orientation and severity, is still a major challenge in the field of intelligent fault diagnosis. Traditional fault diagnosis methods rely on manual feature extraction by engineers with prior knowledge. To effectively identify incipient faults in rotating machinery, this paper proposes a novel method, namely an improved convolutional neural network-support vector machine (CNN-SVM) method. This method improves the traditional convolutional neural network (CNN) model structure by introducing global average pooling and an SVM. Firstly, the temporal and spatial multichannel raw data from multiple sensors are directly input into the improved CNN-Softmax model to train the CNN. Secondly, the improved CNN is used to extract representative features from the raw fault data. Finally, the extracted sparse representative feature vectors are input into the SVM for fault classification. The proposed method is applied to the diagnosis of multichannel vibration signal monitoring data from a rolling bearing. The results confirm that the proposed method is more effective than other existing intelligent diagnosis methods, including SVM, K-nearest neighbor, back-propagation neural network, deep BP neural network, and traditional CNN. Full article
(This article belongs to the Special Issue Deep Learning for Multi-Sensor Fusion)
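The pipeline described above (a CNN with global average pooling trained with a softmax head, whose pooled features then feed an SVM) can be sketched as follows. Layer sizes, kernel widths, and variable names are assumptions for illustration only.

```python
import torch
import torch.nn as nn
from sklearn.svm import SVC

class FaultCNN(nn.Module):
    """1D CNN over multichannel vibration signals with global average pooling."""
    def __init__(self, n_channels: int, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # global average pooling
            nn.Flatten(),
        )
        self.classifier = nn.Linear(32, n_classes)  # softmax head for training

    def forward(self, x):
        return self.classifier(self.features(x))

# After training the CNN end-to-end with cross-entropy, the softmax head is
# replaced by an SVM fitted on the pooled feature vectors, e.g.:
#   feats = model.features(signals).detach().numpy()
#   svm = SVC(kernel="rbf").fit(feats, labels)
```

Swapping the softmax head for an SVM at the final stage is the hybrid step the abstract credits for better separation of fault classes.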

Review


35 pages, 2582 KiB  
Review
Deep Learning on Multi Sensor Data for Counter UAV Applications—A Systematic Review
by Stamatios Samaras, Eleni Diamantidou, Dimitrios Ataloglou, Nikos Sakellariou, Anastasios Vafeiadis, Vasilis Magoulianitis, Antonios Lalas, Anastasios Dimou, Dimitrios Zarpalas, Konstantinos Votis, Petros Daras and Dimitrios Tzovaras
Sensors 2019, 19(22), 4837; https://doi.org/10.3390/s19224837 - 6 Nov 2019
Cited by 120 | Viewed by 15236
Abstract
Usage of Unmanned Aerial Vehicles (UAVs) is growing rapidly in a wide range of consumer applications, as they prove to be both autonomous and flexible in a variety of environments and tasks. However, this versatility and ease of use also bring a rapid evolution of threats from malicious actors who can use UAVs for criminal activities, turning them into passive or active threats. The need to protect critical infrastructures and important events from such threats has driven advances in counter-UAV (c-UAV) applications. Nowadays, c-UAV applications offer systems comprising a multi-sensor arsenal, often including electro-optical, thermal, acoustic, radar, and radio-frequency sensors, whose information can be fused to increase the confidence of threat identification. Real-time surveillance is a cumbersome process, but it is absolutely essential for promptly detecting adverse events or conditions. To that end, many challenging tasks arise, such as object detection, classification, multi-object tracking, and multi-sensor information fusion. In recent years, researchers have applied deep learning-based methodologies to these tasks for generic objects and made noteworthy progress, yet applying deep learning to UAV detection and classification is still a novel concept. Therefore, the need has emerged for a complete overview of deep learning technologies applied to c-UAV-related tasks on multi-sensor data. The aim of this paper is to describe deep learning advances on c-UAV-related tasks when applied to data originating from many different sensors, as well as multi-sensor information fusion. This survey may help in making recommendations and improvements to c-UAV applications in the future. Full article
(This article belongs to the Special Issue Deep Learning for Multi-Sensor Fusion)
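As a toy illustration of the confidence-level fusion the survey covers, the sketch below combines per-sensor drone-detection confidences with a weighted average; the sensor names and weights are illustrative only, and real c-UAV systems use far richer fusion rules.

```python
def fuse_confidences(confidences: dict[str, float],
                     weights: dict[str, float]) -> float:
    """Weighted average of per-sensor detection confidences in [0, 1]."""
    total = sum(weights[s] for s in confidences)
    return sum(weights[s] * c for s, c in confidences.items()) / total

# Example: three modalities voting on whether a UAV is present.
print(fuse_confidences(
    {"radar": 0.7, "thermal": 0.9, "acoustic": 0.6},
    {"radar": 0.5, "thermal": 0.3, "acoustic": 0.2},
))
```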
