Special Issue "Information Fusion and Machine Learning for Sensors"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 30 September 2020.

Special Issue Editors

Prof. Dr. Jose Manuel Molina López
Guest Editor
Computer Science and Engineering Department, Universidad Carlos III de Madrid, Edificio Sabatini, 28911 Leganés, Spain
Interests: data fusion; machine learning; Internet of Things (IoT); ambient intelligence; AAL; privacy
Dr. Miguel Angel Patricio
Guest Editor
Computer Science Department, Universidad Carlos III de Madrid, Avenida Gregorio Peces-Barba Martínez, 22, 28270 Colmenarejo, Madrid, Spain
Interests: machine learning; visual processing; deep learning; IoT

Special Issue Information

Dear Colleagues,

In today’s digital world, information is the key factor in making decisions. Ubiquitous electronic sources, such as sensors and video, provide a steady stream of data, while text-based data from databases, the Internet, email, chat, VOIP (Voice over Internet Protocol), and social media are growing exponentially. The ability to make sense of data by fusing them into new knowledge would provide clear advantages in making decisions.

Fusion systems aim to integrate sensor data and information in databases, knowledge bases, contextual information, etc., in order to describe situations. In a sense, the goal of information fusion is to attain a global view of a scenario in order to make the best decision.

One of the main goals of future research in data fusion (DF) is the application of machine learning (ML) techniques on this fused information to extract knowledge. How to apply ML in these large data sets and what techniques could be applied depending on the data stored are the main points of this Special Issue.

The key aspect in modern DF applications is the appropriate integration of all types of information or knowledge—observational data, knowledge models (a priori or inductively learned), and contextual information. Each of these categories has a distinctive nature and potential support for the result of the fusion process:

Observational Data: Observational data are the fundamental data about a dynamic scenario, as collected from some observational capability (sensors of any type). These data are about the observable entities in the world that are of interest;

Contextual Information: Contextual information has become fundamental to developing models in complex scenarios. It can be defined as “the set of circumstances surrounding a task that are potentially of relevance to its completion”. Because of this task relevance, fusing, or estimating/inferring the task state, involves developing the best possible estimate while taking this lateral knowledge into account.

Learned Knowledge: DF systems combine multisource data to provide inferences, exploiting models of the expected behaviors of entities (physical models, such as kinematics, or logical models, such as context-dependent expected behaviors). In those cases where a priori knowledge for DF process development cannot be formed, one possibility is to try to elicit knowledge through online machine learning processes operating on observational and other data. These are procedural and algorithmic methods for discovering the relationships among, and the behaviors of, the entities of interest.
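As a minimal illustration of combining an observation with lateral (contextual) knowledge of the same quantity, two independent Gaussian estimates can be fused by inverse-variance weighting; all numbers below are invented for the example:

```python
def fuse_gaussian(mu1, var1, mu2, var2):
    """Fuse two independent Gaussian estimates of the same quantity.

    The fused mean is the inverse-variance weighted average; the fused
    variance is never larger than either input variance, reflecting the
    information gained by fusion.
    """
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    var = 1.0 / (w1 + w2)
    mu = var * (w1 * mu1 + w2 * mu2)
    return mu, var

# A sensor observes a target position of 10.0 m (variance 4.0), while
# contextual knowledge (e.g., a road map) suggests 12.0 m (variance 1.0).
mu, var = fuse_gaussian(10.0, 4.0, 12.0, 1.0)
```

The fused estimate is pulled toward the lower-variance source, and its variance is smaller than either input, which is the basic payoff of any fusion step.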

This Special Issue invites contributions on the following topics (but is not limited to them):

Data fusion of distributed sensors
Context definition and management
Machine learning techniques
Complexity reduction in data sets
Recommendation systems
Integration of AI techniques
Reasoning systems in data fusion environments
Integration of data fusion techniques
Ambient intelligence
Data fusion on autonomous systems
Virtual and augmented reality
Human-computer interaction
Visual pattern recognition
Environment modeling and reconstruction from images
Surveillance systems
Visual systems 
Data fusion and ML in UAVs
Big data analytics platforms and tools for data fusion and analytics
Cloud computing technologies and their use for big data, data fusion, and data analytics

Prof. Dr. Jose Manuel Molina López
Dr. Miguel Angel Patricio
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • data fusion
  • data analytics

Published Papers (4 papers)


Research

Jump to: Other

Open Access Article
Architecture for Trajectory-Based Fishing Ship Classification with AIS Data
Sensors 2020, 20(13), 3782; https://doi.org/10.3390/s20133782 - 06 Jul 2020
Abstract
This paper proposes a data preparation process for managing real-world kinematic data and detecting fishing vessels. The solution is a binary classification that classifies ship trajectories into either fishing or non-fishing ships. The data used are characterized by the typical problems found in classic data mining applications using real-world data, such as noise and inconsistencies. The two classes are also clearly unbalanced in the data, a problem which is addressed using algorithms that resample the instances. For classification, a series of features are extracted from spatiotemporal data that represent the trajectories of the ships, available from sequences of Automatic Identification System (AIS) reports. These features are proposed for the modelling of ship behavior but, because they do not contain context-related information, the classification can be applied in other scenarios. Experimentation shows that the proposed data preparation process is useful for the presented classification problem. In addition, positive results are obtained using minimal information.
(This article belongs to the Special Issue Information Fusion and Machine Learning for Sensors)
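Extracting kinematic features from a sequence of position reports can be sketched in a few lines; the feature set below (mean speed, speed spread, mean course change) is an illustrative stand-in, not the paper's actual feature definitions:

```python
import math

def trajectory_features(points):
    """Extract simple kinematic features from a trajectory.

    `points` is a list of (timestamp, speed_knots, course_deg) tuples,
    a simplified stand-in for decoded AIS position reports.
    """
    speeds = [p[1] for p in points]
    mean_speed = sum(speeds) / len(speeds)
    var_speed = sum((s - mean_speed) ** 2 for s in speeds) / len(speeds)
    # Mean absolute course change between consecutive reports,
    # wrapped so a 350° -> 10° change counts as 20°, not 340°.
    turns = []
    for (_, _, c0), (_, _, c1) in zip(points, points[1:]):
        d = abs(c1 - c0) % 360.0
        turns.append(min(d, 360.0 - d))
    mean_turn = sum(turns) / len(turns) if turns else 0.0
    return {"mean_speed": mean_speed,
            "speed_std": math.sqrt(var_speed),
            "mean_turn": mean_turn}

# A slow vessel changing course frequently, as fishing activity often looks:
feats = trajectory_features([(0, 4.0, 10.0), (60, 4.0, 350.0), (120, 4.0, 30.0)])
```

Context-free features of this kind are what allow the learned classifier to transfer to other scenarios, as the abstract notes.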

Open Access Article
A Novel Fault Diagnosis Approach for Chillers Based on 1-D Convolutional Neural Network and Gated Recurrent Unit
Sensors 2020, 20(9), 2458; https://doi.org/10.3390/s20092458 - 26 Apr 2020
Cited by 2
Abstract
The safety of an Internet Data Center (IDC) is directly determined by the reliability and stability of its chiller system. Thus, combined with deep learning technology, an innovative hybrid fault diagnosis approach (1D-CNN_GRU) based on the time-series sequences is proposed in this study for the chiller system using 1-Dimensional Convolutional Neural Network (1D-CNN) and Gated Recurrent Unit (GRU). Firstly, 1D-CNN is applied to automatically extract the local abstract features of the sensor sequence data. Secondly, GRU with long and short term memory characteristics is applied to capture the global features, as well as the dynamic information of the sequence. Moreover, batch normalization and dropout are introduced to accelerate network training and address the overfitting issue. The effectiveness and reliability of the proposed hybrid algorithm are assessed on the RP-1043 dataset; based on the experimental results, 1D-CNN_GRU displays the best performance compared with the other state-of-the-art algorithms. Further, the experimental results reveal that 1D-CNN_GRU has a superior identification rate for minor faults.
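The local feature extraction performed by the 1D-CNN stage can be illustrated with a minimal valid-mode 1-D convolution over a sensor sequence; this is a from-scratch sketch of the operation, not the paper's network:

```python
def conv1d(signal, kernel, stride=1):
    """Valid-mode 1-D convolution (strictly, cross-correlation, as in most
    deep learning frameworks): slides the kernel over the signal and
    returns the local weighted sums, i.e., the local abstract features."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(0, len(signal) - k + 1, stride)]

# A difference kernel responds where consecutive sensor readings jump,
# the kind of local pattern a learned 1D-CNN filter could pick up.
readings = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]
features = conv1d(readings, [-1.0, 1.0])
```

In the full model, many such learned filters feed their feature maps into the GRU, which then aggregates the dynamics across the whole sequence.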

Open Access Article
Prediction of I–V Characteristic Curve for Photovoltaic Modules Based on Convolutional Neural Network
Sensors 2020, 20(7), 2119; https://doi.org/10.3390/s20072119 - 09 Apr 2020
Abstract
Photovoltaic (PV) modules are exposed to the outside, which is affected by radiation, the temperature of the PV module back-surface, relative humidity, atmospheric pressure and other factors, which makes it difficult to test and analyze the performance of photovoltaic modules. Traditionally, the equivalent circuit method is used to analyze the performance of PV modules, but there are large errors. In this paper—based on machine learning methods and large amounts of photovoltaic test data—convolutional neural network (CNN) and multilayer perceptron (MLP) neural network models are established to predict the I–V curve of photovoltaic modules. Furthermore, the accuracy and the fitting degree of these methods for current–voltage (I–V) curve prediction are compared in detail. The results show that the prediction accuracy of the CNN and MLP neural network model is significantly better than that of the traditional equivalent circuit models. Compared with MLP models, the CNN model has better accuracy and fitting degree. In addition, the error distribution concentration of CNN has better robustness and the pre-test curve is smoother and has better nonlinear segment fitting effects. Thus, the CNN is superior to MLP model and the traditional equivalent circuit model in complex climate conditions. CNN is a high-confidence method to predict the performance of PV modules.
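The traditional equivalent circuit baseline the abstract refers to is typically the single-diode model, in which the module current appears on both sides of the equation. A hedged sketch follows; the parameter values are invented for illustration and are not fitted to any module from the paper:

```python
import math

def diode_current(v, i_ph=8.0, i_0=1e-9, r_s=0.3, r_sh=300.0,
                  n=1.3, cells=60, v_t=0.02585, iters=200):
    """Single-diode equivalent circuit model:

        I = Iph - I0 * (exp((V + I*Rs) / (n*Ns*Vt)) - 1) - (V + I*Rs) / Rsh

    Since I appears on both sides, the equation is solved by a damped
    fixed-point iteration. Parameter values are illustrative only.
    """
    nvt = n * cells * v_t          # thermal voltage of the whole string
    i = i_ph                       # start from the photocurrent
    for _ in range(iters):
        vd = v + i * r_s           # voltage across the diode branch
        i_new = i_ph - i_0 * math.expm1(vd / nvt) - vd / r_sh
        i = 0.5 * i + 0.5 * i_new  # damping keeps the iteration stable
    return i
```

Sweeping `v` from 0 toward open circuit traces the familiar I–V curve: near-constant current at low voltage, then a sharp nonlinear drop — the segment where the abstract reports the CNN fits better than this analytic baseline.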

Other

Jump to: Research

Open Access Letter
Application-Oriented Retinal Image Models for Computer Vision
Sensors 2020, 20(13), 3746; https://doi.org/10.3390/s20133746 - 04 Jul 2020
Abstract
Energy and storage restrictions are relevant variables that software applications should be concerned about when running in low-power environments. In particular, computer vision (CV) applications exemplify well that concern, since conventional uniform image sensors typically capture large amounts of data to be further handled by the appropriate CV algorithms. Moreover, much of the acquired data are often redundant and outside of the application’s interest, which leads to unnecessary processing and energy spending. In the literature, techniques for sensing and re-sampling images in non-uniform fashions have emerged to cope with these problems. In this study, we propose Application-Oriented Retinal Image Models that define a space-variant configuration of uniform images and contemplate requirements of energy consumption and storage footprints for CV applications. We hypothesize that our models might decrease energy consumption in CV tasks. Moreover, we show how to create the models and validate their use in a face detection/recognition application, evidencing the compromise between storage, energy, and accuracy.
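The space-variant, retina-like sampling idea can be sketched as a log-polar sample layout whose density falls off with eccentricity, mimicking the foveal/peripheral structure of the retina; the ring counts and spacing below are arbitrary illustrative choices, not the paper's model parameters:

```python
import math

def retinal_samples(rings=4, points_per_ring=8, r0=1.0, growth=2.0):
    """Log-polar, retina-like sample layout.

    Ring radii grow geometrically, so samples are densest near the
    fovea (centre) and sparsest in the periphery. Returns a list of
    (x, y) sample coordinates, including one central foveal sample.
    """
    samples = [(0.0, 0.0)]  # foveal sample
    for k in range(rings):
        r = r0 * growth ** k
        for j in range(points_per_ring):
            a = 2.0 * math.pi * j / points_per_ring
            samples.append((r * math.cos(a), r * math.sin(a)))
    return samples

samples = retinal_samples()  # 1 + 4*8 = 33 samples covering radius 8
```

Covering the same field of view with a uniform grid at foveal resolution would take far more samples, which is the storage/energy saving the letter argues for, traded against accuracy in the periphery.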
