Computational Intelligence-Based Sensors

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (20 February 2019) | Viewed by 37717

Special Issue Editors


Guest Editor
Computer Science Department, Universidad de Oviedo, E.P.I. Gijón. Sedes Departamentales 1.1.28. 33202 Gijón, Spain
Interests: intelligent data analysis; learning under uncertainty; computational intelligence; fuzzy sets; mathematical models; signal processing; dimensional metrology; industrial applications (ecoefficiency, rechargeable batteries, clean energy)

Guest Editor
Department of Electrical Engineering, University of Oviedo, 33204 Gijón, Spain
Interests: lithium-ion battery testing and characterization; lithium-ion battery degradation mechanisms via non-invasive methods; incremental capacity and peak area analyses; mechanistic battery modeling; battery lithium plating; battery fast charging; battery diagnosis and prognosis; battery state of charge and state of health determination methods

Special Issue Information

Dear Colleagues,

Frequently, smart sensors operate in a well-defined hierarchical structure. Sensor intelligence in the lower layer improves selectivity, suppressing the influence of undesired variables. Intelligence in the middle layer generates intermediate outputs by combining the outputs of the lower layer. The intermediate outputs are sent to the upper-layer intelligence, which recognizes the situation. The lower layer is usually implemented on low-capability microcontrollers, often with energy constraints, and hence computationally heavy tasks usually take place in the higher layers. There is a trade-off between the amount of information that is passed to the higher layers and the amount of processing at the lowest levels. Cooperative processing, data-driven feature extraction, and model-based and indirect measurements, among other techniques, are leveraged to balance computational power, energy consumption and information flow at the lowest layer. Data fusion, parameter tuning and model-based synthesis of variables are performed at the middle layer. Intelligent data analysis and certain data-driven decision systems are deployed at the higher layer. In this respect, Computational Intelligence (CI) and Soft Computing-based sensors build on fuzzy logic, artificial neural networks, evolutionary computing, learning theory and probabilistic methods to solve these tasks at each level of the architecture. The application of CI to sensor systems is a hot topic, as shown by the following (non-exhaustive) list of problems, which comprises different applications of CI to sensor systems reported during the first half of 2018:

  • Computational efficiency (power control)
  • Cooperative processing (swarm intelligence, fog computing, etc.) in sensor networks
  • Cyber Physical Systems (CPS) and Internet of Things
  • Data analytics and cloud computing
  • Data-driven feature extraction (time series, 2D and 3D imaging)
  • Human activity recognition (wearable sensors)
  • Hyper-parameter learning (learning and tuning of sensor parameters, automatic calibration)
  • Indirect measurements (soft sensors)
  • Information fusion in sensor networks
  • Management of uncertain, incomplete and/or noisy data
  • Privacy-preserving data aggregation
  • Sensitivity and robustness analysis
  • Social sensing (humans as “sensors” to report observations about the physical world)
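
The division of labour across the three layers described above can be sketched as a minimal pipeline; the smoothing window, fusion weights and alarm threshold below are illustrative assumptions, not part of any specific system:

```python
def lower_layer(raw, window=3):
    """Low-cost processing on the microcontroller: a moving average
    suppresses undesired high-frequency disturbances (selectivity)."""
    out = []
    for i in range(len(raw)):
        chunk = raw[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def middle_layer(channels, weights):
    """Data fusion: combine the smoothed outputs of several lower-layer
    sensors into one intermediate variable (weighted average)."""
    return [sum(w * c[i] for w, c in zip(weights, channels))
            for i in range(len(channels[0]))]

def upper_layer(fused, alarm_level=0.8):
    """Situation recognition: a data-driven decision rule applied to
    the intermediate output (here, a simple threshold)."""
    return ["alarm" if x > alarm_level else "normal" for x in fused]

# Two noisy sensor channels observing the same phenomenon.
s1 = [0.1, 0.2, 0.9, 1.0, 1.0]
s2 = [0.0, 0.3, 1.0, 0.9, 1.1]
fused = middle_layer([lower_layer(s1), lower_layer(s2)], weights=[0.5, 0.5])
states = upper_layer(fused)
print(states)
```

The point of the sketch is the information bottleneck: only the fused intermediate variable, not the raw samples, crosses each layer boundary.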

The Special Issue will publish original research, reviews and applications in the field of Computational Intelligence techniques (fuzzy logic, artificial neural networks, evolutionary computing, learning theory and probabilistic methods) applied to sensor systems.

Prof. Dr. Luciano Sánchez
Dr. David Anseán
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Computational Intelligence
  • Computational efficiency and energy management
  • Cyber Physical Systems
  • Data aggregation and information fusion
  • Human activity recognition
  • Internet of Things
  • Uncertain, missing and noisy data
  • Social sensing
  • Soft sensors
  • Soft Computing

Published Papers (9 papers)


Research

25 pages, 1418 KiB  
Article
Virtual Sensors for Optimal Integration of Human Activity Data
by Antonio A. Aguileta, Ramon F. Brena, Oscar Mayora, Erik Molino-Minero-Re and Luis A. Trejo
Sensors 2019, 19(9), 2017; https://doi.org/10.3390/s19092017 - 29 Apr 2019
Cited by 10 | Viewed by 3588
Abstract
Sensors are becoming more and more ubiquitous as their price and availability continue to improve, and they are the source of information for many important tasks. However, the use of sensors has to deal with noise and failures. The lack of reliability in sensors has led to many forms of redundancy, but simple solutions are not always the best, and the precise way in which several sensors are combined has a big impact on the overall result. In this paper, we discuss how to deal with the combination of information coming from different sensors, thus acting as “virtual sensors”, in the context of human activity recognition, in a systematic way, aiming for optimality. To achieve this goal, we construct meta-datasets containing the “signatures” of individual datasets, and apply machine-learning methods in order to distinguish when each possible combination method is actually the best. We present specific results based on experimentation, supporting our claims of optimality.
(This article belongs to the Special Issue Computational Intelligence-Based Sensors)
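
The meta-dataset idea in this abstract can be illustrated with a toy sketch; the signature features, fusion-method labels and all numbers below are hypothetical, and the paper's machine-learning models are richer than this 1-nearest-neighbour lookup:

```python
# Each row is a dataset "signature" (here: sensor count, noise level,
# class balance) labelled with the fusion method that worked best on it.
META = [
    ((2, 0.10, 0.50), "voting"),
    ((2, 0.40, 0.50), "stacking"),
    ((6, 0.05, 0.30), "voting"),
    ((6, 0.45, 0.30), "stacking"),
]

def best_fusion_method(signature):
    """1-nearest-neighbour lookup in the meta-dataset: recommend the
    fusion method of the most similar previously seen dataset."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(META, key=lambda row: dist(row[0], signature))[1]

# A new, fairly noisy two-sensor dataset: stacking is recommended.
print(best_fusion_method((2, 0.38, 0.48)))
```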

18 pages, 2318 KiB  
Article
An AutomationML Based Ontology for Sensor Fusion in Industrial Plants
by Eder Mateus Nunes Gonçalves, Alvaro Freitas and Silvia Botelho
Sensors 2019, 19(6), 1311; https://doi.org/10.3390/s19061311 - 15 Mar 2019
Cited by 8 | Viewed by 4101
Abstract
AutomationML (AML) can be seen as a partial knowledge-based solution for the manufacturing and automation domains, since it permits integrating different engineering data formats and also contains information about the physical and logical structures of production systems, using basic concepts such as resources, processes, and products in semantic structures. However, it is not a complete knowledge-based solution because it lacks mechanisms for querying and reasoning, which are basic functions for semantic inference. Additionally, AutomationML does not naturally deal with aspects of sensor fusion. In this sense, we propose an ontology to describe those sensor-fusion elements, including procedures for runtime processing, and also elements that can turn AutomationML into a complete knowledge-based solution. The approach was applied in a case study with two different industrial processes with several sensors under fusion. The results obtained demonstrate that the ontology allows describing sensors that are under fusion and dealing with the occurrence of data divergence. In a broader view, the results show how to apply an AutomationML description for runtime processing of data generated from different sensors of a manufacturing system, using an ontology to complement the AML description: AutomationML concentrates knowledge about a specific production system, while the ontology describes general and reusable knowledge about sensor fusion.
(This article belongs to the Special Issue Computational Intelligence-Based Sensors)
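
A toy illustration of the gap the ontology fills: a plain triple store with pattern queries and a transitive-closure reasoner, the two facilities the abstract notes are missing from AutomationML itself. All identifiers are hypothetical:

```python
# Asserted facts about a (made-up) plant, as subject-predicate-object triples.
TRIPLES = {
    ("TempSensor1", "partOf", "ReactorUnit"),
    ("ReactorUnit", "partOf", "PlantA"),
    ("TempSensor1", "fusedWith", "TempSensor2"),
}

def query(subject=None, predicate=None, obj=None):
    """Pattern query over the triple store; None matches anything."""
    return {t for t in TRIPLES
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)}

def infer_part_of():
    """Forward-chain the transitive closure of partOf (a reasoning step
    a plain AML description cannot perform on its own)."""
    closure = set(TRIPLES)
    changed = True
    while changed:
        changed = False
        for (a, p1, b) in list(closure):
            for (b2, p2, c) in list(closure):
                if p1 == p2 == "partOf" and b == b2:
                    if (a, "partOf", c) not in closure:
                        closure.add((a, "partOf", c))
                        changed = True
    return closure

# The sensor is inferred to be part of PlantA, although no such
# triple was asserted explicitly.
inferred = infer_part_of()
print(("TempSensor1", "partOf", "PlantA") in inferred)
```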

16 pages, 4282 KiB  
Article
Adaptive Image Rendering Using a Nonlinear Mapping-Function-Based Retinex Model
by JongGeun Oh and Min-Cheol Hong
Sensors 2019, 19(4), 969; https://doi.org/10.3390/s19040969 - 25 Feb 2019
Cited by 7 | Viewed by 2934
Abstract
This paper introduces an adaptive image-rendering method using a parametric nonlinear mapping function based on the retinex model for low-light sources. In this study, only the luminance channel was used to estimate the reflectance component of an observed low-light image, so that halo artifacts arising from the use of multiple center/surround Gaussian filters were reduced. A new nonlinear mapping function that incorporates the statistics of the luminance and the estimated reflectance in the reconstruction process is proposed. In addition, a new method to determine the gain and offset of the mapping function is addressed to adaptively control the contrast ratio. Finally, the relationship between the estimated luminance and the reconstructed luminance is used to reconstruct the chrominance channels. The experimental results demonstrate that the proposed method leads to promising subjective and objective improvements over state-of-the-art, scale-based retinex methods.
(This article belongs to the Special Issue Computational Intelligence-Based Sensors)
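
The reflectance-from-luminance step can be illustrated with a much-simplified single-scale sketch; the box blur stands in for a centre/surround Gaussian, and the fixed linear gain/offset stands in for the paper's adaptive nonlinear mapping:

```python
import numpy as np

def box_blur(lum, k=3):
    """Crude surround estimate: k x k mean filter (a stand-in for the
    centre/surround Gaussian of classic retinex)."""
    padded = np.pad(lum, k // 2, mode="edge")
    out = np.empty_like(lum, dtype=float)
    h, w = lum.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def retinex_luminance(lum, gain=1.0, offset=0.0, eps=1e-6):
    """Estimate reflectance as log(luminance) - log(surround), then map
    it back through a gain/offset (the paper learns a nonlinear,
    adaptive mapping; a fixed linear one is used here for brevity)."""
    reflect = np.log(lum + eps) - np.log(box_blur(lum) + eps)
    return gain * reflect + offset

# A dark image with one brighter patch: reflectance is largest where
# the pixel stands out from its surround.
img = np.full((5, 5), 0.05)
img[2, 2] = 0.5
r = retinex_luminance(img)
print(r[2, 2] > r[0, 0])
```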

28 pages, 6024 KiB  
Article
Estimating Visibility of Annotations for View Management in Spatial Augmented Reality Based on Machine-Learning Techniques
by Keita Ichihashi and Kaori Fujinami
Sensors 2019, 19(4), 939; https://doi.org/10.3390/s19040939 - 22 Feb 2019
Cited by 9 | Viewed by 4213
Abstract
Augmented Reality (AR) is a class of “mediated reality” that artificially modifies human perception by superimposing virtual objects on the real world, and is expected to supplement reality. In visual-based augmentation, text and graphics, i.e., labels, are often associated with a physical object or a place to describe it. View management in AR is the task of maintaining the visibility of the associated information, and it plays an important role in communicating that information. Various view management techniques have been investigated so far; however, most of them have been designed for two-dimensional see-through displays, and few have been investigated for projector-based AR, called spatial AR. In this article, we propose a view management method for spatial AR, VisLP, that places labels and linkage lines based on an estimation of their visibility. Since the information is directly projected on objects, the nature of optics, such as reflection and refraction, constrains the visibility in addition to the spatial relationships between the information, the objects, and the user. VisLP employs machine-learning techniques to estimate visibility classes that reflect the user's subjective mental workload in reading information and objective measures of reading correctness under various projection conditions. Four classes are defined for a label, while the visibility of a linkage line has three classes. After 88 and 28 classification features were designed for the label and linkage-line visibility estimators, respectively, subsets of 15 and 14 features were chosen, improving the processing speed of feature calculation by up to 170% with only slight degradation of classification performance. An online experiment with new users and objects showed that 76.0% of the system's label-visibility judgments matched the users' evaluations, while 73% of the linkage-line visibility estimations matched.
(This article belongs to the Special Issue Computational Intelligence-Based Sensors)
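
The feature-subset selection step can be illustrated on synthetic data; the separation score, nearest-centroid classifier and dimensions below are simplifications of the paper's 88-feature estimators:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for the visibility data: 2 informative features
# plus 8 costly, uninformative ones.
n = 200
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 10))
X[:, 0] += 2.0 * y          # informative
X[:, 1] -= 2.0 * y          # informative

def centroid_accuracy(X, y):
    """Nearest-centroid classifier, evaluated on the training set
    (enough for a qualitative comparison)."""
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    pred = (np.linalg.norm(X - c1, axis=1)
            < np.linalg.norm(X - c0, axis=1)).astype(int)
    return (pred == y).mean()

# Rank features by between-class mean separation and keep the top 2:
# cheaper to compute, with little (here, no) loss of accuracy.
score = np.abs(X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0))
keep = np.argsort(score)[-2:]
print(sorted(keep), centroid_accuracy(X[:, keep], y), centroid_accuracy(X, y))
```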

28 pages, 14357 KiB  
Article
A Hierarchical Deep Fusion Framework for Egocentric Activity Recognition using a Wearable Hybrid Sensor System
by Haibin Yu, Guoxiong Pan, Mian Pan, Chong Li, Wenyan Jia, Li Zhang and Mingui Sun
Sensors 2019, 19(3), 546; https://doi.org/10.3390/s19030546 - 28 Jan 2019
Cited by 14 | Viewed by 4119
Abstract
Recently, egocentric activity recognition has attracted considerable attention in the pattern recognition and artificial intelligence communities because of its wide applicability in medical care, smart homes, and security monitoring. In this study, we developed and implemented a deep-learning-based hierarchical fusion framework for the recognition of egocentric activities of daily living (ADLs) in a wearable hybrid sensor system comprising motion sensors and cameras. Long short-term memory (LSTM) and a convolutional neural network are used to perform egocentric ADL recognition based on motion sensor data and photo streaming in different layers, respectively. The motion sensor data are used solely for activity classification according to motion state, while the photo stream is used for further specific activity recognition in the motion state groups. Thus, both motion sensor data and photo stream work in their most suitable classification mode to significantly reduce the negative influence of sensor differences on the fusion results. Experimental results show that the proposed method not only is more accurate than the existing direct fusion method (by up to 6%) but also avoids the time-consuming computation of optical flow in the existing method, which makes the proposed algorithm less complex and more suitable for practical application.
(This article belongs to the Special Issue Computational Intelligence-Based Sensors)
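
The two-stage routing described here can be sketched in a few lines; the threshold, photo tags and activity groups are illustrative stand-ins for the paper's LSTM and CNN models:

```python
def motion_state(accel_energy):
    """Stage 1 (stand-in for the LSTM on motion data): coarse motion
    state from the accelerometer signal energy (threshold is made up)."""
    return "ambulatory" if accel_energy > 1.5 else "stationary"

def photo_classifier(state, photo_tag):
    """Stage 2 (stand-in for the CNN on the photo stream): specific
    activity, restricted to the group chosen by stage 1."""
    groups = {
        "stationary": {"screen": "watching TV", "food": "eating"},
        "ambulatory": {"street": "walking outside", "stairs": "climbing stairs"},
    }
    return groups[state].get(photo_tag, "unknown " + state + " activity")

def recognize(accel_energy, photo_tag):
    # Each modality works in its most suitable mode: motion picks the
    # group, photos disambiguate within it.
    return photo_classifier(motion_state(accel_energy), photo_tag)

print(recognize(0.4, "food"))    # stationary group
print(recognize(2.3, "stairs"))  # ambulatory group
```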

20 pages, 2176 KiB  
Article
Activity-Aware Wearable System for Power-Efficient Prediction of Physiological Responses
by Nathan Starliper, Farrokh Mohammadzadeh, Tanner Songkakul, Michelle Hernandez, Alper Bozkurt and Edgar Lobaton
Sensors 2019, 19(3), 441; https://doi.org/10.3390/s19030441 - 22 Jan 2019
Cited by 20 | Viewed by 5756
Abstract
Wearable health monitoring has emerged as a promising solution to the growing need for remote health assessment and the growing demand for personalized preventative care and wellness management. Vital signs can be monitored and alerts can be raised when anomalies are detected, potentially improving patient outcomes. One major challenge for the use of wearable health devices is their energy efficiency and battery lifetime, which motivates the recent efforts towards the development of self-powered wearable devices. This article proposes a method for context-aware dynamic sensor selection for power-optimized physiological prediction using multi-modal wearable data streams. We first cluster the data by physical activity using the accelerometer data, and then fit a group lasso model to each activity cluster. We find the optimal reduced set of groups of sensor features, in turn reducing power usage by duty-cycling these sensors and optimizing prediction accuracy. We show that using activity-state-based contextual information increases accuracy while decreasing power usage. We also show that the reduced feature set can be used in other regression models, increasing accuracy and decreasing the energy burden. We demonstrate the potential reduction in power usage using a custom-designed multi-modal wearable system prototype.
(This article belongs to the Special Issue Computational Intelligence-Based Sensors)
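
A group lasso with one feature group per sensor can be sketched with proximal gradient descent; the data, group layout and hyper-parameters below are illustrative, not the paper's:

```python
import numpy as np

def group_lasso(X, y, groups, lam=0.5, lr=0.01, iters=2000):
    """Proximal-gradient group lasso: squared loss plus a penalty on the
    l2 norm of each sensor's feature group, so whole sensors (groups)
    are switched off together and can be duty-cycled."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(iters):
        w -= lr * (X.T @ (X @ w - y)) / n           # gradient step
        for g in groups:                             # block soft-threshold
            norm = np.linalg.norm(w[g])
            w[g] = 0.0 if norm <= lr * lam else w[g] * (1 - lr * lam / norm)
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 6))
# Only "sensor A" (features 0-2) drives the response; sensor B is idle.
y = X[:, 0] + 0.5 * X[:, 1] - X[:, 2] + 0.05 * rng.normal(size=100)
w = group_lasso(X, y, groups=[[0, 1, 2], [3, 4, 5]])
print(np.round(w, 2))  # sensor B's block shrinks to (near) zero
```

Because the penalty acts on whole blocks, the fitted model marks sensor B as unnecessary, which is what licenses powering it down.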

16 pages, 2286 KiB  
Article
Air Quality Monitoring for Vulnerable Groups in Residential Environments Using a Multiple Hazard Gas Detector
by Yujiao Wu, Taoping Liu, Sai Ho Ling, Jan Szymanski, Wentian Zhang and Steven Weidong Su
Sensors 2019, 19(2), 362; https://doi.org/10.3390/s19020362 - 17 Jan 2019
Cited by 31 | Viewed by 5446
Abstract
This paper presents a smart “e-nose” device to monitor indoor hazardous air. Indoor hazardous odor is a threat for seniors, infants, children, pregnant women, disabled residents, and patients. To overcome the limitations of using existing non-intelligent, slow-responding, deficient gas sensors, we propose a novel artificial-intelligent-based multiple hazard gas detector (MHGD) system that is mounted on a motor vehicle-based robot which can be remotely controlled. First, we optimized the sensor array for the classification of three hazardous gases, including cigarette smoke, inflammable ethanol, and off-flavor from spoiled food, using an e-nose with a mixing chamber. The mixing chamber can prevent the impact of environmental changes. We compared the classification results of all combinations of sensors, and selected the one with the highest accuracy (98.88%) as the optimal sensor array for the MHGD. The optimal sensor array was then mounted on the MHGD to detect and classify the target gases without a mixing chamber but in a controlled environment. Finally, we tested the MHGD under these conditions, and achieved an acceptable accuracy (70.00%).
(This article belongs to the Special Issue Computational Intelligence-Based Sensors)
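
The sensor-array optimisation, i.e., exhaustively scoring every sensor subset and keeping the most accurate one, can be sketched as follows; the readings and the 1-NN scorer are hypothetical stand-ins for the paper's e-nose data and classifiers:

```python
from itertools import combinations

# Hypothetical calibration readings: each sample is the response of
# four candidate sensors (S0..S3) to one of three target odours.
SAMPLES = [
    ((0.9, 0.1, 0.5, 0.2), "smoke"),
    ((0.8, 0.2, 0.5, 0.3), "smoke"),
    ((0.1, 0.9, 0.5, 0.2), "ethanol"),
    ((0.2, 0.8, 0.5, 0.3), "ethanol"),
    ((0.1, 0.1, 0.5, 0.9), "spoiled food"),
    ((0.2, 0.2, 0.5, 0.8), "spoiled food"),
]

def leave_one_out_accuracy(sensor_idx):
    """1-NN accuracy using only the chosen sensors (S2 is useless by
    construction: it reads 0.5 for everything)."""
    hits = 0
    for i, (x, label) in enumerate(SAMPLES):
        rest = [s for j, s in enumerate(SAMPLES) if j != i]
        nearest = min(rest, key=lambda s: sum(
            (x[k] - s[0][k]) ** 2 for k in sensor_idx))
        hits += nearest[1] == label
    return hits / len(SAMPLES)

# Exhaustive search over all sensor subsets, as in the array optimisation.
best = max((c for r in (1, 2, 3, 4) for c in combinations(range(4), r)),
           key=leave_one_out_accuracy)
print(best, leave_one_out_accuracy(best))
```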

14 pages, 4102 KiB  
Article
Model Order Identification for Cable Force Estimation Using a Markov Chain Monte Carlo-Based Bayesian Approach
by Shaodong Zhan, Zhi Li, Jianmin Hu, Yiping Liang and Guanglie Zhang
Sensors 2018, 18(12), 4187; https://doi.org/10.3390/s18124187 - 29 Nov 2018
Cited by 6 | Viewed by 2673
Abstract
The tensile force on the hanger cables of a suspension bridge is an important indicator of the structural health of the bridge. Tensile force estimation methods based on the measured frequency of the hanger cable have been widely used. These methods empirically pre-determine the model order corresponding to each measured frequency. However, because of the uncertain flexural rigidity, this empirical order determination not only plays a limited role at high-order frequencies, but also hinders online cable force estimation. Therefore, we propose a new method, based on a Markov chain Monte Carlo (MCMC)-based Bayesian approach, to automatically identify the model order corresponding to the measured frequency. It overcomes the limitation of empirical determination in the case of large flexural rigidity. The tensile force and the flexural rigidity of cables can be calculated simultaneously using the proposed method. The feasibility of the proposed method is validated via a numerical study involving a finite element model that considers the flexural rigidity, and via field application to a suspension bridge.
(This article belongs to the Special Issue Computational Intelligence-Based Sensors)
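
The model-order identification problem can be illustrated with the simplified taut-string model f_n = n·f1 (no flexural rigidity) and a grid search in place of the paper's MCMC; the measured frequencies below are synthetic:

```python
import numpy as np

# Synthetic measured cable frequencies (Hz): modes 2, 3 and 5 of a taut
# string with fundamental f1 = 1.3 Hz, plus noise.
measured = np.array([2.61, 3.92, 6.48])

def fit(f1, orders):
    """Negative log-likelihood (up to a constant) under Gaussian noise."""
    return np.sum((measured - f1 * np.asarray(orders)) ** 2)

best = None
for f1 in np.arange(0.5, 3.0, 0.01):
    orders = np.round(measured / f1).astype(int)
    if orders.min() < 1:
        continue
    # Crude Occam prior: among equally good fits (e.g. f1/2 with doubled
    # orders), prefer the lower mode orders.
    score = fit(f1, orders) + 1e-3 * orders.sum()
    if best is None or score < best[0]:
        best = (score, round(f1, 2), tuple(orders))

print(best[1], best[2])  # recovered fundamental and mode orders
```

The search jointly recovers the fundamental frequency and the integer order of every measured mode, which is the identification task the paper solves in full (with flexural rigidity and a sampled posterior) via MCMC.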

19 pages, 13323 KiB  
Article
An Ontology-Driven Approach for Integrating Intelligence to Manage Human and Ecological Health Risks in the Geospatial Sensor Web
by Xiaoliang Meng, Feng Wang, Yichun Xie, Guoqiang Song, Shifa Ma, Shiyuan Hu, Junming Bai and Yiming Yang
Sensors 2018, 18(11), 3619; https://doi.org/10.3390/s18113619 - 25 Oct 2018
Cited by 8 | Viewed by 4156
Abstract
Due to the rapid installation of a massive number of fixed and mobile sensors, monitoring machines are intentionally or unintentionally involved in the production of a large amount of geospatial data. Environmental sensors and related software applications are rapidly altering human lifestyles and even impacting ecological and human health. However, there are rarely specific geospatial sensor web (GSW) applications for certain ecological public health questions. In this paper, we propose an ontology-driven approach for integrating intelligence to manage human and ecological health risks in the GSW. We design a Human and Ecological health Risks Ontology (HERO) based on a semantic sensor network ontology template. We also illustrate a web-based prototype, the Human and Ecological Health Risk Management System (HaEHMS), which helps health experts and decision makers to estimate human and ecological health risks. We demonstrate this intelligent system through a case study of automatic prediction of air quality and related health risk.
(This article belongs to the Special Issue Computational Intelligence-Based Sensors)
