
Artificial Intelligence and Sensors

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 December 2020) | Viewed by 85879

Special Issue Editors


Guest Editor
Departament d’Informàtica, Escola Tècnica Superior d’Enginyeria, Universitat de València, 46100 Burjassot, Valencia, Spain
Interests: computer networks; wireless sensor networks; multimedia networks; cloud computing

Guest Editor
Computer Science Department, University of Valencia, 46100 Valencia, Spain
Interests: multimedia networks; streaming; QoE; QoS; IoTs; cloud computing

Special Issue Information

Dear Colleagues,

Sensors provide valuable data about physical magnitudes and environmental phenomena. However, translating these data into concrete actions requires processing inputs that may come from one or many types of sensors, including sensor networks. Such processing can benefit from Artificial Intelligence (AI), and the use of machine learning, neural networks (including deep architectures), and information fusion methods has become common in this field. Currently, these concepts can be applied in different IoT architectures, in which sensor and actuator nodes communicate and form networks. These networks tend to be autonomous, adapting to changing conditions to create smart IoT networks. Such smart IoT networks would not be possible without artificial intelligence algorithms at their core.

This Special Issue will focus on the applications of AI to transform the data acquired from sensors into valuable information. Topics of interest include but are not limited to:

  • AI to process data coming from sensor networks
  • Information fusion methods to combine information from multiple sensors
  • Machine learning and decision making to issue responses to sensor data
  • Deep learning architectures for sensor applications
  • Smart sensors
  • Smart IoT networks
  • Machine learning methods to process sensor outputs
  • Explainable AI for sensor applications
  • AI-based sensors for efficient energy management
  • Databases to enable research on AI-based sensor applications

Dr. Miguel Arevalillo-Herráez
Dr. Miguel García-Pineda
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial Intelligence
  • intelligent sensors
  • deep learning
  • neural networks
  • information fusion
  • explainable AI
  • smart IoT networks


Published Papers (23 papers)


Research


22 pages, 2018 KiB  
Article
An Intra-Subject Approach Based on the Application of HMM to Predict Concentration in Educational Contexts from Nonintrusive Physiological Signals in Real-World Situations
by Ana Serrano-Mamolar, Miguel Arevalillo-Herráez, Guillermo Chicote-Huete and Jesus G. Boticario
Sensors 2021, 21(5), 1777; https://doi.org/10.3390/s21051777 - 04 Mar 2021
Cited by 2 | Viewed by 2168
Abstract
Previous research has proven the strong influence of emotions on student engagement and motivation. Therefore, emotion recognition is becoming very relevant in educational scenarios, but there is no standard method for predicting students’ affect. However, physiological signals have been widely used in educational contexts. Some physiological signals have shown a high accuracy in detecting emotions because they reflect spontaneous affect-related information, which is fresh and does not require additional control or interpretation. Most proposed works use measuring equipment whose applicability in real-world scenarios is limited because of its high cost and intrusiveness. To tackle this problem, in this work, we analyse the feasibility of developing low-cost and nonintrusive devices to obtain a high detection accuracy from easy-to-capture signals. By using both inter-subject and intra-subject models, we present an experimental study that aims to explore the potential application of Hidden Markov Models (HMM) to predict the concentration state from four commonly used physiological signals, namely heart rate, breath rate, skin conductance and skin temperature. We also study the effect of combining these four signals and analyse their potential use in an educational context in terms of intrusiveness, cost and accuracy. The results show that a high accuracy can be achieved with three of the signals when using HMM-based intra-subject models. However, inter-subject models, which are meant to obtain subject-independent approaches for affect detection, fail at the same task. Full article
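The paper's HMM-based models are not published as code, but the core decoding step they rely on can be sketched in plain NumPy. The following is a minimal, hypothetical sketch: a two-state concentration model (0 = distracted, 1 = concentrated) decoded from one quantized physiological signal via the Viterbi algorithm; all probabilities are illustrative, not values from the paper.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for a discrete-observation HMM."""
    S, T = len(pi), len(obs)
    logA, logB = np.log(A), np.log(B)
    delta = np.zeros((T, S))            # best log-probability ending in state s at time t
    psi = np.zeros((T, S), dtype=int)   # backpointers
    delta[0] = np.log(pi) + logB[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA   # scores[i, j]: come from i, go to j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Illustrative two-state model: 0 = distracted, 1 = concentrated.
pi = np.array([0.5, 0.5])
A = np.array([[0.8, 0.2],               # concentration states tend to persist
              [0.1, 0.9]])
B = np.array([[0.7, 0.2, 0.1],          # emissions over 3 quantized heart-rate levels
              [0.1, 0.3, 0.6]])
states = viterbi([0, 0, 2, 2, 2], pi, A, B)  # -> [0, 0, 1, 1, 1]
```

An intra-subject model, as in the paper, would fit `A` and `B` per student from that student's own recordings rather than using hand-picked values.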
(This article belongs to the Special Issue Artificial Intelligence and Sensors)

22 pages, 46167 KiB  
Article
Joint Multimodal Embedding and Backtracking Search in Vision-and-Language Navigation
by Jisu Hwang and Incheol Kim
Sensors 2021, 21(3), 1012; https://doi.org/10.3390/s21031012 - 02 Feb 2021
Cited by 1 | Viewed by 2382
Abstract
Due to the development of computer vision and natural language processing technologies in recent years, there has been a growing interest in multimodal intelligent tasks that require the ability to concurrently understand various forms of input data such as images and text. Vision-and-language navigation (VLN) requires the alignment and grounding of multimodal input data to enable real-time perception of the task status on panoramic images and natural language instructions. This study proposes a novel deep neural network model (JMEBS), with joint multimodal embedding and backtracking search for VLN tasks. The proposed JMEBS model uses a transformer-based joint multimodal embedding module. JMEBS uses both multimodal context and temporal context. It also employs backtracking-enabled greedy local search (BGLS), a novel algorithm with a backtracking feature designed to improve the task success rate and optimize the navigation path, based on the local and global scores related to candidate actions. A novel global scoring method is also used for performance improvement by comparing the partial trajectories searched thus far with a plurality of natural language instructions. The performance of the proposed model on various operations was then experimentally demonstrated and compared with other models using the Matterport3D Simulator and room-to-room (R2R) benchmark datasets. Full article
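The abstract describes BGLS only at a high level. A hypothetical pure-Python sketch of the general idea follows: greedily expand the best locally scored candidate action, keep the best globally scored partial trajectory seen so far, and backtrack from dead ends. The toy graph, `local_score`, and `global_score` are stand-ins, not the paper's learned scoring functions.

```python
def backtracking_greedy_search(start, candidates, local_score, global_score, max_steps=50):
    """Greedy local search with backtracking: expand the best locally scored
    candidate, remember the best globally scored trajectory, and step back
    when a branch is exhausted."""
    path, tried = [start], {}
    best = (global_score(path), list(path))
    for _ in range(max_steps):
        key = tuple(path)
        options = [c for c in candidates(path[-1])
                   if c not in tried.setdefault(key, set()) and c not in path]
        if not options:
            if len(path) == 1:
                break                   # search space exhausted
            path.pop()                  # backtrack one step
            continue
        nxt = max(options, key=lambda c: local_score(path[-1], c))
        tried[key].add(nxt)
        path.append(nxt)
        g = global_score(path)
        if g > best[0]:
            best = (g, list(path))
    return best[1]

# Toy navigation graph where the greedy local choice leads to a dead end
# and backtracking recovers the goal (node 3).
graph = {0: [1, 2], 1: [], 2: [3], 3: []}
local = lambda a, b: {(0, 1): 5, (0, 2): 1, (2, 3): 1}.get((a, b), 0)
glob = lambda p: 10 if p[-1] == 3 else -len(p)
route = backtracking_greedy_search(0, graph.__getitem__, local, glob)  # -> [0, 2, 3]
```

In the paper, the local score ranks candidate actions from the multimodal embedding, while the global score compares partial trajectories against the instruction; here both are toy closures.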

21 pages, 2821 KiB  
Article
A Novel BN Learning Algorithm Based on Block Learning Strategy
by Xinyu Li, Xiaoguang Gao and Chenfeng Wang
Sensors 2020, 20(21), 6357; https://doi.org/10.3390/s20216357 - 07 Nov 2020
Cited by 3 | Viewed by 1795
Abstract
Learning accurate Bayesian Network (BN) structures of high-dimensional and sparse data is difficult because of high computation complexity. To learn the accurate structure for high-dimensional and sparse data faster, this paper adopts a divide and conquer strategy and proposes a block learning algorithm with a mutual information based K-means algorithm (BLMKM algorithm). This method utilizes an improved K-means algorithm to block the nodes in BN and a maximum minimum parents and children (MMPC) algorithm to obtain the whole skeleton of BN and find possible graph structures based on separated blocks. Then, a pruned dynamic programming algorithm is performed sequentially for all possible graph structures to get possible BNs and find the best BN by scoring function. Experiments show that for high-dimensional and sparse data, the BLMKM algorithm can achieve the same accuracy in a reasonable time compared with non-blocking classical learning algorithms. Compared to the existing block learning algorithms, the BLMKM algorithm has a time advantage on the basis of ensuring accuracy. The analysis of the real radar effect mechanism dataset proves that the BLMKM algorithm can quickly establish a global and accurate causality model to find the cause of interference, predict the detection result, and guide parameter optimization. The BLMKM algorithm is efficient for BN learning and has practical application value. Full article
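The blocking step can be sketched in NumPy: compute pairwise empirical mutual information between the discrete variables, then run a plain k-means loop on each variable's MI profile so that strongly dependent variables land in the same block. This is a hypothetical sketch of the idea, not the paper's improved K-means, and the data below are synthetic.

```python
import numpy as np

def mutual_info(x, y):
    """Empirical mutual information (in nats) between two discrete series."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

def block_nodes(data, k, iters=50, seed=0):
    """Group variables (columns of `data`) into k blocks by running k-means
    on their mutual-information profiles."""
    d = data.shape[1]
    M = np.array([[mutual_info(data[:, i], data[:, j]) for j in range(d)]
                  for i in range(d)])
    rng = np.random.default_rng(seed)
    centers = M[rng.choice(d, size=k, replace=False)]
    for _ in range(iters):
        labels = ((M[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = M[labels == c].mean(0)
    return labels

# Two dependent pairs of binary variables: (x0, x1) and (x2, x3).
x0 = np.array([0, 1] * 250); x1 = x0.copy()
x2 = np.array([0, 0, 1, 1] * 125); x3 = x2.copy()
data = np.stack([x0, x1, x2, x3], axis=1)
labels = block_nodes(data, k=2)
```

After blocking, MMPC and the pruned dynamic programming search of the paper would run per block rather than over all variables at once.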

13 pages, 2415 KiB  
Article
Digital Forensics of Scanned QR Code Images for Printer Source Identification Using Bottleneck Residual Block
by Zhongyuan Guo, Hong Zheng, Changhui You, Xiaohang Xu, Xiongbin Wu, Zhaohui Zheng and Jianping Ju
Sensors 2020, 20(21), 6305; https://doi.org/10.3390/s20216305 - 05 Nov 2020
Cited by 6 | Viewed by 4444
Abstract
With the rapid development of information technology and the widespread use of the Internet, QR codes are widely used in all walks of life and have a profound impact on people’s work and life. However, the QR code itself is likely to be printed and forged, which will cause serious economic losses and criminal offenses. Therefore, it is of great significance to identify the printer source of QR code. A method of printer source identification for scanned QR Code image blocks based on convolutional neural network (PSINet) is proposed, which innovatively introduces a bottleneck residual block (BRB). We give a detailed theoretical discussion and experimental analysis of PSINet in terms of network input, the first convolution layer design based on residual structure, and the overall architecture of the proposed convolution neural network (CNN). Experimental results show that the proposed PSINet in this paper can obtain extremely excellent printer source identification performance, the accuracy of printer source identification of QR code on eight printers can reach 99.82%, which is not only better than LeNet and AlexNet widely used in the field of digital image forensics, but also exceeds state-of-the-art deep learning methods in the field of printer source identification. Full article

19 pages, 8206 KiB  
Article
PORF-DDPG: Learning Personalized Autonomous Driving Behavior with Progressively Optimized Reward Function
by Jie Chen, Tao Wu, Meiping Shi and Wei Jiang
Sensors 2020, 20(19), 5626; https://doi.org/10.3390/s20195626 - 01 Oct 2020
Cited by 9 | Viewed by 4054
Abstract
Autonomous driving with artificial intelligence technology has been viewed as promising for autonomous vehicles hitting the road in the near future. In recent years, considerable progress has been made with Deep Reinforcement Learnings (DRLs) for realizing end-to-end autonomous driving. Still, driving safely and comfortably in real dynamic scenarios with DRL is nontrivial due to the reward functions being typically pre-defined with expertise. This paper proposes a human-in-the-loop DRL algorithm for learning personalized autonomous driving behavior in a progressive learning way. Specifically, a progressively optimized reward function (PORF) learning model is built and integrated into the Deep Deterministic Policy Gradient (DDPG) framework, which is called PORF-DDPG in this paper. PORF consists of two parts: the first part of the PORF is a pre-defined typical reward function on the system state, the second part is modeled as a Deep Neural Network (DNN) for representing driving adjusting intention by the human observer, which is the main contribution of this paper. The DNN-based reward model is progressively learned using the front-view images as the input and via active human supervision and intervention. The proposed approach is potentially useful for driving in dynamic constrained scenarios when dangerous collision events might occur frequently with classic DRLs. The experimental results show that the proposed autonomous driving behavior learning method exhibits online learning capability and environmental adaptability. Full article

21 pages, 1877 KiB  
Article
Fusing Visual Attention CNN and Bag of Visual Words for Cross-Corpus Speech Emotion Recognition
by Minji Seo and Myungho Kim
Sensors 2020, 20(19), 5559; https://doi.org/10.3390/s20195559 - 28 Sep 2020
Cited by 28 | Viewed by 3659
Abstract
Speech emotion recognition (SER) classifies emotions using low-level features or a spectrogram of an utterance. When SER methods are trained and tested using different datasets, they have shown performance reduction. Cross-corpus SER research identifies speech emotion using different corpora and languages. Recent cross-corpus SER research has been conducted to improve generalization. To improve the cross-corpus SER performance, we pretrained the log-mel spectrograms of the source dataset using our designed visual attention convolutional neural network (VACNN), which has a 2D CNN base model with channel- and spatial-wise visual attention modules. To train the target dataset, we extracted the feature vector using a bag of visual words (BOVW) to assist the fine-tuned model. Because visual words represent local features in the image, the BOVW helps VACNN to learn global and local features in the log-mel spectrogram by constructing a frequency histogram of visual words. The proposed method shows an overall accuracy of 83.33%, 86.92%, and 75.00% in the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), the Berlin Database of Emotional Speech (EmoDB), and Surrey Audio-Visual Expressed Emotion (SAVEE), respectively. Experimental results on RAVDESS, EmoDB, SAVEE demonstrate improvements of 7.73%, 15.12%, and 2.34% compared to existing state-of-the-art cross-corpus SER approaches. Full article
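The bag-of-visual-words step generalizes beyond spectrograms: cluster local descriptors into a codebook of visual words, then represent each image (here, a log-mel spectrogram) as a frequency histogram of its nearest words. A minimal NumPy sketch, with a plain k-means loop and synthetic descriptors standing in for real spectrogram patches:

```python
import numpy as np

def build_codebook(features, k, iters=25, seed=0):
    """Learn a codebook of k visual words with a plain k-means loop."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)].astype(float)
    for _ in range(iters):
        labels = ((features[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = features[labels == c].mean(0)
    return centers

def bovw_histogram(local_feats, codebook):
    """Normalized frequency histogram of each patch's nearest visual word."""
    words = ((local_feats[:, None, :] - codebook[None]) ** 2).sum(-1).argmin(1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Synthetic local descriptors drawn around two hypothetical "word" centers.
rng = np.random.default_rng(1)
feats = np.concatenate([rng.normal(0.0, 0.1, (50, 8)),
                        rng.normal(1.0, 0.1, (30, 8))])
codebook = build_codebook(feats, k=2)
hist = bovw_histogram(feats, codebook)
```

In the paper, this histogram accompanies the VACNN features during fine-tuning, giving the model an explicit summary of local spectrogram structure.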

12 pages, 1195 KiB  
Article
Color Image Generation from Range and Reflection Data of LiDAR
by Hyun-Koo Kim, Kook-Yeol Yoo and Ho-Youl Jung
Sensors 2020, 20(18), 5414; https://doi.org/10.3390/s20185414 - 21 Sep 2020
Cited by 2 | Viewed by 3388
Abstract
Recently, it has been reported that a camera-captured-like color image can be generated from the reflection data of 3D light detection and ranging (LiDAR). In this paper, we present that the color image can also be generated from the range data of LiDAR. We propose deep learning networks that generate color images by fusing reflection and range data from LiDAR point clouds. In the proposed networks, the two datasets are fused in three ways—early, mid, and last fusion techniques. The baseline network is the encoder-decoder structured fully convolution network (ED-FCN). The image generation performances were evaluated according to source types, including reflection data-only, range data-only, and fusion of the two datasets. The well-known KITTI evaluation data were used for training and verification. The simulation results showed that the proposed last fusion method yields improvements of 0.53 dB, 0.49 dB, and 0.02 in gray-scale peak signal-to-noise ratio (PSNR), color-scale PSNR, and structural similarity index measure (SSIM), respectively, over the conventional reflection-based ED-FCN. Besides, the last fusion method can be applied to real-time applications with an average processing time of 13.56 ms per frame. The methodology presented in this paper would be a powerful tool for generating data from two or more heterogeneous sources. Full article
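The three fusion points the abstract names differ only in where the two modalities are combined along the network. A schematic NumPy sketch, with a random 1x1 convolution standing in for each learned layer (the real model is the ED-FCN encoder-decoder, which this does not reproduce):

```python
import numpy as np

rng = np.random.default_rng(0)
refl = rng.random((1, 8, 8))    # reflection channel from the point cloud
dist = rng.random((1, 8, 8))    # range (distance) channel

def conv1x1(x, out_ch, seed):
    """Stand-in for a learned layer: a random 1x1 convolution over channels."""
    w = np.random.default_rng(seed).standard_normal((out_ch, x.shape[0]))
    return np.tensordot(w, x, axes=([1], [0]))

# Early fusion: concatenate the raw modalities, then one shared network.
early = conv1x1(np.concatenate([refl, dist]), 3, seed=1)

# Mid fusion: encode each modality separately, then fuse the feature maps.
mid = conv1x1(np.concatenate([conv1x1(refl, 4, seed=2),
                              conv1x1(dist, 4, seed=3)]), 3, seed=4)

# Last fusion: run two complete branches and merge their RGB outputs.
last = 0.5 * (conv1x1(refl, 3, seed=5) + conv1x1(dist, 3, seed=6))
```

All three variants map the two single-channel LiDAR inputs to a 3-channel (RGB-shaped) output; the paper's result is that the last-fusion arrangement performed best.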

21 pages, 4207 KiB  
Article
SurfNetv2: An Improved Real-Time SurfNet and Its Applications to Defect Recognition of Calcium Silicate Boards
by Chi-Yi Tsai and Hao-Wei Chen
Sensors 2020, 20(16), 4356; https://doi.org/10.3390/s20164356 - 05 Aug 2020
Cited by 6 | Viewed by 2401
Abstract
This paper presents an improved Convolutional Neural Network (CNN) architecture to recognize surface defects of the Calcium Silicate Board (CSB) using visual image information based on a deep learning approach. The proposed CNN architecture is inspired by the existing SurfNet architecture and is named SurfNetv2, which comprises a feature extraction module and a surface defect recognition module. The output of the system is the recognized defect category on the surface of the CSB. In the collection of the training dataset, we manually captured the defect images presented on the surface of the CSB samples. Then, we divided these defect images into four categories, which are crash, dirty, uneven, and normal. In the training stage, the proposed SurfNetv2 is trained through an end-to-end supervised learning method, so that the CNN model learns how to recognize surface defects of the CSB only through the RGB image information. Experimental results show that the proposed SurfNetv2 outperforms five state-of-the-art methods and achieves a high recognition accuracy of 99.90% and 99.75% in our private CSB dataset and the public Northeastern University (NEU) dataset, respectively. Moreover, the proposed SurfNetv2 model achieves a real-time computing speed of about 199.38 fps when processing images with a resolution of 128 × 128 pixels. Therefore, the proposed CNN model has great potential for real-time automatic surface defect recognition applications. Full article

15 pages, 3185 KiB  
Article
Machine Learning Modelling and Feature Engineering in Seismology Experiment
by Michail Nikolaevich Brykov, Ivan Petryshynets, Catalin Iulian Pruncu, Vasily Georgievich Efremenko, Danil Yurievich Pimenov, Khaled Giasin, Serhii Anatolievich Sylenko and Szymon Wojciechowski
Sensors 2020, 20(15), 4228; https://doi.org/10.3390/s20154228 - 29 Jul 2020
Cited by 10 | Viewed by 3429
Abstract
This article discusses machine learning modelling using a dataset provided by the LANL (Los Alamos National Laboratory) earthquake prediction competition hosted by Kaggle. The data were obtained from a laboratory stick-slip friction experiment that mimics real earthquakes. Digitized acoustic signals were recorded against time to failure of a granular layer compressed between steel plates. In this work, machine learning was employed to develop models that could predict earthquakes. The aim is to highlight the importance and potential applicability of machine learning in seismology. The XGBoost algorithm was used for modelling combined with 6-fold cross-validation and the mean absolute error (MAE) metric for model quality estimation. The backward feature elimination technique was used followed by the forward feature construction approach to find the best combination of features. The advantage of this feature engineering method is that it enables the best subset to be found from a relatively large set of features in a relatively short time. It was confirmed that the proper combination of statistical characteristics describing acoustic data can be used for effective prediction of time to failure. Additionally, statistical features based on the autocorrelation of acoustic data can also be used for further improvement of model quality. A total of 48 statistical features were considered. The best subset was determined as having 10 features. Its corresponding MAE was 1.913 s, which was stable to the third decimal point. The presented results can be used to develop artificial intelligence algorithms devoted to earthquake prediction. Full article
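The backward-elimination loop is generic and easy to sketch: drop whichever feature most reduces the error until no removal helps. In the paper the scorer would be cross-validated XGBoost MAE; the toy scorer below is a hypothetical stand-in whose optimum is the subset {"a", "b"}.

```python
def backward_elimination(features, score, min_features=1):
    """Greedy backward feature elimination.

    score(subset) -> error to minimize (e.g. cross-validated MAE).
    Drops the feature whose removal most improves the score, stopping
    when no removal helps or `min_features` remain."""
    current = list(features)
    best = score(current)
    while len(current) > min_features:
        err, drop = min((score([f for f in current if f != d]), d) for d in current)
        if err >= best:
            break
        best = err
        current.remove(drop)
    return current, best

# Toy scorer whose optimum is exactly the subset {"a", "b"}.
toy = lambda s: abs(len(s) - 2) + (0 if {"a", "b"} <= set(s) else 5)
subset, err = backward_elimination(["a", "b", "c", "d"], toy)  # -> (['a', 'b'], 0)
```

The forward feature construction the paper pairs with this would then try re-adding engineered features to the surviving subset, again keeping only those that lower the cross-validated error.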

31 pages, 6590 KiB  
Article
On Data-Driven Sparse Sensing and Linear Estimation of Fluid Flows
by Balaji Jayaraman and S M Abdullah Al Mamun
Sensors 2020, 20(13), 3752; https://doi.org/10.3390/s20133752 - 04 Jul 2020
Cited by 12 | Viewed by 3400
Abstract
The reconstruction of fine-scale information from sparse data measured at irregular locations is often needed in many diverse applications, including numerous instances of practical fluid dynamics observed in natural environments. This need is driven by tasks such as data assimilation or the recovery of fine-scale knowledge including models from limited data. Sparse reconstruction is inherently badly represented when formulated as a linear estimation problem. Therefore, the most successful linear estimation approaches are better represented by recovering the full state on an encoded low-dimensional basis that effectively spans the data. Commonly used low-dimensional spaces include those characterized by orthogonal Fourier and data-driven proper orthogonal decomposition (POD) modes. This article deals with the use of linear estimation methods when one encounters a non-orthogonal basis. As a representative thought example, we focus on linear estimation using a basis from shallow extreme learning machine (ELM) autoencoder networks that are easy to learn but non-orthogonal and which certainly do not parsimoniously represent the data, thus requiring numerous sensors for effective reconstruction. In this paper, we present an efficient and robust framework for sparse data-driven sensor placement and the consequent recovery of the higher-resolution field of basis vectors. The performance improvements are illustrated through examples of fluid flows with varying complexity and benchmarked against well-known POD-based sparse recovery methods. Full article
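The linear-estimation core of such sparse recovery is compact: express the full field in a low-dimensional basis, fit the basis coefficients to the sparse sensor readings by least squares, and lift back to full resolution. A NumPy sketch, with a random orthonormal basis standing in for the paper's POD or ELM-autoencoder modes:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 200, 5, 12               # field size, basis modes, sensor count

# Low-dimensional basis Phi (POD, Fourier, or autoencoder modes in the
# paper); a random orthonormal basis stands in here.
Phi, _ = np.linalg.qr(rng.standard_normal((n, r)))
x_true = Phi @ rng.standard_normal(r)     # a full field living in span(Phi)

sensors = rng.choice(n, size=m, replace=False)   # sparse, irregular locations
y = x_true[sensors]                              # the sensor measurements

# Linear estimation: fit basis coefficients to the sparse readings,
# then lift back to the full-resolution field.
coef, *_ = np.linalg.lstsq(Phi[sensors], y, rcond=None)
x_hat = Phi @ coef
```

With more sensors than modes (m > r) and noise-free data in the span of the basis, recovery is exact; a non-orthogonal, non-parsimonious basis such as the ELM autoencoder's is precisely what makes the sensor-placement question in the paper harder.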

18 pages, 7943 KiB  
Article
Prognosis of Bearing and Gear Wears Using Convolutional Neural Network with Hybrid Loss Function
by Chang-Cheng Lo, Ching-Hung Lee and Wen-Cheng Huang
Sensors 2020, 20(12), 3539; https://doi.org/10.3390/s20123539 - 22 Jun 2020
Cited by 18 | Viewed by 4683
Abstract
This study aimed to propose a prognostic method based on a one-dimensional convolutional neural network (1-D CNN) with clustering loss by classification training. The 1-D CNN was trained by collecting the vibration signals of normal and malfunction data in hybrid loss function (i.e., classification loss in output and clustering loss in feature space). Subsequently, the obtained feature was adopted to estimate the status for prognosis. The open bearing dataset and established gear platform were utilized to validate the functionality and feasibility of the proposed model. Moreover, the experimental platform was used to simulate the gear mechanism of the semiconductor robot to conduct a practical experiment to verify the accuracy of the model estimation. The experimental results demonstrate the performance and effectiveness of the proposed method. Full article

16 pages, 3332 KiB  
Article
Real-Time Prediction of Rate of Penetration in S-Shape Well Profile Using Artificial Intelligence Models
by Salaheldin Elkatatny
Sensors 2020, 20(12), 3506; https://doi.org/10.3390/s20123506 - 21 Jun 2020
Cited by 13 | Viewed by 2703
Abstract
Rate of penetration (ROP) is defined as the amount of removed rock per unit area per unit time. It is affected by several factors which are inseparable. Current established models for determining the ROP include the basic mathematical and physics equations, as well as the use of empirical correlations. Given the complexity of the drilling process, the use of artificial intelligence (AI) has been a game changer because most of the unknown parameters can now be accounted for entirely at the modeling process. The objective of this paper is to evaluate the ability of the optimized adaptive neuro-fuzzy inference system (ANFIS), functional neural networks (FN), random forests (RF), and support vector machine (SVM) models to predict the ROP in real time from the drilling parameters in the S-shape well profile, for the first time, based on the drilling parameters of weight on bit (WOB), drillstring rotation (DSR), torque (T), pumping rate (GPM), and standpipe pressure (SPP). Data from two wells were used for training and testing (Well A and Well B with 4012 and 1717 data points, respectively), and one well for validation (Well C) with 2500 data points. Well A and Well B data were combined in the training-testing phase and were randomly divided into a 70:30 ratio for training/testing. The results showed that the ANFIS, FN, and RF models could effectively predict the ROP from the drilling parameters in the S-shape well profile, while the accuracy of the SVM model was very low. The ANFIS, FN, and RF models predicted the ROP for the training data with average absolute percentage errors (AAPEs) of 9.50%, 13.44%, and 3.25%, respectively. For the testing data, the ANFIS, FN, and RF models predicted the ROP with AAPEs of 9.57%, 11.20%, and 8.37%, respectively. The ANFIS, FN, and RF models outperformed the available empirical correlations for ROP prediction. The ANFIS model estimated the ROP for the validation data with an AAPE of 9.06%, whereas the FN model predicted the ROP with an AAPE of 10.48%, and the RF model predicted the ROP with an AAPE of 10.43%. The SVM model predicted the ROP for the validation data with a very high AAPE of 30.05% and all empirical correlations predicted the ROP with AAPEs greater than 25%. Full article
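All of the percentages quoted above use the same metric, which is simple to state in code. A minimal NumPy sketch of the AAPE, with illustrative numbers (not the paper's data):

```python
import numpy as np

def aape(measured, predicted):
    """Average absolute percentage error, the accuracy metric quoted above."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs((measured - predicted) / measured))

# A prediction that is 10% off on every sample has an AAPE of 10%.
err = aape([10.0, 20.0, 40.0], [11.0, 22.0, 44.0])
```

Note the division by the measured value: AAPE is undefined where the measured ROP is zero, so such points must be excluded or the metric adjusted.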

33 pages, 4961 KiB  
Article
NLP-Based Approach for Predicting HMI State Sequences Towards Monitoring Operator Situational Awareness
by Harsh V. P. Singh and Qusay H. Mahmoud
Sensors 2020, 20(11), 3228; https://doi.org/10.3390/s20113228 - 05 Jun 2020
Cited by 5 | Viewed by 3878
Abstract
A novel approach presented herein transforms the Human Machine Interface (HMI) states, as a pattern of visual feedback states that encompass both operator actions and process states, from a multi-variate time-series to a natural language processing (NLP) modeling domain. The goal of this approach is to predict operator response patterns for an n-ahead time-step window given k lagged past HMI state patterns. The NLP approach offers the possibility of encoding (semantic) contextual relations within HMI state patterns. To this end, a technique for framing raw HMI data for supervised training using sequence-to-sequence (seq2seq) deep-learning machine translation algorithms is presented. In addition, a custom Seq2Seq convolutional neural network (CNN) NLP model based on current state-of-the-art design elements, such as attention, is compared against a standard recurrent neural network (RNN) based NLP model. Results demonstrate comparable effectiveness of both NLP model designs for modeling HMI states. RNN NLP models showed higher (26%) forecast accuracy, in general, for both in-sample and out-of-sample test datasets. However, the custom CNN NLP model showed higher (53%) validation accuracy, indicative of less over-fitting with the same amount of available training data. The real-world application of the proposed NLP modeling of industrial HMIs, such as in power generating station control rooms, aviation (cockpits), and so forth, is towards the realization of a non-intrusive operator situational awareness monitoring framework through prediction of HMI states. Full article
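The framing step the abstract describes, turning an HMI state stream into seq2seq training pairs of k lagged source tokens and n-ahead target tokens, can be sketched in a few lines. The letter tokens below are stand-ins for symbolic HMI states.

```python
def frame_sequences(states, k_lag, n_ahead):
    """Frame an HMI state stream for seq2seq training: each sample pairs the
    k_lag most recent states (source) with the next n_ahead states (target)."""
    return [(states[t - k_lag:t], states[t:t + n_ahead])
            for t in range(k_lag, len(states) - n_ahead + 1)]

# Tokens would be symbolic HMI states (e.g. an acknowledged-alarm indicator);
# single letters stand in here.
pairs = frame_sequences(list("abcdef"), k_lag=3, n_ahead=2)
# -> [(['a', 'b', 'c'], ['d', 'e']), (['b', 'c', 'd'], ['e', 'f'])]
```

Each pair then plays the role of a (source sentence, target sentence) example for the CNN or RNN translation model.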
(This article belongs to the Special Issue Artificial Intelligence and Sensors)
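The framing step the abstract describes, turning a multivariate HMI time series into "sentences" for seq2seq training, can be sketched roughly as follows. The state vectors, window sizes, and function name are invented toy examples, not the paper's data or code:

```python
# Illustrative sketch (not the authors' code): each distinct HMI state vector
# becomes one "word" in a vocabulary; a sliding window then yields
# (k lagged states -> n ahead states) source/target pairs for seq2seq training.

def frame_hmi_states(states, k_lag, n_ahead):
    """Map state tuples to integer tokens and build seq2seq training pairs."""
    vocab = {}
    tokens = []
    for s in states:
        key = tuple(s)
        if key not in vocab:
            vocab[key] = len(vocab)   # assign the next free token id
        tokens.append(vocab[key])
    pairs = []
    for i in range(len(tokens) - k_lag - n_ahead + 1):
        src = tokens[i:i + k_lag]                      # k lagged HMI states
        tgt = tokens[i + k_lag:i + k_lag + n_ahead]    # n ahead HMI states
        pairs.append((src, tgt))
    return pairs, vocab

# Toy HMI log: each row is (alarm_lamp, pump_state, operator_ack)
log = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1), (0, 1, 1), (0, 0, 0)]
pairs, vocab = frame_hmi_states(log, k_lag=3, n_ahead=1)
```

The resulting token pairs can then feed any off-the-shelf seq2seq encoder-decoder, which is the part the paper's CNN and RNN models implement.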
18 pages, 3738 KiB  
Article
Newly Developed Correlations to Predict the Rheological Parameters of High-Bentonite Drilling Fluid Using Neural Networks
by Ahmed Gowida, Salaheldin Elkatatny, Khaled Abdelgawad and Rahul Gajbhiye
Sensors 2020, 20(10), 2787; https://doi.org/10.3390/s20102787 - 14 May 2020
Cited by 21 | Viewed by 2458
Abstract
High-bentonite mud (HBM) is a water-based drilling fluid characterized by its remarkable improvement in cuttings removal and hole-cleaning efficiency. Periodic monitoring of the rheological properties of HBM is mandatory for optimizing the drilling operation. The objective of this study is to develop new sets of correlations using artificial neural networks (ANNs) to predict the rheological parameters of HBM while drilling, using the frequent measurements (every 15 to 20 min) of mud density (MD) and Marsh funnel viscosity (FV). The ANN models were developed using 200 field data points. The dataset was split in a 70:30 ratio for training and testing the ANN models, respectively. The optimized ANN models showed a significant match between the predicted and the measured rheological properties, with a correlation coefficient (R) higher than 0.90 and a maximum average absolute percentage error (AAPE) of 6%. New empirical correlations were extracted from the ANN models to estimate plastic viscosity (PV), yield point (YP), and apparent viscosity (AV) directly, without running the models, for easier and more practical application. The AV empirical correlation outperformed previously published correlations in terms of R and AAPE. Full article
(This article belongs to the Special Issue Artificial Intelligence and Sensors)
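The two figures of merit quoted in the abstract, the correlation coefficient R and the AAPE, are straightforward to compute. A minimal sketch with made-up toy viscosity values, not the paper's field data:

```python
# Evaluation metrics named in the abstract: correlation coefficient (R) and
# average absolute percentage error (AAPE) between measured and predicted
# rheological parameters. All numbers below are invented for illustration.
import math

def correlation_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def aape(measured, predicted):
    """Average absolute percentage error, in percent."""
    return 100.0 * sum(abs((m - p) / m)
                       for m, p in zip(measured, predicted)) / len(measured)

measured_pv = [20.0, 25.0, 30.0, 35.0]    # toy plastic viscosity values, cP
predicted_pv = [21.0, 24.0, 31.0, 34.0]   # toy ANN predictions
r = correlation_r(measured_pv, predicted_pv)
err = aape(measured_pv, predicted_pv)
```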
15 pages, 5236 KiB  
Article
Development of a Machine Learning-Based Damage Identification Method Using Multi-Point Simultaneous Acceleration Measurement Results
by Pang-jo Chun, Tatsuro Yamane, Shota Izumi and Naoya Kuramoto
Sensors 2020, 20(10), 2780; https://doi.org/10.3390/s20102780 - 14 May 2020
Cited by 21 | Viewed by 3069
Abstract
It is necessary to assess damage properly for the safe use of a structure and for the development of an appropriate maintenance strategy. Although many efforts have been made to measure the vibration of a structure to determine the degree of damage, the accuracy of such evaluations is not yet high enough, so vibration-based damage evaluation of structures has not been put to practical use. In this study, we propose a method to evaluate damage by measuring the acceleration of a structure at multiple points and interpreting the results with a Random Forest, a kind of supervised machine learning. The proposed method uses the maximum response acceleration, standard deviation, logarithmic decay rate, and natural frequency to improve the accuracy of damage assessment. We propose a three-step Random Forest method to evaluate various damage types based on these multi-point measurements. The accuracy of the proposed method is then verified through cross-validation and a vibration test of an actual damaged specimen. Full article
(This article belongs to the Special Issue Artificial Intelligence and Sensors)
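The four vibration features the abstract lists can be extracted from a free-vibration record along these lines. The decaying sinusoid and the peak-picking below are an invented toy example under the stated assumptions, not the paper's procedure:

```python
# Toy extraction of the abstract's features: maximum response acceleration,
# standard deviation, logarithmic decay rate (decrement), and natural frequency.
import math

def damped_response(f_n, zeta, fs, n):
    """Toy free-vibration record: exponentially decaying sinusoid at f_n Hz."""
    return [math.exp(-zeta * 2 * math.pi * f_n * i / fs)
            * math.sin(2 * math.pi * f_n * i / fs) for i in range(n)]

def vibration_features(a, fs):
    n = len(a)
    mean = sum(a) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in a) / n)
    # Positive local maxima = successive response peaks
    peaks = [i for i in range(1, n - 1)
             if a[i] > a[i - 1] and a[i] > a[i + 1] and a[i] > 0]
    delta = math.log(a[peaks[0]] / a[peaks[1]])   # logarithmic decay rate
    f_nat = fs / (peaks[1] - peaks[0])            # one period between peaks
    return max(abs(x) for x in a), std, delta, f_nat

fs = 1000.0
sig = damped_response(f_n=5.0, zeta=0.02, fs=fs, n=2000)
a_max, a_std, delta, f_nat = vibration_features(sig, fs)
```

For light damping, the recovered decay rate should approach 2*pi*zeta, which makes the toy signal a convenient sanity check.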
14 pages, 1150 KiB  
Article
Wavelet-Like Transform to Optimize the Order of an Autoregressive Neural Network Model to Predict the Dissolved Gas Concentration in Power Transformer Oil from Sensor Data
by Francisco Elânio Bezerra, Fernando André Zemuner Garcia, Silvio Ikuyo Nabeta, Gilberto Francisco Martha de Souza, Ivan Eduardo Chabu, Josemir Coelho Santos, Shigueru Nagao Junior and Fabio Henrique Pereira
Sensors 2020, 20(9), 2730; https://doi.org/10.3390/s20092730 - 11 May 2020
Cited by 11 | Viewed by 3178
Abstract
Dissolved gas analysis (DGA) is one of the most important methods for analyzing faults in power transformers. In general, DGA is applied in monitoring systems based upon an autoregressive model: the current value of a time series is regressed on past values of the same series, as well as on present and past values of some exogenous variables. The main difficulty is deciding the order of the autoregressive model, that is, determining the number of past values to be used. This study proposes a wavelet-like transform to optimize the order of the variables in a nonlinear autoregressive neural network to predict the in-oil dissolved gas concentration (DGC) from sensor data. Daubechies wavelets of different lengths are used to create representations with different time delays of ten DGCs, which are then subjected to a procedure based on principal component analysis (PCA) and Pearson's correlation to determine the order of an autoregressive model. The representations with optimal time delays for each DGC are used as input to a multi-layer perceptron (MLP) network trained with the backpropagation algorithm to predict the gas concentrations at present and future times. This approach produces better results than choosing the same time delay for all inputs, as is usual. The forecasts reached an average mean absolute percentage error (MAPE) of 5.763%, 1.525%, 1.831%, 2.869%, and 5.069% for C2H2, C2H6, C2H4, CH4, and H2, respectively. Full article
(This article belongs to the Special Issue Artificial Intelligence and Sensors)
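The lag-selection idea, scoring candidate time delays of an input series by Pearson's correlation and keeping the best one as that input's autoregressive order, can be sketched as follows. The synthetic series stands in for a dissolved-gas concentration record; it is not the paper's data, and the paper additionally uses wavelet representations and PCA:

```python
# Minimal sketch of choosing a per-input time delay by Pearson correlation.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def best_lag(series, max_lag):
    """Return the delay whose lagged copy correlates best with the series."""
    scores = {}
    for lag in range(1, max_lag + 1):
        scores[lag] = abs(pearson(series[:-lag], series[lag:]))
    return max(scores, key=scores.get), scores

# Synthetic gas-concentration series with a strong period of 4 samples
gas = [10 + 5 * math.sin(2 * math.pi * i / 4) + 0.01 * i for i in range(80)]
lag, scores = best_lag(gas, max_lag=6)
```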
11 pages, 2775 KiB  
Article
Five Typical Stenches Detection Using an Electronic Nose
by Wei Jiang and Daqi Gao
Sensors 2020, 20(9), 2514; https://doi.org/10.3390/s20092514 - 29 Apr 2020
Cited by 5 | Viewed by 2356
Abstract
This paper deals with the classification of stenches, which irritate the olfactory organs, cause discomfort, and pollute the environment. In China, the triangle odor bag method, which depends entirely on the judgment of human panelists, is widely used to determine odor concentration. In this paper, we propose a stench detection system composed of an electronic nose and machine learning algorithms to discriminate five typical stenches. The five stench-producing chemicals are 2-phenylethyl alcohol, isovaleric acid, methylcyclopentanone, γ-undecalactone, and 2-methylindole. We apply random forests, support vector machines (SVMs), a backpropagation neural network, principal component analysis (PCA), and linear discriminant analysis (LDA). The results show that LDA combined with an SVM performs best in detecting the stenches considered in this paper. Full article
(This article belongs to the Special Issue Artificial Intelligence and Sensors)
14 pages, 254 KiB  
Article
A Public Domain Dataset for Real-Life Human Activity Recognition Using Smartphone Sensors
by Daniel Garcia-Gonzalez, Daniel Rivero, Enrique Fernandez-Blanco and Miguel R. Luaces
Sensors 2020, 20(8), 2200; https://doi.org/10.3390/s20082200 - 13 Apr 2020
Cited by 87 | Viewed by 12855 | Correction
Abstract
In recent years, human activity recognition has become a hot topic in the scientific community. The reason it is under the spotlight is its direct application in multiple domains, such as healthcare and fitness. Additionally, the current widespread use of smartphones makes it particularly easy to collect this kind of data from people in a non-intrusive and inexpensive way, without the need for other wearables. In this paper, we introduce our orientation-independent, placement-independent and subject-independent human activity recognition dataset. The dataset comprises measurements from the accelerometer, gyroscope, magnetometer, and GPS of the smartphone. Additionally, each measurement is associated with one of four registered activities: inactive, active, walking and driving. This work also proposes a support vector machine (SVM) model to perform some preliminary experiments on the dataset. Considering that this dataset was collected from smartphones in actual use, unlike other datasets, developing a good model on such data is an open problem and a challenge for researchers. Solving it would help close the gap between models and real-life applications. Full article
(This article belongs to the Special Issue Artificial Intelligence and Sensors)
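A typical preprocessing step before fitting an SVM to such recordings is segmenting the raw stream into fixed-size windows of simple statistical features. A hypothetical sketch with a toy accelerometer trace, not the authors' pipeline:

```python
# Windowing a raw sensor stream into feature vectors (mean, variance, min, max)
# per window - a common preprocessing step before SVM training.
def windows(stream, size, step):
    """Slice the stream into overlapping fixed-size windows."""
    return [stream[i:i + size] for i in range(0, len(stream) - size + 1, step)]

def features(window):
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    return (mean, var, min(window), max(window))

acc = [0.0, 0.1, 0.0, -0.1] * 10          # toy "inactive" accelerometer trace
X = [features(w) for w in windows(acc, size=8, step=4)]
```

Each row of `X` would then be paired with its activity label (inactive, active, walking, driving) to form the SVM training set.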
21 pages, 2532 KiB  
Article
ANN-Based Airflow Control for an Oscillating Water Column Using Surface Elevation Measurements
by Fares M'zoughi, Izaskun Garrido, Aitor J. Garrido and Manuel De La Sen
Sensors 2020, 20(5), 1352; https://doi.org/10.3390/s20051352 - 29 Feb 2020
Cited by 17 | Viewed by 3522
Abstract
Oscillating water column (OWC) plants face power generation limitations due to the stalling phenomenon. This behavior can be avoided by an airflow control strategy that anticipates incoming peak waves and reduces the airflow velocity within the turbine duct. In this sense, this work uses artificial neural networks (ANNs) to recognize the different incoming waves, distinguishing the strong waves that provoke stalling, and to generate a suitable airflow speed reference for the airflow control scheme. The ANN is therefore trained using real surface elevation measurements of the waves. The ANN-based airflow control operates an air valve in the capture chamber to adjust the airflow speed as required. A comparative study has been carried out between the ANN-based airflow control and the uncontrolled OWC system under different sea conditions. A further study used real measured wave input data and the generated power of the NEREIDA wave power plant. Results show the effectiveness of the proposed ANN airflow control against the uncontrolled case, ensuring improved power generation. Full article
(This article belongs to the Special Issue Artificial Intelligence and Sensors)
16 pages, 4666 KiB  
Article
Classification of VLF/LF Lightning Signals Using Sensors and Deep Learning Methods
by Jiaquan Wang, Qijun Huang, Qiming Ma, Sheng Chang, Jin He, Hao Wang, Xiao Zhou, Fang Xiao and Chao Gao
Sensors 2020, 20(4), 1030; https://doi.org/10.3390/s20041030 - 14 Feb 2020
Cited by 28 | Viewed by 4905
Abstract
Lightning waveforms play an important role in lightning observation, location, and lightning disaster investigation. Given the large amount of lightning waveform data provided by existing real-time very low frequency/low frequency (VLF/LF) lightning waveform acquisition equipment, an automatic and accurate lightning waveform classification method becomes extremely important. With the widespread application of deep learning in image and speech recognition, it has become possible to use deep learning to classify lightning waveforms. In this study, 50,000 lightning waveform samples were collected. The data were divided into the following categories: positive cloud-to-ground flash, negative cloud-to-ground flash, cloud-to-ground flash with ionosphere reflection signal, positive narrow bipolar event, negative narrow bipolar event, positive pre-breakdown process, negative pre-breakdown process, continuous multi-pulse cloud flash, bipolar pulse, and skywave. A multi-layer one-dimensional convolutional neural network (1D-CNN) was designed to automatically extract VLF/LF lightning waveform features and distinguish lightning waveforms. The model achieved an overall accuracy of 99.11% on the lightning dataset and 97.55% on a thunderstorm process. Considering its excellent performance, this model could be used in lightning sensors to assist in lightning monitoring and positioning. Full article
(This article belongs to the Special Issue Artificial Intelligence and Sensors)
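The core operation of a 1D-CNN, sliding a kernel along the waveform to extract local features, can be illustrated in a few lines. The kernel below is a hand-picked edge detector, whereas the paper's kernels are learned weights:

```python
# The building block of a 1D-CNN: a valid-mode 1D convolution plus ReLU.
def conv1d(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

# Toy "waveform" with one sharp step, a stand-in for a lightning pulse onset
wave = [0.0] * 5 + [1.0] * 5
edge = relu(conv1d(wave, [-1.0, 1.0]))   # fires only at the rising edge
```

A real 1D-CNN stacks many such filters, interleaved with pooling, and learns the kernel values by backpropagation.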
17 pages, 5304 KiB  
Article
Classification of Low Frequency Signals Emitted by Power Transformers Using Sensors and Machine Learning Methods
by Daniel Jancarczyk, Marcin Bernaś and Tomasz Boczar
Sensors 2019, 19(22), 4909; https://doi.org/10.3390/s19224909 - 10 Nov 2019
Cited by 4 | Viewed by 2619
Abstract
This paper proposes a method for automatically detecting and classifying low-frequency noise generated by power transformers, using sensors and dedicated machine learning algorithms. The method applies the frequency spectra of sound pressure levels generated by transformers operating in a real environment. The spectrum's frequency interval and resolution are automatically optimized for the selected machine learning algorithm. Various machine learning algorithms, optimization techniques, and transformer types were investigated: two indoor-type transformers from Schneider Electric and two overhead-type transformers manufactured by ABB. As a result, a method was proposed that can detect a working transformer (against the background) and classify its type with an accuracy of over 97%, based on the generated low-frequency noise. The proposed preprocessing stage increased the accuracy of the method by 10%. Additionally, the machine learning algorithms offering the most robust solutions (with the highest accuracy) for noise classification were identified. Full article
(This article belongs to the Special Issue Artificial Intelligence and Sensors)
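A rough illustration of the kind of spectral input such classifiers operate on: a plain DFT of a toy transformer hum, followed by picking the dominant low-frequency bin. The signal, rates, and pipeline are assumptions for illustration, not the authors' preprocessing:

```python
# Sound-pressure spectrum of a toy 100 Hz transformer hum via a direct DFT.
import math

def dft_magnitude(x):
    """Magnitudes of the first n/2 DFT bins (O(n^2), fine for a toy example)."""
    n = len(x)
    mags = []
    for k in range(n // 2):
        re = sum(x[i] * math.cos(-2 * math.pi * k * i / n) for i in range(n))
        im = sum(x[i] * math.sin(-2 * math.pi * k * i / n) for i in range(n))
        mags.append(math.hypot(re, im))
    return mags

fs = 1000   # sampling rate, Hz
n = 200     # 0.2 s record -> 5 Hz bin resolution
hum = [math.sin(2 * math.pi * 100 * i / fs) for i in range(n)]  # 100 Hz hum
spectrum = dft_magnitude(hum)
peak_hz = max(range(len(spectrum)), key=spectrum.__getitem__) * fs / n
```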
Other


15 pages, 5881 KiB  
Letter
Indirect Feedback Measurement of Flow in a Water Pumping Network Employing Artificial Intelligence
by Thommas Kevin Sales Flores, Juan Moises Mauricio Villanueva, Heber P. Gomes and Sebastian Y. C. Catunda
Sensors 2021, 21(1), 75; https://doi.org/10.3390/s21010075 - 25 Dec 2020
Cited by 4 | Viewed by 2540
Abstract
Indirect measurement can be used as an alternative way to obtain a desired quantity when the physical placement of a direct sensor in the plant is expensive or impossible. This procedure can be improved by means of feedback control of a secondary variable that can be measured and controlled. Its main advantage is a new form of dynamic response, with improvements in the response time of the measurement of the quantity of interest. In water pumping networks, this methodology can be employed to measure flow indirectly, which is advantageous given the high price of flow sensors and the operational complexity of installing them in pipelines. In this work, we present the use of artificial intelligence techniques in the implementation of a feedback system for indirect flow measurement. Among the contributions of this technique are the design of the pressure controller using fuzzy logic theory, which removes the need to know the plant model, and the use of an artificial neural network to build nonlinear models for indirectly estimating the flow. The proposed approach was validated through experimental tests in a fully automated water pumping system installed at the Laboratory of Hydraulic and Energy Efficiency in Sanitation at the Federal University of Paraiba (LENHS/UFPB). The results were compared with an electromagnetic flow sensor present in the system, yielding a maximum relative error of 10%. Full article
(This article belongs to the Special Issue Artificial Intelligence and Sensors)
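The indirect-measurement loop the abstract describes, controlling a measurable secondary variable (pressure) and then mapping it to flow through a nonlinear model, can be sketched as follows. The square-root flow model and the proportional controller are invented stand-ins for the paper's neural-network model and fuzzy controller:

```python
# Indirect flow measurement sketch: regulate pressure, then infer flow from it.
def estimate_flow(pressure):
    """Toy nonlinear model: flow grows with the square root of pressure."""
    return 3.0 * pressure ** 0.5

def control_pressure(setpoint, steps=200, gain=0.2):
    """Simple proportional loop standing in for the fuzzy pressure controller."""
    p = 0.0
    for _ in range(steps):
        p += gain * (setpoint - p)   # actuator nudges pressure toward setpoint
    return p

p = control_pressure(setpoint=25.0)          # controlled secondary variable
flow = estimate_flow(p)                      # indirectly measured quantity
true_flow = estimate_flow(25.0)
rel_err = abs(flow - true_flow) / true_flow  # paper reports at most 10%
```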
6 pages, 182 KiB  
Correction
Correction: Garcia-Gonzalez, D.; Rivero, D.; Fernandez-Blanco, E.; Luaces, M.R. A Public Domain Dataset for Real-Life Human Activity Recognition Using Smartphone Sensors. Sensors 2020, 20, 2200
by Daniel Garcia-Gonzalez, Daniel Rivero, Enrique Fernandez-Blanco and Miguel R. Luaces
Sensors 2020, 20(16), 4650; https://doi.org/10.3390/s20164650 - 18 Aug 2020
Cited by 8 | Viewed by 1992
Abstract
The authors wish to make the following corrections to this paper [...] Full article
(This article belongs to the Special Issue Artificial Intelligence and Sensors)