

Editor's Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to authors, or important in this field. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.


Article
The Algorithm of Determining an Anti-Collision Manoeuvre Trajectory Based on the Interpolation of Ship’s State Vector
Sensors 2021, 21(16), 5332; https://doi.org/10.3390/s21165332 - 06 Aug 2021
Cited by 9
Abstract
The determination of a ship’s safe trajectory in collision situations at sea is one of the basic functions in the autonomous navigation of ships. While planning a collision-avoiding manoeuvre in open waters, the navigator has to take into account the ship’s manoeuvrability and hydrometeorological conditions. To this end, the ship’s state vector (position coordinates, speed, heading, and other movement parameters) is predicted at fixed time intervals for different steering scenarios. One possible way to solve this problem is a method using the interpolation of the ship’s state vector based on data from measurements conducted during the sea trials of the ship. This article presents an interpolating function defined within any convex quadrilateral, with the nodes being its vertices. The proposed function interpolates the parameters of the ship’s state vector for a specified point of a plane, where the values in the interpolation nodes are data obtained from measurements performed during a series of turning circle tests conducted for different starting conditions and various rudder settings. The proposed method of interpolation was used in the process of determining the anti-collision manoeuvre trajectory. The mechanism is based on the principles of a modified Dijkstra algorithm, in which the graph takes the form of a regular network of points. The transition between the graph vertices depends on the safe passing level of other objects and the degree of departure from the planned route. The determined shortest path between the starting vertex and the target vertex is the optimal solution for the discrete space of solutions. The algorithm for determining the trajectory of the anti-collision manoeuvre was implemented in autonomous sea-going vessel technology. This article presents the results of laboratory tests and tests conducted under quasi-real conditions using physical ship models. The experiments confirmed the effective operation of the developed algorithm for determining the anti-collision manoeuvre trajectory in the technological framework of autonomous ship navigation. Full article
(This article belongs to the Special Issue Sensors and Sensor's Fusion in Autonomous Vehicles)
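The search mechanism the abstract describes (a modified Dijkstra algorithm over a regular network of points, with transitions weighted by safety and route deviation) can be sketched in a few lines. This is an illustrative sketch, not the authors' code: the 5 × 5 grid and the unit transition cost stand in for the paper's safety-dependent cost function.

```python
import heapq

def dijkstra_grid(start, goal, cost):
    """Shortest path between two vertices of a regular grid graph.
    cost(u, v) returns the weight of moving between neighbouring
    vertices; in the paper's setting it would encode the safe passing
    level of other objects and the departure from the planned route."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        x, y = u
        for v in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nd = d + cost(u, v)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Walk the predecessor chain back from the goal.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Demo: 5 x 5 grid of points, unit cost for every in-bounds transition.
def unit_cost(u, v):
    x, y = v
    return 1.0 if 0 <= x < 5 and 0 <= y < 5 else float("inf")

path, total = dijkstra_grid((0, 0), (3, 4), unit_cost)
```

In the paper's setting, `cost` would return a large or infinite weight when the transition violates the safe passing level of another object, so unsafe vertices are simply never relaxed.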

Article
Real Time Pear Fruit Detection and Counting Using YOLOv4 Models and Deep SORT
Sensors 2021, 21(14), 4803; https://doi.org/10.3390/s21144803 - 14 Jul 2021
Cited by 14
Abstract
This study aimed to produce a robust real-time pear fruit counter for mobile applications using only RGB data, the variants of the state-of-the-art object detection model YOLOv4, and the multiple object-tracking algorithm Deep SORT. This study also provided a systematic and pragmatic methodology for choosing the most suitable model for a desired application in agricultural sciences. In terms of accuracy, YOLOv4-CSP was observed to be the optimal model, with an AP@0.50 of 98%. In terms of speed and computational cost, YOLOv4-tiny was found to be the ideal model, with a speed of more than 50 FPS and FLOPS of 6.8–14.5. Considering the balance of accuracy, speed and computational cost, YOLOv4 was found to be the most suitable, with the highest accuracy metrics while satisfying a real-time speed of at least 24 FPS. Between the two methods of counting with Deep SORT, the unique-ID method was found to be more reliable, with an F1count of 87.85%, because YOLOv4 had a very low false-negative rate in detecting pear fruits. The ROI-line method is more restrictive by nature, but, due to flickering in detection, it failed to count some pears despite their being detected. Full article
(This article belongs to the Section Remote Sensors)
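The unique-ID counting method named in the abstract is easy to illustrate independently of the detector and tracker: assuming Deep SORT assigns each fruit a persistent track ID, the count is simply the number of distinct IDs ever observed. A minimal sketch with toy frame data:

```python
def count_unique_ids(frames):
    """Count fruits as the number of distinct tracker IDs seen in any
    frame. A flickering detection does not lose or inflate the count
    as long as the tracker re-associates the fruit with the same ID."""
    seen = set()
    for ids_in_frame in frames:
        seen.update(ids_in_frame)
    return len(seen)

# Three frames; fruit 2 flickers out of the second frame but keeps its ID.
frames = [{1, 2}, {1, 3}, {1, 2, 3}]
n_pears = count_unique_ids(frames)
```

An ROI-line counter, by contrast, only increments when a track crosses a fixed line, which is why a flickering track can be detected yet never counted.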

Article
TIP4.0: Industrial Internet of Things Platform for Predictive Maintenance
Sensors 2021, 21(14), 4676; https://doi.org/10.3390/s21144676 - 08 Jul 2021
Cited by 10
Abstract
Industry 4.0, allied with the growth and democratization of Artificial Intelligence (AI) and the advent of the IoT, is paving the way for the complete digitization and automation of industrial processes. Maintenance is one of these processes, where the introduction of a predictive approach, as opposed to traditional techniques, is expected to considerably improve industry maintenance strategies, with gains such as reduced downtime, improved equipment effectiveness, lower maintenance costs, increased return on assets, risk mitigation, and, ultimately, profitable growth. With predictive maintenance, dedicated sensors monitor the critical points of assets. The sensor data then feed into machine learning algorithms that can infer the asset health status and inform operators and decision-makers. With this in mind, in this paper, we present TIP4.0, a platform for predictive maintenance based on a modular software solution for edge computing gateways. TIP4.0 is built around Yocto, which makes it readily available and compliant with Commercial Off-the-Shelf (COTS) or proprietary hardware. TIP4.0 was conceived with an industry mindset, with communication interfaces that allow it to serve sensor networks on the shop floor and a modular software architecture that allows it to be easily adjusted to new deployment scenarios. To showcase its potential, the TIP4.0 platform was validated on COTS hardware, and we considered a public dataset for the simulation of predictive maintenance scenarios. We used a Convolutional Neural Network (CNN) architecture, which provided competitive performance against state-of-the-art approaches while being approximately four and two times faster than uncompressed model inference on the Central Processing Unit (CPU) and Graphics Processing Unit (GPU), respectively. These results highlight the capabilities of distributed large-scale edge computing in industrial scenarios. Full article
(This article belongs to the Section Internet of Things)

Article
A Vision-Based Social Distancing and Critical Density Detection System for COVID-19
Sensors 2021, 21(13), 4608; https://doi.org/10.3390/s21134608 - 05 Jul 2021
Cited by 29
Abstract
Social distancing (SD) is an effective measure to prevent the spread of the infectious Coronavirus Disease 2019 (COVID-19). However, a lack of spatial awareness may cause unintentional violations of this new measure. Against this backdrop, we propose an active surveillance system to slow the spread of COVID-19 by warning individuals in a region-of-interest. Our contribution is twofold. First, we introduce a vision-based real-time system that can detect SD violations and send non-intrusive audio-visual cues using state-of-the-art deep-learning models. Second, we define a novel critical social density value and show that the chance of SD violation occurrence can be held near zero if the pedestrian density is kept under this value. The proposed system is also ethically fair: it does not record data nor target individuals, and no human supervisor is present during the operation. The proposed system was evaluated across real-world datasets. Full article
(This article belongs to the Special Issue Machine Learning in Wireless Sensor Networks and Internet of Things)
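The two quantities the system monitors, pairwise distance violations and pedestrian density in the region of interest, reduce to small computations once pedestrian positions are known. A minimal sketch (the 2 m threshold, the ROI area, and the coordinates are illustrative; recovering ground-plane positions from the camera view is assumed to have happened upstream):

```python
import math
from itertools import combinations

def sd_violations(positions, d_min=2.0):
    """Return index pairs of pedestrians standing closer than d_min
    metres; positions are assumed to be ground-plane coordinates."""
    return [(i, j)
            for (i, p), (j, q) in combinations(enumerate(positions), 2)
            if math.dist(p, q) < d_min]

def social_density(positions, roi_area_m2):
    """Pedestrians per square metre in the region of interest; keeping
    this below the critical density keeps expected violations near zero."""
    return len(positions) / roi_area_m2

pts = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0)]
violations = sd_violations(pts)   # only the first two are < 2 m apart
rho = social_density(pts, roi_area_m2=100.0)
```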

Article
The Promise of Sleep: A Multi-Sensor Approach for Accurate Sleep Stage Detection Using the Oura Ring
Sensors 2021, 21(13), 4302; https://doi.org/10.3390/s21134302 - 23 Jun 2021
Cited by 12
Abstract
Consumer-grade sleep trackers represent a promising tool for large-scale studies and health management. However, the potential and limitations of these devices remain insufficiently quantified. Addressing this issue, we aim to provide a comprehensive analysis of the impact of accelerometer, autonomic nervous system (ANS)-mediated peripheral signals, and circadian features for sleep stage detection on a large dataset. Four hundred and forty nights from 106 individuals, for a total of 3444 h of combined polysomnography (PSG) and physiological data from a wearable ring, were acquired. Features were extracted to investigate the relative impact of different data streams on 2-stage (sleep and wake) and 4-stage classification accuracy (light NREM sleep, deep NREM sleep, REM sleep, and wake). Machine learning models were evaluated using 5-fold cross-validation and a standardized framework for sleep stage classification assessment. Accuracy for 2-stage detection (sleep, wake) was 94% for a simple accelerometer-based model and 96% for a full model that included ANS-derived and circadian features. Accuracy for 4-stage detection was 57% for the accelerometer-based model and 79% when including ANS-derived and circadian features. By combining the compact form factor of a finger ring, multidimensional biometric sensory streams, and machine learning, high-accuracy wake/sleep detection and sleep staging can be accomplished. Full article
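The 5-fold evaluation protocol can be sketched generically; splitting by whole nights (rather than by individual sleep epochs) keeps epochs from one night out of both train and test folds. The fold construction below is an illustrative sketch, not the authors' framework:

```python
def k_fold_splits(n_items, k=5):
    """Yield (train, test) index lists for k-fold cross-validation.
    For sleep data the items should be whole nights, so epochs from a
    single night never end up in both the train and test folds."""
    sizes = [n_items // k + (1 if i < n_items % k else 0) for i in range(k)]
    idx = list(range(n_items))
    start = 0
    for size in sizes:
        test = idx[start:start + size]
        train = idx[:start] + idx[start + size:]
        yield train, test
        start += size

splits = list(k_fold_splits(440, k=5))   # 440 nights, as in the study
```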

Article
Towards 6G IoT: Tracing Mobile Sensor Nodes with Deep Learning Clustering in UAV Networks
Sensors 2021, 21(11), 3936; https://doi.org/10.3390/s21113936 - 07 Jun 2021
Cited by 9
Abstract
Unmanned aerial vehicles (UAVs) in the role of flying anchor nodes have been proposed to assist the localisation of terrestrial Internet of Things (IoT) sensors and provide relay services in the context of the upcoming 6G networks. This paper considered the objective of tracing a mobile IoT device of unknown location, using a group of UAVs that were equipped with received signal strength indicator (RSSI) sensors. The UAVs employed measurements of the target’s radio frequency (RF) signal power to approach the target as quickly as possible. A deep learning model performed clustering in the UAV network at regular intervals, based on a graph convolutional network (GCN) architecture, which utilised information about the RSSI and the UAV positions. The number of clusters was determined dynamically at each instant using a heuristic method, and the partitions were determined by optimising an RSSI loss function. The proposed algorithm retained the clusters that approached the RF source more effectively, removing the rest of the UAVs, which returned to the base. Simulation experiments demonstrated the improvement of this method compared to a previous deterministic approach, in terms of the time required to reach the target and the total distance covered by the UAVs. Full article
(This article belongs to the Special Issue 6G Wireless Communication Systems)

Article
A High-Resolution Reflective Microwave Planar Sensor for Sensing of Vanadium Electrolyte
Sensors 2021, 21(11), 3759; https://doi.org/10.3390/s21113759 - 28 May 2021
Cited by 16
Abstract
Microwave planar sensors employ conventional passive complementary split-ring resonators (CSRRs) as their sensitive region. In this work, a novel planar reflective sensor is introduced that deploys a CSRR as the front-end sensing element at f_res = 6 GHz, with an extra loss-compensating negative resistance that restores the power dissipated in the sensor; the design is used for dielectric material characterization. It is shown that the S11 notch of −15 dB can be improved down to −40 dB without loss of sensitivity. An application of this design is demonstrated in discriminating different states of vanadium redox solutions under the highly lossy conditions of fully charged V5+ and fully discharged V4+ electrolytes. Full article
(This article belongs to the Special Issue State-of-the-Art Technologies in Microwave Sensors)

Article
Evaluating the Single-Shot MultiBox Detector and YOLO Deep Learning Models for the Detection of Tomatoes in a Greenhouse
Sensors 2021, 21(10), 3569; https://doi.org/10.3390/s21103569 - 20 May 2021
Cited by 18
Abstract
The development of robotic solutions for agriculture requires advanced perception capabilities that can work reliably in any crop stage. For example, to automatise the tomato harvesting process in greenhouses, the visual perception system needs to detect the tomato in any life cycle stage (from flower to ripe tomato). The state of the art for visual tomato detection focuses mainly on ripe tomatoes, which have a colour distinct from the background. This paper contributes an annotated visual dataset of green and reddish tomatoes. Such datasets are uncommon and not readily available for research purposes. The dataset will enable further developments in edge artificial intelligence for in situ, real-time visual tomato detection, as required for the development of harvesting robots. Using this dataset, five deep learning models were selected, trained and benchmarked to detect green and reddish tomatoes grown in greenhouses. Considering our robotic platform specifications, only the Single-Shot MultiBox Detector (SSD) and YOLO architectures were considered. The results proved that the system can detect green and reddish tomatoes, even those occluded by leaves. SSD MobileNet v2 had the best performance when compared against SSD Inception v2, SSD ResNet 50, SSD ResNet 101 and YOLOv4 Tiny, reaching an F1-score of 66.15%, an mAP of 51.46% and an inference time of 16.44 ms on an NVIDIA Turing architecture platform, an NVIDIA Tesla T4 with 12 GB. YOLOv4 Tiny also had impressive results, mainly concerning inference times of about 5 ms. Full article
(This article belongs to the Section Remote Sensors)
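The F1-score used to rank the detectors is the harmonic mean of precision and recall, computed from detection counts. A minimal sketch (the counts below are illustrative, not the paper's data):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, computed from true
    positives, false positives and false negatives of a detector."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

# Illustrative counts only: 80 tomatoes found, 20 spurious boxes,
# 20 fruits missed.
score = f1_score(tp=80, fp=20, fn=20)
```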

Article
UAV Photogrammetry under Poor Lighting Conditions—Accuracy Considerations
Sensors 2021, 21(10), 3531; https://doi.org/10.3390/s21103531 - 19 May 2021
Cited by 7
Abstract
The use of low-level photogrammetry is very broad, and studies in this field are conducted in many respects. Most research and applications are based on image data acquired during the day, which seems natural and obvious. However, the authors of this paper draw attention to the potential and possible use of UAV photogrammetry during the darker time of the day. The potential of night-time images has not yet been widely recognized, since correct scenery lighting, or the lack of scenery light sources, is an obvious issue. The authors have developed typical day- and night-time photogrammetric models. They have also presented an extensive analysis of the geometry, indicated which process element had the greatest impact on degrading the night-time photogrammetric product, and identified which measurable factor directly correlated with image accuracy. The reduction in geometric quality during night-time tests was greatly impacted by the non-uniform distribution of GCPs within the study area. The calibration of non-metric cameras is sensitive to poor lighting conditions, which leads to a higher determination error for each intrinsic orientation and distortion parameter. As evidenced, uniformly illuminated photos can be used to construct a model with lower reprojection error, and each tie point exhibits greater precision. Furthermore, the authors have evaluated whether commercial photogrammetric software enables reaching acceptable image quality and whether the digital camera type impacts interpretative quality. The research paper is concluded with an extended discussion, conclusions, and recommendations on night-time studies. Full article
(This article belongs to the Special Issue Unmanned Aerial Systems and Remote Sensing)

Article
Predicting Exact Valence and Arousal Values from EEG
Sensors 2021, 21(10), 3414; https://doi.org/10.3390/s21103414 - 14 May 2021
Cited by 9
Abstract
Recognition of emotions from physiological signals, and in particular from electroencephalography (EEG), is a field within affective computing gaining increasing relevance. Although researchers have used these signals to recognize emotions, most of them only identify a limited set of emotional states (e.g., happiness, sadness, anger, etc.) and have not attempted to predict exact values for valence and arousal, which would provide a wider range of emotional states. This paper describes our proposed model for predicting the exact values of valence and arousal in a subject-independent scenario. To create it, we studied the best features, brain waves, and machine learning models that are currently in use for emotion classification. This systematic analysis revealed that the best prediction model uses a KNN regressor (K = 1) with Manhattan distance, features from the alpha, beta and gamma bands, and the differential asymmetry from the alpha band. Results, using the DEAP, AMIGOS and DREAMER datasets, show that our model can predict valence and arousal values with a low error (MAE < 0.06, RMSE < 0.16) and a strong correlation between predicted and expected values (PCC > 0.80), and can identify four emotional classes with an accuracy of 84.4%. The findings of this work show that the features, brain waves and machine learning models, typically used in emotion classification tasks, can be used in more challenging situations, such as the prediction of exact values for valence and arousal. Full article
(This article belongs to the Special Issue Biomedical Signal Acquisition and Processing Using Sensors)
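The winning configuration, a K = 1 nearest-neighbour regressor with Manhattan distance, is compact enough to sketch directly. The feature vectors and (valence, arousal) targets below are toy values, not DEAP/AMIGOS/DREAMER data:

```python
def manhattan(a, b):
    """L1 (Manhattan) distance between two feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def knn1_predict(train_X, train_y, query):
    """K = 1 regressor with Manhattan distance: the prediction for a
    query is the (valence, arousal) target of its single nearest
    training sample, matching the model configuration in the abstract."""
    best = min(range(len(train_X)), key=lambda i: manhattan(train_X[i], query))
    return train_y[best]

# Toy EEG-band feature vectors mapped to (valence, arousal) targets.
X = [(0.1, 0.9), (0.8, 0.2), (0.5, 0.5)]
y = [(0.2, 0.7), (0.9, 0.1), (0.5, 0.5)]
pred = knn1_predict(X, y, (0.75, 0.25))
```

With K = 1 there is no averaging step, which is why the predicted values span the full continuous range of the training targets rather than a discrete set of classes.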

Article
Testing the Contribution of Multi-Source Remote Sensing Features for Random Forest Classification of the Greater Amanzule Tropical Peatland
Sensors 2021, 21(10), 3399; https://doi.org/10.3390/s21103399 - 13 May 2021
Cited by 7
Abstract
Tropical peatlands such as Ghana’s Greater Amanzule peatland are highly valuable ecosystems and are under great pressure from anthropogenic land use activities. Accurate measurement of their occurrence and extent is required to facilitate sustainable management. A key challenge, however, is the high cloud cover in the tropics that limits optical remote sensing data acquisition. In this work, we combine optical imagery with radar and elevation data to optimise land cover classification for the Greater Amanzule tropical peatland. Sentinel-2, Sentinel-1 and Shuttle Radar Topography Mission (SRTM) imagery were acquired and integrated to drive a machine learning land cover classification using a random forest classifier. Recursive feature elimination was used to prune the high-dimensional, correlated feature space and determine the optimal features for the classification. Six datasets were compared, comprising different combinations of optical, radar and elevation features. Results showed that the best overall accuracy (OA) was found for the integrated Sentinel-2, Sentinel-1 and SRTM dataset (S2+S1+DEM), significantly outperforming all the other classifications with an OA of 94%. Assessment of the sensitivity of land cover classes to image features indicated that elevation and the original Sentinel-1 bands contributed the most to separating tropical peatlands from other land cover types. The integration of more features and the removal of redundant features systematically increased classification accuracy. We estimate that Ghana’s Greater Amanzule peatland covers 60,187 ha. Our proposed methodological framework contributes a robust workflow for accurate and detailed landscape-scale monitoring of tropical peatlands, while our findings provide timely information critical for the sustainable management of the Greater Amanzule peatland. Full article
(This article belongs to the Section Remote Sensors)
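The recursive feature elimination step can be sketched as a generic loop: repeatedly drop the least important feature and re-evaluate. The `importance` and `evaluate` callbacks below are toy stand-ins for the random forest's feature importances and overall accuracy, and the feature names are illustrative:

```python
def recursive_feature_elimination(features, importance, evaluate, n_keep):
    """Skeleton of the RFE loop: repeatedly drop the least important
    feature until n_keep remain, recording the score at each step.
    importance(feats) returns a per-feature score dict; evaluate(feats)
    returns the classifier's accuracy on that feature subset."""
    feats = list(features)
    history = [(tuple(feats), evaluate(feats))]
    while len(feats) > n_keep:
        scores = importance(feats)
        weakest = min(feats, key=lambda f: scores[f])
        feats.remove(weakest)
        history.append((tuple(feats), evaluate(feats)))
    return feats, history

# Toy stand-ins: a fixed importance table and an additive "accuracy".
weights = {"elevation": 0.5, "VV": 0.3, "NDVI": 0.15, "B02": 0.05}
importance = lambda fs: {f: weights[f] for f in fs}
evaluate = lambda fs: sum(weights[f] for f in fs)
kept, history = recursive_feature_elimination(weights, importance, evaluate,
                                              n_keep=2)
```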

Article
Deep-Emotion: Facial Expression Recognition Using Attentional Convolutional Network
Sensors 2021, 21(9), 3046; https://doi.org/10.3390/s21093046 - 27 Apr 2021
Cited by 50
Abstract
Facial expression recognition has been an active area of research over the past few decades, and it remains challenging due to high intra-class variation. Traditional approaches to this problem rely on hand-crafted features such as SIFT, HOG, and LBP, followed by a classifier trained on a database of images or videos. Most of these works perform reasonably well on datasets of images captured under controlled conditions but fail to perform as well on more challenging datasets with greater image variation and partial faces. In recent years, several works have proposed end-to-end frameworks for facial expression recognition using deep learning models. Despite the better performance of these works, there is still much room for improvement. In this work, we propose a deep learning approach based on an attentional convolutional network that is able to focus on important parts of the face and achieves significant improvement over previous models on multiple datasets, including FER-2013, CK+, FERG, and JAFFE. We also use a visualization technique that is able to find important facial regions for detecting different emotions based on the classifier’s output. Through experimental results, we show that different emotions are sensitive to different parts of the face. Full article
(This article belongs to the Section Sensing and Imaging)

Article
Classification of Skin Disease Using Deep Learning Neural Networks with MobileNet V2 and LSTM
Sensors 2021, 21(8), 2852; https://doi.org/10.3390/s21082852 - 18 Apr 2021
Cited by 83
Abstract
Deep learning models are efficient in learning the features that assist in understanding complex patterns precisely. This study proposed a computerized process for classifying skin disease through deep learning based on MobileNet V2 and Long Short-Term Memory (LSTM). The MobileNet V2 model proved efficient, achieving good accuracy while remaining able to run on lightweight computational devices. The proposed model is efficient in maintaining stateful information for precise predictions. A grey-level co-occurrence matrix is used for assessing the progress of diseased growth. The performance has been compared against other state-of-the-art models such as Fine-Tuned Neural Networks (FTNN), Convolutional Neural Networks (CNN), the Very Deep Convolutional Networks for Large-Scale Image Recognition developed by the Visual Geometry Group (VGG), and a convolutional neural network architecture extended with a few changes. The HAM10000 dataset is used, and the proposed method outperformed the other methods with more than 85% accuracy. Its robustness in recognizing the affected region much faster, with almost 2× fewer computations than the conventional MobileNet model, results in minimal computational effort. Furthermore, a mobile application is designed for instant and proper action. It helps patients and dermatologists identify the type of disease from an image of the affected region at the initial stage of the skin disease. These findings suggest that the proposed system can help general practitioners efficiently and effectively diagnose skin conditions, thereby reducing further complications and morbidity. Full article

Article
The Utilization of Artificial Neural Network Equalizer in Optical Camera Communications
Sensors 2021, 21(8), 2826; https://doi.org/10.3390/s21082826 - 16 Apr 2021
Cited by 8
Abstract
In this paper, we propose and validate an artificial neural network-based equalizer for the constant power 4-level pulse amplitude modulation in an optical camera communications system. We introduce new terminology to measure the quality of the communications link in terms of the number of row pixels per symbol Npps, which allows a fair comparison considering the progress made in the development of the current image sensors in terms of the frame rates and the resolutions of each frame. Using the proposed equalizer, we experimentally demonstrate a non-flickering system using a single light-emitting diode (LED) with Npps of 20 and 30 pixels/symbol for the unequalized and equalized systems, respectively. Potential transmission rates of up to 18.6 and 24.4 kbps are achieved with and without the equalization, respectively. The quality of the received signal is assessed using the eye-diagram opening and its linearity and the bit error rate performance. An acceptable bit error rate (below the forward error correction limit) and an improvement of ~66% in the eye linearity are achieved using a single LED and a typical commercial camera with equalization. Full article
(This article belongs to the Special Issue Visible Light Communication, Networking, and Sensing)

Article
An Enhanced Indoor Positioning Algorithm Based on Fingerprint Using Fine-Grained CSI and RSSI Measurements of IEEE 802.11n WLAN
Sensors 2021, 21(8), 2769; https://doi.org/10.3390/s21082769 - 14 Apr 2021
Cited by 11
Abstract
Received signal strength indication (RSSI), obtained at the Medium Access Control (MAC) layer, is widely used in range-based and fingerprint location systems due to its low cost and low complexity. However, RSSI is affected by noise and multi-path propagation, and its positioning performance is not stable. In recent years, many commercial WiFi devices have supported the acquisition of physical-layer channel state information (CSI). CSI can characterize signal features at a finer granularity than RSSI: by analyzing the characteristics of multi-channel sub-carriers, it can mitigate the effects of multi-path and noise. To improve indoor location accuracy and algorithm efficiency, this paper proposes a hybrid fingerprint location technology based on RSSI and CSI. In the off-line phase, to overcome the problems of low positioning accuracy and fingerprint drift caused by signal instability, a methodology based on the Kalman filter and a Gaussian function is proposed to preprocess the RSSI value and CSI amplitude value, and the improved CSI phase is incorporated after a linear transformation. The mutation and noisy data are then effectively eliminated, and accurate, smoother outputs of the RSSI and CSI values can be achieved. The accurate hybrid fingerprint database is then established after dimensionality reduction of the obtained high-dimensional data values. The weighted k-nearest neighbor (WKNN) algorithm is applied to reduce the complexity of the algorithm during the online positioning stage, and the accurate indoor positioning algorithm is accomplished. Experimental results show that the proposed algorithm exhibits good performance in anti-noise ability, fusion positioning accuracy, and real-time filtering. Compared with CSI-MIMO, FIFS, and RSSI-based methods, the proposed fusion correction method has higher positioning accuracy and smaller positioning error. Full article
(This article belongs to the Special Issue Indoor Positioning and Navigation)
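The online WKNN stage reduces to a small computation once the hybrid fingerprint database exists: find the k reference points whose stored feature vectors best match the live measurement, then average their coordinates weighted by inverse distance. The database entries and RSSI values below are hypothetical:

```python
import math

def wknn_locate(fingerprints, measurement, k=3):
    """Online WKNN stage: pick the k reference points whose stored
    feature vectors (e.g. RSSI/CSI values) are closest to the live
    measurement, then average their coordinates weighted by the
    inverse of that distance."""
    def dist(feat):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(feat, measurement)))

    nearest = sorted(fingerprints, key=lambda fp: dist(fp[1]))[:k]
    weights = [1.0 / (dist(feat) + 1e-9) for _, feat in nearest]
    total = sum(weights)
    x = sum(w * pos[0] for w, (pos, _) in zip(weights, nearest)) / total
    y = sum(w * pos[1] for w, (pos, _) in zip(weights, nearest)) / total
    return x, y

# Hypothetical fingerprint database: ((x, y) in metres, RSSI vector in dBm).
db = [((0.0, 0.0), (-40.0, -70.0)),
      ((4.0, 0.0), (-70.0, -40.0)),
      ((2.0, 3.0), (-55.0, -55.0))]
estimate = wknn_locate(db, measurement=(-42.0, -68.0), k=2)
```

The measurement above is closest to the first reference point, so the weighted estimate lands near the origin.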

Article
Assessment of Vineyard Canopy Characteristics from Vigour Maps Obtained Using UAV and Satellite Imagery
Sensors 2021, 21(7), 2363; https://doi.org/10.3390/s21072363 - 29 Mar 2021
Cited by 7
Abstract
Canopy characterisation is a key factor for the success and efficiency of the pesticide application process in vineyards. Canopy measurements to determine the optimal volume rate are currently conducted manually, which is time-consuming and limits the adoption of precise methods for volume rate selection. Therefore, automated methods for canopy characterisation must be established using a rapid and reliable technology capable of providing precise information about crop structure. This research provided regression models for obtaining canopy characteristics of vineyards from unmanned aerial vehicle (UAV) and satellite images collected at three significant growth stages. Between 2018 and 2019, a total of 1400 vines were characterised manually and remotely using UAV- and satellite-based technology. The information collected from the sampled vines was analysed by two different procedures. First, a linear relationship between the manual and remote sensing data was investigated, considering every single vine as a data point. Second, the vines were clustered based on three vigour levels in the parcel, and regression models were fitted to the average values of the ground-based and remote-sensing-estimated canopy parameters. Remote sensing could detect the changes in canopy characteristics associated with vegetation growth. The combination of the normalised difference vegetation index (NDVI) and the projected area extracted from the UAV images correlated with the tree row volume (TRV) when raw point data were used. This relationship was improved and extended to canopy height, width, leaf wall area, and TRV when the data were clustered. Similarly, satellite-based NDVI yielded moderate coefficients of determination for canopy width with raw point data, and for canopy width, height, and TRV when the vines were clustered according to vigour.
The proposed approach should facilitate the estimation of canopy characteristics in each area of a field using a cost-effective, simple, and reliable technology, allowing variable rate application in vineyards. Full article

Article
A Deep-Learning Framework for the Detection of Oil Spills from SAR Data
Sensors 2021, 21(7), 2351; https://doi.org/10.3390/s21072351 - 28 Mar 2021
Cited by 12
Abstract
Oil leaks onto water surfaces from big tankers, ships, and pipeline cracks cause considerable damage and harm to the marine environment. Synthetic Aperture Radar (SAR) images provide an approximate representation of target scenes, including sea and land surfaces, ships, oil spills, and look-alikes. Detection and segmentation of oil spills from SAR images are crucial to aid leak clean-ups and protect the environment. This paper introduces a two-stage deep-learning framework for the identification of oil spill occurrences based on a highly unbalanced dataset. The first stage classifies patches based on the percentage of oil spill pixels using a novel 23-layer Convolutional Neural Network, while the second stage performs semantic segmentation using a five-stage U-Net structure. The generalized Dice loss is minimized to account for the reduced oil spill representation in the patches. The results of this study are very promising, showing improved precision and Dice score compared with related work. Full article
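The generalized Dice loss mentioned above weights each class by its inverse squared volume, so that sparse oil-spill pixels are not swamped by the background. A minimal binary sketch in plain Python (illustrative only, not the authors' code):

```python
def generalized_dice_loss(pred, target, eps=1e-6):
    """Generalized Dice loss for a binary mask, with plain lists.

    pred: predicted foreground probabilities in [0, 1]; target: 0/1 labels.
    Each class is weighted by its inverse squared volume, boosting the
    contribution of rare classes such as sparse oil-spill pixels.
    Simplified sketch, not the authors' implementation.
    """
    num = den = 0.0
    for c in (0, 1):
        # Per-class "probability" maps: background is 1 - foreground
        t = [v if c else 1.0 - v for v in target]
        p = [v if c else 1.0 - v for v in pred]
        w = 1.0 / (sum(t) ** 2 + eps)  # inverse squared class volume
        num += w * sum(a * b for a, b in zip(t, p))
        den += w * (sum(t) + sum(p))
    return 1.0 - 2.0 * num / (den + eps)

perfect = generalized_dice_loss([1, 0, 0, 1], [1, 0, 0, 1])
worst = generalized_dice_loss([0, 1, 1, 0], [1, 0, 0, 1])
```

A perfect prediction drives the loss to zero, while a fully wrong one drives it to one, regardless of how unbalanced the classes are.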
(This article belongs to the Section Remote Sensors)

Article
Wind Turbine Main Bearing Fault Prognosis Based Solely on SCADA Data
Sensors 2021, 21(6), 2228; https://doi.org/10.3390/s21062228 - 23 Mar 2021
Cited by 16
Abstract
As stated by the European Academy of Wind Energy (EAWE), the wind industry has identified main bearing failures as a critical issue in terms of increasing wind turbine reliability and availability. This is owing to the major repairs, high replacement costs, and long downtime periods associated with main bearing failures. Thus, main bearing fault prognosis has become an economically relevant topic and a technical challenge. In this work, a data-based methodology for fault prognosis is presented. The main contributions of this work are as follows: (i) Prognosis is achieved using only supervisory control and data acquisition (SCADA) data, which are already available in all industrial-sized wind turbines; thus, no extra purpose-built sensors need to be installed. (ii) The proposed method only requires healthy data to be collected; thus, it can be applied to any wind farm, even when no faulty data have been recorded. (iii) The proposed algorithm works under different and varying operating and environmental conditions. (iv) The validity and performance of the established methodology are demonstrated on a real in-production wind farm consisting of 12 wind turbines. The obtained results show that advanced prognostic systems based solely on SCADA data can predict failures several months prior to their occurrence, allowing wind turbine operators to plan their operations. Full article
(This article belongs to the Special Issue Sensors for Wind Turbine Fault Diagnosis and Prognosis)

Article
A Smart and Secure Logistics System Based on IoT and Cloud Technologies
Sensors 2021, 21(6), 2231; https://doi.org/10.3390/s21062231 - 23 Mar 2021
Cited by 8
Abstract
Recently, one of the hottest topics in the logistics sector has been the traceability of goods and the monitoring of their condition during transportation. Perishable goods, such as fresh food, have attracted particular attention from researchers, who have already proposed different solutions to guarantee the quality and freshness of food through the whole cold chain. In this regard, the use of Internet of Things (IoT)-enabling technologies, and the specific branch called edge computing, is bringing several enhancements, enabling easy remote and real-time monitoring of transported goods. Given fast-changing requirements and the difficulties researchers encounter in proposing new solutions, a fast-prototyping approach can rapidly advance both the research and the commercial sector. To ease the fast prototyping of solutions, different platforms and tools have been proposed in recent years; however, it is difficult to guarantee end-to-end security at all levels through such platforms. For this reason, based on the experiments reported in the literature and aiming to support fast prototyping and end-to-end security in the logistics sector, the current work presents a solution that demonstrates how the advantages offered by the Azure Sphere platform, comprising a dedicated hardware device (the MT3620 microcontroller unit) and the Azure Sphere Security Service, can be used to build a fast prototype that traces the condition of fresh food throughout its transportation. The proposed solution guarantees end-to-end security and can be exploited by future similar works in other sectors as well. Full article
(This article belongs to the Section Internet of Things)

Article
ARETT: Augmented Reality Eye Tracking Toolkit for Head Mounted Displays
Sensors 2021, 21(6), 2234; https://doi.org/10.3390/s21062234 - 23 Mar 2021
Cited by 12
Abstract
Currently, an increasing number of head-mounted displays (HMDs) for virtual and augmented reality (VR/AR) are equipped with integrated eye trackers. Use cases of these integrated eye trackers include rendering optimization and gaze-based user interaction. In addition, visual attention in VR and AR is of interest for applied eye-tracking research, for example, in the cognitive or educational sciences. While some research toolkits for VR already exist, only a few target AR scenarios. In this work, we present an open-source eye tracking toolkit for reliable gaze data acquisition in AR based on Unity 3D and the Microsoft HoloLens 2, as well as an R package for seamless data analysis. Furthermore, we evaluate the spatial accuracy and precision of the integrated eye tracker for fixation targets at different distances and angles to the user (n=21). On average, we found that gaze estimates are reported with an angular accuracy of 0.83 degrees and a precision of 0.27 degrees while the user is resting, which is on par with state-of-the-art mobile eye trackers. Full article
(This article belongs to the Special Issue Wearable Technologies and Applications for Eye Tracking)

Article
Experimental Seaborne Passive Radar
Sensors 2021, 21(6), 2171; https://doi.org/10.3390/s21062171 - 20 Mar 2021
Cited by 7
Abstract
Passive bistatic radar does not emit energy by itself but relies on the energy emitted by illuminators of opportunity, such as radio or television transmitters. Ground-based passive radars are relatively well developed, as numerous demonstrators and operational systems have been built. Passive radar on a moving platform, however, is a relatively new field. In this paper, an experimental seaborne passive radar system is presented. The radar uses digital radio (DAB) and digital television (DVB-T) transmissions for target detection. Results of clutter analysis are presented, as well as detections of real-life targets. Full article
(This article belongs to the Special Issue Active and Passive Radars on Mobile Platforms)

Article
Integrating a Low-Cost Electronic Nose and Machine Learning Modelling to Assess Coffee Aroma Profile and Intensity
Sensors 2021, 21(6), 2016; https://doi.org/10.3390/s21062016 - 12 Mar 2021
Cited by 18
Abstract
Aroma is one of the main attributes that consumers consider when appreciating and selecting a coffee; hence, it is considered an important quality trait. However, the most common methods to assess aroma rely on expensive equipment or on human senses through sensory evaluation, which is time-consuming and requires highly trained assessors to avoid subjectivity. Therefore, this study aimed to estimate coffee intensity and aromas using a low-cost, portable electronic nose (e-nose) and machine learning modeling. For this purpose, triplicates of nine commercial coffee samples with different intensity levels were used. Two machine learning models were developed based on artificial neural networks, using the data from the e-nose as inputs, to (i) classify the samples into low, medium, and high intensity (Model 1) and (ii) predict the relative abundance of 45 different aromas (Model 2). Results showed that it is possible to estimate the intensity of coffees with high accuracy (98%; Model 1), as well as to predict the specific aromas with a high correlation coefficient (R = 0.99); no under- or over-fitting of the models was detected. The proposed contactless, nondestructive, rapid, reliable, and low-cost method proved effective in evaluating volatile compounds in coffee and could potentially be applied at all stages of the production process to detect any undesirable characteristics on time and ensure high-quality products. Full article
(This article belongs to the Special Issue Novel Contactless Sensors for Food, Beverage and Packaging Evaluation)

Article
Low Cost, Easy to Prepare and Disposable Electrochemical Molecularly Imprinted Sensor for Diclofenac Detection
Sensors 2021, 21(6), 1975; https://doi.org/10.3390/s21061975 - 11 Mar 2021
Cited by 10
Abstract
In this work, a disposable electrochemical (voltammetric) molecularly imprinted polymer (MIP) sensor for the selective determination of diclofenac (DCF) was constructed. The proposed MIP sensor permits fast (30 min) analysis, is cheap and easy to prepare, and has the potential to be integrated with portable devices. Due to its simplicity and efficiency, surface imprinting by electropolymerization was used to prepare a MIP on a screen-printed carbon electrode (SPCE). MIP preparation was achieved by cyclic voltammetry (CV), using dopamine (DA) as a monomer in the presence of DCF. The differential pulse voltammetry (DPV) detection of DCF at the MIP/SPCE and at non-imprinted polymer (NIP) control sensors showed an imprinting factor of 2.5. Several experimental preparation parameters were studied and optimized. CV and electrochemical impedance spectroscopy (EIS) experiments were performed to evaluate the electrode surface modifications. The MIP sensor showed adequate selectivity (in comparison with other drug molecules), an intra-day repeatability of 7.5%, an inter-day repeatability of 11.5%, a linear range between 0.1 and 10 μM (r2 = 0.9963), and limits of detection (LOD) and quantification (LOQ) of 70 and 200 nM, respectively. Its applicability was successfully demonstrated by the determination of DCF in spiked water samples (river and tap water). Full article
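For context, calibration-based detection limits are commonly computed as LOD = 3.3·sd/slope and LOQ = 10·sd/slope, where sd is the standard deviation of the blank response and slope is the calibration sensitivity. The sketch below illustrates this standard convention only; the abstract does not state which convention the authors used, and the numbers are invented.

```python
def detection_limits(sd_blank, slope, k_lod=3.3, k_loq=10.0):
    """LOD/LOQ from calibration data: k * sd(blank) / slope.

    sd_blank: standard deviation of the blank (or intercept) response;
    slope: calibration sensitivity (signal per concentration unit).
    k = 3.3 and 10 are the common ICH/IUPAC-style factors; the paper
    may use a different convention, so treat this as illustrative.
    """
    return k_lod * sd_blank / slope, k_loq * sd_blank / slope

# Hypothetical numbers: blank sd of 0.021 signal units, slope of
# 1.0 signal units per µM
lod, loq = detection_limits(0.021, 1.0)
```

By construction the LOQ is always roughly three times the LOD, consistent with the 70 nM / 200 nM pair reported above.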
(This article belongs to the Section Chemical Sensors)

Article
Quantifying Physiological Biomarkers of a Microwave Brain Stimulation Device
Sensors 2021, 21(5), 1896; https://doi.org/10.3390/s21051896 - 08 Mar 2021
Cited by 8
Abstract
Physiological signals are immediate and sensitive to the neural and cardiovascular changes resulting from brain stimulation, and are considered a quantitative tool for evaluating the association between brain stimulation and cognitive performance. Brain stimulation outside a highly equipped clinical setting requires the use of a low-cost, ambulatory miniature system. The purpose of this double-blind, randomized, sham-controlled study is to quantify the physiological biomarkers of the neural and cardiovascular systems induced by a microwave brain stimulation (MBS) device. We investigated the effect of an active MBS device and a sham device on the cardiovascular and neurological responses of ten volunteers (mean age 26.33 years, 70% male). Electroencephalography (EEG) and electrocardiography (ECG) were recorded in the initial resting state, an intermediate state, and the final state at half-hour intervals using a portable sensing device. During the experiment, the participants were engaged in a cognitive workload. In the active MBS group, the power of the high-alpha, high-beta, and low-beta EEG bands increased, and the power of the low-alpha and theta waves decreased, relative to the sham group. The RR interval and QRS interval showed a significant association with MBS stimulation. Heart rate variability features showed no significant difference between the two groups. A wearable MBS modality may be feasible for use in biomedical research; the MBS can modulate the neurological and cardiovascular responses to cognitive workload. Full article
(This article belongs to the Section Biomedical Sensors)

Article
An Estimation Method of Continuous Non-Invasive Arterial Blood Pressure Waveform Using Photoplethysmography: A U-Net Architecture-Based Approach
Sensors 2021, 21(5), 1867; https://doi.org/10.3390/s21051867 - 07 Mar 2021
Cited by 14
Abstract
Blood pressure (BP) monitoring is of significant importance in the treatment of hypertension and various cardiovascular diseases. As photoplethysmogram (PPG) signals can be recorded non-invasively, research on measuring BP from PPG has recently intensified. In this paper, we propose a U-net deep learning architecture that uses the fingertip PPG signal as input to estimate the arterial BP (ABP) waveform non-invasively. From this waveform, we also measured systolic BP (SBP), diastolic BP (DBP), and mean arterial pressure (MAP). The proposed method was evaluated on a subset of 100 subjects from two publicly available databases: MIMIC and MIMIC-III. The predicted ABP waveforms correlated highly with the reference waveforms, with an average Pearson's correlation coefficient of 0.993. The mean absolute error is 3.68 ± 4.42 mmHg for SBP, 1.97 ± 2.92 mmHg for DBP, and 2.17 ± 3.06 mmHg for MAP, which satisfies the requirements of the Association for the Advancement of Medical Instrumentation (AAMI) standard and achieves grade A according to the British Hypertension Society (BHS) standard. The results show that the proposed method can efficiently estimate the ABP waveform directly from the fingertip PPG. Full article
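Once an ABP waveform is available, the summary pressures follow directly: SBP is the waveform peak, DBP the trough, and MAP the time average. A minimal sketch on a synthetic one-second waveform (illustrative values, not MIMIC data, and not the paper's beat-segmentation pipeline):

```python
import math

def bp_summary(abp):
    """SBP, DBP and MAP from an arterial blood pressure trace (mmHg).

    A minimal per-window reduction: SBP is the peak, DBP the trough,
    MAP the time average. The paper's exact beat handling is not
    described in the abstract; this is a generic sketch.
    """
    return max(abp), min(abp), sum(abp) / len(abp)

# Synthetic one-second "waveform": a 1.2 Hz half-rectified sine
fs = 125  # Hz, the typical MIMIC waveform sampling rate
abp = [90.0 + 30.0 * max(0.0, math.sin(2 * math.pi * 1.2 * i / fs))
       for i in range(fs)]
sbp, dbp, map_ = bp_summary(abp)
```

On this toy trace the trough sits at the 90 mmHg baseline and the peak just under 120 mmHg, with the MAP in between.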
(This article belongs to the Special Issue Machine Learning for Sensing and Healthcare 2020–2021)

Article
Towards Robust Robot Control in Cartesian Space Using an Infrastructureless Head- and Eye-Gaze Interface
Sensors 2021, 21(5), 1798; https://doi.org/10.3390/s21051798 - 05 Mar 2021
Cited by 9
Abstract
This paper presents a lightweight, infrastructureless, head-worn interface for robust, real-time robot control in Cartesian space using head- and eye-gaze. The interface comes at a total weight of just 162 g. It combines a state-of-the-art visual simultaneous localization and mapping algorithm (ORB-SLAM 2) for RGB-D cameras with a magnetic, angular rate, and gravity (MARG) sensor filter. The data fusion process is designed to switch dynamically between magnetic, inertial, and visual heading sources to enable robust orientation estimation under various disturbances, e.g., magnetic disturbances or degraded visual sensor data. The interface furthermore delivers accurate eye- and head-gaze vectors to enable precise robot end-effector (EFF) positioning and employs a head-motion mapping technique to effectively control the robot's end-effector orientation. An experimental proof of concept demonstrates that the proposed interface and its data fusion process generate reliable and robust pose estimates. The three-dimensional head- and eye-gaze position estimation pipeline delivers a mean Euclidean error of 19.0±15.7 mm for head-gaze and 27.4±21.8 mm for eye-gaze at a distance of 0.3–1.1 m from the user. This indicates that the proposed interface offers a precise control mechanism for hands-free, full six-degree-of-freedom (DoF) robot teleoperation in Cartesian space by head- or eye-gaze and head motion. Full article
(This article belongs to the Special Issue Assistance Robotics and Sensors)

Article
Sensitivity, Noise and Resolution in a BEOL-Modified Foundry-Made ISFET with Miniaturized Reference Electrode for Wearable Point-of-Care Applications
Sensors 2021, 21(5), 1779; https://doi.org/10.3390/s21051779 - 04 Mar 2021
Cited by 10
Abstract
Ion-sensitive field-effect transistors (ISFETs) form a highly sensitive and scalable class of sensors, compatible with advanced complementary metal-oxide-semiconductor (CMOS) processes. Despite many previous demonstrations of their merits as low-power integrated sensors, very little is known about their noise characteristics when operated in a liquid-gate configuration. The noise characteristics in the various regimes of operation are important for selecting the most suitable conditions for signal-to-noise ratio (SNR) and power consumption. This work reports systematic DC, transient, and noise characterizations and models of a back-end-of-line (BEOL)-modified foundry-made ISFET used as a pH sensor. The aim is to determine the sensor's sensitivity and resolution to pH changes and to calibrate numerical and lumped-element models capable of supporting the interpretation of the experimental findings. The experimental sensitivity is approximately 40 mV/pH with a normalized resolution of 5 mpH per µm2, in agreement with the state of the art in the literature. Differences in the drain current noise spectra between the ISFET and MOSFET configurations of the same device at low currents (weak inversion) suggest that the chemical noise produced by the random binding/unbinding of H+ ions on the sensor surface is likely the dominant noise contribution in this regime. In contrast, at high currents (strong inversion), the two configurations exhibit similar drain noise levels, suggesting that the noise originates in the underlying FET rather than in the sensing region. Full article
(This article belongs to the Special Issue Wearable/Wireless Body Sensor Networks for Healthcare Applications)

Article
COVID-19 Recognition Using Ensemble-CNNs in Two New Chest X-ray Databases
Sensors 2021, 21(5), 1742; https://doi.org/10.3390/s21051742 - 03 Mar 2021
Cited by 13
Abstract
The recognition of COVID-19 infection from X-ray images is an emerging field in the machine learning and computer vision communities. Despite the great efforts that have been made in this field since the appearance of COVID-19 (2019), the field still suffers from two drawbacks. First, the number of available X-ray scans labeled as COVID-19-infected is relatively small. Second, all the works that have been carried out in the field are separate; there are no unified data, classes, or evaluation protocols. In this work, based on public and newly collected data, we propose two X-ray COVID-19 databases: a three-class and a five-class COVID-19 dataset. For both databases, we evaluate different deep learning architectures. Moreover, we propose an Ensemble-CNNs approach that outperforms the individual deep learning architectures and shows promising results on both databases. In other words, our proposed Ensemble-CNNs achieved high performance in the recognition of COVID-19 infection, resulting in accuracies of 100% and 98.1% in the three-class and five-class scenarios, respectively. In addition, our approach achieved promising overall recognition accuracies of 75.23% and 81.0% for the three-class and five-class scenarios, respectively. We make our databases of COVID-19 X-ray scans publicly available to encourage other researchers to use them as a benchmark for their studies and comparisons. Full article
(This article belongs to the Section Sensing and Imaging)

Article
Triboelectric Rotary Motion Sensor for Industrial-Grade Speed and Angle Monitoring
Sensors 2021, 21(5), 1713; https://doi.org/10.3390/s21051713 - 02 Mar 2021
Cited by 8
Abstract
Mechanical motion sensing and monitoring is an important component of industrial automation. Rotary motion is one of the most basic forms of mechanical motion, so monitoring the rotary motion state is of great significance for the entire industry. In this paper, a triboelectric rotary motion sensor (TRMS) with variable-amplitude differential hybrid electrodes is proposed, and an integrated monitoring system (IMS) is designed to realize real-time monitoring of the industrial-grade rotary motion state. First, the operating principle and monitoring characteristics are studied. The experimental results indicate that the TRMS can measure rotation speed in the range of 10–1000 rpm with good linearity, with a speed error rate of less than 0.8%. In addition, the TRMS has an angle monitoring range of 360° and a resolution of 1.5° in bidirectional rotation. Finally, applications of the designed TRMS and IMS prove the feasibility of self-powered rotary motion monitoring. This work further promotes the development of triboelectric sensors (TESs) in industrial applications. Full article
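The quoted figures are consistent with simple encoder arithmetic: a hypothetical electrode layout producing 240 pulses per revolution would give exactly the 1.5° angular resolution mentioned above. The sketch below shows this generic pulse-to-speed conversion; it is not the TRMS-specific signal chain, and the counts are invented.

```python
def rpm_from_pulses(pulse_count, pulses_per_rev, window_s):
    """Rotation speed in rpm from counted pulses.

    Generic rotary-encoder arithmetic, not the TRMS-specific electrode
    layout: pulses / pulses_per_rev gives revolutions per time window.
    """
    return pulse_count / pulses_per_rev / window_s * 60.0

def angular_resolution_deg(pulses_per_rev):
    """Smallest detectable angle step for a given pulse density."""
    return 360.0 / pulses_per_rev
```

With 240 pulses per revolution, counting 240 pulses in one second corresponds to 60 rpm, and the angle step is 1.5°.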
(This article belongs to the Section Physical Sensors)

Article
Fault Prediction and Early-Detection in Large PV Power Plants Based on Self-Organizing Maps
Sensors 2021, 21(5), 1687; https://doi.org/10.3390/s21051687 - 01 Mar 2021
Cited by 8
Abstract
In this paper, a novel and flexible solution for fault prediction based on data collected from a Supervisory Control and Data Acquisition (SCADA) system is presented. Generic fault/status prediction is offered by means of a data-driven approach based on a self-organizing map (SOM) and the definition of an original Key Performance Indicator (KPI). The model has been assessed on a fleet of three photovoltaic (PV) plants with installed capacities of up to 10 MW, and on more than sixty inverter modules of three different technology brands. The results indicate that the proposed method is effective in predicting incipient generic faults on average up to 7 days in advance, with a true-positive rate of up to 95%. The model is easily deployable for on-line monitoring of anomalies on new PV plants and technologies, requiring only the availability of historical SCADA data, a fault taxonomy, and the inverter electrical datasheet. Full article
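The SOM-based idea can be illustrated with the usual quantization error: after training on healthy data, the distance from a new SCADA sample to its best-matching prototype serves as an anomaly indicator. A minimal sketch with hand-picked prototypes (the paper's actual KPI definition is more elaborate):

```python
def quantization_error(sample, prototypes):
    """Distance from a sample to its best-matching SOM prototype.

    A model trained on healthy SCADA data yields prototypes of normal
    operation; a rising quantization error then flags anomalies.
    Sketch only; the paper defines a more elaborate KPI.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(dist(sample, p) for p in prototypes)

# Hand-picked stand-ins for trained SOM units (normalized features)
protos = [[0.0, 0.0], [1.0, 1.0]]
healthy = quantization_error([0.1, 0.0], protos)   # near normal operation
faulty = quantization_error([3.0, 3.0], protos)    # far from all prototypes
```

Thresholding this error over time is one common way to turn the SOM output into an early-warning signal.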
(This article belongs to the Special Issue Fault Detection and Localization Using Electromagnetic Sensors)

Article
Proof of Concept for a Quick and Highly Sensitive On-Site Detection of SARS-CoV-2 by Plasmonic Optical Fibers and Molecularly Imprinted Polymers
Sensors 2021, 21(5), 1681; https://doi.org/10.3390/s21051681 - 01 Mar 2021
Cited by 24
Abstract
The rapid spread of the Coronavirus Disease 2019 (COVID-19) pandemic, caused by the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) pathogen, has generated a huge international public health emergency. Currently, the reference diagnostic technique for virus determination is real-time Reverse Transcription Polymerase Chain Reaction (RT-PCR) analysis, which requires specialized equipment, reagents, and facilities, and typically takes 3–4 h to perform. Thus, the realization of simple, low-cost, small-size, rapid, point-of-care diagnostic tests has become a global priority. In response to the current need for quick, highly sensitive, on-site detection of the SARS-CoV-2 virus in several aqueous solutions, a specific molecularly imprinted polymer (MIP) receptor has been designed, realized, and combined with an optical sensor. More specifically, the proof of concept of a SARS-CoV-2 sensor has been demonstrated by coupling a plasmonic plastic optical fiber sensor with a novel synthetic MIP nano-layer, designed for the specific recognition of Subunit 1 of the SARS-CoV-2 spike protein. First, we tested the effectiveness of the developed MIP receptor in binding Subunit 1 of the SARS-CoV-2 spike protein; then, the results of preliminary tests on SARS-CoV-2 virions, performed on samples of nasopharyngeal (NP) swabs in universal transport medium (UTM) and physiological solution (0.9% NaCl), were compared with those obtained with RT-PCR. According to these preliminary results, the sensitivity of the proposed optical-chemical sensor proved to be higher than that of RT-PCR. Furthermore, a relatively fast response time (about 10 min) to the virus was obtained without the use of additional reagents. Full article
(This article belongs to the Collection Optical Fiber Sensors)

Article
Deep Reinforcement Learning-Based Task Scheduling in IoT Edge Computing
Sensors 2021, 21(5), 1666; https://doi.org/10.3390/s21051666 - 28 Feb 2021
Cited by 20
Abstract
Edge computing (EC) has recently emerged as a promising paradigm that supports resource-hungry Internet of Things (IoT) applications with low latency services at the network edge. However, the limited capacity of computing resources at the edge server poses great challenges for scheduling application tasks. In this paper, a task scheduling problem is studied in the EC scenario, and multiple tasks are scheduled to virtual machines (VMs) configured at the edge server by maximizing the long-term task satisfaction degree (LTSD). The problem is formulated as a Markov decision process (MDP) for which the state, action, state transition, and reward are designed. We leverage deep reinforcement learning (DRL) to solve both time scheduling (i.e., the task execution order) and resource allocation (i.e., which VM the task is assigned to), considering the diversity of the tasks and the heterogeneity of available resources. A policy-based REINFORCE algorithm is proposed for the task scheduling problem, and a fully-connected neural network (FCN) is utilized to extract the features. Simulation results show that the proposed DRL-based task scheduling algorithm outperforms the existing methods in the literature in terms of the average task satisfaction degree and success ratio. Full article
(This article belongs to the Special Issue Edge/Fog Computing Technologies for IoT Infrastructure)
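The policy-based REINFORCE scheduler described in the abstract above can be illustrated with a minimal stdlib loop. This is not the paper's implementation: the per-VM rewards stand in for the task satisfaction degree, the neural network is replaced by a bare logit vector, and all names (`train_policy`, the reward values) are illustrative assumptions.

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def train_policy(episodes=500, alpha=0.1, seed=0):
    """Toy REINFORCE loop: learn which VM maximises a fixed, hypothetical reward."""
    rng = random.Random(seed)
    reward = [1.0, 0.6, 0.3]      # stand-in for the task satisfaction degree per VM
    logits = [0.0] * len(reward)  # the "policy network" is just a logit vector here
    for _ in range(episodes):
        probs = softmax(logits)
        vm = rng.choices(range(len(reward)), weights=probs)[0]
        baseline = sum(r * p for r, p in zip(reward, probs))
        advantage = reward[vm] - baseline
        # d/d(logit_v) log pi(vm) = 1[v == vm] - probs[v]
        for v in range(len(reward)):
            indicator = 1.0 if v == vm else 0.0
            logits[v] += alpha * advantage * (indicator - probs[v])
    return softmax(logits)

probs = train_policy()
# With this toy reward, the policy should come to prefer VM 0.
```

Because the advantage for the highest-reward VM is always positive, the loop steadily shifts probability mass toward it, which is the core mechanism behind policy-gradient scheduling.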

Article
Breathable Textile Rectangular Ring Microstrip Patch Antenna at 2.45 GHz for Wearable Applications
Sensors 2021, 21(5), 1635; https://doi.org/10.3390/s21051635 - 26 Feb 2021
Cited by 9
Abstract
A textile patch antenna is an attractive package for wearable applications, as it offers flexibility, low weight, easy integration into the garment, and better comfort for the wearer. When it comes to wearability, comfort comes ahead of all other properties. The air permeability and the water vapor permeability of textiles are linked to the thermophysiological comfort of the wearer, as they help to improve the breathability of textiles. This paper covers the construction of a breathable textile rectangular ring microstrip patch antenna with improved water vapor permeability. Highly air-permeable conductive fabrics and 3-dimensional knitted spacer dielectric substrates were selected to ensure better water vapor permeability of the antenna. To further improve its water vapor permeability, a novel approach of inserting a large number of small holes of 1 mm diameter in the conductive layers (the patch and the ground plane) of the antenna was adopted. The insertion of these holes also improved the flexibility of the antenna. The result was a breathable, perforated textile rectangular ring microstrip patch antenna with a water vapor permeability as high as 5296.70 g/m2 per day, an air permeability as high as 510 mm/s, and radiation gains of 4.2 dBi and 5.4 dBi in the E-plane and H-plane, respectively. The antenna was designed to resonate at 2.45 GHz in the Industrial, Scientific and Medical band. Full article
(This article belongs to the Special Issue Wearable Antennas)

Article
Accuracy Investigation of the Pose Determination of a VR System
Sensors 2021, 21(5), 1622; https://doi.org/10.3390/s21051622 - 25 Feb 2021
Cited by 10
Abstract
The usage of VR gear in mixed reality applications demands a high position and orientation accuracy of all devices to achieve a satisfying user experience. This paper investigates the system behaviour of the VR system HTC Vive Pro at a testing facility that is designed for the calibration of highly accurate positioning instruments like geodetic total stations, tilt sensors, geodetic gyroscopes or industrial laser scanners. Although the experiments show a high reproducibility of the position readings within a few millimetres, the VR system has systematic effects with magnitudes of several centimetres. A tilt of about 0.4° of the reference plane with respect to the horizontal plane was detected. Moreover, our results demonstrate that the tracking algorithm faces problems when several lighthouses are used. Full article
(This article belongs to the Section Wearables)

Article
An Ensemble Learning Solution for Predictive Maintenance of Wind Turbines Main Bearing
Sensors 2021, 21(4), 1512; https://doi.org/10.3390/s21041512 - 22 Feb 2021
Cited by 8
Abstract
A novel and innovative solution addressing wind turbines’ main bearing failure prediction using SCADA data is presented. This methodology cuts setup times and has more flexible requirements than current predictive algorithms. The proposed solution is entirely unsupervised, as it does not require labeling data through work-order logs. Results of interpretable algorithms, tailored to capture specific aspects of main bearing failures, are merged into a combined health status indicator using Ensemble Learning principles. Because it is based on multiple specialized indicators, the interpretability of the results is greater than that of black-box solutions that address the problem with a single complex algorithm. The proposed methodology has been tested on a dataset covering more than two years of operation from two onshore wind farms, totaling 84 turbines. All four main bearing failures are anticipated at least one month in advance. Combining individual indicators into a composite one proved effective with regard to all the tracked metrics. An accuracy of 95.1%, precision of 24.5%, and F1 score of 38.5% are obtained by averaging the values across the two wind farms. The encouraging results, the unsupervised nature, and the flexibility and scalability of the proposed solution make it particularly attractive for online monitoring systems used on single wind farms as well as entire wind turbine fleets. Full article
(This article belongs to the Special Issue Sensors for Wind Turbine Fault Diagnosis and Prognosis)
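The idea of merging several specialized indicators into one composite health status can be sketched as a weighted average with an alarm threshold. This is an illustrative assumption, not the paper's actual ensemble: the function names and the 0-to-1 score convention are invented for the example.

```python
def combined_health(indicators, weights=None):
    """Weighted average of per-aspect health scores (0 = healthy, 1 = faulty)."""
    if weights is None:
        weights = [1.0] * len(indicators)
    return sum(w * s for w, s in zip(weights, indicators)) / sum(weights)

def alarm(indicators, threshold=0.5):
    """Raise a maintenance alarm when the composite index crosses the threshold."""
    return combined_health(indicators) >= threshold
```

Each indicator stays individually inspectable, which is what gives such an ensemble its interpretability advantage over a single black-box score.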

Article
A Data-Driven Approach to Predict Fatigue in Exercise Based on Motion Data from Wearable Sensors or Force Plate
Sensors 2021, 21(4), 1499; https://doi.org/10.3390/s21041499 - 22 Feb 2021
Cited by 7
Abstract
Fatigue increases the risk of injury during sports training and rehabilitation. Early detection of fatigue during exercises would help adapt the training in order to prevent over-training and injury. This study lays the foundation for a data-driven model to automatically predict the onset of fatigue and quantify consequent fatigue changes using a force plate (FP) or inertial measurement units (IMUs). The force plate and body-worn IMUs were used to capture movements associated with exercises (squats, high knee jacks, and corkscrew toe-touch) to estimate participant-specific fatigue levels in a continuous fashion using random forest (RF) regression and convolutional neural network (CNN) based regression models. Analysis of unseen data showed high correlation (up to 89%, 93%, and 94% for the squat, jack, and corkscrew exercises, respectively) between the predicted and self-reported fatigue levels. Predictions using force plate data achieved performance similar to those with IMU data; the best results in both cases were achieved with a convolutional neural network. The displacement of the center of pressure (COP) was found to be more strongly correlated with fatigue than other commonly used force plate features. Bland–Altman analysis also confirmed that the predicted fatigue levels were close to the true values. These results contribute to the field of human motion recognition by proposing a deep neural network model that can detect fairly small changes in motion data in a continuous process and quantify the movement. Given the successful findings with three different exercises, the general methodology is potentially applicable to a variety of other forms of exercise, thereby contributing to the future adaptation of exercise programs and the prevention of over-training and injury as a result of excessive fatigue. Full article
(This article belongs to the Special Issue Sensor-Based Measurement of Human Motor Performance)
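The Bland–Altman analysis mentioned above computes the bias (mean difference) between predicted and self-reported fatigue levels and the 95% limits of agreement. A minimal stdlib sketch, with illustrative function and argument names:

```python
import math

def bland_altman(predicted, reported):
    """Bias (mean difference) and 95% limits of agreement between two measures."""
    diffs = [p - r for p, r in zip(predicted, reported)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))  # sample SD
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Agreement is good when the bias is near zero and the limits of agreement are narrow relative to the measurement scale.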

Article
COVID-19 Detection from Chest X-ray Images Using Feature Fusion and Deep Learning
Sensors 2021, 21(4), 1480; https://doi.org/10.3390/s21041480 - 20 Feb 2021
Cited by 35
Abstract
Currently, COVID-19, caused by the novel coronavirus, is considered the most dangerous and deadly disease for the human body. In December 2019, the coronavirus, thought to have originated in Wuhan, China, spread rapidly around the world and is responsible for a large number of deaths. Early detection of COVID-19 through accurate diagnosis, particularly in cases with no obvious symptoms, may decrease the patient death rate. Chest X-ray images are primarily used for the diagnosis of this disease. This research proposes a machine vision approach to detect COVID-19 from chest X-ray images. The features extracted by the histogram of oriented gradients (HOG) and a convolutional neural network (CNN) from X-ray images were fused to develop the classification model through training by a CNN (VGGNet). A modified anisotropic diffusion filtering (MADF) technique was employed for better edge preservation and reduced noise in the images. A watershed segmentation algorithm was used to mark the significant fracture region in the input X-ray images. The testing stage considered generalized data for performance evaluation of the model. Cross-validation analysis revealed that a 5-fold strategy could successfully mitigate the overfitting problem. The proposed feature fusion using the deep learning technique assured satisfactory performance in identifying COVID-19 compared to immediately relevant works, with a testing accuracy of 99.49%, specificity of 95.7%, and sensitivity of 93.65%. When compared to other classification techniques, such as ANN, KNN, and SVM, the CNN technique used in this study showed better classification performance. K-fold cross-validation demonstrated that the proposed feature fusion technique (98.36%) provided higher accuracy than the individual feature extraction methods, such as HOG (87.34%) or CNN (93.64%). Full article
(This article belongs to the Section Sensing and Imaging)

Article
Volatile Organic Compound Vapour Measurements Using a Localised Surface Plasmon Resonance Optical Fibre Sensor Decorated with a Metal-Organic Framework
Sensors 2021, 21(4), 1420; https://doi.org/10.3390/s21041420 - 18 Feb 2021
Cited by 10
Abstract
A tip-based fibreoptic localised surface plasmon resonance (LSPR) sensor is reported for the sensing of volatile organic compounds (VOCs). The sensor is developed by coating the tip of a multi-mode optical fibre with gold nanoparticles (size: 40 nm) via a chemisorption process and further functionalisation with the HKUST-1 metal–organic framework (MOF) via a layer-by-layer process. Sensors coated with different numbers of MOF cycles (40, 80 and 120), corresponding to different crystallisation processes, are reported. The sensor with 40 coating cycles shows no measurable response to any of the tested volatile organic compounds (acetone, ethanol and methanol). However, sensors with 80 and 120 coating cycles show a significant redshift of the resonance wavelength (up to ~9 nm) for all tested volatile organic compounds as a result of an increase in the local refractive index induced by VOC capture in the HKUST-1 thin film. The sensors gradually saturate as VOC concentration increases (up to 3.41%, 4.30% and 6.18% in the acetone, ethanol and methanol measurements, respectively) and show a fully reversible response when the concentration decreases. The sensor with the thickest film exhibits slightly higher sensitivity than the sensor with a thinner film. The sensitivity of the 120-cycle-coated MOF sensor is 13.7 nm/% (R2 = 0.951) with a limit of detection (LoD) of 0.005% in the measurement of acetone, 15.5 nm/% (R2 = 0.996) with an LoD of 0.003% in the measurement of ethanol and 6.7 nm/% (R2 = 0.998) with an LoD of 0.011% in the measurement of methanol. The response and recovery times were calculated as 9.35 and 3.85 min for acetone; 5.35 and 2.12 min for ethanol; and 2.39 and 1.44 min for methanol. The humidity and temperature crosstalk of the 120-cycle-coated MOF sensor was measured as 0.5 ± 0.2 nm and 0.5 ± 0.1 nm over the ranges of 50–75% relative humidity (RH) and 20–25 °C, respectively. Full article
(This article belongs to the Special Issue Volatile Organic Compounds Detection with Optical Fiber Sensors)
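Sensitivity (slope of the wavelength-shift calibration line, nm/%) and limit of detection figures like those quoted above are commonly obtained as sketched below, using LoD = 3σ_blank / slope. The helper names and the σ_blank value in the usage are illustrative assumptions, not the paper's exact procedure.

```python
def linear_fit(xs, ys):
    """Least-squares slope and intercept of a calibration line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def limit_of_detection(slope, blank_sd):
    """LoD = 3 * sigma_blank / sensitivity (slope, e.g. in nm per % concentration)."""
    return 3.0 * blank_sd / slope
```

For example, a perfectly linear response of 15.5 nm per % concentration with a hypothetical blank noise of 0.0155 nm gives an LoD of 0.003%, matching the order of magnitude reported for ethanol.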

Article
Combining Augmented Reality and 3D Printing to Improve Surgical Workflows in Orthopedic Oncology: Smartphone Application and Clinical Evaluation
Sensors 2021, 21(4), 1370; https://doi.org/10.3390/s21041370 - 15 Feb 2021
Cited by 12
Abstract
During the last decade, orthopedic oncology has experienced the benefits of computerized medical imaging to reduce human dependency, improving accuracy and clinical outcomes. However, traditional surgical navigation systems do not always adapt properly to these kinds of interventions. Augmented reality (AR) and three-dimensional (3D) printing are technologies recently introduced into the surgical environment with promising results. Here we present an innovative solution combining 3D printing and AR in orthopedic oncological surgery. A new surgical workflow is proposed, including 3D printed models and a novel AR-based smartphone application (app). This app can display the patient’s anatomy and the tumor’s location. A 3D-printed reference marker, designed to fit in a unique position of the affected bone tissue, enables automatic registration. The system has been evaluated in terms of visualization accuracy and usability during the whole surgical workflow. Experiments on six realistic phantoms provided a visualization error below 3 mm. The AR system was tested in two clinical cases during surgical planning, patient communication, and surgical intervention. These results and the positive feedback obtained from surgeons and patients suggest that the combination of AR and 3D printing can improve efficacy, accuracy, and patients’ experience. Full article
(This article belongs to the Special Issue Computer Vision for 3D Perception and Applications)

Article
Understanding LSTM Network Behaviour of IMU-Based Locomotion Mode Recognition for Applications in Prostheses and Wearables
Sensors 2021, 21(4), 1264; https://doi.org/10.3390/s21041264 - 10 Feb 2021
Cited by 16
Abstract
Human Locomotion Mode Recognition (LMR) has the potential to be used as a control mechanism for lower-limb active prostheses. Active prostheses can assist and restore a more natural gait for amputees, but as a medical device it must minimize user risks, such as falls and trips. As such, any control system must have high accuracy and robustness, with a detailed understanding of its internal operation. Long Short-Term Memory (LSTM) machine-learning networks can perform LMR with high accuracy levels. However, the internal behavior during classification is unknown, and they struggle to generalize when presented with novel users. The target problem addressed in this paper is understanding the LSTM classification behavior for LMR. A dataset of six locomotive activities (walking, stopped, stairs and ramps) from 22 non-amputee subjects is collected, capturing both steady-state and transitions between activities in natural environments. Non-amputees are used as a substitute for amputees to provide a larger dataset. The dataset is used to analyze the internal behavior of a reduced complexity LSTM network. This analysis identifies that the model primarily classifies activity type based on data around early stance. Evaluation of generalization for unseen subjects reveals low sensitivity to hyper-parameters and over-fitting to individuals’ gait traits. Investigating the differences between individual subjects showed that gait variations between users primarily occur in early stance, potentially explaining the poor generalization. Adjustment of hyper-parameters alone could not solve this, demonstrating the need for individual personalization of models. The main achievements of the paper are (i) the better understanding of LSTM for LMR, (ii) demonstration of its low sensitivity to learning hyper-parameters when evaluating novel user generalization, and (iii) demonstration of the need for personalization of ML models to achieve acceptable accuracy. Full article

Article
Sequential Model Based Intrusion Detection System for IoT Servers Using Deep Learning Methods
Sensors 2021, 21(4), 1113; https://doi.org/10.3390/s21041113 - 05 Feb 2021
Cited by 19
Abstract
IoT plays an important role in daily life; commands and data are transferred rapidly between servers and objects to provide services. However, cyber threats have become a critical factor, especially for IoT servers, and a robust way to protect network infrastructures from various attacks is needed. An IDS (Intrusion Detection System) is the invisible guardian of IoT servers. Many machine learning methods have been applied in IDSs; however, there is a need to improve IDSs in terms of both accuracy and performance. Deep learning is a promising technique that has been used in many areas, including pattern recognition and natural language processing, and it reveals more potential than traditional machine learning methods. In this paper, the sequential model is the key point, and new methods are proposed based on the features of the model. The model can collect features from the network layer via tcpdump packets and from the application layer via system routines. The Text-CNN and GRU methods are chosen because they can treat sequential data as a language model. Their advantage over traditional methods is that they can extract more features from the data, and the experiments show that these deep learning methods achieve a higher F1-score. We conclude that a sequential model-based intrusion detection system using deep learning methods can contribute to the security of IoT servers. Full article
(This article belongs to the Special Issue Security and Privacy in Large-Scale Data Networks)

Article
Deep Learning Approaches on Defect Detection in High Resolution Aerial Images of Insulators
Sensors 2021, 21(4), 1033; https://doi.org/10.3390/s21041033 - 03 Feb 2021
Cited by 15
Abstract
By detecting the defect location in high-resolution insulator images collected by unmanned aerial vehicles (UAVs) in various environments, power failures can be detected in a timely manner and the resulting economic loss can be reduced. However, the accuracy of existing detection methods is greatly limited by complex background interference and small-target detection. To solve this problem, two deep learning methods based on Faster R-CNN (faster region-based convolutional neural network) are proposed in this paper, namely Exact R-CNN (exact region-based convolutional neural network) and CME-CNN (cascaded mask extraction and exact region-based convolutional neural network). First, we propose an Exact R-CNN based on a series of advanced techniques, including FPN (feature pyramid network), cascade regression, and GIoU (generalized intersection over union). RoI Align (region of interest align) is introduced to replace RoI pooling (region of interest pooling) to address the misalignment problem, and depthwise separable convolution and the linear bottleneck are introduced to reduce the computational burden. Second, a new pipeline, CME-CNN, is proposed to improve the performance of insulator defect detection. In CME-CNN, an insulator mask image is first generated to eliminate the complex background using an encoder-decoder mask extraction network, and then the Exact R-CNN is used to detect the insulator defects. The experimental results show that our proposed method can effectively detect insulator defects, and its accuracy is better than that of the examined mainstream target detection algorithms. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)

Article
Structural Health Monitoring Using Ultrasonic Guided-Waves and the Degree of Health Index
Sensors 2021, 21(3), 993; https://doi.org/10.3390/s21030993 - 02 Feb 2021
Cited by 9
Abstract
This paper proposes a new damage index named degree of health (DoH) to efficiently tackle structural damage monitoring in real-time. As a key contribution, the proposed index relies on a pattern matching methodology that measures the time-of-flight mismatch of sequential ultrasonic guided-wave measurements using fuzzy logic fundamentals. The ultrasonic signals are generated using the transmission beamforming technique with a phased-array of piezoelectric transducers. The acquisition is carried out by two phased-arrays to compare the influence of pulse-echo and pitch-catch modes in the damage assessment. The proposed monitoring approach is illustrated in a fatigue test of an aluminum sheet with an initial notch. As an additional novelty, the proposed pattern matching methodology uses the data stemming from the transmission beamforming technique for structural health monitoring. The results demonstrate the efficiency and robustness of the proposed framework in providing a qualitative and quantitative assessment for fatigue crack damage. Full article
(This article belongs to the Special Issue Structural Health Monitoring with Ultrasonic Guided-Waves Sensors)
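The time-of-flight mismatch between sequential guided-wave measurements can be estimated, in its simplest form, as the lag that maximises the cross-correlation of the two signals. This brute-force sketch is illustrative only; the paper's pattern matching additionally applies fuzzy-logic fundamentals, which are omitted here.

```python
def delay_by_xcorr(ref, sig):
    """Lag (in samples) of `sig` relative to `ref` that maximises cross-correlation."""
    n = len(ref)
    best_lag, best_score = 0, float("-inf")
    for lag in range(-(n - 1), n):
        score = 0.0
        for i in range(n):
            j = i - lag
            if 0 <= j < n:       # only overlapping samples contribute
                score += ref[j] * sig[i]
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

A growing crack lengthens the wave path, so tracking how this lag drifts across sequential measurements gives a simple damage-sensitive feature.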

Article
A Deep Learning Model for Predictive Maintenance in Cyber-Physical Production Systems Using LSTM Autoencoders
Sensors 2021, 21(3), 972; https://doi.org/10.3390/s21030972 - 01 Feb 2021
Cited by 17
Abstract
Condition monitoring of industrial equipment, combined with machine learning algorithms, may significantly improve maintenance activities on modern cyber-physical production systems. However, data of proper quality and adequate quantity, modeling both good operational conditions and abnormal situations throughout the operational lifecycle, are required, and such data are difficult to acquire non-destructively. In this context, this study investigates an approach to enable a transition from preventive maintenance activities, which are scheduled at predetermined time intervals, to predictive ones. In order to enable such approaches in a cyber-physical production system, a deep learning algorithm is used, allowing maintenance activities to be planned according to the actual operational status of the machine rather than in advance. An autoencoder-based methodology is employed for classifying real-world machine and sensor data into a set of condition-related labels. Real-world data collected from manufacturing operations are used for training and testing a prototype implementation of Long Short-Term Memory autoencoders for estimating the remaining useful life of the monitored equipment. Finally, the proposed approach is evaluated in a use case related to a steel industry production process. Full article
(This article belongs to the Special Issue Cyberphysical Sensing Systems for Fault Detection and Identification)
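A common way to turn autoencoder reconstruction error into condition-related labels, in the spirit of the approach above, is to calibrate a threshold on healthy data and flag samples that exceed it. This is a generic sketch under that assumption, not the paper's method; the names and the k = 3 rule are illustrative.

```python
import statistics

def calibrate_threshold(healthy_errors, k=3.0):
    """Alarm threshold = mean + k * std of reconstruction error on healthy data."""
    return statistics.mean(healthy_errors) + k * statistics.stdev(healthy_errors)

def label_condition(error, threshold):
    """Map a reconstruction error to a coarse condition-related label."""
    return "degraded" if error > threshold else "healthy"
```

The autoencoder only ever sees healthy data during training, so abnormal operation shows up as reconstruction error it cannot explain, which is what makes the approach usable when failure examples are scarce.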

Article
Reduced Graphene Oxide and Polyaniline Nanofibers Nanocomposite for the Development of an Amperometric Glucose Biosensor
Sensors 2021, 21(3), 948; https://doi.org/10.3390/s21030948 - 01 Feb 2021
Cited by 16
Abstract
The control of glucose concentration is a crucial factor in clinical diagnosis and the food industry. Electrochemical biosensors based on reduced graphene oxide (rGO) and conducting polymers have a high potential for practical application. A novel thermal reduction protocol of graphene oxide (GO) in the presence of malonic acid was applied for the synthesis of rGO. The rGO was characterized by scanning electron microscopy, X-ray diffraction analysis, Fourier-transform infrared spectroscopy, and Raman spectroscopy. rGO in combination with polyaniline (PANI), Nafion, and glucose oxidase (GOx) was used to develop an amperometric glucose biosensor. A graphite rod (GR) electrode premodified with a dispersion of PANI nanostructures and rGO, Nafion, and GOx was proposed as the working electrode of the biosensor. The optimal ratio of PANI and rGO in the dispersion used as a matrix for GOx immobilization was equal to 1:10. The developed glucose biosensor was characterized by a wide linear range (from 0.5 to 50 mM), low limit of detection (0.089 mM), good selectivity, reproducibility, and stability. Therefore, the developed biosensor is suitable for glucose determination in human serum. The PANI nanostructure and rGO dispersion is a promising material for the construction of electrochemical glucose biosensors. Full article
(This article belongs to the Section Biosensors)

Article
WSN-SLAP: Secure and Lightweight Mutual Authentication Protocol for Wireless Sensor Networks
Sensors 2021, 21(3), 936; https://doi.org/10.3390/s21030936 - 30 Jan 2021
Cited by 14
Abstract
Wireless sensor networks (WSNs) are widely used to provide users with convenient services such as health care and smart homes. To provide these services, sensor nodes in WSN environments collect the sensing data and send them to the gateway. However, WSNs can suffer from serious security issues because sensitive messages are exchanged through an insecure channel. Therefore, secure authentication protocols are necessary to prevent security flaws in WSNs. In 2020, Moghadam et al. suggested an efficient authentication and key agreement scheme in WSNs. Unfortunately, we discovered that Moghadam et al.’s scheme cannot prevent insider and session-specific random number leakage attacks. We also prove that Moghadam et al.’s scheme does not ensure perfect forward secrecy. To address these vulnerabilities, we propose a secure and lightweight mutual authentication protocol for WSNs (WSN-SLAP). WSN-SLAP resists various security attacks and provides perfect forward secrecy and mutual authentication. We prove the security of WSN-SLAP using Burrows-Abadi-Needham (BAN) logic, the Real-or-Random (ROR) model, and Automated Validation of Internet Security Protocols and Applications (AVISPA) simulation. In addition, we evaluate the performance of WSN-SLAP compared with existing related protocols. We demonstrate that WSN-SLAP is more secure and suitable for WSN environments than previous protocols. Full article
(This article belongs to the Special Issue Cryptography and Information Security in Wireless Sensor Networks)

Article
Personalized Human Activity Recognition Based on Integrated Wearable Sensor and Transfer Learning
Sensors 2021, 21(3), 885; https://doi.org/10.3390/s21030885 - 28 Jan 2021
Cited by 19
Abstract
Human activity recognition (HAR) based on wearable devices has attracted increasing attention from researchers as sensor technology has developed in recent years. However, personalized HAR requires high recognition accuracy while maintaining the model’s generalization capability, which is a major challenge in this field. This paper presents a compact wireless wearable sensor node, which combines an air pressure sensor and an inertial measurement unit (IMU) to provide multi-modal information for HAR model training. To solve personalized recognition of user activities, we propose a new transfer learning algorithm, a joint probability domain adaptive method with improved pseudo-labels (IPL-JPDA). This method adds the improved pseudo-label strategy to the JPDA algorithm to avoid cumulative errors due to inaccurate initial pseudo-labels. In order to verify our equipment and method, we used the newly designed sensor node to collect seven daily activities from seven subjects. Nine different HAR models were trained by traditional machine learning and transfer learning methods. The experimental results show that the multi-modal data improve the accuracy of the HAR system. The IPL-JPDA algorithm proposed in this paper has the best performance among the five HAR models, and the average recognition accuracy across subjects is 93.2%. Full article
(This article belongs to the Special Issue Wearable Sensor for Activity Analysis and Context Recognition)
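The pseudo-label idea behind such transfer learning can be illustrated with a toy loop: label target samples with class centroids built from the source domain, then iteratively rebuild the centroids from both domains to refine the pseudo-labels. This stands in for IPL-JPDA only conceptually; a nearest-centroid classifier replaces the joint probability domain adaptation, and all names are illustrative.

```python
def nearest(point, cents):
    """Label of the closest centroid (squared Euclidean distance)."""
    return min(cents, key=lambda lab: sum((a - b) ** 2
                                          for a, b in zip(point, cents[lab])))

def centroids(points, labels):
    """Per-label mean vector."""
    acc = {}
    for p, lab in zip(points, labels):
        s, n = acc.get(lab, ([0.0] * len(p), 0))
        acc[lab] = ([a + b for a, b in zip(s, p)], n + 1)
    return {lab: [v / n for v in s] for lab, (s, n) in acc.items()}

def pseudo_label(source_pts, source_lbls, target_pts, rounds=5):
    """Assign and iteratively refine pseudo-labels for unlabeled target samples."""
    cents = centroids(source_pts, source_lbls)
    pseudo = [nearest(p, cents) for p in target_pts]
    for _ in range(rounds):
        # Rebuild centroids from both domains, then relabel the target data.
        cents = centroids(source_pts + target_pts, source_lbls + pseudo)
        pseudo = [nearest(p, cents) for p in target_pts]
    return pseudo
```

The refinement rounds matter because initial pseudo-labels from the source-only model may be wrong; re-estimating class statistics with the target data included is what reduces that cumulative error.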