Topic Editors

Prof. Dr. Yangquan Chen
Department of Mechanical Engineering (ME), University of California, Merced, CA 95343, USA
Prof. Dr. Subhas Mukhopadhyay
School of Engineering, Macquarie University, Sydney, NSW 2109, Australia
Dr. Nunzio Cennamo
Department of Engineering, University of Campania Luigi Vanvitelli, Via Roma 29, 81031 Aversa, Italy
Prof. Dr. M. Jamal Deen
Department of Electrical and Computer Engineering, McMaster University, Hamilton, ON L8S4K1, Canada
Prof. Dr. Junseop Lee
Department of Materials Science and Engineering, Gachon University, 1342 Seongnam-Daero, Sujeong-Gu, Seongnam-Si, Gyeonggi-Do 13120, Korea
Prof. Dr. Simone Morais
REQUIMTE–LAQV, School of Engineering, Polytechnic Institute of Porto, 4249-015 Porto, Portugal

Artificial Intelligence in Sensors

Abstract submission deadline: closed (30 October 2022)
Manuscript submission deadline: 31 December 2022
Viewed by: 110,373

Topic Information

Dear Colleagues,

This topic comprises several interdisciplinary research areas that cover the main aspects of sensor sciences.

Both the capabilities required of sensors and the challenges they face have grown across numerous application fields, e.g., Robotics, Industry 4.0, Automotive, Smart Cities, Medicine, Diagnosis, Food, Telecommunications, Environmental and Civil Applications, Health, and Security.

These applications constantly require novel sensors to extend their capabilities and address their challenges. Thus, Sensor Sciences represents a paradigm characterized by the integration of modern nanotechnologies and nanomaterials into manufacturing and industrial practice to develop tools for a wide range of application fields. The primary underlying goal of Sensor Sciences is to facilitate the closer interconnection and control of complex systems, machines, devices, and people, thereby increasing the support provided to humans in these fields.

Sensor Sciences comprises a set of significant research fields, including:

  • Chemical Sensors;
  • Biosensors;
  • Physical Sensors;
  • Optical Sensors;
  • Microfluidics;
  • Sensor Networks;
  • Electronics and Mechanics;
  • Mechatronics;
  • Internet of Things platforms and their applications;
  • Materials and Nanomaterials;
  • Data Security;
  • Artificial Intelligence;
  • Robotics;
  • UAVs and UGVs;
  • Remote Sensing;
  • Measurement Science and Technology;
  • Cognitive Computing Platforms and Applications, including technologies related to Artificial Intelligence, Machine Learning, as well as Big Data Processing and Analytics;
  • Advanced Interactive Technologies, including Augmented/Virtual Reality;
  • Advanced Data Visualization Techniques;
  • Instrumentation Science and Technology;
  • Nanotechnology;
  • Organic Electronics, Biophotonics, and Smart Materials;
  • Optoelectronics, Photonics, and Optical fibers;
  • MEMS, Microwaves, and Acoustic waves;
  • Physics and Biophysics;
  • Interdisciplinary Sciences.

This topic aims to collect the results of research in these and related fields. Therefore, the submission of papers in any area connected to sensors is strongly encouraged.

Prof. Dr. YangQuan Chen
Prof. Dr. Nunzio Cennamo
Prof. Dr. Subhas Mukhopadhyay
Prof. Dr. Simone Morais
Topic Editors

Keywords

  • chemical sensors
  • biosensors
  • remote sensing
  • physical sensors
  • optical sensors
  • microfluidics
  • sensor networks
  • electronics and mechanics
  • mechatronics
  • internet of things platforms and their applications
  • materials and nanomaterials
  • data security
  • measurement science and technology
  • cognitive computing platforms and applications, including technologies related to artificial intelligence, machine learning, as well as big data processing and analytics
  • advanced interactive technologies, including augmented/virtual reality
  • advanced data visualization techniques
  • instrumentation science and technology
  • nanotechnology
  • organic electronics, biophotonics, and smart materials
  • optoelectronics, photonics, and optical fibers
  • MEMS, microwaves, and acoustic waves
  • physics and biophysics
  • interdisciplinary sciences
  • artificial intelligence
  • WSN
  • robotics
  • machine vision
  • deep learning
  • machine learning
  • computer vision
  • image processing
  • smart sensing
  • smart sensor
  • intelligent sensor
  • unmanned aerial vehicle
  • UAV
  • unmanned ground vehicle
  • UGV

Participating Journals

Journal Name       Impact Factor   CiteScore   Launched   First Decision (median)   APC
Sensors            3.847           6.4         2001       16.2 days                 2400 CHF
Remote Sensing     5.349           7.4         2009       19.9 days                 2500 CHF
Applied Sciences   2.838           3.7         2011       17.4 days                 2300 CHF
Electronics        2.690           3.7         2012       16.6 days                 2000 CHF
Drones             5.532           7.2         2017       12.9 days                 1600 CHF

Preprints is a platform dedicated to making early versions of research outputs permanently available and citable. MDPI journals allow posting on preprint servers such as Preprints.org prior to publication. For more details about preprints, please visit https://www.preprints.org.

Published Papers (129 papers)

Article
Visual/Inertial/GNSS Integrated Navigation System under GNSS Spoofing Attack
Remote Sens. 2022, 14(23), 5975; https://doi.org/10.3390/rs14235975 - 25 Nov 2022
Abstract
Visual/Inertial/GNSS (VIG) integrated navigation and positioning systems are widely used in unmanned vehicles and other systems. The VIG system is vulnerable to GNSS spoofing attacks, and research on the harm that spoofing causes to the system and on the performance of VIG systems under GNSS spoofing remains insufficient. In this paper, an open-source VIG algorithm, VINS-Fusion, based on nonlinear optimization, is used to analyze the performance of the VIG system under a GNSS spoofing attack. The influence of the visual-inertial odometry (VIO) scale estimation error and the transformation matrix deviation in the transition period of spoofing detection is analyzed. Deviation correction methods based on GNSS-assisted scale compensation coefficient estimation and optimal pose transformation matrix selection are proposed for the VIG-integrated system in spoofing areas. For an area that the integrated system can revisit many times, a global pose map-matching method is proposed. A field experiment with a GNSS spoofing attack was carried out for this paper. The experimental results show that, even if the GNSS measurements are seriously affected by the spoofing, the integrated system can still run independently, following the preset waypoints. The scale compensation coefficient estimation method, the optimal pose transformation matrix selection method, and the global pose map-matching method can suppress the estimation error under a spoofing attack.
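To make the scale-compensation step concrete, here is a minimal sketch, assuming matched GNSS/VIO position samples taken from a window in which GNSS is still trusted; the function and variable names are ours, not the paper's:

```python
import numpy as np

def scale_compensation_coefficient(p_gnss: np.ndarray, p_vio: np.ndarray) -> float:
    """Estimate a scale factor from matched GNSS and VIO trajectories.

    p_gnss, p_vio: (N, 3) positions sampled at the same timestamps,
    taken from a window where GNSS is still trusted.
    """
    d_gnss = np.linalg.norm(np.diff(p_gnss, axis=0), axis=1)  # GNSS step lengths
    d_vio = np.linalg.norm(np.diff(p_vio, axis=0), axis=1)    # VIO step lengths
    return float(d_gnss.sum() / d_vio.sum())  # >1 means VIO underestimates scale

# In a spoofed area, corrected VIO positions would then be
# origin + coefficient * (p_vio - origin).
```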

Article
A Multi-Channel Descriptor for LiDAR-Based Loop Closure Detection and Its Application
Remote Sens. 2022, 14(22), 5877; https://doi.org/10.3390/rs14225877 - 19 Nov 2022
Abstract
The simultaneous localization and mapping (SLAM) algorithm is a prerequisite for unmanned ground vehicle (UGV) localization, path planning, and navigation, and it includes two essential components: frontend odometry and backend optimization. Frontend odometry tends to amplify the cumulative error continuously, leading to ghosting and drifting in the mapping results. However, loop closure detection (LCD) can be used to address this technical issue by significantly eliminating the cumulative error. The existing LCD methods decide whether a loop exists by constructing local or global descriptors and calculating the similarity between them, which attaches great importance to the design of discriminative descriptors and effective similarity measurement mechanisms. In this paper, we first propose a novel multi-channel descriptor (CMCD) to compensate for the weak discriminative power of scene descriptions built from a single point-cloud attribute. The distance, height, and intensity information of the point cloud is encoded into three independent channels of the shadow-casting region (bin) and then compressed into a two-dimensional global descriptor. Next, an ORB-based dynamic threshold feature extraction algorithm (DTORB) is designed using objective 2D descriptors to describe the distributions of global and local point clouds. Then, a DTORB-based similarity measurement method is designed using the rotation-invariance and visualization characteristics of descriptor features to overcome the subjective tendency of the constant-threshold ORB algorithm in descriptor feature extraction. Finally, verification is performed over the KITTI odometry sequences and the campus datasets of Jilin University that we collected. The experimental results demonstrate the superior performance of our method over state-of-the-art approaches.
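A rough sketch of the three-channel bin encoding described above; the ring/sector binning scheme and the per-channel statistics are illustrative assumptions rather than the paper's exact construction:

```python
import numpy as np

def multi_channel_descriptor(points, n_rings=20, n_sectors=60, max_range=80.0):
    """Encode a LiDAR scan as an (n_rings, n_sectors, 3) descriptor.

    points: (N, 4) array of x, y, z, intensity.
    Channels per bin: max height, mean intensity, normalized range.
    """
    x, y, z, intensity = points.T
    r = np.hypot(x, y)
    keep = r < max_range
    ring = (r[keep] / max_range * n_rings).astype(int).clip(0, n_rings - 1)
    sector = (((np.arctan2(y[keep], x[keep]) + np.pi) / (2 * np.pi))
              * n_sectors).astype(int).clip(0, n_sectors - 1)
    desc = np.zeros((n_rings, n_sectors, 3))
    counts = np.zeros((n_rings, n_sectors))
    for i, j, h, it, rr in zip(ring, sector, z[keep], intensity[keep], r[keep]):
        desc[i, j, 0] = max(desc[i, j, 0], h)               # height channel
        desc[i, j, 1] += it                                  # intensity (sum)
        desc[i, j, 2] = max(desc[i, j, 2], rr / max_range)   # range channel
        counts[i, j] += 1
    desc[..., 1] /= np.maximum(counts, 1)                    # sum -> mean
    return desc
```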

Article
Rolling Bearing Fault Diagnosis Using Hybrid Neural Network with Principal Component Analysis
Sensors 2022, 22(22), 8906; https://doi.org/10.3390/s22228906 - 17 Nov 2022
Abstract
With the rapid development of prognostics and health management (PHM) technology, more and more deep learning algorithms have been applied to the intelligent fault diagnosis of rolling bearings, and although all of them can achieve over 90% diagnostic accuracy, the generality and robustness of such models cannot be truly verified under complex, extremely variable loading conditions. In this study, an end-to-end rolling bearing fault diagnosis model based on a hybrid deep neural network with principal component analysis is proposed. Firstly, in order to reduce the complexity of the deep learning computation, the data are preprocessed by principal component analysis (PCA) for feature dimensionality reduction. The preprocessed data are imported into the hybrid deep learning model. The first layer of the model uses a CNN algorithm for denoising and simple feature extraction, the second layer uses a bi-directional long short-term memory (BiLSTM) network for deeper extraction of time-series features, and the last layer uses an attention mechanism for optimal weight assignment, which further improves the diagnostic precision. The test accuracy of this model is fully comparable to that of existing deep learning fault diagnosis models, especially under low load; the test accuracy is 100% at constant load, nearly 90% under variable load, and 72.8% under extremely variable load (2.205 N·m/s–0.735 N·m/s and 0.735 N·m/s–2.205 N·m/s), the worst possible load conditions. The experimental results demonstrate that the model has reliable robustness and generality.
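A minimal PyTorch sketch of the PCA, CNN, BiLSTM, and attention pipeline described above; layer sizes and the pooling scheme are illustrative assumptions:

```python
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

class CNNBiLSTMAttention(nn.Module):
    """Sketch of the CNN -> BiLSTM -> attention classifier described above."""
    def __init__(self, n_classes: int, hidden: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(                     # denoising / local features
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2))
        self.lstm = nn.LSTM(16, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)          # attention scores
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                             # x: (batch, n_features)
        h = self.cnn(x.unsqueeze(1)).transpose(1, 2)  # (batch, T, 16)
        h, _ = self.lstm(h)                           # (batch, T, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)        # weights over time steps
        return self.head((w * h).sum(dim=1))          # attention-weighted pooling

# PCA feature reduction before the network, as described in the abstract:
# x_reduced = PCA(n_components=32).fit_transform(x_raw)
```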

Article
AIoT Precision Feeding Management System
Electronics 2022, 11(20), 3358; https://doi.org/10.3390/electronics11203358 - 18 Oct 2022
Abstract
Different fish species and different growth stages require different amounts of fish pellets. Excessive fish pellets increase the cost of aquaculture, and leftover pellets sink to the bottom of the fish farm, causing water pollution. Weather changes and providing too many or too few pellets also affect the growth of the fish. In light of the abovementioned factors, this article uses an artificial intelligence of things (AIoT) precision feeding management system to improve an existing fish feeder. The AIoT precision feeding management system is placed on the water surface of the breeding pond to measure the water-surface fluctuations in the area of fish pellet application. The buoy, with a built-in three-axis accelerometer, senses the water-surface fluctuations when the fish are foraging. Then, through the wireless transmission module, the data are sent back to the receiver and control device of the fish feeder. When the fish feeder receives the signal, it evaluates the returned value to adjust the feeding time. Through this system, intelligent feeding of fish can be achieved by adjusting the amount of fish pellets in order to reduce the cost of aquaculture.
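The feeding logic can be illustrated with a small sketch; the threshold, step sizes, and function names are hypothetical, not taken from the paper:

```python
ACTIVITY_THRESHOLD = 0.15  # RMS g; fluctuation level meaning "still foraging"

def update_feeding_time(accel_rms: float, feed_seconds: float) -> float:
    """Adjust feeding time from buoy accelerometer activity.

    accel_rms: RMS of the buoy's three-axis accelerometer over the last
    window, sent wirelessly to the feeder's receiver and control device.
    """
    if accel_rms > ACTIVITY_THRESHOLD:
        return feed_seconds + 1.0            # fish still foraging: keep feeding
    return max(0.0, feed_seconds - 1.0)      # calm surface: cut back on pellets
```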

Article
Intrinsic Calibration of Multi-Beam LiDARs for Agricultural Robots
Remote Sens. 2022, 14(19), 4846; https://doi.org/10.3390/rs14194846 - 28 Sep 2022
Abstract
With the advantages of high measurement accuracy and a wide detection range, LiDARs have been widely used in information perception research for the development of agricultural robots. However, the internal configuration of the laser transmitter layout changes as the sensor's working duration increases, which makes it difficult to obtain accurate measurements with calibration files based on factory settings. To solve this problem, we investigate the intrinsic calibration of multi-beam laser sensors. Specifically, we calibrate the five intrinsic parameters of the LiDAR with a nonlinear optimization strategy based on static planar models; the parameters include the measured distance, rotation angle, pitch angle, horizontal distance, and vertical distance. Firstly, we establish a mathematical model based on the physical structure of the LiDAR. Secondly, we calibrate the internal parameters according to the mathematical model and evaluate the measurement accuracy after calibration. Here, we illustrate the parameter calibration in three steps: planar model estimation, objective function construction, and nonlinear optimization. We also introduce the ranging accuracy evaluation metrics, including the standard deviation of the distance from the laser scanning points to the planar models and the 3σ criterion. Finally, the experimental results show that the ranging error of the calibrated sensors can be kept within 3 cm, which verifies the effectiveness of the laser intrinsic calibration.
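A sketch of the nonlinear-optimization step, using a deliberately simplified beam projection model (the paper's actual five-parameter model differs in detail); SciPy's least_squares stands in for the optimizer:

```python
import numpy as np
from scipy.optimize import least_squares

def point_to_plane_residuals(params, raw, plane):
    """Residuals of corrected beam returns against a reference plane.

    params: five intrinsics (range offset, rotation-angle and pitch-angle
    corrections, horizontal and vertical offsets) in an illustrative layout.
    raw: (N, 2) raw measurements (range, encoder azimuth) for one beam.
    plane: (a, b, c, d) of the calibration plane a*x + b*y + c*z + d = 0.
    """
    dr, d_rot, d_pitch, h_off, v_off = params
    rng, az = raw[:, 0] + dr, raw[:, 1] + d_rot
    x = rng * np.cos(d_pitch) * np.cos(az) + h_off * np.sin(az)
    y = rng * np.cos(d_pitch) * np.sin(az) - h_off * np.cos(az)
    z = rng * np.sin(d_pitch) + v_off
    a, b, c, d = plane
    return (a * x + b * y + c * z + d) / np.linalg.norm([a, b, c])

# sol = least_squares(point_to_plane_residuals, x0=np.zeros(5), args=(raw, plane))
```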

Article
Optimal Compensation of MEMS Gyroscope Noise Kalman Filter Based on Conv-DAE and MultiTCN-Attention Model in Static Base Environment
Sensors 2022, 22(19), 7249; https://doi.org/10.3390/s22197249 - 24 Sep 2022
Abstract
Errors in microelectromechanical systems (MEMS) inertial measurement units (IMUs) are large, complex, nonlinear, and time varying, so traditional model-based noise reduction and compensation methods are not applicable. This paper proposes a noise reduction method based on multi-layer combined deep learning for a MEMS gyroscope in the static base state. In this method, a combined model of the MEMS gyroscope is constructed from a Convolutional Denoising Auto-Encoder (Conv-DAE) and a Multi-layer Temporal Convolutional Network with the Attention Mechanism (MultiTCN-Attention). Based on the robust data processing capability of deep learning, the noise features are obtained from past gyroscope data, and optimizing the parameters of the Kalman filter (KF) with the Particle Swarm Optimization (PSO) algorithm significantly improves the filtering and noise reduction accuracy. The experimental results show that, compared with the original data, the noise standard deviation of the combined model's filtering output decreases by 77.81% and 76.44% on the x and y axes, respectively; compared with the existing MEMS gyroscope noise compensation method based on the Autoregressive Moving Average with Kalman filter (ARMA-KF) model, it decreases by 44.00% and 46.66% on the x and y axes, respectively, reducing the noise impact by nearly a factor of three.
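A toy illustration of PSO-tuned Kalman filtering on one gyroscope axis. The scalar filter, swarm settings, and fitness (the output standard deviation, which is meaningful here only because the true rate is zero on a static base) are all simplifications of the paper's method:

```python
import numpy as np

def kalman_1d(z, q, r):
    """Scalar Kalman filter smoothing one gyro axis; q, r are noise variances."""
    x, p, out = 0.0, 1.0, []
    for zi in z:
        p += q                 # predict: state is constant on a static base
        k = p / (p + r)        # Kalman gain
        x += k * (zi - x)      # update with the new measurement
        p *= 1 - k
        out.append(x)
    return np.asarray(out)

def pso_tune_kf(z, n_particles=20, iters=50):
    """Toy PSO over (q, r) minimizing the filtered output's standard deviation."""
    rng = np.random.default_rng(0)
    pos = rng.random((n_particles, 2)) * 1e-2 + 1e-6
    vel = np.zeros_like(pos)
    cost = lambda qr: kalman_1d(z, *np.abs(qr)).std()
    pbest, pcost = pos.copy(), np.array([cost(p) for p in pos])
    g = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = pos + vel
        c = np.array([cost(p) for p in pos])
        better = c < pcost
        pbest[better], pcost[better] = pos[better], c[better]
        g = pbest[pcost.argmin()].copy()
    return np.abs(g)           # tuned (q, r)
```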

Article
A Visual Saliency-Based Neural Network Architecture for No-Reference Image Quality Assessment
Appl. Sci. 2022, 12(19), 9567; https://doi.org/10.3390/app12199567 - 23 Sep 2022
Cited by 1
Abstract
Deep learning has recently been used to study blind image quality assessment (BIQA) in great detail. Yet, the scarcity of high-quality algorithms prevents them from being developed further and used in real-time scenarios. Patch-based techniques have been used to forecast the quality of an image, but they typically assign the whole picture's quality score to each individual patch. As a result, many misleading scores come from patches. Some regions of the image are important and can contribute strongly toward the correct prediction of its quality. To exclude outlier regions, we suggest a technique with a visual saliency module that allows only the important regions to pass to the neural network, so the network learns only the information required to predict the quality. The neural network architecture used in this study is Inception-ResNet-v2. We assess the proposed strategy using a benchmark database (KADID-10k) to show its efficacy. The outcome demonstrates better performance compared with certain popular no-reference IQA (NR-IQA) and full-reference IQA (FR-IQA) approaches. This technique is intended to be used to estimate, in real time, the quality of images acquired from drone imagery.
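A sketch of the saliency-gating idea; OpenCV's spectral-residual saliency (from opencv-contrib-python) stands in for the paper's visual saliency module, and the thresholding rule is an assumption:

```python
import cv2
import numpy as np

img = cv2.imread("frame.png")
saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
ok, sal_map = saliency.computeSaliency(img)          # float map in [0, 1]
mask = (sal_map > sal_map.mean()).astype(np.uint8)   # keep salient regions only
gated = img * mask[..., None]                        # zero out the rest
# `gated` would then be fed to the Inception-ResNet-v2 quality regressor.
```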

Article
An Adaptive Group of Density Outlier Removal Filter: Snow Particle Removal from LiDAR Data
Electronics 2022, 11(19), 2993; https://doi.org/10.3390/electronics11192993 - 21 Sep 2022
Abstract
Light Detection And Ranging (LiDAR) is an important technology integrated into self-driving cars to enhance the reliability of these systems. Even with some advantages over cameras, it is still limited under extreme weather conditions such as heavy rain, fog, or snow. Traditional methods such as Radius Outlier Removal (ROR) and Statistical Outlier Removal (SOR) are limited in their ability to detect snow points in LiDAR point clouds. This paper proposes an Adaptive Group of Density Outlier Removal (AGDOR) filter that can remove snow particles more effectively from raw LiDAR point clouds, with verification on the Winter Adverse Driving Dataset (WADS). In our proposed method, an intensity threshold combined with the proposed outlier removal filter is employed. Outstanding performance was obtained, with accuracy up to 96% and a processing speed of 0.51 s per frame. In particular, our filter outperforms the state-of-the-art filter by achieving 16.32% higher precision at the same accuracy. However, our method achieves lower recall than the state-of-the-art method, which indicates that AGDOR retains a significant number of object points from the LiDAR data. The results suggest that our filter would be useful for snow removal under harsh weather for autonomous driving systems.
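A sketch of the intensity-gated, range-adaptive neighborhood test underlying this kind of filter; all parameter values are illustrative, not the paper's:

```python
import numpy as np
from scipy.spatial import cKDTree

def snow_filter(points, intensity, int_thresh=10.0, k_min=4,
                base_radius=0.1, alpha=0.01):
    """Remove sparse low-intensity returns that look like airborne snow.

    Only low-intensity points are tested (snow reflects weakly), and the
    neighborhood radius grows with range because point density falls off
    with distance.
    """
    tree = cKDTree(points)
    rng = np.linalg.norm(points, axis=1)
    keep = np.ones(len(points), dtype=bool)
    for i in np.where(intensity < int_thresh)[0]:
        radius = base_radius + alpha * rng[i]                   # adaptive radius
        n = len(tree.query_ball_point(points[i], radius)) - 1   # exclude self
        if n < k_min:                                           # sparse -> snow
            keep[i] = False
    return points[keep]
```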

Article
Line Structure Extraction from LiDAR Point Cloud Based on the Persistence of Tensor Feature
Appl. Sci. 2022, 12(18), 9190; https://doi.org/10.3390/app12189190 - 14 Sep 2022
Abstract
The LiDAR point cloud has been widely used in scenarios of automatic driving, object recognition, structure reconstruction, etc., yet line structure extraction remains a challenging problem due to noise and limited accuracy, especially in data acquired by consumer electronic devices. To address this issue, a line structure extraction method based on the persistence of tensor features is proposed and subsequently applied to data acquired by an iPhone-based LiDAR sensor. The tensor of each point is encoded, voted, and aggregated over its neighborhood, and further decomposed into different geometric features in each dimension. Then, the line feature in the point cloud is represented and computed using the persistence of the tensor feature. Finally, the line structure is extracted based on persistent homology according to discrete Morse theory. Experiments are conducted on LiDAR point clouds collected by the iPhone 12 Pro Max; line structures are extracted from two different datasets, and the results compare well with those of related methods.

Article
N-Step Pre-Training and Décalcomanie Data Augmentation for Micro-Expression Recognition
Sensors 2022, 22(17), 6671; https://doi.org/10.3390/s22176671 - 03 Sep 2022
Cited by 1
Abstract
Facial expressions are divided into micro- and macro-expressions. Micro-expressions are low-intensity emotions presented for a short moment of about 0.25 s, whereas macro-expressions last up to 4 s. To elicit micro-expressions, participants are asked to suppress their emotions as much as possible while watching emotion-inducing videos. However, this is a challenging process, and the number of samples collected tends to be smaller than for macro-expressions. Because training models with insufficient data may lead to decreased performance, this study proposes two ways to solve the problem of insufficient data for micro-expression training. The first method is N-step pre-training, which performs multiple transfer-learning steps from action recognition datasets to those in the facial domain. Second, we propose Décalcomanie data augmentation, which is based on facial symmetry, to create composite images by cutting and pasting both halves of a face around its center line. The results show that the proposed methods can successfully overcome the data shortage problem and achieve high performance.
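The Décalcomanie idea can be sketched in a few lines, assuming the input image is roughly centered on the face's vertical midline:

```python
import numpy as np

def decalcomanie(face: np.ndarray):
    """Mirror-composite augmentation based on facial symmetry.

    face: (H, W, C) image. Returns two composites: the left half joined
    with its mirror image, and the right half joined with its mirror image.
    """
    h, w, c = face.shape
    left, right = face[:, : w // 2], face[:, w - w // 2 :]
    left_comp = np.concatenate([left, left[:, ::-1]], axis=1)     # L | mirrored L
    right_comp = np.concatenate([right[:, ::-1], right], axis=1)  # mirrored R | R
    return left_comp, right_comp
```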

Article
Deep-Learning-Based Method for Estimating Permittivity of Ground-Penetrating Radar Targets
Remote Sens. 2022, 14(17), 4293; https://doi.org/10.3390/rs14174293 - 31 Aug 2022
Abstract
Correctly estimating the relative permittivity of buried targets is crucial for accurately determining the target type and geometric size and for reconstructing shallow surface geological structures. In order to effectively identify the dielectric properties of buried targets, on the basis of extracting the feature information of B-scan images, we propose an inversion method based on a deep neural network (DNN) to estimate the relative permittivity of targets. We first take the physical mechanism of ground-penetrating radar (GPR) working in reflection measurement mode as the constraint condition, and then design a convolutional neural network (CNN) to extract the feature hyperbola of the underground target, which is used to calculate the buried depth of the target and the relative permittivity of the background medium. We further build a regression network and train the network model with a labeled sample set to estimate the relative permittivity of the target. Tests were carried out on a GPR simulation dataset and on a field dataset of underground rainwater pipelines. The results show that the inversion method has high accuracy in estimating the relative permittivity of the target.
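The physical relation behind such estimates is compact: for a reflector at depth d with two-way travel time t, the wave velocity is v = 2d/t, and v = c/√εr, hence εr = (ct/2d)². A minimal sketch:

```python
C = 299_792_458.0  # speed of light, m/s

def relative_permittivity(two_way_time_s: float, depth_m: float) -> float:
    """Relative permittivity of the medium above a reflector.

    v = 2 * depth / two-way time and v = c / sqrt(eps_r),
    hence eps_r = (c * t / (2 * d)) ** 2.
    """
    v = 2.0 * depth_m / two_way_time_s
    return (C / v) ** 2

# Example: a reflector 1 m deep with t = 12 ns gives eps_r ~ 3.2,
# in the range typical of dry sand.
```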

Article
SGSNet: A Lightweight Depth Completion Network Based on Secondary Guidance and Spatial Fusion
Sensors 2022, 22(17), 6414; https://doi.org/10.3390/s22176414 - 25 Aug 2022
Cited by 1
Abstract
The depth completion task aims to generate a dense depth map from a sparse depth map and the corresponding RGB image. As a data preprocessing task, the challenge is to obtain denser depth maps without affecting the real-time performance of downstream tasks. In this paper, we propose a lightweight depth completion network based on secondary guidance and spatial fusion, named SGSNet. We design the image feature extraction module to better extract features from different scales, between and within layers, in parallel and to generate guidance features. Then, SGSNet uses secondary guidance to complete the depth completion. The first guidance uses a lightweight guidance module to quickly guide LiDAR feature extraction with the texture features of the RGB images. The second guidance uses a depth information completion module for sparse depth map feature completion and inputs the result into the DA-CSPN++ module to complete the dense depth map re-guidance. By using the lightweight guidance module, the overall network runs ten times faster than the baseline. The overall network is relatively lightweight, running at up to thirty frames per second, which is sufficient to meet the speed needs of large-scale SLAM and three-dimensional reconstruction for sensor data extraction. At the time of submission, the algorithm in SGSNet ranked first in accuracy in the KITTI ranking of lightweight depth completion methods. It was 37.5% faster than the top published algorithms in that ranking and was second in the full ranking.

Article
EVtracker: An Event-Driven Spatiotemporal Method for Dynamic Object Tracking
Sensors 2022, 22(16), 6090; https://doi.org/10.3390/s22166090 - 15 Aug 2022
Abstract
An event camera is a novel bio-inspired sensor that effectively compensates for the shortcomings of current frame cameras, which include high latency, low dynamic range, motion blur, etc. Rather than capturing images at a fixed frame rate, an event camera produces an asynchronous signal by measuring the brightness change of each pixel. Consequently, an appropriate algorithm framework that can handle the unique data types of event-based vision is required. In this paper, we propose a dynamic object tracking framework using an event camera to achieve long-term, stable tracking of event objects. One of the key novel features of our approach is an adaptive strategy that adjusts the spatiotemporal domain of the event data. To achieve this, we reconstruct event images from high-speed asynchronous streaming data via online learning. Additionally, we apply a Siamese network to extract features from the event data. In contrast to earlier models that extract only hand-crafted features, our method provides powerful feature description and a more flexible reconstruction strategy for event data. We assess our algorithm in three challenging scenarios: 6-DoF (six degrees of freedom), translation, and rotation. Unlike the fixed cameras in traditional object tracking tasks, all three tracking scenarios involve simultaneous violent rotation and shaking of both the camera and the objects. Results from extensive experiments suggest that our proposed approach achieves superior accuracy and robustness compared to other state-of-the-art methods, exhibiting a 30% increase in accuracy over other recent models without reducing time efficiency. Furthermore, the results indicate that event cameras are capable of robust object tracking, a task that conventional cameras cannot adequately perform, especially for super-fast motion tracking and challenging lighting situations.
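The basic event-to-frame reconstruction step can be sketched as follows; the fixed accumulation window is a simplification of the paper's adaptive spatiotemporal adjustment, and the field names are assumptions:

```python
import numpy as np

def events_to_frame(events, h, w, window_s=0.01):
    """Accumulate an asynchronous event stream into a 2D frame.

    events: structured array with integer pixel fields x, y, timestamp t
    (seconds), and polarity p (+1/-1).
    """
    frame = np.zeros((h, w), dtype=np.float32)
    t_end = events["t"][-1]
    recent = events[events["t"] > t_end - window_s]   # last `window_s` seconds
    np.add.at(frame, (recent["y"], recent["x"]), recent["p"])  # signed counts
    return frame
```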

Article
Accurate Spatial Positioning of Target Based on the Fusion of Uncalibrated Image and GNSS
Remote Sens. 2022, 14(16), 3877; https://doi.org/10.3390/rs14163877 - 10 Aug 2022
Abstract
The accurate spatial positioning of a target in a fixed camera image is a critical sensing technique. Conventional visual spatial positioning methods rely on tedious camera calibration and face great challenges in selecting representative feature points to compute the position of the target, especially when occlusion exists or in remote scenes. In order to avoid these deficiencies, this paper proposes a deep learning approach for accurate visual spatial positioning of targets with the assistance of the Global Navigation Satellite System (GNSS). It contains two stages: the first stage trains a hybrid supervised and unsupervised auto-encoder regression network offline to gain the capability of regressing geolocation (longitude and latitude) directly from the fusion of image and GNSS, and learns an error scale factor to evaluate the regression error. The second stage first predicts an accurate regressed geolocation online from the observed image and GNSS measurement, and then filters the predicted geolocation and the measured GNSS to output the optimal geolocation. The experimental results showed that the proposed approach increased the average positioning accuracy by 56.83%, 37.25%, and 41.62% in a simulated scenario and by 31.25%, 7.43%, and 38.28% in a real-world scenario, compared with GNSS, the Interacting Multiple Model–Unscented Kalman Filter (IMM-UKF), and the supervised deep learning approach, respectively. Other improvements were also achieved in positioning stability, robustness, generalization, and performance in GNSS-denied environments.

Article
Evaluating the Forest Ecosystem through a Semi-Autonomous Quadruped Robot and a Hexacopter UAV
Sensors 2022, 22(15), 5497; https://doi.org/10.3390/s22155497 - 23 Jul 2022
Abstract
Accurate and timely monitoring is imperative to the resilience of forests for economic growth and climate regulation. In the UK, forest management depends on citizen science to perform tedious and time-consuming data collection tasks. In this study, an unmanned aerial vehicle (UAV) equipped with a light sensor and positioning capabilities is deployed to perform aerial surveying and to observe a series of forest health indicators (FHIs) that are inaccessible from the ground. However, many FHIs, such as burrows and deadwood, can only be observed from under the tree canopy. Hence, we take the initiative of employing a quadruped robot with an integrated camera, as well as an external sensing platform (ESP) equipped with light and infrared cameras and computing, communication, and power modules, to observe these FHIs from the ground. The forest-monitoring time can be extended by reducing computation and conserving energy. Therefore, we analysed different versions of the YOLO object-detection algorithm in terms of accuracy, deployment, and usability by the ESP to accomplish extensive low-latency detection. In addition, we constructed a series of new datasets to train YOLOv5x and YOLOv5s to recognise FHIs. Our results reveal that YOLOv5s is lightweight and easy to train for FHI detection, while performing close to real-time, cost-effective, and autonomous forest monitoring.
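For illustration, loading a small YOLOv5 model through torch.hub looks like the following; the custom-weights path for an FHI detector is hypothetical:

```python
import torch

# Pretrained COCO model as a stand-in; a trained FHI detector would be
# loaded with the "custom" entry point instead.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
# model = torch.hub.load("ultralytics/yolov5", "custom", path="fhi_yolov5s.pt")

results = model("trail_camera_frame.jpg")   # preprocessing + inference + NMS
detections = results.pandas().xyxy[0]       # boxes, confidences, class names
print(detections[["name", "confidence"]])
```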

Article
An SAR Ship Object Detection Algorithm Based on Feature Information Efficient Representation Network
Remote Sens. 2022, 14(14), 3489; https://doi.org/10.3390/rs14143489 - 21 Jul 2022
Cited by 1
Abstract
In synthetic aperture radar (SAR) ship images, the targets are small and dense, the background is complex and changeable, ship targets are difficult to distinguish from the surrounding background, and there are many ship-like targets in the image. This makes it difficult for deep-learning-based target detection algorithms to obtain effective feature information, resulting in missed and false detections. The effective expression of the feature information of the target to be detected is key to a target detection algorithm, and how to improve the clear expression of image feature information in the network has always been a difficult point. Aiming at the above problems, this paper proposes a new target detection algorithm, the feature information efficient representation network (FIERNet). The algorithm can extract better feature details, enhance network feature fusion and information expression, and improve model detection capabilities. First, the convolution transformer feature extraction (CTFE) module is proposed, and a convolution transformer feature extraction network (CTFENet) is built with this module as the feature extraction block. The network enables the model to obtain more accurate and comprehensive feature information, weakens the interference of invalid information, and improves the overall performance of the network. Second, a new effective feature information fusion (EFIF) module is proposed to enhance the transfer and fusion of the main information in feature maps. Finally, a new frame-decoding formula is proposed to further improve the coincidence between the predicted and target frames and obtain more accurate picture information. Experiments show that the method achieves 94.14% and 92.01% mean average precision (mAP) on the SSDD and SAR-Ship datasets, respectively, and works well on large-scale SAR ship images. In addition, FIERNet greatly reduces the occurrence of missed and false detections in SAR ship detection, and it outperforms other state-of-the-art object detection algorithms on various performance metrics on SAR images.

Article
Trigger-Based K-Band Microwave Ranging System Thermal Control with Model-Free Learning Process
Electronics 2022, 11(14), 2173; https://doi.org/10.3390/electronics11142173 - 11 Jul 2022
Abstract
Micron-level-accuracy K-band microwave ranging in space relies on the stability of the payload's on-board thermal control; however, the large numbers of thermal sensors and heating devices around the deployed instruments consume the precious internal communication resources of the central computer. A further problem is that the payload's thermal protection environment can gradually deteriorate over years of operation. In this paper, a new trigger-based thermal system controller design is proposed, which reduces the spaceborne communication burden and accounts for actuator saturation while guaranteeing stable temperature fluctuations of microwave payloads in space missions. The controller combines a nominal constant-sampling PID inner loop with a trigger-based outer loop structure under the constraint of heating device saturation. Moreover, an iterative model-free reinforcement learning process is adopted that can approximate the estimation of thermal dynamic modeling uncertainty online. Via extensive experiments in a laboratory environment, the performance of the proposed trigger-based thermal control is verified, with smaller temperature fluctuations compared to the nominal control and clear savings in system communications. The online learning algorithm is also tested with deliberate thermal conditions that deviate from the original system; the results quickly converge to normal when the thermal disturbance is removed. Finally, the ranging accuracy is tested for the whole system, and a 25% (RMS) performance improvement, to about 2.2 µm, can be realized by using the trigger-based control strategy compared to the nominal control method.
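A toy sketch of an event-triggered PID outer loop, where a new heater command is transmitted only when the temperature error moves by more than a dead band; gains, threshold, and saturation limits are illustrative:

```python
class TriggeredThermalController:
    """Event-triggered PID sketch: transmit only when the error changes enough."""
    def __init__(self, kp=8.0, ki=0.02, kd=1.0, trigger=0.05, u_max=10.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.trigger, self.u_max = trigger, u_max
        self.integral, self.last_e, self.last_u = 0.0, 0.0, 0.0

    def step(self, setpoint, temp, dt):
        e = setpoint - temp
        if abs(e - self.last_e) < self.trigger:
            return self.last_u          # inside dead band: no new transmission
        self.integral += e * dt
        d = (e - self.last_e) / dt      # derivative across last transmitted sample
        u = self.kp * e + self.ki * self.integral + self.kd * d
        self.last_e = e
        self.last_u = min(max(u, 0.0), self.u_max)   # heater saturation
        return self.last_u
```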

Technical Note
Optimal Sensor Placement Using Learning Models—A Mediterranean Case Study
Remote Sens. 2022, 14(13), 2989; https://doi.org/10.3390/rs14132989 - 22 Jun 2022
Abstract
In this paper, we discuss different approaches to optimal sensor placement and propose that an optimal sensor location can be selected using unsupervised learning methods such as self-organising maps, neural gas, or the K-means algorithm. We show how each of the algorithms can be used for this purpose and that additional constraints, such as distance from shore, which is presumed to be related to deployment and maintenance costs, can be considered. The study uses wind data over the Mediterranean Sea and evaluates sensor location selection by the reconstruction error. The reconstruction error shows that results deteriorate when additional constraints are added. However, it is also shown that a small fraction of the data is sufficient to reconstruct wind data over a larger geographic area with an error comparable to that of a meteorological model. The results are confirmed by several experiments and are consistent with the results of previous studies.
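A minimal sketch of the K-means variant of this placement strategy, assuming per-grid-point wind-component time series; file names and the constraint handling are placeholders:

```python
import numpy as np
from sklearn.cluster import KMeans

wind = np.load("wind_uv.npy")   # (n_times, n_grid, 2): u and v components
features = wind.transpose(1, 0, 2).reshape(wind.shape[1], -1)  # row per grid point

k = 10                          # number of sensors to place
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)
labels = km.labels_

# Sensor site per cluster: the grid point closest to the cluster centre.
# A shore-distance constraint could penalize these distances before argmin.
sites = [np.where(labels == c)[0][
             np.linalg.norm(features[labels == c] - km.cluster_centers_[c],
                            axis=1).argmin()]
         for c in range(k)]
print("sensor grid indices:", sites)
```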

Article
DPSSD: Dual-Path Single-Shot Detector
Sensors 2022, 22(12), 4616; https://doi.org/10.3390/s22124616 - 18 Jun 2022
Abstract
Object detection is one of the most important and challenging branches of computer vision. It has been widely used in people's lives, for example, in surveillance security and autonomous driving. We propose a novel dual-path multi-scale object detection paradigm to extract richer feature information for the object detection task and to address the multi-scale object detection problem, and based on this, we design a single-stage general object detection algorithm called the Dual-Path Single-Shot Detector (DPSSD). The dual path ensures that shallow features, i.e., those from the residual path and the concatenation path, can be utilized more easily to improve detection accuracy. Our improved dual-path network is more adaptable to multi-scale object detection tasks, and we combine it with a feature fusion module to generate a multi-scale feature learning paradigm called the "Dual-Path Feature Pyramid". We trained the models on the PASCAL VOC and COCO datasets with 320-pixel and 512-pixel inputs, respectively, and performed inference experiments to validate the structures in the neural network. The experimental results show that our algorithm has an advantage over anchor-based single-stage object detection algorithms and achieves an advanced level of average accuracy. Researchers can replicate the reported results of this paper.

Article
Encoder-Decoder Structure with Multiscale Receptive Field Block for Unsupervised Depth Estimation from Monocular Video
Remote Sens. 2022, 14(12), 2906; https://doi.org/10.3390/rs14122906 - 17 Jun 2022
Abstract
Monocular depth estimation is a fundamental yet challenging task in computer vision, as depth information is lost when 3D scenes are mapped to 2D images. Although deep learning-based methods have led to considerable improvements for this task on single images, most existing approaches still fail to overcome this limitation. Supervised learning methods model depth estimation as a regression problem and, as a result, require large amounts of ground-truth depth data for training in actual scenarios. Unsupervised learning methods treat depth estimation as the synthesis of a new disparity map, which means that rectified stereo image pairs need to be used as the training dataset. Aiming to solve this problem, we present an encoder-decoder-based framework that infers depth maps from monocular video snippets in an unsupervised manner. First, we design an unsupervised learning scheme for the monocular depth estimation task based on the basic principles of structure from motion (SfM); it uses only adjacent video clips, rather than paired training data, as supervision. Second, our method predicts two confidence masks to improve the robustness of the depth estimation model against the occlusion problem. Finally, we leverage the largest-scale and minimum-depth loss instead of the multiscale and average loss to improve the accuracy of depth estimation. The experimental results on the benchmark KITTI dataset for depth estimation show that our method outperforms competing unsupervised methods.

Communication
A Low-Power Analog Processor-in-Memory-Based Convolutional Neural Network for Biosensor Applications
Sensors 2022, 22(12), 4555; https://doi.org/10.3390/s22124555 - 16 Jun 2022
Cited by 2
Abstract
This paper presents an on-chip implementation of an analog processor-in-memory (PIM)-based convolutional neural network (CNN) in a biosensor. The operator was designed with low power to implement the CNN as an on-chip device on the biosensor, which consists of plates of 32 × 32 material. In this paper, a 10T SRAM-based analog PIM, which performs multiple-and-average (MAV) operations with multiplication and accumulation (MAC), is used as a filter to implement the CNN at low power. The PIM carries out MAV operations, with feature extraction as a filter, using an analog method. To prepare the input features, an input matrix is formed by scanning a 32 × 32 biosensor with a digital controller operating at a 32 MHz frequency. Memory reuse techniques were applied to the analog SRAM filter, which is the core of the low-power implementation, and in order to accurately assess the MAC operational efficiency and classification, we modeled and trained numerous input features based on biosignal data, confirming the classification. When the learned weight data were input, 19 mW of power was consumed during the analog-based MAC operation. The implementation showed an energy efficiency of 5.38 TOPS/W and was differentiated by implementing a high resolution of 8 bits in a 180 nm CMOS process.

Article
A Vision-Based System for Stage Classification of Parkinsonian Gait Using Machine Learning and Synthetic Data
Sensors 2022, 22(12), 4463; https://doi.org/10.3390/s22124463 - 13 Jun 2022
Abstract
Parkinson's disease is characterized by abnormal gait, which worsens as the condition progresses. Although several methods have been able to classify this feature through pose-estimation algorithms and machine-learning classifiers, few studies have been able to analyze its progression to perform stage classification of the disease. Moreover, despite the increasing popularity of these systems for gait analysis, the amount of available gait-related data can often be limited, thereby hindering the progress of the implementation of this technology in the medical field. As such, creating a quantitative prognosis method that can identify the severity levels of a Parkinsonian gait with little data could help facilitate the study of the Parkinsonian gait for rehabilitation. In this contribution, we propose a vision-based system to analyze the Parkinsonian gait at various stages using linear interpolation of Parkinsonian gait models. We present a comparison between the performance of the k-nearest neighbors (KNN), support-vector machine (SVM), and gradient boosting (GB) algorithms in classifying well-established gait features. Our results show that the proposed system achieved 96–99% accuracy in evaluating the prognosis of Parkinsonian gaits.
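The classifier comparison can be sketched with scikit-learn; the feature files and hyperparameters are placeholders:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X = np.load("gait_features.npy")   # (n_samples, n_features): gait features
y = np.load("gait_stages.npy")     # severity-stage labels

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf", C=1.0)),
                  ("GB", GradientBoostingClassifier())]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```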

Article
Path-Planning System for Radioisotope Identification Devices Using 4π Gamma Imaging Based on Random Forest Analysis
Sensors 2022, 22(12), 4325; https://doi.org/10.3390/s22124325 - 07 Jun 2022
Abstract
We developed a path-planning system for radiation source identification devices using 4π gamma imaging. The estimated source location and activity were calculated by an integrated simulation model using 4π gamma images at multiple measurement positions. Using these calculated values, a prediction model to estimate the probability of identification at the next measurement position was created via random forest analysis. The path-planning system based on the prediction model was verified by integrated simulation and by experiment for a 137Cs point source. The results showed that 137Cs point sources were identified from the few measurement positions suggested by the path-planning system.
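A sketch of the prediction-model idea using scikit-learn's random forest; the feature layout and file names are assumptions, not the paper's exact model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Training data from the integrated simulation: features describing the
# gamma images and geometry at a position, and whether identification succeeded.
X_train = np.load("sim_features.npy")
y_train = np.load("sim_identified.npy")   # 1 if the source was identified

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

def next_position(candidate_features, candidate_xy):
    """Pick the candidate position with the highest identification probability."""
    p = rf.predict_proba(candidate_features)[:, 1]
    return candidate_xy[p.argmax()], p.max()
```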

Article
A Long Short-Term Memory Network for Plasma Diagnosis from Langmuir Probe Data
Sensors 2022, 22(11), 4281; https://doi.org/10.3390/s22114281 - 04 Jun 2022
Cited by 1
Abstract
Electrostatic probe diagnosis is the main method of plasma diagnosis. However, traditional diagnosis theory is affected by many factors, and it is difficult to obtain accurate diagnosis results. In this study, a long short-term memory (LSTM) approach is used for plasma probe diagnosis to derive the electron density (Ne) and temperature (Te) more accurately and quickly. The LSTM network uses the data collected by Langmuir probes as input to eliminate the influence of the discharge device on the diagnosis, so it can be applied to a variety of discharge environments and even to space ionospheric diagnosis. In a high-vacuum gas discharge environment, the Langmuir probe is used to obtain current–voltage (I–V) characteristic curves under different Ne and Te. Part of the data is selected to train the network, the remainder is used as the test set, and the parameters are adjusted so that the network obtains better prediction results. Two indexes, namely the mean squared error (MSE) and the mean absolute percentage error (MAPE), are evaluated to calculate the prediction accuracy. The results show that using an LSTM to diagnose plasma can reduce the impact of probe surface contamination on traditional diagnosis methods and can accurately diagnose underdense plasma. In addition, compared with Te, the Ne diagnosis result output by the LSTM is more accurate.
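A minimal PyTorch sketch of an LSTM that regresses (Ne, Te) from a sampled I–V curve; the input layout and output head are assumptions:

```python
import torch
import torch.nn as nn

class ProbeLSTM(nn.Module):
    """Regress (Ne, Te) from a Langmuir-probe I-V characteristic curve."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)    # outputs: normalized Ne and Te

    def forward(self, iv):                  # iv: (batch, T, 2) = (V, I) samples
        _, (h, _) = self.lstm(iv)
        return self.head(h[-1])             # last hidden state -> predictions

# Training would minimize MSE against known (Ne, Te) labels, with MSE and
# MAPE as the evaluation indexes, as in the paper.
```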

Article
RadArnomaly: Protecting Radar Systems from Data Manipulation Attacks
Sensors 2022, 22(11), 4259; https://doi.org/10.3390/s22114259 - 02 Jun 2022
Abstract
Radar systems are mainly used for tracking aircraft, missiles, satellites, and watercraft. In many cases, information regarding the objects detected by a radar system is sent to, and used by, a peripheral consuming system, such as a missile system or a graphical user interface used by an operator. Those systems process the data stream and make real-time operational decisions based on the data received. Given this, the reliability and availability of the information provided by radar systems has grown in importance. Although the field of cyber security has been continuously evolving, no prior research has focused on anomaly detection in radar systems. In this paper, we present an unsupervised deep-learning-based method for detecting anomalies in radar system data streams; we take into consideration the fact that a data stream created by a radar system is heterogeneous, i.e., it contains both numerical and categorical features with non-linear and complex relationships. We propose a novel technique that learns the correlation between numerical features and an embedding representation of categorical features in an unsupervised manner. The proposed technique, which allows for the detection of malicious manipulation of critical fields in a data stream, is complemented by a timing-interval anomaly-detection mechanism proposed for detecting message-dropping attempts. Real radar system data were used to evaluate the proposed method. Our experiments demonstrated the method's high detection accuracy on a variety of data-stream manipulation attacks (an average detection rate of 88% with a false-alarm rate of 1.59%) and message-dropping attacks (an average detection rate of 92% with a false-alarm rate of 2.2%).
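A sketch of the embed-categorical, reconstruct-numerical idea; layer sizes and the anomaly score are illustrative, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class RadarStreamAE(nn.Module):
    """Embed categorical fields, then reconstruct the numerical fields;
    a high reconstruction error flags a possibly manipulated message."""
    def __init__(self, cat_cardinalities, n_numeric, emb_dim=8):
        super().__init__()
        self.embs = nn.ModuleList(nn.Embedding(c, emb_dim)
                                  for c in cat_cardinalities)
        d_in = emb_dim * len(cat_cardinalities) + n_numeric
        self.net = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(),
                                 nn.Linear(32, 8), nn.ReLU(),
                                 nn.Linear(8, n_numeric))

    def forward(self, cats, nums):   # cats: (B, n_cat) long; nums: (B, n_numeric)
        e = torch.cat([emb(cats[:, i]) for i, emb in enumerate(self.embs)], dim=1)
        return self.net(torch.cat([e, nums], dim=1))

# anomaly score per message: (model(cats, nums) - nums).pow(2).mean(dim=1)
```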

Article
Transfer Learning for Sentiment Analysis Using BERT Based Supervised Fine-Tuning
Sensors 2022, 22(11), 4157; https://doi.org/10.3390/s22114157 - 30 May 2022
Cited by 3
Abstract
The growth of the Internet has expanded the amount of data expressed by users across multiple platforms. The availability of these different worldviews and individuals' emotions empowers sentiment analysis. However, sentiment analysis becomes even more challenging due to the scarcity of standardized labeled data in the Bangla NLP domain. The majority of existing Bangla research has relied on deep learning models that focus on context-independent word embeddings, such as Word2Vec, GloVe, and fastText, in which each word has a fixed representation irrespective of its context. Meanwhile, context-based pre-trained language models such as BERT have recently revolutionized the state of natural language processing. In this work, we utilized BERT's transfer learning ability with a deep integrated CNN-BiLSTM model for enhanced decision-making performance in sentiment analysis. In addition, we also applied transfer learning to classical machine learning algorithms for performance comparison with CNN-BiLSTM. Additionally, we explored various word embedding techniques, such as Word2Vec, GloVe, and fastText, and compared their performance to the BERT transfer learning strategy. As a result, we have shown state-of-the-art binary classification performance for Bangla sentiment analysis that significantly outperforms all other embeddings and algorithms.
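Extracting contextual BERT features for a downstream CNN-BiLSTM head can be sketched with the Hugging Face transformers API; the multilingual checkpoint here is a generic stand-in, not necessarily the model used in the paper:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
bert = AutoModel.from_pretrained("bert-base-multilingual-cased")

texts = ["bangla sentence 1", "bangla sentence 2"]   # placeholder inputs
batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    out = bert(**batch)
features = out.last_hidden_state     # (batch, seq_len, 768) contextual vectors
# `features` would be the input sequence for the CNN-BiLSTM classifier head.
```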

Article
Dual Projection Fusion for Reference-Based Image Super-Resolution
Sensors 2022, 22(11), 4119; https://doi.org/10.3390/s22114119 - 28 May 2022
Cited by 1
Abstract
Reference-based image super-resolution (RefSR) methods have achieved performance superior to that of single image super-resolution (SISR) methods by transferring texture details from an additional high-resolution (HR) reference image to the low-resolution (LR) image. However, existing RefSR methods simply add or concatenate the transferred texture feature with the LR features, which cannot effectively fuse the information of these two independently extracted features. Therefore, this paper proposes dual projection fusion for reference-based image super-resolution (DPFSR), which enables the network to focus on the differing information between feature sources through inter-residual projection operations, ensuring that detailed information is effectively filled into the LR feature. Moreover, this paper also proposes a novel backbone called the deep channel attention connection network (DCACN), which is capable of extracting valuable high-frequency components from the LR space to further facilitate image reconstruction. Experimental results show that we achieve the best peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) performance compared with state-of-the-art (SOTA) SISR and RefSR methods. Visual results demonstrate that the proposed method recovers more natural and realistic texture details.
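A minimal sketch of the inter-residual projection idea described above: rather than adding or concatenating the LR feature and the transferred reference texture feature directly, each branch is projected and the residual between the two sources guides the fusion. The module shape and channel count are our assumptions.

    import torch
    import torch.nn as nn

    class DualProjectionFusion(nn.Module):
        def __init__(self, channels=64):
            super().__init__()
            self.proj_lr = nn.Conv2d(channels, channels, 3, padding=1)
            self.proj_ref = nn.Conv2d(channels, channels, 3, padding=1)

        def forward(self, f_lr, f_ref):
            # residual = what the reference texture has that the LR feature lacks
            res = self.proj_ref(f_ref) - self.proj_lr(f_lr)
            return f_lr + res   # fill the missing detail into the LR feature

    fused = DualProjectionFusion()(torch.randn(1, 64, 32, 32),
                                   torch.randn(1, 64, 32, 32))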

Article
Deep Learning Regression Approaches Applied to Estimate Tillering in Tropical Forages Using Mobile Phone Images
Sensors 2022, 22(11), 4116; https://doi.org/10.3390/s22114116 - 28 May 2022
Abstract
We assessed the performance of Convolutional Neural Network (CNN)-based approaches using mobile phone images to estimate regrowth density in tropical forages. We generated a dataset of 1124 labeled images captured with two mobile phones 7 days after the harvest of the forage plants. Six architectures were evaluated, including AlexNet, ResNet (18, 34, and 50 layers), ResNeXt101, and DarkNet. The best regression model showed a mean absolute error of 7.70 and a correlation of 0.89. Our findings suggest that deep learning applied to mobile phone images can successfully estimate regrowth density in forages.
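A minimal sketch of this kind of regression setup, under our own assumptions: a standard CNN backbone with a single-output head trained with an L1 objective (matching the reported mean-absolute-error metric) on phone images labeled with regrowth density. Backbone choice and input size are illustrative.

    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, 1)   # single-value regression head

    images = torch.randn(8, 3, 224, 224)            # stand-in for phone images
    targets = torch.rand(8, 1) * 100                # stand-in for tiller density
    loss = nn.functional.l1_loss(model(images), targets)   # MAE objective
    loss.backward()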

Article
TimeREISE: Time Series Randomized Evolving Input Sample Explanation
Sensors 2022, 22(11), 4084; https://doi.org/10.3390/s22114084 - 27 May 2022
Cited by 1
Abstract
Deep neural networks are among the most successful classifiers across different domains. However, their use is limited in safety-critical areas due to their limited interpretability. The research field of explainable artificial intelligence addresses this problem; however, most interpretability methods are designed for the imaging modality. This paper introduces TimeREISE, a model-agnostic attribution method tailored to time series classification. The method applies perturbations to the input and considers attribution map characteristics such as granularity and density. The approach demonstrates performance superior to that of existing methods on several well-established measures, showing strong results in the deletion and insertion tests, Infidelity, and Sensitivity. Concerning the continuity of an explanation, it shows superior performance while preserving the correctness of the attribution map. Additional sanity checks confirm the correctness of the approach and its dependency on the model parameters. TimeREISE scales well with an increasing number of channels and timesteps, applies to any time series classification network, does not rely on prior data knowledge, and suits any use case independent of dataset characteristics such as sequence length, channel number, and number of classes.
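A minimal occlusion-style sketch of the perturbation-based attribution family TimeREISE belongs to (not the authors' exact algorithm): mask one time window at a time and record the drop in the predicted class probability as that window's importance. The window size is an assumption.

    import torch

    def occlusion_attribution(model, x, target, window=10):
        # x: (channels, timesteps); returns per-timestep importance scores
        base = torch.softmax(model(x.unsqueeze(0)), dim=1)[0, target]
        importance = torch.zeros(x.shape[1])
        for start in range(0, x.shape[1], window):
            xp = x.clone()
            xp[:, start:start + window] = 0.0           # perturb one window
            p = torch.softmax(model(xp.unsqueeze(0)), dim=1)[0, target]
            importance[start:start + window] = base - p  # larger drop = more important
        return importance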

Review
Recent Trends in AI-Based Intelligent Sensing
Electronics 2022, 11(10), 1661; https://doi.org/10.3390/electronics11101661 - 23 May 2022
Abstract
In recent years, intelligent sensing has gained significant attention because of its autonomous decision-making ability for solving complex problems. Today, smart sensors complement and enhance the capabilities of human beings and have been widely embraced in numerous application areas. Artificial intelligence (AI) has seen astounding growth in the domains of natural language processing, machine learning (ML), and computer vision. AI-based methods enable a computer to learn and monitor activities by sensing the source of information in a real-time environment. The combination of these two technologies provides a promising solution for intelligent sensing. This survey provides a comprehensive summary of recent research on AI-based algorithms for intelligent sensing. This work also presents a comparative analysis of algorithms, models, influential parameters, available datasets, applications, and projects in the area of intelligent sensing. Furthermore, we present a taxonomy of AI models along with cutting-edge approaches. Finally, we highlight challenges and open issues, followed by future research directions pertaining to this exciting and fast-moving field.

Article
TCSPANet: Two-Staged Contrastive Learning and Sub-Patch Attention Based Network for PolSAR Image Classification
Remote Sens. 2022, 14(10), 2451; https://doi.org/10.3390/rs14102451 - 20 May 2022
Abstract
Polarimetric synthetic aperture radar (PolSAR) image classification has achieved great progress, but some obstacles remain. On the one hand, a large amount of PolSAR data is captured, yet most of it is not labeled with land cover categories and therefore cannot be fully utilized. On the other hand, annotating PolSAR images relies heavily on domain knowledge and manpower, which makes pixel-level annotation difficult. To alleviate these problems, we integrate contrastive learning and the transformer to propose a novel patch-level PolSAR image classification network, the two-staged contrastive learning and sub-patch attention based network (TCSPANet). Firstly, the two-staged contrastive learning based network (TCNet) is designed to learn representations of PolSAR images without supervision and to obtain discriminative, comparable features for actual land covers. Then, drawing on the transformer, we construct the sub-patch attention encoder (SPAE) to model the context within patch samples. To train the TCSPANet, two patch-level datasets are built using unsupervised and semi-supervised methods. At prediction time, a classify-or-split algorithm is put forward to realize non-overlapping, coarse-to-fine patch-level classification. The classification results on multiple PolSAR images with one trained model suggest that our proposed model is superior to the compared methods.
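A minimal NT-Xent-style sketch of the generic unsupervised contrastive objective underlying such a first stage (not TCNet's exact design): two augmented views of each patch should agree in embedding space while differing from all other patches. The temperature and embedding size are assumptions.

    import torch
    import torch.nn.functional as F

    def nt_xent(z1, z2, tau=0.5):
        z = F.normalize(torch.cat([z1, z2]), dim=1)   # 2N stacked embeddings
        sim = z @ z.t() / tau
        sim.fill_diagonal_(float("-inf"))             # exclude self-similarity
        n = z1.shape[0]
        # the positive for row i is the other augmented view of the same patch
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
        return F.cross_entropy(sim, targets)

    loss = nt_xent(torch.randn(32, 128), torch.randn(32, 128))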

Review
Deep Learning-Based Object Detection Techniques for Remote Sensing Images: A Survey
Remote Sens. 2022, 14(10), 2385; https://doi.org/10.3390/rs14102385 - 16 May 2022
Cited by 2
Abstract
Object detection in remote sensing images (RSIs) requires locating and classifying objects of interest and is a hot topic in RSI analysis research. With the development of deep learning (DL) technology, which has accelerated in recent years, numerous intelligent and efficient detection algorithms have been proposed. Meanwhile, the performance of remote sensing imaging hardware has also evolved significantly. Detection technology for high-resolution RSIs has been pushed to unprecedented heights, making important contributions in practical applications such as urban detection, building planning, and disaster prediction. However, although some scholars have authored reviews of DL-based object detection systems, the leading DL-based object detection improvement strategies have never been summarized in detail. In this paper, we first briefly review the recent history of remote sensing object detection (RSOD) techniques, including traditional methods as well as DL-based methods. Then, we systematically summarize the procedures used in DL-based detection algorithms. Most importantly, starting from the problems of complex object features, complex background information, and tedious sample annotation faced in high-resolution RSI object detection, we introduce a taxonomy based on various detection methods, which focuses on summarizing and classifying the existing attention mechanisms, multi-scale feature fusion, super-resolution, and other major improvement strategies. We also introduce recognized open-source remote sensing detection benchmarks and evaluation metrics. Finally, based on the current state of the technology, we discuss the challenges and potential trends in the field of RSOD in order to provide a reference for researchers who have just entered the field.

Article
Extraction of Micro-Doppler Feature Using LMD Algorithm Combined Supplement Feature for UAVs and Birds Classification
Remote Sens. 2022, 14(9), 2196; https://doi.org/10.3390/rs14092196 - 4 May 2022
Abstract
In the past few decades, the demand for reliable and robust systems capable of monitoring unmanned aerial vehicles (UAVs) has increased significantly due to the security threats posed by their wide range of applications. During UAV surveillance, birds are a typical confuser target; therefore, discriminating UAVs from birds is critical for successful non-cooperative UAV surveillance. The micro-Doppler signature (m-DS) reflects the scattering characteristics of micro-motion targets and has been utilized in many radar automatic target recognition (RATR) tasks. In this paper, the authors deploy local mean decomposition (LMD) to separate the m-DS of the micro-motion parts from the body returns of UAVs and birds. After the separation, the rotating parts are obtained without interference from the body components, and the m-DS features are revealed more clearly, which is conducive to feature extraction. However, using m-DS alone for target classification has several problems. Firstly, extracting only m-DS features makes incomplete use of the information in the spectrogram. Secondly, m-DS can be observed only for metal-rotor UAVs, or for large UAVs when they are close to the radar. Lastly, m-DS cannot be observed when a bird is small or gliding. The authors thus propose an algorithm for the RATR of UAVs and interfering targets under a new L-band staring radar system. In this algorithm, to make full use of the information in the spectrogram and to supplement information in these exceptional situations, m-DS, movement, and energy aggregation features of the target are extracted from the spectrogram. On the benchmark dataset, the proposed algorithm demonstrates better performance than state-of-the-art algorithms; more specifically, its equal error rate (EER) is 2.56% lower than that of existing methods, which demonstrates the effectiveness of the proposed algorithm.

Article
Electrocardiogram Biometrics Using Transformer’s Self-Attention Mechanism for Sequence Pair Feature Extractor and Flexible Enrollment Scope Identification
Sensors 2022, 22(9), 3446; https://doi.org/10.3390/s22093446 - 30 Apr 2022
Cited by 1
Abstract
Existing electrocardiogram (ECG) biometrics do not perform well when the ECG changes after the enrollment phase, because the feature extraction cannot relate the ECG collected during enrollment to the ECG collected during classification. In this research, we propose the sequence pair feature extractor, inspired by the sentence pair task of Bidirectional Encoder Representations from Transformers (BERT), to obtain a dynamic representation of a pair of ECGs. We also propose using the transformer's self-attention mechanism to draw an inter-identity relationship when performing ECG identification tasks. The model was trained once with datasets built from 10 ECG databases and then applied to six other ECG databases without retraining. We emphasize the significance of the time separation between enrollment and classification when presenting the results. Over a short time separation, the model scored 96.20%, 100.0%, 99.91%, 96.09%, 96.35%, and 98.10% identification accuracy on the MIT-BIH Atrial Fibrillation Database (AFDB), the Combined measurement of ECG, Breathing and Seismocardiograms database (CEBSDB), the MIT-BIH Normal Sinus Rhythm Database (NSRDB), the MIT-BIH ST Change Database (STDB), the ECG-ID Database (ECGIDDB), and the PTB Diagnostic ECG Database (PTBDB), respectively. Over a long time separation, the model scored 92.70% and 64.16% identification accuracy on ECGIDDB and PTBDB, respectively, a significant improvement over state-of-the-art methods.

Article
Convolutional Neural Network-Based Radar Antenna Scanning Period Recognition
Electronics 2022, 11(9), 1383; https://doi.org/10.3390/electronics11091383 - 26 Apr 2022
Abstract
The antenna scanning period (ASP) of a radar is a crucial parameter in electronic warfare (EW) and is used in many applications, such as radar work pattern recognition and emitter recognition. When the antennas of both the radar and the EW system scan circularly, methods based on threshold measurement are invalid. To overcome this shortcoming, this study proposes a method using a convolutional neural network (CNN) to recognize the radar ASP under the condition that the antennas of the radar and the EW system both scan circularly. A system model is constructed, and the factors affecting the received signal power are analyzed. A CNN model for rapid and accurate ASP classification is developed. A large number of received-signal time–power images of three separate ASPs are used for training and testing the developed model under different experimental conditions. Numerical experiments and performance comparisons demonstrate the high classification accuracy and effectiveness of the proposed method under this condition: the average recognition accuracy for the radar ASP is at least 90% when the signal-to-noise ratio (SNR) is not less than 30 dB, which is significantly higher than the recognition accuracy of the NAC and AFT methods based on adaptive threshold detection.

Article
A Frame-to-Frame Scan Matching Algorithm for 2D Lidar Based on Attention
Appl. Sci. 2022, 12(9), 4341; https://doi.org/10.3390/app12094341 - 25 Apr 2022
Cited by 2
Abstract
The frame-to-frame scan matching algorithm is the most basic robot localization and mapping module and has a huge impact on the accuracy of localization and mapping tasks. To achieve high-precision localization and mapping, we propose ASM (Attention-based Scan Matching), a 2D lidar frame-to-frame scan matching algorithm based on an attention mechanism. Inspired by human navigation, we use a heuristic attention selection mechanism that considers only the areas covered by the robot's attention while ignoring other areas when performing frame-to-frame scan matching, achieving performance similar to that of landmark-based localization. The selected landmark is not switched to another one before it becomes invisible; thus, ASM does not accumulate errors during the life cycle of a landmark, and errors increase only when the landmark switches. Ideally, errors accumulate each time the robot moves the distance of the lidar sensing range, so the ASM algorithm can achieve high matching accuracy. On the other hand, thanks to the attention mechanism, the amount of data involved in scan matching is small compared to the total amount, so the ASM algorithm has high computational efficiency. To prove the effectiveness of the ASM algorithm, we conducted experiments on four datasets. The experimental results show that, compared to current methods, ASM achieves higher matching accuracy and speed.

Article
Prediction of Upper Limb Action Intention Based on Long Short-Term Memory Neural Network
Electronics 2022, 11(9), 1320; https://doi.org/10.3390/electronics11091320 - 21 Apr 2022
Abstract
The use of an inertial measurement unit (IMU) to measure the motion data of the upper limb is a mature method, and the IMU has gradually become an important device for obtaining the information used to control assistive prosthetic hands. However, IMU-based control methods for assistive prosthetic hands often suffer from high delay. Therefore, this paper proposes a method for predicting the action intentions of the upper limb based on a long short-term memory (LSTM) neural network. First, the degree of correlation between palm movement and arm movement is assessed by calculating the Pearson correlation coefficient. The correlation coefficients are all greater than 0.6, indicating a strong correlation between palm movement and arm movement. Then, the motion state of the upper limb is divided into an acceleration state, a deceleration state, and a rest state, and the rest state is used as a signal to control the assistive prosthetic hand. Using the LSTM to identify the motion state of the upper limb, the accuracy rate is 99%. When predicting the action intention of the upper limb from the angular velocities of the shoulder and forearm, the LSTM is used to predict the angular velocity of the palm, and the average prediction error of palm motion is 1.5 rad/s. Finally, the feasibility of the method is verified through experiments in which an assistive prosthetic hand is held to imitate a disabled person wearing a prosthesis. When the assistive prosthetic hand is used to reproduce foot actions, the average delay time measured with the LSTM-based method is 0.65 s, whereas the average delay time of the control method based on threshold analysis is 1.35 s. Our experiments show that the LSTM-based prediction method achieves low prediction error and delay.
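A minimal sketch of the prediction step described above: an LSTM maps a window of shoulder and forearm angular velocities from the IMUs to the palm's angular velocity. The window length, input dimensionality (3-axis gyro per segment), and layer sizes are our assumptions.

    import torch
    import torch.nn as nn

    class PalmPredictor(nn.Module):
        def __init__(self, n_inputs=6, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(n_inputs, hidden, batch_first=True)
            self.fc = nn.Linear(hidden, 3)   # palm angular velocity (x, y, z)

        def forward(self, seq):              # seq: (batch, time, n_inputs)
            out, _ = self.lstm(seq)
            return self.fc(out[:, -1])       # predict from the last hidden state

    window = torch.randn(16, 50, 6)          # shoulder + forearm gyro signals
    pred = PalmPredictor()(window)           # (16, 3) predicted rad/s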

Article
Redundancy Reduction for Sensor Deployment in Prosthetic Socket: A Case Study
Sensors 2022, 22(9), 3103; https://doi.org/10.3390/s22093103 - 19 Apr 2022
Cited by 1
Abstract
The irregular pressure exerted by a prosthetic socket on the residual limb is one of the major factors causing discomfort for amputees using artificial limbs. By deploying wearable sensors inside the socket, the interfacial pressure distribution can be studied to find the active regions and rectify the socket design. In this case study, a clustering-based analysis method is presented to evaluate the density and layout of these sensors, with the aim of reducing local redundancy in the sensor deployment. In particular, a Self-Organizing Map (SOM) and the K-means algorithm are employed to cluster the sensor data, taking the pressure measurements of a predefined sensor placement as input. One suitable clustering result is then selected to detect layout redundancy in the input area. After that, the Pearson correlation coefficient (PCC) is used as a similarity metric to guide the removal of redundant sensors and generate a new, sparser layout. The Jensen–Shannon divergence (JSD) and the mean pressure are applied as posterior validation metrics that compare the pressure features before and after sensor removal. A case study of a clinical trial with two sensor strips proves the utility of the clustering-based analysis method: the sensors on the posterior and medial regions are suggested for reduction, while the main pressure features are kept. The proposed method can help sensor designers optimize sensor configurations for intra-socket measurements and thus assist prosthetists in improving socket fitting.
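A minimal sketch of the redundancy analysis described above: cluster the sensors' pressure time series (K-means here; the study also uses a SOM) and, within each cluster, drop sensors whose Pearson correlation with an already-kept sensor exceeds a threshold. The sensor count, cluster count, and threshold are our assumptions.

    import numpy as np
    from sklearn.cluster import KMeans

    readings = np.random.rand(16, 500)   # 16 sensors x 500 time steps (stand-in)
    labels = KMeans(n_clusters=4, n_init=10).fit_predict(readings)

    kept = []
    for c in range(4):
        for i in np.where(labels == c)[0]:
            # keep sensor i only if not highly correlated with a kept sensor
            if all(abs(np.corrcoef(readings[i], readings[j])[0, 1]) < 0.95
                   for j in kept):
                kept.append(i)
    print("suggested sparser layout:", sorted(kept))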

Article
Development of a Soft Sensor for Flow Estimation in Water Supply Systems Using Artificial Neural Networks
Sensors 2022, 22(8), 3084; https://doi.org/10.3390/s22083084 - 18 Apr 2022
Cited by 2
Abstract
A water supply system is an essential service to the population, as it provides a good that is vital for life. Such a system typically consists of several sensors, transducers, pumps, and other elements, some of which have high costs and/or complex installation requirements. The indirect measurement of a quantity can be used to obtain a desired variable, dispensing with the use of a specific sensor in the plant. Among the contributions of this work are the design of a pressure controller using adaptive control and the use of an artificial neural network to build nonlinear models from inherent system parameters, such as pressure, motor rotation frequency, and control valve angle, for the purpose of estimating the flow. Notable benefits include dispensing with the acquisition of physical flow meters and eliminating their physical installation. Validation was carried out through tests on an experimental bench located in the Laboratory of Energy and Hydraulic Efficiency in Sanitation of the Federal University of Paraiba. The results of the soft sensor were compared with those of an electromagnetic flow sensor, yielding a maximum error of 10%.
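A minimal sketch of such a soft sensor, under our own assumptions: a small neural network regresses flow from pressure, motor rotation frequency, and valve angle, so no physical flow meter is required. The architecture, units, and synthetic training relation are illustrative.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    X = np.random.rand(1000, 3)   # [pressure, rotation frequency, valve angle]
    y = 2.0 * X[:, 0] + X[:, 1] - 0.5 * X[:, 2]   # stand-in for measured flow

    soft_sensor = MLPRegressor(hidden_layer_sizes=(16, 16),
                               max_iter=2000).fit(X, y)
    flow_estimate = soft_sensor.predict([[0.6, 0.8, 0.3]])   # no flow meter needed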

Article
Application of an Improved YOLOv5 Algorithm in Real-Time Detection of Foreign Objects by Ground Penetrating Radar
Remote Sens. 2022, 14(8), 1895; https://doi.org/10.3390/rs14081895 - 14 Apr 2022
Cited by 6
Abstract
Ground penetrating radar (GPR) detection is a popular technology in civil engineering. Because of its advantages of non-destructive testing (NDT) and high work efficiency, GPR is widely used to detect hard foreign objects in soil. However, the interpretation of GPR images relies heavily on the experience of researchers, which may lead to low detection efficiency and a high false recognition rate. Therefore, this paper proposes a real-time GPR detection technique based on deep learning for soil foreign object detection. In this study, GPR image signals are obtained in real time by the GPR instrument and software and are preprocessed to improve their signal-to-noise ratio and image quality. Then, since YOLOv5 detects small targets poorly, this study mitigates false and missed detections in real-time GPR detection by improving the YOLOv5 network structure, adding an attention mechanism, applying data augmentation, and other means. Finally, by establishing a regression equation for the position information of the ground penetrating radar, precise localization of foreign matter in the underground soil is realized.

Article
Exploiting Graph and Geodesic Distance Constraint for Deep Learning-Based Visual Odometry
Remote Sens. 2022, 14(8), 1854; https://doi.org/10.3390/rs14081854 - 12 Apr 2022
Abstract
Visual odometry is the task of estimating the trajectory of a moving agent from consecutive images. It is a hot research topic in both the robotics and computer vision communities and facilitates many applications, such as autonomous driving and virtual reality. Conventional odometry methods predict the trajectory by utilizing the multiple-view geometry between consecutive overlapping images. However, these methods need to be carefully designed and fine-tuned to work well in different environments. Deep learning has been explored to alleviate this challenge by directly predicting the relative pose from paired images. Deep learning-based methods usually focus only on consecutive images, which allows errors to propagate over time. In this paper, a graph loss and a geodesic rotation loss are proposed to enhance deep learning-based visual odometry methods based on graph constraints and geodesic distance, respectively. The graph loss considers not only the relative pose loss of consecutive images, but also the relative pose of non-consecutive images, which is not directly predicted but computed from the relative poses of consecutive ones. The geodesic rotation loss is constructed from the geodesic distance, and the model regresses an element of the Lie algebra so(3) (a 3D vector), which allows robust and stable convergence. To increase efficiency, a random strategy is adopted to select the edges of the graph instead of using all of them; this strategy also provides additional regularization for training the networks. Extensive experiments conducted on visual odometry benchmarks demonstrate that the proposed method achieves performance comparable to that of other supervised learning-based methods, as well as monocular camera-based methods. The source code and weights are made publicly available.
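A minimal sketch of the geodesic rotation distance underlying such a loss: for rotation matrices R1 and R2, the geodesic angle on SO(3) is arccos((trace(R1^T R2) - 1) / 2). The paper's network regresses an so(3) vector; only the distance itself is shown here, with a clamp added for numerical safety.

    import torch

    def geodesic_rotation_loss(R1, R2, eps=1e-7):
        # R1, R2: (batch, 3, 3) rotation matrices
        tr = torch.einsum("bii->b", R1.transpose(1, 2) @ R2)   # batched trace
        cos = ((tr - 1.0) / 2.0).clamp(-1.0 + eps, 1.0 - eps)
        return torch.acos(cos).mean()   # mean geodesic angle in radians

    I = torch.eye(3).unsqueeze(0)
    print(geodesic_rotation_loss(I, I))   # ~0 for identical rotations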

Article
Multi-Agent Deep Q Network to Enhance the Reinforcement Learning for Delayed Reward System
Appl. Sci. 2022, 12(7), 3520; https://doi.org/10.3390/app12073520 - 30 Mar 2022
Cited by 1
Abstract
This study examines various factors and conditions related to the performance of reinforcement learning and defines a multi-agent DQN system (N-DQN) model to improve it. The N-DQN model is implemented with maze solving and ping-pong as examples of delayed-reward systems, in which general DQN learning is difficult to apply. In the performance evaluation, the implemented N-DQN shows about 3.5 times higher learning performance than the Q-Learning algorithm in a reward-sparse environment and reaches the goal about 1.1 times faster than DQN. In addition, through the implementation of prioritized experience replay and a reward-acquisition-section segmentation policy, problems such as the positive bias of existing reinforcement learning models seldom or never occurred. However, because the architecture uses many actors in parallel, the need for additional research on making the system lightweight for further performance improvement has arisen. This paper describes in detail the structure of the proposed multi-agent N-DQN architecture, the algorithms used, and the specification for its implementation.

Article
A Novel Framework for Open-Set Authentication of Internet of Things Using Limited Devices
Sensors 2022, 22(7), 2662; https://doi.org/10.3390/s22072662 - 30 Mar 2022
Cited by 2
Abstract
The Internet of Things (IoT) promises to transform a wide range of fields. However, the open nature of the IoT exposes it to cybersecurity threats, of which identity spoofing is a typical example. Physical layer authentication, which identifies IoT devices based on the physical-layer characteristics of their signals, serves as an effective way to counteract identity spoofing. In this paper, we propose a deep learning-based framework for the open-set authentication of IoT devices. Specifically, additive angular margin softmax (AAMSoftmax) is utilized to enhance the discriminability of the learned features, and a modified OpenMAX classifier is employed to adaptively identify authorized devices and distinguish unauthorized ones. The experimental results for both simulated data and real ADS–B (Automatic Dependent Surveillance–Broadcast) data indicate that our framework achieves superior performance compared to current approaches, especially when the number of devices used for training is limited.
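A minimal sketch of additive angular margin softmax (AAMSoftmax), the loss named above: class logits are cosine similarities between normalized embeddings and class weights, and a margin m is added to the target class angle before scaling by s. The scale, margin, and dimensions are our assumptions.

    import torch
    import torch.nn.functional as F

    def aam_softmax_loss(emb, weight, labels, s=30.0, m=0.2):
        cos = F.normalize(emb) @ F.normalize(weight).t()    # (batch, n_classes)
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        target_logit = torch.cos(theta + m)                 # penalize target angle
        logits = cos.scatter(1, labels.unsqueeze(1),
                             target_logit.gather(1, labels.unsqueeze(1)))
        return F.cross_entropy(s * logits, labels)

    loss = aam_softmax_loss(torch.randn(32, 64), torch.randn(10, 64),
                            torch.randint(0, 10, (32,)))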

Article
Road Speed Prediction Scheme by Analyzing Road Environment Data
Sensors 2022, 22(7), 2606; https://doi.org/10.3390/s22072606 - 29 Mar 2022
Cited by 1
Abstract
Road speed is an important indicator of traffic congestion. The occurrence of congestion can therefore be reduced by predicting road speed, since predicted speeds can be provided to users to distribute traffic, and congestion prediction techniques can offer alternative routes in advance to help users avoid traffic jams. In this paper, we propose a machine-learning-based road speed prediction scheme that analyzes road environment data. The proposed scheme uses not only the speed data of the target road, but also the speed data of neighboring roads that can affect the speed of the target road, and it can accurately predict both the average road speed and rapidly changing road speeds. The scheme uses historical average speed data from the target road, organized by day of the week and hour, to reflect the average traffic flow, and it analyzes speed changes in sections where the road speed changes rapidly, since road speeds may change abruptly as a result of unexpected events such as accidents, disasters, and construction work. The scheme predicts final road speeds by applying historical road speeds and events as weights for the prediction, and it also considers weather conditions. Long short-term memory (LSTM), which is suitable for sequential data learning, is used as the machine learning algorithm for speed prediction. The proposed scheme can predict road speeds 30 min ahead by using weather data and speed data from the target and neighboring roads as input, and we demonstrate its capabilities through various performance evaluations.

Article
Condition Monitoring of Ball Bearings Based on Machine Learning with Synthetically Generated Data
Sensors 2022, 22(7), 2490; https://doi.org/10.3390/s22072490 - 24 Mar 2022
Cited by 2
Abstract
Rolling element bearing faults contribute significantly to overall machine failures, which demands different strategies for condition monitoring and failure detection. Recent advancements in machine learning further expedite the quest to improve fault detection accuracy for economic purposes by minimizing scheduled maintenance. However, challenging tasks persist, such as gathering high-quality data to explicitly train an algorithm, and such data are limited by the availability of historical measurements. In addition, failure data from measurements are typically valid only for the particular machinery components and their settings. In this study, 3D multi-body simulations of a roller bearing with different faults were conducted to create a variety of synthetic training data for a deep learning convolutional neural network (CNN) and, hence, to address these challenges. The vibration data from the simulations are superimposed with noise collected from the measurement of a healthy bearing and are subsequently converted into 2D images via wavelet transformation before being fed into the CNN for training. Measurements of damaged bearings are used to validate the algorithm's performance.
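A minimal sketch of the preprocessing described above: a simulated vibration signal with superimposed measurement noise is converted into a 2D time-frequency image via the continuous wavelet transform, ready to be fed to a CNN. The sampling rate, fault frequency, scales, and wavelet choice are our assumptions.

    import numpy as np
    import pywt

    fs = 12_000                                   # assumed sampling rate, Hz
    t = np.arange(0, 0.5, 1 / fs)
    sim_fault = np.sin(2 * np.pi * 157 * t)       # stand-in for a simulated fault
    noise = 0.3 * np.random.randn(t.size)         # stand-in for measured noise
    signal = sim_fault + noise                    # superimposed training signal

    coeffs, _ = pywt.cwt(signal, scales=np.arange(1, 65), wavelet="morl")
    image = np.abs(coeffs)                        # (64, N) image for the CNN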

Article
Classification of Tree Species in Different Seasons and Regions Based on Leaf Hyperspectral Images
Remote Sens. 2022, 14(6), 1524; https://doi.org/10.3390/rs14061524 - 21 Mar 2022
Cited by 1
Abstract
This paper aims to establish a tree species identification model suitable for different seasons and regions based on leaf hyperspectral images, and to develop a more effective hyperspectral identification algorithm. First, the reflectance spectra of leaves in different seasons and regions were analyzed. Then, to address the problem that zero elements in sparse random (SR) coding matrices degrade the classification performance of error-correcting output codes (ECOC), two versions of a supervision-mechanism-based ECOC algorithm, namely SM-ECOC-V1 and SM-ECOC-V2, are proposed in this paper. In addition, the performance of the proposed algorithms was compared with that of six traditional algorithms using both all bands and feature bands. The experimental results show that seasonal and regional changes affect the reflectance spectra of leaves, especially in the near-infrared region of 760–1000 nm. When the spectral information of different seasons and regions is added to the identification model, tree species can be effectively classified. SM-ECOC-V2 achieves the best classification performance with both all bands and feature bands. Furthermore, both SM-ECOC-V1 and SM-ECOC-V2 outperform ECOC under the SR coding strategy, indicating that the proposed methods can effectively avoid the influence of zero elements in the SR coding matrix on classification performance.
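A minimal sketch of the generic ECOC framework that the proposed SM-ECOC variants build on: scikit-learn's OutputCodeClassifier trains binary learners according to a random coding matrix and decodes class labels from their outputs. The supervision mechanism for sparse matrices proposed in the paper is not reproduced here; the data and parameters are stand-ins.

    import numpy as np
    from sklearn.multiclass import OutputCodeClassifier
    from sklearn.svm import SVC

    X = np.random.rand(120, 30)        # stand-in for band reflectances
    y = np.random.randint(0, 4, 120)   # stand-in for tree species labels

    ecoc = OutputCodeClassifier(SVC(), code_size=2.0, random_state=0).fit(X, y)
    print(ecoc.predict(X[:5]))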

Article
DAN-SuperPoint: Self-Supervised Feature Point Detection Algorithm with Dual Attention Network
Sensors 2022, 22(5), 1940; https://doi.org/10.3390/s22051940 - 2 Mar 2022
Abstract
In view of the poor performance of traditional feature point detection methods in low-texture situations, we design a new self-supervised feature extraction network based on deep learning that can be applied to the front-end feature extraction module of a visual odometry (VO) system. First, the network uses a feature pyramid structure to perform multi-scale feature fusion and obtain a feature map containing multi-scale information. This feature map is then passed through a position attention module and a channel attention module to obtain the feature dependencies of the spatial and channel dimensions, respectively, and the weighted spatial and channel feature maps are added element by element to enhance the feature representation. Finally, the weighted feature maps are used to train the detector and descriptor, respectively. In addition, to improve the prediction accuracy of feature point locations and speed up network convergence, we add a confidence loss term and a tolerance loss term to the loss functions of the detector and descriptor, respectively. Experiments show that our network achieves satisfactory performance on the HPatches and KITTI datasets, indicating the reliability of the network.