Sensors, Volume 23, Issue 20 (October-2 2023) – 311 articles

Cover Story (view full-size image): Thermal tomography (TT) is a promising non-contact nondestructive imaging method for the detection of pores in metallic structures printed with the laser powder bed fusion (LPBF) additive manufacturing method. We introduce a novel multi-task learning (MTL) approach, which simultaneously performs a classification of synthetic TT images and segmentation of experimental scanning electron microscopy (SEM) images. Synthetic TT images are obtained from computer simulations of metallic structures with subsurface elliptical-shape defects, while experimental SEM images are obtained from imaging of LPBF-printed stainless-steel coupons. The results of this study show that the MTL network performs better in both the classification and segmentation tasks, as compared to the conventional approach when the individual tasks are performed independently of each other. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms, with PDF as the official format. To view a paper in PDF form, click its "PDF Full-text" link and open the file with the free Adobe Reader.
19 pages, 2440 KiB  
Review
An Overview to Molecularly Imprinted Electrochemical Sensors for the Detection of Bisphenol A
by Ying Pan, Mengfan Wu, Mingjiao Shi, Peizheng Shi, Ningbin Zhao, Yangguang Zhu, Hassan Karimi-Maleh, Chen Ye, Cheng-Te Lin and Li Fu
Sensors 2023, 23(20), 8656; https://doi.org/10.3390/s23208656 - 23 Oct 2023
Cited by 1 | Viewed by 1409
Abstract
Bisphenol A (BPA) is an industrial chemical used extensively in plastics and resins. However, its endocrine-disrupting properties pose risks to human health and the environment. Thus, accurate and rapid detection of BPA is crucial for exposure monitoring and risk mitigation. Molecularly imprinted electrochemical sensors (MIES) have emerged as a promising tool for BPA detection due to their high selectivity, sensitivity, affordability, and portability. This review provides a comprehensive overview of recent advances in MIES for BPA detection. We discuss the operating principles, fabrication strategies, materials, and methods used in MIES. Key findings show that MIES demonstrate detection limits comparable or superior to conventional methods like HPLC and GC-MS. Selectivity studies reveal excellent discrimination between BPA and structural analogs. Recent innovations in nanomaterials, novel monomers, and fabrication techniques have enhanced sensitivity, selectivity, and stability. However, limitations exist in reproducibility, selectivity, and stability. While challenges remain, MIES provide a low-cost portable detection method suitable for on-site BPA monitoring in diverse sectors. Further optimization of sensor fabrication and characterization will enable the immense potential of MIES for field-based BPA detection. Full article
(This article belongs to the Special Issue Electrochemical Sensors in Environment, Food and Healthcare)

19 pages, 5655 KiB  
Article
A Robust and Integrated Visual Odometry Framework Exploiting the Optical Flow and Feature Point Method
by Haiyang Qiu, Xu Zhang, Hui Wang, Dan Xiang, Mingming Xiao, Zhiyu Zhu and Lei Wang
Sensors 2023, 23(20), 8655; https://doi.org/10.3390/s23208655 - 23 Oct 2023
Viewed by 1012
Abstract
In this paper, we propose a robust and integrated visual odometry framework that exploits both the optical flow and the feature point method, achieving faster pose estimation with considerable accuracy and robustness during the odometry process. Our method utilizes optical flow tracking to accelerate the feature point matching process. Two visual odometry methods are used in the odometry: a global feature point method and a local feature point method. When optical flow tracking is good and enough key points are successfully matched, the local feature point method uses prior information from the optical flow to estimate the relative pose transformation. When optical flow tracking is poor and only a small number of key points are successfully matched, the feature point method with a filtering mechanism is used for pose estimation. By coupling and correlating these two methods, the visual odometry greatly accelerates the computation of relative pose estimation, reducing its computation time to 40% of that of the ORB_SLAM3 front-end odometry while remaining close to the ORB_SLAM3 front-end odometry in accuracy and robustness. The effectiveness of this method was validated and analyzed on the EuRoC dataset within the ORB_SLAM3 open-source framework, and the experimental results support the efficacy of the proposed approach. Full article
(This article belongs to the Section Intelligent Sensors)

19 pages, 5585 KiB  
Article
GPU Implementation of the Improved CEEMDAN Algorithm for Fast and Efficient EEG Time–Frequency Analysis
by Zeyu Wang and Zoltan Juhasz
Sensors 2023, 23(20), 8654; https://doi.org/10.3390/s23208654 - 23 Oct 2023
Cited by 1 | Viewed by 1508
Abstract
Time–frequency analysis of EEG data is a key step in exploring the internal activities of the human brain. Studying oscillations is an important part of the analysis, as they are thought to provide the underlying mechanism for communication between neural assemblies. Traditional methods of analysis, such as Short-Time FFT and Wavelet Transforms, are not ideal for this task due to the time–frequency uncertainty principle and their reliance on predefined basis functions. Empirical Mode Decomposition and its variants are more suited to this task as they are able to extract the instantaneous frequency and phase information but are too time consuming for practical use. Our aim was to design and develop a massively parallel and performance-optimized GPU implementation of the Improved Complete Ensemble EMD with Adaptive Noise (CEEMDAN) algorithm that significantly reduces the computational time (from hours to seconds) of such analysis. The resulting GPU program, which is publicly available, was validated against a MATLAB reference implementation and reached over a 260× speedup for actual EEG measurement data, and provided predicted speedups in the range of 3000–8300× for longer measurements when sufficient memory was available. The significance of our research is that this implementation can enable researchers to perform EMD-based EEG analysis routinely, even for high-density EEG measurements. The program is suitable for execution on desktop, cloud, and supercomputer systems and can be the starting point for future large-scale multi-GPU implementations. Full article
(This article belongs to the Special Issue Computational Challenges of High-Density Biosensor Data Analysis)

16 pages, 4691 KiB  
Article
Revisiting Mehrotra and Nichani’s Corner Detection Method for Improvement with Truncated Anisotropic Gaussian Filtering
by Baptiste Magnier and Khizar Hayat
Sensors 2023, 23(20), 8653; https://doi.org/10.3390/s23208653 - 23 Oct 2023
Cited by 1 | Viewed by 981
Abstract
In the early 1990s, Mehrotra and Nichani developed a filtering-based corner detection method, which, though conceptually intriguing, suffered from limited reliability, leading to minimal references in the literature. Despite its underappreciation, the core concept of this method, rooted in the half-edge concept and directional truncated first derivative of Gaussian, holds significant promise. This article presents a comprehensive assessment of the enhanced corner detection algorithm, combining both qualitative and quantitative evaluations. We thoroughly explore the strengths, limitations, and overall effectiveness of our approach by incorporating visual examples and conducting evaluations. Through experiments conducted on both synthetic and real images, we demonstrate the efficiency and reliability of the proposed algorithm. Collectively, our experimental assessments substantiate that our modifications have transformed the method into one that outperforms established benchmark techniques. Due to its ease of implementation, our improved corner detection process has the potential to become a valuable reference for the computer vision community when dealing with corner detection algorithms. This article thus highlights the quantitative achievements of our refined corner detection algorithm, building upon the groundwork laid by Mehrotra and Nichani, and offers valuable insights for the computer vision community seeking robust corner detection solutions. Full article
(This article belongs to the Special Issue Digital Image Processing and Sensing Technologies)

16 pages, 4405 KiB  
Article
Testing and Evaluation of Low-Cost Sensors for Developing Open Smart Campus Systems Based on IoT
by Pascal Neis, Dominik Warch and Max Hoppe
Sensors 2023, 23(20), 8652; https://doi.org/10.3390/s23208652 - 23 Oct 2023
Cited by 3 | Viewed by 1299
Abstract
Urbanization has led to the need for the intelligent management of various urban challenges, from traffic to energy. In this context, smart campuses and buildings emerge as microcosms of smart cities, offering both opportunities and challenges in technology and communication integration. This study sets itself apart by prioritizing sustainable, adaptable, and reusable solutions through an open-source framework and open data protocols. We utilized the Internet of Things (IoT) and cost-effective sensors to capture real-time data for three different use cases: real-time monitoring of visitor counts, room and parking occupancy, and the collection of environment and climate data. Our analysis revealed that the implementation of the utilized hardware and software combination significantly improved the implementation of open smart campus systems, providing a usable visitor information system for students. Moreover, our focus on data privacy and technological versatility offers valuable insights into real-world applicability and limitations. This study contributes a novel framework that not only drives technological advancements but is also readily adaptable, improvable, and reusable across diverse settings, thereby showcasing the untapped potential of smart, sustainable systems. Full article
(This article belongs to the Special Issue Internet of Things for Smart City Application)

27 pages, 6199 KiB  
Article
End-to-End Autonomous Navigation Based on Deep Reinforcement Learning with a Survival Penalty Function
by Shyr-Long Jeng and Chienhsun Chiang
Sensors 2023, 23(20), 8651; https://doi.org/10.3390/s23208651 - 23 Oct 2023
Cited by 5 | Viewed by 1231
Abstract
An end-to-end approach to autonomous navigation that is based on deep reinforcement learning (DRL) with a survival penalty function is proposed in this paper. Two actor–critic (AC) frameworks, namely, deep deterministic policy gradient (DDPG) and twin-delayed DDPG (TD3), are employed to enable a nonholonomic wheeled mobile robot (WMR) to perform navigation in dynamic environments containing obstacles and for which no maps are available. A comprehensive reward based on the survival penalty function is introduced; this approach effectively solves the sparse reward problem and enables the WMR to move toward its target. Consecutive episodes are connected to increase the cumulative penalty for scenarios involving obstacles; this method prevents training failure and enables the WMR to plan a collision-free path. Simulations are conducted for four scenarios—movement in an obstacle-free space, in a parking lot, at an intersection without and with a central obstacle, and in a multiple obstacle space—to demonstrate the efficiency and operational safety of our method. For the same navigation environment, compared with the DDPG algorithm, the TD3 algorithm exhibits faster numerical convergence and higher stability in the training phase, as well as a higher task execution success rate in the evaluation phase. Full article
(This article belongs to the Special Issue AI-Driving for Autonomous Vehicles)

17 pages, 1587 KiB  
Article
Parameter-Free State Estimation Based on Kalman Filter with Attention Learning for GPS Tracking in Autonomous Driving System
by Xue-Bo Jin, Wei Chen, Hui-Jun Ma, Jian-Lei Kong, Ting-Li Su and Yu-Ting Bai
Sensors 2023, 23(20), 8650; https://doi.org/10.3390/s23208650 - 23 Oct 2023
Viewed by 1695
Abstract
GPS-based maneuvering target localization and tracking is a crucial aspect of autonomous driving and is widely used in navigation, transportation, autonomous vehicles, and other fields. The classical tracking approach employs a Kalman filter with precise system parameters to estimate the state. However, it is difficult to model their uncertainty because of the complex motion of maneuvering targets and the unknown sensor characteristics. Furthermore, GPS data often involve unknown colored noise, making it challenging to obtain accurate system parameters, which can degrade the performance of the classical methods. To address these issues, we present a state estimation method based on the Kalman filter that does not require predefined parameters but instead uses attention learning. We use a transformer encoder with a long short-term memory (LSTM) network to extract dynamic characteristics, and estimate the system model parameters online using the expectation maximization (EM) algorithm, based on the output of the attention learning module. Finally, the Kalman filter computes the dynamic state estimates using the learned parameters of the system dynamics and measurement characteristics. Based on GPS simulation data and the Geolife Beijing vehicle GPS trajectory dataset, the experimental results demonstrated that our method outperformed classical and pure model-free network estimation approaches in estimation accuracy, providing an effective solution for practical maneuvering-target tracking applications. Full article
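The classical tracking pipeline this paper builds on can be sketched as a constant-velocity Kalman filter over noisy GPS positions. The sketch below is illustrative only: the process and measurement noise covariances Q and R are fixed, hand-picked values, whereas the paper's contribution is precisely to estimate such parameters online with attention learning and EM. All names and numbers here are assumptions, not the authors' code.

```python
import numpy as np

def kalman_track(zs, dt=1.0, q=0.1, r=4.0):
    """Constant-velocity Kalman filter over 2D position measurements.

    q, r are illustrative noise levels; the paper learns them online.
    """
    F = np.eye(4)                                # state: [x, y, vx, vy]
    F[0, 2] = F[1, 3] = dt                       # constant-velocity model
    H = np.zeros((2, 4))
    H[0, 0] = H[1, 1] = 1.0                      # we observe position only
    Q = q * np.eye(4)                            # process noise covariance
    R = r * np.eye(2)                            # measurement noise covariance
    x = np.zeros(4)
    P = np.eye(4) * 100.0                        # large initial uncertainty
    out = []
    for z in zs:
        # Predict step
        x = F @ x
        P = F @ P @ F.T + Q
        # Update step
        S = H @ P @ H.T + R                      # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return np.array(out)

rng = np.random.default_rng(0)
truth = np.stack([np.arange(50.0), 0.5 * np.arange(50.0)], axis=1)
noisy = truth + rng.normal(0.0, 2.0, truth.shape)  # synthetic GPS noise
est = kalman_track(noisy)
```

After a short burn-in, the filtered track sits closer to the ground truth than the raw measurements, which is the baseline behavior the learned Q and R are meant to improve on.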

22 pages, 4957 KiB  
Article
SLAV-Sim: A Framework for Self-Learning Autonomous Vehicle Simulation
by Jacob Crewe, Aditya Humnabadkar, Yonghuai Liu, Amr Ahmed and Ardhendu Behera
Sensors 2023, 23(20), 8649; https://doi.org/10.3390/s23208649 - 23 Oct 2023
Viewed by 1899
Abstract
With the advent of autonomous vehicles, sensors and algorithm testing have become crucial parts of the autonomous vehicle development cycle. Having access to real-world sensors and vehicles is a dream for researchers and small-scale original equipment manufacturers (OEMs) due to the software and hardware development life-cycle duration and high costs. Therefore, simulator-based virtual testing has gained traction over the years as the preferred testing method due to its low cost, efficiency, and effectiveness in executing a wide range of testing scenarios. Companies like ANSYS and NVIDIA have come up with robust simulators, and open-source simulators such as CARLA have also populated the market. However, there is a lack of lightweight and simple simulators catering to specific test cases. In this paper, we introduce the SLAV-Sim, a lightweight simulator that specifically trains the behaviour of a self-learning autonomous vehicle. This simulator has been created using the Unity engine and provides an end-to-end virtual testing framework for different reinforcement learning (RL) algorithms in a variety of scenarios using camera sensors and raycasts. Full article
(This article belongs to the Section Vehicular Sensing)

73 pages, 7689 KiB  
Review
Wearable Nano-Based Gas Sensors for Environmental Monitoring and Encountered Challenges in Optimization
by Sara Hooshmand, Panagiotis Kassanos, Meysam Keshavarz, Pelin Duru, Cemre Irmak Kayalan, İzzet Kale and Mustafa Kemal Bayazit
Sensors 2023, 23(20), 8648; https://doi.org/10.3390/s23208648 - 23 Oct 2023
Cited by 6 | Viewed by 4218
Abstract
With a rising emphasis on public safety and quality of life, there is an urgent need to ensure optimal air quality, both indoors and outdoors. Detecting toxic gaseous compounds plays a pivotal role in shaping our sustainable future. This review aims to elucidate the advancements in smart wearable (nano)sensors for monitoring harmful gaseous pollutants, such as ammonia (NH3), nitric oxide (NO), nitrous oxide (N2O), nitrogen dioxide (NO2), carbon monoxide (CO), carbon dioxide (CO2), hydrogen sulfide (H2S), sulfur dioxide (SO2), ozone (O3), hydrocarbons (CxHy), and hydrogen fluoride (HF). Differentiating this review from its predecessors, we shed light on the challenges faced in enhancing sensor performance and offer a deep dive into the evolution of sensing materials, wearable substrates, electrodes, and types of sensors. Noteworthy materials for robust detection systems encompass 2D nanostructures, carbon nanomaterials, conducting polymers, nanohybrids, and metal oxide semiconductors. A dedicated section dissects the significance of circuit integration, miniaturization, real-time sensing, repeatability, reusability, power efficiency, gas-sensitive material deposition, selectivity, sensitivity, stability, and response/recovery time, pinpointing gaps in the current knowledge and offering avenues for further research. To conclude, we provide insights and suggestions for the prospective trajectory of smart wearable nanosensors in addressing the extant challenges. Full article
(This article belongs to the Topic Advanced Nanomaterials for Sensing Applications)

18 pages, 6630 KiB  
Article
Structural Uncertainty Analysis of High-Temperature Strain Gauge Based on Monte Carlo Stochastic Finite Element Method
by Yazhi Zhao, Fengling Zhang, Yanting Ai, Jing Tian and Zhi Wang
Sensors 2023, 23(20), 8647; https://doi.org/10.3390/s23208647 - 23 Oct 2023
Cited by 1 | Viewed by 888
Abstract
The high-temperature strain gauge is a sensor for strain measurement in high-temperature environments. The measurement results often have a certain divergence, so the uncertainty of the high-temperature strain gauge system is analyzed theoretically. Firstly, a deterministic finite element analysis of the temperature field of the strain gauge is carried out using MATLAB software. Then, the primary sub-model method is used to model the system, and an equivalent thermal load and force are applied to the model. The thermal response of the grid wire is calculated by the finite element method (FEM). Thermal–mechanical coupling analysis is carried out in ANSYS, and the MATLAB program is verified. Finally, the stochastic finite element method (SFEM) combined with the Monte Carlo method (MCM) is used to analyze the effects of the physical parameters, geometric parameters, and load uncertainties on the thermal response of the grid wire. The results show that the differences in temperature and strain calculated by ANSYS and MATLAB are 1.34% and 0.64%, respectively, so the calculation program is accurate and effective. The primary sub-model method is suitable for the finite element modeling of strain gauge systems and effectively reduces the number of elements. The stochastic uncertainty analysis of the thermal response of the grid wire of a high-temperature strain gauge provides a theoretical basis for the dispersion of the strain gauge's measurement results. Full article
(This article belongs to the Section Industrial Sensors)
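The Monte Carlo side of such an analysis can be illustrated in miniature: sample the uncertain inputs, push each sample through a deterministic model, and read the dispersion off the output statistics. The sketch below replaces the paper's stochastic FEM of the grid wire with a toy thermal-strain model eps = alpha * dT; every distribution and number is an illustrative assumption, not a value from the study.

```python
import numpy as np

# Monte Carlo uncertainty propagation in miniature. The uncertain inputs
# (thermal expansion coefficient and temperature rise) are sampled, the
# deterministic model is evaluated per sample, and the output dispersion
# is estimated from the resulting ensemble.
rng = np.random.default_rng(42)
n = 100_000                             # number of Monte Carlo samples
alpha = rng.normal(12e-6, 0.5e-6, n)    # expansion coefficient [1/K], illustrative
dT = rng.normal(800.0, 20.0, n)         # temperature rise [K], illustrative

eps = alpha * dT                        # toy deterministic model per sample

mean = eps.mean()                       # expected thermal strain
std = eps.std()                         # dispersion of the response
cov = std / mean                        # coefficient of variation
```

The same recipe scales from this one-line model to a full FEM solve: only the "evaluate the model per sample" step changes, which is why the paper's SFEM+MCM combination is feasible once the sub-model keeps each solve cheap.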

25 pages, 8371 KiB  
Article
Multisensory System for Long-Term Activity Monitoring to Facilitate Aging-in-Place
by Sergio Lluva-Plaza, Ana Jiménez-Martín, David Gualda-Gómez, José Manuel Villadangos-Carrizo and Juan Jesús García-Domínguez
Sensors 2023, 23(20), 8646; https://doi.org/10.3390/s23208646 - 23 Oct 2023
Cited by 1 | Viewed by 1809
Abstract
Demographic changes and an ageing population require more effective methods to confront the increased prevalence of chronic diseases which generate dependence in older adults as well as an important rise in social expenditure. The challenge is not only to increase life expectancy, but also to ensure that the older adults can fully enjoy that moment in their lives, living where they wish to (private home, nursing home, …). Physical activity (PA) is a representative parameter of a person’s state of health, especially when we are getting older, because it plays an important role in the prevention of diseases, and that is the reason why it is promoted in older adults. One of the goals of this work is to assess the feasibility of objectively measuring the PA levels of older adults wherever they live. In addition, this work proposes long-term monitoring that helps to gather daily activity patterns. We fuse inertial measurements with other technologies (WiFi- and ultrasonic-based location) in order to provide not only PA, but also information about the place where the activities are carried out, including both room-level location and precise positioning (depending on the technology used). With this information, we would be able to generate information about the person’s daily routines which can be very useful for the early detection of physical or cognitive impairment. Full article

15 pages, 7893 KiB  
Article
Technical Solution for Monitoring Climatically Active Gases Using the Turbulent Pulsation Method
by Ekaterina Kulakova and Elena Muravyova
Sensors 2023, 23(20), 8645; https://doi.org/10.3390/s23208645 - 23 Oct 2023
Viewed by 1097
Abstract
This article introduces a technical solution for investigating the movement of gases in the atmosphere through the turbulent pulsation method. A comprehensive control system was developed to measure and record the concentrations of carbon dioxide and methane, temperature, humidity, atmospheric air pressure, wind direction, and speed in the vertical plane. The selection and validation of sensor types and brands for each parameter, along with the system for data collection, registration, and device monitoring, were meticulously executed. The AHT21 + ENS160 sensor was chosen for temperature measurement, the BME680 was identified as the optimal sensor for humidity and atmospheric pressure control, Eu-M-CH4-OD was designated for methane gas analysis, and CM1107N for carbon dioxide measurements. Wind direction and speed are best measured with the SM5386V anemometer. The control system utilizes the Arduino controller, and software was developed for the multicomponent gas analyzer. Full article

14 pages, 2440 KiB  
Article
Cranial Electrode Belt Position Improves Diagnostic Possibilities of Electrical Impedance Tomography during Laparoscopic Surgery with Capnoperitoneum
by Kristyna Koldova, Ales Rara, Martin Muller, Tomas Tyll and Karel Roubik
Sensors 2023, 23(20), 8644; https://doi.org/10.3390/s23208644 - 23 Oct 2023
Viewed by 978
Abstract
Laparoscopic surgery with capnoperitoneum brings many advantages to patients, but also emphasizes the negative impact of anesthesia and mechanical ventilation on the lungs. Even though many studies use electrical impedance tomography (EIT) for lung monitoring during these surgeries, it is not clear what the best position of the electrode belt on the patient’s thorax is, considering the cranial shift of the diaphragm. We monitored 16 patients undergoing a laparoscopic surgery with capnoperitoneum using EIT with two independent electrode belts at different tomographic levels: in the standard position of the 4th–6th intercostal space, as recommended by the manufacturer, and in a more cranial position at the level of the axilla. Functional residual capacity (FRC) was measured, and a recruitment maneuver was performed at the end of the procedure by raising the positive end-expiratory pressure (PEEP) by 5 cmH2O. The results based on the spectral analysis of the EIT signal show that the ventilation-related impedance changes are not detectable by the belt in the standard position. In general, the cranial belt position might be more suitable for the lung monitoring during the capnoperitoneum since the ventilation signal remains dominant in the obtained impedance waveform. FRC was significantly decreased by the capnoperitoneum and remained lower also after desufflation. Full article

15 pages, 4525 KiB  
Article
Research on Emotion Recognition Method of Cerebral Blood Oxygen Signal Based on CNN-Transformer Network
by Zihao Jin, Zhiming Xing, Yiran Wang, Shuqi Fang, Xiumin Gao and Xiangmei Dong
Sensors 2023, 23(20), 8643; https://doi.org/10.3390/s23208643 - 23 Oct 2023
Cited by 1 | Viewed by 1107
Abstract
In recent years, research on emotion recognition has become more and more popular, but there are few studies on emotion recognition based on cerebral blood oxygen signals. Since the electroencephalogram (EEG) is easily disturbed by eye movement and is not very portable, this study uses a more comfortable and convenient functional near-infrared spectroscopy (fNIRS) system to record brain signals from participants while they watch three different types of video clips. During the experiment, the changes in cerebral blood oxygen concentration in the 8 channels of the prefrontal cortex were collected and analyzed. We processed and divided the collected cerebral blood oxygen data and used multiple classifiers to identify the three emotional states of joy, neutrality, and sadness. Since the classification accuracy of the convolutional neural network (CNN) in this research is not significantly superior to that of the XGBoost algorithm, this paper proposes a CNN-Transformer network based on the characteristics of time series data to improve the classification accuracy of ternary emotions. The network first uses convolution operations to extract channel features from the multi-channel time series; then the features and the output of the fully connected layer are fed into the Transformer network structure, whose multi-head attention mechanism focuses on different channel domain information and better captures spatial relationships. The experimental results show that the CNN-Transformer network can achieve 86.7% classification accuracy for ternary emotions, about 5% higher than that of the CNN, which provides some help for other research on emotion recognition based on time series data such as fNIRS. Full article
(This article belongs to the Section Optical Sensors)

24 pages, 1759 KiB  
Article
Distributed Denial of Service Attack Detection in Network Traffic Using Deep Learning Algorithm
by Mahrukh Ramzan, Muhammad Shoaib, Ayesha Altaf, Shazia Arshad, Faiza Iqbal, Ángel Kuc Castilla and Imran Ashraf
Sensors 2023, 23(20), 8642; https://doi.org/10.3390/s23208642 - 23 Oct 2023
Cited by 4 | Viewed by 2843
Abstract
Internet security is a major concern these days due to the increasing demand for information technology (IT)-based platforms and cloud computing. With its expansion, the Internet has been facing various types of attacks. Viruses, denial of service (DoS) attacks, distributed DoS (DDoS) attacks, code injection attacks, and spoofing are the most common types of attacks in the modern era. Due to the expansion of IT, the volume and severity of network attacks have been increasing lately. DoS and DDoS are the most frequently reported network traffic attacks. Traditional solutions such as intrusion detection systems and firewalls cannot detect complex DDoS and DoS attacks. With the integration of artificial intelligence-based machine learning and deep learning methods, several novel approaches have been presented for DoS and DDoS detection. In particular, deep learning models have played a crucial role in detecting DDoS attacks due to their exceptional performance. This study adopts deep learning models including the recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU) to detect DDoS attacks on the most recent dataset, CICDDoS2019, and a comparative analysis is conducted with the CICIDS2017 dataset. The comparative analysis contributes to the development of a competent and accurate method for detecting DDoS attacks with reduced execution time and complexity. The experimental results demonstrate that the models perform equally well on the CICDDoS2019 dataset with an accuracy score of 0.99, but there is a difference in execution time, with GRU requiring less execution time than RNN and LSTM. Full article
(This article belongs to the Special Issue Security and Privacy in Wireless Communication and Internet of Things)
Show Figures

Figure 1
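To make the GRU's efficiency advantage over the LSTM concrete (two gates and no separate cell state, hence fewer weight matrices per step), a minimal forward-only GRU cell can be sketched in numpy; the random weights, dimensions, and the final-state classifier mentioned in the comments are illustrative assumptions, not the paper's trained detector:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MinimalGRUCell:
    """Single gated recurrent unit (GRU) cell, forward pass only.

    Illustrates why GRUs are cheaper than LSTMs: only an update gate z and a
    reset gate r, with no separate cell state. Weights here are random
    placeholders; a real detector would learn them from flow features such as
    those in CICDDoS2019."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(n_hidden)
        self.Wz = rng.uniform(-scale, scale, (n_hidden, n_in + n_hidden))
        self.Wr = rng.uniform(-scale, scale, (n_hidden, n_in + n_hidden))
        self.Wh = rng.uniform(-scale, scale, (n_hidden, n_in + n_hidden))
        self.n_hidden = n_hidden

    def forward(self, xs):
        h = np.zeros(self.n_hidden)
        for x in xs:                         # one feature vector per time step
            xh = np.concatenate([x, h])
            z = sigmoid(self.Wz @ xh)        # update gate
            r = sigmoid(self.Wr @ xh)        # reset gate
            h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
            h = (1 - z) * h + z * h_tilde    # blend old state and candidate
        return h  # final state would feed a dense + sigmoid attack/benign head
```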

14 pages, 3155 KiB  
Article
A Compact RF Energy Harvesting Wireless Sensor Node with an Energy Intensity Adaptive Management Algorithm
by Xiaoqiang Liu, Mingxue Li, Xinkai Chen, Yiheng Zhao, Liyi Xiao and Yufeng Zhang
Sensors 2023, 23(20), 8641; https://doi.org/10.3390/s23208641 - 23 Oct 2023
Cited by 2 | Viewed by 884
Abstract
This paper presents a compact RF energy harvesting wireless sensor node with the antenna, rectifier, energy management circuits, and load integrated on a single printed circuit board, with a total size of 53 mm × 59.77 mm × 4.5 mm. By etching rectangular slots in the radiation patch, the antenna area is reduced by 13.9%. The antenna is measured to have an S11 of −24.9 dB at 2.437 GHz and a maximum gain of 4.8 dBi. The rectifier has a maximum RF-to-DC conversion efficiency of 52.53% at 7 dBm input power. The proposed WSN can achieve self-powered operation at a distance of 13.4 m from the transmitter source. To enhance the conversion efficiency under different input energy intensities, this paper establishes an energy model for two operating modes and proposes an energy-intensity adaptive management algorithm. The experiments demonstrate that the proposed WSN can effectively distinguish between the two operating modes based on input energy intensity and realize efficient energy management. Full article
(This article belongs to the Section Sensor Networks)
Show Figures

Figure 1

16 pages, 10952 KiB  
Article
Control Algorithm Design of a Force-Balance Accelerometer
by Zhiqiang Liu, Lei Xia, Bin Wu, Ronghua Huan and Zhilong Huang
Sensors 2023, 23(20), 8640; https://doi.org/10.3390/s23208640 - 23 Oct 2023
Viewed by 1208
Abstract
The force-balanced accelerometer (FBA), unlike other types of sensors, incorporates a closed-loop control. The efficacy of the system is contingent not solely on the hardware, but more critically on the formulation of the control algorithm. Conventional control strategies are usually designed to minimize the response of the sensitive elements, which limits the measurement accuracy and applicable frequency bandwidth of FBAs. In this paper, a control algorithm for a force-balance accelerometer that accounts for time delay is designed based on model predictive control (MPC). A variable augmentation method is proposed to convert the force-balance control into a tractable measurement error minimization problem. A discretization method is applied to handle the time delay in the closed loop. The control algorithm is integrated into a practical FBA. The effectiveness of the proposed control is demonstrated through experiments conducted in an ultra-quiet chamber, as well as simulations. The results show that the closed loop in the FBA has a time delay of 10 times the control period and that, using the proposed control, acceleration signals can be accurately measured over a frequency range exceeding 500 Hz. Meanwhile, the vibration response of the sensitive element of the controlled FBA is maintained at the micron level, which guarantees a large measurement range of the FBA. Full article
(This article belongs to the Section Electronic Sensors)
Show Figures

Figure 1
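Handling an input delay by augmenting the state vector, as the abstract above describes, can be illustrated with a standard construction from discrete-time control; the abstract does not give the paper's exact augmentation or MPC formulation, so this is a generic sketch of the idea (the delayed inputs become extra state components in a shift register):

```python
import numpy as np

def augment_for_delay(A, B, d):
    """Augment a discrete-time system x_{k+1} = A x_k + B u_{k-d} so that the
    delayed inputs become part of the state (a standard trick; the paper's
    exact augmentation may differ). The augmented state is
    z_k = [x_k, u_{k-d}, ..., u_{k-1}]."""
    n, m = B.shape
    if d == 0:
        return A, B
    na = n + d * m
    Aa = np.zeros((na, na))
    Ba = np.zeros((na, m))
    Aa[:n, :n] = A
    Aa[:n, n:n + m] = B                      # oldest stored input drives the plant
    for i in range(d - 1):                   # shift register for stored inputs
        Aa[n + i * m: n + (i + 1) * m, n + (i + 1) * m: n + (i + 2) * m] = np.eye(m)
    Ba[n + (d - 1) * m:, :] = np.eye(m)      # newest input enters the register
    return Aa, Ba
```

Once the delay is absorbed into the augmented matrices, a delay-free MPC can be designed directly on (Aa, Ba), at the cost of a larger state dimension.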

18 pages, 2099 KiB  
Article
Depressive Disorder Recognition Based on Frontal EEG Signals and Deep Learning
by Yanting Xu, Hongyang Zhong, Shangyan Ying, Wei Liu, Guibin Chen, Xiaodong Luo and Gang Li
Sensors 2023, 23(20), 8639; https://doi.org/10.3390/s23208639 - 23 Oct 2023
Cited by 2 | Viewed by 1541
Abstract
Depressive disorder (DD) has become one of the most common mental diseases, seriously endangering both the affected person’s psychological and physical health. Nowadays, a DD diagnosis mainly relies on the experience of clinical psychiatrists and subjective scales, lacking objective, accurate, practical, and automatic diagnosis technologies. Recently, electroencephalogram (EEG) signals have been widely applied for DD diagnosis, but mainly with high-density EEG, which can severely limit the efficiency of EEG data acquisition and reduce the practicability of diagnostic techniques. The current study attempts to achieve accurate and practical DD diagnoses by combining frontal six-channel EEG signals and deep learning models. To this end, 10 min clinical resting-state EEG signals were collected from 41 DD patients and 34 healthy controls (HCs). Two deep learning models, a multi-resolution convolutional neural network (MRCNN) combined with long short-term memory (LSTM) (named MRCNN-LSTM) and an MRCNN combined with residual squeeze and excitation (RSE) (named MRCNN-RSE), were proposed for DD recognition. The results of this study showed that higher EEG frequency bands yielded better classification performance for DD diagnosis. The MRCNN-RSE model achieved the highest classification accuracy of 98.48 ± 0.22% with 8–30 Hz EEG signals. These findings indicate that the proposed analytical framework can provide an accurate and practical strategy for DD diagnosis, as well as essential theoretical and technical support for the treatment and efficacy evaluation of DD. Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications—2nd Edition)
Show Figures

Figure 1

27 pages, 13491 KiB  
Article
Safety Evaluation of Reinforced Concrete Structures Using Multi-Source Fusion Uncertainty Cloud Inference and Experimental Study
by Zhao Liu, Huiyong Guo and Bo Zhang
Sensors 2023, 23(20), 8638; https://doi.org/10.3390/s23208638 - 22 Oct 2023
Viewed by 950
Abstract
Structural damage detection and safety evaluation have emerged as a core driving force in structural health monitoring (SHM). Considering the multi-source monitoring data in sensing systems and the uncertainty caused by initial defects and monitoring errors, in this study we develop a comprehensive method for evaluating structural safety, named multi-source fusion uncertainty cloud inference (MFUCI), which characterizes the relationship between condition indexes and structural performance in order to quantify the structural health status. Firstly, based on cloud theory, the cloud numerical characteristics of the condition index cloud drops are used to establish the qualitative rule base. Next, the proposed multi-source fusion generator yields a multi-source joint certainty degree, which is then transformed into cloud drops carrying certainty degree information. Lastly, a quantitative structural health evaluation is performed through precision processing. This study focuses on the numerical simulation of an RC frame at the structural level and an RC T-beam damage test at the component level, based on the stiffness degradation process. The results show that the proposed method is effective at evaluating the health of components and structures in a quantitative manner. It demonstrates reliability and robustness by incorporating uncertainty information through noise immunity and cross-domain inference, outperforming baseline models such as Bayesian neural networks (BNNs) in uncertainty estimation and LSTM in point estimation. Full article
(This article belongs to the Topic AI Enhanced Civil Infrastructure Safety)
Show Figures

Figure 1

16 pages, 2229 KiB  
Article
Full-Perception Robotic Surgery Environment with Anti-Occlusion Global–Local Joint Positioning
by Hongpeng Wang, Tianzuo Liu, Jianren Chen, Chongshan Fan, Yanding Qin and Jianda Han
Sensors 2023, 23(20), 8637; https://doi.org/10.3390/s23208637 - 22 Oct 2023
Cited by 1 | Viewed by 1137
Abstract
The robotic surgery environment represents a typical scenario of human–robot cooperation. In such a scenario, individuals, robots, and medical devices move relative to each other, leading to unforeseen mutual occlusion. Traditional methods use a binocular optical tracking system (OTS) to focus on the local surgical site, without considering the integrity of the scene, and the workspace is also restricted. To address this challenge, we propose the concept of a full-perception robotic surgery environment and build a global–local joint positioning framework. Furthermore, based on data characteristics, an improved Kalman filter method is proposed to improve positioning accuracy. Finally, drawing from the view margin model, we design a method to evaluate positioning accuracy in a dynamic occlusion environment. The experimental results demonstrate that our method yields better positioning results than classical filtering methods. Full article
(This article belongs to the Special Issue Sensing Technologies in Medical Robot)
Show Figures

Figure 1

21 pages, 8278 KiB  
Article
ECG Multi-Emotion Recognition Based on Heart Rate Variability Signal Features Mining
by Ling Wang, Jiayu Hao and Tie Hua Zhou
Sensors 2023, 23(20), 8636; https://doi.org/10.3390/s23208636 - 22 Oct 2023
Viewed by 2705
Abstract
Heart rate variability (HRV) serves as a significant physiological measure that mirrors the regulatory capacity of the cardiac autonomic nervous system. It not only indicates the extent of the autonomic nervous system’s influence on heart function but also unveils the connection between emotions and psychological disorders. Currently, in the field of emotion recognition using HRV, most methods focus on feature extraction through the comprehensive analysis of signal characteristics; however, these methods lack in-depth analysis of the local features in the HRV signal and cannot fully utilize the information of the HRV signal. Therefore, we propose the HRV Emotion Recognition (HER) method, utilizing the amplitude level quantization (ALQ) technique for feature extraction. First, we employ the emotion quantification analysis (EQA) technique to impartially assess the semantic resemblance of emotions within the domain of emotional arousal. Then, we use the ALQ method to extract rich local information features by analyzing the local information in each frequency range of the HRV signal. Finally, the extracted features are classified using a logistic regression (LR) classification algorithm, which can achieve efficient and accurate emotion recognition. According to the experimental findings, the approach surpasses existing techniques in emotion recognition accuracy, achieving an average accuracy rate of 84.3%. Therefore, the HER method proposed in this paper can effectively utilize the local features in HRV signals to achieve efficient and accurate emotion recognition. This will provide strong support for emotion research in psychology, medicine, and other fields. Full article
(This article belongs to the Special Issue Advanced-Sensors-Based Emotion Sensing and Recognition)
Show Figures

Figure 1
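The amplitude level quantization (ALQ) step described in the abstract above can be sketched as follows; the level count, min-max normalization, and histogram features are illustrative assumptions about how per-band local amplitude information might be encoded, not the paper's exact procedure:

```python
import numpy as np

def alq_features(band_signal, n_levels=8):
    """Amplitude level quantization (ALQ) sketch: quantize a band-limited HRV
    segment into n_levels amplitude bins and return the normalized occupancy
    histogram as a local feature vector."""
    lo, hi = band_signal.min(), band_signal.max()
    if hi == lo:                      # flat segment: all mass in the lowest level
        hist = np.zeros(n_levels)
        hist[0] = 1.0
        return hist
    levels = np.floor((band_signal - lo) / (hi - lo) * n_levels).astype(int)
    levels = np.clip(levels, 0, n_levels - 1)
    hist = np.bincount(levels, minlength=n_levels).astype(float)
    return hist / hist.sum()

def her_feature_vector(band_signals, n_levels=8):
    """Concatenate ALQ histograms from each HRV frequency band (e.g. VLF/LF/HF);
    the result would then be fed to a logistic regression classifier."""
    return np.concatenate([alq_features(b, n_levels) for b in band_signals])
```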

19 pages, 6511 KiB  
Article
Lightweight 3D Dense Autoencoder Network for Hyperspectral Remote Sensing Image Classification
by Yang Bai, Xiyan Sun, Yuanfa Ji, Wentao Fu and Xiaoyu Duan
Sensors 2023, 23(20), 8635; https://doi.org/10.3390/s23208635 - 22 Oct 2023
Viewed by 1097
Abstract
The lack of labeled training samples restricts the improvement of Hyperspectral Remote Sensing Image (HRSI) classification accuracy based on deep learning methods. In order to improve the HRSI classification accuracy when there are few training samples, a Lightweight 3D Dense Autoencoder Network (L3DDAN) is proposed. Structurally, the L3DDAN is designed as a stacked autoencoder which consists of an encoder and a decoder. The encoder is a hybrid combination of 3D convolutional operations and a 3D dense block for extracting deep features from raw data. The decoder, composed of 3D deconvolution operations, is designed to reconstruct the data. The L3DDAN is trained first by unsupervised learning without labeled samples and then by supervised learning with a small number of labeled samples. The network composed of the fine-tuned encoder and trained classifier is used for classification tasks. Extensive comparative experiments on three benchmark HRSI datasets demonstrate that the proposed framework, with fewer trainable parameters, maintains performance superior to that of the other eight state-of-the-art algorithms when only a few training samples are available. The proposed L3DDAN can be applied to HRSI classification tasks, such as vegetation classification. Future work mainly focuses on training time reduction and applications on more real-world datasets. Full article
(This article belongs to the Section Remote Sensors)
Show Figures

Figure 1

15 pages, 4324 KiB  
Article
Optical Multimode Fiber-Based Pipe Leakage Sensor Using Speckle Pattern Analysis
by Jonathan Philosof, Yevgeny Beiderman, Sergey Agdarov, Yafim Beiderman and Zeev Zalevsky
Sensors 2023, 23(20), 8634; https://doi.org/10.3390/s23208634 - 22 Oct 2023
Viewed by 1165
Abstract
Water is an invaluable resource quickly becoming scarce in many parts of the world. Therefore, the importance of efficiency in water supply and distribution has greatly increased. Some of the main tools for limiting losses in supply and distribution networks are leakage sensors that enable real-time monitoring. With fiber optics recently becoming a commodity, along with steady advances in computing power and its miniaturization, multipurpose sensors relying on these technologies have gradually become common. In this study, we explore the development and testing of a multimode optical-fiber-based pipe monitoring and leakage detector based on statistical and machine learning analyses of speckle patterns captured from the fiber’s outlet by a defocused camera. The sensor was placed inside or over a PVC pipe with covered and exposed core configurations, while 2 to 8 mm diameter pipe leaks were simulated under varied water flow and pressure. We found an overall leak size determination accuracy of 75.8% for a 400 µm covered fiber and of 68.3% for a 400 µm exposed fiber and demonstrated that our sensor detected pipe bursts, outside interventions, and shocks. This result was consistent for the sensors fixed inside and outside the pipe with both covered and exposed fibers. Full article
(This article belongs to the Section Optical Sensors)
Show Figures

Figure 1

19 pages, 3705 KiB  
Article
FilterformerPose: Satellite Pose Estimation Using Filterformer
by Ruida Ye, Lifen Wang, Yuan Ren, Yujing Wang, Xiaocen Chen and Yufei Liu
Sensors 2023, 23(20), 8633; https://doi.org/10.3390/s23208633 - 22 Oct 2023
Viewed by 1101
Abstract
Satellite pose estimation plays a crucial role within the aerospace field, impacting satellite positioning, navigation, control, orbit design, on-orbit maintenance (OOM), and collision avoidance. However, the accuracy of vision-based pose estimation is severely constrained by the complex spatial environment, including variable solar illumination and the diffuse reflection of the Earth’s background. To overcome these problems, we introduce a novel satellite pose estimation network, FilterformerPose, which uses a convolutional neural network (CNN) backbone for feature learning and extracts feature maps at various CNN layers. Subsequently, these maps are fed into distinct translation and orientation regression networks, effectively decoupling object translation and orientation information. Within the pose regression network, we have devised a filter-based transformer encoder model, named filterformer, and constructed a hypernetwork-like design based on the filter self-attention mechanism to effectively remove noise and generate adaptive weight information. The related experiments were conducted using the Unreal Rendered Spacecraft On-Orbit (URSO) dataset, yielding superior results compared to alternative methods. We also achieved better results in the camera pose localization task, indicating that FilterformerPose can be adapted to other computer vision downstream tasks. Full article
(This article belongs to the Section Intelligent Sensors)
Show Figures

Figure 1

17 pages, 4489 KiB  
Article
Loop Closure Detection Method Based on Similarity Differences between Image Blocks
by Yizhe Huang, Bin Huang, Zhifu Zhang, Yuanyuan Shi, Yizhao Yuan and Jinfeng Sun
Sensors 2023, 23(20), 8632; https://doi.org/10.3390/s23208632 - 22 Oct 2023
Viewed by 1022
Abstract
Variations with respect to perspective, lighting, weather, and interference from dynamic objects may all have an impact on the accuracy of the entire system during the autonomous positioning and navigation of mobile visual simultaneous localization and mapping (SLAM) robots. As an essential element of visual SLAM systems, loop closure detection plays a vital role in eliminating the accumulated errors induced by the front end and guaranteeing the overall consistency of the map. Presently, deep-learning-based loop closure detection techniques place more emphasis on enhancing the robustness of image descriptors while neglecting similarity calculations or the connections within the internal regions of the image. In response to this issue, this article proposes a loop closure detection method based on similarity differences between image blocks. Firstly, image descriptors are extracted using a lightweight convolutional neural network (CNN) model with effective loop closure detection. Subsequently, the image pairs with the greatest degree of similarity are evenly divided into blocks, and the level of similarity among the blocks is used to recalculate the overall similarity of the image pairs. The block similarity calculation module can effectively reduce the similarity of incorrect loop closure image pairs, making it easier to identify correct loop closures. Finally, the approach proposed in this article is compared with loop closure detection methods based on four distinct CNN models in terms of the recall rate at 100% precision, and it performs significantly better. Applying the proposed block similarity calculation module to the aforementioned four CNN models also increases their recall rate at 100% precision; this shows that the proposed method successfully improves the loop closure detection effect and that the similarity calculation module in the algorithm has a certain degree of generality. Full article
(This article belongs to the Section Sensors and Robotics)
Show Figures

Figure 1
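The block-based similarity recalculation described in the abstract above can be sketched as follows; the even grid, the cosine metric, and the mean aggregation are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def cosine(a, b):
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return float(a @ b / (na * nb)) if na > 0 and nb > 0 else 0.0

def block_similarity(img_a, img_b, grid=(4, 4)):
    """Recompute the similarity of a candidate loop closure pair from the
    similarities of corresponding image blocks. Images (or CNN feature maps)
    are split into an even grid; averaging per-block cosine similarity is one
    plausible aggregation rule."""
    rows, cols = grid
    h, w = img_a.shape[0] // rows, img_a.shape[1] // cols
    sims = []
    for r in range(rows):
        for c in range(cols):
            a = img_a[r * h:(r + 1) * h, c * w:(c + 1) * w].ravel()
            b = img_b[r * h:(r + 1) * h, c * w:(c + 1) * w].ravel()
            sims.append(cosine(a, b))
    return float(np.mean(sims))   # low block agreement pulls down false loop pairs
```

The point of scoring blocks separately is that two globally similar but structurally different scenes will disagree in at least some regions, lowering the recomputed score for incorrect loop candidates.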

22 pages, 2593 KiB  
Article
Adaptive Output Containment Tracking Control for Heterogeneous Wide-Area Networks with Aperiodic Intermittent Communication and Uncertain Leaders
by Yanpeng Shi, Jiangping Hu and Bijoy Kumar Ghosh
Sensors 2023, 23(20), 8631; https://doi.org/10.3390/s23208631 - 22 Oct 2023
Viewed by 804
Abstract
This paper proposes an adaptive distributed hybrid control approach to investigate the output containment tracking problem of heterogeneous wide-area networks with intermittent communication. First, a clustered network is modeled for a wide-area scenario. An aperiodic intermittent communication mechanism is exerted on the clusters such that clusters only communicate through leaders. Second, in order to remove the assumption that each follower must know the system matrix of the leaders and achieve output containment, a distributed adaptive hybrid control strategy is proposed for each agent under the internal model and adaptive estimation mechanism. Third, sufficient conditions based on average dwell-time are provided for the output containment achievement using a Lyapunov function method, from which the exponential stability of the closed-loop system is analyzed. Finally, simulation results are presented to demonstrate the effectiveness of the proposed adaptive distributed intermittent control strategy. Full article
Show Figures

Figure 1

17 pages, 3060 KiB  
Article
Phased Feature Extraction Network for Vehicle Search Tasks Based on Cross-Camera for Vehicle–Road Collaborative Perception
by Hai Wang, Yaqing Niu, Long Chen, Yicheng Li and Tong Luo
Sensors 2023, 23(20), 8630; https://doi.org/10.3390/s23208630 - 22 Oct 2023
Cited by 1 | Viewed by 1034
Abstract
The objective of vehicle search is to locate and identify vehicles in uncropped, real-world images; it combines two tasks: vehicle detection and re-identification (Re-ID). As an emerging research topic, vehicle search plays a significant role in vehicle–road collaborative perception for cooperative autonomous driving and has become a trend in the future development of intelligent driving. However, there has been no suitable dataset for this study. The Tsinghua University DAIR-V2X dataset is utilized to create the first cross-camera vehicle search dataset, DAIR-V2XSearch, which combines the cameras at both the vehicle and road ends in real-world scenes. Existing search networks primarily address person search; because the task scenario differs, the network must be redesigned to handle the large perspective differences that arise in vehicle search. A phased feature extraction network (PFE-Net) is proposed as a solution to the cross-camera vehicle search problem. Initially, the anchor-free YOLOX framework is selected as the backbone network, which not only improves the network’s performance but also eliminates the ambiguity of multiple anchor boxes corresponding to a single vehicle ID in the Re-ID branch. Second, for the vehicle Re-ID branch, a camera grouping module is proposed to effectively address issues such as sudden changes in perspective and disparities in shooting under different cameras. Finally, a cross-level feature fusion module is designed to enhance the model’s ability to extract subtle vehicle features and the Re-ID precision. Experiments demonstrate that our proposed PFE-Net achieves the highest precision on the DAIR-V2XSearch dataset. Full article
(This article belongs to the Section Vehicular Sensing)
Show Figures

Figure 1

17 pages, 5480 KiB  
Article
Adaptive VMD–K-SVD-Based Rolling Bearing Fault Signal Enhancement Study
by Meijiao Mao, Kaixin Zeng, Zhifei Tan, Zhi Zeng, Zihua Hu, Xiaogao Chen and Changjiang Qin
Sensors 2023, 23(20), 8629; https://doi.org/10.3390/s23208629 - 22 Oct 2023
Cited by 1 | Viewed by 1017
Abstract
To address the challenges associated with nonlinearity, non-stationarity, susceptibility to redundant noise interference, and the difficulty in extracting fault feature signals from rolling bearing signals, this study introduces a novel combined approach. The proposed method utilizes the variational mode decomposition (VMD) and K-singular value decomposition (K-SVD) algorithms to effectively denoise and enhance the collected rolling bearing signals. Initially, VMD is employed to decompose the signal into intrinsic mode functions (IMFs), distributing the overall noise among them and reducing the noise content within each IMF. To optimize the number of modes, K, and the penalty factor, α, in VMD, an improved arithmetic optimization algorithm (IAOA) is employed, ensuring the selection of optimal parameters. The resulting IMFs form the original dictionary, to which K-SVD is then applied to further reduce the noise in each IMF, yielding a denoised and enhanced signal. To validate the efficacy of the proposed method, rolling bearing signals collected from Case Western Reserve University (CWRU) and thrust bearing test rigs were utilized. The experimental results demonstrate the feasibility and effectiveness of the proposed approach in denoising and enhancing the rolling bearing signals. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
Show Figures

Figure 1

16 pages, 894 KiB  
Article
Accurate Global Point Cloud Registration Using GPU-Based Parallel Angular Radon Spectrum
by Ernesto Fontana and Dario Lodi Rizzini
Sensors 2023, 23(20), 8628; https://doi.org/10.3390/s23208628 - 22 Oct 2023
Cited by 1 | Viewed by 988
Abstract
Accurate robot localization and mapping can be improved through the adoption of globally optimal registration methods, such as the Angular Radon Spectrum (ARS). In this paper, we present Cud-ARS, an efficient variant of the ARS algorithm for 2D registration designed for parallel execution of the most computationally expensive steps on Nvidia™ Graphics Processing Units (GPUs). Cud-ARS is able to compute the ARS in parallel blocks, each associated with a subset of the input points. We also propose a global branch-and-bound method for translation estimation. This novel parallel algorithm has been tested on multiple datasets. The proposed method is able to speed up the execution time by two orders of magnitude while obtaining more accurate results in rotation estimation than state-of-the-art correspondence-based algorithms. Our experiments also assess the potential of this novel approach in mapping applications, showing the contribution of GPU programming to efficient solutions of robotic tasks. Full article
(This article belongs to the Collection Sensors and Data Processing in Robotics)
Show Figures

Figure 1

14 pages, 608 KiB  
Article
GaitSG: Gait Recognition with SMPLs in Graph Structure
by Jiayi Yan, Shaohui Wang, Jing Lin, Peihao Li, Ruxin Zhang and Haoqian Wang
Sensors 2023, 23(20), 8627; https://doi.org/10.3390/s23208627 - 22 Oct 2023
Viewed by 1094
Abstract
Gait recognition aims to identify a person based on their unique walking pattern. Compared with silhouettes and skeletons, skinned multi-person linear (SMPL) models can simultaneously provide human pose and shape information and are robust to viewpoint and clothing variations. However, previous approaches have only considered SMPL parameters as a whole and are yet to explore their potential for gait recognition thoroughly. To address this problem, we concentrate on SMPL representations and propose a novel SMPL-based method named GaitSG for gait recognition, which takes SMPL parameters in a graph structure as input. Specifically, we represent the SMPL model as graph nodes and employ graph convolution techniques to effectively model the human model topology and generate discriminative gait features. Further, we utilize prior knowledge of the human body and elaborately design a novel part graph pooling block, PGPB, to encode viewpoint information explicitly. The PGPB also alleviates the physical distance-unaware limitation of the graph structure. Comprehensive experiments on public gait recognition datasets, Gait3D and CASIA-B, demonstrate that GaitSG can achieve better performance and faster convergence than existing model-based approaches. Specifically, compared with the baseline SMPLGait (3D only), our model achieves approximately twice the Rank-1 accuracy and requires three times fewer training iterations on Gait3D. Full article
Show Figures

Figure 1
