Journal Description
Sensors
Sensors is an international, peer-reviewed, open access journal on the science and technology of sensors, published semimonthly online by MDPI. The Polish Society of Applied Electromagnetics (PTZE), the Japan Society of Photogrammetry and Remote Sensing (JSPRS), the Spanish Society of Biomedical Engineering (SEIB), and the International Society for the Measurement of Physical Behaviour (ISMPB) are affiliated with Sensors, and their members receive a discount on the article processing charges.
- Open Access — free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, SCIE (Web of Science), PubMed, MEDLINE, PMC, Ei Compendex, Inspec, Astrophysics Data System, and other databases.
- Journal Rank: JCR - Q2 (Instruments & Instrumentation) / CiteScore - Q1 (Instrumentation)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 17 days after submission; acceptance to publication is undertaken in 2.8 days (median values for papers published in this journal in the second half of 2023).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Testimonials: See what our editors and authors say about Sensors.
- Companion journals for Sensors include: Chips, Automation, JCP and Targets.
Impact Factor: 3.9 (2022)
5-Year Impact Factor: 4.1 (2022)
Latest Articles
Correction: Soršak et al. Design and Investigation of Optical Properties of N-(Rhodamine-B)-Lactam-Ethylenediamine (RhB-EDA) Fluorescent Probe. Sensors 2018, 18, 1201
Sensors 2024, 24(10), 3043; https://doi.org/10.3390/s24103043 - 11 May 2024
Abstract
In the published publication [...]
Full article
(This article belongs to the Section Chemical Sensors)
Open Access Article
A Sensorized 3D-Printed Knee Test Rig for Preliminary Experimental Validation of Patellar Tracking and Contact Simulation
by Florian Michaud, Francisco Mouzo, Daniel Dopico and Javier Cuadrado
Sensors 2024, 24(10), 3042; https://doi.org/10.3390/s24103042 - 10 May 2024
Abstract
Experimental validation of computational simulations is important because it provides empirical evidence to verify the accuracy and reliability of the simulated results. This validation ensures that the simulation accurately represents real-world phenomena, increasing confidence in the model’s predictive capabilities and its applicability to practical scenarios. The use of musculoskeletal models in orthopedic surgery allows for objective prediction of postoperative function and optimization of results for each patient. To ensure that simulations are trustworthy and can be used for predictive purposes, comparing simulation results with experimental data is crucial. Although progress has been made in obtaining 3D bone geometry and estimating contact forces, validation of these predictions has been limited due to the lack of direct in vivo measurements and the economic and ethical constraints associated with available alternatives. In this study, an existing commercial surgical training station was transformed into a sensorized test bench to replicate a knee subjected to a total knee replacement. The original knee inserts of the training station were replaced with personalized 3D-printed bones incorporating their corresponding implants, and multiple sensors with their respective supports were added. The recorded movement of the patella was used in combination with the forces recorded by the pressure sensor and the load cells to validate the results obtained from the simulation, which was performed by means of a multibody dynamics formulation implemented in a custom-developed library. The utilization of 3D-printed models and sensors facilitated cost-effective and replicable experimental validation of computational simulations, thereby advancing orthopedic surgery while circumventing ethical concerns.
Full article
(This article belongs to the Special Issue Use of Marker and Markerless Motion Capturing Technologies for Digital Human Modeling)
Open Access Article
Environmental Surveillance through Machine Learning-Empowered Utilization of Optical Networks
by Hasan Awad, Fehmida Usmani, Emanuele Virgillito, Rudi Bratovich, Roberto Proietti, Stefano Straullu, Francesco Aquilino, Rosanna Pastorelli and Vittorio Curri
Sensors 2024, 24(10), 3041; https://doi.org/10.3390/s24103041 - 10 May 2024
Abstract
We present the use of interconnected optical mesh networks for early earthquake detection and localization, exploiting the existing terrestrial fiber infrastructure. Employing a waveplate model, we integrate real ground displacement data from seven earthquakes with magnitudes ranging from four to six to simulate the strains within fiber cables and collect a large set of light polarization evolution data. These simulations help to enhance a machine learning model that is trained and validated to detect primary wave arrivals that precede earthquakes’ destructive surface waves. The validation results show that the model achieves over 95% accuracy. The machine learning model is then tested against an M4.3 earthquake, exploiting three interconnected mesh networks as a smart sensing grid. Each network is equipped with a sensing fiber placed to correspond with three distinct seismic stations. The objective is to confirm earthquake detection across the interconnected networks, localize the epicenter coordinates via a triangulation method and calculate the fiber-to-epicenter distance. This setup allows early warning generation for municipalities close to the epicenter location, progressing to those further away. The model testing shows a 98% accuracy in detecting primary waves and a one second detection time, affording nearby areas 21 s to take countermeasures, which extends to 57 s in more distant areas.
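The paper localizes the epicenter by triangulation from distances measured at three sensing networks. A minimal sketch of one standard way to do this, linearized least-squares trilateration with NumPy, is shown below; the station coordinates and distances are invented, and the authors' exact method may differ.

```python
import numpy as np

def locate_epicenter(stations, distances):
    """Estimate epicenter (x, y) from >= 3 station positions and
    station-to-epicenter distances via linearized least squares:
    subtracting the first circle equation from the others yields
    a linear system in (x, y)."""
    stations = np.asarray(stations, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (stations[1:] - stations[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(stations[1:] ** 2, axis=1) - np.sum(stations[0] ** 2))
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol

# Three hypothetical seismic-station coordinates (km) and noiseless distances
stations = [(0.0, 0.0), (40.0, 0.0), (0.0, 30.0)]
true_epicenter = np.array([15.0, 10.0])
dists = [np.hypot(*(true_epicenter - s)) for s in np.asarray(stations)]
print(locate_epicenter(stations, dists))  # recovers ~[15., 10.]
```

With noisy distances the same least-squares solve returns the best-fit point rather than an exact intersection, which is why more than three stations improve robustness.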
Full article
(This article belongs to the Special Issue Feature Papers in Optical Sensors 2024)
Open Access Article
Enhancing Classification Accuracy with Integrated Contextual Gate Network: Deep Learning Approach for Functional Near-Infrared Spectroscopy Brain–Computer Interface Application
by Jamila Akhter, Noman Naseer, Hammad Nazeer, Haroon Khan and Peyman Mirtaheri
Sensors 2024, 24(10), 3040; https://doi.org/10.3390/s24103040 - 10 May 2024
Abstract
Brain–computer interface (BCI) systems include signal acquisition, preprocessing, feature extraction, classification, and an application phase. In fNIRS-BCI systems, deep learning (DL) algorithms play a crucial role in enhancing accuracy. Unlike traditional machine learning (ML) classifiers, DL algorithms eliminate the need for manual feature extraction. DL neural networks automatically extract hidden patterns/features within a dataset to classify the data. In this study, a hand-gripping (closing and opening) two-class motor activity dataset from twenty healthy participants is acquired, and an integrated contextual gate network (ICGN) algorithm (proposed) is applied to that dataset to enhance the classification accuracy. The proposed algorithm extracts the features from the filtered data and generates the patterns based on the information from the previous cells within the network. Accordingly, classification is performed based on the similar generated patterns within the dataset. The accuracy of the proposed algorithm is compared with the long short-term memory (LSTM) and bidirectional long short-term memory (Bi-LSTM). The proposed ICGN algorithm yielded a classification accuracy of 91.23 ± 1.60%, which is significantly (p < 0.025) higher than the 84.89 ± 3.91% and 88.82 ± 1.96% achieved by LSTM and Bi-LSTM, respectively. An open access, three-class (right- and left-hand finger tapping and dominant foot tapping) dataset of 30 subjects is used to validate the proposed algorithm. The results show that ICGN can be efficiently used for the classification of two- and three-class problems in fNIRS-based BCI applications.
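Statements like "significantly (p < 0.025) higher" rest on comparing per-fold accuracies of two classifiers. As a hedged sketch of such a comparison (the fold accuracies below are invented, not the paper's data), Welch's t statistic can be computed as:

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples,
    e.g., per-fold accuracies of two classifiers."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # unbiased variance
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical per-fold accuracies (%) for an ICGN-like model vs. an LSTM baseline
icgn = [91.0, 92.5, 90.1, 91.8, 90.7]
lstm = [85.2, 83.9, 86.0, 84.4, 85.1]
t = welch_t(icgn, lstm)
print(round(t, 2))  # a large positive t favors the first classifier
```

The t statistic is then compared against the t-distribution with Welch-Satterthwaite degrees of freedom to obtain a p-value (omitted here for brevity).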
Full article
(This article belongs to the Special Issue Brain Computer Interface for Biomedical Applications)
Open Access Article
Efficient Structure from Motion for Large-Size Videos from an Open Outdoor UAV Dataset
by Ruilin Xiang, Jiagang Chen and Shunping Ji
Sensors 2024, 24(10), 3039; https://doi.org/10.3390/s24103039 - 10 May 2024
Abstract
Modern UAVs (unmanned aerial vehicles) equipped with video cameras can provide large-scale high-resolution video data. This poses significant challenges for structure from motion (SfM) and simultaneous localization and mapping (SLAM) algorithms, as most of them are developed for relatively small-scale and low-resolution scenes. In this paper, we present a video-based SfM method specifically designed for high-resolution large-size UAV videos. Despite the wide range of applications for SfM, performing mainstream SfM methods on such videos poses challenges due to their high computational cost. Our method consists of three main steps. Firstly, we employ a visual SLAM (VSLAM) system to efficiently extract keyframes, keypoints, initial camera poses, and sparse structures from downsampled videos. Next, we propose a novel two-step keypoint adjustment method. Instead of matching new points in the original videos, our method effectively and efficiently adjusts the existing keypoints at the original scale. Finally, we refine the poses and structures using a rotation-averaging constrained global bundle adjustment (BA) technique, incorporating the adjusted keypoints. To enrich the resources available for SLAM or SfM studies, we provide a large-size (3840 × 2160) outdoor video dataset with millimeter-level-accuracy ground control points, which supplements the current relatively low-resolution video datasets. Experiments demonstrate that, compared with other SLAM or SfM methods, our method achieves an average efficiency improvement of 100% on our collected dataset and 45% on the EuRoc dataset. Our method also demonstrates superior localization accuracy when compared with state-of-the-art SLAM or SfM methods.
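The two-step keypoint adjustment itself is the paper's contribution; the sketch below only illustrates the trivial preliminary of mapping keypoints detected in a downsampled proxy frame back to original-resolution coordinates, assuming pixel-center sampling. The frame sizes are illustrative, and the authors' refinement step is not reproduced.

```python
def upscale_keypoints(keypoints, factor, width, height):
    """Map (x, y) keypoints detected in a frame downsampled by `factor`
    back to original-resolution pixel coordinates, clamped to the image
    bounds. Sampling at pixel centers motivates the +0.5/-0.5 shift."""
    out = []
    for x, y in keypoints:
        ox = (x + 0.5) * factor - 0.5
        oy = (y + 0.5) * factor - 0.5
        out.append((min(max(ox, 0.0), width - 1.0),
                    min(max(oy, 0.0), height - 1.0)))
    return out

# Keypoints found in a 960x540 proxy of a 3840x2160 UAV frame (factor 4)
kps = [(100.0, 50.0), (959.0, 539.0)]
print(upscale_keypoints(kps, 4, 3840, 2160))
```

In a real pipeline these scaled coordinates would then be locally refined against the full-resolution image before bundle adjustment.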
Full article
(This article belongs to the Section Navigation and Positioning)
Open Access Article
Fabrication of Microgel-Modified Hydrogel Flexible Strain Sensors Using Electrohydrodynamic Direct Printing Method
by Junyan Feng, Peng Cao, Tao Yang, Hezheng Ao and Bo Xing
Sensors 2024, 24(10), 3038; https://doi.org/10.3390/s24103038 - 10 May 2024
Abstract
Hydrogel flexible strain sensors, renowned for their high stretchability, flexibility, and wearable comfort, have been employed in various applications in the field of human motion monitoring. However, the predominant method for fabricating hydrogels is the template method, which is particularly inefficient and costly for hydrogels with complex structural requirements, thereby limiting the development of flexible hydrogel electronic devices. Herein, we propose a novel method that involves using microgels to modify a hydrogel solution, printing the hydrogel ink using an electrohydrodynamic printing device, and subsequently forming the hydrogel under UV illumination. The resulting hydrogel exhibited a high tensile ratio (639.73%), high tensile strength (0.4243 MPa), and an ionic conductivity of 0.2256 S/m, along with excellent electrochemical properties. Moreover, its high linearity and sensitivity enabled the monitoring of a wide range of subtle changes in human movement. This novel approach offers a promising pathway for the development of high-performance, complexly structured hydrogel flexible sensors.
Full article
(This article belongs to the Section Sensor Materials)
Open Access Article
MDE and LLM Synergy for Network Experimentation: Case Analysis of Wireless System Performance in Beaulieu-Xie Fading and κ-µ Co-Channel Interference Environment with Diversity Combining
by Dragana Krstic, Suad Suljovic, Goran Djordjevic, Nenad Petrovic and Dejan Milic
Sensors 2024, 24(10), 3037; https://doi.org/10.3390/s24103037 - 10 May 2024
Abstract
Channel modeling is the first step towards the successful design of any wireless communication system. Hence, in this paper, we analyze the performance at the output of a multi-branch selection combining (SC) diversity receiver in a wireless environment disturbed by fading and co-channel interference (CCI), where the fading is modelled by the newer Beaulieu-Xie (BX) distribution and the CCI by the κ-µ distribution. The BX distribution can account for any number of line-of-sight (LOS) and non-LOS (NLOS) useful signal components, and its flexible fading parameters allow it to subsume several other fading models; the same holds for the κ-µ distribution. We derive expressions for the probability density function (PDF) and cumulative distribution function (CDF) of the output signal-to-co-channel-interference ratio (SIR). From these, other performance measures are obtained, namely: outage probability (Pout), channel capacity (CC), moment-generating function (MGF), average bit error probability (ABEP), level crossing rate (LCR), and average fade duration (AFD). Numerical results are presented in several graphs versus the SIR for different values of the fading and CCI parameters, as well as the number of input branches of the SC receiver, and the impact of these parameters on all performance measures is examined. Because previously known fading and CCI distributions arise as special cases, the performance for all derived quantities can be obtained directly by inserting the appropriate parameter values, which is the main value of the presented results. In the second part of the paper, a workflow for automated network experimentation relying on the synergy of Large Language Models (LLMs) and model-driven engineering (MDE) is presented, with the previously derived expressions used for evaluation.
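The closed-form Pout expressions depend on the BX and κ-µ parameters and are not reproduced here. As a generic numerical cross-check applicable to any such derivation, outage probability can be estimated by Monte Carlo as Pout = P(SIR < γth); the exponential (Rayleigh-fading) toy model below is a stand-in with a known closed form, not the paper's channel model.

```python
import random

def outage_probability_mc(gamma_th, n_trials=200_000, seed=1):
    """Monte Carlo estimate of Pout = P(SIR < gamma_th) for a toy model where
    desired and interfering powers are both unit-mean exponential."""
    rng = random.Random(seed)
    outages = 0
    for _ in range(n_trials):
        s = rng.expovariate(1.0)  # desired signal power
        i = rng.expovariate(1.0)  # co-channel interference power
        if s / i < gamma_th:
            outages += 1
    return outages / n_trials

# For exponential S and I, P(S/I < g) = g / (1 + g) in closed form,
# so the estimate can be sanity-checked: g = 1 gives Pout close to 0.5.
print(outage_probability_mc(1.0))
```

The same scheme verifies a derived CDF for any fading/CCI pair: draw powers from the assumed distributions and compare the empirical outage rate to the formula.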
Full article
(This article belongs to the Special Issue Recent Trends and Advances in Telecommunications and Sensing)
Open Access Article
Research on Human Posture Estimation Algorithm Based on YOLO-Pose
by Jing Ding, Shanwei Niu, Zhigang Nie and Wenyu Zhu
Sensors 2024, 24(10), 3036; https://doi.org/10.3390/s24103036 - 10 May 2024
Abstract
In response to the numerous challenges faced by traditional human pose recognition methods in practical applications, such as dense targets, severe edge occlusion, limited application scenarios, complex backgrounds, and poor recognition accuracy when targets are occluded, this paper proposes a YOLO-Pose algorithm for human pose estimation. The specific improvements are divided into four parts. Firstly, in the Backbone section of the YOLO-Pose model, lightweight GhostNet modules are introduced to reduce the model’s parameter count and computational requirements, making it suitable for deployment on unmanned aerial vehicles (UAVs). Secondly, the ACmix attention mechanism is integrated into the Neck section to improve detection speed during object judgment and localization. Furthermore, in the Head section, key points are optimized using coordinate attention mechanisms, significantly enhancing key-point localization accuracy. Lastly, the loss function and confidence function are improved to enhance the model’s robustness. Experimental results demonstrate that the improved model achieves a 95.58% improvement in mAP50 and a 69.54% improvement in mAP50-95 compared to the original model, with a reduction of 14.6 M parameters. The model achieves a detection speed of 19.9 ms per image, an improvement of 30% and 39.5% over the original model. Comparisons with other algorithms, such as Faster R-CNN, SSD, YOLOv4, and YOLOv7, demonstrate varying degrees of performance improvement.
Full article
(This article belongs to the Special Issue Deep Learning Applications for Pose Estimation and Human Action Recognition)
Open Access Article
A Novel Frame-Selection Metric for Video Inpainting to Enhance Urban Feature Extraction
by Yuhu Feng, Jiahuan Zhang, Guang Li, Ren Togo, Keisuke Maeda, Takahiro Ogawa and Miki Haseyama
Sensors 2024, 24(10), 3035; https://doi.org/10.3390/s24103035 - 10 May 2024
Abstract
In our digitally driven society, advances in software and hardware to capture video data allow extensive gathering and analysis of large datasets. This has stimulated interest in extracting information from video data, such as buildings and urban streets, to enhance understanding of the environment. Urban buildings and streets, as essential parts of cities, carry valuable information relevant to daily life. Extracting features from these elements and integrating them with technologies such as VR and AR can contribute to more intelligent and personalized urban public services. Despite its potential benefits, collecting videos of urban environments introduces challenges because of the presence of dynamic objects. The varying shape of the target building in each frame necessitates careful selection to ensure the extraction of quality features. To address this problem, we propose a novel evaluation metric that considers the video-inpainting-restoration quality and the relevance of the target object, seeking to minimize areas with cars, maximize areas with the target building, and minimize overlapping areas. This metric extends existing video-inpainting-evaluation metrics by considering the relevance of the target object and the interconnectivity between objects. We conducted experiments to validate the proposed metric using real-world datasets from the Japanese cities of Sapporo and Yokohama. The experimental results demonstrate the feasibility of selecting video frames conducive to building feature extraction.
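A metric that rewards visible building area while penalizing car area and car/building overlap can be sketched as follows; the weights, masks, and score form are illustrative assumptions, not the paper's calibrated formulation.

```python
import numpy as np

def frame_score(car_mask, building_mask, w_overlap=2.0):
    """Score a frame for inpainting-based building extraction: reward visible
    building area, penalize car area, and penalize car/building overlap
    most heavily (the weight is an illustrative choice)."""
    car = np.asarray(car_mask, dtype=bool)
    bld = np.asarray(building_mask, dtype=bool)
    building_frac = bld.mean()
    car_frac = car.mean()
    overlap_frac = (car & bld).sum() / car.size
    return building_frac - car_frac - w_overlap * overlap_frac

# Two toy 4x4 frames: in frame B a car occludes the building
bld = np.zeros((4, 4), bool); bld[:, 2:] = True       # building on the right half
car_a = np.zeros((4, 4), bool); car_a[3, 0] = True    # car away from the building
car_b = np.zeros((4, 4), bool); car_b[2:, 2:] = True  # car in front of the building
print(frame_score(car_a, bld) > frame_score(car_b, bld))  # True: prefer frame A
```

Ranking frames by such a score, then inpainting only the best candidates, is the selection pattern the abstract describes.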
Full article
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
Open Access Article
Research on the Postural Stability of Underwater Bottom Platforms with Different Burial Depths
by Yong Wei, Nan Li, Ming Wu and Daming Zhou
Sensors 2024, 24(10), 3034; https://doi.org/10.3390/s24103034 - 10 May 2024
Abstract
The bottom platform is an important underwater sensor that can be used in communications, early warning, monitoring, and other fields. It may be affected by earthquakes, winds, waves, and other loads in the working environment, causing changes in posture and affecting its sensing function. Therefore, it is of practical engineering significance to analyze the force conditions and posture changes in the bottom platform. In order to solve the problem of postural stability of the underwater bottom platform, this paper establishes a fluid and structural simulation model of the underwater bottom platform. First, computational fluid dynamics (CFD) technology is used to solve the velocity distribution and forces in the watershed around the bottom platform under a 3 kn ocean current, and the finite element method (FEM) is used to solve the initial equilibrium state of the bottom platform after it is buried. On this basis, this paper calculates the forces on the bottom platform and its posture at different burial depths under the action of ocean currents. Additionally, the effects of different burial depths on the maximum displacement, deflection angle, and postural stability of the bottom platform are studied. The calculation results show that when the burial depth is greater than 0.6 m, the deflection angle of the bottom platform under the action of the 3 kn sea current is less than 5°, and the bottom platform can maintain a stable posture. These results can be used to characterize the postural stability of underwater bottom platforms at different burial depths for the application of underwater sensors in ocean engineering.
Full article
(This article belongs to the Section Navigation and Positioning)
Open Access Article
Expert–Novice Level Classification Using Graph Convolutional Network Introducing Confidence-Aware Node-Level Attention Mechanism
by Tatsuki Seino, Naoki Saito, Takahiro Ogawa, Satoshi Asamizu and Miki Haseyama
Sensors 2024, 24(10), 3033; https://doi.org/10.3390/s24103033 - 10 May 2024
Abstract
In this study, we propose a classification method of expert–novice levels using a graph convolutional network (GCN) with a confidence-aware node-level attention mechanism. In classification using an attention mechanism, highlighted features may not be significant for accurate classification, thereby degrading classification performance. To address this issue, the proposed method introduces a confidence-aware node-level attention mechanism into a spatiotemporal attention GCN (STA-GCN) for the classification of expert–novice levels. Consequently, our method can contrast the attention value of each node on the basis of the confidence measure of the classification, which solves the problem of classification approaches using attention mechanisms and realizes accurate classification. Furthermore, because the expert–novice levels have ordinalities, using a classification model that considers ordinalities improves the classification performance. The proposed method involves a model that minimizes a loss function that considers the ordinalities of classes to be classified. By implementing the above approaches, the expert–novice level classification performance is improved.
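The core idea, scaling each node's attention by a confidence measure before normalization so that unreliable nodes contribute less, can be sketched in a few lines; this toy is not the STA-GCN layer itself, and the scores and confidences below are invented.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def confidence_weighted_attention(scores, confidence):
    """Toy node-level attention: raw attention scores are modulated by a
    per-node confidence measure before normalization. A sketch of the
    idea only, not the paper's attention mechanism."""
    scores = np.asarray(scores, float)
    confidence = np.asarray(confidence, float)
    return softmax(scores * confidence)

scores = np.array([2.0, 2.0, 2.0])
conf = np.array([1.0, 1.0, 0.2])  # the third node is deemed unreliable
att = confidence_weighted_attention(scores, conf)
print(att[2] < att[0])  # True: the low-confidence node is down-weighted
```

The resulting weights still sum to one, so the layer remains a valid attention distribution over nodes.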
Full article
(This article belongs to the Section Intelligent Sensors)
Open Access Article
Biosensor-Driven IoT Wearables for Accurate Body Motion Tracking and Localization
by Nouf Abdullah Almujally, Danyal Khan, Naif Al Mudawi, Mohammed Alonazi, Abdulwahab Alazeb, Asaad Algarni, Ahmad Jalal and Hui Liu
Sensors 2024, 24(10), 3032; https://doi.org/10.3390/s24103032 - 10 May 2024
Abstract
The domain of human locomotion identification through smartphone sensors is witnessing rapid expansion within the realm of research. This domain boasts significant potential across various sectors, including healthcare, sports, security systems, home automation, and real-time location tracking. Despite the considerable volume of existing research, the greater portion of it has primarily concentrated on locomotion activities, and comparatively less emphasis has been placed on the recognition of human localization patterns. In the current study, we introduce a system that facilitates the recognition of both human physical and location-based patterns, utilizing the capabilities of smartphone sensors. Our goal is to develop a system that can accurately identify different human physical and localization activities, such as walking, running, jumping, indoor, and outdoor activities. To achieve this, we preprocess the raw sensor data using a Butterworth filter for the inertial sensors and a median filter for the Global Positioning System (GPS), and then apply Hamming windowing techniques to segment the filtered data. We then extract features from the raw inertial and GPS sensors and select relevant features using the variance-threshold feature selection method. The Extrasensory dataset exhibits an imbalanced number of samples for certain activities; to address this issue, a permutation-based data augmentation technique is employed. The augmented features are optimized using the Yeo–Johnson power transformation algorithm before being sent to a multi-layer perceptron for classification. We evaluate our system using the K-fold cross-validation technique. The datasets used in this study are Extrasensory and Sussex-Huawei Locomotion (SHL), which contain both physical and localization activities. Our experiments demonstrate that our system achieves high accuracy, with 96% and 94% on Extrasensory and SHL for physical activities and 94% and 91% on Extrasensory and SHL for location-based activities, outperforming previous state-of-the-art methods in recognizing both types of activities.
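Of the preprocessing steps described above, the Hamming-windowed segmentation is easy to sketch with NumPy alone; the sampling rate, window length, and hop below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def hamming_segments(signal, win_len, hop):
    """Split a 1-D sensor stream into overlapping segments and apply a
    Hamming window to each, one common way to implement the windowing
    step before feature extraction."""
    signal = np.asarray(signal, float)
    window = np.hamming(win_len)
    starts = range(0, len(signal) - win_len + 1, hop)
    return np.stack([signal[s:s + win_len] * window for s in starts])

# 1 s of a hypothetical 50 Hz accelerometer axis, 20-sample windows, 50% overlap
t = np.arange(50) / 50.0
acc = np.sin(2 * np.pi * 2 * t)
segs = hamming_segments(acc, win_len=20, hop=10)
print(segs.shape)  # (4, 20): four windowed segments
```

Each row of the result is one windowed segment from which per-window features (mean, variance, spectral bins, and so on) would then be extracted.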
Full article
(This article belongs to the Special Issue Sensors for Human Activity Recognition II)
Open Access Article
Deep Reinforcement Learning for Optimizing Restricted Access Window in IEEE 802.11ah MAC Layer
by Xiaojun Jiang, Shimin Gong, Chengyi Deng, Lanhua Li and Bo Gu
Sensors 2024, 24(10), 3031; https://doi.org/10.3390/s24103031 - 10 May 2024
Abstract
The IEEE 802.11ah standard is introduced to address the growing scale of internet of things (IoT) applications. To reduce contention and enhance energy efficiency in the system, the restricted access window (RAW) mechanism is introduced in the medium access control (MAC) layer to manage the significant number of stations accessing the network. However, to achieve optimized network performance, it is necessary to appropriately determine the RAW parameters, including the number of RAW groups, the number of slots in each RAW, and the duration of each slot. In this paper, we optimize the configuration of RAW parameters in the uplink IEEE 802.11ah-based IoT network. To improve network throughput, we analyze and establish a RAW parameters optimization problem. To effectively cope with the complex and dynamic network conditions, we propose a deep reinforcement learning (DRL) approach to determine the preferable RAW parameters to optimize network throughput. To enhance learning efficiency and stability, we employ the proximal policy optimization (PPO) algorithm. We construct network environments with periodic and random traffic in an NS-3 simulator to validate the performance of the proposed PPO-based RAW parameters optimization algorithm. The simulation results reveal that using the PPO-based DRL algorithm, optimized RAW parameters can be obtained under different network conditions, and network throughput can be improved significantly.
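Why RAW grouping reduces contention can be seen in a slotted-contention toy model, far simpler than the 802.11ah CSMA behavior simulated in NS-3: with fewer stations contending per slot, the chance that exactly one transmits rises. The station count and transmission probability below are assumptions for illustration.

```python
def slot_success_prob(n_stations, p_tx):
    """Probability that exactly one of n contending stations transmits in a
    slot (a slotted-ALOHA-style toy, not the 802.11ah CSMA model itself)."""
    return n_stations * p_tx * (1.0 - p_tx) ** (n_stations - 1)

def grouped_success_prob(n_total, n_groups, p_tx):
    """Per-slot success probability when stations are split evenly into RAW
    groups, so only n_total // n_groups stations contend in any one slot."""
    return slot_success_prob(n_total // n_groups, p_tx)

# 64 stations, 10% transmission probability: grouping raises per-slot success
print(round(grouped_success_prob(64, 1, 0.1), 3),
      round(grouped_success_prob(64, 8, 0.1), 3))  # 0.008 0.383
```

A DRL agent tuning the number of groups, slots, and slot durations is, in effect, searching this kind of trade-off surface under dynamic traffic.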
Full article
(This article belongs to the Special Issue Empowering the IoT: Scalable, Sustainable, and Ultra-Low Power Solutions)
Open Access Article
Image Classifier for an Online Footwear Marketplace to Distinguish between Counterfeit and Real Sneakers for Resale
by Joshua Onalaja, Essa Q. Shahra, Shadi Basurra and Waheb A. Jabbar
Sensors 2024, 24(10), 3030; https://doi.org/10.3390/s24103030 - 10 May 2024
Abstract
The sneaker industry is continuing to expand at a fast rate and will be worth over USD 120 billion in the next few years. This is, in part, due to social media and online retailers building hype around releases of limited-edition sneakers, which are usually collaborations between well-known global icons and footwear companies. These limited-edition sneakers are typically released in low quantities using an online raffle system, meaning only a few people can get their hands on them. As expected, this causes their value to skyrocket and has created an extremely lucrative resale market for sneakers. This has given rise to numerous counterfeit sneakers flooding the resale market, resulting in online platforms having to hand-verify a sneaker’s authenticity, an important but time-consuming procedure that slows the selling and buying process. To speed up the authentication process, Support Vector Machines (SVMs) and convolutional neural networks (CNNs) were used to classify images of fake and real sneakers, and their accuracies were compared to see which performed better. The results showed that the CNNs performed much better at this task than the SVMs, with some accuracies over 95%. Therefore, a CNN is well equipped to be a sneaker authenticator and will be of great benefit to the reselling industry.
Full article
(This article belongs to the Special Issue Computer Vision and Machine Learning for Intelligent Sensing Systems—2nd Edition)
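The evaluation described in the abstract above comes down to comparing the classification accuracies of the two model families on the same labeled test images. A minimal sketch of that comparison step, using hypothetical labels and predictions standing in for real SVM and CNN outputs (1 = authentic, 0 = counterfeit; the arrays below are illustrative, not the paper's data):

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of correctly classified samples."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float((y_true == y_pred).mean())

# Hypothetical ground-truth labels and model predictions.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
svm_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])   # placeholder SVM outputs
cnn_pred = np.array([1, 0, 1, 1, 0, 0, 1, 1])   # placeholder CNN outputs

print(accuracy(y_true, svm_pred))  # 0.75
print(accuracy(y_true, cnn_pred))  # 0.875
```

In the paper's setting, each prediction array would come from a trained classifier applied to held-out sneaker images, and the higher-accuracy model (the CNN, per the abstract) would be preferred.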
Open AccessArticle
Fault-Tolerant Control of Autonomous Underwater Vehicle Actuators Based on Takagi and Sugeno Fuzzy and Pseudo-Inverse Quadratic Programming under Constraints
by
Zimu Zhang, Yunkai Wu, Yang Zhou and Dahai Hu
Sensors 2024, 24(10), 3029; https://doi.org/10.3390/s24103029 (registering DOI) - 10 May 2024
Abstract
Autonomous Underwater Vehicles (AUVs) play a significant role in ocean-related research fields as tools for human exploration and the development of marine resources. However, the uncertainty of the underwater environment and the complexity of underwater motion pose significant challenges to the fault-tolerant control of AUV actuators. This paper presents a fault-tolerant control strategy for AUV actuators based on Takagi and Sugeno (T-S) fuzzy logic and pseudo-inverse quadratic programming under control constraints, aimed at addressing potential actuator faults. Firstly, considering the steady-state and dynamic performance of the control system, a T-S fuzzy controller is designed. Next, based on the redundant configuration of the actuators, the propulsion system is normalized, and the fault-tolerant control of AUV actuators is achieved using the pseudo-inverse method under thrust allocation. When control is constrained, a quadratic programming approach is used to compensate for the input control quantity. Finally, the effectiveness of the fuzzy control and fault-tolerant control allocation methods studied in this paper is validated through mathematical simulation. The experimental results indicate that, in various fault scenarios, the pseudo-inverse method combined with a nonlinear quadratic programming algorithm can compensate for the control inputs missing due to control constraints, ensuring the normal thrust of AUV actuators and achieving the expected fault-tolerant effect.
Full article
(This article belongs to the Section Sensors and Robotics)
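The pseudo-inverse thrust-allocation step mentioned in the abstract above maps a desired net body force/moment to commands for a redundant set of thrusters. A minimal sketch of the idea under assumed, illustrative values (the 4-thruster configuration matrix B below is hypothetical and not taken from the paper):

```python
import numpy as np

# Hypothetical 3-DOF (surge, sway, yaw) allocation to 4 thrusters.
# B maps individual thruster forces to net body forces and moment.
B = np.array([
    [1.0,  1.0, 0.0,  0.0],   # surge contributions
    [0.0,  0.0, 1.0,  1.0],   # sway contributions
    [0.3, -0.3, 0.5, -0.5],   # yaw moment arms (illustrative values)
])

tau_desired = np.array([2.0, 0.5, 0.1])  # desired net forces/moment

# Pseudo-inverse allocation: the minimum-norm thruster command that
# reproduces the desired net force/moment exactly when B has full row rank.
u = np.linalg.pinv(B) @ tau_desired
print(np.allclose(B @ u, tau_desired))  # True
```

When actuator faults or saturation constraints make the pseudo-inverse solution infeasible, the paper's approach falls back on quadratic programming to compensate for the missing control input.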
Open AccessReview
Effectiveness of Telerehabilitation in Dizziness: A Systematic Review with Meta-Analysis
by
Davide Grillo, Mirko Zitti, Błażej Cieślik, Stefano Vania, Silvia Zangarini, Stefano Bargellesi and Pawel Kiper
Sensors 2024, 24(10), 3028; https://doi.org/10.3390/s24103028 (registering DOI) - 10 May 2024
Abstract
Dizziness can be a debilitating condition with various causes, with at least one episode reported in 17% to 30% of the international adult population. Given the effectiveness of rehabilitation in treating dizziness and the recent advancements in telerehabilitation, this systematic review aims to investigate the effectiveness of telerehabilitation in the treatment of this disorder. The search, conducted across the Medline, Cochrane Central Register of Controlled Trials, and PEDro databases, included randomized controlled trials assessing the efficacy of telerehabilitation interventions delivered synchronously, asynchronously, or via tele-support/monitoring. Primary outcomes focused on dizziness frequency/severity and disability, with secondary outcomes assessing anxiety and depression measures. Seven articles met the eligibility criteria, of which five contributed to the meta-analysis. Significant findings were observed for the frequency and severity of dizziness (mean difference of 3.01, p < 0.001), disability (mean difference of −4.25, p < 0.001), and anxiety (standardized mean difference of −0.16, p = 0.02), favoring telerehabilitation. Telerehabilitation shows promise as a treatment for dizziness, aligning with the positive outcomes seen in traditional rehabilitation studies. However, the effectiveness of different telerehabilitation approaches requires further investigation, given the moderate methodological quality and the varied nature of existing methods and programs.
Full article
(This article belongs to the Section Wearables)
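The pooled effects quoted above are reported as mean differences and standardized mean differences (SMDs). As a reference point, one common SMD form is Cohen's d with a pooled standard deviation; the sketch below uses hypothetical group statistics, not the review's data:

```python
import math

def standardized_mean_difference(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d: difference in group means divided by the pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical anxiety scores: telerehabilitation group vs. control group.
d = standardized_mean_difference(10.0, 4.0, 30, 11.0, 4.0, 30)
print(round(d, 3))  # -0.25 (negative favors the first group here)
```

A meta-analysis would compute such an effect size per trial and then combine them with inverse-variance weights; that pooling step is omitted here.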
Open AccessArticle
Underwater Multi-Channel MAC with Cognitive Acoustics for Distributed Underwater Acoustic Networks
by
Changho Yun
Sensors 2024, 24(10), 3027; https://doi.org/10.3390/s24103027 - 10 May 2024
Abstract
The advancement of underwater cognitive acoustic network (UCAN) technology aims to improve spectral efficiency and ensure coexistence with the underwater ecosystem. As the demand for short-term underwater applications operated under distributed topologies, such as autonomous underwater vehicle cluster operations, continues to grow, this paper presents Underwater Multi-channel Medium Access Control with Cognitive Acoustics (UMMAC-CA) as a suitable channel access protocol for distributed UCANs. UMMAC-CA operates on a per-frame basis, similar to the Multi-channel Medium Access Control with Cognitive Radios (MMAC-CR) designed for distributed cognitive radio networks, but with notable differences. It employs a pre-determined data transmission matrix to allow all nodes to access the channel without contention, thus reducing the channel access overhead. In addition, to mitigate communication failures caused by randomly occurring interferers, UMMAC-CA allocates at least 50% of frame time to interferer sensing. This is possible because of the fixed data transmission scheduling, which allows other nodes to sense for interferers simultaneously while a specific node is transmitting data. Simulation results demonstrate that UMMAC-CA outperforms MMAC-CR across various metrics, including sensing time rate, controlling time rate, and throughput. In addition, except in the case where the data transmission time coefficient equals 1, the message overhead performance of UMMAC-CA is also superior to that of MMAC-CR. These results underscore the suitability of UMMAC-CA for challenging underwater applications requiring multi-channel cognitive communication within a distributed network architecture.
Full article
(This article belongs to the Special Issue Underwater Sensor Networks for Communication, Navigation, and Localization)
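The contention-free access described above follows from a pre-determined transmission schedule: while one node transmits in a slot, every other node is free to sense for interferers. A toy round-robin illustration of that idea (the schedule shape is an assumption for illustration; the abstract does not specify how UMMAC-CA actually constructs its matrix):

```python
def transmission_schedule(num_nodes, num_slots):
    """Toy contention-free schedule: exactly one node transmits per slot,
    assigned round-robin, so all other nodes can sense for interferers."""
    return [[slot % num_nodes == node for node in range(num_nodes)]
            for slot in range(num_slots)]

sched = transmission_schedule(num_nodes=4, num_slots=8)

# Each node transmits in 2 of 8 slots and is free to sense in the
# remaining 6 (75% of frame time), consistent with the >= 50% sensing goal.
for node in range(4):
    tx_slots = sum(row[node] for row in sched)
    print(node, tx_slots)
```

Because the schedule is fixed in advance, no per-slot contention messages are needed, which is the source of the reduced channel access overhead claimed in the abstract.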
Open AccessArticle
Experimental Study of White Light Interferometry in Mach–Zehnder Interferometers Based on Standard Single Mode Fiber
by
José Luis Cano-Perez, Jaime Gutiérrez-Gutiérrez, Christian Perezcampos-Mayoral, Eduardo L. Pérez-Campos, María del Socorro Pina-Canseco, Lorenzo Tepech-Carrillo, Marciano Vargas-Treviño, Erick Israel Guerra-Hernández, Abraham Martínez-Helmes, Julián Moisés Estudillo-Ayala, Juan Manuel Sierra-Hernández and Roberto Rojas-Laguna
Sensors 2024, 24(10), 3026; https://doi.org/10.3390/s24103026 - 10 May 2024
Abstract
In this work, we experimentally analyzed and demonstrated the performance of an in-line Mach–Zehnder interferometer (MZI) in the visible region with an LED light source. The proposed interferometers, with different taper waist diameters and asymmetric core offsets, used a single-mode fiber (SMF). The visibility achieved was V = 0.14 with an FSR of 23 nm for the taper MZI structure, and visibilities of V = 0.3, V = 0.27, and V = 0.34 with FSRs of 23 nm, 17 nm, and 8 nm for separation lengths L of 2.5 cm, 4.0 cm, and 5.0 cm between the core-offset structures, respectively. The experimental investigation of the response as a temperature sensor covered values from 50 °C to 300 °C; the sensitivity obtained was 3.53 a.u./°C, with an R² of 0.99769 and a 1% change in transmission per 1 °C. For a range of 50 °C to 150 °C, a sensitivity of 20.3 pm/°C with an R² of 0.96604 was obtained.
Full article
(This article belongs to the Topic Advance and Applications of Fiber Optic Measurement: 2nd Edition)
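The visibility values V quoted in the abstract above correspond to the standard fringe-visibility definition, V = (Imax − Imin) / (Imax + Imin), computed from the maxima and minima of the interference pattern. A one-line sketch with hypothetical intensities:

```python
def fringe_visibility(i_max, i_min):
    """Standard interference fringe visibility (contrast):
    V = (Imax - Imin) / (Imax + Imin), in [0, 1]."""
    return (i_max - i_min) / (i_max + i_min)

# Hypothetical normalized intensities from a transmission spectrum.
print(round(fringe_visibility(1.0, 0.5), 2))  # 0.33
```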
Open AccessArticle
Filtering Empty Video Frames for Efficient Real-Time Object Detection
by
Yu Liu and Kyoung-Don Kang
Sensors 2024, 24(10), 3025; https://doi.org/10.3390/s24103025 - 10 May 2024
Abstract
Deep learning models have significantly improved object detection, which is essential for visual sensing. However, their increasing complexity results in higher latency and resource consumption, making real-time object detection challenging. To address this challenge, we propose a new lightweight filtering method called L-filter to predict empty video frames that contain no object of interest (e.g., vehicles) with high accuracy via hybrid time series analysis. L-filter drops the frames deemed empty and conducts object detection for nonempty frames only, significantly enhancing the frame processing rate and scalability of real-time object detection. Our evaluation demonstrates that L-filter improves the frame processing rate by 31–47% for a single traffic video stream compared to three standalone state-of-the-art object detection models without L-filter. Additionally, L-filter significantly enhances scalability; it can process up to six concurrent video streams on one commodity GPU, supporting over 57 fps per stream, by working alongside the fastest object detection model among the three.
Full article
(This article belongs to the Special Issue Image Processing and Sensing Technologies for Object Detection)
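Structurally, the L-filter pipeline described above amounts to a cheap empty-frame predictor gating an expensive detector. A minimal sketch of that control flow with toy stand-ins (the real predictor uses hybrid time-series analysis, which is not reproduced here):

```python
def process_stream(frames, is_empty, detect):
    """Run the expensive detector only on frames predicted nonempty."""
    results = []
    for frame in frames:
        if is_empty(frame):          # cheap filter decides: skip or detect
            results.append([])       # predicted empty: no detection run
        else:
            results.append(detect(frame))
    return results

# Toy stand-ins: a "frame" is just its object count, and the "detector"
# returns a list with one entry per object.
frames = [0, 3, 0, 1, 0]
out = process_stream(frames,
                     is_empty=lambda f: f == 0,
                     detect=lambda f: list(range(f)))
print(out)  # [[], [0, 1, 2], [], [0], []]
```

The throughput gain comes from the gap in cost between the filter and the detector: in this toy trace, the detector runs on only 2 of 5 frames.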
Open AccessReview
Recent Developments to the SimSphere Land Surface Modelling Tool for the Study of Land–Atmosphere Interactions
by
George P. Petropoulos and Christina Lekka
Sensors 2024, 24(10), 3024; https://doi.org/10.3390/s24103024 - 10 May 2024
Abstract
Soil–Vegetation–Atmosphere Transfer (SVAT) models are a promising avenue towards gaining a better insight into land surface interactions and Earth’s system dynamics. One such model developed for the academic and research community is the SimSphere SVAT model, a popular software toolkit employed for simulating interactions among the layers of vegetation, soil, and atmosphere on the land surface. The aim of the present review is two-fold: (1) to deliver a critical assessment of the model’s usage by the scientific and wider community over the last 15 years, and (2) to provide information on current software developments implemented in the model. From the review conducted herein, it is evident that from the model’s inception to the present day, SimSphere has received notable interest worldwide, and the dissemination of the model has continuously grown over the years. SimSphere has so far been used in several applications to study land surface interactions. Validation of the model performed worldwide has shown that it produces realistic estimates of the land surface parameters evaluated, whereas detailed sensitivity analysis experiments conducted with the model have further confirmed its structure and architectural coherence. Furthermore, the recent inclusion of novel functionalities in the model, as outlined in the present review, has clearly improved its capabilities and opened up new opportunities for its use by the wider community. SimSphere developments are ongoing in different aspects, and its use as a toolkit for advancing our understanding of land surface interactions, from both educational and research points of view, is anticipated to grow in the coming years.
Full article
(This article belongs to the Section Remote Sensors)
Topics
Topic in
Materials, Nanomaterials, Photonics, Polymers, Applied Sciences, Sensors
Optical and Optoelectronic Properties of Materials and Their Applications
Topic Editors: Zhiping Luo, Gibin George, Navadeep Shrivastava
Deadline: 20 May 2024
Topic in
Remote Sensing, Sensors, Smart Cities, Vehicles, Geomatics
Information Sensing Technology for Intelligent/Driverless Vehicle, 2nd Volume
Topic Editors: Yan Huang, Yi Ren, Penghui Huang, Jun Wan, Zhanye Chen, Shiyang Tang
Deadline: 31 May 2024
Topic in
Applied Sciences, Electricity, Electronics, Energies, Sensors
Power System Protection
Topic Editors: Seyed Morteza Alizadeh, Akhtar Kalam
Deadline: 20 June 2024
Topic in
Applied Sciences, Energies, Machines, Sensors, Vehicles
Vehicle Dynamics and Control
Topic Editors: Peter Gaspar, Junnian Wang
Deadline: 30 June 2024
Special Issues
Special Issue in
Sensors
Meta-User Interfaces for Ambient Environments
Guest Editors: Marco Romano, Phillip C-Y. Sheu, Giuliana Vitiello
Deadline: 20 May 2024
Special Issue in
Sensors
Selected Papers from 20th World Conference on Non-Destructive Testing (WCNDT 2022)
Guest Editor: Seunghee Park
Deadline: 31 May 2024
Special Issue in
Sensors
Novel Sensors and Algorithms for Outdoor Mobile Robot
Guest Editors: Levente Tamás, Andras Majdik
Deadline: 20 June 2024
Special Issue in
Sensors
Deep Learning Methods for Human Activity Recognition and Emotion Detection
Guest Editor: Mario Munoz-Organero
Deadline: 30 June 2024
Topical Collections
Topical Collection in
Sensors
Robotic and Sensor Technologies in Environmental Exploration and Monitoring
Collection Editors: Jacopo Aguzzi, Corrado Costa, Sergio Stefanni, Valerio Funari
Topical Collection in
Sensors
Microfluidic Sensors
Collection Editors: Sabina Merlo, Klaus Stefan Drese