Search Results (140)

Search Parameters:
Keywords = hybrid motion sensors

18 pages, 3556 KiB  
Article
Multi-Sensor Fusion for Autonomous Mobile Robot Docking: Integrating LiDAR, YOLO-Based AprilTag Detection, and Depth-Aided Localization
by Yanyan Dai and Kidong Lee
Electronics 2025, 14(14), 2769; https://doi.org/10.3390/electronics14142769 - 10 Jul 2025
Viewed by 286
Abstract
Reliable and accurate docking remains a fundamental challenge for autonomous mobile robots (AMRs) operating in complex industrial environments with dynamic lighting, motion blur, and occlusion. This study proposes a novel multi-sensor fusion-based docking framework that significantly enhances robustness and precision by integrating YOLOv8-based AprilTag detection, depth-aided 3D localization, and LiDAR-based orientation correction. A key contribution of this work is the construction of a custom AprilTag dataset featuring real-world visual disturbances, enabling the YOLOv8 model to achieve high-accuracy detection and ID classification under challenging conditions. To ensure precise spatial localization, 2D visual tag coordinates are fused with depth data to compute 3D positions in the robot’s frame. A LiDAR group-symmetry mechanism estimates heading deviation, which is combined with visual feedback in a hybrid PID controller to correct angular errors. A finite-state machine governs the docking sequence, including detection, approach, yaw alignment, and final engagement. Simulation and experimental results demonstrate that the proposed system achieves higher docking success rates and improved pose accuracy under various challenging conditions compared to traditional vision- or LiDAR-only approaches. Full article
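The hybrid angular-error correction described in the abstract can be sketched as a PID loop driven by a weighted blend of the vision-derived and LiDAR-derived heading deviations. The gains and blend weights below are illustrative assumptions, not the authors' tuned values:

```python
class HybridPID:
    """Minimal sketch of a hybrid PID yaw controller: the angular error is a
    weighted blend of the vision and LiDAR heading deviations (weights and
    gains are assumed, not taken from the paper)."""

    def __init__(self, kp=1.2, ki=0.05, kd=0.3, w_vision=0.4, w_lidar=0.6):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.w_vision, self.w_lidar = w_vision, w_lidar
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, vision_err, lidar_err, dt):
        # Blend the two heading-error estimates before applying PID terms.
        error = self.w_vision * vision_err + self.w_lidar * lidar_err
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = HybridPID()
cmd = pid.step(vision_err=0.10, lidar_err=0.05, dt=0.1)  # rad, rad, s
```

A real controller would feed `cmd` to the robot's angular-velocity interface inside the finite-state machine's yaw-alignment state.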

25 pages, 2723 KiB  
Article
A Human-Centric, Uncertainty-Aware Event-Fused AI Network for Robust Face Recognition in Adverse Conditions
by Akmalbek Abdusalomov, Sabina Umirzakova, Elbek Boymatov, Dilnoza Zaripova, Shukhrat Kamalov, Zavqiddin Temirov, Wonjun Jeong, Hyoungsun Choi and Taeg Keun Whangbo
Appl. Sci. 2025, 15(13), 7381; https://doi.org/10.3390/app15137381 - 30 Jun 2025
Cited by 1 | Viewed by 235
Abstract
Face recognition systems often falter when deployed in uncontrolled settings, grappling with low light, unexpected occlusions, motion blur, and the degradation of sensor signals. Most contemporary algorithms chase raw accuracy yet overlook the pragmatic need for uncertainty estimation and multispectral reasoning rolled into a single framework. This study introduces HUE-Net—a Human-centric, Uncertainty-aware, Event-fused Network—designed specifically to thrive under severe environmental stress. HUE-Net marries the visible RGB band with near-infrared (NIR) imagery and high-temporal-event data through an early-fusion pipeline, proven more responsive than serial approaches. A custom hybrid backbone that couples convolutional networks with transformers keeps the model nimble enough for edge devices. Central to the architecture is the perturbed multi-branch variational module, which distills probabilistic identity embeddings while delivering calibrated confidence scores. Complementing this, an Adaptive Spectral Attention mechanism dynamically reweights each stream to amplify the most reliable facial features in real time. Unlike previous efforts that compartmentalize uncertainty handling, spectral blending, or computational thrift, HUE-Net unites all three in a lightweight package. Benchmarks on the IJB-C and N-SpectralFace datasets illustrate that the system not only secures state-of-the-art accuracy but also exhibits unmatched spectral robustness and reliable probability calibration. The results indicate that HUE-Net is well-positioned for forensic missions and humanitarian scenarios where trustworthy identification cannot be deferred. Full article
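The Adaptive Spectral Attention idea, reweighting each stream so the most reliable modality dominates, can be illustrated as a softmax over per-stream reliability scores. The scores and the low-light scenario below are assumptions for illustration only:

```python
import math

def spectral_attention(reliability):
    """Sketch of adaptive spectral reweighting: per-stream reliability scores
    (RGB, NIR, event) are softmax-normalized into fusion weights so the
    cleanest modality dominates. Scores are made up for illustration."""
    exps = [math.exp(s) for s in reliability]
    z = sum(exps)
    return [e / z for e in exps]

# Low light: RGB is unreliable, NIR and event streams remain informative.
w_rgb, w_nir, w_event = spectral_attention([0.1, 1.2, 0.9])
```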

31 pages, 3621 KiB  
Review
Electromyography Signal Acquisition, Filtering, and Data Analysis for Exoskeleton Development
by Jung-Hoon Sul, Lasitha Piyathilaka, Diluka Moratuwage, Sanura Dunu Arachchige, Amal Jayawardena, Gayan Kahandawa and D. M. G. Preethichandra
Sensors 2025, 25(13), 4004; https://doi.org/10.3390/s25134004 - 27 Jun 2025
Viewed by 553
Abstract
Electromyography (EMG) has emerged as a vital tool in the development of wearable robotic exoskeletons, enabling intuitive and responsive control by capturing neuromuscular signals. This review presents a comprehensive analysis of the EMG signal processing pipeline tailored to exoskeleton applications, spanning signal acquisition, noise mitigation, data preprocessing, feature extraction, and control strategies. Various EMG acquisition methods, including surface, intramuscular, and high-density surface EMG, are evaluated for their applicability in real-time control. The review addresses prevalent signal quality challenges, such as motion artifacts, power-line interference, and crosstalk. It also highlights both traditional filtering techniques and advanced methods, such as wavelet transforms, empirical mode decomposition, and adaptive filtering. Feature extraction techniques are explored to support pattern recognition and motion classification. Machine learning approaches are examined for their roles in pattern recognition-based and hybrid control architectures. This article emphasizes muscle synergy analysis and adaptive control algorithms to enhance personalization and fatigue compensation, followed by the benefits of multimodal sensing and edge computing in addressing the limitations of EMG-only systems. By focusing on EMG-driven strategies through signal processing, machine learning, and sensor fusion innovations, this review bridges gaps in human–machine interaction, offering insights into improving the precision, adaptability, and robustness of next generation exoskeletons. Full article
(This article belongs to the Special Issue Sensors-Based Healthcare Diagnostics, Monitoring and Medical Devices)
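The feature-extraction stage the review surveys commonly relies on time-domain descriptors such as mean absolute value, root mean square, and zero crossings. A minimal sketch, with made-up sample values rather than real EMG recordings:

```python
import math

def emg_features(window):
    """Three classic time-domain EMG features over one analysis window
    (feature names are standard in the EMG literature)."""
    n = len(window)
    mav = sum(abs(x) for x in window) / n                          # mean absolute value
    rms = math.sqrt(sum(x * x for x in window) / n)                # root mean square
    zc = sum(1 for a, b in zip(window, window[1:]) if a * b < 0)   # zero crossings
    return mav, rms, zc

mav, rms, zc = emg_features([0.2, -0.1, 0.4, -0.3, 0.1])
```

In a pattern-recognition pipeline these per-window features would be stacked across channels and fed to the classifier.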

15 pages, 2240 KiB  
Article
Wearable Sensors and Artificial Intelligence for the Diagnosis of Parkinson’s Disease
by Yacine Benyoucef, Islem Melliti, Jouhayna Harmouch, Borhan Asadi, Antonio Del Mastro, Diego Lapuente-Hernández and Pablo Herrero
J. Clin. Med. 2025, 14(12), 4207; https://doi.org/10.3390/jcm14124207 - 13 Jun 2025
Viewed by 719
Abstract
Background/Objectives: This study explores the integration of wearable sensors and artificial intelligence (AI) for Human Activity Recognition (HAR) in the diagnosis and rehabilitation of Parkinson’s disease (PD). The objective was to develop a proof-of-concept model based on internal reproducibility, without external generalization, that is capable of distinguishing pathological movements from healthy ones while ensuring clinical relevance and patient safety. Methods: Nine subjects, including eight patients with Parkinson’s disease and one healthy control, were included. Motion data were collected using the Motigravity platform, which integrates inertial sensors in a controlled environment. The signals were automatically segmented into fixed-length windows, with poor-quality segments excluded through preprocessing. A hybrid CNN-LSTM (Convolutional Neural Networks—Long Short-Term Memory) model was trained to classify motion patterns, leveraging convolutional layers for spatial feature extraction and LSTM layers for temporal dependencies. The Motigravity system provided a controlled hypogravity environment for data collection and rehabilitation exercises. Results: The proposed CNN-LSTM model achieved a validation accuracy of 100%, demonstrating classification potential. The Motigravity system contributed to improved data reliability and ensured patient safety. Despite increasing class imbalance in extended experiments, the model consistently maintained perfect accuracy; external validation will be needed to confirm generalizability and overcome these limitations. Conclusions: Integrating AI and wearable sensors has significant potential to improve the HAR-based classification of movement impairments and guide rehabilitation strategies in PD. While challenges such as dataset size remain, expanding real-world validation and enhancing automated segmentation could further improve clinical impact. 
Future research should explore larger cohorts, extend the model to other neurodegenerative diseases, and evaluate its integration into clinical rehabilitation workflows. Full article
(This article belongs to the Section Clinical Neurology)
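The fixed-length windowing step described in the Methods can be sketched as a sliding-window segmentation. The window and stride sizes below are assumptions, since the paper's values are not given here:

```python
def segment(signal, win=128, stride=64):
    """Cut a 1-D signal into fixed-length, half-overlapping windows, the
    standard segmentation used before CNN-LSTM training (win/stride assumed)."""
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, stride)]

# 512 samples -> 7 windows of 128 samples with 50% overlap.
windows = segment(list(range(512)), win=128, stride=64)
```

A quality filter would then drop windows failing, e.g., an amplitude or dropout check before training.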

15 pages, 5421 KiB  
Article
Contour Error Control for a Hybrid Robot Equipped with Grating Sensors
by Xianlei Shan, Tianyu Zou, Haitao Liu, Yu Deng and Juliang Xiao
Machines 2025, 13(6), 502; https://doi.org/10.3390/machines13060502 - 9 Jun 2025
Viewed by 717
Abstract
To mitigate the detrimental effects of joint elasticity and transmission errors on contour accuracy and to improve the multi-axis motion performance of hybrid robots, this study investigates contour error modeling and control by leveraging additional grating sensors for real-time measurements. Accounting for the inherent pose coupling characteristics of hybrid robots, a novel contour error modeling method is proposed that employs six-dimensional exponential coordinates for error description and incorporates an efficient search algorithm for foot point determination. Building upon an existing grating sensor feedback control framework, a proportional contour controller is developed. Experimental validation on the TriMule-200 hybrid robot demonstrates an enhancement in end-effector contour accuracy. Full article
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
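The foot-point determination at the heart of the contour-error model can be illustrated in two dimensions as a nearest-point search along a sampled reference path. The real method operates on six-dimensional exponential coordinates with an efficient search, so this is only a conceptual sketch:

```python
def foot_point(path, p):
    """Toy foot-point search: find the sampled reference-path point closest to
    the measured pose p; the contour error is the distance to it. 2-D only,
    for illustration of the idea."""
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    i = min(range(len(path)), key=lambda k: d2(path[k], p))
    return i, d2(path[i], p) ** 0.5

path = [(x * 0.1, 0.0) for x in range(50)]   # straight reference contour
idx, err = foot_point(path, (1.23, 0.05))    # measured end-effector position
```

A proportional contour controller would then command a correction proportional to `err` toward the foot point.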

15 pages, 11557 KiB  
Article
Toward Versatile Transient Electronics: Electrospun Biocompatible Silk Fibroin/Carbon Quantum Dot-Based Green-Emission, Water-Soluble Piezoelectric Nanofibers
by Zhipei Xia, Chubao Liu, Juan Li, Biyao Huang, Chu Pan, Yu Lai, Zhu Liu, Dongling Wu, Sen Liang, Xuanlun Wang, Weiqing Yang and Jun Lu
Polymers 2025, 17(11), 1579; https://doi.org/10.3390/polym17111579 - 5 Jun 2025
Viewed by 525
Abstract
The rapid development of wearable electronics requires multifunctional, transient electronic devices to reduce the ecological footprint and ensure data security. Unfortunately, existing transient electronic materials need to be degraded in chemical solvents or body fluids. Here, we report green luminescent, water-soluble, and biocompatible piezoelectric nanofibers developed by electrospinning green carbon quantum dots (G-CQDs), mulberry silk fibroin (SF), and polyvinyl alcohol (PVA). The introduction of G-CQDs significantly enhances the piezoelectric output of silk fibroin-based fiber materials. Meanwhile, the silk fibroin-based hybrid fibers maintain the photoluminescent response of G-CQDs without sacrificing valuable biocompatibility. Notably, the piezoelectric output of a G-CQD/PVA/SF fiber-based nanogenerator is more than three times higher than that of a PVA/SF fiber-based nanogenerator. This is one of the highest levels of state-of-the-art piezoelectric devices based on biological organic materials. As a proof of concept, in the actual scenario of a rope skipping exercise, the G-CQD/PVA/SF fiber-based nanogenerator is further employed as a self-powered wearable sensor for real-time sensing of athletic motions. It demonstrates high portability, good flexibility, and stable piezoresponse for smart sports applications. This class of water-disposable, piezo/photoactive biological materials could be compelling building blocks for applications in a new generation of versatile, transient, wearable/implantable devices. Full article
(This article belongs to the Special Issue Polymer-Based Wearable Electronics)

17 pages, 5730 KiB  
Article
EMG-Controlled Soft Robotic Bicep Enhancement
by Jiayue Zhang, Daniel Vanderbilt, Ethan Fitz and Janet Dong
Bioengineering 2025, 12(5), 526; https://doi.org/10.3390/bioengineering12050526 - 15 May 2025
Viewed by 386
Abstract
Industrial workers often engage in repetitive lifting tasks. This type of continual loading on their arms throughout the workday can lead to muscle or tendon injuries. A non-intrusive system designed to assist a worker’s arms would help alleviate strain on their muscles, thereby preventing injury and minimizing productivity losses. The goal of this project is to develop a wearable soft robotic arm enhancement device that supports a worker’s muscles by sharing the load during lifting tasks, thereby increasing their lifting capacity, reducing fatigue, and improving their endurance to help prevent injury. The device should be easy to use and wear, functioning in relative harmony with the user’s own muscles. It should not restrict the user’s range of motion or flexibility. The human arm consists of numerous muscles that work together to enable its movement. However, as a proof of concept, this project focuses on developing a prototype to enhance the biceps brachii muscle, the primary muscle involved in pulling movements during lifting. Key components of the prototype include a soft robotic muscle or actuator analogous to the biceps, a control system for the pneumatic muscle actuator, and a method for securing the soft muscle to the user’s arm. The McKibben-inspired pneumatic muscle was chosen as the soft actuator for the prototype. A hybrid control algorithm, incorporating PID and model-based control methods, was developed. Electromyography (EMG) and pressure sensors were utilized as inputs for the control algorithms. This paper discusses the design strategies for the device and the preliminary results of the feasibility testing. Based on the results, a wearable EMG-controlled soft robotic arm augmentation could effectively enhance the endurance of industrial workers engaged in repetitive lifting tasks. Full article
(This article belongs to the Special Issue Advances in Robotic-Assisted Rehabilitation)

21 pages, 630 KiB  
Article
Hybrid Deep Learning Framework for Continuous User Authentication Based on Smartphone Sensors
by Bandar Alotaibi and Munif Alotaibi
Sensors 2025, 25(9), 2817; https://doi.org/10.3390/s25092817 - 30 Apr 2025
Viewed by 680
Abstract
Continuous user authentication is critical to mobile device security, addressing vulnerabilities associated with traditional one-time authentication methods. This research proposes a hybrid deep learning framework that combines techniques from computer vision and sequence modeling, namely, ViT-inspired patch extraction, multi-head attention, and BiLSTM networks, to authenticate users continuously from smartphone sensor data. Unlike many existing approaches that directly apply these techniques for specific recognition tasks, our method reshapes raw motion signals into ViT-like patches to capture short-range patterns, employs multi-head attention to emphasize the most discriminative temporal segments, and then processes these enhanced embeddings through a bidirectional LSTM to integrate broader contextual information. This integrated pipeline effectively extracts local and global motion features specific to each user’s unique behavior, improving accuracy over conventional Transformer, Informer, CNN, and LSTM baselines. Experiments on the MotionSense and UCI HAR datasets show accuracies of 97.51% and 89.37%, respectively, indicating strong user-identification performance. Full article
(This article belongs to the Section Intelligent Sensors)
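The ViT-inspired patch extraction can be sketched as cutting a multichannel motion sequence into fixed-length, non-overlapping patches that downstream attention and BiLSTM layers treat as tokens. The patch length and three-axis samples are assumptions for illustration:

```python
def to_patches(samples, patch_len=16):
    """Reshape a multichannel motion sequence into ViT-like patches (tokens).
    Each sample is an (ax, ay, az) tuple; patch_len is an assumed value."""
    n = (len(samples) // patch_len) * patch_len   # drop the ragged tail
    return [samples[i:i + patch_len] for i in range(0, n, patch_len)]

# 100 accelerometer samples -> 6 patches of 16 samples each.
patches = to_patches([(i, i * 2, i * 3) for i in range(100)], patch_len=16)
```

Each patch would then be linearly embedded before the multi-head attention and BiLSTM stages.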

26 pages, 2006 KiB  
Article
Edge AI for Real-Time Anomaly Detection in Smart Homes
by Manuel J. C. S. Reis and Carlos Serôdio
Future Internet 2025, 17(4), 179; https://doi.org/10.3390/fi17040179 - 18 Apr 2025
Viewed by 3457
Abstract
The increasing adoption of smart home technologies has intensified the demand for real-time anomaly detection to improve security, energy efficiency, and device reliability. Traditional cloud-based approaches introduce latency, privacy concerns, and network dependency, making Edge AI a compelling alternative for low-latency, on-device processing. This paper presents an Edge AI-based anomaly detection framework that combines Isolation Forest (IF) and Long Short-Term Memory Autoencoder (LSTM-AE) models to identify anomalies in IoT sensor data. The system is evaluated on both synthetic and real-world smart home datasets, including temperature, motion, and energy consumption signals. Experimental results show that LSTM-AE achieves higher detection accuracy (up to 93.6%) and recall but requires more computational resources. In contrast, IF offers faster inference and lower power consumption, making it suitable for constrained environments. A hybrid architecture integrating both models is proposed to balance accuracy and efficiency, achieving sub-50 ms inference latency on embedded platforms such as Raspberry Pi and NVIDIA Jetson Nano. Optimization strategies such as quantization reduced LSTM-AE inference time by 76% and power consumption by 35%. Adaptive learning mechanisms, including federated learning, are also explored to minimize cloud dependency and enhance data privacy. These findings demonstrate the feasibility of deploying real-time, privacy-preserving, and energy-efficient anomaly detection directly on edge devices. The proposed framework can be extended to other domains such as smart buildings and industrial IoT. Future work will investigate self-supervised learning, transformer-based detection, and deployment in real-world operational settings. Full article
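One way to read the proposed hybrid architecture is as a cascade: a cheap screener (standing in here for Isolation Forest) checks every sample, and the heavier model (standing in for the LSTM-AE) is consulted only on suspicious ones. The thresholds and toy scorers below are illustrative assumptions, not the paper's models:

```python
def hybrid_anomaly(sample, fast_score, deep_score, fast_thresh=0.6, deep_thresh=0.5):
    """Cascade sketch: run the cheap detector on everything; escalate to the
    expensive detector only when the cheap score is suspicious. Thresholds
    are assumed values."""
    if fast_score(sample) < fast_thresh:
        return False                      # clearly normal: skip the heavy model
    return deep_score(sample) > deep_thresh

# Toy scorers: distance from an assumed "normal" room temperature of 21 °C.
fast = lambda t: abs(t - 21.0) / 10.0
deep = lambda t: abs(t - 21.0) / 8.0
flagged = [hybrid_anomaly(t, fast, deep) for t in (20.5, 23.0, 35.0)]
```

Because most smart-home samples are normal, the heavy model runs rarely, which is how the hybrid keeps latency and power low on edge hardware.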

22 pages, 5557 KiB  
Article
Flight Trajectory Prediction Based on Automatic Dependent Surveillance-Broadcast Data Fusion with Interacting Multiple Model and Informer Framework
by Fan Li, Xuezhi Xu, Rihan Wang, Mingyuan Ma and Zijing Dong
Sensors 2025, 25(8), 2531; https://doi.org/10.3390/s25082531 - 17 Apr 2025
Viewed by 786
Abstract
Aircraft trajectory prediction is challenging because the flight process involves uncertain kinematic motion and varying dynamics, characterized by intricate temporal dependencies in the flight surveillance data. To address these challenges, this study proposes a novel hybrid prediction framework, the IMM-Informer, which integrates an interacting multiple model (IMM) approach with the deep learning-based Informer model. The IMM processes flight tracking with multiple typical motion models to produce the initial state predictions. Within the Informer framework, the encoder captures the temporal features with the ProbSparse self-attention mechanism, and the decoder generates trajectory deviation predictions. A final fusion combines the IMM estimators with Informer correction outputs and leverages their respective strengths to achieve accurate and robust predictions. The experiments are conducted using real flight surveillance data received from automatic dependent surveillance-broadcast (ADS-B) sensors to validate the effectiveness of the proposed method. The results demonstrate that the IMM-Informer framework achieves notable prediction error reductions and significantly outperforms standalone sequence prediction network models in accuracy. Full article
(This article belongs to the Special Issue Multi-Sensor Data Fusion)
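The IMM combination step can be sketched as mixing each motion model's prediction by its posterior probability, i.e. prior times measurement likelihood, renormalized. The numbers below are illustrative, not flight data:

```python
def imm_mix(predictions, likelihoods, priors):
    """Sketch of the IMM mixing step: weight each motion model's prediction by
    its posterior model probability (prior x likelihood, renormalized)."""
    post = [p * l for p, l in zip(priors, likelihoods)]
    z = sum(post)
    weights = [w / z for w in post]
    fused = sum(w * x for w, x in zip(weights, predictions))
    return fused, weights

# Say a constant-velocity and a coordinated-turn model predict one coordinate.
fused, w = imm_mix(predictions=[100.0, 104.0], likelihoods=[0.2, 0.8], priors=[0.5, 0.5])
```

In the paper's framework, this IMM estimate is then fused with the Informer's learned deviation correction.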

13 pages, 3466 KiB  
Article
A Multimodal CNN–Transformer Network for Gait Pattern Recognition with Wearable Sensors in Weak GNSS Scenarios
by Jiale Wang, Nanzhu Liu, Yuxin Xie, Shengmao Que and Ming Xia
Electronics 2025, 14(8), 1537; https://doi.org/10.3390/electronics14081537 - 10 Apr 2025
Viewed by 722
Abstract
Human motion recognition is crucial for applications like navigation, health monitoring, and smart healthcare, especially in weak GNSS scenarios. Current methods face challenges such as limited sensor diversity and inadequate feature extraction. This study proposes a CNN–Transformer–Attention framework with multimodal enhancement to address these challenges. We first designed a lightweight wearable system integrating synchronized accelerometer, gyroscope, and magnetometer modules at wrist, chest, and foot positions, enabling multi-dimensional biomechanical data acquisition. A hybrid preprocessing pipeline combining cubic spline interpolation, adaptive Kalman filtering, and spectral analysis was developed to extract discriminative spatiotemporal-frequency features. The core architecture employs parallel CNN pathways for local sensor feature extraction and Transformer-based attention layers to model global temporal dependencies across body positions. Experimental validation on 12 motion patterns demonstrated 98.21% classification accuracy, outperforming single-sensor configurations by 0.43–7.98% and surpassing conventional models (BP-Network, CNN, LSTM, Transformer, KNN) through effective cross-modal fusion. The framework also exhibits improved generalization with 3.2–8.7% better accuracy in cross-subject scenarios, providing a robust solution for human activity recognition and accurate positioning in challenging environments such as autonomous navigation and smart cities. Full article
(This article belongs to the Section Microwave and Wireless Communications)
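The Kalman filtering stage of the preprocessing pipeline can be illustrated with a scalar, non-adaptive Kalman filter on one sensor axis. The process and measurement noise values are assumptions, and a real implementation would adapt them online:

```python
def kalman_1d(measurements, q=1e-3, r=0.25):
    """Scalar Kalman smoother for one sensor axis, standing in for the
    adaptive Kalman filtering stage (q = process noise, r = measurement
    noise; both assumed)."""
    x, p = measurements[0], 1.0
    out = []
    for z in measurements:
        p += q                 # predict: state assumed locally constant
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # correct with measurement z
        p *= (1 - k)
        out.append(x)
    return out

smoothed = kalman_1d([1.0, 1.2, 0.9, 1.1, 1.0])  # e.g. one accelerometer axis
```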

25 pages, 2530 KiB  
Article
Combining Camera–LiDAR Fusion and Motion Planning Using Bird’s-Eye View Representation for End-to-End Autonomous Driving
by Ze Yu, Jun Li, Yuzhen Wei, Yuandong Lyu and Xiaojun Tan
Drones 2025, 9(4), 281; https://doi.org/10.3390/drones9040281 - 8 Apr 2025
Viewed by 1802
Abstract
End-to-end autonomous driving has become a key research focus in autonomous vehicles. However, existing methods struggle with effectively fusing heterogeneous sensor inputs and converting dense perceptual features into sparse motion representations. To address these challenges, we propose BevDrive, a novel end-to-end autonomous driving framework that unifies camera–LiDAR fusion and motion planning through a bird’s-eye view (BEV) representation. BevDrive consists of three core modules: the bidirectionally guided BEV feature construction module, the dual-attention BEV feature fusion module, and the BEV-based motion planning module. The bidirectionally guided BEV feature construction module comprises two branches: depth-guided image BEV feature construction and image-guided LiDAR BEV feature construction. Depth-guided image BEV feature construction employs a lifting and projection approach guided by depth information from LiDAR, transforming image features into a BEV representation. Meanwhile, image-guided LiDAR BEV feature construction enriches sparse LiDAR BEV features by integrating complementary information from the images. Then, the dual-attention BEV feature fusion module combines multi-modal BEV features at both local and global levels using a hybrid approach of window self-attention and global self-attention mechanisms. Finally, the BEV-based motion planning module integrates perception and planning by refining control and trajectory queries through interactions with the scene context in the fused BEV features, generating precise trajectory points and control commands. Extensive experiments on the CARLA Town05 Long benchmark demonstrate that BevDrive achieves state-of-the-art performance. Furthermore, we validate the feasibility of the proposed algorithm on a real-world vehicle platform, confirming its practical applicability and robustness. Full article
(This article belongs to the Section Innovative Urban Mobility)

57 pages, 8107 KiB  
Review
Machine Learning for Human Activity Recognition: State-of-the-Art Techniques and Emerging Trends
by Md Amran Hossen and Pg Emeroylariffion Abas
J. Imaging 2025, 11(3), 91; https://doi.org/10.3390/jimaging11030091 - 20 Mar 2025
Cited by 1 | Viewed by 3340
Abstract
Human activity recognition (HAR) has emerged as a transformative field with widespread applications, leveraging diverse sensor modalities to accurately identify and classify human activities. This paper provides a comprehensive review of HAR techniques, focusing on the integration of sensor-based, vision-based, and hybrid methodologies. It explores the strengths and limitations of commonly used modalities, such as RGB images/videos, depth sensors, motion capture systems, wearable devices, and emerging technologies like radar and Wi-Fi channel state information. The review also discusses traditional machine learning approaches, including supervised and unsupervised learning, alongside cutting-edge advancements in deep learning, such as convolutional and recurrent neural networks, attention mechanisms, and reinforcement learning frameworks. Despite significant progress, HAR still faces critical challenges, including handling environmental variability, ensuring model interpretability, and achieving high recognition accuracy in complex, real-world scenarios. Future research directions emphasise the need for improved multimodal sensor fusion, adaptive and personalised models, and the integration of edge computing for real-time analysis. Additionally, addressing ethical considerations, such as privacy and algorithmic fairness, remains a priority as HAR systems become more pervasive. This study highlights the evolving landscape of HAR and outlines strategies for future advancements that can enhance the reliability and applicability of HAR technologies in diverse domains. Full article

18 pages, 6300 KiB  
Article
Shell-Optimized Hybrid Generator for Ocean Wave Energy Harvesting
by Heng Liu, Dongxin Guo, Hengda Zhu, Honggui Wen, Jiawei Li and Lingyu Wan
Energies 2025, 18(6), 1502; https://doi.org/10.3390/en18061502 - 18 Mar 2025
Viewed by 461
Abstract
With the increasing global emphasis on sustainable energy, wave energy has gained recognition as a significant renewable marine resource, drawing substantial research attention. However, the efficient conversion of low-frequency, random, and low-energy wave motion into electrical power remains a considerable challenge. In this study, an advanced hybrid generator design is introduced which enhances wave energy harvesting by optimizing wave–body coupling characteristics and incorporating both a triboelectric nanogenerator (TENG) and an electromagnetic generator (EMG) within the shell. The optimized asymmetric trapezoidal shell (ATS) improves output frequency and energy harvesting efficiency in marine environments. Experimental findings under simulated water wave excitation indicate that the accelerations in the x, y, and z directions for the ATS are 1.9 m·s−2, 0.5 m·s−2, and 1.4 m·s−2, respectively, representing 1.2, 5.5, and 2.3 times those observed in the cubic shell. Under real ocean conditions, a single TENG unit embedded in the ATS achieves a maximum transferred charge of 1.54 μC, a short-circuit current of 103 μA, and an open-circuit voltage of 363 V, surpassing the cubic shell by factors of 1.21, 1.24, and 2.13, respectively. These performance metrics closely align with those obtained under six-degree-of-freedom platform oscillation (0.4 Hz, swing angle range of ±6°), exceeding the results observed in laboratory-simulated waves. Notably, the most probable output frequency of the ATS along the x-axis reaches 0.94 Hz in ocean trials, which is 1.94 times the significant wave frequency of ambient sea waves. The integrated hybrid generator efficiently captures low-quality wave energy to power water quality sensors in marine environments. This study highlights the potential of combining synergistic geometric shell design and generator integration to achieve high-performance wave energy harvesting through improved wave–body coupling. Full article
(This article belongs to the Topic Advanced Energy Harvesting Technology)

21 pages, 5979 KiB  
Article
Sign Language Sentence Recognition Using Hybrid Graph Embedding and Adaptive Convolutional Networks
by Pathomthat Chiradeja, Yijuan Liang and Chaiyan Jettanasen
Appl. Sci. 2025, 15(6), 2957; https://doi.org/10.3390/app15062957 - 10 Mar 2025
Cited by 1 | Viewed by 960
Abstract
Sign language plays a crucial role in bridging communication barriers within the Deaf community. Recognizing sign language sentences remains a significant challenge due to their complex structure, variations in signing styles, and temporal dynamics. This study introduces an innovative sign language sentence recognition (SLSR) approach using Hybrid Graph Embedding and Adaptive Convolutional Networks (HGE-ACN) specifically developed for single-handed wearable glove devices. The system relies on sensor data from a glove with six-axis inertial sensors and five-finger curvature sensors. The proposed HGE-ACN framework integrates graph-based embeddings to capture dynamic spatial–temporal relationships in motion and curvature data. At the same time, the Adaptive Convolutional Networks extract robust glove-based features to handle variations in signing speed, transitions between gestures, and individual signer styles. The lightweight design enables real-time processing and enhances recognition accuracy, making it suitable for practical use. Extensive experiments demonstrate that HGE-ACN achieves superior accuracy and computational efficiency compared to existing glove-based recognition methods. The system maintains robustness under various conditions, including inconsistent signing speeds and environmental noise. This work has promising applications in real-time assistive tools, educational technologies, and human–computer interaction systems, facilitating more inclusive and accessible communication platforms for the deaf and hard-of-hearing communities. Future work will explore multi-lingual sign language recognition and real-world deployment across diverse environments. Full article
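The graph-embedding idea, capturing spatial relationships among the glove's finger sensors, can be sketched as each sensor node averaging its feature with its neighbours'. The chain graph and curvature readings below are made up for illustration and are not the HGE-ACN architecture itself:

```python
def graph_embed(features, edges):
    """Toy one-hop graph embedding: each glove-sensor node averages its own
    feature with its neighbours', a minimal stand-in for the spatial side of
    a hybrid graph embedding. Graph and values are illustrative."""
    neigh = {i: [] for i in range(len(features))}
    for a, b in edges:
        neigh[a].append(b)
        neigh[b].append(a)
    return [
        (features[i] + sum(features[j] for j in neigh[i])) / (1 + len(neigh[i]))
        for i in range(len(features))
    ]

# Five finger-curvature readings; edges link adjacent fingers in a chain.
emb = graph_embed([0.1, 0.5, 0.9, 0.4, 0.2], [(0, 1), (1, 2), (2, 3), (3, 4)])
```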
