Search Results (3,256)

Search Parameters:
Keywords = inertial sensor

15 pages, 1350 KB  
Article
A Readout Circuit Applied for an Ultrafast CMOS Image Sensor
by Houzhi Cai, Zhaoyang Xie, Zhiying Deng, Youlin Ma and Lijuan Xiang
Photonics 2026, 13(4), 390; https://doi.org/10.3390/photonics13040390 (registering DOI) - 18 Apr 2026
Abstract
The microchannel plate (MCP) gated framing camera is commonly used in inertial confinement fusion diagnostics. However, it is a bulky vacuum electronic device that cannot image along a single line of sight. To reduce the camera's size and achieve single-line-of-sight imaging, a CMOS image sensor composed of a pixel unit and a readout circuit is presented to form the framing camera. The CMOS image sensor has a 32 × 32 × 4 pixel array with an ultrashort shutter time and four-frame imaging. The pixel array and the analog-to-digital converter (ADC) readout circuit are designed in a standard 0.18 μm CMOS process. The pixel array includes 5T-structured pixel units, a voltage-controlled delay, a clock tree, and row-decoding scan circuits. The pixel circuit achieves a temporal resolution of 65 ps. The ADC readout circuit is composed of a counter, a comparator, a ramp generator, and a register, and operates at a sampling frequency of 24.41 kS/s. The ADC achieves an effective number of bits of 11.3, a spurious-free dynamic range (SFDR) of 73.4 dB, and a signal-to-noise ratio (SNR) of 70.0 dB. The CMOS image sensor will provide a novel and important imaging method for the field of ultrafast science. Full article
(This article belongs to the Special Issue Advances in Ultrafast Science and Applications)
24 pages, 2768 KB  
Article
Enhancing Wearable-Based Elderly Activity Recognition Through a Hybrid Deep Residual Network
by Sakorn Mekruksavanich and Anuchit Jitpattanakul
Mach. Learn. Knowl. Extr. 2026, 8(4), 107; https://doi.org/10.3390/make8040107 (registering DOI) - 18 Apr 2026
Abstract
The rapid growth of the elderly population worldwide demands reliable activity recognition technologies to support independent living and continuous health supervision. However, conventional wearable sensor-based human activity recognition (HAR) techniques often fail to capture the complex temporal behaviour and subtle motion patterns characteristic of the elderly. To address these limitations, this study introduces a hybrid deep residual architecture—CNN-CBAM-BiGRU—that integrates convolutional neural networks (CNNs), the convolutional block attention module (CBAM), and bidirectional gated recurrent units (BiGRUs) to improve activity recognition using inertial measurement unit (IMU) data. In the proposed CNN-CBAM-BiGRU framework, CNN layers automatically derive representative features from raw sensor signals, CBAM applies adaptive channel and spatial attention to highlight informative patterns, and BiGRU captures long-range temporal relationships within activity sequences. The approach was evaluated on three benchmark datasets designed for elderly populations—HAR70+, HARTH, and SisFall—covering daily activities and fall events. The proposed model consistently outperforms existing methods across all datasets, achieving accuracies exceeding 96%, F1-scores above 93%, and a fall detection recall of 93.74%, confirming its robustness and suitability for safety-critical monitoring applications. Class-level evaluation indicates excellent recognition of static postures and consistent performance for dynamic actions. Convergence analysis further confirms efficient learning with limited overfitting across datasets. The proposed framework thus provides a robust and accurate solution for wearable-based elderly activity recognition, with strong potential for deployment in fall detection, health monitoring, and ambient assisted living systems. Full article
(This article belongs to the Special Issue Sustainable Applications for Machine Learning—2nd Edition)
8 pages, 1128 KB  
Proceeding Paper
Possibilities of Using Quaternion Methods in Helmet-Mounted Cueing Systems in Order to Increase Their Operation Reliability
by Sławomir Michalak, Andrzej Szelmanowski, Andrzej Pazur and Pawel Janik
Eng. Proc. 2026, 133(1), 9; https://doi.org/10.3390/engproc2026133009 - 16 Apr 2026
Abstract
The article reviews the methods of determining the angular position of a pilot’s helmet used on board modern aircraft, and analyzes the methods of determining the angular position of an object used in aviation spatial orientation and inertial navigation systems. A functional analysis of the NSC-1 Orion helmet-mounted targeting system developed at AFIT was performed. The main part of the work consists of the development of new, original mathematical models for determining the angular position of the pilot’s helmet using quaternions, simulation studies of these models, and experimental verification of their results. The stages necessary for the development of mathematical models and their proper testing for disturbances occurring in the measurement of gravitational acceleration (sensor errors and acceleration from maneuvers) and the magnetic field (sensor errors and the influence of the aircraft’s own magnetic field) are presented. Full article
35 pages, 6272 KB  
Article
AI-Enhanced Thermal–Visual–Inertial Odometry and Autonomous Planning for GPS-Denied Search-and-Rescue Robotics
by Islam T. Almalkawi, Sabya Shtaiwi, Alaa Alhowaide and Manel Guerrero Zapata
Sensors 2026, 26(8), 2462; https://doi.org/10.3390/s26082462 - 16 Apr 2026
Abstract
Search and rescue (SAR) missions in collapsed or underground environments remain challenging due to GPS unavailability, which hinders localization and autonomous navigation. Systems that rely on single-sensor inputs or structured settings often degrade under smoke, dust, or dynamic clutter. This paper presents an autonomous ground robot for GPS-denied SAR that integrates low-cost thermal, visual, inertial, and acoustic cues within a unified, computation-efficient architecture. The stack combines Thermal–Visual Odometry (TV–VO) with Zero-Velocity Updates (ZUPT) for drift-resistant localization, RescueGraph for multimodal survivor detection, and a Proximal Policy Optimization (PPO) planner for adaptive navigation under uncertainty. Across simulated disaster scenarios and benchmark corridor runs, the system shows embedded-feasible runtime behavior and supports return to base without external beacons under the evaluated conditions. Quantitatively, TV–VO+ZUPT reduces drift in short internal evaluations, while RescueGraph attains an F1-score of 0.6923 and an area under the ROC curve (AUC) of 0.976 for survivor detection. At the system level, the integrated navigation stack achieves full mission completion in the reported SAR-style trials, while the separate A*/PPO comparison highlights a trade-off between completion rate, traversal time, and collisions. Overall, the results support the practical promise of a low-cost sensor-fusion and learning-assisted navigation framework for GPS-denied SAR robotics. Full article
(This article belongs to the Section Sensors and Robotics)
38 pages, 8935 KB  
Article
3D-IMB-APDR: Inertial-Geomagnetic-Barometric-Based Adaptive Infrastructure-Free 3D Pedestrian Dead Reckoning Method
by Tianqi Tian, Yanzhu Hu, Bin Hu, Yingjian Wang and Xinghao Zhao
Electronics 2026, 15(8), 1669; https://doi.org/10.3390/electronics15081669 - 16 Apr 2026
Abstract
With the rapid development of underground spaces and demand for infrastructure-independent autonomous positioning in post-disaster rescue, Pedestrian Dead Reckoning (PDR) has become a key research focus. However, traditional PDR suffers from cumulative heading drift, inadequate 3D positioning performance, and poor anti-magnetic interference capabilities, failing to meet the high-precision positioning requirements of rescuers in underground and multistory buildings. To address these issues, this paper proposes an adaptive 3D-PDR method fusing inertial, geomagnetic, and barometric (3D-IMB-APDR). Sensor data are optimized via FFT dominant frequency extraction and Butterworth zero-phase filtering, with magnetic interference compensated by geomagnetic ellipse fitting. A segmental heading correction with a multi-criteria dynamic geomagnetic reliability model suppresses heading drift. A barometer-based coarse estimation and inertial fine correction architecture is adopted, where a lightweight CNN-BiLSTM network extracts inertial features for step height, and AEKF fuses multi-source data to achieve accurate vertical height estimation and precise 3D positioning. Validated in sports fields, underground parking garages, and staircases, the method outperforms four comparative methods, reducing positional RMSE by 65.77–98.23%, with endpoint errors of 1.40 m, 2.56 m, and 0.32 m, respectively. Relying solely on chest-worn sensors, it provides a reliable 3D autonomous positioning solution for rescuers in post-disaster rescue and underground engineering. Full article
(This article belongs to the Special Issue Recent Advance of Auto Navigation in Indoor Scenarios)
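The step-and-heading dead reckoning that PDR methods such as this one refine can be sketched in a few lines: each detected step advances the 2D position along the current heading by the estimated step length. The step lengths and headings below are made-up illustrative values, not data from the paper.

```python
import math

def dead_reckon(start, steps):
    """Propagate a 2D position from (step_length, heading) pairs.

    Headings are in radians, measured clockwise from north
    (x = east, y = north), a common convention in PDR work.
    """
    x, y = start
    track = [(x, y)]
    for length, heading in steps:
        x += length * math.sin(heading)  # eastward component
        y += length * math.cos(heading)  # northward component
        track.append((x, y))
    return track

# Four 0.7 m steps due north, then two due east.
path = dead_reckon((0.0, 0.0), [(0.7, 0.0)] * 4 + [(0.7, math.pi / 2)] * 2)
print(path[-1])  # final position ≈ (1.4, 2.8)
```

Heading drift enters through the `heading` term, which is why methods like 3D-IMB-APDR invest in geomagnetic reliability modeling to correct it.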
38 pages, 6558 KB  
Article
Multimodal Sensor Fusion and Temporal Deep Learning for Computer Numerical Control Toolpath and Condition Classification: A Cross-Validated Ablation Study
by Stephen S. Eacuello, Romesh S. Prasad and Manbir S. Sodhi
Sensors 2026, 26(8), 2405; https://doi.org/10.3390/s26082405 - 14 Apr 2026
Abstract
Classifying which operation a Computer Numerical Control (CNC) machine is executing, not just detecting whether it is functioning correctly, is a monitoring challenge that existing sensor-based studies rarely address. Unlike tool wear estimation, operation-type classification must resolve toolpath strategies and cutting conditions within heterogeneous, noisy sensor streams in which modalities differ widely in their discriminative value. Which sensors are genuinely necessary, and how many can be removed before performance degrades, directly informs retrofit cost and monitoring system design. We present a systematic cross-validated ablation study for a nine-class CNC toolpath and condition classification task, using 120 operation files collected from a desktop CNC mill instrumented with six distributed sensor units spanning inertial, acoustic, environmental, and electrical modalities. To handle multimodal fusion under sensor noise, we introduce the Multimodal Denoising Temporal Attention Encoder with Long Short-Term Memory (MM-DTAE-LSTM), which combines learned modality weighting, cross-modal attention, and a self-supervised denoising objective, followed by recurrent temporal modeling for classification. We evaluate MM-DTAE-LSTM against five baseline model families across five cumulative sensor-ablation levels and ten temporal resolutions, using file-level cross-validation to prevent data leakage from overlapping windows. MM-DTAE-LSTM maintains 92.5% classification accuracy when nearly half the sensor channels are removed (56 of 110 features), whereas simpler baselines degrade by up to 10.7 percentage points under the same reduction. Analysis of variance reveals that pressure channels encode session-level atmospheric variation rather than machining dynamics, exposing how models that cannot suppress uninformative modalities rely on environmental confounds rather than machining physics. Together, these findings translate into concrete sensor-selection and deployment recommendations for cost-effective CNC process monitoring at under USD 500 in hardware, though generalization to industrial machines, diverse materials, and production environments requires further validation. Full article
(This article belongs to the Special Issue Sensors and IoT Technologies for the Smart Industry)
14 pages, 579 KB  
Article
Wearable Sensor-Free Adult Physical Activity Monitoring Using Smartphone IMU Signals: Cross-Subject Deep Learning with Window-Length and Sensor Modality Studies
by Mussa Turdalyuly, Ay Zholdassova, Tolganay Turdalykyzy and Aydin Doshybekov
Information 2026, 17(4), 368; https://doi.org/10.3390/info17040368 - 14 Apr 2026
Abstract
Human activity recognition (HAR) using inertial sensors is essential for health monitoring and wellness applications, yet robust classification in real-world adult scenarios remains challenging due to subject variability and activity transitions in smartphone sensing environments. This study investigated smartphone-based physical activity recognition using accelerometer and gyroscope signals under a cross-subject evaluation protocol. To reduce label ambiguity and improve generalization, the original activity set was grouped into a reduced 6-class taxonomy. We evaluated lightweight deep learning models, including a smartphone-only convolutional neural network (CNN) and a multimodal fusion model combining smartphone and smartwatch signals. Using GroupKFold cross-subject validation, the smartphone-only CNN achieved competitive performance with Macro-F1 ≈ 0.46, while multimodal fusion did not provide consistent improvements. We also examined temporal segmentation and showed that shorter windows (2.0 s) yield better results than longer windows. Sensor ablation confirmed the importance of gyroscope information, and per-class analysis indicated that dynamic activities could be recognized reliably, whereas stairs and static categories remained difficult. Overall, the results demonstrate the practicality of smartphone-based activity recognition using built-in smartphone sensors without external wearable devices for adult activity monitoring and provide recommendations for window length and sensor selection in cross-subject HAR. Full article
25 pages, 7380 KB  
Article
Integrated Air–Ground Robotic System for Autonomous Post-Blast Operations in GNSS-Denied Tunnels
by Goretti Arias-Ferreiro, Marco A. Montes-Grova, Francisco J. Pérez-Grau, Sergio Noriega-del-Rivero, Rafael Herguedas, María T. Lázaro, Amaia Castelruiz-Aguirre, José Carlos Jimenez Fernandez, Mustafa Karahan and Antonio Alonso-Cepeda
Remote Sens. 2026, 18(8), 1133; https://doi.org/10.3390/rs18081133 - 10 Apr 2026
Abstract
Post-blast operations in tunnel construction represent a critical bottleneck due to mandatory downtime and hazardous environmental conditions. This study addresses these challenges by developing and validating an integrated cyber–physical architecture that coordinates an autonomous Unmanned Aerial Vehicle (UAV) and an Autonomous Wheel Loader (AWL) under the supervision of a Digital Twin acting as the central operational digital interface. Specifically, this technology was designed to access the tunnel, evaluate post-blast conditions, and initiate operations during mandatory exclusion periods for personnel. The system was validated in a realistic, Global Navigation Satellite System (GNSS)-denied tunnel environment emulating post-detonation visibility constraints. The results demonstrate that the aerial agent successfully navigated and mapped the excavation front in less than 8 min, establishing a shared coordinate system for the ground machinery. Through this collaborative workflow, the autonomous deployment enabled operations to commence 50% to 80% earlier than conventional manual procedures. Furthermore, the system reduced daily operational time by approximately 8%, with an estimated return on investment of between one and seven months. Overall, the proposed framework eliminates human exposure during high-risk inspections and transforms the fragmented excavation cycle into a continuous, data-driven process. Full article
(This article belongs to the Special Issue Mobile Laser Scanning Systems for Underground Applications)
15 pages, 1233 KB  
Article
Sensor-Based Analysis of the Influence of Score Status and Playing Position on the Most Demanding Passages in Elite Women’s Football
by Baris Karakoc, Alper Asci and Paweł Chmura
Sensors 2026, 26(8), 2349; https://doi.org/10.3390/s26082349 - 10 Apr 2026
Abstract
This study aimed to investigate how score status and playing position affect the most demanding passages (MDPs) in elite women’s football. Data from eighteen outfield players of the Turkish Women’s National Team were collected during ten UEFA Nations League fixtures in the 2024–2025 season. Players were monitored using wearable GPS sensors, and all locomotor variables were segmented into one-minute windows to identify peak demands. The analysed variables included total distance (TD), high-speed running (HSR), sprint distance (SD), high-acceleration distance (HIAccD), high-deceleration distance (HIDecD), high metabolic power distance (HMPD), and player load (PL). Generalised Estimating Equations (GEE) were used to assess the effects of score status and playing position. Wingers (WG) showed the highest TD, HSR, and HMPD values, while centre backs covered less TD and HSR than WG. Full-backs and forwards (FW) also recorded lower TD, although FW exceeded WG in sprinting (p = 0.045, d = 0.66, moderate effect). Score status influenced MDPs, with TD decreasing when the match was tied and declining further when the team was behind; similar reductions occurred in HSR, HIAccD, HIDecD, and HMPD. In conclusion, both score status and position significantly shaped peak locomotor and mechanical demands. These findings may inform individualised training, recovery programmes, and score-dependent tactical planning in elite women’s football. Full article
(This article belongs to the Collection Sensor Technology for Sports Science)
19 pages, 623 KB  
Article
A Unified AI-Driven Multimodal Framework Integrating Visual Sensing and Wearable Sensors for Robust Human Motion Monitoring in Biomedical Applications
by Qiang Chen, Xiaoya Wang, Ranran Chen, Surui Hua, Yufei Li, Siyuan Liu and Yan Zhan
Sensors 2026, 26(8), 2314; https://doi.org/10.3390/s26082314 - 9 Apr 2026
Abstract
This study proposes a unified multimodal temporal motion state perception framework for optical imaging-oriented biomedical applications, integrating visual skeleton sequences, inertial measurement unit (IMU) signals, and surface electromyography (EMG) signals. The framework utilizes modality-specific encoders and a cross-modal temporal alignment attention mechanism to explicitly model temporal offsets from heterogeneous sensing streams. A multimodal temporal Transformer backbone is introduced to capture long-range motion dependencies and cross-modal interactions, while an uncertainty-aware fusion module dynamically allocates weights based on modality confidence. Experimental results demonstrate that the proposed approach achieves an accuracy of 94.37%, an F1-score of 93.95%, and a mean average precision of 96.02%, outperforming mainstream baseline models. Robustness evaluations further confirm stable performance under visual occlusion and sensor noise. These results indicate that the framework provides a highly accurate and robust solution for rehabilitation assessment, sports training monitoring, and wearable intelligent interaction systems. Full article
(This article belongs to the Special Issue Application of Optical Imaging in Medical and Biomedical Research)
27 pages, 4791 KB  
Article
Combining Fast Orthogonal Search with Deep Learning to Improve Low-Cost IMU Signal Accuracy
by Jialin Guan, Eslam Mounier, Umar Iqbal and Michael J. Korenberg
Sensors 2026, 26(8), 2300; https://doi.org/10.3390/s26082300 - 8 Apr 2026
Abstract
Inertial measurement units (IMUs) in low-cost navigation systems suffer from significant drift and noise errors due to sensor biases, scale factor instability, and nonlinear stochastic noise. This paper proposes a hybrid error compensation approach that combines Fast Orthogonal Search (FOS), a nonlinear system identification technique, with deep Long Short-Term Memory (LSTM) neural networks to improve IMU signal accuracy in GNSS-denied navigation. The FOS algorithm efficiently models deterministic error patterns (such as bias drift and scale factor errors) using a small training dataset, while the LSTM learns the IMU’s complex time-dependent error dynamics from much longer training data. In the proposed method, FOS is first used to predict the output of a high-end IMU based on that of a low-end IMU, and the trained FOS model is then used to extend the training data for an LSTM-based predictor. We demonstrate the efficacy of this FOS–LSTM hybrid on real vehicular IMU data by training with a limited segment of high-precision reference measurements and testing on extended operation periods. The hybrid model achieves high predictive accuracy for predicting the high-end signal based on the low-end signal, with a mean squared error below 0.1% and yields more stable velocity estimates than models using FOS or LSTM alone. Although long-term position drift is not fully eliminated, the proposed method significantly reduces short-term uncertainty in the inertial solution. These results highlight a promising synergy between model-based system identification and data-driven learning for sensor error calibration in navigation systems. Key contributions include FOS-based pseudo-label bootstrapping for data-efficient LSTM training and a navigation-level evaluation illustrating how signal correction impacts dead reckoning drift. Full article
10 pages, 512 KB  
Proceeding Paper
Multitask Deep Neural Network for IMU Calibration, Denoising, and Dynamic Noise Adaption for Vehicle Navigation
by Frieder Schmid and Jan Fischer
Eng. Proc. 2026, 126(1), 44; https://doi.org/10.3390/engproc2026126044 - 7 Apr 2026
Abstract
In intelligent vehicle navigation, efficient sensor data processing and accurate system stabilization are critical to maintaining robust performance, especially when GNSS signals are unavailable or unreliable. Classical calibration methods for Inertial Measurement Units (IMUs), such as discrete and system-level calibration, fail to capture time-varying, non-linear, and non-Gaussian noise characteristics. Likewise, Kalman filters typically assume static measurement noise levels for non-holonomic constraints (NHCs), resulting in suboptimal performance in dynamic environments. Furthermore, zero-velocity detection plays a vital role in preventing error accumulation by enabling reliable zero-velocity updates during motion stops, but classical thresholding approaches often lack robustness and precision. To address these limitations, we propose a novel multitask deep neural network (MTDNN) architecture that jointly learns IMU calibration, adaptive noise-level estimation for NHCs, and zero-velocity detection solely from raw IMU data. The shared-encoder design minimizes computational overhead, enabling real-time deployment on resource-constrained platforms such as a Raspberry Pi. The model is trained using post-processed GNSS-RTK ground-truth trajectories obtained from both a proprietary dataset and the publicly available 4Seasons dataset. Experimental results confirm the proposed system’s superior accuracy, efficiency, and real-time capability in GNSS-denied conditions. Full article
(This article belongs to the Proceedings of European Navigation Conference 2025)
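The classical zero-velocity thresholding that this paper replaces with a learned detector can be sketched as follows: a window is flagged as stationary when the accelerometer norm stays close to gravity and the gyroscope norm stays near zero. The threshold values here are illustrative guesses, not tuned parameters from the paper.

```python
import math

G = 9.81  # gravity magnitude, m/s^2

def is_zero_velocity(accel_window, gyro_window,
                     acc_thresh=0.3, gyro_thresh=0.05):
    """Classical threshold-based zero-velocity detector.

    accel_window / gyro_window: lists of (x, y, z) samples in
    m/s^2 and rad/s respectively.
    """
    acc_ok = all(abs(math.sqrt(x*x + y*y + z*z) - G) < acc_thresh
                 for x, y, z in accel_window)
    gyro_ok = all(math.sqrt(x*x + y*y + z*z) < gyro_thresh
                  for x, y, z in gyro_window)
    return acc_ok and gyro_ok

# Stationary window: accel reads ~gravity on z, gyro near zero.
still = is_zero_velocity([(0.0, 0.0, 9.8)] * 10, [(0.001, 0.0, 0.0)] * 10)
# Moving window: accel far from gravity, gyro active.
moving = is_zero_velocity([(1.0, 0.0, 12.0)] * 10, [(0.5, 0.2, 0.1)] * 10)
print(still, moving)  # True False
```

The brittleness of these fixed thresholds under vibration and slow motion is precisely what motivates learning the detector jointly with calibration, as the paper proposes.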
22 pages, 551 KB  
Review
Convergence of Artificial Intelligence and Wearables in Strength Training and Performance Monitoring: A Scoping Review
by Eleftherios Fyntikakis, Spyridon Plakias, Themistoklis Tsatalas, Minas A. Mina, Anthi Xenofondos and Christos Kokkotis
Appl. Sci. 2026, 16(7), 3565; https://doi.org/10.3390/app16073565 - 6 Apr 2026
Abstract
Background: Strength training (ST) is essential for enhancing athletic performance and reducing injury risk, yet traditional monitoring relies heavily on subjective assessment, limiting objective and individualized evaluation. Objective: This scoping review critically synthesizes current applications of artificial intelligence (AI) and wearable technologies (WT) in ST, with emphasis on methodological approaches, data characteristics, explainability, and practical readiness. Methods: Searches of PubMed and Scopus identified 13 peer-reviewed studies (2015–2025). Evidence was charted and synthesized to compare AI models, wearable sensor configurations, validation strategies, and translational potential. Results: Studies employed classical machine learning, deep learning, and hybrid approaches alongside inertial, force, strain, and physiological sensors to support exercise classification, load estimation, fatigue detection, and performance monitoring. Deep learning models dominated movement recognition tasks, whereas simpler models often aligned better with small datasets and interpretability requirements. However, most studies relied on limited, homogeneous samples and internal validation, restricting generalizability and real-world applicability. Explainability was inconsistently addressed, particularly in higher-risk applications such as injury prediction. Conclusions: AI-enhanced wearables provide objective and individualized ST monitoring, but current evidence remains largely experimental. For practical adoption, standardized datasets, robust external validation, and greater integration of explainable AI are required to support trustworthy decision-making. Full article
(This article belongs to the Section Biomedical Engineering)
29 pages, 6180 KB  
Article
A Comparative Study of a Real-Time Ankle Mobility Monitoring Wearable System
by Giovanni Mastrangelo, Betsy Dayana Marcela Chaparro Rico, Matteo Russo, Marco Ceccarelli and Daniele Cafolla
Robotics 2026, 15(4), 76; https://doi.org/10.3390/robotics15040076 - 4 Apr 2026
Abstract
This paper presents a low-cost, lightweight wearable sensing module for real-time multi-degree-of-freedom motion analysis, which is validated using ankle movements from a representative case study. The system is based on a compact inertial measurement unit integrated into a custom-made enclosure and employs Kalman filter-based sensor fusion to estimate three-dimensional joint orientation. An experimental campaign involving sixteen healthy participants was conducted, and measurements were compared against a gold-standard optical motion capture system, Optitrack V120 Trio. Ankle kinematics were analysed across all anatomical planes, including dorsiflexion/plantarflexion, inversion/eversion, and adduction/abduction. Quantitative metrics, including cosine similarity consistently above 0.98 across all movements and root mean square error within 4° on average, demonstrate strong agreement between the angular measuring device and motion capture data, with errors remaining within clinically acceptable limits. The results confirm the feasibility of the proposed system as a reliable, portable, and affordable alternative to laboratory-based measurement technologies. Beyond ankle assessment, the sensing approach is applicable to a wide range of motion-assistive and rehabilitation systems, supporting continuous monitoring, personalised therapy, and future integration into intelligent wearable devices. Full article
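The paper estimates joint orientation with Kalman-filter-based sensor fusion; as a much simpler stand-in for illustration (a complementary filter, not the authors' implementation), the gyro-integrated angle can be blended with the accelerometer's gravity-derived tilt, which corrects the gyro's slow drift:

```python
import math

def complementary_filter(gyro_rates, accels, dt=0.01, alpha=0.98):
    """Estimate a single joint angle (rad) from gyro rates (rad/s)
    and accelerometer (ax, az) pairs.

    The gyro term tracks fast motion; the accelerometer term,
    derived from the gravity direction, corrects slow drift.
    """
    angle = 0.0
    for rate, (ax, az) in zip(gyro_rates, accels):
        acc_angle = math.atan2(ax, az)   # tilt implied by gravity
        gyro_angle = angle + rate * dt   # integrate angular rate
        angle = alpha * gyro_angle + (1 - alpha) * acc_angle
    return angle

# Sensor held still at ~30 degrees of tilt: gyro reads zero,
# accel reads the gravity components of a 30-degree tilt.
tilt = math.radians(30.0)
ax, az = math.sin(tilt) * 9.81, math.cos(tilt) * 9.81
angle = complementary_filter([0.0] * 2000, [(ax, az)] * 2000)
print(math.degrees(angle))  # converges toward ~30
```

A Kalman filter, as used in the paper, additionally tracks the gyro bias and weights the two sources by their estimated noise rather than a fixed `alpha`.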
20 pages, 3255 KB  
Article
Seamless Indoor and Outdoor Navigation Using IMU-GNSS Sensor Data Fusion
by Bismark Kweku Asiedu Asante and Hiroki Imamura
Sensors 2026, 26(7), 2215; https://doi.org/10.3390/s26072215 - 3 Apr 2026
Abstract
Seamless localization across indoor and outdoor environments remains a fundamental challenge for wearable navigation systems, particularly those intended to assist visually impaired individuals. This challenge arises from the unreliability of GNSS signals in indoor and transitional spaces and the cumulative drift inherent to IMU-based dead reckoning. To address these limitations, this paper proposes a physics-informed GNSS–IMU sensor fusion framework that enables robust, real-time wearable navigation across heterogeneous environments. The proposed system dynamically adapts to environmental context, employing GNSS-dominant localization in outdoor settings and PINN-enhanced IMU-based dead reckoning during GNSS-denied indoor operation. At the core of the framework is a tightly coupled Physics-Informed Neural Network (PINN) and Extended Kalman Filter (EKF), where the PINN embeds kinematic motion constraints to correct inertial drift and suppress sensor noise, while the EKF performs probabilistic state estimation and sensor fusion. The framework is implemented on a compact, energy-efficient wearable platform and evaluated using real-world indoor–outdoor pedestrian trajectories. Experimental results demonstrate improved localization accuracy, significantly reduced drift during indoor navigation, and stable indoor–outdoor transitions compared to conventional GNSS–IMU fusion methods. The proposed approach offers a practical and reliable solution for wearable assistive navigation and has broader applicability in smart mobility and autonomous wearable systems. Full article
(This article belongs to the Topic AI Sensors and Transducers)
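A toy scalar version of the Kalman-style fusion underlying such systems can illustrate the indoor/outdoor switching: the predict step integrates an IMU-derived velocity, and the update step corrects with a GNSS position fix when one is available. This is a one-state sketch with hypothetical noise parameters, not the paper's multi-state PINN-coupled EKF.

```python
def kalman_1d(x0, p0, steps, q=0.01, r=4.0, dt=0.1):
    """Scalar Kalman filter fusing dead-reckoned motion with GNSS.

    steps: list of (velocity, gnss_fix) pairs; gnss_fix is None
    indoors, where the filter coasts on the IMU-derived velocity.
    Returns (position estimate, variance).
    """
    x, p = x0, p0
    for v, gnss in steps:
        # Predict: propagate with IMU velocity; uncertainty grows.
        x = x + v * dt
        p = p + q
        if gnss is not None:
            # Update: blend in the GNSS fix by relative precision.
            k = p / (p + r)        # Kalman gain
            x = x + k * (gnss - x)
            p = (1 - k) * p
    return x, p

# Walk at 1 m/s; a GNSS fix arrives only every 10th step
# (e.g. near windows or doorways), matching the true position.
steps = [(1.0, 0.1 * (i + 1) if i % 10 == 9 else None)
         for i in range(100)]
x, p = kalman_1d(0.0, 1.0, steps)
print(round(x, 2), round(p, 3))
```

Note how the variance `p` grows during GNSS-denied stretches and shrinks at each fix; the PINN in the paper plays the role of keeping the predict step honest when `gnss` is `None` for long periods.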