Search Results (67)

Search Parameters:
Keywords = azure kinect

18 pages, 9134 KB  
Article
An Autonomous Robotic System for Object Retrieval and Delivery: Enhancing Independence for Users Living with Disability and Older Adults
by Jincheng Li, Chenghao Lin, Amna Mazen and Youssef A. Bazzi
Robotics 2026, 15(2), 41; https://doi.org/10.3390/robotics15020041 - 12 Feb 2026
Viewed by 770
Abstract
As the global population ages, there is a growing need for assistive technologies to help older adults maintain their independence. This work presents a cost-effective autonomous socially assistive robot designed for object retrieval and delivery, enhancing accessibility in home environments. The system is built on the Robot Operating System (ROS) framework and integrates three key components: the Pioneer P3-DX mobile robot for autonomous navigation, the ReactorX-200 robotic arm for pick-and-place operations, and the Kinect v2 RGB-D camera for object detection and localization. Users interact with the robot through natural language processing by issuing voice commands to retrieve various objects. Microsoft Azure-powered speech recognition processes these commands to extract keywords and then localize requested objects on a predefined building map. Pioneer P3-DX, equipped with a Hokuyo LiDAR, enables autonomous navigation and obstacle avoidance, while Kinect v2, integrated with the YOLOv8 algorithm, facilitates object recognition and localization. The robot retrieves and delivers the user’s requested objects while following the shortest available path. Experimental evaluations in a home environment demonstrate the system’s effectiveness in identifying and retrieving requested objects. The subsystems achieve a success rate of 85–95% across more than 50 runs, highlighting their strong performance. The proposed approach provides a proof of concept for future advancements in assistive robotics, demonstrating the seamless integration of advanced technologies into a cost-effective and user-friendly platform.
(This article belongs to the Special Issue AI-Powered Robotic Systems: Learning, Perception and Decision-Making)
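
The voice-command step described above hinges on Azure-powered speech recognition plus keyword spotting. A minimal sketch of that step, assuming the Azure Speech SDK for Python and a made-up object vocabulary; the paper's actual ROS-integrated pipeline is not shown here.

```python
# Hedged sketch: recognize one utterance with the Azure Speech SDK and spot a
# known object keyword in the transcript. KNOWN_OBJECTS is an assumed
# vocabulary, not taken from the paper.
import azure.cognitiveservices.speech as speechsdk

KNOWN_OBJECTS = {"bottle", "cup", "remote", "keys"}  # assumed vocabulary

def recognize_requested_object(subscription_key, region):
    config = speechsdk.SpeechConfig(subscription=subscription_key, region=region)
    recognizer = speechsdk.SpeechRecognizer(speech_config=config)
    result = recognizer.recognize_once()  # blocks until one utterance is heard
    if result.reason != speechsdk.ResultReason.RecognizedSpeech:
        return None
    for word in result.text.lower().split():
        word = word.strip(".,!?")
        if word in KNOWN_OBJECTS:
            return word  # e.g. "bottle" -> looked up on the building map
    return None
```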

22 pages, 3651 KB  
Article
Preliminary Exploration of a Gait Alteration Index to Detect Abnormal Walking Through a RGB-D Camera and Human Pose Estimation
by Gianluca Amprimo, Lorenzo Priano, Luca Vismara and Claudia Ferraris
Algorithms 2026, 19(2), 146; https://doi.org/10.3390/a19020146 - 11 Feb 2026
Viewed by 412
Abstract
Quantitative gait analysis is essential for assessing motor function, as altered walking patterns are linked to functional decline and increased fall risk. Although recent advances in markerless motion analysis and human pose estimation enable gait feature extraction from low-cost video systems compared to expensive motion analysis laboratories, clinical translation remains limited by fragmented descriptors or approaches that directly regress clinical scores, often reducing interpretability and generalizability. We propose the Gait Alteration Index (GAI), an interpretable index that quantifies gait abnormality as a functional deviation from typical walking patterns, independently of specific pathologies. The GAI is computed from a small set of gait parameters and integrates three complementary domains: spatio-temporal characteristics, surrogates of dynamic stability, and arm swing behaviour, providing both a global index and domain-specific sub-indices. Preliminary evaluation on a heterogeneous cohort using clinician-derived assessments showed that the GAI captures clinically meaningful gait alterations (Spearman’s ρ = 0.65), with the strongest agreement for spatio-temporal features (ρ = 0.77). These results suggest that the GAI is a promising, low-cost, and interpretable tool for objective gait assessment, screening, and longitudinal monitoring.
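
The abstract frames the GAI as a deviation from typical walking aggregated over a small parameter set in three domains. A schematic sketch of that idea, with made-up normative values, parameter names, and domain membership; the published GAI formula may differ.

```python
# Schematic deviation-based gait index: z-score each parameter against
# normative statistics, average absolute deviations per domain, then average
# the domains into a global index. All numbers and names are assumptions.
import numpy as np

NORMS = {  # hypothetical normative (mean, std) per gait parameter
    "stride_time_s":  (1.05, 0.08),
    "step_width_m":   (0.10, 0.02),
    "arm_swing_asym": (0.08, 0.05),
}
DOMAINS = {  # domains named in the abstract; membership here is illustrative
    "spatio_temporal": ["stride_time_s", "step_width_m"],
    "arm_swing":       ["arm_swing_asym"],
}

def gait_alteration_indices(measured):
    """measured: dict of parameter -> value. Returns domain and global indices."""
    out = {}
    for domain, params in DOMAINS.items():
        z = [abs((measured[p] - NORMS[p][0]) / NORMS[p][1]) for p in params]
        out[domain] = float(np.mean(z))
    out["global"] = float(np.mean(list(out.values())))
    return out
```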

31 pages, 3468 KB  
Article
From RGB-D to RGB-Only: Reliability and Clinical Relevance of Markerless Skeletal Tracking for Postural Assessment in Parkinson’s Disease
by Claudia Ferraris, Gianluca Amprimo, Gabriella Olmo, Marco Ghislieri, Martina Patera, Antonio Suppa, Silvia Gallo, Gabriele Imbalzano, Leonardo Lopiano and Carlo Alberto Artusi
Sensors 2026, 26(4), 1146; https://doi.org/10.3390/s26041146 - 10 Feb 2026
Viewed by 617
Abstract
Axial postural abnormalities in Parkinson’s Disease (PD) are traditionally assessed using clinical rating scales, although picture-based assessment is considered the gold standard. This study evaluates the reliability and clinical relevance of two markerless body-tracking frameworks, the RGB-D-based Microsoft Azure Kinect (providing the reference KIN_3D model) and the RGB-only Google MediaPipe Pose (MP), using a synchronous dual-camera setup. Forty PD patients performed a 60 s static standing task. We compared KIN_3D with three MP models (at different complexity levels) across horizontal, vertical, sagittal, and 3D joint angles. Results show that lower-complexity MP models achieved high congruence with KIN_3D for trunk and shoulder alignment (ρ > 0.75), while the lateral view significantly improved tracking of sagittal angles (ρ ≥ 0.72). Conversely, the high-complexity model introduced significant skeletal distortions. Clinically, several angular parameters emerged as robust metrics for postural assessment and global motor impairments, while sagittal angles correlated with motor complications. Unexpectedly, a more upright frontal alignment was associated with greater freezing of gait severity, suggesting that static postural metrics may serve as proxies for dynamic gait performance. In addition, both RGB-only and RGB-D frameworks effectively discriminated between postural severity clusters. While the higher-complexity MP model should be avoided due to inaccurate 3D reconstructions, our findings demonstrate that low- and medium-complexity MP models represent a reliable alternative to RGB-D sensors for objective postural assessment in PD, facilitating the widespread application of objective posture measurements in clinical contexts.
(This article belongs to the Special Issue Sensors for Human Motion Analysis and Applications)
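
As a rough illustration of the RGB-only side of this comparison, a sketch that extracts a frontal-plane shoulder-alignment angle with MediaPipe Pose; model_complexity in {0, 1, 2} selects among the low/medium/high variants the study compares, while the angle definition itself is an assumption for illustration.

```python
# Sketch: tilt of the shoulder line (degrees from the image horizontal) from
# one RGB frame via MediaPipe Pose; roughly 0 deg means level shoulders for a
# subject facing the camera.
import math
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def shoulder_tilt_deg(image_bgr, complexity=1):
    with mp_pose.Pose(static_image_mode=True, model_complexity=complexity) as pose:
        res = pose.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
        if res.pose_landmarks is None:
            return None  # no person detected in the frame
        lm = res.pose_landmarks.landmark
        l = lm[mp_pose.PoseLandmark.LEFT_SHOULDER]
        r = lm[mp_pose.PoseLandmark.RIGHT_SHOULDER]
        return math.degrees(math.atan2(l.y - r.y, l.x - r.x))
```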

15 pages, 3765 KB  
Communication
Non-Contact Fatigue Estimation in Healthy Individuals Using Azure Kinect: Contribution of Multiple Kinematic Features
by Takafumi Yamada and Kai Kondo
Sensors 2025, 25(21), 6633; https://doi.org/10.3390/s25216633 - 29 Oct 2025
Viewed by 1295
Abstract
Monitoring exercise-induced fatigue is important for maintaining the effectiveness of training and preventing injury. We evaluated a non-contact approach that estimates perceived fatigue from full-body kinematics captured by an Azure Kinect depth camera. Ten healthy young adults repeatedly performed simple, reproducible whole-body movements, and 3D skeletal coordinates from 32 joints were recorded. After smoothing, 24 kinematic features (joint angles, angular velocities, and cycle timing) were extracted. Fatigue labels (Low, Medium, and High) were obtained using the Borg CR10 scale at 30-s intervals. A random forest classifier was trained and evaluated with leave-one-subject-out cross-validation, and class imbalance was addressed by comparing no correction, class weighting, and random oversampling within the training folds. The model discriminated fatigue levels with high performance (overall accuracy 86%; macro ROC AUC 0.98 (LOSO point estimate) under oversampling), and feature importance analysis indicated distributed contributions across feature categories. These results suggest that simple camera-based kinematic analysis can feasibly estimate perceived fatigue during basic movements. Future work will expand the cohort, diversify tasks, and integrate physiological signals to improve generalization and provide segment-level interpretability.
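
The evaluation protocol described here (random forest, leave-one-subject-out folds, oversampling applied inside the training folds only) maps directly onto standard tooling. A sketch with scikit-learn and imbalanced-learn, using placeholder arrays for features, labels, and subject IDs.

```python
# Sketch of LOSO cross-validation with per-fold random oversampling: the test
# subject's data never influences the resampling or the fitted forest.
import numpy as np
from imblearn.over_sampling import RandomOverSampler
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut

def loso_accuracy(X, y, subjects):
    correct = total = 0
    for tr, te in LeaveOneGroupOut().split(X, y, groups=subjects):
        X_tr, y_tr = RandomOverSampler(random_state=0).fit_resample(X[tr], y[tr])
        clf = RandomForestClassifier(n_estimators=300, random_state=0)
        clf.fit(X_tr, y_tr)
        correct += int((clf.predict(X[te]) == y[te]).sum())
        total += len(te)
    return correct / total  # overall LOSO accuracy
```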

13 pages, 1263 KB  
Communication
Center of Mass (CoM) Motions and Foot Placement During Treadmill Walking Using One Time-of-Flight Camera
by Joshua T. Chang, Alisha Ragatz, Anjana Ganesh, Ana P. Quiros Padilla, Mikayla R. Devins, Christina V. Mihova and John G. Milton
Sensors 2025, 25(18), 5850; https://doi.org/10.3390/s25185850 - 19 Sep 2025
Viewed by 1162
Abstract
Assessing the fall risk of a patient in a busy clinical setting is challenging. Tests such as the timed-up-and-go test and narrow beam walking are difficult to perform due to space restrictions. Moreover, it is not easy to directly connect the results of these tests to fundamental biomechanical principles of gait stability, which emphasize the interplay between the movements of the body’s center of mass (CoM) and its base of support (BoS). Herein, we show how a 1.2 m-long treadmill and a single “time-of-flight” Azure Kinect camera can capture the CoM-BoS interplay within 5 min. The CoM was calculated by dividing the body into 14 segments determined from 20 joint positions measured by the Kinect camera’s body tracking SDK. By tracking the CoM and joint positions from stride to stride, we can evaluate different gait stability metrics using a markerless, contactless, space-efficient approach. A large digital database of CoM movements relative to foot placement will be useful for the future development of statistical and machine learning techniques for identifying subjects at higher risk of falling.
(This article belongs to the Section Biomedical Sensors)
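
The CoM computation described above (14 segments from 20 tracked joints) follows the usual segmental method: each segment's CoM sits a fixed fraction along its proximal-to-distal line, and segments combine via anthropometric mass fractions. A sketch with an abbreviated, Dempster-style segment table, not the paper's actual 14-segment model.

```python
# Segmental whole-body CoM: mass-weighted mean of per-segment CoM positions.
import numpy as np

# (proximal_joint, distal_joint, segment_mass_fraction, CoM_fraction_from_proximal)
SEGMENTS = [
    ("hip_l",      "knee_l",  0.1000, 0.433),  # left thigh
    ("knee_l",     "ankle_l", 0.0465, 0.433),  # left shank
    ("shoulder_l", "elbow_l", 0.0280, 0.436),  # left upper arm
    # ... remaining segments of the full model
]

def body_com(joints):
    """joints: dict of joint name -> 3D position (e.g. from a body-tracking SDK)."""
    weighted, mass = np.zeros(3), 0.0
    for prox, dist, m, c in SEGMENTS:
        seg_com = joints[prox] + c * (joints[dist] - joints[prox])
        weighted += m * seg_com
        mass += m
    return weighted / mass
```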

25 pages, 1716 KB  
Article
Comparison of Wearable and Depth-Sensing Technologies with Electronic Walkway for Comprehensive Gait Analysis
by Marjan Nassajpour, Mahmoud Seifallahi, Amie Rosenfeld, Magdalena I. Tolea, James E. Galvin and Behnaz Ghoraani
Sensors 2025, 25(17), 5501; https://doi.org/10.3390/s25175501 - 4 Sep 2025
Cited by 3 | Viewed by 3190
Abstract
Accurate and scalable gait assessment is essential for clinical and research applications, including fall risk evaluation, rehabilitation monitoring, and early detection of neurodegenerative diseases. While electronic walkways remain the clinical gold standard, their high cost and limited portability restrict widespread use. Wearable inertial measurement units (IMUs) and markerless depth cameras have emerged as promising alternatives; however, prior studies have typically assessed these systems under tightly controlled conditions, with single participants in view, limited marker sets, and without direct cross-technology comparisons. This study addresses these gaps by simultaneously evaluating three sensing technologies—APDM wearable IMUs (tested in two separate configurations: foot-mounted and lumbar-mounted) and the Azure Kinect depth camera—against the ProtoKinetics Zeno™ Walkway Gait Analysis System in a realistic clinical environment where multiple individuals were present in the camera’s field of view. Gait data from 20 older adults (mean age 70.06 ± 9.45 years) performing Single-Task and Dual-Task walking trials were synchronously captured using custom hardware for precise temporal alignment. Eleven gait markers spanning macro, micro-temporal, micro-spatial, and spatiotemporal domains were compared using mean absolute error (MAE), Pearson correlation (r), and Bland–Altman analysis. Foot-mounted IMUs demonstrated the highest accuracy (MAE = 0.00–6.12, r = 0.92–1.00), followed closely by the Azure Kinect (MAE = 0.01–6.07, r = 0.68–0.98). Lumbar-mounted IMUs showed consistently lower agreement with the reference system. These findings provide the first comprehensive comparison of wearable and depth-sensing technologies with a clinical gold standard under real-world conditions and across an extensive set of gait markers. The results establish a foundation for deploying scalable, low-cost gait assessment systems in diverse healthcare contexts, supporting early detection, mobility monitoring, and rehabilitation outcomes across multiple patient populations.
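
The three agreement measures used in this comparison (MAE, Pearson r, Bland-Altman bias with 95% limits of agreement) can be computed per gait marker as below; the paired-array data layout is assumed.

```python
# Sketch: agreement statistics between a candidate sensor and the walkway
# reference for one gait marker, given paired measurements per trial.
import numpy as np
from scipy.stats import pearsonr

def agreement(reference, candidate):
    reference, candidate = np.asarray(reference), np.asarray(candidate)
    diff = candidate - reference
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)  # Bland-Altman limits of agreement
    return {
        "MAE": float(np.abs(diff).mean()),
        "r": float(pearsonr(reference, candidate)[0]),
        "bland_altman": (float(bias), float(bias - half_width), float(bias + half_width)),
    }
```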

17 pages, 2763 KB  
Article
Extended Reality-Based Proof-of-Concept for Clinical Assessment Balance and Postural Disorders for Personalized Innovative Protocol
by Fabiano Bini, Michela Franzò, Alessia Finti, Francesca Tiberi, Veronica Maria Teresa Grillo, Edoardo Covelli, Maurizio Barbara and Franco Marinozzi
Bioengineering 2025, 12(8), 850; https://doi.org/10.3390/bioengineering12080850 - 7 Aug 2025
Cited by 2 | Viewed by 1142
Abstract
Background: Clinical assessment of balance and postural disorders is usually carried out through several common practices including tests such as the Subjective Visual Vertical (SVV) and Limit of Stability (LOS). Nowadays, several cutting-edge technologies have been proposed as supporting tools for stability evaluation. Extended Reality (XR) has emerged as a powerful instrument. This proof-of-concept study aims to assess the feasibility and potential clinical utility of a novel MR-based framework integrating HoloLens 2, Wii Balance Board, and Azure Kinect for multimodal balance assessment. An innovative test, the Innovative Dynamic Balance Assessment (IDBA), is also introduced alongside an MR version of the SVV test, and their performance is evaluated in a cohort of healthy individuals. Results: All participants reported SVV deviations within the clinically accepted ±2° range. The IDBA results revealed consistent sway and angular profiles across participants, with statistically significant differences in posture control between opposing target directions. System outputs were consistent, with integrated parameters offering a comprehensive representation of postural strategies. Conclusions: The MR-based framework successfully delivers integrated, multimodal measurements of postural control in healthy individuals. These findings support its potential use in future clinical applications for balance disorder assessment and personalized rehabilitation.
(This article belongs to the Section Biomedical Engineering and Biomaterials)
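
Two of the outcome measures above reduce to short computations: the SVV check against the clinically accepted ±2° range, and a sway measure from a balance-board trace. A sketch under assumed data layouts.

```python
# Sketch: SVV normality check and centre-of-pressure (CoP) path length from
# an (N, 2) trajectory, e.g. sampled from a Wii Balance Board.
import numpy as np

def svv_within_normal(svv_deviations_deg, limit_deg=2.0):
    """True if every SVV trial deviation lies within +/- limit_deg."""
    return bool(np.all(np.abs(svv_deviations_deg) <= limit_deg))

def sway_path_length(cop_xy):
    """Total CoP path length: sum of distances between consecutive samples."""
    return float(np.linalg.norm(np.diff(cop_xy, axis=0), axis=1).sum())
```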

17 pages, 14808 KB  
Article
Operatic Singing Biomechanics: Skeletal Tracking Sensor Integration for Pedagogical Innovation
by Evangelos Angelakis, Konstantinos Bakogiannis, Anastasia Georgaki and Areti Andreopoulou
Sensors 2025, 25(15), 4713; https://doi.org/10.3390/s25154713 - 30 Jul 2025
Cited by 1 | Viewed by 2660
Abstract
Operatic singing, traditionally taught through empirical and subjective methods, demands innovative approaches to enhance its pedagogical effectiveness today. This paper introduces a novel integration of advanced skeletal tracking technology into a prototype framework for operatic singing pedagogy research. Using the Microsoft Azure Kinect DK sensor, this prototype extracts detailed spinal, cervical, and shoulder alignment and movement data, with the aim of quantifying biomechanical movements during vocal performance. Preliminary results confirmed high face validity and biomechanical relevance. The incorporation of skeletal-tracking technology into vocal pedagogy research could help clarify certain technical aspects of singing and enhance sensorimotor feedback for the training of operatic singers.
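
A plausible form for the alignment measures named above is a three-joint angle over Azure Kinect body-tracking joints (e.g. a cervical angle at NECK between HEAD and SPINE_CHEST); both the joint choice and the angle definition are assumptions for illustration, not the paper's specification.

```python
# Sketch: angle at joint b formed by segments b->a and b->c, in degrees,
# from 3D joint positions such as those returned by a body-tracking SDK.
import numpy as np

def joint_angle_deg(a, b, c):
    u, v = a - b, c - b
    cos_ang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0))))

# e.g. cervical_angle = joint_angle_deg(head_xyz, neck_xyz, spine_chest_xyz)
```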

10 pages, 592 KB  
Article
Assessing the Accuracy and Reliability of the Monitored Augmented Rehabilitation System for Measuring Shoulder and Elbow Range of Motion
by Samuel T. Lauman, Lindsey J. Patton, Pauline Chen, Shreya Ravi, Stephen J. Kimatian and Sarah E. Rebstock
Sensors 2025, 25(14), 4269; https://doi.org/10.3390/s25144269 - 9 Jul 2025
Cited by 2 | Viewed by 1188
Abstract
Accurate range of motion (ROM) assessment is essential for evaluating musculoskeletal function and guiding rehabilitation, particularly in pediatric populations. Traditional methods, such as optical motion capture and handheld goniometry, are often limited by cost, accessibility, and inter-rater variability. This study evaluated the feasibility and accuracy of the Microsoft Azure Kinect-powered Monitored Augmented Rehabilitation System (MARS) compared to Kinovea. Sixty-five pediatric participants (ages 5–18) performed standardized shoulder and elbow movements in the frontal and sagittal planes. ROM data were recorded using MARS and compared to Kinovea. Measurement reliability was evaluated using intraclass correlation coefficients (ICC3k), and accuracy was evaluated using root mean squared error (RMSE) analysis. MARS demonstrated excellent reliability with an average ICC3k of 0.993 and met the predefined accuracy threshold (RMSE ≤ 8°) for most movements, with the exception of sagittal elbow flexion. These findings suggest that MARS is a reliable, accurate, and cost-effective alternative for clinical ROM assessment, offering a markerless solution that enhances measurement precision and accessibility in pediatric rehabilitation. Future studies should enhance accuracy in sagittal plane movements and further validate MARS against gold-standard systems.
(This article belongs to the Section Sensing and Imaging)
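
The two metrics reported for MARS can be reproduced as follows: RMSE with NumPy, and a two-way mixed, average-measures ICC (ICC3k) via pingouin, with an assumed long-format table layout.

```python
# Sketch: RMSE between paired MARS/Kinovea angles, and ICC3k from a
# long-format DataFrame with assumed columns 'subject', 'system'
# (MARS or Kinovea), and 'angle_deg'.
import numpy as np
import pingouin as pg

def rmse(reference, candidate):
    reference, candidate = np.asarray(reference), np.asarray(candidate)
    return float(np.sqrt(np.mean((candidate - reference) ** 2)))

def icc3k(df):
    table = pg.intraclass_corr(data=df, targets="subject",
                               raters="system", ratings="angle_deg")
    return float(table.loc[table["Type"] == "ICC3k", "ICC"].iloc[0])
```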

21 pages, 2822 KB  
Article
Non-Contact Platform for the Assessment of Physical Function in Older Adults: A Pilot Study
by Ana Sobrino-Santos, Pedro Anuarbe, Carlos Fernandez-Viadero, Roberto García-García, José Miguel López-Higuera, Luis Rodríguez-Cobo and Adolfo Cobo
Technologies 2025, 13(6), 225; https://doi.org/10.3390/technologies13060225 - 2 Jun 2025
Cited by 1 | Viewed by 1764
Abstract
In the context of global population aging, identifying reliable, objective tools to assess physical function and postural stability in older adults is increasingly important to mitigate fall risk. This study presents a non-contact platform that uses a Microsoft Azure Kinect depth camera to evaluate functional performance related to lower-limb muscular capacity and static balance through self-selected depth squats and four progressively challenging stances (feet apart, feet together, semitandem, and tandem). By applying markerless motion capture algorithms, the system provides key biomechanical parameters such as center of mass displacement, knee angles, and sway trajectories. A comparison of older and younger individuals showed that the older group tended to perform shallower squats and exhibit greater mediolateral and anteroposterior sway, aligning with age-related declines in strength and postural control. Longitudinal tracking also illustrated how performance varied following a fall, indicating potential for ongoing risk assessment. Notably, in 30 s balance trials, the first 10 s often captured meaningful differences in stability, suggesting that short-duration stance tests can reliably detect early signs of imbalance. These findings highlight the feasibility of low-cost, user-friendly depth-camera technologies to complement traditional clinical measures and guide targeted fall-prevention strategies in older populations.
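
Under assumed conventions (vertical axis z, metres), the squat-depth and sway parameters mentioned above reduce to short computations over a CoM trace; the exact definitions used by the platform are not specified here.

```python
# Sketch: squat depth as peak downward CoM excursion from the starting height,
# and ML/AP sway as per-axis standard deviation of the horizontal CoM trace.
import numpy as np

def squat_depth_m(com_z):
    """com_z: 1D array of vertical CoM positions over one squat repetition."""
    return float(com_z[0] - np.min(com_z))

def sway_ml_ap(com_xy):
    """com_xy: (N, 2) array of (mediolateral, anteroposterior) positions."""
    ml, ap = np.std(com_xy, axis=0, ddof=1)
    return float(ml), float(ap)
```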

23 pages, 2042 KB  
Article
StructScan3D v1: A First RGB-D Dataset for Indoor Building Elements Segmentation and BIM Modeling
by Ishraq Rached, Rafika Hajji, Tania Landes and Rashid Haffadi
Sensors 2025, 25(11), 3461; https://doi.org/10.3390/s25113461 - 30 May 2025
Viewed by 4491
Abstract
The integration of computer vision and deep learning into Building Information Modeling (BIM) workflows has created a growing need for structured datasets that enable the semantic segmentation of indoor building elements. This paper presents StructScan3D v1, the first version of an RGB-D dataset specifically designed to facilitate the automated segmentation and modeling of architectural and structural components. Captured using the Kinect Azure sensor, StructScan3D v1 comprises 2594 annotated frames from diverse indoor environments, including residential and office spaces. The dataset focuses on six key building elements: walls, floors, ceilings, windows, doors, and miscellaneous objects. To establish a benchmark for indoor RGB-D semantic segmentation, we evaluate D-Former, a transformer-based model that leverages self-attention mechanisms for enhanced spatial understanding. Additionally, we compare its performance against state-of-the-art models such as Gemini and TokenFusion, providing a comprehensive analysis of segmentation accuracy. Experimental results show that D-Former achieves a mean Intersection over Union (mIoU) of 67.5%, demonstrating strong segmentation capabilities despite challenges like occlusions and depth variations. As an evolving dataset, StructScan3D v1 lays the foundation for future expansions, including increased scene diversity and refined annotations. By bridging the gap between deep learning-driven segmentation and real-world BIM applications, this dataset provides researchers and practitioners with a valuable resource for advancing indoor scene reconstruction, robotics, and augmented reality.
(This article belongs to the Section Sensing and Imaging)
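
The reported mIoU is the per-class intersection-over-union averaged over the six StructScan3D classes; a sketch computing it from a confusion matrix.

```python
# Sketch: mean IoU from a KxK confusion matrix, where conf[i, j] counts
# pixels of true class i predicted as class j.
import numpy as np

def mean_iou(conf):
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)
    denom = conf.sum(axis=1) + conf.sum(axis=0) - tp  # TP + FN + FP per class
    iou = tp / np.maximum(denom, 1e-9)                # guard absent classes
    return float(iou.mean())
```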

22 pages, 2695 KB  
Article
Comparing Classification Algorithms to Recognize Selected Gestures Based on Microsoft Azure Kinect Joint Data
by Marc Funken and Thomas Hanne
Information 2025, 16(5), 421; https://doi.org/10.3390/info16050421 - 21 May 2025
Cited by 1 | Viewed by 1393
Abstract
This study aims to explore the potential of exergaming (which can be used along with prescribed medication for children with spinal muscular atrophy) and examine its effects on monitoring and diagnosis. The present study focuses on comparing models trained on joint data for gesture detection, which has not been extensively explored in previous studies. The study investigates three approaches to detect gestures based on 3D Microsoft Azure Kinect joint data. We discuss simple decision rules based on angles and distances to label gestures. In addition, we explore supervised learning methods to increase the accuracy of gesture recognition in gamification. The compared models performed well on the recorded sample data, with the recurrent neural networks outperforming feedforward neural networks and decision trees on the captured motions. The findings suggest that gesture recognition based on joint data can be a valuable tool for monitoring and diagnosing children with spinal muscular atrophy. This study contributes to the growing body of research on the potential of virtual solutions in rehabilitation. The results also highlight the importance of using joint data for gesture recognition and provide insights into the most effective models for this task. The findings of this study can inform the development of more accurate and effective monitoring and diagnostic tools for children with spinal muscular atrophy.
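
The "simple decision rules based on angles and distances" baseline could look like the sketch below; the joint names, axis convention (y up, metres), and thresholds are illustrative assumptions rather than the study's actual rules.

```python
# Sketch: rule-based gesture labelling over 3D joint positions, the kind of
# hand-set baseline the supervised models are compared against.
import numpy as np

def label_gesture(joints):
    """joints: dict of joint name -> 3D position array."""
    wrist, shoulder, head = joints["wrist_r"], joints["shoulder_r"], joints["head"]
    if wrist[1] > head[1]:                        # wrist raised above the head
        return "arm_raise"
    if np.linalg.norm(wrist - shoulder) < 0.15:   # wrist pulled in to shoulder
        return "arm_curl"
    return "neutral"
```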

14 pages, 2198 KB  
Article
Real-Time Current Volume Estimation System from an Azure Kinect Camera in Pediatric Intensive Care: Technical Development
by Florian Chavernac, Kévin Albert, Hoang Vu Huy, Srinivasan Ramachandran, Rita Noumeir and Philippe Jouvet
Sensors 2025, 25(10), 3069; https://doi.org/10.3390/s25103069 - 13 May 2025
Cited by 2 | Viewed by 1812
Abstract
Monitoring respiratory parameters is essential in pediatric intensive care units (PICUs), yet bedside tidal volume (Vt) measurement is rarely performed due to the need for invasive airflow sensors. We present a real-time, non-contact respiratory monitoring system using the Azure Kinect DK (Microsoft, Redmond, WA, USA) depth camera, specifically designed for use in the PICU. The system automatically tracks thoracic volume variations to derive a comprehensive set of ventilator equivalent parameters: tidal volume, respiratory rate, minute ventilation, inspiratory/expiratory times, I:E ratio, and peak flows. Results are displayed via an ergonomic web interface for clinical use. This system introduces several innovations: real-time estimation of a complete set of respiratory parameters; a novel infrared-based region-of-interest detection method using YOLO-OBBs that operates robustly regardless of lighting conditions, even in total darkness, making it well suited to continuous monitoring of sleeping patients; and a pixel-wise 3D volume computation method that achieves a mean absolute error under 5% on tidal volume. The system was evaluated on both a healthy adult (compared to spirometry) and a critically ill child (compared to ventilator data). To our knowledge, this is the first study to validate such a contactless respiratory monitoring system on a non-intubated child in the PICU. Further clinical validation is ongoing.
(This article belongs to the Section Wearables)
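
The pixel-wise 3D volume computation named above amounts to integrating depth change over the chest region of interest, with each pixel weighted by its metric footprint at its depth. A simplified sketch assuming pinhole intrinsics; the paper's YOLO-OBB ROI detection and signal filtering are omitted.

```python
# Sketch: volume change of the chest ROI relative to an end-expiration
# reference frame. Under a pinhole model a pixel at depth z covers roughly
# (z/fx) * (z/fy) square metres.
import numpy as np

def roi_volume_litres(depth_m, ref_m, fx, fy):
    """depth_m, ref_m: ROI depth maps in metres; fx, fy: focal lengths in px."""
    dz = ref_m - depth_m                          # chest rise toward the camera
    pixel_area = (depth_m / fx) * (depth_m / fy)  # m^2 per pixel at that depth
    return float(np.nansum(dz * pixel_area) * 1000.0)  # m^3 -> litres
```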

14 pages, 1038 KB  
Article
The Effect of the FIFA 11+ Warm-Up Program on Knee Instability and Motor Performance in Male Youth Soccer Players
by Badis Soussi, Tamás Horváth, Zsombor Lacza and Mira Ambrus
Sensors 2025, 25(8), 2425; https://doi.org/10.3390/s25082425 - 11 Apr 2025
Cited by 4 | Viewed by 4185
Abstract
This study aimed to investigate the effect of the FIFA 11+ program on knee instability and motor performance in male youth soccer players. Thirty male youth soccer players were divided into two groups: the experimental group (FIFA+) performed the FIFA 11+ program for 10 weeks, while the control group followed their usual warm-up routine. Dynamic knee valgus (DKV) and squat depth were assessed using a Microsoft Azure Kinect camera and dynaknee software. Maximal isometric muscle force was measured with a dynamometer. The Y Balance test was used to evaluate dynamic balance, while a countermovement jump test assessed lower limb power. The knee range of motion was measured with a goniometer, and the T-test was used to evaluate agility. After the intervention, the FIFA+ group showed a significant decrease in DKV and squat depth (p < 0.05), while the control group showed no significant changes (p > 0.05). Both groups improved in motor performance, with slight progress noted in the FIFA+ group. However, neither group demonstrated significant improvement in dynamic balance (p > 0.05). While the FIFA 11+ program may not substantially enhance overall motor performance or match the effectiveness of other training regimens, it shows potential for addressing biomechanical deficiencies and reducing the risk of injuries, particularly those related to dynamic knee valgus.
(This article belongs to the Section Physical Sensors)
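
Dynamic knee valgus is commonly quantified as a frontal-plane projection angle (FPPA) from 2D hip, knee, and ankle positions; whether the dynaknee software uses this exact definition is an assumption.

```python
# Sketch: FPPA as 180 degrees minus the hip-knee-ankle angle, from 2D
# frontal-view coordinates; larger values indicate greater valgus collapse.
import numpy as np

def fppa_deg(hip, knee, ankle):
    u, v = hip - knee, ankle - knee
    cos_ang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return 180.0 - float(np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0))))
```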

21 pages, 4663 KB  
Article
The Feasibility of RGB-D Gaze Intention Measurement in Children with Autism Using Azure Kinect
by Abderrahmen Bendimered, Rim Cherif, Rabah Iguernaissi, Mohamad Motasem Nawaf, Susanne Thümmler, Séverine Dubuisson and Djamal Merad
Bioengineering 2025, 12(4), 370; https://doi.org/10.3390/bioengineering12040370 - 1 Apr 2025
Cited by 2 | Viewed by 1375
Abstract
Gaze interpretation is a fundamental aspect of social communication, especially for children with autism spectrum disorder (ASD), who frequently encounter many difficulties in social interactions. Despite the considerable advances made in gaze tracking technologies, such as those based on RGB and RGB-D, the accurate measurement of gaze direction remains a significant scientific challenge. This paper proposes a novel approach utilizing the Azure Kinect to improve the measurement of gaze intention in children with ASD, providing accurate estimations of both gaze direction and head position. To evaluate the effectiveness of the proposed methodology, an experimental trial was conducted with eight participants of varying statures. The intersection of the estimated gaze with the target plane was also analyzed throughout 38-min sessions. The results demonstrated high accuracy, with a minimum angular error of 2.5° using pupil positions, 2.06° using head orientation, and average errors of 4.46° and 3.19°, respectively. This approach was tested on a dataset of children with ASD to track their gaze towards a clinician, as this information is essential for assessing the children’s social intent and interactions with others, facilitating more precise clinical assessments for children with autism.
(This article belongs to the Special Issue Machine Learning-Aided Medical Image Analysis)
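
The angular errors reported above are the angle between an estimated gaze direction and the ground-truth eye-to-target vector; a one-function sketch.

```python
# Sketch: angular error in degrees between two 3D gaze direction vectors.
import numpy as np

def angular_error_deg(gaze_est, gaze_true):
    g = gaze_est / np.linalg.norm(gaze_est)
    t = gaze_true / np.linalg.norm(gaze_true)
    return float(np.degrees(np.arccos(np.clip(np.dot(g, t), -1.0, 1.0))))
```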