Search Results (96)

Search Parameters:
Keywords = LeapMotion

19 pages, 4998 KiB  
Article
Computer Vision-Based Robotic System Framework for the Real-Time Identification and Grasping of Oysters
by Hao-Ran Qu, Jue Wang, Lang-Rui Lei and Wen-Hao Su
Appl. Sci. 2025, 15(7), 3971; https://doi.org/10.3390/app15073971 - 3 Apr 2025
Viewed by 961
Abstract
This study addresses the labor-intensive and safety-critical challenges of manual oyster processing by innovating an advanced robotic intelligent sorting system. Central to this system is the integration of a high-resolution vision module, dual operational controllers, and the collaborative AUBO-i3 robot, all harmonized through a sophisticated Robot Operating System (ROS) framework. A specialized oyster image dataset was curated and augmented to train a robust You Only Look Once version 8 Oriented Bounding Box (YOLOv8-OBB) model, further enhanced through the incorporation of MobileNet Version 4 (MobileNetV4). This optimization reduced the number of model parameters by 50% and lowered the computational load by 23% in terms of GFLOPS (Giga Floating-point Operations Per Second). In order to capture oyster motion dynamically on a conveyor belt, a Kalman filter (KF) combined with a Low-Pass filter algorithm was employed to predict oyster trajectories, thereby improving noise reduction and motion stability. This approach achieves superior noise reduction compared to traditional Moving Average methods. The system achieved a 95.54% success rate in static gripping tests and an impressive 84% in dynamic conditions. These technological advancements demonstrate a significant leap towards revolutionizing seafood processing, offering substantial gains in operational efficiency, reducing potential contamination risks, and paving the way for a transition to fully automated, unmanned production systems in the seafood industry. Full article
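The trajectory-prediction idea in the abstract above (a Kalman filter combined with a low-pass filter to smooth noisy conveyor-belt positions) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: a 1-D constant-velocity Kalman filter with the matrices unrolled to scalars, followed by a first-order (exponential) low-pass stage; all gains and noise values are assumed for the example.

```python
import random

def kalman_lowpass(measurements, dt=0.1, q=0.01, r=0.5, alpha=0.3):
    """Track a 1-D position with a [pos, vel] Kalman filter, then low-pass it.

    dt: sample period; q: process noise; r: measurement noise variance;
    alpha: low-pass smoothing factor. All values are illustrative.
    """
    x, v = measurements[0], 0.0          # state estimate: position, velocity
    p11, p12, p22 = 1.0, 0.0, 1.0        # covariance entries
    smoothed, lp = [], measurements[0]
    for z in measurements[1:]:
        # Predict step (constant-velocity model)
        x += v * dt
        p11 += dt * (2 * p12 + dt * p22) + q
        p12 += dt * p22
        p22 += q
        # Update step with measurement z (observation of position only)
        s = p11 + r
        k1, k2 = p11 / s, p12 / s
        innov = z - x
        x += k1 * innov
        v += k2 * innov
        p22 -= k2 * p12
        p12 -= k2 * p11
        p11 *= (1 - k1)
        # First-order low-pass on the filtered position
        lp = alpha * x + (1 - alpha) * lp
        smoothed.append(lp)
    return smoothed

# Synthetic oyster moving at ~1 unit/s with measurement noise
random.seed(0)
truth = [i * 0.1 for i in range(50)]
noisy = [t + random.gauss(0, 0.3) for t in truth]
est = kalman_lowpass(noisy)
```

The low-pass stage trades a small tracking lag for extra noise rejection, which matches the abstract's rationale for preferring this combination over a plain moving average.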

19 pages, 6442 KiB  
Article
Synergy-Based Evaluation of Hand Motor Function in Object Handling Using Virtual and Mixed Realities
by Yuhei Sorimachi, Hiroki Akaida, Kyo Kutsuzawa, Dai Owaki and Mitsuhiro Hayashibe
Sensors 2025, 25(7), 2080; https://doi.org/10.3390/s25072080 - 26 Mar 2025
Viewed by 556
Abstract
This study introduces a novel system for evaluating hand motor function through synergy-based analysis during object manipulation in virtual and mixed-reality environments. Conventional assessments of hand function are often subjective, relying on visual observation by therapists or patient-reported outcomes. To address these limitations, we developed a system that utilizes the leap motion controller (LMC) to capture finger motion data without the constraints of glove-type devices. Spatial synergies were extracted using principal component analysis (PCA) and Varimax rotation, providing insights into finger motor coordination with the sparse decomposition. Additionally, we incorporated the HoloLens 2 to create a mixed-reality object manipulation task that enhances spatial awareness for the user, improving natural interaction with virtual objects. Our results demonstrate that synergy-based analysis allows for the systematic detection of hand movement abnormalities that are not captured through traditional task performance metrics. This system demonstrates promise in advancing rehabilitation by enabling more objective and detailed evaluations of finger motor function, facilitating personalized therapy, and potentially contributing to the early detection of motor impairments in the future. Full article
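The synergy extraction described in the abstract above rests on PCA over finger-motion data. Below is an illustrative sketch with synthetic joint-angle data (the paper additionally applies Varimax rotation for sparser synergies; that step is omitted here, and all array shapes are assumptions for the example).

```python
import numpy as np

# Fake dataset: 200 time samples x 15 joint angles, generated from 3 latent
# synergies plus a little noise, standing in for LMC finger-motion recordings.
rng = np.random.default_rng(42)
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 15))
angles = latent @ mixing + 0.05 * rng.normal(size=(200, 15))

# PCA via SVD of the mean-centred data matrix
X = angles - angles.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
explained = S**2 / np.sum(S**2)       # variance explained per component

synergies = Vt[:3]                    # 3 spatial synergies (rows: joint weights)
scores = X @ synergies.T              # activation of each synergy over time
```

With three latent generators, the first three components should capture nearly all the variance; on real data the number of synergies to retain is itself an analysis choice.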

21 pages, 2045 KiB  
Article
A Novel Improvement of Feature Selection for Dynamic Hand Gesture Identification Based on Double Machine Learning
by Keyue Yan, Chi-Fai Lam, Simon Fong, João Alexandre Lobo Marques, Richard Charles Millham and Sabah Mohammed
Sensors 2025, 25(4), 1126; https://doi.org/10.3390/s25041126 - 13 Feb 2025
Viewed by 1094
Abstract
Causal machine learning is an approach that combines causal inference and machine learning to understand and utilize causal relationships in data. In current research and applications, traditional machine learning and deep learning models always focus on prediction and pattern recognition. In contrast, causal machine learning goes a step further by revealing causal relationships between different variables. We explore a novel concept called Double Machine Learning that embraces causal machine learning in this research. The core goal is to select independent variables from a gesture identification problem that are causally related to final gesture results. This selection allows us to classify and analyze gestures more efficiently, thereby improving models’ performance and interpretability. Compared to commonly used feature selection methods such as Variance Threshold, Select From Model, Principal Component Analysis, Least Absolute Shrinkage and Selection Operator, Artificial Neural Network, and TabNet, Double Machine Learning methods focus more on causal relationships between variables rather than correlations. Our research shows that variables selected using the Double Machine Learning method perform well under different classification models, with final results significantly better than those of traditional methods. This novel Double Machine Learning-based approach offers researchers a valuable perspective for feature selection and model construction. It enhances the model’s ability to uncover causal relationships within complex data. Variables with causal significance can be more informative than those with only correlative significance, thus improving overall prediction performance and reliability. Full article
(This article belongs to the Special Issue Advances in Big Data and Internet of Things)
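The core Double Machine Learning idea the abstract relies on, partialling-out, can be sketched as residual-on-residual regression: regress the outcome and a candidate feature on the remaining controls, then regress the residuals on each other. Plain least squares stands in here for the flexible ML learners used in practice, and the data is synthetic, so this is a conceptual sketch rather than the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 5))                        # controls (e.g., other gesture features)
D = X @ rng.normal(size=5) + rng.normal(size=n)    # candidate feature
Y = 2.0 * D + X @ rng.normal(size=5) + rng.normal(size=n)  # outcome (true effect = 2)

def residualize(target, controls):
    """Residuals of an OLS regression of target on controls (with intercept)."""
    A = np.column_stack([np.ones(len(controls)), controls])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return target - A @ coef

y_res = residualize(Y, X)
d_res = residualize(D, X)
theta = (d_res @ y_res) / (d_res @ d_res)  # estimated causal effect of D on Y
```

A feature whose residualized effect is near zero explains nothing beyond the controls, which is the basis for the causal feature selection the abstract describes.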

18 pages, 1037 KiB  
Article
Optimisation and Comparison of Markerless and Marker-Based Motion Capture Methods for Hand and Finger Movement Analysis
by Valentin Maggioni, Christine Azevedo-Coste, Sam Durand and François Bailly
Sensors 2025, 25(4), 1079; https://doi.org/10.3390/s25041079 - 11 Feb 2025
Cited by 2 | Viewed by 2270
Abstract
Ensuring the accurate tracking of hand and fingers movements is an ongoing challenge for upper limb rehabilitation assessment, as the high number of degrees of freedom and segments in the limited volume of the hand makes this a difficult task. The objective of this study is to evaluate the performance of two markerless approaches (the Leap Motion Controller and the Google MediaPipe API) in comparison to a marker-based one, and to improve the precision of the markerless methods by introducing additional data processing algorithms fusing multiple recording devices. Fifteen healthy participants were instructed to perform five distinct hand movements while being recorded by the three motion capture methods simultaneously. The captured movement data from each device was analyzed using a skeletal model of the hand through the inverse kinematics method of the OpenSim software. Finally, the root mean square errors of the angles formed by each finger segment were calculated for the markerless and marker-based motion capture methods to compare their accuracy. Our results indicate that the MediaPipe-based setup is more accurate than the Leap Motion Controller-based one (average root mean square error of 10.9° versus 14.7°), showing promising results for the use of markerless-based methods in clinical applications. Full article
(This article belongs to the Collection Sensors for Gait, Human Movement Analysis, and Health Monitoring)
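The accuracy metric in the abstract above, per-segment joint-angle root mean square error against the marker-based reference, is straightforward to compute; the angle values below are invented for illustration.

```python
import math

def rmse(estimate, reference):
    """Root mean square error between two equal-length angle sequences (degrees)."""
    assert len(estimate) == len(reference)
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimate, reference)) / len(estimate))

reference_deg = [10.0, 25.0, 40.0, 55.0]   # marker-based "ground truth" angles
mediapipe_deg = [12.0, 23.0, 43.0, 52.0]   # hypothetical markerless estimates
err = rmse(mediapipe_deg, reference_deg)    # -> about 2.55 degrees
```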

13 pages, 1214 KiB  
Article
Is It Feasible to Apply a Virtual Box and Block Test in Children with Unilateral Cerebral Palsy?: A Pilot Study
by Soraya Pérez-Nombela, Javier Merino-Andrés, Julio Gómez-Soriano, María Álvarez-Rodríguez, Silvia Ceruelo-Abajo, Purificación López-Muñoz, Rocío Palomo-Carrión and Ana de los Reyes-Guzmán
J. Clin. Med. 2025, 14(2), 391; https://doi.org/10.3390/jcm14020391 - 9 Jan 2025
Viewed by 1035
Abstract
Background: With technological advancements, virtual versions of the Box and Block Test (BBT) employing the Leap Motion Controller have been developed for evaluating hand dexterity. Currently, there are no studies about the usefulness of this system in children with unilateral cerebral palsy (UCP). Thus, our main objective is to apply a virtual BBT based on the Leap Motion Controller in children with UCP compared with the real BBT for assessing upper limb function within a pilot study. Methods: Seven children between the ages of 4 and 8 years who were diagnosed with UCP were assessed three times using the real and virtual BBT. Results: For all the participants, performance was greater in the real BBT than in the virtual BBT. During the last assessment, the participants reached 28.17 (SD:6.31) blocks in the real test and 9.00 (SD:5.90) in the virtual test. The correlation index between the two modalities of the BBT was moderate (r = 0.708). Conclusions: The results obtained in this study suggest that the application of the virtual BBT in children with UCP is feasible. Future studies are needed to validate the application of the virtual BBT in children with UCP. Full article

17 pages, 8323 KiB  
Article
A Symmetrical Leech-Inspired Soft Crawling Robot Based on Gesture Control
by Jiabiao Li, Ruiheng Liu, Tianyu Zhang and Jianbin Liu
Biomimetics 2025, 10(1), 35; https://doi.org/10.3390/biomimetics10010035 - 8 Jan 2025
Viewed by 1051
Abstract
This paper presents a novel soft crawling robot controlled by gesture recognition, aimed at enhancing the operability and adaptability of soft robots through natural human–computer interactions. The Leap Motion sensor is employed to capture hand gesture data, and Unreal Engine is used for gesture recognition. Using the UE4Duino, gesture semantics are transmitted to an Arduino control system, enabling direct control over the robot’s movements. For accurate and real-time gesture recognition, we propose a threshold-based method for static gestures and a backpropagation (BP) neural network model for dynamic gestures. In terms of design, the robot utilizes cost-effective thermoplastic polyurethane (TPU) film as the primary pneumatic actuator material. Through a positive and negative pressure switching circuit, the robot’s actuators achieve controllable extension and contraction, allowing for basic movements such as linear motion and directional changes. Experimental results demonstrate that the robot can successfully perform diverse motions under gesture control, highlighting the potential of gesture-based interaction in soft robotics. Full article
(This article belongs to the Special Issue Design, Actuation, and Fabrication of Bio-Inspired Soft Robotics)
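The abstract above mentions a threshold-based method for static gestures. A minimal sketch of that idea is a fixed threshold over per-finger extension values; the threshold, the gesture vocabulary, and the mapping to robot commands below are all hypothetical, not the authors' parameters.

```python
# Classify a static hand pose from five finger-extension values in [0, 1]
# (1 = fully extended), thumb through pinky. Threshold and labels are assumed.
THRESHOLD = 0.5

def classify_static(extension):
    """Map 5 finger-extension values to a coarse gesture label."""
    ext = [e > THRESHOLD for e in extension]
    if all(ext):
        return "open_palm"       # e.g., command the crawler to move forward
    if not any(ext):
        return "fist"            # e.g., stop
    if ext == [False, True, True, False, False]:
        return "two_fingers"     # e.g., change direction
    return "unknown"

label = classify_static([0.9, 0.8, 0.95, 0.85, 0.7])
```

Dynamic gestures, as the abstract notes, need a learned model (the paper uses a BP neural network) because they depend on motion over time rather than a single pose.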

22 pages, 9192 KiB  
Article
A Deep-Learning-Driven Aerial Dialing PIN Code Input Authentication System via Personal Hand Features
by Jun Wang, Haojie Wang, Kiminori Sato and Bo Wu
Electronics 2025, 14(1), 119; https://doi.org/10.3390/electronics14010119 - 30 Dec 2024
Viewed by 727
Abstract
Dialing-type authentication, a common PIN code input scheme, has gained popularity due to its simple and intuitive design. However, this type of system is vulnerable to “shoulder surfing” attacks, whereby attackers can physically view the device screen and keypad to obtain personal information. Therefore, building on the “Leap Motion” device and “Media Pipe” solutions, in this paper we propose a new two-factor dialing-type input authentication system powered by contactless aerial hand motions and hand features. Specifically, for the aerial dialing component, which serves as the first authentication factor, we constructed two hand motion input subsystems, one using Leap Motion and one using Media Pipe. The results of FRR (False Rejection Rate) and FAR (False Acceptance Rate) experiments on the two subsystems show that Media Pipe is more comprehensive and superior in terms of applicability, accuracy, and speed. Moreover, as the second authentication factor, the user’s hand features (e.g., proportional characteristics of the fingers and palm) were used to train a specialized CNN-LSTM model, ultimately achieving satisfactory accuracy. Full article
(This article belongs to the Special Issue Biometrics and Pattern Recognition)
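The FRR/FAR evaluation named in the abstract above reduces to two ratios: the share of genuine attempts wrongly rejected, and the share of impostor attempts wrongly accepted. The attempt counts below are invented for illustration.

```python
def frr_far(genuine_accepted, genuine_total, impostor_accepted, impostor_total):
    """False Rejection Rate and False Acceptance Rate from attempt counts."""
    frr = (genuine_total - genuine_accepted) / genuine_total
    far = impostor_accepted / impostor_total
    return frr, far

# e.g., 95 of 100 genuine attempts accepted; 3 of 200 impostor attempts accepted
frr, far = frr_far(95, 100, 3, 200)   # -> (0.05, 0.015)
```

Tightening the match threshold trades FAR against FRR, which is why both rates are reported together when comparing the two subsystems.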

22 pages, 13474 KiB  
Article
Multimodal Human–Robot Interaction Using Gestures and Speech: A Case Study for Printed Circuit Board Manufacturing
by Ángel-Gabriel Salinas-Martínez, Joaquín Cunillé-Rodríguez, Elías Aquino-López and Angel-Iván García-Moreno
J. Manuf. Mater. Process. 2024, 8(6), 274; https://doi.org/10.3390/jmmp8060274 - 30 Nov 2024
Viewed by 2974
Abstract
In recent years, technologies for human–robot interaction (HRI) have undergone substantial advancements, facilitating more intuitive, secure, and efficient collaborations between humans and machines. This paper presents a decentralized HRI platform, specifically designed for printed circuit board manufacturing. The proposal incorporates many input devices, including gesture recognition via Leap Motion and Tap Strap, and speech recognition. The gesture recognition system achieved an average accuracy of 95.42% and 97.58% for each device, respectively. The speech control system, called Cellya, exhibited a markedly reduced Word Error Rate of 22.22% and a Character Error Rate of 11.90%. Furthermore, a scalable user management framework, the decentralized multimodal control server, employs biometric security to facilitate the efficient handling of multiple users, regulating permissions and control privileges. The platform’s flexibility and real-time responsiveness are achieved through advanced sensor integration and signal processing techniques, which facilitate intelligent decision-making and enable accurate manipulation of manufacturing cells. The results demonstrate the system’s potential to improve operational efficiency and adaptability in smart manufacturing environments. Full article
(This article belongs to the Special Issue Smart Manufacturing in the Era of Industry 4.0)
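The Word Error Rate and Character Error Rate reported in the abstract above are both Levenshtein edit distances normalized by the reference length, computed over words and characters respectively. A self-contained sketch (the command phrases are invented examples, not the system's vocabulary):

```python
def edit_distance(ref, hyp):
    """Classic dynamic-programming Levenshtein distance over two sequences."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[m][n]

def wer(ref, hyp):
    return edit_distance(ref.split(), hyp.split()) / len(ref.split())

def cer(ref, hyp):
    return edit_distance(ref, hyp) / len(ref)

w = wer("move cell three left", "move cell tree left")   # -> 0.25
c = cer("move cell three left", "move cell tree left")   # -> 0.05
```

Note that CER is usually lower than WER, as in the abstract's figures (11.90% vs. 22.22%), because one misrecognized word often differs by only a character or two.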

24 pages, 74134 KiB  
Article
Upper and Lower Limb Training Evaluation System Based on Virtual Reality Technology
by Jian Zhao, Hanlin Gao, Chen Yang, Zhejun Kuang, Mingliang Liu, Zhuozheng Dang and Lijuan Shi
Sensors 2024, 24(21), 6909; https://doi.org/10.3390/s24216909 - 28 Oct 2024
Cited by 1 | Viewed by 1771
Abstract
Upper and lower limb rehabilitation training is essential for restoring patients’ physical movement ability and enhancing muscle strength and coordination. However, traditional rehabilitation training methods have limitations, such as high costs, low patient participation, and lack of real-time feedback. The purpose of this study is to design and implement a rehabilitation training evaluation system based on virtual reality to improve the quality of patients’ rehabilitation training. This paper proposes an upper and lower limb rehabilitation training evaluation system based on virtual reality technology, aiming to solve the problems existing in traditional rehabilitation training. The system provides patients with an immersive and interactive rehabilitation training environment through virtual reality technology, aiming to improve patients’ participation and rehabilitation effects. This study used Kinect 2.0 and Leap Motion sensors to capture patients’ motion data and transmit them to virtual training scenes. The system designed multiple virtual scenes specifically for different upper and lower limbs, with a focus on hand function training. Through these scenes, patients can perform various movement training, and the system will provide real-time feedback based on the accuracy of the patient’s movements. The experimental results show that patients using the system show higher participation and better rehabilitation training effects. Compared with patients receiving traditional rehabilitation training, patients using the virtual reality system have significantly improved movement accuracy and training participation. The virtual reality rehabilitation training evaluation system developed in this study improves the quality of patients’ rehabilitation and provides personalized treatment information to medical personnel through data collection and analysis, promoting the systematization and personalization of rehabilitation training. This system is innovative and has broad application potential in the field of rehabilitation medicine. Full article
(This article belongs to the Section Biomedical Sensors)

20 pages, 5140 KiB  
Article
MOVING: A Multi-Modal Dataset of EEG Signals and Virtual Glove Hand Tracking
by Enrico Mattei, Daniele Lozzi, Alessandro Di Matteo, Alessia Cipriani, Costanzo Manes and Giuseppe Placidi
Sensors 2024, 24(16), 5207; https://doi.org/10.3390/s24165207 - 11 Aug 2024
Viewed by 3981
Abstract
Brain–computer interfaces (BCIs) are pivotal in translating neural activities into control commands for external assistive devices. Non-invasive techniques like electroencephalography (EEG) offer a balance of sensitivity and spatial-temporal resolution for capturing brain signals associated with motor activities. This work introduces MOVING, a Multi-Modal dataset of EEG signals and Virtual Glove Hand Tracking. This dataset comprises neural EEG signals and kinematic data associated with three hand movements—open/close, finger tapping, and wrist rotation—along with a rest period. The dataset, obtained from 11 subjects using a 32-channel dry wireless EEG system, also includes synchronized kinematic data captured by a Virtual Glove (VG) system equipped with two orthogonal Leap Motion Controllers. The use of these two devices allows for fast assembly (∼1 min), though it introduces more noise than gold-standard data acquisition devices. The study investigates which frequency bands in EEG signals are the most informative for motor task classification and the impact of baseline reduction on gesture recognition. Deep learning techniques, particularly EEGnetV4, are applied to analyze and classify movements based on the EEG data. This dataset aims to facilitate advances in BCI research and in the development of assistive devices for people with impaired hand mobility. The dataset contributes to a growing repository of EEG data, which continues to expand with recordings from additional subjects and is intended to serve as a benchmark for new BCI approaches and applications. Full article
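A minimal sketch of the band analysis the abstract raises (which EEG frequency bands are informative for motor tasks): isolate one band from a channel with an FFT mask and compute its power. The 8–12 Hz mu band is a common choice for motor imagery, but the sampling rate, band edges, and synthetic signal below are all illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

fs = 250                      # sampling rate in Hz (illustrative)
t = np.arange(0, 4, 1 / fs)   # 4 s of data, 1000 samples
# Synthetic channel: 10 Hz "mu" rhythm plus a 40 Hz noise component
x = 1.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

def band_filter(signal, fs, lo, hi):
    """Zero all FFT bins outside [lo, hi] Hz and transform back."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(spec, n=len(signal))

mu = band_filter(x, fs, 8, 12)        # keep only the mu band
band_power = np.mean(mu ** 2)         # ~0.5 for a unit-amplitude sinusoid
```

Comparing such band powers between movement and rest epochs is one simple way to rank bands by informativeness before training a classifier like EEGnetV4.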

24 pages, 18997 KiB  
Article
Finger-Individuating Exoskeleton System with Non-Contact Leader–Follower Control Strategy
by Zhenyu Sun, Xiaobei Jing, Xinyu Zhang, Biaofeng Shan, Yinlai Jiang, Guanglin Li, Hiroshi Yokoi and Xu Yong
Bioengineering 2024, 11(8), 754; https://doi.org/10.3390/bioengineering11080754 - 25 Jul 2024
Cited by 1 | Viewed by 2227
Abstract
This paper proposes a novel finger-individuating exoskeleton system with a non-contact leader–follower control strategy that effectively combines motion functionality and individual adaptability. Our solution comprises the following two interactive components: the leader side and the follower side. The leader side processes joint angle information from the healthy hand during motion via a Leap Motion Controller as the system input, providing more flexible and active operations owing to the non-contact manner. Then, as the follower side, the exoskeleton is driven to assist the user’s hand for rehabilitation training according to the input. The exoskeleton mechanism is designed as a universal module that can adapt to various digit sizes and weighs only 40 g. Additionally, the current motion of the exoskeleton is fed back to the system in real time, forming a closed loop to ensure control accuracy. Finally, four experiments validate the design effectiveness and motion performance of the proposed exoskeleton system. The experimental results indicate that our prototype can provide an average force of about 16.5 N for the whole hand during flexing, and the success rate reaches 82.03% in grasping tasks. Importantly, the proposed prototype holds promise for improving rehabilitation outcomes, offering diverse options for different stroke stages or application scenarios. Full article
(This article belongs to the Special Issue Advanced 3D Bioprinting for Soft Robotics, Sensing, and Healthcare)

17 pages, 3512 KiB  
Article
Single Sequential Trajectory Optimization with Centroidal Dynamics and Whole-Body Kinematics for Vertical Jump of Humanoid Robot
by Yaliang Liu, Xuechao Chen, Zhangguo Yu, Haoxiang Qi and Chuanku Yi
Biomimetics 2024, 9(5), 274; https://doi.org/10.3390/biomimetics9050274 - 2 May 2024
Cited by 1 | Viewed by 2018
Abstract
High vertical jumping motion, which enables a humanoid robot to leap over obstacles, is a direct reflection of its extreme motion capabilities. This article proposes a single sequential kino-dynamic trajectory optimization method to solve the whole-body motion trajectory for high vertical jumping motion. The trajectory optimization process is decomposed into two sequential optimization parts: optimization computation of centroidal dynamics and coherent whole-body kinematics. Both optimization problems converge on the common variables (the center of mass, momentum, and foot position) using cost functions while allowing for some tolerance in the consistency of the foot position. Additionally, complementarity conditions and a pre-defined contact sequence are implemented to constrain the contact force and foot position during the launching and flight phases. The whole-body trajectory, including the launching and flight phases, can be efficiently solved by a single sequential optimization, which is an efficient solution for the vertical jumping motion. Finally, the whole-body trajectory generated by the proposed optimized method is demonstrated on a real humanoid robot platform, and a vertical jumping motion of 0.5 m in height (foot lifting distance) is achieved. Full article
(This article belongs to the Special Issue Bio-Inspired Locomotion and Manipulation of Legged Robot: 2nd Edition)

16 pages, 5593 KiB  
Article
Portable Head-Mounted System for Mobile Forearm Tracking
by Matteo Polsinelli, Alessandro Di Matteo, Daniele Lozzi, Enrico Mattei, Filippo Mignosi, Lorenzo Nazzicone, Vincenzo Stornelli and Giuseppe Placidi
Sensors 2024, 24(7), 2227; https://doi.org/10.3390/s24072227 - 30 Mar 2024
Cited by 6 | Viewed by 1696
Abstract
Computer vision (CV)-based systems using cameras and recognition algorithms offer touchless, cost-effective, precise, and versatile hand tracking. These systems allow unrestricted, fluid, and natural movements without the constraints of wearable devices, gaining popularity in human–system interaction, virtual reality, and medical procedures. However, traditional CV-based systems, relying on stationary cameras, are not compatible with mobile applications and demand substantial computing power. To address these limitations, we propose a portable hand-tracking system utilizing the Leap Motion Controller 2 (LMC) mounted on the head and controlled by a single-board computer (SBC) powered by a compact power bank. The proposed system enhances portability, enabling users to interact freely with their surroundings. We present the system’s design and conduct experimental tests to evaluate its robustness under variable lighting conditions, power consumption, CPU usage, temperature, and frame rate. This portable hand-tracking solution, which has minimal weight and runs independently of external power, proves suitable for mobile applications in daily life. Full article
(This article belongs to the Section Sensors Development)

9 pages, 296 KiB  
Article
Virtual Reality-Based Assessment for Rehabilitation of the Upper Limb in Patients with Parkinson’s Disease: A Pilot Cross-Sectional Study
by Luciano Bissolotti, Justo Artiles-Sánchez, José Luís Alonso-Pérez, Josué Fernández-Carnero, Vanesa Abuín-Porras, Pierluigi Sinatti and Jorge Hugo Villafañe
Medicina 2024, 60(4), 555; https://doi.org/10.3390/medicina60040555 - 29 Mar 2024
Cited by 3 | Viewed by 2510
Abstract
Background and Objectives: This study aimed to examine the responsiveness and concurrent validity of a serious game and the correlation between serious game use and upper limb (UL) performance in Parkinson’s Disease (PD) patients. Materials and Methods: Twenty-four consecutive upper limbs (14 males, 8 females, age: 55–83 years) of PD patients were assessed. The clinical assessment included: the Box and Block test (BBT), Nine-Hole Peg test (9HPT), and sub-scores of the Unified Parkinson’s Disease Rating-Scale Motor section (UPDRS-M) to assess UL disability. Performance scores obtained in two different tests (Ex. A and Ex. B, respectively, the Trolley test and Mushrooms test) based on leap motion (LM) sensors were used to study the correlations with clinical scores. Results: The subjective fatigue experienced during LM tests was measured by the Borg Rating of Perceived Exertion (RPE, 0–10); the BBT and 9HPT showed the highest correlation coefficients with UPDRS-M scores (ICCs: −0.652 and 0.712, p < 0.05). Exercise A (Trolley test) correlated with UPDRS-M (ICC: 0.31, p < 0.05), but not with the 9HPT and BBT tests (ICCs: −0.447 and 0.390, p < 0.05), while Exercise B (Mushroom test) correlated with UPDRS-M (ICC: −0.40, p < 0.05), as did these last two tests (ICCs: −0.225 and 0.272, p < 0.05). The mean RPE during LM tests was 3.4 ± 3.2. The evaluation of upper limb performance is feasible and does not induce relevant fatigue. Conclusions: The analysis of the ICC supports the use of Test B to evaluate UL disability and performance in PD patients, while Test A is mostly correlated with disability. Specifically designed serious games on LM can serve as a method of assessing impairment in the PD population. Full article
(This article belongs to the Section Neurology)
11 pages, 1091 KiB  
Article
Kinematic Assessment of Fine Motor Skills in Children: Comparison of a Kinematic Approach and a Standardized Test
by Ewa Niechwiej-Szwedo, Taylor A. Brin, Benjamin Thompson and Lisa W. T. Christian
Vision 2024, 8(1), 6; https://doi.org/10.3390/vision8010006 - 17 Feb 2024
Cited by 3 | Viewed by 3129
Abstract
Deficits in fine motor skills have been reported in some children with neurodevelopmental disorders such as amblyopia or strabismus. Therefore, monitoring the development of motor skills and any potential improvement due to therapy is an important clinical goal. The aim of this study was to test the feasibility of performing a kinematic assessment within an optometric setting using inexpensive, portable, off-the-shelf equipment. The study also assessed whether kinematic data could enhance the information provided by a routine motor function screening test (the Movement Assessment Battery for Children, MABC). Using the MABC-2, upper limb dexterity was measured in a cohort of 47 typically developing children (7–15 years old), and the Leap motion capture system was used to record hand kinematics while children performed a bead-threading task. Two children with a history of amblyopia were also tested to explore the utility of a kinematic assessment in a clinical population. For the typically developing children, visual acuity and stereoacuity were within the normal range; however, the average standardized MABC-2 scores were lower than published norms. Comparing MABC-2 and kinematic measures in the two children with amblyopia revealed that both assessments provide convergent results and revealed deficits in fine motor control. In conclusion, kinematic assessment can augment standardized tests of fine motor skills in an optometric setting and may be useful for measuring visuomotor function and monitoring treatment outcomes in children with binocular vision anomalies. Full article
