Search Results (67)

Search Parameters:
Keywords = virtual glove

30 pages, 2771 KB  
Article
The Haptic Fidelity Paradox in VR: Cognitive Load and User Satisfaction
by Yoona Jeong and Tack Woo
Appl. Sci. 2026, 16(8), 3722; https://doi.org/10.3390/app16083722 - 10 Apr 2026
Viewed by 310
Abstract
High-fidelity haptic interfaces are widely assumed to enhance virtual reality (VR) training; however, they can trigger a “fidelity paradox” where hardware complexity paradoxically degrades usability. Grounded in Task-Technology Fit (TTF) theory and Hassenzahl’s pragmatic-hedonic quality framework, this study investigates the mechanisms underlying this paradox through a within-subject experiment (N=70) in a VR cooking simulation comparing three interface paradigms: VR controllers (VRC), hand tracking (HT), and haptic gloves (HG). Results confirmed that HG’s low task-technology fit—manifested as tracking errors, physical resistance, and increased operational overhead—generated significantly higher extraneous cognitive load (H1) and degraded interaction satisfaction (H2) despite its superior intended sensory resolution. Critically, in the HG condition, pragmatic quality (technical reliability) was identified as the dominant driver of satisfaction, while hedonic quality additions (thermal feedback) showed no significant independent contribution to satisfaction. Perceived training effectiveness remained above the neutral threshold across all conditions (H3), indicating that content-level TTF is preserved independently of interface-level TTF mismatch. These findings suggest that VR interface design should prioritize “functional sufficiency”—ensuring tools serve as transparent, seamless extensions of the user—over the blind pursuit of sensory maximization.

14 pages, 2318 KB  
Article
A Flexible Wearable Data Glove Based on Hybrid Fiber-Optic Sensing for Hand Motion Monitoring
by Jing Li, Xiangting Hou, Ke Du, Huiying Piao and Cheng Li
Materials 2026, 19(8), 1525; https://doi.org/10.3390/ma19081525 - 10 Apr 2026
Viewed by 425
Abstract
Wearable data gloves often suffer from electromagnetic interference, insufficient substrate stability, and limited capability for multi-degree-of-freedom motion measurement. To address these limitations, a flexible glove incorporating a hybrid POF-FBG sensing scheme was designed and fabricated. Plastic optical fibers (POFs) were side-polished and patterned with long-period gratings to improve sensitivity to wrist flexion-extension and abduction-adduction. Then, fiber Bragg gratings (FBGs) were embedded in a polydimethylsiloxane substrate and encapsulated using thermoplastic polyurethane fixtures to reduce the influence of skin stretching and improve the measurement accuracy of finger-joint angles. Moreover, a thermoplastic polyurethane skeleton with an adaptive sliding-rail structure was 3D printed to maintain the stability of the sensor placement at the joints. Experimental results demonstrated mean absolute errors of 4.06°, 1.38° and 1.70° for wrist flexion-extension, abduction-adduction and finger-joint bending, respectively, along with excellent gesture classification using a support vector machine algorithm, which indicates great potential in virtual reality interaction and hand rehabilitation applications.
(This article belongs to the Special Issue Advances in Optical Fiber Materials and Their Applications)
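The angle errors above are mean absolute errors of glove-estimated joint angles against a reference. A minimal sketch of that evaluation step, assuming a linear wavelength-to-angle response and invented sensitivity values (not the paper's calibration data):

```python
import numpy as np

# Synthetic calibration data (illustrative only, not the paper's measurements):
# assume the FBG wavelength shift (nm) varies linearly with joint angle (deg).
true_angles = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0])
wavelength_shift = 0.012 * true_angles + 0.05   # assumed ~12 pm/deg sensitivity

# Fit angle = a * shift + b by least squares (the usual calibration step).
A = np.vstack([wavelength_shift, np.ones_like(wavelength_shift)]).T
(a, b), *_ = np.linalg.lstsq(A, true_angles, rcond=None)

# Apply the calibration to new readings and score with mean absolute error,
# the metric reported above (4.06°, 1.38°, 1.70° for the three motions).
ref_angles = np.array([10.0, 40.0, 70.0])
est_angles = a * (0.012 * ref_angles + 0.05) + b
mae = np.mean(np.abs(est_angles - ref_angles))
```

With exactly linear synthetic data the MAE is essentially zero; real glove data would show the residual nonlinearity the paper's error figures capture.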

19 pages, 2660 KB  
Article
A Shallow-Torque Haptic Device for Wrist Postural Guidance: Design and System Evaluation in a Virtual Rehabilitation Task
by Federica Serra, Cristian Camardella, Antonio Frisoli and Daniele Leonardis
Robotics 2026, 15(3), 59; https://doi.org/10.3390/robotics15030059 - 13 Mar 2026
Viewed by 591
Abstract
This research presents a new glove-shaped wearable device, designed to deliver torsional cues on the wrist as a tactile guidance tool. The device integrates four tactile modules that apply modulated shallow torque to the anatomical wrist articulation, providing torsional hints for both ulnar–radial deviation and flexion–extension degrees of freedom (DOF). The aim of this research is to evaluate whether this new type of stimulation can convey accurate directional cues on 2-DOF wrist movements, with the main target application as a guidance and support tool in virtual motor rehabilitation. Effectiveness was tested in virtual reality (VR) serious games designed to exercise wrist movements through a virtual navigation task. The glove-shaped haptic device was introduced to guide the user by directional cues provided through the shallow-torques approach. Results showed that the tactile sensations were effective in conveying accurate directional cues, reliably guiding subjects’ wrist movements on 2-DOF. This research highlights the potential of a compact, non-bulky glove-shaped device for providing clear directional cues at the wrist across 2-DOF. The shallow-torque approach, combining the natural interaction of force feedback with hardware simplicity and lightness closer to those of vibrotactile devices, can scale to other body segments and shows promise for applications in rehabilitation, postural guidance, and virtual interaction.
(This article belongs to the Section Neurorobotics)

20 pages, 3729 KB  
Proceeding Paper
A Smart Glove-Based System for Dynamic Sign Language Translation Using LSTM Networks
by Tabassum Kanwal, Saud Altaf, Rehan Mehmood Yousaf and Kashif Sattar
Eng. Proc. 2025, 118(1), 45; https://doi.org/10.3390/ECSA-12-26530 - 7 Nov 2025
Viewed by 1863
Abstract
This research presents a novel, real-time Pakistani Sign Language (PSL) recognition system utilizing a custom-designed sensory glove integrated with advanced machine learning techniques. The system aims to bridge communication gaps for individuals with hearing and speech impairments by translating hand gestures into readable text. At the core of this work is a smart glove engineered with five resistive flex sensors for precise finger flexion detection and a 9-DOF Inertial Measurement Unit (IMU) for capturing hand orientation and movement. The glove is powered by a compact microcontroller, which processes the analog and digital sensor inputs and transmits the data wirelessly to a host computer. A rechargeable 3.7 V Li-Po battery ensures portability, while a dynamic dataset comprising both static alphabet gestures and dynamic PSL phrases was recorded using this setup. The collected data was used to train two models: a Support Vector Machine with feature extraction (SVM-FE) and a Long Short-Term Memory (LSTM) deep learning network. The LSTM model outperformed traditional methods, achieving an accuracy of 98.6% in real-time gesture recognition. The proposed system demonstrates robust performance and offers practical applications in smart home interfaces, virtual and augmented reality, gaming, and assistive technologies. By combining ergonomic hardware with intelligent algorithms, this research takes a significant step toward inclusive communication and more natural human–machine interaction.
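The LSTM in this system consumes per-frame vectors of the glove's 14 sensor channels (5 flex + 9 IMU). As a minimal numpy sketch of the recurrence itself, with randomly initialized weights rather than the authors' trained network:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 14, 8        # 5 flex sensors + 9 IMU axes; small hidden state

# Randomly initialized gate weights (illustrative; a real model is trained).
W = rng.standard_normal((4 * n_hid, n_in + n_hid)) * 0.1
b = np.zeros(4 * n_hid)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c):
    """One LSTM cell update: input, forget, and output gates plus candidate."""
    z = W @ np.concatenate([x, h]) + b
    i, f, o, g = np.split(z, 4)
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

# Run a short synthetic gesture sequence (20 frames of sensor readings);
# the final hidden state would feed a classifier over PSL gestures.
h = c = np.zeros(n_hid)
for x in rng.standard_normal((20, n_in)):
    h, c = lstm_step(x, h, c)
```

The hidden-state recurrence is what lets the model separate dynamic phrases from static alphabet poses, which fixed-window methods like the SVM-FE baseline handle less naturally.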

19 pages, 1142 KB  
Review
Virtual Reality Exergaming in Outpatient Stroke Rehabilitation: A Scoping Review and Clinician Roadmap
by Błażej Cieślik
J. Clin. Med. 2025, 14(20), 7227; https://doi.org/10.3390/jcm14207227 - 13 Oct 2025
Viewed by 3173
Abstract
Background/Objectives: Outpatient stroke rehabilitation is expanding as inpatient episodes shorten. Virtual reality (VR) exergaming can extend practice and standardize progression, but setting-specific effectiveness and implementation factors remain unclear. This scoping review mapped VR exergaming in outpatient stroke care and identified technology typologies and functional outcomes. Methods: Guided by the JBI Manual and PRISMA-ScR, searches of MEDLINE, Embase, CENTRAL, Scopus, and Web of Science were conducted in April 2025. The study included adults post-stroke undergoing VR exergaming programs with movement tracking delivered in clinic-based outpatient or home-based outpatient settings. Interventions focused on functional rehabilitation using interactive VR. Results: Sixty-six studies met the criteria, forty-four clinic-based and twenty-two home-based. Serious games accounted for 65% of interventions and commercial exergames for 35%. Superiority on a prespecified functional endpoint was reported in 41% of trials, 29% showed within-group improvement only, and 30% found no between-group difference; effects were more consistent in supervised clinic programs than in home-based implementations. Signals were most consistent for commercial off-the-shelf and camera-based systems. Gloves or haptics and locomotor platforms were promising but less studied. Head-mounted display interventions showed mixed findings. Adherence was generally high, and adverse events were infrequent and mild. Conclusions: VR exergaming appears clinically viable for outpatient stroke rehabilitation, with the most consistent gains in supervised clinic-based programs; home-based effects are more variable and sensitive to dose and supervision. Future work should compare platform types by therapeutic goal; embed mechanistic measures; strengthen home delivery with dose control and remote supervision; and standardize the reporting of fidelity, adherence, and cost.
(This article belongs to the Special Issue Chronic Disease Management and Rehabilitation in Older Adults)

18 pages, 1181 KB  
Article
Inclusion in Higher Education: An Analysis of Teaching Materials for Deaf Students
by Maria Aparecida Lima, Ana Garcia-Valcárcel and Manuel Meirinhos
Educ. Sci. 2025, 15(10), 1290; https://doi.org/10.3390/educsci15101290 - 30 Sep 2025
Cited by 1 | Viewed by 2628
Abstract
This study investigates the challenges of promoting accessibility for deaf teachers and students in higher education, focusing on the development of inclusive teaching materials. A qualitative case study was conducted in ten teacher training programmes at the Federal University of Alagoas (Brazil), including nine distance learning courses and one face-to-face LIBRAS programme. Analysis of the Virtual Learning Environment revealed a predominance of text-based content, with limited use of Libras videos, visual resources, or assistive technologies. The integration of Brazilian Sign Language into teaching practices was minimal, and digital translation tools were rarely used or contextually appropriate. Educators reported limited training, technical support, and institutional guidance for the creation of accessible materials. Time constraints and resource scarcity further hampered inclusive practices. The results highlight the urgent need for institutional policies, continuous teacher training, multidisciplinary support teams, and the strategic use of digital technologies and Artificial Intelligence (AI). Compared with previous studies, significant progress has been made. The present study highlights the establishment of an Accessibility Centre (NAC) and an Accessibility Laboratory (LAB) at the university. These facilities are designed to support the development of policies for the inclusion of people with disabilities, including deaf students, and to assist teachers in designing educational resources, which is essential for enhancing accessibility and learning outcomes. Artificial intelligence tools—such as sign language translators including Hand Talk, VLibras, SignSpeak, Glove-Based Systems, the LIBRAS Online Dictionary, and the Spreadthesign Dictionary—can serve as valuable resources in the teaching and learning process.

15 pages, 2559 KB  
Article
Quasi-Static and Dynamic Measurement Capabilities Provided by an Electromagnetic Field-Based Sensory Glove
by Giovanni Saggio, Luca Pietrosanti, I-Jung Lee and Bor-Shing Lin
Biosensors 2025, 15(10), 640; https://doi.org/10.3390/bios15100640 - 25 Sep 2025
Cited by 1 | Viewed by 1345
Abstract
The sensory glove (also known as a data or instrumented glove) plays a key role in measuring and tracking hand dexterity. It has been adopted in a variety of different domains, including medical, robotics, virtual reality, and human–computer interaction, to assess hand motor skills and to improve control accuracy. However, no particular technology has been established as the most suitable for all domains, so different sensory gloves have been developed, adopting different sensors mainly based on optic, electric, magnetic, or mechanical properties. This work investigates the performance of the MANUS Quantum sensory glove, which sources an electromagnetic field and measures its changing value at the fingertips during finger flexion. Its performance is determined in terms of measurement repeatability, reproducibility, and reliability during both quasi-static and dynamic hand motor tests.

12 pages, 8520 KB  
Article
Integrated Haptic Feedback with Augmented Reality to Improve Pinching and Fine Moving of Objects
by Jafar Hamad, Matteo Bianchi and Vincenzo Ferrari
Appl. Sci. 2025, 15(13), 7619; https://doi.org/10.3390/app15137619 - 7 Jul 2025
Cited by 6 | Viewed by 4519
Abstract
Hand gestures are essential for interaction in augmented and virtual reality (AR/VR), allowing users to intuitively manipulate virtual objects and engage with human–machine interfaces (HMIs). Accurate gesture recognition is critical for effective task execution. However, users often encounter difficulties due to the lack of immediate and clear feedback from head-mounted displays (HMDs). Current tracking technologies cannot always guarantee reliable recognition, leaving users uncertain about whether their gestures have been successfully detected. To address this limitation, haptic feedback can play a key role by confirming gesture recognition and compensating for discrepancies between the visual perception of fingertip contact with virtual objects and the actual system recognition. The goal of this paper is to compare a simple vibrotactile ring with a full glove device and identify their possible improvements for a fundamental gesture like pinching and fine moving of objects using Microsoft HoloLens 2. Because the pinch action is an essential fine motor skill, augmented reality integrated with haptic feedback can notify the user that a gesture has been recognized and compensate for misaligned visual perception between the tracked fingertip and virtual objects, yielding better spatial precision. In our experiments, the participants’ median distance error using bare hands over all axes was 10.3 mm (interquartile range [IQR] = 13.1 mm) in a median time of 10.0 s (IQR = 4.0 s). While both haptic devices improved participants’ precision with respect to the bare-hands case, participants achieved median errors of 2.4 mm (IQR = 5.2 mm) in a median time of 8.0 s (IQR = 6.0 s) with the full glove, and even better performance with the haptic rings: median errors of 2.0 mm (IQR = 2.0 mm) in a median time of only 6.0 s (IQR = 5.0 s).
Our outcomes suggest that simple devices like the described haptic rings can outperform glove-like devices in terms of accuracy, execution time, and wearability. The haptic glove likely interferes with hand and finger tracking on the Microsoft HoloLens 2.

15 pages, 3685 KB  
Article
Wearable Glove with Enhanced Sensitivity Based on Push–Pull Optical Fiber Sensor
by Qi Xia, Xiaotong Zhang, Hongye Wang, Libo Yuan and Tingting Yuan
Biosensors 2025, 15(7), 414; https://doi.org/10.3390/bios15070414 - 27 Jun 2025
Cited by 1 | Viewed by 1606
Abstract
Hand motion monitoring plays a vital role in medical rehabilitation, sports training, and human–computer interaction. High-sensitivity wearable biosensors are essential for accurate gesture recognition and precise motion analysis. In this work, we propose a high-sensitivity wearable glove based on a push–pull optical fiber sensor, designed to enhance the sensitivity and accuracy of hand motion biosensing. The sensor employs diagonal core reflectors fabricated at the tip of a four-core fiber, which interconnect symmetric fiber channels to form a push–pull sensing mechanism. This mechanism induces opposite wavelength shifts in fiber Bragg gratings positioned symmetrically under bending, effectively decoupling temperature and strain effects while significantly enhancing bending sensitivity. Experimental results demonstrate superior bending-sensing performance, establishing a solid foundation for high-precision gesture recognition. The integrated wearable glove offers a compact, flexible structure and straightforward fabrication process, with promising applications in precision medicine, intelligent human–machine interaction, virtual reality, and continuous health monitoring.
(This article belongs to the Section Wearable Biosensors)
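The temperature/strain decoupling described here is a common-mode/differential-mode calculation: bending shifts the symmetric gratings in opposite directions while temperature shifts both the same way. A hedged numerical sketch with invented sensitivity coefficients (not the paper's calibration values):

```python
# Assumed sensitivities (illustrative only, not the paper's calibration):
K_eps = 1.2e-3   # wavelength shift per unit bending strain, nm
K_T = 10e-3      # wavelength shift per kelvin, nm/K

def decouple(d_lambda_a, d_lambda_b):
    """Symmetric FBG pair: the difference of the two wavelength shifts
    isolates bending strain, the sum isolates temperature."""
    strain = (d_lambda_a - d_lambda_b) / (2 * K_eps)
    d_temp = (d_lambda_a + d_lambda_b) / (2 * K_T)
    return strain, d_temp

# Forward-simulate a bend (500 strain units) at +3 K, then invert exactly.
eps_true, dT_true = 500.0, 3.0
la = K_eps * eps_true + K_T * dT_true    # grating A: +strain, +temperature
lb = -K_eps * eps_true + K_T * dT_true   # grating B: -strain, +temperature
eps_est, dT_est = decouple(la, lb)
```

In the ideal linear model the inversion is exact; in practice asymmetries between the two gratings limit how cleanly the two effects separate.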

15 pages, 6626 KB  
Article
A Self-Powered Smart Glove Based on Triboelectric Sensing for Real-Time Gesture Recognition and Control
by Shuting Liu, Xuanxuan Duan, Jing Wen, Qiangxing Tian, Lin Shi, Shurong Dong and Liang Peng
Electronics 2025, 14(12), 2469; https://doi.org/10.3390/electronics14122469 - 18 Jun 2025
Cited by 3 | Viewed by 2743
Abstract
Glove-based human–machine interfaces (HMIs) offer a natural, intuitive way to capture finger motions for gesture recognition, virtual interaction, and robotic control. However, many existing systems suffer from complex fabrication, limited sensitivity, and reliance on external power. Here, we present a flexible, self-powered glove HMI based on a minimalist triboelectric nanogenerator (TENG) sensor composed of a conductive fabric electrode and textured Ecoflex layer. Surface micro-structuring via 3D-printed molds enhances triboelectric performance without added complexity, achieving a peak power density of 75.02 μW/cm² and stable operation over 13,000 cycles. The glove system enables real-time LED brightness control via finger-bending kinematics and supports intelligent recognition applications. A convolutional neural network (CNN) achieves 99.2% accuracy in user identification and 97.0% in object classification. By combining energy autonomy, mechanical simplicity, and machine learning capabilities, this work advances scalable, multi-functional HMIs for applications in assistive robotics, augmented reality (AR)/virtual reality (VR) environments, and secure interactive systems.
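A peak power density figure like the 75.02 μW/cm² above is typically obtained from the voltage trace across a load resistor, normalized by electrode area. A minimal sketch with a synthetic pulse and assumed load and geometry (all numbers invented, not the paper's measurements):

```python
import numpy as np

# Illustrative values only; the reported figure comes from the paper's own
# measured trace, matched load, and electrode geometry.
R_load = 1e7            # load resistance, ohms (assumed)
area_cm2 = 4.0          # electrode area, cm^2 (assumed)
t = np.linspace(0, 1, 1000)
v = 60.0 * np.exp(-5 * t) * np.sin(2 * np.pi * 5 * t)  # synthetic TENG pulse, V

p_inst = v**2 / R_load                             # instantaneous power, W
peak_density_uw = p_inst.max() / area_cm2 * 1e6    # peak density, uW/cm^2
```

The same trace, integrated over time instead of maximized, would give average power density, which is usually far lower than the peak for the spiky output TENGs produce.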

23 pages, 2568 KB  
Article
Reinforcement Learning-Driven Digital Twin for Zero-Delay Communication in Smart Greenhouse Robotics
by Cristian Bua, Luca Borgianni, Davide Adami and Stefano Giordano
Agriculture 2025, 15(12), 1290; https://doi.org/10.3390/agriculture15121290 - 15 Jun 2025
Cited by 6 | Viewed by 3095
Abstract
This study presents a networked cyber-physical architecture that integrates a Reinforcement Learning-based Digital Twin (DT) to enable zero-delay interaction between physical and digital components in smart agriculture. The proposed system allows real-time remote control of a robotic arm inside a hydroponic greenhouse, using a sensor-equipped Wearable Glove (SWG) for hand motion capture. The DT operates in three coordinated modes: Real2Digital, Digital2Real, and Digital2Digital, supporting bidirectional synchronization and predictive simulation. A core innovation lies in the use of a Reinforcement Learning model to anticipate hand motions, thereby compensating for network latency and enhancing the responsiveness of the virtual–physical interaction. The architecture was experimentally validated through a detailed communication delay analysis, covering sensing, data processing, network transmission, and 3D rendering. While results confirm the system’s effectiveness under typical conditions, performance may vary under unstable network scenarios. This work represents a promising step toward real-time adaptive DTs in complex smart greenhouse environments.
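The latency-masking idea is to render the hand pose expected one latency interval in the future rather than the last received one. As a much simpler stand-in for the paper's learned RL predictor (explicitly not the authors' method), constant-velocity extrapolation illustrates the principle:

```python
import numpy as np

def predict_ahead(positions, dt, latency):
    """Constant-velocity extrapolation of the last tracked hand position.
    A stand-in for a learned predictor: render the pose expected `latency`
    seconds ahead so the twin feels zero-delay despite network lag."""
    v = (positions[-1] - positions[-2]) / dt     # finite-difference velocity
    return positions[-1] + v * latency

# Hand moving at a steady 0.2 m/s along x, sampled every 10 ms,
# compensated for an assumed 50 ms round-trip latency.
dt, latency = 0.01, 0.05
track = np.array([[0.000, 0.0, 0.0],
                  [0.002, 0.0, 0.0],
                  [0.004, 0.0, 0.0]])
pred = predict_ahead(track, dt, latency)   # x extrapolates to 0.004 + 0.2*0.05
```

A learned predictor earns its keep precisely where this sketch fails: non-constant velocities, direction reversals, and motion patterns specific to the task.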

15 pages, 2437 KB  
Article
The Impacts of Incorporating Virtual Reality and Data Gloves in Exergames on Intrinsic Motivation in Upper-Extremity Assessments: A Study in a Young and Healthy Group
by He Kunze, Noppon Choosri and Supara Grudpan
Multimodal Technol. Interact. 2025, 9(6), 57; https://doi.org/10.3390/mti9060057 - 9 Jun 2025
Viewed by 2304
Abstract
Virtual reality (VR) technology has shown potential as a viable tool for rehabilitation. VR is a well-recognized technology that creates immersive experiences to enhance engagement and encourage more effective participation in activities. Prior work has shown that a standard VR system setup can effectively increase participant motivation for various rehabilitation applications. However, there is a research gap concerning participant motivation when data gloves are integrated into VR to improve hand tracking for rehabilitation. This study presents and assesses an integrated approach utilizing VR and data glove technology to evaluate upper extremity function in a young, healthy population, comparing this to traditional methods. Participants’ intrinsic motivation was measured using the Intrinsic Motivation Inventory (IMI). The findings indicate that the combined immersive environment outperforms conventional practice in most aspects. This research also shows that the data glove is a promising technology for rehabilitation applications, augmenting positive experiences while having no adverse effects on the VR system.

25 pages, 4902 KB  
Article
Hand Dynamics in Healthy Individuals and Spinal Cord Injury Patients During Real and Virtual Box and Block Test
by Verónica Gracia-Ibáñez, Ana de los Reyes-Guzmán, Margarita Vergara, Néstor J. Jarque-Bou and Joaquín-Luis Sancho-Bru
Appl. Sci. 2025, 15(11), 5842; https://doi.org/10.3390/app15115842 - 22 May 2025
Cited by 1 | Viewed by 1052
Abstract
Virtual reality (VR) is a promising tool in spinal cord injury (SCI) rehabilitation, particularly through virtual adaptations of functional tests like the Box and Block test (BBT). However, a comprehensive dynamic comparison between real and virtual BBT is lacking. This study investigates the kinematic and electromyographic (EMG) differences between healthy individuals and SCI patients performing both real (RBBT) and virtual (VBBT) versions of the BBT. An electromagnetic motion-tracking system, an instrumented glove, and surface EMG electrodes were used to capture hand trajectories, joint angles, and forearm muscle activation. The analysis included cycle-averaged and temporal kinematic and EMG parameters. Our findings reveal that both groups showed increased trajectory length and velocity peaks during the VBBT, with more pronounced increases in SCI patients. Unlike healthy individuals, SCI patients also showed increased finger and thumb flexion during VBBT. Cycle-averaged EMG values were lower in healthy participants during VBBT, likely due to reduced motor demands and lack of real grasping. Conversely, SCI patients exhibited higher muscle activity, suggesting impaired coordination and compensatory overactivation. Healthy individuals showed consistent temporal kinematic synergies and muscle activation, whereas they were altered in SCI patients, especially during reaching. These findings highlight the need for rehabilitation strategies to improve motor control and feedback integration.

19 pages, 6442 KB  
Article
Synergy-Based Evaluation of Hand Motor Function in Object Handling Using Virtual and Mixed Realities
by Yuhei Sorimachi, Hiroki Akaida, Kyo Kutsuzawa, Dai Owaki and Mitsuhiro Hayashibe
Sensors 2025, 25(7), 2080; https://doi.org/10.3390/s25072080 - 26 Mar 2025
Viewed by 1510
Abstract
This study introduces a novel system for evaluating hand motor function through synergy-based analysis during object manipulation in virtual and mixed-reality environments. Conventional assessments of hand function are often subjective, relying on visual observation by therapists or patient-reported outcomes. To address these limitations, we developed a system that utilizes the Leap Motion Controller (LMC) to capture finger motion data without the constraints of glove-type devices. Spatial synergies were extracted using principal component analysis (PCA) and Varimax rotation, providing insights into finger motor coordination via sparse decomposition. Additionally, we incorporated the HoloLens 2 to create a mixed-reality object manipulation task that enhances spatial awareness for the user, improving natural interaction with virtual objects. Our results demonstrate that synergy-based analysis allows for the systematic detection of hand movement abnormalities that are not captured through traditional task performance metrics. This system demonstrates promise in advancing rehabilitation by enabling more objective and detailed evaluations of finger motor function, facilitating personalized therapy, and potentially contributing to the early detection of motor impairments in the future.
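The synergy-extraction pipeline named here (PCA followed by Varimax rotation) can be sketched in a few lines of numpy. This uses the standard SVD-based Varimax algorithm on synthetic joint-angle frames; it illustrates the technique, not the authors' code or data:

```python
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-8):
    """Varimax rotation: an orthogonal rotation driving each component's
    squared loadings toward a sparse, interpretable pattern."""
    p, k = loadings.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(n_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - L @ np.diag((L**2).sum(axis=0)) / p))
        R = u @ vt                      # best orthogonal update this step
        var, var_old = s.sum(), var
        if var_old != 0 and var / var_old < 1 + tol:
            break
    return loadings @ R, R

rng = np.random.default_rng(1)
angles = rng.standard_normal((200, 10))   # synthetic frames of 10 joint angles
X = angles - angles.mean(axis=0)          # center before PCA
_, _, Vt = np.linalg.svd(X, full_matrices=False)
synergies = Vt[:3].T                      # top 3 spatial synergies (10 x 3)
rotated, R = varimax(synergies)
```

Because the rotation is orthogonal, the rotated synergies span exactly the same subspace as the PCA components; only the axes within it move, trading variance ordering for sparsity.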

22 pages, 13198 KB  
Article
Design of an Environment for Virtual Training Based on Digital Reconstruction: From Real Vegetation to Its Tactile Simulation
by Alessandro Martinelli, Davide Fabiocchi, Francesca Picchio, Hermes Giberti and Marco Carnevale
Designs 2025, 9(2), 32; https://doi.org/10.3390/designs9020032 - 10 Mar 2025
Cited by 6 | Viewed by 2459
Abstract
The exploitation of immersive simulation platforms to improve traditional training techniques in the agricultural industry would enable year-round accessibility, flexibility, safety, and consistently high-quality training for agricultural operators. An innovative workflow in virtual simulations for training and educational purposes includes an immersive environment in which the operator can interact with plants through haptic interfaces, following instructions imparted by a non-playing character (NPC) instructor. This study simulates the pruning of a complex case study, a hazelnut tree, reproduced in very high detail to offer agricultural operators a more realistic and immersive training environment than those currently available. The process of creating a multisensorial environment starts with an integrated survey of the plant with a laser scanner and photogrammetry and then generates a controllable parametric model, from roots to leaves, with the exact positioning of the original branches. The model is finally inserted into a simulation, where haptic gloves with tactile resistance responsive to model collisions are tested. The results of the experimentation demonstrate the correct execution of this innovative simulation, in which branches and leaves can be cut using shears, with immediate sensory feedback. The project therefore aims to develop this product into a realistic training platform for pruning, but not limited to it, paving the way for high-fidelity simulation of many other types of operations and specializations.
(This article belongs to the Special Issue Mixture of Human and Machine Intelligence in Digital Manufacturing)
