Search Results (132)

Search Parameters:
Keywords = finger gesture

14 pages, 1197 KB  
Article
An Inclusive Offline Learning Platform Integrating Gesture Recognition and Local AI Models
by Marius-Valentin Drăgoi, Ionuț Nisipeanu, Roxana-Adriana Puiu, Florentina-Geanina Tache, Teodora-Mihaela Spiridon-Mocioacă, Alexandru Hank and Cozmin Cristoiu
Biomimetics 2025, 10(10), 693; https://doi.org/10.3390/biomimetics10100693 - 14 Oct 2025
Viewed by 478
Abstract
This paper introduces a gesture-controlled conversational interface driven by a local AI model, aimed at improving accessibility and facilitating hands-free interaction within digital environments. The technology utilizes real-time hand gesture recognition via a typical laptop camera and connects with a local AI engine to produce customized learning materials. Users can peruse educational documents, obtain topic summaries, and generate automated quizzes with intuitive gestures, including lateral finger movements, a two-finger gesture, or an open palm, without the need for conventional input devices. Upon selection of a file, the AI model analyzes its entire content, producing a structured summary and a multiple-choice assessment, both of which are immediately saved for subsequent inspection. A unified set of gestures facilitates seamless navigation within the user interface and the opened documents. The system underwent testing with university students and faculty (n = 31), utilizing assessment measures such as gesture detection accuracy, command-response latency, and user satisfaction. The findings demonstrate that the system offers a seamless, hands-free user experience with significant potential for use in accessibility, human–computer interaction, and intelligent interface design. This work advances the creation of multimodal AI-driven educational aids, providing a pragmatic framework for gesture-based document navigation and intelligent content enhancement.
(This article belongs to the Special Issue Biomimicry for Optimization, Control, and Automation: 3rd Edition)
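A minimal sketch of how the camera-based gesture control summarized above might look in code, mapping an open palm or a two-finger pose to placeholder commands. The paper does not name its tracking stack; MediaPipe Hands, the finger-counting heuristic, and the command mapping below are assumptions made purely for illustration.

```python
# Hedged sketch: simple hand poses from a laptop camera mapped to placeholder
# document commands, in the spirit of the interaction described in the abstract.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def count_extended_fingers(hand_landmarks):
    # Rough heuristic: index/middle/ring/pinky count as extended when the tip
    # lies above the PIP joint in image coordinates (palm facing the camera).
    tips, pips = (8, 12, 16, 20), (6, 10, 14, 18)
    lm = hand_landmarks.landmark
    return sum(lm[t].y < lm[p].y for t, p in zip(tips, pips))

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    for _ in range(300):                      # process ~300 frames, then stop
        ok, frame = cap.read()
        if not ok:
            break
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            n = count_extended_fingers(result.multi_hand_landmarks[0])
            if n >= 4:
                print("open palm  -> confirm / open document")   # placeholder
            elif n == 2:
                print("two fingers -> request summary or quiz")  # placeholder
cap.release()
```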

14 pages, 1917 KB  
Article
Moroccan Sign Language Recognition with a Sensory Glove Using Artificial Neural Networks
by Hasnae El Khoukhi, Assia Belatik, Imane El Manaa, My Abdelouahed Sabri, Yassine Abouch and Abdellah Aarab
Digital 2025, 5(4), 53; https://doi.org/10.3390/digital5040053 - 8 Oct 2025
Viewed by 644
Abstract
Every day, countless individuals with hearing or speech disabilities struggle to communicate effectively, as their conditions limit conventional verbal interaction. For them, sign language becomes an essential and often sole tool for expressing thoughts and engaging with others. However, the general public’s limited understanding of sign language poses a major barrier, often resulting in social, educational, and professional exclusion. To bridge this communication gap, the present study proposes a smart wearable glove system designed to translate Arabic Sign Language (ArSL), specifically Moroccan Sign Language (MSL), into written letters in real time. The glove integrates five MPU6050 motion sensors, one on each finger, capable of capturing detailed motion data, including angular velocity and linear acceleration. These motion signals are processed using an Artificial Neural Network (ANN), implemented directly on a Raspberry Pi Pico through embedded machine learning techniques. A custom dataset comprising labeled gestures corresponding to the MSL alphabet was developed for training the model. Following the training phase, the neural network attained a gesture recognition accuracy of 98%, reflecting strong performance in terms of reliability and classification precision. We developed an affordable and portable glove system aimed at improving daily communication for individuals with hearing impairments in Morocco, contributing to greater inclusivity and accessibility.
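A hedged sketch of the kind of ANN classifier the abstract describes, assuming flattened gyroscope/accelerometer features from the five finger-mounted MPU6050s; the layer sizes, class count, and placeholder training data are illustrative assumptions rather than details from the paper.

```python
# Hedged sketch: a small fully connected network classifying MSL alphabet
# gestures from per-finger IMU features, standing in for the ANN that the
# paper trains and then deploys on a Raspberry Pi Pico.
import numpy as np
from sklearn.neural_network import MLPClassifier

NUM_SENSORS = 5        # one MPU6050 per finger
FEATS_PER_SENSOR = 6   # 3-axis angular velocity + 3-axis linear acceleration
NUM_CLASSES = 28       # assumed alphabet size

rng = np.random.default_rng(0)
X = rng.normal(size=(500, NUM_SENSORS * FEATS_PER_SENSOR))  # placeholder features
y = rng.integers(0, NUM_CLASSES, size=500)                   # placeholder labels

clf = MLPClassifier(hidden_layer_sizes=(64, 32), activation="relu",
                    max_iter=500, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:3]))   # predicted letter indices for three sample windows
```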

24 pages, 10828 KB  
Article
Data-Driven Twisted String Actuation for Lightweight and Compliant Anthropomorphic Dexterous Hands
by Zhiyao Zheng, Jingwei Zhan, Zhaochun Li, Yucheng Wang, Chanchan Xu and Xiaojie Wang
Biomimetics 2025, 10(9), 621; https://doi.org/10.3390/biomimetics10090621 - 15 Sep 2025
Viewed by 869
Abstract
Anthropomorphic dexterous hands are crucial for robotic interaction in unstructured environments, yet their performance is often constrained by traditional actuation systems, which suffer from excessive weight, complexity, and limited compliance. Twisted String Actuators (TSAs) offer a promising alternative due to their high transmission ratio, lightweight design, and inherent compliance. However, their strong nonlinearity under variable loads poses significant challenges for high-precision control. This study presents an integrated approach combining data-driven modeling and biomimetic mechanism innovation to overcome these limitations. First, a data-driven modeling approach based on a dual hidden-layer Back Propagation Neural Network (BPNN) is proposed to predict TSA displacement under variable loads (0.1–4.2 kg) with high accuracy. Second, an underactuated five-finger dexterous hand is developed, featuring a biomimetic three-phalanx structure and a tendon–spring transmission mechanism that together achieve an ultra-lightweight design. Finally, a comprehensive experimental platform validates the system’s performance, demonstrating precise bending angle prediction (via integrated BPNN–kinematic modeling), versatile gesture replication, and robust grasping capabilities (with a maximum fingertip force of 7.4 N). This work not only advances TSA modeling for variable-load applications but also provides a new paradigm for designing high-performance, lightweight dexterous hands in robotics.
(This article belongs to the Special Issue Advanced Service Robots: Exoskeleton Robots 2025)
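A hedged sketch of a dual hidden-layer backpropagation regressor mapping motor rotation and load to TSA displacement, in the spirit of the BPNN the abstract describes; the feature set, hidden sizes, and synthetic data are assumptions for illustration.

```python
# Hedged sketch: a two-hidden-layer backpropagation network mapping motor turns
# and applied load to TSA contraction displacement, mirroring the dual
# hidden-layer BPNN in the abstract. Inputs and data are invented.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Placeholder training set: [motor_turns, load_kg] -> displacement_mm
X = np.column_stack([rng.uniform(0, 60, 1000), rng.uniform(0.1, 4.2, 1000)])
y = 0.4 * X[:, 0] - 1.5 * np.log1p(X[:, 1])          # synthetic nonlinear stand-in

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), activation="tanh",
                 max_iter=2000, random_state=0),
)
model.fit(X, y)
print(model.predict([[30.0, 2.0]]))                   # displacement for one load case
```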

21 pages, 1740 KB  
Article
The Dual Functions of Adaptors
by Renia Lopez-Ozieblo
Languages 2025, 10(9), 231; https://doi.org/10.3390/languages10090231 - 10 Sep 2025
Viewed by 928
Abstract
Adaptors, self-touching movements that supposedly lack communicative significance, have often been overlooked by researchers focusing on co-speech gestures. A significant complication in their study arises from the somewhat ambiguous definition of adaptors. Examples of these movements include self-manipulations such as scratching a leg or bringing a hand to the mouth or head, as well as fidgeting, nervous tics, and micro hand or finger movements. Research rooted in psychology indicates a link between adaptors and negative emotional states. However, psycholinguistic approaches suggest that these movements might be related to the communicative task. This study analyzes adaptors in forty Cantonese speakers of English as a second language in monologues and dialogues in face-to-face and online contexts, revealing that adaptors serve functions beyond emotional expression. Our data indicate that adaptors might have cognitive functions. We also identify flutter-like micro-movement adaptors, or “flutters” for short, that may serve interactive functions conveying engagement. These findings challenge the traditional view of adaptors as purely non-communicative. Participants’ self-reports corroborate these interpretations, highlighting the complexity and individual variability in adaptor use. This study advocates for the inclusion of adaptors in gesture analysis, which may enrich understanding of gesture–speech integration and of cognitive and emotional processes in communication.
(This article belongs to the Special Issue Non-representational Gestures: Types, Use, and Functions)

16 pages, 15007 KB  
Article
Analysis of Surface EMG Signals to Control of a Bionic Hand Prototype with Its Implementation
by Adam Pieprzycki, Daniel Król, Bartosz Srebro and Marcin Skobel
Sensors 2025, 25(17), 5335; https://doi.org/10.3390/s25175335 - 28 Aug 2025
Viewed by 1003
Abstract
The primary objective of the presented study is to develop a comprehensive system for the acquisition of surface electromyographic (sEMG) data and to perform time–frequency analysis aimed at extracting discriminative features for the classification of hand gestures intended for the control of a simplified bionic hand prosthesis. The proposed system is designed to facilitate precise finger gesture execution in both prosthetic and robotic hand applications. This article outlines the methodology for multi-channel sEMG signal acquisition and processing, as well as the extraction of relevant features for gesture recognition using artificial neural networks (ANNs) and other well-established machine learning (ML) algorithms. Electromyographic signals were acquired using a prototype LPCXpresso LPC1347 ARM Cortex-M3 (NXP, Eindhoven, the Netherlands) development board in conjunction with surface EMG sensors of the Gravity OYMotion SEN0240 type (DFRobot, Shanghai, China). Signal processing and feature extraction were carried out in the MATLAB R2024b environment, utilizing both the Fourier transform and the Hilbert–Huang transform to extract selected time–frequency characteristics of the sEMG signals. An artificial neural network (ANN) was implemented and trained within the same computational framework. The experimental protocol involved 109 healthy volunteers, each performing five predefined gestures of the right hand. The first electrode was positioned on the brachioradialis (BR) muscle, with subsequent channels arranged laterally outward from the perspective of the participant. Comprehensive analyses were conducted in the time domain, frequency domain, and time–frequency domain to evaluate signal properties and identify features relevant to gesture classification. The bionic hand prototype was fabricated using 3D printing technology with a PETG filament (Spectrum, Pęcice, Poland). Actuation of the fingers was achieved using six MG996R servo motors (TowerPro, Shenzhen, China), each with an angular range of 180°, controlled via a PCA9685 driver board (Adafruit, New York, NY, USA) connected to the main control unit.
(This article belongs to the Section Electronic Sensors)
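A hedged sketch of classic time-domain sEMG features of the kind commonly fed to ANN gesture classifiers; the paper additionally uses Fourier and Hilbert–Huang analyses, which are not reproduced here, and the exact feature set is an assumption.

```python
# Hedged sketch: classic time-domain sEMG features (mean absolute value, RMS,
# zero crossings, waveform length) computed over one analysis window.
import numpy as np

def semg_time_features(x, zc_threshold=0.01):
    """x: one analysis window of a single sEMG channel (1-D array)."""
    mav = np.mean(np.abs(x))                         # mean absolute value
    rms = np.sqrt(np.mean(x ** 2))                   # root mean square
    sign_changes = np.diff(np.signbit(x).astype(int)) != 0
    big_enough = np.abs(np.diff(x)) > zc_threshold   # ignore noise-level crossings
    zc = int(np.sum(sign_changes & big_enough))      # zero-crossing count
    wl = float(np.sum(np.abs(np.diff(x))))           # waveform length
    return np.array([mav, rms, zc, wl])

window = 0.05 * np.random.randn(256)                 # placeholder 256-sample window
print(semg_time_features(window))
```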

20 pages, 16450 KB  
Article
A Smart Textile-Based Tactile Sensing System for Multi-Channel Sign Language Recognition
by Keran Chen, Longnan Li, Qinyao Peng, Mengyuan He, Liyun Ma, Xinxin Li and Zhenyu Lu
Sensors 2025, 25(15), 4602; https://doi.org/10.3390/s25154602 - 25 Jul 2025
Viewed by 1024
Abstract
Sign language recognition plays a crucial role in enabling communication for deaf individuals, yet current methods face limitations such as sensitivity to lighting conditions, occlusions, and lack of adaptability in diverse environments. This study presents a wearable multi-channel tactile sensing system based on smart textiles, designed to capture subtle wrist and finger motions for static sign language recognition. The system leverages triboelectric yarns sewn into gloves and sleeves to construct a skin-conformal tactile sensor array, capable of detecting biomechanical interactions through contact and deformation. Unlike vision-based approaches, the proposed sensor platform operates independently of environmental lighting or occlusions, offering reliable performance in diverse conditions. Experimental validation on American Sign Language letter gestures demonstrates that the proposed system achieves high signal clarity after customized filtering, leading to a classification accuracy of 94.66%. The results also show effective recognition of complex gestures, highlighting the system’s potential for broader applications in human–computer interaction.
(This article belongs to the Special Issue Advanced Tactile Sensors: Design and Applications)
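A hedged sketch of a simple zero-phase low-pass filter of the kind that might precede classification of the triboelectric channel signals; the paper's "customized filtering" is not specified, so the cutoff, order, and sampling rate below are assumptions.

```python
# Hedged sketch: Butterworth low-pass filtering as a stand-in for the signal
# cleaning applied to each triboelectric channel before classification.
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass(signal, fs_hz, cutoff_hz=20.0, order=4):
    b, a = butter(order, cutoff_hz / (fs_hz / 2), btype="low")
    return filtfilt(b, a, signal)   # filtfilt keeps gesture features phase-aligned

fs = 1000.0                                        # assumed sampling rate, Hz
t = np.arange(0, 1, 1 / fs)
raw = np.sin(2 * np.pi * 3 * t) + 0.3 * np.random.randn(t.size)   # gesture + noise
clean = lowpass(raw, fs)
print(clean[:5])
```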

17 pages, 5876 KB  
Article
Optimization of Knitted Strain Sensor Structures for a Real-Time Korean Sign Language Translation Glove System
by Youn-Hee Kim and You-Kyung Oh
Sensors 2025, 25(14), 4270; https://doi.org/10.3390/s25144270 - 9 Jul 2025
Viewed by 743
Abstract
Herein, an integrated system is developed based on knitted strain sensors for real-time translation of sign language into text and audio. To investigate how the structural characteristics of the knit affect the electrical performance, the position of the conductive yarn and the presence or absence of elastic yarn are set as experimental variables, and five distinct sensors are manufactured. A comprehensive analysis of the electrical and mechanical performance, including sensitivity, responsiveness, reliability, and repeatability, reveals that the sensor with a plain-plated-knit structure, no elastic yarn, and the conductive yarn positioned uniformly on the back exhibits the best performance, with a gauge factor (GF) of 88. The sensor exhibits a response time of less than 0.1 s at 50 cycles per minute (cpm), demonstrating that it detects and responds promptly to finger joint bending movements. Moreover, it exhibits stable repeatability and reliability across various angles and speeds, confirming its optimization for sign language recognition applications. Based on this design, an integrated textile-based system is developed by incorporating the sensor, interconnections, snap connectors, and a microcontroller unit (MCU) with built-in Bluetooth Low Energy (BLE) technology into the knitted glove. The complete system successfully recognized 12 Korean Sign Language (KSL) gestures in real time and output them as both text and audio through a dedicated application, achieving a high recognition accuracy of 98.67%. Thus, the present study quantitatively elucidates the structure–performance relationship of a knitted sensor and proposes a wearable system that accounts for real-world usage environments, thereby demonstrating the commercialization potential of the technology.
(This article belongs to the Section Wearables)
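A hedged sketch of how a gauge factor such as the reported GF of 88 is conventionally computed from a strain sensor's resistance response; the resistance and strain values below are illustrative, not measurements from the paper.

```python
# Hedged sketch: gauge factor of a resistive strain sensor, GF = (ΔR/R0) / ε.
def gauge_factor(r0_ohm, r_ohm, strain):
    """strain ε is a dimensionless elongation ratio."""
    return ((r_ohm - r0_ohm) / r0_ohm) / strain

r0 = 1200.0          # unstretched resistance of the conductive-yarn course, ohms (example)
r = 1960.0           # resistance at 0.72% elongation, ohms (example)
print(gauge_factor(r0, r, 0.0072))   # ~88 for these illustrative numbers
```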

12 pages, 8520 KB  
Article
Integrated Haptic Feedback with Augmented Reality to Improve Pinching and Fine Moving of Objects
by Jafar Hamad, Matteo Bianchi and Vincenzo Ferrari
Appl. Sci. 2025, 15(13), 7619; https://doi.org/10.3390/app15137619 - 7 Jul 2025
Cited by 1 | Viewed by 2221
Abstract
Hand gestures are essential for interaction in augmented and virtual reality (AR/VR), allowing users to intuitively manipulate virtual objects and engage with human–machine interfaces (HMIs). Accurate gesture recognition is critical for effective task execution. However, users often encounter difficulties due to the lack of immediate and clear feedback from head-mounted displays (HMDs). Current tracking technologies cannot always guarantee reliable recognition, leaving users uncertain about whether their gestures have been successfully detected. To address this limitation, haptic feedback can play a key role by confirming gesture recognition and compensating for discrepancies between the visual perception of fingertip contact with virtual objects and the actual system recognition. The goal of this paper is to compare a simple vibrotactile ring with a full glove device and to identify their possible improvements for a fundamental gesture such as pinching and finely moving objects using the Microsoft HoloLens 2. Because the pinch is an essential fine motor skill, augmented reality integrated with haptic feedback can notify the user that a gesture has been recognized and compensate for misalignment between the tracked fingertip and virtual objects, yielding better spatial precision. In our experiments, the participants’ median distance error using bare hands over all axes was 10.3 mm (interquartile range [IQR] = 13.1 mm) in a median time of 10.0 s (IQR = 4.0 s). Both haptic devices improved participants’ precision with respect to the bare-hands case: with the full glove, participants achieved median errors of 2.4 mm (IQR = 5.2 mm) in a median time of 8.0 s (IQR = 6.0 s), and with the haptic rings they achieved even better performance, with median errors of 2.0 mm (IQR = 2.0 mm) in a median time of only 6.0 s (IQR = 5.0 s). Our outcomes suggest that simple devices like the described haptic rings can outperform glove-like devices in accuracy, execution time, and wearability. The haptic glove likely interferes with hand and finger tracking on the Microsoft HoloLens 2.

27 pages, 8848 KB  
Article
Empirical Investigation on Practical Robustness of Keystroke Recognition Using WiFi Sensing for Future IoT Applications
by Haoming Wang, Aryan Sharma, Deepak Mishra, Aruna Seneviratne and Eliathamby Ambikairajah
Future Internet 2025, 17(7), 288; https://doi.org/10.3390/fi17070288 - 27 Jun 2025
Viewed by 889
Abstract
The widespread use of WiFi Internet-of-Things (IoT) devices has rendered them valuable tools for detecting information about the physical environment. Recent studies have demonstrated that WiFi Channel State Information (CSI) can detect physical events like movement, occupancy increases, and gestures. This paper empirically investigates the conditions under which WiFi sensing technology remains effective for keystroke detection. To assess in a timely fashion whether this capability raises privacy concerns, experiments are conducted using commodity hardware to evaluate the accuracy of WiFi CSI in detecting keys pressed on a keyboard. Our novel results show that, in an ideal setting with a robotic arm, the position of a specific key can be predicted with 99% accuracy using a simple machine learning classifier. Furthermore, human finger localisation over a key and actual key-press recognition are also achieved, with accuracies reduced to 94% and 89%, respectively. Moreover, our detailed investigation reveals that, to ensure high accuracy, the gap between test objects must be substantial and the test group small. Finally, we show that WiFi sensing technology has limitations in small-scale gesture recognition in generic settings, where proper device positioning is crucial. Specifically, detecting keyed words achieves an overall accuracy of 94% for the forefinger and 87% for multiple fingers when only the right hand is used; accuracy drops to 56% when both hands are used. We conclude that WiFi sensing is effective in controlled indoor environments but is limited by device location and the limited granularity of the sensed objects.
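A hedged sketch of the kind of "simple machine learning classifier" the abstract mentions, applied to flattened windows of CSI subcarrier amplitudes around a key event; the feature layout, window length, and classifier choice are assumptions.

```python
# Hedged sketch: predicting key position from |CSI| windows with a standard
# classifier; the dataset below is a random placeholder, not real CSI.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

N_SUBCARRIERS, WINDOW = 52, 100          # assumed CSI dimensions per key event
rng = np.random.default_rng(1)

X = rng.normal(size=(300, N_SUBCARRIERS * WINDOW))   # flattened amplitude windows
y = rng.integers(0, 10, size=300)                    # key-position labels (10 keys)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("key-position accuracy:", clf.score(X_te, y_te))
```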

33 pages, 5057 KB  
Article
Exploring Preferential Ring-Based Gesture Interaction Across 2D Screen and Spatial Interface Environments
by Hoon Yoon, Hojeong Im, Seonha Chung and Taeha Yi
Appl. Sci. 2025, 15(12), 6879; https://doi.org/10.3390/app15126879 - 18 Jun 2025
Viewed by 1792
Abstract
As gesture-based interactions expand across traditional 2D screens and immersive XR platforms, designing intuitive input modalities tailored to specific contexts becomes increasingly essential. This study explores how users cognitively and experientially engage with gesture-based interactions in two distinct environments: a lean-back 2D television interface and an immersive XR spatial environment. A within-subject experimental design was employed, utilizing a gesture-recognizable smart ring to perform tasks using three gesture modalities: (a) Surface-Touch gestures, (b) mid-air gestures, and (c) micro finger-touch gestures. The results revealed clear, context-dependent user preferences: Surface-Touch gestures were preferred in the 2D context due to their controlled and pragmatic nature, whereas mid-air gestures were favored in the XR context for their immersive, intuitive qualities. Interestingly, longer gesture execution times did not consistently reduce user satisfaction, indicating that compatibility between the gesture modality and the interaction environment matters more than efficiency alone. This study concludes that successful gesture-based interface design must carefully consider contextual alignment, highlighting the nuanced interplay among user expectations, environmental context, and gesture modality. Consequently, these findings provide practical considerations for designing Natural User Interfaces (NUIs) for various interaction contexts.

15 pages, 6626 KB  
Article
A Self-Powered Smart Glove Based on Triboelectric Sensing for Real-Time Gesture Recognition and Control
by Shuting Liu, Xuanxuan Duan, Jing Wen, Qiangxing Tian, Lin Shi, Shurong Dong and Liang Peng
Electronics 2025, 14(12), 2469; https://doi.org/10.3390/electronics14122469 - 18 Jun 2025
Cited by 2 | Viewed by 1528
Abstract
Glove-based human–machine interfaces (HMIs) offer a natural, intuitive way to capture finger motions for gesture recognition, virtual interaction, and robotic control. However, many existing systems suffer from complex fabrication, limited sensitivity, and reliance on external power. Here, we present a flexible, self-powered glove HMI based on a minimalist triboelectric nanogenerator (TENG) sensor composed of a conductive fabric electrode and a textured Ecoflex layer. Surface micro-structuring via 3D-printed molds enhances triboelectric performance without added complexity, achieving a peak power density of 75.02 μW/cm² and stable operation over 13,000 cycles. The glove system enables real-time LED brightness control via finger-bending kinematics and supports intelligent recognition applications. A convolutional neural network (CNN) achieves 99.2% accuracy in user identification and 97.0% in object classification. By combining energy autonomy, mechanical simplicity, and machine learning capabilities, this work advances scalable, multi-functional HMIs for applications in assistive robotics, augmented reality (AR)/virtual reality (VR) environments, and secure interactive systems.
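A hedged sketch of a small 1-D CNN for classifying users or grasped objects from triboelectric glove signals, standing in for the CNN the abstract reports; channel count, window length, and layer sizes are illustrative assumptions.

```python
# Hedged sketch: 1-D CNN over multi-channel triboelectric signal windows.
import tensorflow as tf

N_CHANNELS, WINDOW, N_CLASSES = 5, 200, 10   # e.g., five finger sensors (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_CHANNELS)),
    tf.keras.layers.Conv1D(16, 7, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```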

18 pages, 4185 KB  
Article
An Empirical Study on Pointing Gestures Used in Communication in Household Settings
by Tymon Kukier, Alicja Wróbel, Barbara Sienkiewicz, Julia Klimecka, Antonio Galiza Cerdeira Gonzalez, Paweł Gajewski and Bipin Indurkhya
Electronics 2025, 14(12), 2346; https://doi.org/10.3390/electronics14122346 - 8 Jun 2025
Viewed by 1101
Abstract
Gestures play an integral role in human communication. Our research aims to develop a gesture understanding system that allows for better interpretation of human instructions in household robotics settings. We conducted an experiment with 34 participants who used pointing gestures to teach concepts to an assistant. Gesture data were analyzed using manual annotation (MAXQDA) and the computational methods of pose estimation and k-means clustering. The study revealed that participants tend to maintain consistent pointing styles, with one-handed pointing and index finger gestures being the most common. Gaze and pointing often co-occur, as do leaning forward and pointing. Using our gesture categorization algorithm, we analyzed gesture information values. As the experiment progressed, the information value of gestures remained stable, although the trends varied between participants and were associated with factors such as age and gender. These findings underscore the need for gesture recognition systems to balance generalization with personalization for more effective human–robot interaction.
(This article belongs to the Special Issue Applications of Computer Vision, 3rd Edition)
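A hedged sketch of clustering pointing postures from pose-estimation keypoints with k-means, in the spirit of the computational analysis the abstract mentions; the keypoint selection and number of clusters are assumptions.

```python
# Hedged sketch: k-means over pose keypoints of annotated pointing events.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Each row: normalized (x, y) of shoulder, elbow, wrist, index fingertip for one
# pointing event (8 values), e.g. exported from a pose-estimation model.
keypoints = rng.random((500, 8))                      # placeholder data

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(keypoints)
print(np.bincount(kmeans.labels_))    # events per posture cluster
```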

14 pages, 4259 KB  
Article
Preparation and Performance of a Grid-Based PCL/TPU@MWCNTs Nanofiber Membrane for Pressure Sensor
by Ping Zhu and Qian Lan
Sensors 2025, 25(10), 3201; https://doi.org/10.3390/s25103201 - 19 May 2025
Cited by 1 | Viewed by 1052
Abstract
The intrinsic trade-off among sensitivity, response speed, and measurement range continues to hinder the wider adoption of flexible pressure sensors in areas such as medical diagnostics and gesture recognition. In this work, we propose a grid-structured polycaprolactone/thermoplastic-polyurethane nanofiber pressure sensor decorated with multi-walled carbon nanotubes (PCL/TPU@MWCNTs). By introducing a gradient grid membrane, the strain distribution and the reconstruction of the conductive network can be modulated, thereby alleviating the conflict between sensitivity, response speed, and operating range. First, static mechanical simulations were performed to compare the mechanical responses of planar and grid membranes, confirming that the grid architecture offers superior sensitivity. Next, PCL/TPU@MWCNT nanofiber membranes were fabricated via coaxial electrospinning followed by vacuum filtration and assembled into three-layer planar and grid piezoresistive pressure sensors. Their sensing characteristics were evaluated through simple index-finger motions and mouse-wheel scrolling. Within 0–34 kPa, the sensitivities of the planar and grid sensors reached 1.80 kPa⁻¹ and 2.24 kPa⁻¹, respectively; in the 35–75 kPa range, they were 1.03 kPa⁻¹ and 1.27 kPa⁻¹. The rise/decay times of the output signals were 10.53 ms/11.20 ms for the planar sensor and 9.17 ms/9.65 ms for the grid sensor. Both sensors successfully distinguished active index-finger bending at 0–0.5 Hz. The dynamic range of the grid sensor is 105 dB during index-finger extension and 55 dB during mouse-wheel scrolling, affording higher measurement stability and a broader operating window and fully meeting the requirements for high-precision hand-motion recognition.
(This article belongs to the Special Issue Advanced Flexible Electronics and Wearable Biosensing Systems)
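A hedged sketch of how piezoresistive sensitivity and dynamic-range figures like those quoted above are conventionally computed; the input values are chosen only to reproduce numbers of the same order as the abstract's, not taken from the paper.

```python
# Hedged sketch: sensitivity S = (ΔI/I0) / ΔP and dynamic range DR = 20·log10(max/min).
import numpy as np

def sensitivity(delta_i_over_i0, delta_p_kpa):
    """Piezoresistive sensitivity in kPa^-1 over one segment of the response curve."""
    return delta_i_over_i0 / delta_p_kpa

def dynamic_range_db(max_signal, min_detectable_signal):
    """Dynamic range in dB."""
    return 20 * np.log10(max_signal / min_detectable_signal)

print(sensitivity(76.2, 34.0))          # ≈ 2.24 kPa^-1 over 0–34 kPa (example values)
print(dynamic_range_db(1.78e5, 1.0))    # ≈ 105 dB (example signal ratio)
```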

18 pages, 1082 KB  
Article
ITap: Index Finger Tap Interaction by Gaze and Tabletop Integration
by Jeonghyeon Kim, Jemin Lee, Jung-Hoon Ahn and Youngwon Kim
Sensors 2025, 25(9), 2833; https://doi.org/10.3390/s25092833 - 30 Apr 2025
Viewed by 911
Abstract
This paper presents ITap, a novel interaction method utilizing hand tracking to create a virtual touchpad on a tabletop. ITap facilitates touch interactions such as tapping, dragging, and swiping using the index finger. The technique combines gaze-based object selection with touch gestures, while a pinch gesture performed with the opposite hand activates a manual mode, enabling precise cursor control independently of gaze direction. The primary purpose of this research is to enhance interaction efficiency, reduce user fatigue, and improve accuracy in gaze-based object selection tasks, particularly in complex and cluttered XR environments. Specifically, we addressed two research questions: (1) How does ITap’s manual mode compare with the traditional gaze + pinch method regarding speed and accuracy in object selection tasks across varying distances and densities? (2) Does ITap provide improved user comfort, naturalness, and reduced fatigue compared to the traditional method during prolonged scrolling and swiping tasks? To evaluate these questions, two studies were conducted. The first study compared ITap’s manual mode with the traditional gaze + pinch method for object selection tasks across various distances and in cluttered environments. The second study examined both methods for scrolling and swiping tasks, focusing on user comfort, naturalness, and fatigue. The findings revealed that ITap outperformed gaze + pinch in terms of object selection speed and error reduction, particularly in scenarios involving distant or densely arranged objects. Additionally, ITap demonstrated superior performance in scrolling and swiping tasks, with participants reporting greater comfort and reduced fatigue. The integration of gaze-based input and touch gestures provided by ITap offers a more efficient and user-friendly interaction method than the traditional gaze + pinch technique. Its ability to reduce fatigue and improve accuracy makes it especially suitable for tasks involving complex environments or extended use in XR settings.

19 pages, 1357 KB  
Article
Performance Measurement of Gesture-Based Human–Machine Interfaces Within eXtended Reality Head-Mounted Displays
by Leopoldo Angrisani, Mauro D’Arco, Egidio De Benedetto, Luigi Duraccio, Fabrizio Lo Regio, Michele Sansone and Annarita Tedesco
Sensors 2025, 25(9), 2831; https://doi.org/10.3390/s25092831 - 30 Apr 2025
Cited by 1 | Viewed by 1181
Abstract
This paper proposes a method for measuring the performance of Human–Machine Interfaces based on hand-gesture recognition, implemented within eXtended Reality Head-Mounted Displays. The proposed method leverages a systematic approach, enabling performance measurement in compliance with the Guide to the Expression of Uncertainty in Measurement. As an initial step, a testbed is developed, comprising a series of icons accommodated within the field of view of the considered eXtended Reality Head-Mounted Display. Each icon must be selected through a cue-guided task using the hand gestures under evaluation. Multiple selection cycles involving different individuals are conducted to derive suitable performance metrics. These metrics are derived considering the specific parameters characterizing the hand gestures, as well as the uncertainty contributions arising from intra- and inter-individual variability in the measured quantity values. As a case study, the eXtended Reality Head-Mounted Display Microsoft HoloLens 2 and the finger-tapping gesture were investigated. Without compromising generality, the obtained results show that the proposed method can provide valuable insights into performance trends across individuals and gesture parameters. Moreover, the statistical analyses employed can determine whether increased individual familiarity with the Human–Machine Interface results in faster task completion without a corresponding decrease in accuracy. Overall, the proposed method provides a comprehensive framework for evaluating the compliance of hand-gesture-based Human–Machine Interfaces with target performance specifications related to specific application contexts.
(This article belongs to the Special Issue Advances in Wearable Sensors for Continuous Health Monitoring)
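A hedged sketch of a GUM-style Type A evaluation separating intra- and inter-individual variability in task-completion times, loosely following the kind of analysis the abstract describes; all numbers are invented for illustration.

```python
# Hedged sketch: repeatability (within-person) vs. inter-individual spread, plus
# a Type A standard uncertainty on the grand mean of completion times.
import numpy as np

# rows = individuals, columns = repeated selection cycles by the same individual
times_s = np.array([[1.9, 2.1, 2.0, 2.2],
                    [2.6, 2.4, 2.5, 2.7],
                    [1.7, 1.8, 1.6, 1.9]])

per_person_mean = times_s.mean(axis=1)
repeatability = np.sqrt(times_s.var(axis=1, ddof=1).mean())          # intra-individual
u_mean = per_person_mean.std(ddof=1) / np.sqrt(len(per_person_mean)) # Type A, inter-individual

print(f"repeatability (1 sigma): {repeatability:.2f} s")
print(f"mean completion time: {per_person_mean.mean():.2f} s ± {u_mean:.2f} s (k = 1)")
```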
