Sensor Systems for Gesture Recognition (3rd Edition)

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 20 January 2026 | Viewed by 17993

Special Issue Editor


Prof. Dr. Giovanni Saggio
Guest Editor
Department of Electronic Engineering, University of Rome Tor Vergata, 00133 Rome, Italy
Interests: wearable sensors; brain–computer interface; motion tracking; gait analysis; sensory glove; biotechnologies

Special Issue Information

Dear Colleagues,

Gesture recognition (GR) aims to interpret human gestures by means of mathematical algorithms. Achieving it would enable widespread applications in many different fields, with impacts that can meaningfully improve our quality of life.

In the real world, GR can interpret the meaning of communication at a distance or can “translate” sign language into written sentences or a synthesized voice. In virtual reality (VR) and augmented reality (AR), GR enables navigation and/or interaction, for instance with the user interface (UI) of a smart TV controlled by hand gestures.

The possible applications are countless; we can mention just a few. In the health field, GR allows us to augment the motion capabilities of people with disabilities or to support surgeons in surgical settings. In gaming, GR frees gamers from input devices such as keyboards, mice, and joysticks. In the automotive industry, GR allows drivers to control car appliances (see the BMW 7 Series). In cinematography, GR is used to computer-generate effects and creatures. In everyday life, GR is a means to interact with smartphone apps (see uSens, Inc. and Gestigon GmbH, for example). In human–robot interaction, GR keeps the operator in safe conditions while his/her gestures become the remote commands for tele-operating a robot. GR facilitates music creation too, converting human movements into sounds.

GR is achieved through (1) data acquisition, (2) the identification of patterns, and (3) interpretation (each of these phases can consist of different stages).
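As a toy illustration of this three-phase chain, the following minimal Python sketch simply wires the phases together; the sensor read-out, the decision rule, and the gesture-to-action mapping are all hypothetical placeholders, not a reference implementation.

```python
import numpy as np

def acquire(n_samples=200, n_channels=6):
    """Phase 1: data acquisition -- stands in for any real sensor read-out."""
    return np.random.randn(n_samples, n_channels)

def identify_pattern(window):
    """Phase 2: pattern identification -- a trained classifier would go here."""
    energy = float(np.mean(window ** 2))
    return "wave" if energy > 1.0 else "rest"  # toy threshold rule

def interpret(label):
    """Phase 3: interpretation -- map the recognized gesture to an action."""
    return {"wave": "next_slide", "rest": "no_op"}.get(label, "no_op")

print(interpret(identify_pattern(acquire())))
```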

Data can be acquired by means of sensor systems based on different measurement principles, such as mechanical, magnetic, optical, acoustic, and inertial principles, or hybrid sensors. Within this frame, optical technologies are historically the most explored (since the 1870s, when animal movements were first analyzed via picture sequences) and represent the current state of the art. However, optical technologies are expensive and require a dedicated room and skilled personnel. Therefore, non-optical technologies, in particular those based on wearable sensors, are becoming increasingly important.

To achieve GR, different methods can be adopted for data segmentation, feature extraction, and classification. These methods depend strongly on the type of data (determined by the adopted sensor system) and on the type of gestures to be recognized.
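For concreteness, a minimal sketch of such a segmentation / feature-extraction / classification chain is given below, assuming scikit-learn and synthetic 3-axis data; the window length, hop size, and feature set are illustrative choices, not recommendations.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
signal = rng.standard_normal((10_000, 3))   # e.g., a 3-axis accelerometer stream
labels = rng.integers(0, 2, size=10_000)    # per-sample gesture labels (toy)

def segment(x, y, win=100, hop=50):
    """Sliding-window segmentation; each window takes the majority label."""
    starts = range(0, len(x) - win + 1, hop)
    X = np.stack([x[s:s + win] for s in starts])
    Y = np.array([np.bincount(y[s:s + win]).argmax() for s in starts])
    return X, Y

def features(windows):
    """Simple per-channel time-domain features: mean, std, peak magnitude."""
    return np.hstack([windows.mean(axis=1), windows.std(axis=1),
                      np.abs(windows).max(axis=1)])

X, Y = segment(signal, labels)
F = features(X)
F_tr, F_te, Y_tr, Y_te = train_test_split(F, Y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(F_tr, Y_tr)
print(f"held-out accuracy: {clf.score(F_te, Y_te):.2f}")
```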

The (supervised or unsupervised) recognition of patterns in data, i.e., of regularities, arrangements, and characteristics, can be approached by machine learning or heuristics and can be linked to artificial intelligence (AI).

In sum, sensor systems for gesture recognition involve an ensemble of topics that can be addressed singly or jointly and that represent a great opportunity for further development, with widespread potential applications.

This call for papers invites technical contributions to a Sensors Special Issue providing an up-to-date overview of “Sensor Systems for Gesture Recognition”. This Special Issue will deal with theory, solutions, and innovative applications. Potential topics include, but are not limited to, the following:

  • Sensor systems;
  • Gesture recognition;
  • Gesture recognition technologies;
  • Gesture extraction methods;
  • Gesture detection sensors;
  • Wearable sensors;
  • Human tracking;
  • Human postures and movements;
  • Motion detection and tracking;
  • Hand gesture recognition;
  • Sign language recognition;
  • Gait analysis;
  • Remote controlling;
  • Pattern recognition for gesture recognition;
  • Machine learning for gesture recognition;
  • Applications of gesture recognition;
  • Algorithms for gesture recognition.

Prof. Dr. Giovanni Saggio
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensor systems
  • gesture recognition
  • gesture recognition technologies
  • gesture extraction methods
  • gesture detection sensors
  • wearable sensors
  • human tracking
  • human postures and movements
  • motion detection and tracking
  • hand gesture recognition
  • sign language recognition
  • gait analysis
  • remote controlling
  • pattern recognition for gesture recognition
  • machine learning for gesture recognition
  • applications of gesture recognition
  • algorithms for gesture recognition

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (7 papers)


Research

17 pages, 3211 KB  
Article
From Static to Dynamic: Complementary Roles of FSR and Piezoelectric Sensors in Wearable Gait and Pressure Monitoring
by Sara Sêco, Vítor Miguel Santos, Sara Valvez, Beatriz Branquinho Gomes, Maria Augusta Neto and Ana Martins Amaro
Sensors 2025, 25(23), 7377; https://doi.org/10.3390/s25237377 - 4 Dec 2025
Abstract
Objective: Plantar pressure abnormalities have a significant impact on mobility and quality of life. Real-time pressure monitoring is essential in clinical and rehabilitation settings for assessing patient progress and refining treatment protocols. Instrumented insoles, and particularly smart insoles, offer a promising solution by collecting biomechanical data during daily activities. However, determining the optimal combination of sensor type, number, and placement remains a key challenge for ensuring accurate and reliable measurements. This study proposes a methodology for identifying the most appropriate sensor technology for wearable insoles, with a focus on data accuracy, system efficiency, and practical applicability. Additionally, it examines the correlation between sensor signals and material behavior during compression testing. Methods: Two insole prototypes underwent compression testing: one equipped with a Force Sensitive Resistor (FSR) sensor and one with a piezoelectric sensor, both positioned at the heel. Three trials per prototype assessed consistency and repeatability. Real-time data acquisition utilized a microcontroller system, and signals were processed using a sixth-order Butterworth low-pass filter with a 5 Hz cutoff frequency to reduce noise. Results: FSR sensors demonstrated stable static responses but saturated rapidly beyond 20 N, with performance degradation observed after repeated loading cycles. Piezoelectric sensors exhibited excellent dynamic sensitivity with sharp voltage peaks but proved unable to measure sustained static pressure. Conclusions: FSR sensors are well-suited for static postural assessment and continuous pressure monitoring, while piezoelectric sensors excel in dynamic gait analysis. This comparative framework establishes a foundation for developing future smart insole systems that deliver accurate, real-time rehabilitation monitoring. Full article
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition (3rd Edition))
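The signal conditioning reported in this abstract (a sixth-order Butterworth low-pass filter with a 5 Hz cutoff) can be reproduced with SciPy roughly as follows; the 100 Hz sampling rate and the zero-phase variant are illustrative assumptions, not details from the paper.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 100.0                                               # assumed sampling rate (Hz)
sos = butter(6, 5.0, btype="low", fs=fs, output="sos")   # 6th order, 5 Hz cutoff

t = np.arange(0, 5, 1 / fs)
raw = np.sin(2 * np.pi * t) + 0.3 * np.random.randn(t.size)  # 1 Hz "load" + noise
# Zero-phase filtering preserves peak timing; a causal sosfilt
# would better match a real-time embedded implementation.
smooth = sosfiltfilt(sos, raw)
```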

21 pages, 10003 KB  
Article
Differentiating Human Falls from Daily Activities Using Machine Learning Methods Based on Accelerometer and Altimeter Sensor Fusion Feature Engineering
by Krunoslav Jurčić and Ratko Magjarević
Sensors 2025, 25(23), 7220; https://doi.org/10.3390/s25237220 - 26 Nov 2025
Viewed by 360
Abstract
This paper presents a detailed analysis of signal data acquired from wearable sensors such as accelerometers and barometric altimeters for human activity recognition, with an emphasis on fall detection. This research addressed two types of activity recognition tasks: a binary classification problem between activities of daily living (ADLs) and simulated fall activities, and a multiclass classification problem involving five different activities (running, walking, sitting down, jumping, and falling). By combining features derived from both sensors, traditional machine learning models such as random forest, support vector machine, XGBoost, logistic regression, and majority-voter models were used for both classification problems. All of the aforementioned methods generally produced better results using the combined features of both sensors than single-sensor models, highlighting the potential of sensor fusion approaches for fall detection. Full article
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition (3rd Edition))
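A hedged sketch of the sensor-fusion idea from this abstract: features from an accelerometer and a barometric altimeter are concatenated before classification, and a fused model is compared against a single-sensor one. Arrays and feature choices are illustrative, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_windows = 400
acc = rng.standard_normal((n_windows, 150, 3))   # 3-axis acceleration windows
alt = rng.standard_normal((n_windows, 150))      # altimeter (height) windows
y = rng.integers(0, 2, n_windows)                # 0 = ADL, 1 = fall (toy labels)

# Accelerometer features: peak acceleration magnitude and per-axis variability.
acc_feats = np.hstack([np.linalg.norm(acc, axis=2).max(axis=1, keepdims=True),
                       acc.std(axis=1)])
# Altimeter features: net height change over the window and its variability.
alt_feats = np.hstack([(alt[:, -1] - alt[:, 0])[:, None],
                       alt.std(axis=1, keepdims=True)])

for name, X in [("accelerometer only", acc_feats),
                ("fused", np.hstack([acc_feats, alt_feats]))]:
    score = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()
    print(f"{name}: {score:.2f}")
```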

27 pages, 4773 KB  
Article
Micro Gesture Recognition with Multi-Dimensional Feature Fusion and CQ-MobileNetV3 Using FMCW Radar
by Wei Xue, Rui Wang, Jianyun Wei and Li Liu
Sensors 2025, 25(22), 6949; https://doi.org/10.3390/s25226949 - 13 Nov 2025
Viewed by 378
Abstract
Radar-based gesture recognition technology has gained increasing attention in the context of contactless human–computer interaction (HCI). Micro gestures have smaller motion amplitudes and shorter durations compared with traditional gestures, which increases the difficulty of motion feature extraction. In addition, improving recognition accuracy while maintaining low computational and storage costs is also a challenge. In this paper, a micro gesture recognition method combining multi-dimensional feature fusion and a lightweight CQ-MobileNetV3 network is proposed. For feature extraction, the range–time map, velocity–time map, and angle–time map of gestures are first constructed. Then, normalization and adaptive filtering are performed to refine the three maps. Finally, the three refined maps are fused to form a range–velocity–angle–time map, which can accurately describe the motion characteristics of gestures. For recognition, a lightweight CQ-MobileNetV3 network is designed. First, the network structure of MobileNetV3 is optimized to reduce computational complexity. Then, the improved convolutional block attention module (CBAM) and the improved self-attention (SA) module are constructed and integrated into different bottleneck blocks to improve recognition accuracy. A series of experiments are conducted with a 77 GHz frequency-modulated continuous wave (FMCW) radar. The results indicate that CQ-MobileNetV3 achieves a recognition accuracy of 97.16% for 14 micro gestures, with a parameter count of 0.207 M and a computational complexity of 0.027 GFLOPs, surpassing several other deep neural networks. Full article
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition (3rd Edition))
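The range–time and velocity–time maps mentioned in this abstract are conventionally obtained with a range FFT per chirp followed by a Doppler FFT across chirps; the sketch below shows that standard processing on a synthetic data cube. Dimensions are illustrative, not the authors' 77 GHz configuration, and angle estimation (which requires multiple receive antennas) is omitted.

```python
import numpy as np

n_frames, n_chirps, n_samples = 64, 32, 128
cube = np.random.randn(n_frames, n_chirps, n_samples)  # stand-in IF samples

range_fft = np.fft.fft(cube, axis=2)                   # range profile per chirp
range_time = np.abs(range_fft).mean(axis=1)            # (frames, range bins)

# Doppler FFT across chirps within each frame, shifted so zero velocity is centered.
doppler_fft = np.fft.fftshift(np.fft.fft(range_fft, axis=1), axes=1)
velocity_time = np.abs(doppler_fft).mean(axis=2)       # (frames, Doppler bins)

print(range_time.shape, velocity_time.shape)
```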

13 pages, 1560 KB  
Article
Towards a Lightweight Arabic Sign Language Translation System
by Mohammed Algabri, Mohamed A. Mekhtiche, Mohamed A. Bencherif and Fahman Saeed
Sensors 2025, 25(17), 5504; https://doi.org/10.3390/s25175504 - 4 Sep 2025
Viewed by 1647
Abstract
There is a pressing need to build a sign-to-text translation system to simplify communication between deaf and non-deaf people. This study investigates the building of a high-performance, lightweight sign language translation system suitable for real-time applications. Two Saudi Sign Language datasets are used for evaluation. We also investigate the effects of the number of signers and number of repetitions in sign language datasets. To this end, eight experiments are conducted in both signer-dependent and signer-independent modes. A comprehensive ablation study is presented to study the impacts of model components, network depth, and the size of the hidden dimension. The best accuracies achieved are 97.7% and 90.7% for the signer-dependent and signer-independent modes, respectively, using the KSU-SSL dataset. Similarly, the model achieves 98.38% and 96.22% for signer-dependent and signer-independent modes using the ArSL dataset. Full article
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition (3rd Edition))

11 pages, 1540 KB  
Article
Extraction of Clinically Relevant Temporal Gait Parameters from IMU Sensors Mimicking the Use of Smartphones
by Aske G. Larsen, Line Ø. Sadolin, Trine R. Thomsen and Anderson S. Oliveira
Sensors 2025, 25(14), 4470; https://doi.org/10.3390/s25144470 - 18 Jul 2025
Viewed by 1299
Abstract
As populations age and workforces decline, the need for accessible health assessment methods grows. The merging of accessible and affordable sensors such as inertial measurement units (IMUs) and advanced machine learning techniques now enables gait assessment beyond traditional laboratory settings. A total of 52 participants walked at three speeds while carrying a smartphone-sized IMU in natural positions (hand, trouser pocket, or jacket pocket). A previously trained Convolutional Neural Network and Long Short-Term Memory (CNN-LSTM)-based machine learning model predicted gait events, which were then used to calculate stride time, stance time, swing time, and double support time. Stride time predictions were highly accurate (<5% error), while stance and swing times exhibited moderate variability and double support time showed the highest errors (>20%). Despite these variations, moderate-to-strong correlations between the predicted and experimental spatiotemporal gait parameters suggest the feasibility of IMU-based gait tracking in real-world settings. These associations preserved inter-subject patterns that are relevant for detecting gait disorders. Our study demonstrated the feasibility of extracting clinically relevant gait parameters using IMU data mimicking smartphone use, especially parameters with longer durations such as stride time. Robustness across sensor locations and walking speeds supports deep learning on single-IMU data as a viable tool for remote gait monitoring. Full article
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition (3rd Edition))
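The temporal parameters named in this abstract follow directly from detected gait events. The sketch below computes them from hypothetical heel-strike (HS) and toe-off (TO) times for both feet; it illustrates only the event-to-parameter arithmetic, not the authors' CNN-LSTM event detector.

```python
import numpy as np

hs = np.array([0.00, 1.10, 2.21])        # heel-strike times, one foot (s)
to = np.array([0.68, 1.79])              # toe-off times, same foot (s)
hs_opp = np.array([0.55, 1.66])          # opposite-foot heel strikes (s)
to_opp = np.array([0.12, 1.23, 2.33])    # opposite-foot toe offs (s)

stride = np.diff(hs)                     # HS to next ipsilateral HS
stance = to - hs[:-1]                    # HS to ipsilateral TO
swing = hs[1:] - to                      # TO to next ipsilateral HS
# Double support occurs twice per stride: HS until contralateral TO,
# plus contralateral HS until ipsilateral TO.
double_support = (to_opp[:2] - hs[:-1]) + (to - hs_opp)

print(stride, stance, swing, double_support)
```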

25 pages, 4082 KB  
Article
Multi-Scale Attention Fusion Gesture-Recognition Algorithm Based on Strain Sensors
by Zhiqiang Zhang, Jun Cai, Xueyu Dai and Hui Xiao
Sensors 2025, 25(13), 4200; https://doi.org/10.3390/s25134200 - 5 Jul 2025
Cited by 1 | Viewed by 822
Abstract
Surface electromyography (sEMG) signals are commonly employed for dynamic-gesture recognition. However, their robustness is often compromised by individual variability and sensor placement inconsistencies, limiting their reliability in complex and unconstrained scenarios. In contrast, strain-gauge signals offer enhanced environmental adaptability by stably capturing joint deformation processes. To address the challenges posed by the multi-channel, temporal, and amplitude-varying nature of strain signals, this paper proposes a lightweight hybrid attention network, termed MACLiteNet. The network integrates a local temporal modeling branch, a multi-scale fusion module, and a channel reconstruction mechanism to jointly capture local dynamic transitions and inter-channel structural correlations. Experimental evaluations conducted on both a self-collected strain-gauge dataset and the public sEMG benchmark NinaPro DB1 demonstrate that MACLiteNet achieves recognition accuracies of 99.71% and 98.45%, respectively, with only 0.22M parameters and a computational cost as low as 0.10 GFLOPs. Extensive experimental results demonstrate that the proposed method achieves superior performance in terms of accuracy, efficiency, and cross-modal generalization, offering a promising solution for building efficient and reliable strain-driven interactive systems. Full article
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition (3rd Edition))
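For orientation, here is a rough PyTorch sketch of two ingredients named in this abstract, parallel multi-scale temporal convolutions fused under channel attention; it is not MACLiteNet itself, whose exact structure is described in the paper, and all layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class MultiScaleAttentionBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Parallel temporal convolutions at different kernel scales.
        self.branches = nn.ModuleList([
            nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in (3, 5, 7)
        ])
        # Squeeze-and-excitation style channel attention over the fused features.
        fused = out_ch * 3
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(fused, fused // 4), nn.ReLU(),
            nn.Linear(fused // 4, fused), nn.Sigmoid(),
        )
        self.head = nn.Conv1d(fused, out_ch, 1)

    def forward(self, x):                      # x: (batch, channels, time)
        z = torch.cat([b(x) for b in self.branches], dim=1)
        w = self.attn(z).unsqueeze(-1)         # per-channel weights in [0, 1]
        return self.head(z * w)

block = MultiScaleAttentionBlock(in_ch=8, out_ch=16)   # e.g., 8 strain channels
print(block(torch.randn(2, 8, 128)).shape)             # -> (2, 16, 128)
```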

21 pages, 5202 KB  
Article
Real-Time American Sign Language Interpretation Using Deep Learning and Keypoint Tracking
by Bader Alsharif, Easa Alalwany, Ali Ibrahim, Imad Mahgoub and Mohammad Ilyas
Sensors 2025, 25(7), 2138; https://doi.org/10.3390/s25072138 - 28 Mar 2025
Cited by 7 | Viewed by 12464
Abstract
Communication barriers pose significant challenges for the Deaf and Hard-of-Hearing (DHH) community, limiting their access to essential services, social interactions, and professional opportunities. To bridge this gap, assistive technologies leveraging artificial intelligence (AI) and deep learning have gained prominence. This study presents a real-time American Sign Language (ASL) interpretation system that integrates deep learning with keypoint tracking to enhance accessibility and foster inclusivity. By combining the YOLOv11 model for gesture recognition with MediaPipe for precise hand tracking, the system achieves high accuracy in identifying ASL alphabet letters in real time. The proposed approach addresses challenges such as gesture ambiguity, environmental variations, and computational efficiency. Additionally, this system enables users to spell out names and locations, further improving its practical applications. Experimental results demonstrate that the model attains a mean Average Precision (mAP@0.5) of 98.2%, with an inference speed optimized for real-world deployment. This research underscores the critical role of AI-driven assistive technologies in empowering the DHH community by enabling seamless communication and interaction. Full article
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition (3rd Edition))
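The keypoint-tracking half of such a system can be prototyped with MediaPipe Hands, as sketched below; the YOLO-based letter classifier from the paper is out of scope here and is represented only by a placeholder comment.

```python
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1,
                                 min_detection_confidence=0.5,
                                 min_tracking_confidence=0.5)
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV captures BGR.
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        for lm in result.multi_hand_landmarks:
            mp.solutions.drawing_utils.draw_landmarks(
                frame, lm, mp.solutions.hands.HAND_CONNECTIONS)
        # A gesture/letter classifier (e.g., the paper's YOLOv11 model)
        # would consume the frame or the 21 landmarks here.
    cv2.imshow("hand keypoints", frame)
    if cv2.waitKey(1) & 0xFF == 27:   # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```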
