

Sensor Systems for Gesture Recognition (3rd Edition)

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 20 January 2026 | Viewed by 7607

Special Issue Editor


Prof. Dr. Giovanni Saggio
Guest Editor
Department of Electronic Engineering, University of Rome Tor Vergata, 00133 Rome, Italy
Interests: wearable sensors; brain–computer interface; motion tracking; gait analysis; sensory glove; biotechnologies

Special Issue Information

Dear Colleagues,

Gesture recognition (GR) aims to interpret human gestures by means of mathematical algorithms. Its achievement will have widespread applications across many fields, with impacts that can meaningfully improve our quality of life.

In the real world, GR can interpret communicative meaning at a distance or can “translate” sign language into written sentences or a synthesized voice. In virtual reality (VR) and augmented reality (AR), GR enables navigation and/or interaction, for instance, with the user interface (UI) of a smart TV controlled by hand gestures.

The possible applications are countless, and we can mention just a few. In the health field, GR allows us to augment the motion capabilities of people with disabilities or to support surgeons in surgical settings. In gaming, GR frees gamers from input devices such as keyboards, mice, and joysticks. In the automotive industry, GR allows drivers to control car appliances (see the BMW 7 Series). In cinematography, GR is used to computer-generate effects or creatures. In everyday life, GR is the means to interact with smartphone apps (see uSens, Inc. and Gestigon GmbH, for example). In human–robot interaction, GR keeps the operator in safe conditions while his/her gestures become the remote commands for tele-operating a robot. GR facilitates music creation too, converting human movements into sounds.

GR is achieved through (1) data acquisition, (2) the identification of patterns, and (3) interpretation (each of these phases can consist of different stages).
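As an illustration of how these three phases fit together, the following minimal Python sketch wires acquisition, pattern identification, and interpretation into one loop. The sensor source and classifier (read_sensor_frame, classify_window) are hypothetical placeholders standing in for any of the technologies discussed below, not a specific system.

```python
# Minimal three-phase GR pipeline sketch; read_sensor_frame() and
# classify_window() are hypothetical placeholders.
from collections import deque

WINDOW = 100  # samples per analysis window (e.g., 1 s at 100 Hz)

def read_sensor_frame():
    """Placeholder: return one multi-channel sensor sample."""
    raise NotImplementedError

def classify_window(window):
    """Placeholder: map a window of samples to a gesture label."""
    raise NotImplementedError

# Interpretation step: map recognized gestures to application actions.
GESTURE_ACTIONS = {"swipe_left": "previous channel", "swipe_right": "next channel"}

def run_pipeline():
    buffer = deque(maxlen=WINDOW)
    while True:
        buffer.append(read_sensor_frame())         # (1) data acquisition
        if len(buffer) == WINDOW:
            label = classify_window(list(buffer))  # (2) pattern identification
            action = GESTURE_ACTIONS.get(label)    # (3) interpretation
            if action is not None:
                print(f"gesture {label!r} -> {action}")
```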

Data can be acquired by means of sensor systems based on different measurement principles, such as mechanical, magnetic, optical, acoustic, and inertial principles, or hybrid sensors. Within this frame, optical technologies are historically the most explored ones (since the 1870s, when animal movements were first analyzed via photographic sequences) and represent the current state of the art. However, optical technologies are expensive and require a dedicated room and skilled personnel. Therefore, non-optical technologies, in particular those based on wearable sensors, are becoming increasingly important.

To achieve GR, different methods can be adopted for data segmentation, feature extraction, and classification. These methods depend strongly on the type of data (according to the adopted type of sensor system) and the type of gestures to be recognized.
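For inertial data, for example, a common concrete choice is fixed-length sliding-window segmentation followed by simple per-channel statistics; the NumPy sketch below illustrates this. The window length, overlap, and feature set are illustrative assumptions, not recommendations of this call.

```python
import numpy as np

def sliding_windows(signal, window, step):
    """Segment a (samples, channels) array into overlapping windows."""
    starts = range(0, len(signal) - window + 1, step)
    return np.stack([signal[s:s + window] for s in starts])

def extract_features(windows):
    """Per-window, per-channel mean, standard deviation, and peak-to-peak."""
    feats = [windows.mean(axis=1), windows.std(axis=1), np.ptp(windows, axis=1)]
    return np.concatenate(feats, axis=1)  # shape: (n_windows, 3 * channels)

# Illustrative data: 10 s of 6-channel IMU samples at 100 Hz,
# segmented into 1 s windows with 50% overlap.
imu = np.random.randn(1000, 6)
X = extract_features(sliding_windows(imu, window=100, step=50))
print(X.shape)  # (19, 18)
```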

The (supervised or unsupervised) recognition of patterns in data, i.e., regularities, arrangements, and characteristics, can be approached by machine learning or heuristics and can be linked to artificial intelligence (AI).
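In the supervised case, such window features can be fed to any standard classifier. The scikit-learn sketch below trains a random forest on synthetic stand-in data; the feature dimensionality and number of gesture classes are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for window features (e.g., from the step above)
# and four hypothetical gesture classes.
X = np.random.randn(200, 18)
y = np.random.randint(0, 4, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```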

In sum, sensor systems for gesture recognition deal with an ensemble of topics that can be addressed individually or jointly and that represent a great opportunity for further development, with widespread potential applications.

This call for papers invites technical contributions to a Sensors Special Issue providing an up-to-date overview of “Sensor Systems for Gesture Recognition”. This Special Issue will deal with theory, solutions, and innovative applications. Potential topics include, but are not limited to, the following:

  • Sensor systems;
  • Gesture recognition;
  • Gesture recognition technologies;
  • Gesture extraction methods;
  • Gesture detection sensors;
  • Wearable sensors;
  • Human tracking;
  • Human postures and movements;
  • Motion detection and tracking;
  • Hand gesture recognition;
  • Sign language recognition;
  • Gait analysis;
  • Remote controlling;
  • Pattern recognition for gesture recognition;
  • Machine learning for gesture recognition;
  • Applications of gesture recognition;
  • Algorithms for gesture recognition.

Prof. Dr. Giovanni Saggio
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensor systems
  • gesture recognition
  • gesture recognition technologies
  • gesture extraction methods
  • gesture detection sensors
  • wearable sensors
  • human tracking
  • human postures and movements
  • motion detection and tracking
  • hand gesture recognition
  • sign language recognition
  • gait analysis
  • remote controlling
  • pattern recognition for gesture recognition
  • machine learning for gesture recognition
  • applications of gesture recognition
  • algorithms for gesture recognition

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found on the MDPI website.


Published Papers (3 papers)


Research

11 pages, 1540 KiB  
Article
Extraction of Clinically Relevant Temporal Gait Parameters from IMU Sensors Mimicking the Use of Smartphones
by Aske G. Larsen, Line Ø. Sadolin, Trine R. Thomsen and Anderson S. Oliveira
Sensors 2025, 25(14), 4470; https://doi.org/10.3390/s25144470 - 18 Jul 2025
Viewed by 389
Abstract
As populations age and workforces decline, the need for accessible health assessment methods grows. The merging of accessible and affordable sensors such as inertial measurement units (IMUs) and advanced machine learning techniques now enables gait assessment beyond traditional laboratory settings. A total of 52 participants walked at three speeds while carrying a smartphone-sized IMU in natural positions (hand, trouser pocket, or jacket pocket). A previously trained Convolutional Neural Network and Long Short-Term Memory (CNN-LSTM)-based machine learning model predicted gait events, which were then used to calculate stride time, stance time, swing time, and double support time. Stride time predictions were highly accurate (<5% error), while stance and swing times exhibited moderate variability and double support time showed the highest errors (>20%). Despite these variations, moderate-to-strong correlations between the predicted and experimental spatiotemporal gait parameters suggest the feasibility of IMU-based gait tracking in real-world settings. These associations preserved inter-subject patterns that are relevant for detecting gait disorders. Our study demonstrated the feasibility of extracting clinically relevant gait parameters using IMU data mimicking smartphone use, especially parameters with longer durations such as stride time. Robustness across sensor locations and walking speeds supports deep learning on single-IMU data as a viable tool for remote gait monitoring.
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition (3rd Edition))
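The temporal parameters named in this abstract follow directly from per-foot gait-event timestamps. The sketch below applies the standard definitions (stride = heel strike to next ipsilateral heel strike; stance = heel strike to toe-off; swing = stride minus stance) to illustrative event times. It is an independent illustration of those definitions, not the authors' CNN-LSTM code; double support time is omitted because it requires events from both feet.

```python
import numpy as np

def temporal_gait_parameters(hs, to):
    """Per-stride times (s) from one foot's heel-strike (hs) and toe-off (to)
    timestamps, assuming events alternate hs, to, hs, to, ..."""
    hs, to = np.asarray(hs), np.asarray(to)
    stride = np.diff(hs)                          # HS(i) -> HS(i+1)
    stance = to[:len(stride)] - hs[:len(stride)]  # HS(i) -> TO(i)
    swing = stride - stance                       # TO(i) -> HS(i+1)
    return stride, stance, swing

# Illustrative timestamps (seconds) for one foot:
hs = [0.00, 1.10, 2.18, 3.30]
to = [0.68, 1.79, 2.90]
stride, stance, swing = temporal_gait_parameters(hs, to)
print(stride)  # stride times: approx. 1.10, 1.08, 1.12
print(stance)  # stance times: approx. 0.68, 0.69, 0.72
print(swing)   # swing times:  approx. 0.42, 0.39, 0.40
```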

25 pages, 4082 KiB  
Article
Multi-Scale Attention Fusion Gesture-Recognition Algorithm Based on Strain Sensors
by Zhiqiang Zhang, Jun Cai, Xueyu Dai and Hui Xiao
Sensors 2025, 25(13), 4200; https://doi.org/10.3390/s25134200 - 5 Jul 2025
Viewed by 331
Abstract
Surface electromyography (sEMG) signals are commonly employed for dynamic-gesture recognition. However, their robustness is often compromised by individual variability and sensor placement inconsistencies, limiting their reliability in complex and unconstrained scenarios. In contrast, strain-gauge signals offer enhanced environmental adaptability by stably capturing joint deformation processes. To address the challenges posed by the multi-channel, temporal, and amplitude-varying nature of strain signals, this paper proposes a lightweight hybrid attention network, termed MACLiteNet. The network integrates a local temporal modeling branch, a multi-scale fusion module, and a channel reconstruction mechanism to jointly capture local dynamic transitions and inter-channel structural correlations. Experimental evaluations conducted on both a self-collected strain-gauge dataset and the public sEMG benchmark NinaPro DB1 demonstrate that MACLiteNet achieves recognition accuracies of 99.71% and 98.45%, respectively, with only 0.22M parameters and a computational cost as low as 0.10 GFLOPs. Extensive experimental results demonstrate that the proposed method achieves superior performance in terms of accuracy, efficiency, and cross-modal generalization, offering a promising solution for building efficient and reliable strain-driven interactive systems.
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition (3rd Edition))
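Two ingredients named in this abstract, parallel multi-scale temporal convolutions fused under channel attention, can be sketched generically. The PyTorch block below is an illustrative squeeze-and-excitation-style composition under those assumptions, not the published MACLiteNet.

```python
import torch
import torch.nn as nn

class MultiScaleChannelAttention(nn.Module):
    """Illustrative block: parallel 1-D convolutions at several kernel
    sizes, concatenated, then reweighted by squeeze-and-excitation-style
    channel attention. Not the published MACLiteNet."""
    def __init__(self, in_ch, branch_ch=16, kernel_sizes=(3, 5, 7), reduction=4):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, branch_ch, k, padding=k // 2) for k in kernel_sizes
        )
        fused = branch_ch * len(kernel_sizes)
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool1d(1),  # squeeze over the time axis
            nn.Conv1d(fused, fused // reduction, 1), nn.ReLU(),
            nn.Conv1d(fused // reduction, fused, 1), nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (batch, channels, time)
        y = torch.cat([b(x) for b in self.branches], dim=1)
        return y * self.attn(y)                # channel-wise reweighting

x = torch.randn(8, 6, 200)                     # e.g., 6 strain channels
print(MultiScaleChannelAttention(6)(x).shape)  # torch.Size([8, 48, 200])
```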

21 pages, 5202 KiB  
Article
Real-Time American Sign Language Interpretation Using Deep Learning and Keypoint Tracking
by Bader Alsharif, Easa Alalwany, Ali Ibrahim, Imad Mahgoub and Mohammad Ilyas
Sensors 2025, 25(7), 2138; https://doi.org/10.3390/s25072138 - 28 Mar 2025
Cited by 1 | Viewed by 6342
Abstract
Communication barriers pose significant challenges for the Deaf and Hard-of-Hearing (DHH) community, limiting their access to essential services, social interactions, and professional opportunities. To bridge this gap, assistive technologies leveraging artificial intelligence (AI) and deep learning have gained prominence. This study presents a real-time American Sign Language (ASL) interpretation system that integrates deep learning with keypoint tracking to enhance accessibility and foster inclusivity. By combining the YOLOv11 model for gesture recognition with MediaPipe for precise hand tracking, the system achieves high accuracy in identifying ASL alphabet letters in real time. The proposed approach addresses challenges such as gesture ambiguity, environmental variations, and computational efficiency. Additionally, this system enables users to spell out names and locations, further improving its practical applications. Experimental results demonstrate that the model attains a mean Average Precision (mAP@0.5) of 98.2%, with an inference speed optimized for real-world deployment. This research underscores the critical role of AI-driven assistive technologies in empowering the DHH community by enabling seamless communication and interaction.
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition (3rd Edition))
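The keypoint-tracking half of such a system can be reproduced with MediaPipe's hand-landmark solution; the webcam loop below extracts and draws the 21 hand landmarks per frame. The YOLOv11 letter classifier described in the paper is not included here; a full system would feed these keypoints (or the detected hand region) to it.

```python
import cv2
import mediapipe as mp

# Illustrative keypoint-tracking loop with MediaPipe Hands; the ASL letter
# classifier described in the paper is out of scope for this sketch.
hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.5)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for lm_set in results.multi_hand_landmarks:
            # 21 normalized (x, y, z) landmarks per detected hand.
            mp.solutions.drawing_utils.draw_landmarks(
                frame, lm_set, mp.solutions.hands.HAND_CONNECTIONS)
    cv2.imshow("hand keypoints", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```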
