Search Results (27)

Search Parameters:
Keywords = sign glove

20 pages, 16450 KiB  
Article
A Smart Textile-Based Tactile Sensing System for Multi-Channel Sign Language Recognition
by Keran Chen, Longnan Li, Qinyao Peng, Mengyuan He, Liyun Ma, Xinxin Li and Zhenyu Lu
Sensors 2025, 25(15), 4602; https://doi.org/10.3390/s25154602 - 25 Jul 2025
Viewed by 319
Abstract
Sign language recognition plays a crucial role in enabling communication for deaf individuals, yet current methods face limitations such as sensitivity to lighting conditions, occlusions, and lack of adaptability in diverse environments. This study presents a wearable multi-channel tactile sensing system based on smart textiles, designed to capture subtle wrist and finger motions for static sign language recognition. The system leverages triboelectric yarns sewn into gloves and sleeves to construct a skin-conformal tactile sensor array, capable of detecting biomechanical interactions through contact and deformation. Unlike vision-based approaches, the proposed sensor platform operates independently of environmental lighting or occlusions, offering reliable performance in diverse conditions. Experimental validation on American Sign Language letter gestures demonstrates that the proposed system achieves high signal clarity after customized filtering, leading to a classification accuracy of 94.66%. Experimental results show effective recognition of complex gestures, highlighting the system’s potential for broader applications in human-computer interaction.
(This article belongs to the Special Issue Advanced Tactile Sensors: Design and Applications)
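
As a rough illustration of the signal path this abstract describes, the following sketch low-pass filters a multi-channel triboelectric recording and extracts a simple per-channel feature; the channel count, sampling rate, cutoff, and peak-to-peak feature are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000          # sampling rate in Hz (assumed, not from the paper)
N_CHANNELS = 16    # tactile channels across glove and sleeve (assumed)

def denoise(signals, cutoff=20.0):
    """Zero-phase low-pass filtering applied to each triboelectric channel."""
    b, a = butter(4, cutoff / (FS / 2), btype="low")
    return np.stack([filtfilt(b, a, ch) for ch in signals])

raw = np.random.randn(N_CHANNELS, 2 * FS)         # stand-in for recorded data
clean = denoise(raw)
features = clean.max(axis=1) - clean.min(axis=1)  # per-channel peak-to-peak
```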

17 pages, 5876 KiB  
Article
Optimization of Knitted Strain Sensor Structures for a Real-Time Korean Sign Language Translation Glove System
by Youn-Hee Kim and You-Kyung Oh
Sensors 2025, 25(14), 4270; https://doi.org/10.3390/s25144270 - 9 Jul 2025
Viewed by 298
Abstract
Herein, an integrated system is developed based on knitted strain sensors for real-time translation of sign language into text and audio voices. To investigate how the structural characteristics of the knit affect the electrical performance, the position of the conductive yarn and the presence or absence of elastic yarn are set as experimental variables, and five distinct sensors are manufactured. A comprehensive analysis of the electrical and mechanical performance, including sensitivity, responsiveness, reliability, and repeatability, reveals that the sensor with a plain-plated-knit structure, no elastic yarn included, and the conductive yarn positioned uniformly on the back exhibits the best performance, with a gauge factor (GF) of 88. The sensor exhibits a response time of less than 0.1 s at 50 cycles per minute (cpm), demonstrating that it detects and responds promptly to finger joint bending movements. Moreover, it exhibits stable repeatability and reliability across various angles and speeds, confirming its optimization for sign language recognition applications. Based on this design, an integrated textile-based system is developed by incorporating the sensor, interconnections, snap connectors, and a microcontroller unit (MCU) with built-in Bluetooth Low Energy (BLE) technology into the knitted glove. The complete system successfully recognized 12 Korean Sign Language (KSL) gestures in real time and output them as both text and audio through a dedicated application, achieving a high recognition accuracy of 98.67%. Thus, the present study quantitatively elucidates the structure–performance relationship of a knitted sensor and proposes a wearable system that accounts for real-world usage environments, thereby demonstrating the commercialization potential of the technology.
(This article belongs to the Section Wearables)
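
The gauge factor quoted here follows the standard definition GF = (ΔR/R0)/ε; the snippet below computes it from made-up resistance readings (only the GF = 88 figure comes from the paper).

```python
def gauge_factor(r0, r_strained, strain):
    """GF = (ΔR / R0) / ε, the standard sensitivity metric for strain sensors."""
    return ((r_strained - r0) / r0) / strain

# Illustrative numbers: the resistance values and strain below are made up
# to reproduce the paper's reported GF of 88.
print(gauge_factor(r0=100.0, r_strained=188.0, strain=0.01))  # -> 88.0
```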

34 pages, 5266 KiB  
Article
Energy-, Cost-, and Resource-Efficient IoT Hazard Detection System with Adaptive Monitoring
by Chiang Liang Kok, Jovan Bowen Heng, Yit Yan Koh and Tee Hui Teo
Sensors 2025, 25(6), 1761; https://doi.org/10.3390/s25061761 - 12 Mar 2025
Cited by 4 | Viewed by 1515
Abstract
Hazard detection in industrial and public environments is critical for ensuring safety and regulatory compliance. This paper presents an energy-efficient, cost-effective IoT-based hazard detection system utilizing an ESP32-CAM microcontroller integrated with temperature (DHT22) and motion (PIR) sensors. A custom-built convolutional neural network (CNN) deployed on a Flask server enables real-time classification of hazard signs, including “high voltage”, “radioactive”, “corrosive”, “flammable”, “no hazard”, “no smoking”, and “wear gloves”. The CNN model, optimized for embedded applications, achieves high classification accuracy with an F1 score of 85.9%, ensuring reliable detection in diverse environmental conditions. A key feature of the system is its adaptive monitoring mechanism, which dynamically adjusts image capture frequency based on detected activity, leading to 31–37% energy savings compared to continuous monitoring approaches. This mechanism ensures efficient power usage by minimizing redundant image captures while maintaining real-time responsiveness in high-activity scenarios. Unlike traditional surveillance systems, which rely on high-cost infrastructure, centralized monitoring, and subscription-based alerting mechanisms, the proposed system operates at a total cost of SGD 38.60 (~USD 28.50) per unit and leverages free Telegram notifications for real-time alerts. The system was validated through experimental testing, demonstrating high classification accuracy, energy efficiency, and cost-effectiveness. In this study, a hazard refers to any environmental condition or object that poses a potential safety risk, including electrical hazards, chemical spills, fire outbreaks, and industrial dangers. The proposed system provides a scalable and adaptable solution for hazard detection in resource-constrained environments, such as construction sites, industrial facilities, and remote locations. The proposed approach effectively balances accuracy, real-time responsiveness, and low-power operation, making it suitable for large-scale deployment.
(This article belongs to the Special Issue Sensors Based SoCs, FPGA in IoT Applications)
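
The adaptive monitoring idea is straightforward to sketch: poll the PIR sensor and lengthen the capture interval when nothing moves. The intervals and the helper callables (read_pir, capture_and_classify) below are hypothetical stand-ins for the ESP32 firmware, not the authors' code.

```python
import time

FAST_INTERVAL = 2    # seconds between captures under activity (assumed)
SLOW_INTERVAL = 30   # idle back-off interval (assumed)

def monitor(read_pir, capture_and_classify):
    """Poll the PIR sensor; capture often under activity, rarely when idle."""
    while True:
        motion = read_pir()           # hypothetical hardware poll
        if motion:
            capture_and_classify()    # hypothetical: frame -> CNN server
        time.sleep(FAST_INTERVAL if motion else SLOW_INTERVAL)
```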

21 pages, 5979 KiB  
Article
Sign Language Sentence Recognition Using Hybrid Graph Embedding and Adaptive Convolutional Networks
by Pathomthat Chiradeja, Yijuan Liang and Chaiyan Jettanasen
Appl. Sci. 2025, 15(6), 2957; https://doi.org/10.3390/app15062957 - 10 Mar 2025
Cited by 1 | Viewed by 1015
Abstract
Sign language plays a crucial role in bridging communication barriers within the Deaf community. Recognizing sign language sentences remains a significant challenge due to their complex structure, variations in signing styles, and temporal dynamics. This study introduces an innovative sign language sentence recognition (SLSR) approach using Hybrid Graph Embedding and Adaptive Convolutional Networks (HGE-ACN) specifically developed for single-handed wearable glove devices. The system relies on sensor data from a glove with six-axis inertial sensors and five-finger curvature sensors. The proposed HGE-ACN framework integrates graph-based embeddings to capture dynamic spatial–temporal relationships in motion and curvature data. At the same time, the Adaptive Convolutional Networks extract robust glove-based features to handle variations in signing speed, transitions between gestures, and individual signer styles. The lightweight design enables real-time processing and enhances recognition accuracy, making it suitable for practical use. Extensive experiments demonstrate that HGE-ACN achieves superior accuracy and computational efficiency compared to existing glove-based recognition methods. The system maintains robustness under various conditions, including inconsistent signing speeds and environmental noise. This work has promising applications in real-time assistive tools, educational technologies, and human–computer interaction systems, facilitating more inclusive and accessible communication platforms for the deaf and hard-of-hearing communities. Future work will explore multi-lingual sign language recognition and real-world deployment across diverse environments.

29 pages, 4988 KiB  
Article
Interaction Glove for 3-D Virtual Environments Based on an RGB-D Camera and Magnetic, Angular Rate, and Gravity Micro-Electromechanical System Sensors
by Pontakorn Sonchan, Neeranut Ratchatanantakit, Nonnarit O-Larnnithipong, Malek Adjouadi and Armando Barreto
Information 2025, 16(2), 127; https://doi.org/10.3390/info16020127 - 9 Feb 2025
Viewed by 3478
Abstract
This paper presents the theoretical foundation, practical implementation, and empirical evaluation of a glove for interaction with 3-D virtual environments. At the dawn of the “Spatial Computing Era”, where users continuously interact with 3-D Virtual and Augmented Reality environments, the need for a practical and intuitive interaction system that can efficiently engage 3-D elements is becoming pressing. Over the last few decades, there have been attempts to provide such an interaction mechanism using a glove. However, glove systems are currently not in widespread use due to their high cost and, we propose, due to their inability to sustain high levels of performance under certain situations. Performance deterioration has been observed due to the distortion of the local magnetic field caused by ordinary ferromagnetic objects present near the glove’s operating space. There are several areas where reliable hand-tracking gloves could provide a next generation of improved solutions, such as American Sign Language training and automatic translation to text, and training and evaluation for activities that require high motor skills in the hands (e.g., playing some musical instruments, training of surgeons, etc.). While the use of a hand-tracking glove toward these goals seems intuitive, some of the currently available glove systems may not meet the accuracy and reliability levels required for those use cases. This paper describes our concept of an interaction glove instrumented with miniature magnetic, angular rate, and gravity (MARG) sensors and aided by a single camera. The camera used is an off-the-shelf red, green, and blue–depth (RGB-D) camera. We describe a proof-of-concept implementation of the system using our custom “GMVDK” orientation estimation algorithm. This paper also describes the glove’s empirical evaluation with human-subject performance tests. The results show that the prototype glove, using the GMVDK algorithm, is able to operate without performance losses, even in magnetically distorted environments.
(This article belongs to the Special Issue Multimodal Human-Computer Interaction)

14 pages, 721 KiB  
Article
Determinants of Safe Pesticide Handling and Application Among Rural Farmers
by Olamide Stephanie Oshingbade, Haruna Musa Moda, Shade John Akinsete, Mumuni Adejumo and Norr Hassan
Int. J. Environ. Res. Public Health 2025, 22(2), 211; https://doi.org/10.3390/ijerph22020211 - 2 Feb 2025
Viewed by 1243
Abstract
The study investigated the determinants of safe pesticide handling and application among farmers in rural communities of Oyo State, southwestern Nigeria. A cross-sectional design utilizing 2-stage cluster sampling techniques was used to select Ido and Ibarapa Central Local Government Areas and to interview 383 farmers via a structured questionnaire. Data were analyzed using descriptive statistics and logistic regression at p = 0.05. Results showed that 41.8% of the farmers had been working with pesticides on farms for at least 5 years, 33.0% attended training on pesticide application, 73.5% had good safety and health knowledge, and 72.3% had safe pesticide handling and application practices. About half (50.2%) stated that they wear coveralls, gloves, and masks to protect their body, face, and hands when applying pesticides, 9.8% use empty pesticide containers for other purposes in the house/farm, while 11.5% blow the nozzle with their mouth to unclog it if it becomes blocked. The three major health symptoms reported by the participants were skin irritation (65.0%), itchy eyes (51.3%), and excessive sweating (32.5%). Having attended training on pesticide application and use increased the odds of safe pesticide handling and application (OR = 2.821; CI = 1.513–5.261). Farmers with good knowledge (OR = 5.494; CI = 3.385–8.919) were more likely to practice safe pesticide handling and application than those with poor knowledge about pesticide use. It is essential to develop and deliver mandatory, comprehensive training programs for farmers on the impacts of pesticides on health and the environment, covering safe and sustainable handling, application, and disposal of pesticides with proper waste management, as well as recognizing early symptoms of exposure and seeking medical assistance. There is also an urgent need to strengthen policy to regulate pesticide use and limit farmers’ access to banned products.
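
The odds ratios reported above are the usual exponentiated logistic-regression coefficients; the sketch below reproduces that calculation on simulated (not survey) data with assumed effect sizes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(383, 2))           # [attended_training, good_knowledge]
logit = -0.5 + 1.0 * X[:, 0] + 1.7 * X[:, 1]    # assumed true effects
y = rng.random(383) < 1 / (1 + np.exp(-logit))  # simulated safe-practice outcome

model = LogisticRegression().fit(X, y)
print(np.exp(model.coef_))   # odds ratios, one per predictor
```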

26 pages, 4673 KiB  
Article
Utilizing IoMT-Based Smart Gloves for Continuous Vital Sign Monitoring to Safeguard Athlete Health and Optimize Training Protocols
by Mustafa Hikmet Bilgehan Ucar, Arsene Adjevi, Faruk Aktaş and Serdar Solak
Sensors 2024, 24(20), 6500; https://doi.org/10.3390/s24206500 - 10 Oct 2024
Cited by 1 | Viewed by 2450
Abstract
This paper presents the development of a vital sign monitoring system designed specifically for professional athletes, with a focus on runners. The system aims to enhance athletic performance and mitigate health risks associated with intense training regimens. It comprises a wearable glove that monitors key physiological parameters such as heart rate, blood oxygen saturation (SpO2), body temperature, and gyroscope data used to calculate linear speed, among other relevant metrics. Additionally, environmental variables, including ambient temperature, are tracked. To ensure accuracy, the system incorporates an onboard filtering algorithm to minimize false positives, allowing for timely intervention during instances of physiological abnormalities. The study demonstrates the system’s potential to optimize performance and protect athlete well-being by facilitating real-time adjustments to training intensity and duration. The experimental results show that the system adheres to the classical “220 − age” formula for calculating maximum heart rate, responds promptly to predefined thresholds, and outperforms a moving average filter in noise reduction, with the Gaussian filter delivering superior performance.
(This article belongs to the Section Internet of Things)
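
The heart-rate logic lends itself to a short sketch: estimate the maximum heart rate with 220 − age, smooth the trace, and compare the two filters the abstract mentions; the sigma, window size, and threshold below are arbitrary illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def max_heart_rate(age: int) -> int:
    """The classical '220 - age' estimate used for alert thresholds."""
    return 220 - age

hr = 150 + 5 * np.random.randn(600)             # noisy 10 min HR trace at 1 Hz
moving_avg = np.convolve(hr, np.ones(9) / 9, mode="same")  # window of 9 (assumed)
smoothed = gaussian_filter1d(hr, sigma=3)                  # sigma of 3 (assumed)

alert = hr.max() > 0.9 * max_heart_rate(age=28)  # sketch of a threshold check
```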

20 pages, 4716 KiB  
Article
Novel Wearable System to Recognize Sign Language in Real Time
by İlhan Umut and Ümit Can Kumdereli
Sensors 2024, 24(14), 4613; https://doi.org/10.3390/s24144613 - 16 Jul 2024
Cited by 2 | Viewed by 3194
Abstract
The aim of this study is to develop a practical software solution for real-time recognition of sign language words using two arms. This will facilitate communication between hearing-impaired individuals and those who can hear. We are aware of several sign language recognition systems developed using different technologies, including cameras, armbands, and gloves. However, the system we propose in this study stands out for its practicality, utilizing surface electromyography (muscle activity) and inertial measurement unit (motion dynamics) data from both arms. We address the drawbacks of other methods, such as high costs, low accuracy due to ambient light and obstacles, and complex hardware requirements, which have limited their practical application. Our software can run on different operating systems using digital signal processing and machine learning methods specific to this study. For the test, we created a dataset of 80 words based on their frequency of use in daily life and performed a thorough feature extraction process. We tested the recognition performance using various classifiers and parameters and compared the results. The random forest algorithm emerged as the most successful, achieving a remarkable 99.875% accuracy, while the naïve Bayes algorithm had the lowest success rate with 87.625% accuracy. The new system promises to significantly improve communication for people with hearing disabilities and ensures seamless integration into daily life without compromising user comfort or lifestyle quality.
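
A plausible shape for this kind of sEMG/IMU pipeline is windowed time-domain features fed to a random forest; the feature set, window size, and channel count below are assumptions, and the data are random stand-ins for the 80-word dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def window_features(win):                  # win: (samples, channels)
    mav = np.abs(win).mean(axis=0)         # mean absolute value
    rms = np.sqrt((win ** 2).mean(axis=0)) # root mean square
    zc = (np.diff(np.sign(win), axis=0) != 0).sum(axis=0)  # zero crossings
    return np.concatenate([mav, rms, zc])

# Random stand-in data: 800 windows of 200 samples x 16 channels, 80 classes.
X = np.array([window_features(np.random.randn(200, 16)) for _ in range(800)])
y = np.random.randint(0, 80, size=800)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
print(clf.score(Xte, yte))
```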

25 pages, 1263 KiB  
Article
Cognitive Classifier of Hand Gesture Images for Automated Sign Language Recognition: Soft Robot Assistance Based on Neutrosophic Markov Chain Paradigm
by Muslem Al-Saidi, Áron Ballagi, Oday Ali Hassen and Saad M. Saad
Computers 2024, 13(4), 106; https://doi.org/10.3390/computers13040106 - 22 Apr 2024
Cited by 4 | Viewed by 2318
Abstract
In recent years, Sign Language Recognition (SLR) has become an additional topic of discussion in the human–computer interface (HCI) field. The most significant difficulty confronting SLR is finding algorithms that will scale effectively with a growing vocabulary size and a limited supply of training data for signer-independent applications. Due to its sensitivity to shape information, automated SLR based on hidden Markov models (HMMs) cannot characterize the confusing distributions of the observations in gesture features with sufficiently precise parameters. In order to simulate uncertainty in hypothesis spaces, many scholars provide an extension of the HMMs, utilizing higher-order fuzzy sets to generate interval-type-2 fuzzy HMMs. This expansion is helpful because it brings the uncertainty and fuzziness of conventional HMM mapping under control. The neutrosophic sets are used in this work to deal with indeterminacy in a practical SLR setting. Existing interval-type-2 fuzzy HMMs cannot consider uncertain information that includes indeterminacy. However, the neutrosophic hidden Markov model successfully identifies the best route between states when there is vagueness. The three neutrosophic membership functions (truth, indeterminacy, and falsity grades) provide more layers of autonomy for assessing the HMM’s uncertainty. This approach could be helpful for an extensive vocabulary and hence seeks to solve the scalability issue. In addition, it may function independently of the signer, without needing data gloves or any other input devices. The experimental results demonstrate that the neutrosophic HMM is nearly as computationally demanding as the fuzzy HMM but achieves similar performance and is more robust to gesture variations.
(This article belongs to the Special Issue Uncertainty-Aware Artificial Intelligence)
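
To make the neutrosophic idea concrete, here is a toy Viterbi decode in which each emission carries truth/indeterminacy/falsity grades; the scoring rule (truth minus falsity, discounted by indeterminacy) and the additive dynamic program are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def neutrosophic_score(t, i, f):
    """Collapse truth/indeterminacy/falsity grades into one emission score."""
    return (t - f) * (1.0 - i)

def viterbi(trans, emit_tif, init):
    """trans: (S, S) transition scores; emit_tif: (T, S, 3); init: (S,)."""
    emit = neutrosophic_score(emit_tif[..., 0], emit_tif[..., 1], emit_tif[..., 2])
    score, back_ptrs = init + emit[0], []
    for t in range(1, emit.shape[0]):
        step = score[:, None] + trans            # score via every predecessor
        back_ptrs.append(step.argmax(axis=0))    # best predecessor per state
        score = step.max(axis=0) + emit[t]
    states = [int(score.argmax())]
    for bp in reversed(back_ptrs):               # trace the best path back
        states.append(int(bp[states[-1]]))
    return states[::-1]
```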

29 pages, 16023 KiB  
Article
Progression Learning Convolution Neural Model-Based Sign Language Recognition Using Wearable Glove Devices
by Yijuan Liang, Chaiyan Jettanasen and Pathomthat Chiradeja
Computation 2024, 12(4), 72; https://doi.org/10.3390/computation12040072 - 3 Apr 2024
Cited by 5 | Viewed by 2923
Abstract
Communication among hard-of-hearing individuals presents challenges, and to facilitate communication, sign language is preferred. Many people in the deaf and hard-of-hearing communities struggle to understand sign language due to their lack of sign-mode knowledge. Contemporary researchers utilize glove and vision-based approaches to capture hand movement and analyze communication; most researchers use vision-based techniques to identify disabled people’s communication because the glove-based approach causes individuals to feel uncomfortable. However, the glove solution successfully identifies motion and hand dexterity, even though it only recognizes the numbers, words, and letters being communicated, failing to identify sentences. Therefore, artificial intelligence (AI) is integrated with the sign language prediction system to identify disabled people’s sentence-based communication. Here, wearable glove-related sign language information is utilized to analyze the recognition system’s efficiency. The collected inputs are processed using progression learning deep convolutional neural networks (PLD-CNNs). The technique known as progression learning processes sentences by dividing them into words, creating a training dataset. The model assists in efforts to understand sign language sentences. A memetic optimization algorithm is used to calibrate network performance, minimizing recognition optimization problems. This process maximizes convergence speed and reduces translation difficulties, enhancing the overall learning process. The created system is developed using the MATLAB (R2021b) tool, and its proficiency is evaluated using performance metrics. The experimental findings illustrate that the proposed system works by recognizing sign language movements with excellent precision, recall, accuracy, and F1 scores, rendering it a powerful tool in the detection of gestures in general and sign-based sentences in particular.
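
The "progression learning" data step, splitting sentence recordings into word-level training examples, can be sketched directly; the pre-supplied word boundaries below are an assumption, since the abstract does not say how segmentation is done.

```python
def explode_sentences(recordings):
    """recordings: [(signal, [(word, start, end), ...]), ...] -> word samples."""
    samples = []
    for signal, boundaries in recordings:
        for word, start, end in boundaries:
            samples.append((signal[start:end], word))  # one example per word
    return samples

# Hypothetical sentence recording with pre-supplied word boundaries.
dataset = explode_sentences([
    ([0.1] * 300, [("I", 0, 90), ("EAT", 90, 200), ("RICE", 200, 300)]),
])
```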

31 pages, 20979 KiB  
Article
Give Me a Sign: Using Data Gloves for Static Hand-Shape Recognition
by Philipp Achenbach, Sebastian Laux, Dennis Purdack, Philipp Niklas Müller and Stefan Göbel
Sensors 2023, 23(24), 9847; https://doi.org/10.3390/s23249847 - 15 Dec 2023
Cited by 11 | Viewed by 2262
Abstract
Human-to-human communication via the computer is mainly carried out using a keyboard or microphone. In the field of virtual reality (VR), where the most immersive experience possible is desired, the use of a keyboard contradicts this goal, while the use of a microphone is not always desirable (e.g., silent commands during task-force training) or simply not possible (e.g., if the user has hearing loss). Data gloves help to increase immersion within VR, as they correspond to our natural interaction. At the same time, they offer the possibility of accurately capturing hand shapes, such as those used in non-verbal communication (e.g., thumbs up, okay gesture, …) and in sign language. In this paper, we present a hand-shape recognition system using Manus Prime X data gloves, including data acquisition, data preprocessing, and data classification to enable nonverbal communication within VR. We investigate the impact on accuracy and classification time of using an outlier detection and a feature selection approach in our data preprocessing. To obtain a more generalized approach, we also studied the impact of artificial data augmentation, i.e., we created new artificial data from the recorded and filtered data to augment the training data set. With our approach, 56 different hand shapes could be distinguished with an accuracy of up to 93.28%. With a reduced number of 27 hand shapes, an accuracy of up to 95.55% could be achieved. The voting meta-classifier (VL2) proved to be the most accurate, albeit slowest, classifier. A good alternative is random forest (RF), which was even able to achieve better accuracy values in a few cases and was generally somewhat faster. Outlier detection was proven to be an effective approach, especially in improving the classification time. Overall, we have shown that our hand-shape recognition system using data gloves is suitable for communication within VR.
(This article belongs to the Special Issue Sensing Technology in Virtual Reality)
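
The preprocessing-plus-voting pipeline described above can be approximated in scikit-learn; IsolationForest stands in for the unspecified outlier detector, and the three base estimators are guesses at what the VL2 ensemble might contain, not the authors' configuration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X = np.random.randn(1000, 21)             # stand-in glove features per sample
y = np.random.randint(0, 56, size=1000)   # 56 hand-shape classes

keep = IsolationForest(random_state=0).fit_predict(X) == 1  # drop flagged outliers
vote = VotingClassifier(
    estimators=[("rf", RandomForestClassifier()),
                ("svc", SVC(probability=True)),
                ("knn", KNeighborsClassifier())],
    voting="soft",
).fit(X[keep], y[keep])
```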

15 pages, 7888 KiB  
Article
Sign Language Recognition with Multimodal Sensors and Deep Learning Methods
by Chenghong Lu, Misaki Kozakai and Lei Jing
Electronics 2023, 12(23), 4827; https://doi.org/10.3390/electronics12234827 - 29 Nov 2023
Cited by 12 | Viewed by 5742
Abstract
Sign language recognition is essential in hearing-impaired people’s communication. Wearable data gloves and computer vision are partially complementary solutions. However, sign language recognition using a general monocular camera suffers from occlusion and recognition accuracy issues. In this research, we aim to improve accuracy through data fusion of 2-axis bending sensors and computer vision. We obtain the hand key point information of sign language movements captured by a monocular RGB camera and use key points to calculate hand joint angles. The system achieves higher recognition accuracy by fusing multimodal data of the skeleton, joint angles, and finger curvature. In order to effectively fuse data, we spliced multimodal data and used CNN-BiLSTM to extract effective features for sign language recognition. CNN is a method that can learn spatial information, and BiLSTM can learn time series data. We built a data collection system with bending sensor data gloves and cameras. A dataset was collected that contains 32 Japanese sign language movements of seven people, including 27 static movements and 5 dynamic movements. Each movement is repeated 10 times, totaling about 112 min. In particular, we obtained data containing occlusions. Experimental results show that our system can fuse multimodal information and perform better than using only skeletal information, with the accuracy increasing from 68.34% to 84.13%.
(This article belongs to the Special Issue Machine Learning and Deep Learning Based Pattern Recognition)
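
The joint-angle computation from camera key points is standard vector geometry; the sketch below derives the angle at a middle keypoint, with the keypoint detector and landmark indexing left outside the snippet.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at keypoint b (degrees) formed by the segments b->a and b->c."""
    v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

print(joint_angle([0, 1], [0, 0], [1, 0]))  # perpendicular segments -> 90.0
```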

20 pages, 8947 KiB  
Article
Dataglove for Sign Language Recognition of People with Hearing and Speech Impairment via Wearable Inertial Sensors
by Ang Ji, Yongzhen Wang, Xin Miao, Tianqi Fan, Bo Ru, Long Liu, Ruicheng Nie and Sen Qiu
Sensors 2023, 23(15), 6693; https://doi.org/10.3390/s23156693 - 26 Jul 2023
Cited by 8 | Viewed by 5142
Abstract
Finding ways to enable seamless communication between deaf and able-bodied individuals has been a challenging and pressing issue. This paper proposes a solution to this problem by designing a low-cost data glove that utilizes multiple inertial sensors with the purpose of achieving efficient and accurate sign language recognition. In this study, four machine learning models—decision tree (DT), support vector machine (SVM), K-nearest neighbor method (KNN), and random forest (RF)—were employed to recognize 20 different types of dynamic sign language data used by deaf individuals. Additionally, a proposed attention-based bidirectional long short-term memory neural network (Attention-BiLSTM) was utilized in the process. Furthermore, this study verifies the impact of the number and position of data glove nodes on the accuracy of recognizing complex dynamic sign language. Finally, the proposed method is compared with existing state-of-the-art algorithms using nine public datasets. The results indicate that the Attention-BiLSTM and RF algorithms achieve the highest performance in recognizing the twenty dynamic sign language gestures, with accuracies of 98.85% and 97.58%, respectively. This provides evidence for the feasibility of our proposed data glove and recognition methods. This study may serve as a valuable reference for the development of wearable sign language recognition devices and promote easier communication between deaf and able-bodied individuals.
(This article belongs to the Special Issue Motion Sensor)
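
An attention-weighted BiLSTM of the kind named here is compact in PyTorch; the layer sizes and the attention form (a learned per-timestep score, softmax-normalized) are our assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class AttentionBiLSTM(nn.Module):
    def __init__(self, n_features, n_classes, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)     # per-timestep attention score
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                        # x: (batch, time, features)
        h, _ = self.lstm(x)                      # (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)   # attention over timesteps
        return self.head((w * h).sum(dim=1))     # weighted temporal pooling

# Assumed shapes: 36 IMU features per frame, 120 frames, 20 sign classes.
logits = AttentionBiLSTM(n_features=36, n_classes=20)(torch.randn(8, 120, 36))
```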

13 pages, 2881 KiB  
Article
Assistive Data Glove for Isolated Static Postures Recognition in American Sign Language Using Neural Network
by Muhammad Saad Amin, Syed Tahir Hussain Rizvi, Alessandro Mazzei and Luca Anselma
Electronics 2023, 12(8), 1904; https://doi.org/10.3390/electronics12081904 - 18 Apr 2023
Cited by 10 | Viewed by 2860
Abstract
Sign language recognition is one of the most challenging tasks of today’s era. Most of the researchers working in this domain have focused on different types of implementations for sign recognition. These implementations require the development of smart prototypes for capturing and classifying sign gestures. Keeping in mind the aspects of prototype design, sensor-based, vision-based, and hybrid approach-based prototypes have been designed. The authors in this paper have designed sensor-based assistive gloves to capture signs for the alphabet and digits. These signs are a small but important fraction of the ASL dictionary since they play an essential role in fingerspelling, which is a universal signed linguistic strategy for expressing personal names, technical terms, gaps in the lexicon, and emphasis. A scaled conjugate gradient-based backpropagation algorithm is used to train a fully connected neural network on a self-collected dataset of isolated static postures of digits, alphabetic, and alphanumeric characters. The authors also analyzed the impact of activation functions on the performance of neural networks. Successful implementation of the recognition network produced promising results for this small dataset of static gestures of digits, alphabetic, and alphanumeric characters.
(This article belongs to the Section Computer Science & Engineering)
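
A minimal stand-in for the posture classifier: a fully connected network over flex-sensor readings. scikit-learn has no scaled-conjugate-gradient solver, so lbfgs substitutes for the paper's training algorithm, and the input dimension and layer sizes are assumed.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.random.rand(500, 10)              # assumed 10-sensor vector per posture
y = np.random.randint(0, 36, size=500)   # digit + alphabet posture classes

clf = MLPClassifier(hidden_layer_sizes=(64, 32), activation="relu",
                    solver="lbfgs", max_iter=500).fit(X, y)
print(clf.predict(X[:3]))
```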

20 pages, 5578 KiB  
Article
Application of Wearable Gloves for Assisted Learning of Sign Language Using Artificial Neural Networks
by Hyeon-Jun Kim and Soo-Whang Baek
Processes 2023, 11(4), 1065; https://doi.org/10.3390/pr11041065 - 1 Apr 2023
Cited by 11 | Viewed by 4348
Abstract
This study proposes the design and application of wearable gloves that can recognize sign language expressions from input images via long short-term memory (LSTM) network models and can learn sign language through finger movement generation and vibration motor feedback. It is difficult for nondisabled people who do not know sign language to express sign language accurately. Therefore, we suggest the use of wearable gloves for sign language education to help nondisabled people learn and accurately express sign language. The wearable glove consists of a direct current motor, a link (finger exoskeleton) that can generate finger movements, and a flexible sensor that recognizes the degree of finger bending. When the coordinates of the hand move in the input image, the sign language motion is fed back through the vibration motor attached to the wrist. The proposed wearable glove can learn 20 Korean sign language words, and the data used for learning are configured to represent the joint coordinates and joint angles of both the hands and body for these 20 sign language words. Prototypes were produced based on the design, and it was confirmed that the angle of each finger could be adjusted. Through experiments, a sign language recognition model was selected, and the validity of the proposed method was confirmed by comparing the generated learning results with the data sequence. Finally, we compared and verified the accuracy and learning loss using a recurrent neural network and confirmed that the test results of the LSTM model showed an accuracy of 85%.
(This article belongs to the Special Issue Processes in Electrical, Electronics and Information Engineering)
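
The teaching-feedback loop, comparing the learner's flex-sensor angles against the target pose and cueing the corresponding motor, can be sketched in a few lines; the target pose, five-finger layout, and tolerance below are hypothetical values for illustration.

```python
TARGET_ANGLES = {"HELLO": [10, 80, 80, 80, 80]}   # hypothetical sign pose (deg)

def correction_needed(sign, flex_degrees, tol=15):
    """Return per-finger True where the learner's pose is off by more than tol."""
    return [abs(cur - ref) > tol
            for cur, ref in zip(flex_degrees, TARGET_ANGLES[sign])]

print(correction_needed("HELLO", [12, 40, 85, 78, 90]))
# -> [False, True, False, False, False]: cue the index-finger motor/vibration
```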