Article

American Sign Language Alphabet Recognition by Extracting Feature from Hand Pose Estimation

1 School of Computer Science and Engineering, The University of Aizu, Aizuwakamatsu, Fukushima 965-8580, Japan
2 Softbrain Co., Ltd., Tokyo 103-0027, Japan
3 Department of Computer Science & Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
* Author to whom correspondence should be addressed.
Academic Editor: Jiayi Ma
Sensors 2021, 21(17), 5856; https://doi.org/10.3390/s21175856
Received: 30 July 2021 / Revised: 18 August 2021 / Accepted: 25 August 2021 / Published: 31 August 2021
(This article belongs to the Special Issue Vision and Sensor-Based Sensing in Human Action Recognition)
Sign language is designed to assist the deaf and hard of hearing community to convey messages and connect with society. Sign language recognition has been an important domain of research for a long time. Previously, sensor-based approaches have obtained higher accuracy than vision-based approaches, but because vision-based approaches are more cost-effective, research has continued in this direction despite the drop in accuracy. The purpose of this research is to recognize American Sign Language characters using hand images obtained from a web camera. In this work, the MediaPipe Hands algorithm was used to estimate hand joints from RGB images of hands captured by a web camera, and two types of features were generated from the estimated joint coordinates for classification: the distances between the joint points, and the angles between the inter-joint vectors and the 3D axes. The classifiers used to classify the characters were a support vector machine (SVM) and a light gradient boosting machine (GBM). Three character datasets were used for recognition: the ASL Alphabet dataset, the Massey dataset, and the Finger Spelling A dataset. The accuracies obtained were 99.39% on the Massey dataset, 87.60% on the ASL Alphabet dataset, and 98.45% on the Finger Spelling A dataset. The proposed design for automatic American Sign Language recognition is cost-effective, computationally inexpensive, does not require any special sensors or devices, and outperformed previous studies.
Keywords: American Sign Language recognition; Massey dataset; Finger Spelling A dataset; MediaPipe; distance-based features; angle-based features; support vector machine; light gradient boosting machine
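The two feature families described in the abstract can be sketched from the 21 (x, y, z) landmarks that MediaPipe Hands returns per hand. The exact joint pairs and normalization used by the authors are not specified here; the snippet below is a minimal illustration that takes all pairwise joint distances and, for each inter-joint vector, its angles to the x-, y-, and z-axes (via direction cosines):

```python
import numpy as np

def extract_features(landmarks):
    """Illustrative feature extraction from 21 hand landmarks, shape (21, 3).

    Returns a vector of 210 pairwise distances followed by 630 angles
    (one angle per axis per joint pair). The paper's exact pair selection
    and normalization may differ; this is a sketch of the idea.
    """
    pts = np.asarray(landmarks, dtype=float)
    assert pts.shape == (21, 3), "expected 21 MediaPipe landmarks with x, y, z"

    dists, angles = [], []
    for i in range(21):
        for j in range(i + 1, 21):
            v = pts[j] - pts[i]
            n = np.linalg.norm(v)
            # Distance feature: Euclidean distance between joints i and j.
            dists.append(n)
            # Angle features: angle between vector (i -> j) and each 3D axis,
            # computed from the vector's direction cosines.
            if n == 0:
                angles.extend([0.0, 0.0, 0.0])
            else:
                angles.extend(np.degrees(np.arccos(np.clip(v / n, -1.0, 1.0))))

    return np.array(dists + angles)
```

The resulting vector (840 values under these assumptions) would then be fed to a classifier such as an SVM or LightGBM, as in the paper.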
MDPI and ACS Style

Shin, J.; Matsuoka, A.; Hasan, M.A.M.; Srizon, A.Y. American Sign Language Alphabet Recognition by Extracting Feature from Hand Pose Estimation. Sensors 2021, 21, 5856. https://doi.org/10.3390/s21175856
