Open Access: This article is freely available.
Combined Hand Gesture — Speech Model for Human Action Recognition
Department of Computer Science and Information Engineering, National Cheng Kung University, No.1, University Road, Tainan City 701, Taiwan
* Author to whom correspondence should be addressed.
Received: 15 October 2013; in revised form: 2 December 2013 / Accepted: 6 December 2013 / Published: 12 December 2013
Abstract: This study proposes a dynamic hand gesture detection technique that effectively locates dynamic hand gesture regions, together with a hand gesture recognition technique that improves the dynamic gesture recognition rate. In addition, the correspondence between the state sequences of the hand gesture and speech models is exploited by integrating speech recognition into a multimodal model, thereby improving the accuracy of human behavior recognition. Experimental results verify that the proposed method effectively improves human behavior recognition accuracy and demonstrates the feasibility of practical system applications, with the multimodal gesture-speech model achieving higher accuracy than either single-modal version.
Keywords: hand gesture detection; hand gesture recognition; speech recognition; human behavior
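The abstract's central idea, combining evidence from a gesture model and a speech model rather than relying on either alone, can be illustrated with a simple late-fusion sketch. This is not the paper's actual model (the paper aligns state sequences between the two modalities); the class labels, scores, and equal weighting below are illustrative assumptions only.

```python
def fuse_scores(gesture_loglik, speech_loglik, weight=0.5):
    """Late fusion of per-class log-likelihoods from two modalities.

    `weight` balances the gesture model against the speech model;
    the default of 0.5 is an illustrative assumption, not a value
    taken from the paper. Returns the label with the highest
    fused score.
    """
    fused = {}
    for label in gesture_loglik:
        fused[label] = (weight * gesture_loglik[label]
                        + (1.0 - weight) * speech_loglik[label])
    return max(fused, key=fused.get)

# Hypothetical per-class log-likelihoods from each single-modal model.
gesture = {"wave": -2.0, "point": -3.5}   # gesture model alone prefers "wave"
speech = {"wave": -4.0, "point": -1.0}    # speech model alone prefers "point"

print(fuse_scores(gesture, speech))  # speech evidence tips the fused decision to "point"
```

With these illustrative scores, the fused model resolves the disagreement between the two single-modal classifiers, which is the kind of improvement the abstract attributes to the multimodal approach.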
Cite This Article
MDPI and ACS Style
Cheng, S.-T.; Hsu, C.-W.; Li, J.-P. Combined Hand Gesture — Speech Model for Human Action Recognition. Sensors 2013, 13, 17098-17129.