Article

Human Interaction Recognition Based on Whole-Individual Detection

School of Information Science and Technology, North China University of Technology, Beijing 100144, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(8), 2346; https://doi.org/10.3390/s20082346
Received: 12 March 2020 / Revised: 15 April 2020 / Accepted: 18 April 2020 / Published: 20 April 2020
(This article belongs to the Section Intelligent Sensors)
Human interaction recognition is a hot topic in computer vision with broad application prospects. At present, it faces several difficulties: the spatial complexity of human interaction, differences in action characteristics across time periods, and the complexity of interactive action features. These problems restrict the improvement of recognition accuracy. To address the differences in action characteristics across time periods, we propose an improved fused time-phase feature of the Gaussian model to obtain video keyframes and remove the influence of a large amount of redundant information. To address the complexity of interactive action features, we propose a multi-feature fusion network based on parallel Inception and ResNet modules. This multi-feature fusion network not only reduces the number of network parameters but also improves network performance; it alleviates the degradation caused by increasing network depth and obtains higher classification accuracy. To address the spatial complexity of human interaction, we combine whole-video features with individual-video features, making full use of the feature information in the interactive video. We propose a human interaction recognition algorithm based on whole-individual detection, where the whole video contains the global features of both actors and the individual videos contain the detailed features of each single person. Making full use of the feature information of the whole and individual videos is the main contribution of this paper to the field of human interaction recognition. Experimental results on the UT-Interaction dataset show that the accuracy of the proposed method is 91.7%.
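The abstract does not give the exact formulation of the Gaussian-model keyframe extraction, so the following is only a minimal illustrative sketch of the general idea: score each frame by its motion energy (mean absolute inter-frame difference) and weight the scores with a Gaussian centered on each time phase, so that one keyframe is selected per phase and redundant frames are discarded. The function name, the phase-splitting scheme, and the default `sigma` are assumptions, not the paper's method.

```python
import numpy as np

def select_keyframes(frames, num_phases=4, sigma=None):
    """Pick one keyframe per time phase (illustrative sketch only).

    Each frame is scored by its mean absolute difference from the
    previous frame (motion energy); scores are then weighted by a
    Gaussian centered on each phase so the selected frames are spread
    across the action's different time periods.
    """
    frames = np.asarray(frames, dtype=np.float64)
    n = len(frames)
    # Motion energy per frame; the first frame has no predecessor and scores 0.
    diffs = np.zeros(n)
    diffs[1:] = np.abs(frames[1:] - frames[:-1]).reshape(n - 1, -1).mean(axis=1)
    if sigma is None:
        sigma = n / (2.0 * num_phases)  # assumed default spread
    # Phase centers: midpoints of num_phases equal temporal segments.
    centres = (np.arange(num_phases) + 0.5) * n / num_phases
    t = np.arange(n)
    keyframes = []
    for c in centres:
        w = np.exp(-0.5 * ((t - c) / sigma) ** 2)  # Gaussian temporal weight
        keyframes.append(int(np.argmax(w * diffs)))
    return keyframes

# Synthetic usage: 8 constant frames with content changes at t=2 and t=6.
frames = np.zeros((8, 4, 4))
frames[2:] += 1.0
frames[6:] += 1.0
print(select_keyframes(frames, num_phases=2))  # frames nearest each burst of motion
```

With two phases, the Gaussian weighting selects the high-motion frame inside each half of the clip, which is the behavior the abstract attributes to the time-phase feature: keyframes that cover the distinct periods of the action rather than clustering in one segment.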
Keywords: human interaction recognition; whole-individual detection; parallel multi-feature fusion network; Gaussian model downsampling

Figure 1

MDPI and ACS Style

Ye, Q.; Zhong, H.; Qu, C.; Zhang, Y. Human Interaction Recognition Based on Whole-Individual Detection. Sensors 2020, 20, 2346. https://doi.org/10.3390/s20082346


