Article

Body and Hand–Object ROI-Based Behavior Recognition Using Deep Learning

1 Interdisciplinary Program in IT-Bio Convergence System, Department of Electronics Engineering, Chosun University, Gwangju 61452, Korea
2 Intelligent Robotics Research Division, Electronics and Telecommunications Research Institute, Daejeon 34129, Korea
* Author to whom correspondence should be addressed.
Academic Editor: Mario Munoz-Organero
Sensors 2021, 21(5), 1838; https://doi.org/10.3390/s21051838
Received: 30 January 2021 / Revised: 27 February 2021 / Accepted: 2 March 2021 / Published: 6 March 2021
(This article belongs to the Section Physical Sensors)
Behavior recognition has applications in automatic crime monitoring, automatic sports video analysis, and context awareness for so-called silver robots. In this study, we employ deep learning to recognize behavior based on body and hand–object interaction regions of interest (ROIs), and propose an ROI-based four-stream ensemble convolutional neural network (CNN). Behavior recognition data consist mainly of images and skeletons. The first stream converts the 3D skeleton sequence into pose evolution images (PEIs) and feeds them to a pre-trained 2D-CNN. The second stream inputs the RGB video into a 3D-CNN to extract temporal and spatial features. The most important information in behavior recognition is the person performing the action; if the neural network is trained with ambient noise removed and an ROI placed on the person, feature analysis can focus on the behavior itself rather than on the entire region. The third stream therefore inputs the RGB video restricted to the body ROI into a 3D-CNN, and the fourth stream inputs the RGB video restricted to the hand–object interaction ROIs into a 3D-CNN. Finally, because combining the information of models trained with attention to these ROIs is expected to improve performance, the four stream scores are combined by late fusion. The Electronics and Telecommunications Research Institute (ETRI)-Activity3D dataset was used for the experiments. This dataset contains color images, skeletons, and depth images of 55 daily behaviors performed by 50 elderly and 50 young individuals. The experimental results show that the proposed model improves recognition accuracy by at least 4.27% and up to 20.97% compared with other behavior recognition methods.
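The abstract describes score-level late fusion of the four streams (skeleton PEI 2D-CNN, full-frame RGB 3D-CNN, body-ROI 3D-CNN, and hand–object-ROI 3D-CNN). The paper's exact fusion rule is not given here; a minimal sketch, assuming a simple weighted average of per-class softmax scores (all stream names and score values below are hypothetical placeholders):

```python
import numpy as np

def late_fusion(stream_scores, weights=None):
    """Fuse per-class score vectors from several streams by weighted averaging.

    stream_scores: sequence of per-class score vectors, one per stream,
                   stacked into shape (n_streams, n_classes).
    weights:       optional per-stream weights; defaults to a uniform average.
    Returns the predicted class index and the fused score vector.
    """
    scores = np.asarray(stream_scores, dtype=float)   # (n_streams, n_classes)
    if weights is None:
        weights = np.full(len(scores), 1.0 / len(scores))
    weights = np.asarray(weights, dtype=float)
    fused = np.tensordot(weights, scores, axes=1)     # weighted sum over streams
    return int(np.argmax(fused)), fused

# Hypothetical example: four streams scoring three behavior classes
pei_scores  = [0.2, 0.5, 0.3]   # skeleton PEI 2D-CNN stream
rgb_scores  = [0.1, 0.6, 0.3]   # full-frame RGB 3D-CNN stream
body_scores = [0.3, 0.6, 0.1]   # body-ROI 3D-CNN stream
hand_scores = [0.2, 0.7, 0.1]   # hand-object-ROI 3D-CNN stream

pred, fused = late_fusion([pei_scores, rgb_scores, body_scores, hand_scores])
```

With uniform weights this reduces to averaging the four score vectors; weighting individual streams (e.g., upweighting the body-ROI stream) is a natural variant when some streams are more reliable.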
Keywords: behavior recognition; convolutional neural network; skeleton; RGB video; ensemble

MDPI and ACS Style

Byeon, Y.-H.; Kim, D.; Lee, J.; Kwak, K.-C. Body and Hand–Object ROI-Based Behavior Recognition Using Deep Learning. Sensors 2021, 21, 1838. https://doi.org/10.3390/s21051838

