Video-Based Human Action Recognition Using Spatial Pyramid Pooling and 3D Densely Convolutional Networks
Abstract: In recent years, applying deep neural networks to human action recognition has become a very active research topic. Although remarkable progress has been made in image recognition, many problems remain open in the video domain. Convolutional neural networks conventionally require a fixed-size input, which both constrains the network architecture and degrades recognition accuracy; while this limitation has been addressed for images, it has not yet been overcome for video. To address the fixed-size constraint on input video frames in video recognition, we propose a three-dimensional (3D) densely connected convolutional network based on spatial pyramid pooling (3D-DenseNet-SPP). As the name implies, the network structure is composed of three main parts: a 3D CNN, DenseNet, and SPPNet. Our models were evaluated on the KTH and UCF101 datasets separately. The experimental results show that our model outperforms existing models in video-based action recognition.
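The spatial pyramid pooling idea the abstract builds on can be sketched as follows. This is a minimal NumPy illustration of the general SPP technique, not the authors' implementation; the function name and the pyramid levels are our own choices. For each pyramid level n, the feature map is divided into an n × n grid of bins and each bin is max-pooled per channel, so the concatenated output length depends only on the channel count and the levels, not on the spatial size of the input.

```python
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Max-pool a C x H x W feature map into a fixed-length vector.

    For each level n, the H x W plane is split into an n x n grid of
    bins; each bin is max-pooled per channel. Concatenating the results
    yields C * sum(n * n for n in levels) values regardless of H and W,
    which is what lets SPP feed variable-size inputs to fixed-size
    fully connected layers.
    """
    c, h, w = feature_map.shape
    pooled = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                # Floor for the start, ceiling for the end, so the bins
                # tile the whole plane even when H or W is not a
                # multiple of n (bins are non-empty whenever H, W >= n).
                h0, h1 = (i * h) // n, ((i + 1) * h + n - 1) // n
                w0, w1 = (j * w) // n, ((j + 1) * w + n - 1) // n
                pooled.append(feature_map[:, h0:h1, w0:w1].max(axis=(1, 2)))
    return np.concatenate(pooled)
```

With levels (1, 2, 4), two inputs of different spatial sizes, say 3 × 13 × 17 and 3 × 8 × 8, both pool to a vector of length 3 × (1 + 4 + 16) = 63.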
Cite This Article
Yang, W.; Chen, Y.; Huang, C.; Gao, M. Video-Based Human Action Recognition Using Spatial Pyramid Pooling and 3D Densely Convolutional Networks. Future Internet 2018, 10, 115.
Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.