Foreground Detection with Deeply Learned Multi-Scale Spatial-Temporal Features
Abstract

Foreground detection, which extracts moving objects from videos, is a fundamental problem in video analysis. Classic methods often build background models based on hand-crafted features. Recent deep neural network (DNN) based methods can learn more effective image features through training, but most of them either ignore temporal features or rely on simple hand-crafted ones. In this paper, we propose a new dual multi-scale 3D fully-convolutional neural network for foreground detection. It uses an encoder–decoder structure to establish a mapping from image sequences to pixel-wise classification results. We also propose a two-stage training procedure that trains the encoder and decoder separately to improve the training results. With its multi-scale architecture, the network can learn deep, hierarchical multi-scale features in both the spatial and temporal domains, which are shown to be invariant to changes in both spatial and temporal scale. We evaluate our method on the CDnet dataset, currently the largest foreground detection dataset. The experimental results show that the proposed method achieves state-of-the-art results in most test scenes compared to current DNN-based methods.
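The core idea of extracting spatio-temporal features with 3D convolutions can be illustrated with a minimal sketch. The example below is not the paper's network; it is a toy, pure-Python 3D convolution over a short clip of shape (T, H, W), using an assumed 2x1x1 temporal-gradient kernel to show how a single 3D filter responds to motion across frames.

```python
# Hedged sketch: one 3D convolution over a video clip, illustrating how
# spatio-temporal features arise from an image sequence. All shapes and the
# kernel are illustrative assumptions, not the paper's actual architecture.

def conv3d_valid(clip, kernel):
    """Valid-mode 3D cross-correlation over a (T, H, W) clip of floats."""
    T, H, W = len(clip), len(clip[0]), len(clip[0][0])
    kt, kh, kw = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for t in range(T - kt + 1):
        frame = []
        for i in range(H - kh + 1):
            row = []
            for j in range(W - kw + 1):
                s = 0.0
                for dt in range(kt):
                    for di in range(kh):
                        for dj in range(kw):
                            s += clip[t + dt][i + di][j + dj] * kernel[dt][di][dj]
                row.append(s)
            frame.append(row)
        out.append(frame)
    return out

# Toy input: a 3-frame 4x4 clip where a bright pixel ("moving object")
# appears only in the middle frame.
clip = [[[0.0] * 4 for _ in range(4)] for _ in range(3)]
clip[1][1][1] = 1.0

# A 2x1x1 temporal-gradient kernel: responds to change between frames.
kernel = [[[-1.0]], [[1.0]]]
resp = conv3d_valid(clip, kernel)
```

The response volume has temporal extent T - kt + 1 = 2; the object's appearance yields a positive response at frame 0 and a negative one at frame 1, which is exactly the kind of motion cue a learned 3D filter can pick up. The paper's network stacks many such learned filters at multiple spatial and temporal scales.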
Share & Cite This Article
Wang, Y.; Yu, Z.; Zhu, L. Foreground Detection with Deeply Learned Multi-Scale Spatial-Temporal Features. Sensors 2018, 18, 4269.