Open Access Article
Sensors 2018, 18(12), 4269; https://doi.org/10.3390/s18124269

Foreground Detection with Deeply Learned Multi-Scale Spatial-Temporal Features

Y. Wang 1,2, Z. Yu 1,2 and L. Zhu 1,2,*
1 School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, Beijing 100044, China
2 Key Laboratory of Vehicle Advanced Manufacturing, Measuring and Control Technology (Beijing Jiaotong University), Ministry of Education, Beijing 100044, China
* Author to whom correspondence should be addressed.
Received: 2 November 2018 / Revised: 30 November 2018 / Accepted: 1 December 2018 / Published: 4 December 2018
(This article belongs to the Section Intelligent Sensors)

Abstract

Foreground detection, which extracts moving objects from videos, is an important and fundamental problem in video analysis. Classic methods often build background models based on hand-crafted features. Recent deep neural network (DNN) based methods can learn more effective image features through training, but most of them either do not use temporal features or rely on simple hand-crafted ones. In this paper, we propose a new dual multi-scale 3D fully-convolutional neural network for foreground detection. It uses an encoder–decoder structure to establish a mapping from image sequences to pixel-wise classification results. We also propose a two-stage training procedure that trains the encoder and decoder separately to improve the training results. With its multi-scale architecture, the network can learn deep, hierarchical multi-scale features in both the spatial and temporal domains, which are shown to provide good invariance to both spatial and temporal scales. We used the CDnet dataset, currently the largest foreground detection dataset, to evaluate our method. The experimental results show that the proposed method achieves state-of-the-art results in most test scenes compared to current DNN-based methods.
Keywords: fully convolutional networks; 3D convolutional networks; foreground detection; background modeling; deep learning; deep neural networks
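
The abstract outlines an encoder–decoder network built from 3D convolutions that maps a short frame sequence to a pixel-wise foreground map. Below is a minimal PyTorch sketch of that general idea, not the authors' network: the layer widths, kernel shapes, clip length, and the class name FG3DNet are illustrative assumptions, and the paper's dual multi-scale paths and two-stage training procedure are omitted.

```python
# Minimal sketch (not the authors' implementation): a 3D fully-convolutional
# encoder-decoder that maps a short frame clip to a per-pixel foreground mask.
# All sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class FG3DNet(nn.Module):
    def __init__(self, in_channels=3):
        super().__init__()
        # Encoder: 3D convolutions learn joint spatial-temporal features;
        # spatial striding produces coarser-resolution feature maps.
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, stride=(1, 2, 2), padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, 64, kernel_size=3, stride=(1, 2, 2), padding=1), nn.ReLU(inplace=True),
        )
        # Decoder: transposed 3D convolutions restore the spatial resolution,
        # ending in a 1-channel logit volume.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(64, 32, kernel_size=(1, 4, 4), stride=(1, 2, 2), padding=(0, 1, 1)), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(32, 16, kernel_size=(1, 4, 4), stride=(1, 2, 2), padding=(0, 1, 1)), nn.ReLU(inplace=True),
            nn.Conv3d(16, 1, kernel_size=1),
        )

    def forward(self, clip):
        # clip: (batch, channels, frames, height, width)
        feats = self.encoder(clip)
        logits = self.decoder(feats)
        # Average over the temporal axis to obtain one pixel-wise foreground map.
        return torch.sigmoid(logits.mean(dim=2))

# Usage: an 8-frame RGB clip at 240x320 yields a (1, 1, 240, 320) probability map.
net = FG3DNet()
mask = net(torch.randn(1, 3, 8, 240, 320))
print(mask.shape)
```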
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
MDPI and ACS Style

Wang, Y.; Yu, Z.; Zhu, L. Foreground Detection with Deeply Learned Multi-Scale Spatial-Temporal Features. Sensors 2018, 18, 4269.
