Open Access Article
Sensors 2017, 17(5), 1177; doi:10.3390/s17051177

Improving Video Segmentation by Fusing Depth Cues and the Visual Background Extractor (ViBe) Algorithm

1  College of Computer and Information Engineering, Hohai University, Nanjing 211100, China
2  Changzhou Key Laboratory of Robotics and Intelligent Technology, Changzhou 213022, China
3  Jiangsu Key Laboratory of Special Robots, Hohai University, Changzhou 213022, China
4  College of IoT Engineering, Hohai University, Changzhou 213022, China
5  College of Electronics, Communication and Physics, Shandong University of Science and Technology, Qingdao 266590, China
6  Zienkiewicz Centre for Computational Engineering, Swansea University, Swansea SA1 8EN, UK
*  Author to whom correspondence should be addressed.
Academic Editor: Joonki Paik
Received: 19 March 2017 / Revised: 12 May 2017 / Accepted: 18 May 2017 / Published: 21 May 2017
(This article belongs to the Special Issue Video Analysis and Tracking Using State-of-the-Art Sensors)
Abstract

Depth-sensing technology has led to broad applications of inexpensive depth cameras that can capture human motion and scenes in three-dimensional space. Background subtraction algorithms can be improved by fusing color and depth cues, thereby allowing many issues encountered in classical color segmentation to be solved. In this paper, we propose a new fusion method that combines depth and color information for foreground segmentation based on an advanced color-based algorithm. First, a background model and a depth model are developed. Then, based on these models, we propose a new updating strategy that can eliminate ghosting and black shadows almost completely. Extensive experiments have been performed to compare the proposed algorithm with other conventional RGB-D (Red-Green-Blue and Depth) algorithms. The experimental results suggest that our method extracts foregrounds with higher effectiveness and efficiency.
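To make the pipeline concrete, the following is a minimal sketch of the general idea the abstract describes: a ViBe-style per-pixel sample model for classification, plus a simple rule for fusing the colour and depth decisions. This is an illustration only, not the paper's actual method; the function names, the noise-based initialisation, and the "trust depth where valid, fall back to colour" fusion rule are all assumptions for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

def vibe_init(frame, n_samples=20, noise_sigma=2.0):
    # Bootstrap the per-pixel sample model by replicating the first frame
    # with small Gaussian noise (a common ViBe initialisation shortcut).
    samples = np.repeat(frame[None, ...], n_samples, axis=0).astype(np.float32)
    return samples + rng.normal(0.0, noise_sigma, samples.shape)

def vibe_classify(frame, samples, radius=20.0, min_matches=2):
    # A pixel is background when at least `min_matches` stored samples
    # lie within `radius` of the current value; otherwise it is foreground.
    matches = (np.abs(samples - frame[None, ...]) < radius).sum(axis=0)
    return matches < min_matches  # boolean mask, True = foreground

def fuse_masks(color_fg, depth_fg, depth_valid):
    # Naive fusion rule (an assumption, not the paper's strategy): trust
    # the depth decision wherever the sensor returned a valid reading, and
    # fall back to the colour decision in depth holes.
    return np.where(depth_valid, depth_fg, color_fg)
```

In practice the same classification would run on both the colour stream and the depth stream, each against its own model, with the two masks combined by `fuse_masks`; the paper's updating strategy for suppressing ghosting would then govern how `samples` is refreshed over time.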
Keywords: object detection; background subtraction; video surveillance; Kinect sensor fusion
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Share & Cite This Article

MDPI and ACS Style

Zhou, X.; Liu, X.; Jiang, A.; Yan, B.; Yang, C. Improving Video Segmentation by Fusing Depth Cues and the Visual Background Extractor (ViBe) Algorithm. Sensors 2017, 17, 1177.


Sensors EISSN 1424-8220, Published by MDPI AG, Basel, Switzerland