J. Imaging 2018, 4(7), 93; doi:10.3390/jimaging4070093

Editorial
Detection of Moving Objects
Laboratoire MIA, University of La Rochelle, 17000 La Rochelle, France
Received: 5 July 2018 / Accepted: 12 July 2018 / Published: 13 July 2018
The Special Issue “Detection of Moving Objects” in the Journal of Imaging addresses key challenges in the detection of moving objects in videos captured by either a static or a moving camera. These challenges concern the background subtraction steps (i.e., background modeling, background initialization, background maintenance and foreground detection), hand-crafted and deep-learned features, and metrics for performance evaluation.
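To make the four background subtraction steps listed above concrete, the following minimal sketch (an illustration for this editorial, not taken from any of the papers in this issue) maintains a per-pixel running Gaussian background model: initialization from the first frame, foreground detection by thresholding the deviation from the model, and background maintenance by a running average on background pixels. The function names and parameters are illustrative choices.

```python
import numpy as np

def make_model(first_frame, init_var=225.0):
    """Background initialization: per-pixel Gaussian model (mean, variance)."""
    mu = first_frame.astype(np.float64)
    var = np.full_like(mu, init_var)
    return mu, var

def subtract(frame, model, alpha=0.05, k=2.5):
    """Foreground detection plus background maintenance for one frame."""
    mu, var = model
    x = frame.astype(np.float64)
    # foreground detection: pixel deviates by more than k standard deviations
    fg = np.abs(x - mu) > k * np.sqrt(var)
    # background maintenance: update only pixels classified as background
    bg = ~fg
    mu[bg] += alpha * (x[bg] - mu[bg])
    var[bg] += alpha * ((x[bg] - mu[bg]) ** 2 - var[bg])
    return fg, (mu, var)
```

This single-Gaussian model is the simplest instance of the family; the Mixture of Gaussians methods discussed below replace the one Gaussian per pixel with several, which is what allows them to cope with dynamic backgrounds.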
This Special Issue brings together seven papers that address these challenges. The first article deals with RGB-D video sequences: Maddalena and Petrosino [1] provide a comprehensive review of methods that exploit RGB-D data for moving object detection based on background subtraction. For methods based only on RGB features, three works employ suitable mathematical and machine learning models. First, Darwich et al. [2] design a fuzzy method based on a new fuzzy Mixture of Gaussians (MOG) for moving object detection in the presence of dynamic backgrounds. Second, Prativadibhayankaram et al. [3] propose a compressive online Robust Principal Component Analysis (RPCA) method with optical flow that recursively separates a sequence of video frames into foreground (sparse) and background (low-rank) components. Third, Minematsu et al. [4] analyze why deep neural network-based (DNN-based) background subtraction performs so well: by inspecting the feature maps in all layers of a DNN, they show that DNNs are able to suppress false positives caused by dynamic backgrounds. In the context of automatic detection and recognition of anomalous events in crowded and complex video scenes, Gunale and Mukherji [5] design a deep learning model with a spatiotemporal descriptor of appearance and motion estimation. For background initialization, Laugraud et al. [6] present a method that leverages semantic segmentation for background generation. Experiments conducted on the Scene Background Initialization (SBI) and SceneBackgroundModeling.NET (SBMnet) datasets show that LaBGen-P-Semantic is more robust to intermittent motion, background motion and very short video sequences than previous LaBGen versions.
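The low-rank/sparse separation underlying RPCA-based methods such as [2,3] can be illustrated, in its basic batch form (not the compressive online variant of [3]), by principal component pursuit solved with the standard inexact augmented Lagrangian scheme: singular-value thresholding for the low-rank part and elementwise soft thresholding for the sparse part. This is a textbook sketch, not the authors' implementation.

```python
import numpy as np

def rpca(M, max_iter=500, tol=1e-7):
    """Decompose M into low-rank L (background) + sparse S (foreground)
    via principal component pursuit (inexact ALM)."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))               # standard sparsity weight
    norm_M = np.linalg.norm(M)
    spec = np.linalg.norm(M, 2)                  # spectral norm of M
    Y = M / max(spec, np.max(np.abs(M)) / lam)   # dual variable initialization
    mu = 1.25 / spec
    mu_max = mu * 1e7
    S = np.zeros_like(M)
    for _ in range(max_iter):
        # low-rank step: singular-value thresholding
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # sparse step: elementwise soft thresholding
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Z = M - L - S                            # residual
        Y = Y + mu * Z                           # dual update
        mu = min(mu * 1.5, mu_max)
        if np.linalg.norm(Z) <= tol * norm_M:
            break
    return L, S
```

In video terms, each column of M is a vectorized frame: L then recovers the (approximately) static background and S the moving objects. The online methods replace the batch SVD with recursive updates so that frames can be processed as they arrive.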
For the evaluation of background generation methods, Shrotre and Karam [7] first discuss the shortcomings of existing metrics and then propose a full-reference Reconstructed Background image Quality Index (RBQI) that combines color and structural information at multiple scales. Furthermore, two datasets consisting of reconstructed background images and corresponding subjective scores are provided for evaluation. The correlation results show that the proposed RBQI outperforms previous approaches. The constructed datasets and subjective scores thus provide a benchmark for evaluating future metrics designed to assess the perceived quality of reconstructed background images.

Acknowledgments

The guest editor would like to thank all the authors who submitted papers to this Special Issue, all the reviewers for their contributions, and the Journal of Imaging Editors.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Maddalena, L.; Petrosino, A. Background Subtraction for Moving Object Detection in RGB-D Data: A Survey. J. Imaging 2018, 4, 71. [Google Scholar] [CrossRef]
  2. Darwich, A.; Hebert, P.; Bigand, A.; Mohanna, Y. Background Subtraction Based on a New Fuzzy Mixture of Gaussians for Moving Object Detection. J. Imaging 2018, 4, 92. [Google Scholar] [CrossRef]
  3. Prativadibhayankaram, S.; Luong, H.; Le, T.; Kaup, A. Compressive Online Video Background–Foreground Separation Using Multiple Prior Information and Optical Flow. J. Imaging 2018, 4, 90. [Google Scholar] [CrossRef]
  4. Minematsu, T.; Shimada, A.; Uchiyama, H.; Taniguchi, R. Analytics of Deep Neural Network-based Background Subtraction. J. Imaging 2018, 4, 78. [Google Scholar] [CrossRef]
  5. Gunale, K.; Mukherji, P. Deep Learning with a Spatiotemporal Descriptor of Appearance and Motion Estimation for Video Anomaly Detection. J. Imaging 2018, 4, 79. [Google Scholar] [CrossRef]
  6. Laugraud, B.; Piérard, S.; Van Droogenbroeck, M. LaBGen-P-Semantic: A First Step for Leveraging Semantic Segmentation in Background Generation. J. Imaging 2018, 4, 86. [Google Scholar] [CrossRef]
  7. Shrotre, A.; Karam, L. Full Reference Objective Quality Assessment for Reconstructed Background Images. J. Imaging 2018, 4, 82. [Google Scholar] [CrossRef]

© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).