Special Issue "Detection of Moving Objects"

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: closed (30 April 2018)

Special Issue Editor

Guest Editor
Prof. Thierry Bouwmans

Laboratory MIA, University of La Rochelle, 17000 La Rochelle, France
Interests: background subtraction; background modeling; foreground detection; fuzzy theory; Dempster-Shafer theory; robust PCA; deep learning models

Special Issue Information

Dear Colleagues,

The detection of moving objects is one of the most important steps in video processing, with applications in video surveillance, optical motion capture, multimedia applications, teleconferencing, video editing, human-computer interfaces, and more. The last two decades have seen a large number of publications on detecting moving objects in video taken by static cameras. Recently, however, new applications in which the background is not static, such as recordings taken from drones and UAVs or Internet videos, have called for new methods that detect moving objects robustly in challenging environments. Effective methods are therefore needed that are robust to both dynamic backgrounds and illumination changes in real scenes captured by fixed cameras or mobile devices, drawing on models such as advanced statistical models, fuzzy models, robust subspace learning models, and deep learning models.

The intent of this Special Issue is to provide: (1) new approaches to the detection of moving objects; (2) new strategies to improve foreground detection algorithms in critical scenarios such as dynamic backgrounds, illumination changes, night videos, and low-frame-rate videos; and (3) new adaptive and incremental algorithms for real-time applications.

This Special Issue is primarily focused on the following topics; however, we encourage all submissions related to the detection of moving objects in videos taken by static or moving cameras:

  • Background initialization
  • Background subtraction
  • Background modeling
  • Foreground detection
  • Feature selection
  • Statistical, fuzzy, and Dempster-Shafer concepts for detection of moving objects
  • Robust subspace learning models (RPCA, etc.)
  • Deep learning models
  • HD cameras, IR cameras, light field cameras, RGB-D cameras
  • Drones, UAVs
  • Real-time implementations (GPU, FPGA, etc.)
Prof. Thierry Bouwmans
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) is waived for well-prepared manuscripts submitted to this issue. Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Background initialization
  • Background subtraction
  • Background modeling
  • Foreground detection
  • Feature selection
  • Statistical, fuzzy, and Dempster-Shafer concepts for detection of moving objects
  • Robust subspace learning models (RPCA, etc.)
  • Deep learning models
  • HD cameras, IR cameras
  • Light field cameras, RGB-D cameras
  • Drones, UAVs
  • Real-time implementations (GPU, FPGA, etc.)

Published Papers (8 papers)


Editorial

Jump to: Research

Open Access Editorial: Detection of Moving Objects
J. Imaging 2018, 4(7), 93; https://doi.org/10.3390/jimaging4070093
Received: 5 July 2018 / Revised: 9 July 2018 / Accepted: 12 July 2018 / Published: 13 July 2018
(This article belongs to the Special Issue Detection of Moving Objects)

Research

Jump to: Editorial

Open Access Article: Background Subtraction Based on a New Fuzzy Mixture of Gaussians for Moving Object Detection
J. Imaging 2018, 4(7), 92; https://doi.org/10.3390/jimaging4070092
Received: 15 May 2018 / Revised: 14 June 2018 / Accepted: 28 June 2018 / Published: 10 July 2018
Cited by 2
Abstract
Moving foreground detection is a very important step for many applications, such as human behavior analysis for visual surveillance, model-based action recognition, and road traffic monitoring. Background subtraction is a very popular approach, but it is difficult to apply given that it must overcome many obstacles, such as dynamic background changes, lighting variations, and occlusions. In this work, we focus on this foreground/background segmentation problem, using type-2 fuzzy modeling to manage the uncertainty of the video process and of the data. The proposed method models the state of each pixel using an imprecise and adjustable Gaussian mixture model, which is exploited by several fuzzy classifiers to ultimately estimate the pixel class for each frame. More precisely, this decision takes into account not only the history of the pixel's evolution, but also its spatial neighborhood and its possible displacements in the previous frames. We then compare the proposed method with closely related methods, including those based on a Gaussian mixture model or on fuzzy sets. This comparison allows us to assess the method's performance and to propose perspectives for future work.
(This article belongs to the Special Issue Detection of Moving Objects)
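As an illustration of the crisp baseline that the paper's type-2 fuzzy model extends, the following sketch maintains a single running Gaussian per pixel and flags large deviations as foreground. This is a minimal stand-in for a mixture of Gaussians, not the paper's method; the function name and the parameters alpha and k are illustrative.

```python
import numpy as np

def update_background(frames, alpha=0.05, k=2.5):
    """Running per-pixel Gaussian background model (crisp baseline).

    A pixel is foreground when it deviates from the background mean
    by more than k standard deviations; the model is updated only on
    background pixels so that foreground objects do not pollute it.
    """
    frames = np.asarray(frames, dtype=np.float64)  # (T, H, W) grayscale
    mean = frames[0].copy()                        # initial background mean
    var = np.full_like(mean, 25.0)                 # initial variance guess
    masks = []
    for frame in frames[1:]:
        diff = frame - mean
        fg = diff ** 2 > (k ** 2) * var            # Mahalanobis-style test
        bg = ~fg
        # exponential update of mean/variance on background pixels only
        mean[bg] += alpha * diff[bg]
        var[bg] = (1 - alpha) * var[bg] + alpha * diff[bg] ** 2
        masks.append(fg)
    return mean, masks
```

A mixture of Gaussians generalizes this by keeping several (mean, variance, weight) triples per pixel, which handles multimodal backgrounds such as waving trees.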

Open Access Article: Compressive Online Video Background–Foreground Separation Using Multiple Prior Information and Optical Flow
J. Imaging 2018, 4(7), 90; https://doi.org/10.3390/jimaging4070090
Received: 1 May 2018 / Revised: 15 June 2018 / Accepted: 27 June 2018 / Published: 3 July 2018
Cited by 2
Abstract
In the context of video background–foreground separation, we propose a compressive online Robust Principal Component Analysis (RPCA) with optical flow that recursively separates a sequence of video frames into foreground (sparse) and background (low-rank) components. This separation method operates on a small set of measurements taken per frame, in contrast to conventional batch-based RPCA, which processes the full data. The proposed method also leverages multiple prior information by incorporating previously separated background and foreground frames in an n-ℓ1 minimization problem. Moreover, optical flow is utilized to estimate motion between the previous foreground frames and to compensate for it, yielding higher-quality prior foregrounds that improve the separation. Our method is tested on several video sequences in different scenarios for online background–foreground separation from compressive measurements. The visual and quantitative results show that the proposed method outperforms existing methods.
(This article belongs to the Special Issue Detection of Moving Objects)
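For readers unfamiliar with RPCA, the following sketch shows the classical batch Principal Component Pursuit baseline that the paper's compressive online method improves upon, solved with a standard inexact augmented Lagrangian scheme. The function name and parameter defaults are illustrative, and this is the batch formulation, not the paper's online, compressive variant.

```python
import numpy as np

def rpca_pcp(M, lam=None, max_iter=500, tol=1e-7):
    """Batch Principal Component Pursuit: decompose M into a low-rank
    part L (background) plus a sparse part S (foreground). Columns of
    M would be vectorized video frames."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))        # standard PCP weight
    norm_M = np.linalg.norm(M)
    mu = m * n / (4.0 * np.abs(M).sum())      # common initial penalty
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                      # Lagrange multipliers
    for _ in range(max_iter):
        # L-step: singular value soft-thresholding
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # S-step: entrywise soft-thresholding
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Z = M - L - S                         # constraint residual
        Y += mu * Z
        mu = min(mu * 1.05, 1e7)              # gradually tighten the penalty
        if np.linalg.norm(Z) <= tol * norm_M:
            break
    return L, S
```

The paper's contribution replaces this full-data batch solve with a per-frame recursive solve on compressive measurements, aided by motion-compensated prior frames.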

Open Access Article: LaBGen-P-Semantic: A First Step for Leveraging Semantic Segmentation in Background Generation
J. Imaging 2018, 4(7), 86; https://doi.org/10.3390/jimaging4070086
Received: 16 May 2018 / Revised: 8 June 2018 / Accepted: 18 June 2018 / Published: 25 June 2018
Cited by 2
Abstract
Given a video sequence acquired with a fixed camera, the stationary background generation problem consists of generating a unique image estimating the stationary background of the sequence. During the IEEE Scene Background Modeling Contest (SBMC) organized in 2016, we presented the LaBGen-P method. In short, this method relies on a motion detection algorithm for selecting, at each pixel location, a given number of pixel intensities that are most likely static, by keeping the ones with the smallest quantities of motion. These quantities are estimated by aggregating the motion scores returned by the motion detection algorithm in the spatial neighborhood of the pixel. After this selection process, the background image is generated by blending the selected intensities with a median filter. In our previous work, we showed that using a temporally memoryless motion detection algorithm, which detects motion between two frames without relying on additional temporal information, leads our method to achieve its best performance. In this work, we go one step further by developing LaBGen-P-Semantic, a variant of LaBGen-P whose motion detection step is built on the current frame only, using semantic segmentation. For this purpose, two intra-frame motion detection algorithms, detecting motion from a single frame, are presented and compared. Our experiments, carried out on the Scene Background Initialization (SBI) and SceneBackgroundModeling.NET (SBMnet) datasets, show that leveraging semantic segmentation improves robustness against intermittent motion, background motion and very short video sequences, which are among the main challenges in the background generation field. Moreover, our results confirm that using intra-frame motion detection is an appropriate choice for our method and pave the way for more techniques based on semantic segmentation.
(This article belongs to the Special Issue Detection of Moving Objects)
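The per-pixel selection-and-blending idea can be sketched as follows, in a deliberately simplified grayscale form that uses a memoryless frame-difference motion score and omits the spatial aggregation and semantic segmentation steps described in the paper. The function name and the parameter s are illustrative.

```python
import numpy as np

def generate_background(frames, s=3):
    """Simplified LaBGen-P-flavored background generation: per pixel,
    keep the s intensities whose motion score is smallest, then blend
    them with a median filter."""
    frames = np.asarray(frames, dtype=np.float64)    # (T, H, W) grayscale
    # memoryless motion score: absolute difference between consecutive frames
    motion = np.abs(np.diff(frames, axis=0))         # (T-1, H, W)
    candidates = frames[1:]                          # frames that have a score
    # indices of the s least-moving observations at each pixel location
    idx = np.argsort(motion, axis=0)[:s]             # (s, H, W)
    selected = np.take_along_axis(candidates, idx, axis=0)
    # blend the selected, most-likely-static intensities
    return np.median(selected, axis=0)
```

Even when an object covers a pixel in some frames, the median over the least-moving observations recovers the static background value at that pixel.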

Open Access Article: Full Reference Objective Quality Assessment for Reconstructed Background Images
J. Imaging 2018, 4(6), 82; https://doi.org/10.3390/jimaging4060082
Received: 16 May 2018 / Revised: 6 June 2018 / Accepted: 6 June 2018 / Published: 19 June 2018
Cited by 1
Abstract
With increased interest in applications that require a clean background image, such as video surveillance, object tracking, street view imaging and location-based services on web-based maps, multiple algorithms have been developed to reconstruct a background image from cluttered scenes. Traditionally, statistical measures and existing image quality techniques have been applied to evaluate the quality of the reconstructed background images. Though these quality assessment methods have been widely used in the past, their performance in evaluating the perceived quality of the reconstructed background image has not been verified. In this work, we discuss the shortcomings of existing metrics and propose a full-reference Reconstructed Background image Quality Index (RBQI) that combines color and structural information at multiple scales using a probability summation model to predict the perceived quality of the reconstructed background image given a reference image. To compare the performance of the proposed quality index with existing image quality assessment measures, we construct two different datasets consisting of reconstructed background images and corresponding subjective scores. The quality assessment measures are evaluated by correlating their objective scores with human subjective ratings. The correlation results show that the proposed RBQI outperforms all the existing approaches. Additionally, the constructed datasets and the corresponding subjective scores provide a benchmark for evaluating the performance of future metrics developed to assess the perceived quality of reconstructed background images.
(This article belongs to the Special Issue Detection of Moving Objects)
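The probability summation idea used for pooling across scales can be illustrated with a toy full-reference score. This is not the paper's RBQI: the error-to-probability mapping and its constants are invented for the example, and real indices use far richer color and structure features.

```python
import numpy as np

def toy_background_quality(ref, rec, scales=3, beta=3.0):
    """Toy multi-scale full-reference score with probability summation.

    Per scale, a detection probability p_s is mapped from the normalized
    mean absolute error; scales are pooled as P = 1 - prod_s (1 - p_s),
    i.e. a distortion is detected if it is detected at ANY scale.
    Lower P means better perceived quality."""
    ref = np.asarray(ref, dtype=np.float64)
    rec = np.asarray(rec, dtype=np.float64)
    probs = []
    for _ in range(scales):
        err = np.abs(ref - rec).mean() / 255.0            # normalized error
        probs.append(1.0 - np.exp(-(err ** beta) * 1e3))  # detection probability
        ref = ref[::2, ::2]                               # dyadic downsampling
        rec = rec[::2, ::2]
    p_none = 1.0
    for p_s in probs:
        p_none *= (1.0 - p_s)
    return 1.0 - p_none
```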

Open Access Article: Deep Learning with a Spatiotemporal Descriptor of Appearance and Motion Estimation for Video Anomaly Detection
J. Imaging 2018, 4(6), 79; https://doi.org/10.3390/jimaging4060079
Received: 26 March 2018 / Revised: 23 May 2018 / Accepted: 5 June 2018 / Published: 8 June 2018
Cited by 1
Abstract
The automatic detection and recognition of anomalous events in crowded and complex scenes on video are the research objectives of this paper. The main challenge is to create models for detecting such events, given their variability and their dependence on the context of the scene. To address these challenges, this paper proposes a novel HOME FAST (Histogram of Orientation, Magnitude, and Entropy with Fast Accelerated Segment Test) spatiotemporal feature extraction approach based on optical flow information to capture anomalies. This descriptor performs video analysis within the smart surveillance domain and detects anomalies. In deep learning, the training step learns all the normal patterns from high-level and low-level information. Events are described at testing time and, if they differ from the normal patterns, are considered anomalous. The overall proposed system robustly identifies both local and global abnormal events in complex scenes and handles detection under various transformations, improving on state-of-the-art approaches. The experimental evaluation showed that the proposed model can handle different anomalous events in a crowded scene and automatically recognize them.
(This article belongs to the Special Issue Detection of Moving Objects)
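One ingredient of such a descriptor, a magnitude-weighted histogram of optical-flow orientations, can be sketched as follows. This is a simplification: the entropy term and the FAST keypoint selection of HOME FAST are omitted, and the function name is illustrative.

```python
import numpy as np

def flow_histogram(flow_u, flow_v, n_bins=8):
    """Orientation/magnitude histogram over a dense optical flow field.

    Each flow vector votes into an orientation bin, weighted by its
    magnitude; the normalized histogram serves as a motion descriptor."""
    mag = np.hypot(flow_u, flow_v)                        # flow magnitude
    ang = np.arctan2(flow_v, flow_u) % (2 * np.pi)        # angle in [0, 2*pi)
    bins = np.minimum((ang / (2 * np.pi) * n_bins).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())            # magnitude-weighted votes
    total = hist.sum()
    return hist / total if total > 0 else hist            # normalized descriptor
```

An anomaly detector can then compare such histograms against those learned from normal motion patterns.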

Open Access Article: Analytics of Deep Neural Network-Based Background Subtraction
J. Imaging 2018, 4(6), 78; https://doi.org/10.3390/jimaging4060078
Received: 14 May 2018 / Revised: 5 June 2018 / Accepted: 5 June 2018 / Published: 8 June 2018
Cited by 1
Abstract
Deep neural network-based (DNN-based) background subtraction has demonstrated excellent performance for moving object detection. DNN-based background subtraction automatically learns background features from training images and outperforms conventional background modeling based on handcrafted features. However, previous works fail to detail why DNNs work well for change detection; such a discussion helps to understand the potential of DNNs in background subtraction and to improve them. In this paper, we directly observe the feature maps in all layers of the DNN used in our investigation. The DNN provides feature maps with the same resolution as that of the input image, which helps to analyze DNN behavior because the feature maps and the input image can be compared simultaneously. Furthermore, we analyzed which filters are important for detection accuracy by removing specific filters from the trained DNN. From the experiments, we found that the DNN consists of subtraction operations in convolutional layers and thresholding operations in bias layers, and that scene-specific filters are generated to suppress false positives from dynamic backgrounds. In addition, we discuss the characteristics and issues of the DNN based on our observations.
(This article belongs to the Special Issue Detection of Moving Objects)
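The observation that convolution, bias, and ReLU can together implement subtraction and thresholding can be illustrated with a toy computation. This is not the paper's trained network; the bias value is arbitrary, and the filter pair stands in for learned convolutional weights.

```python
import numpy as np

def thresholded_difference(frame, background, bias=-20.0):
    """Toy subtraction-and-threshold unit.

    A pair of linear filters computes +(frame - background) and
    -(frame - background); adding a negative bias and applying ReLU to
    each suppresses responses below |bias|. Summing the two branches
    yields approximately max(|difference| - |bias|, 0)."""
    diff = frame - background                  # "subtraction" filter output
    pos = np.maximum(diff + bias, 0.0)         # ReLU(+diff + bias)
    neg = np.maximum(-diff + bias, 0.0)        # ReLU(-diff + bias)
    return pos + neg                           # thresholded absolute difference
```

Small fluctuations (e.g. from a dynamic background) fall below the bias-induced threshold and produce no response, matching the false-positive suppression the paper reports.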

Open Access Article: Background Subtraction for Moving Object Detection in RGBD Data: A Survey
J. Imaging 2018, 4(5), 71; https://doi.org/10.3390/jimaging4050071
Received: 16 April 2018 / Revised: 7 May 2018 / Accepted: 9 May 2018 / Published: 16 May 2018
Cited by 2
Abstract
This paper provides a specific perspective on background subtraction for moving object detection as a building block for many computer vision applications, it being the first relevant step for subsequent recognition, classification, and activity analysis tasks. Since color information is not sufficient for dealing with problems such as sudden light switches, local gradual changes of illumination, shadows cast by foreground objects, and color camouflage, additional information needs to be captured to deal with these issues. Synchronized depth information acquired by low-cost RGBD sensors is considered in this paper to give evidence about which issues can be solved, but also to highlight new challenges and design opportunities in several applications and research areas.
(This article belongs to the Special Issue Detection of Moving Objects)
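A naive color-plus-depth decision rule, of the kind the surveyed methods refine, might look as follows. This is illustrative only: the thresholds tc and td, the function name, and the handling of missing depth readings are assumptions, not taken from any surveyed method.

```python
import numpy as np

def rgbd_foreground(frame_rgb, bg_rgb, frame_d, bg_d, tc=30.0, td=0.1):
    """Naive RGB-D background subtraction: a pixel is foreground if it
    differs from the background model in color OR in depth. Depth
    resolves color camouflage (same color, different distance), while
    color covers regions where depth is missing or unchanged."""
    color_diff = np.linalg.norm(
        frame_rgb.astype(np.float64) - bg_rgb.astype(np.float64), axis=-1)
    valid = (frame_d > 0) & (bg_d > 0)       # ignore missing depth readings
    depth_diff = np.abs(frame_d - bg_d)
    return (color_diff > tc) | (valid & (depth_diff > td))
```

Note that shadows change color but not depth, so a refinement of this rule can also use depth agreement to reject shadow pixels that the color test would wrongly flag.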
