Special Issue "Detection of Moving Objects"

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: closed (30 April 2018)

Special Issue Editor

Guest Editor
Prof. Thierry Bouwmans

Laboratory MIA, University of La Rochelle, 17000 La Rochelle, France
Website | E-Mail
Phone: 0546457202
Interests: background subtraction; background modeling; foreground detection; fuzzy theory; Dempster-Shafer theory; robust PCA; deep learning models

Special Issue Information

Dear Colleagues,

The detection of moving objects is one of the most important steps in video processing, with applications in video surveillance, optical motion capture, multimedia, teleconferencing, video editing, human-computer interfaces, and more. The last two decades have seen a very significant body of publications on detecting moving objects in video taken by static cameras. Recently, however, new applications in which the background is not static, such as recordings taken from drones, UAVs, or Internet videos, have called for new developments to robustly detect moving objects in challenging environments. Effective methods are therefore needed that are robust to both dynamic backgrounds and illumination changes in real scenes, whether captured with fixed cameras or mobile devices, and different models must be brought to bear, such as advanced statistical models, fuzzy models, robust subspace learning models, and deep learning models.
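To make the background-subtraction pipeline referred to above concrete, here is a minimal sketch in Python/NumPy. It is a simple running-average background model with per-pixel thresholding, offered purely as an illustrative baseline and not as any specific method from this Special Issue; all names and parameter values are assumptions.

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Running-average background model: blend the new frame in slowly."""
    return (1.0 - alpha) * background + alpha * frame

def foreground_mask(background, frame, threshold=25.0):
    """Pixels whose absolute difference from the background exceeds
    the threshold are labelled foreground (True)."""
    return np.abs(frame.astype(np.float64) - background) > threshold

# Toy sequence: a static 8x8 scene into which a bright object enters.
background = np.full((8, 8), 50.0)      # learned background level
frame = background.copy()
frame[2:4, 2:4] = 200.0                 # moving object appears

mask = foreground_mask(background, frame)           # detect first,
background = update_background(background, frame)   # then adapt the model
```

Such a baseline fails precisely in the challenging cases named above (dynamic backgrounds, illumination changes), which is what motivates the statistical, fuzzy, subspace, and deep models this issue targets.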

The intent of this Special Issue is to provide: (1) new approaches to the detection of moving objects; (2) new strategies to improve foreground detection algorithms so that they can tackle critical scenarios, such as dynamic backgrounds, illumination changes, night videos, and low-frame-rate videos; and (3) new adaptive and incremental algorithms to achieve real-time applications.

This Special Issue is primarily focused on the following topics; however, we encourage all submissions related to the detection of moving objects in videos taken by static or moving cameras:

  • Background initialization
  • Background subtraction
  • Background modeling
  • Foreground detection
  • Feature selection
  • Statistical, fuzzy, and Dempster-Shafer concepts for detection of moving objects
  • Robust subspace learning models (RPCA, etc.)
  • Deep learning models
  • HD cameras, IR cameras, light field cameras, RGB-D cameras
  • Drones, UAVs
  • Real-time implementations (GPU, FPGA, etc.)

Prof. Thierry Bouwmans
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) is waived for well-prepared manuscripts submitted to this issue. Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Background initialization
  • Background subtraction
  • Background modeling
  • Foreground detection
  • Feature selection
  • Statistical, fuzzy, and Dempster-Shafer concepts for detection of moving objects
  • Robust subspace learning models (RPCA, etc.)
  • Deep learning models
  • HD cameras, IR cameras
  • Light field cameras, RGB-D cameras
  • Drones, UAVs
  • Real-time implementations (GPU, FPGA, etc.)

Published Papers (3 papers)


Research

Open Access Article: Deep Learning with a Spatiotemporal Descriptor of Appearance and Motion Estimation for Video Anomaly Detection
J. Imaging 2018, 4(6), 79; https://doi.org/10.3390/jimaging4060079
Received: 26 March 2018 / Revised: 23 May 2018 / Accepted: 5 June 2018 / Published: 8 June 2018
Abstract
The automatic detection and recognition of anomalous events in crowded and complex video scenes are the research objectives of this paper. The main challenge is to create models for detecting such events despite their changeability and their dependence on the context of the scene. To address these challenges, this paper proposes a novel HOME FAST (Histogram of Orientation, Magnitude, and Entropy with Fast Accelerated Segment Test) spatiotemporal feature extraction approach based on optical flow information to capture anomalies. This descriptor performs video analysis within the smart surveillance domain and detects anomalies. In deep learning, the training step learns all the normal patterns from the high-level and low-level information. The events are described in testing and, if they differ from the normal pattern, are considered anomalous. The overall proposed system robustly identifies both local and global abnormal events in complex scenes and solves the problem of detection under various transformations with respect to state-of-the-art approaches. The performance assessment of the simulation outcome validated that the proposed model could handle different anomalous events in a crowded scene and automatically recognize anomalous events with success.
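The HOME FAST descriptor itself is detailed in the paper; as an illustration of the general idea of pooling optical-flow vectors into an orientation histogram weighted by motion magnitude, a toy sketch might look as follows. All function names and parameters here are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def flow_histogram(flow_x, flow_y, n_bins=8):
    """Quantize per-pixel optical-flow vectors into an orientation
    histogram weighted by motion magnitude (descriptor-style pooling)."""
    magnitude = np.hypot(flow_x, flow_y)
    orientation = np.arctan2(flow_y, flow_x)              # in [-pi, pi]
    bin_idx = ((orientation + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bin_idx.ravel(), magnitude.ravel())   # magnitude-weighted vote
    return hist / (hist.sum() + 1e-8)                     # L1-normalize

# Toy flow field: every pixel moving to the right at unit speed.
fx = np.ones((4, 4))
fy = np.zeros((4, 4))
hist = flow_histogram(fx, fy)
```

Anomaly detectors of this family then compare such histograms against those learned from normal motion patterns.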
(This article belongs to the Special Issue Detection of Moving Objects)

Open Access Article: Analytics of Deep Neural Network-Based Background Subtraction
J. Imaging 2018, 4(6), 78; https://doi.org/10.3390/jimaging4060078
Received: 14 May 2018 / Revised: 5 June 2018 / Accepted: 5 June 2018 / Published: 8 June 2018
Abstract
Deep neural network-based (DNN-based) background subtraction has demonstrated excellent performance for moving object detection. DNN-based background subtraction automatically learns the background features from training images and outperforms conventional background modeling based on handcrafted features. However, previous works fail to detail why DNNs work well for change detection. This discussion helps to understand the potential of DNNs in background subtraction and to improve them. In this paper, we directly observe feature maps in all layers of the DNN used in our investigation. The DNN provides feature maps with the same resolution as the input image. These feature maps help to analyze DNN behaviors because the feature maps and the input image can be compared simultaneously. Furthermore, we analyzed which filters matter for detection accuracy by removing specific filters from the trained DNN. From the experiments, we found that the DNN consists of subtraction operations in convolutional layers and thresholding operations in bias layers, and that scene-specific filters are generated to suppress false positives from dynamic backgrounds. In addition, we discuss the characteristics and issues of the DNN based on our observations.
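The paper's finding that the network effectively performs subtraction in convolutional layers and thresholding in bias layers can be sketched with a toy example (purely illustrative, not the authors' trained network): a 1x1 convolution with weights (+1, -1) over a (frame, background) channel pair, followed by a negative bias and a ReLU, reproduces a thresholded difference.

```python
import numpy as np

def conv_subtract_threshold(frame, background, bias=-20.0):
    """A 1x1 convolution over a two-channel (frame, background) input
    with weights (+1, -1) computes a per-pixel difference; adding a
    negative bias and applying ReLU acts as a threshold."""
    weights = np.array([1.0, -1.0])                   # (frame, background)
    stacked = np.stack([frame, background], axis=-1)  # H x W x 2
    diff = stacked @ weights                          # subtraction in the conv
    return np.maximum(diff + bias, 0.0)               # bias + ReLU = threshold

frame = np.full((4, 4), 50.0)
frame[1, 1] = 100.0                                   # one foreground pixel
background = np.full((4, 4), 50.0)
response = conv_subtract_threshold(frame, background)
```

Only the pixel whose difference from the background exceeds the (negated) bias survives the ReLU, mirroring the subtraction-then-threshold behavior the paper observes.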
(This article belongs to the Special Issue Detection of Moving Objects)

Open Access Article: Background Subtraction for Moving Object Detection in RGBD Data: A Survey
J. Imaging 2018, 4(5), 71; https://doi.org/10.3390/jimaging4050071
Received: 16 April 2018 / Revised: 7 May 2018 / Accepted: 9 May 2018 / Published: 16 May 2018
Abstract
The paper provides a specific perspective on background subtraction for moving object detection as a building block for many computer vision applications, being the first relevant step for subsequent recognition, classification, and activity analysis tasks. Since color information is not sufficient for dealing with problems like light switches or gradual local changes of illumination, shadows cast by foreground objects, and color camouflage, new information needs to be exploited to deal with these issues. Synchronized depth information acquired by low-cost RGBD sensors is considered in this paper to give evidence about which issues can be solved, but also to highlight new challenges and design opportunities in several applications and research areas.
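The survey's motivating case, color camouflage resolved by depth, can be illustrated with a minimal sketch. This is an assumed OR-fusion of independent color and depth tests for illustration only, not any specific method from the survey.

```python
import numpy as np

def rgbd_foreground(color, depth, bg_color, bg_depth,
                    color_thresh=25.0, depth_thresh=0.1):
    """Label a pixel foreground if EITHER its color or its depth departs
    from the background model; the depth test rescues color-camouflaged
    objects that the color test alone would miss."""
    color_fg = np.abs(color - bg_color) > color_thresh
    depth_fg = np.abs(depth - bg_depth) > depth_thresh
    return color_fg | depth_fg

# Toy scene: an object with the same color as the wall (camouflage)
# but standing 0.5 m in front of it.
bg_color = np.full((4, 4), 120.0)
bg_depth = np.full((4, 4), 3.0)          # metres
color = bg_color.copy()                  # camouflaged: no color change
depth = bg_depth.copy()
depth[1:3, 1:3] = 2.5                    # object is closer to the camera

mask = rgbd_foreground(color, depth, bg_color, bg_depth)
```

A color-only test would return an empty mask here; the depth channel is what detects the object, which is exactly the kind of evidence the survey discusses.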
(This article belongs to the Special Issue Detection of Moving Objects)
