Open Access Article

Using Vehicle Synthesis Generative Adversarial Networks to Improve Vehicle Detection in Remote Sensing Images

Faculty of Information Technology, Beijing University of Technology, No.100, Pingleyuan Road, Beijing 100124, China
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2019, 8(9), 390; https://doi.org/10.3390/ijgi8090390
Received: 28 July 2019 / Revised: 23 August 2019 / Accepted: 29 August 2019 / Published: 4 September 2019
(This article belongs to the Special Issue Deep Learning and Computer Vision for GeoInformation Sciences)
Vehicle detection based on very high-resolution (VHR) remote sensing images is beneficial in many fields, such as military surveillance, traffic control, and social/economic studies. However, the intricate details of vehicles and the surrounding background provided by VHR images require sophisticated analysis based on massive data samples, while the amount of reliably labeled training data is limited. In practice, data augmentation is often leveraged to resolve this conflict. The traditional data augmentation strategy uses combinations of rotation, scaling, and flipping transformations, and has limited capability to capture the underlying feature distribution or to improve data diversity. In this study, we propose a learning method named Vehicle Synthesis Generative Adversarial Networks (VS-GANs) to generate annotated vehicles from remote sensing images. The proposed framework has one generator and two discriminators, which synthesize realistic vehicles and learn the background context simultaneously. The method can quickly generate high-quality annotated vehicle data samples and greatly aids the training of vehicle detectors. Experimental results show that the proposed framework can synthesize vehicles and their background images with variations and different levels of detail. Compared with traditional data augmentation methods, the proposed method significantly improves the generalization capability of vehicle detectors. Finally, the contribution of VS-GANs to vehicle detection in VHR remote sensing images was demonstrated in experiments conducted on the UCAS-AOD and NWPU VHR-10 datasets using up-to-date target detection frameworks.
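
As a rough illustration of the adversarial setup described above (one generator paired with a vehicle discriminator and a background-context discriminator), the following PyTorch sketch shows how such a training step could be wired together. This is not the authors' implementation: the network sizes, the 32 x 32 patch resolution, the exact role assigned to the context discriminator, and the equal loss weighting are all assumptions made for the example.

import torch
import torch.nn as nn

class Generator(nn.Module):
    # Maps a noise vector to a small synthetic vehicle patch (3 x 32 x 32 here).
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * 32 * 32), nn.Tanh())

    def forward(self, z):
        return self.net(z).view(-1, 3, 32, 32)

class Discriminator(nn.Module):
    # Scores a patch as real or generated; the same architecture is reused
    # for both the vehicle critic and the background-context critic.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 32 * 32, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1))

    def forward(self, x):
        return self.net(x)

def train_step(G, D_vehicle, D_context, opt_g, opt_d,
               real_vehicles, real_context, z_dim=100):
    # One adversarial update. D_vehicle judges vehicle realism and D_context
    # judges agreement with the surrounding background (an assumed split of
    # roles based on the abstract); G is then updated to fool both critics.
    bce = nn.BCEWithLogitsLoss()
    b = real_vehicles.size(0)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)

    # Discriminator step: real patches labeled 1, generated patches labeled 0.
    fake = G(torch.randn(b, z_dim)).detach()
    d_loss = (bce(D_vehicle(real_vehicles), ones) + bce(D_vehicle(fake), zeros)
              + bce(D_context(real_context), ones) + bce(D_context(fake), zeros))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make both discriminators output "real".
    fake = G(torch.randn(b, z_dim))
    g_loss = bce(D_vehicle(fake), ones) + bce(D_context(fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

In such a sketch, opt_d would typically hold the parameters of both discriminators, e.g. torch.optim.Adam(list(D_vehicle.parameters()) + list(D_context.parameters()), lr=2e-4), so that a single optimizer step updates both critics.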
Keywords: vehicle detection; remote sensing; deep learning; generative adversarial network; data augmentation
MDPI and ACS Style

Zheng, K.; Wei, M.; Sun, G.; Anas, B.; Li, Y. Using Vehicle Synthesis Generative Adversarial Networks to Improve Vehicle Detection in Remote Sensing Images. ISPRS Int. J. Geo-Inf. 2019, 8, 390.

