Domain Adversarial Neural Networks for Large-Scale Land Cover Classification
Abstract
Learning classification models requires sufficiently labeled training samples; however, collecting labeled samples for every new problem is time-consuming and costly. An alternative is to transfer knowledge from one problem to another, an approach called transfer learning. Domain adaptation (DA) is a type of transfer learning that aims to find a new latent space in which the discrepancy between the source and target domains is negligible. In this work, we propose an unsupervised DA technique called domain adversarial neural networks (DANNs), composed of a feature extractor, a class predictor, and a domain classifier block, for large-scale land cover classification. Contrary to traditional methods that perform representation learning and classifier learning in separate stages, DANNs combine them into a single stage, thereby learning a new representation of the input data that is both domain-invariant and discriminative. Once trained, the classifier of a DANN can be used to predict both source and target domain labels. Additionally, we modify the domain classifier of a DANN to evaluate its suitability for multi-target domain adaptation problems. Experimental results obtained for both single- and multiple-target DA problems show that the proposed method provides a performance gain of up to 40%.
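The three-block architecture described in the abstract is commonly implemented with a gradient reversal layer between the feature extractor and the domain classifier: the forward pass is the identity, while the backward pass negates the gradient, so the extractor learns features that confuse the domain classifier while remaining discriminative for the class predictor. The sketch below is a minimal, hypothetical PyTorch illustration of that idea (the layer sizes, `lam` scaling factor, and module names are illustrative assumptions, not the paper's exact configuration):

```python
import torch
from torch import nn


class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales by lam) the gradient on backward."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradient flows to the feature extractor; lam needs no gradient.
        return -ctx.lam * grad_output, None


class DANN(nn.Module):
    """Illustrative DANN: feature extractor + class predictor + domain classifier."""

    def __init__(self, in_dim=64, feat_dim=32, n_classes=5, lam=1.0):
        super().__init__()
        self.lam = lam
        self.feature_extractor = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.class_predictor = nn.Linear(feat_dim, n_classes)  # land-cover labels
        self.domain_classifier = nn.Linear(feat_dim, 2)        # source vs. target

    def forward(self, x):
        feat = self.feature_extractor(x)
        class_logits = self.class_predictor(feat)
        # Gradient reversal drives the extractor toward domain-invariant features.
        domain_logits = self.domain_classifier(GradientReversal.apply(feat, self.lam))
        return class_logits, domain_logits


model = DANN()
x = torch.randn(8, 64)                       # a toy batch of 8 feature vectors
class_logits, domain_logits = model(x)
```

In training, the class-prediction loss is computed on labeled source samples only, while the domain-classification loss uses both source and target samples; a single backward pass then updates all three blocks jointly, which is the one-stage learning the abstract contrasts with separate-stage methods.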
Share & Cite This Article
Bejiga, M.B.; Melgani, F.; Beraldini, P. Domain Adversarial Neural Networks for Large-Scale Land Cover Classification. Remote Sens. 2019, 11, 1153.