Open Access Article

Invertible Autoencoder for Domain Adaptation

Department of Electrical and Computer Engineering, NYU Tandon School of Engineering, 5 MetroTech Center, Brooklyn, NY 11201, USA
Authors to whom correspondence should be addressed.
Computation 2019, 7(2), 20;
Received: 17 December 2018 / Revised: 8 March 2019 / Accepted: 21 March 2019 / Published: 27 March 2019
(This article belongs to the Special Issue Machine Learning for Computational Science and Engineering)
Unsupervised image-to-image translation aims at finding a mapping between a source (A) and a target (B) image domain when, as in many applications, aligned image pairs are not available during training. This is an ill-posed learning problem, since it requires inferring a joint probability distribution from its marginals. State-of-the-art methods such as CycleGAN jointly learn the coupled mappings F_AB : A → B and F_BA : B → A and introduce a cycle-consistency requirement into the learning problem, i.e., F_AB(F_BA(B)) ≈ B and F_BA(F_AB(A)) ≈ A. Cycle consistency enforces the preservation of mutual information between input and translated images; however, it does not explicitly enforce F_BA to be the inverse operation of F_AB. We propose a new deep architecture, which we call the invertible autoencoder (InvAuto), that explicitly enforces this relation. This is done by forcing the encoder to be an inverted version of the decoder, where corresponding layers perform opposite mappings and share parameters, and the mappings are constrained to be orthonormal. The resulting architecture reduces the number of trainable parameters by up to a factor of 2. We present image translation results on benchmark datasets and demonstrate state-of-the-art performance of our approach. Finally, we test the proposed domain adaptation method on the task of road video conversion. We demonstrate that videos converted with InvAuto are of high quality, and we show that PilotNet, the NVIDIA neural-network-based end-to-end learning system for autonomous driving, trained on real road videos, performs well when tested on the converted ones.
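The core idea described in the abstract can be illustrated with a minimal NumPy sketch (this is an illustration, not the authors' implementation): when an encoder layer applies an orthonormal matrix W, a decoder layer that shares those parameters and applies W^T is its exact inverse, since W^T W = I. In the sketch below, orthonormality is imposed by construction via a QR factorization, whereas InvAuto trains deep networks toward this property; the names `encode` and `decode` are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared weight matrix for the encoder/decoder layer pair.
# QR factorization yields an orthonormal W (W^T W = I), so the decoder,
# which reuses the same parameters transposed, exactly inverts the encoder.
W, _ = np.linalg.qr(rng.standard_normal((8, 8)))

def encode(x):
    # Encoder layer: y = W x
    return W @ x

def decode(y):
    # Decoder layer with shared parameters: x_hat = W^T y
    return W.T @ y

x = rng.standard_normal(8)
x_hat = decode(encode(x))
print(np.allclose(x, x_hat))  # True: the decoder inverts the encoder
```

Note the contrast with cycle consistency, which only penalizes the reconstruction error F_BA(F_AB(A)) − A during training: here invertibility holds structurally, and the parameter sharing is what halves the trainable parameter count.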
Keywords: image-to-image translation; autoencoder; invertible autoencoder
Figure 1

MDPI and ACS Style

Teng, Y.; Choromanska, A. Invertible Autoencoder for Domain Adaptation. Computation 2019, 7, 20.
Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
