Invertible Autoencoder for domain adaptation

Unsupervised image-to-image translation aims at finding a mapping between the source ($A$) and target ($B$) image domains, where in many applications aligned image pairs are not available at training time. This is an ill-posed learning problem, since it requires inferring the joint probability distribution from marginals. Joint learning of coupled mappings $F_{AB}: A \rightarrow B$ and $F_{BA}: B \rightarrow A$ is commonly used by state-of-the-art methods, like CycleGAN [Zhu et al., 2017], to learn this translation by introducing a cycle consistency requirement to the learning problem, i.e. $F_{AB}(F_{BA}(B)) \approx B$ and $F_{BA}(F_{AB}(A)) \approx A$. Cycle consistency enforces the preservation of the mutual information between input and translated images. However, it does not explicitly enforce $F_{BA}$ to be an inverse operation to $F_{AB}$. We propose a new deep architecture that we call the invertible autoencoder (InvAuto) to explicitly enforce this relation. This is done by forcing the encoder to be an inverted version of the decoder, where corresponding layers perform opposite mappings and share parameters. The mappings are constrained to be orthonormal. The resulting architecture leads to a reduction of the number of trainable parameters (up to $2$ times). We present image translation results on benchmark data sets and demonstrate state-of-the-art performance of our approach. Finally, we test the proposed domain adaptation method on the task of road video conversion. We demonstrate that the videos converted with InvAuto have high quality, and show that the NVIDIA neural-network-based end-to-end learning system for autonomous driving, known as PilotNet, trained on real road videos performs well when tested on the converted ones.


Introduction
The inter-domain translation problem of converting an instance, e.g. an image or video, from one domain to another is applicable to a wide variety of learning tasks, including object detection and recognition, image categorization, sentiment analysis, action recognition, speech recognition, and more. High-quality domain translators ensure that an arbitrary learning model trained on samples from the source domain can perform well when tested on the translated samples. The translation problem can be posed in the supervised learning framework (e.g., Wang et al., 2017), where the learner has access to corresponding pairs of instances from both domains, or in the unsupervised learning framework (e.g., Liu et al., 2017), where no such paired instances are available. This paper focuses on the latter case, which is more difficult but at the same time more realistic, as acquiring a data set of paired images is often impossible in practice.
Unsupervised domain adaptation is typically solved in the generative adversarial network (GAN) framework (Goodfellow et al., 2014), where the generator performs domain translation and is trained to learn the mapping from the source to the target domain, and the discriminator is trained to discriminate between original images from the target domain and those provided by the generator. In this setting, the generator usually has the structure of an autoencoder. The two most common state-of-the-art domain adaptation approaches, CycleGAN (Zhu et al., 2017) and UNIT (Liu et al., 2017), are built on this basic approach. CycleGAN addresses the problem of adaptation from domain $A$ to domain $B$ by training two translation networks, where one realizes the mapping $F_{AB}$ and the other realizes $F_{BA}$. The cycle consistency loss ensures the correlation between an input image and its corresponding translation. In particular, to achieve cycle consistency, CycleGAN trains two autoencoders, where each minimizes its own adversarial loss and they both jointly minimize the cycle consistency loss. Cycle consistency loss is also incorporated into recent implementations of UNIT. It is implicitly assumed that the model will learn the mappings $F_{AB}$ and $F_{BA}$ in such a way that $F_{AB} = F_{BA}^{-1}$, however this is not explicitly imposed. Consider a simple example. Assume the first autoencoder is a 2-layer linear multi-layer perceptron (MLP), where the weight matrix of the first layer (encoder) is denoted as $E_1$ and the weight matrix of the second layer (decoder) is denoted as $D_1$. Thus, for an input $x_A \in A$ it outputs $y_B(x_A) = D_1 E_1 x_A$. The second autoencoder then is a 2-layer MLP with encoder weight matrix $E_2$ and decoder weight matrix $D_2$ that for an input data point $x_B$ should produce output $y_A(x_B) = D_2 E_2 x_B$. To satisfy the cycle consistency requirement, the following should hold: $y_A(y_B(x_A)) = x_A$ and $y_B(y_A(x_B)) = x_B$. These two conditions are equivalent to $D_2 E_2 D_1 E_1 = I$ and $D_1 E_1 D_2 E_2 = I$.
This holds for example when $D_1 = E_2^{-1}$ and $D_2 = E_1^{-1}$. In contrast to this approach, we explicitly require $F_{AB} = F_{BA}^{-1}$. Thus, in the context of the given simple example, we correlate encoders and decoders to satisfy the inversion conditions $D_1 = E_2^{-1}$ and $D_2 = E_1^{-1}$. We avoid performing prohibitive inversions of large matrices and instead guarantee these conditions through two steps: (i) introducing a shared parametrization of encoder $E_2$ and decoder $D_1$ such that $D_1 = E_2^\top$ ($E_1$ and $D_2$ are treated similarly) and (ii) appropriate training to achieve orthonormality, $E_2^\top = E_2^{-1}$ and $E_1^\top = E_1^{-1}$, i.e. we train autoencoder $(E_2, D_1)$ to satisfy $D_1 E_2 x_B = x_B$ for an arbitrary input $x_B$ and autoencoder $(E_1, D_2)$ to satisfy $D_2 E_1 x_A = x_A$ for an arbitrary input $x_A$. Since the encoder and decoder are coupled as given in (i), such training leads to satisfying the inversion conditions. Practical networks contain linear and non-linear transformations. We therefore propose specific architectures that are invertible. Figures 1 and 2 (see also Figure 17 in the Supplement) illustrate the basic idea behind InvAuto. The plots were obtained by training a single autoencoder $(E, D)$ to reconstruct its input. InvAuto has shared weights satisfying $D = E^\top$ and inverted non-linearities, and clearly obtains a matrix $DE$ that is the closest to identity compared to the other methods, i.e. the vanilla autoencoder (Auto), the autoencoder with cycle consistency (Cycle), and the variational autoencoder (VAE) (Kingma & Welling, 2014). Note also that at the same time InvAuto requires half the number of trainable parameters. This paper is organized as follows: Section 2 reviews the literature, Section 3 explains InvAuto in detail, Section 4 explains how to apply InvAuto to domain adaptation, Section 5 demonstrates experimental verification of the proposed approach, and Section 6 provides conclusions.
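The simple linear example above can be checked numerically. The following is an illustrative sketch (not the paper's training code): it draws two orthonormal encoders, ties each decoder to the other encoder's transpose, and verifies that both cycle consistency conditions then hold by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_orthonormal(n):
    # QR decomposition of a random Gaussian matrix gives an orthonormal Q
    q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    return q

n = 8
E1 = random_orthonormal(n)  # encoder of the first autoencoder
E2 = random_orthonormal(n)  # encoder of the second autoencoder

# Shared parametrization: each decoder is the transpose of the other
# encoder, so orthonormality (E^T = E^-1) makes it an exact inverse.
D1 = E2.T
D2 = E1.T

# Both cycle consistency conditions hold exactly:
I = np.eye(n)
print(np.allclose(D2 @ E2 @ D1 @ E1, I), np.allclose(D1 @ E1 @ D2 @ E2, I))
```

In this idealized linear setting no cycle consistency loss is needed at all: the inversion conditions are satisfied by the parametrization itself, which is the intuition InvAuto builds on.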

Related Work
Unsupervised image-to-image translation models were developed to tackle the domain adaptation problem with unpaired data sets. A plethora of existing approaches utilize autoencoders trained in the GAN framework, where the autoencoder serves as the generator. This includes approaches based on conditional GANs (Dong et al., 2017; Wang et al., 2017) and methods introducing additional components to the loss function that force partial cycle consistency (Taigman et al., 2016). Another approach (Liu & Tuzel, 2016) introduces two coupled GANs, where each generator is an autoencoder and the coupling is obtained by sharing a subset of weights between the autoencoders as well as between the discriminators. This technique was later extended to utilize variational autoencoders as generators (Liu et al., 2017). The resulting approach is commonly known as UNIT. CycleGAN presents yet another way of addressing image-to-image translation, via a specific training scheme that preserves the mutual information between input and translated images (Vincent et al., 2008). Both UNIT and CycleGAN constitute the most popular choices for performing image-to-image translation.
There also exist other learning tasks that can be viewed as instances of the image-to-image translation problem. Among them, notable approaches focus on style transfer (Gatys et al., 2016b; Johnson et al., 2016; Ulyanov et al., 2016; Gatys et al., 2016a). They aim at preserving the content of the input image while altering its style to mimic the style of the images from the target domain. This goal is achieved by introducing content and style loss functions that are jointly optimized. Finally, inverse problems, such as super-resolution, also fall into the category of image-to-image translation problems (McCann et al., 2017).

Invertible autoencoder
Here we explain the details of the architecture of InvAuto.
The architecture needs to be symmetric to allow invertibility: if $T_1, T_2, \ldots, T_M$ denote the subsequent transformations of the signal propagated through the network ($M$ is their total number), the decoder applies their inversions in reverse order, $T_M^{-1}, T_{M-1}^{-1}, \ldots, T_1^{-1}$. Thus, the architecture is inverted layer by layer, where any layer of the encoder has its mirror inverted counterpart in the decoder. The autoencoder is trained to reconstruct its input. Below we explain how to invert different types of layers of the deep model.

Fully-connected layer
Consider the transformation $T^E$ of an input signal performed by an arbitrary fully-connected layer of an encoder $E$ parametrized with weight matrix $W$. Let $x$ denote the layer's input and $y$ denote its output. Thus

$$y = Wx.$$

An inverse operation is then defined as

$$x = W^{-1}y.$$

We parametrize the counterpart layer of the decoder with the transpose of $W$, thus the considered encoder and decoder layers share parametrization. Therefore, we enforce the counterpart decoder's layer to perform the transformation

$$T^D: \quad x = W^\top y.$$

By training the autoencoder to reconstruct its input on its output we enforce orthonormality, $W^{-1} = W^\top$, and thus the equivalence of transformations $(T^E)^{-1}$ and $T^D$, i.e. $(T^E)^{-1} \equiv T^D$.
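To illustrate how reconstruction training can induce orthonormality, the sketch below (a simplification, not the paper's optimizer: inputs are marginalized out to identity covariance, giving the expected gradient in closed form) runs plain gradient descent on a tied-weight linear layer and checks that $W^\top W$ approaches the identity.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

# Encoder weight W; the decoder is tied to W.T. For inputs with identity
# covariance, the expected gradient of the reconstruction loss
# ||W.T W x - x||^2 is proportional to W (W.T W - I), so gradient descent
# drives W toward an orthonormal matrix.
W = np.eye(n) + 0.3 * rng.normal(size=(n, n))
for _ in range(500):
    W -= 0.1 * W @ (W.T @ W - np.eye(n))

err = np.abs(W.T @ W - np.eye(n)).max()
print(err)  # approaches 0 as W becomes orthonormal
```

Note the fixed point $W^\top W = I$ is exactly the condition under which the tied decoder $W^\top$ inverts the encoder.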

Convolutional layer
Consider the transformation $T^E$ of an input image performed by an arbitrary convolutional layer of an encoder $E$. Let $x$ denote the layer's vectorized input image and $y$ the corresponding output. A 2D convolution can be implemented as a matrix multiplication involving a Toeplitz matrix (Vasudevan et al., 2017), where the Toeplitz matrix is obtained from the set of kernels of the 2D convolutional filters. Thus the transformation $T^E$ and its inverse $(T^E)^{-1}$ can be described with the same equations as before, where now $W$ is a Toeplitz matrix. We again parametrize the counterpart layer of the decoder with the transpose of the Toeplitz matrix $W$. The transpose of the Toeplitz matrix is in practice obtained by copying the weights from the considered convolutional layer to the counterpart decoder's layer, which is implemented as a transposed convolutional layer (also known as a deconvolutional layer).
Therefore, as before, we enforce the counterpart decoder's layer to perform the transformation $T^D: x = W^\top y$ and by appropriate training ensure $(T^E)^{-1} \equiv T^D$.
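The Toeplitz view can be made concrete in a 1D analogue (an illustration only; the paper's layers are 2D): the sketch below builds the Toeplitz matrix of a valid cross-correlation, applies it as the encoder layer, and uses its transpose as the tied "transposed convolution" decoder layer.

```python
import numpy as np

def conv_toeplitz(kernel, n_in):
    # Toeplitz matrix of a 1D "valid" cross-correlation (the convolution
    # convention used in deep learning): each row is the kernel, shifted.
    k = len(kernel)
    n_out = n_in - k + 1
    W = np.zeros((n_out, n_in))
    for i in range(n_out):
        W[i, i:i + k] = kernel
    return W

kernel = np.array([1.0, -2.0, 0.5])
x = np.arange(8, dtype=float)

W = conv_toeplitz(kernel, len(x))
y = W @ x        # convolutional encoder layer as a matrix product
x_up = W.T @ y   # tied decoder layer: the transposed convolution

# The matrix product matches numpy's correlation routine:
assert np.allclose(y, np.correlate(x, kernel, mode='valid'))
```

Copying `kernel` into a transposed-convolution layer is exactly the weight sharing described above: multiplying by `W.T` is what a deconvolutional layer with the same filters computes.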

Activation function
An invertible activation function must be a bijection. In this paper we consider a modified LeakyReLU activation function $\sigma$ and use only this non-linearity in the model. Consider the transformation $T^E$ of an input signal performed by this non-linearity applied in the encoder $E$. With a negative-part slope of $1/\alpha$, the non-linearity is defined as

$$\sigma(x) = \begin{cases} x & \text{if } x \geq 0, \\ x/\alpha & \text{otherwise,} \end{cases}$$

and its inverse operation is then defined as

$$\sigma^{-1}(y) = \begin{cases} y & \text{if } y \geq 0, \\ \alpha y & \text{otherwise.} \end{cases}$$

The corresponding non-linearity in the decoder therefore realizes the operation of the inverted modified LeakyReLU. In the experiments we set $\alpha = 2$.

Residual block

Consider the transformation $T^E$ of an input signal performed by a residual block (He et al., 2016) of an encoder $E$. We modify the residual block to remove the internal non-linearity, as shown in Figure 3a. The residual block is parametrized with weight matrices $W_1$ and $W_2$. These are Toeplitz matrices corresponding to the convolutional and transposed convolutional layers of the residual block. Let $x$ denote the block's vectorized input and $y$ its corresponding output. Thus the transformation $T^E$ is defined as

$$y = (W_2 W_1 + I)x.$$

An inverse operation is then defined as

$$x = (W_2 W_1 + I)^{-1} y.$$

We parametrize the counterpart residual block of the decoder with the transpose of the matrix $W_2 W_1 + I$, as shown in Figure 3b. Therefore we enforce the counterpart decoder's residual block to perform the transformation

$$T^D: \quad x = (W_2 W_1 + I)^\top y.$$

Similarly as before, training enforces the orthonormality $(W_2 W_1 + I)^{-1} = (W_2 W_1 + I)^\top$.
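The activation pair can be sketched as follows. The exact slope convention (negative-part slope $1/\alpha$ in the encoder, $\alpha$ in the decoder) is our assumption about the "modified LeakyReLU", with $\alpha = 2$ as in the experiments; any such pair inverts exactly.

```python
import numpy as np

ALPHA = 2.0  # slope parameter; the paper's experiments use alpha = 2

def mod_leaky_relu(x, alpha=ALPHA):
    # Assumed form of the modified LeakyReLU used in the encoder:
    # identity on the positive part, slope 1/alpha on the negative part.
    return np.where(x >= 0, x, x / alpha)

def mod_leaky_relu_inv(y, alpha=ALPHA):
    # Its exact inverse, applied at the mirrored position in the decoder.
    return np.where(y >= 0, y, y * alpha)

x = np.linspace(-3.0, 3.0, 13)
assert np.allclose(mod_leaky_relu_inv(mod_leaky_relu(x)), x)
```

Because both pieces are strictly increasing, the composition is a bijection on all of $\mathbb{R}$, which is the property required for layer-by-layer inversion.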

Bias
We consider the bias as a separate layer in the network. Then, handling biases is straightforward. In particular, the layer in the encoder that performs bias addition has its counterpart layer in the decoder, where the same bias is subtracted.

Experimental validation of orthonormality
In this section, we validate the concept of InvAuto. The goal of this section is to show that the proposed shared parametrization and training enforce orthonormality, and that at the same time the orthonormality property is not naturally achieved by standard architectures. We compare InvAuto with the previously mentioned vanilla autoencoder, autoencoder with cycle consistency, and variational autoencoder. We experimented with various data sets (MNIST and CIFAR-10) and architectures (MLP, convolutional (Conv), and ResNet). All networks were designed to have 2 down-sampling layers and 2 up-sampling layers. The encoder's matrix $E$ and decoder's matrix $D$ are constructed by multiplying the weight matrices of the consecutive layers of the encoder and decoder, respectively. We test orthonormality by reporting the histograms of the cosine similarity of each pair of rows of matrix $E$ for all methods (Figure 4), along with their mean and standard deviation (Table 1), as we expect the cosine similarity to be close to 0 for InvAuto. We then show the $\ell_2$-norm of the rows of $E$, as we expect the rows of InvAuto to have close-to-unit norm (Table 2). InvAuto enforces the encoder, and consequently the decoder, to be orthonormal. Other methods do not explicitly demand that, and thus the orthonormality of their encoders is weaker. This observation is further confirmed by Figures 1 and 2 shown before in the Introduction. In the Supplement (Section A), we provide three more figures that complement Figure 2 (recall that the latter reports the MSE of $DE - I$). They show the MSE of the diagonal (Figure 14) and off-diagonal of $DE - I$ (Figure 15), as well as the ratio of the MSE of the off-diagonal and diagonal of $DE$ (Figure 16), for the various methods. The reconstruction loss obtained for all methods is also shown in Section A in the Supplement (Table 5).
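The row-wise diagnostics described above can be reproduced in a few lines of numpy. This is a generic check on any encoder matrix (not the paper's exact evaluation script): for an orthonormal matrix the off-diagonal cosine similarities vanish and all row norms equal one.

```python
import numpy as np

def orthonormality_stats(E):
    # Pairwise cosine similarity of the rows of E (off-diagonal entries
    # should be near 0 for an orthonormal encoder) and the row l2-norms
    # (which should be near 1).
    norms = np.linalg.norm(E, axis=1)
    unit_rows = E / norms[:, None]
    C = unit_rows @ unit_rows.T
    off_diag = C[~np.eye(len(E), dtype=bool)]
    return off_diag.mean(), off_diag.std(), norms

# Orthonormal example: the QR factor of a random Gaussian matrix.
q, _ = np.linalg.qr(np.random.default_rng(2).normal(size=(16, 16)))
mean, std, norms = orthonormality_stats(q)
print(mean, std, norms.min(), norms.max())
```

Running the same function on the flattened encoder matrix of a vanilla autoencoder would show the nonzero mean/spread reported in Tables 1 and 2 for the baselines.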
Next we describe how InvAuto is applied to the problem of domain adaptation.

Invertible autoencoder for domain adaptation
For the purpose of performing domain adaptation, we construct a dedicated architecture that is similar to CycleGAN, but we use InvAuto at the feature level of the generators. This InvAuto contains an encoder $E$ and a decoder $D$ that themselves have the form of autoencoders. Each of these internal autoencoders is used to perform the conversion between the features corresponding to the two different domains. Thus, the encoder $E$ performs the conversion from the features corresponding to domain $A$ to the features corresponding to domain $B$. The decoder $D$, on the other hand, performs the conversion from the features corresponding to domain $B$ to the features corresponding to domain $A$.
Since E and D form InvAuto, E realizes an inversion of D (and vice versa) and shares parameters with D. This introduces strong correlations between two generators and reduces the number of trainable parameters, which distinguishes our approach from CycleGAN. The proposed architecture is illustrated in Figure 5. The details of the architecture and training are provided in Section C in the Supplement.
Next we describe the cost function that we use to train our deep model. The first component of the cost function is the adversarial loss (Goodfellow et al., 2014), i.e.

$$\mathcal{L}_{adv}(Gen_B, Dis_B) = \mathbb{E}_{x_B \sim p_d(B)}\left[\log Dis_B(x_B)\right] + \mathbb{E}_{x_A \sim p_d(A)}\left[\log\left(1 - Dis_B(Gen_B(x_A))\right)\right],$$

and analogously for $\mathcal{L}_{adv}(Gen_A, Dis_A)$, where $p_d(A)$ and $p_d(B)$ denote the distributions of data from $A$ and $B$, respectively.

The second component of the loss function is the cycle consistency loss, defined as

$$\mathcal{L}_{cc}(Gen_A, Gen_B) = \mathbb{E}_{x_A \sim p_d(A)}\left[\|Gen_A(Gen_B(x_A)) - x_A\|_1\right] + \mathbb{E}_{x_B \sim p_d(B)}\left[\|Gen_B(Gen_A(x_B)) - x_B\|_1\right].$$

The objective function that we minimize therefore becomes

$$\mathcal{L}(Gen_A, Gen_B, Dis_A, Dis_B) = \lambda \mathcal{L}_{cc}(Gen_A, Gen_B) + \mathcal{L}_{adv}(Gen_A, Dis_A) + \mathcal{L}_{adv}(Gen_B, Dis_B),$$

where $\lambda$ controls the balance between the adversarial loss and the cycle consistency loss. The cycle consistency loss enforces the orthonormality property of InvAuto.
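The combined objective can be sketched in plain Python. This is a hypothetical rendering for clarity only: the generators, discriminators, and data below are stand-ins (simple callables on arrays), and in actual GAN training the adversarial terms are optimized adversarially rather than jointly minimized.

```python
import numpy as np

def l1(a, b):
    # mean absolute (l1) difference
    return np.abs(a - b).mean()

def objective(gen_A, gen_B, dis_A, dis_B, x_A, x_B, lam=10.0):
    # lambda-weighted cycle consistency plus the two adversarial terms.
    # Convention assumed here: gen_B maps A -> B, gen_A maps B -> A,
    # and dis_* output probabilities in (0, 1).
    l_cc = l1(gen_A(gen_B(x_A)), x_A) + l1(gen_B(gen_A(x_B)), x_B)
    l_adv_A = np.log(dis_A(x_A)).mean() + np.log(1.0 - dis_A(gen_A(x_B))).mean()
    l_adv_B = np.log(dis_B(x_B)).mean() + np.log(1.0 - dis_B(gen_B(x_A))).mean()
    return lam * l_cc + l_adv_A + l_adv_B

# Sanity check: identity generators give zero cycle loss, and a maximally
# confused discriminator (always 0.5) contributes 2 * log(0.5) per domain.
ident = lambda z: z
half = lambda z: np.full(len(z), 0.5)
x = np.ones((4, 3))
val = objective(ident, ident, half, half, x, x)
```

The sanity check mirrors the GAN equilibrium intuition: perfect cycle consistency zeroes the $\lambda$ term, leaving only the adversarial terms at their indifference value.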

Experiments
We next demonstrate experiments on domain adaptation problems. We compare our model against UNIT (Liu et al., 2017) and CycleGAN (Zhu et al., 2017). We used the publicly available implementations of both methods from https://github.com/mingyuliutw/UNIT/ and https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/. The details of our architecture and the training process are summarized in Section C in the Supplement.

Experiments with benchmark data sets
We considered the following domain adaptation tasks:

(i) Day-to-night and night-to-day image conversion: we used unpaired road pictures recorded during the day and at night, obtained from the KAIST data set (Hwang et al., 2015).

(ii) Day-to-thermal and thermal-to-day image conversion: we used road pictures recorded during the day with a regular camera and a thermal camera, obtained from the KAIST data set (Hwang et al., 2015).

(iii) Maps-to-satellite and satellite-to-maps conversion: we used satellite images and maps obtained from Google Maps.

The data sets for the last two tasks, i.e. (ii) and (iii), are originally paired, however we randomly permuted them and trained the model in an unsupervised fashion. The training and testing images were furthermore resized to a 128 × 128 resolution.
The visual results of image conversion are presented in Figures 6-11 (Section B in the Supplement contains the same figures in higher resolution). We see that InvAuto visually performs comparably to other state-of-the-art methods.

Figure 6. Day-to-night image conversion (left to right: Original, CycleGAN, UNIT, InvAuto). Zoomed image is shown in Figure 18 in Section B of the Supplement.

Figure 7. Night-to-day image conversion (left to right: Original, CycleGAN, UNIT, InvAuto). Zoomed image is shown in Figure 19 in Section B of the Supplement.
Figure 8. Day-to-thermal image conversion (left to right: Original, CycleGAN, UNIT, InvAuto, Reference). Zoomed image is shown in Figure 20 in Section B of the Supplement.
Figure 9. Thermal-to-day image conversion (left to right: Original, CycleGAN, UNIT, InvAuto, Reference). Zoomed image is shown in Figure 21 in Section B of the Supplement.
To evaluate the performance of the methods numerically, we use the following approach:

• For tasks (ii) and (iii), we directly calculated the $\ell_1$ loss between the converted images and the ground truth.

• For task (i), we trained two autoencoders $\Omega_A$ and $\Omega_B$, one per domain, i.e. we trained each of them to reconstruct well the images from its own domain and to reconstruct badly the images from the other domain. We then use these two autoencoders to evaluate the quality of the converted images, where a high $\ell_1$ reconstruction loss of an autoencoder on images converted to resemble those from its corresponding domain implies low-quality image translation.

Table 3 contains the results of the numerical evaluation and shows that the performance of InvAuto is similar to the state-of-the-art techniques we compare against, and is furthermore contained within the performance range established by CycleGAN (best performer) and UNIT (consistently slightly worse than CycleGAN).
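For the paired tasks (ii) and (iii), the score is a per-pixel distance; the following is a minimal sketch of that metric (not the exact evaluation pipeline, which operates on the saved converted images).

```python
import numpy as np

def l1_eval(converted, ground_truth):
    # Mean per-pixel l1 distance between a batch of converted images and
    # the corresponding ground-truth images; lower is better.
    return np.abs(converted.astype(float) - ground_truth.astype(float)).mean()

# Toy example: two 4x4 single-channel "images" differing by 1 everywhere.
converted = np.zeros((2, 4, 4))
reference = np.ones((2, 4, 4))
score = l1_eval(converted, reference)
```

For task (i) the same $\ell_1$ quantity is computed between an image and its reconstruction by the domain-specific autoencoder, so the one helper covers both protocols.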
Figure 10. Maps-to-satellite image conversion (left to right: Original, CycleGAN, UNIT, InvAuto, Reference). Zoomed image is shown in Figure 22 in Section B of the Supplement.
Figure 11. Satellite-to-maps image conversion (left to right: Original, CycleGAN, UNIT, InvAuto, Reference). Zoomed image is shown in Figure 23 in Section B of the Supplement.

Table 3. Numerical evaluation of CycleGAN, UNIT, and InvAuto.

Experiments with autonomous driving system
To test the quality of the image-to-image translations obtained by InvAuto, we use the NVIDIA evaluation system for autonomous driving described in detail in (Bojarski et al., 2016). The system evaluates the performance of an already trained NVIDIA neural-network-based end-to-end learning platform for autonomous driving (PilotNet) on a test video using a simulator for autonomous driving. The system uses the following performance metrics for evaluation: autonomy, position precision, and comfort. We do not describe these metrics, as they are well described in the mentioned paper. We only emphasize that these metrics are expressed as a percentage, where 100% corresponds to the best performance. We collected high-resolution videos of the same road during the day and at night from a camera inside the car. Each video had ∼45K frames. The pictures were resized to 512 × 512 resolution for the conversion and then resized back to the original size of 1920 × 1208. We used our domain translator as well as CycleGAN to convert the collected day video to a night video (Figure 12) and the collected night video to a day video (Figure 13). To evaluate our model, we used the aforementioned NVIDIA evaluation system, where the converted videos were used as testing sets. We report the results in Table 4.

Table 4. Experimental results with the autonomous driving system: autonomy, position precision, and comfort.
The PilotNet model used for testing was trained mostly on day videos, thus it is expected to perform worse on night videos. Accordingly, the performance on the original night video is worse than on the same video converted to a day video in terms of autonomy and position precision. The comfort deteriorates due to the inconsistency of consecutive frames in the converted video, i.e. the videos are converted frame-by-frame and we do not apply any post-processing to ensure a smooth transition between frames. The results for InvAuto and CycleGAN are comparable.

Conclusion
We proposed a novel architecture that we call the invertible autoencoder, which, as opposed to common deep learning architectures, allows the layers of the model that perform opposite operations (like the encoder and decoder) to share weights. This is achieved by enforcing orthonormal mappings in the layers of the model. We demonstrated the applicability of the proposed architecture to the problem of domain adaptation and evaluated it on benchmark data sets and an autonomous driving task. The performance of the proposed approach matches state-of-the-art methods while requiring fewer trainable parameters.

Figure 12. Experimental results with the autonomous driving system: day-to-night conversion (left to right: Original, CycleGAN, InvAuto). Zoomed image is shown in Figure 24 in Section B of the Supplement.

Figure 13. Experimental results with the autonomous driving system: night-to-day conversion (left to right: Original, CycleGAN, InvAuto). Zoomed image is shown in Figure 25 in Section B of the Supplement.

Invertible Autoencoder for domain adaptation (Supplementary material)
A. Additional plots and tables for Section 3.6

C. Invertible autoencoder for domain adaptation: architecture and training
Generator architecture Our implementation of InvAuto contains 18 invertible residual blocks for both 128 × 128 and 512 × 512 images, where 9 blocks are used in the encoder and the remaining 9 in the decoder. All layers in the decoder are the inverted versions of the encoder's layers. We furthermore add two down-sampling layers and two up-sampling layers for the model trained on 128 × 128 images, and three down-sampling layers and three up-sampling layers for the model trained on 512 × 512 images. The details of the generator's architecture are listed in Table 7 and Table 8. For convenience, we use Conv to denote a convolutional layer, ConvNormReLU to denote a Convolutional-InstanceNorm-LeakyReLU layer, InvRes to denote an invertible residual block, and Tanh to denote the hyperbolic tangent activation function. The negative slope of the LeakyReLU function is set to 0.2. All filters are square and we use the following notation: K represents the filter size and F represents the number of output feature maps. Paddings are added correspondingly.
Discriminator architecture We use a discriminator architecture similar to PatchGAN; it is described in Table 6. We use this architecture for training on both 128 × 128 and 512 × 512 images.
Criterion and Optimization At training, we set $\lambda = 10$ and use the $\ell_1$ loss for the cycle consistency term in Equation 12. We use the Adam optimizer (Kingma & Ba, 2014) with learning rate $l_r = 0.0002$, $\beta_1 = 0.5$ and $\beta_2 = 0.999$. We also add an $\ell_2$ penalty with weight $10^{-6}$.