A Unifying Generator Loss Function for Generative Adversarial Networks

A unifying α-parametrized generator loss function is introduced for a dual-objective generative adversarial network (GAN) that uses a canonical (or classical) discriminator loss function such as the one in the original GAN (VanillaGAN) system. The generator loss function is based on a symmetric class probability estimation type function, Lα, and the resulting GAN system is termed Lα-GAN. Under an optimal discriminator, it is shown that the generator’s optimization problem consists of minimizing a Jensen-fα-divergence, a natural generalization of the Jensen-Shannon divergence, where fα is a convex function expressed in terms of the loss function Lα. It is also demonstrated that this Lα-GAN problem recovers as special cases a number of GAN problems in the literature, including VanillaGAN, least squares GAN (LSGAN), least kth-order GAN (LkGAN), and the recently introduced (αD,αG)-GAN with αD=1. Finally, experimental results are provided for three datasets—MNIST, CIFAR-10, and Stacked MNIST—to illustrate the performance of various examples of the Lα-GAN system.


Introduction
Generative adversarial networks (GANs), first introduced by Goodfellow et al. in 2014 [10], have a variety of applications in media generation [21], image restoration [29], and data privacy [14]. GANs aim to generate synthetic data that closely resembles the original real data, whose (unknown) underlying distribution is denoted by P_x. The GAN is trained so that the distribution of the generated data, P_g, approximates P_x well. More specifically, low-dimensional random noise is fed to a generator neural network G to produce synthetic data. Real data and generated data are then given to a discriminator neural network D, which scores each sample between 0 and 1, with a score close to 1 meaning that the discriminator believes the sample belongs to the real dataset. The discriminator and generator play a minimax game, where the aim is to minimize the generator's loss and maximize the discriminator's loss.
Since their initial introduction, several variants of GAN have been proposed. Deep convolutional GAN (DCGAN) [30] uses the same loss functions as VanillaGAN (the original GAN) but combines GANs with convolutional neural networks, which are helpful when applying GANs to image data as they extract visual features from the data. DCGANs are more stable than the baseline model, but can suffer from mode collapse, which occurs when the generator learns that a select few images can easily fool the discriminator, resulting in the generator producing only those images. Another notable issue with VanillaGAN is the tendency for the generator network's gradients to vanish. In the early stages of training, the generator produces poor samples that the discriminator can confidently reject, assigning generated data values close to zero. The generator's objective term log(1 − D(G(z))) then tends to zero, resulting in small gradients and a lack of learning. To mitigate this issue, a non-saturating generator loss function was proposed in [10] so that gradients do not vanish early on in training.
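The vanishing-gradient issue can be seen directly from the derivatives of the two generator objectives. The following sketch (illustrative only, not from the paper's code) compares the gradient magnitude of the saturating objective log(1 − D(G(z))) with the non-saturating alternative −log D(G(z)) when the discriminator confidently rejects a generated sample:

```python
import numpy as np

# d/dD of the saturating generator objective log(1 - D):
saturating_grad = lambda d: -1.0 / (1.0 - d)
# d/dD of the non-saturating objective -log(D):
non_saturating_grad = lambda d: -1.0 / d

d = 1e-3  # early in training, D confidently scores fakes near 0
print(abs(saturating_grad(d)))      # modest learning signal (about 1)
print(abs(non_saturating_grad(d)))  # strong learning signal (about 1000)
```

The non-saturating loss thus supplies large gradients exactly when the discriminator easily rejects the generator's samples.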
In the original (VanillaGAN) problem setup, the objective function, expressed as a negative sum of two Shannon cross-entropies, is to be minimized by the generator and maximized by the discriminator. It is demonstrated that if the discriminator is fixed to be optimal (i.e., a maximizer of the objective function), the GAN's minimax game can be reduced to minimizing the Jensen-Shannon divergence (JSD) between the real and generated data's probability distributions [10]. An analogous result was proven in [5] for RényiGANs, a dual-objective GAN using distinct discriminator and generator loss functions. More specifically, under a canonical discriminator loss function (as in [10]) and a generator loss function expressed in terms of two Rényi cross-entropies, it is shown that the RényiGAN optimization problem reduces to minimizing the Jensen-Rényi divergence, hence extending VanillaGAN's result. Nowozin et al. generalized VanillaGAN by formulating a class of loss functions in [27] parametrized by a lower semicontinuous convex function f, devising f-GAN. More specifically, the f-GAN problem consists of minimizing an f-divergence between the true data distribution and the generator distribution via a minimax optimization of a Fenchel conjugate representation of the f-divergence, where the VanillaGAN discriminator's role (as a binary classifier) is replaced by a variational function estimating the ratio of the true data and generator distributions. The f-GAN loss function may be tedious to derive, as it requires the computation of the Fenchel conjugate of f. It can be shown that f-GAN can interpolate between VanillaGAN and HellingerGAN, among others [27].
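The JSD reduction can be checked numerically on toy discrete distributions (an illustrative sketch; the distributions p and q below are arbitrary): plugging the optimal discriminator D* = P_x/(P_x + P_g) into the VanillaGAN objective yields −log 4 + 2·JSD(P_x∥P_g).

```python
import numpy as np

def jsd(p, q):
    """Jensen-Shannon divergence between discrete distributions p and q."""
    m = (p + q) / 2
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.2, 0.3, 0.5])   # toy "real" distribution P_x
q = np.array([0.4, 0.4, 0.2])   # toy "generated" distribution P_g

d_star = p / (p + q)            # optimal discriminator D* = P_x / (P_x + P_g)
value = np.sum(p * np.log(d_star)) + np.sum(q * np.log(1 - d_star))

# VanillaGAN's objective at the optimal discriminator: -log 4 + 2 JSD(P_x || P_g)
assert np.isclose(value, -np.log(4) + 2 * jsd(p, q))
```

Minimizing the objective over the generator is therefore equivalent to driving JSD(P_x∥P_g) to zero.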
More recently, α-GAN was presented in [19], where the aim is to derive a class of loss functions parameterized by α > 0, expressed in terms of a class probability estimation (CPE) loss between a real label y ∈ {0, 1} and a predicted label ŷ ∈ [0, 1] [19]. The ability to control α as a hyperparameter makes it possible to apply one system to multiple datasets, as two datasets may be optimal under different α values. This work was further analyzed in [20] and expanded in [35] by introducing the dual-objective (α_D, α_G)-GAN, which allows the generator and discriminator loss functions to have distinct α parameters with the aim of improving training stability. When α_D = α_G, the α-GAN optimization reduces to minimizing an Arimoto divergence, as originally derived in [19]. Note that α-GAN can recover several f-GANs, such as HellingerGAN, VanillaGAN, WassersteinGAN, and Total Variation GAN [19]. Furthermore, in their more recent work, which unifies [19,20,35], the authors establish, under some conditions, a one-to-one correspondence between CPE-loss-based GANs (such as α-GANs) and f-GANs that use a symmetric f-divergence; see [34, Theorems 4-5 and Corollary 1]. They also prove various generalization and estimation error bounds for (α_D, α_G)-GANs and illustrate their ability to mitigate training instability for synthetic Gaussian data as well as the Celeb-A and LSUN Classroom image datasets. However, the various (α_D, α_G)-GAN equilibrium results do not provide an analogue of the JSD and Jensen-Rényi divergence minimizations for the VanillaGAN [10] and RényiGAN [5] problems, respectively, as they do not involve a Jensen-type divergence.
The main objective of our work is to present a unifying approach that provides an axiomatic framework to encompass several existing GAN generator loss functions so that the GAN optimization can be simplified in terms of a Jensen-type divergence. In particular, our framework classifies the set of α-parameterized CPE-based loss functions L_α, generalizing the α-loss function in [19,20,34,35]. We then propose L_α-GAN, a dual-objective GAN that uses a function from this class for the generator, together with any canonical discriminator loss function that admits the same optimizer as VanillaGAN [10]. We show that under some regularity (convexity/concavity) conditions on L_α, the minimax game played with these two loss functions is equivalent to the minimization of a Jensen-f_α-divergence, a Jensen-type divergence and another natural extension of the Jensen-Shannon divergence (in addition to the Jensen-Rényi divergence [5]), where the generating function f_α of the divergence is directly computed from the CPE loss function L_α. This result recovers various prior dual-objective GAN equilibrium results, thus unifying them under one parameterized generator loss function. The newly obtained Jensen-f_α-divergence, which is noted to belong to the class of symmetric f-divergences with different generating functions (see Remark 1), is a useful measure of dissimilarity between distributions, as it requires a convex function f with a restricted domain given by the interval [0, 2] (see Remark 2) in addition to its symmetry and finiteness properties.
The rest of the paper is organized as follows. In Section 2, we review f-divergence measures and introduce the Jensen-f-divergence as an extension of the Jensen-Shannon divergence (given a divergence measure D(p∥q) between distributions p and q, i.e., a positive-definite bivariate function with D(p∥q) ≥ 0 and equality if and only if (iff) p = q almost everywhere (a.e.), a Jensen-type divergence of D is given by (1/2) D(p ∥ (p+q)/2) + (1/2) D(q ∥ (p+q)/2)). In Section 3, we establish our main result regarding the optimization of our unifying generator loss function (Theorem 1), and show that it can be applied to a large class of known GANs (Lemmas 2-4).
We conduct experiments in Section 4 by implementing different manifestations of L_α-GAN on three datasets: MNIST, CIFAR-10, and Stacked MNIST. Finally, we conclude the paper in Section 5.

Preliminaries
We begin by presenting key information measures used throughout the paper.
Definition 1 [1,7,8] The f-divergence between two probability densities p and q with common support R ⊆ R^d on the Lebesgue measurable space (R, B(R), µ) is denoted by D_f(p∥q) and given by

D_f(p∥q) := ∫_R q f(p/q) dµ,    (1)

where we have used the shorthand ∫_R g dµ := ∫_R g(x) dµ(x) for a measurable function g; we follow this convention from now on. Here, f is referred to as the generating function of D_f(p∥q).
We require that f is strictly convex around 1 and that it satisfies the normalization condition f(1) = 0 to ensure positive-definiteness of the f-divergence, i.e., D_f(p∥q) ≥ 0 with equality holding iff p = q (a.e.). We present examples of f-divergences under various choices of their generating function f in Table 1.
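For discrete distributions, the f-divergence of Definition 1 reduces to a finite sum, which makes its defining properties easy to verify. The sketch below (not the paper's code) uses standard generating functions; the exact scaling conventions of the paper's Table 1 may differ:

```python
import numpy as np

def f_divergence(p, q, f):
    """D_f(p||q) = sum_x q(x) f(p(x)/q(x)) for discrete p, q with common support."""
    return np.sum(q * f(p / q))

# Common generating functions (all convex with f(1) = 0):
kl        = lambda u: u * np.log(u)            # Kullback-Leibler
tv        = lambda u: 0.5 * np.abs(u - 1)      # total variation
hellinger = lambda u: (np.sqrt(u) - 1) ** 2    # squared Hellinger

p = np.array([0.1, 0.6, 0.3])   # toy distributions with common support
q = np.array([0.3, 0.3, 0.4])

for f in (kl, tv, hellinger):
    assert f_divergence(p, p, f) == 0          # D_f(p||p) = 0 since f(1) = 0
    assert f_divergence(p, q, f) >= 0          # positive-definiteness
```

With f(u) = u log u, the sum collapses to the familiar Kullback-Leibler divergence Σ p log(p/q).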
We will be invoking these divergence measures in different parts of the paper.
Table 1: Examples of f-divergences [12,22,32] (listing, among others, the Hellinger divergence H_α, the Arimoto divergence, and the Pearson-Vajda divergence |χ|^k used later in the paper).

The Rényi divergence of order α (α > 0, α ≠ 1) between densities p and q with common support R is used in [5] in the RényiGAN problem; it is given by [31,33]

R_α(p∥q) = (1/(α − 1)) log ∫_R p^α q^{1−α} dµ.

Note that the Rényi divergence is not an f-divergence; however, it can be expressed as a transformation of the Hellinger divergence H_α (which is itself an f-divergence):

R_α(p∥q) = (1/(α − 1)) log (1 + (α − 1) H_α(p∥q)).

We now introduce a new measure, the Jensen-f-divergence, which is analogous to the Jensen-Shannon and Jensen-Rényi divergences.

Definition 2
The Jensen-f-divergence between two probability distributions p and q with common support R ⊆ R^d on the Lebesgue measurable space (R, B(R), µ) is denoted by JD_f(p∥q) and given by

JD_f(p∥q) := (1/2) D_f(p∥m) + (1/2) D_f(q∥m),

where m := (p + q)/2. We next verify that the Jensen-Shannon divergence is a Jensen-f-divergence.
Lemma 1 Let p and q be two densities with common support R ⊆ R^d, and consider the function f : [0, ∞) → (−∞, ∞] given by f(u) = u log u. Then we have that JD_f(p∥q) = JSD(p∥q).

Proof. As f is convex (and continuous) on its domain with f(1) = 0, we have that

JD_f(p∥q) = (1/2) D_f(p ∥ (p+q)/2) + (1/2) D_f(q ∥ (p+q)/2) = (1/2) KL(p ∥ (p+q)/2) + (1/2) KL(q ∥ (p+q)/2) = JSD(p∥q). ■

Remark 1 (Jensen-f-divergence is a symmetric f-divergence) Note that JD_f(p∥q) is itself a symmetric f-divergence (with a modified generating function). Indeed, given the continuous convex function f that is strictly convex around 1 with f(1) = 0, consider the functions

f_1(u) := ((u + 1)/2) f(2u/(u + 1))  and  f_2(u) := ((u + 1)/2) f(2/(u + 1)),

which are both continuous convex, strictly convex around 1, and satisfy f_1(1) = f_2(1) = 0. Now direct calculations yield that D_f(p ∥ (p+q)/2) = D_{f_1}(p∥q) and D_f(q ∥ (p+q)/2) = D_{f_2}(p∥q). Thus

JD_f(p∥q) = D_{f̂}(p∥q),    (4)

where f̂ := (f_1 + f_2)/2 is also continuous convex, strictly convex around 1, and satisfies f̂(1) = 0. Since by (4), JD_f(p∥q) = JD_f(q∥p), we conclude that the Jensen-f-divergence is a symmetric f-divergence.

Remark 2 (Domain of f) Examining (4), we note that the Jensen-f-divergence between p and q involves the f-divergences between either p or q and their mixture (p + q)/2. In other words, to determine JD_f(p∥q), we only need f(2p/(p+q)) and f(2q/(p+q)) when taking the expectations in (1). Thus, it is sufficient to restrict the domain of the convex function f to the interval [0, 2].
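The Jensen-f-divergence of Definition 2, together with Lemma 1 and Remarks 1-2, can be illustrated on discrete distributions (a toy sketch; p and q are arbitrary):

```python
import numpy as np

def f_divergence(p, q, f):
    """Discrete f-divergence: sum_x q(x) f(p(x)/q(x))."""
    return np.sum(q * f(p / q))

def jensen_f_divergence(p, q, f):
    """JD_f(p||q) = (1/2) D_f(p||m) + (1/2) D_f(q||m), with m = (p+q)/2."""
    m = (p + q) / 2
    return 0.5 * f_divergence(p, m, f) + 0.5 * f_divergence(q, m, f)

f = lambda u: u * np.log(u)     # f(u) = u log u, as in Lemma 1

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.4, 0.4, 0.2])
m = (p + q) / 2

# Lemma 1: with f(u) = u log u, JD_f coincides with the Jensen-Shannon divergence
jsd = 0.5 * np.sum(p * np.log(p / m)) + 0.5 * np.sum(q * np.log(q / m))
assert np.isclose(jensen_f_divergence(p, q, f), jsd)

# Remark 1: JD_f is symmetric; Remark 2: only ratios in [0, 2] are ever evaluated
assert np.isclose(jensen_f_divergence(p, q, f), jensen_f_divergence(q, p, f))
assert np.all(p / m <= 2) and np.all(q / m <= 2)
```

The last assertion makes Remark 2 concrete: since the mixture m dominates p/2 and q/2, the generating function is only ever evaluated on [0, 2].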

Main Results
We now present our main theorem, which unifies various generator loss functions under a CPE-based loss function L_α for a dual-objective GAN, L_α-GAN, with a canonical discriminator loss function that is optimized as in [10]. Under some regularity conditions on the loss function L_α, we show that under the optimal discriminator, the generator's loss becomes a Jensen-f_α-divergence.
Let (X, B(X), µ) be the measure space of n × n × m images (where m = 1 for black-and-white images and m = 3 for RGB images), and let (Z, B(Z), µ) be a measure space such that Z ⊆ R^d. The discriminator neural network is given by D : X → [0, 1], and the generator neural network is given by G : Z → X. The generator's noise input is sampled from a multivariate Gaussian distribution P_z : Z → [0, 1]. We denote the probability distribution of the real data by P_x : X → [0, 1] and the probability distribution of the generated data by P_g : X → [0, 1]. With a slight abuse of notation, we also write P_x and P_g for the densities corresponding to P_x and P_g, respectively. We begin by introducing the L_α-GAN system.
Definition 3 Let α be a real parameter and let L_α(y, ŷ) be a CPE loss function between a label y ∈ {0, 1} and a prediction ŷ ∈ [0, 1] such that ŷ L_α(1, ŷ/2) is continuous and convex (resp., concave) in ŷ, with strict convexity (resp., strict concavity) around ŷ = 1, and such that L_α is symmetric in the sense that

L_α(1, ŷ) = L_α(0, 1 − ŷ)  for all ŷ ∈ [0, 1].    (7)

Then the L_α-GAN system is defined by (V_D, V_{L_α,G}), where V_D : X × Z → R is the discriminator loss function and V_{L_α,G} : X × Z → R is the generator loss function, given by

V_{L_α,G}(D, G) = E_{x∼P_x}[L_α(1, D(x))] + E_{z∼P_z}[L_α(0, D(G(z)))].    (8)

Moreover, the L_α-GAN problem is defined by the dual-objective optimization

sup_D V_D(D, G),    (9)
inf_G V_{L_α,G}(D, G).    (10)

We now present our main result about the L_α-GAN optimization problem.
Theorem 1 Let (V_D, V_{L_α,G}) be the loss functions of L_α-GAN, and consider the joint optimization in (9)-(10). If V_D is a canonical loss function in the sense that it is maximized at D = D*, where

D* = P_x / (P_x + P_g),    (11)

then (10) reduces to the minimization of a Jensen-f_α-divergence:

V_{L_α,G}(D*, G) = (2/a) JD_{f_α}(P_x∥P_g) − 2b/a,    (12)

where JD_{f_α}(·∥·) is the Jensen-f_α-divergence and f_α : [0, 2] → R is a continuous convex function, strictly convex around 1, given by

f_α(u) = a u L_α(1, u/2) + b u,    (13)

where a and b are real constants chosen so that f_α(1) = 0, with a > 0 (resp., a < 0) if u L_α(1, u/2) is convex (resp., concave). Finally, (12) is minimized when P_x = P_g (a.e.).
Proof. Under the assumption that V_D is maximized at D* = P_x/(P_x + P_g), we rewrite both expectations in (8) against the mixture density (P_x + P_g)/2 and obtain (12), where:

• (a) holds by the symmetry property (7), with u = P_x/(P_x + P_g).
• (b) holds by solving for L_α(1, u) in terms of f_α(2u) in (13), where u = P_x/(P_x + P_g) in the first term and u = P_g/(P_x + P_g) in the second term.
The constants a and b are chosen so that f_α(1) = 0. Finally, the continuity and convexity of f_α (as well as its strict convexity around 1) follow directly from the corresponding assumptions imposed on the loss function L_α in Definition 3 and from the condition imposed on the sign of a in the theorem's statement. ■

Remark 3 Note that not only is D* given in (11) an optimal discriminator for the (original) VanillaGAN discriminator loss function, but it also optimizes the LSGAN/LkGAN discriminator loss function when the discriminator's labels for fake and real data, γ and β, respectively, satisfy γ = 1 and β = 0 (see Section 3.3).
We next show that the L α −GAN of Theorem 1 recovers as special cases a number of well-known GAN generator loss functions and their equilibrium points (under an optimal classical discriminator D * ).

VanillaGAN
VanillaGAN [10] uses the same loss function V_VG for both the generator and the discriminator, given by

V_VG(D, G) = E_{x∼P_x}[log D(x)] + E_{z∼P_z}[log(1 − D(G(z)))],    (14)

and can be cast as a saddle point optimization problem:

inf_G sup_D V_VG(D, G).    (15)

It is shown in [10] that the optimal discriminator for (15) is given by D* = P_x/(P_x + P_g), as in (11). When D = D*, the optimization reduces to minimizing the Jensen-Shannon divergence:

inf_G V_VG(D*, G) = −log 4 + 2 inf_G JSD(P_x∥P_g).    (16)

We next show that (16) can be obtained from Theorem 1.
Lemma 2 Consider the optimization of the VanillaGAN given in (15). Then (16) holds with the Jensen-Shannon divergence expressed as the Jensen-f-divergence JD_f(P_x∥P_g) for f(u) = u log u, where the underlying CPE loss is L_α(y, ŷ) = y log ŷ + (1 − y) log(1 − ŷ).

Proof. For any fixed α ∈ R, let the function L_α in (8) be as defined in the statement, namely L_α(y, ŷ) = y log ŷ + (1 − y) log(1 − ŷ). Note that L_α is symmetric, since for ŷ ∈ [0, 1], we have that L_α(1, 1 − ŷ) = log(1 − ŷ) = L_α(0, ŷ). Instead of showing the continuity and convexity/concavity conditions imposed on ŷ L_α(1, ŷ/2) in Definition 3, we implicitly verify them by directly deriving f_α from L_α using (13) and showing that it is continuous, convex, and strictly convex around 1. Setting a = 1 and b = log 2 in (13), we obtain f_α(u) = u log u. Clearly, f_α is convex (in fact strictly convex on (0, ∞), and hence strictly convex around 1) and continuous on its domain (where f_α(0) = lim_{u→0} u log u = 0). It also satisfies f_α(1) = 0. By Lemma 1, we know that under the generating function f(u) = u log u, the Jensen-f-divergence reduces to the Jensen-Shannon divergence. Therefore, by Theorem 1, (15) reduces to minimizing the Jensen-Shannon divergence, recovering (16), which finishes the proof. ■

α-GAN
The notion of α-GANs is introduced in [19] as a way to unify several existing GANs using a parameterized loss function. We describe α-GANs next.
Definition 4 [19] For α > 0, α ≠ 1, the α-loss between y and ŷ is the map

ℓ_α(y, ŷ) = (α/(α − 1)) [1 − y ŷ^{(α−1)/α} − (1 − y)(1 − ŷ)^{(α−1)/α}],

with ℓ_1 defined by continuous extension as the log-loss ℓ_1(y, ŷ) = −y log ŷ − (1 − y) log(1 − ŷ).

Definition 5 [19] For α > 0, the α-GAN loss function is given by

V_α(D, G) = −E_{x∼P_x}[ℓ_α(1, D(x))] − E_{z∼P_z}[ℓ_α(0, D(G(z)))].    (18)

The joint optimization of the α-GAN problem is given by

inf_G sup_D V_α(D, G).    (19)

It is known that α-GAN recovers several well-known GANs by varying the α parameter, notably VanillaGAN (α = 1) [10] and HellingerGAN (α = 1/2) [27]. Furthermore, as α → ∞, V_α recovers a translated version of the WassersteinGAN loss function [4]. We now present the solution to the joint optimization problem presented in (19).
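For concreteness, here is a sketch of the α-loss as given in the literature on [19] (the normalization is an assumption to be checked against (18), not a quotation of the paper's code); it verifies numerically that the α → 1 limit recovers the log-loss, consistent with α-GAN reducing to VanillaGAN at α = 1:

```python
import numpy as np

def alpha_loss(y, y_hat, alpha):
    """alpha-loss sketch (alpha > 0, alpha != 1); normalization assumed from [19]."""
    e = (alpha - 1) / alpha
    return (alpha / (alpha - 1)) * (
        1 - y * y_hat**e - (1 - y) * (1 - y_hat)**e
    )

def log_loss(y, y_hat):
    """ℓ_1: the usual binary cross-entropy (log-loss)."""
    return -y * np.log(y_hat) - (1 - y) * np.log(1 - y_hat)

y, y_hat = 1.0, 0.7
# As alpha -> 1, the alpha-loss recovers the log-loss (VanillaGAN's CPE loss)
assert np.isclose(alpha_loss(y, y_hat, 1.0001), log_loss(y, y_hat), atol=1e-3)
```

Varying α thus interpolates smoothly between different CPE losses with a single hyperparameter.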
Proposition 1 [19] Let α > 0, and consider the joint optimization of the α-GAN presented in (19). The discriminator that maximizes the loss function is given by

D*(x) = P_x(x)^α / (P_x(x)^α + P_g(x)^α).    (20)

Furthermore, when D = D* is fixed, the problem in (19) reduces to minimizing an Arimoto divergence (as defined in Table 1) for α ≠ 1, as in (21), and a Jensen-Shannon divergence when α = 1, as in (22), where (21) and (22) achieve their minima iff P_x = P_g (a.e.).
Recently, α-GAN was generalized in [35] to implement a dual objective GAN, which we describe next.
Definition 6 [35] For α_D > 0 and α_G > 0, the (α_D, α_G)-GAN's optimization is given by

sup_D V_{α_D}(D, G),    (23)
inf_G V_{α_G}(D, G),    (24)

where V_{α_D} and V_{α_G} are defined as in (18), with α replaced by α_D and α_G, respectively.
Proposition 2 [35] Consider the joint optimization in (23)-(24). Let the parameters α_D, α_G > 0 satisfy the condition in (25). The discriminator that maximizes V_{α_D} is given by

D*(x) = P_x(x)^{α_D} / (P_x(x)^{α_D} + P_g(x)^{α_D}).    (26)

Furthermore, when D = D* is fixed, the minimization of V_{α_G} in (24) is equivalent to the f-divergence minimization in (27), with generating function f_{α_D,α_G} given in (28). We now apply the (α_D, α_G)-GAN to our main result in Theorem 1 by showing that (12) can recover (27) when α_D = 1 (which corresponds to a VanillaGAN discriminator loss function).
Lemma 3 Consider the (α_D, α_G)-GAN given in Definition 6. Let α_D = 1 and α_G = α > 1/2. Then the solution to (24) presented in Proposition 2 is equivalent to minimizing a Jensen-f_α-divergence: specifically, if D* is the optimal discriminator given by (26), which is equivalent to (11) when α_D = 1, then (24) reduces to minimizing JD_{f_α}(P_x∥P_g), where L_α(y, ŷ) = ℓ_α(y, ŷ) and f_α is obtained from ℓ_α via (13).

Proof. We show that Theorem 1 recovers Proposition 2 by setting L_α(y, ŷ) = ℓ_α(y, ŷ). Note that ℓ_α is symmetric, since ℓ_α(1, 1 − ŷ) = ℓ_α(0, ŷ) for all ŷ ∈ [0, 1]. As in the proof of Lemma 2, instead of proving the conditions imposed on ŷ L_α(1, ŷ/2) in Definition 3, we derive f_α directly from L_α using (13) and show that it is continuous, convex, and strictly convex around 1. From Lemma 2, we know that when α = 1, f_α(u) = u log u (which is strictly convex and continuous).
For α ∈ (0, 1) ∪ (1, ∞), choosing the constants a and b in (13) so that f_α(1) = 0, we obtain f_α explicitly. Furthermore, for α ≠ 1, the second derivative of f_α is positive for α > 1/2, so f_α is convex for α > 1/2 (as well as continuous on its domain and strictly convex around 1). Thus, by Theorem 1, (24) reduces to minimizing the Jensen-f_α-divergence JD_{f_α}(P_x∥P_g). We now show that this Jensen-f_α-divergence equals the f_{1,α}-divergence originally derived for the (1, α)-GAN problem of Proposition 2 (note from Proposition 2 that, for α_D = 1, condition (25) yields α > 1/2, so the range of α concurs with the range required above for the convexity of f_α). For any two distributions p and q with common support X, a direct calculation along the lines of Remark 1 shows that JD_{f_α}(p∥q) coincides with the f_{1,α}-divergence of (27)-(28). ■

Note that this lemma generalizes Lemma 2; the VanillaGAN is a special case of the (1, α)-GAN with α = 1.

Shifted LkGANs and LSGANs
Least Squares GAN (LSGAN) was proposed in [24] to mitigate the vanishing gradient problem of VanillaGAN and to stabilize training performance. The LSGAN's loss function is derived from the squared error distortion measure: we aim to minimize the distortion between the discriminator's outputs on the data samples and the target values we want the discriminator to assign to them. The LSGAN was generalized to the LkGAN in [5] by replacing the squared error distortion measure with the absolute error distortion measure of order k ≥ 1, thereby introducing an additional degree of freedom in the generator's loss function. We first state the general LkGAN problem. We then apply the result of Theorem 1 to the loss functions of LSGAN and LkGAN.
Definition 7 [5] Let γ, β, c ∈ [0, 1] and let k ≥ 1. The LkGAN's loss functions, denoted by V_{LSGAN,D} and V_{k,G}, are given by (31) and (32), respectively. The LkGAN problem is the joint optimization in (33). We next recall the solution to (33), which is a minimization of the Pearson-Vajda divergence |χ|^k(·∥·) of order k (as defined in Table 1).
Proposition 3 [5] Consider the joint optimization for the LkGAN presented in (33). The optimal discriminator D* that maximizes V_{LSGAN,D} in (31) is given by (35). Furthermore, if D = D* and γ − β = 2(c − β), the generator's optimization in (33) reduces to minimizing the Pearson-Vajda divergence |χ|^k(P_x∥P_g), as stated in (36).

Note that the LSGAN [24] is a special case of LkGAN, as we recover LSGAN when k = 2 [5]. By scrutinizing Proposition 3 and Theorem 1, we observe that the former cannot be recovered from the latter. However, we can use Theorem 1 by slightly modifying the LkGAN generator's loss function. First, for the dual-objective GAN proposed in Theorem 1, we need D* = P_x/(P_x + P_g). By (35), this is achieved for γ = 1 and β = 0. Then, we define the intermediate loss function in (37). Comparing this loss function with (8), we note that setting c_1 = 0 and c_2 = 1 in (37) satisfies the symmetry property of L_α. Finally, to ensure that the generating function f_α satisfies f_α(1) = 0, we shift each term in (37) by 1.
Putting these changes together, we propose a revised generator loss function, denoted by Ṽ_{k,G}, given in (38). We call a system that uses (38) as a generator loss function a Shifted LkGAN (SLkGAN). If k = 2, we have a shifted version of the LSGAN generator loss function, which we call the Shifted LSGAN (SLSGAN). Note that none of these modifications alters the gradients of V_{k,G} in (32), since the first term is independent of G, the choice of c_1 is irrelevant, and translating a function by a constant does not change its gradients. However, from Proposition 3, for γ = 0, β = 1 and c = 1, we do not have that γ − β = 2(c − β), and as a result, this modified problem does not reduce to minimizing a Pearson-Vajda divergence. Consequently, we can relax the condition on k in Definition 7 to just k > 0. We now show how Theorem 1 can be applied to L_α-GAN using (38).
Lemma 4 Let k > 0, let V_D be a canonical discriminator loss function that is maximized at D* in (11), and let Ṽ_{k,G} be the generator loss function defined in (38). Consider the joint optimization of (V_D, Ṽ_{k,G}). Then the generator's optimization reduces to minimizing the Jensen-f_k-divergence JD_{f_k}(P_x∥P_g), where f_k is given by f_k(u) = u^{k+1} − u. Examples of V_D(D, G) that satisfy the requirements of Lemma 4 include the LkGAN discriminator loss function given by (31) with γ = 1 and β = 0, and the VanillaGAN discriminator loss function given by (14).
Proof. Let k > 0. We can restate the SLkGAN's generator loss function in (38) in terms of V_{L_α,G} in (8): we have that L_k(1, ŷ) = ŷ^k − 1 and L_k(0, ŷ) = (1 − ŷ)^k − 1. We have that L_k is symmetric, since L_k(1, 1 − ŷ) = (1 − ŷ)^k − 1 = L_k(0, ŷ). We derive f_k from L_k via (13) and directly check that it is continuous, convex, and strictly convex around 1. Setting a = 2^k and b = 2^k − 1 in (13), we have that f_k(u) = u^{k+1} − u. We clearly have that f_k(1) = 0 and that f_k is continuous. Furthermore, we have that f''_k(u) = k(k + 1)u^{k−1}, which is non-negative for u ≥ 0. Therefore f_k is convex (as well as strictly convex around 1). As a result, by Theorem 1, the generator's optimization reduces to minimizing JD_{f_k}(P_x∥P_g).
■ We conclude this section by emphasizing that Theorem 1 serves as a unifying result that recovers existing loss functions in the literature and, moreover, provides a way to devise new ones. Our aim in the next section is to demonstrate the versatility of this result in experimentation.
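The Pearson-Vajda divergence appearing in Proposition 3 can be computed directly for discrete distributions. The sketch below assumes the generating function f(u) = |u − 1|^k (the paper's Table 1 is not reproduced in this text), under which k = 2 recovers the Pearson chi-squared divergence:

```python
import numpy as np

def pearson_vajda(p, q, k):
    """|chi|^k divergence: the f-divergence generated by f(u) = |u - 1|^k
    (assumed scaling convention)."""
    return np.sum(q * np.abs(p / q - 1) ** k)

p = np.array([0.2, 0.3, 0.5])   # toy distributions with common support
q = np.array([0.4, 0.4, 0.2])

assert pearson_vajda(p, p, 2) == 0
# k = 2 recovers the Pearson chi-squared divergence sum (p - q)^2 / q,
# since q * (p/q - 1)^2 = (p - q)^2 / q termwise
chi2 = np.sum((p - q) ** 2 / q)
assert np.isclose(pearson_vajda(p, q, 2), chi2)
```

This is the quantity that the (unshifted) LkGAN generator drives to zero under its optimal discriminator, per Proposition 3.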

Experiments
We perform two experiments on three different image datasets, which we describe below.

Experiment 1. In the first experiment, we compare the (α, α)-GAN with the (1, α)-GAN, controlling the value of α. Recall that α_D = 1 corresponds to the canonical VanillaGAN (or DCGAN) discriminator. We aim to verify whether replacing an α-GAN discriminator with a VanillaGAN discriminator stabilizes or improves the system's performance depending on the value of α. Note that the result of Theorem 1 only applies to the (α_D, α_G)-GAN for α_D = 1.
Experiment 2. We train two variants of SLkGAN, with the generator loss function as described in (38), parameterized by k > 0. We then use two different canonical discriminator loss functions to align with Theorem 1. The first is the VanillaGAN discriminator loss given by (14); we call the resulting dual-objective GAN Vanilla-SLkGAN. The second is the LkGAN discriminator loss given by (31), where we set γ = 1 and β = 0 so that the optimal discriminator is given by (11). We call this system Lk-SLkGAN. We compare the two variants to analyze how the value of k and the choice of discriminator loss impact the system's performance.

Experimental Setup
We run both experiments on three image datasets: MNIST [9], CIFAR-10 [17], and Stacked MNIST [23]. The MNIST dataset consists of black-and-white handwritten digits between 0 and 9 of size 28 × 28 × 1. The CIFAR-10 dataset is an RGB dataset of small images of common animals and modes of transportation of size 32 × 32 × 3. The Stacked MNIST dataset is an RGB dataset derived from MNIST, constructed by taking three MNIST images, assigning each to one of the three colour channels, and stacking the images on top of each other. The resulting images are then padded so that each one has size 32 × 32 × 3. For Experiment 1, we use α values of 0.5, 5.0, 10.0, and 20.0. For each value of α, we train the (α, α)-GAN and the (1, α)-GAN. We additionally train the DCGAN, which corresponds to the (1, 1)-GAN. For Experiment 2, we use k values of 0.25, 1.0, 2.0, 7.5, and 15.0. Note that when k = 2, we recover LSGAN. For the MNIST dataset, we run 10 trials with the random seeds 123, 500, 1600, 199621, 60677, 20435, 15859, 33764, 79878, and 36123, and train each GAN for 250 epochs. For the RGB datasets (CIFAR-10 and Stacked MNIST), we run 5 trials with the random seeds 123, 1600, 60677, 15859, and 79878, and train each GAN for 500 epochs. All experiments use the Adam optimizer for stochastic gradient descent, with a learning rate of 2 × 10^−4 and parameters β_1 = 0.5, β_2 = 0.999, and ϵ = 10^−7 [16]. We also experiment with the addition of a gradient penalty (GP): we add a penalty term to the discriminator's loss function to encourage the discriminator's gradient to have a unit norm [11].
The MNIST experiments were run on one 6130 2.1 GHz 1xV100 GPU, 8 CPUs, and 16 GB of memory. The CIFAR-10 and Stacked MNIST experiments were run on one EPYC 7443 2.8 GHz processor, 8 CPUs, and 16 GB of memory. For each experiment, we report the best overall Fréchet Inception Distance (FID) score [13], the best average FID score amongst all trials and its variance, and the average epoch at which the best FID score occurs and its variance. The FID score for each epoch was computed over 10,000 images. For each metric, the lowest numerical value corresponds to the model with the best metric (indicated in bold in the tables). We also report how many trials we include in our summary statistics, as it is possible for a trial to collapse and not train for the full number of epochs. The neural network architectures used in our experiments are presented in Appendix A. The training algorithms are presented in Appendix B.
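For reference, the FID score compares Gaussian fits to Inception features of real and generated images. Below is a minimal sketch of the standard formula of [13] applied to precomputed feature statistics (this is not the paper's evaluation code, and the toy statistics are illustrative):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(mu1, sigma1, mu2, sigma2):
    """FID between two Gaussians fitted to Inception features:
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2})."""
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):      # discard tiny imaginary numerical noise
        covmean = covmean.real
    diff = mu1 - mu2
    return diff @ diff + np.trace(sigma1 + sigma2 - 2 * covmean)

mu, sigma = np.zeros(3), np.eye(3)    # toy feature statistics
assert np.isclose(fid(mu, sigma, mu, sigma), 0)   # identical stats give FID 0
assert fid(mu, sigma, mu + 1.0, sigma) > 0        # lower FID is better
```

In practice the means and covariances are estimated from Inception-v3 activations over thousands of images, which is why the paper computes each epoch's score over 10,000 samples.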

Experimental Results
We report the FID metrics for Experiment 1 in Tables 2, 3, and 4, and for Experiment 2 in Tables 5, 6, and 7. We report only on those experiments that produced meaningful results. Models that use a simplified gradient penalty have the suffix "-GP". We display the output of the best-performing (α_D, α_G)-GANs in Figure 1 and the best-performing SLkGANs in Figure 3. Finally, we plot the trajectory of the FID scores throughout the training epochs in Figures 2 and 4.

Experiment 1
From Table 2, we note that 37 of the 90 trials collapse before 250 epochs have passed without a gradient penalty. The (5,5)-GAN collapses for all 5 trials, and hence it is not displayed in Table 2. This behaviour is expected, as the (α,α)-GAN is more sensitive to exploding gradients when α does not tend to 0 or +∞ [19]. The addition of a gradient penalty could mitigate the discriminator's gradients diverging in the (5,5)-GAN by encouraging gradients to have a unit norm. Using a VanillaGAN discriminator with an α-GAN generator (i.e., the (1,α)-GAN) produces better quality images for all tested values of α, compared to when both networks use an α-GAN loss function. The (1,10)-GAN achieves excellent stability, converging in all 10 trials, and also achieves the lowest average FID score. The (1,5)-GAN achieves the lowest FID score overall, marginally outperforming DCGAN. Note that when the average best FID score is very close to the best FID score, the resulting best FID score variance is quite small (of the order of 10^−3), indicating little statistical variability over the trials.
Likewise, for the CIFAR-10 and Stacked MNIST datasets, the (1,α)-GAN produces lower FID scores than the (α,α)-GAN (see Tables 3 and 4). However, both models are more stable on the CIFAR-10 dataset. With the exception of DCGAN, no model converged to its best FID score in all 5 trials on the Stacked MNIST dataset. Comparing the trials that did converge, both the (α,α)-GAN and the (1,α)-GAN performed better on the Stacked MNIST dataset than on the CIFAR-10 dataset. For CIFAR-10, the (1,10)- and (1,20)-GANs produced the best overall FID score and the best average FID score, respectively. On the other hand, the (1,0.5)-GAN produced the best overall FID score and the best average FID score for the Stacked MNIST dataset. We also observe a tradeoff between speed and performance for the CIFAR-10 and Stacked MNIST datasets: the (1,α)-GANs arrive at their lowest FID scores later than their respective (α,α)-GANs, but achieve lower FID scores overall.
Comparing Figures 2c and 2d, we observe that the (α,α)-GAN-GP provides more stability than the (1,α)-GAN for lower values of α (i.e., α = 0.5), while the (1,α)-GAN-GP exhibits more stability for higher α values (α = 10 and α = 20). Figures 2e and 2f show that the two α-GANs trained on the Stacked MNIST dataset exhibit unstable behaviour earlier in training when α = 0.5 or α = 20. However, both systems stabilize and converge to their lowest FID scores as training progresses. The (0.5,0.5)-GAN-GP system in particular exhibits wildly erratic behaviour for the first 200 epochs, then finishes training with a stable trajectory that outperforms DCGAN-GP.
A future direction is to explore how the complexity of an image dataset influences the best choice of α.For example, the Stacked MNIST dataset might be considered to be less complex than CIFAR-10, as images in the Stacked MNIST dataset only contain four unique colours (black, red, green, and blue), while the CIFAR-10 dataset utilizes significantly more colours.

Experiment 2
We see from Table 5 that all Lk-SLkGANs and Vanilla-SLkGANs have FID scores comparable to the DCGAN. When k = 15, Vanilla-SLkGAN and Lk-SLkGAN arrive at their lowest FID scores slightly earlier than DCGAN and the other SLkGANs.
The addition of a simplified gradient penalty is necessary for Lk-SLkGAN to achieve good overall performance on the CIFAR-10 dataset (see Table 6). Interestingly, Vanilla-SLkGAN achieves lower FID scores without a gradient penalty for lower k values (k = 1, 2), and with a gradient penalty for higher k values (k = 7.5, 15). When k = 0.25, both SLkGANs collapsed in all 5 trials without a gradient penalty.
Table 7 shows that Vanilla-SLkGANs achieve better FID scores than their respective Lk-SLkGAN counterparts. However, Lk-SLkGANs are more stable, as not a single trial collapsed, while 10 of the 25 Vanilla-SLkGAN trials collapsed before 500 epochs had passed. While all Vanilla-SLkGANs outperform the DCGAN with gradient penalty, Lk-SLkGAN-GP only outperforms DCGAN-GP when k = 15. Except for k = 7.5, we observe that the Lk-SLkGAN system takes fewer epochs to arrive at its lowest FID score. Comparing Figures 4e and 4f, we observe that Lk-SLkGANs exhibit more stable FID score trajectories than their respective Vanilla-SLkGANs. This makes sense, as the LkGAN loss function aims to increase the GAN's stability compared to DCGAN [5].

Conclusion
We introduced a parameterized CPE-based generator loss function for a dual-objective GAN, termed L_α-GAN, which, when used in tandem with a canonical discriminator loss function that achieves its optimum in (11), minimizes a Jensen-f_α-divergence. We showed that this system can recover VanillaGAN, the (1, α)-GAN, and LkGAN as special cases. We conducted experiments with the three aforementioned L_α-GANs on three image datasets. The experiments indicate that the (1, α)-GAN exhibits better performance than the (α, α)-GAN for α > 1. They also show that the devised SLkGAN system achieves lower FID scores with a VanillaGAN discriminator than with an LkGAN discriminator.
Future work consists of unveiling more examples of existing GANs that fall under our result, as well as applying L_α-GAN to novel, judiciously designed CPE losses L_α and evaluating the performance (in terms of both quality and diversity of generated samples) and the computational efficiency of the resulting models. Another interesting and related direction is to study L_α-GAN within the context of f-GANs, given that the Jensen-f-divergence is itself an f-divergence (see Remark 1), by systematically analyzing different Jensen-f-divergences and the role they play in improving GAN performance and stability. Other worthwhile directions include incorporating the proposed L_α loss into state-of-the-art GAN models, such as BigGAN [6], StyleGAN [15], and CycleGAN [2], among others, for high-resolution data generation and image-to-image translation applications; conducting a meticulous analysis of the sensitivity of the models' performance to different values of the α parameter; and providing guidelines on how best to tune α for different types of datasets.

Table 9 :
Discriminator architecture for the MNIST dataset.

Table 10 :
Generator architecture for the MNIST dataset.