Recent Entropy Article: "A Unifying Generator Loss Function for Generative Adversarial Networks"

Authors: Justin Veiner, Fady Alajaji and Bahman Gharesifard

Read full article at: https://www.mdpi.com/1099-4300/26/4/290

This article belongs to the Special Issue "Information-Theoretic Methods in Deep Learning: Theory and Applications".

Abstract: A unifying α-parametrized generator loss function is introduced for a dual-objective generative adversarial network (GAN) that uses a canonical (or classical) discriminator loss function, such as the one in the original GAN (VanillaGAN) system. The generator loss function is based on a symmetric class probability estimation type function, ℒ_α, and the resulting GAN system is termed ℒ_α-GAN. Under an optimal discriminator, it is shown that the generator's optimization problem consists of minimizing a Jensen-f_α-divergence, a natural generalization of the Jensen-Shannon divergence, where f_α is a convex function expressed in terms of the loss function ℒ_α. It is also demonstrated that this ℒ_α-GAN problem recovers as special cases a number of GAN problems in the literature, including VanillaGAN, least squares GAN (LSGAN), least kth-order GAN (LkGAN), and the recently introduced (α_D, α_G)-GAN with α_D = 1. Finally, experimental results are provided for three datasets (MNIST, CIFAR-10, and Stacked MNIST) to illustrate the performance of various examples of the ℒ_α-GAN system.
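For readers unfamiliar with the term, the Jensen-Shannon divergence generalizes to a Jensen-f-divergence by the standard symmetrized construction below; it is an assumption that the paper's Jensen-f_α-divergence follows this form with f = f_α (see the full article for the exact definition):

\[
  \mathrm{JD}_f(p \,\|\, q)
  = \tfrac{1}{2}\, D_f\!\left(p \,\middle\|\, \tfrac{p+q}{2}\right)
  + \tfrac{1}{2}\, D_f\!\left(q \,\middle\|\, \tfrac{p+q}{2}\right),
  \qquad
  D_f(p \,\|\, q) = \int q(x)\, f\!\left(\frac{p(x)}{q(x)}\right) dx .
\]

Taking f(t) = t log t makes D_f the Kullback-Leibler divergence, and JD_f reduces to the usual Jensen-Shannon divergence.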

Keywords: generative adversarial networks; deep learning; parameterized loss functions; f-divergence; Jensen-f-divergence
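As a concrete illustration of an α-parametrized generator loss, the sketch below uses the α-loss known from the (α_D, α_G)-GAN line of work cited in the abstract, which recovers the non-saturating VanillaGAN generator loss as α → 1. The names alpha_loss and generator_loss are hypothetical, and the paper's ℒ_α family is more general, so this is a minimal sketch under those assumptions rather than the authors' exact formulation.

import torch

def alpha_loss(p, alpha, eps=1e-7):
    # alpha-loss of a class probability estimate p in (0, 1]:
    #   (alpha / (alpha - 1)) * (1 - p**(1 - 1/alpha)),
    # which recovers the log-loss -log(p) as alpha -> 1.
    # (Assumed form, taken from the (alpha_D, alpha_G)-GAN literature.)
    p = p.clamp(min=eps, max=1.0)
    if abs(alpha - 1.0) < 1e-6:
        return -torch.log(p)
    return (alpha / (alpha - 1.0)) * (1.0 - p.pow(1.0 - 1.0 / alpha))

def generator_loss(d_fake, alpha):
    # d_fake: discriminator outputs D(G(z)) in (0, 1) for generated samples.
    # The generator is trained to push D(G(z)) toward 1, so we penalize
    # the alpha-loss of the discriminator's estimate on fake samples.
    return alpha_loss(d_fake, alpha).mean()

# Example usage: alpha = 1 gives the non-saturating VanillaGAN loss.
d_fake = torch.rand(8)
print(generator_loss(d_fake, alpha=1.0))   # -E[log D(G(z))]
print(generator_loss(d_fake, alpha=2.0))   # a tunable alpha-loss variant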
