
Disentangled Inference for GANs with Latently Invertible Autoencoder

Published 19 Jun 2019 in cs.LG and stat.ML (arXiv:1906.08090v4)

Abstract: Generative Adversarial Networks (GANs) play an increasingly important role in machine learning, but one fundamental issue hinders their practical application: they lack the capability to encode real-world samples. The conventional way of addressing this issue is to learn an encoder for the GAN via a Variational Auto-Encoder (VAE). In this paper, we show that the entanglement of the latent space in the VAE/GAN framework is the main obstacle to encoder learning. To address this entanglement issue and enable inference in GANs, we propose a novel algorithm named the Latently Invertible Autoencoder (LIA), in which an invertible network and its inverse mapping are symmetrically embedded in the latent space of a VAE. The decoder of LIA is first trained as a standard GAN together with the invertible network; the partial encoder is then learned from a disentangled autoencoder by detaching the invertible network from LIA, thus avoiding the entanglement problem caused by the random latent space. Experiments conducted on the FFHQ face dataset and three LSUN datasets validate the effectiveness of LIA/GAN.
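The two-stage scheme described in the abstract can be sketched with toy linear maps. This is a minimal structural illustration, not the paper's implementation: the adversarial training of stage 1 is stubbed out with a fixed decoder, the invertible network is stood in for by an orthogonal matrix, and all names (`phi`, `encode`, `decode`, `E`, `G`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy latent dimension

# Invertible network phi: here a fixed orthogonal matrix stands in for
# the real invertible flow (orthogonal => the inverse is the transpose).
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
phi = lambda w: w @ Q          # intermediate code w -> latent z
phi_inv = lambda z: z @ Q.T    # latent z -> intermediate code w

# Stage 1 (stubbed): in the paper, the decoder G is trained adversarially
# on z ~ N(0, I) passed through phi_inv; here G is just a fixed linear map.
G = rng.normal(size=(d, 2 * d))
decode = lambda w: w @ G       # w -> sample x

# Stage 2: detach phi and fit the partial encoder E on the deterministic
# autoencoder x -> E -> w -> G -> x (here by least squares), avoiding the
# stochastic latent space entirely.
W_train = rng.normal(size=(500, d))
X_train = decode(W_train)                       # "real" training samples
E, *_ = np.linalg.lstsq(X_train, W_train, rcond=None)
encode = lambda x: x @ E

# Inference on a new sample: x -> w -> z (disentangled code) and back.
x = decode(rng.normal(size=(1, d)))
w = encode(x)
z = phi(w)
x_rec = decode(phi_inv(z))
print(np.max(np.abs(x - x_rec)))  # reconstruction error is near zero
```

The key structural point the sketch mirrors is that `phi` participates only in stage 1 (sampling) and at inference time, while the encoder in stage 2 is trained against the frozen decoder without it.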

Citations (32)
