
Image Generation From Small Datasets via Batch Statistics Adaptation

Published 3 Apr 2019 in cs.CV (arXiv:1904.01774v4)

Abstract: Thanks to the recent development of deep generative models, it is becoming possible to generate high-quality images with both fidelity and diversity. However, the training of such generative models requires a large dataset. To reduce the amount of data required, we propose a new method for transferring prior knowledge of the pre-trained generator, which is trained with a large dataset, to a small dataset in a different domain. Using such prior knowledge, the model can generate images leveraging some common sense that cannot be acquired from a small dataset. In this work, we propose a novel method focusing on the parameters for batch statistics, scale and shift, of the hidden layers in the generator. By training only these parameters in a supervised manner, we achieved stable training of the generator, and our method can generate higher quality images compared to previous methods without collapsing, even when the dataset is small (~100). Our results show that the diversity of the filters acquired in the pre-trained generator is important for the performance on the target domain. Our method makes it possible to add a new class or domain to a pre-trained generator without disturbing the performance on the original domain.

Citations (196)

Summary

  • The paper introduces a method of adapting pre-trained generators by updating scale and shift batch statistics, enabling effective image generation from limited data.
  • The methodology leverages perceptual and L1 losses during supervised training, bypassing the instability of adversarial techniques on small datasets.
  • Experimental results demonstrate significant improvements in FID and KMMD metrics, suggesting potential applications in fields like medical imaging and rare species identification.


The paper "Image Generation From Small Datasets via Batch Statistics Adaptation" by Atsuhiro Noguchi and Tatsuya Harada presents a method for generating images from small datasets using pre-trained deep generative models. The study exploits the parameters tied to batch statistics, specifically the per-channel scale and shift, to transfer knowledge from a well-trained generator to a new target domain with limited data.

Methodology

The primary challenge addressed by this research is the data requirement of deep generative models such as GANs and VAEs, which typically need large datasets to cover the target distribution adequately. The authors propose adapting a pre-trained generator to a smaller dataset by updating only the scale and shift parameters of its batch statistics while keeping all other parameters fixed. This harnesses the diverse, robust convolutional filters acquired during pre-training without retraining the network.
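The core idea can be illustrated with a minimal numpy sketch. The frozen weight matrix `W` below stands in for the pre-trained generator's convolutional filters, and only the per-channel scale (`gamma`) and shift (`beta`) applied after batch normalization are updated; the loss, targets, and dimensions are illustrative choices, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pre-trained" layer: W is never updated during adaptation.
W = rng.normal(size=(16, 8))              # hidden_dim x latent_dim
z = rng.normal(size=(64, 8))              # a batch of latent codes
target = rng.normal(3.0, 0.5, (64, 16))   # stand-in for target-domain features

# Only the per-channel scale (gamma) and shift (beta) are trainable.
gamma = np.ones(16)
beta = np.zeros(16)

def forward(z, gamma, beta):
    h = z @ W.T                                    # frozen filter bank
    h_norm = (h - h.mean(0)) / (h.std(0) + 1e-5)   # batch normalization
    return gamma * h_norm + beta                   # adapted statistics

init_loss = ((forward(z, gamma, beta) - target) ** 2).mean()

# W and z are fixed, so the normalized activations are constant.
h = z @ W.T
h_norm = (h - h.mean(0)) / (h.std(0) + 1e-5)

lr = 0.1
for _ in range(200):
    out = gamma * h_norm + beta
    err = out - target
    # Mean-squared-error gradients w.r.t. gamma and beta only;
    # W receives no update, mirroring the frozen generator weights.
    gamma -= lr * 2 * (err * h_norm).mean(0)
    beta -= lr * 2 * err.mean(0)

final_loss = ((forward(z, gamma, beta) - target) ** 2).mean()
```

Even though the frozen layer never sees the target data at training time, adjusting only `gamma` and `beta` is enough to shift the output distribution substantially, which is the intuition behind adapting batch statistics rather than full weights.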

Training Approach

The adaptation process updates the scale and shift parameters through supervised learning so that the generator's output matches the training data as closely as possible. The authors combine a perceptual loss, measured at several feature layers, with an L1 loss. This contrasts with adversarial training, whose discriminator tends to be unstable on small datasets because reliably estimating distribution distances requires many samples.
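The combined objective can be sketched as follows. The random projection `F` is only a crude stand-in for a pre-trained perceptual feature extractor, and the weighting `lam` is an assumed hyperparameter, not a value from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed random projection standing in for a pre-trained feature extractor
# (the paper uses perceptual features from a trained network; this is
# purely illustrative).
F = rng.normal(size=(32, 64))

def feature(x):
    # ReLU over a frozen linear map: a crude perceptual-feature stand-in.
    return np.maximum(x @ F.T, 0.0)

def adaptation_loss(generated, real, lam=1.0):
    """L1 distance in pixel space plus L1 distance in feature space."""
    pixel_l1 = np.abs(generated - real).mean()
    perceptual = np.abs(feature(generated) - feature(real)).mean()
    return pixel_l1 + lam * perceptual

x = rng.normal(size=(4, 64))   # stand-in for real target images
close = adaptation_loss(x + 0.01 * rng.normal(size=x.shape), x)
far = adaptation_loss(rng.normal(size=x.shape), x)
```

Outputs near the training images score low under both terms, while unrelated outputs score high, giving a stable supervised signal without a discriminator.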

Experimental Evaluation

The experiments compared the proposed method to standard GAN training and transfer learning techniques such as TransferGAN across datasets of varied sizes, including human faces, anime faces, and passion flowers. The study demonstrated that the method significantly improves image quality and diversity metrics, such as FID and KMMD, over traditional adversarial and straightforward transfer learning approaches, especially when data is scarce. Notably, the method can add new domain classes while preserving performance on the original classes, suggesting potential for low-shot learning applications.
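The KMMD metric used in the evaluation is a kernel maximum mean discrepancy between generated and real samples. A minimal numpy version with a Gaussian kernel is sketched below; the bandwidth `sigma` and the toy 8-dimensional inputs are illustrative assumptions (in practice the metric is computed on image features).

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel between the rows of a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kmmd(x, y, sigma=1.0):
    """Biased estimate of the squared kernel MMD between samples x and y."""
    kxx = gaussian_kernel(x, x, sigma).mean()
    kyy = gaussian_kernel(y, y, sigma).mean()
    kxy = gaussian_kernel(x, y, sigma).mean()
    return kxx + kyy - 2 * kxy

rng = np.random.default_rng(0)
# Two samples from the same distribution vs. two from shifted distributions.
same = kmmd(rng.normal(0, 1, (200, 8)), rng.normal(0, 1, (200, 8)), sigma=3.0)
diff = kmmd(rng.normal(0, 1, (200, 8)), rng.normal(3, 1, (200, 8)), sigma=3.0)
```

Samples drawn from the same distribution yield a KMMD near zero, while mismatched distributions score much higher, so lower values indicate generated samples closer to the real data.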

Implications and Future Directions

The findings present practical implications for applying deep generative models in fields constrained by data availability. The transferability of batch statistics can enhance other domains dealing with small datasets, potentially benefiting applications in medical imaging, rare species identification, and niche content creation.

Theoretically, this work stresses that progress need not come only from larger networks and new training paradigms; efficiently reusing the knowledge in existing models enables parsimonious adaptation. It opens pathways for future research into refining these adaptation techniques, possibly by adapting additional or alternative layers within the network, to achieve greater transfer robustness and fidelity.

In conclusion, this study underscores the promising approach of focusing on batch statistics adaptation as a means of enabling high-quality generative performance from limited datasets, marking a step forward in the practical deployment of generative models in data-sparse scenarios. The exploration may be expanded further into various model architectures or integrated into ensemble strategies to enhance scalability and generalization.
