Attentive VQ-VAE

Published 20 Sep 2023 in cs.CV and cs.AI | arXiv:2309.11641v2

Abstract: We present a novel approach to enhancing VQ-VAE models through the integration of a Residual Encoder and a Residual Pixel Attention layer, together named the Attentive Residual Encoder (AREN). The objective of our research is to improve the performance of VQ-VAE while keeping the parameter count practical. The AREN encoder is designed to operate effectively at multiple levels, accommodating diverse architectural complexities. The key innovation is the integration of an inter-pixel self-attention mechanism into the AREN encoder, which allows us to efficiently capture and exploit contextual information across latent vectors. Additionally, our model uses extra encoding levels to further enhance its representational power. Our attention layer employs a minimal-parameter approach, ensuring that latent vectors are modified only when pertinent information from other pixels is available. Experimental results demonstrate that the proposed modifications lead to significant improvements in data representation and generation, making VQ-VAEs even more suitable for a wide range of applications.
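The abstract describes an inter-pixel self-attention layer with a residual connection, so that each latent vector is updated only when relevant information is available at other spatial positions. A minimal sketch of that idea in NumPy is below; the function name `pixel_attention`, the single-head formulation, and the projection shapes are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def pixel_attention(z, w_q, w_k, w_v):
    """Single-head self-attention across the pixels of a latent grid.

    z        : (H*W, D) array of latent vectors, one per spatial position.
    w_q, w_k, w_v : (D, D) query/key/value projections (hypothetical names).

    The residual connection means the layer returns z unchanged whenever
    the attended values contribute nothing, matching the abstract's claim
    that latents are modified only when other pixels carry pertinent
    information.
    """
    q, k, v = z @ w_q, z @ w_k, z @ w_v
    # Scaled dot-product attention over all H*W positions.
    att = softmax(q @ k.T / np.sqrt(z.shape[1]), axis=-1)
    return z + att @ v  # residual update of the latent vectors
```

A multi-level AREN encoder would, under this reading, stack such layers between convolutional residual blocks before vector quantization; keeping a single head and shared (D, D) projections is one way to hold the parameter count low, as the abstract emphasizes.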



Authors (2)