
Latent Neural Cellular Automata for Resource-Efficient Image Restoration

Published 22 Mar 2024 in eess.IV, cs.LG, and cs.NE (arXiv:2403.15525v1)

Abstract: Neural cellular automata represent an evolution of the traditional cellular automata model, enhanced by the integration of a deep learning-based transition function. This shift from a manual to a data-driven approach significantly increases the adaptability of these models, enabling their application in diverse domains, including content generation and artificial life. However, their widespread application has been hampered by significant computational requirements. In this work, we introduce the Latent Neural Cellular Automata (LNCA) model, a novel architecture designed to address the resource limitations of neural cellular automata. Our approach shifts the computation from the conventional input space to a specially designed latent space, relying on a pre-trained autoencoder. We apply our model in the context of image restoration, which aims to reconstruct high-quality images from their degraded versions. This modification not only reduces the model's resource consumption but also maintains a flexible framework suitable for various applications. Our model achieves a significant reduction in computational requirements while maintaining high reconstruction fidelity. This increase in efficiency allows for inputs up to 16 times larger than current state-of-the-art neural cellular automata models, using the same resources.
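The pipeline the abstract describes — encode the degraded image into a smaller latent grid, iterate the neural cellular automaton there, then decode back to image space — can be sketched as follows. This is an illustrative toy in NumPy, not the authors' implementation: the pooling encoder, 4-neighbour perception, and all weight shapes are assumptions chosen only to show where the efficiency comes from (the automaton runs on a grid with 4x fewer cells than the input).

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(img, w_enc):
    # Toy "encoder": 2x2 average pooling, then a per-cell channel projection.
    h, w, c = img.shape
    pooled = img.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))
    return pooled @ w_enc  # latent grid of shape (h/2, w/2, latent_ch)

def decode(lat, w_dec):
    # Toy "decoder": channel projection, then nearest-neighbour upsampling.
    up = lat @ w_dec
    return up.repeat(2, axis=0).repeat(2, axis=1)

def nca_step(lat, w_update):
    # Perception: concatenate each cell's state with its 4-neighbour average,
    # then apply a learned residual update (here: one tanh layer).
    neigh = (np.roll(lat, 1, 0) + np.roll(lat, -1, 0)
             + np.roll(lat, 1, 1) + np.roll(lat, -1, 1)) / 4.0
    perception = np.concatenate([lat, neigh], axis=-1)
    return lat + np.tanh(perception @ w_update)

# An 8x8 RGB input mapped to a 4x4 latent grid with 16 channels.
img = rng.random((8, 8, 3))
latent_ch = 16
w_enc = rng.standard_normal((3, latent_ch)) * 0.1
w_dec = rng.standard_normal((latent_ch, 3)) * 0.1
w_update = rng.standard_normal((2 * latent_ch, latent_ch)) * 0.1

lat = encode(img, w_enc)
for _ in range(8):  # the cellular automaton iterates only in latent space
    lat = nca_step(lat, w_update)
restored = decode(lat, w_dec)
print(restored.shape)  # (8, 8, 3) — same spatial size as the input
```

Because every automaton step touches only the 4x4 latent grid rather than the 8x8 input, the per-step cost drops with the square of the downsampling factor; in the paper this is what permits inputs far larger than prior NCA models under the same resource budget (the real model uses a pre-trained deep autoencoder rather than the pooling stand-in above).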

