Inverse problem regularization with hierarchical variational autoencoders

Published 20 Mar 2023 in cs.CV and cs.LG (arXiv:2303.11217v2)

Abstract: In this paper, we propose to regularize ill-posed inverse problems using a deep hierarchical variational autoencoder (HVAE) as an image prior. The proposed method synthesizes the advantages of i) denoiser-based Plug-and-Play (PnP) approaches and ii) generative-model-based approaches to inverse problems. First, we exploit VAE properties to design an efficient algorithm that benefits from the convergence guarantees of PnP methods. Second, our approach is not restricted to specialized datasets, and the proposed PnP-HVAE model is able to solve image restoration problems on natural images of any size. Our experiments show that the proposed PnP-HVAE method is competitive with both SOTA denoiser-based PnP approaches and other SOTA restoration methods based on generative models.
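To make the Plug-and-Play idea concrete, the sketch below shows a generic PnP splitting iteration that alternates a data-fidelity proximal step with a plug-in prior step. This is an illustration only, not the authors' PnP-HVAE algorithm: the `prior_step` here is a simple box blur standing in for the HVAE encode/decode pass, and all function names, parameters, and the toy denoising setup are assumptions made for the example.

```python
import numpy as np

def data_step(z, y, sigma2, mu):
    # Proximal step for the quadratic data-fidelity term:
    # argmin_x ||y - x||^2 / (2*sigma2) + (mu/2) * ||x - z||^2
    # has the closed-form weighted-average solution below.
    return (y / sigma2 + mu * z) / (1.0 / sigma2 + mu)

def prior_step(x):
    # Stand-in for the learned prior (the HVAE pass in PnP-HVAE):
    # a crude 3x3 box blur that pulls the estimate toward smooth images.
    pad = np.pad(x, 1, mode="edge")
    out = np.zeros_like(x)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            out += pad[1 + dx:1 + dx + x.shape[0], 1 + dy:1 + dy + x.shape[1]]
    return out / 9.0

def pnp_restore(y, sigma2=0.04, mu=25.0, n_iter=20):
    # Alternate the plug-in prior step and the data-fidelity step.
    x = y.copy()
    for _ in range(n_iter):
        z = prior_step(x)                 # plug-in regularization step
        x = data_step(z, y, sigma2, mu)   # data-fidelity proximal step
    return x

# Toy denoising problem: y = clean + Gaussian noise.
rng = np.random.default_rng(0)
clean = np.outer(np.linspace(0, 1, 16), np.linspace(0, 1, 16))
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
restored = pnp_restore(noisy)
```

In the actual method, the prior step would be replaced by a forward pass through the trained hierarchical VAE, and the data step would use the proximal operator of the degradation model (blur, subsampling, etc.) rather than the identity used here.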
