
Plug-and-Play Priors in Inverse Problems

Updated 8 February 2026
  • Plug-and-Play Priors are methods that integrate advanced denoisers as implicit priors within iterative inverse problem solvers.
  • They alternate between data-consistency updates and denoising steps to effectively address imaging challenges like super-resolution, deblurring, and compressed sensing.
  • Empirical studies show that PnP techniques offer robust convergence and state-of-the-art performance across a wide range of computational imaging applications.

Plug-and-Play Priors (PnP) are a class of techniques in computational imaging and inverse problems that integrate advanced denoisers—such as deep neural networks or generative models—directly as implicit priors within iterative reconstruction algorithms, bypassing the need to explicitly formulate a regularization function. PnP methods alternate between forward-model data-consistency updates and prior-driven denoising, enabling the flexible integration of statistical or learned image models into optimization and Bayesian inference schemes. This paradigm supports a broad range of applications, including super-resolution, deblurring, compressed sensing, image registration, multimodal fusion, and beyond.

1. Fundamentals and Mathematical Formulation

The central principle of PnP is to replace the explicit prior or regularizer in classical inversion (often expressed as minimizing $f(x) + \lambda R(x)$) by an operator implemented via a denoiser or, more generally, a generative prior. This is formalized in splitting algorithms such as ADMM, ISTA, or half-quadratic splitting, where the regularization/proximal step

$v^{k+1} = \operatorname{prox}_{\lambda R}(x^{k+1} + u^k)$

is replaced by

$v^{k+1} = D_\sigma(x^{k+1} + u^k)$

where $D_\sigma$ acts as a black-box denoiser or score-based estimator (Xu et al., 2020, Sreehari et al., 2015). The framework integrates seamlessly into schemes that alternate data-consistency steps (solvable given explicit likelihoods or linear models) with a "prior step" of denoising, without requiring a closed form for $R(x)$.

Formally, PnP methods solve

$\min_{x} \; f(x) + \lambda R(x)$

with $R(x)$ only accessible via denoising, enabling the use of any powerful denoising technique—BM3D, DnCNN, deep generative priors, etc.—as an implicit prior for general inverse problems (Sreehari et al., 2015, Wang et al., 20 May 2025, Zhao et al., 2020).
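The alternation above can be sketched in a few lines of NumPy. This is a minimal illustration, not any specific paper's implementation: `A`/`At` stand for the forward operator and its adjoint, and `denoiser` is an arbitrary black-box denoiser plugged in where the proximal step of $R$ would otherwise appear.

```python
import numpy as np

def pnp_admm(y, A, At, denoiser, rho=1.0, n_iter=50):
    """Plug-and-Play ADMM sketch for min_x 0.5||Ax - y||^2 + lam R(x),
    with prox_{lam R} replaced by a black-box denoiser D_sigma.
    A/At are the forward operator and its adjoint (callables);
    denoiser(v) is any off-the-shelf denoiser (illustrative interface)."""
    x = At(y)                      # initial estimate via back-projection
    v = x.copy()
    u = np.zeros_like(x)
    for _ in range(n_iter):
        # Data-consistency step: approximately solve
        # (A^T A + rho I) x = A^T y + rho (v - u) by a few gradient steps,
        # avoiding an explicit matrix inverse.
        for _ in range(5):
            grad = At(A(x) - y) + rho * (x - (v - u))
            x = x - 0.1 * grad
        # Prior step: the proximal operator of R is replaced by the denoiser.
        v = denoiser(x + u)
        # Dual (scaled Lagrange multiplier) update.
        u = u + x - v
    return x
```

With an identity forward model and an identity "denoiser" the iteration simply reproduces the measurements, which is a quick sanity check that the splitting is wired correctly.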

2. PnP with Advanced Generative Priors: Diffusion and Beyond

Recent advances have focused on leveraging generative diffusion models as plug-and-play priors, yielding score-based PnP frameworks (Wang et al., 20 May 2025, Li et al., 10 Nov 2025, Wang et al., 2 Feb 2026, Banerjee et al., 28 Jul 2025):

  • Score-based Diffusion Priors: Here, the prior $p(x)$ is encoded via a parameterized score network $s_\theta(x,t) \approx \nabla_x \log p_t(x)$, trained over a continuum of noise levels. The PnP iterations incorporate MCMC-based or Langevin steps where the prior term is replaced by calls to the pretrained score network, and the data-consistency steps enforce the measurement model (Wang et al., 20 May 2025, Wang et al., 2 Feb 2026).
  • Flexible Integration: The prior step can now support various generative formulations, including variance-preserving (VP), variance-exploding (VE), EDM, or even rectified-flow models (Yang et al., 2024).
  • Extensions to Non-Gaussian Likelihoods: Incorporation of complex fidelity terms, such as those arising from non-Gaussian noise (e.g., gGSM, Poisson), is achieved via variational majorization or by recasting the data-fidelity as a weighted least-squares within the PnP loop (Li et al., 10 Nov 2025, Klatzer et al., 20 Mar 2025).
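A single score-guided Langevin update of the kind described above can be sketched as follows. The interface names (`score_fn`, `noise_var`) are illustrative assumptions, not a particular paper's API; the fidelity gradient assumes Gaussian measurements $y = Ax + n$.

```python
import numpy as np

def langevin_pnp_step(x, y, A, At, score_fn, sigma, step, noise_var, rng):
    """One plug-and-play Langevin step: the log-posterior gradient combines
    the likelihood gradient (Gaussian measurement model y = Ax + n) with a
    pretrained score network s_theta(x, sigma) ~ grad_x log p_sigma(x)."""
    grad_lik = At(y - A(x)) / noise_var      # gradient of log p(y|x)
    grad_prior = score_fn(x, sigma)          # score network call (the "prior step")
    noise = rng.standard_normal(x.shape)
    return x + step * (grad_lik + grad_prior) + np.sqrt(2.0 * step) * noise
```

Iterating this update (with an appropriate noise-level schedule for `sigma`) yields posterior samples rather than a single point estimate, which is what enables the uncertainty quantification discussed in Section 3.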

3. Algorithmic Instantiations and Convergence Theory

PnP methods have been realized in diverse algorithmic settings:

  • ADMM and Proximal Gradient: Iterative approaches such as PnP-ADMM, PnP-ISTA, PnP-FISTA, and variants in Euclidean or Bregman geometry are used for both convex and certain nonconvex data-fitting terms. Convergence to stationary points has been established for a variety of denoisers—including MMSE, deep CNNs, and Bregman extensions—under Lipschitz, nonexpansive, or spectral-norm bounds (Xu et al., 2020, Al-Shabili et al., 2022).
  • Score-based MCMC: Plug-and-play Langevin samplers and stochastic differential equation (SDE) solvers enable full posterior inference, supporting MMSE estimates and uncertainty quantification (Laumont et al., 2021, Klatzer et al., 20 Mar 2025). In the diffusion case, split Gibbs samplers alternate analytic conditional sampling (likelihood) and MCMC score-guided denoising (prior).
  • Mirror Descent and Non-Euclidean Generalizations: Bregman PnP algorithms and Riemannian adaptations allow the native handling of task-specific geometries, e.g., Poisson imaging using Burg entropy or other mirror maps (Al-Shabili et al., 2022, Klatzer et al., 20 Mar 2025).
  • Denoiser Scaling and Test-Time Adaptation: To resolve mismatches between the denoiser’s training distribution and test-time data, explicit input-output scaling of the denoiser (Xu et al., 2020) or test-time self-supervised adaptation (TTT) (Chandler et al., 2024) are employed, improving robustness in settings with distribution shift.
  • Single-Shot and Kolmogorov-Arnold Priors: Emerging PnP paradigms allow for per-instance denoiser training (Single-Shot PnP), including single-observation Kolmogorov-Arnold Network priors, extending PnP to fully data-minimal settings without large pretraining (Cheng et al., 2024, Cheng et al., 2023).
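The denoiser-scaling idea mentioned above is particularly simple to sketch: wrap the black-box denoiser with an input-output scaling $D_\mu(x) = D(\mu x)/\mu$, one common parameterization (the exact form varies across papers), so that $\mu$ acts as a test-time regularization-strength knob.

```python
import numpy as np

def scaled_denoiser(denoiser, mu):
    """Wrap a black-box denoiser D with input-output scaling,
    D_mu(x) = D(mu * x) / mu. A sketch of the denoiser-scaling idea:
    mu adjusts effective regularization strength without retraining."""
    def D_mu(x):
        return denoiser(mu * np.asarray(x)) / mu
    return D_mu
```

For a linear denoiser $D(v) = cv$ the wrapper leaves the map unchanged, consistent with the fixed-point correspondence between scaled and unscaled denoisers noted in Section 5.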

4. Applications and Empirical Results

PnP frameworks have demonstrated state-of-the-art outcomes in a spectrum of high-dimensional inverse problems:

| Application Area | Representative Results / Notable Features | Reference |
|---|---|---|
| OCT Super-resolution & Denoising | PnP-DM achieves PSNR ≈ 32.5 dB / SSIM 0.72 | (Wang et al., 20 May 2025, Wang et al., 2 Feb 2026) |
| Impulse/Non-Gaussian Noise Restoration | PnP-diffusion with IRLS outperforms TV/DRUNet | (Li et al., 10 Nov 2025) |
| Multimodal Protein Structure | Sub-Ångstrom RMSD via Adam-PnP with adaptive weighting | (Banerjee et al., 28 Jul 2025) |
| Hyperspectral Unmixing | PnP-ADMM with DnCNN/BM3D/BM4D outperforms TV/graph/low-rank methods | (Zhao et al., 2020) |
| Electron Tomography/Registration | PnP with doubly-stochastic NLM improves RMSE, convergence, stability | (Sreehari et al., 2015, Xing et al., 2019) |
| Speech Dereverberation | PnP-RED in WPE yields 1–2 dB PESQ/F-SNR gains over standard WPE | (Yang et al., 2023) |
| MRI Reconstruction (Domain Shift) | PnP-TTT bridges gap between mismatched and domain-specific priors | (Chandler et al., 2024) |

Empirical studies consistently report rapid convergence (often within a few tens of iterations), robustness to challenging measurement models (high noise, severe undersampling), and the ability to leverage both pretrained and single-shot priors. Application-specific choices, such as analysis (gradient-domain) priors for image super-resolution/deblurring (Chandler et al., 18 Sep 2025) or Bregman/Poisson mirror matching (Al-Shabili et al., 2022, Klatzer et al., 20 Mar 2025), serve to align the prior structure with the underlying data geometry.

5. Theoretical Guarantees and Performance Analysis

A rigorous theoretical framework underpins the convergence and recovery performance of PnP methods:

  • Convergence of ISTA/ADMM PnP: For denoisers corresponding to MMSE estimators (or those satisfying certain regularity and spectral norm bounds), convergence to stationary points of an implicit global cost $g(x)+h(x)$ can be guaranteed via a majorization-minimization argument (Xu et al., 2020, Liu et al., 2021).
  • Recovery Under Restricted Eigenvalue Conditions: Assuming S-REC (stable embedding of the denoiser's fixed-point set), explicit contraction constants and error floors can be established, covering both noiseless and noisy settings, for forward operators $A$ of full rank or certain random ensembles (Liu et al., 2021).
  • Bayesian Validity and Well-posedness: Under mild assumptions, PnP-induced posterior models (including those with denoising deep networks) are shown to approximate a well-defined, regularized posterior; the associated Langevin samplers are provably ergodic (Laumont et al., 2021).
  • Robustness to Non-Euclidean Data Fidelity: In Bregman/generalized-mirror formulations, contractive properties of the PnP iteration can be established in the corresponding metric, enabling the extension to Poisson, multiplicative, or other structured noise regimes (Al-Shabili et al., 2022).
  • Denoiser Scaling Consistency: Introducing a scaling parameter $\mu$ provides a theoretical and practical basis for regularization-strength adjustment, maintaining the fixed-point correspondence between scaled and unscaled denoisers (Xu et al., 2020).
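The Lipschitz/nonexpansiveness conditions appearing in these guarantees can be probed numerically. The following Monte-Carlo check is a sketch: it yields only a lower bound on the local Lipschitz constant (an estimate at or below 1 is consistent with, but does not prove, nonexpansiveness).

```python
import numpy as np

def estimate_lipschitz(denoiser, x, n_trials=100, eps=1e-3, seed=0):
    """Monte-Carlo lower bound on a denoiser's local Lipschitz constant
    at x: the max over random perturbations d of ||D(x+d) - D(x)|| / ||d||.
    Values <= 1 are consistent with the nonexpansiveness assumption used
    in several PnP convergence results."""
    rng = np.random.default_rng(seed)
    Dx = denoiser(x)
    best = 0.0
    for _ in range(n_trials):
        d = rng.standard_normal(x.shape)
        d *= eps / np.linalg.norm(d)          # perturbation of norm eps
        ratio = np.linalg.norm(denoiser(x + d) - Dx) / eps
        best = max(best, ratio)
    return best
```

For a linear map $D(v) = cv$ the estimate recovers $|c|$ exactly, which makes the routine easy to validate before applying it to a learned denoiser.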

6. Structural Flexibility and Extensions

PnP priors offer substantial modularity: the denoiser, the data-fidelity term, and the splitting geometry can each be swapped independently, allowing adaptation to new noise models, imaging modalities, and test-time distributions without redesigning the full reconstruction pipeline.

7. Impact and Future Directions

Plug-and-Play Priors constitute a foundational methodology for modern inverse problems, combining rigorous mathematical structure with practical flexibility and empirical superiority. Future research is likely to focus on:

  • Unified plug-in frameworks for increasingly general classes of priors and data models
  • Accelerated and scalable solvers for large-scale, high-throughput applications
  • Enhanced uncertainty quantification and model interpretation in scientific and biomedical settings
  • Integration with causal, physics-informed, and multimodal generative priors

Theoretical challenges remain—including a complete characterization of convergence and optimality in all practical settings, and handling the full generality of nonconvex and nonstationary data models—but empirical and domain-specific evidence already establishes PnP as a central, extensible tool for next-generation computational imaging (Wang et al., 20 May 2025, Wang et al., 2 Feb 2026, Li et al., 10 Nov 2025, Laumont et al., 2021, Liu et al., 2021).

