
Kernel Density Steering: Inference-Time Scaling via Mode Seeking for Image Restoration

Published 8 Jul 2025 in cs.CV and eess.IV (arXiv:2507.05604v1)

Abstract: Diffusion models show promise for image restoration, but existing methods often struggle with inconsistent fidelity and undesirable artifacts. To address this, we introduce Kernel Density Steering (KDS), a novel inference-time framework promoting robust, high-fidelity outputs through explicit local mode-seeking. KDS employs an $N$-particle ensemble of diffusion samples, computing patch-wise kernel density estimation gradients from their collective outputs. These gradients steer patches in each particle towards shared, higher-density regions identified within the ensemble. This collective local mode-seeking mechanism, acting as "collective wisdom", steers samples away from spurious modes prone to artifacts, arising from independent sampling or model imperfections, and towards more robust, high-fidelity structures. This allows us to obtain better quality samples at the expense of higher compute by simultaneously sampling multiple particles. As a plug-and-play framework, KDS requires no retraining or external verifiers, seamlessly integrating with various diffusion samplers. Extensive numerical validations demonstrate KDS substantially improves both quantitative and qualitative performance on challenging real-world super-resolution and image inpainting tasks.

Summary

  • The paper introduces Kernel Density Steering (KDS), a novel inference-time framework that steers image restoration via patch-wise kernel density estimation.
  • It employs an ensemble of diffusion samples and computes local KDE gradients to guide each latent patch toward higher consensus regions, balancing distortion and perceptual quality.
  • KDS operates without retraining and integrates seamlessly with various diffusion models, delivering improved metrics in super-resolution and inpainting tasks.

Kernel Density Steering: Inference-Time Mode Seeking for Robust Image Restoration

The paper "Kernel Density Steering: Inference-Time Scaling via Mode Seeking for Image Restoration" (2507.05604) introduces a novel inference-time framework, Kernel Density Steering (KDS), designed to enhance the fidelity, perceptual quality, and robustness of diffusion-based image restoration. The method leverages an ensemble of diffusion samples and applies explicit local mode-seeking via kernel density estimation (KDE) gradients, steering the sampling process toward high-density, consensus regions in the solution space.

Motivation and Context

Diffusion models have demonstrated strong performance in image restoration tasks such as super-resolution and inpainting. However, standard posterior sampling with these models often results in a trade-off between perceptual quality and distortion: sharp, perceptually pleasing samples may exhibit high distortion or hallucinations, while averaging multiple samples reduces distortion but leads to blurriness and loss of detail. Existing approaches to improve this trade-off typically require retraining, architectural modifications, or reliance on external verifiers, which can introduce biases or limit applicability.

KDS addresses these limitations by providing a plug-and-play, retraining-free mechanism that operates entirely at inference time. It is compatible with a wide range of diffusion samplers and does not require knowledge of the degradation process or access to external reward models.

Methodology

KDS operates by maintaining an ensemble of $N$ particles (samples) during the diffusion process. At each timestep, it computes patch-wise KDE over the ensemble's predicted clean latents, estimating the local density in the latent space. The key steps are:

  1. Patch-wise KDE and Mean Shift: For each spatial patch in the latent representation, KDS computes the mean shift vector, which is proportional to the gradient of the log-KDE. This vector points toward regions of higher sample density (modes) within the ensemble.
  2. Steering Update: Each particle's patch is updated by moving it in the direction of its mean shift vector, scaled by a time-dependent steering strength $\delta_t$. This operation is performed independently for each patch and each particle.
  3. Integration with Diffusion Samplers: The KDS-refined latent predictions are used in place of the standard predictions in the next diffusion step. The process is compatible with both first-order (e.g., DDIM) and higher-order (e.g., DPM-Solver++) samplers.
  4. Final Output Selection: After the reverse diffusion process, a single output is selected from the ensemble by choosing the latent closest to the ensemble mean, which empirically yields robust and high-fidelity results.
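
For a Gaussian kernel with bandwidth $h$, the mean-shift vector in step 1 has a standard closed form (a textbook mean-shift identity, not notation specific to this paper). Given the ensemble's patches $\{x^{(j)}\}_{j=1}^{N}$ at a given spatial location and $K_h(u) = \exp\left(-\|u\|^2 / 2h^2\right)$:

```latex
m(x) \;=\; \frac{\sum_{j=1}^{N} K_h\!\left(x - x^{(j)}\right) x^{(j)}}{\sum_{j=1}^{N} K_h\!\left(x - x^{(j)}\right)} \;-\; x
\;=\; h^2 \, \nabla_x \log \hat{p}_h(x),
\qquad
\hat{p}_h(x) \;=\; \frac{1}{N}\sum_{j=1}^{N} K_h\!\left(x - x^{(j)}\right),
```

so the steering update in step 2 is simply $x \leftarrow x + \delta_t \, m(x)$: a scaled gradient-ascent step on the log-KDE.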

The patch-wise mechanism is critical for scalability, as direct KDE in high-dimensional latent spaces is infeasible with practical ensemble sizes. By operating on small patches (e.g., $1 \times 1$ spatial locations in the latent map), KDS achieves effective mode-seeking with moderate computational overhead.
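
As a concrete sketch, the $1 \times 1$-patch case can be written in a few lines of NumPy. The function name, the Gaussian kernel, and the fixed bandwidth/strength values below are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def kds_step(x0_preds, h=3.0, delta=0.3):
    """One illustrative KDS steering update (sketch, not the authors' code).

    x0_preds: (N, C, H, W) ensemble of predicted clean latents. With 1x1
    patches, each "patch" is the C-dimensional vector at one spatial
    location, for one particle.
    """
    # Pairwise differences between particles at every spatial location:
    # diff[i, j] holds x_i - x_j for the C-vector at each (h, w).
    diff = x0_preds[:, None] - x0_preds[None, :]        # (N, N, C, H, W)
    sq = (diff ** 2).sum(axis=2)                        # (N, N, H, W)
    w = np.exp(-sq / (2.0 * h ** 2))                    # Gaussian kernel weights
    w /= w.sum(axis=1, keepdims=True)                   # normalize over the ensemble
    # Kernel-weighted ensemble mean per particle = mean-shift target; moving
    # toward it ascends the log-KDE (local mode seeking).
    target = np.einsum('ijhw,jchw->ichw', w, x0_preds)  # (N, C, H, W)
    return x0_preds + delta * (target - x0_preds)
```

Each particle moves toward a kernel-weighted consensus of the ensemble, which contracts the particles toward shared high-density regions while leaving isolated, far-away modes largely unaffected (their kernel weights are near zero).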

Empirical Results

Extensive experiments on real-world super-resolution (DIV2K, RealSR, DRealSR) and inpainting (ImageNet) tasks demonstrate that KDS consistently improves both distortion (PSNR, SSIM) and perceptual (LPIPS, FID, NIMA, CLIPIQA) metrics across multiple diffusion model backbones (LDM-SR, DiffBIR, SeeSR) and samplers. Notably:

  • Quantitative Gains: KDS yields improvements of up to 1 dB in PSNR and significant reductions in LPIPS and FID compared to baseline sampling. The improvements are consistent across datasets and backbones.
  • Qualitative Improvements: Visual results show that KDS reduces artifacts, enhances sharpness, and produces more coherent and plausible restorations, particularly in challenging regions.
  • Robustness: KDS improves the worst-case performance within the ensemble, reducing the likelihood of severe artifacts or failure cases.
  • Comparison to Best-of-N: KDS outperforms naive best-of-N selection strategies based on no-reference metrics, both in terms of stability and overall quality, at comparable computational cost.

Implementation Considerations

Computational Overhead: The primary cost of KDS is linear in the number of particles $N$. Empirical results indicate that moderate ensemble sizes ($N = 10$–$15$) provide a favorable trade-off between performance and cost. The patch-wise KDE and mean shift computations are highly parallelizable and can be efficiently implemented on modern hardware.

Hyperparameters: The kernel bandwidth $h$ and steering strength $\delta_t$ are critical for balancing perception and distortion. The paper provides empirical guidelines for their selection and demonstrates that moderate values yield robust performance. Adaptive strategies for these hyperparameters could further improve generalization.

Plug-and-Play Integration: KDS is implemented as a modular update within the sampling loop and does not require changes to the underlying diffusion model or retraining. Pseudocode for integration with DDIM and DPM-Solver++ is provided, facilitating adoption in existing pipelines.
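
The modular structure can be sketched as follows. The callables `predict_x0`, `sampler_step`, and `steer` are illustrative placeholders, not the paper's API: `steer` stands in for the patch-wise mean-shift update over the ensemble:

```python
import numpy as np

def kds_sampling_loop(predict_x0, sampler_step, steer, x_T_ensemble, timesteps):
    """Hedged sketch of KDS as a modular update inside a generic diffusion
    sampling loop (names and signatures are assumptions for illustration)."""
    x = x_T_ensemble.copy()                     # (N, ...) particle ensemble
    for t in timesteps:
        # Predict clean latents for every particle, then refine them jointly.
        x0 = np.stack([predict_x0(xi, t) for xi in x])
        x0 = steer(x0, t)                       # KDS-refined predictions replace raw ones
        # Advance each particle with the sampler of choice.
        x = np.stack([sampler_step(xi, x0i, t) for xi, x0i in zip(x, x0)])
    # Final output selection: the particle closest to the ensemble mean.
    dists = ((x - x.mean(axis=0)) ** 2).reshape(len(x), -1).sum(axis=1)
    return x[int(np.argmin(dists))]
```

In a real pipeline, `sampler_step` would be a DDIM or DPM-Solver++ update consuming the steered clean-latent prediction; nothing else in the loop changes.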

Scalability: The patch-wise approach enables KDS to scale to high-dimensional latent spaces typical of modern diffusion models. The method is agnostic to the specific architecture or conditioning mechanism of the diffusion model.

Theoretical and Practical Implications

KDS introduces a new paradigm for inference-time guidance in generative models, leveraging internal ensemble consensus rather than external verifiers or explicit likelihoods. This approach is particularly advantageous in real-world restoration scenarios where degradation models are unknown or ill-specified.

Theoretically, KDS can be viewed as a collaborative filtering mechanism that regularizes the sampling process, steering it toward robust, high-density regions of the posterior. This mitigates the risk of sampling from spurious modes induced by model imperfections or noise.

Practically, KDS enables practitioners to improve restoration quality without retraining or architectural changes, making it suitable for deployment in resource-constrained or legacy systems. The method is broadly applicable to a range of inverse problems beyond image restoration, wherever diffusion models are used.

Limitations and Future Directions

  • Computational Cost: While moderate ensemble sizes are effective, the linear scaling with $N$ may be prohibitive for some applications. Future work could explore adaptive ensemble sizing or more efficient density estimation techniques.
  • Hyperparameter Sensitivity: The performance of KDS depends on the choice of kernel bandwidth and steering strength. Automated or data-driven selection methods could enhance robustness.
  • Extension to Other Modalities: While demonstrated on image restoration, the core principles of KDS are applicable to other domains (e.g., audio, medical imaging) and tasks (e.g., conditional generation, editing).

Outlook

KDS represents a significant step toward more reliable and perceptually robust inference with diffusion models. Its ensemble-based, mode-seeking approach opens new avenues for inference-time optimization, particularly in settings where retraining is impractical or external verifiers are unavailable. Future research may extend KDS to other generative frameworks, explore theoretical guarantees, and develop adaptive or hierarchical variants to further improve efficiency and generalization.
