
Frequency Band Substitution

Updated 3 February 2026
  • Frequency band substitution is a technique that uses DCT-based manipulation of latent diffusion features to decouple appearance, layout, and contour guidance.
  • It enables precise control over specific visual attributes by substituting selective frequency bands from a reference image without any retraining of the diffusion model.
  • FBSDiff and FBSDiff++ demonstrate significant improvements in visual fidelity and computational efficiency, offering state-of-the-art trade-offs in image translation.

Frequency band substitution is a plug-and-play paradigm for highly controllable text-driven image-to-image (I2I) translation in latent diffusion models. The method exploits frequency-domain representations of intermediate diffusion features to decouple appearance, layout, and contour guidance, enabling dynamic and interpretable transfer of source image attributes. Prominent frameworks such as FBSDiff and its successor, FBSDiff++, implement frequency band substitution without retraining or fine-tuning, and demonstrate significant improvements in visual fidelity, flexibility, and computational efficiency (Gao et al., 2024, Gao et al., 27 Jan 2026).

1. Conceptual Foundations and Motivation

In the spatial domain, guiding factors—such as global appearance, geometric layout, and fine contours—within a reference image are entangled in pixel or feature space, confounding fine-grained control over image translation. Frequency band substitution operates in the frequency domain, where a 2D Discrete Cosine Transform (DCT) of diffusion feature maps separates information across frequency bands: low frequencies encode global appearance and layout, mid-frequencies encode object arrangement and intermediate structure, and high frequencies encode fine contours and edges. Selectively substituting specific bands from a reference image into the sampling trajectory enables direct and continuous manipulation of distinct visual correlations without the need for retraining, finetuning, or online optimization (Gao et al., 2024, Gao et al., 27 Jan 2026).
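The band decomposition described above can be made concrete with a short sketch: a 2D DCT splits a feature map into disjoint low, mid, and high bands indexed by the coefficient diagonal $u+v$, and the three band-limited reconstructions sum back to the original. The thresholds here are illustrative, not the paper's values.

```python
# Sketch: decomposing a feature map into disjoint DCT frequency bands.
import numpy as np
from scipy.fft import dctn, idctn

def band_masks(h, w, th_lp=10, th_hp=40):
    """Partition DCT coefficients by diagonal index u+v into low/mid/high."""
    u, v = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    diag = u + v
    low = diag <= th_lp
    high = diag > th_hp
    mid = ~(low | high)
    return low, mid, high

rng = np.random.default_rng(0)
feat = rng.standard_normal((64, 64))          # stand-in for one latent channel
coeffs = dctn(feat, norm="ortho")
low, mid, high = band_masks(*feat.shape)

# The three masks tile the spectrum, so summing the band-limited
# reconstructions recovers the input exactly (the DCT is linear).
parts = [idctn(coeffs * m, norm="ortho") for m in (low, mid, high)]
recon = sum(parts)
```

Because the bands are disjoint, any one of them can be swapped with the corresponding band of another image without disturbing the information carried by the other two.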

2. Mathematical Formulation

Let $z_0$ be the encoder output of the source image under a pretrained latent diffusion model. The DDIM inversion procedure maps $z_0$ to a latent noise vector $z_{T_\mathrm{inv}}$ along the reconstruction trajectory $\{\hat z_t\}$, while a parallel sampling trajectory $\{\tilde z_t\}$ is initialized from Gaussian noise and progressively denoised under classifier-free guidance toward the target text prompt.

At each calibration step $t$, the frequency band substitution (FBS) layer acts as follows:

  1. Compute per-channel 2D DCTs:

$$F_{\hat z}(u,v) = \mathrm{DCT}(\hat z_t)_{u,v}, \quad F_{\tilde z}(u,v) = \mathrm{DCT}(\tilde z_t)_{u,v}$$

  2. Construct a binary mask $M(u,v)$ selecting the desired frequency band:
    • Low-pass: $M_{\mathrm{lp}}(u,v) = \mathbb{1}[u+v \leq th_{\mathrm{lp}}]$
    • Mid-pass: $M_{\mathrm{mp}}(u,v) = \mathbb{1}[th_{\mathrm{mp}1} < u+v \leq th_{\mathrm{mp}2}]$
    • High-pass: $M_{\mathrm{hp}}(u,v) = \mathbb{1}[u+v > th_{\mathrm{hp}}]$
  3. Substitute the masked band:

$$\tilde F_{\tilde z}(u,v) = F_{\hat z}(u,v)\, M(u,v) + F_{\tilde z}(u,v)\, [1 - M(u,v)]$$

  4. Invert to the spatial domain:

$$\tilde z_t = \mathrm{IDCT}(\tilde F_{\tilde z})$$

This operation can be expressed compactly as

$$\tilde z_t = \mathrm{IDCT}\bigl(\mathrm{DCT}(\hat z_t) \cdot M + \mathrm{DCT}(\tilde z_t) \cdot (1 - M)\bigr)$$

By adjusting the mask type (low/mid/high) and its bandwidth (threshold values or percentiles), precise control over the type and strength of source image guidance is achieved (Gao et al., 2024, Gao et al., 27 Jan 2026).
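The substitution formula above can be sketched directly with SciPy's DCT routines. This is a minimal illustrative implementation, not the authors' code; the low-pass threshold follows the paper's mask form $M(u,v) = \mathbb{1}[u+v \leq th_{\mathrm{lp}}]$, and the latent shapes are assumed channel-first.

```python
# Minimal sketch of one frequency band substitution (FBS) step: the masked
# band of the reconstruction latent replaces that band in the sampling latent.
import numpy as np
from scipy.fft import dctn, idctn

def fbs(z_hat, z_tilde, th_lp=80):
    """z_tilde <- IDCT(DCT(z_hat) * M + DCT(z_tilde) * (1 - M)), per channel."""
    c, h, w = z_hat.shape
    u, v = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    mask = ((u + v) <= th_lp).astype(z_hat.dtype)        # low-pass band mask
    f_hat = dctn(z_hat, axes=(-2, -1), norm="ortho")
    f_tilde = dctn(z_tilde, axes=(-2, -1), norm="ortho")
    blended = f_hat * mask + f_tilde * (1.0 - mask)
    return idctn(blended, axes=(-2, -1), norm="ortho")

rng = np.random.default_rng(1)
z_hat = rng.standard_normal((4, 64, 64))    # reconstruction-trajectory latent
z_tilde = rng.standard_normal((4, 64, 64))  # sampling-trajectory latent
out = fbs(z_hat, z_tilde)
```

After the call, the low-frequency coefficients of `out` match those of `z_hat`, while its high-frequency coefficients are untouched from `z_tilde`.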

3. Integration with Diffusion Models and Algorithmic Pipeline

FBSDiff and FBSDiff++ integrate frequency band substitution with off-the-shelf latent diffusion models (e.g., Stable Diffusion) in a plug-and-play fashion:

  • No model weights are altered. FBS is inserted into the latent feature maps at specific U-Net layers during the sampling stage.
  • Typical workflow:

    1. Encode the source image: $z_0 = E(x)$.
    2. Run DDIM inversion for $T$ steps to store the inversion trajectory $\{\hat z_t\}$.
    3. Initialize the target trajectory from noise: $\tilde z_T \sim \mathcal{N}(0, I)$.
    4. For the first $\lambda T$ sampling steps (the calibration phase), apply FBS after each denoising step.
    5. In FBSDiff++, only one inversion and one sampling trajectory are used; inversion features are replayed in reverse order to serve as guidance.
    6. Decode the denoised latent code to image space: $\tilde x = D(\tilde z_0)$.
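The calibration-phase loop (step 4) can be sketched structurally as follows. The denoiser and FBS layer here are stand-in placeholders; in practice they would be the pretrained U-Net's classifier-free-guided DDIM update and the DCT-based substitution of Section 2.

```python
# Structural sketch of the calibration-phase sampling loop.
import numpy as np

T, lam = 50, 0.45                      # sampling steps and calibration ratio
n_calib = int(lam * T)                 # FBS applied for the first lam*T steps

def denoise_step(z, t):
    """Placeholder for one classifier-free-guided DDIM denoising step."""
    return 0.99 * z

def fbs_layer(z_hat, z_tilde):
    """Placeholder for the frequency band substitution of Section 2."""
    return 0.5 * z_hat + 0.5 * z_tilde

z_hats = [np.ones((4, 64, 64)) for _ in range(T)]          # stored inversion latents
z = np.random.default_rng(2).standard_normal((4, 64, 64))  # z_T ~ N(0, I)

applied = 0
for step, t in enumerate(range(T - 1, -1, -1)):
    z = denoise_step(z, t)
    if step < n_calib:                 # calibration phase only
        z = fbs_layer(z_hats[t], z)
        applied += 1
```

With $T=50$ and $\lambda=0.45$, the FBS layer fires on the first 22 denoising steps and the remaining steps run unguided, which is what lets the prompt take over once the source correlation is established.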

FBSDiff uses hyperparameters such as $T=50$ (sampling steps), $\lambda=0.45$ (calibration ratio), and guidance scale $\omega=7.5$. For band masks on $H=W=64$ features: $th_{\mathrm{lp}}=80$, $th_{\mathrm{hp}}=5$, $th_{\mathrm{mp}1}=5$, $th_{\mathrm{mp}2}=80$ (Gao et al., 2024).

FBSDiff++ introduces percentile-based adaptive masking for arbitrary $H, W$ and removes the resolution constraint by using two consecutive 1D DCTs, further streamlining the entire process (Gao et al., 27 Jan 2026).
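The two-1D-DCT formulation relies on the separability of the DCT: transforming rows then columns is equivalent to the full 2D transform, for any (including non-square) feature map. A quick sketch:

```python
# Separability of the DCT: two consecutive 1D DCTs equal the 2D DCT,
# for an arbitrary (non-square) feature map.
import numpy as np
from scipy.fft import dct, dctn

rng = np.random.default_rng(3)
feat = rng.standard_normal((48, 80))                  # arbitrary H x W

two_1d = dct(dct(feat, axis=0, norm="ortho"), axis=1, norm="ortho")
full_2d = dctn(feat, norm="ortho")
```

The arrays `two_1d` and `full_2d` agree to floating-point precision, which is why the 1D formulation imposes no constraint tying $H$ to $W$.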

4. Control of Guiding Factors and Intensity

The masking scheme enables both discrete and continuous control over image translation attributes:

  • The substituted band specifies the guiding factor:

    • Low-pass: appearance and layout are preserved from the source.
    • Mid-pass: object layout is preserved, while appearance and contours are free to vary.
    • High-pass: only contours are copied, leaving style and structure to the prompt.
  • The intensity of correlation is adjusted by mask bandwidth:
    • Widening the low-pass mask yields stronger appearance or layout guidance.
    • Narrowing the mid-pass constrains or relaxes spatial arrangement influence.
  • In FBSDiff++, band masks are specified by percentiles ($p_{\mathrm{lp}} \in [50,70]$, $p_{\mathrm{hp}} \in [4,6]$, $p_{\mathrm{mp}1} \in [5,10]$, $p_{\mathrm{mp}2} \in [40,60]$) and applied consistently across all resolutions (Gao et al., 27 Jan 2026).
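One plausible reading of percentile-based masking (the paper's exact rule may differ) is that the diagonal-index threshold is set at the $p$-th percentile of $u+v$ over the grid, so the same $p$ yields comparable band coverage at any resolution:

```python
# Hypothetical percentile-based low-pass mask: the threshold adapts to the
# grid, so band coverage stays roughly constant across resolutions.
import numpy as np

def percentile_lowpass_mask(h, w, p_lp=60.0):
    u, v = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    th = np.percentile(u + v, p_lp)        # resolution-adaptive threshold
    return (u + v) <= th

m_small = percentile_lowpass_mask(64, 64)
m_large = percentile_lowpass_mask(96, 128)
# m_small.mean() and m_large.mean() are both close to 0.6: the fraction of
# retained coefficients tracks p_lp regardless of H and W.
```

A fixed absolute threshold like $th_{\mathrm{lp}}=80$ would instead cover a different spectral fraction at every resolution, which is the tuning burden the percentile scheme avoids.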

FBSDiff++ further extends FBS with localized editing (by masking spatial regions in the feature grid) and style-specific content creation (by randomizing geometric arrangement through a spatial transformation pool prior to low-pass FBS) (Gao et al., 27 Jan 2026).
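The localized editing idea can be sketched as follows; the blending rule is illustrative and not taken from the paper. Frequency band substitution is computed globally, then written back only inside a user-given spatial region, leaving the rest of the sampling latent untouched:

```python
# Hedged sketch of localized editing: FBS output is blended back only
# inside a spatial region of the feature grid.
import numpy as np
from scipy.fft import dctn, idctn

def localized_fbs(z_hat, z_tilde, region, th_lp=80):
    h, w = z_hat.shape[-2:]
    u, v = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    band = ((u + v) <= th_lp).astype(z_hat.dtype)     # frequency-band mask
    f = dctn(z_hat, axes=(-2, -1), norm="ortho") * band \
        + dctn(z_tilde, axes=(-2, -1), norm="ortho") * (1 - band)
    substituted = idctn(f, axes=(-2, -1), norm="ortho")
    return np.where(region, substituted, z_tilde)     # edit only the region

rng = np.random.default_rng(4)
z_hat = rng.standard_normal((4, 64, 64))
z_tilde = rng.standard_normal((4, 64, 64))
region = np.zeros((64, 64), dtype=bool)
region[16:48, 16:48] = True                           # spatial editing mask
out = localized_fbs(z_hat, z_tilde, region)
```

Outside `region`, `out` is bit-identical to `z_tilde`, so source guidance is confined to the targeted area of the feature grid.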

5. Experimental Results and Comparative Evaluation

FBSDiff and FBSDiff++ have been evaluated on large-scale datasets such as LAION-Mini using both derivative generation (appearance consistency) and style translation (appearance divergence) tasks. Key quantitative metrics include Structure Similarity (1 − DINO self-similarity distance), LPIPS, AdaIN Style Loss, CLIP Similarity to prompt, and Aesthetic Score (Gao et al., 27 Jan 2026, Gao et al., 2024).

Summary Table: Task Modes and Guidance Types

Task Mode              | FBS Band  | Main Visual Effect
-----------------------|-----------|------------------------------
Derivative Generation  | Low-pass  | Preserves appearance/layout
Layout Editing         | Mid-pass  | Preserves object arrangement
Style Translation      | High-pass | Transfers edges/contours

FBSDiff and FBSDiff++ consistently rank among the highest-performing methods in structure preservation, perceptual similarity, text fidelity, and overall aesthetic score. For instance, FBSDiff++ achieves $8.9\times$ faster inference (9.6 s/image versus 69–85 s for previous methods on an NVIDIA A100), with state-of-the-art trade-offs between fidelity and editability (Gao et al., 27 Jan 2026).

Applying substitution at every calibration step is critical; ablation studies show that one-shot or full-spectrum substitution degrades output quality and controllability (Gao et al., 2024, Gao et al., 27 Jan 2026).

6. Implementation Improvements and Functionality Extensions

FBSDiff++ introduces several enhancements over FBSDiff:

  • Efficiency: Removes the reconstruction sampling pass, instead storing inversion features and replaying them as guidance, reducing inference time from ~85 s to ~9.6 s per image (Gao et al., 27 Jan 2026).
  • Resolution and Aspect-Ratio Invariance: Uses two 1D-DCTs followed by adaptive percentile masking, supporting images of arbitrary shape.
  • Localized and Style-Specific Manipulation: Enables spatially targeted substitutions and brushwork-specific generation by augmenting FBS with spatial masking and pre-FBS structure randomization.
  • Parameterization: Percentile-based masks automatically adapt to varying spatial dimensions, minimizing manual threshold tuning.

This modular design allows seamless integration with any U-Net-based latent diffusion model and can be extended by varying frequency transforms or incorporating learned band weights (Gao et al., 27 Jan 2026).

7. Limitations and Future Research Directions

Several limitations persist:

  • Manual or percentile-based mask selection may require context-dependent tuning to balance guidance strength and diversity.
  • Very narrow or wide frequency bands risk insufficient or excessive source correlation, adversely affecting text fidelity or the intended edit.
  • Extreme aspect ratios may still cause mild filtering artifacts even with adaptive masking.
  • Current applications operate at a single U-Net feature layer; deeper, multi-scale, or learned frequency manipulations could increase versatility.
  • Real-time performance and perceptual controllability studies remain to be addressed. Extending FBS to non-DCT bases (e.g., wavelets or learned transforms) represents a potential research direction (Gao et al., 2024, Gao et al., 27 Jan 2026).

Frequency band substitution, as instantiated in FBSDiff and FBSDiff++, constitutes a rigorously evaluated, efficient, and interpretable framework for controlling source-prompt correlation in I2I translation and demonstrates that frequency-domain feature blending is a viable, generalizable alternative to costly attention or explicit model retraining approaches.
