
Rectified Flow Transformers (FlowEdit)

Updated 7 February 2026
  • Rectified Flow Transformers (FlowEdit) are a deep generative modeling framework that leverages a straight-line ODE between data and noise to enable efficient text-to-image, image, and video editing.
  • They integrate transformer architectures with conditional flow matching, LoRA-based fine-tuning, and advanced attention mechanisms, resulting in rapid, inversion-free editing and high structural fidelity.
  • The modular design supports state-of-the-art performance across benchmarks, scalable high-resolution outputs, and reliable cross-modal editing in diverse applications.

Rectified Flow Transformers (FlowEdit) refer to an emerging paradigm in deep generative modeling that operationalizes text-to-image (T2I), image, and video generation as well as fine-grained editing through the framework of rectified flow (RF) and transformer-based, often multimodal, architectures. This approach leverages conditional flow matching and transformer attention mechanisms to achieve efficient, high-fidelity generation and editing, with robust theoretical and empirical guarantees. Unlike previous score-based diffusion methods, rectified flow employs a straight-line ODE between data and Gaussian noise, enabling rapid and stable sampling. FlowEdit and its derivatives organize a suite of methodologies—including LoRA-based parameter-efficient finetuning, attention regularization, high-order ODE solvers, and feature-adaptive editing—that together establish a scalable, modular, and highly effective platform for generative modeling and image/text/video editing tasks in contemporary AI research.

1. Mathematical Foundations and Model Architecture

Rectified Flow Transformers replace the stochastic SDEs of classical diffusion models with deterministic ODEs connecting data $x_0 \sim p_0$ and Gaussian noise $\epsilon \sim \mathcal{N}(0, I)$ via straight-line interpolation:

$$z_t = (1-t)\,x_0 + t\,\epsilon, \quad t \in [0, 1]$$

This trajectory satisfies

$$\frac{dz_t}{dt} = v_\theta(z_t, t) \approx \epsilon - x_0$$

where $v_\theta$ is a neural network parameterized by transformer backbones, frequently adopting an MM-DiT structure. The canonical training objective is the conditional flow-matching loss:

$$\mathcal{L}_\mathrm{CFM} = \mathbb{E}_{t,\,\epsilon,\,x_0} \left\| v_\theta(z_t, t, c) - (\epsilon - x_0) \right\|^2$$

where $c$ is a conditioning variable (e.g., a text prompt) (Esser et al., 2024, Gao et al., 2024).
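The straight-line interpolant and its flow-matching objective can be sketched in a few lines of NumPy. This is a toy Monte-Carlo estimate of the loss above, not a training loop; the names `cfm_loss`, `perfect`, and `zero` are illustrative stand-ins for a real velocity network.

```python
import numpy as np

rng = np.random.default_rng(0)

def cfm_loss(v_theta, x0, eps, t, c=None):
    """Monte-Carlo estimate of the conditional flow-matching loss.

    z_t is the straight-line interpolant (1-t)*x0 + t*eps, and the
    regression target for the velocity field is eps - x0.
    """
    z_t = (1.0 - t)[:, None] * x0 + t[:, None] * eps
    target = eps - x0
    pred = v_theta(z_t, t, c)
    return np.mean(np.sum((pred - target) ** 2, axis=-1))

x0 = rng.standard_normal((64, 8))    # data batch
eps = rng.standard_normal((64, 8))   # noise batch
t = rng.uniform(0.0, 1.0, size=64)   # per-sample timesteps

# Oracle predictor that returns the exact target drives the loss to zero;
# a trivial zero predictor does not.
perfect = lambda z, t, c: eps - x0
zero = lambda z, t, c: np.zeros_like(z)

assert cfm_loss(perfect, x0, eps, t) == 0.0
assert cfm_loss(zero, x0, eps, t) > 0.0
```

The key property exploited later by FlowEdit is visible here: because the interpolant is a straight line, the regression target $\epsilon - x_0$ is constant along each trajectory.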

The backbone architecture consists of alternating dual-stream (text/image separated) and single-stream (mixed) transformer blocks. Cross-modal attention is formulated by linearly projecting both text and image features to $Q$, $K$, $V$ representations, concatenating, and applying scaled dot-product attention. Adaptations such as progressive/conditional utilization of transformer layers per resolution (as in NAMI) or per modality (as in MM-DiT) permit both high scalability and compositionality (Esser et al., 2024, Ma et al., 12 Mar 2025).
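The concatenate-then-attend pattern can be illustrated with a minimal single-head sketch in NumPy. Shapes and weights here are toy values, and real MM-DiT blocks add multi-head projections, modulation, and per-modality parameters this sketch omits.

```python
import numpy as np

def joint_attention(txt, img, Wq, Wk, Wv):
    """Simplified MM-DiT-style joint attention: project text and image
    tokens to Q, K, V, concatenate along the token axis, and apply
    scaled dot-product attention over the joint sequence."""
    x = np.concatenate([txt, img], axis=0)        # (T_txt + T_img, d)
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scale = 1.0 / np.sqrt(q.shape[-1])
    logits = q @ k.T * scale
    attn = np.exp(logits - logits.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)      # row-wise softmax
    return attn @ v

rng = np.random.default_rng(1)
d = 16
txt = rng.standard_normal((4, d))   # 4 text tokens
img = rng.standard_normal((9, d))   # 9 image tokens (e.g., a 3x3 latent grid)
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))

out = joint_attention(txt, img, Wq, Wk, Wv)
assert out.shape == (13, d)  # every token attends across both modalities
```

Because the sequence is joint, every image token can attend to every text token and vice versa, which is what makes the cross-modal attention maps exploitable for editing (Sections 3–5).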

2. FlowEdit: Direct, Inversion-Free Editing

FlowEdit, in its strict sense, is a methodology for real-image or latent editing that circumvents traditional noisy inversion. Instead of inverting an input image to latent noise and then sampling back to a new image under the target prompt, it defines a deterministic, direct ODE between source and target prompts:

$$\frac{dZ^{FE}_t}{dt} = \mathbb{E}_{N_t}\!\left[ V^{tar}(\hat Z^{tar}_t, t) - V^{src}(\hat Z^{src}_t, t) \,\middle|\, Z^{src}_0 \right], \quad Z^{FE}_1 = Z^{src}_0$$

where

$$\hat Z^{src}_t = (1-t)\, Z^{src}_0 + t\, N_t, \quad N_t \sim \mathcal{N}(0, I)$$

$$\hat Z^{tar}_t = Z^{FE}_t + \left(\hat Z^{src}_t - Z^{src}_0\right)$$

The method requires only calls to a pretrained velocity model under the two prompts and marginalizes over auxiliary noise, bypassing explicit inversion and per-image optimization. This yields superior structure preservation and text alignment, with improved LPIPS and reduced mean-squared transport cost relative to inversion-based baselines (Kulikov et al., 2024, Li et al., 17 Mar 2025).
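A discretized Euler integration of the FlowEdit ODE is short enough to sketch end-to-end. The velocity fields below are toy lambdas standing in for a pretrained model evaluated under the source and target prompts; step counts and the averaging budget `n_avg` are illustrative defaults, not values from the paper.

```python
import numpy as np

def flowedit(z0_src, v_src, v_tar, n_steps=50, n_avg=1, seed=0):
    """Inversion-free FlowEdit sampling (Euler sketch).

    Integrates dZ/dt = E[V_tar(Z_hat_tar, t) - V_src(Z_hat_src, t)]
    from t=1 down to t=0, starting at Z_1^FE = z0_src. The expectation
    over the auxiliary noise N_t is approximated with n_avg samples."""
    rng = np.random.default_rng(seed)
    z = z0_src.copy()
    ts = np.linspace(1.0, 0.0, n_steps + 1)
    for t, t_next in zip(ts[:-1], ts[1:]):
        dt = t_next - t                              # negative step
        drift = np.zeros_like(z)
        for _ in range(n_avg):
            n_t = rng.standard_normal(z.shape)
            z_hat_src = (1.0 - t) * z0_src + t * n_t  # noised source
            z_hat_tar = z + (z_hat_src - z0_src)      # shifted target
            drift += v_tar(z_hat_tar, t) - v_src(z_hat_src, t)
        z = z + dt * drift / n_avg
    return z

# Sanity check: identical source/target velocity fields give zero drift,
# so the "edit" leaves the latent unchanged.
v = lambda x, t: -x
z0 = np.ones((2, 2))
assert np.allclose(flowedit(z0, v, v), z0)
```

Note that only the *difference* of the two velocity evaluations enters the update, which is why shared structure between source and target prompts is preserved by construction.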

This algorithmic structure is model-agnostic, needing no architectural changes or retraining, and is directly applicable across Stable Diffusion 3, FLUX, and large-scale DiT/MM-DiT-based RFTs.

3. Parameter-Efficient Concept Editing: LoRA and Bi-Level Optimization

Concept erasure and more general editing are formulated as a constrained bi-level optimization in which a minimal set of LoRA (Low-Rank Adaptation) weights $\phi$ is adapted for the desired effect (Gao et al., 2024):

  • The lower-level objective enforces erasure of unwanted concepts by suppressing their activations and penalizing their generation:

$$L_\mathrm{inner} = \mathbb{E}_{C_{un}} \left[ E_\mathrm{sd}(\theta_0 + \phi;\, C_{un}) \right] + R_\mathrm{attn}(\theta_0 + \phi;\, C_{un})$$

  • The upper-level objective mandates the preservation of irrelevant or unaffected concepts, actively discouraging negative side-effects and collapse through:

$$L_\mathrm{outer} = \mathbb{E}_{C_{ir}} \left[ L_\mathrm{gen}(\theta_0 + \phi;\, C_{ir}) \right] + L_\mathrm{contrast}(\theta_0 + \phi;\, C_{un}, C_{ir})$$

Here $E_\mathrm{sd}$ is a negative-guidance loss; $R_\mathrm{attn}$ is an attention-map norm regularizer that pushes the residual cross-modal attention of erased tokens toward zero or a mask; and $L_\mathrm{contrast}$ implements a reverse InfoNCE self-contrastive loss, encouraging specificity by aligning erased features with unrelated concepts and disaligning them from synonyms.

LoRA-based parameter tuning is performed efficiently by introducing low-rank matrices $A, B$ into $W_q, W_k$ of the transformer blocks; only the small rank-$r$ adapters ($r \ll d$) are optimized. This pattern is highly modular: erasure, insertion, and replacement of concepts interconvert by redefining the regularization and loss terms. Attention regularization localizes edits at the level of input tokens or pixel regions (Gao et al., 2024).
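The LoRA forward pass itself is a one-liner, and the parameter savings are easy to verify numerically. This sketch uses a single frozen projection matrix with toy dimensions; real adapters also carry a scaling factor $\alpha/r$ and are inserted into several attention projections.

```python
import numpy as np

def lora_linear(x, W, A, B, alpha=1.0):
    """Linear layer with a LoRA update: y = x W + alpha * (x A) B.
    Only A (d x r) and B (r x d) are trained; W stays frozen."""
    return x @ W + alpha * (x @ A) @ B

rng = np.random.default_rng(2)
d, r = 64, 4                        # r << d: low-rank bottleneck
W = rng.standard_normal((d, d))     # frozen pretrained W_q (or W_k)
A = rng.standard_normal((d, r)) * 0.01
B = np.zeros((r, d))                # zero init: adapter starts as a no-op
x = rng.standard_normal((3, d))

assert np.allclose(lora_linear(x, W, A, B), x @ W)    # identity at init
trainable = A.size + B.size
assert trainable == 2 * d * r and trainable < W.size  # 512 vs 4096 params
```

The zero initialization of $B$ is what lets the bi-level objectives above start from the unmodified model and move only the small adapter toward the erasure/preservation trade-off.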

4. Attention Feature Extraction and Adaptive Editing

Beyond LoRA-based methods, “feature-injection” editing approaches such as ReFlex (Kim et al., 2 Jul 2025) invoke manipulation of intermediate transformer representations for real-image editing:

  • Three feature types are extracted from a "mid-step" latent at $t = T/2$: image→text cross-attention (I2T-CA), image→image self-attention (I2I-SA), and a high-level image embedding from the residual branch.
  • During reverse sampling, these features are adapted and injected at early timesteps, employing cross-attention refinement to improve text alignment (by scaling or swapping tokens) and self-attention smoothing to maintain spatial structure.
  • No inversion to noise level or mask is required, and the approach is compatible with source-optional editing.
  • Ablations confirm dramatic drops in structure preservation and text alignment if these adaptations are omitted.

This framework achieves strong user preference in human evaluation studies and can be generalized to arbitrary edits—including local inpainting and multi-concept composition—by adjusting which features are injected or adapted and at which layers (Kim et al., 2 Jul 2025).
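The core mechanic of self-attention injection can be sketched as a convex blend between a cached source attention map and the freshly computed target map. This is a generic illustration of the technique, not ReFlex's exact procedure; the blend coefficient and the identity stand-in for the cached map are hypothetical.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def inject_self_attention(q_tar, k_tar, v_tar, attn_src, blend=0.8):
    """Blend a stored source self-attention map into the target pass:
    attn = blend * attn_src + (1 - blend) * attn_tar, then attend to
    the target values. A high blend preserves the source spatial layout
    while the target values carry the edited content."""
    scale = 1.0 / np.sqrt(q_tar.shape[-1])
    attn_tar = softmax(q_tar @ k_tar.T * scale)
    attn = blend * attn_src + (1.0 - blend) * attn_tar
    return attn @ v_tar, attn

rng = np.random.default_rng(3)
n, d = 9, 8
q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
attn_src = np.eye(n)               # stand-in for a cached source map

out, attn = inject_self_attention(q, k, v, attn_src, blend=1.0)
assert np.allclose(out, v)         # blend=1: output follows the source map
assert np.allclose(attn.sum(axis=-1), 1.0)  # rows stay row-stochastic
```

Because a convex combination of row-stochastic matrices is row-stochastic, the blended map remains a valid attention distribution at any blend strength.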

5. High-Order Solvers and Decoupled Attention for Editing Fidelity

FlowEdit strategies for high-fidelity inversion and editing benefit from improvements in ODE/inversion solvers and attention operation:

  • Runge-Kutta (RK) solvers (orders 2–4) significantly reduce inversion errors compared to explicit Euler, essential for precise source-structure transfer. Taylor-expansion-based variants further enhance accuracy in rectified flow models (Chen et al., 16 Sep 2025, Wang et al., 2024).
  • Decoupled Diffusion Transformer Attention (DDTA) separates text↔image cross-attention from self-attention, enabling precise feature transfer during editing by replacing cross-attention maps with inverted (source) values and blending image value feature maps.
  • Empirical results show state-of-the-art reconstruction fidelity (PSNR, SSIM, LPIPS), outperforming earlier DDIM-based methods, and improved semantic control over edited attributes.
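The advantage of higher-order solvers for inversion is easy to demonstrate on a toy velocity field: integrate "noise → image," re-invert with the same solver, and compare round-trip errors. The field below is an arbitrary nonlinear stand-in for a trained RF model, and the step counts are illustrative.

```python
import numpy as np

def integrate(z, v, t0, t1, n_steps, method="euler"):
    """Fixed-step ODE integration of dz/dt = v(z, t) from t0 to t1,
    with explicit Euler (order 1) or classical RK4 (order 4)."""
    ts = np.linspace(t0, t1, n_steps + 1)
    for t, t_next in zip(ts[:-1], ts[1:]):
        h = t_next - t
        if method == "euler":
            z = z + h * v(z, t)
        else:  # "rk4"
            k1 = v(z, t)
            k2 = v(z + 0.5 * h * k1, t + 0.5 * h)
            k3 = v(z + 0.5 * h * k2, t + 0.5 * h)
            k4 = v(z + h * k3, t + h)
            z = z + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return z

# Toy nonlinear velocity field standing in for a trained RF model.
v = lambda z, t: -np.sin(z) - t * z

z1 = np.full(4, 0.7)  # "noise-side" latent
errs = {}
for m in ("euler", "rk4"):
    z0 = integrate(z1, v, 1.0, 0.0, 20, m)      # sample: noise -> image
    z1_rec = integrate(z0, v, 0.0, 1.0, 20, m)  # invert: image -> noise
    errs[m] = np.abs(z1_rec - z1).max()

assert errs["rk4"] < errs["euler"]  # higher order, smaller round-trip error
```

The same budget of model calls buys a far smaller reconstruction error with RK4, which is the property the editing pipelines above rely on for faithful source-structure transfer.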

These properties are preserved even when the FlowEdit pipeline is generalized to the video domain, as in Pyramid-Edit or Wan-Edit, where each window/frame latent is processed with the same algebraic ODE machinery and feature-adaptive transfer (Li et al., 17 Mar 2025).

6. Practical Extensions: Multi-Concept, Masked, and High-Resolution Editing

FlowEdit’s modularity extends to:

  • LoRAShop (Dalva et al., 29 May 2025): A training-free, multi-concept region-controlled editing system that employs early-stage cross-attention to derive binary spatial masks for each concept, and locally blends LoRA adapter feature streams into the backbone only where masks are active. This achieves high identity and background preservation in compositional tasks.
  • I-Max (Du et al., 2024): High-resolution extrapolation by projected flow, where a low-resolution guidance is combined with high-resolution flow, using NTK-aware RoPE rescaling and SNR time-shift to maximize the generative potential of RFTs at large scales. This enables tuning-free 4K–8K outputs with native 1K/2K models, surpassing prior extrapolation and inpainting methods.
  • Progressive multi-resolution RFTs (NAMI) (Ma et al., 12 Mar 2025): Piecewise flow and transformer staging for large-scale, low-compute image generation and possible staged editing (coarse edits at low resolution, refinement at high), with a reported 40% speed-up at 1024² and close alignment to larger models.
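The NTK-aware RoPE rescaling used for resolution extrapolation can be sketched directly: to stretch the usable position range by a factor $s$, the RoPE base is enlarged to $b \cdot s^{d/(d-2)}$ so the lowest frequency stretches by roughly $s$ while high frequencies barely move. This is a generic sketch of the NTK-aware trick, not I-Max's full projected-flow pipeline.

```python
import numpy as np

def rope_freqs(dim, base=10000.0, scale=1.0):
    """RoPE inverse frequencies with NTK-aware base rescaling."""
    ntk_base = base * scale ** (dim / (dim - 2))
    return 1.0 / ntk_base ** (np.arange(0, dim, 2) / dim)

d = 64
f1 = rope_freqs(d)             # native-resolution frequencies
f4 = rope_freqs(d, scale=4.0)  # 4x extrapolation (e.g., a 1K model at 4K)

# Lowest frequency stretched ~4x; highest almost unchanged.
assert 3.9 < f1[-1] / f4[-1] < 4.1
assert f1[0] / f4[0] < 1.1
```

Keeping the high-frequency components intact preserves local detail rendering, while the stretched low frequencies let positional phases span the larger canvas without wrapping.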

7. Benchmarking, Limitations, and Future Developments

Rectified Flow Transformer editors have been evaluated on MS-COCO, PIE-Bench, NAMI-1K, FiVE (video), I2P, and diverse user studies:

  • EraseAnything achieves second-lowest explicit count in nudity removal, best FID, and highest CLIP among erasure methods. Editing specificity (Acc_erased 12.5–21.1%) and preservation (Acc_irrelevant >90%) are confirmed (Gao et al., 2024).
  • FlowEdit and variants set Pareto frontiers on LPIPS (0.15–0.22 vs. 0.32–0.37 for inversion) and CLIP/Text similarity, and consistently outrank diffusion and inversion-based baselines in structure and alignment metrics (Kulikov et al., 2024, Li et al., 17 Mar 2025).
  • Video editing in FiVE: Wan-Edit achieves superior background preservation, motion fidelity, and run-time (3.07 s/frame), and lower sensitivity to hyperparameter sweeps than all diffusion-based competitors.

Limitations include:

  • Memory scaling challenges for high-res inference.
  • Sensitivity to model architecture in extrapolation (e.g., DiT vs. MM-DiT for RoPE/attention scaling).
  • Current pipelines rely on precise attention or mask extraction; object removal and large-scale temporal edits in video remain nontrivial (Li et al., 17 Mar 2025, Chen et al., 16 Sep 2025).
  • Open directions include adaptive ODE scheduling, token merging, and broader extension to video and 3D domains.

In sum, Rectified Flow Transformers and the FlowEdit family constitute a unifying platform for efficient, expressive, highly modular, and parameter-efficient generative modeling and editing that leverages the geometrical and architectural advantages of straight-line ODEs and advanced transformer designs. Their seamless compatibility with low-rank adaptation, attention-based localization, and inversion-free editing makes them foundational to contemporary state-of-the-art T2I and editing systems (Gao et al., 2024, Kulikov et al., 2024, Kim et al., 2 Jul 2025, Dalva et al., 29 May 2025, Chen et al., 16 Sep 2025, Du et al., 2024, Dalva et al., 2024, Esser et al., 2024).
