
Positive-Negative Prompting

Updated 19 February 2026
  • Positive-negative prompting is a conditioning paradigm for generative models that uses a positive prompt to specify desired features and a negative prompt to exclude unwanted attributes.
  • It enables fine-grained output control by decoupling inclusion and exclusion signals, employing strategies like adaptive-window and dynamic VLM-guided algorithms.
  • The approach extends beyond image generation to reasoning tasks, enhancing both safety and accuracy through techniques such as contrastive chain-of-thought and logical debiasing.

Positive-negative prompting is a principled conditioning paradigm in generative models—especially diffusion models and LLMs—wherein a “positive” prompt specifies desired content or attributes, while a “negative” prompt explicitly excludes or demotes undesired features. The technique yields fine-grained control over generated samples by providing both inclusion and exclusion signals, with negative prompts acting as structured complements to their positive counterparts. Recent work has established the foundational mechanisms, formalism, and empirical benefits of positive-negative prompting for both text-to-image and LLM reasoning tasks, and has produced a growing toolkit of adaptive and optimization-based strategies for constructing and utilizing negative prompts.

1. Mathematical Formalism and Foundational Mechanisms

In latent diffusion models (LDMs), positive-negative prompting augments standard classifier-free guidance (CFG) by introducing separate conditioning paths for both sought and undesired concepts (Ban et al., 2024, Chang et al., 30 Oct 2025, Desai et al., 5 Aug 2025). Standard CFG for the denoising prediction at timestep t is given by:

\hat\epsilon_t = (1 + w)\,\epsilon_\theta(x_t, c_{+}, t) - w\,\epsilon_\theta(x_t, c_{\emptyset}, t)

where c_+ is the embedding of the positive prompt, c_∅ is the embedding of the empty prompt, and w is a guidance strength parameter. Positive-negative prompting generalizes this to:

\hat\epsilon_t = (1 + w)\,\epsilon_\theta(x_t, c_{+}, t) - w\,\epsilon_\theta(x_t, c_{-}, t)

with c_- the embedding of the negative prompt, directly subtracting features associated with undesired concepts. Mathematically, this is equivalent to sampling under an energy-based model where

p_\text{PN}(x_t \mid p, n) \propto p_0(x_t)\,\big[\, p_\theta(p \mid x_t) / p_\theta(n \mid x_t) \,\big]^{w}

where p is the positive and n the negative prompt (Desai et al., 5 Aug 2025). Mutual cancellation (neutralization) occurs when, for the feature subspace associated with the negative prompt, the negatively and positively conditioned noise estimates converge, producing near-zero residuals in that direction and progressively removing the unwanted concept from the generation (Ban et al., 2024).
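The guidance combination above can be sketched as a single step function. Here `eps_model` is a hypothetical stand-in for the noise predictor ε_θ, not a real library API:

```python
def guided_noise(eps_model, x_t, t, c_pos, c_neg, w):
    """Positive-negative classifier-free guidance for one denoising step.

    eps_model is an assumed interface for epsilon_theta(x, c, t); passing
    the empty-prompt embedding as c_neg recovers standard CFG.
    """
    eps_pos = eps_model(x_t, c_pos, t)
    eps_neg = eps_model(x_t, c_neg, t)
    # (1 + w) * eps(c_+) - w * eps(c_-): the second term subtracts
    # features associated with the negative prompt.
    return (1.0 + w) * eps_pos - w * eps_neg
```

The same function works elementwise on tensors; only the conditioning embedding passed as `c_neg` distinguishes negative prompting from plain CFG.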

2. Empirical Phenomena: Delayed Effect and Deletion by Neutralization

Ban et al. (Ban et al., 2024) identify two core empirical behaviors:

A. Delayed Effect:

Negative prompts begin influencing the generation at later diffusion steps than positive prompts. Cross-attention heatmaps show that positive tokens lock onto their spatial regions by t ≈ 2–4, while negative tokens only focus on target regions at a “critical step” t_0 ≈ 5–10 (for a standard 30-step schedule). The ratio

r_t = \sum_k \| F_{k,p_-}^{(t)}(i) \|_F \Big/ \sum_k \| F_{k,p_+}^{(t)}(r(i)) \|_F

(where r(i) pairs positive and negative tokens) characterizes this attention lag and alignment, with r_t peaking near t_0.
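The ratio can be computed directly from per-head cross-attention maps, assuming they are available as 2D arrays; a minimal sketch for one negative token i and its paired positive token r(i):

```python
import numpy as np

def attention_lag_ratio(neg_head_maps, pos_head_maps):
    """r_t for one negative token and its paired positive token.

    Each argument is a list of per-head cross-attention maps F_k (2D arrays)
    at timestep t. The ratio of summed Frobenius norms tracks how strongly
    the negative token attends relative to its positive counterpart.
    """
    num = sum(np.linalg.norm(F, "fro") for F in neg_head_maps)
    den = sum(np.linalg.norm(F, "fro") for F in pos_head_maps)
    return num / den
```

Plotting this ratio over timesteps is one way to locate the critical step t_0 empirically for a given model and schedule.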

B. Deletion through Neutralization:

Once negative prompts have attended to the correct location, the model’s noise predictions under negative and positive conditioning become nearly identical over the target features, leading to cancellation in the guidance equation and, over subsequent steps, deletion of the targeted object or attribute.

3. Adaptive and Dynamic Prompting Algorithms

Standard negative prompting can overcorrect, leading to unwanted side effects such as loss of fidelity or background distortions, especially when applied indiscriminately. To mitigate this, several adaptive algorithms have been proposed:

3.1 Adaptive Windowed Negative Prompting

Applying negative prompts only within an empirically determined “critical window” (e.g., t ∈ [t_start, t_end] with t_start ≈ 5, t_end ≈ 15 in a 30-step schedule) preserves the background while still achieving object removal (Ban et al., 2024). Outside this window, standard CFG (with the empty prompt in place of c_-) is used. The generic loop is:

for t = N_steps down to 1:
    if t_start <= t <= t_end:
        z_plus  = εθ(x_t, c_plus, t)
        z_minus = εθ(x_t, c_minus, t)
        epsilon_hat = (1 + w) * z_plus - w * z_minus
    else:
        z_plus  = εθ(x_t, c_plus, t)
        z_empty = εθ(x_t, c_empty, t)
        epsilon_hat = (1 + w) * z_plus - w * z_empty
    x_{t-1} = sampler_step(x_t, epsilon_hat, t)

(Ban et al., 2024)

3.2 Dynamic VLM-Guided Negative Prompting

Dynamic approaches use a vision-LLM (VLM) to adaptively generate negative prompts during denoising, based on intermediate image predictions. At specific timesteps, a VLM analyzes the intermediate clean-image estimates x̂_0 and emits context-specific negatives, which are then used in the guidance equation for subsequent steps. This procedure enables context-aware suppression, notably improving safety-fidelity trade-offs compared to static negative prompting (Chang et al., 30 Oct 2025).
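The outer loop of such a dynamic scheme can be sketched as follows; every callable here is an assumed interface (there is no real API by these names), and the update interval is a free design choice:

```python
def dynamic_negative_denoise(sample_step, predict_x0, vlm_negatives, encode,
                             x_T, n_steps, update_every=5):
    """Skeleton of VLM-guided dynamic negative prompting.

    Assumed interfaces:
      sample_step(x, t, c_neg) -> next latent under the current negative,
      predict_x0(x, t)         -> intermediate clean-image estimate,
      vlm_negatives(image)     -> negative-prompt text from a VLM,
      encode(text)             -> prompt embedding.
    """
    c_neg = encode("")  # begin with the empty prompt (plain CFG)
    x = x_T
    for t in range(n_steps, 0, -1):
        if t % update_every == 0:
            # Ask the VLM which unwanted concepts appear in the current
            # estimate, and refresh the negative prompt accordingly.
            c_neg = encode(vlm_negatives(predict_x0(x, t)))
        x = sample_step(x, t, c_neg)
    return x
```

Querying the VLM only every few steps keeps the latency overhead bounded while still letting the negative prompt track the evolving image content.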

Strategy        | Negative Prompt          | Update Granularity
Static          | Fixed (user-provided)    | Global (all steps)
Adaptive-Window | Fixed                    | Selected steps/window
Dynamic (VLM)   | Contextual (VLM-sampled) | Step- or interval-specific

Adaptive and dynamic strategies consistently outperform fixed negative prompting in preserving prompt fidelity and mitigating over-suppression (Chang et al., 30 Oct 2025, Golan et al., 12 Oct 2025).

4. Optimization and Construction of Negative Prompts

Manual engineering of negative prompts is often tedious and suboptimal. Several recent techniques automate and optimize the negative prompt construction process:

  • NegOpt uses a two-stage curriculum: supervised fine-tuning on large collections of (prompt, negative prompt) pairs, followed by reinforcement learning (PPO) trained on downstream reward signals (e.g., Inception Score, CLIP-score, aesthetics) to produce tailored negative prompts that target undesired artifacts or failure modes. Adjusting reward weights allows metric-specific optimization, and NegOpt’s prompts are preferred to both ground-truth and prior baselines in human evaluation (Ogezi et al., 2024).
  • Model-internal Negation (ANSWER): Instead of generating explicit text negatives, ANSWER (Desai et al., 5 Aug 2025) constructs internal negative signals through multi-step diffusion-negative sampling within the latent space—steering the sampling trajectory away from the positive prompt’s semantic manifold without requiring lossy or incomplete textual negatives.
  • VLM-Guided Creative Negation: For promoting creativity rather than merely filtering artifacts, VLM-guided negative prompting iteratively accumulates detected “typical” concepts (e.g., “cat,” “dog” in pet synthesis from (Golan et al., 12 Oct 2025)) and conditions the sampler to avoid them, generating outputs that are both valid and novel.
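The concept-accumulation idea behind VLM-guided creative negation can be sketched as a short loop; `detect_concepts` and `generate` are hypothetical stand-ins for the VLM and the conditioned sampler:

```python
def accumulate_negatives(detect_concepts, generate, rounds=3):
    """Iteratively grow a negative prompt from concepts a VLM detects
    in successive generations (sketch of the creative-negation loop).

    Assumed interfaces:
      detect_concepts(image)   -> list of 'typical' concept strings,
      generate(negative_text)  -> an image conditioned to avoid them.
    """
    avoided = []
    for _ in range(rounds):
        image = generate(", ".join(avoided))
        for concept in detect_concepts(image):
            if concept not in avoided:  # record each typical concept once
                avoided.append(concept)
    return avoided
```

Each round pushes the sampler away from everything detected so far, so later generations are steered toward increasingly atypical, novel outputs.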

5. Applications Beyond Image Generation: Reasoning and Logical Inference

Positive-negative prompting extends beyond image generation and finds application in reasoning and logic with LLMs:

  • Contrastive Chain-of-Thought (CoT): Each positive (valid) CoT rationale is augmented with a “negative” (contrastive) rationale consisting of analogously-structured but invalid reasoning. This dual demonstration format enables the model to internalize both “what to do” and “what to avoid,” achieving double-digit accuracy improvements on arithmetic and factual reasoning benchmarks (Chia et al., 2023).
  • Hypothesis Testing Prompting: For deductive reasoning over facts and rules, the model is explicitly prompted to assume both the positive (the hypothesis is true) and negative (the hypothesis is false) cases, reason backward through the rulebase for each, and then aggregate the results according to logical support. This symmetric evaluation improves both accuracy and interpretability, particularly for multi-hop and unknown-label inference (Li et al., 2024).
  • Logical Negation Debiasing (NAND): For FOL and NLI tasks, positive and negative prompt templates are simultaneously evaluated (“Is s true?” and “Is not-s false?”), and softmax scores are combined with calibrated offsets to counteract spurious correlations between the presence of negation and label bias, drastically improving accuracy and robustness across both closed-world and open-world datasets (Li et al., 2024).
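One simple way to combine the two templates’ answer distributions is to average the probabilities that support the statement, minus a calibrated offset; this is an illustrative combination under that assumption, and the paper’s exact calibration may differ:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def debiased_truth_score(pos_logits, neg_logits, offset=0.0):
    """Combine scores from the positive template ('Is s true?') and the
    negated template ('Is not-s false?').

    A 'yes' to either phrasing supports s, so averaging the two 'yes'
    probabilities cancels bias correlated with surface negation.
    (Illustrative only; the offset must be calibrated per dataset.)
    """
    p_yes_pos = softmax(pos_logits)[0]  # index 0 = 'yes'
    p_yes_neg = softmax(neg_logits)[0]
    return 0.5 * (p_yes_pos + p_yes_neg) - offset
```

If the model answers the two phrasings inconsistently, the averaged score moves toward 0.5, flagging the prediction as negation-sensitive rather than confidently committing either way.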

6. Evaluation, Best Practices, and Observed Limitations

Quantitative Effects

Table: Quantitative Comparison for Negative Prompting (selected axes)

Method           | Prompt Fidelity (CLIP↑) | Safety (ASR↓/TR↓) | FID     | Human Pref. (%)
No neg           | 0.312                   | ~0.95             |         | 8–20
Static neg       | 0.296–0.277             | 0–0.025           | 136–152 | 21–32
Dynamic (VL-DNP) | 0.311                   | 0.011–0.084       | 13–15   | 46–61
  • Aggressive (fixed) negative prompting reduces unwanted content but severely degrades image-text alignment (CLIP), increases FID (>100), and often suppresses desired semantics (Chang et al., 30 Oct 2025).
  • Properly scheduled or adaptively-constructed negative prompts offer strictly better safety–fidelity Pareto frontiers.
  • Human evaluations consistently prefer outputs with adaptively- or dynamically-constructed negative prompts (Desai et al., 5 Aug 2025, Ogezi et al., 2024).

Practical Guidelines

  • Do not apply negative prompts from the earliest denoising steps; wait until the “critical steps” at which positive-prompt features are established.
  • For object removal, tune the negative prompt time window and guidance weight; verify attention localization via cross-attention maps.
  • Use noun negatives for object removal and adjective negatives for refinements.
  • For logical reasoning, always prompt both branches (“assume true” / “assume false”) and aggregate model support.
  • For text-to-image, leverage metric-weighted RL or VLM-adaptive schemes for context-specific negatives.

Limitations

  • Over-suppression and degradation of global scene fidelity when negative prompts are applied globally or with high weights.
  • Latency overhead in dynamic VLM-guided approaches (potentially 3–4× inference time).
  • Residual bias in logic models may require explicit debiasing (as in NAND) and careful calibration.

7. Significance, Extensions, and Open Problems

Positive-negative prompting has evolved into a core mechanism for generative control, filtering, inpainting, and structured reasoning, enabling practitioners to enforce both inclusivity and exclusivity in model outputs. Key ongoing directions include:

  • Learning joint schedules for time-dependent application and strength of negative prompts, especially in dynamic and scene-evolving contexts.
  • Fusing positive and negative prompt generation into unified, context-aware VLM modules.
  • Developing lightweight mechanisms for real-time negative prompt construction in resource-constrained settings.
  • Extending VLM-guided and adaptive negative prompting strategies to video and temporally-coherent generation.
  • Probing the theoretical underpinnings of negative prompt effects, particularly the latent-space cancellation phenomena observed in diffusion models.

Positive-negative prompting thus provides an effective, theoretically-grounded, and empirically validated lever for steering generative models toward targeted and safe outputs, and for debiasing logical inference in neural LLMs (Ban et al., 2024, Chang et al., 30 Oct 2025, Desai et al., 5 Aug 2025, Chia et al., 2023, Li et al., 2024, Li et al., 2024, Golan et al., 12 Oct 2025, Ogezi et al., 2024).
