
Inverse Scaling in Machine Learning

Updated 29 January 2026
  • Inverse scaling is a phenomenon where increasing model size, compute, or training data paradoxically degrades performance on certain tasks, defying traditional scaling laws.
  • Empirical evidence from language models, vision-language systems, and numerical methods reveals patterns such as U-shaped curves, strong prior effects, and unwanted imitation.
  • Understanding inverse scaling informs AI safety and alignment strategies, guiding model evaluation protocols, prompt engineering, and resource-efficient system design.

Inverse scaling is a phenomenon in which increasing a key scaling variable of a machine learning system—such as model size, training compute, or training data—results in worsened performance on certain tasks, contrary to the typical trend of monotonic improvement. While classical scaling laws have shown that loss and accuracy generally improve with scale, inverse scaling highlights systematic exceptions, often uncovering misalignments between the pretraining objective and target task behavior. Inverse scaling has been empirically documented across LLMs, vision-language models such as CLIP, and even in mathematical engineering contexts such as direct Trefftz methods.

1. Formal Definitions

Inverse scaling can be mathematically formalized in terms of a performance curve $P(N)$, where $N$ denotes model scale (number of parameters or compute), or $A(C)$, where $C$ denotes a test-time compute budget (e.g., reasoning length):

  • Inverse scaling in model scale: for a performance metric $P(N)$,

$$\frac{dP}{dN} < 0 \quad \text{over some interval } [N_1, N_2],$$

i.e., performance is strictly decreasing with model scale on the interval $[N_1, N_2]$ (Wei et al., 2022, McKenzie et al., 2023).

  • Inverse scaling in test-time compute: for an accuracy or alignment score $A(C)$,

$$\frac{dA}{dC} < 0 \quad \text{over some interval } [C_0, C_1],$$

i.e., increasing the reasoning budget decreases performance (Gema et al., 19 Jul 2025).

  • Inverse scaling in training duration/data: for performance $f(p, T)$ with model size $p$ and pretraining token count $T$,

$$\frac{\partial f}{\partial \log p} < 0 \quad \text{and/or} \quad \frac{\partial f}{\partial \log T} < 0$$

(2305.14681).

  • Special cases: In numerical analysis, as in the direct Trefftz method for the 2D Laplace equation, the elements of certain system matrices scale inversely with domain size (Borkowski, 2015).

A strong definition of inverse scaling requires the monotonically non-increasing performance trend to persist throughout the full practical model range, with no subsequent recovery.
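This strong criterion can be checked directly from a sampled performance curve. The following is a minimal sketch (with hypothetical accuracy values, not figures from the cited papers) that distinguishes strict inverse scaling from the U-shaped trends discussed below:

```python
def classify_trend(perf):
    """Classify a performance curve sampled at increasing model scales.

    Returns 'inverse' if performance is monotonically non-increasing
    across the full sampled range (the strong definition), 'u-shaped'
    if it first declines and then only recovers, and 'other' otherwise.
    """
    steps = [b - a for a, b in zip(perf, perf[1:])]
    if all(s <= 0 for s in steps):
        return "inverse"
    # U-shape: an initial decline followed exclusively by non-decreasing steps
    first_rise = next((i for i, s in enumerate(steps) if s > 0), None)
    if first_rise is not None and first_rise > 0 and all(s >= 0 for s in steps[first_rise:]):
        return "u-shaped"
    return "other"

# Hypothetical accuracy curves over increasing model sizes
print(classify_trend([0.80, 0.60, 0.45, 0.30]))  # monotone decline
print(classify_trend([0.60, 0.40, 0.20, 0.55]))  # decline, then recovery
```

Note that, as the U-shaped findings below illustrate, the verdict depends on the sampled range: a curve labeled "inverse" may still recover beyond the largest scale tested.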

2. Empirical Manifestations and Task Taxonomy

Inverse scaling has been most extensively documented in LLMs via the Inverse Scaling Prize tasks, which span multiple behavioral categories. Table 1 summarizes empirical findings on PaLM scaling (Wei et al., 2022):

Task Type | Observed Scaling Pattern | Example Tasks
Strong Prior | Inverse / strong inverse | Resisting Correction, Memo Trap, Redefine, Prompt Injection
Distractor Task | U-shaped, inverse | Pattern Match Suppression, Sig Figs, Into the Unknown, NeQA
Unwanted Imitation | U-shaped, inverse | Modus Tollens
Spurious Few-Shot Correlation | U-shaped | Hindsight Neglect, Repetitive Algebra
Positive Transfer | Positive scaling | Repetitive Algebra

Strong prior and unwanted imitation tend to drive monotonic inverse trends, while distractor tasks and spurious few-shot correlations often produce U-shaped curves, wherein performance first degrades and then recovers with further scaling.

The discovery of U-shaped and inverted-U scaling curves adds nuance: initial inverse scaling on certain tasks may reverse at very large scales, which means the trend is not always reliable for extrapolating future behavior (Wei et al., 2022, McKenzie et al., 2023).

3. Mechanistic Explanations

Several mechanistic causes for inverse scaling have been identified (McKenzie et al., 2023):

  • Strong prior/memorization: As model scale increases, internal priors (memorized n-grams, prototypical continuations) become dominant, sometimes overriding prompts or instructions. For example, in Resisting Correction, large models output the canonical form of a sentence instead of faithfully reproducing the required typo.
  • Unwanted imitation: Large models more accurately model training data biases and errors, producing harmful imitation on diagnostic prompts (e.g. Modus Tollens).
  • Distractor task acquisition: Intermediate-scale models acquire spurious heuristics more reliably than the true task, resulting in dips in performance before eventual recovery as scale increases further.
  • Spurious correlations in few-shot prompts: Large models latch onto idiosyncratic patterns in few-shot demonstrations (e.g., consistent label or feature), misapplying them out-of-distribution.
  • Test-time compute failures: Extending reasoning length can amplify distraction, overfitting, or lead to focus loss, causing performance degradation even with more budget (Gema et al., 19 Jul 2025).
  • Inverse scaling with input length in multimodal models: In CLIP, larger models require fewer tokens per sample for fixed performance, formalized as $N_\mathrm{min}(p,\Delta)\propto p^{-\alpha}$ and $T_\mathrm{min}(p,\Delta)\propto p^{-\beta}$ with $\alpha, \beta > 0$ (Li et al., 2023).
  • Analytical inverse scaling in numerical methods: Certain matrix formulations in boundary element methods scale as $1/\alpha$ under uniform geometric scaling by a factor $\alpha$ (Borkowski, 2015).

4. Evaluation Protocols and Experimental Survey

Empirical demonstrations of inverse scaling rely on controlled evaluations across scaling variables:

  • Zero-shot/few-shot protocols: Prize tasks are posed as classification or sequence-probability prompts, measuring either accuracy or cross-entropy (Wei et al., 2022, McKenzie et al., 2023).
  • PaLM, GPT/OPT/Gopher/Chinchilla series: Model sizes span from hundreds of millions to hundreds of billions of parameters, with compute budgets up to several zettaFLOPs (Wei et al., 2022, McKenzie et al., 2023).
  • Pretraining ablation: Pythia suite checkpoints along both parameter and data axes, tested on tasks exhibiting inverse scaling (2305.14681).
  • Test-time compute ablation: Reasoning length varied; accuracy and alignment measured as a function of generated trace length (Gema et al., 19 Jul 2025).
  • Vision-language scaling: Different ViT+text encoder sizes; per-sample input length reduced and performance tracked (Li et al., 2023).

Empirical patterns reveal that inverse trends are often strongest over specific scale intervals but may flatten or reverse (U-shaped) at the extremes (Wei et al., 2022).

5. Theoretical and Practical Implications

Inverse scaling challenges the assumption that larger, longer-trained, or more compute-intensive models are universally preferable. Specific findings include:

  • Alignment and safety risk: Purely inverse scaling tasks imply that scaling amplifies misaligned behaviors. However, U-shaped scaling suggests models may "outgrow" certain pathologies if scaled sufficiently (Wei et al., 2022).
  • Prompting as a mitigation: 1-shot prompting and chain-of-thought (CoT) rationales can convert inverse-scaling tasks to U-shaped or even positively scaling regimes, with substantial gains in large models (Wei et al., 2022).
  • Model update risks: Inverse scaling may mean model replacements are not guaranteed to be better on all tasks, highlighting the need for task-by-task evaluation (2305.14681).
  • Resource efficiency: In domains such as CLIP or Trefftz-BEM, inverse scaling enables algorithmic acceleration and memory savings, as larger models or domains require fewer tokens or enable reuse of precomputed matrices (Li et al., 2023, Borkowski, 2015).
  • Development best practices: Continuous, scale-sweeping evaluation, synthetic counterexamples, hard-negative mining, and auxiliary losses can help identify and correct inverse-scaling failures (McKenzie et al., 2023).
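The scale-sweeping evaluation recommended above can be sketched as a small harness that flags tasks whose accuracy never improves across a grid of model sizes. All task names and accuracy numbers here are illustrative placeholders, not measurements from the cited papers:

```python
# Accuracy per task across a sweep of model sizes (illustrative numbers)
sweep = {
    "resisting_correction": {350e6: 0.80, 13e9: 0.55, 175e9: 0.30},
    "repetitive_algebra":   {350e6: 0.40, 13e9: 0.60, 175e9: 0.75},
}

def flag_inverse(sweep, tol=0.0):
    """Return tasks whose accuracy never rises (beyond tol) as scale grows."""
    flagged = []
    for task, results in sweep.items():
        # Sort by model size so accuracies are ordered small -> large
        accs = [acc for _, acc in sorted(results.items())]
        if all(b - a <= tol for a, b in zip(accs, accs[1:])):
            flagged.append(task)
    return flagged

print(flag_inverse(sweep))  # → ['resisting_correction']
```

A harness like this, run at every model release, is one way to operationalize the "task-by-task evaluation" that model update risks call for.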

Selected examples from language modeling highlight the diversity and significance of inverse scaling phenomena (McKenzie et al., 2023):

Task | Pattern | Accuracy (small → large)
Resisting Correction | Strong Prior | 80% (350M) → 30% (175B)
Memo Trap | Strong Prior | 60% (350M) → 20% (175B); recovers at 540B
Modus Tollens | Unwanted Imitation | ≈100% → ≈0–5% (large)
Pattern Match Suppression | Distractor Task | ≈50% → ≈0% (mid–large)
Hindsight Neglect | Spurious Few-Shot | ≈50% → ≈10–15% (large)

In CLIP, inverse scaling is observed as improved token efficiency for larger models: e.g., to limit the accuracy drop to ≤2%, ViT-S/16 needs ~101 image tokens, B/16 ~37, and L/16 ~17 (Li et al., 2023).
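Given such measurements, the exponent $\alpha$ in $N_\mathrm{min}(p,\Delta)\propto p^{-\alpha}$ can be estimated by a least-squares fit in log-log space. The sketch below uses the token counts cited above together with approximate ViT encoder sizes (≈22M, ≈86M, ≈304M parameters for S/16, B/16, L/16); these parameter counts are assumptions for illustration, not figures from the source:

```python
import math

# (approx. encoder parameters, min. image tokens for <=2% accuracy drop)
# Parameter counts are assumed typical ViT sizes, not from Li et al., 2023.
data = [(22e6, 101), (86e6, 37), (304e6, 17)]

# Least-squares slope of log(tokens) vs. log(params); slope = -alpha
xs = [math.log(p) for p, _ in data]
ys = [math.log(n) for _, n in data]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
alpha = -slope
print(f"estimated alpha ≈ {alpha:.2f}")
```

Under these assumed sizes the fit yields a positive exponent, consistent with the power-law form; the exact value depends on the parameter counts plugged in.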

6. Broader Significance and Ongoing Research Directions

Inverse scaling signals that trend extrapolation based on classical scaling laws is unreliable for all behaviors, especially those sensitive to proxy objectives or spurious correlates. The phenomenon prompts:

  • Refined benchmarking methodologies: Systematic surveys must test a wide grid of scales, prompt settings, and training regimes (Wei et al., 2022, 2305.14681).
  • Revisions to deployment policy: Task-specific or risk-aware early stopping, per-task reporting, and increased transparency to downstream users are indicated (2305.14681).
  • New research into architecture and loss function design: Modifications to prioritize instruction-following or override undesirable priors are active directions (McKenzie et al., 2023, Wei et al., 2022).
  • Extensions beyond LMs: Inverse scaling is relevant in vision-language pretraining, numerical analysis, and any system where efficiency or alignment depends inversely on scale (Li et al., 2023, Borkowski, 2015).

Inverse scaling remains a critical area for both empirical investigation and theoretical understanding as the scale and impact of machine learning models continue to grow.
