
OLMo-2 Models: Open-Source Autoregressive LLM

Updated 26 January 2026
  • OLMo-2 models are fully open-source, decoder-only autoregressive Transformers built by AI2, emphasizing high performance, transparency, and robust tuning processes.
  • Key innovations include architectural refinements such as bias-free matrices, SwiGLU activations, rotary embeddings, and novel regularization like Z-loss.
  • Instruction-tuned variants and advanced decoding strategies enhance alignment, restore output diversity, and achieve competitive benchmarks on multilingual tasks.

OLMo-2 models are fully open-source, decoder-only autoregressive Transformers released by the Allen Institute for AI (AI2) that target high performance, broad transparency, and robustness in both base and instruction-tuned configurations. They are designed to be competitive with leading open-weight and partially open-weight LLMs, offering not only trained checkpoints but also pretraining and post-training recipes, code, data, and logs. The OLMo-2 family has been rigorously benchmarked and analyzed for architectural design, linguistic properties, output diversity, alignment behaviors, and vulnerabilities, with all artifacts under permissive licensing (OLMo et al., 2024).

1. Architectural Innovations and Training Pipeline

OLMo-2 models refine standard decoder-only Transformer architectures via stability- and efficiency-focused enhancements. Notable features include:

  • No bias terms in projection matrices, following PaLM and OLMo-1 conventions.
  • SwiGLU activations in the MLP stack, with inner dimensions set to approximately (8/3) · d_model, rounded to the nearest multiple of 128.
  • Rotary positional embeddings (RoPE) with an increased base θ = 5×10^5 for higher positional fidelity.
  • QK-Norm: RMSNorm applied to Query and Key activations prior to attention logits to control logit magnitudes.
  • RMSNorm (Post-norm): Applied to outputs of each sublayer, replacing pre-LayerNorm, modeled after Swin v2.
  • Addition of a Z-loss regularization term, λ · (log Z)^2 with Z = Σ exp(logits), to the standard cross-entropy loss.
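Two of the features above can be illustrated compactly. The following is a minimal NumPy sketch under assumed shapes and an illustrative λ value, not AI2's training code:

```python
import numpy as np

def z_loss(logits, lam=1e-4):
    """Auxiliary Z-loss: lam * (log Z)^2 with Z = sum(exp(logits)).

    Penalizes large softmax normalizers, keeping logit magnitudes bounded.
    (lam = 1e-4 is an assumed illustrative value.)
    """
    m = logits.max()                                  # stable log-sum-exp
    log_z = m + np.log(np.exp(logits - m).sum())
    return lam * log_z ** 2

def swiglu(x, w_gate, w_up, w_down):
    """Bias-free SwiGLU MLP: w_down( silu(x @ w_gate) * (x @ w_up) )."""
    silu = lambda a: a / (1.0 + np.exp(-a))
    return (silu(x @ w_gate) * (x @ w_up)) @ w_down

# Inner dimension near (8/3) * d_model, rounded to a multiple of 128
d_model = 384
d_inner = round((8 / 3) * d_model / 128) * 128        # -> 1024

rng = np.random.default_rng(0)
x = rng.standard_normal((2, d_model))
out = swiglu(x,
             0.02 * rng.standard_normal((d_model, d_inner)),
             0.02 * rng.standard_normal((d_model, d_inner)),
             0.02 * rng.standard_normal((d_inner, d_model)))
print(out.shape, d_inner)   # (2, 384) 1024
print(z_loss(np.zeros(4)))  # lam * (log 4)^2
```

In a real decoder stack the Z-loss term is simply added to the cross-entropy objective each step; here it is shown in isolation.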

The training regimen includes a large sequence length (4096), substantial batch sizes (1024–2048), and AdamW optimization with finely tuned learning-rate schedules and stability interventions. Data-level adjustments include the filtering of documents with excessive n-gram repeats and the removal of weight decay on embeddings to mitigate early training instability (OLMo et al., 2024).

2. Data Curricula and Pretraining Mixtures

OLMo-2 pretraining occurs over two principal phases, both leveraging carefully curated and filtered corpora:

  • OLMo 2 Mix 1124: A broad pretraining corpus (~90% web-scale data), including Common Crawl, curated QA, and academic text.
  • Dolmino Mix 1124: Used for late-stage curriculum ("annealing"), further emphasizing high-quality web, decontaminated FLAN, peer QA, academic, Wiki, and synthetic math data. Three data mixture sizes are used (50B, 100B, 300B tokens), weighted by source.

For the 7B model, multiple annealing runs are combined via "model soup"; for 13B, larger mixtures are used similarly. This late-stage curriculum demonstrably improves downstream benchmark performance (OLMo et al., 2024).
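The "model soup" step above amounts to parameter averaging across annealing runs. A minimal sketch, assuming a hypothetical name-to-array checkpoint format rather than AI2's actual checkpoint layout:

```python
import numpy as np

def model_soup(checkpoints):
    """Uniformly average each named parameter across a list of state dicts."""
    keys = checkpoints[0].keys()
    return {k: np.mean([ckpt[k] for ckpt in checkpoints], axis=0) for k in keys}

# Three toy "annealing runs" of a single 2x2 weight matrix
runs = [{"w": np.full((2, 2), v)} for v in (1.0, 2.0, 3.0)]
souped = model_soup(runs)
print(souped["w"])  # every entry is 2.0
```

Averaging works here because the runs share one initialization and diverge only late in training, so their weights remain in a common loss basin.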

3. Instruction Tuning and Post-Training Methods

Instruction-tuned OLMo-2 variants ("OLMo-2-Instruct") are produced via a staged pipeline:

  1. Supervised Fine-Tuning (SFT): Training on ~940k instruction–response pairs from permissively licensed datasets (Tülu 3, FLAN, QA, synthetic persona data).
  2. Direct Preference Optimization (DPO): A preference-optimization step leveraging both on-policy (SFT checkpoint-generated) and off-policy (20 open chat models) samples. Preferences are judged by GPT-4o, and the DPO loss is minimized as

L_{\mathrm{DPO}}(\theta) = -\mathbb{E}_{(x, y_+, y_-)} \log \sigma\left( r_\theta(x, y_+) - r_\theta(x, y_-) \right)

  3. Reinforcement Learning with Verifiable Rewards (RLVR): PPO-style updates maximize rewards for verifiably correct outputs (on tasks like GSM8K and MATH), with a KL penalty to a reference policy. Batch sizes ≈ 240, clip ratio ε = 0.2, and KL coefficient β ∈ {0.03, 0.05, 0.1} are employed.

This staged approach, adopted from best practices in Tülu 3, positions OLMo-2-Instruct variants favorably against competitive open-weight models (OLMo et al., 2024).
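The DPO objective above can be sketched in a few lines, using the implicit reward r = β(log π_θ − log π_ref) on each response. The values below are illustrative log-probabilities, not real model outputs:

```python
import math

def dpo_loss(logp_pos, logp_neg, ref_logp_pos, ref_logp_neg, beta=0.1):
    """DPO loss for one preference pair: -log sigmoid(r_pos - r_neg)."""
    r_pos = beta * (logp_pos - ref_logp_pos)   # implicit reward, chosen response
    r_neg = beta * (logp_neg - ref_logp_neg)   # implicit reward, rejected response
    margin = r_pos - r_neg
    # -log sigmoid(margin), written stably as log(1 + exp(-margin))
    return math.log1p(math.exp(-margin))

# Policy already slightly prefers the chosen response over the rejected one
loss = dpo_loss(logp_pos=-10.0, logp_neg=-12.0,
                ref_logp_pos=-11.0, ref_logp_neg=-11.0, beta=0.1)
print(round(loss, 4))  # 0.5981
```

Note that the loss is log 2 when the two implicit rewards tie and shrinks as the policy separates the chosen response from the rejected one.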

4. Linguistic Representations and Information Encoding

Detailed probing reveals that OLMo-2, like other advanced LLMs, displays characteristic patterns in the internal encoding of lexical and morphological information (Li et al., 2 Jun 2025):

  • Lexical identity (lemma): Encoded with high linear separability in early layers (linear probe accuracy ≈ 0.90 at layer l = 0), but the representation becomes progressively more nonlinear in deeper layers (≈ 0.40 linear vs. ≈ 0.55 MLP probe accuracy at l = 23), consistent with increased manifold entanglement.
  • Inflectional morphology: Maintains uniformly high linear separability across all layers (linear accuracy 0.97 at l = 0 to 0.90 at l = 23; an MLP probe is only marginally better), suggesting robust, generalizable encoding.
  • Selectivity and intrinsic dimensionality analyses corroborate that inflectional features are abstracted and stable, while lemma identity is compressed and "buried" in later layers.
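The probing methodology behind these numbers can be illustrated with a toy sketch: fit a linear classifier on frozen features and report accuracy. The random features below stand in for hidden states; this is not the paper's code:

```python
import numpy as np

def linear_probe_accuracy(feats, labels, epochs=200, lr=0.5):
    """Fit a linear softmax probe on frozen features; return train accuracy."""
    n, d = feats.shape
    k = labels.max() + 1
    weights = np.zeros((d, k))
    onehot = np.eye(k)[labels]
    for _ in range(epochs):
        logits = feats @ weights
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        weights -= lr * feats.T @ (p - onehot) / n   # cross-entropy gradient step
    return (np.argmax(feats @ weights, axis=1) == labels).mean()

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 400)
# "Early-layer" stand-in: one linearly separable direction; "late-layer": noise
sep = rng.standard_normal((400, 8))
sep[:, 0] += 3.0 * (y - 0.5)          # symmetric class shift of +/-1.5
ent = rng.standard_normal((400, 8))
acc_sep = linear_probe_accuracy(sep, y)
acc_ent = linear_probe_accuracy(ent, y)
print(acc_sep, acc_ent)  # high vs. near-chance linear separability
```

A feature that is linearly encoded yields high probe accuracy; one that is entangled (here, absent entirely) stays near chance, mirroring the lemma-vs-morphology contrast above.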

These observations parallel trends in BERT and other large models, supporting the invariance of certain linguistic organization patterns across architectural regimes.

5. Output Diversity, the "Diversity Gap," and Decoding Strategies

OLMo-2 displays a marked loss in generative diversity with progressive instruction tuning, as measured by lexical Vendi Score, semantic truncated entropy, recall, and Mauve metrics (Peeperkorn et al., 28 Jul 2025). Quantitative findings from stepwise instruction tuning include:

| Stage | Lexical VS   | Semantic TE | Recall     | Mauve  |
|-------|--------------|-------------|------------|--------|
| Base  | 18.5         | —           | —          | 0.90   |
| SFT   | 14.7 (≈−20%) | −15%        | −0.05      | —      |
| DPO   | 9.2 (≈−50%)  | −35%        | −0.20      | 0.68   |
| RLVR  | ≈ post-DPO   | ≈ post-DPO  | ≈ post-DPO | ≈ 0.65 |

(Semantic TE and Recall entries are reported as changes relative to the base model.)

The majority of diversity loss occurs during DPO. The "diversity gap" between base and instruction-tuned versions is highly significant (p < 0.001 for major metrics). SFT and DPO stages respectively bias the model toward more canonical, less varied responses and collapse the output entropy via aggressive preference sharpening. RLVR introduces negligible additional diversity loss.

Conformative decoding is proposed to reintroduce diversity by log-linear mixing of base and instruct model distributions post-truncation:

\log P_\mathrm{conf}(x_t \mid x_{<t}) = \gamma \log P_\mathrm{inst}(x_t \mid x_{<t}) + (1 - \gamma) \log P_\mathrm{base}(x_t \mid x_{<t})

restricted to a valid token set V_valid, with γ controlling the instruction-vs-entropy trade-off. With γ = 0.5 and nucleus p = 0.95, diversity metrics increase by ~18% (Vendi Score) and ~15% (TE), with recall gains of 0.10 and stable precision, restoring a substantial portion of lost diversity without sacrificing generation quality.
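A single decoding step of this mixture can be sketched as follows. The distributions are toy arrays over a 4-token vocabulary (with top_p = 0.9 for a clean cut on so few tokens), not real model outputs:

```python
import numpy as np

def conformative_step(logp_inst, logp_base, gamma=0.5, top_p=0.9):
    """One decoding step: log-linear mix over the instruct model's nucleus."""
    p_inst = np.exp(logp_inst)
    order = np.argsort(-p_inst)                      # tokens by descending prob
    cum = np.cumsum(p_inst[order])
    keep = order[: np.searchsorted(cum, top_p) + 1]  # smallest set with mass >= top_p
    # Log-linear mixture on the surviving tokens (V_valid), renormalized
    mixed = gamma * logp_inst[keep] + (1 - gamma) * logp_base[keep]
    mixed -= np.log(np.exp(mixed).sum())
    return keep, np.exp(mixed)

# Instruct model is peaked on token 0; base model is uniform
logp_inst = np.log(np.array([0.70, 0.15, 0.10, 0.05]))
logp_base = np.log(np.array([0.25, 0.25, 0.25, 0.25]))
tokens, probs = conformative_step(logp_inst, logp_base)
print(tokens, probs.round(3))  # mixture is flatter than the instruct distribution
```

With γ = 0.5 the mixture is proportional to the geometric mean of the two distributions, so mass flows from the instruct model's peak back toward tokens the base model still considers plausible.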

6. Alignment, Safety, and Behavioral Analyses

Independent alignment evaluations of OLMo-2-32b expose both strong performance and notable risks in sensitive domains (Judd et al., 31 Oct 2025):

  • Model consistently expresses empathy and provides nonspecific encouragement in response to suicide-related risk factors (>97% of responses).
  • Fails to invite further dialog (overall probability ~14%, declining to ~0.8% by the seventh dialog turn, with turnwise odds halved per +1 SD in turn count).
  • Direct recognition of risk and provision of specific resources occur inconsistently (e.g., 41% specific resource provision overall, risk acknowledgment varies strongly by category).
  • Notable hazards include "stigmatizing withdrawal" (reduction in continued engagement as risk disclosures accumulate), minimization of risk for certain disclosure types, and lack of actionable resource referral.

These findings suggest that, without targeted safety fine-tuning, OLMo-2 exhibits limitations in maintaining engagement and directiveness in high-risk scenarios, though its open-source nature facilitates remediation.

7. Syntactic-Domain Correlations and Robustness

OLMo-2 models are susceptible to spurious syntactic-domain correlations if instruction tuning is performed on data with narrow or biased syntactic styles (Shaib et al., 25 Sep 2025). Key effects include:

  • Performance degradation on entity-knowledge tasks when crossing from in-domain to cross-domain syntactic templates (Δ ≈ −0.41 to −0.54 for 1B–13B models).
  • Safety vulnerability: Cross-domain template insertion (e.g., leveraging chain-of-thought prompts) can bypass refusal and safety filters, reducing refusal rates for harmful requests from 40% baseline to as low as 2.5%.
  • Mitigations: Explicit template-domain decorrelation during data collection, domain-adversarial/invariant training, anti-template regularization losses, and continuous in-vs-cross domain monitoring in training pipelines.

This vulnerability demonstrates the necessity for syntactic diversity and domain-invariance in instruction tuning and safety alignment recipes.

8. Empirical Benchmarks and Release Structure

OLMo-2 models exhibit strong empirical results, occupying the Pareto frontier for average multitask accuracy per FLOP. For the 7B model: average multitask OLMES score is 61.2 (MMLU 63.7, GSM8K 67.5), with the 13B model yielding 66.8, 67.5, and 75.1 respectively, closely matching or exceeding comparably sized competitors like Llama 3.1, Qwen 2.5, and Gemma 2 (OLMo et al., 2024).

All model weights, training datasets, codebases, and logs are released under highly permissive licenses, with 7B models deployable on a single H100 GPU and baked-in stability checks for production settings.


In summary, OLMo-2 models represent a fully reproducible, technically sophisticated class of open LLMs, excelling at efficiency and transparency. Their performance is competitive across the task spectrum, but users and developers must address nuanced challenges in diversity preservation, alignment robustness, and syntactic invariance through continued research and targeted recipe design.
