
Compositional Steering Tokens

Updated 9 January 2026
  • Compositional Steering Tokens are specialized embeddings that enable LLMs to simultaneously satisfy multiple behavioral and semantic constraints through modular input modifications.
  • They combine input-token adjustments and activation-space interventions to achieve fine-grained, zero-shot generalization over unseen behavior combinations.
  • Empirical results indicate that using composition tokens enhances accuracy and robustness compared to traditional instruction concatenation, offering scalable multi-attribute control.

Compositional steering tokens are specialized embeddings or activation-space interventions that enable LLMs and related transformer architectures to satisfy multiple behavioral or semantic constraints simultaneously. Unlike classical single-property steering, these methods target the joint satisfaction of several desiderata—such as controlling both output format and content style—through either input-token modifications or direct intervention in the internal activations of the model. Recent advances in compositional steering tokens exploit both explicit input embeddings and latent-space vector composition to achieve fine-grained, modular, and scalable control across an expanding set of behaviors, attributes, or properties (Radevski et al., 8 Jan 2026).

1. Formal Foundations and Input-Token Approaches

Compositional steering tokens were introduced to address the underexplored problem of controlling multiple behaviors in frozen LLMs by operating in the space of trainable input embeddings rather than modifying model weights or activations post hoc. Let $B = \{b_1, \ldots, b_k\}$ denote a set of behaviors (e.g., "answer in French," "use 10–50 words"). For each $b \in B$, a dedicated "steering token" $\vec{e}_b \in \mathbb{R}^d$ is introduced, where $d$ is the model's hidden size. A special "composition" token $\vec{e}_{\text{<and>}}$ is trained to mediate the logical conjunction of behaviors.

Given a prompt $x$ and desired behaviors $\{b_i, b_j, \ldots\}$, the system feeds $[E_x, \vec{e}_{b_i}, \vec{e}_{\text{<and>}}, \vec{e}_{b_j}, \vec{e}_{\text{<and>}}, \ldots]$ as input tokens to the frozen LLM. The behavioral tokens, interleaved with the composition operator, direct the model to jointly satisfy all specified constraints (Radevski et al., 8 Jan 2026).
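As a concrete illustration, the composed input sequence above can be assembled by concatenating the prompt embeddings with the behavior and composition embeddings. This is a minimal NumPy sketch with toy dimensions; the exact interleaving convention and interface in the paper may differ:

```python
import numpy as np

def compose_input(prompt_embeds, behavior_vecs, and_vec):
    # Build [E_x, e_b1, e_<and>, e_b2, e_<and>, ...]: each behavior
    # steering token is followed by the <and> composition token.
    # (Hypothetical helper; one plausible reading of the interleaving.)
    parts = [prompt_embeds]
    for b in behavior_vecs:
        parts.append(b[None, :])
        parts.append(and_vec[None, :])
    return np.concatenate(parts, axis=0)

d = 8                                    # toy hidden size
E_x = np.random.randn(5, d)              # 5 prompt-token embeddings
e_b1, e_b2 = np.random.randn(d), np.random.randn(d)
e_and = np.random.randn(d)

seq = compose_input(E_x, [e_b1, e_b2], e_and)
print(seq.shape)                         # (9, 8): 5 prompt + 2 behaviors + 2 <and>
```

In practice the behavior and composition vectors would be the trained embeddings, and `seq` would be passed to the frozen LLM via its input-embedding interface rather than token IDs.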

2. Learning Single-Behavior and Composition Tokens

The canonical training regime uses self-distillation. For each $b \in B$:

  • Teacher prompt: $x \oplus I_b$, where $I_b$ is a natural-language instruction for behavior $b$.
  • Student prompt: $x \oplus \texttt{<b>}$, with $\texttt{<b>}$ replaced by its trainable steering-token embedding.

Behavioral steering tokens are optimized by minimizing the distillation loss

$$L_{\text{dist}}(b) = T^2 \, \mathrm{KL}\big(P_T(y \mid x, I_b) \,\|\, P_S(y \mid x, \texttt{<b>})\big),$$

at temperature $T$, with only $\vec{e}_b$ updated and all model parameters frozen. To avoid overfitting, 10 paraphrases of each $I_b$ are sampled per batch (Radevski et al., 8 Jan 2026).
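The temperature-scaled distillation loss can be sketched as follows. This is an illustrative NumPy version over toy logits, not the paper's training code:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-softened softmax, numerically stabilized.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(teacher_logits, student_logits, T=2.0):
    # T^2-scaled KL(P_T || P_S) between softened next-token
    # distributions, averaged over positions (sketch).
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1)
    return T**2 * kl.mean()

rng = np.random.default_rng(0)
t = rng.normal(size=(4, 10))                # 4 positions, toy vocab of 10
assert distill_loss(t, t.copy()) < 1e-9     # identical distributions -> zero loss
assert distill_loss(t, rng.normal(size=(4, 10))) > 0
```

During training, only the gradient with respect to the steering-token embedding flows back; everything else is frozen.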

The composition token $\vec{e}_{\text{<and>}}$ is then trained with all $\vec{e}_b$ and the LLM frozen, on distillation pairs where teachers receive concatenated instructions and students receive the behavioral tokens interleaved with $\vec{e}_{\text{<and>}}$. An orthogonality regularizer

$$L_{\text{orth}} = \sum_{b \in B_{\text{seen}}} \left( \frac{\vec{e}_{\text{<and>}} \cdot \vec{e}_b}{\|\vec{e}_{\text{<and>}}\| \, \|\vec{e}_b\|} \right)^2$$

is applied to prevent collapse. The full loss is $L = L_{\text{dist}} + \lambda L_{\text{orth}}$, with $\lambda = 0.5$. Token-order shuffling during training ensures permutation invariance (Radevski et al., 8 Jan 2026).
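The regularizer is just a sum of squared cosine similarities between the composition token and each seen behavior token, e.g. (toy vectors, not trained embeddings):

```python
import numpy as np

def orth_penalty(e_and, behavior_vecs):
    # Sum of squared cosine similarities between the <and> token
    # and each seen behavior token (the L_orth term above).
    total = 0.0
    for e_b in behavior_vecs:
        cos = e_and @ e_b / (np.linalg.norm(e_and) * np.linalg.norm(e_b))
        total += cos ** 2
    return total

e_and = np.array([1.0, 0.0, 0.0])
b1 = np.array([0.0, 1.0, 0.0])    # orthogonal -> contributes 0
b2 = np.array([1.0, 1.0, 0.0])    # 45 degrees -> cos^2 = 0.5
print(orth_penalty(e_and, [b1, b2]))    # ≈ 0.5
# full objective: L = L_dist + 0.5 * orth_penalty(e_and, seen_behaviors)
```

Driving this penalty toward zero keeps the composition token from drifting into the subspace spanned by the behavior tokens, which is what "collapse" refers to here.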

3. Compositional Generalization and Metrics

Compositional steering tokens exhibit robust zero-shot generalization along several axes:

  • Unseen behavior pairs: Generalize to combinations of behaviors not seen during composition-token training.
  • Unseen behaviors: Transfer to novel behaviors by training only their steering tokens.
  • Unseen behavior cardinality: Trained on 2-tuples, tested on 3-behavior conjunctions.

Evaluation metrics include:

  • Mean accuracy: Fraction of outputs satisfying all constraints, averaged over all $k!$ token orderings.
  • Order variance: Max accuracy difference across input permutations.
  • Response quality: Coherence and correctness rated 1–5 by LLM auto-raters.
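The first two metrics can be computed mechanically once per-ordering accuracies are available. A small sketch (the `acc_fn` hook and the toy numbers are hypothetical, not the paper's values):

```python
from itertools import permutations

def order_metrics(acc_fn, behaviors):
    # Mean accuracy over all k! steering-token orderings, plus the
    # max-min spread ("order variance" as defined above).
    # acc_fn maps an ordering tuple to its constraint-satisfaction rate.
    accs = [acc_fn(order) for order in permutations(behaviors)]
    return sum(accs) / len(accs), max(accs) - min(accs)

# toy accuracy table for 2 behaviors -> 2! = 2 orderings
table = {("b1", "b2"): 0.78, ("b2", "b1"): 0.74}
mean_acc, spread = order_metrics(table.__getitem__, ["b1", "b2"])
print(mean_acc, spread)    # mean ≈ 0.76, spread ≈ 0.04
```

A low spread indicates the steering tokens behave as an (approximately) commutative conjunction, which is the goal of the order-shuffled training.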

For example, on Qwen-8B, the composition-token approach outperforms pure instruction concatenation for both 2-behavior (76.9% vs. 71.8%) and 3-behavior (59.5% vs. 54.0%) zero-shot settings. The hybrid method (tokens + instructions) yields further gains and lower order variance (Radevski et al., 8 Jan 2026).

| Setting            | Instruction (%) | Composition Token (%) | Hybrid (%) | Order Variance (%)          |
|--------------------|-----------------|-----------------------|------------|-----------------------------|
| 2-behavior, unseen | 71.8            | 76.9                  | —          | 7.8 (instr) → 5.3 (comp)    |
| 3-behavior, unseen | 54.0            | 59.5                  | 62.9       | 18.1 (instr) → 15.2 (hybrid)|

4. Activation-Space and Contrastive Methods for Compositional Steering

Alternative approaches apply compositional control via intervention in activation space. One class derives steering directions from contrastive activation shifts anchored to influential input tokens.

  • GrAInS identifies the top-$k$ positive and negative tokens via Integrated Gradients with respect to a preference objective. Directional steering vectors are derived as the primary principal components of the activation shifts caused by ablating these tokens, e.g.

$$v_\ell = v_\ell^+ - v_\ell^-,$$

where $v_\ell^+$ and $v_\ell^-$ are computed by PCA over the hidden-state changes due to ablation. At inference, $v_\ell$ is injected (scaled and normalized) into the activations at each transformer layer (Nguyen et al., 24 Jul 2025).

  • MAT-Steer learns per-attribute vectors $v_t$ using a kernel MMD alignment objective, coupled with gating, sparsity, and orthogonality constraints to minimize interference. At inference, the composed effect is a gated, weighted sum

$$h_i' = h_i + \sum_{t=1}^{T} \alpha_t \, G_t(h_i) \, v_t,$$

renormalized to preserve the activation norm (Nguyen et al., 18 Feb 2025).

  • Dynamic Activation Composition modulates the strength $\alpha_t^p$ of each property vector at each token position based on the KL divergence between the current model distribution and one maximally steered for property $p$. This dynamic weighting enables robust multi-property satisfaction with minimal adverse effects on fluency (Scalena et al., 2024).
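The gated, norm-preserving composition in the MAT-Steer-style update rule above can be sketched as follows. The gate here is a simple sigmoid over a per-attribute direction, an illustrative stand-in for the paper's learned gating function:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def compose_steer(h, attr_vecs, gate_dirs, alphas):
    # h' = h + sum_t alpha_t * G_t(h) * v_t, then rescaled so that
    # ||h'|| == ||h|| (norm preservation, as in the text).
    # gate_dirs parameterize a toy scalar gate; illustrative only.
    out = h.copy()
    for v, w, a in zip(attr_vecs, gate_dirs, alphas):
        g = sigmoid(h @ w)          # scalar gate G_t(h) in (0, 1)
        out = out + a * g * v
    return out * (np.linalg.norm(h) / np.linalg.norm(out))

rng = np.random.default_rng(1)
d = 16
h = rng.normal(size=d)                       # one hidden state
attr_vecs = [rng.normal(size=d) for _ in range(3)]
gate_dirs = [rng.normal(size=d) for _ in range(3)]
h_new = compose_steer(h, attr_vecs, gate_dirs, alphas=[0.5, 0.5, 0.5])
assert abs(np.linalg.norm(h_new) - np.linalg.norm(h)) < 1e-9  # norm preserved
```

The gating is what limits interference: an attribute vector is only applied strongly where its gate fires, instead of being added uniformly at every position.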

5. Semantically Meaningful Tokens in Vision and Multimodal Transformers

The principle of compositional steering via tokens also extends to vision and vision–language transformers. Tokenizing inputs into semantically meaningful units rather than non-semantic image patches gives the transformer discrete, interpretable elements (objects and relations) that facilitate compositional reasoning.

  • Tangible tokens are instance segmentation mask embeddings, while intangible tokens encode inter-object relationships or actions.
  • Additive attention weights, learned for token-pair ranks derived from scene graphs, guide the transformer’s focus to true compositional relations (subject–predicate–object, spatial structure).

Experiments report marked gains in image/text retrieval and compositionality benchmarks (ARO +18%, Winoground +10%) relative to models relying solely on patch tokens (Kalibhat et al., 2024). The result underscores the efficacy of explicit compositional structures in tokenization or steering.

6. Advantages Over Prior Steering Paradigms

Input-token compositional steering provides decisive advantages over pure activation-based and LoRA/DARE merging approaches:

  • No model weights are updated; only input embeddings are trained, or activation vectors composed at inference.
  • Zero-shot generalization to unseen compositions is robust due to explicit input-level composition operators (e.g., the <and> token).
  • Arbitrary new behaviors can be added plug-and-play by learning new tokens without retraining the rest of the steering vocabulary.
  • Order variance and brittleness are mitigated by order-invariant training objectives and composition token regularization.
  • Hybridization with natural-language instructions offers additive gains in both satisfaction rate and robustness (Radevski et al., 8 Jan 2026).

In contrast, activation-space steering often requires careful hand-tuning of segment- and layer-wise magnitudes, and weight merges such as LoRA with DARE show poor compositional generalization (e.g., 44.8% on unseen multi-behavior test cases), while naive vector addition or unified steering methods typically break down as the number of concurrently steered attributes grows (Radevski et al., 8 Jan 2026, Nguyen et al., 18 Feb 2025, Scalena et al., 2024).

7. Empirical Results and Practical Recommendations

  • Composition-token methods outperform instruction concatenation by at least 5–6 percentage points on held-out multi-behavior sets, and hybrid methods further increase accuracy and consistency.
  • MAT-Steer and GrAInS enable fine-grained, interpretable multi-attribute control, maintaining or exceeding strong baselines in factuality, safety, helpfulness, and coherence across LLMs and VLMs.
  • Dynamic Activation Composition is the sole method in the current peer-reviewed literature reported to simultaneously achieve >95% accuracy for all properties in multi-property settings without fluency degradation (Scalena et al., 2024).

For practical deployment, recommended practices include: enforcing orthogonality between attribute directions, applying sparsity gating to target only the necessary tokens, incorporating order-agnostic training for input-based approaches, and dynamically modulating steering vector strength to avoid over- or under-conditioning during generation.
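The last recommendation, dynamically modulating steering strength, can be sketched as below. The mapping from KL divergence to a strength in [0, alpha_max) is an illustrative choice, not the exact schedule of any cited method:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def dynamic_alpha(logits_current, logits_steered, alpha_max=2.0):
    # If the current distribution already matches the maximally steered
    # one (small KL), back off the steering strength; if it is far,
    # steer harder. Hedged sketch: the KL->strength map is illustrative.
    p = softmax(logits_current)
    q = softmax(logits_steered)
    kl = float(np.sum(q * (np.log(q) - np.log(p))))
    return alpha_max * (1.0 - np.exp(-kl))    # in [0, alpha_max)

same = np.zeros(10)
assert dynamic_alpha(same, same) < 1e-9           # already satisfied -> alpha ≈ 0
assert dynamic_alpha(np.arange(10.0), same) > 0   # far from target -> positive alpha
```

Per-token modulation like this is what avoids over-conditioning (degraded fluency) once a property is already being satisfied, while still pushing hard when it is not.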
