Quantify PRISM’s gains at large (70B+) model scales
Determine the magnitude of performance improvements achieved by PRISM (Persona Routing via Intent-based Self-Modeling), a gated LoRA distillation approach for intent-conditioned persona routing, when applied to language models with 70B or more parameters.
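For context, the sketch below illustrates one way an intent-conditioned gate could modulate a per-layer LoRA update. This is a minimal illustration under stated assumptions, not PRISM's published implementation: the class name `GatedLoRALinear`, the sigmoid gate over an intent embedding, and all dimensions are hypothetical, and the distillation objective that would train the adapter is omitted.

```python
import torch
import torch.nn as nn

class GatedLoRALinear(nn.Module):
    """Hypothetical intent-gated LoRA adapter (illustrative only; PRISM's
    actual gating and distillation setup may differ). A frozen base linear
    layer is augmented with a low-rank update x @ A @ B whose contribution
    is scaled by a gate computed from an intent embedding."""

    def __init__(self, base: nn.Linear, rank: int = 8, intent_dim: int = 64):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base weights stay frozen
        d_in, d_out = base.in_features, base.out_features
        self.lora_A = nn.Parameter(torch.randn(d_in, rank) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(rank, d_out))  # zero-init: no-op at start
        # gate: maps an intent embedding to a scalar in (0, 1)
        self.gate = nn.Sequential(nn.Linear(intent_dim, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor, intent: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_in); intent: (batch, intent_dim)
        g = self.gate(intent)                    # (batch, 1)
        delta = (x @ self.lora_A) @ self.lora_B  # low-rank persona update
        return self.base(x) + g * delta


# Usage sketch (shapes are assumptions):
# layer = GatedLoRALinear(nn.Linear(4096, 4096))
# y = layer(torch.randn(2, 4096), intent=torch.randn(2, 64))
```

Because the gate can drive the update toward zero, an adapter like this can in principle fall back to the frozen base model when the inferred intent does not call for a persona, which is one plausible reading of "gated" routing; whether the effect scales to 70B+ models is exactly the open question above.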
References
Model scale. Our experiments are limited to 7–8B parameter models. While the findings on persona sensitivity and task-type dependence are likely to generalize, the magnitude of PRISM's improvements at larger scales (e.g., 70B+) remains untested.
— Expert Personas Improve LLM Alignment but Damage Accuracy: Bootstrapping Intent-Based Persona Routing with PRISM
(2603.18507 - Hu et al., 19 Mar 2026) in Limitations, Model scale paragraph