Modeling personalization in LM-based user simulators via theory-of-mind

Develop and validate a theory-of-mind-based personalization framework for the language-model-driven user simulator in the Proactive Agent Research Environment (PARE). The framework should capture individual differences in user personality and trust levels, including variability in proposal acceptance and preferences about intervention timing, to enable more realistic evaluation of proactive assistants.

Background

The paper evaluates proactive assistants using an LM-based user simulator within the Proactive Agent Research Environment (PARE). While the simulator enforces strict acceptance criteria for assistant proposals, it does not currently model heterogeneity across users.

The authors note that real users differ in personality and trust—some readily accept proposals without verifying details, while others reject proposals when intervention timing does not match their preferences. They explicitly state that modeling such personalization using theory-of-mind approaches remains an open challenge, highlighting a gap for building more realistic user simulations.
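One way to make this gap concrete is a minimal sketch of a persona-conditioned acceptance policy for the simulator. Everything below is an illustrative assumption, not the PARE implementation: the `UserPersona` parameters (`trust`, `preferred_timing`) and the decision rule in `simulate_response` are hypothetical names chosen to mirror the two behaviors the authors describe (trusting users accepting without verification, others rejecting on timing mismatch).

```python
import random
from dataclasses import dataclass


@dataclass
class UserPersona:
    """Hypothetical persona parameters for a PARE-style user simulator."""
    trust: float            # 0.0 = always verifies content, 1.0 = accepts without checking
    preferred_timing: str   # e.g. "immediate" vs. "after_task" (assumed labels)


def simulate_response(persona: UserPersona, proposal_timing: str,
                      proposal_is_correct: bool, rng: random.Random) -> str:
    """Decide whether a simulated user accepts an assistant proposal.

    Assumed rule: low-trust users reject on timing mismatch; with
    probability `trust` a user skips verification and accepts as-is;
    otherwise the user verifies the content before deciding.
    """
    # Timing mismatch: low-trust users actively reject (assumption).
    if proposal_timing != persona.preferred_timing and persona.trust < 0.5:
        return "reject"
    # With probability `trust`, accept without verifying the content.
    if rng.random() < persona.trust:
        return "accept"
    # Otherwise verify the underlying content before deciding.
    return "accept" if proposal_is_correct else "reject"


rng = random.Random(0)
trusting = UserPersona(trust=1.0, preferred_timing="immediate")
skeptical = UserPersona(trust=0.0, preferred_timing="after_task")
```

A theory-of-mind extension would replace the hard-coded rule with the assistant's inferred belief about these latent persona variables, updated from the simulated user's observed accept/reject history.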

References

Moreover, our user simulation does not model individual differences in user personality and trust levels. In practice, some users are more trusting and would accept proposals without verifying the underlying content, while others actively reject proposals if the intervention timing does not match their preferences. Modeling this personalization through theory-of-mind approaches \citep{zhou2026tomsweusermentalmodeling} remains an open challenge.

Proactive Agent Research Environment: Simulating Active Users to Evaluate Proactive Assistants (2604.00842 - Nathani et al., 1 Apr 2026) in Appendix, Section: Limitations and Future Work (Limitations subsection)