Generalization of gaze-based user-state models across AI reliability conditions and users

Determine whether predictive models that infer subjective user states from gaze signals, trained under one AI-assistance reliability setting, generalize to other reliability conditions and to unseen users.

Background

In AI-assisted tasks, observed gaze and pupil changes can arise from multiple sources, including evidence difficulty, processing of AI advice, or conflict due to misleading suggestions. Such factors can shift the mapping between implicit gaze signals and subjective states.

Because AI assistance can alter decision strategies and attention allocation, it is uncertain whether models trained in one context will maintain performance across different reliability conditions or transfer across users, which is critical for deployable user modeling.
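The two generalization tests implied above can be made concrete: (a) cross-user transfer, evaluated leave-one-user-out within the training condition, and (b) cross-condition transfer, training on one AI-reliability condition and testing on another. The sketch below illustrates both with synthetic data; the feature choices, the condition-dependent shift, and the nearest-centroid classifier are illustrative assumptions, not the cited paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_user_data(n_trials, shift):
    # Two synthetic gaze features per trial (e.g. fixation duration,
    # pupil dilation -- hypothetical choices); `shift` models how the
    # AI-reliability condition moves the feature-to-state mapping.
    states = rng.integers(0, 2, n_trials)      # binary subjective-state label
    feats = rng.normal(0.0, 1.0, (n_trials, 2))
    feats += states[:, None] * 1.5 + shift     # state effect + condition offset
    return feats, states

class NearestCentroid:
    # Minimal classifier: predict the class whose mean feature vector is closest.
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=2)
        return self.classes_[d.argmin(axis=1)]

def accuracy(model, X, y):
    return float((model.predict(X) == y).mean())

# Users observed under a high-reliability condition (shift=0) and a
# low-reliability condition (shift=2.0, i.e. the mapping has drifted).
users_hi = [make_user_data(200, shift=0.0) for _ in range(6)]
users_lo = [make_user_data(200, shift=2.0) for _ in range(6)]

# (a) Cross-user transfer: leave-one-user-out within the training condition.
loo_accs = []
for i in range(len(users_hi)):
    train = [u for j, u in enumerate(users_hi) if j != i]
    X_tr = np.vstack([f for f, _ in train])
    y_tr = np.concatenate([s for _, s in train])
    model = NearestCentroid().fit(X_tr, y_tr)
    loo_accs.append(accuracy(model, *users_hi[i]))

# (b) Cross-condition transfer: train on all high-reliability users,
# test on the low-reliability users.
X_tr = np.vstack([f for f, _ in users_hi])
y_tr = np.concatenate([s for _, s in users_hi])
model = NearestCentroid().fit(X_tr, y_tr)
X_lo = np.vstack([f for f, _ in users_lo])
y_lo = np.concatenate([s for _, s in users_lo])

print(f"leave-one-user-out accuracy: {np.mean(loo_accs):.2f}")
print(f"cross-condition accuracy:    {accuracy(model, X_lo, y_lo):.2f}")
```

Because the simulated condition shift moves the features relative to the learned centroids, cross-condition accuracy degrades while within-condition cross-user accuracy stays high, mirroring the failure mode the problem statement is probing.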

References

This implies that the mapping from implicit gaze signals to subjective states may change with AI context rather than remain fixed, making it unclear whether models trained in one setting will generalize across different reliability conditions or across users.

Eyes Can't Always Tell: Fusing Eye Tracking and User Priors for User Modeling under AI Advice Conditions (2604.01741 - Sun et al., 2 Apr 2026), Related Work, subsection "Gaze signal for cognitive state inference"