Do We Know What They Know We Know? Calibrating Student Trust in AI and Human Responses Through Mutual Theory of Mind
Abstract: Trust and reliance are often treated as coupled constructs in human-AI interaction research, with the assumption that calibrating trust will lead to appropriate reliance. We challenge this assumption in educational contexts, where students increasingly turn to AI for learning support. Through semi-structured interviews with graduate students (N=8) comparing AI-generated and human-generated responses, we find a systematic dissociation: students exhibit high trust but low reliance on human experts due to social barriers (fear of judgment, help-seeking anxiety), while showing low trust but high reliance on AI systems due to social affordances (accessibility, anonymity, judgment-free interaction). Using Mutual Theory of Mind as an analytical lens, we demonstrate that trust is shaped by epistemic evaluations while reliance is driven by social factors, and that the two may operate independently.