Performance and Metacognition Disconnect when Reasoning in Human-AI Interaction

Published 25 Sep 2024 in cs.HC (arXiv:2409.16708v2)

Abstract: Optimizing human-AI interaction requires users to reflect on their own performance critically. Our paper examines whether people using AI to complete tasks can accurately monitor how well they perform. In Study 1, participants (N = 246) used AI to solve 20 logical problems from the Law School Admission Test. While their task performance improved by three points compared to a norm population, participants overestimated their performance by four points. Interestingly, higher AI literacy was linked to less accurate self-assessment. Participants with more technical knowledge of AI were more confident but less precise in judging their own performance. Using a computational model, we explored individual differences in metacognitive accuracy and found that the Dunning-Kruger effect, usually observed in this task, ceased to exist with AI. Study 2 (N = 452) replicates these findings. We discuss how AI levels metacognitive performance and consider consequences of performance overestimation for interactive AI systems enhancing cognition.
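The key quantity in the abstract is a calibration bias: the gap between a participant's estimated score and their actual score, with positive values indicating overestimation. A minimal sketch of that computation is below; this is an illustration of the measure as described, not the authors' computational model, and the participant numbers in the example are hypothetical, not study data.

```python
# Sketch of the overestimation measure described in the abstract:
# bias = estimated score - actual score (positive = overconfidence).
# This is NOT the paper's computational model, only the basic metric.

def calibration_bias(estimated_correct: int, actual_correct: int) -> int:
    """Return the self-assessment bias; positive values mean overestimation."""
    return estimated_correct - actual_correct

# Hypothetical participant: solved 14 of 20 LSAT items but believed 18 were correct.
bias = calibration_bias(estimated_correct=18, actual_correct=14)
print(bias)  # 4 -> overestimation by four points, matching the average reported above
```

Averaging this quantity over participants gives the group-level overestimation the study reports (roughly four points), while its spread across individuals is what a Dunning-Kruger analysis would examine.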
