Robots, Chatbots, Self-Driving Cars: Perceptions of Mind and Morality Across Artificial Intelligences

Published 25 Feb 2025 in cs.HC (arXiv:2502.18683v1)

Abstract: AI systems have rapidly advanced, diversified, and proliferated, but our knowledge of how people perceive mind and morality in them is limited, despite its importance for outcomes such as whether people trust AIs and how they assign responsibility for AI-caused harms. In a preregistered online study, 975 participants rated 26 AI and non-AI entities. Overall, AIs were perceived to have low-to-moderate agency (e.g., planning, acting), between inanimate objects and ants, and low experience (e.g., sensing, feeling). For example, ChatGPT was rated as only as capable of feeling pleasure and pain as a rock. The analogous moral faculties, moral agency (doing right or wrong) and moral patiency (being treated rightly or wrongly), were rated higher and more varied, particularly moral agency: the highest-rated AI, a Tesla Full Self-Driving car, was rated as being as morally responsible for harm as a chimpanzee. We discuss how design choices can help manage perceptions, particularly in high-stakes moral contexts.