
Perceptions of Sentient AI and Other Digital Minds: Evidence from the AI, Morality, and Sentience (AIMS) Survey

Published 11 Jul 2024 in cs.AI, cs.CY, cs.ET, and cs.HC | (2407.08867v3)

Abstract: Humans now interact with a variety of digital minds, AI systems that appear to have mental faculties such as reasoning, emotion, and agency, and public figures are discussing the possibility of sentient AI. We present initial results from 2021 and 2023 for the nationally representative AI, Morality, and Sentience (AIMS) survey (N = 3,500). Mind perception and moral concern for AI welfare were surprisingly high and significantly increased: in 2023, one in five U.S. adults believed some AI systems are currently sentient, and 38% supported legal rights for sentient AI. People became more opposed to building digital minds: in 2023, 63% supported banning smarter-than-human AI, and 69% supported banning sentient AI. The median 2023 forecast was that sentient AI would arrive in just five years. The development of safe and beneficial AI requires not just technical study but understanding the complex ways in which humans perceive and coexist with digital minds.

Summary

  • The paper's main contribution is empirical insight into public perceptions of AI's mental faculties and moral status, based on nationally representative survey data.
  • The study employs longitudinal survey methods to assess evolving perceptions of rationality and emotionality in AI, including LLMs.
  • The paper highlights key policy implications, as 69.5% of respondents support halting sentient AI development due to future threat concerns.

Summary of "Perceptions of Sentient AI and Other Digital Minds"

Introduction

The paper "Perceptions of Sentient AI and Other Digital Minds" by Jacy Reese Anthis and colleagues examines public opinion on sentient AI. Drawing on the nationally representative, longitudinal Artificial Intelligence, Morality, and Sentience (AIMS) survey, the study documents how the U.S. public perceives sentient AI across several dimensions: mind perception, moral status, policy preferences, and forecasts of sentient AI timelines.

Key Findings

Mind Perception

The study assesses general mind perception through attributes such as analytical thinking, rationality, emotional experience, and feelings. AIs are generally perceived as rational and capable of analytical thinking, but less so as experiencing emotions or having feelings. Attributions of these capacities increased from 2021 to 2023.

For LLMs specifically, perceived mental faculties are lower than for AI in general. The attributes assessed include friendliness, situational awareness, and having human-safe goals. The findings suggest a cautious attribution of mental faculties to LLMs, with respondents crediting cooperative behavior more readily than self-awareness or independent motivation.

Moral Status

Moral status is another significant dimension of the study. Respondents express greater moral concern for sentient AIs than for non-sentient AIs: for instance, 71.1% agree that sentient AIs deserve to be treated with respect, a significantly higher level of agreement than for AI in general. People are more ambivalent, however, about granting AIs legal rights.

Threat perception is also an area of concern; a substantial majority believe that AIs could potentially harm future generations, and this belief has intensified over the years. The study corroborates the idea that while people have moral concerns for sentient AIs, they are also wary of potential threats posed by AI advancements.

Policy Support

The study finds broad support for policy proposals aimed at governing the development of sentient AI: banning the development of sentient AI, regulations to slow AI advancement, and welfare standards to protect AIs all draw majority backing.

Public opinion is particularly supportive of restrictive measures. Notably, 69.5% of respondents support a ban on developing sentience in AIs, underscoring a cautious public stance toward AI advancement.

Forecasting Sentient AI Timelines

The study also reports expected timelines for the emergence of sentient AI. The median forecast places its arrival within just five years, a strikingly short horizon. Respondents predicted similarly short timelines for related milestones such as artificial general intelligence (AGI) and superintelligence.

Implications and Future Research

The multiple dimensions covered in the study suggest broad implications for HCI (Human-Computer Interaction) research and practical AI development:

  1. Range of User Reactions: Variations in public opinion across demographics indicate the need for AI systems to adapt to a diverse range of user interactions. Explainable AI (XAI) frameworks could help bridge the gap between user expectations and system capabilities.
  2. Amplifying and Complicating HCI Dynamics: Perception of AI as possessing mental faculties could amplify existing HCI dynamics while also introducing new complexities. Future HCI designs need to incorporate mechanisms that appropriately signal the capabilities and limitations of AI systems.
  3. Regulatory Landscape: The significant public support for regulatory measures reflects societal apprehension about rapid AI advancements. It is vital for policymakers and AI developers to consider these concerns seriously to build trust and ensure ethical AI deployment.
  4. Design Precautions: Designers need to avoid over- or underattributing social, mental, and moral characteristics to AI systems to prevent unrealistic expectations and misuse.

Conclusion

The paper contributes a significant empirical foundation to the discourse on human-AI interaction, particularly around perceived sentience and moral status. As AI technology continues to evolve, understanding public opinion through rigorous, repeatable surveys like AIMS will be crucial for both theoretical research and practical application. The study underscores the importance of thoughtful AI design and governance in shaping the trajectory of AI technologies, so that both opportunities and risks are adequately addressed.
