
Artificial Intelligence in Election Campaigns: Perceptions, Penalties, and Implications

Published 8 Aug 2024 in cs.CY and cs.AI (arXiv:2408.12613v2)

Abstract: As political parties around the world experiment with AI in election campaigns, concerns about deception and manipulation are rising. This article examines how the public reacts to different uses of AI in elections and the potential consequences for party evaluations and regulatory preferences. Across three preregistered studies with over 7,600 American respondents, we identify three categories of AI use -- campaign operations, voter outreach, and deception. While people generally dislike AI in campaigns, they are especially critical of deceptive uses, which they perceive as norm violations. However, parties engaging in AI-enabled deception face no significant drop in favorability, neither with supporters nor opponents. Instead, deceptive AI use increases public support for stricter AI regulation, including calls for an outright ban on AI development. These findings reveal a misalignment between public disapproval of deceptive AI and the political incentives of parties, underscoring the need for targeted regulatory oversight. Rather than banning AI in elections altogether, regulation should distinguish between harmful and beneficial applications to avoid stifling democratic innovation.

Summary

  • The paper finds that deceptive AI practices in elections markedly increase public support for an outright ban on AI development, from 29% to 38% of respondents.
  • It combines a preregistered representative survey with two preregistered experiments, totaling 7,635 respondents, to assess public attitudes.
  • The study highlights that while voters strongly disapprove of AI manipulation, partisan loyalties may cushion political consequences despite negative sentiments.

Examining Attitudes Toward AI Use in Election Campaigns: Public Perception and Regulatory Implications

The paper "Deceptive Uses of Artificial Intelligence in Elections Strengthen Support for AI Ban" provides a comprehensive study of public perception and the potential implications of AI use in political campaigns. Through a preregistered representative survey and two preregistered experiments with a substantial combined sample (n = 7,635), the authors, Jungherr, Rauchfleisch, and Wuttke, explore the nuances of AI-enabled electoral activities and the varying public concerns associated with them.

Key Findings and Numerical Results

The research identifies three primary categories of AI use in elections: campaign operations, voter outreach, and deception. It highlights three core findings that capture the public's attitude towards these practices:

  1. Public Perception of AI Use in Elections: Respondents generally viewed AI use in elections negatively, but they objected most strongly to deceptive uses, compared with operational or voter-outreach uses. For example, 76.37% of respondents disliked the use of AI to create deceptive social media content.
  2. Impact of Deceptive AI Practices: Exposure to deceptive AI practices substantially increased perceived threat and perceived norm violations. Individuals who learned about deceptive AI use were more inclined to support stringent regulatory oversight and even a moratorium on AI development. Public support for an AI ban rose from 29% to 38% upon exposure to deceptive uses.
  3. Partisan Responses: Even after learning of their own party's involvement in deceptive AI practices, partisans showed little punitive response in party favorability. Equivalence tests confirmed the absence of a substantial drop in favorability ratings, underscoring that public opinion alone is an insufficient deterrent for parties considering deceptive AI tactics.

Implications and Speculative Directions in AI

The paper's results underscore the need for nuanced regulatory frameworks that address the diversity of AI uses in elections. Broadly, it illustrates a misalignment between political incentives and public trust. Because public disapproval translates into little political penalty, the authors argue that regulatory intervention is needed to curb deceptive practices without stifling productive AI uses in electoral processes.

The study further suggests that AI's role in elections could catalyze a broader discourse on AI governance, one that prioritizes safety over innovation when harms to democratic processes are perceived. It cautions regulatory bodies against blanket restrictions, which could hamper beneficial AI applications in campaign operations and voter outreach that enhance democratic participation and engagement.

Future Developments

The research opens avenues for refining AI policy by highlighting areas such as AI fairness, transparency, and accountability in electoral contexts. It calls for comparative international research to understand how responses differ across political environments and to explore the interplay between AI and public trust globally.

Moreover, the paper points to the need to balance innovation and regulation so that AI's positive potential can be leveraged while its detrimental uses are curbed. With society's increasing dependence on technology, such regulatory foresight is pivotal. Future studies could quantify the tangible impacts of AI regulations on electoral behavior and democratic health, steering the dialogue toward more informed and effective AI governance strategies.

In conclusion, this research provides significant insights into the public's nuanced viewpoints on AI's role in elections. It offers critical considerations for policymakers striving to establish regulatory frameworks that uphold electoral integrity without curtailing the benefits of AI innovations in political engagement.
