
Building Ethics into Artificial Intelligence

Published 7 Dec 2018 in cs.AI | (1812.02953v1)

Abstract: As AI systems become increasingly ubiquitous, the topic of AI governance for ethical decision-making by AI has captured public imagination. Within the AI research community, this topic remains less familiar to many researchers. In this paper, we complement existing surveys, which largely focused on the psychological, social and legal discussions of the topic, with an analysis of recent advances in technical solutions for AI governance. By reviewing publications in leading AI conferences including AAAI, AAMAS, ECAI and IJCAI, we propose a taxonomy which divides the field into four areas: 1) exploring ethical dilemmas; 2) individual ethical decision frameworks; 3) collective ethical decision frameworks; and 4) ethics in human-AI interactions. We highlight the intuitions and key techniques used in each approach, and discuss promising future research directions towards successful integration of ethical AI systems into human societies.

Citations (178)

Summary

  • The paper presents a comprehensive survey of technical strategies that integrate ethics into AI, categorizing ethical dilemmas, individual and collective decision frameworks, and human-AI interactions.
  • It details methodologies such as rule-based, analogical reasoning, and machine learning approaches to balance ethical principles with autonomous decision-making.
  • The study advocates for interdisciplinary collaboration, culturally sensitive data collection, and explainable AI to ensure accountable and robust ethical AI systems.

Building Ethics into Artificial Intelligence

The integration of ethical considerations into AI systems has become increasingly pertinent as AI technologies permeate daily life. The paper "Building Ethics into Artificial Intelligence" provides a comprehensive survey of technical solutions for incorporating ethics into AI systems, categorizing advancements into key areas and proposing future research directions.

Taxonomy of Ethical AI

The paper structures the field of AI ethics into four primary categories: exploring ethical dilemmas, individual ethical decision frameworks, collective ethical decision frameworks, and ethics in human-AI interactions.

  1. Exploring Ethical Dilemmas: Tools like GenEth and the MIT Moral Machine aim to identify and analyze human ethical preferences in dilemmas, especially concerning autonomous systems. These tools leverage expert review and crowdsourcing to articulate ethical principles and human attitudes, which are crucial for designing ethically aware AI systems.
  2. Individual Ethical Decision Frameworks: Frameworks such as MoralDM incorporate rule-based and analogical reasoning for ethical decision-making in AI systems. They aim to balance ethical principles with individual agent actions, using constructs like the Belief-Desire-Intention model. Recent advancements also include integrating game theory and machine learning to form generalized ethical decision-making models.
  3. Collective Ethical Decision Frameworks: This area focuses on enabling a group of autonomous agents, whether AI or human, to make collective decisions aligned with ethical norms. Techniques involve using social norms and voting systems derived from human preference data to guide decision-making processes.
  4. Ethics in Human-AI Interactions: The paper examines the ethical aspects of AI systems designed to influence human behavior, proposing guidelines drawn from fields such as behavioral science. Reported findings highlight how nuanced human perceptions of ethical persuasion by AI are, requiring a careful balance between persuasive strategy and emotional sensitivity.
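The voting-based aggregation mentioned under collective frameworks can be illustrated with a minimal sketch. The Borda-count aggregator below, with entirely hypothetical action names and rankings, shows how crowdsourced preference data might be reduced to a single collective choice; it is an illustration of the general idea, not the specific method used in the surveyed papers.

```python
from collections import defaultdict

def borda_winner(rankings):
    """Aggregate ranked preferences with a Borda count.

    rankings: list of lists, each an ordering of alternatives from
    most to least preferred. Returns the alternative with the
    highest total Borda score.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, alternative in enumerate(ranking):
            scores[alternative] += n - 1 - position  # top choice earns n-1 points
    return max(scores, key=scores.get)

# Three hypothetical crowdsourced rankings of actions in a driving dilemma.
votes = [
    ["swerve", "brake", "continue"],
    ["brake", "swerve", "continue"],
    ["brake", "continue", "swerve"],
]
print(borda_winner(votes))  # "brake" (scores: brake 5, swerve 3, continue 1)
```

In practice such aggregators raise well-known social-choice issues (e.g. sensitivity to how alternatives are framed), which is part of why the paper treats collective ethical decision-making as its own research area.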

Future Directions

The authors underscore several research avenues:

  • Cultural and Contextual Diversities: Improved data collection on human ethical decision-making across diverse cultural contexts is essential. This diversity can help refine AI frameworks to better align with varied ethical standards and expectations.
  • AI and Social Contracts: As AI becomes more integrated into society, redefining social and legal frameworks governing AI actions and accountability is necessary. This entails a dynamic approach that adapts to evolving societal and technological conditions.
  • Explainable AI for Ethics: Developing means for AI systems to explain their ethical choices is crucial. By utilizing techniques from argumentation theory, AI systems can provide transparency in their decision-making, fostering user trust and understanding.
  • Robust Ethical AI Dynamics: Because an AI system's ethical constraints can make its behavior predictable, humans may strategically exploit them. Game-theoretic mechanisms could be used to anticipate and counteract such adverse strategic behavior, preserving the system's integrity and design objectives.
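The game-theoretic anticipation suggested above can be sketched as a tiny leader-follower (Stackelberg-style) interaction. All policy names and payoff numbers below are hypothetical; the point is only that an agent which models the human's best response can avoid committing to a policy that invites exploitation.

```python
# Payoff tables for a toy 2x2 interaction (hypothetical numbers).
# The AI commits to a policy first; the human then best-responds.
# ai_payoff measures how well the AI's ethical design objective is met.
ai_payoff = {
    ("strict", "comply"): 3, ("strict", "exploit"): 2,
    ("lenient", "comply"): 4, ("lenient", "exploit"): 0,
}
human_payoff = {
    ("strict", "comply"): 2, ("strict", "exploit"): 1,
    ("lenient", "comply"): 2, ("lenient", "exploit"): 3,
}

def best_ai_policy():
    """Pick the AI policy whose anticipated human best response
    yields the highest payoff for the AI's design objective."""
    best = None
    for policy in ("strict", "lenient"):
        # Anticipate the human's best response to this policy.
        response = max(("comply", "exploit"),
                       key=lambda r: human_payoff[(policy, r)])
        value = ai_payoff[(policy, response)]
        if best is None or value > best[1]:
            best = (policy, value)
    return best[0]

print(best_ai_policy())  # "strict": "lenient" would invite exploitation
```

Here a naive AI would prefer "lenient" (payoff 4 if the human complies), but anticipating that the human's best response to leniency is to exploit it (payoff 0), the robust choice is "strict".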

Conclusion

The paper stresses the importance of interdisciplinary collaboration in embedding ethical considerations into AI technologies. As these systems continue to evolve, ongoing research and dialogue among AI practitioners, ethicists, and regulatory bodies are essential for developing comprehensive, culturally aware, and technically feasible solutions. This groundwork will better prepare AI systems to operate ethically within the diverse frameworks of human society.
