Explainable Human-AI Interaction: A Planning Perspective
Abstract: From its inception, AI has had a rather ambivalent relationship with humans -- swinging between their augmentation and their replacement. Now, as AI technologies enter our everyday lives at an ever-increasing pace, there is a greater need for AI systems to work synergistically with humans. One critical requirement for such synergistic human-AI interaction is that the AI systems be explainable to the humans in the loop. To do this effectively, AI agents need to go beyond planning with their own models of the world and take into account the mental model of the human in the loop. Drawing from several years of research in our lab, we will discuss how the AI agent can use these mental models either to conform to human expectations or to change those expectations through explanatory communication. While the main focus of the book is on cooperative scenarios, we will point out how the same mental models can be used for obfuscation and deception. Although the book is primarily driven by our own research in these areas, in every chapter we will provide ample connections to relevant research from other groups.