Seamful XAI: Operationalizing Seamful Design in Explainable AI
Abstract: Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps. While black-boxing AI systems can make the user experience seamless, hiding the seams risks disempowering users to mitigate the fallout of AI mistakes. Instead of hiding these AI imperfections, can we leverage them to help the user? While Explainable AI (XAI) has predominantly tackled algorithmic opaqueness, we propose that seamful design can foster AI explainability by revealing and leveraging sociotechnical and infrastructural mismatches. We introduce the concept of Seamful XAI by (1) conceptually transferring "seams" to the AI context and (2) developing a design process that helps stakeholders anticipate and design with seams. We explore this process with 43 AI practitioners and real end users, using a scenario-based co-design activity informed by real-world use cases. We found that the Seamful XAI design process helped users foresee AI harms, identify their underlying reasons (seams), locate those seams in the AI's lifecycle, and learn how to leverage seamful information to improve XAI and user agency. We share empirical insights, implications, and reflections on how this process can help practitioners anticipate and craft seams in AI, and on how seamfulness can improve explainability, empower end users, and facilitate Responsible AI.