
Transparent AI Disclosure Obligations: Who, What, When, Where, Why, How

Published 11 Mar 2024 in cs.HC and cs.CY (arXiv:2403.06823v2)

Abstract: Advances in Generative AI are resulting in AI-generated media output that is (nearly) indistinguishable from human-created content. This can drastically impact users and the media sector, especially given global risks of misinformation. While the currently discussed European AI Act aims at addressing these risks through Article 52's AI transparency obligations, its interpretation and implications remain unclear. In this early work, we adopt a participatory AI approach to derive key questions based on Article 52's disclosure obligations. We ran two workshops with researchers, designers, and engineers across disciplines (N=16), where participants deconstructed Article 52's relevant clauses using the 5W1H framework. We contribute a set of 149 questions clustered into five themes and 18 sub-themes. We believe these can not only help inform future legal developments and interpretations of Article 52, but also provide a starting point for Human-Computer Interaction research to (re-)examine disclosure transparency from a human-centered AI lens.


Summary

  • The paper introduces a participatory framework using the 5W1H approach to dissect the obligations set by Article 52 of the European AI Act.
  • It employs multi-disciplinary workshops that generated 149 targeted questions to address ethical, legal, and practical challenges in AI-generated content disclosure.
  • Findings underscore the need for standardized, clear disclosure protocols to mitigate misinformation and enhance stakeholder trust.

Introduction

The paper "Transparent AI Disclosure Obligations: Who, What, When, Where, Why, How" discusses the implications of Article 52 of the European AI Act concerning AI transparency obligations. It explores the challenges and considerations surrounding the disclosure of AI-generated content, which has become increasingly difficult to distinguish from human-created content. The research adopts a participatory approach, involving multi-disciplinary workshops to derive essential questions about AI disclosure.

Figure 1: Snapshots of the workshop sessions showing participants generating questions.

Background and Motivation

Advancements in Generative AI are rapidly transforming the media landscape, producing content that closely resembles human-created material. This introduces significant risks of misinformation, which the European AI Act aims to address through its transparency obligations. Article 52 requires providers to inform people when they are interacting with AI systems, and users of AI systems to disclose when content has been artificially generated or manipulated. The complexity of this task lies in the lack of standardized definitions and protocols for disclosure, and in the broader implications for human-computer interaction (HCI) and societal trust in AI.

Methodology

The authors conducted two workshops with researchers, designers, and engineers from diverse fields (N=16) to explore the nuances of Article 52. Using the 5W1H framework (Who, What, When, Where, Why, How), participants deconstructed the Article's relevant clauses to generate 149 questions, which were clustered into five themes and 18 sub-themes. This participatory approach ensures that various perspectives on AI transparency are considered, encouraging interdisciplinary dialogue. The themes address ethical concerns, implementation challenges, societal impacts, provider responsibilities, and user empowerment.

Key Findings

The workshops highlighted ethical concerns regarding the communication of AI limitations and the potential dangers of AI-generated content. Legal questions arose about responsibility and accountability for non-disclosure, emphasizing the need for clear regulations and standards. Participants emphasized the importance of defining authenticity and of providing guidelines for effective transparency.

Practical Challenges and Evolving Context

The study identified practical challenges in implementing transparency measures, such as adapting disclosures to different devices and contexts. It underscored the importance of future-proofing AI disclosures against rapid technological advancements and varying societal impacts, especially in contexts where misinformation can propagate quickly.

Provider Responsibility and Industry Impact

Themes related to provider obligations focused on the motivations for and against disclosing AI usage. Providers must balance transparency with operational realities, considering industry-specific challenges and benefits. The workshops suggested that AI's role should be clearly communicated to avoid potential misrepresentations and ethical dilemmas.

Trust, Authenticity, and User Empowerment

Building user trust through authenticity is critical. Participants discussed mechanisms for verifying AI content and debated the psychological impacts of disclosure on trust and perceived credibility. Empowering users with educational tools to identify AI-generated content can reduce misinformation risks and enhance societal trust.

User Experience and Personalization

The research delved into user experience aspects such as minimizing information overload and optimizing personalization in AI interactions. Standardizing disclosure methods and integrating user-centric design can help maintain user engagement without detracting from content quality. The challenge is balancing adequate disclosure with enhancing user interaction and accessibility.
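One way to read "balancing adequate disclosure with user engagement" is as progressive disclosure: show a compact label by default and reveal detail only on user request. The function below is a hypothetical sketch, not anything specified in the paper; the label text and the provenance keys (`model`, `generated_on`, `publisher`) are assumptions for illustration.

```python
def disclosure_label(expanded: bool, provenance: dict) -> str:
    """Return a compact AI-disclosure label, or full detail when expanded.

    `provenance` is a hypothetical metadata record (e.g. from a
    content-credential pipeline); its keys are assumed for illustration.
    """
    short = "AI-generated content"
    if not expanded:
        return short  # compact label minimizes information overload
    lines = [short]
    for key in ("model", "generated_on", "publisher"):
        if key in provenance:
            lines.append(f"{key.replace('_', ' ')}: {provenance[key]}")
    return "\n".join(lines)

meta = {"model": "example-model-v1", "generated_on": "2024-03-11"}
print(disclosure_label(False, meta))  # compact label shown by default
print(disclosure_label(True, meta))   # expanded detail on user request
```

The design choice mirrored here is that the default view stays terse and standardized, while richer provenance is one interaction away, which is one plausible answer to the overload-versus-adequacy tension the section describes.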

Discussion

The paper's approach reveals the complexity involved in crafting effective AI transparency measures and highlights the need for comprehensive interdisciplinary strategies. Future AI designs must consider evolving societal contexts and the implications of widespread misinformation. Enhancing communication channels between policy-makers, technologists, and the public is essential to foster trust and accountability.

Conclusion

Transparent AI disclosure is a multifaceted issue requiring ongoing refinement as technology advances. The paper's participatory approach unveils critical insights that can guide the development of practical, user-centric AI transparency frameworks. By addressing legal, ethical, and practical challenges, stakeholders can better align AI systems with societal values and expectations, ensuring responsible AI deployment in media and beyond.
