Connecting the Dots in Trustworthy Artificial Intelligence: From AI Principles, Ethics, and Key Requirements to Responsible AI Systems and Regulation
Abstract: Trustworthy AI is based on seven technical requirements sustained over three main pillars that should be met throughout the system's entire life cycle: it should be (1) lawful, (2) ethical, and (3) robust, both from a technical and a social perspective. However, attaining truly trustworthy AI demands a wider vision that comprises the trustworthiness of all processes and actors involved in the system's life cycle, and that examines these aspects through different lenses. This more holistic vision contemplates four essential axes: the global principles for the ethical use and development of AI-based systems, a philosophical take on AI ethics, a risk-based approach to AI regulation, and the aforementioned pillars and requirements. The seven requirements (human agency and oversight; robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability) are analyzed from a triple perspective: what each requirement for trustworthy AI is, why it is needed, and how it can be implemented in practice. In turn, a practical approach to implementing trustworthy AI systems allows us to define the concept of responsibility of AI-based systems before the law through a given auditing process. A responsible AI system is thus the notion we introduce in this work: a concept of utmost necessity that can be realized through auditing processes, subject to the challenges posed by the use of regulatory sandboxes. Our multidisciplinary vision of trustworthy AI culminates in a debate on the diverging views published lately about the future of AI. Our reflections on this matter conclude that regulation is key to reaching a consensus among these views, and that trustworthy and responsible AI systems will be crucial for the present and future of our society.