Where Assessment Validation and Responsible AI Meet

Published 4 Nov 2024 in cs.CY (arXiv:2411.02577v1)

Abstract: Validity, reliability, and fairness are core ethical principles embedded in classical argument-based assessment validation theory. These principles are also central to the Standards for Educational and Psychological Testing (2014), which recommended best practices for early applications of AI in high-stakes assessment, such as automated scoring of written and spoken responses. Responsible AI (RAI) principles and practices set forth by the AI ethics community are critical to ensuring the ethical use of AI across industry domains. Advances in generative AI have led to new policies as well as guidance about the implementation of RAI principles for assessments using AI. Building on Chapelle's foundational validity argument work, which addressed the application of assessment validation theory to technology-based assessment, we propose a unified assessment framework that integrates classical test validation theory with both assessment-specific and domain-agnostic RAI principles and practices. The framework addresses responsible AI use for assessment that supports validity arguments, alignment with AI ethics to maintain human values and oversight, and the broader social responsibility associated with AI use.
