Structural transparency of societal AI alignment through Institutional Logics
Abstract: The field of AI alignment is increasingly concerned with the questions of how values are integrated into the design of generative AI systems and how their integration shapes the social consequences of AI. However, existing transparency frameworks focus on the informational aspects of AI models, data, and procedures, while the institutional and organizational forces that shape alignment decisions and their downstream effects remain underexamined in both research and practice. To address this gap, we develop a framework of \emph{structural transparency} for analyzing organizational and institutional decisions concerning AI alignment, drawing on the theoretical lens of Institutional Logics. We develop a categorization of organizational decisions present in the governance of AI alignment and provide an explicit analytical approach to examining them. We operationalize the framework through five analytical components, each with an accompanying "analyst recipe"; together, these identify the primary institutional logics and their internal relationships, external disruptions to existing social orders, and, finally, how the structural risks of each institutional logic map to a catalogue of sociotechnical harms. The proposed concept of structural transparency enables analysts to complement existing approaches based on informational transparency with macro-level analyses that capture the institutional dynamics and consequences of decisions regarding AI alignment.