Towards Responsibly Governing AI Proliferation
Abstract: This paper argues that existing governance mechanisms for mitigating risks from AI systems are based on the 'Big Compute' paradigm -- a set of assumptions about the relationship between AI capabilities and infrastructure -- that may not hold in the future. To address this, the paper introduces the 'Proliferation' paradigm, which anticipates the rise of smaller, decentralized, open-sourced AI models that are easier to augment and easier to train without detection. It posits that these developments are probable and likely to introduce both benefits and novel risks that are difficult to mitigate through existing governance mechanisms. The final section explores governance strategies to address these risks, focusing on access governance, decentralized compute oversight, and information security. Whilst these strategies offer potential solutions, the paper acknowledges their limitations and cautions that policymakers must weigh their benefits against developments that could lead to a 'vulnerable world'.