AI Agents Should Be Regulated Based on the Extent of Their Autonomous Operations
Abstract: This position paper argues that AI agents should be regulated by the extent to which they operate autonomously. AI agents with long-term planning and strategic capabilities can pose significant risks of human extinction and irreversible global catastrophes. While existing regulations often focus on computational scale as a proxy for potential harm, we argue that such measures are insufficient for assessing the risks posed by agents whose capabilities arise primarily from inference-time computation. To support our position, we discuss relevant regulations and recommendations from scientists regarding existential risks, as well as the advantages of using action sequences -- which reflect the degree of an agent's autonomy -- as a more suitable measure of potential impact than existing metrics that rely on observing environmental states.