
Toward a Global Regime for Compute Governance: Building the Pause Button

Published 25 Jun 2025 in cs.CY (arXiv:2506.20530v1)

Abstract: As AI capabilities rapidly advance, the risk of catastrophic harm from large-scale training runs is growing. Yet the compute infrastructure that enables such development remains largely unregulated. This paper proposes a concrete framework for a global "Compute Pause Button": a governance system designed to prevent dangerously powerful AI systems from being trained by restricting access to computational resources. We identify three key intervention points -- technical, traceability, and regulatory -- and organize them within a Governance--Enforcement--Verification (GEV) framework to ensure rules are clear, violations are detectable, and compliance is independently verifiable. Technical mechanisms include tamper-proof FLOP caps, model locking, and offline licensing. Traceability tools track chips, components, and users across the compute supply chain. Regulatory mechanisms establish constraints through export controls, production caps, and licensing schemes. Unlike post-deployment oversight, this approach targets the material foundations of advanced AI development. Drawing from analogues ranging from nuclear non-proliferation to pandemic-era vaccine coordination, we demonstrate how compute can serve as a practical lever for global cooperation. While technical and political challenges remain, we argue that credible mechanisms already exist, and that the time to build this architecture is now, before the window for effective intervention closes.

Summary

  • The paper introduces a compute governance framework featuring tamper-proof FLOP caps and regulatory controls to restrict AI training.
  • It details a tripartite Governance, Enforcement, and Verification (GEV) model that integrates technical, traceability, and regulatory mechanisms.
  • The study draws parallels with non-proliferation treaties to advocate for international cooperation and preemptive controls over high-risk AI development.

Introduction

The paper "Toward a Global Regime for Compute Governance: Building the Pause Button" argues for a global governance system to regulate the compute infrastructure that underpins advanced AI model training. As AI models become increasingly capable, so does their potential to cause societal disruption and harm. The paper proposes a concrete framework for preventing dangerously powerful AI systems from being developed by restricting access to the compute they require. It targets three intervention points (technical controls, traceability, and regulatory mechanisms) and organizes them within a Governance, Enforcement, and Verification (GEV) framework.

Proposed Compute Governance Framework

The core proposition is the establishment of a "Compute Pause Button," a governance mechanism designed to regulate the use of computational resources in AI training. The paper identifies three key intervention points:

  1. Technical Mechanisms: Incorporate modifications to hardware to enforce limits on computational power usage, such as tamper-proof FLOP caps. These could be implemented using secure hardware modules that prevent training runs from exceeding predefined compute thresholds.
  2. Traceability Mechanisms: Develop comprehensive traceability infrastructure to track chips and computational usage across the entire supply chain. This ensures visibility into who is accessing compute resources, thus preventing unmonitored large-scale training runs.
  3. Regulatory Mechanisms: Establish export controls, licensing schemes, and production caps to set rules for compute use and ensure compliance. These would align with international legal frameworks and utilize existing structures for implementation and oversight.

    Figure 1: Framework and Example Mechanisms for Compute Pause Button.
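The paper envisions tamper-proof *hardware* enforcement of FLOP caps; as a rough intuition pump, the same logic can be sketched in software. Everything below is a hypothetical illustration, not the paper's implementation: the cap value, the `estimate_step_flops` heuristic, and the function names are all assumptions made for the demo.

```python
# Illustrative sketch only: a software analogue of a hardware FLOP cap.
FLOP_CAP = 1e9  # predefined compute threshold in FLOPs (arbitrary demo value)


def estimate_step_flops(batch_size: int, params: int) -> float:
    """Rough forward+backward cost per step, using the common ~6 * params * tokens heuristic."""
    return 6.0 * params * batch_size


def run_capped_training(steps: int, batch_size: int, params: int) -> tuple[int, float]:
    """Run training steps until the next step would exceed the cumulative FLOP cap."""
    used = 0.0
    completed = 0
    for _ in range(steps):
        cost = estimate_step_flops(batch_size, params)
        if used + cost > FLOP_CAP:
            break  # a tamper-proof hardware cap would refuse further execution here
        used += cost
        completed += 1
    return completed, used
```

The key design point the paper makes is that such a check must live in secure hardware rather than in software like this, precisely so that the operator cannot remove the `break`.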

Governance, Enforcement, and Verification (GEV) Framework

The GEV framework provides the operational basis for compute governance, distinguishing between setting rules (governance), ensuring compliance (enforcement), and monitoring adherence (verification). This approach ensures an integrated system where each component reinforces the others:

  • Governance sets the legal and institutional standards for compute usage, similar to tax codes defining taxable incomes.
  • Enforcement involves proactive measures like FLOP caps to prevent violations, akin to payroll tax withholding.
  • Verification uses audits and ongoing monitoring to detect compliance breaches and to inform governance updates.

    Figure 2: Cyclical nature of GEV.
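The three GEV roles and their tax-system analogues can be sketched as a simple feedback loop. This is a hedged toy model, not the paper's mechanism: the rule, the request values, and the function names are all hypothetical.

```python
# Toy sketch of the GEV cycle: rules are set, violations are blocked up
# front, and the remaining record is audited after the fact.

def governance() -> dict:
    """Set the rule (analogous to a tax code defining taxable income)."""
    return {"flop_cap": 1_000}


def enforcement(request: int, rules: dict) -> bool:
    """Block violations before they occur (analogous to payroll withholding)."""
    return request <= rules["flop_cap"]


def verification(log: list, rules: dict) -> list:
    """Audit the record for breaches and feed findings back to governance."""
    return [r for r in log if r > rules["flop_cap"]]


rules = governance()
requests = (400, 900, 1_500)
log = [r for r in requests if enforcement(r, rules)]  # 1_500 is blocked up front
violations = verification(log, rules)  # empty: enforcement held, so audits confirm compliance
```

The cyclical point from Figure 2 is that a non-empty `violations` list would trigger a governance update, closing the loop.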

Implementing the Framework

The paper suggests multiple mechanisms to operationalize the framework:

  • Tamper-Proof FLOP Caps: Hardware-level FLOP caps act as a direct intervention, providing a failsafe against excessive compute use.
  • Model Locking: This mechanism restricts the unauthorized deployment of trained model weights, allowing regulation of how and where models are used post-training.
  • Offline Licensing: By implementing licensing systems that limit compute usage and require periodic renewal, this mechanism maintains control even in offline or air-gapped environments.
  • Traceability and Chain of Custody: These mechanisms ensure end-to-end visibility of compute resources, from chip production through deployment, strengthening auditability and compliance.
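Of these mechanisms, offline licensing is the most protocol-like: a license must be checkable without network access, which suggests a signed, expiring credential. The sketch below is an assumption-laden illustration, not the paper's design: real proposals would rely on hardware-provisioned keys and public-key signatures, whereas this demo uses a shared HMAC secret and invented field names.

```python
# Hedged sketch of offline licensing: a signed license carrying an expiry,
# verifiable with no network connection. The secret, fields, and helper
# names are illustrative assumptions.
import hashlib
import hmac
import json

REGULATOR_KEY = b"demo-shared-secret"  # stand-in for a hardware-provisioned key


def issue_license(chip_id: str, valid_seconds: int, now: float) -> dict:
    """Regulator signs a license binding a chip to an expiry time."""
    body = {"chip_id": chip_id, "expires_at": now + valid_seconds}
    payload = json.dumps(body, sort_keys=True).encode()  # canonical serialization
    sig = hmac.new(REGULATOR_KEY, payload, hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}


def license_valid(lic: dict, now: float) -> bool:
    """Chip-side check: signature must verify and the license must be unexpired."""
    payload = json.dumps(lic["body"], sort_keys=True).encode()
    expected = hmac.new(REGULATOR_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(lic["sig"], expected) and now < lic["body"]["expires_at"]
```

The periodic-renewal requirement falls out of the `expires_at` field: once a license lapses, the chip refuses further use until the regulator issues a fresh signature.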

The paper also discusses existing policy analogues like the Nuclear Non-Proliferation Treaty and the Chemical Weapons Convention, drawing parallels with compute governance. These analogues provide insights into effective implementation, such as the use of international cooperation and verification mechanisms.

Conclusion

The proposed architecture aims to shift the focus of AI governance from post-deployment oversight to preemptive controls on computational resources. By doing so, it seeks to prevent the unchecked development of powerful AI systems. The paper concludes that although technical and political challenges remain, credible mechanisms for governing compute already exist. Mobilizing political will and fostering international cooperation will be crucial to implementing this architecture. The framework offers a pragmatic approach to managing the rapid advancements in AI, helping to ensure technologies develop in a manner aligned with global safety and stability objectives.
