A living comparison of AI governance frameworks, legislation, and institutions worldwide. What's enacted, what's proposed, who the key actors are, and where enforcement has teeth.
Legislation
The world's approach to AI governance ranges from comprehensive binding law (EU, China, South Korea) to voluntary guidelines (India, Japan) to deliberate non-regulation (US federal level). Here is every major jurisdiction compared.
| Jurisdiction | Key Instrument(s) | Status | Enforcement | Max Penalty | Verified |
|---|---|---|---|---|---|
| EU | AI Act (2024/1689) | In Force (phased): prohibited practices Feb 2025; GPAI Aug 2025; high-risk Aug 2026 | EU AI Office + national authorities | EUR 35M / 7% global turnover | Mar 2026 |
| China | Algorithm Provisions (2022), Deep Synthesis (2023), Generative AI (2023), Content Labeling (2025), Cybersecurity Law (2026) | In Force; multiple regulations active | CAC-led, multi-agency | CNY 50M / 5% turnover | Mar 2026 |
| US (Federal) | Executive orders only | No comprehensive law; executive action only | Agency-by-agency (FTC, DOJ) | Case-by-case | Mar 2026 |
| US (Texas) | TRAIGA | In Force Jan 2026 | Texas AG | $10K-$200K/violation | Mar 2026 |
| US (California) | SB 53 (Frontier AI Transparency) | In Force Jan 2026 | California AG | $1M/violation | Mar 2026 |
| US (Colorado) | Colorado AI Act (SB 24-205) | Enacted; effective Jun 2026 | Colorado AG | $20K/violation | Mar 2026 |
| UK | Pro-innovation framework; no legislation | No AI law; bill delayed to summer 2026 | Existing sector regulators | N/A | Mar 2026 |
| Canada | None (AIDA died Jan 2025) | No framework | Existing laws (PIPEDA) | N/A | Mar 2026 |
| Brazil | PL 2338/2023 | Passed Senate; pending in Chamber | Not yet enacted | N/A | Mar 2026 |
| Japan | AI Promotion Act (2025) | Enacted (soft law); best-effort obligations | Name-and-shame; no fines | None | Mar 2026 |
| South Korea | AI Basic Act | In Force Jan 2026; 1-year grace period | MSIT | KRW 30M (~$20.5K) | Mar 2026 |
| India | AI Governance Guidelines (2025) | Non-binding | Existing laws only | N/A | Mar 2026 |
| International | Council of Europe AI Treaty | In Force Nov 2025 | National implementation | Per national law | Mar 2026 |
| International | G7 Hiroshima Code of Conduct | Voluntary | Self-reporting via OECD | None | Mar 2026 |
Risk-based (EU): Classify AI systems by risk level and regulate proportionally. High-risk systems face conformity assessments. The EU's 10^25 FLOPs threshold defines systemic risk for GPAI models.
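The two EU numbers above can be made concrete. Here is a minimal Python sketch (illustrative only, not legal analysis; the function names are mine) of the 10^25 FLOPs systemic-risk presumption and the "EUR 35M / 7% global turnover" penalty cap, where the cap is whichever figure is higher:

```python
# Illustrative sketch of two EU AI Act thresholds cited in this document.
# Not legal advice; names and structure are my own.

SYSTEMIC_RISK_FLOPS = 1e25  # training-compute threshold for GPAI systemic risk

def is_systemic_risk_gpai(training_flops: float) -> bool:
    """A GPAI model at or above 10^25 training FLOPs is presumed systemic-risk."""
    return training_flops >= SYSTEMIC_RISK_FLOPS

def eu_max_fine(global_turnover_eur: float) -> float:
    """Maximum penalty: EUR 35M or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000, global_turnover_eur * 7 / 100)

# A model trained with 3e25 FLOPs crosses the systemic-risk line:
print(is_systemic_risk_gpai(3e25))        # True
# For a firm with EUR 10B turnover, 7% (EUR 700M) exceeds the EUR 35M floor:
print(eu_max_fine(10_000_000_000))        # 700000000.0
```

For smaller firms the EUR 35M floor dominates: at EUR 100M turnover, 7% is only EUR 7M, so the cap stays at EUR 35M.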
Technology-by-technology (China): Regulate each AI technology type separately: algorithms (2022), deepfakes (2023), generative AI (2023), content labeling (2025). Fast iteration, at the cost of regulatory complexity.
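To illustrate how these instruments stack, here is a hypothetical Python lookup (regulation names and years are from the table above; the mapping from service features to instruments is my own simplification): a single generative-AI service with synthetic-media output can fall under several regulations at once, which is the complexity trade-off in practice.

```python
# Hypothetical sketch of China's layered, technology-specific regulations.
# Names and years come from the source; the feature mapping is simplified.

REGULATIONS = {
    "Algorithm Provisions": 2022,
    "Deep Synthesis Provisions": 2023,
    "Generative AI Measures": 2023,
    "Content Labeling Rules": 2025,
}

def applicable(uses_recommendation_algo: bool,
               produces_synthetic_media: bool,
               is_generative: bool) -> list[str]:
    """Return the instruments plausibly touching a service, oldest first."""
    rules = []
    if uses_recommendation_algo:
        rules.append("Algorithm Provisions")
    if produces_synthetic_media:
        rules.append("Deep Synthesis Provisions")
        rules.append("Content Labeling Rules")
    if is_generative:
        rules.append("Generative AI Measures")
    return sorted(rules, key=REGULATIONS.get)

# A recommendation-driven generative chatbot with synthetic output
# answers to four separate instruments:
print(applicable(True, True, True))
```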
Deregulatory (US federal): Minimize regulation to maintain competitive advantage. Focus on voluntary standards, industry self-governance, and removing regulatory barriers. Safety is framed as secondary to competitiveness.
Light-touch (UK, Japan, India): Rely on existing laws (privacy, consumer protection) supplemented by voluntary guidelines. No comprehensive AI-specific framework.
Enforcement
Laws on paper mean nothing without enforcement. Here's where AI governance is actually binding, and where it's performative.
| Jurisdiction | Instrument | Enforcement Evidence |
|---|---|---|
| China | AI regulations (multiple) | 5,000+ algorithm filings; regular CAC enforcement campaigns; mandatory pre-launch approval; penalties raised to CNY 50M (Jan 2026); documented enforcement waves in 2024-2025 |
| EU | AI Act | Penalties up to 7% turnover exist in law, but only 3/27 Member States have designated authorities; no fines issued yet; AI Office understaffed (~125 staff). Finland first with full enforcement powers (Dec 2025) |
| EU / GDPR | GDPR applied to AI | Italy fined OpenAI EUR 15M (Dec 2024); Italy banned ChatGPT (Mar 2023); multiple DPA investigations across EU |
| US (FTC) | FTC Act (Section 5) | Operation AI Comply (2024-2025) brought multiple cases; then vacated the Rytr consent order (Dec 2025); retreating under Trump leadership |
| US (States) | Texas TRAIGA, CA SB 53 | Both effective Jan 2026; no enforcement actions yet. Face federal preemption threat from Dec 2025 EO |
| South Korea | AI Basic Act | 1-year enforcement grace period; max fine only ~$20.5K; emphasis on guidance first |
| Japan | AI Promotion Act | "Best-effort" obligations; no fines; name-and-shame only |
| International | All frameworks | G7 Code, OECD Principles, UN Compact: all voluntary. Council of Europe treaty depends on national implementation. No international enforcement body |
Industry self-governance sits alongside these public frameworks, and none of it is externally verified:

| Framework | Organization | External Verification? | Credibility Issue |
|---|---|---|---|
| Responsible Scaling Policy (RSP) | Anthropic | No | v3.0 (Feb 2026) replaced binding pause with competitive conditional: "will only pause if we have a significant lead" |
| Preparedness Framework | OpenAI | No | Contains competitive escape clause. Superalignment team dissolved May 2024. "Safely" removed from mission Nov 2025 |
| Frontier Safety Framework | Google DeepMind | No | Internal governance only. Critical Capability Levels not externally audited |
| Frontier Model Forum | Industry consortium | No | No binding commitments. No public accountability mechanism |
| AVERI (AI Assurance Levels) | Miles Brundage (nonprofit) | Building | Launched Jan 2026. $7.5M raised. Not yet conducting audits. Developing standards for independent verification |
Institutions
The institutional landscape of AI governance spans government agencies, multilateral bodies, industry forums, and civil society organizations. Here are the most consequential actors.
EU AI Office: Direct oversight of GPAI models. Coordinates the AI Board, Scientific Panel, and Advisory Forum. Published the GPAI Code of Practice (Jul 2025). Understaffed relative to its mandate. No enforcement actions yet.
US CAISI (Center for AI Standards and Innovation, NIST): Formerly the AI Safety Institute; "safety" removed from the name. Focus shifted from safety evaluation to standards and innovation. New agentic AI initiative (Feb 2026). $20M for two new AI centers with MITRE.
UK AISI (AI Security Institute): Formerly the AI Safety Institute. Refocused on national security risks. GBP 27M Alignment Project (with OpenAI/Microsoft contributions). End-to-end biosecurity red-teaming. 62,000 agent vulnerabilities identified.
Cyberspace Administration of China (CAC): Primary drafter and enforcer of all Chinese AI regulations. Operates the national algorithm filing platform (5,000+ filings). Conducts regular enforcement campaigns. Pre-launch gatekeeper for generative AI services.
US DOJ AI Litigation Task Force: Created by the Dec 2025 EO to challenge state AI laws in federal court. Commerce Dept evaluation of state laws due Mar 2026. Threatens $42B in BEAD broadband funding to states with "onerous AI laws."
UN Independent International Scientific Panel on AI: Independent, multidisciplinary scientific assessments. First report due Jul 2026 in Geneva. The US voted against establishment (117-2). Advisory only; no enforcement power.
OECD: Manages G7 Hiroshima Code of Conduct reporting. Updated its AI Principles (May 2024). Developing an AI Policy Toolkit. Influential, but all of its frameworks are voluntary.
International Network of AI Safety Institutes: Launched at the Seoul Summit. Members include the US (CAISI), UK (AISI), EU, France, Japan, Kenya, Korea, Singapore, Canada, and Australia. But the US and UK have rebranded away from "safety."
CSET (Center for Security and Emerging Technology, Georgetown): Data-driven AI policy analysis informing Congress and the executive branch. One of the most influential DC policy research centers on AI.
GovAI (Centre for the Governance of AI): Leading compute governance research. Lennart Heim's former base. Publishes foundational AI governance research.
Open Philanthropy: One of the largest funders of AI governance. Funds CAIS, GovAI, CSET, and many others. AI governance RFP closed Jan 2026. Growing giving in 2026.
AVERI (AI Verification and Evaluation Research Institute): Developing independent auditing standards for frontier AI. "AI Assurance Levels" framework (L1-L4). $7.5M raised.
Debates
AI governance is shaped by unresolved tensions. Each debate presents genuinely strong arguments on both sides.
Core question: Should frontier AI require government licenses, or is mandatory transparency sufficient?
Direction of travel: Disclosure winning. CA SB 53 (Jan 2026) chose transparency over licensing. Federal licensing is a non-starter under Trump.
Core question: Can AI be governed through chips, compute, and infrastructure?
Direction of travel: Becoming more nuanced. DeepSeek shock forced rethinking. Debate shifting from "restrict chips" to "what complementary measures?" including on-chip governance, cloud KYC, international coordination.
Core question: Should frontier AI be open-sourced? At what capability level does openness become too dangerous?
Direction of travel: Consensus fracturing. Meta's partial retreat to proprietary "Avocado" model. EU lighter obligations for open-source but not for systemic risk models. No global consensus.
Core question: When AI causes harm, who pays — developer, deployer, or user?
Direction of travel: Unresolved. EU withdrew its AI Liability Directive. US has no federal framework. Likely to be defined by litigation rather than legislation.
Core question: Is a binding global AI governance body feasible?
Direction of travel: International frameworks proliferating but toothless. Council of Europe treaty is binding but narrow. Everything else voluntary. The US institutional retreat (AISI rebranded, TTC dormant) creates a governance vacuum.
Core question: Are frontier AI labs writing their own rules?
Direction of travel: Growing skepticism of self-governance. AVERI (Jan 2026) building independent auditing standards. But: no regulator has the technical capacity to match frontier labs.
Core question: Is safety regulation compatible with competitiveness?
Direction of travel: Increasingly geopoliticized. US: safety = competitive handicap. EU: partially retreating (Digital Omnibus delays). Practical convergence toward disclosure-based approaches.
Core question: Do existential-risk framings distract from near-term harms?
Direction of travel: Evolving past the binary. PNAS 2025 study: x-risk narratives don't distract from near-term concerns. Emerging "accumulative x-risk" framing connects near-term harms to long-term threats.
Core question: Who sets the rules for military AI?
The Anthropic-Pentagon standoff (Feb-Mar 2026) exposed the governance vacuum: Anthropic refused military contract conditions; the Trump administration retaliated; OpenAI stepped in with softer conditions. MIT Tech Review called it "a race to the bottom."
Direction of travel: 156 countries support autonomous weapons oversight (UN vote Nov 2025). But: major military powers block binding negotiations. No legislation governs military AI in any jurisdiction.
Core question: Can the US or China "win" an AI race, and at what cost to governance?
Direction of travel: Race narrative dominant in Washington and Beijing. Export controls face enforcement challenges (smuggling, DeepSeek). The likely outcome is sustained competition with different comparative advantages rather than a single winner.
Key People
Timeline
Major inflection points from ChatGPT's release through March 2026.
Analysis
No jurisdiction governs military AI comprehensively. EU AI Act excludes it. China focuses on civilian use. The Anthropic-Pentagon standoff showed governance defaults to whatever the least restrictive lab will accept.
Autonomous agents (multi-step actions, tool use, financial transactions) not addressed by any enacted legislation. EU AI Act was drafted before agentic AI. NIST CAISI launched initiative Feb 2026 but standards are years away.
No mechanism for international AI enforcement cooperation. EU AI Act has extraterritorial scope but untested. A company can relocate to avoid regulation. No AI-specific mutual legal assistance.
Treatment varies wildly: EU lighter for open-source; China focuses on service providers; no global consensus on open-weight frontier models. Meta's retreat to proprietary models may reduce urgency but doesn't resolve the question.
EU withdrew AI Liability Directive (Oct 2025). US has no federal AI liability framework. Section 230 implications unresolved. When AI causes harm, who pays remains an open question everywhere.
US chip export controls face enforcement challenges. EU/Korea set compute thresholds but no international framework exists. DeepSeek showed algorithmic efficiency can bypass hardware restrictions.
$400B+ US AI capex, massive energy consumption. No jurisdiction has binding AI energy reporting or efficiency standards. Coalition for Sustainable AI (Paris 2025) is voluntary.
AI capabilities advance faster than governance can adapt. The EU AI Act was drafted before GPT-4, before agentic AI, before AI agents could browse the web and take autonomous actions. Every jurisdiction's framework is already partially obsolete.
The 2023 Bletchley moment, when 28 countries agreed on coordinated safety action, has splintered. The US retreated from safety framing. The UK softened to "security." Successive summits rebranded from "Safety" to "Action" to "Impact." "Safety" itself became politically coded.
Power concentrates at three levels. In labs: a handful of companies make most safety decisions unilaterally. In one country: US-based labs, US chip controls, and US executive orders shape everything. In one office: a president can reverse a predecessor's entire framework on day one.
Even where laws exist, enforcement lags. EU AI Office: ~125 staff for the world's most complex AI law. Most Member States haven't designated regulators. China enforces but opaquely. US federal government pulling back. The gap between law-on-paper and law-in-practice is vast.
Framing AI as a US-China "race" treats governance as a handicap. Trump's Dec 2025 EO warned AI leadership would be "DESTROYED IN ITS INFANCY" by regulation. This creates political pressure against any rule that might slow development.
| Event | Date | Confidence | Impact |
|---|---|---|---|
| US Commerce Dept evaluation of state AI laws | Mar 2026 | High | May trigger DOJ challenges |
| Colorado AI Act takes effect | Jun 2026 | High | First comprehensive US state AI law enforceable (unless preempted) |
| UN Global Dialogue on AI Governance | Jul 2026 | High | First session; UN Scientific Panel first report |
| EU AI Act high-risk provisions | Aug 2026+ | High | First real test of EU enforcement. May be delayed by Digital Omnibus |
| US federal preemption legislation | 2026-2027 | Medium | TRUMP AMERICA AI Act or narrower vehicle |
| China publishes draft comprehensive AI law | 2026-2027 | Medium | Removed from 2025 agenda but still expected |
| UK introduces AI legislation | Summer 2026 | Medium | Scope and binding nature uncertain |
| EU AI Office first enforcement action | 2026 | Medium | Likely formal inquiry before fines |
| Major AI incident accelerates governance | Any time | Low (but high impact) | Could rapidly change timelines everywhere |