Global AI Governance Map

A living comparison of AI governance frameworks, legislation, and institutions worldwide. What's enacted, what's proposed, who the key actors are, and where enforcement has teeth.

March 2026 | Fact-checked: 95% of claims verified

Global AI Legislation Landscape

The world's approach to AI governance ranges from comprehensive binding law (EU, China, South Korea) to voluntary guidelines (India, Japan) to deliberate non-regulation (US federal level). The table below compares every major jurisdiction.

| Jurisdiction | Key Instrument(s) | Status | Enforcement | Max Penalty | Verified |
|---|---|---|---|---|---|
| EU | AI Act (2024/1689) | In force (phased): prohibited practices Feb 2025; GPAI Aug 2025; high-risk Aug 2026 | EU AI Office + national authorities | EUR 35M / 7% global turnover | Mar 2026 |
| China | Algorithm Provisions (2022), Deep Synthesis (2023), Generative AI (2023), Content Labeling (2025), Cybersecurity Law (amended, eff. 2026) | In force; multiple regulations active | CAC-led, multi-agency | CNY 50M / 5% turnover | Mar 2026 |
| US (Federal) | Executive orders only | No comprehensive law; executive action only | Agency-by-agency (FTC, DOJ) | Case-by-case | Mar 2026 |
| US (Texas) | TRAIGA | In force Jan 2026 | Texas AG | $10K-$200K per violation | Mar 2026 |
| US (California) | SB 53 (Frontier AI Transparency) | In force Jan 2026 | California AG | $1M per violation | Mar 2026 |
| US (Colorado) | Colorado AI Act (SB 24-205) | Enacted; effective Jun 2026 | Colorado AG | $20K per violation | Mar 2026 |
| UK | Pro-innovation framework; no legislation | No AI law; bill delayed to summer 2026 | Existing sector regulators | N/A | Mar 2026 |
| Canada | None (AIDA died Jan 2025) | No framework | Existing laws (PIPEDA) | N/A | Mar 2026 |
| Brazil | PL 2338/2023 | Passed Senate; pending in Chamber | Not yet enacted | N/A | Mar 2026 |
| Japan | AI Promotion Act (2025) | Enacted (soft law); best-effort obligations | Name-and-shame; no fines | None | Mar 2026 |
| South Korea | AI Basic Act | In force Jan 2026; 1-year grace period | MSIT | KRW 30M (~$20.5K) | Mar 2026 |
| India | AI Governance Guidelines (2025) | Non-binding | Existing laws only | N/A | Mar 2026 |
| International | Council of Europe AI Treaty | In force Nov 2025 | National implementation | Per national law | Mar 2026 |
| International | G7 Hiroshima Code of Conduct | Voluntary | Self-reporting via OECD | None | Mar 2026 |
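
Across these regimes, the penalty cap is typically "the greater of a fixed sum or a share of global turnover" (e.g., the EU's EUR 35M / 7%). A minimal sketch of that arithmetic; the function name and structure are illustrative, not drawn from any statute's text:

```python
def max_penalty_eur(global_turnover_eur: float,
                    fixed_cap_eur: float = 35_000_000,
                    turnover_share: float = 0.07) -> float:
    """Upper bound of an EU AI Act fine for the most serious violations:
    the greater of a fixed cap (EUR 35M) or a share of worldwide annual
    turnover (7%). Illustrative helper, not legal advice."""
    return max(fixed_cap_eur, turnover_share * global_turnover_eur)

# A firm with EUR 1B turnover: 7% (EUR 70M) exceeds the EUR 35M floor.
print(max_penalty_eur(1_000_000_000))  # 70000000.0
```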

Regulatory Philosophies Compared

Risk-Based Regulation
EU, South Korea, Colorado

Classify AI systems by risk level; regulate proportionally. High-risk systems face conformity assessments. The EU's 10^25 FLOPs threshold defines systemic risk for GPAI models.
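
How such a threshold gets applied in practice: training compute is usually estimated, not measured exactly. A common approximation is ~6 FLOPs per parameter per training token (the "6ND" heuristic). A minimal sketch, assuming that heuristic; it is not the EU's official counting methodology:

```python
EU_SYSTEMIC_RISK_FLOPS = 1e25  # EU AI Act presumption threshold for GPAI systemic risk

def estimate_training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate via the common ~6*N*D heuristic
    (6 FLOPs per parameter per training token). An approximation,
    not the regulator's counting methodology."""
    return 6 * params * tokens

# Hypothetical model: 500B parameters trained on 10T tokens.
flops = estimate_training_flops(5e11, 1e13)
print(f"{flops:.2e} FLOPs; systemic risk presumed: {flops >= EU_SYSTEMIC_RISK_FLOPS}")
# 3.00e+25 FLOPs; systemic risk presumed: True
```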

Technology-Specific Rules
China

Regulate each AI technology type separately: algorithms (2022), deepfakes (2023), generative AI (2023), content labeling (2025). Fast iteration, but mounting regulatory complexity.
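
On content labeling: China's 2025 measures require dual labels on AI-generated content, one explicit (visible to users) and one implicit (embedded in metadata), as noted in the timeline below. A minimal sketch of the concept; the field names and format are invented for illustration, not the official schema:

```python
import json

def label_ai_content(text: str, provider: str, model: str) -> dict:
    """Attach a dual label to AI-generated text: an explicit notice the
    reader sees plus an implicit, machine-readable metadata record.
    Field names are illustrative, not the official schema."""
    visible = f"[AI-generated] {text}"  # explicit label, shown to users
    hidden = {                          # implicit label, carried in metadata
        "ai_generated": True,
        "provider": provider,
        "model": model,
    }
    return {"content": visible, "metadata": json.dumps(hidden)}

labeled = label_ai_content("Sample output.", provider="ExampleCorp", model="demo-1")
print(labeled["content"])  # [AI-generated] Sample output.
```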

Innovation-First
US (Trump), UK, Japan, India

Minimize regulation to maintain competitive advantage. Focus on voluntary standards, industry self-governance, and removing regulatory barriers. Safety framed as secondary to competitiveness.

No Standalone Regulation
Canada, India (federal)

Rely on existing laws (privacy, consumer protection) supplemented by voluntary guidelines. No comprehensive AI-specific framework.

Where AI Governance Has Teeth

Laws on paper mean nothing without enforcement. Here's where AI governance is actually binding, and where it's performative.

Enforcement Reality Check

| Jurisdiction | Instrument | Enforcement Level | Evidence |
|---|---|---|---|
| China | AI regulations (multiple) | Active | 5,000+ algorithm filings; regular CAC enforcement campaigns; mandatory pre-launch approval; penalties increased to CNY 50M (Jan 2026); documented enforcement waves in 2024-2025 |
| EU | AI Act | Building | Penalties up to 7% of turnover exist in law, but only 3/27 Member States have designated authorities; no fines issued yet; AI Office understaffed (~125 staff); Finland first with full enforcement powers (Dec 2025) |
| EU | GDPR applied to AI | Active | Italy fined OpenAI EUR 15M (Dec 2024); Italy banned ChatGPT (Mar 2023); multiple DPA investigations across the EU |
| US (FTC) | FTC Act (Section 5) | Retreating | Operation AI Comply (2024-2025) brought multiple cases; then vacated the Rytr consent order (Dec 2025); retreating under Trump leadership |
| US (States) | Texas TRAIGA, CA SB 53 | Untested | Both effective Jan 2026; no enforcement actions yet; face federal preemption threat from the Dec 2025 EO |
| South Korea | AI Basic Act | Grace period | 1-year enforcement grace period; max fine only ~$20.5K; emphasis on guidance first |
| Japan | AI Promotion Act | None | "Best-effort" obligations; no fines; name-and-shame only |
| International | All frameworks | None | G7 Code, OECD Principles, UN Compact all voluntary; Council of Europe treaty depends on national implementation; no international enforcement body |

Industry Self-Governance: Is It Working?

| Framework | Organization | External Verification? | Credibility Issue |
|---|---|---|---|
| Responsible Scaling Policy (RSP) | Anthropic | No | v3.0 (Feb 2026) replaced the binding pause with a competitive conditional: "will only pause if we have a significant lead" |
| Preparedness Framework | OpenAI | No | Contains a competitive escape clause; Superalignment team dissolved May 2024; "safely" removed from mission Nov 2025 |
| Frontier Safety Framework | Google DeepMind | No | Internal governance only; Critical Capability Levels not externally audited |
| Frontier Model Forum | Industry consortium | No | No binding commitments; no public accountability mechanism |
| AVERI (AI Assurance Levels) | Miles Brundage (nonprofit) | Building | Launched Jan 2026; $7.5M raised; not yet conducting audits; developing standards for independent verification |

Who Governs AI?

The institutional landscape of AI governance spans government agencies, multilateral bodies, industry forums, and civil society organizations. Here are the most consequential actors.

Government Bodies

EU AI Office
Established Feb 2024 | Brussels | ~125 staff

Direct oversight of GPAI models. Coordinates AI Board, Scientific Panel, Advisory Forum. Published GPAI Code of Practice (Jul 2025). Understaffed relative to mandate. No enforcement actions yet.

NIST CAISI
Rebranded Jun 2025 | US | Staff cut

Formerly AI Safety Institute. "Safety" removed from name. Focus shifted from safety evaluation to standards and innovation. New agentic AI initiative (Feb 2026). $20M for two new AI centers with MITRE.

UK AI Security Institute (AISI)
Rebranded Feb 2025 | UK | Director: Adam Beaumont

Formerly AI Safety Institute. Refocused on national security risks. GBP 27M Alignment Project (with OpenAI/Microsoft contributions). End-to-end biosecurity red-teaming. 62,000 agent vulnerabilities identified.

China CAC
Cyberspace Administration of China | Lead AI regulator

Primary drafter and enforcer of all AI regulations. Operates national algorithm filing platform (5,000+ filings). Conducts regular enforcement campaigns. Pre-launch gatekeeper for generative AI services.

US DOJ AI Litigation Task Force
Operational Jan 2026 | Washington DC

Created by Dec 2025 EO to challenge state AI laws in federal court. Commerce Dept evaluation of state laws due Mar 2026. Threatens $42B BEAD broadband funding to states with "onerous AI laws."

Multilateral Bodies

UN Scientific Panel on AI
40 members | 37 nations | 3-year term from Feb 2026

Independent, multidisciplinary scientific assessments. First report due Jul 2026 in Geneva. US voted against establishment (117-2). Advisory only; no enforcement power.

OECD / GPAI
44 countries | Merged Jul 2024

Manages G7 Hiroshima Code of Conduct reporting. Updated AI Principles (May 2024). Developing AI Policy Toolkit. Influential but all frameworks voluntary.

International AI Safety Institute Network
10 founding members | Launched May 2024

Launched at Seoul Summit. US (CAISI), UK (AISI), EU, France, Japan, Kenya, Korea, Singapore, Canada, Australia. But US and UK have rebranded away from "safety."

Civil Society / Research

Georgetown CSET
Interim ED: Helen Toner

Data-driven AI policy analysis informing Congress and executive branch. One of the most influential DC policy research centers on AI.

GovAI (Centre for the Governance of AI)
Oxford

Leading compute governance research. Lennart Heim's former base. Publishes foundational AI governance research.

Coefficient Giving (fmr. Open Philanthropy)
Renamed Nov 2025

One of the largest funders of AI governance. Funds CAIS, GovAI, CSET, and many others. AI governance RFP closed Jan 2026; its giving is set to grow in 2026.

AVERI
Founded Jan 2026 | Miles Brundage

AI Verification and Evaluation Research Institute. Developing independent auditing standards for frontier AI. "AI Assurance Levels" framework (L1-L4). $7.5M raised.

The Ten Key Debates

AI governance is shaped by unresolved tensions. Each debate presents genuinely strong arguments on both sides.

1. Licensing vs. Disclosure

Core question: Should frontier AI require government licenses, or is mandatory transparency sufficient?

Case for Licensing

  • Frontier AI risks are comparable to pharmaceuticals and nuclear tech
  • Once a dangerous model is released, damage can't be undone
  • EU AI Act's conformity assessments are a form of licensing
  • Key voices: Altman (2023 testimony), Marcus, Bengio

Case for Disclosure Only

  • Licensing creates barriers to entry favoring incumbents
  • Would effectively outlaw open-source AI development
  • Technically difficult given rapid capability evolution
  • Key voices: EFF, a16z, Dean Ball, Trump admin

Direction of travel: Disclosure winning. CA SB 53 (Jan 2026) chose transparency over licensing. Federal licensing is a non-starter under Trump.

2. Compute Governance

Core question: Can AI be governed through chips, compute, and infrastructure?

Compute is Tractable

  • Physical, trackable, concentrated supply chains
  • US export controls already restrict China's chip access
  • EU (10^25 FLOPs) and Korea (10^26 FLOPs) set compute thresholds
  • Key voices: Heim (GovAI), RAND, CNAS

Compute Control Has Limits

  • DeepSeek R1 achieved near-frontier performance at a fraction of the cost
  • Documented large-scale chip smuggling (tens to hundreds of thousands of chips)
  • Algorithmic efficiency improvements outpace hardware restrictions
  • Key voices: Nvidia CEO Huang, industry lobbies

Direction of travel: Becoming more nuanced. The DeepSeek shock forced a rethink. The debate is shifting from "restrict chips" to "which complementary measures?": on-chip governance, cloud KYC, international coordination.
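
Of those complementary measures, cloud KYC is the most concrete: compute providers would verify the identity of customers renting frontier-scale clusters. A hypothetical sketch of such a gate; the threshold and field names are invented for illustration and correspond to no existing rule:

```python
from dataclasses import dataclass

# Hypothetical review threshold: enough compute to train a frontier-scale model.
KYC_REVIEW_THRESHOLD_FLOPS = 1e25

@dataclass
class ComputeRequest:
    customer_id: str
    identity_verified: bool  # has the provider completed identity checks?
    requested_flops: float   # total compute the customer wants to rent

def requires_enhanced_review(req: ComputeRequest) -> bool:
    """Flag rentals large enough to train frontier-scale models when the
    customer is unverified. Purely illustrative policy logic."""
    return req.requested_flops >= KYC_REVIEW_THRESHOLD_FLOPS and not req.identity_verified

req = ComputeRequest("acme-labs", identity_verified=False, requested_flops=3e25)
print(requires_enhanced_review(req))  # True: hold the order pending verification
```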

3. Open vs. Closed Models

Core question: Should frontier AI be open-sourced? At what capability level does openness become too dangerous?

Case for Open

  • Democratizes access, prevents monopoly concentration
  • Enables independent safety research and auditing
  • ACLU: essential for civil liberties
  • Key voices: Meta (Llama), Yann LeCun, Hugging Face, EleutherAI

Case for Closed (at Frontier)

  • Open-weight models can't be recalled once released
  • Safety fine-tuning can be removed from open models
  • DeepSeek R1 raised security concerns about open Chinese frontier models
  • Key voices: Anthropic, OpenAI, some legislators

Direction of travel: Consensus fracturing. Meta's partial retreat to the proprietary "Avocado" model. The EU applies lighter obligations to open-source models, but not to those posing systemic risk. No global consensus.

4. Liability for AI Harm

Core question: When AI causes harm, who pays — developer, deployer, or user?

Developer Liability

  • Developers have most control over model behavior
  • Creates strong incentives for safety investment
  • The EU's proposed AI Liability Directive took this view (withdrawn Oct 2025)

Deployer/User Liability

  • Developers can't control all deployment contexts
  • Strict developer liability could kill open-source
  • US Section 230 implications unresolved

Direction of travel: Unresolved. EU withdrew its AI Liability Directive. US has no federal framework. Likely to be defined by litigation rather than legislation.

5. International Coordination (IAEA for AI?)

Core question: Is a binding global AI governance body feasible?

Case for Global Body

  • AI risks are inherently global and cross-border
  • Race dynamics undermine unilateral governance
  • UN Scientific Panel and Council of Europe treaty show momentum
  • Key voices: Bengio, Altman (2023), UN Advisory Body

Case Against (or Skepticism)

  • US under Trump voted against UN Scientific Panel, rejected Paris declaration
  • US-China competition makes cooperation politically impossible
  • AI evolves too fast for treaty-based governance
  • Key voices: Trump admin, national sovereignty advocates

Direction of travel: International frameworks proliferating but toothless. Council of Europe treaty is binding but narrow. Everything else voluntary. The US institutional retreat (AISI rebranded, TTC dormant) creates a governance vacuum.

6. Regulatory Capture

Core question: Are frontier AI labs writing their own rules?

Evidence of Capture

  • Labs self-set thresholds, self-evaluate, can change policies unilaterally
  • OpenAI's Preparedness Framework has competitive escape clause
  • Anthropic RSP v3.0 weakened binding pause commitment
  • EU GPAI Code of Practice shaped by industry stakeholders

Counter-Arguments

  • Labs have the most expertise about their own systems
  • Self-regulation can move faster than legislation
  • Anthropic, OpenAI did invest heavily in safety teams and research
  • Industry expertise is needed in any regulatory regime

Direction of travel: Growing skepticism of self-governance. AVERI (Jan 2026) building independent auditing standards. But: no regulator has the technical capacity to match frontier labs.

7. Speed vs. Safety

Case for Speed / Less Regulation

  • AI can cure diseases, solve climate change, create massive economic growth
  • Regulation slows innovation and cedes leadership to China
  • Apple delayed EU AI features due to compliance uncertainty
  • Key voices: Marc Andreessen, Dean Ball, Trump admin, e/acc movement

Case for Safety / More Regulation

  • Multiple documented AI harms in 2025 (chatbot suicides, deepfake proliferation)
  • EU approach hasn't killed European AI; creates market access framework
  • Science (2025): "The mirage of AI deregulation" — innovation benefits overstated
  • Key voices: Bengio, Hinton, Gary Marcus, CAIS, FLI

Direction of travel: Increasingly geopoliticized. US: safety = competitive handicap. EU: partially retreating (Digital Omnibus delays). Practical convergence toward disclosure-based approaches.

8. Existential Risk Framing

X-Risk Framing Helps

  • Mobilized unprecedented political attention (2023-2024)
  • Created Schelling point for international cooperation (Bletchley)
  • Some risks (bioweapons, loss of control) genuinely catastrophic
  • Key voices: CAIS, Anthropic, Bengio, Hinton, Russell

X-Risk Framing Hurts

  • Distracts from present-day harms: bias, surveillance, displacement
  • Centers labs as protagonists of their own regulation
  • "Bold yet often unsubstantiated claims" (81-paper review)
  • Key voices: Emily Bender, Timnit Gebru, AI Now, Whittaker

Direction of travel: Evolving past the binary. PNAS 2025 study: x-risk narratives don't distract from near-term concerns. Emerging "accumulative x-risk" framing connects near-term harms to long-term threats.

9. Military AI

The Anthropic-Pentagon standoff (Feb-Mar 2026) exposed the governance vacuum: Anthropic refused military contract conditions; Trump admin retaliated; OpenAI stepped in with softer conditions. MIT Tech Review: "a race to the bottom."

Case for AI Company Red Lines

  • AI systems not reliable enough for autonomous weapons
  • Mass surveillance violates fundamental rights
  • Companies should have ethical constraints on military use

Case Against Private Sector Veto

  • "A private company cannot be more powerful than the government" — Altman
  • National security decisions are the government's domain
  • Other companies will simply fill the gap

Direction of travel: 156 countries support autonomous weapons oversight (UN vote Nov 2025). But: major military powers block binding negotiations. No legislation governs military AI in any jurisdiction.

10. US-China Race Dynamics

Race Framing is Accurate

  • $450B+ aggregate US AI capex (2026)
  • China's explicit goal: world AI leader by 2030
  • National security implications are real
  • Key voices: Eric Schmidt, CSIS, Trump admin

Race Framing is Counterproductive

  • Creates pressure to cut safety corners
  • Prevents US-China cooperation on shared safety interests
  • Reality is specialization, not total victory
  • Key voices: Atlantic Council, Cairo Review, safety researchers

Direction of travel: Race narrative dominant in Washington and Beijing. Export controls face enforcement challenges (smuggling, DeepSeek). Outcome likely sustained competition with different comparative advantages rather than single winner.

Who Shapes AI Governance?

Yoshua Bengio
Turing Laureate | Chair, International AI Safety Report
Led the first and second International AI Safety Reports (2025, 2026), the closest thing to IPCC-style consensus on AI. Strong advocate for international coordination.
Influence: Very High
Dean Ball
Senior Fellow, Foundation for American Innovation
Former OSTP Senior Policy Advisor — primary staff drafter of America's AI Action Plan. The most influential pro-innovation AI governance voice in 2026.
Influence: Very High
Dario Amodei
CEO, Anthropic
"Machines of Loving Grace" (2024), "The Adolescence of Technology" (2026). Declined to abandon responsible use standards for Pentagon. RSP architect.
Influence: Very High
Sam Altman
CEO, OpenAI
Secured Pentagon contract after Anthropic refused. $1M Trump inaugural donation. "The government is supposed to be more powerful than private companies." Bipartisan donor strategy.
Influence: Very High
Helen Toner
Interim ED, Georgetown CSET
Former OpenAI board member during Altman firing. Expert on US-China AI competition. Congressional testimony. Running one of DC's most influential AI policy shops.
Influence: High
Miles Brundage
Founder, AVERI
Former OpenAI Head of Policy Research. Launched AVERI (Jan 2026) to develop independent AI auditing standards. "AI Assurance Levels" framework (L1-L4).
Influence: Growing
Lennart Heim
Independent Researcher | GovAI Adjunct Fellow
Leading compute governance expert. Research directly informs US chip export controls. Published on AI Diffusion Framework, hardware-enabled governance mechanisms.
Influence: High
Holden Karnofsky
Member of Technical Staff, Anthropic
Co-founder of Open Philanthropy (now Coefficient Giving) — one of largest AI governance funders. Joined Anthropic Jan 2025 to work on responsible scaling.
Influence: Very High (via funding)
Ian Hogarth
Chair, UK AI Security Institute
"We Must Slow Down the Race to God-Like AI" (FT, 2023). Navigated the safety-to-security rebrand while maintaining core technical evaluation work.
Influence: High (UK)
Gary Marcus
Professor Emeritus, NYU
Senate testimony (May 2023). Advocates for an international AI agency. Consistently skeptical of AGI timelines. 16 of his 17 predictions for 2025 proved correct.
Influence: Moderate-High (public)
Jack Clark
Co-founder & Head of Policy, Anthropic
Author of the Import AI newsletter. Congressional testimony in 2025. Anthropic committed $20M to Public First to support 2026 midterm candidates aligned with AI safety.
Influence: High
Marietje Schaake
Stanford HAI | UN AI Advisory Body
Former MEP (10 years). "The Tech Coup" (2024). Strong proponent of democratic regulation. Bridges EU-US governance conversations.
Influence: Significant (EU/transatlantic)

AI Governance: Key Events

Major inflection points from ChatGPT's release through March 2026.

2022
Oct 7, 2022
US Export Controls on AI Chips
BIS restricts China's access to advanced AI chips. Most aggressive compute governance action.
Nov 30, 2022
ChatGPT Released
100M users in two months. The catalyst for the global AI governance response.
2023
Mar 2023
FLI Pause Letter / Italy Bans ChatGPT
30,000+ signatures calling for a six-month pause. Italy's Garante orders OpenAI to stop processing data.
May 2023
Senate AI Hearing / CAIS Extinction Statement
Altman, Marcus, Montgomery testify. CAIS statement: "AI extinction risk should be a global priority."
Jul 2023
White House Voluntary Commitments / Frontier Model Forum
15 companies make voluntary commitments. Anthropic, Google, Microsoft, OpenAI form the Forum.
Sep 2023
Anthropic RSP v1.0
First AI lab to publish formal scaling safety framework. Sets the template for the industry.
Oct 30, 2023
Biden EO 14110 / G7 Hiroshima Code of Conduct
Most comprehensive US AI governance action. G7 adopts first international frontier AI framework.
Nov 1-2, 2023
Bletchley Park AI Safety Summit
28 countries + EU sign Bletchley Declaration. UK AI Safety Institute established. The high-water mark of the global safety consensus.
Dec 2023
EU AI Act Political Agreement
Council and Parliament reach deal on the world's first comprehensive AI law.
2024
Mar 13, 2024
EU Parliament Votes to Adopt AI Act (523-46)
First comprehensive AI law passed by a democratic legislature.
May 2024
Seoul AI Safety Summit
AI Safety Institute Network launched (10 countries). 16 companies sign Frontier AI Safety Commitments.
Aug 1, 2024
EU AI Act Enters Into Force
Phased enforcement begins. Clock starts on implementation deadlines.
Sep 29, 2024
California Vetoes SB 1047
Most prominent US AI safety bill fails. Governor Newsom: "doesn't account for deployment context."
Dec 2024
Italy Fines OpenAI EUR 15M / OpenAI Announces For-Profit Restructuring
First major financial penalty against an AI company. OpenAI begins shift from nonprofit control.
2025
Jan 20, 2025
Trump Revokes Biden AI Executive Order
Day one. Replaces safety-first with innovation/competitiveness focus. Fundamental US policy pivot.
Jan 2025
DeepSeek R1 / Canada AIDA Dies / South Korea AI Basic Act Enacted
DeepSeek demonstrates frontier capabilities despite export controls. Canada's AI bill dies. South Korea passes comprehensive law.
Feb 2025
Paris AI Action Summit / UK AISI Rebranded to "Security"
58 countries sign declaration. US/UK refuse. Summit framing shifts from "Safety" to "Action." UK drops "Safety" from institute name.
Jun 2025
NIST AISI Rebranded to CAISI
Commerce Secretary Lutnick removes "Safety" from name. Focus shifts to standards and innovation.
Aug 2025
EU AI Act: GPAI Obligations Take Effect
Foundation model providers must meet technical documentation, training-data summary, and copyright requirements.
Sep 2025
California SB 53 Signed / China AI Content Labeling
First US state frontier AI law. China mandates dual labeling (visible + hidden) for all AI-generated content.
Dec 11, 2025
Trump EO: Federal AI Framework / State Preemption
DOJ task force to challenge state AI laws. $42B BEAD funding leverage. Most aggressive federal preemption attempt.
2026
Jan 2026
South Korea AI Basic Act, Texas TRAIGA, California SB 53 Take Effect
Three major AI laws go live simultaneously. US state laws face federal preemption threat.
Feb 2026
India AI Impact Summit / Anthropic RSP v3.0 / Anthropic-Pentagon Standoff
92-country declaration, $200B+ investment. Anthropic weakens binding pause. Pentagon standoff exposes military AI governance vacuum.
Jun 2026
Colorado AI Act Takes Effect (projected)
First comprehensive US state AI law becomes enforceable, unless federally preempted.
Aug 2026
EU AI Act: High-Risk Systems (projected)
Full applicability of high-risk AI system requirements. May be delayed by Digital Omnibus. First real test of EU enforcement.

State of Play: March 2026

Seven Governance Gaps

Military AI

No jurisdiction governs military AI comprehensively. EU AI Act excludes it. China focuses on civilian use. The Anthropic-Pentagon standoff showed governance defaults to whatever the least restrictive lab will accept.

AI Agents

Autonomous agents (multi-step actions, tool use, financial transactions) are not addressed by any enacted legislation. The EU AI Act was drafted before agentic AI. NIST CAISI launched an initiative in Feb 2026, but standards remain years away.

Cross-Border Enforcement

No mechanism for international AI enforcement cooperation. EU AI Act has extraterritorial scope but untested. A company can relocate to avoid regulation. No AI-specific mutual legal assistance.

Open-Source Governance

Treatment varies wildly: EU lighter for open-source; China focuses on service providers; no global consensus on open-weight frontier models. Meta's retreat to proprietary models may reduce urgency but doesn't resolve the question.

Liability

EU withdrew AI Liability Directive (Oct 2025). US has no federal AI liability framework. Section 230 implications unresolved. When AI causes harm, who pays remains an open question everywhere.

Compute Governance

US chip export controls face enforcement challenges. EU/Korea set compute thresholds but no international framework exists. DeepSeek showed algorithmic efficiency can bypass hardware restrictions.

Environmental Impact

US AI capex of $400B+ means massive energy consumption. No jurisdiction has binding AI energy reporting or efficiency standards. The Coalition for Sustainable AI (Paris 2025) is voluntary.

Five Structural Dynamics

The Governance Gap Is Widening

AI capabilities advance faster than governance can adapt. The EU AI Act was drafted before GPT-4 and before agentic AI that browses the web and takes autonomous actions. Every jurisdiction's framework is already partially obsolete.

The Safety Consensus Has Fragmented

The 2023 Bletchley moment, when 28 countries agreed on coordinated safety action, has splintered. The US retreated from safety framing. The UK softened to "security." Summit branding shifted from "Safety" to "Action" to "Impact." "Safety" became politically coded.

Power Is Concentrating

In labs: a handful of companies make most safety decisions unilaterally. In the US: US-based labs, US chip controls, and US executive orders shape everything. In the executive: one president can reverse another's entire framework on day one.

The Enforcement Problem

Even where laws exist, enforcement lags. EU AI Office: ~125 staff for the world's most complex AI law. Most Member States haven't designated regulators. China enforces but opaquely. US federal government pulling back. The gap between law-on-paper and law-in-practice is vast.

The Race Narrative Undermines Governance

Framing AI as a US-China "race" treats governance as a handicap. Trump's Dec 2025 EO warned AI leadership would be "DESTROYED IN ITS INFANCY" by regulation. This creates political pressure against any rule that might slow development.

What Changes Next (March 2026 - March 2027)

| Event | Date | Confidence | Impact |
|---|---|---|---|
| US Commerce Dept evaluation of state AI laws | Mar 2026 | High | May trigger DOJ challenges |
| Colorado AI Act takes effect | Jun 2026 | High | First comprehensive US state AI law enforceable (unless preempted) |
| UN Global Dialogue on AI Governance | Jul 2026 | High | First session; UN Scientific Panel's first report |
| EU AI Act high-risk provisions | Aug 2026+ | High | First real test of EU enforcement; may be delayed by Digital Omnibus |
| US federal preemption legislation | 2026-2027 | Medium | TRUMP AMERICA AI Act or narrower vehicle |
| China publishes draft comprehensive AI law | 2026-2027 | Medium | Removed from 2025 agenda but still expected |
| UK introduces AI legislation | Summer 2026 | Medium | Scope and binding nature uncertain |
| EU AI Office first enforcement action | 2026 | Medium | Likely a formal inquiry before fines |
| Major AI incident accelerates governance | Any time | Low (but high impact) | Could rapidly change timelines everywhere |