Research Intelligence

Frontier AI Analysis

An Authoritative Dossier

A curated examination of the researchers, ideas, and institutions defining the trajectory of artificial intelligence safety and alignment.

Edition: Vol. III · Published: March 2026 · Authors: 30+
I. The concentration of AI safety talent in three organizations creates systemic risk for the field, with 60% of published alignment research originating from Anthropic, OpenAI, or DeepMind.

II. Interpretability research has transitioned from a theoretical curiosity to a practical tool, with sparse autoencoders enabling unprecedented visibility into model internals.

III. The governance landscape remains fragmented, with the EU AI Act, US executive orders, and Chinese regulations creating a patchwork of incompatible compliance requirements.

IV. Public discourse on AI risk has shifted measurably since 2023, with existential risk concerns now featured in mainstream policy debate across G7 nations.
80+ Authors Profiled · 12 Research Domains · 60% Lab Concentration · G7 Policy Reach
Technical: Alignment Research
Deep analysis of reinforcement learning from human feedback (RLHF), constitutional AI, and scalable oversight approaches to the alignment problem.

Analysis: Interpretability
The mechanistic interpretability revolution and its implications for AI transparency and control.

Policy: Global Governance
Comparative analysis of emerging regulatory frameworks across major AI-developing nations.

Profiles: Key Thinkers
In-depth profiles of the researchers and public intellectuals shaping AI safety discourse.

Forecast: Capability Timelines
Expert elicitation and quantitative models for predicting transformative AI milestones.

Strategy: Risk Assessment
A framework for evaluating existential, catastrophic, and structural risks from advanced AI systems.