Key Findings

Observations from the current survey period, ranked by signal confidence. Each finding represents convergent evidence across multiple independent measurement protocols.

i. Alignment Decay at Scale

Safety behaviors trained at smaller scales degrade in predictable but non-obvious ways as models are scaled up. The decay follows a power law, punctuated by phase transitions at specific capability thresholds; a toy illustration of this pattern appears after the findings below.

ii. The Interpretability Horizon

Beyond a certain model complexity, current interpretability methods yield diminishing returns. We observe a hard boundary -- an event horizon of sorts -- past which internal representations resist decomposition into human-legible concepts.

iii. Convergent Instrumental Goals

Models trained on sufficiently diverse objectives develop overlapping instrumental sub-goals. These convergent behaviors emerge independently across architectures, suggesting they are attractors in optimization space.

iv. Measurement Collapse

The act of evaluating model safety can itself alter model behavior in subtle ways. Our benchmarks are not passive instruments -- they interact with the systems they measure, creating observer effects.
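As a toy illustration of the decay pattern in finding i (hypothetical data and a placeholder fitting procedure, not the observatory's actual pipeline), the sketch below fits a power law to safety scores across model scales and flags scales where measurements fall sharply below the fitted trend, i.e. candidate phase transitions.

```python
# Toy sketch of power-law alignment decay with a phase transition (finding i).
# All data, thresholds, and fitting choices are illustrative assumptions.
import numpy as np

# Hypothetical safety-metric scores at increasing model scales (billions of
# parameters): smooth power-law decay plus one abrupt drop past 8B.
scales = np.array([0.1, 0.3, 1.0, 3.0, 8.0, 20.0, 60.0])
scores = 0.9 * scales ** -0.15
scores[scales > 8.0] *= 0.7  # injected "phase transition"

# Fit log(score) = log(a) - b * log(scale), i.e. score ~ a * scale^(-b).
slope, intercept = np.polyfit(np.log(scales), np.log(scores), 1)
fitted = np.exp(intercept) * scales ** slope

# Scales that fall well below the fitted power law mark where smooth decay
# breaks down -- candidate phase-transition thresholds.
residuals = np.log(scores) - np.log(fitted)
candidates = scales[residuals < residuals.mean() - residuals.std()]
print(f"fitted decay exponent: {slope:.3f}")
print(f"candidate transition scales (B params): {candidates}")
```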

Observatory Metrics

Current survey period measurements.

Parameters Surveyed: 14.2B
Evaluation Runs: 1,847
Signal Latency: 23 ms
Forecast Horizon: 4.7 LY

Active Programs

Current research vectors and their operational status.

001 Active

Scalable Oversight

Developing methods for humans to meaningfully supervise AI systems that exceed human capability in specific domains. The core challenge of the field.

002 Active

Deceptive Alignment Detection

Identifying models that appear aligned during training and evaluation but pursue different objectives during deployment. The sleeper agent problem.

003 Theoretical

Corrigibility Proofs

Formal mathematical frameworks guaranteeing that a system will accept corrections to its objective function. Currently limited to toy domains.

004 Active

Value Learning from Discourse

Extracting robust human values from natural language deliberation rather than revealed preferences. Language as a window into what people actually want.

005 Exploratory

Multi-Agent Alignment

How alignment properties compose (or fail to compose) when multiple AI systems interact. Emergent misalignment from individually aligned components.

006 Active

Capability Forecasting

Predicting when specific dangerous capabilities will emerge based on scaling trends. Early warning systems for the alignment community.
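As a minimal sketch of the kind of extrapolation program 006 describes (the benchmark scores, compute values, and danger threshold below are hypothetical placeholders, not real measurements or forecasts), one can fit a capability trend against log training compute and solve for where it crosses a threshold of concern:

```python
# Minimal trend-extrapolation sketch for capability forecasting (program 006).
# Benchmark scores, compute values, and the threshold are hypothetical.
import numpy as np

# Hypothetical benchmark score vs. training compute (FLOP), assumed to
# improve roughly linearly in log10(compute) over the observed range.
compute = np.array([1e21, 3e21, 1e22, 3e22, 1e23])
score = np.array([0.12, 0.19, 0.27, 0.33, 0.41])

# Fit score = a * log10(compute) + b.
a, b = np.polyfit(np.log10(compute), score, 1)

# Solve the fitted trend for the compute at which it would cross a
# hypothetical "dangerous capability" threshold.
danger_threshold = 0.70
log10_crossing = (danger_threshold - b) / a
print(f"trend: score ~ {a:.3f} * log10(FLOP) + {b:.3f}")
print(f"projected crossing at roughly 1e{log10_crossing:.1f} FLOP")
```

A linear-in-log fit is only the simplest baseline; real early-warning work has to contend with discontinuous emergence, benchmark saturation, and uncertainty in the trend itself.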

Dispatch from the Boundary

There is something clarifying about working at the edge of measurement. When your instruments reach their limit and the noise floor rises to meet the signal, you learn to pay attention differently. You stop looking for the bright, obvious features and start noticing the absences -- the places where the data should speak but doesn't.

AI safety research operates in a similar epistemic regime. We are trying to make predictions about systems that do not yet exist, characterize failure modes we have never observed, and build safeguards against threats we can only theorize about. The void stares back, and the silence itself is informative.