
Veganism, Giving & Motivation: Red Team / Blue Team

Three rounds of adversarial stress-testing, March 2026

Testing whether the veganism argument holds under its own consequentialist logic, whether giving levels are framework-justified, and whether the self-knowledge claims survive scrutiny.

Vegan: Personal taste vastly outweighed by animal suffering. Strict commitment for consistency, market signaling, infrastructure-building.

Offsetting: Probably fine in principle if price includes second-order network effects. In practice people don't actually offset.

Genuine tension: Any harm could theoretically be offset, but murder + saving 10 still feels wrong. "Why can't you just do both good things?"

Outsourcing: Emotionally costly tasks (killing mice) are outsourced when the outcome is the same either way. Psychology treated as a resource.

Giving: 10% of income to charity (animal welfare, global health) as diversification; the remaining ~90% of moral effort directed to AI safety through work.

Motivation: Driven by a self-image of being a good person, but that self-image includes wanting to be correct, which makes it partially self-correcting.

Round 1 — The Consequentialist Who Won't Do the Math
Red Team: The Consequentialist Who Won't Do the Math

The Diplomat

Work dinner in Japan. The host prepared fish. Refusing creates social friction, marks veganism as rigid, makes the next vegan's life harder. A policy-level consequentialist should sometimes eat the fish. Strict commitment is rule-worship, not consequentialism.

The Hitman

The mouse problem is worse than acknowledged. Outsourcing reveals you can selectively disengage moral disgust. A CEO who hires contractors for the dirty work then calls it "managing psychology" — we have a name for that pattern, and it is not flattering.

The Lifeboat

10% is embarrassingly low by her own lights. If AI safety is the highest priority, diversification makes no sense. Do you ration to 10% because it's a nice Schelling point, or did you actually calculate? The giving level suggests the framework is a collection of post-hoc rationalizations for an aesthetically chosen lifestyle.

Blue Team: In Defense of Principled Imprecision

The Recovering Alcoholic

Bright-line rules are consequentialist because decision fatigue is real and flexibility becomes a ratchet. The vegan who politely declines creates a better data point than the one who eats fish when convenient. Recovering alcoholics don't evaluate each drink on the merits — the policy of zero is what works.

The Surgeon's Cleaner

Mouse outsourcing: managing psychology IS a legitimate consequentialist strategy. A surgeon who hires a cleaner isn't a coward. Emotional resilience is a finite resource, and spending it where it changes no outcome is waste, not virtue.

Portfolio Theory

Labor is already 100% in AI safety. Donations diversify the total moral portfolio. Under Knightian uncertainty about cause prioritization, diversification is THE rational strategy. Schelling points ARE the calculation at the system level — 1000 people giving 10% beats one person giving 23.7% while everyone else gives nothing.

Round 2 — The Self-Knowledge Trap
Red Team: The Self-Knowledge Trap

The Epistemic Immune System

The motivation claim is designed to be irrefutable. Any criticism becomes evidence of sophistication. "I know I'm biased, and my desire for correctness compensates" absorbs all attacks. This is an epistemic immune system, not self-knowledge.

The Philosopher's Panopticon

Would she make the same choices if every moral deliberation were broadcast? If not, the self-correction points toward performative virtue, not truth.

The Fork

Suppose rigorous EV calculation showed deworming logistics in sub-Saharan Africa was optimal, not AI safety. Self-image says "I'm an AI safety researcher." What wins? The framework, or the identity?

The Aesthetic Contradiction

Moral feelings as "aesthetic preference" contradicts building infrastructure for veganism. You don't build infrastructure for jazz preferences. Either veganism is a moral requirement (then stop calling it aesthetic) or a preference (then drop the philosophical fortifications).

Copenhagen as Convenience

Rejecting the Copenhagen interpretation of ethics is convenient for someone whose work engages with the largest problem: it avoids bearing the weight of "if I fail, I'm more responsible."

Blue Team: Self-Knowledge Is a Practice, Not a State

The Irrefutability Objection Proves Too Much

No level of self-disclosure would satisfy the red team. Demanding self-flagellation as the price of being taken seriously is a rhetorical move, not a philosophical objection.

The Fork Has Been Tested

The EA community HAS produced career changes based on EV calculations. She already chose the less glamorous path — applied safety work, not pure philosophy. The framework has demonstrably moved behavior.

The Middle Position

"I believe I'm correct with high confidence while maintaining humility about conviction's source" is a middle position between universal requirement and mere taste. Moral motivation and justification are different things. You can believe veganism is justified while acknowledging that your confidence is partly fueled by identity.

Copenhagen Distinction

Psychological discomfort and moral responsibility are different claims. You can consistently hold "equally responsible whether I or someone else kills the mouse" AND "directly killing is more distressing to me." Outsourcing the distress while accepting the responsibility is not hypocrisy — it is efficiency.

Round 3 — The System-Level Consequentialist Who Only Evaluates Her Own System
Red Team: The System-Level Consequentialist Who Only Evaluates Her Own System

Veganism's System-Level Failure

Honest system-level analysis must confront quinoa price impacts, palm oil deforestation, and almond water use. The Pastured Cattle thought experiment: Irish cattle grazing land unsuitable for crops, sequestering carbon, might produce better system-level consequences than imported soy protein grown on deforested Brazilian land. Strict veganism forecloses this comparison by definitional fiat.

The Asteroid

10% chance of asteroid. Do you put $90 to deflection and $10 to malaria nets? Her career implies high confidence in AI safety as the top priority; her donations imply meaningful uncertainty. These are contradictory credences.

The AI Norms Argument

The norm-training-AIs argument "deserves special contempt" — unfalsifiable, proves anything you want.

Consequentialism of the Gaps

The pattern across all domains: consequentialist reasoning fills spaces around predetermined conclusions, and when it threatens those conclusions, a different consideration saves them. This is "Consequentialism of the Gaps."

Blue Team: The Demanding Consequentialist's Infinite Regress

Pastured Cattle

The thought experiment compares the best animal agriculture scenario against the worst vegan scenario. Most animal agriculture is factory farming. The Aspirin analogy: aspirin is a good recommendation even though exceptions exist. A policy of veganism is a good recommendation even though Irish pastured cattle might be an exception.

The Venture Capitalist

Portfolio theory applies. A venture capitalist already heavily exposed to one sector through their career hedges with diversified personal investments. Donations hedge against cause-prioritization error. This is not contradictory credences — it is rational risk management.

The Concession

The AI norms argument is weak. Abandoned.

The Realistic Baseline

"Convenient consequentialism" charge: compare to the realistic baseline, not an impossible ideal. Veganism, 10% donation, a cause-prioritized career ARE uncomfortable choices relative to the general population. The framework moved her dramatically from default behavior. The comfort came after the choices, not before them.

"No framework forces maximal discomfort. The question is: better than what?"

Where this lands

The position survives as a livable, considerably-better-than-average approach to ethics. What it does not survive as is a coherent consequentialist framework that generates conclusions from first principles. The conclusions came first; the framework was built around them. The self-knowledge that this might be happening was incorporated as a feature, not a corrective. But compared to unreflective default behavior, it produces a substantially better life by most measures — and the red team's demand for perfection may itself be the enemy of the good.