cryonics-posts post 2 04-scan-modality-and-quality-evaluation

Scan modality and quality evaluation

Epistemic status: the resolution limits of each modality are physics (C1). Which modality each provider uses is public for Alcor, Tomorrow Bio, and Nectome (C1/C2 from primary/secondary sources). The claim that Tomorrow Bio doesn't do EM-level rat audits is directly supported by their own 2026 roadmap (C1: they explicitly announce EM microsampling as new for 2026). Aurelia's claim about rat-preservation-quality testing in vitrification-only workflows is essentially unverifiable in the public literature (C5): if such testing exists, it isn't published.

1. Resolution ladder

What each imaging modality can and cannot see, ordered from coarsest to finest resolution (roughly the order in which a preserved brain would typically be scanned):

| Modality | Typical resolution | Can see | Cannot see |
| --- | --- | --- | --- |
| CT (clinical) | ~0.5–1 mm | Gross anatomy, ice vs non-ice density, fractures | Cells, synapses |
| Micro-CT | ~10–50 μm | Capillaries (barely), macroscale structure | Cell interiors |
| MRI (clinical 3 T) | ~0.5–1 mm | Tissue contrast, edema, bleeds | Cells |
| Diffusion MRI | ~1 mm | Tractography of major fiber bundles | Single-axon anything |
| Light microscopy (visible) | ~200 nm (diffraction-limited) | Cell bodies, blood vessels, myelin, large processes | Synapses (thinnest clefts ~20 nm) |
| Confocal fluorescence | ~200 nm lateral / ~500 nm axial | Labelled proteins, neurons, processes | Ultrastructure |
| Expansion microscopy | ~25–70 nm effective | Proteins at near-EM resolution | Fine membrane features; needs clearing |
| SEM / FIB-SEM | 3–10 nm | Synapses, vesicles, PSDs, membrane bilayers | Individual protein structure |
| TEM | <1 nm | Individual ribosomes, membranes, everything | Large volumes without heroics |
| Volume EM / serial sectioning | 5–10 nm (typical for connectomics) | Connectome reconstruction | Limited by speed, not resolution |

Sources: Delmic blog on nanoscale connectomes; Nature Biotechnology Lichtman group 2023; Nature Reviews Neuroscience 2025 on connectomics.

2. What resolution is actually needed for connectomic / synaptic preservation evaluation?

Synapses are ~0.5–2 μm long but the functionally critical features are nanoscale: the synaptic cleft is ~20 nm wide; postsynaptic densities (PSDs) are ~30 nm thick; synaptic vesicles are 30–50 nm in diameter. Fine axonal branches can be <100 nm in diameter.

To trace axons through a volume unambiguously, you need pixel sizes on the order of the thinnest axon's diameter, with some oversampling: roughly 5–10 nm lateral pixel size and ~30 nm section thickness, which is the standard for large-volume EM connectomics (Nature 2025 Lichtman connectome). For reference, the MICrONS 1 mm³ mouse visual cortex volume was imaged at 4 × 4 × 40 nm voxels.
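Those voxel numbers imply an enormous raw data volume, which is why "limits are speed, not resolution" for volume EM. A back-of-envelope sketch, using the MICrONS-like 4 × 4 × 40 nm voxels quoted above (the 1-byte-per-voxel figure is an illustrative assumption for 8-bit grayscale, not a claim about any specific pipeline):

```python
# Back-of-envelope: raw (uncompressed) data size for EM connectomics
# at MICrONS-like voxel sizes, for a given tissue volume in mm^3.

def em_data_volume_bytes(volume_mm3: float,
                         voxel_nm=(4.0, 4.0, 40.0),
                         bytes_per_voxel: int = 1) -> float:
    """Approximate raw data size in bytes at the given voxel dimensions."""
    nm3_per_mm3 = (1e6) ** 3                      # 1 mm = 1e6 nm
    voxel_nm3 = voxel_nm[0] * voxel_nm[1] * voxel_nm[2]
    n_voxels = volume_mm3 * nm3_per_mm3 / voxel_nm3
    return n_voxels * bytes_per_voxel

size = em_data_volume_bytes(1.0)                  # the MICrONS 1 mm^3 block
print(f"{size / 1e15:.2f} PB")                    # prints "1.56 PB"
```

About 1.5 petabytes of raw imagery per cubic millimeter, before any alignment or segmentation, which makes clear why whole-brain EM evaluation is rationed rather than routine.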

The BPF Large Mammal Prize rule: 5 nm pixel EM, every synapse traceable. This is a well-chosen bar. (BPF prize rules)

Light microscopy cannot resolve synapses in the ordinary sense. It can detect labelled synaptic-marker puncta if you stain for them, and with expansion microscopy you can get effective ~25 nm resolution (Nature 2025 on light-microscopy-based connectomic reconstruction), but membrane ultrastructure and unlabeled cleft geometry are not in reach. Importantly, for quality evaluation of preservation (as distinct from reconstruction of the connectome), EM is irreplaceable: it shows membrane damage, organelle distortion, vesicle extraction, and the sub-cellular patchiness of bad perfusion.
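The expansion-microscopy numbers above follow from a simple first-order model: effective resolution is the optical diffraction limit divided by the physical expansion factor. Real protocols are messier (anisotropic distortion, label size), so treat this as a sketch; the ~200 nm limit is the table's figure, and the expansion factors are typical published values, not provider-specific claims:

```python
# First-order model of expansion microscopy resolution:
# effective = diffraction limit / expansion factor.

def effective_resolution_nm(diffraction_limit_nm: float,
                            expansion_factor: float) -> float:
    """Effective post-expansion resolution, ignoring distortion and label size."""
    return diffraction_limit_nm / expansion_factor

print(effective_resolution_nm(200, 4))   # ~4x single expansion -> prints "50.0"
print(effective_resolution_nm(200, 8))   # ~8x iterative expansion -> prints "25.0"
```

Both values land inside the ~25–70 nm "effective" range in the resolution ladder, near but not at the 5–10 nm EM regime needed for unlabeled ultrastructure.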

CT's role: at cryogenic temperatures, CT images density, which is a direct proxy for ice vs vitrified glass (ice is less dense than either water or ethylene-glycol glass) (Alcor CT page; Alcor CT calibration). CT is excellent at answering "did the whole brain vitrify, or are there ice regions?" It is useless for "is the connectome traceable?" A brain can be perfectly vitrified (green on CT) and still have every synapse destroyed by pre-cooling ischemic damage. This is Aurelia's "CT can look fine; EM can be terrible" point, and it's correct.

3. What the providers actually scan

Alcor

Tomorrow Bio

Their 2026 roadmap explicitly acknowledges that CT-based quality control has hit its ceiling and that EM is needed: "Good CT scans are a necessary condition, but not a sufficient one."

Nectome

Sparks / Oregon Brain Preservation

4. Independent standards body

Currently there is exactly one adjudicator-of-record in this space: the Brain Preservation Foundation (BPF). Its technology prize and its prize rules (5 nm EM, every synapse traceable) are the de facto industry standard. The Large Mammal Prize was won by ASC in 2018 and has not been re-won by a different method. (BPF rules)

The BPF also evaluated third-party provided samples (e.g. Mikula lab, Mikula eval page).

Tangential: the Aspirational Neuroscience Prize (BPF prize rules) targets reading out information from a preserved brain, which presupposes quality preservation.

No other standards body / regulatory body has published a brain-preservation-quality standard that I can find. There is no FDA-type framework; preserved brains are not a medical product in the regulatory sense in the U.S. The Survival and Flourishing Fund has funded independent evaluations but is a funder rather than a standards body.

5. The "rat left out for X minutes then preserved" test — does it exist in public literature?

Nectome describes rat experiments internally as the basis for their 14-minute window. Their 2026 preprint's pig experiments are functionally this test at scale: they deliberately varied time-to-perfusion (approximately 18 min → cellular damage; <14 min → intact) and evaluated with volume EM. That's the test Aurelia wants other providers to run.

I cannot find any published vitrification-only (non-ASC) rat study where (a) the rat was deliberately subjected to a clinically realistic ischemic interval before cryopreservation, and (b) the brain was then evaluated by BPF-level EM. This is a real gap in the literature. The 2026 bioRxiv preprint "Ultrastructural and Histological Cryopreservation of Mammalian Brains by Vitrification" (biorxiv 702375) examines vitrification-only quality but does not center the ischemic interval as an experimental variable.

In other words: Aurelia's implicit challenge — "just do the experiment" — is well-posed, low-cost, and, as best I can tell, not yet done outside Nectome. (C5)

6. The epistemic trap: why "CT looks fine" can be misleading

Pulling the threads together:

  1. CT resolution (~0.5 mm) is 10,000–100,000× coarser than synaptic-cleft scales. A brain with an intact vascular tree that didn't ice over will always look fine on CT.
  2. Light microscopy (~200 nm) can reveal gross perfusion failures (white patches) but cannot reveal synaptic degradation.
  3. Synaptic-level damage from cytotoxic edema, from no-reflow, from partial CPA penetration, is visible only at EM. And it is visible as patchy, subregional damage — a single coronal slice can have beautifully preserved cortex next to white-matter tracts that are vacuolated or "concerningly indistinct." (LessWrong Nectome post has an image describing exactly this)
  4. CT cannot distinguish "perfectly vitrified and also perfectly preserved" from "perfectly vitrified and ultrastructurally destroyed." This is the epistemic trap: the easy test is insensitive to the failure mode that matters.
  5. EM requires destructive sampling. Every EM sample is a biopsy. The brain you want to keep intact for future revival is exactly the one you cannot fully evaluate. This is why model-organism (rat/pig) audits are essential — they're the epistemic lifeline for a service you can't properly test on paying customers.
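The scale gap in item 1 can be checked with one line of arithmetic (all numbers are the ones quoted in this post; this is a sanity check, not a measurement):

```python
# Scale gap between the cheap modality (CT) and the features that matter.
ct_resolution_nm = 0.5e6      # ~0.5 mm clinical CT resolution, in nm
synaptic_cleft_nm = 20        # synaptic cleft width
em_pixel_nm = 5               # BPF prize-rule EM pixel size

print(f"CT vs synaptic cleft: {ct_resolution_nm / synaptic_cleft_nm:,.0f}x")  # prints "CT vs synaptic cleft: 25,000x"
print(f"CT vs EM pixel:       {ct_resolution_nm / em_pixel_nm:,.0f}x")        # prints "CT vs EM pixel:       100,000x"
```

So the "10,000–100,000× coarser" range holds: four to five orders of magnitude separate what CT reports from what connectome preservation requires.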

This is the intellectual structure of the scan-modality problem: the evaluation modality you have cheap access to (CT) is not informative about the quantity you care about (connectome preservation). The modality that is informative (EM) is destructive and expensive, so you ration it by running it on model organisms instead of humans. If a provider isn't doing that model-organism auditing, they have no principled basis for claims about quality. They only have "didn't visibly fail on CT" — which is the positive cosmic-ray test, not the scientific hypothesis.

7. Summary
