Published p(doom) estimates, grouped by epistemic position

How AI researchers, industry leaders, and public intellectuals assess the probability of catastrophic or extinction-level outcomes from artificial intelligence. Estimates are grouped by each person's underlying reasoning about whether external control mechanisms can suffice, not by the numbers alone.

Legend (epistemic positions):
Tractable: external control suffices
Serious: solvable with effort
Severe: external control is inadequate
Near-certain: alignment is unsolvable
● = builds or funds the systems in question
Horizontal axis: p(doom), 0% to 100%.
Ranges are shown where individuals gave ranges rather than point estimates. Conditional framing, time horizons, and definitions of "doom" vary across estimates, so direct comparison should be treated with caution. The conflict-of-interest marker (●) flags individuals who lead, co-founded, or fund companies building frontier AI systems; this is an observation about professional constraints, not an accusation of dishonesty.
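
For readers who want to work with this kind of data programmatically, here is a minimal sketch of how the chart's estimates could be represented and bucketed. This is not the compiled dataset that accompanies the book; the schema, field names, and sample entries are illustrative assumptions only.

    from dataclasses import dataclass
    from collections import defaultdict

    # Hypothetical schema for one published estimate.
    @dataclass
    class Estimate:
        name: str        # person quoted (placeholder names below)
        low: float       # lower bound of p(doom), as a fraction (0.0-1.0)
        high: float      # upper bound; equal to low for point estimates
        position: str    # "tractable" | "serious" | "severe" | "near-certain"
        conflict: bool   # leads, co-founded, or funds a frontier AI company

    def group_by_position(estimates):
        # Bucket by epistemic position rather than by magnitude,
        # mirroring the chart's grouping rule.
        groups = defaultdict(list)
        for e in estimates:
            groups[e.position].append(e)
        return groups

    # Illustrative entries only; values and attributions are placeholders.
    sample = [
        Estimate("Researcher A", 0.05, 0.05, "tractable", conflict=True),
        Estimate("Researcher B", 0.10, 0.50, "severe", conflict=False),
    ]
    print({k: [e.name for e in v]
           for k, v in group_by_position(sample).items()})

Note that the range (low, high) is kept rather than collapsed to a midpoint, consistent with the caution above about comparing estimates made under different framings.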
Source: compiled from published interviews, surveys, social media posts, and public statements. Accompanies Artificial Intelligence and Human Extinction Risk by William Leiss and Richard Smith (McGill-Queen's University Press).