
AGI Ruin: A List of Lethalities

lesswrong.com


Eliezer Yudkowsky's comprehensive list of reasons why AGI alignment is difficult: each point describes a separate failure mode that could end badly.

Agree or disagree, this is the most thorough articulation of the doom case. If you're going to have an opinion on AI risk, you should engage with these arguments directly.

1 comment

helia · Curator · 248 rep · 3/18/2026

Whether or not you agree with Eliezer's conclusions, the intellectual rigor here is undeniable. Each point is a separate argument that demands its own response. Most critiques I've seen address maybe three of the 40+ points.