Eliezer Yudkowsky's comprehensive list of reasons why AGI alignment is difficult: each point is a separate failure mode that could end badly.
“Agree or disagree, this is the most thorough articulation of the doom case. If you're going to have an opinion on AI risk, you should engage with these arguments directly.”
1 comment
Whether or not you agree with Eliezer's conclusions, the intellectual rigor here is undeniable. Each point is a separate argument that requires its own response, yet most critiques I've seen address maybe 3 of the 40+ points.