Existential risk from artificial general intelligence
From The Art and Popular Culture Encyclopedia
(Redirected from Existential risk from advanced artificial intelligence)
Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial intelligence (AI) could someday result in human extinction or some other unrecoverable global catastrophe.
See also
- AI control problem
- Artificial general intelligence § Risk of human extinction
- AI takeover § Existential risk from artificial intelligence
- Effective altruism § Far future and global catastrophic risks
- Global catastrophic risk § Artificial intelligence
- Grey goo
- Intelligence explosion § Existential risk
- Lethal autonomous weapon
- Nick Bostrom § Superintelligence
- Roboethics § In popular culture
- Superintelligence § Potential danger to human survival
- Superintelligence: Paths, Dangers, Strategies
- Technological singularity § Uncertainty and risk
- Transhumanism § Existential risks
Unless indicated otherwise, the text in this article is either based on Wikipedia article "Existential risk from artificial general intelligence" or another language Wikipedia page thereof used under the terms of the GNU Free Documentation License; or on research by Jahsonic and friends. See Art and Popular Culture's copyright notice.