Artificial general intelligence is our greatest existential risk.
Ord’s book provides a taxonomy of humanity’s existential risks, divided into two broad categories: natural risks and man-made risks. Perhaps unsurprisingly, man-made risks substantially outweigh natural risks such as asteroids and volcanoes, though the impact is the same either way: existential risk is existential risk whatever its origin. The biggest man-made risk, in Ord’s opinion, is artificial general intelligence that is unaligned with humanity’s needs and wishes. He rates this risk at 1 in 10 over the next century. The answer, perhaps surprisingly, is not to prevent AGI’s development. Rather, it is to develop it cautiously, even iteratively, under the umbrella of appropriate international protocols.
You may also like to browse other AI books: https://www.thesentientrobot.com/category/ai/ai-books/