The existential risk lies in AI drives and goals.

Hawking rightly points out the distinct possibility that AI will become more intelligent than humans. Specifically, once AI can design better AI without human help, the speed at which it outpaces humans will explode. That this happens is a matter of probability; the risk is that AI may develop drives and goals that differ from those of humans. The mitigant lies in trying to ensure that AI drives and goals align with human ones. (This article derives from Hawking’s last book.)

Link to article:

Link to book:

You may also like to browse other AI books: