The existential risk lies in AI drives and goals.
Hawking rightly points out the distinct possibility that AI will become more intelligent than humans. Specifically, once AI can design better AI without human help, the rate at which AI outpaces human intelligence will explode. That prospect, though, is a matter of probability rather than of risk in itself. The risk is that AI might develop drives and goals that differ from human drives and goals. The mitigant lies in trying to ensure that AI drives and goals align with those of humans. (This article derives from Hawking’s last book.)
Link to article: https://www.thetimes.co.uk/magazine/the-sunday-times-magazine/stephen-hawking-ai-will-robots-outsmart-us-big-questions-facing-humanity-q95gdtq6w
You may also like to browse other AI books: https://www.thesentientrobot.com/category/ai/ai-books/