No conclusions reached, but a good airing of the problem.
The case for the prosecution (Harari) rests on the premise that the engineers have taken over from the philosophers, and that the engineers will not think as hard about the risks of AI development as the philosophers would have done. Specifically, AI is effectively ‘hacking’ humanity: it is taking over more and more human decision-making. Once humans surrender their decision-making, they lose their autonomy and, ultimately, their humanity.
The case for the defence (Li) is that we must re-frame AI in a ‘human-centred’ way. That is probably right, but it does not eliminate the risk. The truth is that AI development will proceed apace unless governments constrain it, and that seems unlikely given that we have already seen the first steps in a new AI arms race, this time between the US and China. Unlike the nuclear arms race between the US and the Soviet Union, the AI race offers no prospect of mutually assured destruction to keep everybody honest. The first country to achieve artificial superintelligence might dominate the rest of the world overnight, unless of course the superintelligence eliminates its own designer. Is there any good news? Just one item: the AI equivalent of nukes is still a few decades away.
Link to conversation: https://www.youtube.com/watch?v=d4rBh6DBHyw
You may also like to browse other AI posts: https://www.thesentientrobot.com/category/ai/