AI ethics remains a big unsolved problem.

The more one thinks about AI ethics, the more one realises that it is an intellectual quagmire. The trouble is that AI, in a very general sense, is becoming more and more autonomous. Without a value system, how does an autonomous AI decide how to judge difficult problems or conundrums? And if we do give it a value system, what will that look like? From time to time, this or that institute sets out a list of principles for AI. They are generally worthy and uncontentious. But there is a world of difference between such principles and practice on the ground. For the time being, AI ethics remains in the hands of those who run the big AI companies. And is that what we really want?

Link to book:

You may also like to browse other philosophy posts: