Parkin’s well-written article describes several methods by which AI researchers are attempting to furnish AI with an underpinning ethical system, an inbuilt sense of right and wrong. He cites research underway in the Czech Republic and the US. The approaches range from slow-burn iterative teaching methods, through more rapid unsupervised machine learning, to the simulation of emotions such as guilt. At the end, Parkin writes that ‘the moment at which a robot gains sentience is typically the moment at which we believe that we have ethical obligations toward our creations.’ An interesting proposition.

Link to article: