There are growing calls for AI decision-making to be transparent.

As AI becomes increasingly pervasive, we are realising that we cannot always predict its outputs or behaviour. This article discusses the problem of AI transparency, which is really two problems. The first is the black-box nature of AI: we know the inputs, but we cannot foresee the outputs. The second is understanding, after the event, why an algorithm reached the conclusion it did. In an age of accountability, being able to explain an AI's conclusion will probably become mandatory, which addresses the second problem. The first problem is less tractable. The idea that we should be able to foresee an AI's outputs before we let it loose makes little sense: the whole point of AI is to generate outputs or conclusions that we cannot generate on our own. Unpredictability is the price of that benefit; there is no such thing as a free lunch.
