Work goes on to get AI to open its black box.

This is a timely and wide-ranging article on a highly topical question: how do we know why an algorithm reached the conclusions it did? If we do not know, we risk accepting answers from the algorithm that are in fact wrong. Deep learning is the dominant approach in AI right now, and because deep-learning models are trained on large quantities of data and little else, that data must be of very high quality; otherwise the model's output will be unreliable. As AI moves out of the laboratory and into the real world, the practical consequences of such errors will become increasingly damaging. There may be no fully satisfactory answer to this conundrum: some problems that AI can crack will have explanations that are themselves beyond our understanding.
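As a minimal illustration of what "opening the black box" can mean in practice, here is a sketch of one common explainability technique, input-gradient saliency, applied to a tiny hand-built logistic model. The weights, inputs, and feature setup are illustrative assumptions, not drawn from the article:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, w, b):
    """Probability output of a one-layer logistic model."""
    return sigmoid(np.dot(w, x) + b)

def input_gradient(x, w, b):
    """Gradient of the output with respect to each input feature.

    For a logistic unit, d(sigmoid(w.x + b))/dx = p * (1 - p) * w,
    so features with large-magnitude gradients influenced this
    prediction most -- a crude 'explanation' of the model's answer.
    """
    p = predict(x, w, b)
    return p * (1.0 - p) * w

w = np.array([2.0, -1.0, 0.1])   # illustrative weights
b = 0.0
x = np.array([1.0, 1.0, 1.0])    # one input example

saliency = input_gradient(x, w, b)
print(np.argmax(np.abs(saliency)))  # index of the most influential feature
```

For this toy model the first feature dominates the prediction. Real deep networks use the same idea at scale, but, as the article notes, the resulting explanations can themselves be hard for humans to interpret.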

Link to article:

You may also like to browse other AI articles: