The Turing box may offer a standardised method of testing AI.

A team of researchers has devised a program to test algorithms for rogue results. They call it the Turing box. It is a software environment that presents a standardised dataset tailored to a given class of algorithm, e.g. face recognition. The algorithm under test crunches through the standardised dataset, and its results can then be analysed for bias, potential for harm and so on. As Big Data continues its explosive growth, regulation has become inevitable, but regulators will need the tools to do it. The Turing box might be part of that toolkit.
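To make the idea concrete, here is a minimal sketch (not the actual Turing box software) of the testing pattern described above: treat the model as a black box, run it over a fixed, standardised dataset, and compare outcome rates across groups to flag potential bias. The function name, the toy model and the scores are all hypothetical illustrations.

```python
# Sketch of black-box bias testing: feed a model a standardised dataset
# and measure how positive-outcome rates differ between groups.

def demographic_parity_gap(model, dataset):
    """Return the largest difference in positive-outcome rate between groups."""
    rates = {}
    for group, inputs in dataset.items():
        positives = sum(1 for x in inputs if model(x))
        rates[group] = positives / len(inputs)
    return max(rates.values()) - min(rates.values())

# Toy example: a "model" that approves applicants above a score threshold,
# tested on a standardised set of scores for two groups.
model = lambda score: score >= 0.5
dataset = {
    "group_a": [0.2, 0.6, 0.7, 0.9],   # 3 of 4 approved
    "group_b": [0.1, 0.3, 0.4, 0.8],   # 1 of 4 approved
}
print(demographic_parity_gap(model, dataset))  # prints 0.5
```

A large gap between groups does not prove the model is unfair, but it flags it for the kind of scrutiny a regulator's toolkit would apply.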

Link to article:

You may also like to browse other AI articles: