AI uses one method to do object recognition; humans use two.

Everyone is familiar with the problem of an AI recognising a turtle as a rifle. All it took was the alteration of a few pixels. But fiddling with objects in 3D can fool AI recognition algorithms as well. It’s a real problem. Researchers have also devised an illuminating experiment showing that, in certain circumstances, humans can be fooled too. They simply exposed people to the doctored images for a fraction of a second.
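For readers who want a concrete picture of how such a doctored image is made, here is a minimal sketch of one standard technique, the fast gradient sign method, written in PyTorch. The model, image tensor and epsilon value are placeholders rather than the specific attack described in the article, and this particular method shifts every pixel by an imperceptibly small amount rather than just a few, but it illustrates the same underlying idea: tiny changes, chosen to push the classifier's loss upwards, flip the label.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    # image: a (C, H, W) tensor with values in [0, 1]; model: any differentiable classifier.
    image = image.clone().detach().requires_grad_(True)
    logits = model(image.unsqueeze(0))                       # add a batch dimension
    loss = F.cross_entropy(logits, torch.tensor([true_label]))
    loss.backward()
    # Nudge every pixel a tiny step in the direction that most increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

To a human observer the perturbed image looks unchanged, yet the classifier's prediction can switch entirely.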

As a result, Ian Goodfellow at Apple believes that humans use two complementary methods to recognise images and objects. The first is, in principle, similar to an AI recognition algorithm, which is why a fraction of a second’s exposure to a doctored image can fool a human too. The second is a common-sense feedback mechanism. It draws on knowledge stored in the brain, including context. If the first method is about to misclassify an object, the second notes the discrepancy between the potential misclassification and that stored knowledge. It can then feed its correction back into the recognition process. Better get AI recognition up to speed on the same basis.
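As a rough illustration of the two-method idea, the toy sketch below pairs a fast, feed-forward classifier (method one) with a simple consistency check against stored knowledge about the surrounding context (method two). All of the names here, PLAUSIBLE_IN_CONTEXT, Recognition, recognise, are hypothetical and purely illustrative; they are not taken from the article or from any published system.

from dataclasses import dataclass

# Toy "stored knowledge": which labels are plausible in which context.
PLAUSIBLE_IN_CONTEXT = {
    "beach": {"turtle", "crab", "surfboard"},
    "armoury": {"rifle", "helmet", "crate"},
}

@dataclass
class Recognition:
    label: str
    confidence: float

def recognise(classifier, image, context):
    # Method one: a fast, feed-forward pass, like today's AI recognisers.
    first_pass = classifier(image)
    # Method two: compare the proposed label with stored knowledge about the context.
    plausible = PLAUSIBLE_IN_CONTEXT.get(context, {first_pass.label})
    if first_pass.label not in plausible:
        # Discrepancy noted: feed back a correction signal instead of accepting the label.
        return Recognition(label="re-examine: " + first_pass.label, confidence=0.0)
    return first_pass

In this sketch a "rifle" label on a beach would be flagged for a second look rather than accepted, which is the gist of the feedback mechanism described above.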

Link to article: https://www.newscientist.com/article/mg24232270-200-machine-mind-hack-the-new-threat-that-could-scupper-the-ai-revolution/

You may also like to browse other AI articles: https://www.thesentientrobot.com/category/ai/ai-articles/