How a machine might infer others’ behaviour by observation.

The paper opens with some interesting observations about how we understand other people: how we form, in our own minds, a model of other people’s minds. It is obviously a crude model, not a neuron-by-neuron facsimile of the other person’s mind, yet it is sophisticated enough to let us theorise about others’ intentions, beliefs and so on. There is another approach: ignore the theory and look at the output instead. In other words, focus on the actual behaviour and infer from that. This is the model the paper espouses as an answer to machine theory of mind. An interesting question: would it get past Searle’s Chinese Room argument?
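The behaviour-first idea can be sketched in code. The snippet below is a minimal, hypothetical illustration (not from the paper): an observer holds a few candidate goals an agent might have, each paired with an assumed action distribution, and infers the most likely goal purely from the actions it sees. The goal names, policies, and observation data are all invented for illustration; a uniform prior over goals is assumed.

```python
import math

def infer_goal(observed_actions, goal_policies):
    """Return a posterior over candidate goals given observed actions.

    goal_policies maps each candidate goal to a dict of action
    probabilities (the likelihood of each action if the agent held
    that goal). A uniform prior over goals is assumed.
    """
    log_post = {g: 0.0 for g in goal_policies}
    for action in observed_actions:
        for goal, policy in goal_policies.items():
            # Small floor probability avoids log(0) for unseen actions.
            log_post[goal] += math.log(policy.get(action, 1e-9))
    # Normalise the log posterior into probabilities.
    m = max(log_post.values())
    unnorm = {g: math.exp(lp - m) for g, lp in log_post.items()}
    z = sum(unnorm.values())
    return {g: p / z for g, p in unnorm.items()}

# Invented example: two candidate goals with different action tendencies.
policies = {
    "seek_food": {"left": 0.8, "right": 0.2},
    "seek_rest": {"left": 0.3, "right": 0.7},
}
posterior = infer_goal(["left", "left", "right", "left"], policies)
print(max(posterior, key=posterior.get))  # most likely goal given behaviour
```

The observer never models the agent’s inner workings, only the statistics of its behaviour, which is the spirit of the approach the paper takes.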

Link to paper:

You might also like to browse other neuroscience papers: