A few weeks ago I published a blog entry previewing my critical review of the mirror neuron theory of action understanding. The paper has been in the review process since that time, and I've finally received a bit of feedback. As requested, the feedback is from a mirror neuron/action understanding proponent. I find the comments extremely valuable because (i) I have been directed to additional papers that had previously eluded my attention, and (ii) while the review is highly critical of my manuscript -- comments like "disappointing," "astounding non sequitur," and "totally nonsense" were used -- I have come away with more confidence that my analysis is correct: there is nothing in the reviews that provides any challenge to my interpretation of the literature.
So I've been looking at the papers that I either hadn't read carefully enough or just plain missed. Here is one of them.
Fogassi et al. (2005) present very interesting data from mirror neurons in the inferior parietal lobule (IPL) of monkeys. Monkeys were trained either to grasp a piece of food and put it in their mouth, or to pick up an object and put it in a container. In some conditions, the container was next to the monkey's mouth, such that the mechanics of the movement were very similar between grasping-to-eat and grasping-to-place. In addition, a condition was also implemented in which the monkey grasped and placed a piece of food in the container, to control for visual and tactile differences between food items and objects. In all variants of the experiment, the authors report that some IPL cells preferentially responded to the goal of the action: grasping-to-eat vs. grasping-to-place. Again, this was true even when the placing action terminated in close proximity to the mouth and involved grasping a piece of food. Some of these cells also responded selectively and congruently during the observation of grasping-to-eat and grasping-to-place.
So both in perception and action, there are IPL cells that seem to be selective for the specific goal of an action rather than the sensory or motor features of an action -- a very intriguing result. Fogassi et al. discuss their motor findings in the context of "intentional chains," in which the different motor acts forming the entire action are linked in such a way that each act is facilitated in a predictive and goal-oriented fashion by the previous ones. They give an example of IPL neurons observed in another, unpublished study that respond to flexion of the forearm, have tactile receptive fields around the mouth, and respond during grasping actions made with the mouth, and they suggest that "these neurons appear to facilitate the mouth opening when an object is touched or grasped" (p. 665).
Regarding the action perception response properties of the IPL neurons in their study, Fogassi et al. conclude "that IPL mirror neurons, in addition to recognizing the goal of the observed motor act, discriminate identical motor acts according to the action in which these acts are embedded. Because the discriminated motor act is part of a chain leading to the final goal of the action, this neuronal property allows the monkey to predict the goal of the observed action and, thus, to 'read' the intention of the acting individual" (p. 666).
According to Fogassi et al., IPL mirror neurons code action goals and can "read the intention" of the acting individual. But is there a simpler explanation? Perhaps Fogassi et al.'s notion of predictive coding and their example of the IPL neuron with receptive fields on the face can provide such an explanation. Suppose the abstract goal of an action and/or its meaning is coded outside of the motor system. And suppose that Fogassi et al. are correct in that a complex motor act leads to some form of predictive coding of the acts still to come in the chain. The predictive coding in the motor system is now going to be different for the grasping-to-eat versus grasping-to-place actions, even though it is not coding "goals." For eating, there may be anticipatory opening of the mouth, salivation, perhaps even forward modeling of the expected somatosensory consequences of the action. For placing, there will be no mouth-related coding, but there may be other kinds of coding, such as expectations about the size, shape, or feel of the container, or the sound that will result when the object is placed in it. If cells in IPL differ in their sensitivity to feedback from these different systems, then it may look like the cells are coding goals, when in fact they are just getting differential feedback input from the forward models. Observing an action may activate this system with similar electrophysiological consequences, not because it is reading the intention of the actor, but simply because the sensory event is associated with particular motor acts.
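The feedback-sensitivity argument above can be made concrete with a toy simulation. This is entirely hypothetical -- the channel names, weights, and thresholds are illustrative assumptions, not a model from the paper -- but it shows how cells that merely weight two forward-model feedback channels differently would look "goal-selective" to an experimenter, without any goal code:

```python
import random

random.seed(0)

# Hypothetical assumption: each action generates forward-model feedback
# in two channels (mouth-related vs. container-related), not a goal signal.
FEEDBACK = {
    "grasp_to_eat":   {"mouth": 1.0, "container": 0.0},
    "grasp_to_place": {"mouth": 0.0, "container": 1.0},
}

def make_cell():
    # Each simulated IPL cell has random sensitivities to the two
    # feedback channels -- it never receives an abstract "goal" input.
    return {"mouth": random.random(), "container": random.random()}

def response(cell, action):
    # Firing rate = weighted sum of the feedback the action produces.
    fb = FEEDBACK[action]
    return cell["mouth"] * fb["mouth"] + cell["container"] * fb["container"]

cells = [make_cell() for _ in range(1000)]

# A cell *looks* goal-selective if its responses to the two actions differ
# substantially -- but here the difference comes entirely from feedback weights.
selective = [
    c for c in cells
    if abs(response(c, "grasp_to_eat") - response(c, "grasp_to_place")) > 0.5
]
print(f"{len(selective)} of {len(cells)} cells appear 'goal-selective'")
```

With uniformly random weights, a sizable minority of cells cross the (arbitrary) selectivity threshold, mimicking the population-level pattern of "goal coding" even though no cell codes anything beyond sensory-motor feedback.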
In short, a very interesting paper. Not proof, however, that mirror neurons code goals or intentions, or support mind reading.
Fogassi, L., et al. (2005). Parietal lobe: From action organization to intention understanding. Science, 308(5722), 662-667. doi:10.1126/science.1106138
5 comments:
I'm looking forward to reading your review; a thorough critical appraisal of this theory is something that's missing from the literature. Can you make a guess about when it might appear? I'm particularly keen to see if I can spot the astounding non sequitur.
Hopefully before the end of the year. I'm currently making some revisions/clarifications based on the comments. The plan was originally to have my article followed up by a rebuttal from a mirror neuron-action understanding proponent, but I guess the rebuttal is slow in coming. Too bad, because that would have been fun.
I think one of the biggest problems with the mirror neuron story, particularly in relation to action understanding, is the lack of a clearly defined theoretical statement or falsifiable prediction. For example, Ferrari et al. (2005) report that neurons in F5 respond to the sight of the experimenter grasping food objects with a tool (e.g., pliers). These neurons also fire when the monkey performs certain grasping actions of its own, earning them (the neurons) the designation, according to the authors, of 'mirror'. As the monkey has never learned to use tools, the authors conclude that these mirror neurons "extend action-understanding capacity to actions that do not strictly correspond to its motor representations" (p. 212).
To me, this is a case of having your cake and eating it too. Either mirror neurons facilitate action understanding by encoding sensory information in the motor frame of the monkey (predicting that no tool-specific mirror neurons should exist) or mirror neurons do not play a role in action understanding and are instead doing something else.
Take your pick.
That's exactly right. The problem is that mirror neurons are now ASSUMED to support action understanding -- e.g., read the discussion in the Fogassi et al. (2005) paper -- so every result is simply interpreted within that view. With respect to tools, if I remember correctly, one of the early MN papers showed that MNs did not respond to grasping with tools, which was one of the arguments for the system supporting action understanding. I'd have to re-read the original papers, though, to be sure about this.
Here is a heads up on a recent paper you might like (it is anti-mirror neurons), in case you haven't seen it yet:
Dinstein, I., Gardner, J. L., Jazayeri, M., & Heeger, D. J. (2008). Executed and Observed Movements Have Different Distributed Representations in Human aIPS. J. Neurosci., 28(44), 11231-11239. doi: 10.1523/JNEUROSCI.3585-08.2008.