Friday, May 21, 2010

Dissociation of mirror system activity and action understanding: Evidence from sign language

Sign language is arguably an ideal system to study in the context of the mirror neuron theory of action understanding, particularly its relation to language: you've got overt manual gestures, not those pesky obscured gestures associated with speech, and you have the ability to study the relation between action understanding in nonlinguistic (pantomime) and linguistic (sign language) domains. The latter is particularly interesting given recent claims that speech evolved from a manual gesture system (e.g., Rizzolatti & Arbib, 1998; Corballis, 2010). To date, evidence from the sign language literature has been less than supportive of a role for the mirror system in action understanding (Corina & Knapp, 2008; Knapp & Corina, 2010).

A recent study by Karen Emmorey and colleagues continues this non-supportive trend. Deaf signers and hearing non-signers were studied using fMRI during the perception of non-linguistic gestures (pantomimes) and linguistic gestures (American Sign Language verbs). Behaviorally, of course, both types of gestures are meaningful to signers, but only the pantomimes were meaningful to hearing non-signers.



The findings were unexpected from the perspective of the mirror neuron theory of action understanding, at least for the deaf group. The hearing subjects showed activation in the expected visual-related areas in the ventral occipito-temporal region as well as in the fronto-parietal "mirror system". This was true both for meaningful stimuli (pantomimes) and for stimuli that were non-meaningful to this group (ASL verbs). So "understanding" isn't what's driving the mirror system -- but we knew that already from previous work on viewing meaningless gestures. Surprisingly, the deaf signers did not activate the mirror system at all during the perception of pantomimes, and activated only a small focus in Broca's area during the perception of ASL verbs. Comprehension performance on pantomimes assessed after the scan was equivalent for the deaf and hearing groups.



It is unclear to me why the two groups should differ so dramatically, but it is clear that you don't need to activate the "mirror system" to understand actions. Emmorey et al. state it succinctly:

We conclude that the lack of activation within the MNS for deaf signers does not support an account of human communication that depends upon automatic sensorimotor resonance between perception and action.


References


Corballis, M. (2010). Mirror neurons and the evolution of language. Brain and Language, 112(1), 25-35. DOI: 10.1016/j.bandl.2009.02.002

Corina, D. P., & Knapp, H. P. (2008). Signed language and human action processing: Evidence for functional constraints on the human mirror-neuron system. Annals of the New York Academy of Sciences, 1145, 100-112. PMID: 19076392

Emmorey, K., Xu, J., Gannon, P., Goldin-Meadow, S., & Braun, A. (2010). CNS activation and regional connectivity during pantomime observation: No engagement of the mirror neuron system for deaf signers. NeuroImage, 49(1), 994-1005. PMID: 19679192

Knapp, H. P., & Corina, D. P. (2010). A human mirror neuron system for language: Perspectives from signed languages of the deaf. Brain and Language, 112(1), 36-43. PMID: 19576628

Rizzolatti, G., & Arbib, M. (1998). Language within our grasp. Trends in Neurosciences, 21(5), 188-194. DOI: 10.1016/S0166-2236(98)01260-0

2 comments:

P Kassel said...

My limited understanding of the literature (as a scholar/practitioner in Theatre, not science) is that mimed or pantomimed gestures tend to have a lower activation rate of the MNS (see PLoS Biology, March 2005, Vol. 3, Issue 3). Pre-cognitive action "understanding" (not a very apt word, I would say) is a result of INTENTIONAL gestures. It seems to me that we respond at a motor level to the threat/opportunity of a gesture (like a flinch when someone raises an open hand up and toward another). Since pantomime and ASL are symbolic systems, it stands to reason that the MNS would NOT highly activate. The study, it seems to me, substantiates that non-intentional/symbolic gestures are processed like language, and not via sensory-motor systems. It also seems to me that this does not negate the theory that language MAY have arisen from intentional gestures.

Greg Hickok said...

Thanks for your perspective. You mention that we respond at a motor level to the threat/opportunity of a gesture, like a flinch. I agree. What's interesting is that many (most) of these motor responses are not mirror actions. In fact, mirror actions would be maladaptive in these situations (you flinch to a raised hand rather than raising your own hand). It doesn't make sense that we would have to simulate the movement to understand it, then suppress the mirror movement and generate the appropriate non-mirror movement. That is, presumably we don't have to motor-simulate a ball flying toward our head to "understand" that we need to duck, so why would we have to simulate a hand moving toward our face?

I think your "limited" understanding is actually more accurate than the view promoted by many mirror neuron theorists. You are essentially suggesting, if I've got it right, a sensory-motor view in which "understanding" is not the kind of semantic understanding we think of in language but rather a sensory-motor association. Check out the work of Cecilia Heyes for a detailed proposal along these lines. Unfortunately, this is not the dominant view in the field.