I'm guest editing a special issue of Brain and Language on mirror neurons, so I've been poking through more of the mirror neuron-and-speech literature recently. Most of this literature concerns the semantics of action words. There's no shortage of papers showing that motor cortex activations to action words follow some degree of somatotopic organization. I don't think there is much evidence at all suggesting that these activations go beyond simple association -- the meaning of the word kick is associatively linked to foot actions -- but I want to raise a more general issue. Namely, even if we admit that motor representations are part of an action word's semantics, how much of the meaning of these words is actually explained by movement?
Let's take the sample items from a prominent study by Friedemann Pulvermuller et al. (2005, Functional links between motor and language systems. European Journal of Neuroscience, 21: 793-797). This is a TMS study that found faster lexical decision RTs to hand/arm words when hand/arm areas were stimulated, and faster RTs to leg words when leg areas were stimulated, etc.
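(For readers who like the logic spelled out, here is a toy sketch of the prediction that design is built around -- a crossover interaction between stimulation site and word category in lexical decision RTs. All numbers are fabricated for illustration; this is not the authors' analysis.)

```python
# Toy sketch of the congruency prediction in a TMS/lexical-decision design:
# RTs should be faster when the stimulated motor area matches the word's
# effector category. Data below are simulated, not from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 40  # hypothetical trials per cell

rt = {
    ("arm_site", "arm_word"): rng.normal(580, 40, n),
    ("arm_site", "leg_word"): rng.normal(610, 40, n),
    ("leg_site", "arm_word"): rng.normal(612, 40, n),
    ("leg_site", "leg_word"): rng.normal(585, 40, n),
}

# Interaction collapsed to a congruent-vs-incongruent comparison
congruent = np.concatenate([rt[("arm_site", "arm_word")], rt[("leg_site", "leg_word")]])
incongruent = np.concatenate([rt[("arm_site", "leg_word")], rt[("leg_site", "arm_word")]])
t, p = stats.ttest_ind(congruent, incongruent)
print(f"congruent mean = {congruent.mean():.0f} ms, incongruent mean = {incongruent.mean():.0f} ms")
print(f"site x word-category effect: t = {t:.2f}, p = {p:.4f}")
```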
So here are the sample arm words: fold, beat, grasp.
What does the motor code for the action FOLD look like? It depends on what you're folding. Motor codes associated with folding an empty sugar packet are going to be rather different from those associated with folding a bed sheet. And the meaning of fold is not restricted to hand/arm actions. I can fold my tongue, I can fold paper with my feet, I can fold paper by feeding it into a machine, and proteins can fold without my help at all. Clearly the meaning of fold is not dependent on any specific hand/arm actions.
The verb beat is no better. I can beat an egg with a fork or a hand-held blender, and I can beat an attacker with any number of actions (punching, hitting with a bat, kicking, sitting on).
Likewise, grasping can be achieved with a hand or a tool or a mind, as in "Language within our Grasp." Consider as well that if I reach out to grasp a glass, but the glass is damaged and shatters in my grip, I did not grasp the glass; I crushed it. So the same motor action can correspond to different conceptual actions.
The situation isn't much better for the leg action example words: kick, hike, step. Hike in particular is odd in that motorically it is identical to walk, the difference lying in the purpose of the excursion, not in the motor codes at all.
In other words, a specification of motor codes is not going to get you very far in capturing verb meaning, even for these cherry-picked examples.
Thursday, August 21, 2008
Mirror neurons, hubs, and puppet masters
Hubs are IN in cognitive neuroscience. Griffiths and Warren have their computational hub in the planum temporale, and Patterson et al. have their semantic hub in the anterior temporal lobe. Long before the hub we had the convergence zone of Antonio Damasio and the transmodal node of Marcel Mesulam, which Mesulam described as an "epicenter" (I like that term -- sounds very important). Despite the variation in terminology, the basic idea behind all these proposals is similar: there are regions in the brain that function to integrate information from different brain systems. This seems reasonable, and may even be right.
So what do hubs have to do with mirror neurons and puppet masters? Everything, according to a recent paper in Nature by Damasio and Meyer. These authors argue that mirror neurons are not themselves the basis for action understanding, but rather function as a "convergence-divergence zone" (CDZ) -- a "hub" -- which activates a broad network of areas involved in action perception, including oft-neglected sensory systems: "The [mirror] neurons ... are not so much like mirrors ..." Damasio and Meyer write, "They are more like puppet masters, pulling the strings of various memories" (p. 168).
Damasio and Meyer's essay provides a welcome and rational view on the possible function of mirror neurons in action understanding. I wonder, though, whether they are still giving mirror neurons too much credit. I fully agree with the claim that mirror neurons are part of a larger network involved in processing action-related information that is associatively linked via experience. But I question whether mirror neurons are the puppet masters. Maybe they are just a hand on the puppet.
Antonio Damasio, Kaspar Meyer (2008). Behind the looking-glass. Nature, 454(7201), 167-168. DOI: 10.1038/454167a
Friday, August 15, 2008
Lexical phonology and the posterior STG
More interesting stuff has recently been published in JoCN by William Graves and company. Previous work by this group, highlighted here on Talking Brains, found that a region of the pSTG showed frequency effects in naming. Now this group has used repetition priming with pseudowords to identify regions involved in lexical phonological access. Check it out:
The Left Posterior Superior Temporal Gyrus Participates Specifically in Accessing Lexical Phonology
William W. Graves (1), Thomas J. Grabowski (2), Sonya Mehta (2), and Prahlad Gupta (2)
(1) Medical College of Wisconsin, (2) University of Iowa
Reprint requests should be sent to William W. Graves, Medical College of Wisconsin, Neuro Lab, MEB 4550, 8701 Watertown Plank Road, Milwaukee, WI 53226, or via e-mail: wgraves@mcw.edu.
Impairments in phonological processing have been associated with damage to the region of the left posterior superior temporal gyrus (pSTG), but the extent to which this area supports phonological processing, independent of semantic processing, is less clear. We used repetition priming and neural repetition suppression during functional magnetic resonance imaging (fMRI) in an auditory pseudoword repetition task as a semantics-free model of lexical (whole-word) phonological access. Across six repetitions, we observed repetition priming in terms of decreased reaction time and repetition suppression in terms of reduced neural activity. An additional analysis aimed at sublexical phonology did not show significant effects in the areas where repetition suppression was observed. To test if these areas were relevant to real word production, we performed a conjunction analysis with data from a separate fMRI experiment which manipulated word frequency (a putative index of lexical phonological access) in picture naming. The left pSTG demonstrated significant effects independently in both experiments, suggesting that this area participates specifically in accessing lexical phonology.
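Two analysis ideas do the heavy lifting in that abstract: repetition suppression (activity that declines across the six repetitions) and a conjunction with an independent word-frequency effect from picture naming. Here is a rough sketch, on fabricated arrays, of how those two steps could be computed -- illustrative only, not the authors' actual pipeline.

```python
# Fabricated voxelwise data standing in for (1) repetition suppression,
# quantified here as a negative slope of activity across repetitions, and
# (2) a conjunction as the intersection of two thresholded effect maps.
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_reps = 1000, 6

# Fake per-repetition activity estimates (betas) for each voxel
betas = rng.normal(0, 1, (n_voxels, n_reps))
betas[:50] -= 0.4 * np.arange(n_reps)  # plant suppression in the first 50 voxels

# Per-voxel least-squares slope over repetitions; keep reliably negative slopes
reps = np.arange(n_reps)
slopes = ((betas - betas.mean(1, keepdims=True)) @ (reps - reps.mean())) / ((reps - reps.mean()) ** 2).sum()
suppression_mask = slopes < -0.2  # stand-in for a proper statistical threshold

# Conjunction with an independent experiment, e.g. a word-frequency effect
# map from picture naming: require both effects in the same voxel
frequency_mask = rng.random(n_voxels) < 0.1  # fake thresholded map
conjunction = suppression_mask & frequency_mask
print(f"suppression voxels: {suppression_mask.sum()}, conjunction voxels: {conjunction.sum()}")
```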
Thursday, August 7, 2008
Representation of nouns and verbs in the brain
Are nouns and verbs represented differently in the brain? A typical view is that they are, with nouns relying more on temporal cortices and verbs on frontal regions. A new study in the Journal of Cognitive Neuroscience by Lolly Tyler and colleagues suggests a different view, that cortical differentiation depends on the presence of grammatical markers associated with a noun or verb, not on the noun or verb itself.
Here's what they did: In an fMRI study, subjects read isolated nouns or verbs, and also read nouns or verbs presented in the context of a mini phrase that "marked" the form class of the noun/verb (e.g., a battle, you drive). As many words have both noun and verb uses (the shout was loud, I shout daily), the authors included noun-verb dominance as a parametric variation. The idea behind this manipulation is that if nouns and verbs are differentially represented in the brain, activity in these different areas should vary as a function of noun-verb dominance.
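For the curious, the parametric-variation logic works roughly like the toy sketch below: add a regressor whose height tracks each word's noun-verb dominance and ask whether a voxel's activity covaries with it. The values, and the omitted HRF convolution and nuisance regressors, are placeholders, not Tyler et al.'s actual model.

```python
# Toy illustration of a parametric modulator for noun-verb dominance.
# Trial-wise responses and dominance values are fabricated.
import numpy as np

rng = np.random.default_rng(2)
n_trials = 60
dominance = rng.uniform(-1, 1, n_trials)  # -1 = verb-dominant, +1 = noun-dominant

# Design matrix: intercept plus mean-centered dominance modulator
X = np.column_stack([np.ones(n_trials), dominance - dominance.mean()])

# Fake trial-wise responses from a voxel with NO dominance effect
y = rng.normal(1.0, 0.5, n_trials)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated dominance effect: {beta[1]:.3f} "
      "(near zero if the region does not care whether a word leans noun or verb)")
```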
The basic result was that reading the stem forms of nouns versus verbs produced no differential activation. More specifically, there were no brain areas where activity was modulated by a word's relative use (dominance) as a noun or verb. However, differential activity WAS elicited during reading of mini noun phrases compared to mini verb phrases. Mini verb phrases produced greater activation than mini noun phrases in the posterior middle temporal gyrus and superior temporal gyrus (see fig below). No regions were more active for noun phrases than verb phrases. Effectively then, verb phrases (but not verbs themselves) seem to activate a superset of the regions activated by noun phrases, with the additional load of verb phrase processing being carried primarily by posterior temporal regions, not frontal regions.
Tyler and company attribute the verb-phrase-preferring posterior temporal activation to grammatical processing: verb phrases carry additional grammatical load compared to noun phrases and therefore draw additional grammatical processing resources. This is a reasonable interpretation, but I'm not fully convinced. We (various Hickok & Poeppel pubs) have suggested that posterior temporal regions support primarily lexical-level processes, not grammatical functions, a view that is at odds with Tyler et al. I'm not fully convinced that we are right either (the evidence isn't all that strong one way or the other), but before abandoning the idea, I think we should consider other explanations for Tyler et al.'s result. For example, mini noun phrases and mini verb phrases of the sort they used differ in an important way. A noun phrase like the burn is basically a complete noun phrase, whereas the VP version, I burn, is incomplete; it's waiting for an additional argument, as in I burn toast. Maybe the posterior temporal activation reflects lexical-semantic access of possible "cloze" items: words that might finish the phrase.
In any case, the really important observation from this study is that, as Tyler et al. put it, "nouns and verbs qua nouns and verbs are not represented in separate regions of the brain" (p. 1386). And even when differences are found in phrasal contexts, there is no evidence that verbs are more dependent on frontal cortex and nouns more dependent on temporal cortex.
Wednesday, August 6, 2008
Eight Problems for the Mirror Neuron Theory of Action Understanding
Regular readers (and perhaps even occasional readers) of Talking Brains are well aware that I have been rather critical of the interpretation of mirror neurons that dominates the literature, namely, that they are the basis of action understanding. I've finally synthesized all of these comments into a critical review titled "Eight Problems for the Mirror Neuron Theory of Action Understanding in Monkeys and Humans." The paper has recently been submitted for publication in the Journal of Cognitive Neuroscience. The review process should be interesting; I'll post updates on the paper's progress.
In the meantime, by way of preview, here are the Eight Problems. If anyone is interested in getting a discussion going on any of these issues, I'd be happy to participate. Did I miss any problems? Am I wrong about the problems I listed? Just click on the "comments" link at the end of this entry and let me know!
1. There is no evidence in monkeys that mirror neurons support action understanding.
2. Action understanding can be achieved via non-mirror neuron mechanisms.
3. M1 contains mirror neurons.
4. The relation between macaque mirror neurons and the “mirror system” in humans is either non-parallel or undetermined.
5. Action understanding in humans dissociates from neurophysiological indices of the human “mirror system”.
6. Action understanding and action production dissociate.
7. Damage to the inferior frontal gyrus is not correlated with action understanding deficits.
8. Generalization of the mirror system to speech recognition fails on empirical grounds.
Tuesday, August 5, 2008
Post Docs at University of Chicago
The word on the street is that there are two post-doctoral positions in Steve Small's lab at the University of Chicago. Research focus for these positions includes the neural basis of normal language function or imaging and modeling of imitation-based aphasia therapy. For more info contact Steve Small: http://home.uchicago.edu/~slsmall/
Monday, August 4, 2008
New paper by Corianne Rogalsky on the anterior temporal lobe and sentence processing
Congratulations to Corianne Rogalsky, who successfully defended her dissertation here in the TB West lab last month. Corianne will be moving up the road to begin her post-doc in the Damasio lab at USC starting in the Fall. Corianne's dissertation work focused on the neural basis of sentence comprehension, and included studies (i) on the role of anterior temporal regions in syntactic vs. combinatorial semantic processes, (ii) on the relation between working memory and sentence comprehension in Broca's area, & (iii) on the relation between sentence comprehension and melody perception.
The first of these experiments has just been published in Cerebral Cortex (Rogalsky & Hickok, 2008, Selective attention to semantic and syntactic features modulates sentence processing networks in anterior temporal cortex, advance e-pub). The goal of the study was to try to use fMRI to distinguish between two competing hypotheses for the role of the ATL in sentence processing: one, that it is involved in syntactic computations (of some undetermined sort), and two, that it is involved in combinatorial semantic computations. These are difficult to separate because manipulations of sentence structure affect combinatorial semantics and vice versa. So we decided to leave the sentences alone and ask the subjects either to attend to structure (monitoring for occasional syntactic agreement errors) or to sentence-level meaning (monitoring for occasional semantic implausibilities). The idea is that attention to one or another aspect of the sentence will boost the gain of that process, which will be reflected in greater activity in the neural networks supporting it. We focused on the ATL region that has previously been shown to respond relatively selectively to sentences compared to non-sentence stimuli. To do this we ran a "localizer" in which we contrasted passive listening to sentences with passive listening to lists of words. This picked out a region in the ATL bilaterally (see black outline in the figure below). So is this region modulated more by attention to syntax, suggesting a syntactic function? Or by attention to sentence semantics, hinting at a more compositional semantic function?
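For those who want the two-step logic in concrete form, here is a schematic sketch with made-up numbers: define the "sentence ROI" from the independent localizer contrast, then test whether the attention conditions raise activity above passive listening within that ROI. This is illustrative only, not our actual analysis pipeline.

```python
# Schematic ROI analysis with fabricated data: an independent localizer
# defines the ROI, then task conditions are compared within it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_voxels, n_subjects = 500, 14  # made-up sizes

# Step 1: localizer t-values per voxel (sentences > word lists); threshold to define the ROI
localizer_t = rng.normal(0, 1, n_voxels)
localizer_t[:60] += 4.0  # plant a "sentence-preferring" cluster
roi = localizer_t > 3.0

# Step 2: independent task data, averaged over ROI voxels, one value per subject
passive = rng.normal(1.0, 0.3, n_subjects)
attend_syntax = passive + rng.normal(0.25, 0.2, n_subjects)
attend_semantics = passive + rng.normal(0.25, 0.2, n_subjects)

for name, cond in [("syntax", attend_syntax), ("semantics", attend_semantics)]:
    t, p = stats.ttest_rel(cond, passive)
    print(f"attend-{name} vs. passive in ROI ({roi.sum()} voxels): t = {t:.2f}, p = {p:.3f}")
```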
As it turns out, it wasn't an either-or result. A small portion of the left ATL "sentence ROI" was modulated by attention to sentence semantics but not syntax (see the anterior blue-shaded region inside the black outline). Most of the left ROI was equally modulated by both tasks; that is, activation was higher during the attention conditions (equally so) than during the passive listening condition that defined the ROI. The attention tasks did not modulate activity at all in the right hemisphere "sentence ROI": activation was no different in the attention tasks than in the passive listening condition.
What does this mean? Given that all of the left hemisphere sentence ROI was sensitive to the semantic attention task, I think we can conclude that the region isn't performing some "pure" syntactic computation. Rather, it seems to be involved in some kind of syntactic/semantic integrative function, at least in the left hemisphere. Of the two competing hypotheses, then, our result seems to favor the combinatorial semantic view.