Friday, October 31, 2008

Rock-Paper-Scissors and mirror neurons: Executed and observed movements have different distributed representations in human aIPS

"Shane" left a comment on a previous post about a recently published paper by David Heeger's group.

I have heard about this paper, but haven't had a chance to read it yet. Here is the abstract for a quick summary:

How similar are the representations of executed and observed hand movements in the human brain? We used functional magnetic resonance imaging (fMRI) and multivariate pattern classification analysis to compare spatial distributions of cortical activity in response to several observed and executed movements. Subjects played the rock-paper-scissors game against a videotaped opponent, freely choosing their movement on each trial and observing the opponent's hand movement after a short delay. The identities of executed movements were correctly classified from fMRI responses in several areas of motor cortex, observed movements were classified from responses in visual cortex, and both observed and executed movements were classified from responses in either left or right anterior intraparietal sulcus (aIPS). We interpret above chance classification as evidence for reproducible, distributed patterns of cortical activity that were unique for execution and/or observation of each movement. Responses in aIPS enabled accurate classification of movement identity within each modality (visual or motor), but did not enable accurate classification across modalities (i.e., decoding observed movements from a classifier trained on executed movements and vice versa). These results support theories regarding the central role of aIPS in the perception and execution of movements. However, the spatial pattern of activity for a particular observed movement was distinctly different from that for the same movement when executed, suggesting that observed and executed movements are mostly represented by distinctly different subpopulations of neurons in aIPS.
(Italics added.)

So this is an anti-mirror neuron paper. While I'm fully on-board with the anti-mirror neuron conclusion, I'm not sure the data really support this view. Again, I haven't yet read the paper and am basing my argument on the abstract only, so somebody correct me if I'm missing something. The study found that aIPS activated both for action production and action viewing. No surprise there. The interesting and novel contribution of this paper is that within the activated region, they found different patterns of activation for observation and execution of movements. From this they conclude that these two functions are supported by distinctly different subpopulations of neurons.

I like the methodology employed here, and I believe their findings do indicate that observation and execution involve non-identical populations of neurons, but I don't think this is strong evidence against a mirror neuron view. Here's why: Suppose there are three types of cells in aIPS:

1. sensory-only cells
2. motor-only cells
3. sensory-motor cells (mirror neurons)

There is evidence for this kind of distribution of cells in parietal sensory-motor areas. Suppose further that action understanding is achieved by cell type #3, the mirror neurons. If this were true, the ROI as a whole would activate for both action observation and action execution, as the study found, but sensory vs. motor events would nonetheless activate non-identical populations of cells within the ROI: observation would activate cell types 1 & 3, whereas execution would activate cell types 2 & 3. This difference may be enough to allow for above chance pattern classification that is based on non-mirror neurons within the ROI.
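Here's a toy simulation to make the point concrete. Everything in it is invented for illustration: the cell counts, the noise level, and the response profiles. The simulated ROI contains all three cell types, and the mirror cells respond congruently (same profile for observing and executing a movement). A simple nearest-centroid classifier decodes movement identity within each modality, yet the mean population patterns for the same movement observed vs. executed are only weakly correlated, because only the mirror subpopulation is shared between the two activation patterns:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sens, n_mot, n_mir = 40, 40, 20      # invented cell counts for each type
moves = 3                               # rock, paper, scissors

# Movement-specific response profiles for each subpopulation; mirror cells
# respond congruently, i.e. the same profile for observation and execution.
sens = rng.normal(size=(moves, n_sens))
mot = rng.normal(size=(moves, n_mot))
mir = rng.normal(size=(moves, n_mir))

def pattern(m, modality, noise=0.3):
    """One noisy population vector for movement m in one modality."""
    s = sens[m] if modality == "obs" else np.zeros(n_sens)
    mo = mot[m] if modality == "exe" else np.zeros(n_mot)
    v = np.concatenate([s, mo, mir[m]])
    return v + rng.normal(0.0, noise, size=v.size)

def accuracy(modality, n_train=20, n_test=50):
    """Nearest-centroid decoding of movement identity within one modality."""
    cent = np.stack([np.mean([pattern(m, modality) for _ in range(n_train)], axis=0)
                     for m in range(moves)])
    hits = sum(int(np.argmin(((cent - pattern(m, modality)) ** 2).sum(axis=1)) == m)
               for m in rng.integers(moves, size=n_test))
    return hits / n_test

def obs_exe_correlation(m, n_avg=20):
    """Correlation of mean observed vs. executed patterns for the SAME movement."""
    a = np.mean([pattern(m, "obs") for _ in range(n_avg)], axis=0)
    b = np.mean([pattern(m, "exe") for _ in range(n_avg)], axis=0)
    return float(np.corrcoef(a, b)[0, 1])

print("within-modality decoding (obs):", accuracy("obs"))
print("within-modality decoding (exe):", accuracy("exe"))
print("same-movement obs/exe pattern correlation:", obs_exe_correlation(0))
```

The point of the sketch: high within-modality classification plus distinct obs/exe patterns can coexist with a fully congruent mirror population, so the classification result by itself can't rule mirror neurons in or out.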

So if I've got the basics of the study correct (based on the abstract), this is not strong evidence against mirror neurons supporting action understanding. Neither is it evidence FOR mirror neurons, however.

I. Dinstein, J. L. Gardner, M. Jazayeri, D. J. Heeger (2008). Executed and observed movements have different distributed representations in human aIPS. Journal of Neuroscience, 28(44), 11231-11239. DOI: 10.1523/JNEUROSCI.3585-08.2008

Tuesday, October 28, 2008

Action comprehension in non-human primates: motor simulation or inferential reasoning?

I noticed this forthcoming paper in the same issue of TICS as the Grodzinsky & Santi paper that David highlighted in a previous post. Looks interesting!

Action comprehension in non-human primates: motor simulation or inferential reasoning?

Justin N. Wood [1] and Marc D. Hauser [2]

[1] University of Southern California, Department of Psychology, 3620 South McClintock Ave, Los Angeles, CA 90089, USA
[2] Harvard University, Department of Psychology, 33 Kirkland Street, Cambridge, MA 02138, USA

Available online 23 October 2008.

Some argue that action comprehension is intimately connected with the observer’s own motor capacities, whereas others argue that action comprehension depends on non-motor inferential mechanisms. We address this debate by reviewing comparative studies that license four conclusions: monkeys and apes extract the meaning of an action (i) by going beyond the surface properties of actions, attributing goals and intentions to the agent; (ii) by using environmental information to infer when actions are rational; (iii) by making predictions about an agent’s goal, and the most probable action to obtain the goal given environmental constraints; (iv) in situations in which they are physiologically incapable of producing the actions. Motor theories are, thus, insufficient to account for primate action comprehension in the absence of inferential mechanisms.

Monday, October 27, 2008

Mirror neuron review reviews, to see?

Hey, Greg, are the reviews of your mirror neuron review juicy enough that they are worth posting on the blog? I don't know if it's legitimate to post reviews of a journal article on a blog. Are there guidelines about this sort of thing?

However, given what's at stake, and given how much influence the wretched mirror neuron action perception hypothesis has, it would be both intellectually helpful and sociologically fun to see such reviews and pick at them.

I'm certainly willing -- if we can agree that it's ethically defensible -- to post some of the more outrageous reviews that I've gotten. For example, that I "understand virtually nothing". Man, that hurt my feelings! Anyway, this might not be doable, although it would be a whole lot of fun.

It would be particularly interesting to find out how your paper is treated in subsequent rounds of peer review and the editorial process.

Maybe, in fact, the occasional readers of this blog would comment more if it meant posting one of the more bizarre reviews that they have gotten in their own research... :-) Nothing like a little levity to balance the pain of negative reviews.

"The battle for Broca’s region" -- lost again

There is a new paper in the journal Trends in Cognitive Sciences that, once again, examines the role of Broca's area and language processing.

The battle for Broca’s region, by Yosef Grodzinsky and Andrea Santi, summarizes four positions about the role of Broca's area and concludes -- who would have thunk it? -- that the 'syntactic movement account' is the best account to date.

Grodzinsky and Santi distinguish between four positions: an "action perception" model (advocated, for example, by Arbib and Rizzolatti), a "working memory" model (Caplan), a "syntactic complexity" model (Goodglass, Friederici), and a "syntactic movement" model (supported by the authors). I think one can quibble about the attributions, but by and large this is a fair characterization of the various positions. The former two are of the "general" variety; the latter two are language-specific. The authors examine these positions in light of data from deficit-lesion correlation and neuroimaging evidence, basically from fMRI. They argue, reasonably, that a single-model account is likely to be underspecified. That being said, they conclude that the recent evidence is most consistent with a "syntactic movement" model of Broca's area.

I have rather mixed feelings about this brief review/perspective piece. On the one hand, it is perfectly reasonable for Yosef to work hard at supporting the view that he has fought for for a long time. Indeed, attempting to identify a particular kind of computation that's executed in a chunk of brain tissue seems like a sensible goal. On the other hand, I do think it's really time to go further now, and I wish these authors would lead the way toward a more biologically sophisticated perspective.

The fact that their view is too simple is something they state repeatedly. "Importantly, Broca’s region might well be multi-functional." And: "Indeed, Broca’s region might be multifunctional." And so on. Well, yes, then let's actually entertain that...

The fact that we have to make careful distinctions between areas 44, 45, 47, and the frontal operculum is now well established. Yosef has supported important progress in this area, and Friederici and her colleagues as well as Amunts and her colleagues have provided impressive evidence for functionally relevant subdivisions. Moreover, even for a single piece of tissue a la Brodmann, the probability is very high that more than one operation is executed. Obviously... Look, take Brodmann area 17 (primary visual cortex, striate cortex). Beyond subdivisions into ocular dominance columns, orientation pinwheels, and -- obviously -- six differentiated layers of cortical tissue, there are further functionally critical subdivisions into cytochrome oxidase blobs, etc. We are perfectly comfortable attributing multiple functions to local pieces of tissue in the visual system. Yet we persist in trying to find surprisingly monolithic interpretations of a chunk of brain as extensive as Broca's region. Now admittedly we don't have the necessary cell biological analysis of this part of the brain; nevertheless, isn't it time we come up with some more nuanced hypotheses about what gets calculated in these various different parts of the frontal lobe?

Inquiring minds want to know. I'm pretty frustrated with the state-of-the-art in this area of research. Please, somebody, figure this piece of brain out!

Y. Grodzinsky, A. Santi (2008). The battle for Broca’s region. Trends in Cognitive Sciences. DOI: 10.1016/j.tics.2008.09.001

Saturday, October 25, 2008

A new place for YOU to publish: LCP-CogNeuro

Dear Talking Brains readers,

as of right now, there is a new place to send your papers if they are cognitive neuroscience of language papers. Please see the announcement below -- and then send me your best work.

Lorraine (Lolly) Tyler remains the Editor of LCP. I will be the editor for cognitive neuroscience of language.

There are not that many outlets for theoretically motivated and biologically serious research on speech/language, so please take advantage of this opportunity to publish your best work.


Language and Cognitive Processes -- New Special Section Announcement!

In 2009 LCP will broaden its remit by publishing two additional issues a year devoted to the Cognitive Neuroscience of Language. The development of cognitive neuroscience methodologies has significantly broadened the empirical scope of experimental language studies. Both hemodynamic imaging and electrophysiological approaches provide new perspectives on the representation and processing of language, and add important constraints on the development of theoretical accounts of language function.

In light of the strong interest in and growing influence of these new tools LCP will publish two issues a year on the Cognitive Neuroscience of Language. All types of articles will be considered, including reviews, whose submission is encouraged. Submissions should exemplify the subject in its most straightforward sense: linking good cognitive science and good neuroscience to answer key questions about the nature of language and cognition.

Manuscripts should be submitted through the journal's Scholar One website. When submitting, please select "Cognitive Neuroscience of Language" from the manuscript type drop-down menu.

Peer Review Integrity
All published research articles in this journal have undergone rigorous peer review, based on initial editor screening and refereeing by independent expert referees.

Friday, October 24, 2008

Mirror neurons in the inferior parietal lobe: Are they really "goal" selective?

A few weeks ago I published a blog entry previewing my critical review of the mirror neuron theory of action understanding. The paper has been in the review process since that time, and I've finally received a bit of feedback. As requested, the feedback is from a mirror neuron/action understanding proponent. I find the comments extremely valuable because (i) I have been directed to additional papers that had eluded my attention previously, and (ii) while the review is highly critical of my manuscript -- comments like "disappointing," "astounding non sequitur," and "totally nonsense" were used -- I have come away with more confidence that my analysis is correct: there is nothing in the reviews that provides any challenge to my interpretation of the literature.

So I've been looking at the papers that I either hadn't read carefully enough or just plain missed. Here is one of them.

Fogassi et al. (2005) present very interesting data from mirror neurons in the inferior parietal lobule (IPL) of monkeys. Monkeys were trained either to grasp a piece of food and put it in its (the monkey's) mouth, or to pick up an object and put it in a container. In some conditions, the container was next to the monkey's mouth such that the mechanics of the movement were very similar between grasping-to-eat and grasping-to-place. In addition, a condition was also implemented in which the monkey grasped and placed a piece of food in the container, to control for differences between food items and objects, both visually and tactilely. In all variants of the experiment, the authors report that some IPL cells preferentially responded to the goal of the action: grasping-to-eat vs. grasping-to-place. Again, this was true even when the placing action terminated in close proximity to the mouth and involved grasping a piece of food. Some of these cells also responded selectively and congruently during the observation of grasping-to-eat and grasping-to-place.

So both in perception and action, there are IPL cells that seem to be selective for the specific goal of an action rather than the sensory or motor features of an action -- a very intriguing result. Fogassi et al. discuss their motor findings in the context of “intentional chains” in which different motor acts forming the entire action are linked in such a way that each act is facilitated in a predictive and goal-oriented fashion by the previous ones. They give an example of IPL neurons observed in another unpublished study that respond to flexion of the forearm, have tactile receptive fields around the mouth, and respond during grasping actions of the mouth and suggest that, “these neurons appear to facilitate the mouth opening when an object is touched or grasped” (p. 665).

Regarding the action perception response properties of the IPL neurons in their study, Fogassi et al. conclude, “that IPL mirror neurons, in addition to recognizing the goal of the observed motor act, discriminate identical motor acts according to the action in which these acts are embedded. Because the discriminated motor act is part of a chain leading to the final goal of the action, this neuronal property allows the monkey to predict the goal of the observed action and, thus, to ‘read’ the intention of the acting individual” (p. 666).

According to Fogassi et al., IPL mirror neurons code action goals and can “read the intention” of the acting individual. But is there a simpler explanation? Perhaps Fogassi et al.’s notion of predictive coding and their example of the IPL neuron with receptive fields on the face can provide such an explanation. Suppose the abstract goal of an action and/or its meaning is coded outside of the motor system. And suppose that Fogassi et al. are correct in that a complex motor act leads to some form of predictive coding. The predictive coding in the motor system is now going to be different for grasping-to-eat versus grasping-to-place actions, even though it is not coding "goals." For eating, there may be anticipatory opening of the mouth, salivation, perhaps even forward modeling of the expected somatosensory consequences of the action. For placing, there will be no mouth-related coding, but there may be other kinds of coding such as expectations about the size, shape, or feel of the container, or the sound that will result if the object is placed in it. If cells in IPL differ in their sensitivity to feedback from these different systems, then it may look like the cells are coding goals, when in fact they are just getting differential feedback input from the forward models. Observing an action may activate this system with similar electrophysiological consequences, not because it is reading the intention of the actor, but simply because the sensory event is associated with particular motor acts.
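The logic of this alternative explanation fits in a few lines of code. In this toy (the feedback channels and the weights are pure inventions), two simulated IPL cells receive graded input from two hypothetical forward-model feedback channels, mouth-related and container-related predictions. Neither cell represents a goal; each just weights the channels differently:

```python
import numpy as np

# Hypothetical forward-model feedback channels (invented for illustration):
# index 0 = mouth-related predictions, index 1 = container-related predictions.
feedback = {
    "grasp_to_eat":   np.array([1.0, 0.0]),
    "grasp_to_place": np.array([0.0, 1.0]),
}

# Two simulated IPL cells that differ only in how strongly they weight each
# feedback channel; neither one codes the abstract goal of the action.
weights = np.array([
    [0.9, 0.1],   # cell A: mostly mouth-related feedback
    [0.1, 0.9],   # cell B: mostly container-related feedback
])

for action, f in feedback.items():
    cell_a, cell_b = weights @ f
    print(f"{action}: cell A = {cell_a:.2f}, cell B = {cell_b:.2f}")
```

Cell A ends up looking "eating-selective" and cell B "placing-selective," even though each is merely tracking a sensory prediction -- which is the pattern that gets interpreted as goal coding.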

In short, very interesting paper. Not proof, however, that mirror neurons code goals or intentions, or support mind reading.

L. Fogassi, et al. (2005). Parietal lobe: From action organization to intention understanding. Science, 308(5722), 662-667. DOI: 10.1126/science.1106138

Auditory Cognitive Neuroscience Society

This is a new organization/conference that looks really interesting. This year's meeting is January 9-10 in Tucson. I've already marked my calendar and plan to go. See you there! A note from the organizers is below.


Mark your calendars!

The 3rd annual conference of the Auditory Cognitive Neuroscience Society (ACNS; formerly the Auditory Cognitive Science Society) is scheduled for Friday-Saturday January 9-10, 2009 on the campus of the University of Arizona (Tucson, AZ). This conference is designed to bring together researchers from psychoacoustics, neuroscience, speech perception, speech production, audiology, speech pathology, psychology, linguistics, computer science etc. to discuss topics related to the perception of complex sounds such as speech and music.

The conference is free and open to everyone*. The talks are organized to provide plenty of opportunity for interaction and exchange.

More details (topics, speakers, location, etc.) will be forthcoming soon. Be sure to check out the ACNS website periodically for updates. For now, please put a note in your favorite digital or analog calendar. If you have any questions, comments or suggestions, please feel free to contact either of us.

*Please note that CEUs will not be available for this year's attendees.

Andrew & Julie

Andrew J. Lotto
Speech, Language & Hearing Sciences
University of Arizona

Julie M. Liss
Department of Speech & Hearing Science
Arizona State University

Monday, October 20, 2008

Post-doctoral position at the Center for Cognitive Neuroscience, University of California, Irvine (Hickok lab)

We are looking to fill a post doc position in my lab (Laboratory for Cognitive Neuroscience, A.K.A. Talking Brains West). The project involves fMRI studies of the planum temporale including sensory-motor aspects of speech, visual speech, spatial hearing, and sequence learning, among other domains. I'm excited about this project and hope to get a solid and productive team in place.

Official ad below:

School of Social Sciences
Department of Cognitive Sciences
Center for Cognitive Neuroscience
Position: Postdoctoral Scholar
The Department of Cognitive Sciences and the Center for Cognitive Neuroscience announce a Postdoctoral Scholar position in the Laboratory for Cognitive Brain Research.

A postdoctoral position is available in the laboratory of Dr. Greg Hickok at the University of California, Irvine. The postdoctoral fellow will collaborate in NIH-funded research investigating the functional anatomy of language and complementary pursuits. Ongoing research projects in the lab employ a variety of methods, including traditional behavioral and neuropsychological studies, as well as techniques such as fMRI, EEG/MEG, and TMS. Opportunities also exist for collaboration with other cognitive science faculty and with faculty in the Center for Cognitive Neuroscience.

Requirements – Candidates should have a Ph.D. in a relevant discipline and experience with functional MRI, preferably in the area of speech and language. Familiarity with computational and statistical methods for neuroimaging (e.g. MatLab, SPM, AFNI) is advantageous.

The appointment would begin as early as December 2008 for a period of 3 years and is contingent on receipt of project funding. Salary will be commensurate with experience, minimum salary: $36,360.

Application Procedure - Candidates should send a CV, a letter of interest (including research skills), and a list of 3 references to the address below:

Lisette Isenberg
Department of Cognitive Sciences and Center for Cognitive Neuroscience
3151 Social Science Plaza
University of California, Irvine
Irvine, CA 92697-5100

The University of California, Irvine is an equal opportunity employer committed to excellence through diversity.
Post: 10/20/08, Close: 11/30/08

Thursday, October 16, 2008

Speech recognition and the left hemisphere: Task matters!

I fully agree with Dorte Hessler's assessment that left hemisphere damage can produce significant "problems to identify or discriminate speech sounds in the absence of hearing deficits." But here is the critical point that David and I have been harping on since 2000: the ability to explicitly identify or discriminate speech sounds (e.g., say whether /ba/ & /pa/ are the same or different) on the one hand, and the ability to implicitly discriminate speech sounds (e.g., recognize that bear refers to a forest animal while pear is a kind of fruit) on the other hand, are two different things. While it is a priori reasonable to try to study speech sound perception by "isolating" that process in a syllable discrimination task (ba-pa, same or different?), it turns out that by doing so, we end up measuring something completely different from normal speech sound processing as it is used in everyday auditory comprehension. Given that our goal is to understand how speech is processed in ecologically valid situations -- no one claims to be studying the neural basis of the ability to make same-different judgments about nonsense syllables; they claim to be studying "speech perception" -- it follows that syllable discrimination tasks are invalid measures of speech sound processing. I believe the use of syllable discrimination tasks in speech research has impeded progress in understanding its neural basis.

Let me explain.

Some of the same studies that Dorte correctly noted as providing evidence for deficits on syllable discrimination tasks following left hemisphere damage also show that the ability to perform syllable discrimination double-dissociates from the ability to comprehend words. Here is a graph from a study by Sheila Blumstein showing auditory comprehension scores plotted on the y-axis and three categories of performance on syllable discrimination & syllable identification tasks on the x-axis. The plus and minus signs indicate preserved or impaired performance, respectively. The letters in the graph correspond to clinical aphasic categories (B=Broca's, W=Wernicke's). Notice the red arrows. They point to one patient who has the worst auditory comprehension score in the sample -- a Wernicke's aphasic, not surprisingly -- yet who is performing well on syllable discrimination/identification tasks, and to another patient who has the best auditory comprehension score in the sample -- a Broca's aphasic, not surprisingly -- yet who fails on both syllable discrimination and identification. A nice double-dissociation.

But that's only two patients, and the measure of auditory comprehension is coarse in that it uses sentence-level as well as word-level performance. Fair enough. So here are data from Miceli et al. comparing auditory comprehension of words (4AFC with phonemic and semantic foils) and syllable discrimination. Notice that 19 patients are pathological on syllable discrimination yet normal on auditory comprehension, and 9 patients show the reverse pattern. More double dissociations.

Where are the lesions that are producing the deficits on syllable discrimination versus auditory comprehension? According to Basso et al., syllable discrimination deficits are most strongly associated with non-fluent aphasia, which is most strongly associated with frontal lesions. According to a more recent study by Caplan et al., the inferior parietal lobe is also a critical site. Notice that these regions have also been implicated in sensory-motor aspects of speech, including verbal working memory. This contrasts with work on the neural basis of auditory comprehension deficits (e.g., Bates et al.), which implicates the posterior temporal lobe (STG/MTG).

Some case study contrasts from Caplan et al. underline the point. On the left is a patient who has a lesion in the inferior frontal lobe and who was classified as a Broca's aphasic. On the right, a patient with a temporal lobe lesion and a classification of Wernicke's aphasia. By definition, the Broca's patient will have better auditory comprehension than the Wernicke's patient. Yet look at the syllable discrimination scores of these patients. The Broca case is performing at 72% correct, whereas the Wernicke case is at 90%. Again, the patient with better comprehension is performing poorly on syllable discrimination, showing that syllable discrimination isn't measuring normal speech sound processing.

To my reading, the data are unequivocal. Syllable discrimination tasks tap a different set of processes from auditory comprehension tasks, even though both tasks ostensibly involve the processing of speech sounds. How can this be? Here's an explanation. Syllable discrimination involves activating a phonological representation of one syllable, maintaining that activation while the phonological representation of a second syllable is activated, then comparing the two, and then making a decision. Deficits on this task could arise from activating the phonological representations, maintaining both representations simultaneously in short term memory, comparing the two representations, or in making the decision. Only one of these processes is clearly shared by an auditory comprehension task, namely, activating the phonological representations. I suggest that the deficits in syllable discrimination following left hemisphere damage, particularly left frontal damage, result from one or more of the non-shared components of the task. The fact that the network implicated in syllable discrimination (fronto-parietal regions) is largely identical to that which is independently implicated in phonological working memory supports this claim. If, on the other hand, a patient had a significant disruption of the sensory system that activates phonological representations -- e.g., patients with bilateral lesions and word deafness -- then such a disruption should be evident on both discrimination and comprehension tasks.
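The task analysis above can be summarized in a toy process model (the stage names are schematic, not anatomical claims). Comprehension and discrimination share only the "activate phonological representation" stage; discrimination adds maintenance, comparison, and decision stages, while comprehension adds lexical-semantic access. A "lesion" to any non-shared stage produces exactly the dissociation pattern seen in the patient data:

```python
# Stages assumed by each task (schematic; names invented for illustration).
DISCRIMINATION = ["activate", "maintain", "compare", "decide"]
COMPREHENSION = ["activate", "map_to_meaning"]   # lexical-semantic access

def task_impaired(task_stages, lesioned_stages):
    """A task fails if any stage it depends on is damaged."""
    return any(stage in lesioned_stages for stage in task_stages)

# Fronto-parietal damage hitting phonological working memory ("maintain"):
# discrimination fails, word comprehension is spared.
print(task_impaired(DISCRIMINATION, {"maintain"}),
      task_impaired(COMPREHENSION, {"maintain"}))        # True False

# Damage to lexical-semantic access: the reverse dissociation --
# comprehension fails while discrimination is spared.
print(task_impaired(DISCRIMINATION, {"map_to_meaning"}),
      task_impaired(COMPREHENSION, {"map_to_meaning"}))  # False True

# Bilateral damage to the shared "activate" stage, as in word deafness:
# both tasks fail.
print(task_impaired(DISCRIMINATION, {"activate"}),
      task_impaired(COMPREHENSION, {"activate"}))        # True True
```

The double dissociation falls out of the shared-vs-unshared stage structure alone, with no need to posit separate "speech perception" systems for the two tasks.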

It is hard for us to give up syllable discrimination as our bread and butter task in speech research. It seems so rigorous and controlled. But the empirical facts show that it doesn't work. In the neuroscience branch of speech research, the task produces invalid and misleading results (if our goal is to understand speech perception under ecologically valid listening conditions). It's time to move on.


Basso, A., Casati, G. & Vignolo, L. A. (1977). Phonemic identification defects in aphasia. Cortex, 13, 84-95.

Bates, E., Wilson, S. M., Saygin, A. P., Dick, F., Sereno, M. I., Knight, R. T. & Dronkers, N. F. (2003). Voxel-based lesion-symptom mapping. Nature Neuroscience. DOI: 10.1038/nn1050

Blumstein, S., Cooper, W., Zurif, E. & Caramazza, A. (1977). The perception and production of voice-onset time in aphasia. Neuropsychologia, 15(3), 371-372. DOI: 10.1016/0028-3932(77)90089-6

Caplan, D., Gow, D. & Makris, N. (1995). Analysis of lesions by MRI in stroke patients with acoustic-phonetic processing deficits. Neurology, 45, 293-298.

Hickok, G. & Poeppel, D. (2000). Towards a functional neuroanatomy of speech perception. Trends in Cognitive Sciences, 4(4), 131-138. DOI: 10.1016/S1364-6613(00)01463-7

Hickok, G. & Poeppel, D. (2007). The cortical organization of speech processing. Nature Reviews Neuroscience, 8(5), 393-402. DOI: 10.1038/nrn2113

Miceli, G., Gainotti, G., Caltagirone, C. & Masullo, C. (1980). Some aspects of phonological impairment in aphasia. Brain and Language, 11(1), 159-169. DOI: 10.1016/0093-934X(80)90117-0

More on speech recognition and the left hemisphere: Important comment from Dorte Hessler

Dorte Hessler has posted an important comment on my entry Speech recognition and the left hemisphere. Indeed, the comment is thoughtful, thorough, and important enough that I have decided to repost it here as its own entry. This is exactly the kind of informal (but informed) discussion that I hoped the blog would support. I'll post a response in a new entry shortly.


dörte hessler said...

Hi again,

First, thanks to Greg for your response, which made me think for quite a while -- especially your comment about atypical cortical organization. So I went through the articles on phonemic processing deficits that I had read before, because I seemed to remember that there was a substantial number of patients with unilateral damage.
But to clarify some things first: Of course my earlier comment was about the acute stroke study -- sorry, I should have mentioned that more clearly. Furthermore, I definitely did not want to claim that the right hemisphere does not play any role in phonological processing; I think there is a vast amount of evidence that it is in fact involved (some of it cited in the comments above). However, I did want to claim (and still do) that damage solely to the left hemisphere can lead to word sound deafness (as defined, e.g., by Franklin, 1989): that is, problems identifying or discriminating speech sounds in the absence of hearing deficits. I quote Sue Franklin here because she looked at this phenomenon in the light of aphasia and not as a pure syndrome, which, indeed, is very rare. But looking at aphasic cases, quite a lot of aphasic patients with left hemisphere damage have shown problems in discriminating or identifying speech sounds. I won't quote the single case studies here, but will limit myself to larger group studies. I will mention four of them in particular, which investigated not only patients with a proven disorder in auditory discrimination but a broader aphasic group:

- Basso, Casati & Vignolo (1977): Of 50 aphasic patients (with unilateral left hemisphere damage), only 13 (26%) were unimpaired in a phoneme identification task (concerning voice onset time); the remaining 37 patients showed impaired performance.

The three other studies are concerned with minimal pair discrimination

- Varney & Benton (1979): Of 39 aphasic patients (with unilateral left hemisphere damage), 10 (~25.6%) showed defective performance on the minimal pair discrimination task and the other 29 showed normal performance.

- Miceli, Gainotti, Caltagirone & Masullo (1980): Of 66 aphasic patients (with unilateral left hemisphere damage), 34 (~51.5%) showed pathological performance on a phoneme discrimination task. The other 32 scored normally.

- Varney (1984): Of 80 aphasic patients (with unilateral left hemisphere damage), 14 (17.5%) showed defective performance on the same task as used in Varney & Benton; the remainder were unimpaired.

To sum up: 235 aphasic patients (all with unilateral left hemisphere damage) took part in these studies, and 95 of them (~40%) were impaired on tasks probing phonemic processing (discrimination and identification tasks).
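The arithmetic behind that summary can be checked in a couple of lines; a minimal Python sketch using the counts quoted above:

```python
# Tallying the four group studies cited above:
# (study, total aphasic patients, number impaired on phonemic tasks)
studies = [
    ("Basso et al., 1977", 50, 37),
    ("Varney & Benton, 1979", 39, 10),
    ("Miceli et al., 1980", 66, 34),
    ("Varney, 1984", 80, 14),
]

total = sum(n for _, n, _ in studies)        # all patients across studies
impaired = sum(k for _, _, k in studies)     # those impaired on phonemic tasks
pct = round(100 * impaired / total, 1)
print(total, impaired, pct)  # 235 95 40.4
```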

To me this underlines the notion that damage to the left hemisphere alone is definitely sufficient to cause a substantial problem in the recognition/processing of speech sounds!
These results of course differ considerably from those of the acute stroke study of Rogalsky and colleagues (2008), which I have claimed is due to the material used in that study.

Franklin, S. (1989). Dissociations in auditory word comprehension: evidence from nine fluent aphasic patients. Aphasiology 3(3), 189-207.

Basso, A., Casati, G. & Vignolo, L. A. (1977). Phonemic identification defects in aphasia. Cortex, 13, 84-95.

Varney, N.R. & Benton, A.L. (1979). Phonemic discrimination and aural comprehension among aphasic patients. Journal of Clinical Neuropsychology 1(2), 65-73.

Miceli, G., Gainotti, G., Caltagirone, C. & Masullo, C. (1980). Some aspects of phonological impairment in aphasia. Brain and Language 11, 159-169.

Varney, N.R. (1984). Phonemic imperception in aphasia. Brain and Language 21, 85-94.

Rogalsky, C., Pitz, E., Hillis, A. E. & Hickok, G. (2008). Auditory word comprehension impairment in acute stroke: Relative contribution of phonemic versus semantic factors. Brain and Language 107(2), 167-169.

Tuesday, October 14, 2008

Does Parkinson's disease impair action verb processing?

I've been slogging through the evidence typically cited as support for an embodied cognition view of language processing. Much of this research focuses on processing action verbs, which according to the "EC" view, critically involve motor representations as part of their semantics. In previous posts I've discussed studies that use TMS, ALS, and stroke data to make the case for an embodied view of action word processing. None of it, I argued, was particularly compelling.

Here we have a close look at a recent paper involving Parkinson's disease (PD) patients (Boulenger et al., 2008). These authors used a lexical-decision, masked, identity-priming paradigm: primes were identical to targets (= identity-priming) and were presented rapidly, followed by a mask which precludes conscious awareness of the prime (= masked); priming effects were assessed relative to a control condition where the "prime" was a string of consonants. Priming was compared for visually presented nouns and verbs in PD patients both on and off medication. This is an interesting design because it allowed the team to assess processing when the basal ganglia circuit was relatively functional compared to when it was not. Control subjects were also tested.

So what did they find? On medication, PD patients showed priming for both nouns and verbs (middle panel in figure below), whereas off medication, PD patients only showed priming for nouns. Since nouns primed even off medication, this argues against generalized attentional, perceptual, etc. explanations of the failure of verbs to prime off medication.

(White circles are nouns, black circles are verbs.)

This is a pretty cool result and is interpreted as "compelling evidence that processing lexico-semantic information about action words depends on the integrity of the motor system" (p. 743). I beg to differ.

First, PD is NOT limited to the motor system. In fact, Boulenger et al. point out that "deficits in cognitive functions and subtle semantic language deficits have also been reported" (p. 744). It is impossible to know whether the failure to show priming effects is strictly a matter of motor dysfunction, or whether it stems from disruption of other functions supported by basal ganglia circuits. This is a point similar to one I raised in connection with ALS: just because a prominent symptom of a disease is motor, doesn't mean that the motor deficit is causing all the symptoms.

Second, depending on what you focus on in the reaction time data, the pattern of results could either support a verb processing deficit or a noun processing deficit. Have a look at the top "Patients OFF" panel in the graph above. While it is clear that nouns are priming and verbs are not, it is also the case that RTs to nouns are quite a bit slower than RTs to verbs in the control, unprimed condition (left side of graph). This is puzzling given that ON medication, the PD patients showed no RT difference to the same nouns vs. the same verbs. So one way to look at the result is that being off medication causes a selective deficit in noun processing relative to verb processing!
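To make the two competing readings concrete, here is a toy sketch with made-up reaction times (the actual values appear only in Boulenger et al.'s figure and are not reproduced here):

```python
# Hypothetical reaction times (ms) for the OFF-medication condition;
# these numbers are illustrative only, not taken from the paper.
rt = {
    "nouns": {"unprimed": 720, "primed": 680},
    "verbs": {"unprimed": 660, "primed": 660},
}

# Reading 1: the priming effect (unprimed minus primed RT)
priming = {w: c["unprimed"] - c["primed"] for w, c in rt.items()}
# -> nouns prime (40 ms), verbs do not (0 ms): looks like a verb deficit

# Reading 2: raw unprimed RTs
raw_gap = rt["nouns"]["unprimed"] - rt["verbs"]["unprimed"]
# -> unprimed nouns are 60 ms slower than unprimed verbs: looks like a noun deficit

print(priming, raw_gap)
```

The same data set supports either conclusion depending on which contrast you treat as the measure of lexico-semantic processing.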

How do we reconcile these two interpretations? I don't know. It depends on which measure (raw recognition time vs. priming) is a better measure of "lexico-semantic" processing. Sometimes it helps to re-state the findings without all the interpretive baggage. Assuming that basal ganglia dysfunction is exaggerated when the PD patients are off Levodopa medication, the present study leads to the following conclusions:

1. Basal ganglia dysfunction reduces the masked-primer induced pre-activation of essential parts of the cerebral networks for verb (but not noun) processing. (This is a paraphrase of the underlying mechanism of masked priming as provided by the authors on page 744.)

2. Basal ganglia dysfunction slows the ability to recognize nouns relative to verbs in a lexical decision task.

Maybe priming-related pre-activation is a critical function of lexico-semantic networks, but it seems to me that slowed recognition is a bad thing as well, maybe even worse. Still, I don't know whether PD causes noun or verb problems (or both).

More generally, I'm beginning to wonder what lexical decision effects in these sorts of studies are actually telling us. On the one hand, it is possible to argue that lexical decision provides a highly sensitive measure of aspects of language processing, some of which are automatic and unconscious. In this sense, it seems like a good task. On the other hand, we don't normally walk around making lexical decisions on visually presented words. Does this task involve meta-linguistic processes that aren't normally involved in noun and verb processing? Is it a modality-specific (i.e., reading-related) effect? Note that modality-specific verb deficits have been reported (Hillis, et al. 2002).

So while the findings are certainly interesting, and add to the large literature demonstrating dissociations between noun and verb processing, the Boulenger et al. paper is not "compelling evidence" for motor involvement in action verb processing. We don't know that it is the motor system that is causing the problem, the results suggest the possibility of a selective noun deficit, and it is not clear what the task is measuring.


Boulenger, V., Mechtouff, L., Thobois, S., Broussolle, E., Jeannerod, M. & Nazir, T.A. (2008). Word processing in Parkinson's disease is impaired for action verbs but not for concrete nouns. Neuropsychologia, 46(2), 743-756. DOI: 10.1016/j.neuropsychologia.2007.10.007

Hillis, A.E., Tuffiash, E. & Caramazza, A. (2002). Modality-Specific Deterioration in Naming Verbs in Nonfluent Primary Progressive Aphasia. Journal of Cognitive Neuroscience, 14(7), 1099-1108. DOI: 10.1162/089892902320474544

Saturday, October 11, 2008

Sad news: Martha Burton

For those of you who have not yet heard this terrible news ... Martha Burton, who made terrific contributions to psycholinguistics and neurolinguistics, focusing on speech perception and spoken word recognition, died a few weeks ago, after recently being diagnosed with adenocarcinoma. Martha was a treasured colleague.

Martha worked at Brown University, principally with Sheila Blumstein, and was then on the faculty at Penn State before moving to the Department of Neurology at the University of Maryland School of Medicine in Baltimore. Both her psycholinguistic work -- see her important papers with her collaborator Sheila Blumstein -- and her neurolinguistic work using imaging -- with Sheila as well as Steve Small -- are examples of enthusiastically embracing new methodologies while never losing sight of the theory and the psycholinguistics.

Martha published a series of papers in Brain and Language and the Journal of Cognitive Neuroscience that everyone working in spoken language processing should read. It's awful that she died so suddenly, and so young.  

Monday, October 6, 2008

Speech recognition and the left hemisphere

In contrast to the traditional view that all aspects of speech processing are strongly left dominant, we have argued in several papers that the recognition of speech sounds is supported by auditory regions in both hemispheres (Hickok & Poeppel, 2000, 2004, 2007). The evidence for this view comes from neuropsychological studies:

1. Chronic damage to the left superior temporal gyrus alone is not associated with auditory comprehension deficits or speech perception deficits, but instead is associated with speech production deficits (conduction aphasia).

2. More extensive chronic damage to the left temporal lobe IS associated with auditory comprehension deficits (e.g., in Wernicke's aphasia), but these deficits are not predominantly caused by difficulties in perceiving speech sounds. Instead, post-phonemic deficits appear to account for the majority of the auditory comprehension deficit in aphasia. Evidence for this conclusion comes from the fact that such patients tend to make more semantic- than phonemic-based errors on auditory word-to-picture matching tests with semantic and phonemic foils.

3. In contrast to the relatively minimal effects of unilateral damage on speech sound recognition, damage to superior temporal regions in both hemispheres IS associated with a profound deficit in perceiving speech sounds (e.g., word deafness).

One criticism of this body of neuropsychological data is that it involves patients with chronic lesions, and therefore the possibility of compensatory reorganization of speech recognition processes. For example, it could be that speech recognition is strongly left dominant in the intact brain, but following chronic left hemisphere injury, the right hemisphere gradually assumes speech recognition function.

Two new studies argue against this view. Both examine the effects of acute left hemisphere disruption on auditory word comprehension; one uses Wada methods, the other acute stroke. Both find that (i) auditory word-level comprehension deficits tend to be relatively mild, and (ii) they primarily reflect post-phonemic deficits.

Evidence from Wada procedures

This study (Hickok, et al. 2008) looked at the ability of patients undergoing clinically indicated Wada procedures to comprehend auditorily presented words with either their left or right hemispheres anesthetized. Patients listened to a stimulus word and were asked to point to the matching picture from a four-picture array that included the target, a semantic foil, a phonemic foil, and an unrelated foil. The basic results are provided in the figure below. Overall, errors were more common following left hemisphere anesthesia, but when errors occurred, they tended to be semantic (>2:1 ratio). Notice that the overall phonemic error rate with left disruption is less than 10%. This indicates that even acute disruption of left hemisphere function does not profoundly affect speech sound recognition during auditory comprehension.

Evidence from acute stroke

One could argue that evidence from Wada procedures may not generalize to the population as a whole, given that Wada patients have a pre-existing neurological condition. Studies of patients in the acute phase of stroke avoid this potential complication. In a collaborative study with Argye Hillis at Johns Hopkins, we examined the auditory comprehension abilities of 289 patients who were within 24 hours of hospital admission for stroke (Rogalsky, et al. 2008). For this study we used a picture verification paradigm: subjects heard a word and were shown a picture that either matched the word, was a semantic foil, or was a phonemic foil. Subjects were asked to decide if the word and picture matched. We used a signal detection-based analysis to determine how well subjects were discriminating matches from non-matches. The top panel of the figure below shows the distribution of patients across the different performance levels. Notice that only a very small fraction of the entire group (~7%) scored worse than 80% correct overall. The bottom panel shows how well subjects in each of these performance bins could discriminate targets from semantic versus phonemic foils (y-axis = A-prime scores, which approximate % correct). At every performance level, semantic confusions dominated (i.e., scores are lower for semantic foils). Within the bottom 7% of subjects -- those who scored worse than 80% correct -- performance was better on phonemic foils than semantic foils by 10 percentage points (72% vs. 62% correct, respectively), and well above chance (50%).
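For readers unfamiliar with it, A-prime is a standard nonparametric signal-detection index of discriminability that, like percent correct, runs from 0.5 (chance) to 1.0 (perfect). A minimal sketch of the usual formula, with hypothetical input rates for illustration (not values from the study):

```python
def a_prime(hit_rate: float, fa_rate: float) -> float:
    """Nonparametric sensitivity index A'.

    Ranges from 0.5 (chance) to 1.0 (perfect discrimination);
    roughly comparable to proportion correct.
    """
    h, f = hit_rate, fa_rate
    if h == f:
        return 0.5  # no discrimination
    if h > f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    # symmetric form for below-chance performance
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

# Hypothetical illustration: a subject who accepts 90% of matching
# pictures but also wrongly accepts 30% of foils
print(round(a_prime(0.9, 0.3), 3))  # 0.881
```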

We conclude that the processing of speech sounds during auditory word comprehension is not profoundly impaired by left hemisphere damage either in chronic or acute stages of insult. This in turn indicates that both hemispheres of the intact brain have the capacity for processing speech sounds during comprehension. In other words, speech sound processing is bilaterally organized to some extent. This stands in sharp contrast to the impact of unilateral lesions on speech production, which can lead to profound deficits.


Hickok, G., Okada, K., Barr, W., Pa, J., Rogalsky, C., Donnelly, K., Barde, L. & Grant, A. (in press). Bilateral capacity for speech sound processing in auditory comprehension: Evidence from Wada procedures. Brain and Language.

Hickok, G. & Poeppel, D. (2000). Towards a functional neuroanatomy of speech perception. Trends in Cognitive Sciences, 4(4), 131-138. DOI: 10.1016/S1364-6613(00)01463-7

Hickok, G. & Poeppel, D. (2004). Dorsal and ventral streams: a framework for understanding aspects of the functional anatomy of language. Cognition, 92(1-2), 67-99. DOI: 10.1016/j.cognition.2003.10.011

Hickok, G. & Poeppel, D. (2007). The cortical organization of speech processing. Nature Reviews Neuroscience, 8(5), 393-402. DOI: 10.1038/nrn2113

Rogalsky, C., Pitz, E., Hillis, A. & Hickok, G. (2008). Auditory word comprehension impairment in acute stroke: Relative contribution of phonemic versus semantic factors. Brain and Language. DOI: 10.1016/j.bandl.2008.08.003

Thursday, October 2, 2008

More evidence for a sensory-motor interface in the posterior planum temporale region (area Spt)

We have argued previously that the posterior-medial planum temporale is not part of auditory cortex, but instead is multisensory and subserves sensory-motor integration, much like sensory-motor integration areas in the parietal lobe (Pa & Hickok, 2008). (See also a previous post on the topic.) A new paper by Novraj Dhanjal, Richard Wise, and colleagues in J. Neurosci. provides additional evidence for this view.

In an fMRI experiment, they had subjects produce speech (either counting or producing propositional utterances) or, in other conditions, make silent repetitive jaw or tongue movements. One would expect a sensory-motor integration area to show somatosensory responses during articulation, since this is useful information for movement control (it helps to know where your articulators are), and indeed, sensory-motor integration areas in the parietal lobe of human and non-human primates, such as saccade-related area LIP and the parietal reach region (PRR), show somatosensory responses.

So does area Spt show somatosensory responses? Yes, according to this new study. The medial planum temporale activated for both the speech articulation and mouth movement conditions, whereas other speech-related areas such as the lateral STG/STS only activated for the speech conditions.

So now we have direct evidence that this portion of the planum temporale is multisensory, as our hypothesis predicts. This is also consistent with evidence from animal studies which have found that the posterior supratemporal plane in monkeys shows multisensory properties (Hackett, et al., 2007).


Dhanjal, N.S., Handunnetthi, L., Patel, M.C. & Wise, R.J.S. (2008). Perceptual Systems Controlling Speech Production. Journal of Neuroscience, 28(40), 9969-9975. DOI: 10.1523/JNEUROSCI.2607-08.2008

Hackett, T.A., De La Mothe, L.A., Ulbert, I., Karmos, G., Smiley, J. & Schroeder, C.E. (2007). Multisensory convergence in auditory cortex, II. Thalamocortical connections of the caudal superior temporal plane. The Journal of Comparative Neurology, 502(6), 924-952. DOI: 10.1002/cne.21326

Pa, J. & Hickok, G. (2008). A parietal–temporal sensory–motor integration area for the human vocal tract: Evidence from an fMRI study of skilled musicians. Neuropsychologia, 46(1), 362-368. DOI: 10.1016/j.neuropsychologia.2007.06.024

Bridging the Gap between Blogs and the Academy

Like it or not, blogs are now part of academia. The benefits of blogs are obvious: fast dissemination of information, the opportunity for informal public debate without the delay or constraints of journal publication or professional conferences, and the possibility of reaching a wide audience including the lay public. A major drawback, however, is quality control. Anyone can post anything they like without peer review or any real constraints. To be sure, there are many legitimate and highly informative blogs out there, but there are also a lot of pseudoscience blogs. How does a reader know that the blogger is legit? How do you find the signal in all the noise? (Reminds me of working with fMRI data; maybe we can get Karl Friston or Robert Cox to work on an analysis package.)

A few approaches to quality control exist already. For example, ResearchBlogging.org restricts its posts to serious commentary on peer-reviewed research. Potential bloggers have to register with/apply to ResearchBlogging and get approved for inclusion (TalkingBrains is a registered blog on this site), and in general this has worked well. Some academic institutions have recognized the power of blogs and have institutionalized blogging, in one form or another, within their academic community. Stanford's Blog Directory is one example.

A recently published article by Batts et al. (2008) in PLoS Biology provides an interesting discussion of the role of blogging in academia, as well as some examples and ideas about how to provide quality control and ultimately "bridge the gap between blogs and the academy." It's worth a look.

Batts, S.A., Anthis, N.J. & Smith, T.C. (2008). Advancing Science through Conversations: Bridging the Gap between Blogs and the Academy. PLoS Biology, 6(9). DOI: 10.1371/journal.pbio.0060240

Wednesday, October 1, 2008

Edward Klima (1931-2008)

A major contributor to linguistics and the neuroscience of language, Edward Klima, died last week. I had the good fortune to collaborate with and learn from Ed for more than a decade. Two things stand out for me in Ed's approach to science. He didn't put up with any bullshit or impreciseness in a theory -- to use the Klima vernacular -- and he wasn't shy about telling you if you were shoveling something. At the same time, he was probably the most intellectually unbiased and open-minded scholar I've met. Unlike a lot of people in our field (and I assume science generally), he had no theoretical agenda, would listen to and thoughtfully consider the merits of any idea (retaining the good bits and tossing the B.S.), and was more than willing to change his position in light of new evidence. He simply wanted to figure out how language worked. I count Ed as one of the most influential people in my academic career.

These days, Ed is probably best known for his work on sign language. With his wife and academic complement, Ursula Bellugi, Ed published dozens of papers, and a couple of very influential and award-winning books, on the structure of sign language and its neural basis. Before Ed's work, sign language was largely thought to be an unstructured system of pantomimic gestures. It is now known that signed languages are highly structured systems that share many grammatical properties with spoken languages. But before his well-known work on signed languages, Ed had already made a mark on the field of language science. His work on English negation in the 1960s was in the vanguard of early research in the budding field of generative linguistics. He was also among the first to recognize that grammaticality judgements are not always cut and dried -- an issue that is prominent in generative linguistic research today.

Ed left his mark in other ways. He was founder of UCSD's Department of Linguistics, Adjunct Professor and co-director (with Ursula Bellugi) of the Laboratory for Cognitive Neuroscience at the Salk Institute, and mentored dozens of Ph.D. students, post docs, and young investigators, myself included, who have made their own contributions to the science of language.

I personally will miss Ed's wry, and sometimes wicked (but always entertaining) sense of humor, and his giddy excitement about a clever idea (his or not), a novel experimental result, or a beautiful piece of art. Most of all I will miss his friendship, depth of knowledge, and guidance. To be a close colleague of Ed's (and Ursula's) is to be part of a family. I am proud to be a part of that family. We'll miss you Ed!