Friday, December 19, 2008

More thoughts on conduction aphasia

Several observations suggest to me a connection between conduction aphasia and disruption to our proposed sensory-motor integration area Spt.

1. Spt is located in the posterior planum temporale region. The lesion distribution in conduction aphasia seems to be centered on this same location (Baldo et al., 2008).

2. Spt activity is modulated by word length (Okada et al., 2003) and frequency, and has been implicated in accessing lexical phonology (Graves et al., 2008). Conduction aphasics commit predominantly phonemic errors in their output, and these errors increase with longer, less frequent words.

3. Spt is not speech specific in that tonal/melodic tasks also activate this region (Hickok et al., 2003). Conduction aphasics appear to have deficits that also affect tonal processing (Strub & Gardner, 1974).

The idea is that this sensory-motor circuit is critical in supporting sensory guidance of speech output, and that such guidance is most critical for phonemically complicated words/phrases and/or for low-frequency words or for items with little or no semantic constraint (e.g., non-words, phrases like "no ifs, ands, or buts"). If a word is short or used frequently, the claim goes, its motor representation can be activated as a chunk rather than programmed syllable by syllable.

One problem, raised by Alfonso Caramazza in the form of a question after a talk I gave, is that sometimes conduction aphasics get stuck on the simplest of words. Case in point: in my talk, I showed an example of such an aphasic who was trying to come up with the word cup. He showed the typical conduite d'approche, "it's a tup, no it isn't... it's a top... no..." etc. Alfonso justifiably noted that conduction aphasics shouldn't have trouble with such simple words if the damaged sensory-motor circuit wasn't needed as critically in these cases.

So here is a sketch of a possible explanation. I'd love to hear your thoughts. There is a difference between repetition and naming: repetition shows the typical length/frequency effects, whereas naming doesn't. Here's why:

In repetition, a common word like cup can be recognized/understood and then semantic representations can drive the activation of the motor speech pattern. As the word gets more phonologically complicated or less semantically constrained, this route becomes less and less reliable and the sensory-motor system is required. This is the classic explanation invoked to account for why conduction aphasics sometimes paraphrase in their repetition, a view that has gained some recent support (Baldo et al., 2008).

In naming, the main hang-up in conduction aphasia is in trying to access the phonological word form. Since the lesion in conduction aphasia typically involves the STG, systems involved in representing word forms are likely partially compromised, leading to more frequent access failures. Further, in lexical-phonological access, simple, high-frequency forms that share a lot of neighbors (cup, pup, cut, cop, cope ...) will actually lead to more difficulty because of the increased competition.
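To make the neighborhood point concrete, here is a minimal sketch of how one might count a word's lexical neighbors. This is a toy illustration, not a model from any of the cited papers; the word list and the one-substitution (orthographic) definition of "neighbor" are simplifying assumptions of my own.

```python
# Toy illustration: short, high-frequency forms tend to live in dense
# phonological neighborhoods, hence face more competition during access.
lexicon = ["cup", "pup", "cut", "cop", "cap", "cub", "sup",
           "hippopotamus", "rhinoceros", "catastrophe"]

def neighbors(word, lexicon):
    """Words of the same length differing in exactly one segment
    (a crude orthographic stand-in for phonological neighbors)."""
    return [w for w in lexicon
            if w != word and len(w) == len(word)
            and sum(a != b for a, b in zip(w, word)) == 1]

for w in ["cup", "hippopotamus"]:
    print(w, "->", neighbors(w, lexicon))
# cup -> ['pup', 'cut', 'cop', 'cap', 'cub', 'sup']
# hippopotamus -> []
```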


References

Baldo JV, Klostermann EC, and Dronkers NF. It's either a cook or a baker: patients with conduction aphasia get the gist but lose the trace. Brain Lang 105: 134-140, 2008.

William W. Graves, Thomas J. Grabowski, Sonya Mehta, Prahlad Gupta (2008). The Left Posterior Superior Temporal Gyrus Participates Specifically in Accessing Lexical Phonology. Journal of Cognitive Neuroscience, 20 (9), 1698-1710. DOI: 10.1162/jocn.2008.20113

Okada K, Smith KR, Humphries C, and Hickok G. Word Length Modulates Neural Activity in Auditory Cortex During Covert Object Naming. Neuroreport 14: 2323-2326, 2003.

Strub RL, and Gardner H. The repetition defect in conduction aphasia: Mnestic or linguistic? Brain and Language 1: 241-255, 1974.

Wednesday, December 10, 2008

Stuttering, the planum temporale, and delayed auditory feedback

This is a follow up to my previous post on the (reduced) effect of delayed auditory feedback (DAF) in conduction aphasia. Here we consider the possible relation between anatomical abnormalities in the planum temporale and DAF in stutterers.

Paradoxically, DAF can improve fluency in people who stutter (it decreases fluency in control subjects). Some stutterers also have an anatomically atypical planum temporale. A study published in Neurology by Foundas et al. (2004) sought to determine whether there was a relation between the paradoxical DAF effect and planum temporale anatomy. There was: stutterers with atypical planum temporale asymmetries (R>L) showed the paradoxical DAF effect, whereas stutterers with typical planum asymmetries did not show the paradoxical DAF effect.

This line of investigation provides a further bit of evidence linking an auditory-motor integration system to the planum temporale. Our functionally defined area Spt (e.g., Hickok et al., 2003), which we believe supports auditory-motor integration, is located in the posterior portion of the left planum temporale. I suspect that it is this region that is somehow implicated in stuttering. Why the symptoms of conduction aphasia and developmental stuttering are different is an important question (assuming that some aspect of the same system is involved)...

Other disorders have been linked to planum temporale (dys)function including dyslexia, schizophrenia, and autism. I seriously doubt that dysfunction of the auditory-motor integration system involving the planum is going to explain the speech/auditory symptoms of all these disorders as there are probably lots of ways to disrupt speech/auditory functions. Following the example in the Foundas et al. study, I wonder if planum temporale atypicalities plus DAF effects might be used in combination to better characterize what might be going on in these disorders.

References

A. L. Foundas, A. M. Bollich, J. Feldman, D. M. Corey, M. Hurley, L. C. Lemen, K. M. Heilman (2004). Aberrant auditory processing and atypical planum temporale in developmental stuttering. Neurology, 63, 1640-1646.

Gregory Hickok, Bradley Buchsbaum, Colin Humphries, Tugan Muftuler (2003). Auditory–Motor Interaction Revealed by fMRI: Speech, Music, and Working Memory in Area Spt. Journal of Cognitive Neuroscience, 15 (5), 673-682. DOI: 10.1162/089892903322307393

M. Lincoln, A. Packman, M. Onslow (2006). Altered auditory feedback and the treatment of stuttering: A review. Journal of Fluency Disorders, 31 (2), 71-89. DOI: 10.1016/j.jfludis.2006.04.001

Tuesday, December 9, 2008

Conduction aphasia and delayed auditory feedback

Here's an interesting nugget of information: conduction aphasics appear to be less susceptible to the disruptive effect of delayed auditory feedback. Why is this interesting? Because it is more evidence for a link between systems supporting auditory-motor interaction and the deficit in conduction aphasia. Here are the details...

Delayed auditory feedback (DAF) disrupts speech production. You can prove this to yourself either by trying to talk on a microphone in a large stadium (where your echo is delayed) or, if you don't regularly speak in large stadiums, you can simply talk to yourself on two cell phones: call one phone with the other, hold them both to your ears and start talking; there is a slight delay in transmission leading to delayed auditory feedback, and so speaking becomes difficult. DAF is strong evidence that auditory speech information interacts with speech production systems.

While the classic view is that conduction aphasia is a disconnection syndrome resulting from damage to the arcuate fasciculus, this view is no longer tenable. I have been promoting the view that the syndrome results from damage to our favorite brain region, Spt, which we believe is a critical node in a network that supports auditory-motor interaction (e.g., see Hickok et al., 2000). This, we claim, explains why conduction aphasics make phonemic errors in production (because speech planning is guided to some degree by auditory speech systems) and why they have trouble with verbatim repetition under conditions of high phonological load such as with multisyllabic words, unfamiliar phrases, or non-words (because these kinds of stimuli maximally rely on sensory speech guidance). One prediction of this view is that conduction aphasics should exhibit other "symptoms" of a disrupted auditory-motor integration system.

So I was digging through some old papers on conduction aphasia and came across two, both published by Francois Boller in 1978, that suggested that conduction aphasics are less susceptible to DAF than controls and patients with other aphasia types. One was a group study that found that conduction aphasics were the least affected by DAF of the groups studied (Boller, Vrtunski, Kim, & Mack, 1978), and the other was a case study showing no effect of DAF (and even some improvement!) on the repetition of speech in a conduction aphasic (Boller & Marcie, 1978). This decreased DAF effect in conduction aphasia makes sense if the system that supports auditory-motor interaction is disrupted in that syndrome.

References

F. Boller, P. Marcie (1978). Possible role of abnormal auditory feedback in conduction aphasia. Neuropsychologia, 16 (4), 521-524. DOI: 10.1016/0028-3932(78)90078-7

Boller, F., Vrtunski, B., Kim, Y., & Mack, J.L. (1978). Delayed auditory feedback and aphasia. Cortex, 14, 212-226.

G. Hickok, et al. (2000). A functional magnetic resonance imaging study of the role of left posterior superior temporal gyrus in speech production: implications for the explanation of conduction aphasia. Neuroscience Letters, 287 (2), 156-160. DOI: 10.1016/S0304-3940(00)01143-5

Monday, December 8, 2008

The Cortical Dynamics of Intelligible Speech

This is the title of a new paper in J. Neuroscience by Alexander Leff and company (Jennifer Crinion, Karl Friston, and Cathy Price among others) at the Wellcome Trust Centre, University College London. The report is beautifully straightforward and fills an important gap in our understanding of the pathways that support the processing of meaningful speech.

They set out to test two competing hypotheses regarding information flow in the temporal and frontal lobes during the processing of intelligible speech. One hypothesis, put forward by Sophie Scott and Richard Wise, suggests that the pathway for intelligible speech projects anteriorly into the temporal lobe from primary auditory cortex. The other hypothesis, recently promoted by us (Hickok & Poeppel, 2000, 2004, 2007), but by no means unique to us (it is a rather conventional view), holds that the posterior STS is an important projection target for acoustic speech information on its way to being comprehended.

Leff, et al. used fMRI to identify a network of brain regions active during the perception of intelligible speech, which was defined as regions that responded more to word pairs than to time reversed versions of word pairs. Here is a summary map of the regions activated by this contrast:


They didn't see much bilateral activation (must be something in the London water because we have just finished a similar experiment and see TONS of activation on both sides -- more on this in the future), but that's not the point of the paper. Notice that there are foci of activation in the posterior as well as anterior STS, and an inferior frontal area as well that falls within BA47, outside of Broca's region.

They then used dynamic causal modeling and Bayesian parameter estimation to determine the model of information flow among these three nodes that best fit their data. Of 216 models tested -- all possible combinations of input (squares with arrows) and interactions between ROIs (dotted lines), diagram on left -- the winning model (right diagram) had sensory input entering the network only via pSTS and projecting in separate pathways to aSTS on the one hand and IFG on the other.


In other words, information flow is not exclusively anterior from primary auditory cortex, nor is it flowing in parallel from A1 to aSTS and pSTS, but rather projects first posteriorly and then anteriorly within the temporal lobe; i.e., the ventral stream runs through the pSTS.
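To get a feel for the size of that model space, here is a minimal sketch of how one might enumerate candidate three-node architectures. This is my own toy construction, not the authors' code; the constraints that bring their space to exactly 216 models are specified in the paper, and the unconstrained enumeration below gives a different count.

```python
from itertools import product

regions = ["pSTS", "aSTS", "IFG"]  # the three ROIs from Leff et al.

# All directed connections between distinct regions: 6 possible edges,
# each either present or absent.
edges = [(a, b) for a in regions for b in regions if a != b]

models = []
for inputs in product([0, 1], repeat=len(regions)):
    if not any(inputs):
        continue  # the auditory input must enter the network somewhere
    for conn in product([0, 1], repeat=len(edges)):
        models.append((inputs, conn))

print(len(models), "candidate models under these (unconstrained) assumptions")
# -> 448 here, versus the 216 models Leff et al. actually tested; their
#    space was evidently constrained differently, so treat this purely
#    as an illustration of how such a space grows combinatorially.
```

In an actual DCM analysis, each surviving graph would be fit to the fMRI time series and the architectures compared via Bayesian model selection.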

In proposing an exclusively anterior-going pathway from primary auditory cortex, Scott and Wise were particularly persuaded by three observations. (i) monkey data suggested anterior projections from the auditory core, (ii) their own imaging data suggested an anterior focus of activity for intelligible versus unintelligible speech, and (iii) semantic dementia involves word level semantic deficits and has anterior temporal degeneration as a hallmark feature. Their proposal was quite reasonable in light of these facts, but it just didn't seem to pan out: (i) monkey data is useful as a guide, but may not generalize to humans especially when language systems are involved, (ii) subsequent experiments looking at intelligible speech, such as the present one, clearly identified posterior activation foci, and (iii) it seems that the deficit in semantic dementia is to some extent supramodal, i.e., may be well beyond the linguistic computations that appear to be supported by the pSTS, and lesion (stroke) evidence implicates posterior temporal regions in word-level semantic deficits.

To be fair, we didn't completely predict the findings of the Leff, et al. study either. Specifically, we posited no direct projection from pSTS to aSTS, and discussed the function of the anterior temporal region in the context of grammatical type processes only. Neither did we discuss a direct influence of pSTS on the IFG (BA47) within the ventral stream. (Notice that this link does not, presumably, reflect the dorsal stream, which involves more posterior portions of the IFG and should not be a dominant node in a network supporting language comprehension.)

Now that we know a bit more about the nature of information flow in this network, it's time to try to figure out exactly what these different regions might be doing. Our suggestion regarding the posterior STS is that it supports phonological processing of some sort. This still makes sense I think. But what is the anterior STS doing?

References

A. P. Leff, T. M. Schofield, K. E. Stephan, J. T. Crinion, K. J. Friston, C. J. Price (2008). The Cortical Dynamics of Intelligible Speech. Journal of Neuroscience, 28 (49), 13209-13215. DOI: 10.1523/JNEUROSCI.2903-08.2008

Sophie K. Scott, C. Catrin Blank, Stuart Rosen, and Richard J. S. Wise (2000). Identification of a pathway for intelligible speech in the left temporal lobe. Brain, 123 (12), 2400-2406.

Wednesday, December 3, 2008

Dual Stream Model of Speech/Language Processing: Tractography Evidence

The Dual Stream model of speech/language processing holds that there are two functionally distinct computational/neural networks that process speech/language information, one that interfaces sensory/phonological networks with conceptual-semantic systems, and one that interfaces sensory/phonological networks with motor-articulatory systems (Hickok & Poeppel, 2000, 2004, 2007). We have laid out our current best guess as to the neural architecture of these systems in our 2007 paper:


It is worth pointing out that under reasonable assumptions some version of a dual stream model has to be right. If we accept (i) that sensory/phonological representations make contact both with conceptual systems and with motor systems, and (ii) that conceptual systems and motor-speech systems are not the same thing, then it follows that there must be two processing streams, one leading to conceptual systems, the other leading to motor systems. This is not a new idea, of course. It has obvious parallels to research in the primate visual system, and (well before the visual folks came up with the idea) it was a central feature of Wernicke's model of the functional anatomy of language. In other words, not only does the model make sense for speech/language processing, it appears to be a "general principle of sensory system organization" (Hickok & Poeppel, 2007, p. 401) and it has stood the test of time.

So, all that remains is to work out the details of these networks. A new paper in PNAS by Saur et al. may provide some of these details. In an fMRI experiment, they used two tasks, one that they argued tapped the dorsal stream pathway (pseudoword repetition), and the other the ventral stream pathway (sentence comprehension). The details of the use of these tasks leave something to be desired in my view, but they did seem to highlight some differences, so I'm not going to quibble for now. Here are the activation maps (repetition in blue, comprehension in red):


Notice the more ventral involvement along the length of the temporal lobe (STS, MTG, AG) for comprehension relative to repetition, as well as the more posterior involvement in the frontal lobe for repetition.

They then used peaks in these activations as seeds for a tractography analysis using DTI. Here is a summary figure showing the distinction between the two pathways (red = ventral, blue = dorsal).



The authors localize the white matter tract of the dorsal pathway as being part of the arcuate/superior longitudinal fasciculi and the tract of the ventral pathway as part of the extreme capsule (not the uncinate).

I haven't looked closely at the details of the analysis (I would love to hear comments!), but this sort of study seems just the ticket to getting us closer to delineating the functional anatomical details of the speech/language system.

References

G. Hickok, D. Poeppel (2000). Towards a functional neuroanatomy of speech perception. Trends in Cognitive Sciences, 4 (4), 131-138. DOI: 10.1016/S1364-6613(00)01463-7

G. Hickok, D. Poeppel (2004). Dorsal and ventral streams: a framework for understanding aspects of the functional anatomy of language. Cognition, 92 (1-2), 67-99. DOI: 10.1016/j.cognition.2003.10.011

Gregory Hickok, David Poeppel (2007). The cortical organization of speech processing. Nature Reviews Neuroscience, 8 (5), 393-402. DOI: 10.1038/nrn2113

D. Saur, B. W. Kreher, S. Schnell, D. Kummerer, P. Kellmeyer, M.-S. Vry, R. Umarova, M. Musso, V. Glauche, S. Abel, W. Huber, M. Rijntjes, J. Hennig, C. Weiller (2008). Ventral and dorsal pathways for language. Proceedings of the National Academy of Sciences, 105 (46), 18035-18040. DOI: 10.1073/pnas.0805234105

Tuesday, November 25, 2008

The Neuro-Cognitive Rehabilitation Research Network



The Neuro-Cognitive Rehabilitation Research Network (NCRRN) is a valuable resource that is worth checking out. In their own words:

This NCRRN is a collaborative effort of investigators at the Moss Rehabilitation Research Institute and the University of Pennsylvania to provide research infrastructure support and expert consultation to individuals interested in pursuing cognitive rehabilitation research.

On the site, you will find announcements for presentations and events, assessment tools like the PNT, as well as information on grants for pilot projects on topics related to neuro-cognitive rehab.

Check it out!

Thursday, November 20, 2008

Here's a review ...

OK, here's my favorite recent review, in response to a MEG paper by a post-doc in my lab, Mary Howard:

"The results presented clearly support the proposed model. Procedure and ... analysis ... technically correct and include some sophisticated details. Moreover, the ms. is well written and exceptionally well illustrated."

Result: outright rejection, with no possibility to 'reject the rejection' (my favorite activity). The other reviewer thought the idea was "timely and important," but not enough so, I guess :-(

Ugh! In that case, I'd rather have the brutal rejections like "your entire career is worthless; every piece of research you have touched is wrong; you're hurting the field; incoherent" and so on. 

And apropos semantics: new NRN paper by TB East

At the Society for Neuroscience meeting in DC this past week, I met several people who -- willingly -- admitted to reading this blog. Thank you! Please, though (like I said at SfN), do comment more. It's more fun to hear from more people. Seriously. I won't name names, but, say, if your last name starts with H and ends with -erdman, and you are exceptionally experienced with electrophysiological studies, you should feel free to set us straight. Jonas, you should certainly write more. You are as opinionated as we are (and more well read than I am, although Greg knows everything), so bring it on. Martin, I know you are quietly lurking in the background ... Sonja and Richard -- come on! You *know* you wanna comment :-)

I'd like to hear what people thought of SfN. I had to miss the last two days -- during which most of the relevant stuff occurred -- but what I saw Saturday-Monday was pretty underwhelming. There was a nice talk by Manon Grube from Tim Griffiths' lab on the contribution of cerebellar circuitry to temporal analysis (lesion and stimulation data). Were there any highlights on language or speech or mirror neurons? How was Rizzolatti's plenary talk? I just spent a day with Ramachandran in LA at a different event, and he mentioned to me that he would not be surprised if Rizzolatti was awarded a Nobel Prize. 

And in that vein: Greg, congrats on getting the mirror neuron review accepted. I look forward to the discussion it elicits. Can you post a pre-print here for us all to read now?    

As for other new Talking Brains reading: in the new issue of Nature Reviews Neuroscience, there is a paper by Lau et al. on semantics. Ellen Lau did a magnificent job synthesizing a remarkable amount of data on the N400 to argue for a model that is illustrated in this review. It's called A cortical network for semantics: (de)constructing the N400. We would be interested in discussion on this, of course.


Monday, November 17, 2008

Jeff Binder visits Talking Brains West


I recently had the pleasure of hosting Jeff Binder during his visit to our Center for Cognitive Neuroscience here at UC Irvine. Jeff, of course, was among the first to use fMRI to study the auditory system and has published several important papers in the field. I had met Jeff before at conferences and corresponded with him by email many times, but never had the chance to hang out and chat, so it was a fun visit.

His talk was on the neural basis of semantics -- not action semantics, or the semantics of fruits and vegetables, just semantics, broadly construed. Not to steal his thunder when the work eventually gets published but... He presented a meta-analysis of a boatload of imaging studies on "semantic processing." Lots of different kinds of stimuli and tasks were included in the meta-analysis, so the findings are necessarily going to be relevant to a very broad definition of semantics. I can imagine someone critiquing the approach based on this loose definition of semantics, and in one sense I wouldn't disagree: the analysis isn't going to tell you anything about the details of semantic representation or processing. On the other hand, I personally found it very useful as a guide to the distribution of brain regions involved in semantic processing. And guess what? It wasn't just motor cortex, or the anterior temporal lobe, or the posterior temporal lobe, or the angular gyrus that was involved. It was a fairly extensive distribution of regions that included many of these "semantic" areas and more. (We'll have to wait for the paper to get the details -- my memory isn't that good.) One thing I found particularly interesting was that the distribution of these semantic areas was virtually identical to the distribution of the "default network" -- the set of brain areas that seem to show increased activity during "rest" periods in functional imaging studies (i.e., when subjects are thinking about any number of things they're not supposed to be thinking about). This, of course, has important implications for how we design our experiments, because using a resting baseline really amounts to contrasting our task of interest with a semantic task.

Thanks to Jeff for a fun and informative visit!

Friday, November 14, 2008

Publishing manuscript reviews

Well, we have a split vote on the question of whether publishing one's manuscript reviews is an ethical practice or not: 48% say YES, 40% say NO, and 11% say NOT SURE. I personally don't think it is unethical (see Mary Louise Kean's arguments, for example). It may be uncool in some circumstances, but many reviews are also very uncool...

Nonetheless, I'm not going to post the reviews of my mirror neuron critique paper. The good news is, though, that the paper is now accepted and should be appearing soon in the Journal of Cognitive Neuroscience. A reviewer of the paper was invited to write a rebuttal paper. I hope s/he does. It will be interesting to have a public discussion on the issues.

But back to publishing reviews. Just this week I got another nasty review back on another paper, this one related to the hypothesized sensory-motor response properties of area Spt. We have been plugging away for some years now trying to pin down the response properties of Spt and exploring the similarity of this region with sensory-motor integration regions in the posterior parietal lobe. I had listed the range of findings that show parallels between Spt and parietal lobe areas. Here's a comment about our hypothesis from the reviewer:

The analogy with the inferior parietal lobule is not well supported, and its use in framing the arguments of the paper is based on a number of vague assumptions, over-generalizations, and idiosyncratic inferences about the brain that derive from studies produced virtually exclusively in the authors' laboratory.


(Oops. Did I just publish part of a review?)

We of course apologized profusely for citing our own empirical work, which is clearly inappropriate and should be strictly prohibited in scholarly publications. We make no apologies, however, for being vague, idiosyncratic over-generalizers.

So how about we start a Top Ten list of nasty review excerpts? It might be kind of entertaining. I've got two on the board already. Send them to me offline or as a comment.

Friday, November 7, 2008

More info on the Auditory Cognitive Neuroscience Society Meeting

ACNS 2009 Tucson, AZ
Integrated Learning Center (ILC) on the University of Arizona campus
Room TBA

The University of Arizona will be hosting the 3rd annual Auditory Cognitive Neuroscience Society (ACNS, formerly ACSS) conference on January 9-10, 2009. The conference (co-organized by U of A and Arizona State University) will be a two-day event, taking place on Friday and Saturday. The conference is FREE and open to all!

Invited Speakers include Doctors:
- Tom Christensen, University of Arizona
- Michael Dorman, Arizona State University
- Greg Hickok, University of California-Irvine
- Lori Holt, Carnegie Mellon University
- Julie Liss, Arizona State University
- Andrew Lotto, University of Arizona
- Bob Lutfi, University of Wisconsin-Madison
- Edwin Maas, University of Arizona
- Andrea Pittman, Arizona State University
- Brad Story, University of Arizona
- Arty Samuel, State University of New York-Stony Brook
- Lynne Werner, University of Washington
- Bill Yost, Arizona State University


Topics to be discussed include the perceptual/cognitive/motor foundations of child speech development, a comparison of audition and vision, and the processes and constraints in auditory learning. Each session will conclude with a group discussion or, more aptly put, a period of time to “shoot your mouth off.”

Posters: If you are interested in submitting a poster abstract, please see the guidelines provided below or visit the ACNS 2009 Meeting Webpage for a link to a printable version.

ACNS 2009 Abstract Submission Guidelines


This year we will be accepting a limited number of posters to be displayed during the ACNS Conference. Acceptance criteria include the following:
The research to be presented is pertinent to the domain of Auditory Cognitive Neuroscience.
The methodology of the work is sound.
Work in progress will be in presentable form by the time of the conference.

Abstracts should be no longer than 350 words (not inclusive of graphs, figures, or references) and include the following components:
Statement of the Problem
Study Design and Method
Results and Interpretation

Please email abstracts, including author names and affiliations, to julie.liss@asu.edu no later than Monday December 1st, 2008. Details regarding poster size and format will be provided at the time of acceptance notification.

Registration: We will be asking those who plan to attend this year’s conference to RSVP on our new registration page (link soon to come on the ACNS Webpage!). Attendee numbers are necessary in order for us to reserve rooms at local hotels. Please note that CEUs will no longer be offered to ACNS conference attendees.

Hope to see you in January!
Andrew Lotto & Julie Liss

Tuesday, November 4, 2008

Mirror neurons in humans revealed by fMRI adaptation

Riitta Salmelin alerted me to this study which used an fMRI adaptation paradigm to identify mirror neurons in the human brain. Mirror neurons have previously been assumed to exist in humans, but without direct evidence. Here is the abstract for the paper, FYI:

Chong, T. T., R. Cunnington, et al. (2008). "FMRI adaptation reveals mirror neurons in human inferior parietal cortex." Curr Biol 18(20): 1576-80.
Mirror neurons, as originally described in the macaque, have two defining properties [1, 2]: They respond specifically to a particular action (e.g., bringing an object to the mouth), and they produce their action-specific responses independent of whether the monkey executes the action or passively observes a conspecific performing the same action. In humans, action observation and action execution engage a network of frontal, parietal, and temporal areas. However, it is unclear whether these responses reflect the activity of a single population that represents both observed and executed actions in a common neural code or the activity of distinct but overlapping populations of exclusively perceptual and motor neurons [3]. Here, we used fMRI adaptation to show that the right inferior parietal lobe (IPL) responds independently to specific actions regardless of whether they are observed or executed. Specifically, responses in the right IPL were attenuated when participants observed a recently executed action relative to one that had not previously been performed. This adaptation across action and perception demonstrates that the right IPL responds selectively to the motoric and perceptual representations of actions and is the first evidence for a neural response in humans that shows both defining properties of mirror neurons.

This is a very cool and cleverly designed study. Basically, they were looking for areas that showed adaptation (decreased BOLD amplitude) for observed actions that followed the same executed actions relative to observed actions that were not previously executed. Here is the result:



They observed adaptation in one of their ROIs in the right parietal lobe (ROIs included IFG, IPL, and STS). If you buy the adaptation logic -- it seems reasonable to me -- this means that mirror neurons live in the right parietal lobe of humans. So we finally have some direct evidence for the existence of mirror neurons in humans. Cool. I knew someday we'd have decent evidence. It is surprising, though, that no mirror neurons were found in the frontal lobe or the left hemisphere (where damage can lead to disorders of action production and recognition), but let's not get bogged down in details.
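For what it's worth, here is a toy simulation of the adaptation logic itself (entirely my own numbers, not the authors' design or analysis): if observation and execution share a population of neurons, an observed action that was just executed should evoke a smaller summed response than a novel one; if the populations are fully distinct, no attenuation is expected.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200  # units within an ROI

def roi_response(drive, adapted, suppression=0.5):
    # Summed BOLD-like response; units driven on the previous trial
    # respond at reduced gain (repetition suppression).
    gain = np.where(adapted, 1.0 - suppression, 1.0)
    return float(np.sum(drive * gain))

def simulate(mirror_frac):
    # Partition units: a fraction respond in both modalities (mirror),
    # the rest are observation-only or execution-only.
    mirror = rng.random(N) < mirror_frac
    visual = ~mirror & (rng.random(N) < 0.5)  # observation-only units
    motor = ~mirror & ~visual                 # execution-only units

    obs_drive = (visual | mirror).astype(float)
    exe_drive = (motor | mirror).astype(float)

    # Observing an action that was NOT just executed: nothing is adapted.
    novel = roi_response(obs_drive, adapted=np.zeros(N, dtype=bool))
    # Observing an action that WAS just executed: execution-driven
    # units (including any mirror units) are adapted.
    repeated = roi_response(obs_drive, adapted=exe_drive > 0)
    return novel, repeated

for frac in (0.0, 0.3):
    novel, repeated = simulate(frac)
    print(f"mirror fraction {frac}: novel={novel:.0f}, repeated={repeated:.0f}")
# With no mirror units, the repeated and novel responses match; with a
# shared population, the repeated response is attenuated -- the
# cross-modal adaptation signature Chong et al. looked for.
```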

A couple of points are relevant. One is that if this result holds, it means that human mirror neurons and monkey mirror neurons are different. Chong et al. used pantomimed gestures. Classic F5 mirror neurons don't respond to pantomime. In effect, we have a new animal that needs to be studied in its own right. Who knows, maybe the function of these human mirror neurons is completely different! Another relevant point is that just because some form of mirror neuron exists in humans doesn't mean that this system supports action understanding. The Chong et al. study has nothing to say about this question. So all previous critiques of the action understanding portion of the mirror neuron doctrine still hold.

T. T. Chong, R. Cunnington, M. Williams, N. Kanwisher, J. Mattingley (2008). fMRI Adaptation Reveals Mirror Neurons in Human Inferior Parietal Cortex. Current Biology, 18 (20), 1576-1580. DOI: 10.1016/j.cub.2008.08.068

Monday, November 3, 2008

Ventral premotor cortex and action processing: Urgesi, et al.

Here is another pair of studies that a reviewer suggested I failed to discuss because they didn't support my pre-conceived hypothesis regarding mirror neurons. It's true that I didn't discuss them, but not because I cherry-picked papers to discuss. I simply wasn't aware of these. After looking at them, I realized that they did not even test action understanding, so I could have justified leaving them out. Nonetheless, because they apparently are viewed as strong evidence for the link between the ventral premotor cortex and action processing, I included a discussion in my review. Here is a summary...

Urgesi et al. (2007a/2007b) used rTMS to study the effects of functional deactivation of ventral premotor cortex (vPMc) on visual discrimination of action-related pictures. In both of these studies, subjects were asked to make two-choice, match-to-sample judgments: a picture of a body configuration was presented (the sample) followed by a mask (500 msec), and then a picture of two body configurations; the subject was asked to indicate which of the two matched the sample.

First, as I mentioned above, it is important to notice that neither of these studies actually tested action understanding. That is, discrimination performance did not depend on understanding the meaning of the actions, and could be performed based on configural information alone.

Urgesi, Candidi et al. (2007) compared the effects of stimulation of vPMc with stimulation of a ventral temporal-occipital location (the extrastriate body area, EBA) during action discrimination (which action matches the sample?) versus form discrimination (which actor matches the sample, independent of action?).

For action judgments, vPMc stimulation yielded longer reaction times than EBA stimulation, and the reverse held for form judgments: longer reaction times for EBA stimulation than vPMc stimulation. Stimulation had no effect on accuracy. In the other study (Urgesi, Calvo-Merino et al., 2007), subjects were asked to judge body configuration only, and an effect on accuracy was observed, with vPMc stimulation associated with more errors on the configuration matching task than EBA stimulation. Oddly, there were no reaction time effects.

So the two studies showed that interference stimulation to vPMc negatively affected performance on a body configuration delayed match-to-sample task. Again, because these studies did not assess action understanding, they cannot speak to the question of whether the mirror system supports action understanding. However, they do suggest that processing of body configurations, at least in the delayed match-to-sample task, involves vPMc to some extent. Given that the tasks involved working memory, it seems possible that this region may support some sort of working memory for body configurations. This is interesting, but in my view is more consistent with the idea that the "mirror system" is a sensory-motor integration system, not a semantic system. For example, there are many claims regarding the sensory-motor nature of working memory systems (Buchsbaum & D'Esposito, 2008; Hickok, Buchsbaum, Humphries, & Muftuler, 2003; Pa, Wilson, Pickell, Bellugi, & Hickok, in press; Postle, 2006; Ruchkin et al., 2003; Wilson, 2001).

Cosimo Urgesi, Matteo Candidi, Silvio Ionta, Salvatore M. Aglioti (2007). Representation of body identity and body actions in extrastriate body area and ventral premotor cortex. Nature Neuroscience, 10 (1), 30-31. DOI: 10.1038/nn1815

C. Urgesi, B. Calvo-Merino, P. Haggard, S. M. Aglioti (2007). Transcranial Magnetic Stimulation Reveals Two Cortical Pathways for Visual Body Processing. Journal of Neuroscience, 27 (30), 8023-8030. DOI: 10.1523/JNEUROSCI.0789-07.2007

Friday, October 31, 2008

Rock-Paper-Scissors and mirror neurons: Executed and observed movements have different distributed representations in human aIPS

"Shane" left a comment on a previous post about a recently published paper by David Heeger's group.

I have heard about this paper, but haven't had a chance to read it yet. Here is the abstract for a quick summary:

How similar are the representations of executed and observed hand movements in the human brain? We used functional magnetic resonance imaging (fMRI) and multivariate pattern classification analysis to compare spatial distributions of cortical activity in response to several observed and executed movements. Subjects played the rock-paper-scissors game against a videotaped opponent, freely choosing their movement on each trial and observing the opponent's hand movement after a short delay. The identities of executed movements were correctly classified from fMRI responses in several areas of motor cortex, observed movements were classified from responses in visual cortex, and both observed and executed movements were classified from responses in either left or right anterior intraparietal sulcus (aIPS). We interpret above chance classification as evidence for reproducible, distributed patterns of cortical activity that were unique for execution and/or observation of each movement. Responses in aIPS enabled accurate classification of movement identity within each modality (visual or motor), but did not enable accurate classification across modalities (i.e., decoding observed movements from a classifier trained on executed movements and vice versa). These results support theories regarding the central role of aIPS in the perception and execution of movements. However, the spatial pattern of activity for a particular observed movement was distinctly different from that for the same movement when executed, suggesting that observed and executed movements are mostly represented by distinctly different subpopulations of neurons in aIPS.
(Italics added.)

So this is an anti-mirror neuron paper. While I'm fully on-board with the anti-mirror neuron conclusion, I'm not sure the data really support this view. Again, I haven't yet read the paper and am basing my argument on the abstract only, so somebody correct me if I'm missing something. The study found that aIPS activated both for action production and action viewing. No surprise there. The interesting and novel contribution of this paper is that within the activated region, they found different patterns of activation for observation and execution of movements. From this they conclude that these two functions are supported by distinctly different subpopulations of neurons.

I like the methodology employed here, and I believe their findings do indicate that observation and execution involve non-identical populations of neurons, but I don't think this is strong evidence against a mirror neuron view. Here's why: Suppose there are three types of cells in aIPS:

1. sensory-only cells
2. motor-only cells
3. sensory-motor cells (mirror neurons)

There is evidence for this kind of distribution of cells in parietal sensory-motor areas. Suppose further that action understanding is achieved by cell type #3, the mirror neurons. If this were true, the ROI as a whole would activate for both action observation and action execution, as the study found, but sensory vs. motor events would nonetheless activate non-identical populations of cells within the ROI: observation would activate cell types 1 & 3, whereas execution would activate cell types 2 & 3. This difference may be enough to allow for above chance pattern classification that is based on non-mirror neurons within the ROI.
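Here is a toy simulation of that argument (all proportions and noise levels are my own assumptions, nothing from the Dinstein et al. paper). Sensory-only and motor-only cells carry movement-selective spatial patterns across voxels; mirror cells are present everywhere but sampled so evenly that their voxel-level response carries no identity information. Within-modality decoding then succeeds while cross-modal decoding sits at chance -- the published pattern -- despite the mirror neurons.

```python
import numpy as np

rng = np.random.default_rng(1)
V, MOVES, TRIALS = 100, 3, 60  # voxels, movement types, trials per modality

# Movement-selective spatial patterns carried by sensory-only and
# motor-only cells (independent across modalities).
sens_w = rng.random((MOVES, V))
mot_w = rng.random((MOVES, V))
# Mirror cells: selective for movement identity at the single-cell level,
# but each voxel samples cells tuned to all three movements about equally,
# so their summed voxel-level response is the same for every movement.
mirror_w = np.ones(V)

def trial(move, modality):
    selective = sens_w[move] if modality == "obs" else mot_w[move]
    return selective + mirror_w + rng.normal(0, 0.5, V)

def dataset(modality):
    moves = rng.integers(0, MOVES, TRIALS)
    return np.array([trial(m, modality) for m in moves]), moves

def decode(train_X, train_y, test_X, test_y):
    # Nearest-centroid classifier: assign each test pattern to the
    # movement whose mean training pattern it is closest to.
    centroids = np.array([train_X[train_y == m].mean(0) for m in range(MOVES)])
    pred = np.argmin(((test_X[:, None] - centroids) ** 2).sum(-1), axis=1)
    return (pred == test_y).mean()

obs_X, obs_y = dataset("obs")
exe_X, exe_y = dataset("exe")
obs2_X, obs2_y = dataset("obs")  # held-out observation data

print("within-modality accuracy:", decode(obs_X, obs_y, obs2_X, obs2_y))
print("cross-modal accuracy:   ", decode(exe_X, exe_y, obs_X, obs_y))
# Within-modality decoding lands well above chance (1/3) thanks to the
# sensory-only cells; cross-modal decoding hovers at chance even though
# mirror neurons are present in every voxel.
```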

So if I've got the basics of the study correct (based on the abstract), this is not strong evidence against mirror neurons supporting action understanding. Neither is it evidence FOR mirror neurons, however.

I. Dinstein, J. L. Gardner, M. Jazayeri, D. J. Heeger (2008). Executed and Observed Movements Have Different Distributed Representations in Human aIPS. Journal of Neuroscience, 28 (44), 11231-11239. DOI: 10.1523/JNEUROSCI.3585-08.2008

Tuesday, October 28, 2008

Action comprehension in non-human primates: motor simulation or inferential reasoning?

I noticed this forthcoming paper in the same issue of TICS as the Grodzinsky & Santi paper that David highlighted in a previous post. Looks interesting!


Action comprehension in non-human primates: motor simulation or inferential reasoning?

Justin N. Wood (1) and Marc D. Hauser (2)

(1) University of Southern California, Department of Psychology, 3620 South McClintock Ave, Los Angeles, CA 90089, USA; (2) Harvard University, Department of Psychology, 33 Kirkland Street, Cambridge, MA 02138, USA

Available online 23 October 2008.

Some argue that action comprehension is intimately connected with the observer’s own motor capacities, whereas others argue that action comprehension depends on non-motor inferential mechanisms. We address this debate by reviewing comparative studies that license four conclusions: monkeys and apes extract the meaning of an action (i) by going beyond the surface properties of actions, attributing goals and intentions to the agent; (ii) by using environmental information to infer when actions are rational; (iii) by making predictions about an agent’s goal, and the most probable action to obtain the goal given environmental constraints; (iv) in situations in which they are physiologically incapable of producing the actions. Motor theories are, thus, insufficient to account for primate action comprehension in the absence of inferential mechanisms.

Monday, October 27, 2008

Mirror neuron review reviews, to see?

Hey, Greg, are the reviews of your mirror neuron review juicy enough that they are worth posting on the blog? I don't know if it's legitimate to post reviews of a journal article on a blog. Are there guidelines about this sort of thing?

However, given what's at stake, and given how much influence the wretched mirror neuron action perception hypothesis has, it would be both intellectually helpful and sociologically fun to see such reviews and pick at them.

I'm certainly willing -- if we can agree that it's ethically defensible -- to post some of the more outrageous reviews that I've gotten. For example, that I "understand virtually nothing". Man, that hurt my feelings! Anyway, this might not be doable, although it would be a whole lot of fun.

It would be particularly interesting to find out how your paper will be treated in subsequent rounds of peer review and the editorial process.

Maybe, in fact, the occasional readers of this blog would comment more if it meant posting one of the more bizarre reviews that they have gotten in their own research... :-) Nothing like a little levity to balance the pain of negative reviews.

"The battle for Broca’s region" -- lost again

There is a new paper in the journal Trends in Cognitive Sciences that, once again, examines the role of Broca's area and language processing.

The battle for Broca’s region, by Yosef Grodzinsky and Andrea Santi, summarizes four positions about the role of Broca's area and concludes -- who would have thunk it? -- that the 'syntactic movement account' is the best account to date.

Grodzinsky and Santi distinguish between four positions: an "action perception" model (advocated, for example, by Arbib and Rizzolatti), a "working memory" model (Caplan), a "syntactic complexity" model (Goodglass, Friederici), and a "syntactic movement" model (supported by the authors). I think one can quibble about the attributions, but by and large this is more or less a fair characterization of the various positions. The former two are of the "general" variety; the latter two are language specific. The authors examine these positions in light of data from deficit-lesion correlation and neuroimaging evidence, basically from fMRI. They argue, reasonably, that a single-model account is likely to be underspecified. That being said, they conclude that the recent evidence is most consistent with a "syntactic movement" model of Broca's area.

I have rather mixed feelings about this brief review/perspective piece. On the one hand, it is perfectly reasonable for Yosef to work hard on supporting the view that he has fought hard for for a long time. Indeed, attempting to identify a particular kind of computation that's executed in a chunk of brain tissue seems like a sensible goal. On the other hand, I do think it's really time to go further now, and I wish that these authors might lead the way on a more biologically sophisticated perspective.

The fact that their view is too simple is something they state repeatedly. "Importantly, Broca’s region might well be multi-functional." And: "Indeed, Broca’s region might be multifunctional." And so on. Well, yes, then let's actually entertain that...

The fact that we have to make careful distinctions between areas 44, 45, 47, and the frontal operculum is now well established. Yosef has supported important progress in this area, and Friederici and her colleagues as well as Amunts and her colleagues have provided impressive evidence for functionally relevant subdivisions. Moreover, even for a single piece of tissue a la Brodmann, the probability is very high that more than one operation is executed. Obviously ... Look, take Brodmann area 17 (primary visual cortex, striate cortex). Beyond subdivisions into ocular dominance columns, orientation pinwheels, and -- obviously -- six differentiated layers of cortical tissue, there are further functionally critical subdivisions into cytochrome oxidase blobs, etc. We are perfectly comfortable attributing multiple functions to local pieces of tissue in the visual system. Yet we persist in trying to find surprisingly monolithic interpretations of a chunk of brain as extensive as Broca's region. Now admittedly we don't have the necessary cell biological analysis of this part of the brain; nevertheless, isn't it time we come up with some more nuanced hypotheses about what gets calculated in these various different parts of the frontal lobe?

Inquiring minds want to know. I'm pretty frustrated with the state-of-the-art in this area of research. Please, somebody, figure this piece of brain out!

Y. Grodzinsky, A. Santi (2008). The battle for Broca’s region. Trends in Cognitive Sciences. DOI: 10.1016/j.tics.2008.09.001

Saturday, October 25, 2008

A new place for YOU to publish: LCP-CogNeuro

Dear Talking Brains readers,

as of right now, there is a new place to send your papers if they are cognitive neuroscience of language papers. Please see the announcement below -- and then send me your best work.

Lorraine (Lolly) Tyler remains the Editor of LCP. I will be the editor for cognitive neuroscience of language.

There are not that many outlets for theoretically motivated and biologically serious research on speech/language, so please take advantage of this opportunity to publish your best work.

David

Language and Cognitive Processes -- New Special Section Announcement!

In 2009 LCP will broaden its remit by publishing two additional issues a year devoted to the Cognitive Neuroscience of Language. The development of cognitive neuroscience methodologies has significantly broadened the empirical scope of experimental language studies. Both hemodynamic imaging and electrophysiological approaches provide new perspectives on the representation and processing of language, and add important constraints on the development of theoretical accounts of language function.

In light of the strong interest in and growing influence of these new tools, LCP will publish two issues a year on the Cognitive Neuroscience of Language. All types of articles will be considered, including reviews, whose submission is encouraged. Submissions should exemplify the subject in its most straightforward sense: linking good cognitive science and good neuroscience to answer key questions about the nature of language and cognition.

Manuscripts should be submitted through the journal's Scholar One website: www.mc.manuscriptcentral.com/plcp. When submitting, please select "Cognitive Neuroscience of Language" from the manuscript type drop down.

Peer Review Integrity
All published research articles in this journal have undergone rigorous peer review, based on initial editor screening and refereeing by independent expert referees.


Friday, October 24, 2008

Mirror neurons in the inferior parietal lobe: Are they really "goal" selective?

A few weeks ago I published a blog entry previewing my critical review of mirror neuron theory of action understanding. The paper has been in the review process since that time, and I've finally received a bit of feedback. As requested, the feedback is from a mirror neuron/action understanding proponent. I find the comments extremely valuable because (i) I have been directed to additional papers that had eluded my attention previously, and (ii) while the review is highly critical of my manuscript -- comments like "disappointing," "astounding non sequitur," and "totally nonsense" were used -- I have come away with more confidence that my analysis is correct: there is nothing in the reviews that provides any challenge to my interpretation of the literature.

So I've been looking at the papers that I either hadn't read carefully enough, or just plain missed. Here is one of them.

Fogassi et al. (2005) present very interesting data from mirror neurons in the inferior parietal lobule (IPL) of monkeys. Monkeys were trained either to grasp a piece of food and put it in his (the monkey’s) mouth, or to pick up an object and put it in a container. In some conditions, the container was next to the monkey’s mouth such that the mechanics of the movement were very similar between grasping-to-eat and grasping-to-place. In addition, a condition was also implemented in which the monkey grasped and placed a piece of food in the container to control for differences between food items and objects, both visually and tactilely. In all variants of the experiment, the authors report that some IPL cells preferentially responded to the goal of the action: grasping-to-eat vs. grasping-to-place. Again, this was true even when the placing-action terminated in close proximity to the mouth and involved grasping a piece of food. Some of these cells also responded selectively and congruently during the observation of grasping-to-eat and grasping-to-place.


So both in perception and action, there are IPL cells that seem to be selective for the specific goal of an action rather than the sensory or motor features of an action -- a very intriguing result. Fogassi et al. discuss their motor findings in the context of “intentional chains” in which different motor acts forming the entire action are linked in such a way that each act is facilitated in a predictive and goal-oriented fashion by the previous ones. They give an example of IPL neurons observed in another unpublished study that respond to flexion of the forearm, have tactile receptive fields around the mouth, and respond during grasping actions of the mouth and suggest that, “these neurons appear to facilitate the mouth opening when an object is touched or grasped” (p. 665).

Regarding the action perception response properties of the IPL neurons in their study, Fogassi et al. conclude, “that IPL mirror neurons, in addition to recognizing the goal of the observed motor act, discriminate identical motor acts according to the action in which these acts are embedded. Because the discriminated motor act is part of a chain leading to the final goal of the action, this neuronal property allows the monkey to predict the goal of the observed action and, thus, to ‘read’ the intention of the acting individual” (p. 666).

According to Fogassi et al., IPL mirror neurons code action goals and can “read the intention” of the acting individual. But is there a simpler explanation? Perhaps Fogassi et al.’s notion of predictive coding and their example of the IPL neuron with receptive fields on the face can provide such an explanation. Suppose the abstract goal of an action and/or its meaning is coded outside of the motor system. And suppose that Fogassi et al. are correct in that a complex motor act leads to some form of predictive coding of the action's sensory consequences. The predictive coding in the motor system is now going to be different for the grasping-to-eat versus grasping-to-place actions, even though it is not coding "goals." For eating, there may be anticipatory opening of the mouth, salivation, perhaps even forward modeling of the expected somatosensory consequences of the action. For placing, there will be no mouth-related coding, but there may be other kinds of coding such as expectations about the size, shape or feel of the container, or the sound that will result if the object is placed in it. If cells in IPL differ in their sensitivity to feedback from these different systems, then it may look like the cells are coding goals, when in fact they are just getting differential feedback input from the forward models. Observing an action may activate this system with similar electrophysiological consequences, not because it is reading the intention of the actor, but simply because the sensory event is associated with particular motor acts.
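Here is a toy sketch of how that alternative could produce apparent goal selectivity (entirely my own construction; the feedback channels, weights, and threshold are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
N_CELLS = 50

# Two forward-model feedback channels (toy values). Grasping-to-eat
# drives mouth-related predictions; grasping-to-place drives
# container-related predictions. Neither channel encodes "the goal".
feedback = {"grasp-to-eat":   np.array([1.0, 0.0]),  # [mouth, container]
            "grasp-to-place": np.array([0.0, 1.0])}

# Each IPL cell has a random sensitivity to each feedback channel.
sensitivity = rng.random((N_CELLS, 2))

def responses(action):
    return sensitivity @ feedback[action]

eat = responses("grasp-to-eat")
place = responses("grasp-to-place")

# Cells whose firing differs substantially between the two actions would
# look "goal-selective" to an experimenter, even though they only weight
# feedback from different forward models.
apparently_goal_selective = np.abs(eat - place) > 0.5
print(f"{apparently_goal_selective.sum()} of {N_CELLS} cells look goal-selective")
```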

In short, very interesting paper. Not proof, however, that mirror neurons code goals or intentions, or support mind reading.

L. Fogassi, et al. (2005). Parietal Lobe: From Action Organization to Intention Understanding. Science, 308 (5722), 662-667. DOI: 10.1126/science.1106138

Auditory Cognitive Neuroscience Society

This is a new organization/conference that looks really interesting. This year's meeting is January 9-10 in Tucson. I've already marked my calendar and plan to go. See you there! A note from the organizers is below.

********************

Mark your calendars!

The 3rd annual conference of the Auditory Cognitive Neuroscience Society (ACNS; formerly the Auditory Cognitive Science Society) is scheduled for Friday-Saturday January 9-10, 2009 on the campus of the University of Arizona (Tucson, AZ). This conference is designed to bring together researchers from psychoacoustics, neuroscience, speech perception, speech production, audiology, speech pathology, psychology, linguistics, computer science etc. to discuss topics related to the perception of complex sounds such as speech and music.

The conference is free and open to everyone*. The talks are organized to provide plenty of opportunity for interaction and exchange.

More details (topics, speakers, location, etc.) will be forthcoming soon. Be sure to check out the ACNS website periodically for updates. For now, please put a note in your favorite digital or analog calendar. If you have any questions, comments or suggestions, please feel free to contact either of us.

*Please note that CEUs will not be available for this year's attendees.

Andrew & Julie


Andrew J. Lotto
Speech, Language & Hearing Sciences
University of Arizona
alotto@email.arizona.edu

Julie M. Liss
Department of Speech & Hearing Science
Arizona State University
julie.liss@asu.edu

Monday, October 20, 2008

Post-doctoral position at the Center for Cognitive Neuroscience, University of California, Irvine (Hickok lab)

We are looking to fill a post doc position in my lab (Laboratory for Cognitive Neuroscience, A.K.A. Talking Brains West). The project involves fMRI studies of the planum temporale including sensory-motor aspects of speech, visual speech, spatial hearing, and sequence learning, among other domains. I'm excited about this project and hope to get a solid and productive team in place.

Official ad below:


***************************
School of Social Sciences
Department of Cognitive Sciences
Center for Cognitive Neuroscience
Position: Postdoctoral Scholar
The Department of Cognitive Sciences and the Center for Cognitive Neuroscience announce a Postdoctoral Scholar position in the Laboratory for Cognitive Brain Research.

A postdoctoral position is available in the laboratory of Dr. Greg Hickok at the University of California, Irvine. The postdoctoral fellow will collaborate in NIH-funded research investigating the functional anatomy of language and complementary pursuits. Ongoing research projects in the lab employ a variety of methods, including traditional behavioral and neuropsychological studies, as well as techniques such as fMRI, EEG/MEG, and TMS. Opportunities also exist for collaboration with other cognitive science faculty and with faculty in the Center for Cognitive Neuroscience.

Requirements – Candidates should have a Ph.D. in a relevant discipline and experience with functional MRI, preferably in the area of speech and language. Familiarity with computational and statistical methods for neuroimaging (e.g., MATLAB, SPM, AFNI) is advantageous.

The appointment would begin as early as December 2008 for a period of 3 years and is contingent on receipt of project funding. Salary will be commensurate with experience; minimum salary: $36,360.

Application Procedure - Candidates should send a CV, a letter of interest (including research skills), and a list of 3 references to the address below:

Lisette Isenberg
Department of Cognitive Sciences and Center for Cognitive Neuroscience
3151 Social Science Plaza
University of California, Irvine
Irvine, CA 92697-5100
aisenber@uci.edu

The University of California, Irvine is an equal opportunity employer committed to excellence through diversity.
(OEOD-4268)
Post: 10/20/08, Close: 11/30/08

Thursday, October 16, 2008

Speech recognition and the left hemisphere: Task matters!

I fully agree with Dorte Hessler's assessment that left hemisphere damage can produce significant "problems to identify or discriminate speech sounds in the absence of hearing deficits." But here is the critical point that David and I have been harping on since 2000: the ability to explicitly identify or discriminate speech sounds (e.g., say whether /ba/ & /pa/ are the same or different) on the one hand, and the ability to implicitly discriminate speech sounds (e.g., recognize that bear refers to a forest animal while pear is a kind of fruit) on the other, are two different things. While it is a priori reasonable to try to study speech sound perception by "isolating" that process in a syllable discrimination task (ba-pa, same or different?), it turns out that by doing so we end up measuring something completely different from normal speech sound processing as it is used in everyday auditory comprehension. Given that our goal is to understand how speech is processed in ecologically valid situations -- no one claims to be studying the neural basis of the ability to make same-different judgments about nonsense syllables; they claim to be studying "speech perception" -- it follows that syllable discrimination tasks are invalid measures of speech sound processing. I believe the use of syllable discrimination tasks in speech research has impeded progress in understanding its neural basis.

Let me explain.

Some of the same studies that Dorte correctly noted as providing evidence for deficits on syllable discrimination tasks following left hemisphere damage also show that the ability to perform syllable discrimination double-dissociates from the ability to comprehend words. Here is a graph from a study by Sheila Blumstein showing auditory comprehension scores plotted on the y-axis and three categories of performance on syllable discrimination & syllable identification tasks on the x-axis. The plus and minus signs indicate preserved or impaired performance, respectively. The letters in the graph correspond to clinical aphasic categories (B = Broca's, W = Wernicke's). Notice the red arrows. They point to one patient who has the worst auditory comprehension score in the sample -- a Wernicke's aphasic, not surprisingly -- yet who performs well on syllable discrimination/identification tasks, and to another patient who has the best auditory comprehension score in the sample -- a Broca's aphasic, not surprisingly -- yet who fails on both syllable discrimination and identification. A nice double dissociation.



But that's only two patients, and the measure of auditory comprehension is coarse in that it mixes sentence-level and word-level performance. Fair enough. So here are data from Miceli et al. comparing auditory comprehension of words (4AFC with phonemic and semantic foils) and syllable discrimination. Notice that 19 patients are pathological on syllable discrimination yet normal on auditory comprehension, and 9 patients show the reverse pattern. More double dissociations.
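As an aside on method, here is a hedged sketch of how such dissociation cells get tallied. The cutoffs and patient scores below are invented; only the logic (classify each patient as impaired or normal on each task, then count the off-diagonal cells) mirrors the Miceli et al. comparison:

# Hedged sketch of tallying double-dissociation cells, in the spirit of the
# Miceli et al. comparison (cutoffs and patient scores here are invented).

from collections import Counter

DISC_CUTOFF = 0.90  # hypothetical lower bound of "normal" discrimination
COMP_CUTOFF = 0.90  # hypothetical lower bound of "normal" comprehension

# (syllable discrimination accuracy, word comprehension accuracy) per patient
patients = [(0.75, 0.95), (0.95, 0.70), (0.60, 0.92),
            (0.93, 0.94), (0.55, 0.50), (0.96, 0.82)]

def cell(disc, comp):
    d = "impaired" if disc < DISC_CUTOFF else "normal"
    c = "impaired" if comp < COMP_CUTOFF else "normal"
    return f"disc {d} / comp {c}"

print(Counter(cell(d, c) for d, c in patients))
# The "disc impaired / comp normal" and "disc normal / comp impaired" cells
# jointly constitute the double dissociation (19 and 9 patients, respectively,
# in the actual Miceli et al. data).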



Where are the lesions that produce the deficits on syllable discrimination versus auditory comprehension? According to Basso et al., syllable discrimination deficits are most strongly associated with non-fluent aphasia, which in turn is most strongly associated with frontal lesions. According to a more recent study by Caplan et al., the inferior parietal lobe is also a critical site. Notice that these regions have also been implicated in sensory-motor aspects of speech, including verbal working memory. This contrasts with work on the neural basis of auditory comprehension deficits (e.g., Bates et al.), which implicates the posterior temporal lobe (STG/MTG).



Some case study contrasts from Caplan et al. underline the point. On the left is a patient who has a lesion in the inferior frontal lobe and who was classified as a Broca's aphasic. On the right is a patient with a temporal lobe lesion and a classification of Wernicke's aphasia. By definition, the Broca's patient will have better auditory comprehension than the Wernicke's patient. Yet look at the syllable discrimination scores of these patients. The Broca's case is performing at 72% correct, whereas the Wernicke's case is at 90%. Again, the patient with better comprehension is performing poorly on syllable discrimination, showing that syllable discrimination isn't measuring normal speech sound processing.



To my reading, the data are unequivocal. Syllable discrimination tasks tap a different set of processes than auditory comprehension tasks, even though both ostensibly involve the processing of speech sounds. How can this be? Here's an explanation. Syllable discrimination involves activating a phonological representation of one syllable, maintaining that activation while the phonological representation of a second syllable is activated, comparing the two, and then making a decision. Deficits on this task could therefore arise in activating the phonological representations, in maintaining both representations simultaneously in short-term memory, in comparing the two representations, or in making the decision. Only the first of these processes is clearly shared with an auditory comprehension task, namely, activating the phonological representations. I suggest that the deficits in syllable discrimination following left hemisphere damage, particularly left frontal damage, result from one or more of the non-shared components of the task. The fact that the network implicated in syllable discrimination (fronto-parietal regions) is largely identical to the network independently implicated in phonological working memory supports this claim. If, on the other hand, a patient had a significant disruption of the sensory system that activates phonological representations -- e.g., patients with bilateral lesions and word deafness -- then that disruption should be evident on both discrimination and comprehension tasks.
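Here is a minimal sketch of that logic, assuming (purely for illustration) that each stage has some "integrity" value and that a task succeeds only to the extent that all of its component stages do. The numbers are invented; the point is that damage to the non-shared stages wrecks discrimination while leaving comprehension intact, whereas damage to the shared activation stage wrecks both:

# Minimal sketch of the task-decomposition argument (all integrity values are
# hypothetical). A task succeeds only to the extent that every one of its
# component stages succeeds; comprehension shares only the "activate" stage.

DISCRIMINATION = ["activate", "maintain", "compare", "decide"]
COMPREHENSION = ["activate"]  # lexical-semantic access assumed intact here

def task_success(stages, integrity):
    """Probability of task success = product of stage integrities."""
    p = 1.0
    for stage in stages:
        p *= integrity[stage]
    return p

# Hypothetical fronto-parietal lesion: working-memory/decision stages degraded.
frontal = {"activate": 0.95, "maintain": 0.50, "compare": 0.60, "decide": 0.80}
# Hypothetical bilateral temporal lesion (word deafness): activation degraded.
temporal = {"activate": 0.40, "maintain": 0.95, "compare": 0.95, "decide": 0.95}

for name, integrity in [("fronto-parietal", frontal), ("bilateral temporal", temporal)]:
    print(f"{name}: discrimination = {task_success(DISCRIMINATION, integrity):.2f}, "
          f"comprehension = {task_success(COMPREHENSION, integrity):.2f}")
# fronto-parietal: discrimination tanks (0.23) while comprehension stays near
# normal (0.95) -- the dissociation. bilateral temporal: both tasks suffer
# (0.34 and 0.40), as in word deafness.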

It is hard for us to give up syllable discrimination as our bread-and-butter task in speech research. It seems so rigorous and controlled. But the empirical facts show that it doesn't work. In the neuroscience branch of speech research, the task produces invalid and misleading results (if our goal is to understand speech perception under ecologically valid listening conditions). It's time to move on.

References

Basso, A., Casati, G. & Vignolo, L. A. (1977). Phonemic identification defects in aphasia. Cortex, 13, 84-95

Bates, E., Wilson, S. M., Saygin, A. P., Dick, F., Sereno, M. I., Knight, R. T. & Dronkers, N. F. (2003). Voxel-based lesion-symptom mapping. Nature Neuroscience. DOI: 10.1038/nn1050

Blumstein, S., Cooper, W., Zurif, E. & Caramazza, A. (1977). The perception and production of voice-onset time in aphasia. Neuropsychologia, 15(3), 371-372. DOI: 10.1016/0028-3932(77)90089-6

Caplan, D., Gow, D. & Makris, N. (1995). Analysis of lesions by MRI in stroke patients with acoustic-phonetic processing deficits. Neurology, 45: 293 - 298.

Hickok, G. & Poeppel, D. (2000). Towards a functional neuroanatomy of speech perception. Trends in Cognitive Sciences, 4(4), 131-138. DOI: 10.1016/S1364-6613(00)01463-7

Hickok, G. & Poeppel, D. (2007). The cortical organization of speech processing. Nature Reviews Neuroscience, 8(5), 393-402. DOI: 10.1038/nrn2113

Miceli, G., Gainotti, G., Caltagirone, C. & Masullo, C. (1980). Some aspects of phonological impairment in aphasia. Brain and Language, 11(1), 159-169. DOI: 10.1016/0093-934X(80)90117-0

More on speech recognition and the left hemisphere: Important comment from Dorte Hessler

Dorte Hessler has posted an important comment on my entry Speech recognition and the left hemisphere. Indeed, the comment is thoughtful, thorough, and important enough that I have decided to repost it here as its own entry. This is exactly the kind of informal (but informed) discussion that I hoped the blog would support. I'll post a response in a new entry shortly.

*****************

dörte hessler said...

Hi again,

First, thanks to Greg for your response, which made me think for quite a while, especially your comment about atypical cortical organization. So I went through the articles on phonemic processing deficits I had read before, because I seemed to remember that there was a substantial number of patients with unilateral damage.
But to clarify some things first: of course my earlier comment was on the acute stroke study – sorry, I should have mentioned that more clearly. Furthermore, I definitely did not want to claim that the right hemisphere does not play any role in phonological processing; I think there is a vast amount of evidence that it is in fact involved (some of it cited in the comments above). However, I did want to claim (and still want to do so) that damage to solely the left hemisphere can lead to word sound deafness (as defined, e.g., by Franklin, 1989): thus, problems to identify or discriminate speech sounds in the absence of hearing deficits. I quote Sue Franklin here because she looked at this phenomenon in the light of aphasia and not as a pure syndrome, which, indeed, is very rare. But looking at aphasic cases, quite a lot of aphasic patients with left hemisphere damage have shown problems in discriminating or identifying speech sounds. I won't quote the single case studies here, but will limit myself to larger group studies. I will particularly mention four of them that did not investigate only patients with a proven disorder in auditory discrimination, but rather a broader aphasic group:

- Basso, Casati & Vignolo (1977): Of 50 aphasic patients (with unilateral left hemisphere damage), only 13 (26%) were unimpaired in a phoneme identification task (concerning voice onset time); the remaining 37 patients showed impaired performance.

The three other studies are concerned with minimal pair discrimination:

- Varney & Benton (1979): Of 39 aphasic patients (with unilateral left hemisphere damage), 10 (~25.6%) showed defective performance on the minimal pair discrimination task; the other 29 showed normal performance.

- Miceli, Gainotti, Caltagirone & Masullo (1980): Of 66 aphasic patients (with unilateral left hemisphere damage), 34 (~51.5%) showed pathological performance on a phoneme discrimination task. The other 32 scored normally.

- Varney (1984): Of 80 aphasic patients (with unilateral left hemisphere damage), 14 (17.5%) showed defective performance on the same task as used in Varney & Benton; the remainder were unimpaired.


To sum up, 235 aphasic patients (all with unilateral left hemisphere damage) took part in these studies. 95 of them (~40%) were impaired on tasks investigating phonemic processing (discrimination and identification tasks).

To me this seems to underline the notion that damage to the left hemisphere is definitely sufficient to cause a substantial problem in the recognition/processing of speech sounds!
These results, of course, also differ quite a bit from those of the acute stroke study of Rogalsky and colleagues (2008), which I have claimed is due to the materials used in that study.


Franklin, S. (1989). Dissociations in auditory word comprehension: evidence from nine fluent aphasic patients. Aphasiology 3(3), 189-207.

Basso, A., Casati, G. & Vignolo, L. A. (1977). Phonemic identification defects in aphasia. Cortex, 13, 84-95.

Varney, N.R. & Benton, A.L. (1979). Phonemic discrimination and aural comprehension among aphasic patients. Journal of Clinical Neuropsychology 1(2), 65-73.

Miceli, G., Gainotti, G., Caltagirone, C. & Masullo, C. (1980). Some aspects of phonological impairment in aphasia. Brain and Language 11, 159-169.

Varney, N.R. (1984). Phonemic imperception in aphasia. Brain and Language 21, 85-94.

Rogalsky, C., Pitz, E., Hillis, A. E. & Hickok, G. (2008). Auditory word comprehension impairment in acute stroke: Relative contribution of phonemic versus semantic factors. Brain and Language 107(2), 167-169.

Tuesday, October 14, 2008

Does Parkinson's disease impair action verb processing?

I've been slogging through the evidence typically cited as support for an embodied cognition view of language processing. Much of this research focuses on the processing of action verbs, which according to the "EC" view critically involve motor representations as part of their semantics. In previous posts I've discussed studies that use TMS, ALS, and stroke data to make the case for an embodied view of action word processing. None of it, I argued, was particularly compelling.

Here we take a close look at a recent paper involving Parkinson's disease (PD) patients (Boulenger et al., 2008). These authors used a masked identity-priming lexical decision paradigm: primes were identical to targets (= identity priming) and were presented rapidly, followed by a mask that precludes conscious awareness of the prime (= masked); priming effects were assessed relative to a control condition where the "prime" was a string of consonants. Priming was compared for visually presented nouns and verbs in PD patients both on and off medication. This is an interesting design because it allowed the team to assess processing when the basal ganglia circuit was relatively functional compared to when it was not. Control subjects were also tested.
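For readers unfamiliar with the paradigm, here is a hedged sketch of how the priming effect is scored. The reaction times below are invented, arranged only to mimic the qualitative pattern reported in the paper (they are not Boulenger et al.'s numbers):

# Hedged sketch of how a masked identity-priming effect is scored; the RTs
# below are invented for illustration and are not Boulenger et al.'s data.

def mean(xs):
    return sum(xs) / len(xs)

def priming_effect(rt_control, rt_primed):
    """Priming effect (ms) = control (consonant-string) RT minus identity-primed
    RT; positive values mean the masked prime speeded lexical decision."""
    return mean(rt_control) - mean(rt_primed)

# Hypothetical lexical decision RTs (ms), patients OFF medication.
verbs = {"control": [640, 655, 660, 648], "primed": [642, 650, 661, 645]}
nouns = {"control": [700, 712, 705, 698], "primed": [668, 675, 670, 662]}

print("verb priming (ms):", priming_effect(verbs["control"], verbs["primed"]))
print("noun priming (ms):", priming_effect(nouns["control"], nouns["primed"]))
# Pattern mimicking the paper: verbs show ~no priming off medication (~1 ms)
# while nouns prime robustly (~35 ms) -- but note the slower noun baselines,
# which is the second reading discussed below.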

So what did they find? On medication, PD patients showed priming for both nouns and verbs (middle panel in the figure below), whereas off medication, they showed priming only for nouns. Since nouns primed even off medication, this argues against generalized attentional, perceptual, etc. explanations of the failure of verbs to prime off medication.

(White circles are nouns, black circles are verbs.)

This is a pretty cool result and is interpreted as "compelling evidence that processing lexico-semantic information about action words depends on the integrity of the motor system" (p. 743). I beg to differ.

First, PD is NOT limited to the motor system. In fact, Boulenger et al. point out that "deficits in cognitive functions and subtle semantic language deficits have also been reported" (p. 744). It is impossible to know whether the failure to show priming effects is strictly a matter of motor dysfunction, or whether it stems from disruption of other functions supported by basal ganglia circuits. This is similar to a point I raised in connection with ALS: just because a prominent symptom of a disease is motor doesn't mean that the motor deficit is causing all the symptoms.

Second, depending on what you focus on in the reaction time data, the pattern of results could support either a verb processing deficit or a noun processing deficit. Have a look at the top "Patients OFF" panel in the graph above. While it is clear that nouns are priming and verbs are not, it is also the case that RTs to nouns are quite a bit slower than RTs to verbs in the control, unprimed condition (left side of graph). This is puzzling given that ON medication, the PD patients showed no RT difference between the same nouns and the same verbs. So one way to look at the result is that being off medication causes a selective deficit in noun processing relative to verb processing!

How do we reconcile these two interpretations? I don't know. It depends on which of the two (raw recognition time vs. priming) is the better measure of "lexico-semantic" processing. Sometimes it helps to re-state the findings without all the interpretive baggage. Assuming that basal ganglia dysfunction is exaggerated when the PD patients are off levodopa medication, the present study leads to the following conclusions:

1. Basal ganglia dysfunction reduces the masked-prime-induced pre-activation of essential parts of the cerebral networks for verb (but not noun) processing. (This is a paraphrase of the underlying mechanism of masked priming as described by the authors on page 744.)

2. Basal ganglia dysfunction slows the ability to recognize nouns relative to verbs in a lexical decision task.

Maybe priming-related pre-activation is a critical function of lexico-semantic networks, but it seems to me that slowed recognition is a bad thing as well, maybe even worse. Still, I don't know whether PD causes noun or verb problems (or both).

More generally, I'm beginning to wonder what lexical decision effects in these sorts of studies are actually telling us. On the one hand, it is possible to argue that lexical decision provides a highly sensitive measure of aspects of language processing, some of which are automatic and unconscious. In this sense, it seems like a good task. On the other hand, we don't normally walk around making lexical decisions on visually presented words. Does this task involve meta-linguistic processes that aren't normally involved in noun and verb processing? Is it a modality-specific (i.e., reading-related) effect? Note that modality-specific verb deficits have been reported (Hillis, et al. 2002).

So while the findings are certainly interesting, and add to the large literature demonstrating dissociations between noun and verb processing, the Boulenger et al. paper is not "compelling evidence" for motor involvement in action verb processing. We don't know that it is the motor system that is causing the problem, the results suggest the possibility of a selective noun deficit, and it is not clear what the task is measuring.

References

Boulenger, V., Mechtouff, L., Thobois, S., Broussolle, E., Jeannerod, M. & Nazir, T. A. (2008). Word processing in Parkinson's disease is impaired for action verbs but not for concrete nouns. Neuropsychologia, 46(2), 743-756. DOI: 10.1016/j.neuropsychologia.2007.10.007

Hillis, A. E., Tuffiash, E. & Caramazza, A. (2002). Modality-specific deterioration in naming verbs in nonfluent primary progressive aphasia. Journal of Cognitive Neuroscience, 14(7), 1099-1108. DOI: 10.1162/089892902320474544