Thursday, January 28, 2010

Tonotopic organization of human auditory cortex

Former Talking Brains West grad student Colin Humphries, in collaboration with Einat Liebenthal and Jeff Binder, has recently published the best study yet of the tonotopic organization of human auditory cortex. They found evidence of frequency-sensitive gradients, but oriented differently from what previous work has suggested. Definitely worth a look.

Humphries, C., Liebenthal, E., & Binder, J. (2010). Tonotopic organization of human auditory cortex. NeuroImage. DOI: 10.1016/j.neuroimage.2010.01.046

Tuesday, January 26, 2010

Intelligible speech and hierarchical organization of auditory cortex

It has been suggested that auditory cortex is hierarchically organized with the highest levels of this hierarchy, for speech processing anyway, located in left anterior temporal cortex (Rauschecker & Scott, 2009; Scott et al., 2000). Evidence for this view comes from PET and fMRI studies which contrast intelligible speech with unintelligible speech and find a prominent focus of activity in the left anterior temporal lobe (Scott et al., 2000). Intelligible speech (typically sentences) has included clear speech and noise vocoded variants which are acoustically different but both intelligible, whereas unintelligible speech has included spectrally rotated versions of these stimuli. The idea is that regions that respond to the intelligible conditions are exhibiting acoustic invariance, i.e., responding to the higher-order categorical information (phonemes, words) and therefore reflect high levels in the auditory hierarchy.

However, the anterior focus of activation contradicts lesion evidence which shows that damage to posterior temporal lobe regions is most predictive of auditory comprehension deficits in aphasia. Consequently, we have argued that the anterior temporal lobe activity in these studies is more a reflection of the fact that subjects are comprehending sentences -- which are known to activate anterior temporal regions more than words alone do -- than of the intelligibility of speech sounds and/or words (Hickok & Poeppel, 2004, 2007). Therefore, our claim has been that the top of the auditory hierarchy for speech (regions involved in phonemic level processes) is more posterior.

To assess this hypothesis we fully replicated previous intelligibility studies using two intelligible conditions, clear sentences and noise vocoded sentences, and two unintelligible conditions, rotated versions of these. But instead of using standard univariate methods to examine the neural response, we used multivariate pattern analysis (MVPA) to assess regional sensitivity to acoustic variation within and across intelligibility manipulations.

We did perform the usual general linear model subtractions: intelligible [(clear + noise vocoded) - (rotated + rotated noise vocoded)] and found robust activity in the left anterior superior temporal sulcus (STS), but also in the left posterior STS, and right anterior and posterior STS. This finding shows that intelligible speech activity is not restricted to anterior areas, or even the left hemisphere. A broader bilateral network is involved.



Next we examined the pattern of response in various activated regions using MVPA. MVPA looks at the pattern of activity within a region rather than the pooled amplitude of the region as a whole. If different conditions reliably evoke different patterns of activity in a region, this is an indication that the manipulated features (e.g., acoustic variation in our case) are being coded or processed differently within the region.
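
To make the logic concrete, here is a minimal sketch of this kind of pattern-classification analysis, assuming scikit-learn and synthetic data (illustrative only, not the pipeline used in the paper):

```python
# Minimal MVPA sketch with synthetic data -- illustrative, not the paper's pipeline.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 50
X = rng.normal(size=(n_trials, n_voxels))        # one voxel pattern per trial
y = np.repeat([0, 1], n_trials // 2)             # 0 = condition A, 1 = condition B

# Give condition B a small, distributed pattern shift that is zero-mean across
# voxels, so the pooled (GLM-style) amplitude of the region barely changes.
pattern = rng.normal(scale=0.5, size=n_voxels)
X[y == 1] += pattern - pattern.mean()

acc = cross_val_score(LinearSVC(), X, y, cv=5).mean()
print(f"pooled amplitude difference: {X[y == 1].mean() - X[y == 0].mean():.3f}")
print(f"cross-validated accuracy:    {acc:.2f} (chance = 0.50)")
```

The point of the toy example: a region whose average amplitude looks identical across conditions can still separate them in its spatial pattern, which is exactly the dissociation reported below for Heschl's gyrus.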

The first thing we looked at was whether the pattern of activity in and immediately surrounding Heschl's gyrus was sensitive to intelligibility and/or acoustic variation. This is actually an important prerequisite for claiming acoustic invariance, and therefore higher-order processing, in downstream auditory areas: If you want to claim that invariance to acoustic features downstream reflects higher levels of processing in the cortical hierarchy, you need to show that earlier auditory areas are sensitive to these same acoustic features. So we defined early auditory cortex independently using a localizer scan (amplitude-modulated noise at 8 Hz contrasted with scanner noise). The figure below shows the location of this ROI (roughly, that is, as this is a group image; for all MVPA analyses, ROIs were defined in individual subjects) and the average BOLD amplitude to the various speech conditions. Notice that we see similar levels of activity for all conditions, especially clear speech and rotated speech, which appear to yield identical responses in Heschl's gyrus. This seems to provide evidence that rotated speech is indeed a good acoustic control for speech.



However, using MVPA, we found that the pattern of activity in Heschl's gyrus (HG) could easily distinguish clear speech from rotated speech (it is responding to these conditions differently). In fact, HG could distinguish each condition from the other, including the within intelligibility contrasts such as clear vs. noise vocoded (both intelligible) and rotated vs. rotated noise vocoded (both unintelligible). It appears that HG is sensitive to the acoustic variation between our conditions. The figure below shows classification accuracy for the various MVPA contrasts in left and right HG. The thick black line indicates chance performance (50%), whereas the thinner line indicates the upper bound of the 95% confidence interval, determined via a bootstrapping method.
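
For what it's worth, here is a sketch of how such a chance bound can be estimated by resampling. I show label permutation, a common stand-in for this purpose; the paper's actual bootstrap procedure may differ, and the names are illustrative:

```python
# Estimate an upper bound on chance classification accuracy by resampling:
# shuffle the labels many times, re-run the classifier, take the 95th
# percentile of the null accuracies. Sketch only; the paper's scheme may differ.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def chance_upper_bound(X, y, n_resamples=1000, q=95, seed=0):
    rng = np.random.default_rng(seed)
    null_accs = np.empty(n_resamples)
    for i in range(n_resamples):
        y_null = rng.permutation(y)   # break the pattern/label pairing
        null_accs[i] = cross_val_score(LinearSVC(), X, y_null, cv=5).mean()
    return np.percentile(null_accs, q)
```

Classification accuracies above this bound would then count as reliably better than chance.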



Again this highlights the fact that standard GLM analyses obscure a lot of information that is contained in those areas that appear to be insensitive to the manipulations we impose.

So what about the STS? Here we defined ROIs in each subject using the clear minus rotated contrast, i.e., the conditions that showed no difference in average amplitude in HG. ROIs were anatomically categorized in each subject as being "anterior" (anterior to HG), "middle" (lateral to HG), or "posterior" (posterior to HG). In a majority of subjects, we found peaks in anterior and posterior STS in the left hemisphere (but not in the mid STS), and peaks in the anterior, middle, and posterior STS in the right hemisphere. ROIs were defined using half of our data, MVPA was performed using the other half -- this ensured complete statistical independence.
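
The split-half logic can be sketched as follows, assuming trials carry run labels so the two halves stay fully separate (names and details are illustrative, not the actual analysis code):

```python
# Split-half ROI definition and testing: voxel selection never sees the data
# that the classifier is evaluated on. Illustrative sketch.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def roi_then_mvpa(X, y, runs, top_k=100):
    """Define the ROI on even runs, run MVPA on odd runs only."""
    define, test = runs % 2 == 0, runs % 2 == 1
    # ROI = voxels with the strongest condition contrast in the definition half
    contrast = X[define & (y == 0)].mean(0) - X[define & (y == 1)].mean(0)
    roi = np.argsort(np.abs(contrast))[-top_k:]
    # Classification restricted to the held-out half and the selected voxels
    return cross_val_score(LinearSVC(), X[test][:, roi], y[test], cv=5).mean()
```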

Here are the classification accuracy graphs for each of the ROIs. The left two bars in each graph show across-intelligibility contrasts (clear vs. rotated & noise vocoded vs. rotated NV). These comparisons should classify (i.e., yield above-chance accuracy) if the area is sensitive to the difference in intelligibility. The right two bars show within-intelligibility contrasts (clear vs. NV, both intelligible; rot vs. rotNV, both unintelligible). These comparisons should NOT classify if the ROI is acoustically invariant.



Looking first at the left hemisphere ROIs, notice that both anterior and posterior regions classify the across intelligibility contrasts (as expected). But the anterior ROI also classifies clear vs. noise vocoded, two intelligible conditions. The posterior ROI does not classify either of the within intelligibility contrasts. This suggests that the posterior ROI is the more acoustically invariant region.

The right hemisphere shows a different pattern in this analysis. The right anterior ROI shows a pattern that is acoustically invariant whereas the mid and posterior ROIs classify everything, every which way, more like HG.

If you look at the overall pattern within the graphs across areas, you'll notice a problem with the above characterization of the data. It categorizes a contrast as classifying or not and doesn't take into account the magnitude of the effects. For example, notice that as one moves from aSTS to mSTS in the right hemisphere, classification accuracy for the across intelligibility contrasts rises (as it does in the left hemi), and that in the right aSTS clear vs. NV just misses significance, whereas in the mSTS clear vs. NV barely passes significance. We may be dealing with thresholding effects. This suggests that we need a better way of characterizing acoustic invariance that uses all of the data.

So what we did was calculate an "acoustic invariance index," which basically measures the magnitude of the intelligibility effect (left two bars compared with right two bars). This difference should be large if an area is coding features relevant to intelligibility. This measure was then corrected by the "acoustic effect" (the sum of the absolute difference in classification accuracy within intelligibility conditions). When you do this, here is what you get (acoustic invariance = positive values, range -1 to 1):



HG is the most sensitive to acoustic variation across conditions, and more posterior areas (pSTS in the left, mSTS in the right) are the least sensitive to acoustic variation. aSTS falls in between these extremes. So left pSTS and right mSTS, as we've defined them anatomically, appear to be functionally homologous and to represent the top of the auditory hierarchy for phoneme-level processing. I don't know what is going on in right pSTS.
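
For concreteness, here is one possible formalization of such an index. The exact formula is in the paper; this particular normalization is my own reconstruction from the verbal description above, so treat it as a sketch:

```python
# One possible "acoustic invariance index" consistent with the description
# above; the paper's exact formula may differ. Inputs are classification
# accuracies (chance = 0.5) for the two across-intelligibility contrasts
# and the two within-intelligibility contrasts.
def acoustic_invariance_index(across, within, chance=0.5):
    """Positive = acoustically invariant; bounded between -1 and 1."""
    intelligibility_effect = (sum(across) - sum(within)) / len(across)
    acoustic_effect = sum(abs(w - chance) for w in within)
    denom = abs(intelligibility_effect) + acoustic_effect
    return (intelligibility_effect - acoustic_effect) / denom if denom else 0.0

# A pSTS-like profile: classifies across intelligibility, not within it.
print(acoustic_invariance_index(across=[0.75, 0.70], within=[0.52, 0.51]))  # 0.75
# An HG-like profile: classifies everything, every which way.
print(acoustic_invariance_index(across=[0.80, 0.80], within=[0.75, 0.78]))  # -0.88
```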

What features are these areas sensitive to? My guess is that HG is sensitive to any number of acoustic features within the signals, aSTS is sensitive to suprasegmental prosodic features, and pSTS is sensitive to phoneme level features. Arguments for these ideas are provided in the manuscript.

References

Okada, K., Rong, F., Venezia, J., Matchin, W., Hsieh, I., Saberi, K., Serences, J., & Hickok, G. (2010). Hierarchical organization of human auditory cortex: Evidence from acoustic invariance in the response to intelligible speech. Cerebral Cortex. DOI: 10.1093/cercor/bhp318

Hickok, G., & Poeppel, D. (2004). Dorsal and ventral streams: A framework for understanding aspects of the functional anatomy of language. Cognition, 92, 67-99.

Hickok, G., & Poeppel, D. (2007). The cortical organization of speech processing. Nat Rev Neurosci, 8(5), 393-402.

Rauschecker, J. P., & Scott, S. K. (2009). Maps and streams in the auditory cortex: nonhuman primates illuminate human speech processing. Nat Neurosci, 12(6), 718-724.

Scott, S. K., Blank, C. C., Rosen, S., & Wise, R. J. S. (2000). Identification of a pathway for intelligible speech in the left temporal lobe. Brain, 123, 2400-2406.

Friday, January 22, 2010

Neurobiology of Language Conference (NLC) 2010

The second annual Neurobiology of Language Conference (NLC) is in the planning stages. It will again be held as a satellite to the Society for Neuroscience meeting, this time in San Diego. Both David and I are on the planning committee so Talking Brains might be a good way to get some feedback on how you would like the meeting to take shape.

If you have ideas on debate topics or issues that you would like to hear about please send us comments. One specific question is whether we should do debate sessions again and if so how many? Let us know what you think by casting your vote in the new poll posted at the top right of the blog home page.

Thursday, January 21, 2010

Job in Washington: language/cognitive- communication disorders

POSITION ANNOUNCEMENT

Cognitive-Communication Research Scientist

The Army Audiology and Speech Center at Walter Reed Army Medical Center is searching for a research scientist specializing in language/cognitive-communication disorders in adults. Expertise in traumatic brain injury and the assessment of cognitive-communication disorders is preferred. The primary responsibility of this position is to conduct independent research that is relevant to the Audiology and Speech-Language Pathology Clinic and Walter Reed Army Medical Center. This is a fully funded federal government position (GS 12/13) with competitive salary, excellent benefits, and an exceptional work environment. Candidates for the position must have a minimum of a doctoral degree in speech-language pathology, neuropsychology, or a related field, and be U.S. citizens. Evidence of research productivity and external funding is highly desirable.

The Research Section of the Army Audiology and Speech Center has a strong history of clinically relevant research in both speech and hearing, and is highly regarded as a productive research center. Walter Reed Army Medical Center is the flagship of the US Army health care system. In 2011, we will be relocating to brand-new facilities at the new Walter Reed National Military Medical Center in Bethesda, MD, directly across from the NIH campus. To inquire about this position, please provide a letter of interest and a curriculum vitae to Dr. Nancy Pearl Solomon, Research Speech Pathologist, via email at Nancy.P.Solomon@US.army.mil. For further information regarding this position, you may contact Dr. Solomon by email or by calling (202) 782-8597.

Disentangling syntax and intelligibility -- Or how to disprove two theories with one experiment

I both love and hate a recent paper by Angela Friederici, Sonja Kotz, Sophie Scott, & Jonas Obleser titled Disentangling syntax and intelligibility in auditory language comprehension. The paper is in the "Early View" section of Human Brain Mapping.

Here's why I love it. There are a number of claims in the literature on the neuroscience of language that I disagree with. One is Sophie Scott's claim that speech recognition is a left hemisphere function that primarily involves anterior temporal regions. Another is Angela Friederici's claim that a portion of Broca's area, BA44, is critical for "hierarchical structure processing". In the study reported in this new paper, Friederici and Scott have teamed up and proven both of these claims to be incorrect. This I like.

What I hate about the paper is that the authors don't seem to recognize that their new data provide strong evidence against their previous claims, and in fact argue that it supports their view(s).

So what did they do? The experiment is a nice combination of the intelligibility studies that Scott has published and the syntactic processing studies that come out of Friederici's lab. It was a 2x2 design: grammatical sentences versus ungrammatical sentences x intelligible versus unintelligible (spectrally rotated) sentences.

What did they find? The intelligible minus unintelligible contrast showed bilateral activation up and down the length of the STG/STS, i.e., not just in the left hemisphere and not just anterior to Heschl's gyrus. This contradicts previous studies from Scott's group, particularly with respect to the right hemisphere activation, as the current paper correctly pointed out:

...the right-hemispheric activation in response to increasingly intelligible speech deviates from the original papers on intelligibility [Narain et al., 2003; Scott et al., 2000]. (p. 6)


In short, the primary bit of data that has been driving claims for a left anterior pathway for intelligible speech has been shown to be inaccurate. This is not terribly surprising, as those previous studies were severely underpowered.

Conclusion #1: the "pathway for intelligible speech" is bilateral and involves both anterior and more posterior portions of the STS/STG.

What about Broca's area and hierarchical structure building? In fairness, most of the paper was about the STG/STS and not about Broca's area, but the role of Broca's area was addressed and of course it is perfectly fair to use data from this study to address a hypothesis proposed by Friederici in other papers. If Broca's area is involved in hierarchical structure building, then it should activate during the comprehension of sentences, which surely are hierarchically structured. Thus, the intelligible (structured) minus unintelligible (unstructured) contrast should result in activation of Broca's area. Yet it did not. The contrast between intelligible and unintelligible sentences resulted only in activation in the superior temporal lobes.

Conclusion #2: Hierarchical structure building can be achieved without Broca's area involvement.

So in light of these findings, how does one maintain the view that intelligible speech primarily involves the left hemisphere and that syntactic (hierarchical) processing involves Broca's area? It all hinges on the response to those pesky ungrammatical sentences.

Here's the assumption on which their argument relies: syntactic processing is really only revealed during the processing of ungrammatical sentences. They don't state it in these terms, but this is what you have to assume for their arguments to work. Right off the bat we have a problem with this assumption. When you listen to an ungrammatical sentence, not only does this mess up syntactic processing, but it also increases the load on semantic integrative processes and who knows what other meta-cognitive processes are invoked by hearing a sentence like, "The pizza was in the eaten", which is an example of the kind of violation they used. In fact, one might even argue that processing an ungrammatical sentence causes the syntactic processing mechanism to shut down and instead crank up cognitive interpretation strategies. Thus rather than highlighting syntax, such a manipulation may highlight non-syntactic comprehension strategies!

So what happens when you listen to ungrammatical sentences and spectrally rotated ungrammatical sentences?

Ungrammatical sentences minus grammatical sentences (intelligible only) resulted in activation in the left and right superior temporal lobes, Broca's area (left BA 44), and the left thalamus. So the "syntactic" effect is bilateral in the superior temporal lobe, but at least we now have Broca's area active.

The authors then took these seven ROIs defined in the two main contrasts (intell-unintell and gramm-ungramm), extracted percent signal change around the peaks, and performed subsequent ANOVAs to assess interactions. These interactions are what really drive their argument. However, we now have another problem, namely that the data that defined the ROIs are not independent of the data that were subsequently analyzed using ANOVAs. We therefore can't be sure the reported effects are valid. Nonetheless, let's pretend they are and see if the conclusions make sense.
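
The worry can be made concrete with a toy simulation (my construction, nothing from the paper): in pure noise, voxels selected because they show a condition difference will show that "difference" again when measured in the same data, but not in independent data.

```python
# Toy demonstration of circular ("non-independent") ROI analysis: selecting
# voxels by their condition difference and then measuring that difference in
# the SAME data manufactures an effect out of pure noise.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_voxels = 40, 500
A = rng.normal(size=(n_trials, n_voxels))    # condition A: pure noise
B = rng.normal(size=(n_trials, n_voxels))    # condition B: pure noise

diff = A.mean(axis=0) - B.mean(axis=0)
roi = np.argsort(diff)[-20:]                 # "ROI" = voxels with largest A-B

A2 = rng.normal(size=(n_trials, n_voxels))   # independent replication data
B2 = rng.normal(size=(n_trials, n_voxels))
diff2 = A2.mean(axis=0) - B2.mean(axis=0)

print(f"circular estimate:    {diff[roi].mean():.3f}")   # clearly > 0
print(f"independent estimate: {diff2[roi].mean():.3f}")  # ~ 0
```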

Here is a graph of the interactions:



The claim here is that "syntax" (i.e., greater response to ungrammatical) and intelligibility (i.e., greater response to intelligible) significantly interacted only in the left hemisphere ROIs, and indeed in all of them, including BA 44 and the thalamus. Therefore these regions represent the critical network, according to Friederici et al., because they are responding to the syntactic features in intelligible speech and not merely to acoustic differences, which are present in the unintelligible speech as well. Something is very wrong with this logic, even beyond the possibly invalid assumption and analysis methods noted above.

Consider the response pattern in BA 44. Zero response to normal syntactically structured sentences (which presumably require some degree of syntactic processing), significant activation to intelligible ungrammatical sentences, significant (or so it seems) activation to UNINTELLIGIBLE versions of grammatical sentences, and no activation to unintelligible versions of ungrammatical sentences. What possible syntactic computation could be invoked BOTH by a grammatical violation and by unintelligible noises, but not by grammatical sentences? And this pattern is considered part of the intelligible speech/syntactic processing system, whereas the right anterior STS, which shows a very robust intelligibility effect and no obvious effect of violation, is not. I would suggest instead that, because the right STS area is actually responding to sentences, and not just to broken sentences or spectrotemporal noise patterns, the right STS is more likely involved in sentence processing.

In the end, Friederici et al.'s entire argument rests on (i) a possibly invalid assumption about their "syntactic" manipulation, (ii) a possibly contaminated statistical analysis, and (iii) a logically questionable definition of what counts as a region involved in the processing of these language stimuli.

The basic findings are extremely important, though, because they confirm that speech recognition, and now the "pathway for intelligible speech," is bilateral, and that Broca's area is silent during normal sentence comprehension and therefore is not involved in basic syntactic/hierarchical structure building.

References


Friederici, A.D., Kotz, S.A., Scott, S.K., & Obleser, J. (2009). Disentangling syntax and intelligibility in auditory language comprehension. Human Brain Mapping. PMID: 19718654

Narain, C., Scott, S. K., Wise, R. J., Rosen, S., Leff, A., Iversen, S. D., & Matthews, P. M. (2003). Defining a left-lateralized response specific to intelligible speech using fMRI. Cereb Cortex, 13(12), 1362-1368.

Scott, S. K., Blank, C. C., Rosen, S., & Wise, R. J. S. (2000). Identification of a pathway for intelligible speech in the left temporal lobe. Brain, 123, 2400-2406.

Scott, S. K., & Wise, R. J. (2004). The functional neuroanatomy of prelexical processing in speech perception. Cognition, 92(1-2), 13-45.

Thursday, January 14, 2010

At the Frontiers of Neuro-Aphasiology: Two papers by Julius Fridriksson

The following is a guest post by Whitney Anne Postman-Caucheteux.


If there were a sub-field of Aphasiology devoted to imaging of the brains of people with aphasia, let's call it "Neuro-Aphasiology", then Dr. Julius Fridriksson would be one of its most distinguished pioneers. As a long-time admirer of his work, I can think of few other researchers of aphasia who have gone beyond simply talking about issues such as age, task difficulty, and perfusion in neuroimaging of language processes in people with aphasia, and have actually conducted and published the foundational research (see Fridriksson et al 2006, 2005, 2002, among others).

With two recently published aphasia fMRI papers, Dr. Fridriksson and his team at the University of South Carolina have done it again, by combining advanced fMRI techniques for acquiring overt speech responses with sophisticated psycholinguistic analyses of word production in aphasia. I propose that these two papers should be read as a pair, since each provides complementary investigations of the contributions of perilesional and contralesional regions to language production in chronic aphasia:

“F1”: Fridriksson, J., Baker, J.M. & Moser, D. (2009). Cortical mapping of naming errors in aphasia. Human Brain Mapping, 30, 2487-2498.

“F2”: Fridriksson, J., Bonilha, L., Baker, J.M., Moser, D., & Rorden, C. (in press). Activity in preserved left hemisphere regions predicts anomia severity in aphasia. Cerebral Cortex.

Both papers (henceforth "F1" and "F2") describe Fridriksson et al's overt picture-naming experiments with chronic stroke patients with aphasia using fMRI. F1 is perhaps the more revolutionary of the two, in being the first to link certain patterns of neural activation in such patients with specific error types that have long been the subject of psycholinguistic investigations (e.g., Schwartz et al, 2006). I will review F1 and F2 in turn before offering suggestions on how they complement each other in providing clues to different pieces of the puzzle of language production in post-stroke aphasia.

Part I on F1: Fridriksson, J., Baker, J.M. & Moser, D. (2009). Cortical mapping of naming errors in aphasia. Human Brain Mapping, 30, 2487-2498.
In F1, Fridriksson et al employed a sparse sampling technique to acquire overt naming responses to object pictures from 11 stroke patients with various types of aphasia and with a range of degrees of anomia severity, all in chronic stages. Their goal was to identify common areas of activation across the entire cohort associated with 1) accurate picture-naming, 2) phonemic errors, and 3) semantic errors. This goal is crucial to understanding the neural substrate of disordered language production post-stroke, and represents a valuable use of novel techniques for acquiring overt speech with fMRI. Dr. Bruce Crosson and colleagues have elegantly outlined their recommendations for how best to acquire, analyze, and interpret fMRI data from language production tasks with patients with aphasia in Crosson et al (2007). In addition to the familiar complications of having stroke patients with aphasia participate in fMRI studies, acquisition of overt speech responses from such patients during scanning can be confounded by related motor-speech disorders such as apraxia, and by the possibility of extremely long response times for some patients.

Prior fMRI research using silent production could not distinguish between activation patterns associated with accurate and inaccurate responses. Distinctive patterns are to be expected, given results from studies linking superior language production performance with predominantly perilesional activation, and inferior performance with increased contralesional involvement (performance measured outside the scanner in fMRI studies, and during scanning in PET studies; see references in F1 and F2). Consequently, studies like F1 are needed to discover how neural patterns may differ for accurate and inaccurate naming. Such discoveries can clarify effective vs. ineffective ways in which neural systems respond to damage, and subsequently, how these ways can be enhanced or suppressed with treatment.

Focusing on areas of activation common to the cohort, Fridriksson et al masked out voxels lesioned in any of the 11 patients, extending over the greater portion of the left hemisphere. The patients achieved a wide range of accuracy, and committed semantic errors, phonemic errors, unrelated errors, neologisms or omissions (see Table II in F1 reproduced and modified below).



The authors correlated the patients’ correct responses and semantic and phonemic errors with increases in BOLD activation in neural regions outside of the aforementioned mask, yielding the following intriguing results:

Result #1: Correct names correlated positively with increases in BOLD response in right inferior frontal gyrus.

Result #2: Phonemic errors correlated positively with increases in BOLD response in left precuneus (BA19), cuneus (BA7) and posterior inferior temporal gyrus (BA37).

Result #3: Semantic errors correlated positively with increases in BOLD response in right cuneus (BA 18), middle occipital gyrus (BA 18/19), and posterior inferior temporal gyrus (BA37).

The first result linking naming accuracy to activation in the right IFG is corroborated by some previous research suggesting positive contributions of this region to successful language production (see references in F1). It is also ostensibly in contradiction with the principal results of F2, but more on that issue in Part III. Here I’d like to concentrate on the top graph in Figure 3:



The tight correlation between accuracy and right IFG activation is striking. Yet even so, it is worth mentioning that for 4 patients, virtually 0%, or less than a 0.1% increase, in BOLD amplitude was coupled with naming accuracy, raising the question of whether this result was carried largely by a subset of the cohort. If this were indeed the case, it might help to elucidate which patients are expected to show substantial right IFG activation linked to accuracy. This issue was raised in Postman-Caucheteux et al (in press), in a discussion of important case studies by Meinzer et al (2006) and Vitali et al (2007).

The novel results coupling phonemic errors with ipsilesional posterior activation, and semantic errors with contralesional posterior activation, should inspire future research directed at replicating and developing them further. The explanations offered by the authors for why these regions should be involved in the production of these errors are plausible and appealing. Especially interesting was their finding that the neural activation patterns linked to each of these error types were essentially additional to that observed for correct naming. That is, they both involved the same neural substrate as accuracy, plus activation in the aforementioned posterior areas. This finding is in agreement with that for incorrect vs. correct naming in Postman-Caucheteux et al (in press), although we found a link with right frontal, not posterior, regions for semantic paraphasias (as well as omissions). However, the patients in our smaller cohort had frontal-insular-parietal damage, with almost no temporal damage. Since the brain region most affected in the F1 cohort was posterior temporal, this comparison raises the possibility that semantic errors may principally involve directly contralesional activation, i.e., activation in right frontal regions in patients with left frontal lesions, and in right posterior regions in patients with left posterior lesions. A worthwhile approach to investigating this possibility would take into account the precise nature of the semantic errors, which brings me to my next point.

More qualitative details (including examples) for all of the error types, and measurement of reaction times as an index of naming difficulty, would have been informative. I would also like to know if the high number of unrelated errors produced by P3 were perseverations, which may constitute their own special class of errors. Likewise, more information on the types of semantic errors could have been used to support the authors' interpretation of right posterior activation as representing less specific semantic representations (p. 2496). Furthermore, research on the evolution of neologisms into phonemic paraphasias (Bose & Buchanan, 2007) implies that comparison of possible neural patterns for neologisms with those found for phonemic errors could have been instructive.

The grounds for the authors' exclusion of other types of errors from their analyses are somewhat unclear, for even though phonemic and semantic errors were the most frequent types, the other types of errors were not infrequently produced by certain patients. With regard to omissions, even though their interpretation is indeed problematic, they are nevertheless routinely tracked as errors, and they can be predicted by specific psycholinguistic factors such as semantic competition (Schnur et al, 2006). Since the authors did not include an analysis of factors that could have contributed to each type of error (e.g., percent name agreement, age of acquisition, target word length), it is unknown whether certain stimuli were consistently more likely to induce errors, as was found in Postman-Caucheteux et al (in press). Given that the effects of different psycholinguistic variables on picture-naming have been linked to specific neural areas of activation (Schnur et al 2009, Wilson et al 2009), future studies inspired by F1 should seek to isolate the variables that induce errors, perhaps by manipulating stimuli according to factors of interest and measuring reaction times. This approach may be helpful for interpreting the nature of error-linked activation.

As corroboration of their findings in F1, the authors cite a clever treatment study of word learning using PET by Raboyeau et al (2008). Chronic stroke patients with aphasia were trained to produce names of objects that had been difficult for them prior to therapy. At the same time, healthy participants were trained to produce words in second languages that they had acquired in school with varying degrees of proficiency. Raboyeau et al’s findings of increased right insular and frontal activation with word learning in both groups are interpreted in F1 along these lines:

“[They] concluded that increased activity in the right frontal lobe in aphasia is not merely the consequence of damaged homologues in the left hemisphere but, rather, is a reflection of increased reliance on the right hemisphere to support aphasia recovery” (p. 2496).

Raboyeau et al's findings may actually be trickier to explain, as they also included more activation in left frontal regions (BAs 10 and 11) in the patients but not the controls. Additional left hemisphere activation may have been present in some patients but, as with F1, lesioned voxels were excluded from their analyses. Here is how Raboyeau et al state their own conclusions:

“[...] Activations observed in these two right frontal regions do not seem to play a true compensatory function in aphasia (italics mine), and do not represent a mere consequence of left hemispheric lesion, as they existed also in non–brain-damaged subjects [...]” (p.296).

So as I understand their discussion, they did not infer that right insular and frontal activation supported recovery, as suggested in F1. Rather, their results indicated greater effort and cued word retrieval as a result of training, in patients as well as controls. If I have misconstrued Raboyeau et al’s and F1’s conclusions, hopefully someone will help me see the light.

Part II on F2: Fridriksson, J., Bonilha, L., Baker, J.M., Moser, D., & Rorden, C. (in press). Activity in preserved left hemisphere regions predicts anomia severity in aphasia. Cerebral Cortex.

While the fMRI task and methods of acquisition for F2 appear to be almost identical to those in F1, the leading question and analytical techniques were virtually the converse. Instead of focusing on patients’ errors as in F1, here the authors asked which brain regions appear to support accurate overt picture naming. Also, instead of analyzing the cohort of patients as a group and creating a group lesion mask, here the authors examined the patients (N=15) individually and compared each one’s activation map to the average from an equal number of healthy age-matched control participants.

Since I cannot do justice to their advanced methods here, readers are referred to the original paper for details on the complex steps involved in comparing activation maps, derived from each patient's contrast of correct picture naming vs. abstract (silent) picture viewing, with the controls' group map. In essence, the degree to which each patient's activation map deviated from the average control map was correlated with their proportion of correct naming responses. In addition, structural analyses were conducted to investigate whether intensity of activation associated with correct naming was dependent upon specific areas of damage. (A toy sketch of this Z-score logic appears after the results list below.) Results were:

1) In the control group, picture-naming was supported by bilateral activation in posterior regions (cuneus and inferior/middle occipital gyrus (BA 18), middle temporal gyrus (BA 37)), but was highly left-lateralized in the transverse (BA 42) and superior (BA 22) temporal gyri, and frontally in the inferior frontal gyrus (BA 45), middle frontal gyrus (BA 10, 11, 47), and anterior cingulate (BA 32).

2) For the 15 participants with aphasia due to left-hemisphere stroke, accurate picture naming was supported by many of the same left-lateralized regions observed in the control group. Most of these were perilesional, namely, medial & middle frontal gyrus (BA 10, 11, 47) and inferior occipital gyrus (18). The left anterior cingulate gyrus (BA 32) was also linked to correct naming in patients, but was considered too medial to qualify as perilesional for this cohort of patients.

3) In the patients, intensity of activation in these left-lateralized areas correlated with number of correct names. Here’s the money shot (Figure 3 in F2), showing cortical areas associated with naming task performance (red-yellow scale) along with the lesion overlay map for all 15 patients (blue-green scale):



4) But Fridriksson et al didn’t stop there. Even more dazzling, those patients who did best on the naming task in the scanner tended to show greater activation than the controls in the regions highlighted above (red-yellow), and those who did less well on the naming task showed less activation than the controls in the same areas. Figure 4 in F2 is copied below, showing “the relationship between intensity of activation (x-axis; measured in Z-scores compared with a group of normal control participants) and the number of correct naming attempts (y-axis; out of 80 pictures) during fMRI scanning”:



5) A final intriguing result: Intensity of activation in the patients was inversely correlated with damage to the posterior left IFG (BA 44).
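
As I read results 3) and 4), the core logic is to express each patient's activation in the naming-related regions as a Z-score against the control group and relate that to naming performance. Here is a toy sketch of that logic with synthetic numbers (not the authors' data or pipeline, and a plain Pearson correlation standing in for their much more sophisticated statistics):

```python
# Toy sketch: patient activation expressed as Z-scores relative to controls,
# correlated with naming performance (out of 80 pictures). Synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
control_act = rng.normal(loc=1.0, scale=0.3, size=15)   # control ROI activation
patient_act = rng.normal(loc=0.8, scale=0.5, size=15)   # patient ROI activation
correct = np.clip((40 + 30 * patient_act).astype(int), 0, 80)  # toy performance

z = (patient_act - control_act.mean()) / control_act.std(ddof=1)
r, p = stats.pearsonr(z, correct)
print(f"r = {r:.2f}, p = {p:.4f}")   # strong positive relation, by construction
```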

The findings in 2) are corroborated by those in Postman-Caucheteux et al (in press), showing predominantly left perilesional activation for accurate picture-naming in patients with frontal-insular-parietal damage. To my knowledge, the findings in 3) and 4) provide the most precise characterization yet of ipsilesional (including perilesional and non-perilesional) areas of activation for language production in aphasia, and the most direct link yet discovered between this activation and production performance.

Part III: Sum Total of F1 + F2
F2 contributes to the mounting evidence, from the nascent wave of overt-speech fMRI studies, for the fundamental importance of restoration or re-integration of perilesional tissue for good language production in people with chronic aphasia due to stroke (see references in Fridriksson et al (in press) and Postman-Caucheteux et al (in press)). So I couldn't help but wonder: how would the authors relate these findings to their earlier paper (F1), in which they found a positive role for the right IFG in patients' accurate naming? As I understand them, the results of F1 and F2 raise the following possibilities:

1) Contralesional (right) IFG may be working in tandem with perilesional regions (and perhaps also ipsilesional non-perilesional regions such as anterior cingulate) to achieve accurate naming in some patients. Thus in F1, some patients who showed increasing right IFG involvement with increasing naming success could have also shown substantial perilesional involvement that was not observable due to the group lesion mask. If this were the case, then it would provide evidence for partnership, rather than competition, between frontal areas of both stroke and non-stroke hemispheres.

2) In some patients, contralesional activation may be so negligible in comparison with robust perilesional activation that it only becomes apparent when large portions of the left hemisphere are masked out by group lesion analyses, as in F1. Presumably, some of the patients in F2 could have also shown right IFG involvement in successful naming, but the intensity may have been too minimal relative to ipsilesional areas to be reliably detected.

I’d like to propose that contralesional IFG activation might be helpful for good language production, jointly with ipsilesional areas, up to a certain, relatively low threshold. When it exceeds this threshold, it might constitute over-activation that is not effective, may be more evident for naming errors (as observed in Postman-Caucheteux et al, in press), and may even interfere with the functioning of ipsilesional areas (Martin et al, 2009).

In the two studies reviewed here, Dr. Fridriksson’s team found contributions of perilesional and contralesional activation to language production in post-stroke aphasia. A major step forward in disentangling these contributions has been achieved with their identification of areas involved in certain types of naming errors, signaling the way for future fMRI studies to appreciate the details of patients’ production performance. Moreover, they have deepened our understanding of the tight link between activation in certain ipsilesional areas and successful overt word production. To continue progressing in the direction led by Fridriksson et al, recognition of functional partnership between stroke and non-stroke hemispheres, and distinction between effective activation and ineffective/maladaptive over-activation of contralesional areas, may be helpful in future investigations and discussions.

Footnotes

1. The row indicating categories of nonfluent and fluent participants was added here. It does not appear in the original Table II in F1, p.2492.
2. The statistical methods employed by Fridriksson et al were much more sophisticated than mere correlation, but they will not be described in depth here.


References

Bose, A., & Buchanan, L. (2007). A cognitive and psycholinguistic investigation of neologisms. Aphasiology, 21, 726-738.

Crosson, B., McGregor, K., Gopinath, K.S., Conway, T.W., Benjamin, M., Chang, Y.L., et al. (2007). Functional MRI of language in aphasia: A review of the literature and the methodological challenges. Neuropsychology Review, 17, 157–177.

Fridriksson, J., Baker, J.M. & Moser, D. (2009). Cortical mapping of naming errors in aphasia. Human Brain Mapping, 30, 2487-2498.

Fridriksson, J., Bonilha, L., Baker, J.M., Moser, D., & Rorden, C. (in press). Activity in preserved left hemisphere regions predicts anomia severity in aphasia. Cerebral Cortex.

Fridriksson, J., Morrow, K. L., Moser, D., & Baylis, G. C. (2006). Age-related variability in cortical activity during language processing. Journal of Speech, Language, and Hearing Research, 49, 690–697.

Fridriksson, J., & Morrow, L. (2005). Cortical activation and language task difficulty in aphasia. Aphasiology, 19, 239–250.

Fridriksson, J., Holland, A.L., Coull, B.M., Plante, E., Trouard, T.P., & Beeson, P. (2002). Aphasia severity: Association with cerebral perfusion and diffusion. Aphasiology, 16, 859-871.

Martin, P.I., Naeser, M.A., Ho, M., Doron, K.W., Kurland, J., Kaplan, J., et al, (2009). Overt naming fMRI pre- and post-TMS: Two nonfluent aphasia patients, with and without improved naming post-TMS. Brain and Language, 111, 20-35.

Meinzer, M., Flaisch, T., Obleser, J., Assadollahi, R., Djundja, D., Barthel, G., et al. (2006). Brain regions essential for improved lexical access in an aged aphasic patient: A case report. BMC Neurology, 6, 28.

Postman-Caucheteux, W.A., Birn, R.M., Pursley, R.H., Butman, J.A., Solomon, J.M., Picchioni, D., McArdle, J., & Braun, A.R. (in press). Single-trial fMRI shows contralesional activity linked to overt naming errors in chronic aphasic patients. Journal of Cognitive Neuroscience.

Raboyeau, G., De Boissezon, X., Marie, N., Balduyck, S., Puel, M., Bézy, C., et al. (2008). Right hemisphere activation in recovery from aphasia: Lesion effect or function recruitment? Neurology, 70, 290–298.

Schnur, T. T., Schwartz, M. F., Brecher, A., & Hodgson, C. (2006). Semantic interference during blocked-cyclic naming. Evidence from aphasia. Journal of Memory and Language, 54, 199–227.

Schnur, T.T., Schwartz, M.F., Kimberg, D.Y., Hirshorn, E., Coslett, H.B., & Thompson-Schill, S.L. (2009). Localizing interference during naming: Convergent neuroimaging and neuropsychological evidence for the function of Broca's area. Proceedings of the National Academy of Sciences, 106, 322-327.

Schwartz, M.F., Dell, G.S., Martin, N., Gahl, S., & Sobel, P. (2006). A case-series test of the interactive two-step model of lexical access: Evidence from picture naming. Journal of Memory and Language, 54, 228-264.

Vitali, P., Abutalebi, J., Tettamanti, M., Danna, M., Ansaldo, A.-I., Perani, D., et al. (2007). Training-induced brain remapping in chronic aphasia: A pilot study. Neurorehabilitation and Neural Repair, 21, 152–160.

Wilson, S.M., Isenberg, A.L., & Hickok, G. (2009). Neural correlates of word production stages delineated by parametric modulation of psycholinguistic variables. Human Brain Mapping, 30, 3596-3608.

Wednesday, January 13, 2010

Multi-talker speech recognition

A couple of new papers in J. Neuroscience relevant to speech recognition in multi-talker environments. I haven't read them yet but both look interesting. More soon...

Attentional Gain Control of Ongoing Cortical Speech Representations in a "Cocktail Party"
Jess R. Kerlin, Antoine J. Shahin, and Lee M. Miller
J. Neurosci. 2010;30:620-628
http://www.jneurosci.org/cgi/content/abstract/30/2/620?etoc


How the Human Brain Recognizes Speech in the Context of Changing Speakers
Katharina von Kriegstein, David R. R. Smith, Roy D. Patterson, Stefan J. Kiebel, and Timothy D. Griffiths
J. Neurosci. 2010;30:629-638
http://www.jneurosci.org/cgi/content/abstract/30/2/629?etoc

Monday, January 4, 2010

Auditory Cognitive Neuroscience Society meeting 2010

If you are in the vicinity of Tucson late this week, consider dropping in on the Auditory Cognitive Neuroscience Society meeting held at the University of Arizona, Jan. 7-8. The weather forecast is sunny and 70 degrees. Both David and I will be there presenting and discussing.

Special Issue of Brain and Language -- Mirror Neurons: Prospects and Problems for the Neurobiology of Language

The latest issue of Brain and Language is a special issue on mirror neurons. It has 7 original articles spanning topics from speech perception to language evolution to embodied semantics. The papers are authored by the likes of Luciano Fadiga, Michael Arbib, Michael Corballis, David Corina, Marco Iacoboni, David Kemmerer, and Greig de Zubicaray. There is also a brief introduction by yours truly which summarizes the facts that any mirror neuron-based theory of language processes must confront. As for my more detailed thoughts on these issues, you'll have to check out a forthcoming paper in Language and Cognitive Processes.