Thursday, October 16, 2008

Speech recognition and the left hemisphere: Task matters!

I fully agree with Dorte Hessler's assessment that left hemisphere damage can produce significant "problems to identify or discriminate speech sounds in the absence of hearing deficits." But here is the critical point that David and I have been harping on since 2000: the ability to explicitly identify or discriminate speech sounds (e.g., say whether /ba/ & /pa/ are the same or different) on the one hand, and the ability to implicitly discriminate speech sounds (e.g., recognize that bear refers to a forest animal while pear is a kind of fruit) on the other, are two different things. While it is a priori reasonable to try to study speech sound perception by "isolating" that process in a syllable discrimination task (ba-pa, same or different?), it turns out that by doing so we end up measuring something completely different from normal speech sound processing as it is used in everyday auditory comprehension. Given that our goal is to understand how speech is processed in ecologically valid situations -- no one claims to be studying the neural basis of the ability to make same-different judgments about nonsense syllables; they claim to be studying "speech perception" -- it follows that syllable discrimination tasks are invalid measures of speech sound processing. I believe the use of syllable discrimination tasks in speech research has impeded progress in understanding the neural basis of speech perception.

Let me explain.

Some of the same studies that Dorte correctly noted as providing evidence for deficits on syllable discrimination tasks following left hemisphere damage also show that the ability to perform syllable discrimination double-dissociates from the ability to comprehend words. Here is a graph from a study by Sheila Blumstein showing auditory comprehension scores plotted on the y-axis and three categories of performance on syllable discrimination & syllable identification tasks on the x-axis. The plus and minus signs indicate preserved or impaired performance, respectively. The letters in the graph correspond to clinical aphasic categories (B = Broca's, W = Wernicke's). Notice the red arrows. They point to one patient who has the worst auditory comprehension score in the sample -- a Wernicke's aphasic, not surprisingly -- yet who performs well on syllable discrimination/identification tasks, and to another patient who has the best auditory comprehension score in the sample -- a Broca's aphasic, not surprisingly -- yet who fails on both syllable discrimination and identification. A nice double dissociation.



But that's only two patients, and the measure of auditory comprehension is coarse in that it mixes sentence-level and word-level performance. Fair enough. So here are data from Miceli et al. comparing auditory comprehension of words (4AFC with phonemic and semantic foils) and syllable discrimination. Notice that 19 patients are pathological on syllable discrimination yet normal on auditory comprehension, while 9 patients show the reverse pattern. More double dissociations.



Where are the lesions that produce the deficits on syllable discrimination versus auditory comprehension? According to Basso et al., syllable discrimination deficits are most strongly associated with non-fluent aphasia, which in turn is most strongly associated with frontal lesions. According to a more recent study by Caplan et al., the inferior parietal lobe is also a critical site. Notice that these regions have also been implicated in sensory-motor aspects of speech, including verbal working memory. This contrasts with work on the neural basis of auditory comprehension deficits (e.g., Bates et al.), which implicates the posterior temporal lobe (STG/MTG).



Some case study contrasts from Caplan et al. underline the point. On the left is a patient with a lesion in the inferior frontal lobe who was classified as a Broca's aphasic. On the right is a patient with a temporal lobe lesion and a classification of Wernicke's aphasia. By definition, the Broca's patient will have better auditory comprehension than the Wernicke's patient. Yet look at the syllable discrimination scores of these patients. The Broca's case is performing at 72% correct, whereas the Wernicke's case is at 90%. Again, the patient with better comprehension is performing worse on syllable discrimination, showing that syllable discrimination isn't measuring normal speech sound processing.



To my reading the data are unequivocal. Syllable discrimination tasks tap a different set of processes than auditory comprehension tasks do, even though both ostensibly involve the processing of speech sounds. How can this be? Here's an explanation. Syllable discrimination involves activating a phonological representation of one syllable, maintaining that activation while the phonological representation of a second syllable is activated, comparing the two, and then making a decision. Deficits on this task could therefore arise from a failure in activating the phonological representations, in maintaining both representations simultaneously in short-term memory, in comparing the two representations, or in making the decision. Only one of these processes is clearly shared with an auditory comprehension task, namely, activating the phonological representations. I suggest that the deficits in syllable discrimination following left hemisphere damage, particularly left frontal damage, result from one or more of the non-shared components of the task. The fact that the network implicated in syllable discrimination (fronto-parietal regions) is largely identical to the network independently implicated in phonological working memory supports this claim. If, on the other hand, a patient had a significant disruption of the sensory system that activates phonological representations -- e.g., patients with bilateral lesions and word deafness -- then the disruption should be evident on both discrimination and comprehension tasks.
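To make the logic of this argument concrete, here is a minimal sketch in Python (purely illustrative; the component labels and the all-or-none lesion logic are my own simplifications, not anything taken from the studies cited):

# A toy model of the argument: each task is a set of component processes,
# a "lesion" knocks out one or more components, and a task fails if any
# component it depends on is lost. Component labels are illustrative only.

SYLLABLE_DISCRIMINATION = {
    "activate_phonological_reps",    # shared with comprehension
    "maintain_in_working_memory",    # task-specific
    "compare_representations",       # task-specific
    "make_same_different_decision",  # task-specific
}

AUDITORY_COMPREHENSION = {
    "activate_phonological_reps",    # shared with discrimination
    "map_phonology_to_meaning",      # task-specific
}

def task_impaired(task_components, lesioned_components):
    """A task is impaired if the lesion removes any component it needs."""
    return bool(task_components & lesioned_components)

# Damage to a non-shared component (e.g., phonological working memory):
frontal_lesion = {"maintain_in_working_memory"}
print(task_impaired(SYLLABLE_DISCRIMINATION, frontal_lesion))  # True
print(task_impaired(AUDITORY_COMPREHENSION, frontal_lesion))   # False

# Damage to the shared input stage (e.g., word deafness):
word_deafness = {"activate_phonological_reps"}
print(task_impaired(SYLLABLE_DISCRIMINATION, word_deafness))   # True
print(task_impaired(AUDITORY_COMPREHENSION, word_deafness))    # True

On this toy logic, damage to any of the non-shared components (maintenance, comparison, decision) disrupts syllable discrimination while leaving comprehension intact, whereas damage to the shared phonological input stage disrupts both -- exactly the pattern of dissociations described above.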

It is hard for us to give up syllable discrimination as our bread-and-butter task in speech research. It seems so rigorous and controlled. But the empirical facts show that it doesn't work. In the neuroscience branch of speech research, the task produces invalid and misleading results (if our goal is to understand speech perception under ecologically valid listening conditions). It's time to move on.

References

Basso, A., Casati, G. & Vignolo, L. A. (1977). Phonemic identification defects in aphasia. Cortex, 13, 84-95

Bates, E., Wilson, S. M., Saygin, A. P., Dick, F., Sereno, M. I., Knight, R. T. & Dronkers, N. F. (2003). Voxel-based lesion–symptom mapping. Nature Neuroscience. DOI: 10.1038/nn1050

Blumstein, S., Cooper, W., Zurif, E. & Caramazza, A. (1977). The perception and production of Voice-Onset Time in aphasia. Neuropsychologia, 15(3), 371-372. DOI: 10.1016/0028-3932(77)90089-6

Caplan, D., Gow, D. & Makris, N. (1995). Analysis of lesions by MRI in stroke patients with acoustic-phonetic processing deficits. Neurology, 45: 293 - 298.

Hickok, G. & Poeppel, D. (2000). Towards a functional neuroanatomy of speech perception. Trends in Cognitive Sciences, 4(4), 131-138. DOI: 10.1016/S1364-6613(00)01463-7

Hickok, G. & Poeppel, D. (2007). The cortical organization of speech processing. Nature Reviews Neuroscience, 8(5), 393-402. DOI: 10.1038/nrn2113

Miceli, G., Gainotti, G., Caltagirone, C. & Masullo, C. (1980). Some aspects of phonological impairment in aphasia. Brain and Language, 11(1), 159-169. DOI: 10.1016/0093-934X(80)90117-0

1 comment:

Brad Buchsbaum said...

One prediction might be that Broca's aphasics should perform poorly on non-linguistic decision-making tasks, including difficult visual perceptual judgments (e.g., the directional dot motion task). Wernicke's patients, on the other hand, should perform well on such visual decision-making tasks.