Monday, October 6, 2008

Speech recognition and the left hemisphere

In contrast to the traditional view that all aspects of speech processing are strongly left dominant, we have argued in several papers that the recognition of speech sounds is supported by auditory regions in both hemispheres (Hickok & Poeppel, 2000, 2004, 2007). The evidence for this view comes from neuropsychological studies:

1. Chronic damage to the left superior temporal gyrus alone is not associated with auditory comprehension deficits or speech perception deficits, but instead is associated with speech production deficits (conduction aphasia).

2. More extensive chronic damage to the left temporal lobe IS associated with auditory comprehension deficits (e.g., in Wernicke's aphasia), but these deficits are not predominantly caused by difficulties in perceiving speech sounds. Instead, post-phonemic deficits appear to account for the majority of the auditory comprehension deficit in aphasia. Evidence for this conclusion comes from the fact that such patients tend to make more semantic than phonemic errors on auditory word-to-picture matching tests with semantic and phonemic foils.

3. In contrast to the relatively minimal effects of unilateral damage on speech sound recognition, damage to superior temporal regions in both hemispheres IS associated with a profound deficit in perceiving speech sounds (e.g., word deafness).

One criticism of this body of neuropsychological data is that it involves patients with chronic lesions, and therefore the possibility of compensatory reorganization of speech recognition processes. For example, it could be that speech recognition is strongly left dominant in the intact brain, but following chronic left hemisphere injury, the right hemisphere gradually assumes speech recognition function.

Two new studies argue against this view. Both examine the effects of acute left hemisphere disruption on auditory word comprehension; one uses Wada methods, the other acute stroke. Both find that (i) auditory word-level comprehension deficits tend to be relatively mild, and (ii) these deficits primarily reflect post-phonemic processes.

Evidence from Wada procedures

This study (Hickok et al., 2008) looked at the ability of patients undergoing clinically indicated Wada procedures to comprehend auditorily presented words with either their left or right hemisphere anesthetized. Patients listened to a stimulus word and were asked to point to the matching picture in a four-picture array that included the target, a semantic foil, a phonemic foil, and an unrelated foil. The basic results are provided in the figure below. Overall, errors were more common following left hemisphere anesthesia, but when errors occurred, they tended to be semantic (>2:1 ratio). Notice that the overall phonemic error rate with left disruption is less than 10%. This indicates that even acute disruption of left hemisphere function does not profoundly affect speech sound recognition during auditory comprehension.

Evidence from acute stroke

One could argue that evidence from Wada procedures may not generalize to the population as a whole, given that Wada patients have a pre-existing neurological condition. Studies of patients in the acute phase of stroke avoid this complication. In a collaborative study with Argye Hillis at Johns Hopkins, we examined the auditory comprehension abilities of 289 patients who were within 24 hours of hospital admission for stroke (Rogalsky et al., 2008). For this study we used a picture verification paradigm: subjects heard a word and were shown a picture that either matched the word, was a semantic foil, or was a phonemic foil. Subjects were asked to decide whether the word and picture matched. We used a signal detection-based analysis to determine how well subjects discriminated matches from non-matches. The top panel of the figure below shows the distribution of patients across the different performance levels. Notice that only a very small fraction of the entire group (~7%) scored worse than 80% correct overall. The bottom panel shows how well subjects in each of these performance bins could discriminate targets from semantic versus phonemic foils (y-axis = A-prime scores, which approximate % correct). At every performance level, semantic confusions dominated (i.e., scores were lower for semantic foils). Even within the bottom 7% of subjects -- those who scored worse than 80% correct -- performance was better on phonemic foils than on semantic foils by 10 percentage points (72% vs. 62% correct, respectively), and well above chance (50%).
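For readers unfamiliar with the measure, here is a minimal sketch of Grier's A', a standard nonparametric signal-detection index of the kind used in this analysis. The exact computation in Rogalsky et al. is not reproduced here; the function name and the hit/false-alarm framing are illustrative.

```python
def a_prime(hit_rate, false_alarm_rate):
    """Grier's A': nonparametric discrimination index.

    0.5 = chance discrimination, 1.0 = perfect. In a picture verification
    task, a "hit" is accepting a matching picture and a "false alarm" is
    incorrectly accepting a foil.
    """
    h, f = hit_rate, false_alarm_rate
    if h >= f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    # symmetric form for below-chance performance
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

# Hypothetical subject: accepts 90% of matches, but also 40% of semantic
# foils while accepting only 10% of phonemic foils -- the qualitative
# pattern reported above (worse discrimination of semantic foils).
semantic_aprime = a_prime(0.9, 0.4)   # lower score
phonemic_aprime = a_prime(0.9, 0.1)   # higher score
```

The rates fed into the function are made up for illustration; only the direction of the difference (semantic A' below phonemic A') mirrors the reported result.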

We conclude that the processing of speech sounds during auditory word comprehension is not profoundly impaired by left hemisphere damage either in chronic or acute stages of insult. This in turn indicates that both hemispheres of the intact brain have the capacity for processing speech sounds during comprehension. In other words, speech sound processing is bilaterally organized to some extent. This stands in sharp contrast to the impact of unilateral lesions on speech production, which can lead to profound deficits.


Hickok, G., Okada, K., Barr, W., Pa, J., Rogalsky, C., Donnelly, K., Barde, L. & Grant, A. (in press). Bilateral capacity for speech sound processing in auditory comprehension: Evidence from Wada procedures. Brain and Language.

Hickok, G. & Poeppel, D. (2000). Towards a functional neuroanatomy of speech perception. Trends in Cognitive Sciences, 4(4), 131-138. DOI: 10.1016/S1364-6613(00)01463-7

Hickok, G. & Poeppel, D. (2004). Dorsal and ventral streams: a framework for understanding aspects of the functional anatomy of language. Cognition, 92(1-2), 67-99. DOI: 10.1016/j.cognition.2003.10.011

Hickok, G. & Poeppel, D. (2007). The cortical organization of speech processing. Nature Reviews Neuroscience, 8(5), 393-402. DOI: 10.1038/nrn2113

Rogalsky, C., Pitz, E., Hillis, A. E. & Hickok, G. (2008). Auditory word comprehension impairment in acute stroke: Relative contribution of phonemic versus semantic factors. Brain and Language. DOI: 10.1016/j.bandl.2008.08.003


Dörte Hessler said...

Dear Greg,

nice paper, indeed. However, I think the claim that unilateral lesions do not lead to phonemic disorders might be a little too strong. Having a closer look at the materials used, it becomes clear that at least some of the foils differ widely from the target. Of the 17 items, 2 foils have a different number of syllables than the target and 7 others differ in the vowel, leaving only 8 items that differ in consonants (to varying degrees). Given the perceptual salience of vowels, a phonological disorder would have to be very substantial to get those items wrong. So 'non-dramatically' impaired subjects will already score correct on those 9 items. Considering then the .5 chance level, they will get 4 of the other 8 right merely by guessing. Therefore it seems plausible that participants score correct on 13 items (or 76.5%) essentially by guessing.
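In code, that back-of-the-envelope estimate works out as follows (item counts as just described; variable names are mine):

```python
# Chance-level estimate for the 17-item set described above.
easy_items = 9    # 2 syllable-count foils + 7 vowel foils, assumed perceptible
hard_items = 8    # consonant foils of varying difficulty
p_guess = 0.5     # match / no-match decision -> 50% correct by guessing

expected_correct = easy_items + p_guess * hard_items   # 13.0 items
expected_score = expected_correct / (easy_items + hard_items)
print(round(expected_score * 100, 1))   # 76.5
```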
Anyway, what I want to say here is that this research is based on a really great idea, but the materials are not quite optimal. I think it would be necessary to conduct a similar study with better balanced materials (with regard to whether the vowel or a consonant is changed, and by how many phonetic features) in order to support your claim. At the very least, I believe that a disorder in which you cannot discriminate consonants that differ by 2 or 3 phonetic features should also be regarded as serious (keeping in mind all the problems this causes in comprehension). With the materials used in the Rogalsky et al. study, however, those patients would score almost at ceiling!
Nonetheless you're obviously right that the reported cases of word deafness generally involved either bilateral lesions or a subcortical unilateral left lesion (cf. Auerbach et al., 1982; Polster & Rose, 1998). But those cases, of course, were chronic.
Therefore a procedure such as the one you suggested is really needed, though with better structured materials, I think.

Auerbach, S. H., Allard, M., Naeser, M., Alexander, M. P. & Albert, M. L. (1982). Pure word deafness: analysis of a case with bilateral lesions and a defect at the prephonemic level. Brain 105, 271-300.

Polster, M. R. & Rose, S. B. (1998). Disorders of auditory processing: Evidence for modularity in audition. Cortex, 34, 47-65.

Greg Hickok said...

Thank you for your thoughtful comment. Just to be clear, the concerns you raise apply to the acute stroke study, not the Wada study, which used phonological foils that differed from the targets by one feature (bear-pear). That said, I agree with you that the materials are far from optimal. This was a pre-existing data set that we had the good fortune to analyze. Nonetheless, even assuming you are correct that a patient with a severe phonemic perception deficit will perceive vowels normally (I'm not so sure), we're still only talking about a small fraction of the sample having trouble with the task; the vast majority score better than 90%. If you contrast this performance with that of patients with word deafness -- who can't tell "dog" from "cat" -- it seems clear that even acute unilateral damage does not produce the kind of phonemic deficits one would expect if the phonemic processing system had just been destroyed. Further, the Wada study provides converging evidence for the findings of the acute stroke study. So on balance, I think the evidence points to relatively mild phonemic deficits with unilateral disruption, which supports a bilateral organization of speech recognition processes.

You note that word deafness can occur following unilateral subcortical lesions. This is true, of course. My position is that these are aberrant cases: if you weigh the frequency of unilateral word deafness (a dozen or so cases in the history of neuropsychology?) against the frequency of left unilateral strokes that do not cause word deafness (no doubt a daily occurrence), it is quite possible that unilateral word-deaf cases represent an atypical organization of language. For example, a fraction of the population has reversed dominance. This doesn't mean that language is right hemisphere dominant; it just means that some people have a different brain organization for language.

Anonymous said...

Nice papers; it's great to see so many patients involved. I can't read the paper that's in press, so maybe you've addressed this point, but it seems to me one could argue that your Wada result actually points *away* from a completely bilateral organisation for phonological processing -- i.e., why doesn't right anesthesia have any effect *at all* on phonological accuracy? Doesn't that suggest a left hemisphere bias of some sort? Something useful for phonological processing is obviously living in the left hemisphere somewhere. I agree that almost all left hemisphere patients with auditory comprehension deficits make more semantic errors than phonological ones, but many do make phonological errors, and often perform very badly on minimal pairs etc. The LH patients may not have had their 'phonemic processing system' destroyed, but it's often obviously damaged in a way that's never seen with RH patients.

Greg Hickok said...

We've never claimed that the bilateral organization is symmetric; in fact, we have argued specifically that it is not. With regard to comprehension, what the data (lesion and now Wada) seem to indicate so far is that the left hemisphere is perfectly capable of processing phonological information all by itself, whereas the right hemisphere is not quite as efficient, making phonemic errors somewhere between 5 and 15% of the time (<10% in the Wada study). This could arise from the different sampling rates of the two hemispheres, which allow the left hemisphere to make fine phonological distinctions more reliably, or because left hemisphere systems are wired to left frontal systems that can exert some top-down influence.

What this work shows very clearly, though, is that in the normal brain, the right hemisphere is quite capable of processing phonemic-level information during comprehension. It is our view that the two systems work together to achieve optimal performance. It will be interesting to see whether there are conditions (e.g., noisy speech) under which the contributions of the right hemisphere become more apparent.

David Poeppel said...

A couple more comments on this. First of all, nice set of papers. In my view, they support both the claim that speech perception is bilaterally mediated and the claim that the two sides don't execute identical computations.

At the risk of sounding sentimental, I have argued for this position since at least 1995, when, in my dissertation, I made precisely this point on the basis of the neuropsychological and imaging data then available. In a more recent review in the journal Cognitive Science in 2001 ("Pure word deafness and the bilateral processing of the speech code"), I summarized the word deafness literature and argued for a position in which there are particular processing differences between the two hemispheres while still maintaining that you need both. Moreover, even more recent imaging data are consistent with this position as well (e.g., from my own lab, Poeppel et al., 2004, Neuropsychologia).

Dana Boatman from Hopkins also has relevant data from patients suggesting that the isolated right hemisphere does remarkably well at speech.

Based on the evidence that I am aware of, I favor the view mentioned by Greg, and also supported by work in Jeff Binder's group: the processing of the speech code is bilateral, but the two hemispheres don't execute the identical computations. The (richly documented) lateralization effects are associated with lexical phenomena and, in general, processing BEYOND the initial mapping from sound to speech.

I'd like to point out, in addition, that a variety of lexical phenomena are also most likely executed bilaterally. In fact, in Greg's and my 2007 review, this point is made quite explicitly. So it looks like even lexical access occurs bilaterally, although arguably not in exactly the same way in the two hemispheres.

What is going on in the left versus the right hemisphere during lexical access is a topic I would like to investigate more in the near future, and if there are any post-docs who would like to join me in this endeavor (it will be in my new lab at NYU), please send me a note.

Dörte Hessler said...

Hi again,

First, thanks to Greg for your response, which made me think for quite a while, especially your comment on atypical cortical organization. So I went back through the articles on phonemic processing deficits I had read before, because I seemed to remember that there was a substantial number of patients with unilateral damage.
But to clarify some things first: of course my earlier comment was about the acute stroke study -- sorry, I should have said so more clearly. Furthermore, I definitely did not want to claim that the right hemisphere does not play any role in phonological processing; I think there is a vast amount of evidence that it is in fact involved (some of it cited in the comments above). However, I did want to claim (and still do) that damage solely to the left hemisphere can lead to word sound deafness (as defined, e.g., by Franklin, 1989): that is, problems identifying or discriminating speech sounds in the absence of hearing deficits. I quote Sue Franklin here because she looked at this phenomenon in the context of aphasia and not as a pure syndrome, which, indeed, is very rare. Looking at aphasic cases, quite a lot of patients with left hemisphere damage have shown problems discriminating or identifying speech sounds. I won't quote the single case studies here, but will limit myself to larger group studies, particularly 4 of them that investigated not only patients with a proven disorder of auditory discrimination, but a broader aphasic group:

- Basso, Casati & Vignolo (1977): Of 50 aphasic patients (with unilateral left hemisphere damage), only 13 (26%) were unimpaired on a phoneme identification task (concerning voice onset time); the remaining 37 patients showed impaired performance.

The three other studies concern minimal pair discrimination:

- Varney & Benton (1979): Of 39 aphasic patients (with unilateral left hemisphere damage), 10 (~25.6%) showed defective performance on the minimal pair discrimination task; the other 29 showed normal performance.

- Miceli, Gainotti, Caltagirone & Masullo (1980): Of 66 aphasic patients (with unilateral left hemisphere damage), 34 (~51.5%) showed pathological performance on a phoneme discrimination task; the other 32 scored normally.

- Varney (1984): Of 80 aphasic patients (with unilateral left hemisphere damage), 14 (17.5%) showed defective performance on the same task as used in Varney & Benton (1979); the remainder were unimpaired.

To sum up: 235 aphasic patients (all with unilateral left hemisphere damage) took part in these studies, and 95 of them (~40%) were impaired on tasks assessing phonemic processing (discrimination and identification).

To me this underlines the notion that damage to the left hemisphere is definitely sufficient to cause a substantial problem in the recognition/processing of speech sounds!
These results of course differ considerably from those of the acute stroke study of Rogalsky and colleagues (2008), which, as I argued above, is due to the materials used in that study.

Franklin, S. (1989). Dissociations in auditory word comprehension: evidence from nine fluent aphasic patients. Aphasiology 3(3), 189-207.

Basso, A., Casati, G. & Vignolo, L. A. (1977). Phonemic identification defects in aphasia. Cortex, 13, 84-95.

Varney, N.R. & Benton, A.L. (1979). Phonemic discrimination and aural comprehension among aphasic patients. Journal of Clinical Neuropsychology 1(2), 65-73.

Miceli, G., Gainotti, G., Caltagirone, C. & Masullo, C. (1980). Some aspects of phonological impairment in aphasia. Brain and Language 11, 159-169.

Varney, N.R. (1984). Phonemic imperception in aphasia. Brain and Language 21, 85-94.

Rogalsky, C., Pitz, E., Hillis, A. E. & Hickok, G. (2008). Auditory word comprehension impairment in acute stroke: Relative contribution of phonemic versus semantic factors. Brain and Language 107(2), 167-169.