Several observations suggest a connection between conduction aphasia and disruption of our proposed sensory-motor integration area, Spt.
1. Spt is located in the posterior planum temporale region. The lesion distribution in conduction aphasia seems to be centered on this same location (Baldo et al., 2008).
2. Spt activity is modulated by word length (Okada et al., 2003) and frequency, and has been implicated in accessing lexical phonology (Graves et al., 2008). Conduction aphasics commit predominantly phonemic errors in their output, and these errors increase with longer, less frequent words.
3. Spt is not speech-specific, in that tonal/melodic tasks also activate this region (Hickok et al., 2003). Conduction aphasics appear to have deficits that also affect tonal processing (Strub & Gardner, 1974).
The idea is that this sensory-motor circuit is critical in supporting sensory guidance of speech output, and that such guidance is most critical for phonemically complicated words and phrases, for low-frequency words, or for items with little or no semantic constraint (e.g., non-words, phrases like "no ifs, ands, or buts"). If a word is short or used frequently, the claim goes, its motor representation can be activated as a chunk rather than programmed syllable by syllable.
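Just to make that claim concrete, here is a toy sketch (mine, purely illustrative; the particular weights and the logistic form are arbitrary assumptions, not a fitted model) in which reliance on the sensory-motor circuit rises with phonological length and falls with frequency and semantic constraint:

    import math

    def sensorimotor_demand(n_phonemes, log_freq, semantic_constraint):
        """Toy index in (0, 1): higher means more reliance on sensory guidance.
        All weights are arbitrary and for illustration only."""
        score = 0.4 * n_phonemes - 0.8 * log_freq - 1.5 * semantic_constraint
        return 1.0 / (1.0 + math.exp(-score))  # squash to (0, 1)

    # "cup": short, frequent, semantically constrained -> low demand (~0.03)
    print(sensorimotor_demand(n_phonemes=3, log_freq=4.0, semantic_constraint=1.0))
    # "no ifs, ands, or buts": long, infrequent, weak constraint -> high demand (~0.99)
    print(sensorimotor_demand(n_phonemes=14, log_freq=1.0, semantic_constraint=0.1))

On this view, damage to the circuit degrades output mainly at the high-demand end, which is just the length/frequency pattern described above.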
One problem, raised by Alfonso Caramazza in the form of a question after a talk I gave, is that sometimes conduction aphasics get stuck on the simplest of words. Case in point: in my talk, I showed an example of such an aphasic who was trying to come up with the word cup. He showed the typical conduite d'approche, "it's a tup, no it isn't... it's a top... no..." etc. Alfonso justifiably noted that conduction aphasics shouldn't have trouble with such simple words if the damaged sensory-motor circuit isn't critically needed in these cases.
So here is a sketch of a possible explanation. I'd love to hear your thoughts. There is a difference between repetition and naming: repetition shows the typical length/frequency effects, whereas naming doesn't. Here's why:
In repetition, a common word like cup can be recognized/understood and then semantic representations can drive the activation of the motor speech pattern. As the word gets more phonologically complicated or less semantically constrained, this route becomes less and less reliable and the sensory-motor system is required. This is the classic explanation invoked to explain why conduction aphasics sometimes paraphrase in their repetition, a view that has gained some recent support (Baldo et al., 2008).
In naming, the main hang-up in conduction aphasia is in trying to access the phonological word form. Since the lesion in conduction aphasia typically involves the STG, systems involved in representing word forms are likely partially compromised, leading to more frequent access failures. Further, in lexical-phonological access, simple high-frequency forms that share a lot of neighbors (cup, pup, cut, cop, cope...) will actually lead to more difficulty because of the increased competition.
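To be concrete about what I mean by neighbors: a word's phonological neighborhood is standardly defined as all words one phoneme substitution, insertion, or deletion away. Here is a minimal sketch of that definition (the mini-lexicon and rough phoneme coding are mine, purely for illustration; a real analysis would use a transcribed lexicon such as the CMU Pronouncing Dictionary):

    # Toy lexicon of phoneme sequences, coded as tuples so that multi-character
    # phoneme symbols count as single segments.
    LEXICON = {
        "cup":   ("k", "ʌ", "p"),
        "pup":   ("p", "ʌ", "p"),
        "cut":   ("k", "ʌ", "t"),
        "cop":   ("k", "ɑ", "p"),
        "cope":  ("k", "oʊ", "p"),
        "zebra": ("z", "i", "b", "r", "ə"),
    }

    def one_edit_apart(a, b):
        """True if b differs from a by exactly one phoneme substitution,
        insertion, or deletion."""
        if abs(len(a) - len(b)) > 1:
            return False
        if len(a) == len(b):
            return sum(x != y for x, y in zip(a, b)) == 1
        short, long_ = (a, b) if len(a) < len(b) else (b, a)
        return any(short == long_[:i] + long_[i + 1:] for i in range(len(long_)))

    def neighbors(word):
        """All other lexical entries one phoneme edit away from `word`."""
        target = LEXICON[word]
        return [w for w, p in LEXICON.items() if w != word and one_edit_apart(target, p)]

    print(neighbors("cup"))    # ['pup', 'cut', 'cop', 'cope'] -- dense neighborhood
    print(neighbors("zebra"))  # [] -- sparse neighborhood

Even in this tiny lexicon, cup has four close competitors while zebra has none, and that asymmetry is what the competition story turns on.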
References
Baldo, J. V., Klostermann, E. C., & Dronkers, N. F. (2008). It's either a cook or a baker: Patients with conduction aphasia get the gist but lose the trace. Brain and Language, 105, 134-140.
Graves, W. W., Grabowski, T. J., Mehta, S., & Gupta, P. (2008). The left posterior superior temporal gyrus participates specifically in accessing lexical phonology. Journal of Cognitive Neuroscience, 20(9), 1698-1710. DOI: 10.1162/jocn.2008.20113
Hickok, G., Buchsbaum, B., Humphries, C., & Muftuler, T. (2003). Auditory-motor interaction revealed by fMRI: Speech, music, and working memory in area Spt. Journal of Cognitive Neuroscience, 15, 673-682.
Okada, K., Smith, K. R., Humphries, C., & Hickok, G. (2003). Word length modulates neural activity in auditory cortex during covert object naming. NeuroReport, 14, 2323-2326.
Strub, R. L., & Gardner, H. (1974). The repetition defect in conduction aphasia: Mnestic or linguistic? Brain and Language, 1, 241-255.
7 comments:
Hi Greg,
Long time reader, first time commenter! Thanks for keeping up this blog.
Two points:
"Further, in lexical-phonological access, simple high-frequency forms that share a lot of neighbors (cup, pup, cut, cop, cope...) will actually lead to more difficulty because of the increased competition."
I think the literature suggests this isn't the case. A number of studies with neurologically intact individuals have shown that in production, high-density words have shorter naming latencies (Baus, Costa, & Carreiras, 2008; Vitevitch, 2002; Vitevitch, Armbrüster, & Chu, 2004) and are less susceptible to speech errors (Stemberger, 2004; Vitevitch, 1997, 2002).
Similar results have been found in aphasic speech errors (Gordon, 2002). In recent work with Brenda Rapp and Jill Folk, I've examined cases of impairment to lexical-phonological as well as lexical-orthographic processes (you can see a poster reporting some of the results here). In these cases we've found that high-density words are more accurate than low-density words.
"In repetition, a common word like cup can be recognized/understood and then semantic representations can drive the activation of the motor speech pattern. As the word gets more phonologically complicated or less semantically constrained, this route becomes less and less reliable and the sensory-motor system is required."
I'm not sure if I can square this with some of our own results. In Goldrick & Rapp (2007) we discuss an individual with a lexical phonological deficit. His performance is not affected by phonological complexity (but a person with a post-lexical deficit does show such an effect). This suggests to me that the semantic route is not sensitive to phonological complexity (or at least the post-perceptual components of this route are not). Semantic constraint, though, seems quite plausible.
References
Baus, C., Costa, A., & Carreiras, M. (2008). Neighbourhood density and frequency effects in speech production: A case for interactivity. Language and Cognitive Processes, 23, 866-888.
Goldrick, M., & Rapp, B. (2007). Lexical and post-lexical phonological representations in spoken production. Cognition, 102, 219-260.
Gordon, J. K. (2002). Phonological neighborhood effects in aphasic speech errors: Spontaneous and structured contexts. Brain and Language, 82, 113-145.
Stemberger, J. P. (2004). Neighbourhood effects on error rates in speech production. Brain and Language, 90, 413-422.
Vitevitch, M. S. (1997). The neighborhood characteristics of malapropisms. Language and Speech, 40, 211-228.
Vitevitch, M. S. (2002). The influence of phonological similarity neighborhoods on speech production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 28, 735-747.
Vitevitch, M. S., Armbrüster, J., & Chu, S. (2004). Sublexical and lexical representations in speech production: Effects of phonotactic probability and onset density. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 514-529.
Matt, thanks much for your comments and all the references. Did your study use just conduction aphasics, or was it a mixture of various clinical subtypes? This may be important.
I know the typical stance is that dense neighborhoods = faster production response times, but I'm not entirely convinced we understand the nature of these effects yet. For example, the Baus et al. paper you mentioned is a response to a previous paper by Vitevitch and Stamer (2006), which found inhibition in naming for dense neighborhoods in Spanish. We've played with these effects as well and have failed to replicate the typical density effect in English, finding instead, in two separate studies, a robust inhibition in naming for dense neighborhoods (Okada & Hickok, unpublished data).
Something is going on with these density effects for sure, and they may help us understand what is going wrong in lexical access and repetition in conduction aphasia, but I think we still have a ways to go.
The 'classical' neuropsychology literature -- extending from the 1970s to the 1990s, without the benefit of imaging techniques -- shows conduction aphasia to be characterized by a failure to show the modality effect in serial short-term memory. That is, auditory-verbal serial recall is usually superior to visual-verbal serial recall -- the advantage is restricted to the last few items in the list, the so-called 'recency' portion -- but in conduction aphasia this auditory superiority disappears. Usually this has been interpreted as a phonological store phenomenon, but there are several good reasons -- primarily from studies of suffix effects -- for supposing that auditory recency is an acoustic phenomenon (that is, you would get the same effects with non-verbal sequences, though there is very little convincing data on this point).
The acoustic nature of the modality effect is discussed in several of our papers, but particularly these two:
Jones, D. M., Macken, W. J., & Nicholls, A. (2004). The phonological store of working memory: Is it phonological and is it a store? Journal of Experimental Psychology: Learning, Memory & Cognition, 30, 656-674.
Nicholls, A., & Jones, D. M. (2002). Capturing the suffix: Cognitive streaming in immediate serial recall. Journal of Experimental Psychology: Learning, Memory & Cognition, 28, 12-28.
We (Bill Macken & I) have just reviewed the early neuropsychology literature on conduction aphasia, and conclude that an impairment of higher level auditory processing is just as plausible an explanation as a phonological deficit.
Good to hear from you, Dylan. I would love to have a look at your review. Is a manuscript in the works?
The relation between conduction aphasia and working memory deficits is an important issue that isn't fully resolved. My view is that there is no such thing as a dedicated phonological "store" (a buffer that is separable from phonological processing systems) and that verbal working memory deficits result from damage to some of the same systems that produce conduction aphasia. The one puzzle is the handful of patients who reportedly have severely reduced spans but normal speech production (i.e., they don't show the phonemic paraphasias typical of conduction aphasia). I think there are ways of explaining this observation, but I haven't taken the time to work it out fully yet.
Hey Greg,
Our report is a case series. The individual with a lexical deficit in spoken production in Goldrick & Rapp (2007) would probably not be classified as a conduction aphasic, as his repetition is nearly 100% correct. As for the post-lexical case, it depends on your criteria--she was equally impaired in repetition and naming. The Gordon (2002) paper used a relatively unselected group.
The Vitevitch & Stamer result is interesting. I think many of these inconsistent results relate to a poor definition of neighborhood... but that's another paper (in submission ;) ).
Hi everyone,
Thank you very much for this very nice, exciting forum!
Howard & Nickels (2005, Cognitive Neuropsychology, 22, 42-77) reported on two individuals with extremely well preserved phonological processing (both input and output): they were good at discriminating minimal pairs three syllables in length (even with a delay). In contrast, they were impaired in the repetition of single-syllable nonwords.
I think these results suggest an independent short-term buffer.
In addition, someone (I forget who) argued that repeating lists like "dog pen dog zebra dog" is particularly difficult for an STM model that consists of segmental and lexical long-term representations without additional buffers.
I am talking about the "repetition (STM) variant" of conduction aphasia, though. I would be quite interested in the Macken/Jones review.
Best wishes
Tobias Bormann, Freiburg
I'm loving the feedback on this post. Thanks folks.
Tobias, thanks for pointing out the interesting cases reported by Howard & Nickels. I definitely need to have a closer look at that paper. I'm still not convinced, though, that there is an independent STM buffer. This may be more of a "trust my gut" kind of view at this point, but I think it is worth exploring.
So is there any other explanation for a patient with preserved "phonological processing" and impaired single-syllable nonword repetition? Possibly. One assumption often made in a typical cognitive psychology (boxes-and-arrows) approach is that a given computational system, say "phonological processing," is confined to one box. (OK, maybe two boxes, one for input and one for output.) In the present context, if "phonological processing" (for both input and output) is well preserved, yet nonword repetition is impaired, nonwords must be processed/represented outside the "phonological processing" system. Hence, the phonological buffer.
But what if only part of the phonological system was critical for nonword processing? Specifically, suppose that "phonological processing" was supported by the left and right STG, but only the left portion of the system supported nonword processing. Maybe this is because you need higher temporal resolution to process/represent nonwords (i.e., segment level processing) than words, and this higher resolution system is in the left hemisphere. Or maybe it is the left hemisphere system that is uniquely capable of interfacing this kind of info with frontal motor systems. Whatever the reason, if such a story were true, then damage to the left hemisphere "phonological system" may disproportionately affect nonword processing, leaving other forms of phonological processing relatively intact.
I'm not suggesting this as THE explanation, only that we need to consider scenarios where a process is broken up into multiple computational boxes OR can be approached using a variety of computational systems (multiple processing routes).
Btw, is there good evidence for the repetition vs. reproduction subtypes of conduction aphasia? I recall reading recently that this distinction may not hold up; it might have been in the paper by Baldo and Dronkers (among others)...