Wednesday, September 29, 2010

Disconnection between phonological input and output codes

Neuropsychology is not dead.

I just read an interesting case study in the traditional neuropsych style: a detailed behavioral work-up of a single stroke patient and an extended discussion of what the findings mean for models of language processing. I like it. I think we can still learn a lot from this sort of investigation.

The paper, by Jacquemot, Dupoux, and Bachoud-Lévi (2007), reports on a patient, F.A., who suffered a left temporo-parietal stroke with a language profile typical of conduction aphasia: good comprehension, fluent production with occasional paraphasic errors and word-finding problems, and a significant deficit in repetition.

F.A. was administered a battery of tests including:

Syllable discrimination, minimal pair AX design. Performance = normal.

Auditory word-to-picture matching with semantic, phonological, and unrelated foils. Performance = normal.

Picture naming involving words of various length/frequency. Performance = "slightly impaired": 84% accuracy compared to 99.7% for controls, with a marginal length effect for low-frequency words but not high-frequency words.

Word repetition involving words of various length/frequency. Performance = mild/moderately impaired: 82.3% correct (cf. 99.7% for controls), with significant length and frequency effects.

Non-word repetition with items of various length and high vs. low neighborhood density. Performance = SEVERELY impaired: 35.4% correct (cf. 99.6% for controls), with no length or density effects. Errors were predominantly phonemic.

What does this mean? Speech recognition systems appear to be intact (normal syllable discrimination), speech production systems are at least partially intact (only moderate deficits on picture naming and word repetition), but the link between the two systems is severely damaged (very poor nonword repetition). Non-word repetition is a particularly sensitive measure of the link between the perception and production systems because you have to use the lower-level sensory ("input lexicon") to motor ("output lexicon") route to perform the task; you can't circumvent this route by using the higher-order sensory -> concept -> motor route, which is presumably the mechanism that allows for substantially better word repetition performance.
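To make that logic concrete, here is a minimal toy sketch (mine, not the authors' model; the lexicon and the lesion flag are invented for illustration) showing why a lesioned direct input-to-output link spares word repetition but devastates nonword repetition:

```python
# Toy sketch (not the authors' implementation): a dual-route account of repetition.
# The lexicon and the lesion flag below are invented for illustration.

LEXICON = {"cat", "dog", "table"}      # hypothetical stored word forms
DIRECT_LINK_INTACT = False             # F.A.-like lesion: direct input->output link damaged

def repeat(heard_form):
    """Return the spoken form, or None if repetition fails."""
    if DIRECT_LINK_INTACT:
        # Sublexical route: phonological input code -> phonological output code
        return heard_form
    if heard_form in LEXICON:
        # Lexical-semantic route: input lexicon -> concept -> output lexicon
        return heard_form
    return None                        # nonword, and the direct route is unavailable

for item in ["cat", "blick"]:
    print(item, "->", repeat(item))
# cat -> cat      (word repetition largely spared via the semantic route)
# blick -> None   (nonword repetition fails without the direct link)
```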

As a further test, the authors assessed word and nonword reading. On this task F.A. showed no significant difference between words and nonwords and performed reasonably well on both (91.7% and 83.3% correct, respectively). Maybe with more power a difference could be detected, but clearly the word/nonword gap is nowhere near as dramatic as with the auditorily presented stimuli in the repetition task. To explain the good performance in reading nonwords, we must assume that written forms can access the "output lexicon" (motor speech systems) rather directly. This is reasonable: for example, think back to what you know about the phonological loop -- visual word forms seem to be able to gain access to this system via the articulatory mechanism. So again, this result suggests that the motor speech system is relatively intact, with damage primarily to the connection between the sensory and motor systems.
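Continuing the toy sketch above, reading can be modeled as assembling output phonology straight from orthography, so it never crosses the damaged auditory input-to-output link; the spelling-to-sound rules below are made up purely for illustration:

```python
# Extending the toy sketch: written input is assumed to map onto output phonology
# via grapheme-to-phoneme conversion, bypassing the damaged auditory input->output
# link. The rules here are invented just to handle the example nonword "blick".

RULES = [("ck", "k"), ("bl", "bl"), ("i", "I")]   # hypothetical spelling-to-sound rules

def read_nonword(spelling):
    """Assemble a pronunciation from spelling alone (sketch, not a real G2P system)."""
    output, remainder = "", spelling
    while remainder:
        for grapheme, phoneme in RULES:
            if remainder.startswith(grapheme):
                output += phoneme
                remainder = remainder[len(grapheme):]
                break
        else:
            output += remainder[0]   # fall back to copying the letter
            remainder = remainder[1:]
    return output

print(read_nonword("blick"))   # -> "blIk": output phonology reached without
                               #    ever using the lesioned input->output link
```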

Here is the model proposed by Jacquemot et al. as an explanation for F.A.'s performance. (Note: they believe in bidirectional sensory-motor connections, but only show the direction of the damaged link; see below):



For those of you schooled in traditional aphasiology, this should look really familiar, as it is essentially the Wernicke-Lichtheim model (note: I use the depiction that Lichtheim actually subscribed to rather than the more commonly used "house-model" depiction, which he rejected on the basis of its directional arrows; this depiction also accurately reflects Wernicke and Lichtheim's belief in distributed conceptual representations):



The A-M disconnection in the Wernicke-Lichtheim model resulted in conduction aphasia (good comprehension, errors in otherwise fluent speech production), and F.A. certainly fit this profile clinically. It's amazing how right Wernicke was about so many things -- more on this in a subsequent post. In terms of more modern models, I've argued previously that the "disconnection" is due to damage to area Spt, which we argue supports sensory-motor transformations.

The one thing I disagree with in the Jacquemot et al. paper is the claim for an asymmetry in the connections between input and output systems. Jacquemot et al. claim that while the link from perception to production is impaired, the reverse link, from production to perception, is not. They base this claim on tasks that they argue require phonological access first in the motor system and then transmission to the perceptual system. One task is written-word rhyme judgment; the other is picture-auditory-word rhyme judgment. The assumption is that (i) decoding a written word or picture into a phonological form can only be achieved on the motor side, and (ii) the rhyme judgment itself is performed on the sensory side. I'm not convinced that either of these assumptions is true.
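For concreteness, here is a sketch of the routing those tasks are assumed to require; the helper functions are hypothetical stand-ins rather than anything from the paper, and the comments flag the two assumptions I'm questioning:

```python
# Sketch of the task routing Jacquemot et al. assume when arguing that the reverse
# (output -> input) link is spared. All functions are hypothetical stand-ins.

def retrieve_output_phonology(item):
    # Contested assumption (i): a picture or written word can only be
    # converted into a phonological form on the output/motor side.
    return item.lower()                    # stand-in for lexical retrieval

def transfer_output_to_input(out_phon):
    # The reverse (production -> perception) link whose integrity the tasks test.
    return out_phon

def rhymes(a, b):
    # Contested assumption (ii): the rhyme comparison itself is performed on
    # the sensory/input side. Toy criterion: shared final two letters.
    return a[-2:] == b[-2:]

def rhyme_judgment(picture_or_written_word, auditory_word):
    in_phon = transfer_output_to_input(retrieve_output_phonology(picture_or_written_word))
    return rhymes(in_phon, auditory_word)

print(rhyme_judgment("CAT", "hat"))   # True under these toy rules
```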

Jacquemot, C., Dupoux, E., & Bachoud-Lévi, A.-C. (2007). Breaking the mirror: Asymmetrical disconnection between the phonological input and output codes. Cognitive Neuropsychology, 24(1), 3-22. PMID: 18416481

1 comment:

Rob Keery said...

The article discussed in this post is free to read online in its entirety.

Click here to go straight through to the full text of Jacquemot, C., Dupoux, E., & Bachoud-Lévi, A.-C. (2007), Breaking the mirror: Asymmetrical disconnection between the phonological input and output codes, 24(1), 3-22.