Friday, June 25, 2010

How does learning to read affect speech perception?

Sigh... It depends on what you mean by "speech perception" (still).

I just read, with much anticipation, a paper in the current issue of J. Neuroscience by Pattamadilok et al. titled "How Does Learning to Read Affect Speech Perception?" I was really excited because, as I've pointed out before, there is evidence indicating that the ability to perform certain "speech perception" tasks (e.g., syllable discrimination/identification) seems to depend on the ability to read. Assuming that illiterates can nonetheless understand spoken language (after all, we've been doing it for hundreds of thousands of years), such a finding seems to indicate that these "speech perception" tasks are not really measuring speech perception as it is used in normal language processing.

I was hoping this new paper was going to drive home this point, but instead, although the authors seem sensitive to these task issues, the report does more to perpetuate the confusion about speech perception and "phonological processing" than it does to clean things up. Consistent with the field in general, the Pattamadilok et al. report uses a range of terms (not always defined) that a reader may take to mean the same thing (speech perception, speech processing, phonological representations, phonological processing, phono-articulatory patterns), and it employs or refers to an array of tasks (lexical decision, rhyme judgment, phonological awareness), none of which was probably ever performed while the human speech perception system was evolving.

Why is this a problem? (Don't worry, I'll get to the actual study in a minute.) Because the title uses the term "speech perception," which implies to most readers that the reported research is fundamentally about our ability to perceive the speech sounds that allow us to understand spoken language (its evolutionarily relevant function), and about how learning to read affects this basic perceptual function. But the paper doesn't assess speech perception in this more fundamental sense; instead it assesses the ability to decide whether an acoustic sequence is a word or not, and the ability to decide whether two words rhyme. Further, because "speech perception" was effectively operationalized in this way, they end up assessing a brain region that has not been implicated in the more basic speech perception functions. So the title is misleading. It should be: How Does Learning to Read Affect the Ability to Decide Whether a Sequence of Sounds Is a Word or Not?

So what did they do? In short, they used TMS to probe the brain regions underlying the orthographic consistency effect: listeners are faster to judge an auditorily presented word as a word (auditory lexical decision) when the word's rime has only one possible spelling (e.g., must) than when the word's rime has many possible spellings (e.g., break). In other words, a word's spelling affects the "processing" (operationalized as lexical decision) of spoken words. They found that stimulation of the supramarginal gyrus (SMG), "an area involved in phonological processing" (p. 8435 -- notice the term speech perception was not used), abolished the orthographic consistency effect, whereas stimulation of an orthographic area in the ventral occipital cortex did not abolish the effect.
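To make the logic of the measure concrete, here is a minimal sketch in Python of how a consistency effect like this is computed and what "abolishing" it means. The reaction times, condition labels, and item counts are all made up for illustration; this is not the paper's data or analysis pipeline.

```python
# Toy illustration of the orthographic consistency effect in auditory
# lexical decision (all numbers are invented, not from the paper).
import statistics

# Hypothetical RTs (ms) per TMS condition, split by whether the spoken
# word's rime has one possible spelling (consistent) or many (inconsistent).
rts = {
    "no_tms":  {"consistent": [620, 605, 633, 611], "inconsistent": [655, 668, 649, 672]},
    "smg_tms": {"consistent": [648, 652, 640, 661], "inconsistent": [650, 646, 659, 655]},
    "vot_tms": {"consistent": [625, 617, 630, 622], "inconsistent": [661, 670, 658, 666]},
}

def consistency_effect(condition: str) -> float:
    """Consistency effect = mean RT(inconsistent) - mean RT(consistent)."""
    return (statistics.mean(rts[condition]["inconsistent"])
            - statistics.mean(rts[condition]["consistent"]))

for condition in rts:
    print(f"{condition}: consistency effect = {consistency_effect(condition):.0f} ms")

# On the paper's logic, SMG stimulation "abolishes" the effect (the
# difference shrinks toward 0 ms), while stimulation of the ventral
# occipital ("vot_tms" here) site leaves the effect intact.
```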

They conclude, "...these findings provide strong evidence that 'orthographic' influences in speech perception arise at a phonological, rather than orthographic, level." (p. 8441).

The main problem I have with the study, besides the terminological issues, is that the SMG target sites were defined functionally, in a pre-test TMS study, as regions that, when stimulated, caused deficits in making rhyme judgments on visually presented word pairs. Rhyme judgments (and similar phonological awareness tasks) are exactly the kind of abilities that have been related to reading development. So essentially what they've done is select areas that are involved in reading skills and show that they are involved in another reading-related effect. This strikes me as circular.

Pattamadilok et al. note that the SMG isn't part of the standardly identified speech perception network (which centers on the STG) and end up explaining its role in "phonological processing" via its link to the articulatory system and phonological short-term memory, which is probably correct: "we hypothesize that the PMv-SMG circuit plays an integral role in representing and processing representations for phono-articulatory patterns that contribute to 'phonological processing.'" (p. 8441) [their quotes -- again, notice they didn't use the term "speech perception"]. If by "phonological processing" they mean the ability to make lexical decisions, I wouldn't disagree; I just wish they had put that in the title.

Pattamadilok, C., Knierim, I. N., Kawabata Duncan, K. J., & Devlin, J. T. (2010). How does learning to read affect speech perception? The Journal of Neuroscience, 30(25), 8435-8444. PMID: 20573891
