I finally mustered the courage (i.e., sufficient control over my blood pressure) to read Pulvermüller & Fadiga's recent (2010) review paper in Nature Reviews Neuroscience. But I don't want to talk about their paper -- yet. I want to discuss a paper they cite. Here is the context in which they cite it: P&F are, of course, arguing for the importance of the motor system in receptive language. After arguing correctly that sensory and motor aspects of speech must interact, and proposing (controversially) that this interaction is important not only for production but also for perception/understanding, they write:
We acknowledge that, apart from action-perception learning, the human brain also supports the purely perceptual learning of small vocabularies of word forms in the absence of articulation, but note that monkeys also exhibit this type of perceptual learning. Notably, children with severe neurological motor deficits that affected articulation had reduced auditory vocabularies -- that is, they understood fewer words than children with similar deficits that did not affect articulation [Bishop et al. 1990] -- a finding consistent with the importance of motor links for vocabulary learning. -Pulvermuller & Fadiga, 2010, pp. 352-353
I was not aware of the Bishop et al. paper (embarrassingly), so I had a look. I'm glad I did, because it shows (i) that the ability to produce speech does not affect the ability to perceive speech sounds, and (ii) that -- let me say it again -- TASK MATTERS.
Bishop, Brown, & Robson studied 48 10-18 year olds, all with cerebral palsy. 12 were congenitally anarthric (A), "never having been able to produce articulate speech"; 12 were severely dysarthric (D), "with labored, and often unintelligible, speech"; and 24 were control (C) subjects with cerebral palsy but with normal speech. The controls are critical because cerebral palsy is associated with a general lowering of intellectual ability. Thus, to a first approximation, group differences can be attributed to differences in motor speech control. (This is not entirely true because the anarthric patients in general had more severe motor problems -- e.g., they were nonambulatory, unlike most control subjects -- which, as the authors point out, could affect health and learning generally.)
In a first set of experiments subjects were tested on
1. a test of non-verbal intelligence, Raven's Matrices, to ensure good matching between groups
2. a phoneme discrimination task (yes-no, nonword syllable discrimination using minimal pair phonemic contrasts) -- Yes they used d'! Woohoo!
3. a receptive vocabulary task (British Picture Vocabulary Scale, similar to the Peabody scale)
4. a test for receptive grammar (TROG), a sentence-picture matching test.
1. Groups did not differ on the non-verbal test (they are reasonably matched)
2. Speech impaired groups (anarthric and dysarthric) performed worse than controls on the phoneme discrimination test (d' = 1.6, 1.5, ~2.5, for A, D, and C groups respectively -- there were actually two C groups that I've combined here). No difference between the speech impaired groups.
3. Vocabulary was reduced in the speech impaired groups relative to controls. Vocabulary age equivalents were: A, 8:0; D, 8:5; C, ~10. No diff between the speech impaired groups.
4. No differences between any of the groups on the receptive grammar test.
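For readers unfamiliar with d', the sensitivity values above come from signal detection theory: d' is the difference between the z-transformed hit rate and false-alarm rate, so d' = 0 is chance and higher values mean better discrimination. A minimal sketch in Python, using only the standard library (the rates below are made up for illustration, not Bishop et al.'s data):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# A listener who responds "different" on 85% of different trials
# and 15% of same trials:
print(round(d_prime(0.85, 0.15), 2))  # → 2.07
```

The advantage over raw percent correct is that d' separates true sensitivity from response bias (a tendency to say "same" or "different" regardless of the stimulus), which is why it's worth cheering when a study reports it.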
What does this mean? It suggests that the ability to speak indeed affects speech sound discrimination and is associated with vocabulary reduction (although an 8-year-old's vocabulary is probably better than a monkey's, cf. the P&F quote above), but lack of motor speech does not impair receptive grammar. The latter finding is relevant (i.e., contradictory) to another of P&F's claims, but won't be discussed here.
Before all you motor theory/mirror neuron enthusiasts start celebrating, there are two important caveats regarding the discrimination test. One is the fact that despite a complete lack of speech development, anarthric patients are nonetheless able to discriminate minimal pair phonemic contrasts better than chance (remember, the discrimination threshold for d' measures is 1.0), have receptive vocabularies that afford everyday communication, and have relatively good receptive grammar skills. Therefore, motor speech ability is not necessary for basic receptive speech competence.
The other caveat is Bishop et al.'s second experiment involving the same population. They worried that the nonword syllable discrimination task may unnecessarily tax phonological working memory, which is dependent on motor articulatory ability, so they used another task: subjects were presented with a picture (e.g., a boy) and then a spoken syllable (e.g., "boy" or "voy"); they were asked to decide whether the syllable correctly named the picture or whether the syllable was incorrectly pronounced and therefore did not match. The matches and mismatches represented minimal pairs. A standard syllable discrimination task (boy-voy, same or different?) was also administered for comparison.
1. The standard discrimination task replicated what was found in Experiment 1: speech impaired subjects performed worse than controls (d'=1.72 vs. 2.24, respectively; A & D were pooled in this study).
2. The picture-syllable judgment task, which involved the *same* phonemic contrasts, came out differently: no difference between speech impaired and control subjects (d'=2.52 vs. 2.59 respectively).
Bishop et al. summarize the findings nicely:
The lack of impairment on the word [picture-syllable] judgment task rules out the possibility that the speech-impaired persons are operating with a reduced system of phoneme contrasts. An alternative explanation in terms of short-term memory seems the most plausible.... It may be that if one has to retain novel, meaningless phonological information then the process is facilitated by overtly or covertly generating an articulatory representation. Indeed, some of the normal speakers in our study were observed repeating nonword pairs to themselves in the same-different task before making a judgment. This strategy would be difficult or impossible for those with dysarthria or anarthria.... p. 218.
So when measured properly the ability to perceive speech is unimpaired, relative to controls, in individuals who never developed the ability to speak. The motor speech system is not necessary for speech perception.
But what about vocabulary? Is motor speech necessary for vocabulary development? It depends on what you mean by necessary. The Bishop et al. study showed that the vocabulary of an 8-year-old is achievable -- which is not bad considering that the control group achieved the average vocabulary of a 10-year-old -- but still below par. Why might this be?
Drawing on the work of Gathercole & Baddeley (1989), which showed a correlation between vocabulary development and phonological STM, Bishop et al. suggest that it has to do with phonological short-term memory. Learning new words requires the retention of sequences of novel phoneme strings that can be associated with meanings. If the ability to internally rehearse such strings is impaired, one might be consistently behind the curve in vocabulary development, having to rely more on external repeated exposure to new vocabulary items.
Thus, the influence of the motor speech system on receptive language ability all boils down to its role in phonological short-term memory. Basic perceptual abilities are largely unaffected by even severe disruption of the motor speech system.
Bishop, D. V., Brown, B. B., & Robson, J. (1990). The relationship between phoneme discrimination, speech production, and language comprehension in cerebral-palsied individuals. Journal of Speech and Hearing Research, 33(2), 210-219. PMID: 2359262
Gathercole, S., & Baddeley, A. D. (1989). Evaluation of the role of phonological STM in the development of vocabulary in children: A longitudinal study. Journal of Memory and Language, 28(2), 200-213. DOI: 10.1016/0749-596X(89)90044-2
Pulvermüller, F., & Fadiga, L. (2010). Active perception: sensorimotor circuits as a cortical basis for language. Nature Reviews Neuroscience, 11(5), 351-360. PMID: 20383203
Well, for those linguists who accept an innatist framework (many of whom also believe in MT), this discussion may be innocuous, right? If most phonological primitives (even articulatory ones) are "hard-wired", they don't have to be learned through actual production, and so it is only natural that speech impairments would not compromise perception of, say, features. -Teo.
Yes, this is exactly how Liberman and Mattingly squirmed out of this problem when faced with data of exactly this sort. I have a couple of problems with this view though.
One is the fact that chinchillas and quail are surprisingly good at discriminating speech sounds; you wouldn't argue that these critters have innately specified phonological primitives, articulatory or perceptual, I assume. So you have to admit that you don't even need the genetic capacity for a motor speech system to perceive speech.
The other problem is that in order to explain the facts and maintain a form of MT you have to assume that the representations are not motor commands but codes for much more abstract gestures. This is because complete destruction or acute deactivation of the motor speech system does not wipe out speech perception. L&M and other modern day descendants of MT, such as Carol Fowler, have taken this route. In Carol's case, if I read her correctly, she doesn't even consider herself a motor theorist.
So what are these abstract gestures? I propose that they are the sensory targets or goals of the actions, i.e., acoustic sounds. So we are back to an auditory theory of speech perception.
Nice to see work I did 20 yr ago being resurrected. Thanks for giving such a clear exposition of the key findings.
The other argument (apart from innateness) that I've seen used against this is just to say that the congenitally speechless children are not relevant for what happens in normal development, because they develop alternative pathways. So the idea would be that if you have motor correlates you will use them, but if you don't, you can find other ways around the speech perception problem.
When I started out, people were trying to find *the* specific set of cues used by humans in speech perception, but I suspect the reality is there are multiple possible cues, auditory and articulatory, and they are given different weighting according to how useful they are to you. That might help resolve some of the discrepancies in the literature.
Hi deevybee (Dorothy Bishop, I assume)...
There's a lot of highly relevant work already in the literature that is too often ignored. Your study is a prime example. It really is a fantastic paper.
I agree completely that speech is most likely perceived using any and all cues available. There seem to be multiple routes to just about every neural function.
However, I think we need to think of the role of the motor system as a top-down modulatory source of information rather than the more typical MT-type view that it is (or can be) a fundamental mechanism for extracting information. Just like lexical or pragmatic context can fill in acoustic gaps in the speech signal, the motor system can be used to generate forward predictions (as presumably it does in speech production) that can prime certain analyses of incoming speech. This is a sensory-motor view of the process that, to my mind, is very different in spirit from the MT/mirror neuron "motor simulation" approach to "understanding" gestures.
According to deevybee:
"the reality is there are multiple possible cues, auditory and articulatory"
and visual cues (like visemes; Fischer, 1968; Sumby & Pollack, 1954).
I'm just a young student, but I agree with the "perceptuo-motor units" proposed by Schwartz in PACT (moreover, I'm French too). Concerning the mirror neuron system, maybe it could be useful in adverse conditions, like a noisy environment?
I think there are perceptuo-motor units (and I'm not even French), but I believe these are part of the sensory-motor dorsal stream not the ventral stream which is the pathway for speech recognition. However, I believe that dorsal stream information can *modulate* activity in the ventral stream. I think of it as a top-down attentional modulation.
As you have written: "The idea of auditory–motor interaction in speech is not new" (NRN, 2007); even without "motor representations".
I agree with a "modulation" in the ventral stream, maybe it can occur in the "phonological store"?
I'm a phonetician and I believe in a phonological representation, but French people also love duality!
I'm delighted that the field is open to reconsidering motor theory, and hope it finds our critique of motor theory helpful.
Massaro, D. W., & Chen, T. H. (2008). The motor theory of speech perception revisited. Psychonomic Bulletin & Review, 15(2), 453-457.
As a speech pathologist who works with many non-verbal children who communicate using alternative communication, couldn't part of the discrepancy be a result of differences in life experiences? Since children who are non-verbal tend to have more involved motor impairments globally, they would, as a group, have fewer chances to explore their environments, and more limited mobility.
Many of my students, who are quite competent communicators, have never been to their parents' offices, or the post office or many other places where children encounter novel vocabulary.
Still too much begging the question in the discussion. For example, if the phoneme doesn't exist, what then is a phonemic discrimination task?
See this paper:
Another issue has been: linguists etc. have needed a standard, static way to refer to speech in written discourse, and this has always ended up with static and discrete units. These are very nice to put in print like English /l/ as in 'like', but that might not have much of anything to do with the reality of the way the human mind perceives, stores, manipulates, and controls language for speech production.
I would need to know more about the study's subjects. Have they been subjected to extensive exposure to written English? It could be that for them learning English is, to some extent, like learning a second or foreign language. In that case, we know that L2s are often learned without much resort to motor interaction.
I'm not sure that either, as extreme situations, tells us much about the more typical ones.
Hi, I am a new student to the field of speech and language, and I am a big fan of your work! I recently read your article on "The functional neuroanatomy of language" and thought it was brilliant. I am enjoying following your research.
Respectfully, I had a question about the above article. It states, "But what about vocabulary? Is motor speech necessary for vocabulary development? The Bishop et al. study showed that a vocabulary of an 8 year old is achievable," and "Learning new words requires the retention of sequences of novel phoneme strings that can be associated with meanings. If the ability to internally rehearse such strings is impaired, one might be consistently behind the curve in vocabulary development."
I was reading a critique of Vygotsky. It said a child does not develop the ability to use inner speech or mental rehearsal until 7-8 years old. This also fits with Piaget's concrete operational stage of development (which states that 7- to 12-year-olds can think logically about things experienced, and can mentally manipulate them).
It seems contradictory to say that an 8 year old vocabulary can be achieved without articulation, but possibly some internal rehearsal, when a child developing that 8 year old vocabulary cannot internally rehearse, and therefore has to articulate for understanding. What do you think of this?
I look forward to reading more of your work!