Short answer: We don't know. Despite the title of a recent paper by Riikka Mottonen and Kate Watkins in The Journal of Neuroscience, Motor Representations of Articulators Contribute to Categorical Perception of Speech Sounds, the data reported are, unfortunately, uninterpretable.
Here's the long answer: Mottonen and Watkins asked subjects to perform syllable identification (which of two syllables did you hear?) and syllable discrimination (are the two syllables you just heard the same or different?). The stimuli were place of articulation or voice onset time continua, and the design followed a standard categorical perception (CP) experimental design. These tasks were performed either before or after rTMS was applied to the lip area of left motor cortex. I will focus here on the discrimination task. The critical condition was when subjects discriminated a lip-related sound (ba or pa) from a non-lip-related sound (da or ga). They report that discrimination across a category boundary was less accurate after TMS to motor lip areas than before. Specifically, for the ba-da stimuli, the mean proportion of "different" responses to cross-category (i.e., physically different) syllables was .73 pre-TMS and .58 post-TMS; similar findings are reported for the pa-ta stimuli. Discriminations that did not involve lip-related sounds (ka-ga or da-ga) were not affected by stimulation, nor was lip-related speech sound discrimination affected by motor hand area stimulation.
So how is this uninterpretable? At the risk of becoming a methods curmudgeon (it's probably too late), this paper, like many I've discussed here, used the wrong analysis. Instead of using an unbiased measure, d-prime, they used a biased measure, the proportion of "different" responses. This renders the data uninterpretable because we can't actually tell whether TMS affected the perception of the speech sounds (an interesting possibility) or the subjects' response biases (a less interesting possibility). It's not the authors' fault though. Speech studies of this sort have used the wrong measure for decades. Does this make the entire field uninterpretable? Well... I'll let you draw your own conclusions.
To illustrate the problem, consider the graph below, which plots d-prime values as a function of a range of possible hit rates (hits in this study would be correctly identifying same trials as same) for two constant false alarm (FA) rates corresponding to the pre- and post-TMS values reported by Mottonen and Watkins. FA was calculated simply by subtracting the proportion of "different" responses from 1; i.e., the proportion of "different" responses to different trials is the proportion of correct rejections (the two stimuli are not the same), and the FA rate is just the complement of correct rejections. d-prime is a bias-corrected measure of discrimination where 0 = chance and anything over 3 is real good.
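For readers who want to see the arithmetic, here is a minimal sketch of the calculation in Python. It uses the simple yes-no formula, d' = z(hits) - z(FAs); the graph itself may be based on a same/different model from lookup tables, which gives larger absolute values, but the logic is the same.

```python
# Minimal sketch of the d-prime calculation (simple yes-no model).
# Framing: "same" trials are the signal trials, so hits = P("same" | same pair)
# and false alarms = P("same" | different pair).
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    """d' = z(hits) - z(false alarms), where z is the inverse normal CDF."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# FA rates derived from the reported proportions of "different" responses
# to cross-category ba-da pairs: 1 - .73 (pre-TMS) and 1 - .58 (post-TMS).
fa_pre, fa_post = 1 - 0.73, 1 - 0.58

# The hit rate -- correct "same" responses to same trials -- is the missing
# half of the data; without it, d' is undetermined.
for hit in (0.55, 0.70, 0.85, 0.99):
    print(f"hit={hit:.2f}  d'(pre)={d_prime(hit, fa_pre):.2f}  "
          f"d'(post)={d_prime(hit, fa_post):.2f}")
```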
Notice from the graph that depending on the proportion of hits, the d-prime value can range anywhere from 0 to almost 5! But this is true for any constant FA rate, of course, because discrimination ability is only meaningful in the relation between hits and FAs. Think of it this way. If you have a FA rate of 0 (perfect performance on different trials) that seems fantastic, but if your hit rate is also 0, well then you clearly just like to say "different" all the time no matter what you hear. A FA rate of 0 is only meaningful if you have a higher-than-0 hit rate, and the higher the hit rate, the higher your d-prime score.
Notice too that because a given FA value can result in any number of d-prime values depending on the hit rate, there is virtually complete overlap in d-prime values for the two curves (except at the upper end). This means that TMS could have had no effect whatsoever on the ability of subjects to discriminate lip-related speech sounds. For example, if the hit rate for pre-TMS was .7 and the hit rate for post-TMS was .8, then the d-prime in both cases is approximately 2.5 -- no difference. It is also possible that discrimination was worse prior to TMS than after TMS (e.g., if hit rates were .6 and .8, respectively)!
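The same point in code, again with the simple yes-no formula (so the absolute d' values come out lower than the values read off the graph, but the ordering is what matters): the two FA rates are compatible with no change, a decline, or even an improvement after TMS, depending entirely on the unreported hit rates.

```python
# Sketch: the same pair of FA rates is compatible with equal, worse, or
# better post-TMS discrimination, depending on the (unreported) hit rates.
from scipy.stats import norm

def d_prime(hit, fa):
    return norm.ppf(hit) - norm.ppf(fa)

fa_pre, fa_post = 1 - 0.73, 1 - 0.58   # FA = P("same" | different pair)

scenarios = {
    "no change  ": (0.70, 0.80),   # d' roughly equal pre and post
    "post worse ": (0.80, 0.70),
    "post better": (0.60, 0.80),   # the reversal case mentioned above
}
for label, (hit_pre, hit_post) in scenarios.items():
    print(f"{label} d'(pre)={d_prime(hit_pre, fa_pre):.2f}  "
          f"d'(post)={d_prime(hit_post, fa_post):.2f}")
```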
I'm not saying that the study is necessarily wrong, or that TMS didn't affect the perception of lip-related speech sounds. What I'm saying is we can't tell from these data. Maybe once they calculate d-prime the data will show an even more impressive effect. But since we only have half of the information, there is no way to know whether perception was affected: the findings are uninterpretable.
This study is potentially really important and very interesting. As such, I would like to urge the authors to redo the analysis and publish an addendum in J. Neuroscience. I certainly would like to know if it actually worked!
References
Mottonen, R., & Watkins, K. (2009). Motor representations of articulators contribute to categorical perception of speech sounds. Journal of Neuroscience, 29(31), 9819-9825. DOI: 10.1523/JNEUROSCI.6018-08.2009
Comments
What about the categorization task? I don't see how that version of the experiment and its data can be fit into an SDT framework. Sure, in the same-different task, "different" trials can be thought of as representing the "signal" trials, where "change" = "signal" and "same" trials represent no signal. (But I already find it a bit of a stretch to categorize this as "detecting a signal in noise".)
But the categorization task is symmetrical. So suppose I hear a stimulus in category A and say so; then it's a HIT. If I hear A and say B, it's a MISS. But if I hear B and say A, it's arbitrary to label this as a false alarm, or vice versa. So what about the categorization part of the experiment? If those data are analyzed with non-d' methods and don't converge with your d' analysis, then what? Those two tasks are always presented as two sides of the same coin, so they should come out with the same inferences.
(Btw, this is my first blog ever..)
Hi Aryld, the main point about the two-alternative forced-choice task (the identification task) is that it too is subject to bias. Suppose I told subjects I would pay them $100 for each correct "ba" identification. This would induce a bias toward ba responses. Even without an incentive, subjects may come with their own biases. SDT methods can correct these biases. There are d-prime look-up tables for these kinds of designs.
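As a rough sketch of what that correction looks like for a single-interval, two-category identification task (treating "ba" as the target category; the proportions below are invented purely for illustration):

```python
# Sketch: separating sensitivity (d') from bias (criterion c) in the
# identification task, treating "ba" as the target category.
# All proportions below are invented for illustration only.
from scipy.stats import norm

def sdt_stats(p_ba_given_ba, p_ba_given_da):
    z_h = norm.ppf(p_ba_given_ba)   # "hits": saying ba to ba-side stimuli
    z_f = norm.ppf(p_ba_given_da)   # "false alarms": saying ba to da-side stimuli
    d = z_h - z_f                   # sensitivity
    c = -0.5 * (z_h + z_f)          # criterion: 0 = unbiased, negative = ba bias
    return d, c

print(sdt_stats(0.90, 0.20))   # roughly unbiased observer, d' about 2.1
print(sdt_stats(0.97, 0.45))   # similar d', but a strong bias toward "ba"
```

The two hypothetical observers differ mainly in where they put their criterion, not in how well they can hear the difference.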
For CP, though, it is not clear what to count as a ba or a pa since boundaries (or biases) vary from subject to subject. I'll have to think about that more.
More on this soon.
Hi Greg,
It’s good to see a posting on this interesting and important paper.
However, I think that you’ve gone too far in the first line of your blog post:
Does stimulation of motor lip areas affect categorical perception of lip related speech sounds? Short answer: We don't know.
I’m not going to criticise you for being a methods curmudgeon. Our field needs more reviewers who read papers carefully and assess whether the science is done correctly, not just whether the right scientists are listed in the citations.
However, I think that you’re wrong to suggest that Mottonen and Watkins’ data can be dismissed based on the lack of signal detection analysis alone, for the following two reasons:
1) Even if response bias contributes to the effect observed, this doesn’t explain why the effect of response bias is somatotopically specific. Why should lip TMS alter response bias for /ba/-/da/ and /pa/-/ta/ but not for /ka/-/ga/ within the same experiment? I can’t see any way to explain this without invoking a motoric contribution to the perception of syllables containing phonemes produced using those articulators.
2) There are some practical constraints that I’m sure explain why M&W didn’t include trials containing acoustically-identical syllable pairs in their discrimination test. There is insufficient time before the rTMS effect wears off to do all the tests that one could ideally do in one single experiment. Personally, I’d be interested in the Ganong effect and perceptual learning, but I’m sure that other scientists can think of other tests that could be run.
For the present data, however, the lack of “same” syllable trials makes it difficult but not impossible to apply Signal Detection analysis. We can make the assumption that the False Alarm rate is simply the proportion of responses in which participants say “different” to within-category pairs. These data are reported in Table 2, and so based on that we can derive the following:
Lip (Expt 1a):
/ba/–/da/: d' Pre = 1.45, d' Post = 0.91
/ka/–/ga/: d' Pre = 2.18, d' Post = 2.17
Hand (Expt 1b):
/ba/–/da/: d' Pre = 1.55, d' Post = 1.99
/ka/–/ga/: d' Pre = 2.00, d' Post = 1.68
Lip (Expt 2):
/pa/–/ta/: d' Pre = 1.70, d' Post = 1.36
/da/–/ga/: d' Pre = 1.18, d' Post = 1.42
That is, d’ shows a numerical decline in the critical predicted conditions and an increase or no change in the comparison cases. M&W could run statistics on d’ values derived from their original data. However, even if this analysis were not significant, it would not alter their conclusions. It would only show that the TMS effect is not on “perceptual sensitivity” alone, but also on “response bias”. As I explained above, I don’t think that their conclusions are even slightly altered if the effect is in part due to articulator-specific effects on response bias.
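For concreteness, a calculation of this kind looks like the sketch below. Note that the framing flips relative to the graph discussed in the post: here "different" is treated as the signal response, so hits come from across-category pairs and false alarms from within-category pairs. The within-category proportion used here is a placeholder, not a value from Table 2.

```python
# Sketch of the analysis described above: hits = P("different" | across-
# category pair), false alarms = P("different" | within-category pair).
from scipy.stats import norm

def d_prime(p_diff_across, p_diff_within):
    return norm.ppf(p_diff_across) - norm.ppf(p_diff_within)

# Across-category proportions are the ba-da (lip site) values quoted in the
# post; the within-category proportion (0.25) is a placeholder, NOT Table 2.
print("pre-TMS :", round(d_prime(0.73, 0.25), 2))   # about 1.29
print("post-TMS:", round(d_prime(0.58, 0.25), 2))   # about 0.88
```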
Like you, though, I’d still be interested to see the outcome of this analysis from M&W as future studies might be interested in pulling apart perceptual and response effects.
Matt Davis
Greg,
I would like to point out that you ignored half of our data in your comment. In our paper, we reported proportions of “different responses” to both across- and within-category pairs (see Table 2). I agree with you that it is impossible to “tell whether TMS affected the perception of speech sounds or the subjects’ response biases” by analyzing the proportions of “different responses” to across-category pairs only. Therefore, we also reported responses to within-category pairs. Our conclusion that TMS affected categorical perception of speech sounds is supported by both halves of the data.
Our results show that during the discrimination task subjects gave fewer “different” responses to lip-related across-category pairs (ba-da and pa-ta) in the post-rTMS condition than in the pre-rTMS condition (see Table 2). In contrast, the proportions of “different” responses to within-category pairs (e.g., ba-ba, pa-pa) did not change (see Table 2).
The lack of change in proportions of “different” responses to within-category pairs is really important, because it gives evidence against the possibility that rTMS affected subjects’ response biases. Let’s assume that our subjects were biased to give “different” responses in the pre-rTMS condition and to give “same” responses in the post-rTMS condition. As a result of this change in biases, there should be fewer “different” responses in the post-rTMS condition to all stimulus pairs (i.e., to both across- and within-category pairs). In this case it would indeed be wrong to conclude that rTMS affected perception of speech sounds. Importantly, our results showed a decrease in proportions of “different” responses to across-category pairs but not to within-category pairs. These results are in conflict with the assumption that subjects’ response biases changed. Therefore, we concluded that rTMS affected categorical perception of speech sounds.
Best wishes,
Riikka.
Hi Greg,
If you think that TMS to lip cortex might simply be changing subjects' response bias, wouldn't you expect TMS to also change response bias for ka-ga?
SDT issues aside, my reading of this paper is that:
1) TMS to motor cortex has no real consistent effect on the identification of speech sounds (a small effect on one of two measures for expt. 1a, but none at all for expt. 2).
2) TMS to lip cortex does seem to have a real effect on discrimination performance for lip/tongue articulated speech sounds.
As you have pointed out before, discrimination doesn't necessarily equate to perception; it's possible that subjects used a rehearsal strategy to make the discrimination decision (e.g. silently repeating the pair to themselves for comparison purposes). TMS to lip cortex would presumably interfere with this strategy and therefore change accuracy rates. I suppose the EMG data recorded during the decision period might help to answer this point. More convincing to me would be if someone combined TMS with a mismatch paradigm, i.e. attempt to show that the mismatch response to lip/tongue deviants (but not VOT deviants) is attenuated after TMS to lip cortex.
Hi Riikka,
Thanks for your comment. By looking at the number of different responses at the category boundary versus within category, you can indeed conclude that TMS affected something. But my point was that you can't tell *what* it affected. It could have affected only the subjects' response bias and not the acoustic perception of the sounds.
How could stimulation affect bias only at the category boundary? Think of the boundary as an acoustically ambiguous region and the within-category region as acoustically unambiguous. Bias will have a much stronger effect under ambiguous conditions and a weak or even negligible effect under unambiguous conditions: if I'm not sure what I heard, my bias strongly influences my decision; if I'm sure of what I heard, this overrides my bias.
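A toy sketch of this point (all numbers hypothetical, and not meant as a model of the actual Table 2 rates): in a simple equal-variance Gaussian model, the same criterion shift moves the proportion of "different" responses a lot when the internal difference signal sits near the criterion, and barely at all when it sits far from it.

```python
# Toy illustration: a single criterion shift has a big effect on responses
# to an ambiguous pair and a negligible effect on an unambiguous one.
# All values are hypothetical; this is not a model of the reported data.
from scipy.stats import norm

def p_different(mu, c):
    # mu: mean internal "difference" signal; c: decision criterion
    return norm.cdf(mu - c)

c_pre, c_post = 0.0, 0.5   # hypothetical criterion shift after TMS
for label, mu in [("ambiguous (boundary) pair", 0.2),
                  ("unambiguous (clearly same) pair", -2.5)]:
    print(f"{label}: pre={p_different(mu, c_pre):.3f}  "
          f"post={p_different(mu, c_post):.3f}")
```

Whether the within-category pairs in the actual data sit far enough from the criterion for this to hold is exactly the kind of question a d-prime analysis would settle.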
So you can conclude that TMS affected performance at the boundary of a categorical perception task, but you can't conclude that it affected the perceptual discrimination. You simply can't tell from the data. Did you not collect responses to same trials? I think I missed that in reading your paper.
Hi Matt,
I don't think I've gone too far. First let me address your second point: you argue that we can infer FA rate from "different responses" in the within-category conditions. But we already know the FA rate. What we are missing is the hit rate for *category boundary* stimuli -- i.e., the ones that are the most ambiguous and therefore most subject to bias -- so this doesn't help.
More importantly, you ask how TMS could affect bias only for lip sounds, and doesn't this invoke a "motoric contribution" to syllable discrimination? Well, it suggests a motor contribution either to syllable perception or to response bias - we can't tell which.
How could motor lip areas contribute to response bias? I'm not sure. I do know that lots of things can affect bias. For example, I could have induced the same pattern of response (fewer different responses) by changing the subject's incentive only for lip-related sounds (S loses $1 for every incorrect "different response" on lip-related trials). Does this mean there is a "monetary contribution" to lip-related syllable discrimination? Presumably this manipulation doesn't affect the "perception" of lip-sounds, only the decision criteria.
So what if the "motoric contribution" to this task is to influence decision criteria rather than to modulate acoustic perception? Maybe subjects generate a motor-based prediction upon hearing the first stimulus in the pair. Because the stimulus is ambiguous, this prediction reflects their bias. This prediction may then influence the decision criteria upon hearing the second sound (or it could actually modulate the acoustic response -- we need a d-prime analysis to determine this though). Now with TMS applied, the motor prediction is disrupted and the bias is changed thus changing the response criteria and the proportion of "different responses." All of this could happen even without affecting perceptual discriminability.
This is just off-the-cuff speculation, but again, once we have the facts about whether perception or bias has been affected by TMS we can develop theories about what is going on.
Now let me throw the question back to you. Suppose a d-prime based analysis showed that discriminability didn't change following TMS. Doesn't this completely change the interpretation of this study despite the motor specificity of the effect on the proportion of different responses?
Hi Tom,
Responses to your comments below.
Tom said: "If you think that TMS to lip cortex might simply be changing subjects' response bias, wouldn't you expect TMS to also change response bias for ka - da?"
Yes, if it were a general bias, but biases could be item specific. See my response to Matt above for one possibility.
Tom said: "1) TMS to motor cotex has no real consistent effect on the identification of speech sounds (small effect on one of two measures for expt. 1a, but none at all for expt. 2)."
That is an important point. The paper really does depend on the discrimination data, which is one reason I focused on it. Without the discrimination result you have an effect that didn't replicate.
Tom said: "it's possible that subjects used a rehearsal strategy to make the discrimination decision (e.g. silently repeating the pair to themselves for comparison purposes). TMS to lip cortex would presumably interfere with this strategy and therefore change accuracy rates."
This is actually what I suspect might be happening. We need confirmation using d-prime, but my guess is the experiment worked: stimulation to lip areas decreased the ability to discriminate ambiguous speech sounds. But as you suggest, I think the TMS is interfering with articulatory coding which is used in a strategic manner: subjects hear stimulus #1 and recode it into a motor representation which is then compared against stimulus #2. TMS decreases the capacity to recode the stimuli in a motor effector specific manner (a cool effect) and performance falls off. But we still need to know what the data are before we spend too much time speculating about how to interpret them.