David Gow has published a series of papers on the cortical basis of speech perception using pretty sophisticated analytic tools that do not often get applied to the type of data we are used to.
For example, in this Cognition paper: "Articulatory mediation of speech perception: A causal analysis of multi-modal imaging data" (2009) by Gow and Segawa, Granger causality analyses are used to support the Motor Theory.
And in this NeuroImage paper: "Lexical influences on speech perception: A Granger causality analysis of MEG and EEG source estimates" by Gow, Segawa, Ahlfors, and Lin (2008), Granger causality analyses demonstrate top-down effects that appear to have a lexical origin and exert compelling effects on phonetic perception. This is a longstanding battle in spoken word recognition, and I'm pretty enthused to see new data of this type addressing the controversy.
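Before digging into the papers themselves, it helps to have a concrete feel for the core logic of a Granger test: past values of one timecourse should improve prediction of another timecourse beyond what that timecourse's own past already predicts. Here is a minimal sketch using simulated source timecourses and statsmodels. The papers use a much more sophisticated multivariate implementation over many ROIs; this is just the textbook bivariate version, and all the names and numbers below are made up for illustration:

```python
# A minimal sketch of the logic behind a Granger causality test on two
# source timecourses. Everything here is simulated -- this is the textbook
# bivariate test, not the multivariate method used in the papers.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 1000

# A simulated "SMG" source drives a simulated "STG" source at a 2-sample lag.
smg = rng.standard_normal(n)
stg = np.zeros(n)
for t in range(2, n):
    stg[t] = 0.5 * stg[t - 1] + 0.8 * smg[t - 2] + rng.standard_normal()

# Column 0 is the putatively driven series, column 1 the putative driver:
# the test asks whether past values of column 1 improve prediction of
# column 0 beyond column 0's own past.
data = np.column_stack([stg, smg])
results = grangercausalitytests(data, maxlag=4)

# F-test p-values at each lag; a small p says "smg Granger-causes stg".
for lag, res in results.items():
    fstat, pval = res[0]["ssr_ftest"][:2]
    print(f"lag {lag}: F = {fstat:.1f}, p = {pval:.3g}")
```

A small p-value on the F-test says the simulated "SMG" series carries information about the future of the simulated "STG" series over and above STG's own history. That directed-influence logic is what the papers scale up to whole networks of ROI timecourses.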
Common to much of David's recent work is a pretty compelling demonstration of the contribution of top-down factors to the analysis of the speech signal. One thing that is a little less obvious is why these top-down effects should have the supramarginal gyrus as a critical ingredient. It's my homework this week to work through these papers in sufficient detail to really get my head around them. I am already predisposed to the top-down part, but I do need to understand why the SMG would be the critical node. What's really impressive is the thoughtful integration of EEG, MEG, and MRI data.
David, if you're reading this, it would be great to get a bit of discussion going on this issue. For example, how deeply held is your commitment to the SMG? Did you guys find any data that challenge that conclusion, or are you willing to bet something substantial?
4 comments:
SMG is a big structure that, depending on how you define it, could stretch from somatosensory areas at its anterior extent to IPL (sensory-motor areas?), even to STG. So it matters a lot which part of the SMG we are talking about.
Also, in fMRI we've found that activations that localize to "SMG" are sometimes actually in the planum temporale area. This is because the anatomy of the posterior Sylvian region is variable, and the standard templates to which we warp our subjects' brains seem to misrepresent the sample in some studies. I gave an example of this in a previous post using single-subject data. Maybe I'll post another example showing group data. Would this kind of mislocalization be even more of a problem with MEG-MRI alignment?
My feeling is that the MEG-MRI alignment step in itself shouldn't add much to the mislocalization you would already get from anatomical variability in MRI alone--you're only supposed to lose a few mm of accuracy at that step.
It's a different question what kinds of errors the MEG source localization algorithm makes. I'm sure it makes a significant number(!), but I kind of doubt they're straightforwardly predictable from the areas that tend to be confusable in fMRI, since the algorithm builds in all these complicated contingencies about how the geometry causes activity to be canceled or amplified.
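To put a rough number on the "few mm" intuition, here is a toy calculation: apply a small rigid-body misalignment of the kind a coregistration error produces and see how far cortical locations actually move. The 1-degree and 2 mm error magnitudes are hypothetical round numbers, not measured values:

```python
# A toy calculation: how far do source locations move under a small
# rigid-body coregistration error? The 1-degree rotation and 2 mm
# translation are hypothetical round numbers, not measured values.
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(1)

# Fake cortical source locations in head coordinates (meters),
# scattered within roughly 9 cm of the head-coordinate origin.
src = rng.uniform(-0.09, 0.09, size=(5000, 3))

# A 1-degree rotation about a random axis plus a 2 mm translation.
axis = rng.standard_normal(3)
axis /= np.linalg.norm(axis)
rot = Rotation.from_rotvec(np.deg2rad(1.0) * axis)
shift = 0.002 * axis  # 2 mm along the same axis, for simplicity

moved = rot.apply(src) + shift
displacement_mm = 1000 * np.linalg.norm(moved - src, axis=1)
print(f"mean displacement: {displacement_mm.mean():.1f} mm, "
      f"max: {displacement_mm.max():.1f} mm")
```

On numbers like these you land in the 2-4 mm range, which is small compared to the centimeter-scale anatomical ambiguity (SMG vs. PT) that the earlier comment worries about.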
The activation we get in SMG tends to be relatively dorsal, which makes it less likely that we are measuring mislocalized PT sources. We have run the same paradigm in iEEG and also found increased activation in the SMG, although no behavioral or neural evidence of SMG influence on pSTG, for reasons that probably have to do with the patient's pathology. We don't constrain our source localization with fMRI, but people who have compared fMRI-constrained MEG with our MRI-constrained, depth-weighted, and noise-normalized MEG/EEG approach have found the two to yield comparable results (Sharon et al., 2007, NeuroImage).
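For readers who want to see what this class of estimate looks like in practice: the depth-weighted, noise-normalized approach is essentially what dSPM in MNE-Python implements. A minimal sketch on MNE's bundled sample dataset, as an illustration of the method rather than our actual analysis pipeline:

```python
# A minimal sketch of a depth-weighted, noise-normalized (dSPM) source
# estimate using MNE-Python's bundled sample dataset. This illustrates
# the class of method, not the actual pipeline from the papers.
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse

data_path = sample.data_path()
meg_dir = data_path / "MEG" / "sample"

evoked = mne.read_evokeds(meg_dir / "sample_audvis-ave.fif",
                          condition="Left Auditory", baseline=(None, 0))
noise_cov = mne.read_cov(meg_dir / "sample_audvis-cov.fif")
fwd = mne.read_forward_solution(meg_dir / "sample_audvis-meg-eeg-oct-6-fwd.fif")

# depth=0.8 applies the depth weighting; method="dSPM" does the noise
# normalization (scales the estimate by its noise sensitivity per source).
inv = make_inverse_operator(evoked.info, fwd, noise_cov,
                            loose=0.2, depth=0.8)
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="dSPM")
print(stc)  # a SourceEstimate: sources x times, in dSPM units
```

The depth weighting compensates for the bias of minimum norm estimates toward superficial sources, and the noise normalization turns the estimate into a statistical map--both relevant when the question is whether a source is genuinely dorsal SMG rather than mislocalized PT.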
The SMG question is the interesting one from our perspective. I am working on a paper right now that proposes that there are two lexica: a ventral lexicon consistent with the H&P model that mediates between phonetic and semantic representations, and a dorsal lexicon in the left SMG that mediates between phonetic and articulatory representations. Significantly, both give feedback to the STG, providing two possible normalization or top-down facilitation mechanisms.
This view would explain a number of phenomena, including dissociations between form and semantic priming effects in normals using the same materials in work by Dennis Norris and others, the existence of both parietal and temporal forms of anomia, and BOLD imaging results showing SMG sensitivity to lexical neighborhood density (cf. Prabhakaran et al., 2006, Neuropsychologia). Fadiga's lab has produced several results suggesting that premotor activation in speech perception is influenced by lexicality and word frequency, so it seems reasonable to suggest that the dorsal stream deals to some extent in lexical representations (although I suspect that there is also sublexical representation in the dorsal pathway, probably involving the angular gyrus).
Our Granger results suggest that task effects (e.g. explicit phonetic categorization versus picture matching) influence which lexicon does the work. My thought at the moment is that lexica are just means to a variety of ends, including comprehension, articulation, and normalization, and that they function almost as hidden nodes to facilitate different mappings. That is one of the reasons that I really like your description of the pMTG and pITS not as a lexicon but as a lexical interface. I'd of course be very interested in your thoughts on all of this.
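To make the geometry of that proposal explicit, here is the hypothesized connectivity as a toy directed graph. The node labels are mine, and this is a cartoon of the claims above, not a model:

```python
# The two-lexicon proposal sketched as a toy directed graph -- a cartoon
# of the hypothesized pathways, not a computational model.
import networkx as nx

g = nx.DiGraph()
# Ventral route: phonetic -> ventral lexicon (pMTG/pITS) -> semantics
g.add_edge("STG (phonetic)", "pMTG/pITS (ventral lexicon)")
g.add_edge("pMTG/pITS (ventral lexicon)", "semantics")
# Dorsal route: phonetic -> dorsal lexicon (left SMG) -> articulation
g.add_edge("STG (phonetic)", "left SMG (dorsal lexicon)")
g.add_edge("left SMG (dorsal lexicon)", "articulatory/motor")
# Crucially, both lexica feed back to STG: two possible top-down
# normalization/facilitation mechanisms.
g.add_edge("pMTG/pITS (ventral lexicon)", "STG (phonetic)")
g.add_edge("left SMG (dorsal lexicon)", "STG (phonetic)")

for u, v in g.edges:
    print(f"{u} -> {v}")
```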
@Wave: Just so I'm clear on what's being said, when you say "phonetic" (e.g. in "mediates between phonetic and articulatory representations") do you mean something like "acoustic/perceptual"?
I haven't had a chance to look at your paper, but this sentence "task effects (e.g. explicit phonetic categorization [...]) influence which lexicon does the work" caught my eye. There's work by Démonet & colleagues (Démonet et al. (2002). Towards imaging the neural correlates of language functions. In Durand & Laks (eds), Phonetics, phonology, and cognition (pp. 244–253). OUP.) that seems to indicate that phoneme identification follows, rather than precedes, lexical identification.
Also, there's evidence that successful phoneme monitoring is more or less limited to alphabetic literates (Read et al. (1986). The ability to manipulate speech sounds depends on knowing alphabetic writing. Cognition, 24, 31–44). People have speculated that whatever knowledge of phon*-things we have is at least partly driven by the feedback loop of perceiving our own productions, so on that story you might expect illiterates (or literates of non-alphabetic writing systems) to show differential SMG activity.
Of course, I'm no neurolinguist, and I'm sure there's a pile of literature that addresses this...