It seems like every now and then, this concept comes up from different angles, for many of us. For me, the ‘analysis-by-synthesis’ perspective on internal forward models has come up in various experimental contexts, initially in work with Virginie van Wassenhove on audio-visual speech. There, based on ERP data recorded during perception of multi-sensory syllables, we argued for an internal forward model in which visual speech elicits the cascade of operations that comprises, among other steps, hypothesis generation and evaluation against the input. The idea (at least in the guise of analysis-by-synthesis) has recently been reviewed as well (Poeppel & Monahan, 2010, in LCP; Bever & Poeppel, 2010, in Biolinguistics, provides a historical view dealing with sentence processing à la Bever).
It is worth remembering that work on visual perception has been exploring a similar position (Yuille & Kersten on vision; reverse hierarchy theory of Hochstein & Ahissar; the seemingly endless stream of Bayesian positions).
Now, in new work from my lab, Xing Tian comes at the issue from a new and totally unconventional angle: mental imagery. In a new paper, Mental imagery of speech and movement implicates the dynamics of internal forward models, Xing discusses a series of MEG experiments in which he recorded from participants doing finger-tapping tasks and speech tasks, overtly and covertly. For example, after training, you can do a pretty good job imagining that you are saying (covertly) the syllable da, or hearing the syllable ba.
This paper is long and has lots of intricate detail (for example, we conclude that mental imagery of perceptual processes clearly draws on the areas implicated in perception, but imagery of movement is not like a ‘weaker’ form of movement; rather, it resembles movement planning). Anyway, the key finding from Xing’s work is this. We support the idea of an efference copy, but there is arguably a cascade of predictive steps (a dynamic) that is schematized in the figure from the paper. The critical data point: a fixed interval after a subject imagines articulating a syllable (nothing is said, nothing is heard!), we observe activity in auditory cortex that is indistinguishable from hearing the token. So, as you prepare/plan to say something, an efference copy is sent not just to parietal cortex but also to auditory cortex, possibly in series. Cool, no?
And on a totally different note … An important paper from Anne-Lise Giraud’s group just appeared in PNAS, Neurophysiological origin of human brain asymmetry for speech and language, by Benjamin Morillon et al. This paper is based on the concurrent recording of EEG and fMRI. It builds on the 2007 Neuron paper and incorporates an interesting task contrast and a sophisticated analysis allowing us to (begin to) visualize the network at rest and during language comprehension. The abstract is below:
The physiological basis of human cerebral asymmetry for language remains mysterious. We have used simultaneous physiological and anatomical measurements to investigate the issue. Concentrating on neural oscillatory activity in speech-specific frequency bands and exploring interactions between gestural (motor) and auditory-evoked activity, we find, in the absence of language-related processing, that left auditory, somatosensory, articulatory motor, and inferior parietal cortices show specific, lateralized, speech-related physiological properties. With the addition of ecologically valid audiovisual stimulation, activity in auditory cortex synchronizes with left-dominant input from the motor cortex at frequencies corresponding to syllabic, but not phonemic, speech rhythms. Our results support theories of language lateralization that posit a major role for intrinsic, hardwired perceptuomotor processing in syllabic parsing and are compatible both with the evolutionary view that speech arose from a combination of syllable-sized vocalizations and meaningful hand gestures and with developmental observations suggesting phonemic analysis is a developmentally acquired process.
Morillon B, Lehongre K, Frackowiak RS, Ducorps A, Kleinschmidt A, Poeppel D, & Giraud AL (2010). Neurophysiological origin of human brain asymmetry for speech and language. Proceedings of the National Academy of Sciences of the United States of America, 107 (43), 18688-93 PMID: 20956297
1 comment:
Looks really cool. The only thing I would say I'm not entirely convinced by is the localization of your first forward model in parietal cortex. I think of motor state estimation as part of the frontal motor system, but I'm in the minority on this one (what's new), and in general I think that the function-region correlations in all of the internal model work are darn close to pure speculation.