Wednesday, May 30, 2007

More Reflections on Mirror Neurons

A nice motor-learning experiment was reported in J. Neuroscience recently (Lahav et al., 2007, 27, 308-314). The authors trained non-musicians to play a piece of music on a keyboard. They then scanned the subjects using fMRI while they passively listened to the piece they had learned or to a piece they hadn't learned to play. Both trained and untrained pieces activated auditory regions in the superior temporal lobe. However, the trained piece additionally activated posterior frontal areas, whereas the untrained piece did not. This is a very nice demonstration that motor associations of the trained musical piece were indeed acquired, and that these associations involve posterior frontal areas.

Here's the puzzling thing: the title of the paper is "Action Representation of Sound: Audiomotor Recognition Network While Listening to Newly Acquired Actions," and the frontal activations are attributed to "the mirror neuron system."

First of all, what evidence is there that the frontal activations are "action representations of sound"? Why can't they be motor representations of movement? Second, how is this an experiment on "listening to actions"? And what does listening to actions mean? Third, what is being "recognized" by this mirror system? The melody? Can't be, unless one holds that subjects are incapable of recognizing melodies that they don't know how to play. In fact, the data make a very nice case for the idea that one can recognize action-generated sensory events without mapping that sensory event onto the motor system. Maybe the mirror system is "recognizing" the unseen actions that produce the melody. OK, fine, but how is this different from, or more to the point, what is the evidence that it is different from the simpler idea that frontal activation during perception of trained melodies reflects (pun intended) the motor side of a learned sensory-motor association? You pair a tone with a puff of air to the eye and pretty soon, the tone elicits an eye blink response. Is the eye blink recognizing the tone? Or the air puff? Or anything? You associate a melody with a sequence of finger movements and pretty soon the melody activates motor areas involved in coding those finger movements. What is the evidence that activation of the "mirror system" is more than simple sensory-motor association?
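The simple-association alternative is easy to make concrete. Here is a toy Hebbian sketch (a hypothetical illustration of mine, not anything from the Lahav et al. paper): pair a "sensory" pattern (the heard melody) with a "motor" pattern (the finger movements) via outer-product learning, and afterwards the sensory pattern alone reactivates the motor pattern. No "recognition" machinery is needed anywhere in the loop.

```python
import numpy as np

# Toy binary feature vectors (purely illustrative, chosen by hand):
# "sensory" stands in for the heard melody, "motor" for the paired
# finger-movement pattern learned during keyboard training.
sensory = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 1.0])
motor = np.array([0.0, 1.0, 1.0, 0.0, 1.0, 0.0])

# Hebbian outer-product learning: the weight w[i, j] grows when
# motor unit i and sensory unit j are active together during pairing.
w = np.outer(motor, sensory)

# Now present the melody alone and read out the motor layer:
# the learned association reinstates the motor pattern.
recalled = (w @ sensory > 0).astype(float)

assert np.array_equal(recalled, motor)
```

The point of the sketch is that pure association reproduces the fMRI result pattern: the "motor" layer lights up only for the trained input, with nothing that could be called recognition in the model.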

Friday, May 25, 2007

Dorsal & ventral streams: language science scooped vision science

We have written that our dual stream model of speech processing builds on research on cortical models of the visual system, particularly the distinction between dorsal and ventral processing streams. And it does. But there is a much older precedent both to our own proposal and to current dual-stream vision theories: Wernicke's classic 1874 language model. As we all know, Wernicke proposed that sensory representations of speech ("auditory word images") interfaced with two distinct systems: the conceptual system, which he believed was broadly distributed throughout cortex, and the motor system, located in the frontal lobe. The interface with the conceptual system supported comprehension of speech, whereas the interface with the motor system helped support the production of speech. Thus, one stream processes the meaning of sensory information (the "what" stream), while the other allows for interaction with the action system (the "how" stream). This is basically identical to what David and I have been claiming in terms of the broad organization of our dual stream model, and identical to what folks like Milner and Goodale have proposed in the vision domain. When will those vision folks get an idea of their own? ;-)

Tuesday, May 22, 2007

Faculty position at Univ. College London ICN

There is a Lecturer/Senior Lecturer position at UCL Institute for Cognitive Neuroscience. Here's the link:

http://www.psychol.ucl.ac.uk/info/icnlecturer.htm

Monday, May 21, 2007

Where is area Spt?

We occasionally get questions regarding how to define area Spt -- the key dorsal-stream region we believe performs sensory-motor transformations for speech. The acronym stands for Sylvian parietal-temporal, reflecting the fact that the region is located within the Sylvian fissure at the parietal-temporal boundary. The region involves a portion of the planum temporale/parietal operculum (the two are very hard to distinguish) and is a subportion of area Tpt. For those interested in more detail, below is a definition that we included in a recently submitted manuscript. My former student, Brad Buchsbaum, can provide more details regarding typical locations in standardized space.

Spt is functionally defined within an anatomically constrained region of interest. It was initially identified in an anatomically unconstrained analysis that specified regions showing both auditory (speech responsive) and motor (responsive during covert speech production) response properties (Buchsbaum et al. (2001), Hickok et al. (2003)). A network of regions is identified in such an analysis, including area 44 and a more dorsal pre-motor site in the frontal lobe, a region in the superior temporal sulcus, and a region in the posterior aspect of the planum temporale, sometimes extending up into the parietal operculum. Anatomically, this latter area, Spt, appears to be a sub-portion of Galaburda and Sanides’ (1980) area Tpt. Although auditory-motor responses are identifiable within this region very consistently in individual subjects (using their own anatomy as a guide to localization), the activation location in standardized space can vary substantially, due to the extensive anatomical variability of the posterior Sylvian region. Thus Spt is defined as a region within the posterior portion of the Sylvian fissure that exhibits both auditory and motor response properties.

Thursday, May 17, 2007

Take a Good Hard Look at Your Mirror Neurons

Last month Alison Gopnik published an article on Slate.com titled, "Cells that read minds: What the myth of mirror neurons gets wrong about the brain." Gopnik argued that mirror neurons have replaced left-brain/right-brain mythology as the neural basis for just about everything from human language, to social understanding, to art appreciation. It's not that the discovery of mirror neurons by Rizzolatti and colleagues is unimportant -- on the contrary, it is arguably one of the most significant findings in recent years -- it's that speculation about their function is completely out of control. I'm not sure it has quite reached the level of insanity that the left-brain/right-brain craze did (I recently came across a claim that left- vs. right-brain tendencies explains why men are "beer guzzling, TV-glued, and sex-driven"), but it wouldn't surprise me if mirror neuron-based personality, management, or learning-style self-tests started popping up on the web. Of course, Gopnik could easily have gone even farther back in the history of neuroscience and drawn parallels between the current mirror neuron fad and 19th century phrenology, which, like left-/right-brain function, is another example of the over-application of legitimate scientific ideas. Hmmm... even the functions that are claimed to be supported by Gall's mental organs on the one hand, and the mirror neuron system on the other, are starting to sound alike: altruism, empathy, morality...

But these complex functions are way beyond my capacity to understand (not enough mirror neurons, I guess), so let's stick to language. Interestingly, as with phrenology, language was one of the first "applications" of the mirror neuron machinery. It was low-hanging fruit that came ripe with a rich (albeit ailing) cognitive grounding in the form of the Motor Theory of Speech Perception. Now in the case of phrenology, we know that the fundamental tenet of the theory -- that the cerebral cortex is functionally differentiated -- was ultimately proven correct by using language as a test case. Will the mirror neuron system parallel phrenology in this respect also? Not so much. Data from language research provide rather strong evidence against the mirror neuron theory of perception/understanding.

In the domain of language, the mirror neuron theory is basically this: speech gestures are understood in the listener via a mapping of heard speech onto speech production systems. This theory makes a very clear prediction: destruction of speech production systems should destroy the ability to understand speech. But Broca's aphasia disconfirms this prediction. In many cases of severe Broca's aphasia the entire convexity of the left frontal lobe is destroyed, along with the ability to produce speech. Yet these patients can understand speech quite readily at the lexical level. The same holds true in the visual/manual modality. Frontal lesions can severely impair sign language production in deaf signers while leaving sign comprehension rather nicely spared. If the "mirror system" forms the basis for understanding speech, Broca's aphasics should not be able to comprehend speech any better than they can produce it. Yet the evidence is clear: the left frontal lobe, while critically involved in speech/sign production, is not critically involved in speech/sign comprehension.

So the same clinical syndrome, Broca's aphasia, that confirmed the fundamental claim of phrenology, refutes the mirror neuron theory of speech/language understanding. Does this mean that the whole mirror neuron enterprise is misguided? Who knows. But if the most straightforward and cognitively grounded human application of mirror neuron function doesn't hold water, it certainly makes you wonder. In this context it is worth emphasizing that while mirror neurons are widely held to be the neural basis for imitation, and it is on this base that more elaborate functional speculations are built, the species in which mirror neurons were discovered, the macaque, does not appear to have the capacity to imitate. Hmmm.

Good new review by Davis & Johnsrude

Matt Davis and Ingrid Johnsrude have a very thoughtful new review, "Hearing speech sounds: Top-down influences on the interface between audition and speech perception" in the journal Hearing Research. The paper is available on the journal's web site already.

Matt and Ingrid review several critical bottom-up and top-down aspects of speech perception, both from the perspective of perception research and cognitive neuroscience. They review four types of phenomena: grouping, segmentation, perceptual learning, and categorical perception. The paper makes a persuasive case for the nuanced interaction between top-down factors and bottom-up analysis.

Indeed, their review converges in interesting ways with the Hickok & Poeppel (2007) review, and with another forthcoming review by -- yes, sorry -- me (Poeppel), Bill Idsardi, and Virginie van Wassenhove: "Speech perception at the interface of neurobiology and linguistics", a paper in press at Philosophical Transactions of the Royal Society.

Across these three papers I think there is a fair amount of convergence -- a successful model bridging perception, computation, and brain must account for the subtle but principled interaction between the bottom-up processes we need (by logical necessity) and the top-down processes we (our brains) bring to the problem (by sheer damn luck).

Is this news? Well, there certainly is still controversy, although there may be no issue ... But it seems to me that, say, research on automatic speech recognition is not particularly well-informed by the processes that human brains actually execute in perception. (Neuromorphic engineering approaches are, it goes without saying, an exception.)

Wednesday, May 16, 2007

The Cortical Organization of Speech Processing

Our new article, "The Cortical Organization of Speech Processing," has recently been published in Nature Reviews Neuroscience, 8, 393-402 (May 2007). Although it is an extension of the model we proposed in our 2000 TICS and 2004 Cognition papers, there are several new features in the current proposal. One is the claim that within the ventral stream there are parallel routes from acoustic input to lexical access. Another is that within the dorsal stream there are also parallel circuits, one supporting auditory-motor integration at the phoneme level, and another supporting auditory-motor integration at the level of sequences of phonemes or syllables. We also suggest that the dorsal auditory-motor integration system may be fundamentally organized around the vocal tract effector system, rather than the auditory system per se. More on these recent claims in future posts. Let us know what you think of the new paper!