Wednesday, April 30, 2008

Post Doc at Medical College of Wisconsin (MCW)

There is an opening in the Language Imaging Laboratory
(www.neuro.mcw.edu) for a postdoctoral fellow, beginning
immediately. The position offers intensive training in fMRI and
computational studies of language processing, including opportunities
for clinical research in aphasia, epilepsy, and developmental
dyslexia. Faculty mentors include Jeffrey Binder, Einat Liebenthal,
Rutvik Desai, Colin Humphries, and Lisa Conant. Ongoing
collaborations exist with University of Wisconsin faculty including
Mark Seidenberg, Tim Rogers, and Brad Postle. State-of-the-art
facilities include two 3T human research scanners, MEG, TMS, and MRI-
compatible ERP. Interested candidates should contact me by email at:
jbinder@mcw.edu.

Rizzolatti & Craighero (2004): Class discussion summary #4

The section titled "Evidence in Favor of the Mirror Mechanism in Action Understanding" (p. 173) gets to the heart of the matter. So what's the evidence?

The first point made by RC is that the "simplest, and most direct, way to prove that the mirror-neuron system underlies action understanding..." -- namely to lesion the relevant area(s) and show that monkeys can no longer understand actions -- is not a feasible research strategy. RC provide three reasons why this is the case:

"First, the mirror-neuron system is bilateral and includes ... large portions of the parietal and premotor cortex." (p. 173). Hmmm. It's interesting that the Parma group report that "... inactivation of a large part of F5 [on one side] produced a bilateral deficit in preshaping and grasping." (Gallese, Fadiga, Fogassi, Luppino, & Murata. (1997). A parietal-frontal circuit for hand grasping movement in the monkey: evidence from reversible inactivation experiments. In "parietal lobe contribution to orientation in 3D Space." P. Thier and H.-O, Karnath (eds). Springer-Verlag, Heidelberg. p. 264, italics theirs). If you can disrupt grasping, why wouldn't a similar deficit be evident in the understanding of grasping actions?

"Second, there are other mechanisms that may mediate action recognition..." (p. 173). Why do we need the mirror system then? See previous post.

"Third, vast lesions as those required to destroy the mirror neuron system may produce more general cognitive deficits that would render difficult the interpretation of the results." (p. 173). See my comment on their first reason.

Sounds to me like somebody needs to do the critical experiment.

So, instead of using the "simplest, most direct" method to test the MN theory, RC have to resort to correlative methods: "If mirror neurons mediate action understanding, their activity should reflect the meaning of the observed action, not its visual features." (p. 173). The problem here is the usual problem with correlation studies: there's no way to assess causality. Do MNs cause action understanding? Or is their activity just correlated with action understanding that is achieved by "other mechanisms that may mediate action recognition"? So none of these studies actually test the hypothesis.

RC describe two such studies. In one it is shown that F5 neurons respond to action-associated sounds (ripping paper). This shows that sounds can be associated with actions. Cool, but it doesn't prove a thing regarding understanding. (We're reading this empirical paper for next week.)

The other is potentially interesting. Here's the logic: "If mirror neurons are involved in action understanding, they should discharge also in conditions in which [the] monkey does not see the occurring action but has sufficient clues to create a mental representation of what the experimenter does." (p. 173). The study by Umiltà et al. (2001, Neuron, 32:91-101) recorded cells in a full-vision condition (monkey sees an action toward a visible object) and a hidden condition. The hidden condition was this: while the monkey is watching, the experimenter places a piece of food behind a screen. The action is then directed toward an object that the monkey can't actually see. (Recall that pantomiming actions doesn't activate MNs.) The result was that "more than half of the tested neurons" (p. 174) responded during the hidden condition. RC conclude, "It was ... the understanding of the meaning of the observed actions that determined the discharge in the hidden condition."

What can we conclude from this result? First, following the logic of the authors, the study shows that a little less than half of mirror neurons are NOT involved in action understanding. Second, we can conclude that monkeys can mentally represent objects in working memory (no surprise) and that this representation can interact with response properties of (some) MNs. There is no evidence that the MNs supported the understanding of these actions, as the actual understanding of the hidden action could have been achieved by the OTHER action understanding system.

RC conclude the section by stating that "... the activity of mirror neurons correlates with action understanding" (p. 174). I don't think action understanding was ever measured, and only about half of MNs seem to correlate, but even ignoring these limitations, correlation doesn't test the hypothesis and so proves nothing.

So, in fact, the main conclusion from the "Evidence in Favor of the Mirror Mechanism in Action Understanding" section is that there is no evidence.

Post-Doctoral Position, Center for Mind & Brain, UC Davis

Location: Center for Mind and Brain, University of California, Davis
Begin date: Immediately/TBA
End date: TBA
Principal Investigator: Professor David P. Corina, Departments of Linguistics and Psychology
A postdoctoral position is currently available for a project investigating the neural representation of American Sign Language (ASL). We seek a candidate with a background in spoken or sign language psycholinguistics and/or human action processing/motor control who wishes to gain expertise in the neuroscience of language. Techniques used in this project include behavioral testing, electrophysiology (ERP), and fMRI. A Ph.D. in a related field and strong quantitative expertise and writing skills are required.

Please send a CV, statement of interests, and three (3) letters of recommendation to Professor David Corina, corina @ ucdavis .edu, Center for Mind and Brain, 267 Cousteau Pl., University of California Davis, 95618. UC Davis is an Affirmative Action/Equal Opportunity Employer.

Tuesday, April 29, 2008

Mirror Neuron Course: Reading set #3

This week we finally finished discussion of the Rizzolatti & Craighero review, and we discussed the two early empirical papers on MNs: di Pellegrino et al. (1992) and Gallese et al. (1996). I'm still working on posting summaries from RC, which I hope to finish by the end of the week. In the meantime, we will move on with a new reading set for next week:

Rizzolatti, Fogassi, & Gallese (2001). Neurophysiological mechanisms underlying the understanding and imitation of action. Nat. Rev. Neurosci. 2:661-70.

Kohler et al. (2002). Hearing sounds, understanding actions: action representation in mirror neurons. Science 297:846-48.

Fogassi et al. (2005). Parietal lobe: From action organization to intention understanding. Science. 308:662-7.

Rizzolatti & Craighero (2004): Class discussion summary #3

In discussion summary #2, we commented on Rizzolatti & Craighero's (RC) admission that the mirror neuron (MN) system isn't the only mechanism for action understanding, and why this is a serious problem for the MN theory. Here we discuss the two functions that have been ascribed to the MN system, (1) action understanding, and (2) imitation.

"Two main hypotheses have been advanced on what might be the functional role of mirror neurons. The first is that mirror-neuron activity mediates imitation (see Jeannerod 1994); the second is that mirror neurons are at the basis of action understanding (see Rizzolatti et al. 2001). Both these hypotheses are most likely correct." (p. 172).

This is a funny position for reasons that RC correctly raise: "...imitation, the capacity to learn to do an action from seeing it done ... is present among primates, only in humans, and (probably) in apes..." (p. 172). So real mirror neurons, those documented in macaque, cannot be the neural basis for imitation because monkeys don't imitate. RC therefore claim, quite reasonably, that "... the mirror neuron system is the system at the basis of imitation in humans." (p. 172, italics mine).

Why is this a funny position? There are two reasons.

First, since real mirror neurons -- i.e., cells that respond both during the execution of an action and during the perception of a similar action, and whose sensory-related activity cannot be explained by movement preparation, etc. -- have not been documented in humans, we are left to assume, without empirical confirmation, that regions involved in action imitation in human functional imaging studies (there are many) in fact contain cells with mirror neuron properties. If instead imitation activations in humans involve a circuit that is related to, say, movement preparation rather than action perception (has this ever been tested?), then we are talking about entirely different animals.

The second reason it is funny to ascribe action understanding and imitation to the same system stems from RC's point about the complexity of imitation. "Although laymen are often convinced that imitation is a very primitive cognitive function, they are wrong." (p. 172). In other words, imitation is cognitively complicated, which explains why apes can do it and monkeys can't. Now I don't know the literature on imitation, but this seems reasonable. The complexity of imitation is a problem for the joint understanding/imitation theory of MNs, because it seems to me that one would need to understand an action in order to know whether it is worth imitating, yet without having performed the action yourself, you can't understand it. So either imitation is a blind reflex -- apes and humans go around imitating EVERYTHING, blindly hit on a useful action, see the consequences of their own action, and only then understand it -- in which case imitation is NOT a complicated function at all; OR the usefulness of an action one has never performed, and that is therefore not part of the MN "vocabulary," can be understood "directly" without the MN system, and the action then selected for imitation because of its usefulness.

Put simply, if the MN system is required for action understanding and imitation, how does the animal know what to imitate? Unless you are imitating reflexively, you need action understanding to drive imitation, but without having previously imitated or performed an action, you can't understand it. So there's a bit of circularity in this position. Of course, if you admit of some other mechanism that allows for action understanding without the MN system, then you can crack the circuit. But this raises serious problems of its own: see discussion summary #2.

It seems to me that what people are calling the mirror system in humans really does support imitation (in the form of a network doing some form of sensory-motor integration). But the association of this system with the macaque mirror system is tenuous given that they don't imitate.

Caveat: I don't know much about imitation! Is imitation reflexive? Or is it selective? I would love to hear from someone who knows about this stuff.

Postdoctoral Research Fellowships at the University of Queensland

The University of Queensland is offering a limited number of Postdoctoral Research Fellowships commencing in 2009, to be awarded to persons wishing to conduct full-time research at the University.

Applicants interested in joining the fMRI Laboratory at the Centre for Magnetic Resonance, University of Queensland, Australia, are invited to apply. Research should focus upon the neural mechanisms of language (production and/or comprehension). Research facilities available to staff include state-of-the-art MRI scanners (1.5T, 4T and a soon-to-be-installed 3T system) and newly-installed EEG equipment (128 channel). Information about research conducted at the fMRI Lab is available at http://www.fmrilab.net

An applicant must not have had more than five years full-time professional research experience or equivalent part-time experience since the award of a PhD, as at 30 June 2008. An applicant who does not hold a doctoral degree may be offered an appointment if evidence is subsequently provided that a doctoral thesis has been submitted by 31 December 2008. Further information including an application kit is available at http://www.uq.edu.au/research/rrtd/grants-internal-fellowships-uq-postdoctoral

Informal enquiries can be made to Dr Greig de Zubicaray
(greig.dezubicaray@cmr.uq.edu.au )

Applications must be received by 19 May 2008.


__
Dr Greig de Zubicaray
Senior Research Fellow
fMRI Laboratory, Centre for Magnetic Resonance
University of Queensland, QLD 4072, Australia
Tel: (+617) 3365 4100 (Office)
(+617) 3365 4250 (B106, Ritchie Building)
Fax: (+617) 3365 3833
fMRI Lab Page: http://www.fmrilab.net

CRICOS Code 00025B

Thursday, April 24, 2008

The search for the phonological store: From loop to convolution


Former TB West grad student Brad Buchsbaum has found the phonological store. But don't try to punch the Talairach coordinates into your NPS (Neural Positioning System), because it's not in any one place. Here's the background...

Since our 2000 TICS paper (Hickok & Poeppel, 2000, TICS, 4:131-138), we have been saying that our concept of a sensory-motor based auditory dorsal stream is THE neural circuit that supports “phonological” working memory. Specifically, in 2000 we said, “We hypothesize that activations attributed to the functions of the phonological store reflect the operations of the proposed auditory-motor interface system... [this] is not the site of storage of phonemic representations per se ... but rather serves to interface sound-based representations of speech in auditory cortex with articulatory-based representations of speech in frontal cortex.” p. 136.

The idea is that the storage of phonological information is not contained in a dedicated buffer that lives in the parietal lobe (see any Smith/Jonides paper from the late 1990s), but rather that “storage” is nothing more than the active maintenance of the same phonological representations that are used in speech recognition, that these networks are in the STS, and that the dorsal sensory-motor circuit is the pathway that allows for frontal articulatory mechanisms to actively maintain these representations over a delay. This is essentially Fuster’s concept of working memory, i.e., an active state of “long term memory.”

This question of the relation between the proposed dorsal auditory stream and working memory was taken up by TB West alum Brad Buchsbaum. Brad and I did a handful of experiments that first identified area Spt and the circuit we now believe supports sensory-motor integration for speech (and some aspects of music) as well as verbal working memory. See Buchsbaum et al. 2001 (Cognitive Science, 25:663-78) and Hickok et al. 2003 (JoCN, 15:673-82) for the first empirical studies on this topic. Brad has continued to push this issue with several more empirical papers. Check ‘em out, they’re all good.

But, here’s the real reason for this post: Brad has a new review/theory paper out in JoCN with Mark D’Esposito that does a really nice job of summarizing the history of the neural search for the phonological store, and developing the theory and empirical argument behind the link between phonological working memory and auditory-motor integration. The main claim is that the phonological store does not correspond to a single brain region, but emerges from the interaction of systems involved in speech perception and production. I like the way he thinks! Definitely a must-read, so check it out! Brad's an entertaining writer as well, so it's a fun read.

Buchsbaum & D’Esposito (2008). The search for the phonological store: From loop to convolution. J of Cog. Neurosci., 20:762-78.

Wednesday, April 23, 2008

Ani Patel's Book

If you happen to have a spare week ... Ani Patel's book Music, Language, and the Brain is out and about and available on Amazon for a very reasonable price. It's an Oxford University Press 2008 release.

The book was very positively reviewed in Nature Neuroscience by Josh McDermott and very, very positively reviewed in Nature by me and Elika Bergelson from TB East.

This thing is 'fiercely' scholarly (to quote Christian from Project Runway). It's just fierce. The book summarizes an unbelievable amount of reading and reflection. If you are at all interested in music cognition and its relation to language processing, this pretty much sets the high bar. Its references are almost ridiculously complete. It's slightly disturbing that Ani knows this much. Ani, go take a vacation!


Dispatch from CNS: The TB East Perspective

The CNS meeting in San Francisco, as usual, was a smorgasbord of cognitive and neuro science, and, as usual, ranged from the ridiculous to the sublime. I will refrain from identifying what I considered ridiculous (buy me a beer and I'll point to a bunch of stuff) and also what I considered sublime (what the hell do I know?), but there were a bunch of interesting things. Here's a smorgas-reaction to the smorgas-bord.
  • Lolly Tyler organized a symposium on visual object recognition on the day before CNS. This was my favorite part of the meeting. I anticipated a bunch of discussion about the role of top-down mechanisms and prediction; interestingly, the main message I got from this pre-symposium symposium was that the feed-forward mechanisms of the visual system (in particular, the ventral stream) get you really far in terms of object recognition subroutines. Poggio from MIT presented the computational model he's developed over the years and argued that feed-forward mechanisms comfortably account for the first 100 ms of perception of single images. That is not to say that there aren't important contributions of 'analysis by synthesis' operations, but the early aspects of this are well accounted for by feed-forward aspects of the visual system. Shimon Ullman from the Weizmann Institute presented an object recognition model in which he appealed to a hierarchical organization of features of an object. The challenge, obviously, is to figure out what the features are: they should be less generic than, say, Gabor filters but more generic than the specific object. Obviously. So what is the right 'alphabet'? The challenge is to find the features with the largest amount of information. Great talk. I'd like to adapt some of these ideas to hearing and speech. Moshe Bar from Harvard gave a very interesting talk about how a quick glance at low-resolution, low spatial frequency information can be used to generate predictions about possible objects that are then 'verified' by the ventral stream, high-resolution, high spatial frequency stuff. Bar has a TICS paper that describes his model, an interesting read. DiCarlo from MIT presented neurophysiological data from recordings in the ventral stream. The most provocative part of his stuff argued that IT neurons reflect invariant properties of target objects but V4 neurons do not. I'd like to believe that, but what is the magic mojo from which that mythical invariance derives? Anyway, very cool talk addressing a very deep problem. There were some other presentations that were not quite as compelling.
  • On Tuesday morning, there was a symposium on the anterior temporal lobes and semantic memory. This topic has been discussed in this blog at length, and one of the presentations (by Richard Wise) was amusingly summarized today by the blogger Neurocritic (see Greg's earlier post). In that symposium, Matt Lambon Ralph gave a nice presentation of his research, which has been discussed in this blog (see earlier postings). Wise's talk was less narrowly focused on ATL and semantic memory/semantic dementia, but rather constituted an amusing and sharply formulated tour through neurolinguistic history. Richard highlighted a few papers that he clearly values but that he cleverly and humorously criticized. Richard is one of the people in the field who is the most rigorous about keeping us honest about the anatomy. And he's quite right to do so. And in terms of good publicity for Hickok & Poeppel 2007, he did mention our ideas articulated there as not entirely useless. Richard is thinking very deeply about what it can or must mean to have modality-independent language processing, and he has done a series of experiments investigating this. This is difficult stuff, and I think this kind of research is making important incremental progress in our understanding of the 'homework problem' of figuring out the anatomic aspects of the functional architecture.
  • Other stuff I found noteworthy: The dinner with Sonja Kotz, Jonas Obleser, Matt Lambon Ralph, Richard Wise, and others at Cosmopolitan. Great soup. Nice dessert, too. Weird pasta. I'd never met Manuel Carreiras, and one evening I had drinks with him and my friend Päivi Helenius. These are cool people to hang out with. Smart as hell and non-nerdy.
  • There were lots of interesting posters, and lots of utter schlock. Some of the stuff was, like, painfully stupid. Stuff I rather liked included the growing body of evidence on lexical decomposition in a variety of contexts. There were terrific posters on this by Carreiras and colleagues, Marantz, our own shop (he says humbly), and other labs as well. It's pretty clear that there's 'total convergence' on the view that there is extensive decomposition during lexical access and language processing more generally. The cool thing is that we're now figuring out the detailed mechanisms and time course and worrying less about the ideological mumbo-jumbo. One more thing for good measure: I was quite fascinated by the growing body of work on infants using NIRS. There was cool work on this by Heather Bortfeld from Texas A&M and by Silke Telkemeyer from Berlin. This work is still very much in its infancy, haha, but once a more normative set of data has been acquired for various aspects of perception, NIRS can become an exciting window on the earliest possible chunks of life. For example, the Berlin group presented data from three-day-old babies from whom they recorded simultaneous ERP and NIRS.

Rizzolatti & Craighero (2004): Class discussion summary #2

A comment from the section "Function of the Mirror Neuron in the Monkey..." (p. 172)

RC state, "... although we are fully convinced ... that the mirror neuron mechanism is a mechanism of great evolutionary importance through which primates understand actions done by their conspecifics, we cannot claim that this is the only mechanism through which actions done by others may be understood." (p. 172).

The existence of another mechanism for action understanding is a serious problem for the mirror neuron theory of action understanding. Here's why:

To evaluate the importance of the mirror system for action understanding, we must understand what the mirror system adds beyond action understanding achieved by other means.

It's like discovering the existence of a 40 horsepower electric motor in a hybrid vehicle and claiming that the electric motor is the "basis" for the vehicle's power. This may be largely true if the gas-powered motor in said vehicle is a 5 horsepower Briggs & Stratton, but if it is a 700+ horsepower Formula One race engine, then "basis" is probably the wrong word.

So what do we know about this other mechanism for action understanding? Not much. In fact, as far as I know, there is no monkey data on this question. For example, no one has tested whether monkeys fail to understand actions when the mirror system is lesioned -- RC further claim that such a study is not feasible, for reasons we will highlight later. So this means that there is no basis for the claim that the mirror system plays a fundamental role in action understanding in the monkey, because the claim has never actually been evaluated empirically. In humans, the evidence in connection with speech suggests that the mirror system (assuming it exists in humans) plays a secondary role at best in action understanding: damage to frontal motor speech systems can destroy speech production without dramatically affecting speech understanding. Apparently, the other action understanding system is a pretty powerful system.

To summarize: RC's (correct!) admission that the mirror system isn't the only system for action understanding means that the claim that the mirror system is "fundamental" to, or at the "basis" of action understanding has never been scientifically tested in the monkey, and where it has been tested (human speech) has been shown to be incorrect.

So this is how the picture seems to be shaping up...

Mirror system:

[image]

That other action understanding system:

[image]

Well, maybe it's not THAT lopsided. But how would we know? It's never been tested.

Monkey Lip Reading


First it was Broca's area in the chimp, now there is a new study examining audiovisual integration in the perception of monkey vocalizations.

The study by Ghazanfar, Chandrasekaran, & Logothetis (J. Neurosci. 2008, 28:4457-69) recorded single units as well as local field potentials in the STS and in auditory cortex of macaque monkeys. They report that responses in auditory cortex (lateral belt regions) are influenced by visual inputs from the STS. (Abstract below)

This looks like a pretty nice study that provides direct evidence for multisensory integration in belt areas of auditory cortex. The STS may not be the only source of input to these multisensory cells in the lateral belt region. In humans, lip reading activates a large network that includes frontal regions. Feedback projections from motor-speech areas may also influence responses in auditory cortex (at least in humans), as mentioned previously.



Interactions between the Superior Temporal Sulcus and Auditory Cortex Mediate Dynamic Face/Voice Integration in Rhesus Monkeys

Asif A. Ghazanfar,1,2 Chandramouli Chandrasekaran,1 and Nikos K. Logothetis2

1Neuroscience Institute and Department of Psychology, Princeton University, Princeton, New Jersey 08540, and 2Max Planck Institute for Biological Cybernetics, 72076 Tuebingen, Germany

Correspondence should be addressed to Asif A. Ghazanfar, Neuroscience Institute and Department of Psychology, Green Hall, Princeton University, Princeton, NJ 08540. Email: asifg@princeton.edu

The existence of multiple nodes in the cortical network that integrate faces and voices suggests that they may be interacting and influencing each other during communication. To test the hypothesis that multisensory responses in auditory cortex are influenced by visual inputs from the superior temporal sulcus (STS), an association area, we recorded local field potentials and single neurons from both structures concurrently in monkeys. The functional interactions between the auditory cortex and the STS, as measured by spectral analyses, increased in strength during presentations of dynamic faces and voices relative to either communication signal alone. These interactions were not solely modulations of response strength, because the phase relationships were significantly less variable in the multisensory condition as well. A similar analysis of functional interactions within the auditory cortex revealed no similar interactions as a function of stimulus condition, nor did a control condition in which the dynamic face was replaced with a dynamic disk mimicking mouth movements. Single neuron data revealed that these intercortical interactions were reflected in the spiking output of auditory cortex and that such spiking output was coordinated with oscillations in the STS. The vast majority of single neurons that were responsive to voices showed integrative responses when faces, but not control stimuli, were presented in conjunction. Our data suggest that the integration of faces and voices is mediated at least in part by neuronal cooperation between auditory cortex and the STS and that interactions between these structures are a fast and efficient way of dealing with the multisensory communication signals.

Tuesday, April 22, 2008

The Neurocritic on Richard Wise at CNS

Check out the entry posted by the mysterious neuroblogger, Neurocritic, on Richard Wise's talk at CNS. Was Richard really THAT snarky? And toward Hickok & Poeppel of all people? And did he really say we should "throw out most of the literature from stroke aphasia"? Can I get independent confirmation of this, please?

Neurocritic had some kind words in defense of Hickok & Poeppel, so thanks much NeuroCrit.

Who is this Neurocritic dude anyway?

Talking Chimp Brains

A new study reported in Current Biology has identified the homologue of Broca's area in the chimp brain. It's a PET study of communicative behaviors in chimps by Jared P. Taglialatela, Jamie L. Russell, Jennifer A. Schaeffer, and William D. Hopkins
(Communicative Signaling Activates ‘Broca's’ Homolog in Chimpanzees. Current Biology, 2008, 18:343-348).

Now if we only knew what Broca's area was doing in humans, we might be able to interpret this cool finding.

Rizzolatti & Craighero (2004): Class discussion summary

Even though I've been spouting off for the last few months about the problems with mirror neurons, having read this review paper carefully, I have to admit there was much I didn't know about their properties or the theories behind their function. Unfortunately, learning more only solidified my concerns and led to even more doubts and questions about the mirror neuron theory of action understanding, imitation, and speech processing.

There is a LOT to discuss in the Rizzolatti & Craighero paper. It took us a couple of hours in class this week to get only about halfway through. Quite an interesting read, but I have to say, it's not exactly the tightest paper I've read. Plenty of speculation, hints of circularity, over-generalization, etc. We all agree that mirror neurons are very interesting neural creatures, but the idea that they are the basis for action understanding is darn near incoherent, and by the authors' own admission effectively untestable.

Since there's so much to discuss, I'll break down the summary discussion of the Rizzolatti & Craighero paper (henceforth RC) into a few posts.

A couple of initial observations:

1. There are two types of "visuomotor" neurons in macaque F5, "canonical neurons, which respond to the presentation of an object, and mirror neurons, which respond when the monkey sees object-directed action" p. 170. Having not read all the literature on F5 physiology, I wondered if anyone has ever claimed that canonical neurons are the neural basis for object understanding. If not, why? Maybe this is a naive question, but if a cell that responds to the perception of an object-directed action is the basis of action understanding, why isn't a cell that responds to an object the basis of object understanding? Somebody correct me.

2. It is claimed that there are two types of mouth-related mirror neurons, "ingestive mirror neurons" and "communicative mirror neurons." (p. 171). However, according to RC's definitions, in fact communicative mirror neurons do not exist:

a) "Mirror neurons in which the effective observed and effective executed actions correspond in terms of the goal... and means for reaching the goal... have been classed as 'strictly congruent' ... Mirror neurons that do not require the observation of exactly the same action that they code motorically have been classed as 'broadly congruent.' (p. 170)

b) "The most effective observed action for [communicative mirror neurons] is a communicative gesture... However, from a motor point of view they behave as the ingestive mirror neurons, strongly discharging when the monkey actively performs an ingestive action" (p. 171).

So basically, what RC are calling "communicative mirror neurons" are just ingestive mirror neurons that are "broadly congruent." I wonder if monkeys confuse the perception of communicative gestures with the perception of ingestive actions?

3. Cells in the monkey superior temporal sulcus respond to the perception of actions. "STS appears to code a much larger number of movements than F5... [and] STS neurons do not appear to be endowed with motor properties." (p. 171). Hmm. So STS codes for a much larger number of actions than F5. Sounds to me like STS, not F5, is the neural basis of action understanding... If STS is not the basis for action understanding, what IS it doing? RC's position on the relation between the STS and mirror system is downright puzzling: "STS is strictly related to it [the mirror neuron circuit] but, lacking motor properties, cannot be considered part of it." (p. 172). What does it mean to be "strictly related" but "not part of"???

Monday, April 21, 2008

UC Irvine Symposium: Center for Hearing Research and Center for Cog. Neurosci.

The third annual UC Irvine Center for Hearing Research symposium is coming up Saturday, May 10. We have two sessions, one of which is co-sponsored by our Center for Cognitive Neuroscience. Registration is free if anyone is interested...

While you were working ...

Well, it's true that it's my job to work on weekends, while Greg surfs and chills with his family, but ... I decided that Greg is right, and I have to lighten things up. So while you guys were reading mirror-related workmanship, I was

(a) partying in San Francisco at CNS with the TB East crowd and
(b) partying at drag brunch in DC ...

[photos]

We started with a mild-mannered breakfast (which included TB East alumni Huan Luo and Ming Xiang, as well as current folks like Minna Lehtonen, Ariane Rhone, Eri Takahashi, Phil Monahan and Diogo Almeida). We moved on to quasi-civilized in-room discussion (Clare Stroud, Phil Monahan, Bill Idsardi). And devolved into what could have gotten pretty sketchy ...

[photos]

After the intense work (and tourism) at the CNS meeting -- more on that later -- it was critical to get back into the normal swing of things in DC, for which I recommend Drag Brunch at Perry's in Adams Morgan. Hip neighborhood and very therapeutic. And hilarious. Greg, can you find me in the pictures? It goes without saying that I was there principally for the food. Which is excellent.

CNS Virtual Poster Session

Below are abstracts from the two TB West posters presented at Cognitive Neuroscience Society this year. You can get a copy of the posters from here: http://lcbr.ss.uci.edu/virtual_poster_session/

If you would like your CNS poster included in this virtual poster session, feel free to email the title, authors, and abstract, along with a link to your poster. Thanks!

*************

An fMRI Investigation of the Functional Specificity of Sentence Processing Networks: A Comparison of Sentences and Melodies

Corianne Rogalsky & Gregory Hickok
UC Irvine

A number of recent studies have identified portions of the anterior temporal lobe (ATL) that respond preferentially to structured sentence-level stimuli (versus word-lists, for example). It is unclear, however, whether this response to sentences reflects syntactic computations, semantic integration operations, or more general hierarchical structure-building. The present study directly compares the neural systems associated with sentence and melodic structure processing to investigate the specificity of this ATL activity. We implemented a mixed-design fMRI paradigm to compare activity in the ATL while subjects listened to blocks of jabberwocky sentences, scrambled jabberwocky sentences, simple novel melodies, and scrambled novel melodies. In order to separate activations associated with hierarchical structure processing from activations resulting from general temporal processing, stimuli were presented at three different rates within each block. Regions with a greater BOLD response to sentences than to scrambled sentences, and regions with a greater response to melodies than to scrambled melodies, were identified. In agreement with previous research, inferior frontal areas and ATL sub-regions, bilaterally, were found to prefer sentence-level structure. Similar regions were found to prefer hierarchical structure in general: these areas were more active for melodies than scrambled melodies. Further analysis indicates that inferior frontal, not anterior temporal, regions are more active for sentences than melodies once the responses to the corresponding scrambled conditions are subtracted out. These preliminary analyses suggest that regions that prefer sentence-structure in the ATL are also recruited during more general hierarchical-structure building. Supported by NIH DC03681.

Modulation of brain regions involved in overt picture naming by parametric variation in word frequency, word length and reaction time

Stephen M. Wilson [1,2], A. Lisette Isenberg [2], and Gregory S. Hickok [2]

[1] Dept of Neurology
University of California, San Francisco

[2] Dept of Cognitive Sciences
University of California, Irvine

Abstract

Picture naming is a cognitive task commonly used to study lexical access. However, it is a complex operation, entailing not only semantic and phonological stages of lexical access, but also ancillary processes such as visual processing, articulation, self-monitoring and executive functions. Previous neuroimaging studies have demonstrated recruitment of a wide range of brain areas that presumably support these various components of the task. In order to better delineate the functions of regions activated by picture naming, we used fMRI to identify brain areas where BOLD responses to picture naming were modulated by three different parametric variables: word frequency, word length and reaction time, each of which we hypothesized to be associated with different aspects of the task. Twelve subjects were scanned while they named 165 pictures in a rapid event-related design, and digital signal processing was used to extract vocal responses from background scanner noise. Lower frequency words were associated with greater BOLD responses in occipitotemporal cortex bilaterally. These regions are associated with visual and semantic processing. Longer words led to increased BOLD activity in speech motor areas as well as superior temporal cortex. Longer reaction times resulted in greater BOLD activity in areas including inferior frontal regions associated with both cognitive control and linguistic processes, and the pre-SMA. Of particular interest was a region in the left superior temporal sulcus correlated both with word length and reaction time (each independently of the other). We argue that such a pattern suggests a role for this region in retrieval of phonological form.
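For the uninitiated, "modulated by parametric variables" just means that, beyond a regressor marking every naming trial, the design matrix includes regressors whose trial-by-trial amplitude follows each covariate. Here's a schematic sketch of that construction (my own illustration with hypothetical onsets and values -- not the authors' analysis code):

```python
import numpy as np
from scipy.stats import gamma

def hrf(t):
    """Simple double-gamma canonical HRF."""
    return gamma.pdf(t, 6) - (1.0 / 6.0) * gamma.pdf(t, 16)

tr, n_scans = 2.0, 200
onsets = np.arange(10.0, 390.0, 12.0)        # hypothetical trial onsets (s)
rng = np.random.default_rng(0)
log_freq = rng.standard_normal(len(onsets))  # hypothetical log word frequency

stick = np.zeros(n_scans)       # main effect: one stick per naming trial
modulated = np.zeros(n_scans)   # parametric regressor: sticks scaled by covariate
idx = (onsets / tr).astype(int)
stick[idx] = 1.0
modulated[idx] = log_freq - log_freq.mean()  # mean-center the modulator

kernel = hrf(np.arange(0.0, 32.0, tr))
main_reg = np.convolve(stick, kernel)[:n_scans]      # naming per se
freq_reg = np.convolve(modulated, kernel)[:n_scans]  # frequency modulation
```

A voxel whose BOLD signal loads on freq_reg is one whose response to naming scales with word frequency; the same construction applies to word length and reaction time.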

Saturday, April 19, 2008

More on the Motor Theory

Ok, first of all, I shouldn't be working on a weekend. That's David's job, but here I am anyway...

By leaving the issue out of my previous post, I was hoping someone would raise a question about whether motor-related information CAN influence speech perception, and how that might be explained on an "acoustic" theory of speech perception. Without even encouraging him, one of my own students brought it up. (I swear it wasn't a plant.) Kenny Vaden made the following point in a comment on my last post:

"While aphasias undermine the MT [Motor Theory] claim that phonological representations are *completely* motoric/gestural, the survival of speech perception without production does not mean that motoric information is not available to speech perception at all."

He's absolutely right. Knowledge of how speech is produced does appear to influence our perception of speech, at least under some circumstances. The paper we read discusses several lines of evidence in support of this position, and the data are reasonably compelling. The most obvious demonstration of this is probably the McGurk effect, but there are others.

So we must acknowledge that motor knowledge can influence perception. But does this mean that an acoustic model is not correct? Do we have to admit that at least part of speech perception, or speech perception under some circumstances, involves perceiving gestures? No, we don't. Here's a simple explanation: knowledge of how speech is produced can have a top-down influence on the acoustic perception of speech information. Top-down expectations of a variety of sorts can influence all kinds of perceptual events, including speech. For example, the lexical status of a CVC syllable affects the perception of its constituent phonemes (e.g., category boundaries shift toward the lexical item in a b-p continuum with bag and pag as endpoints). So why can't motor expectations (e.g., forward modeling, predictive coding, analysis by synthesis -- whatever you want to call it) have a top-down influence on acoustic representations? They can.
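To see how this could work mechanically, here's a toy simulation (my own sketch with made-up numbers, not from any paper discussed here) in which the bottom-up acoustic categorization function stays fixed and a lexical prior shifts the category boundary, Ganong-style:

```python
import numpy as np

# Toy model of a top-down lexical effect on a /b/-/p/ continuum.
# Assumptions (all hypothetical): acoustic evidence lies on a 0-1
# VOT-like continuum; "bag" is a word, so the lexical prior favors /b/.

def p_b_acoustic(x, boundary=0.5, slope=12.0):
    """Bottom-up probability of /b/ from the acoustic signal alone."""
    return 1.0 / (1.0 + np.exp(slope * (x - boundary)))

def p_b_posterior(x, prior_b=0.5):
    """Combine the acoustic likelihood with a top-down lexical prior."""
    like = p_b_acoustic(x)
    return like * prior_b / (like * prior_b + (1 - like) * (1 - prior_b))

continuum = np.linspace(0, 1, 11)
neutral = p_b_posterior(continuum, prior_b=0.5)  # no lexical bias
biased = p_b_posterior(continuum, prior_b=0.7)   # "bag" is a word

# The 50% crossover shifts toward more /b/ ("bag") responses under the
# prior, even though p_b_acoustic itself never changed.
print(continuum[np.argmin(np.abs(neutral - 0.5))])  # 0.5
print(continuum[np.argmin(np.abs(biased - 0.5))])   # 0.6
```

The point of the toy: the perceptual objects are still acoustic; the motor (or lexical) knowledge enters only as a prior that biases the decision.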

Conclusion: motor effects on perception do not falsify an acoustic theory of speech perception. But they do suggest that motor knowledge can influence perception in a top-down fashion, just like many types of knowledge can.

Thursday, April 17, 2008

The Motor Theory of Speech Perception: Discussion summary

Quick summary of our in-class discussion of "The Motor Theory of Speech Perception Reviewed" by Galantucci et al. (2006), Psychonomic Bull. & Rev., 13: 361-77.

1. Very worthwhile and thorough article. Everyone needs to read it.

2. The authors make an excellent case for the fact that there is a very tight connection between speech perception and speech production.

3. In contrast to the authors' conclusions, however, none of the arguments makes a case for the central claim of the Motor Theory, that "Perceiving speech is perceiving phonetic gestures." p. 367. Instead, I would argue that there is much better evidence for the reverse claim: that speech production is producing auditory targets. Call it the Perceptual Theory of Speech Production. (See Frank Guenther's work for computational implementations of this viewpoint.) This view has all the advantages of the Motor Theory in terms of maintaining parity between perception and production, and explaining the tight association between sensory and motor systems, AND, unlike the Motor Theory, is consistent with the empirical facts from aphasia, namely that damage to frontal speech production systems does not lead to a concomitant impairment in speech recognition, whereas damage to posterior auditory-related brain regions does produce production deficits.

4. Relatedly, and as pointed out in a previous post, evidence from aphasia is glaringly lacking from the otherwise very thorough review.

I would love to have a discussion about #3, in particular, whether anyone can think of any evidence to support a Motor Theory account rather than a Sensory Theory account. I didn't see it in the Galantucci et al. paper, which is the best review I've seen, so speak up if you disagree! I would love to know why I'm wrong.

From the Is-there-ANYTHING-they-won't-study-with-functional-imaging file...

In the current issue of J. Neurosci:

Anticipatory Brain Activity in Irritable Bowel Syndrome
Jerry Chen and Udi Blankstein
J. Neurosci. 2008;28 4113-4114

Mirror Neuron Course: Reading set #2

Here is the second set of readings for our MN course. These are the early empirical demonstrations of MNs in the monkey. We are a bit behind schedule because (i) I was out of town last week, and (ii) the CNS meeting was this week. Our plan is to discuss the MN review paper (Reading Set #1) and the papers below next week. I will post a discussion summary of the Motor Theory of Speech Perception shortly.

di Pellegrino G, Fadiga L, Fogassi L, Gallese V, Rizzolatti G.
Understanding motor events: a neurophysiological study.
Exp Brain Res. 1992;91(1):176-80.

Gallese V, Fadiga L, Fogassi L, Rizzolatti G.
Action recognition in the premotor cortex.
Brain. 1996 Apr;119 ( Pt 2):593-609.

Rizzolatti G, Fadiga L, Gallese V, Fogassi L.
Premotor cortex and the recognition of motor actions.
Brain Res Cogn Brain Res. 1996 Mar;3(2):131-41.

Monday, April 14, 2008

Talking Brains gets burned

The Talking Brains blog is now a burned feed. Not that I know what this means exactly, but it is supposed to make it easier for people who subscribe to feeds (Talking Brains, depressing news, boring blogs, etc.) to access content.

So if you are already a TB subscriber, you know what to do. If you're not, but would like to be, just go down to the bottom of the main page and click where it says "subscribe to: posts(atom)". This will take you to a page that will allow you to view Talking Brains on the reader of your choice.

TB Journal Club: New paper in J. Neurosci on auditory cortex asymmetries

Well, here is a must-read for the TB Journal Club:

Right-Hemisphere Auditory Cortex Is Dominant for Coding Syllable Patterns in Speech

The paper by Daniel A. Abrams, Trent Nicol, Steven Zecker, and Nina Kraus appears in the latest issue of J. Neuroscience (2008;28 3958-3965)
http://www.jneurosci.org/cgi/content/abstract/28/15/3958?etoc

Looks like confirmation that auditory cortex in the right hemisphere is tracking speech envelope information (i.e., slow temporal features in the 3-5 Hz range). See abstract below. So in addition to the Mirror Neuron readings for our SimulCourse, let's read this one and "discuss" next week. BTW, is anyone else reading these papers, or am I the only one? :-)

Cortical analysis of speech has long been considered the domain of left-hemisphere auditory areas. A recent hypothesis poses that cortical processing of acoustic signals, including speech, is mediated bilaterally based on the component rates inherent to the speech signal. In support of this hypothesis, previous studies have shown that slow temporal features (3–5 Hz) in nonspeech acoustic signals lateralize to right-hemisphere auditory areas, whereas rapid temporal features (20–50 Hz) lateralize to the left hemisphere. These results were obtained using nonspeech stimuli, and it is not known whether right-hemisphere auditory cortex is dominant for coding the slow temporal features in speech known as the speech envelope. Here we show strong right-hemisphere dominance for coding the speech envelope, which represents syllable patterns and is critical for normal speech perception. Right-hemisphere auditory cortex was 100% more accurate in following contours of the speech envelope and had a 33% larger response magnitude while following the envelope compared with the left hemisphere. Asymmetries were evident regardless of the ear of stimulation despite dominance of contralateral connections in ascending auditory pathways. Results provide evidence that the right hemisphere plays a specific and important role in speech processing and support the hypothesis that acoustic processing of speech involves the decomposition of the signal into constituent temporal features by rate-specialized neurons in right- and left-hemisphere auditory cortex.
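For anyone who wants a concrete sense of what "the speech envelope" is, here's a minimal sketch of the standard extraction recipe (my own illustration; it has nothing to do with the authors' actual methods):

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def speech_envelope(signal, fs, cutoff_hz=8.0):
    """Broadband amplitude contour, low-pass filtered to keep slow modulations."""
    envelope = np.abs(hilbert(signal))               # instantaneous amplitude
    sos = butter(4, cutoff_hz, fs=fs, output='sos')  # low-pass below ~8 Hz
    return sosfiltfilt(sos, envelope)                # zero-phase smoothing

# Fake "speech": a 200 Hz carrier amplitude-modulated at a syllabic 4 Hz rate.
fs = 16000
t = np.arange(0, 2.0, 1.0 / fs)
signal = (1 + np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 200 * t)

env = speech_envelope(signal, fs)  # recovers the ~4 Hz syllabic contour
```

The 3-5 Hz energy in that contour is exactly the syllable-rate information the abstract claims the right hemisphere tracks better.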

Friday, April 4, 2008

TB Journal Club: Effective and Structural Connectivity of Human Auditory Cortex


Ok, so I have had a chance to read through Upadhyay et al.'s recent paper in J. Neuroscience (2008: 3341-9). It's a good one. They asked subjects to listen to short sentences during BOLD fMRI, and also collected scans for DTI imaging. Using Heschl's gyrus (HG) as a seed, they performed Granger causality mapping to identify regions that are functionally coupled with activity in HG. Two regions showed up, one anterior to HG and the other posterior to HG. The locations are pretty dorsal, involving the lateral portions of the supratemporal plane and wrapping out toward the crown of the STG in both cases (see figure). DTI analysis showed that these two sites are connected to different regions of HG: the anterior site appears to get its input from rostral HG, whereas the posterior site appears to get its input from caudal HG.
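For readers unfamiliar with Granger causality mapping, the core idea is simple: a seed "Granger-causes" a target if the seed's past improves prediction of the target beyond what the target's own past provides. Here's a bare-bones toy version (my own sketch; the authors' actual pipeline is certainly more sophisticated):

```python
import numpy as np

def granger_stat(x, y, lag=2):
    """Log variance ratio: does the past of seed x help predict target y?"""
    n = len(y)
    Y = y[lag:]
    past_y = np.column_stack([y[lag - k - 1:n - k - 1] for k in range(lag)])
    past_x = np.column_stack([x[lag - k - 1:n - k - 1] for k in range(lag)])
    # Restricted model: predict y from its own past only.
    res_r = Y - past_y @ np.linalg.lstsq(past_y, Y, rcond=None)[0]
    # Full model: add the seed's past.
    full = np.hstack([past_y, past_x])
    res_f = Y - full @ np.linalg.lstsq(full, Y, rcond=None)[0]
    return np.log(np.var(res_r) / np.var(res_f))  # > 0: seed helps

rng = np.random.default_rng(0)
x = rng.standard_normal(500)                               # "seed" time course
y = 0.8 * np.roll(x, 1) + 0.5 * rng.standard_normal(500)   # target lags the seed
print(granger_stat(x, y))  # clearly positive: x's past predicts y
print(granger_stat(y, x))  # near zero: y's past does not predict x
```

In the actual study, the seed is the HG time course, the targets are the other voxels, and the statistic is mapped across the brain.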

This finding is consistent with dual pathway models of primate auditory cortex which distinguish between ventral/rostral and dorsal/caudal streams and suggest that the distinction between these pathways in humans is present at the level of A1.

Mapping connectivity patterns is critically important to understanding the functional anatomy of audition, including speech/language processes, and this study is a big step in that direction. But we still don't seem to be any closer to resolving the functional roles of these projections. For example, there remains debate over the extent to which anterior vs. posterior STS regions (which seem to be beyond the scope of the Upadhyay et al. analysis) support speech processing. Regions of the STS, both anterior and posterior, seem to behave very ventral-stream ("what")-like in their response to speech stimulation. Also, there is debate over whether the dorsal/caudal stream supports spatial functions, sensory-motor integration functions, spectro-temporal analysis, or some combination of these. And then there's the Computational Hub idea. The present study doesn't really resolve any of these questions (not that it was intended to, though).

Still lots to work out. I've got plenty of ideas on the topic, and hope to put together a paper soon that lays them out. Here are a couple of preview tidbits:

1. The computational hug idea (typo intended) is wrong, at least in its broadest conceptualization. It's an interesting hypothesis, but the fact that lots of different types of stimuli are getting cozy in the planum T. doesn't mean that there is a single mechanism devoted to sorting them out into different processing streams. Hickok & Poeppel 2007 touched on this issue.

2. Evidence for pure spatial functions in the dorsal/caudal stream -- whether we are talking auditory motion or spatial localization -- is weak at best, and probably non-existent, as Robert Zatorre has suggested (Google "Where is 'where' in auditory cortex"). Here's another post on the topic.

3. Both anterior and posterior projections are involved in "what" processes (although I don't know what kind of what).

4. Sensory-motor integration is an important function of the posterior planum region, although it is probably a mistake to refer to it as part of an "auditory" stream. See this previous post.

Thursday, April 3, 2008

Exciting jobs in Trieste, Italy

The Cognitive Neuroscience Sector at SISSA seeks to recruit independent group leaders.

The 3-year Plan approved by SISSA last Fall includes strengthening cognitive neuroscience research and identifies as priorities:
- Behavioural Neuroscience, investigated through electrophysiology in awake animals
- Cognitive Development and/or Learning
- Functional Imaging, in connection with the new fMRI-sharing agreement in Udine
- Language and/or Higher Cognitive Function

The Sector aims to identify up to 3 suitable candidates as early as this Spring, although the appointments, and so the establishment of new research groups, may be scattered over the period 2008-10. The Sector is particularly interested in reaching candidates with no previous history of collaboration with SISSA. If selected, they will be offered positions at a level commensurate with their qualifications, in the expectation that within 5 years they will succeed in obtaining tenure as Associate or Full Professors. Candidates with whose work SISSA is familiar may be offered ad hoc arrangements if selected, but they will first be assessed together with the others.

SISSA is one of the three purely postgraduate and postdoctoral institutions within the Italian university system. It operates in English and the Sector is keen to enhance its international character and its intellectual diversity. The Sector currently has 23 PhD students supported on SISSA fellowships, almost half of whom are not Italians. Postdocs, however, are normally supported by individual research funding. Faculty members are required to teach limited PhD mini-courses, and to individually supervise the research of students in their groups. Current faculty members are Mathew Diamond, Jacques Mehler, Raffaella Rumiati, Tim Shallice and Alessandro Treves, with visiting professors Evan Balaban, Luca Bonatti and Marina Nespor. Further information about the Sector can be found on the webpage http://www.sissa.it/cns/

Those interested should write to Alessandro Treves, alessandrotreves@gmail.com, before April 30th, 2008, attaching their curriculum vitae. Receipt of CVs will be acknowledged weekly.

Info sent by Prof. Raffaella Rumiati

Wednesday, April 2, 2008

POSTDOCTORAL FELLOWSHIP, NIH

A postdoctoral fellowship dedicated to neuroimaging studies of higher level language processing and functional recovery in aphasia is available in the Language Section, NIDCD, NIH Intramural Program, Bethesda, MD.

We conduct multimodal imaging studies using hemodynamic (fMRI, PET) and electrophysiological (EEG, MEG) techniques. An ideal candidate would have experience with one or more of these, or other functional imaging methods, and a strong interest in using them to study brain-language relationships.

A PhD or MD degree is required. Knowledge of MRI or MEG and/or experience in statistical methods, computer programming or image processing is strongly preferred. The fellowship carries an initial contract of two years with an option to renew. The typical duration of such a fellowship is three to five years.

Allen R. Braun, M.D.
Chief, Language Section
National Institute on Deafness
and Other Communication Disorders
National Institutes of Health
Building 10, Room 8S235A
Bethesda, Maryland 20892

Phone: 301-402-1497
Fax: 301-451-5353
brauna@nidcd.nih.gov

Tuesday, April 1, 2008

Mirror Neuron Course: Reading Set #1

First order of business in our course is to get an update on the mirror system's cognitive science bedfellow: the Motor Theory of Speech Perception.  As pointed out in a previous entry, there is an excellent recent review of the Motor Theory, which we plan to read and discuss at our next meeting.

Galantucci, Fowler, & Turvey (2006). The motor theory of speech perception reviewed. Psychonomic Bulletin & Review, 13:361-77.

We also plan to read a recent review of the mirror system to get an overview before we jump into some of the details.  Here is the paper we selected:

Rizzolatti & Craighero (2004). The mirror-neuron system. Annual Review of Neuroscience, 27:169-92.