Saturday, December 29, 2007

Favorite Jackendoff quote

While doing some reading for my Semantics and Brain course, I've found my favorite quote from Jackendoff, maybe my favorite from all of linguistics. Jackendoff (2003, BBS, 26, 651-707) was making the point that the structure of language is still far from understood despite decades of research by a whole community of linguists. He continues,

"Yet every child does it by the age of ten or so. Children don't have to make the choices we do... They already f-know it in advance." p. 653

Although linguists may be justified in being f-annoyed that little tykes know more about language than they do, the f in Jackendoff's f-know is not an abbreviated expletive. The f actually stands for functional, and the point is that kids seem to have some functional knowledge of language structure (they f-know it) when they approach the task of language acquisition. This, of course, is not a new claim. I just like the way Jackendoff f-puts it. :-)

If you haven't read Ray's book, Foundations of Language, or at least the précis in BBS, it is worth a serious look. Lots of ideas that make contact between linguistics, psycholinguistics, and neuroscience.

Jackendoff R. (2003). Précis of Foundations of language: brain, meaning, grammar, evolution. Behav Brain Sci., 26(6):651-65; discussion 666-707.

Wednesday, December 26, 2007

The French Connection III: New Neuron paper by Giraud team

Hi there. Sorry for the brief absence -- I've been a bit 'indisposed' medically, but am now ready for a fun 2008 on Talking Brains. Happy New Year.

As I indicated before, Anne-Lise Giraud and her colleagues (including me) have a new paper in Neuron that illustrates one of the points I've been carrying on about for a while -- multi-time resolution processing and the Asymmetric Sampling in Time (AST) idea.

The paper, Endogenous Cortical Rhythms Determine Cerebral Specialization for Speech Perception and Production (Neuron 56: 1127-1134), describes a study using concurrent EEG and fMRI. The study shows how theta (slower sampling) and gamma (faster sampling) rhythms (as quantified by EEG) are bilaterally but asymmetrically distributed in the auditory cortices. Moreover (cool bonus data), mouth and tongue motor areas showed theta and gamma rhythms -- illustrating that the same cortical oscillations are observed in auditory and speech-motor areas. Cool, no?
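
For readers who haven't worked with oscillation data, here is a minimal sketch of what "quantifying theta and gamma rhythms" amounts to computationally: band-pass filter the signal and take the envelope power in each band. The sampling rate, band edges, and synthetic signal below are illustrative assumptions, not the analysis pipeline from the Giraud et al. paper.

```python
# Minimal sketch: theta (~4-8 Hz) and gamma (~30-80 Hz) power from one EEG channel.
# All parameter values and the synthetic "EEG" are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0                           # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)         # 10 s of fake data
eeg = (np.sin(2 * np.pi * 6 * t)             # ~6 Hz theta component
       + 0.3 * np.sin(2 * np.pi * 40 * t)    # ~40 Hz gamma component
       + 0.5 * np.random.randn(t.size))      # noise

def band_power(signal, low, high, fs):
    """Band-pass the signal and return its mean envelope power."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    envelope = np.abs(hilbert(filtered))     # instantaneous amplitude
    return np.mean(envelope ** 2)

theta_power = band_power(eeg, 4, 8, fs)
gamma_power = band_power(eeg, 30, 80, fs)
print(f"theta power: {theta_power:.3f}, gamma power: {gamma_power:.3f}")
```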

Anyway, if you have wondered about some of the temporal claims (and specifically AST) that have appeared in Hickok & Poeppel 2000/2004/2007, recent papers that show exciting empirical support are:

• Giraud et al. (2007), Neuron
• Luo & Poeppel (2007), Neuron
• Boemio et al. (2005), Nature Neuroscience

Martin Meyer and his colleagues (Zürich) have also accumulated some interesting evidence regarding these hypotheses. More on the work by that group soon, in a separate posting.

Friday, December 21, 2007

Semantics and Brain course - reading set #1

I thought we would start with a little linguistic foundation for understanding semantic organization in the brain. In the neuroscience literature, the term "semantics" is often used as if it were a simple unified concept, and often refers to lexical and/or conceptual semantics. But from a linguistic standpoint there's a lot more to it. This first set of readings is aimed at scratching the surface of this complexity. One is on lexical semantics and the other two are more general papers by Ray Jackendoff, which will provide some additional linguistic background, including discussions of syntax and phonology. If you have access to Jackendoff's book, Foundations of Language, Chapters 9 and 10 provide a more thorough discussion of these issues in semantics.

Barker, C. 2001. Lexical semantics. Encyclopedia of Cognitive Science. Macmillan.
http://barker.linguistics.fas.nyu.edu/Research/barker-lexical.pdf

Jackendoff R. (2003). Précis of Foundations of language: brain, meaning, grammar, evolution. Behav Brain Sci., 26(6):651-65; discussion 666-707.

Jackendoff R. (2007). A Parallel Architecture perspective on language processing. Brain Res., 1146:2-22.

Tuesday, December 18, 2007

Mirror Neurons on Scientific American blog

Check out the latest entry on Scientific American's "Mind Matters" blog. It is a comment on mirror neurons by yours truly.

http://science-community.sciam.com/thread.jspa?threadID=300005636

Friday, December 14, 2007

Is there an auditory "where" stream? or Congrats to Dr. Smith

Congratulations to Dr. Kevin Robert Smith, who just this morning successfully defended his dissertation here at Talking Brains West (aka Hickok Lab at UC Irvine). Kevin's thesis started out by asking what the nature of the human auditory "where" stream is, but ended up concluding that there might not be a "where"...

I had originally gotten interested in spatial hearing, motion in particular, because folks like Josef Rauschecker and Tim Griffiths were finding "motion"-sensitive activations in the human planum temporale, darn near our beloved Area Spt. This meant there were two presumed dorsal stream functions (spatial "where" and sensory-motor "how") commingling in the same neural neighborhood. I wondered whether the dorsal stream might be composed of two anatomically separate and functionally independent systems, or whether the very same neural real estate was occupied by both spatial and sensory-motor systems.

Enter Kevin Smith (my grad student) and Kourosh Saberi (my colleague and collaborator). Together we decided first to make sure we could replicate the auditory motion effects in the planum and then to see how they relate to area Spt. Kourosh, our local auditory guru, suggested that we build a control into our first experiment. While other folks had contrasted moving with non-moving sounds and found PT activation, no one had tried to assess the effects of non-moving but spatially varying sound sources. So we had the usual moving condition plus another condition in which stationary sounds randomly appeared at different locations during the activation block (other studies used blocks of stationary sounds that only appeared at one location). To our surprise, the non-moving but spatially varying stimuli activated the PT "motion area" just as robustly as the moving stimuli. Kevin's second experiment searched for a motion-selective area using an event-related/adaptation design. Same result: PT regions that respond to motion also respond just as well to non-moving but spatially varying stimuli.
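
To make the logic of the conditions concrete, here is a toy sketch of the three stimulus types in play. The azimuth grid, number of sounds per block, and so on are made-up illustrative parameters, not the actual stimulus parameters from these studies.

```python
# Toy sketch of three stimulus conditions: moving sounds, stationary sounds
# fixed at one location (as in earlier studies), and stationary sounds that
# relocate randomly across the block (the new control condition).
# All parameter values here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_sounds = 16                        # sounds per activation block (assumed)
azimuths = np.linspace(-90, 90, 7)   # candidate source azimuths in degrees (assumed)

# Moving: each sound sweeps from a start azimuth to an end azimuth
moving = list(zip(rng.choice(azimuths, n_sounds), rng.choice(azimuths, n_sounds)))

# Stationary, single location: every sound appears at the same azimuth
stationary_fixed = [(0.0, 0.0)] * n_sounds

# Stationary but spatially varying: each sound stays put,
# but appears at a randomly chosen azimuth
stationary_varying = [(a, a) for a in rng.choice(azimuths, n_sounds)]

for name, cond in [("moving", moving),
                   ("stationary, fixed", stationary_fixed),
                   ("stationary, varying", stationary_varying)]:
    print(f"{name}: {cond[:3]} ...")
```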

So there's no motion area. But clearly there is still a "where" pathway, right? After all, in both of Kevin's experiments, manipulating the location of a sound source causes activation in the PT. Well, Robert Zatorre, for one, might argue otherwise. In a 2002 paper, Zatorre found that putative "spatial" activation effects were only evident when spatial information provided cues to auditory object identity. He suggested that there is no pure "where" pathway. Instead, "where" interacts extensively with "what."

We were not so sure, so in Kevin's third experiment (almost submitted, right Kevin?), he compared activation while subjects listened to a single talker that was either presented at a single location or bounced around among three locations. He found more PT activation for the three-location condition than for the one-location condition. A clear spatial effect, right? Yes, but... He also had a three-talker condition: three voices presented simultaneously. These voices were presented either at a single location (and stayed put) or at three different locations (and also stayed put at their respective locations). We again found more activation for the three-location condition than for the one-location condition, which might be viewed as a spatial effect, except that this 3-talker/3-location condition produced significantly more activation than the 1-talker/3-location condition. This is odd on a pure spatial account because the 3-talker/3-location condition doesn't involve any spatial change -- all sound sources stay put -- whereas the 1-talker/3-location condition involves a lot of spatial change (a new location every second). It seems that the increase in activation for the 3-talker/3-location condition results from the interaction of spatial and object information.
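
Laid out as a 2 x 2 design (talkers x locations), the comparisons at stake look like this. The sketch below is just bookkeeping for the contrasts discussed above; no activation values are implied.

```python
# The 2 x 2 design of the third experiment (talkers x locations) and the
# comparisons discussed in the text. Purely illustrative bookkeeping;
# it contains no data and implies no effect sizes.
from itertools import product

talkers = ["1 talker", "3 talkers"]
locations = ["1 location", "3 locations"]
conditions = list(product(talkers, locations))
print("Conditions:", conditions)

# Comparisons discussed above:
#   (1 talker, 3 locations)  > (1 talker, 1 location)    looks "spatial"
#   (3 talkers, 3 locations) > (3 talkers, 1 location)   also looks "spatial"
#   (3 talkers, 3 locations) > (1 talker, 3 locations)   the puzzle: more
#       activation despite no spatial change at all, pointing to an
#       interaction of spatial and object information rather than a
#       pure "where" effect
```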

In other words, I think Zatorre is right. There is no pure auditory "where" system, but rather a system that uses spatial information (perhaps computed subcortically?) to segregate auditory objects.

So what is the auditory dorsal stream doing? I would say sensory-motor integration is the best characterization, except that I have suggested that such a system may not be part of the auditory system proper (see "The Auditory Stream May Not Be Auditory"). Maybe the "stream" concept is nearing the end of its usefulness. Rather than thinking about processing streams within a sensory modality, maybe we need to start thinking about interfaces between sensory systems and other systems, such as a sensory-motor interface and a sensory-conceptual interface. So where does that leave "where"? Who knows.

References

Smith KR, Saberi K, Hickok G. (2007). An event-related fMRI study of auditory motion perception: no evidence for a specialized cortical system. Brain Res., 1150:94-99.

Smith KR, Okada K, Saberi K, Hickok G. (2004). Human cortical auditory motion areas are not motion selective. Neuroreport, 15(9):1523-1526.

Zatorre RJ, Bouffard M, Ahad P, Belin P. (2002). Where is 'where' in the human auditory cortex? Nat Neurosci., 5(9):905-909.

Wednesday, December 12, 2007

Semantics and brain course

I'm teaching a graduate course next quarter on semantics and the brain. It's my annual 'I need to know more about this topic so I might as well learn out loud and get teaching credit for doing it' course. I thought it might be fun to post readings and discussion summaries on this blog. So if anyone wants to follow along and join the discussion, you are welcome to! Our Winter quarter here at UC Irvine starts the week of Jan. 7 and runs for 10 weeks. We will emphasize semantic dementia, as this seems to be the syndrome du jour for understanding semantic functions, but we will also look at functional imaging, aphasia, recent computational models, etc.

My working hypothesis is that semantic dementia involves a general conceptual-semantic deficit (i.e., one not specific to language). This is different from the kind of lexical-semantic interface system that David and I talk about, which really is concerned specifically with lexical-semantic linkages to phonological representations. This idea may reconcile the opposing views regarding "semantic" processing in the anterior temporal lobe, based on data from semantic dementia (à la Sophie Scott and Richard Wise), vs. the posterior temporal lobe, based on data from aphasia (as we and others have argued). Specifically, posterior temporal systems may be more lexical-semantic, interfacing semantic systems with lexical-phonological representations, whereas anterior temporal systems may involve more general conceptual-semantic operations beyond the language system. Hopefully, based on readings in this course, we will be able to confirm or refute this working hypothesis.

Monday, December 10, 2007

Bilateral organization of motor participation in speech perception?

"Shane" left an important comment on our Mirror Neuron Results entry, pointing out a couple of papers by Iacoboni's group that address the neuropsychological data relevant to the MN theory of speech perception. Thanks for bringing up these papers, Shane, they are definitely worth discussing here.

Let's start with the Wilson and Iacoboni (2006) paper, which I actually like quite a bit. The fundamental result is that when subjects passively listened to native and non-native phonemes that varied in how readily they could be articulated, fMRI-measured activity in auditory areas covaried with the producibility of the non-native phonemes. This suggests that sensory mechanisms are important in guiding speech articulation, as we and others, such as Frank Guenther, have suggested. Wilson and Iacoboni agree, but also argue that the motor system "plays an active role," concluding that "speech perception is neither purely sensory nor motor, but rather a sensorimotor process." I don't think the data from this paper provide crucial evidence supporting a critical role of the motor system, but let's hold that discussion for a subsequent post. What I'd like to address is the point that Shane brought up regarding this paper:

Admirably, Wilson & Iacoboni attempt to deal with the question of Broca's aphasia. In attempting to explain why Broca's aphasics, who have large left frontal lesions, nonetheless show preserved speech recognition, they suggest, "It is possible that in Broca's aphasia, motor areas in the right hemisphere continue to support speech perception..." (p. 323). This is an odd proposal. Basically, one has to assume that there are motor-speech systems in the right frontal lobe that are neither necessary nor sufficient for speech production, but which can nonetheless fully support speech perception. This is a strange kind of motor-speech system. More to the point, though, if speech perception depends on active participation of the motor speech system, then functional destruction of the motor speech system, as occurs in severe Broca's aphasia, should severely impact speech recognition. It does not. I don't see any theoretical detour around this empirical fact.

Wilson and Iacoboni offer another possibility to explain Broca's aphasia in the context of a motor theory of speech perception. They point out that many such patients indeed have speech perception deficits when assessed using sublexical tasks such as syllable discrimination. This is true, of course, but as we have argued repeatedly (see any Hickok & Poeppel paper, and/or the series of entries on meta-linguistic tasks), performance on these sorts of tasks is not predictive of ecologically valid speech recognition abilities.

Conclusion: motor speech systems are NOT playing any kind of major role in speech recognition. The mirror neuron theory of speech perception, just like its predecessor, the motor theory of speech perception, is wrong in any strong form.

Once we all agree on this, we will be in a position to have an interesting discussion, because we can then begin to ask questions like: Do motor speech systems participate in any, say, supportive aspect of speech recognition? If so, under what conditions? (Perhaps under noisy listening conditions.) What kinds of operations might be supported by this system? (How about predictive coding, attentional modulation, etc.?)

So let's start this discussion by looking first for evidence of motor involvement in speech recognition. Shane suggested this paper: Meister et al. (2007). The essential role of premotor cortex in speech perception. Current Biology, 17, 1692-1696. We'll start there in our next post.

Thursday, December 6, 2007

Talking Brains continues to grow

We started Talking Brains as an experiment to see if blogging is actually a useful way to communicate in the scientific community. Two things have surprised me over the last 6+ months. 1. People read blogs, including this one, which received 1,400 visits last month. This is not much compared to the big boys -- I learned that some Scientific American blogs get a million or so visits a month -- but 1,400 is not bad considering the size of our field. 2. People don't interact much on blogs, at least not on this one. I had hoped initially that this would be a discussion forum, but it hasn't turned out that way except in a few instances. So is it useful? Hard to tell since no one comments. :-) Hopefully, some of the ideas and commentary we've put up here have stimulated research. If so, then it's probably worth it.

If anyone has any ideas on how to get more interaction, please let us know.

Monday, December 3, 2007

Task dependent "sensory" responses in prefrontal cortex

Tania Pasternak from Rochester visited the Center for Cognitive Neuroscience here at UC Irvine as part of our colloquium series, and presented some interesting data on prefrontal cortex responses in a visual motion discrimination task. One finding is relevant to language work:

PFC neurons show visual motion direction selectivity (top right panel; cf. preferred-direction curve vs. anti-preferred) like MT (top left panel) -- an interesting observation in its own right. But this effect holds only when the monkey is performing a direction discrimination task. If, instead, the monkey performs a speed discrimination task, or just passively views the stimulus, the selectivity disappears (bottom right panel). Thus, stimulus-specific responses are task dependent in PFC. MT direction selectivity is independent of task, however: MT neurons respond to a moving stimulus in their preferred direction whether the monkey is performing a direction discrimination task or just passively fixating (bottom left panel).

So what's the connection to language research? The connection is the observation of task-dependent involvement of frontal cortex in a putatively "sensory" ability. Ask aphasics with frontal lesions to discriminate pairs of syllables and chances are they will be impaired. Ask healthy participants to discriminate syllables in an fMRI experiment and chances are you'll find frontal activation. We have argued that this is because the task (discrimination), not the sensory processing, induces frontal lobe involvement (see Hickok & Poeppel, 2000, 2004, 2007 for review). Pasternak's data validate this claim: frontal lobe involvement in putatively sensory abilities is task dependent.
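
As an aside for readers who don't work with single-unit data, direction selectivity is commonly quantified with a simple index comparing preferred and anti-preferred responses; selectivity that is present in one task and absent in another shows up as the index collapsing toward zero. Here is a minimal sketch. The firing rates are invented placeholders, and this is not necessarily the exact measure Zaksas & Pasternak used.

```python
# A common direction-selectivity index: DSI = (R_pref - R_anti) / (R_pref + R_anti).
# DSI near 1 means strongly direction selective; near 0 means no selectivity.
# The firing rates below are invented placeholders purely to illustrate the formula.

def direction_selectivity_index(r_pref: float, r_anti: float) -> float:
    """Contrast of responses to preferred vs. anti-preferred motion directions."""
    return (r_pref - r_anti) / (r_pref + r_anti)

# Hypothetical pattern mirroring the qualitative result described above:
# MT stays selective regardless of task; PFC is selective only during the
# direction discrimination task.
print("MT, direction task:   ", direction_selectivity_index(40.0, 10.0))
print("MT, passive fixation: ", direction_selectivity_index(38.0, 11.0))
print("PFC, direction task:  ", direction_selectivity_index(30.0, 12.0))
print("PFC, speed task:      ", direction_selectivity_index(20.0, 19.0))
```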

Reference:

Zaksas & Pasternak (2006). Directional signals in the prefrontal cortex and in area MT during a working memory for visual motion task. J. Neurosci., 26(45):11726-42.

New Survey: Best Conference for Brain-Language Research

Which is the best annual conference for brain-language research? Is it a language conference that has a bit of neuroscience representation? A neuroscience conference with a bit of language representation? Or an aphasia conference? Here are links to the various meetings. What other conferences do you present at?

Academy of Aphasia
Architectures and Mechanisms of Language Processing (AMLaP)
Cognitive Neuroscience Society Meeting (CNS)
CUNY Conference on Human Sentence Processing
Society for Neuroscience (SfN)