Saturday, December 29, 2007

Favorite Jackendoff quote

While doing some reading for my Semantics and Brain course, I've found my favorite quote from Jackendoff, maybe my favorite from all of linguistics. Jackendoff (2003, BBS, 26, 651-707) was making the point that the structure of language is still far from understood despite decades of research by a whole community of linguists. He continues,

"Yet every child does it by the age of ten or so. Children don't have to make the choices we do... They already f-know it in advance." p. 653

Although linguists may be justified in being f-annoyed that little tykes know more about language than they do, Jackendoff's use of the term f-know is not an abbreviated expletive. The f actually stands for functional, and the point is that kids seem to have some functional knowledge of language structure (they f-know it) when they approach the task of language acquisition. This, of course, is not a new claim. I just like the way Jackendoff f-puts it. :-)

If you haven't read Ray's book, Foundations of Language, or at least the précis in BBS, it is worth a serious look. Lots of ideas that make contact between linguistics, psycholinguistics, and neuroscience.

Jackendoff R.
Précis of Foundations of language: brain, meaning, grammar, evolution.
Behav Brain Sci. 2003 Dec;26(6):651-65; discussion 666-707.

Wednesday, December 26, 2007

The French Connection III: New Neuron paper by Giraud team

Hi there. Sorry for the brief absence -- I've been a bit 'indisposed' medically, but am now ready for a fun 2008 on Talking Brains. Happy New Year.

As I indicated before, Anne-Lise Giraud and her colleagues (including me) have a new paper in Neuron that illustrates one of the points I've been carrying on about for a while -- multi-time resolution processing and the Asymmetric Sampling in Time (AST) idea.

The paper, Endogenous Cortical Rhythms Determine Cerebral Specialization for Speech Perception and Production (Neuron 56: 1127-1134), describes a study using concurrent EEG and fMRI. The study shows how theta (slower sampling) and gamma (faster sampling) rhythms (as quantified by EEG) are bilaterally but asymmetrically distributed in the auditory cortices. Moreover (cool bonus data), mouth and tongue motor areas showed theta and gamma -- illustrating that the same cortical oscillations are observed in auditory and speech-motor areas. Cool, no?
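
For readers who want a concrete sense of what "quantified by EEG" means here, below is a minimal sketch of extracting theta- and gamma-band power from a single EEG channel (a generic scipy recipe of my own, not the paper's actual analysis pipeline; the band edges are approximate):

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def band_power(eeg, fs, lo_hz, hi_hz):
        """Band-limited instantaneous power: band-pass filter,
        then square the Hilbert envelope."""
        b, a = butter(4, [lo_hz / (fs / 2), hi_hz / (fs / 2)], btype='band')
        return np.abs(hilbert(filtfilt(b, a, eeg))) ** 2

    fs = 250                        # sampling rate (Hz)
    eeg = np.random.randn(60 * fs)  # stand-in for one channel of EEG

    theta = band_power(eeg, fs, 4, 8)    # slow, roughly syllable-scale rhythm
    gamma = band_power(eeg, fs, 30, 60)  # fast, roughly segment-scale rhythm

In a concurrent EEG/fMRI design, band-power time series like these can then be convolved with a hemodynamic response function and entered as regressors for the BOLD signal -- the general logic behind relating endogenous rhythms to regional activity.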

Anyway, if you have wondered about some of the temporal claims (and specifically AST) that have appeared in Hickok & Poeppel 2000/2004/2007, recent papers that show exciting empirical support are:

• Giraud et al. (2007), Neuron
• Luo & Poeppel (2007), Neuron
• Boemio et al. (2005), Nature Neuroscience

Martin Meyer and his colleagues (Zürich) have also accumulated some interesting evidence regarding these hypotheses. More on the work by that group soon, in a separate posting.

Friday, December 21, 2007

Semantics and Brain course - reading set #1

I thought we would start with a little linguistic foundation for understanding semantic organization in the brain. In the neuroscience literature, the term "semantics" is often used as if it were a simple unified concept, and often refers to lexical and/or conceptual semantics. But from a linguistic standpoint there's a lot more to it. This first set of readings is aimed at scratching the surface of this complexity. One is on lexical semantics and the other two are more general papers by Ray Jackendoff, which will provide some additional linguistic background, including discussions of syntax and phonology. If you have access to Jackendoff's book, Foundations of Language, Chapters 9 and 10 provide a more thorough discussion of issues in semantics.

Barker, C. 2001. Lexical semantics. Encyclopedia of Cognitive Science. Macmillan.
http://barker.linguistics.fas.nyu.edu/Research/barker-lexical.pdf

Jackendoff R. (2003). Précis of Foundations of language: brain, meaning, grammar, evolution. Behav Brain Sci., 26(6):651-65; discussion 666-707.

Jackendoff R. (2007). A Parallel Architecture perspective on language processing. Brain Res., 1146:2-22.

Tuesday, December 18, 2007

Mirror Neurons on Scientific American blog

Check out the latest entry on Scientific American's "Mind Matters" blog. It is a comment on mirror neurons by yours truly.

http://science-community.sciam.com/thread.jspa?threadID=300005636

Friday, December 14, 2007

Is there an auditory "where" stream? or Congrats to Dr. Smith

Congratulations to Dr. Kevin Robert Smith, who just this morning successfully defended his dissertation here at Talking Brains West (aka Hickok Lab at UC Irvine). Kevin's thesis started out asking the question, What is the nature of the human auditory "where" stream? It ended up concluding that there might not be a "where"...

I had originally gotten interested in spatial hearing, motion in particular, because folks like Josef Rauschecker and Tim Griffiths were finding "motion" sensitive activations in the human planum temporale, darn near our beloved Area Spt. This meant there were two presumed dorsal stream functions (spatial "where" and sensory-motor "how") co-mingling in the same neural neighborhood. I wondered whether the dorsal stream might be composed of two anatomically separate and functionally independent systems, or whether the very same neural real estate was occupied by spatial and sensory-motor systems.

Enter Kevin Smith (my grad student) and Kourosh Saberi (my colleague and collaborator). Together we decided first to make sure we could replicate the auditory motion effects in the planum, and then to see how they relate to area Spt. Kourosh, our local auditory guru, suggested that we build a control into our first experiment. While other folks had contrasted moving with non-moving sounds and found PT activation, no one had tried to assess the effects of non-moving but spatially varying sound sources. So we had the usual moving condition plus another condition in which stationary sounds randomly appeared at different locations during the activation block (other studies used blocks of stationary sounds that only appeared at one location). To our surprise, the non-moving but spatially varying stimuli activated the PT "motion area" just as robustly as the moving stimuli. Kevin's second experiment searched for a motion-selective area using an event-related/adaptation design. Same result: PT regions that respond to motion also respond just as well to non-moving but spatially varying stimuli.

So there's no motion area. But clearly there is still a "where" pathway, right? After all, in both of Kevin's experiments, manipulating the location of a sound source causes activation in the PT. Well, Robert Zatorre, for one, might argue otherwise. In a 2002 paper, Zatorre found that putative "spatial" activation effects were only evident when spatial information provided cues to auditory object identity. He suggested that there is no pure "where" pathway. Instead, "where" interacts extensively with "what."

We were not so sure, so in Kevin's third experiment (almost submitted, right Kevin?), he compared activation during listening to a single talker that was either presented at a single location or bounced around between three locations. He found more PT activation for the three-location condition than the one-location condition. A clear spatial effect, right? Yes, but... He also had a three-talker condition: three voices presented simultaneously. These voices were presented either at a single location (and stayed put) or at three different locations (and also stayed put at their respective locations). We found more activation for the three-location condition than the one-location condition, which might be viewed as a spatial effect, except that this 3-talker/3-location condition produced significantly more activation than the 1-talker/3-location condition. This is odd according to a pure spatial account, because the 3-talker/3-location condition doesn't involve any spatial change -- all sound sources stay put -- whereas the 1-talker/3-location condition involved a lot of spatial change (a new location every second). It seems that the increase in activation for the 3-talker/3-location condition results from the interaction of spatial and object information.

In other words, I think Zatorre is right. There is no pure auditory "where" system, but rather a system that uses spatial information (perhaps computed subcortically?) to segregate auditory objects.

So what is the auditory dorsal stream doing? I would say sensory-motor integration is the best characterization, except that I have suggested that such a system may not be part of the auditory system proper (see "The Auditory Stream May Not Be Auditory"). Maybe the "stream" concept is nearing the end of its usefulness. Rather than thinking about processing streams within a sensory modality, maybe we need to start thinking about interfaces between sensory systems and other systems, such as a sensory-motor interface and a sensory-conceptual interface. So where does that leave "where"? Who knows.

References

Smith KR, Saberi K, Hickok G. (2007). An event-related fMRI study of auditory motion perception: no evidence for a specialized cortical system. Brain Res, 1150:94-99.

Smith KR, Okada K, Saberi K, Hickok G. (2004). Human cortical auditory motion areas are not motion selective. Neuroreport, 15(9):1523-1526.

Zatorre RJ, Bouffard M, Ahad P, Belin P. (2002). Where is 'where' in the human auditory cortex? Nat Neurosci, 5(9):905-909.

Wednesday, December 12, 2007

Semantics and brain course

I'm teaching a graduate course next quarter on semantics and the brain. It's my annual, 'I need to know more about this topic so I might as well learn out loud and get teaching credit for doing it' course. I thought it might be fun to post readings and discussion summaries on this blog. So if anyone wants to follow along and join the discussion, you are welcome to! Our Winter quarter here at UC Irvine starts the week of Jan. 7 and runs for 10 weeks. We will emphasize semantic dementia, as this seems to be the syndrome du jour for understanding semantic functions, but we will also look at functional imaging, aphasia, recent computational models, etc.

My working hypothesis is that semantic dementia involves a general conceptual-semantic deficit (i.e., not specific to language). This is different from the kind of lexical semantic interface system that David and I talk about, which really is concerned specifically with lexical semantic linkages to phonological representations. This idea may reconcile the opposing views regarding "semantic" processing in the anterior temporal lobe based on data from semantic dementia (a la Sophie Scott and Richard Wise) vs. in the posterior temporal lobe based on data from aphasia (as we and others have argued). Specifically, posterior temporal systems may be more lexical-semantic, interfacing semantic systems with lexical-phonological representations, whereas anterior temporal systems may involve more general conceptual-semantic operations beyond the language system. Hopefully, based on readings in this course, we will be able to confirm or refute this working hypothesis.

Monday, December 10, 2007

Bilateral organization of motor participation in speech perception?

"Shane" left an important comment on our Mirror Neuron Results entry, pointing out a couple of papers by Iacoboni's group that address the neuropsychological data relevant to the MN theory of speech perception. Thanks for bringing up these papers, Shane, they are definitely worth discussing here.

Let's start with the Wilson and Iacoboni (2006) paper, which I actually like quite a bit. The fundamental result is that when subjects passively listen to native and non-native phonemes that vary in how readily they can be articulated, fMRI-measured activity in auditory areas covaries with the producibility of the non-native phonemes. This suggests that sensory mechanisms are important in guiding speech articulation, as we and others, such as Frank Guenther, have suggested. Wilson and Iacoboni agree, but also argue that the motor system "plays an active role," concluding that "speech perception is neither purely sensory nor motor, but rather a sensorimotor process." I don't think the data from this paper provide crucial evidence supporting a critical role of the motor system, but let's hold that discussion for a subsequent post. What I'd like to address is the point that Shane brought up regarding this paper:

Admirably, Wilson & Iacoboni attempt to deal with the question of Broca's aphasia. In attempting to explain why Broca's aphasics, who have large left frontal lesions, nonetheless show preserved speech recognition, they suggest, "It is possible that in Broca's aphasia, motor areas in the right hemisphere continue to support speech perception..." (p. 323). This is an odd proposal. Basically, one has to assume that there are motor-speech systems in the right frontal lobe that are neither necessary nor sufficient for speech production, but which can nonetheless fully support speech perception. This is a strange kind of motor-speech system. More to the point, though, if speech perception depends on active participation of the motor speech system, then functional destruction of the motor speech system, as occurs in severe Broca's aphasia, should severely impact speech recognition. It does not. I don't see any theoretical detour around this empirical fact.

Wilson and Iacoboni offer another possibility to explain Broca's aphasia in the context of a motor theory of speech perception. They point out that many such patients indeed have speech perception deficits when assessed using sublexical tasks such as syllable discrimination. This is true, of course, but as we have argued repeatedly (see any Hickok & Poeppel paper, and/or the series of entries on meta-linguistic tasks), performance on these sorts of tasks is not predictive of ecologically valid speech recognition abilities.

Conclusion: motor speech systems are NOT playing any kind of major role in speech recognition. The mirror neuron theory of speech perception, just like its predecessor, the motor theory of speech perception, is wrong in any strong form.

Once we all agree to this, then we are in a position to have an interesting discussion, because we can then begin to ask questions like, Do motor speech systems participate in any, say supportive, aspect of speech recognition? If so, under what conditions? (Perhaps under noisy listening conditions.) What kind of operations might be supported by this system? (How about predictive coding, attentional modulation, etc.?)

So let's start this discussion by looking first for evidence of motor involvement in speech recognition. Shane suggested this paper: Meister et al. (2007). The essential role of premotor cortex in speech perception. Current Biology, 17, 1692-1696, so we'll start here in our next post.

Thursday, December 6, 2007

Talking Brains continues to grow

We started Talking Brains as an experiment to see if blogging is actually a useful way to communicate in the scientific community. Two things have surprised me over the last 6+ months. 1. People read blogs, including this one, which received 1400 visits last month. This is not much compared to the big boys -- I learned that some Scientific American blogs get a million or so visits a month -- but 1400 is not bad considering the size of our field. 2. People don't interact much on blogs, at least this one. I had hoped initially that this would be a discussion forum, but it hasn't turned out that way except in a few instances. So is it useful? Hard to tell since no one comments. :-) Hopefully, some of the ideas and commentary we've put up here have stimulated research. If so, then it's probably worth it.

If anyone has any ideas on how to get more interaction, please let us know.

Monday, December 3, 2007

Task dependent "sensory" responses in prefrontal cortex

Tania Pasternak from Rochester visited the Center for Cognitive Neuroscience here at UC Irvine as part of our colloquium series, and presented some interesting data on prefrontal cortex responses in a visual motion discrimination task. One finding is relevant to language work:

PFC neurons show visual motion direction selectivity (compare responses to preferred vs. anti-preferred directions), much like MT neurons -- an interesting observation in its own right. But this effect holds only when the monkey is performing a direction discrimination task. If instead the monkey performs a speed discrimination task, or just passively views the stimulus, the selectivity disappears. Thus, stimulus-specific responses are task dependent in PFC. MT direction selectivity, however, is independent of task: MT neurons respond to a moving stimulus in their preferred direction whether the monkey is performing a direction discrimination task or just passively fixating.

So what's the connection to language research? The connection is the observation of task-dependent involvement of frontal cortex in a putatively "sensory" ability. Ask aphasics with frontal lesions to discriminate pairs of syllables and chances are they will be impaired. Ask healthy participants to discriminate syllables in an fMRI experiment and chances are you'll find frontal activation. We have argued this is because the task (discrimination), not the sensory processing, induces frontal lobe involvement (see Hickok & Poeppel, 2000, 2004, 2007 for review). Pasternak's data validate this claim: frontal lobe involvement in putatively sensory abilities is task dependent.

Reference:

Zaksas & Pasternak (2006). Directional signals in the prefrontal cortex and in area MT during a working memory for visual motion task. J. Neurosci., 26(45):11726-42.

New Survey: Best Conference for Brain-Language Research

Which is the best annual conference for brain-language research? Is it a language conference that has a bit of neuroscience representation? A neuroscience conference with a bit of language representation? Or an aphasia conference? Here are links to the various meetings. What other conferences do you present at?

Academy of Aphasia
Architectures and Mechanisms of Language Processing (AMLaP)
Cognitive Neuroscience Society Meeting (CNS)
CUNY Conference on Human Sentence Processing
Society for Neuroscience (SfN)

Friday, November 30, 2007

Mirror Neuron Survey Results

Ok, the results are in! A majority (62%) of Talking Brains readers shun mirror neurons as the primary substrate for speech perception. Only 13% believe these cells play a critical role, and 23% are not sure. Personally, I would love to have a discussion here between those of us who don't believe the mirror neuron theory of speech perception and those who do. It doesn't have to get nasty. I had some great face-to-face discussions with my former post doc Stephen Wilson about this stuff. Stephen came from Iacoboni's lab, which has published on the role of motor areas in speech perception. We ended up coming to a reasonable consensus, I think. (Of course, he did leave for UCSF so...) So speak up on the topic! New survey coming soon.

Post Doc Opportunities -- Moss Rehab & UC Irvine

A new post doc opportunity has opened up at Moss Rehab in Philadelphia. This is a great lab!

Also, Hickok lab is still looking to fill a post doc position. Contact Lisette Isenberg (aisenber@uci.edu)

POST DOC AND RA OPENINGS
The Language and Aphasia Laboratory of Moss Rehabilitation Research Institute (MRRI), Philadelphia, PA, is accepting applications for post-doctoral fellowships and full-time BA/BS assistantships, starting Spring or Summer 2008. Under the direction of Myrna Schwartz, Ph.D., the laboratory conducts research on normal and aphasic language processes. Topics include connectionist modeling of lexical disorders, cognitive control in short-term memory and language processing, and advanced methods of lesion-symptom mapping. Candidates can expect on-the-job training in patient research. Send cover letter, C.V., and references to Laura Barde: email: bardel@einstein.edu; fax: 215-456-9613; mail: Moss Rehabilitation Research Institute, 1200 West Tabor Road, MossRehab 4th fl. Sley, Philadelphia, PA, 19141.

Wednesday, November 28, 2007

Nature Reviews Neuroscience Research Highlights

Last month I posted a pretty harsh commentary on a mirror neuron-related Research Highlight piece in Nature Reviews Neuroscience. These Highlights pieces are written by the Editors of NRN, and are quite good, which is why I've taken to reading them ever since my free subscription started after our NRN paper appeared. Just in case the nice folks at NRN read my comment on their piece, let me clarify -- for fear they will never consider one of my papers again! :-) -- that the Highlights piece in fact accurately described the original article's position on the role of mirror neurons in complex social behaviors. In other words, the over-interpretation of the mirror-neuron data that was summarized in the NRN piece was not the editor's interpretation but the paper authors' interpretation. By the way, one of the editors at NRN is Katherine Whalley, who worked with us on our paper. She was fantastic to work with. Her editorial comments and suggestions were right on, and really helped pull the paper into shape. I wish she could help tune all of my papers!

This month's Research Highlights section has some interesting tidbits including pieces on the physiological basis of TMS (Allen et al. 2007, Science, 317:1918-21), the demonstration of resting-state neural networks in infants (Fransson, et al. 2007, PNAS, 104:15531-6), and what looks to be a very interesting computational study (Roudi & Latham 2007, PLoS Comput. Biol., 3: e141) showing that the number of memories that can be stored in a neural network is smaller than previously thought. This implies that multiple networks must be employed by the brain to store large amounts of information. There's also a new review paper in the current issue by Larry Squire, John Wixted, and Robert Clark arguing that recollection and familiarity are not anatomically separated in the medial temporal lobe. Check it out!
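
As a point of reference for that network-capacity result: the classic baseline is the Hopfield model, which stores at most roughly 0.14N random patterns in a network of N binary units. Here is a toy numpy demonstration of that capacity limit (my own sketch; Roudi & Latham's analysis concerns more biologically realistic networks, where the limit turns out to be far more severe):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 200  # number of binary units

    def recovers(W, pattern, flips=10, steps=20):
        """Corrupt a stored pattern, run the dynamics, test recovery."""
        s = pattern.copy()
        s[rng.choice(N, flips, replace=False)] *= -1
        for _ in range(steps):
            s = np.where(W @ s >= 0, 1, -1)  # synchronous sign update
        return np.array_equal(s, pattern)

    for P in (10, 25, 50):  # number of stored patterns
        patterns = rng.choice([-1, 1], size=(P, N))
        W = patterns.T @ patterns / N  # Hebbian outer-product learning
        np.fill_diagonal(W, 0)
        rate = np.mean([recovers(W, p) for p in patterns])
        print(f"P/N = {P/N:.2f}: perfect recall on {rate:.0%} of patterns")

Recall is essentially perfect well below the ~0.14 loading and collapses above it.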

Tuesday, November 27, 2007

The French Connection II: TalkingBrains in Paris

And in the spirit of highlighting work from other labs:

Anne-Lise Giraud and her colleagues, at the Ecole Normale Superieure in Paris, generate a steady stream of important papers. If you have not yet become a reader of her work, start now.

Anne-Lise has done important studies on speech perception, auditory perception, cochlear implants, language comprehension, and multi-sensory processing, especially voice-face interactions.

Here are a few papers, to stimulate the appetite:
  • Giraud AL, Lorenzi C, Wable J, Johnsrude IS, Frackowiak RSJ, Kleinschmidt A (2000). Representation of temporal envelope in the human auditory cortex. Journal of Neurophysiology, 84: 1588-1598.
  • Giraud AL, Price CJ (2001). The constraints functional anatomy places on classical models of auditory word processing. Journal of Cognitive Neuroscience, 13: 754-765.
  • von Kriegstein K, Giraud AL (2004). Functionally distinct territories in the right STS for the processing of voices. Neuroimage, 22: 948-955.
  • Giraud AL, Kell C, Thierfelder C, Sterzer P, Preibisch C, Kleinschmidt A (2004). Contributions of sensory input, auditory search and verbal comprehension to cortical activity during speech processing. Cerebral Cortex, 14: 247-255. [see other post]
  • von Kriegstein K, Giraud AL (2006). Implicit multisensory associations influence voice recognition. PLoS Biology.
I am a big fan of this work -- so I managed to join them in a study. Stay tuned for a French Connection III posting on a new paper, hopefully very very soon ....

The French Connection I: An important paper that Greg and I overlooked

When working on our paper for Nature Reviews Neuroscience (Hickok & Poeppel, 2007; see an early blog entry), we overlooked a terrific paper that we should have cited and whose results we should have incorporated.

Giraud AL, Kell C, Thierfelder C, Sterzer P, Russ MO, Preibisch C, Kleinschmidt A. Contributions of sensory input, auditory search and verbal comprehension to cortical activity during speech processing. Cerebral Cortex. 2004 Mar;14(3):247-55.

Giraud and her colleagues presented participants with (i) regular sentences, (ii) broad-band speech-envelope noise signals (BBSEN), and (iii) narrow-band speech-envelope noise (NBSEN). Subjects were scanned with fMRI before and after a training period: before training, only the regular sentences were intelligible; after training, the BBSEN became highly intelligible while the NBSEN remained entirely unintelligible -- see the paper for details of the materials.
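
To give a concrete sense of the stimuli: speech-envelope noise discards the fine structure of a speech signal and imposes its slow amplitude envelope on a noise carrier. A minimal sketch of the generic recipe (my own illustration; see the paper for the actual construction of the broad-band and narrow-band versions):

    import numpy as np
    from scipy.signal import hilbert, butter, filtfilt

    def speech_envelope_noise(speech, fs, env_cutoff_hz=30.0):
        """Replace speech fine structure with noise while keeping
        the slow amplitude envelope."""
        env = np.abs(hilbert(speech))                   # amplitude envelope
        b, a = butter(4, env_cutoff_hz / (fs / 2), btype='low')
        env = np.clip(filtfilt(b, a, env), 0, None)     # smooth, keep >= 0
        out = env * np.random.randn(len(speech))        # modulate white noise
        # match the RMS level of the original signal
        return out * np.sqrt(np.mean(speech ** 2) / np.mean(out ** 2))

Band-limiting the noise carrier rather than using broadband noise would give a narrow-band variant.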

Anne-Lise and her colleagues were then able to separate the same physical stimulus when it could be understood (comprehension condition) versus not. This is the same intuition that forms the basis for sine-wave speech studies, some of Sophie Scott's studies, and many others (e.g. Athena Vouloumanos' experiment in J. Cog. Neuroscience).

(1) The stimulus attributes were reflected in activation in superior temporal cortex (including STS) bilaterally. (2) Natural speech compared to speech-envelope modulated noise selectively activated STS, again bilaterally. (3) Comprehension -- i.e. BBSEN after training as well as regular speech -- implicated bilateral MTG and inferior temporal areas.

So they found a convincing separation between areas principally responsible for sound analysis and areas mediating intelligible speech. Their data also enrich the interpretation of what the left STS is doing. And their data show quite nicely that ventral stream areas are strikingly bilateral.

Sorry Broca-Wernicke-Lichtheim-Geschwind --- that part of lateralization is just wrong! It's much more bilateral when you look at comprehension and ventral stream contributions.

This paper is full of interesting details and discussion. If you are working on the neural basis of speech perception, language comprehension, intelligibility etc. I suggest you read this one. And -- a nice bonus -- the paper provides quite a bit of strong evidence for the model that Greg and I argued for in the 2007 paper.

Sunday, November 25, 2007

A few more ideas for our Top-10 list

I thought that readers (insofar as there are any left) would contribute some votes/ideas here, but the holiday (Thanksgiving in the US, at least) may have slowed down everyone's cortical metabolism.

To continue this Top-10 list (by the way, Greg, do we have the nerve to do the Top-10 most silly or stupid papers one has come across? I bet Bill Idsardi has the nerve ...), here are articles and books that have influenced how I think about the neural basis of language. Again, a necessarily small selection, it should go without saying.

David's End-of-Thanksgiving Approximately Top-10 List:

• Kutas M, Hillyard SA. (1980). Reading senseless sentences: brain potentials reflect semantic incongruity. Science 207(4427):203-5.
The birth of the N400. Hard not to be influenced by that one. In fact, this year we published our own first N400 paper, which was closely related to the original work [Sandeep Prasada, Anna Salajegheh, Anita Bowles, David Poeppel (2007). Characterising kinds and instances of kinds: ERP reflections. Language and Cognitive Processes. DOI: 10.1080/01690960701428292].

• Gallistel, C. R. (1980) The organization of action: A new synthesis. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc. [[Or, if you want a shorter piece, one inspiring about a different set of issues: Gallistel, C.R. (1998) Symbolic processes in the brain: The case of insect navigation. In D. Scarborough & S. Sternberg (Eds.), Methods, models and conceptual issues. Vol. 4 of An invitation to cognitive science. 2nd edition (D. Osherson, General Editor). Cambridge, MA: MIT Press.]]
Randy Gallistel's work is not about psycho- or neurolinguistics; but, pound for pound, he is the best damn cognitive scientist out there. Practically every page has an idea worth considering for our own area of research.

• Marr, D. (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. W.H. Freeman and Company, NY.
Marr's book and his way of thinking about problems is immensely useful for neurolinguistics. Everyone should read the first few chapters.

• Chomsky, N. (1986). Knowledge of language: its nature, origins and use, Praeger, New York.
Nooooaaaam ..... Nooooaaaam ..... The E-language/I-language distinction, among other things. Lots of great stuff. I mean, come on, who has had more key ideas??

• McCarthy RA, Warrington EK. (1988). Evidence for modality-specific meaning systems in the brain. Nature 334(6181):428-30.
Just plain cool lesion data.

• Felleman DJ, Van Essen DC. (1991). Distributed hierarchical processing in the primate cerebral cortex. Cereb Cortex 1(1):1-47.
Again, not neuroscience of language but vision -- but which contemporary study of functional anatomy is not deeply influenced by this seminal paper?

• Corina DP, Vaid J, Bellugi U. (1992). The linguistic basis of left hemisphere specialization. Science. 255(5049):1258-60.
An important contribution to our understanding of modality-independent representation.

• Osterhout L, Holcomb PJ, Swinney DA. (1994). Brain potentials elicited by garden-path sentences: evidence of the application of verb information during parsing. J Exp Psychol Learn Mem Cogn. 20(4):786-803.
The birth of the P600, although one must also acknowledge some of the Colin Brown/Peter Hagoort papers on this at about the same time.

• Salmelin R, Hari R, Lounasmaa OV, Sams M. (1994). Dynamics of brain activation during picture naming. Nature 368(6470):463-5.
A tour de force demonstration that one can use MEG to 'follow a signal through the brain,' by determining the cortical activation sequence.

• Friederici AD. (1995). The time course of syntactic activation during language processing: a model based on neuropsychological and neurophysiological data. Brain Lang. 50(3):259-81.
The clearest statement of the model that argues for a structure-to-insertion-to-cleanup sequence, a la Frazier, and develops the ELAN/LAN-N400-P600 sequence.

• van Turennout M, Hagoort P, Brown CM. (1998). Brain activity during speaking: from syntax to phonology in 40 milliseconds. Science 280(5363):572-4.
A clever study that begins to show how rapidly processing stages are likely to interact or follow one another.

*****Please comment/add/subtract suggestions. At the very least, if enough of us play this game, we can generate a pretty decent syllabus for a graduate seminar that all of us can use -- that would be a decent public service, no? ******

Tuesday, November 20, 2007

Top 10 most important/influential papers in the neuroscience of language

Just curious what folks think are the most important/influential papers or monographs in the history of brain-language research. Here are 16 prominent and generally very highly cited papers off the top of our heads (in alphabetical order). DISCLAIMER: This list was generated based on only a few minutes of thought. It is not intended to be a complete listing of important papers. If we have omitted YOUR important paper, or your personal favorite most important paper, do not take offense. DO, however, click 'comment' at the bottom of this entry and tell us which papers we've failed to include, or which of the papers listed below don't belong in the Top 10!

Binder, J. R., Frost, J. A., Hammeke, T. A., Cox, R. W., Rao, S. M., & Prieto, T. (1997). Human brain language areas identified by functional magnetic resonance imaging. Journal of Neuroscience, 17, 353-362.

Broca, P. (1861). Remarques sur le siège de la faculté du langage articulé; suivies d'une observation d'aphémie (perte de la parole). Bulletins de la Société Anatomique (Paris), 6, 330-357, 398-407.

Broca, P. (1865). Sur le siège de la faculté du langage articulé. Bulletins de la Société d'Anthropologie, 6, 337-393.

Caramazza, A., & Zurif, E. B. (1976). Dissociation of algorithmic and heuristic processes in sentence comprehension: Evidence from aphasia. Brain and Language, 3, 572-582.

Damasio, H., Grabowski, T. J., Tranel, D., Hichwa, R. D., & Damasio, A. R. (1996). A neural basis for lexical retrieval. Nature, 380, 499-505.

Dell, G. S., Schwartz, M. F., Martin, N., Saffran, E. M., & Gagnon, D. A. (1997). Lexical access in aphasic and nonaphasic speakers. Psychological Review, 104, 801-838.

Friederici, A. D. (2002). Towards a neural basis of auditory sentence processing. Trends Cogn Sci, 6, 78-84.

Geschwind, N. (1965). Disconnexion syndromes in animals and man. Brain, 88, 237-294, 585-644.

Grodzinsky, Y. (2000). The neurology of syntax: Language use without Broca's area. Behavioral and Brain Sciences, 23, 1-21.

Linebarger, M. C., Schwartz, M., & Saffran, E. (1983). Sensitivity to grammatical structure in so-called agrammatic aphasics. Cognition, 13, 361-393.

Näätänen, R., Lehtokoski, A., Lennes, M., Cheour, M., Huotilainen, M., Iivonen, A., Vainio, M., Alku, P., Ilmoniemi, R. J., Luuk, A., Allik, J., Sinkkonen, J., & Alho, K. (1997). Language-specific phoneme representations revealed by electric and magnetic brain responses. Nature, 385, 432-434.

Petersen, S. E., Fox, P. T., Posner, M. I., Mintun, M., & Raichle, M. E. (1988). Positron emission tomographic studies of the cortical anatomy of single-word processing. Nature, 331, 585-589.

Poizner, H., Klima, E. S., & Bellugi, U. (1987). What the hands reveal about the brain. Cambridge, MA: MIT Press.

Price, C. J., Wise, R. J. S., Warburton, E. A., Moore, C. J., Howard, D., Patterson, K., Frackowiak, R. S. J., & Friston, K. J. (1996). Hearing and saying: The functional neuro-anatomy of auditory word processing. Brain, 119, 919-931.

Wernicke, C. (1874/1977). Der aphasische symptomencomplex: Eine psychologische studie auf anatomischer basis. In G. H. Eggert (Ed.), Wernicke's works on aphasia: A sourcebook and review (pp. 91-145). The Hague: Mouton.

Zatorre, R. J., Evans, A. C., Meyer, E., & Gjedde, A. (1992). Lateralization of phonetic and pitch discrimination in speech processing. Science, 256, 846-849.

Friday, November 16, 2007

Speech specificity and fMRI resolution

A number of functional imaging studies have found that contrasting speech with various non-speech control stimuli eliminates vast areas of speech-responsive cortical activation; i.e., many areas are equally activated by speech and non-speech sounds. Many investigators discount these jointly activated regions as being somehow less critical for speech -- the primary quest being to identify The Speech Area. We have previously disagreed with this view, and the general approach, suggesting that regions that respond to non-speech sounds could still be carrying out critical speech-related computations. We have further suggested that these "non-speech specific" regions could still be speech specific if only we had the resolution to image the neural substructure.

Three years late, I discover a paper by Michael Beauchamp, Alex Martin and colleagues showing just this (Beauchamp et al. 2004, Nat Neurosci, 7:1190-2). They imaged a multisensory region of the STS using both typical fMRI resolution and higher resolution methods in a multisensory paradigm presenting auditory and/or visual stimuli. Using typical lower resolution imaging they found that the STS region showed equivalent responses to stimuli from either modality. Higher resolution imaging, however, found a patchy organization within this broader region that contained zones that were specifically responsive to one or the other sensory modality, as well as some zones that were responsive to both.

No difference doesn't always mean no difference. It sometimes means we just don't have the resolution to see the difference.
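
A toy simulation makes the point (my own illustration, not from the paper): build a patch of cortex with interleaved auditory- and visual-preferring columns, then average over blocks of columns to mimic a large voxel.

    import numpy as np

    rng = np.random.default_rng(0)

    # 64 fine-scale columns, each preferring audition (0) or vision (1)
    pref = rng.integers(0, 2, 64)
    resp_aud = np.where(pref == 0, 1.0, 0.1)  # response to auditory input
    resp_vis = np.where(pref == 1, 1.0, 0.1)  # response to visual input

    # High resolution: each voxel sees one column -> strong selectivity
    print("high-res |A-V|:", np.abs(resp_aud - resp_vis).mean())

    # Low resolution: each voxel averages 8 columns -> selectivity washes out
    lo_aud = resp_aud.reshape(-1, 8).mean(axis=1)
    lo_vis = resp_vis.reshape(-1, 8).mean(axis=1)
    print("low-res  |A-V|:", np.abs(lo_aud - lo_vis).mean())

At the coarse scale the two conditions look nearly equivalent, even though every underlying column is strongly selective.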

Thursday, November 15, 2007

Descended from Helmholtz, Wundt, James, and Freud: Neurogenealogy

According to Neurotree.org, I'm an academic descendant of Hermann von Helmholtz, Wilhelm Wundt, William James, Clark Hull, and yes, that would-be saboteur of the classical model of aphasia, Sigmund Freud. What's more, the originator of the classical model, Wernicke, is a distant cousin. I'm apparently also a sibling of Michael Tarr and Martha Farah, and uncle to the likes of Sharon Thompson-Schill, Isabel Gauthier, Josh Tenenbaum and many more (from whom I never receive holiday cards, btw). These linkages are all through my post-doc advisor Steven Pinker. My PhD advisor, Edgar Zurif, didn't even get a twig listed. I wonder if that makes this my adopted tree.

One wonders about the accuracy of these trees, but ultimately it doesn't much matter because for the most part everyone is related to everyone else, and just about everyone's lineage can be traced back to luminaries like Helmholtz and James. It's kind of like real family trees: fascinating to dig into, but once you are a couple generations removed, they're pretty much meaningless. (Did I tell you I was related to Wild Bill Hickok, and that the Hickoks came from Stratford-upon-Avon and were neighbors -- OK, well, employees actually -- of the Shakespeares? The only advantage my distant history ever brought me was that I got into the tourist-attraction cemetery where Wild Bill is buried, for free.)

For an interesting essay on the psychology and evolutionary significance of kinship and genealogy, check out Steve Pinker's recent piece in the New Republic, "Strangled by Roots."

Wednesday, November 14, 2007

SfN 2007 Virtual Poster Session

Why don't we use this forum for virtual poster sessions associated with recent conferences, starting with SfN? If you couldn't make it to SfN, or went but missed some relevant posters, or presented a poster that you'd like to continue to promote, or have a poster that you wished you could have presented, just click "comment" at the bottom of this post and provide the abstract and a URL where a pdf of your poster can be downloaded.

Tuesday, November 13, 2007

Talking Brains Down Under

Greig de Zubicaray's comment on our new survey feature reminded me to highlight the nice body of brain-language fMRI work coming from Down Under. Greig and colleagues at the University of Queensland have been pumping out an impressive number of very cool papers on lexical processing both in perception and production. The work is thoughtful, and psycholinguistically informed. A few sample citations are below. Their work is definitely worth paying attention to.
Copland DA, de Zubicaray GI, McMahon K, Eastburn M. (2007). Neural correlates of semantic priming for ambiguous words: an event-related fMRI study. Brain Res, 1131(1):163-172.

de Zubicaray G, McMahon K, Eastburn M, Pringle A, Lorenz L. (2006). Classic identity negative priming involves accessing semantic representations in the left anterior temporal cortex. Neuroimage, 33(1):383-390.

de Zubicaray G, McMahon K, Eastburn M, Pringle A. (2006). Top-down influences on lexical selection during spoken word production: A 4T fMRI investigation of refractory effects in picture naming. Hum Brain Mapp, 27(11):864-873.

Copland DA, de Zubicaray GI, McMahon K, Wilson SJ, Eastburn M, Chenery HJ. (2003). Brain activity during automatic semantic priming revealed by event-related functional magnetic resonance imaging. Neuroimage, 20(1):302-310.

Saturday, November 10, 2007

Talking Brains mirror neuron entry referenced in German science magazine

Our Talking Brains discussion of mirror neurons was recently cited in a German science magazine, bild der wissenschaft. I have no idea what the article says, but I did recognize Alison Gopnik's name, so that's got to be good, right? Check it out here.

So David, what's it say?

Friday, November 9, 2007

New survey feature!

Ok, I just discovered that we can easily add surveys to the blog, so let's try it out for fun. How do people feel about mirror neurons and speech perception? Take the survey in the right column of the blog.

SfN07 News -- No link between conduction aphasia and the arcuate fasciculus

It's not exactly news, but now we know for sure: conduction aphasia is NOT caused by damage to the arcuate fasciculus. Nina Dronkers presented data from over 100 patients at the SfN meeting showing convincingly that arcuate fasciculus damage does not cause conduction aphasia. In fact, it causes a much more serious language production deficit.

The idea of conduction aphasia resulting from damage to the arcuate fasciculus (AF) has been seriously challenged since 1980, when the Damasios published their study of the anatomical correlates of conduction aphasia. That paper showed that conduction aphasia was often associated with left auditory cortex lesions (dorsal STG), not AF lesions. Subsequent case studies showed that damage to the AF does not cause conduction aphasia, and that conduction aphasia-like symptoms can be elicited by cortical stimulation (arguing against the disconnection theory of conduction aphasia). We reviewed some of this evidence in Hickok et al. 2000.

Nina's new study, though, is the first large scale investigation of the question, and really puts a nail in the AF-conduction aphasia coffin.

Wednesday, November 7, 2007

Accountability in the review process

Having been knee-deep recently in responding to both grant proposal and paper reviews (as reviewee), I find myself more and more annoyed with the review process. Sometimes, maybe even most of the time, reviews can be helpful. But we all get those off-base or nitpicky reviews that are at best a tedious annoyance or at worst, a grant killer. Anonymity and lack of public accountability for what one writes in a review, I think, gives some reviewers carte blanche to shoot from the hip, often causing collateral damage.

There's a solution: Make the reviews public.

Not that many people would read them. We've already got more than enough to read. But maybe just knowing that your off-the-cuff remarks might be subject to public ridicule -- on some blog, for example ;-) -- would be enough to induce a little restraint and rationality.

There are other benefits to public reviews. Sometimes reviewer-reviewee exchanges are highly instructive, and sometimes more interesting than what goes in the paper. It could be beneficial, or at least discussion-provoking, to see these behind the scenes debates. Published reviews could also cut down on work when responding to reviews: when you get the same criticism over and over again, you could just cite a previous review response rather than writing a whole new response every time ("see Hickok & Poeppel review response 2000, 2004, 2007 for repeated and thorough dismantling of the same tired point you raise here"). It might also promote more constructive criticism in reviews, or even more willingness to review papers because the reviewer would get some credit for suggesting that clever control or theoretical insight. Folks might even become so proud of their reviews that they might start signing them and listing them as pubs.

Maybe we'll start publishing reviews of our papers here. I wonder if that would cause a stir.

Tuesday, November 6, 2007

Raise your martini glass to Sylvius

News from the SfN meeting...

Q: What do the Sylvian fissure and gin have in common?
A: Both can be traced back to one Franciscus Sylvius.

According to a poster by Andre Parent of Laval University, Flemish anatomist Franciscus Sylvius (1614-1672) not only gave his name to the prominent lateral fissure, but more importantly, the dude invented gin. Apparently in an effort to develop a diuretic for the treatment of kidney disease, Sylvius mixed the oil of juniper berry with grain alcohol. The concoction became known as jenever (juniper in Dutch) and genièvre (in French). The term was eventually anglo-thrashed into the word "gin." English soldiers brought it back to their homeland where it became wildly popular. Not sure how well it worked for kidney disease.

Monday, November 5, 2007

Anterior Temporal Lobe and Syntax

As promised, more from Richard Wise's visit to Talking Brains West...

When I asked Richard what he didn't like about our 2007 paper, his response was: our claim about ATL involvement in syntax. I'm inclined to agree. Here are the details.

I mentioned in a previous entry, as well as in our NRN paper, that because of the diffuse damage, semantic dementia (SD) cannot provide convincing evidence regarding what the anterior temporal lobe(s) (ATL) is(are) doing. However, it can provide fairly convincing evidence regarding what the ATL is not doing. If some function is spared despite extensive damage to the ATL bilaterally, we can conclude that the ATL is not critical for that function.

In the functional imaging literature, the ATL has emerged as a possible site involved in some form of syntactic computation because responses in this region tend to be sentence-selective. A big problem for this idea, however, is that resection of the ATL does not produce syntactic deficits or any substantial language deficits at all. This problem might be circumvented by proposing that syntactic functions are bilaterally organized in the ATL, explaining why unilateral resection doesn't impair function substantially.

Here's where SD comes in. It has been claimed that SD patients have relatively preserved syntactic ability. I never really understood how a patient could have severely impaired word comprehension with preserved syntactic ability (there's lots of syntactically relevant information in words), so I didn't view SD as strong evidence against a role for syntactic processing in the ATL. Consequently, we have claimed that the ATL may be a site for some kind of syntactic processing, despite claims emanating from the SD literature.

Back to Richard's visit: According to Dr. Wise, SD patients have no trouble with syntax, including a preserved ability to make grammaticality judgments. I believe him. While I wasn't terribly convinced by the published claim -- lots of published claims are wrong -- hearing it from a good clinician who has seen SD patients convinced me (clinical intuition is an extremely valuable research tool).

So does this mean we need to revise our views on the role of the ATL in syntactic processing? Yes, I think so. If SD patients have badly diseased ATLs bilaterally, and can still do reasonably well at the syntactic level, I think we need to rethink things. Part of this rethinking should involve (i) a clear specification of what SD patients can and cannot do syntactically (Grodzinsky, check it out for us will you?), (ii) understanding how word-level semantic deficits might impact sentence-level processing (it has to, right?), and (iii) determining whether the ATL might still be involved in combinatorial semantic operations.

Friday, November 2, 2007

Macaque motherese??


Last week's Quirks and Quarks science show (from CBC radio) had a segment about Dr. Jessica Whitham's research on infant-directed vocalizations in macaques. The vocalizations share some characteristics with descriptions of human motherese (higher pitch, greater pitch range, etc.). It's not yet clear that this aids macaques in learning their vocalizations; in fact, it's not entirely clear why they do this at all, since it's apparently only directed at other macaques' infants ("Hey you kids, get out of that Jello tree!").

Link (scroll down to find the segment): http://www.cbc.ca/quirks/archives/07-08/oct27.html

Bill Idsardi

The Intense World Syndrome

Browsing through the first set of articles on the new Frontiers in Neuroscience journal, I came across this paper:

"The Intense World Syndrome – an alternative hypothesis for autism" by Markram et al.

The basic idea is that autism spectrum disorders result from hyper-reactive and hyper-plastic neural circuits. This is in contrast to typical accounts that emphasize hypo-functionality. This seems like a much more interesting and potentially plausible account than the mirror neuron view. The paper describes evidence for their claims, derived from a rat model of autism.

Even if the hypothesis turns out to be incorrect, the authors at least deserve credit for coming up with an attention-grabbing name. It makes me think we should rename our Dual Stream Model to something a bit more flashy. How about, The Galactic Double Parallel Pathway Model?

Talking Brains at Society for Neuroscience

If you are in San Diego for the SFN meeting, stop by our poster on Tuesday. I'll be there...

Program#/Poster#: 738.10/XX3
Title: The neural organization of linguistic short-term memory is sensory modality-dependent: Evidence from signed and spoken language
Location: San Diego Convention Center: Halls B-H
Presentation Start/End Time: Tuesday, Nov 06, 2007, 2:00 PM - 3:00 PM
Authors: *J. PA1, S. M. WILSON1, H. PICKELL2, U. BELLUGI2, G. HICKOK1; 1Univ. California-Irvine, Irvine, CA; 2The Salk Inst. for Biol. Sci., San Diego, CA
Despite decades of research, there is still disagreement regarding the nature of the information that is maintained in linguistic short-term memory (STM). Some authors argue for specifically phonological codes, whereas others argue for more general sensory traces. We test these hypotheses by investigating linguistic STM in two distinct sensory-motor modalities, signed and spoken language. Hearing bilingual participants (native in English and American Sign Language) performed equivalent STM tasks in both languages during fMRI scanning. Distinct, sensory-specific activations were seen during the maintenance phase of the task for spoken versus signed language. These regions have been shown previously to respond also to non-linguistic sensory stimulation, suggesting that linguistic STM tasks recruit sensory-specific networks. However, maintenance-phase activations common to the two languages were also observed. We conclude that linguistic STM involves both sensory-dependent and sensory-independent neural networks, and further propose that STM may be supported by multiple, parallel circuits.

Thursday, November 1, 2007

Richard Wise visits TalkingBrains West

Had a nice visit with Richard Wise yesterday. We are longtime fans of Richard's work and share a number of theoretical views with him, such as the bilateral organization of speech recognition and sensory-motor integration in the auditory dorsal stream. We have friendly disagreements on a few points as well, specifically the relative roles of anterior vs. posterior temporal lobe areas in lexical-semantic aspects of language processing.

Richard views the anterior temporal lobe as a critical site for lexical-semantic processing, based largely on evidence from semantic dementia, whereas we are more impressed by the stroke data showing impairments of auditory comprehension and production following left posterior temporal lesions as is often found in Wernicke's aphasia. We have discounted semantic dementia data because the disease, while most severe in ATL areas, is nonetheless quite diffuse and therefore one cannot link ATL damage to lexical-semantic impairments with confidence. Richard has discounted the stroke data by (correctly) pointing out that strokes can disrupt function not only at the site of tissue loss, but also in distal sites by disrupting functional connectivity.

Of course we discussed this issue yesterday, and here was the outcome:

1. We agree on the data, and each acknowledge the feasibility of the counterarguments (diffuse damage in semantic dementia and remote effects of stroke).

2. We still disagree on the relative importance of these sources of data.

3. But, as Richard put it nicely, when reasonable people have legitimate disagreements it probably means both are right to some extent. So I suppose the disagreement has weakened into one of degrees: I would argue that while both anterior and posterior areas are involved, the posterior regions are more important, whereas Richard would view the anterior areas as more important. Of course, both of us are ignoring the likely contributions of frontal areas...

4. We both agree strongly that whatever the ATL is doing, it's doing it bilaterally. Unilateral resection of the left or right ATL does not produce semantic dementia, and while atrophy in semantic dementia is often greater on the left, the disease affects the ATL in both hemispheres (as well as several other areas including medial temporal lobe). The best explanation of these data is that bilateral disruption is accounting for the severity of the deficit in semantic dementia. The rumor is that another group has done a TMS study disrupting function in the ATL bilaterally. Could be interesting.

More on Richard's visit to follow...

Monday, October 29, 2007

Fabulous New Computational Job!

Assistant Professor, Neuroscience and Cognitive Science Program -- University of Maryland at College Park -- Maryland

The Neuroscience and Cognitive Science program (NACS) at the University of Maryland is seeking a new tenure-track faculty member at the assistant professor level. Computational neuroscientists working in any area, including sensory and motor physiology, analysis of control systems, and cognitive neuroscience [THIS INCLUDES SPEECH, LANGUAGE, FOLKS], will be considered. The successful candidate will hold a joint appointment in both the NACS Program and an academic department. The department of tenure will depend on the research interests of the faculty member and may be in Biology, Computer Science, Electrical and Computer Engineering, Hearing and Speech Sciences, Kinesiology, Linguistics, or Psychology. NACS is a tightly integrated community of scholars focused on aspects of neuroscience and cognitive science. Many faculty also enjoy highly productive research collaborations with scientists at federal agencies in the Washington DC area, such as the NIH.

Responsibilities: Candidates will be expected to develop a vigorous extramurally-funded research program. Teaching duties will include a graduate-level course in computational neuroscience, as well as undergraduate/graduate courses to be determined by the tenure-track department. Duties will also include student advising and administration as determined by the Director of NACS and the department of tenure.

Qualifications: An earned doctorate in a discipline relevant to the candidate's field of teaching and research is required. Candidates who integrate theoretical with experimental research are preferred. We seek candidates with demonstrated teaching and research excellence, capable of maintaining an extramurally-funded research program. Details of the NACS program may be found at: www.nacs.umd.edu.

Salary: Commensurate with qualifications and experience.

Position available: Earliest starting date is the beginning of the fall semester 2008.

Applications: For best consideration send, by December 15, 2007, a CV, names and addresses (including e-mails) of three possible references, and statements of both research interests (documenting any previous extramural funding) and teaching interests to: NACS Search, Neuroscience and Cognitive Science Program, 2131 Bio/Psyc Building, University of Maryland, College Park, MD 20742.

WOMEN AND MEMBERS OF UNDER-REPRESENTED MINORITIES ARE ENCOURAGED TO APPLY. THE UNIVERSITY OF MARYLAND IS AN EQUAL OPPORTUNITY AFFIRMATIVE ACTION EMPLOYER.

Interesting new neuroscience journal

There is a new **open access** neuroscience journal:

http://frontiersin.org/neuroscience/

The people editing this are all terrific -- so the journal should be quite interesting.

Thursday, October 25, 2007

More Mirror Neuron Mania

The current issue of Nature Reviews Neuroscience has a Research Highlights piece on a new "mirror neuron" paper by Catmur, Walsh, & Heyes (2007, Curr. Biol., 17, 1527-1531). Although I've only read the highlight piece, the paper looks pretty interesting. The authors used TMS to induce motor evoked potentials in the abductor muscles of the hand. When subjects watched a video of a hand with the index finger moving, the MEPs were greater in the subject's own index finger, whereas when the video showed movement of the little finger, MEPs were greater in the little finger of the observer. Standard "mirror" effect. Note that an action-based theory of perception would hold that this motor activity in the observer reflects the subject's "understanding" of the observed action by mapping the action onto his or her own motor system.

But there's more: the study authors then trained subjects to move in a manner incongruent with the hand in the video: move the little finger when index finger movement is shown and vice versa. After training, MEPs were greater in the little finger when index finger movement was observed, and vice versa. So "mirror" effects are easily trained simply by association. Nice result.

Question: does the subject now fail to correctly understand the movement of the hand in the video? If asked, would subjects report that index finger movement had taken place when in fact the pinky moved? Of course not. So this is another demonstration of a dissociation between "mirror neuron" activity and action comprehension. Conclusion: the "mirror system" reflects sensory-motor associations, NOT the neural foundation of action understanding.

Although I don't know what the study authors actually concluded about their experiment, the Nature Reviews Neuroscience piece concluded that, "These findings imply that insufficient social interactions and consequent inadequate sensory experience might affect the development of the mirror neuron system, for example in children with autism" (p. 737). Seriously? That's the logical equivalent of trying to leap across the Grand Canyon from a crumbly ledge. Hmm... I wonder if skull measurements might be able to detect this "mirror neuron" dysfunction in autism.

Wednesday, October 24, 2007

Post 2 by Bill Idsardi: TalkingBrains: Going mobile

Last week TB East dispatched two undercover agents to MIT's "Whither Syntax?" mini-conference, and one of them (codename "Cookie Master") managed to sneak onto the discussion panel. The report back was that things were entirely too cordial, with pleasantries exchanged by both sides.

This week most of TB East heads out to Kansas for MALC http://www2.ku.edu/~ling/malc/. The ideal paper this year would seem to be "MEG evidence for the emergence of Siouan grammatical morphemes" but that abstract was inadvertently classified as spam.

Next week TB East meets TB West at SfN http://www.sfn.org/.

I forget what's next after that, and I can't think of a smart Pete Townshend reference either.

--Bill Idsardi

Post by Bill Idsardi: Music of the Hemispheres

Bill Idsardi (TalkingBrains East) says this:

On Monday's broadcast Bob Edwards interviewed Oliver Sacks (probably a well-known figure to TalkingBrains readers). Sacks has a new book out, Musicophilia, and on the show he speculated that musical rhythm, like language, is species specific and species uniform (also a well-worn concept for TB readers). In the book Sacks cites several of Aniruddh Patel's publications and concludes that language and music arose separately in humans. There's a clip from the interview (it's just under two minutes long):

[I need to figure out how to insert an audio file here -- blogger supports videos of various types but not this audio ... more on that soon. David]

The relevant passage of the new book is on pages 242-244, starting out, "[t]he fact that 'rhythm' ... appears spontaneously in human children, but not in any other primate, forces one to reflect on its phylogenetic origins." Sacks goes on to quote Patel (2006), and they both suggest that "[m]usical rhythm, with its regular pulse ... is very unlike the irregular stressed syllables of speech."

This view of rhythm seems somewhat naïve, both as ethnomusicology (what about polyrhythms?) and as linguistic typology (what about syllable-timed languages like French? See Grabe and Low 2002). Talking Brains is trying to track some of this down, building on Luo and Poeppel (2007). Stay tuned.
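
For the curious, the rhythm-class measure at issue in Grabe and Low (2002), and applied to music by Patel (2006), is the normalized Pairwise Variability Index (nPVI). Here is a minimal sketch of it in Python; the duration values are invented for illustration:

```python
# Minimal sketch of the normalized Pairwise Variability Index (nPVI),
# the durational-variability metric used in the rhythm-class literature
# (e.g., Grabe & Low 2002; Patel 2006). Durations below are invented.

def npvi(durations):
    """nPVI over a sequence of interval durations (e.g., vowel durations in ms):
    100 * mean of |d_k - d_{k+1}| / ((d_k + d_{k+1}) / 2)."""
    if len(durations) < 2:
        raise ValueError("need at least two intervals")
    pairs = zip(durations[:-1], durations[1:])
    terms = [abs(d1 - d2) / ((d1 + d2) / 2) for d1, d2 in pairs]
    return 100 * sum(terms) / (len(durations) - 1)

# Stress-timed languages (e.g., English) tend toward higher vocalic nPVI
# than syllable-timed languages (e.g., French).
print(round(npvi([120, 60, 150, 70, 130]), 1))  # alternating long/short: 71.3 (high)
print(round(npvi([90, 95, 88, 92, 91]), 1))     # near-uniform: 4.6 (low)
```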

• Grabe E, Low EL. 2002. Acoustic correlates of rhythm class. Laboratory Phonology 7: 515-546. Berlin: Mouton de Gruyter.
• Luo H, Poeppel D. 2007. Phase patterns of neuronal responses reliably discriminate speech in human auditory cortex. Neuron 54: 1001-1010. DOI: 10.1016/j.neuron.2007.06.004
• Patel AD. 2006. Musical rhythm, linguistic rhythm, and human evolution. Music Perception 24(1): 99-104. http://caliber.ucpress.net/doi/pdf/10.1525/mp.2006.24.1.99
• Patel AD. 2007. Music, Language and the Brain. Oxford: Oxford University Press. http://www.us.oup.com/us/catalog/general/subject/Medicine/Neuroscience/?view=usa&ci=9780195123753
• Sacks O. 2007. Musicophilia: Tales of Music and the Brain. New York: Knopf. http://musicophilia.com/index.htm

Episodes of the Bob Edwards show are available at Audible.com


Southern California Fire Update

For those of you who don't know where UC Irvine is (home of Talking Brains West), we are on the coast right in the middle of the So. Cal. inferno, between L.A. (~40 miles north) and San Diego (~80 miles south). Some friends and colleagues have been contacting me about the fires, so I thought I would provide an update. The City of Irvine is currently not directly threatened. However, one of the fires is in the foothills of Orange County, not too far away. As you can see from the satellite image taken yesterday afternoon, the brunt of the firestorm is to our north and south, but there is still plenty of smoke in the air, with ash falling like a light dusting of snow. It feels pretty severe here today. I can't imagine what it must be like in some of the more severely affected areas. A current (10/24) message regarding the state of the campus from UCI's chancellor can be found here.

Thursday, October 18, 2007

"Avoid Boring People" - JD Watson

OK, this is not relevant to Talking Brains, but I figure it's a public service announcement ... I just bought a book based on its title.

James Watson has just written a memoir, and I thought the title was really good, in all its ambiguity: Avoid Boring People. As a professor and someone who lectures occasionally, I realize that nothing is worse than boring people. And as a human being, few things are more irritating than having to hang out with boring people. So Watson's title is pretty good, and it promised to be an interesting read.

Fatal flaw: this book is boring. He bored me, and showed himself to be boorish. The book is an endless series of anecdotes about his stations in academia, what a lovely guy he is, how many Radcliffe undergrads he dated, how many people's careers he helped, and so on. This might be riveting if you're in a circle of people who already know everyone involved, but otherwise it's just plain boring. So he committed the cardinal sin of boring me. This is disappointing, because his snarkiness and directness promised to make for some amusing stuff -- but I guess he just turns out to be another old fart who needs to recycle old files from his cabinet.

Possibly the most annoying part of the book isn't the boring anecdotes or the boorish remarks on his relationships with women, but the pretentious "remembered lessons" at the end of each chapter. These are supposed to be life lessons that give interesting insights into how to do science and be a big deal, but they end up being comments at the level of "work with a teammate who is your intellectual equal." Gee, thanks, JD. That's real good. Who woulda thunk it. Never occurred to anyone.

So while I love the title and I absolutely agree with its message, in all its ambiguity, this book is really weak.

Monday, October 8, 2007

Ask the Talking Brains

Check out the current issue of Scientific American Mind -- on newsstands now! -- to see an "Ask The Brains" feature co-authored by Greg Hickok and Carol Padden on the topic of whether deaf people talk to themselves in sign language. Short answer: Of course they do! For a slightly longer answer you can view the article itself for the ridiculously low price of only $4.95. Act now and you not only get the Ask The Brains piece, but you will also receive attractive mug shots of Dr. Padden and yours truly. But wait, there's more! The good folks at SciAm will throw in the entire Oct/Nov issue for that low, low price, which includes what looks to be an interesting article by Eric Kandel, among others.

Seriously though, the question of whether deaf people talk to themselves connects with a burgeoning literature on the nature of neural/cognitive representations of "inner sign" vs. "inner speech," which, of course, is typically studied under the guise of linguistic (verbal) working memory. I say "under the guise" because our own view (see Hickok & Poeppel, 2000/2004 & Hickok et al. 2003) is that verbal working memory is not its own encapsulated cognitive system, but instead falls out of the operations of the dorsal-stream auditory-motor integration circuit that we have been promoting for several years now. The relevance of sign language to the question of the neural representations underlying such inner linguistic abilities (whether we call them verbal working memory or sensory-motor integration) should be obvious: it allows us to assess whether the representations involved are tied to specific sensory-motor modalities, or whether they involve more abstract amodal processes. Short answer to this question: a little bit of both. More on this in future blog entries.

Btw, if YOU have a brain/language question you'd like answered, send us an email with the heading "Ask the Talking Brains." If the question is halfway coherent and/or we can think of something halfway coherent to say about it, we'll post your question and our response in a blog entry. :-)


Refs:

Hickok, G., Buchsbaum, B., Humphries, C., & Muftuler, T. (2003). Auditory-motor interaction revealed by fMRI: Speech, music, and working memory in area Spt. Journal of Cognitive Neuroscience, 15, 673-682.

Hickok, G., & Poeppel, D. (2000). Towards a functional neuroanatomy of speech perception. Trends in Cognitive Sciences, 4, 131-138.

Hickok, G., & Poeppel, D. (2004). Dorsal and ventral streams: A framework for understanding aspects of the functional anatomy of language. Cognition, 92, 67-99.

Tuesday, October 2, 2007

It's Job Season

Faculty positions
Tufts Psychology- Linguistics and psycholinguistics
Georgetown Psychology- Cognitive neuroscience of language
UMass Amherst Psychology - Language processing is one area
U Kansas Speech, Language, Hearing - Speech sciences and disorders
Penn State Psychology - Neuroscience of language is one area

Postdoc positions
Univ of Pittsburgh - LRDC Reading and Language
UC Irvine - Hickok lab for Cognitive Brain Research

Happy hunting!

Wednesday, September 26, 2007

Post Doc opening at Talking Brains West!

Department of Cognitive Sciences
Center for Cognitive Neuroscience
Position: Postdoctoral Scholar

The Department of Cognitive Sciences and the Center for Cognitive Neuroscience announce a Postdoctoral Scholar position in the Laboratory for Cognitive Brain Research.

A postdoctoral position is available in the laboratory of Dr. Greg Hickok at the University of California, Irvine. The postdoctoral fellow will collaborate in NIH-funded research investigating the functional anatomy of language and complementary pursuits. Ongoing research projects in the lab employ a variety of methods, including traditional behavioral and neuropsychological studies, as well as techniques such as fMRI, EEG/MEG, and TMS. Opportunities also exist for collaboration with other cognitive science faculty and with faculty in the Center for Cognitive Neuroscience.

Candidates should have a Ph.D. in a relevant discipline and experience with functional MRI, preferably in the area of speech and language. Familiarity with computational and statistical methods for neuroimaging (e.g., MATLAB, SPM, AFNI) is advantageous.

Candidates should send a CV, a letter of interest (including a description of research skills), and a list of 3 references to the address below. The start date is flexible; the position is available beginning Fall 2007 for a period of 3 years.

Salary will be commensurate with experience, minimum annual stipend: $36,012.

Contact information:
Lisette Isenberg
Department of Cognitive Sciences and Center for Cognitive Neuroscience
3151 Social Science Plaza
University of California, Irvine
Irvine, CA 92697-5100
aisenber@uci.edu

The University of California, Irvine is an equal opportunity employer committed to excellence through diversity.

Sunday, September 23, 2007

Meta-ling tasks - Lt. Columbo Finale II ("Just one more thing ...")

As Greg showed very clearly in the last few posts -- and as we have argued in our papers as well -- one has to be very *very* careful when interpreting task-related cognitive neuroscience data, because the execution of the speech tasks can mask, disguise, or distort the findings vis-à-vis speech processing in its ecologically natural form.

[Greg: nice job with the historical argumentation :-) I like how you nail Lichtheim and show how his approach led to a modification of the Wernicke model for all the wrong reasons.]

I wanted to add one more small point as we leave this issue (just like Peter Falk as Columbo). Even 'early' cortical responses are changed dramatically by experimental task demands, a phenomenon exploited extensively in the attention literature. One example comes from work we did 10+ years ago at UCSF, using MEG to characterize the neuromagnetic responses evoked by CV syllables. In a within-subjects design, we recorded neuronal activity while participants listened to CV syllables passively (no explicit task required) and while they listened to the very same material when making a phonological judgment. When we examined the N100m (M100) response, the pattern of data showed that executing the task differentially modulated the N100m amplitude and lateralization. The critical finding in the context of speech perception research was that -- compared to the baseline (same stimuli but no meta-linguistic task) -- lateralization was induced by the task when in the passive case there was none!** This illustrates that even temporally early cortical responses are affected by tasks in ways that complicate the interpretation of how speech perception is implemented in the brain.

**Poeppel, D., Yellin, E., Phillips, C., Roberts, T.P.L., Rowley, H., Wexler, K., Marantz, A. (1996). Task-induced asymmetry of the auditory evoked M100 neuromagnetic field elicited by speech sounds. Cognitive Brain Research 4: 231-242.
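
To make the lateralization claim concrete, here is a minimal sketch (with invented amplitudes; the index is a standard asymmetry measure, not the specific analysis of the 1996 paper) of the kind of computation involved:

```python
# Minimal sketch of a standard lateralization index over left/right
# M100 response amplitudes: LI = (L - R) / (L + R), so LI = 0 means
# no hemispheric asymmetry. Amplitude values below are invented.

def lateralization_index(left_amp, right_amp):
    """Signed asymmetry of left vs. right response amplitudes."""
    return (left_amp - right_amp) / (left_amp + right_amp)

# Passive listening: comparable amplitudes in the two hemispheres -> LI ~ 0
print(round(lateralization_index(52.0, 50.0), 2))  # 0.02: essentially symmetric
# Same stimuli plus a phonological judgment task: left response boosted
print(round(lateralization_index(68.0, 50.0), 2))  # 0.15: task-induced asymmetry
```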

Thursday, September 20, 2007

Meta-ling tasks -- Finale: How Lichtheim's meta-ling task helped bring down Wernicke's Model

Hopefully, the first three parts of this thread on metalinguistic tasks have shown that reliance on data from such tasks can lead one astray from an accurate understanding of speech processing in more ecologically valid situations. There seems to be a prominent historical precedent for language research being misled by metalinguistic tasks. In particular, it seems to be Lichtheim's use of such a task that contributed to the downfall of the classical model of aphasia...

In Wernicke’s original model, volitional speech production consisted of two parallel pathways: a direct pathway from conceptual representations to motor word memories, and an indirect pathway from conceptual representations to auditory word memories to motor word memories. The indirect pathway was thought to exert a “corrective” influence on the selection of motor word memories. As such, this pathway explained the occurrence of selection errors (paraphasias) in the speech production of sensory (Wernicke’s) and conduction aphasics.

Lichtheim, in his 1885 development of Wernicke's model, concurred with his predecessor that auditory word representations were indeed activated during speech production, and that this activation helped constrain motor word selection. He even devised a (metalinguistic) task to assess the ability to activate these auditory representations in patients who could not speak.

"…this is the method I use: I ask the patient to press my hand as often as there are syllables in the word to which an object corresponds. Those who have not lost the auditory representations can do this, even if their intelligence be limited, as I have been able to satisfy myself even under the least favourable circumstances. For instance, a patient who, besides a focal lesion of the right hemisphere, had had a haemorrhage in the left half of the pons, and suffered, among other pseudo-bulbar symptoms, from complete speechlessness, preserved the faculty of fulfilling the test to the very last." p. 441.

He applied this test to Broca’s aphasics (although he admits they were not pure forms), and found that they could not perform the task. From this he concluded that these patients “lost the innervation of the auditory word-representations” (p. 441), and therefore that in Broca’s aphasia “the path from concept- to sound-centre must be interrupted” (p. 441). This conclusion forced Lichtheim to reject Wernicke’s position that conceptual representations can directly activate auditory word representations. Instead, Lichtheim proposed that the auditory activation in volitional speech production must pass through Broca’s region; i.e., concept → motor → auditory. An alternative interpretation, which Lichtheim did not consider, is that it is Broca's region that somehow supports the ability to perform his syllables task. On this interpretation, he would not have had to reject Wernicke's original position that conceptual systems can activate both motor and sensory representations of speech directly, and in parallel.

Lichtheim's conclusions from his metalinguistic task led him to several awkward claims and logical contradictions -- for example, in explaining paraphasias in transcortical sensory aphasia. These contradictions were targeted by subsequent authors (e.g., Freud) as serious problems for the general connectionist (i.e., classical) approach. If Lichtheim had stuck with Wernicke’s original claim, these problems would not have arisen, and perhaps the classical models would not have fallen out of favor.

Monday, September 17, 2007

Georgetown Tenure Track Job Opening

Georgetown University Tenure Track Position in Cognitive Neuroscience: The Department of Psychology at Georgetown University anticipates a tenure-track assistant professor position, effective August 1, 2008.

Applications in any area of cognitive neuroscience are welcome, but we are especially interested in candidates specializing in the neural bases of language or in social/affective neuroscience, with a focus on any area of lifespan development. Successful applicants should bring an active research program with potential for external funding. They should also be prepared to teach courses in cognitive neuroscience and other areas related to their specialty, as well as general psychology, our introductory course. Excellent teaching skills, a strong publication record, and previous demonstration of funding will be advantageous.

Georgetown University has a state-of-the-art brain imaging facility with a research-dedicated 3T magnet and technical support for fMRI, DTI, and MRS. The Psychology Department offers an undergraduate major in psychology, an honors program, and a doctoral degree with concentrations in Lifespan Cognitive Neuroscience and in Human Development and Public Policy. In addition, Psychology faculty may mentor Ph.D. students in other programs such as the Interdisciplinary Program in Neuroscience based in the adjacent Georgetown University Medical Center. For more information about our department, visit our website at http://www.georgetown.edu/departments/psychology.

Please send a letter of interest, a curriculum vita, teaching statement, and three letters of reference to: Chandan Vaidya, Chair, Cognitive Neuroscience Search Committee, Department of Psychology, 306 White Gravenor Hall, Georgetown University, 37th & O Streets, NW, Washington, D.C. 20057. For administrative questions, contact Amber Matzke at shifflal@georgetown.edu. Applications will be accepted until the position is filled, but we aim to complete the search as early as possible. Georgetown University, the oldest Catholic university in the United States, is an Affirmative Action/Equal Opportunity employer.

Friday, September 7, 2007

UMass Amherst Open Rank Job

*THE DEPARTMENT OF PSYCHOLOGY, UNIVERSITY OF MASSACHUSETTS AMHERST* invites applications for an open-rank, tenure-track position in COGNITIVE PSYCHOLOGY beginning Fall 2008. Candidates in any area of Cognitive Psychology are encouraged to apply, but we have special interests in human memory, categorization, judgment and decision making, and language processing. Use of computational modeling techniques is particularly desirable. Candidates applying at the junior level must have a strong record of research, clear potential to obtain support for and maintain an active research program, and strong teaching skills. Senior candidates must additionally have a record of extramural support. Candidates will be expected to collaborate with other faculty members with similar interests across campus. Rank and salary are dependent on experience and qualifications. Applicants should send a vita, a statement of research and teaching interests, reprints of recent publications, and at least three letters of recommendation to: Cognitive Search Committee, Department of Psychology, University of Massachusetts, Amherst, MA 01003-7710. Applications are due on November 1, 2007. The search committee will begin reviewing applications on that date and will continue until the position is filled. Hiring is contingent upon the availability of funds. The University of Massachusetts is an Affirmative Action/Equal Opportunity Employer. Women and members of minority groups are strongly encouraged to apply.

Tuesday, September 4, 2007

Metalinguistic Tasks -- Part 4

It is clear that speech sound processing as measured by metalinguistic speech perception tasks, such as syllable discrimination and identification, can double dissociate from speech sound processing as measured by auditory comprehension tasks. This means that at some stage of processing, these two abilities rely on different neural systems. Does this mean that the two tasks rely on entirely segregated neural systems? Of course not! It is a good bet, for example, that the two classes of tasks do not differentially engage the cochlea. But at what level in the nervous system do they diverge? We don't know.

We have suggested that the divergence occurs at fairly advanced stages of auditory processing, in non-primary cortical auditory regions. The speculation is that whatever basic auditory and phonetic/phonological processing goes on in auditory cortex -- as opposed to meta-phonological processes supported by, say, frontal systems, such as phonological working memory or attentional processing -- is common to the two tasks. This predicts that damage that disrupts these superior temporal lobe auditory/speech sound processing networks should lead to some degree of correlation between deficits on the two types of tasks. I believe there is some support for this speculation from studies of word deafness, where speech comprehension deficits have been linked to relatively low-level speech sound processing impairments.

Thursday, August 23, 2007

Meta-linguistic tasks -- Part 3

What leads us to the conclusion that meta-linguistic tasks, such as syllable discrimination, are not valid measures of normal speech sound processing? The data tell the story:

Speech sound processing in comprehension and in syllable discrimination double dissociate, even when contextual cues are controlled in comprehension tasks. We reviewed the evidence most thoroughly in our 2004 Cognition paper. Several reports examining phoneme identification and/or discrimination are described in that paper (Basso et al., 1977; Blumstein et al., 1977; Caplan et al., 1995), but the Miceli et al., 1980 (Brain and Language, 11: 159-169) paper is worth highlighting again. They studied 60+ aphasics using a CCVC syllable discrimination task and an auditory comprehension task using word-to-picture matching. Critically, the comprehension task employed phonological and semantic foils. The inclusion of phonological foils (e.g., a picture of a pear if the stimulus word is bear) minimizes the possibility of relying on contextual cues in comprehension. Performance was categorized as normal or pathological based on comparison with age-matched controls. The table, reproduced from our Cognition article, summarizes the findings. Notice that 19 patients had pathological performance on the discrimination task yet were normal on the comprehension task, and 9 showed the reverse pattern. A clear double dissociation.
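
For readers who like to see the logic spelled out, here is a toy sketch of how such a 2x2 classification yields a double dissociation (the per-patient records are invented placeholders; only the two off-diagonal counts, 19 and 9, come from the data described above):

```python
# Toy sketch of the double-dissociation tally: each patient is scored
# normal/pathological on a syllable discrimination task and on a
# word-to-picture comprehension task, then counted into a 2x2 table.
# Patient records below are invented placeholders.

from collections import Counter

# (discrimination status, comprehension status) per patient
patients = [
    ("pathological", "normal"),        # discrimination deficit, intact comprehension
    ("normal", "pathological"),        # the reverse pattern
    ("normal", "normal"),
    ("pathological", "pathological"),
]

table = Counter(patients)

# Evidence for a double dissociation: both off-diagonal cells are occupied
# (in Miceli et al., 19 and 9 patients, respectively).
double_dissociation = (
    table[("pathological", "normal")] > 0
    and table[("normal", "pathological")] > 0
)
print(table)
print("double dissociation:", double_dissociation)
```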

Anatomical correlations with syllable discrimination deficits are also revealing. The most severe deficits on syllable discrimination tasks are associated with frontal lobe lesions. For example, Gainotti et al. 1982 (Acta Neurol. Scandinav. 66: 652-665) report error rates as a function of lesion location. Patients with left hemisphere lesions restricted to the frontal or parietal lobes made significantly more errors than patients with lesions restricted to the temporal lobe. The worst performance was found in frontal patients. This is an important observation because (i) it suggests that deficits on syllable discrimination tasks are not particularly related to auditory processes (auditory cortex damage appears neither necessary nor sufficient to produce the deficit), and (ii) since frontal or parietal damage typically spares lexical comprehension, such a finding provides further evidence for the lack of a relation between auditory comprehension and syllable discrimination tasks.

Conclusion: syllable discrimination is not a valid measure of speech sound processing, at least in the context of aphasia. What we have suggested is that performance on syllable discrimination tasks requires frontal lobe-related cognitive processes, such as working memory, that are not as critical for normal auditory comprehension. It is these processes that are disrupted by frontal and/or parietal lesions, rather than the (bilateral!) temporal lobe-based speech sound processing mechanisms that are critical to auditory comprehension.

Wednesday, August 15, 2007

Meta-linguistic tasks -- Part 2

Our observation is that data from meta-linguistic tasks (e.g., syllable discrimination or identification) impede progress in understanding the functional anatomy of speech processing. How so?

Take lesion data as one example. If you look at the evidence, you find that deficits on syllable discrimination tasks are commonly observed following left hemisphere damage, with the most severe deficits associated with frontal and/or parietal lesions. The straightforward conclusion from such a result is that speech perception is supported predominantly by left frontal and/or parietal regions. The problem with this conclusion is that patients with damage to frontal and/or parietal regions in the left hemisphere typically have quite good auditory comprehension. More to the point, as Blumstein* has pointed out, "Significantly, there does not seem to be a relationship between speech perception abilities [performance on discrimination tasks] and auditory language comprehension. Patients with good auditory comprehension skills have shown impairments in speech processing; conversely, patients with severe auditory language comprehension deficits have shown minimal speech perception deficits." (p. 924)

This is a bit of a paradox: why is it that deficits on syllable discrimination tasks don't predict auditory comprehension problems? There are two possibilities. One is that auditory comprehension tasks contain contextual cues that allow the listener to get by even with an imperfect phonemic processor. The other possibility is that syllable discrimination tasks are invalid measures of normal speech sound processing. We've argued that the latter is true. More on that next entry...

*Blumstein, S.E. (1995). The Neurobiology of the Sound Structure of Language. In Gazzaniga (Ed.), The Cognitive Neurosciences. MIT Press.