Monday, March 31, 2008

Mirror Neuron Course: Some Mirror Neuron Hype


In case you've been living in a cave for the past several years and haven't heard of mirror neurons, or experienced the depth to which these macaque cells have infiltrated theorizing about human cognition, here are a few links:

NOVA video segment (includes some "Science update" links as well)



Science Daily article based on press releases from Society for Neuroscience 2007

Mirror Neuron Graduate Course Starts Today

TB graduate course #2 starts today.  Topic: Mirror Neurons.  The goal is to survey what we know about MNs in the monkey and their presumed correlates in humans, and to consider what kind of theoretical conclusions the data allow.  We will cover a range of empirical and theoretical topics related to MNs, all with an eye toward the implications of this work for understanding the functional anatomy of speech/language.

As with our course on semantics and brain, this is a live course that I am teaching at TB West (UC Irvine), Mondays 1-4.  I will post our readings, as well as summaries of our in-class discussion.  Please feel free to jump in with your own comments or suggestions.

Today's meeting will review evidence on task effects in mapping neural systems involved in speech perception, as this point is critical in evaluating the role of frontal cortex in speech tasks.  The main points have already been summarized on this blog in a thread on "Meta-linguistic tasks."

Sunday, March 30, 2008

Post-doc opportunity at UMD: cortical plasticity/visual system

If you have friends doing vision research who are looking for a post-doc, alert them to this:

Postdoctoral position to investigate human cortical plasticity, principally utilizing MEG, to participate in the development of therapeutic treatments for human amblyopia. A Ph.D. in neuroscience or a related discipline is required; experience in non-invasive imaging (MEG, fMRI, EEG) is desired. Please send a CV, a summary of research experience, and contact information for three references.

Elizabeth M. Quinlan, Ph.D. OR David Poeppel, Ph.D.
Neuroscience and Cognitive Sciences Program
Department of Biology
University of Maryland
College Park, MD 20742

equinlan@umd.edu
dpoeppel@umd.edu

Here are two recent publications from Betsy Quinlan's lab that illustrate some of the relevant phenomena:

He HY, Hodos W, Quinlan EM. (2006) Visual deprivation reactivates rapid ocular dominance plasticity in adult visual cortex. J Neurosci. 26:2951-5.

He HY, Ray B, Dennis K, Quinlan EM. (2007) Experience-dependent recovery of vision following chronic deprivation amblyopia. Nat Neurosci. 10(9):1134-6.

Friday, March 28, 2008

The Motor Theory of Speech Perception Reviewed

Ok, if you intend to do any research in the field of speech/language, here is a paper you have to read:

Galantucci, B., Fowler, C. A., & Turvey, M. T. (2006). The motor theory of speech perception reviewed. Psychonomic Bulletin & Review, 13(3), 361-377.

If you don't read this paper, you might as well get out of the field. Bye! The paper presents a brilliant review of the Motor Theory past and present, summarizing the motivation for its initial development and the changes to the theory over the years, and presenting an almost thorough review of the existing evidence for each of the three main claims of the theory: (1) speech processing is special, (2) perceiving speech is perceiving gestures, and (3) the motor system is recruited for perceiving speech.  The review covers an impressively broad range of data, from the classic Motor Theory results (McGurk effect, duplex perception, etc.) to mirror neurons and the role of the motor system in perception beyond speech.  The authors suggest that claim #1 should probably be retired, but that claims 2 & 3 remain viable, likely even true.

I say "almost thorough review" because there is not one line of text dedicated to aphasia, which in my view provides some of the best evidence against claims 2&3.  I've discussed this issue before in the context of mirror neurons (click here or here) but it applies with equal force in the context of the Motor Theory (mirror neuron-related theories of perception are basically motor theories of perception).  Let me reiterate the relevant aphasia evidence: 

Destruction of the motor systems controlling the ability to produce speech does not produce a commensurate destruction of the ability to recognize/comprehend speech.  

This means that speech can be recognized/comprehended without the speech-motor system, which, in turn, shows that any strong interpretation of the Motor Theory or of mirror neuron-like accounts of speech perception is WRONG.

As I read this paper, I was struck by the fact that aphasia data seem to have been largely ignored in the decades of debate over the Motor Theory.  I haven't done an exhaustive lit search, but a cursory search didn't turn up anything, and the Galantucci et al. review, which is very thorough otherwise, didn't even allude to it.  Why?  Critical and obvious evidence from Broca's aphasia (bad production, good comprehension) has been around even longer than the Motor Theory.  I seriously don't understand the omission.

So why is the paper mandatory reading if it missed such a critical piece of evidence?  Because, except for that one omission, it really is a brilliant review.  Of particular importance is the case the paper makes for sensory-motor interaction in speech, which in my view is absolutely dead-on correct, as our work on the topic attests.  I would suggest that it is this aspect of the Motor Theory -- its emphasis on the connection between perception and production -- that is the most accurate and enduring aspect of the theory.  It's just that the Motor Theory weights the motor side of the interaction too heavily in its account of perception.

Wednesday, March 26, 2008

TB Journal Club: Effective and Structural Connectivity in the Human Auditory Cortex

I just came across this paper today: the hot-off-the-press article in J. Neurosci. by Helen Tager-Flusberg and company appears to be a must-read, and therefore qualifies for a "TB Journal Club" listing. The paper investigates connectivity patterns in human auditory cortex using a combination of BOLD fMRI, Granger causality mapping, and diffusion tensor imaging. (For readers unfamiliar with Granger causality, a toy sketch of the basic logic follows the citation below.)

So everyone have a look and be ready to discuss next week!

Effective and Structural Connectivity in the Human Auditory Cortex
Jaymin Upadhyay, Andrew Silver, Tracey A. Knaus, Kristen A. Lindgren,
Mathieu Ducros, Dae-Shik Kim, and Helen Tager-Flusberg
J. Neurosci. 2008;28:3341-3349
http://www.jneurosci.org/cgi/content/abstract/28/13/3341?etoc
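
The Granger idea, in brief: signal x "Granger-causes" signal y if past values of x improve the prediction of y over and above y's own past. Here is a minimal Python sketch of that logic on synthetic time series, using statsmodels -- made-up data and region labels, not the paper's actual fMRI pipeline:

```python
# Toy sketch of Granger causality (synthetic data, NOT the paper's
# pipeline): does the past of x improve prediction of y beyond y's
# own history?
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
x = rng.standard_normal(n)            # hypothetical "upstream" signal
y = np.zeros(n)
for t in range(2, n):
    # y is driven by x two samples back, so x should Granger-cause y
    y[t] = 0.6 * x[t - 2] + 0.2 * y[t - 1] + rng.standard_normal()

# statsmodels convention: column 2 is tested as a cause of column 1
results = grangercausalitytests(np.column_stack([y, x]), maxlag=4, verbose=False)
for lag, res in results.items():
    f_stat, p_val = res[0]["ssr_ftest"][:2]
    print(f"lag {lag}: F = {f_stat:.1f}, p = {p_val:.2g}")
```

In the paper itself, this kind of directed-influence test is applied to BOLD time series from auditory regions; the toy version above just shows why a significant result licenses talk of temporal flow between areas.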

Aphasianniversary: Peter Rommel (1643-1708)

2008 marks the 300th anniversary of the death of Peter Rommel (1643-1708). Who's Peter Rommel? He was a German physician who, along with Johann Schmidt (1624-1690), provided the first detailed descriptions of aphasic disorders (although the term "aphasia" wouldn't be coined for another couple hundred years).

In 1683, Rommel described a patient with severe motor aphasia, but with preserved comprehension, a preserved ability to recite some prayers and Bible verses, and preserved memory for past events. Rommel writes,

"After a fairly strenuous walk which she took after dinner, she suffered a mild delirium and apoplexy with paralysis of the right side. She lost all speech with the exception of the words "yes" and "and." She could say no other word, not even a syllable, with these exceptions; the Lord's Prayer, the Apostles' Creed, some Biblical verses and other prayers, which she could recite verbatim and without hesitation but somewhat precipitously.... Nevertheless, her memory was excellent. She grasped and understood everthing that she saw and heard and she answered questions, even about events in the remote past, by affirmative or negative nods of the head." (from a translation by Benton and Joynt, 1960, Archives of Neurology, 3: 205-221, p. 210).

Thursday, March 20, 2008

Cool ECoG recording study on word perception by Canolty et al.


A study by Ryan Canolty and a host of co-authors, including Bob Knight and our friend Nina Dronkers, published in the online journal Frontiers in Neuroscience, used electrocorticogram (ECoG) recordings to monitor the spatiotemporal dynamics (such a fancy-sounding phrase) of word perception. ECoG strikes me as a method that is highly under-utilized in cognitive neuroscience research. We are all so interested in getting both high spatial and high temporal resolution, yet very few people have used ECoG, which offers both (Dana Boatman is one person who comes to mind as having used this method). The downside is that to record electrical signals directly from the surface of the brain, you have to implant electrode grids, and you can only do this in patients with neurological conditions such as epilepsy. Usual caveats aside regarding the generality of findings from such populations, the method seems to have a lot of promise.

Canolty et al. summarize their main findings more eloquently than I could, so I'll just quote them:

"Word processing involves sequential activation of the post-STG, mid-STG, and STS and these results validate previous spatial results regarding the cortical regions involved in word processing, and, in turn, language comprehension. These neuroanatomical results support lesion and neuroimaging studies which have shown word-related activity to occur in the post-STG, mid-STG, and STS (Belin et al., 2002; Binder et al., 2000; Démonet et al., 1994; Dronkers et al., 2004; Dronkers et al., 2007; Fecteau et al., 2004; Giraud and Price, 2001; Indefrey and Cutler, 2005; Mummery et al., 1999; Petersen et al., 1988; Price et al., 1992; Price et al., 1996; Scott and Wise, 2004; Vouloumanos et al., 2001; Wise et al., 2001; Wong et al., 2002; Zatorre et al., 1992). However, these results also reveal the temporal flow of information between these distinct brain regions and support a component of serial processing in language. This study complements and extends Binder and colleagues (2000) by demonstrating that word processing first activates the post-STG, then the mid-STG, and finally the STS."

This is interesting, particularly because it doesn't show much activity in anterior temporal regions, which some have argued are critical to word-level processing (e.g., Scott et al. 2000). The authors of the ECoG study suggest that the STS, which tended to arrive at the party a bit late relative to STG regions, is involved in word meaning-related functions, because real-word stimuli modulated activity there relative to non-words. They suggested that this finding may contradict the Hickok & Poeppel view of the STS supporting phonological functions. It may, or it may not. For example, their STS activations could reflect activation of networks involved in processing or representing phonological word forms. Whatever the correct interpretation, it is nice to have decent spatiotemporal resolution on the process of word recognition. (Too bad they couldn't implant grids bilaterally!)

Philadelphia Naming Test on-line + Research positions in Philly

Via Myrna Schwartz...

Dear colleague,

We invite you to access and download the Philadelphia Naming Test (PNT) at www.ncrrn.org/assessment/pnt. The PNT is a 175-item picture naming test developed at Moss Rehabilitation Research Institute (MRRI) for the psycholinguistic exploration of lexical access in nonaphasic and aphasic speakers. On the web site you’ll also find instructions for administering and scoring the PNT, relevant references, and contact information in case of questions.

Also, MRRI is recruiting for an Institute Scientist engaged in research in this area. For more information about the position visit www.ncrrn.org/opportunities.

Finally, if you haven't browsed the NCRRN web site (NCRRN.org), I encourage you to do so and to sign up for future email postings. (NCRRN stands for "Neuro-Cognitive Rehabilitation Research Network," a collaboration between researchers at MRRI and the University of Pennsylvania.)

Feel free to forward this email to all interested colleagues.

Best wishes,
Myrna

Wednesday, March 19, 2008

RA/Lab Manager Positions at NYU in Neurolinguistics

Via Alec Marantz & Liina Pylkkänen:

We have two positions here at NYU between Linguistics and Psychology for next year ideally suited for someone in transition between an undergraduate degree or an MA and a PhD program in linguistics, psychology, or neuroscience. Please distribute the advertisement to students that might be interested, and please don't hesitate to let us know about anyone we might contact directly to persuade to apply.

The advertisements will appear quite soon in the Cognitive Neuroscience Society newsletter and on the Linguist List.

Thanks,

Alec Marantz & Liina Pylkkänen
Departments of Psychology and Linguistics, NYU


1. Lab Manager/Research Assistant Position

A full-time Lab Manager position at the NYU Neurolinguistics Laboratory. BA/BS or MA/SM in a cognitive science-related discipline (psychology, linguistics, etc.) or in computer science. Starting date is negotiable, but preferably July 2008.

The lab manager will be involved in all stages of the execution and analysis of magnetoencephalography (MEG) experiments on language processing. Previous experience with MEG or some other cognitive neuroscience method is highly preferred. A background in statistics and some programming ability (especially Matlab) are essential.

To apply, please email CV and names of references to Prof. Liina Pylkkanen (liina.pylkkanen@nyu.edu).

Contact Information:

email: liina.pylkkanen@nyu.edu

Tel: (212) 992-8764 or (212) 998-8386

http://www.psych.nyu.edu/pylkkanen/lab/


2. Research Assistant/Lab Manager Position

Full-time research assistant for Cognitive Neuroscience of Language projects at the KIT/NYU MEG Lab. BA/BS in a cognitive science-related discipline (psychology, linguistics, etc.) or in computer science. Starting date is negotiable, but preferably July 2008.
The RA would help analyze data from MEG and joint MEG/fMRI experiments and help design and program additional experiments. The job includes some responsibility for managing the KIT/NYU MEG lab in NYU's Psychology Department. For 2008-09, research will concentrate on lexical access and morphological decomposition in auditory word perception.

To apply, please email CV and names of references to Prof. Alec Marantz (marantz@nyu.edu)

Contact Information:

email: marantz@nyu.edu

Tel: (212) 998-3593

http://www.psych.nyu.edu/meglab/

What's the best way to integrate lesion and fMRI experiments?

It depends on the theoretical question under investigation. Of course, the general idea behind integration is that one method can provide information that the other can't, and so together they give us a better picture of a given function. Based on my experience with grant proposal submissions combining lesion and fMRI methods, some folks seem to believe that the best approach is to use the same task in the context of both methods. This seems quite reasonable. Healthy subjects may activate, say, the posterior temporal lobe and inferior frontal cortex while listening to words, and we may wonder what role these regions play in the task. We could then examine lesion data, where we might find that lesions to the posterior temporal lobe predict auditory comprehension deficits better than frontal lesions do.

But this complete-parallelism approach doesn't always work, because a given task doesn't always translate well across methods. Consider a simple word-to-picture matching task, which is commonly used in lesion studies. Patients listen to a word and then point to the matching picture in an array containing phonemic and semantic foils. Aphasic patients with unilateral lesions and auditory comprehension deficits tend to make semantic errors on such tasks, indicating a breakdown at some post-phonemic processing level. Lesion data broadly implicate posterior temporal areas. It would be nice to use fMRI to provide further spatial resolution regarding the localization of the disrupted function. But if we simply import the picture-matching task into the magnet, we would see activations associated with all stages of the comprehension process, not just the level that is primarily disrupted in the lesion cases. So to provide the "converging" fMRI evidence we are after, we would have to change the paradigm. For example, we might use semantic priming, or some other post-phonemic task, in fMRI to try to selectively highlight the relevant regions. If we found posterior temporal activations, this would be decent converging evidence.
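
To make the scoring logic concrete, here is a toy sketch of how error types might be classified in such a task -- hypothetical items and trial format, not taken from any of the studies discussed:

```python
# Toy sketch of error scoring in word-to-picture matching
# (hypothetical items): each trial pairs a spoken target with a
# phonemic foil and a semantic foil; errors are classified by the
# type of foil the patient chose.
from collections import Counter

trials = [
    # (spoken target, {picture: type}, patient's choice)
    ("bear", {"bear": "correct", "pear": "phonemic", "wolf": "semantic"}, "wolf"),
    ("cat",  {"cat": "correct",  "hat": "phonemic",  "dog": "semantic"},  "cat"),
    ("goat", {"goat": "correct", "coat": "phonemic", "sheep": "semantic"}, "sheep"),
]

counts = Counter(pictures[choice] for _, pictures, choice in trials)
print(counts)  # Counter({'semantic': 2, 'correct': 1})
```

The point of the toy example is that the behavioral task yields a graded error profile (semantic vs. phonemic), which is exactly the information that a single fMRI contrast on the whole task would blur together.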

In the grant proposals I have submitted, we used not only parallel-task integration approaches (when possible), but also this other form of integration, where the task necessarily has to change to answer the same kind of question. It was the latter case that seemed to raise concerns among some reviewers. We were able to successfully argue our case in subsequent revisions with one proposal, but had less success with similar arguments on a different proposal with a different set of reviewers.

I guess the upshot is (i) there's more than one way to integrate data from multiple methods, and the approach you use depends a lot on the specific questions you're asking, and (ii) if you are writing a grant that proposes cross-method integration AND you are NOT using a parallel-task approach, be very clear about the logic behind your approach, because there does seem to be a parallel-task bias among some reviewers.

Tuesday, March 18, 2008

Grant writing advice

In general, grant proposals that include multiple methods (e.g., fMRI & lesion) put you at a disadvantage compared to single-method proposals.  I've now submitted three grant proposals that included both fMRI and lesion-symptom mapping methods, and reviewed a bunch of similar combo-method proposals from other investigators, and my experience is that they are much less likely to get favorably reviewed.

The funny thing is, there is typically unanimous reviewer praise for the multi-method approach, but then most of the criticism centers on how the multi-method integration doesn't work.  The reason multi-method proposals are more problematic in the review process is obvious: you have to get favorable evaluations of each method separately, and then a positive opinion of the relation between the two.  Triple jeopardy.  You are much better off putting the different methods into separate proposals, and then pointing out in each separate proposal that you are approaching the same issues using a different methodological approach "under a different funding mechanism." This way, reviewers can praise you for your methodological diversity, but no one gets to ding you for how you put the methods together.

Of course, one might argue that if the various methods are integrated in a thoughtful, scientifically justifiable way, it shouldn't matter.  In theory, yes.  But as we all know, reviewers often disagree about the best way to do things.  By including more things to potentially disagree about, you open yourself to more criticism.  And these days, getting dinged on just about any one thing by any one reviewer is enough to kill your score.

This is a sad state of affairs, because investigators who try to do more end up getting penalized. I personally have had modest success with multi-method proposals, getting two funded, although it took until the third submission in both cases and much argumentation about how to integrate across methods. But my experience in this last round of submissions has sharpened my thoughts on the proposal approach. I submitted two grant proposals, one that was straight fMRI and one that combined fMRI and lesion methods. The straight fMRI proposal fared reasonably well on its first round of reviews, whereas the combo proposal is in danger of failing on its third time around, in part because of questions about cross-method integration.  In the future, I'm going to continue to do multi-method research, but I'm going to stick to single-method proposals.  Oh, the games we have to play...

Friday, March 14, 2008

Semantics and Brain -- more on modality specificity

This is a follow up on my last semantics and brain entry concerning modality/language-specificity.

Word-level semantic deficits in aphasia -- usually defined in terms of comprehension errors dominated by semantic confusions -- have been found in some patients to be specific to the auditory-verbal modality, and in other patients to extend at least to the visual modality as well. I would have predicted that the modality-specific deficits would be associated with lesions in the posterior temporal lobe (~MTG). This seems not to be the case, however.

Data presented both in Hart & Gordon (1990, Ann. Neurol., 27:226-31) and in Chertkow et al. (1997, Brain and Language, 58:203-232) suggest that more general semantic deficits -- affecting single-word processing as well as non-verbal object processing -- are associated with the posterior temporal lobe. The image on the left is from Chertkow et al. and shows, in the top panel, the outlines of lesions from their group of 8 aphasics with verbal+non-verbal semantic deficits, and, in the bottom panel, the region of overlap of these lesions (shaded). The image on the right is from Hart & Gordon, and shows the lesion outlines and region of overlap (shaded) of their 3 patients with semantic deficits. Although these are not fancy voxel-based lesion-symptom mapping studies, the similarity of the findings in the two studies makes me think there is something to the anatomical result. Patients with language-specific semantic deficits had this posterior temporal region completely spared; their lesions were all (n=3) in the parietal lobe (lesion outlines for these patients are in the bottom panel of the Chertkow et al. figure on the left).
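
For readers who haven't seen the classic lesion-overlap method these studies use: each patient's lesion is traced and binarized, the masks are stacked, and the voxels lesioned across the whole deficit group define the overlap region. A toy numpy sketch with entirely made-up masks (not the Chertkow et al. or Hart & Gordon data):

```python
# Toy sketch of the classic lesion-overlap method (made-up masks):
# binarize each patient's lesion tracing, stack, and sum voxelwise.
# Voxels lesioned in every patient define the overlap region.
import numpy as np

rng = np.random.default_rng(1)
n_patients, shape = 8, (64, 64, 32)   # 8 patients, toy brain volume

masks = np.zeros((n_patients,) + shape, dtype=bool)
for i in range(n_patients):
    # fake lesion: a jittered box standing in for a traced lesion
    x0, y0, z0 = rng.integers(18, 26), rng.integers(30, 38), rng.integers(8, 14)
    masks[i, x0:x0 + 16, y0:y0 + 12, z0:z0 + 10] = True

overlap = masks.sum(axis=0)           # voxelwise lesion count, 0..8
core = overlap == n_patients          # lesioned in all 8 patients
print("peak overlap:", overlap.max(), "| core voxels:", int(core.sum()))
```

Voxel-based lesion-symptom mapping refines this by statistically relating behavior to lesion status at every voxel, but the simple overlap logic above is the heart of these older studies.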

I honestly don't know what to make of the parietal lobe localization for language-specific semantic deficits; such a finding certainly doesn't fit neatly into our Dual-Stream model. So I'll restrict discussion to the posterior temporal lobe localization of the more general semantic deficits. A number of points can be made in this regard.

1. The posterior temporal lobe is involved in semantic processing at the word/object level. The lesion evidence is quite clear on this point. We can infer it from Wernicke's aphasics, who tend to have lesions in the posterior temporal lobe and typically present with semantic deficits, and now we see it explicitly in two studies of patients with word/object-level semantic deficits.

2. Patients with semantic deficits and posterior temporal lesions tend to have deficits that are not language-specific. Does this mean that the deficits are amodal, affecting the representation of semantic knowledge itself? Not necessarily. It is possible that this region contains distinct networks for accessing semantic knowledge from the auditory-verbal modality on the one hand, and from the visual modality on the other. That is, the general semantic deficit could result from the anatomical proximity of largely independent systems. Or it could be that there is a single system for accessing semantic knowledge that can take input from either the visual or the auditory-verbal modality. Evidence for an access deficit, as opposed to a knowledge-representation deficit, comes from the fact that semantic errors in aphasia tend to be unstable: patients may make an error on a particular item on one trial, and then get it right the next time it is presented. Both of these possibilities are consistent with our proposal that the posterior temporal lobe (MTG region) supports the mapping between sound and meaning, or in more psycholinguistic terms, functions as a "lexical interface." Note that if this network also turns out to support mappings between visual percepts and meaning, that does not disprove our claim; it only means that the region's function is more general than a pure sound-to-meaning mapping system.

3. Single word/object semantic deficits in aphasia are of a different character than the semantic deficits found in semantic dementia. This is a point made explicitly by Jefferies and Lambon Ralph (2006, Brain, 129:2132-47; a paper we did not read for this course), but it is also supported by the tendency for the semantic deficit to be more variable at the item level in aphasia than in semantic dementia, suggesting more of an access problem in aphasia and a representation deficit in SD (correct me, semantic dementia experts, if I got this wrong).



Wednesday, March 12, 2008

Guest entry #432: Bill Idsardi on "A Shaving Mirror?"

OK, so we're sitting around TB-East HQ (i.e. DP's office) this afternoon, trying to design an fMRI experiment with Al Braun and Nuria Abdulsabur, batting around ideas about embodied cognition (search this blog for "mirror neurons") and possible ways to test it.

Then it occurs to us -- we can use secondary sex characteristics and, better yet, blatant gender stereotypes to good effect here. Forget about "lick, pick, kick" (http://www.neuron.org/content/article/abstract?uid=PIIS0896627303008389); use "shave". Then we'll see the somatosensory/motor area for faces light up in men, and the area for legs light up in women (NSFW image deleted).

Post your suggestions for additional stimuli in the comments. Best suggestion wins a Gillette Sensor (your choice: women's or men's).

Monday, March 10, 2008

TB Biography: Harold Goodglass

This entry starts a new, occasional feature of Talking Brains: Bios of the Language Stars. We start with Mr. Aphasia (or shall I say, Dr. Aphasia) himself, Harold Goodglass. I had the opportunity to meet Goodglass a couple of times while I was at Brandeis and then MIT, and quite enjoyed my interaction with him. His clinical intuitions were astounding, and his excitement for the field unbounded. I can't think of anyone who has contributed more to our understanding of aphasia in the last five decades than Harold Goodglass. After all, he wrote the book.

Harold Goodglass (1920-2002)

Dr. Goodglass was born in New York City August 18, 1920, graduated from Townsend Harris High School in 1935, and received a BA in French from City College of New York in 1939. He served in the Army Air Force from 1942 to 1946, and was discharged as a Captain. He then attended New York University, receiving an MA in Psychology in 1948. He earned his PhD in Clinical Psychology from the University of Cincinnati in 1951.

Upon completion of his doctorate, Dr. Goodglass became the first psychologist for the National Veterans Aphasia Center at the VA in Framingham, MA. Among his pioneering research findings was the demonstration that speech is mediated by the left hemisphere in most left-handed people, as in almost all right-handers, thus invalidating the assumption of right-hemisphere dominance. With the research support of the Veterans Administration and the National Institutes of Health he published research articles on disorders of naming in aphasia, on category specific disorders of lexical comprehension and production, on the comprehension of syntax and on the syndrome of agrammatism. He also carried out a program of studies on cerebral dominance. He collaborated with many clinicians and researchers, and in 1960 he developed a standardized aphasia test known as the Boston Diagnostic Aphasia Examination, which has been translated into many languages. He was the author of over 130 research articles, and of the books "Psycholinguistics and Aphasia" (with Sheila Blumstein), "Assessment of Aphasia and Related Disorders" (with Edith Kaplan), "Anomia" (with Arthur Wingfield), and "Understanding Aphasia".

In 1969 he became Director of the NIH-funded Aphasia Research Center, and remained in that post until 1996. He was a founding member of the Academy of Aphasia and the International Neuropsychological Symposium. He established the American Psychological Association's Division 40 (Clinical Neuropsychology) and served as its first president (1979-1980). He was Professor of Neurology (Neuropsychology) at Boston University School of Medicine. In 1996 he was awarded the APA Gold Medal Award for Life Achievement in the Application of Psychology.

Source: www.bu.edu/aphasia/index.html

Saturday, March 8, 2008

Meeting gossip #2: Deutsche Gesellschaft für Sprachwissenschaft

The German Linguistic Society met in Bamberg, Germany last week. About 500 people attended, and many of them (including us) spent much time in the restaurants eating serious amounts of meat (Fränkischer Sauerbraten, Schäuferla, Würstchen, and more meat meat meat). Bamberg is nice: your basic 1000-year-old small German city, two very impressive churches, a nice chill vibe. Great espresso -- who would have thunk it?!

Together with Dietmar Zaefferer from the Ludwig-Maximilians-Universität München, I chaired a session that was -- we thought -- about universals. And -- we also thought -- one speaker sure to stimulate provocative discussion was Dan Everett. But, at the last minute, he cancelled. Go figure ... Dietmar and I had never met, but he persuaded me to do this on the basis of the fact that we went to the same Gymnasium in München, Das Max. And I learned that the German film director Werner Herzog (Fitzcarraldo, Aguirre, etc.) went to our little school.

The challenge for this workshop was to see whether it is possible to have fruitful discussions that bridge anthropology, linguistics, and neuroscience. Given the vigorous recent interest in biolinguistics, could we insert some bio? I think that topic was not really engaged or addressed. However, there were a bunch of interesting lectures on various topics, so it was not too onerous.

While this might be shameless advocacy, I think many attendees would agree that TB_East faculty member Jeff Lidz gave a stellar lecture on acquisition. Read his stuff! He discussed some of his Kannada data as well as recent experiments on artificial language learning. Very good stuff. The most fun and snarky attendee was, I think, Tom Bever. He asked many amusing and insightful questions -- and also made some harsh comments, which were (mostly) deserved. Andrew Nevins gave, in my view, the funniest and liveliest talk -- Andrew needs to switch to decaf if he wants to adjust his clock speed to those around him. Michael Ullman presented his new data on sex differences and the English past tense. There were some linguistics talks that were interesting qua linguistics but failed to connect to anyone outside the immediate minimalist audience. And there were some nice talks about anthropology, but, again, they did not connect to anything in language research. I had a nice time, I enjoyed meeting new colleagues, and I learned a few factoids. But, in my view, the bridging discussions were not had. And the question is whether they can be had at all, or whether that is even desirable. As some readers know, although I work at the interfaces between areas, I am pretty nihilist about these things and like to use the phrase "interdisciplinary cross-sterilization." But let's be optimistic ... Maybe there is a chance for genuine linking hypotheses.

Keeping up with the Jones-Hickoks (TB West)

Between the semantic dementia course -- Thank You, Greg! -- and the travelogue, I can barely keep up with all the reading ... Greg, you are a good citizen, and I have been a slacker.

One of us, DP from TB_East, attended two curious meetings recently. Here's a little update on that.

AAAS in Boston: This meeting is largely for the media; apparently over 900 journalists attended. There were a few sessions relevant to our research interests. Phil Rubin from Haskins chaired a session on language technologies which included Dominic Massaro (Talking Faces) and Justine Cassell (Northwestern University). Massaro presented the work with Baldi, the talking head -- a cool tool for investigating audio-visual speech, but one that seems a little behind the times relative to state-of-the-art animation and visualization. Given the quality of animation in current cinema, it should be possible to generate precisely, analytically specified faces that give more realistic/naturalistic output. That being said, Massaro has been a leading figure in the investigation of audiovisual speech perception, and (whether one likes his Baldi figure or not) anyone studying AV speech certainly is (or should be) aware of how Massaro's FLMP model handles multi-sensory integration. Justine Cassell presented some provocative data on how children interact with avatar-style computerized friends onscreen. She applied her ideas about 'embodied conversational agents' to the interaction between autistic children and the onscreen partner. A little puzzling but fascinating. The work is not yet published, but stay tuned.
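
For readers who don't know the FLMP (Fuzzy Logical Model of Perception): each modality is treated as an independent source of continuous support for a response alternative, and the sources are combined multiplicatively and then normalized across alternatives. A minimal sketch of that integration rule -- my gloss of the published equation, not Massaro's own code:

```python
# Sketch of the FLMP's core integration rule (a gloss, not Massaro's
# implementation): each modality supplies an independent degree of
# support in [0, 1] for an alternative (say, /ba/ vs. /da/); support
# is combined multiplicatively, then normalized across alternatives.
def flmp_p_ba(auditory: float, visual: float) -> float:
    ba = auditory * visual
    da = (1.0 - auditory) * (1.0 - visual)
    return ba / (ba + da)

print(flmp_p_ba(0.5, 0.1))   # ambiguous audio + clear visual /da/ -> 0.1
print(flmp_p_ba(0.9, 0.9))   # two agreeing sources -> ~0.99
```

Note how an ambiguous auditory input (0.5) gets captured by a clear visual input: that is the model's account of McGurk-style audiovisual effects.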

I chaired a session on brain and speech that had three interesting talks. First, Pat Kuhl presented her program of research on language development/speech perception, the highlight being the new baby MEG scanner that Pat apparently convinced the Finnish MEG manufacturer to build. Pictures of babies in an MEG machine ... how can you go wrong? I am looking forward to seeing the new data coming from this approach. Jack Gandour presented a lot of data on the neural basis of tone language perception and comprehension. Jack is arguably the world's leading expert on the cognitive neuroscience of tone languages, and a 30-minute presentation cannot do justice to the huge range of data he has on these issues. Finally, former TB_East graduate student Nina Kazanina presented some of her recent work, published last year in PNAS. Nina's paper (with TB_East faculty Bill Idsardi and Colin Phillips) is called "The influence of meaning on the perception of speech sounds" and uses a clever cross-linguistic design (Korean, Russian) in the context of a mismatch study to test how native phonology shapes early auditory responses. Nina is now on the faculty of the University of Bristol, and we are all very proud of her.

The session in Boston that really got my blood pressure up was called "The mind of a tool maker," and concerned -- allegedly -- the evolution of language and cognition. A very high-powered cast, a terrible session. The cast: Lewontin, Berwick, Walsh, Hauser, Deacon, as well as some other folks I did not know, and whose performance did not make me want to run out and read their work (e.g., Mimi Lam, Dean Falk). There were, to be sure, some sensible ideas buried in there, and one genuinely good talk, by Marc Hauser. Among other reasons, it stood out as good (contrast enhancement) because (a) he stayed within his allotted time, (b) the talk had a point/hypothesis, and (c) the work actually related to the topic of the session. Berwick had an interesting idea about FoxP2, a really nice deconstruction/debunking based on a computational analysis, and Deacon presented some interesting ideas -- but too many and too scattered. But the bottom line is this: the study and discussion of the evolution of cognition and language requires extreme caution, subtlety, rigor, nuance, a high-pass filter for bullshit, and so on and so forth. And, alas, the level of speculation and pure unadulterated paleo-nonsense was off the scale. This session made me appreciate why the Société de Linguistique de Paris forbade language evolution as a topic. The audience deserved better. My favorite line: the organizer of the workshop, Dr. Lam, said in her opening remarks that one reason she wanted to have this workshop was that she had such a hard time getting her ideas on the evolution of cognition published .... Yikes!

Friday, March 7, 2008

Semantics and Brain -- Do modality/language-specific semantic deficits exist?

The goal of the last set of semantics and brain readings was to determine whether language-specific semantic deficits occur, and if so, whether they are associated with posterior temporal lobe regions that David and I have proposed to be involved in "lexical-interface" processes. The readings have provided relatively clear answers to these two questions: Yes and no, respectively.

First, some background. There is plenty of evidence supporting the view that word-level comprehension deficits in aphasia are predominantly semantic in nature. For example, such patients more often make semantic than phonemic errors in auditory word-to-picture matching tasks (see Baker et al. 1981, Neuropsychologia, 19:1-15; Gainotti et al., 1982, Acta Neurol. Scandinav., 66:652-65). Such deficits are loosely associated with posterior temporal regions in that Wernicke's aphasics present with such semantic deficits (Baker et al.) and Wernicke's aphasia is associated with temporal lobe lesions.

But are these deficits modality-specific? That is, are they restricted to the auditory-verbal modality, or are they more general? The evidence demonstrates that both patterns exist in aphasics with single-word comprehension deficits.

Of the papers we read this week, Chertkow et al. (1997, Brain and Language, 58:203-232) provide the strongest evidence. They tested a group of 16 aphasics of various diagnostic categories (global, Broca's, Wernicke's) using a range of tasks. Visual perception tasks were used to rule out visual perception as a source of their problems (all were in the normal range). Half of the subjects -- all of the Wernicke's and global aphasics, and none of the Broca's patients -- were impaired in auditory word-to-picture matching when the picture choice array comprised a set of semantically related items. The same pattern held in another comprehension test that required subjects to answer forced-choice conceptual knowledge questions about heard words (e.g., lemon: is it used with coffee or tea?). A forced-choice picture version of this latter task was also administered (picture of a lemon: subject must indicate whether it goes with a picture of coffee or of tea). Five of the 8 patients with single-word impairments were equally impaired on the non-verbal version of this task, but 3 patients (including both of the Wernicke's aphasics) improved to normal performance levels on the non-verbal task. Conclusion: in some patients with single-word comprehension deficits, the semantic impairment is language-specific; in other patients, it is more general.

The paper by Goodglass et al. (1997, Brain and Language, 56:138-58) used a very different paradigm, concept similarity judgments, to assess modality-specific effects. They found that making concept similarity judgments across modalities (hear: skirt, see: jacket, decide: same category or not) was harder for aphasics (measured in terms of RTs) than making the same judgments in an all-visual format. Crucially, the reverse held for control subjects: auditory-visual pairs were judged more quickly than visual-visual pairs. Presumably, the cross-modality difficulty in aphasia stems from trouble accessing semantic representations from auditory-verbal input.

So it seems that language-specific word-level semantic deficits do occur in aphasia, even if it is also the case that some aphasics have deficits that extend beyond the linguistic system. This is what we'd expect. But where are the lesions associated with the language-specific vs. the non-specific deficits? I'll discuss that in the next post.

Thursday, March 6, 2008

Comment on Conceptual Organization from Mike Bonner

I (GH) got a nice email from Mike Bonner, who is in Murray Grossman's lab at Penn. There is some useful information here including highly relevant citations, so I'm passing it along, with Mike's permission. Thanks Mike!

**************************
Hi Greg,

I'm a grad student in Murray Grossman's lab. I enjoy reading your Talking Brains blog. You've brought up many of the same issues that I and others in Murray's lab have with the semantic memory literature. I just read your March 4th post. You finished with:

"What kind of evidence do we need to really settle the issue? I would say that a convincing finding would be something like lesion (or TMS) evidence that when a person damages the lip area of motor cortex, they lose the concept KISS. Is there any such evidence out there?"

I'm glad that you raised this point. It's exactly the issue that I'm hoping to address in my thesis work. I'm writing up a prelim now. I wanted to point to this Pulvermüller TMS study, which may be of interest to you:

Pulvermüller F, Hauk O, Nikulin VV, Ilmoniemi RJ.
Functional links between motor and language systems.
Eur J Neurosci. 2005 Feb;21(3):793-7.

They facilitated speed of response in lexical decision for leg words by activating what should be the leg area of motor cortex. Their results for arm words are a bit dubious, though.

There's also a study suggesting that a deficit for concepts (both verbs and nouns) involving manipulation knowledge correlates with damage to hand motor areas of cortex:

Arévalo A, Perani D, Cappa SF, Butler A, Bates E, Dronkers N.
Action and object processing in aphasia: from nouns and verbs to the effect of manipulability.
Brain Lang. 2007 Jan;100(1):79-94.

Furthermore, results on action knowledge deficits in motor neuron disease (MND) and on the reversal of the concreteness effect in semantic dementia (SD) may be relevant (corresponding with damage to motor areas in MND and visual areas in SD). The upshot for me is that the evidence is still insufficient. I'd love to know if you come across any other relevant studies.

Wednesday, March 5, 2008

Where's Irvine?

When I give talks outside of Southern California, I find that very few people know where Irvine is, beyond the notion that it is "somewhere in California." This is unfortunate for us here at UCI, because not knowing where we are probably affects the likelihood that students and job seekers will think of Irvine as a place to go for graduate training or work. So, for future reference, here is some geographical information on UC Irvine.

We are located along the Southern California coast, in Orange County (The OC -- that's right, we're UCOC). We are between LA, 45 miles to the north, and San Diego, 75 miles to the south.


The UC Irvine campus (outlined in the image below) is located a couple of miles from the coast, near Newport Beach and Laguna Beach.

Here are a couple of photos of our local beach communities.

Laguna Beach:

Newport Beach:

Here is a picture of Aldrich Park, at the center of the UC Irvine campus:

And finally, a photo of our building on campus:


Ok, so now you know.

Tuesday, March 4, 2008

Semantics and Brain -- Comments on Caramazza; Martin; Hart

Outside of the semantic dementia literature, most of the discussion of conceptual/semantic organization in the brain centers on category-specific deficits. There is, of course, good evidence for dissociations in the ability to perform tasks (typically, but not limited to, naming) involving one conceptual category versus another. The primary cleavage in conceptual categories in this research seems to be living things vs. artifacts.

The readings we surveyed provided nice overviews of both the data and the various theories put forth to account for them. Caramazza & Mahon (2003, TICS, 7:354-61), for example, discussed the relative strengths and weaknesses of the Sensory/Functional Theory (categories are organized around sensory and functional systems), the Domain-Specific Hypothesis (some categories are organized into separate modules as a function of evolutionary pressures to process information in those categories), and the Conceptual-Structure account (a semantic feature-based account). Hart et al. (2007, J. Int. Neuropsych. Soc., 13:865-80) provide an even more exhaustive review of the various theories, complete with handy crib-sheet tables. And Martin (2007, Annu. Rev. Psychol., 58:25-45) gives a nice summary of the range of imaging data on the topic. The Caramazza & Mahon (neuropsych-oriented) and Martin (imaging-oriented) papers are particularly useful if you want a quick overview of the field.

The conclusion one reaches, though, after poring over all these papers, is that there are a number of different ways to account for the available data, and no obvious way to choose between them. For example, Martin summarizes an impressive range of findings from neuroimaging that seem to show that the same sensory-motor systems involved in processing a given bit of information are involved in representing that bit of information in conceptual memory. But from functional imaging alone, it is very hard to know whether these activations reflect the substrate of semantic memory, or merely the sensory-motor associates of the "real" concepts stored somewhere else. To compound matters, Caramazza & Mahon argue explicitly that the neuropsychological data do not support a sensory-functional hypothesis, and suggest instead that "the first-order constraint on the organization of conceptual knowledge is object domain [i.e., animals, fruits/veggies, conspecifics, and possibly tools]" (p. 356). But even these authors admit a possible "fine-grained" organization involving other properties, or that there may be two independent organizational levels.

If you think about this issue hard enough, it all seems to come back to the question: what is a concept? If you believe a concept is nothing more than its sensory-motor/functional properties (for concepts that fit those dimensions), then you can point to imaging data showing activation of the lip area of motor cortex when you read the word "kiss," and many other findings of this sort. If you believe a concept is something more complicated, you can point to complex patterns of associations/dissociations in the neuropsychological realm, and you could write off the imaging data as a peripheral association of the core of the concept: lip movements are associated with the concept KISS but don't define it, or even contribute substantively to it.

What kind of evidence do we need to really settle the issue? I would say that a convincing finding would be something like lesion (or TMS) evidence that when a person damages the lip area of motor cortex, they lose the concept KISS. Is there any such evidence out there?