News and views on the neural organization of language moderated by Greg Hickok and David Poeppel
Monday, March 31, 2008
Mirror Neuron Course: Some Mirror Neuron Hype
Mirror Neuron Graduate Course Starts Today
As with our course on semantics and brain, this is a live course that I am teaching at TB West (UC Irvine), Mondays 1-4. I will post our readings, as well as summaries of our in-class discussion. Please feel free to jump in with your own comments or suggestions.
Today's meeting will review evidence on task effects in mapping neural systems involved in speech perception, as this point is critical in evaluating the role of frontal cortex in speech tasks. The main points have already been summarized on this blog in a thread on "Meta-linguistic tasks."
Sunday, March 30, 2008
Post-doc opportunity at UMD: cortical plasticity/visual system
Postdoctoral position to investigate human cortical plasticity, principally using MEG, and to participate in the development of therapeutic treatments for human amblyopia. A Ph.D. in neuroscience or a related discipline is required; experience in non-invasive imaging (MEG, fMRI, EEG) is desired. Please send a CV, a summary of research experience, and contact information for three references.
Elizabeth M. Quinlan, Ph.D. OR David Poeppel, Ph.D.
Neuroscience and Cognitive Sciences Program
Department of Biology
University of Maryland
College Park, MD 20742
equinlan@umd.edu
dpoeppel@umd.edu
Here are two recent publications from Betsy Quinlan's lab that illustrate some of the relevant phenomena:
He HY, Hodos W, Quinlan EM. (2006) Visual deprivation reactivates rapid ocular dominance plasticity in adult visual cortex. J Neurosci. 26:2951-5.
He HY, Ray B, Dennis K, Quinlan EM. (2007) Experience-dependent recovery of vision following chronic deprivation amblyopia. Nat Neurosci. 10:1134-6.
Friday, March 28, 2008
The Motor Theory of Speech Perception Reviewed
Wednesday, March 26, 2008
TB Journal Club: Effective and Structural Connectivity in the Human Auditory Cortex
So everyone have a look and be ready to discuss next week!
Effective and Structural Connectivity in the Human Auditory Cortex
Jaymin Upadhyay, Andrew Silver, Tracey A. Knaus, Kristen A. Lindgren,
Mathieu Ducros, Dae-Shik Kim, and Helen Tager-Flusberg
J. Neurosci. 2008;28(13):3341-3349
http://www.jneurosci.org/cgi/content/abstract/28/13/3341?etoc
Aphasianniversary: Peter Rommel (1643-1708)
In 1683, Rommel described a patient with severe motor aphasia, but with preserved comprehension, preserved ability to recite some prayers and Bible verses, and with preserved memory for past events. Rommel writes,
"After a fairly strenuous walk which she took after dinner, she suffered a mild delirium and apoplexy with paralysis of the right side. She lost all speech with the exception of the words "yes" and "and." She could say no other word, not even a syllable, with these exceptions; the Lord's Prayer, the Apostles' Creed, some Biblical verses and other prayers, which she could recite verbatim and without hesitation but somewhat precipitously.... Nevertheless, her memory was excellent. She grasped and understood everthing that she saw and heard and she answered questions, even about events in the remote past, by affirmative or negative nods of the head." (from a translation by Benton and Joynt, 1960, Archives of Neurology, 3: 205-221, p. 210).
Thursday, March 20, 2008
Cool ECoG recording study on word perception by Canolty et al.
A study by Ryan Canolty and a host of co-authors, including Bob Knight and our friend Nina Dronkers, published in the online journal Frontiers in Neuroscience, used electrocorticogram (ECoG) recordings to monitor the spatiotemporal dynamics (such a fancy-sounding phrase) of word perception. ECoG strikes me as a method that is highly under-utilized in cognitive neuroscience research. We are all so interested in getting both high spatial and temporal resolution, yet very few people have used ECoG, which has both (Dana Boatman is one person who comes to mind as having used this method). The downside is that to record electrical signals directly from the surface of the brain, you have to implant electrode grids, and you can only do this in patients with neurological diseases, such as epilepsy. Usual caveats aside regarding the generality of findings from such populations, the method seems to have a lot of promise.
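(An aside for the methods-curious: what "spatiotemporal dynamics" typically cashes out as in ECoG work is a per-electrode high-gamma power trace and its latency. Here is a minimal sketch of that computation on purely synthetic data -- the region labels, onset times, and filter settings are illustrative assumptions, not Canolty et al.'s actual pipeline.)

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000                          # sampling rate (Hz)
t = np.arange(-0.2, 0.6, 1 / fs)   # peri-stimulus time axis (s)
rng = np.random.default_rng(0)

def high_gamma_power(trials, fs, band=(70, 150)):
    """Band-pass each trial, take the Hilbert envelope, average over trials."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
    envelope = np.abs(hilbert(filtfilt(b, a, trials, axis=-1), axis=-1))
    return envelope.mean(axis=0)

# Synthetic data: 50 trials per "electrode", noise plus a 100 Hz burst whose
# onset differs by region (post-STG earliest, STS latest), as in the quote below.
onsets = {'post-STG': 0.10, 'mid-STG': 0.15, 'STS': 0.25}
for region, onset in onsets.items():
    burst = np.sin(2 * np.pi * 100 * t) * np.exp(-((t - onset) / 0.03) ** 2)
    trials = rng.normal(0, 1, (50, t.size)) + 5 * burst
    power = high_gamma_power(trials, fs)
    print(f"{region}: high-gamma peak at {t[np.argmax(power)] * 1000:.0f} ms")
```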
Canolty et al. summarize their main findings more eloquently than I could, so I'll just quote them:
"Word processing involves sequential activation of the post-STG, mid-STG, and STS and these results validate previous spatial results regarding the cortical regions involved in word processing, and, in turn, language comprehension. These neuroanatomical results support lesion and neuroimaging studies which have shown word-related activity to occur in the post-STG, mid-STG, and STS (Belin et al., 2002; Binder et al., 2000; Démonet et al., 1994; Dronkers et al., 2004; Dronkers et al., 2007; Fecteau et al., 2004; Giraud and Price, 2001; Indefrey and Cutler, 2005; Mummery et al., 1999; Petersen et al., 1988; Price et al., 1992; Price et al., 1996; Scott and Wise, 2004; Vouloumanos et al., 2001; Wise et al., 2001; Wong et al., 2002; Zatorre et al., 1992). However, these results also reveal the temporal flow of information between these distinct brain regions and support a component of serial processing in language. This study complements and extends Binder and colleagues (2000) by demonstrating that word processing first activates the post-STG, then the mid-STG, and finally the STS."
This is interesting, particularly because it doesn't show much activity in anterior temporal regions, which some have argued are critical to word-level processing (e.g., Scott et al. 2000). The authors of the ECoG study suggest that the STS, whose activation tended to arrive at the party a bit late relative to STG regions, is involved in word meaning-related functions, because real word stimuli modulated activity there relative to non-words. They suggest that this finding may contradict the Hickok & Poeppel view of the STS supporting phonological functions. It may, or may not. For example, their STS activations could reflect activation of networks involved in processing or representing phonological word forms. Whatever the correct interpretation, it is nice to have decent spatiotemporal resolution in the process of word recognition. (Too bad they couldn't implant grids bilaterally!)
Philadelphia Naming Test on-line + Research positions in Phily
Dear colleague,
We invite you to access and download the Philadelphia Naming Test (PNT) at www.ncrrn.org/assessment/pnt. The PNT is a 175-item picture naming test developed at Moss Rehabilitation Research Institute (MRRI) for the psycholinguistic exploration of lexical access in nonaphasic and aphasic speakers. On the web site you’ll also find instructions for administering and scoring the PNT, relevant references, and contact information in case of questions.
Also, MRRI is recruiting for an Institute Scientist engaged in research in this area. For more information about the position visit www.ncrrn.org/opportunities.
Finally, if you haven't browsed the NCRRN web site (NCRRN.org), I encourage you to do so and to sign up for future email postings. (NCRRN stands for "Neuro-Cognitive Rehabilitation Research Network", a collaboration between researchers at MRRI and the University of Pennsylvania.)
Feel free to forward this email to all interested colleagues.
Best wishes,
Myrna
Wednesday, March 19, 2008
RA/Lab Manager Positions at NYU in Neurolinguistics
We have two positions here at NYU between Linguistics and Psychology for next year, ideally suited for someone in transition between an undergraduate degree or an MA and a PhD program in linguistics, psychology, or neuroscience. Please distribute the advertisement to students who might be interested, and please don't hesitate to let us know about anyone we might contact directly to persuade to apply.
The advertisements will appear quite soon in the Cognitive Neuroscience Society newsletter and on the Linguist List.
Thanks,
Alec Marantz & Liina Pylkkänen
Departments of Psychology and Linguistics, NYU
1. Lab Manager/Research Assistant Position
A full-time Lab Manager position at the NYU Neurolinguistics Laboratory. BA/BS or MA/SM in a cognitive science-related discipline (psychology, linguistics, etc.) or in computer science. Starting date is negotiable, but preferably July 2008.
The lab manager will be involved in all stages of the execution and analysis of magnetoencephalography (MEG) experiments on language processing. Previous experience with MEG or some other cognitive neuroscience method is highly preferred. A background in statistics and some programming ability (especially Matlab) are essential.
To apply, please email CV and names of references to Prof. Liina Pylkkanen (liina.pylkkanen@nyu.edu).
Contact Information:
email: liina.pylkkanen@nyu.edu
Tel: (212) 992-8764 or (212) 998-8386
http://www.psych.nyu.edu/pylkkanen/lab/
2. Research Assistant/Lab Manager Position
Full-time research assistant for Cognitive Neuroscience of Language projects at the KIT/NYU MEG Lab. BA/BS in a cognitive science-related discipline (psychology, linguistics, etc.) or in computer science. Starting date is negotiable, but preferably July 2008.
The RA would help analyze data from MEG and joint MEG/fMRI experiments and help design and program additional experiments. The job includes some responsibility for managing the KIT/NYU MEG lab in NYU's Psychology Department. For 2008-09, research will concentrate on lexical access and morphological decomposition in auditory word perception.
To apply, please email CV and names of references to Prof. Alec Marantz (marantz@nyu.edu)
Contact Information:
email: marantz@nyu.edu
Tel: (212) 998-3593
http://www.psych.nyu.edu/meglab/
What's the best way to integrate lesion and fMRI experiments?
One obvious approach is parallelism: run the same task, on the same materials, in both a lesion study and an fMRI study. But this complete-parallelism approach doesn't always work, because a given task doesn't always translate well across methods. Consider a simple word-to-picture matching task, which is commonly used in lesion studies. Patients listen to a word and then point to the matching picture in an array containing phonemic and semantic foils. Aphasic patients with unilateral lesions and auditory comprehension deficits tend to make semantic errors on such tasks, indicating a breakdown at some post-phonemic processing level. Lesion data broadly implicate posterior temporal areas. It would be nice to use fMRI to provide further spatial resolution regarding the localization of the disrupted function. But if we simply import the picture-matching task into the magnet, we would see activations associated with all stages of the comprehension process, not just the level that is primarily disrupted in the lesion cases. So to provide the "converging" fMRI evidence that we are after, we would have to change the paradigm. For example, we might use semantic priming, or some other post-phonemic task, in fMRI to try to selectively highlight relevant regions. If we found posterior temporal activations, this would be decent converging evidence.
In the grant proposals I have submitted, we used not only parallel-task integration approaches (when possible), but also this other form of integration, where the task necessarily has to change to answer the same kind of question. It was the latter case that seemed to raise concerns among some reviewers. We were able to successfully argue our case in subsequent revisions with one proposal, but had less success with similar arguments with a different proposal and a different set of reviewers.
I guess the upshot is (i) there's more than one way to integrate data from multiple methods, where the approach you use depends a lot on the specific questions you're asking, and (ii) if you are writing a grant that proposes cross-method integration AND you are NOT using a parallel-task approach, be sure to be very clear about the logic behind your approach because there does seem to be a parallel task bias among some reviewers.
Tuesday, March 18, 2008
Grant writing advice
The funny thing is, there is typically unanimous reviewer praise for the multi-method approach, but then most of the criticism centers on how the multi-method approach doesn't work. The reason why multi-method proposals are more problematic in the review process is obvious: you have to get favorable evaluations of each method separately, and then get a positive opinion of the relation between the two. Triple jeopardy. You are much better off putting the different methods into separate proposals, and then pointing out in each separate proposal that you are approaching the same issues using a different methodological approach "under a different funding mechanism." This way, reviewers can praise you for your methodological diversity, but no one gets to ding you for how you put them together.
Of course, one might argue that if the various methods are integrated in a thoughtful, scientifically justifiable way, it shouldn't matter. In theory, yes. But as we all know, reviewers often disagree about the best way to do things. By including more things to potentially disagree about, you open yourself to more criticism. And these days, getting dinged on just about any one thing by any one reviewer is enough to kill your score.
This is a sad state of affairs because investigators who try to do more end up getting penalized. I personally have had modest success with multi-method proposals, getting two funded, although it took until the third submission in both cases and much argumentation about how to integrate across methods. But my experience in this last round of submissions has sharpened my thoughts on the proposal approach. I submitted two grant proposals, one that was straight fMRI and one that was combined fMRI and lesion. The straight fMRI proposal fared reasonably well on its first round of reviews, whereas the combo proposal is in danger of failing on its third time around, in part because of questions about cross-method integration. In the future, I'm going to continue to do multi-method research, but I'm going to stick to single-method proposals. Oh, the games we have to play...
Friday, March 14, 2008
Semantics and Brain -- more on modality specificity
Word-level semantic deficits in aphasia -- usually defined in terms of comprehension errors dominated by semantic confusions -- have been found in some patients to be specific to the auditory-verbal modality, and in other patients to extend at least to the visual modality as well. I would have predicted that the modality-specific deficits would be associated with lesions in the posterior temporal lobe (~MTG). This seems not to be the case, however.
Data presented both in Hart & Gordon (1990, Ann. Neurol., 27:226-31) and in Chertkow et al. (1997, Brain and Language, 58:203-232) suggest that more general semantic deficits -- affecting single word processing as well as non-verbal object processing -- are associated with the posterior temporal lobe. The image on the left is from Chertkow et al. and shows, in the top panel, the outline of lesions from their group of 8 aphasics with verbal+non-verbal semantic deficits, and, in the bottom panel, the region of overlap of these lesions (shaded). The image on the right is from Hart & Gordon, and shows the lesion outlines and region of overlap (shaded) of their 3 patients with semantic deficits. Although these are not fancy voxel-based lesion-symptom mapping studies, the similarity of findings in the two studies makes me think there is something to the anatomical findings. Patients with language-specific semantic deficits had this posterior temporal region completely spared; their lesions were all (n=3) in the parietal lobe (lesion outlines of these patients are in the bottom panel of the Chertkow et al. figure on the left).
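(For readers who haven't seen how such overlap figures are constructed, here is a minimal sketch of the underlying computation: binary lesion masks are summed voxelwise, and the region shared by all, or most, patients is shaded. The masks below are toy 2D arrays standing in for registered lesion volumes; everything is invented for illustration.)

```python
import numpy as np

def overlap_map(masks):
    """Voxelwise count of how many patients have a lesion at each location."""
    return np.sum(np.stack(masks), axis=0)

# Toy "lesions": three patients, each a shifted rectangle on a 10x10 grid.
masks = []
for shift in (0, 1, 2):
    m = np.zeros((10, 10), dtype=int)
    m[3 + shift:8 + shift, 4:9] = 1    # hypothetical lesion extent
    masks.append(m)

counts = overlap_map(masks)
shared = counts == len(masks)          # voxels lesioned in every patient
print(f"{int(shared.sum())} voxels are lesioned in all {len(masks)} patients")
```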
I honestly don't know what to make of the parietal lobe localization for language-specific semantic deficits; such a finding certainly doesn't fit neatly into our Dual-Stream model. So I'll restrict discussion to the posterior temporal lobe localization of the more general semantic deficits. A number of points can be made in this regard.
1. The posterior temporal lobe is involved in semantic processing at the word/object level. The lesion evidence is quite clear on this point. We can infer this from Wernicke's aphasics, who tend to have lesions in the posterior temporal lobe and typically present with semantic deficits, and now we see it explicitly in two studies of patients with word/object-level semantic deficits.
2. Patients with semantic deficits and posterior temporal lesions tend to have deficits that are not linguistic-specific. Does this mean that the deficits are amodal, affecting the representation of semantic knowledge? Not necessarily. It is possible that this region contains distinct networks for accessing semantic knowledge from the auditory-verbal modality on the one hand, and from the visual modality on the other. That is, the general semantic deficit could result from the anatomical proximity of largely independent systems. Or it could be that there is a single system that accesses semantic knowledge and can take input from either the visual or auditory-verbal modalities. Evidence for an access deficit, as opposed to a knowledge representation deficit, comes from the fact that semantic errors in aphasia tend to be unstable: patients may make an error on a particular item on one trial, and then get it right the next time they are presented with it. Both of these possibilities are consistent with our proposal that the posterior temporal lobe (MTG region) supports the mapping between sound and meaning, or in more psycholinguistic terms, functions as a "lexical interface." Note that if this network also turns out to support mappings between visual percepts and meaning, it does not disprove our claim; it only means that the region's function is more general than a pure sound-to-meaning mapping system.
3. Single word/object semantic deficits in aphasia are of a different character than semantic deficits found in semantic dementia. This is a point made explicitly by Jefferies and Lambon Ralph (a paper we did not read for this course; 2006, Brain, 129:2132-47), but it is also supported by the tendency for the semantic deficit to be more variable at the item level in aphasia than in semantic dementia, suggesting more of an access problem in aphasia and a representational deficit in SD (correct me, semantic dementia experts, if I got this wrong).
Wednesday, March 12, 2008
Guest entry #432: Bill Idsardi on "A Shaving Mirror?"
Then it occurs to us--we can use secondary sex characteristics and, better yet, blatant gender stereotypes to good effect here. Forget about "lick, pick, kick" (http://www.neuron.org/content/article/abstract?uid=PIIS0896627303008389), use "shave". Then we'll see the somatosensory/motor area for faces light up in men, and the area for legs light up in women (NSFW image deleted).
Post your suggestions for additional stimuli in the comments. Best suggestion wins a Gillette Sensor (your choice: women's or men's).
Monday, March 10, 2008
TB Biography: Harold Goodglass
This entry starts a new, occasional feature of Talking Brains: Bios of the Language Stars. We start with Mr. Aphasia (or shall I say, Dr. Aphasia) himself, Harold Goodglass. I had the opportunity to meet Goodglass a couple of times while I was at Brandeis and then MIT, and quite enjoyed my interaction with him. His clinical intuitions were astounding, and his excitement for the field unbounded. I can't think of anyone who has contributed more to our understanding of aphasia in the last five decades than Harold Goodglass. After all, he wrote the book.
Harold Goodglass (1920-2002)
Dr. Goodglass was born in New York City on August 18, 1920, graduated from Townsend Harris High School in 1935, and received a BA in French from City College of New York in 1939. He served in the Army Air Force from 1942 to 1946, and was discharged as a Captain. He then attended New York University, receiving an MA in Psychology in 1948. He earned his PhD in Clinical Psychology from the University of Cincinnati in 1951.
Upon completion of his doctorate, Dr. Goodglass became the first psychologist for the National Veterans Aphasia Center at the VA in Framingham, MA. Among his pioneering research findings was the demonstration that speech is mediated by the left hemisphere in most left-handed people, as in almost all right-handers, thus invalidating the assumption of right-hemisphere dominance in left-handers. With the research support of the Veterans Administration and the National Institutes of Health he published research articles on disorders of naming in aphasia, on category-specific disorders of lexical comprehension and production, on the comprehension of syntax, and on the syndrome of agrammatism. He also carried out a program of studies on cerebral dominance. He collaborated with many clinicians and researchers, and in 1960 he developed a standardized aphasia test known as the Boston Diagnostic Aphasia Examination, which has been translated into many languages. He was the author of over 130 research articles, and of the books "Psycholinguistics and Aphasia" (with Sheila Blumstein), "Assessment of Aphasia and Related Disorders" (with Edith Kaplan), "Anomia" (with Arthur Wingfield), and "Understanding Aphasia".
In 1969 he became Director of the NIH-funded Aphasia Research Center, and remained in that post until 1996. He was a founding member of the Academy of Aphasia and the International Neuropsychological Symposium. He established the American Psychological Association's Division 40 (Clinical Neuropsychology) and served as its first president (1979-1980). He was Professor of Neurology (Neuropsychology) at Boston University School of Medicine. In 1996 he was awarded the APA Gold Medal Award for Life Achievement in the Application of Psychology.
Source: www.bu.edu/aphasia/index.html
Saturday, March 8, 2008
Meeting gossip #2: Deutsche Gesellschaft für Sprachwissenschaft
Together with Dietmar Zaefferer from the Ludwig-Maximilians-Universität München, I chaired a session that was -- we thought -- about universals. And -- we also thought -- one speaker sure to stimulate provocative discussion was Dan Everett. But, at the last minute, he cancelled. Go figure ... Dietmar and I had never met, but he persuaded me to do this on the basis of the fact that we went to the same Gymnasium in München, Das Max. And I learned that the German film director Werner Herzog (Fitzcarraldo, Aguirre, etc.) went to our little school.
The challenge for this workshop was to see whether it is possible to have fruitful discussions that bridge anthropology, linguistics, and neuroscience. Given the vigorous recent interest in biolinguistics, could we insert some bio? I think that the topic was not really engaged or addressed. However, there were a bunch of interesting lectures on various topics, so it was not too onerous.
While this might be shameless advocacy, I think many attendees would agree that TB_East faculty Jeff Lidz gave a stellar lecture on acquisition. Read his stuff! He discussed some of his Kannada data as well as recent experiments on artificial language learning. Very good stuff. The most fun and snarky attendee was, I think, Tom Bever. He asked many amusing and insightful questions -- and also made some harsh comments, which were (mostly) deserved. Andrew Nevins gave, in my view, the funniest and liveliest talk -- Andrew needs to switch to decaf if he wants to adjust his clockspeed to those around him. Michael Ullman presented his new data on sex differences and the English past tense. There were some linguistics talks that were interesting qua linguistics but failed to connect to anyone outside of the immediate minimalist audience. And there were some nice talks about anthropology, but, again, they did not connect to anything in language research. I had a nice time, I enjoyed meeting new colleagues, and I learned a few factoids. But, in my view, the bridging discussions were not had. And the question is whether they can be had at all, or if that is even desirable. As some readers know, although I work at the interfaces between areas, I am pretty nihilistic about these things and like to use the phrase "interdisciplinary cross-sterilization." But let's be optimistic ... Maybe there is a chance for genuine linking hypotheses.
Keeping up with the Jones-Hickoks (TB West)
One of us, DP from TB_East, attended two curious meetings recently. Here's a little update on that.
AAAS in Boston: This meeting is largely for the media; apparently over 900 journalists attended. There were a few sessions that were relevant to our research interests. Phil Rubin from Haskins chaired a session on language technologies which included Dominic Massaro (Talking Faces) and Justine Cassell (Northwestern University). Massaro presented the work with Baldi, the talking head -- which is a cool tool to investigate audio-visual speech but seems a little bit behind the times in terms of state-of-the-art animation and visualization. Given the quality of animation in current cinema, it should be possible to generate analytically precisely specified faces that give more realistic/naturalistic output. That being said, Massaro has been a leading figure in the investigation of audiovisual speech perception, and (whether one likes his Baldi figure or not) anyone studying AV speech certainly is (or should be) aware of how Massaro's FLMP model handles multi-sensory integration. Justine Cassell presented some provocative data on how children interact with avatar-style computerized friends onscreen. She applied her ideas about 'embodied conversational agents' to the interaction between autistic children and the onscreen partner. A little puzzling but fascinating. The work is not yet published, but stay tuned.
I chaired a session on brain and speech that had three interesting talks. First, Pat Kuhl presented her program of research on language development/speech perception, the highlight being the new baby MEG scanner that Pat apparently convinced the Finnish MEG manufacturer to build. Pictures of babies in an MEG machine ... how can you go wrong? I am looking forward to seeing the new data coming from this approach. Jack Gandour presented a lot of data on the neural basis of tone language perception and comprehension. Jack is arguably the world's leading expert on the cognitive neuroscience of tone languages, and a 30-minute presentation cannot do justice to the huge range of data he has on these issues. Finally, former TB_East graduate student Nina Kazanina presented some of her recent work, published last year in PNAS. Nina's paper (with TB_East faculty Bill Idsardi and Colin Phillips) is called "The influence of meaning on the perception of speech sounds" and uses a clever cross-linguistic design (Korean, Russian) in the context of a mismatch study to test how native phonology shapes early auditory responses. Nina is now on the faculty of the University of Bristol, and we are all very proud of her.
The session in Boston that really got my blood pressure high was called "The mind of a tool maker," and concerned -- allegedly -- the evolution of language and cognition. A very high-powered cast, a terrible session. The cast: Lewontin, Berwick, Walsh, Hauser, Deacon, as well as some other folks I did not know, and whose performance did not make me want to run out and read their work (e.g. Mimi Lam, Dean Falk). There were, to be sure, some sensible ideas buried in there, and one genuinely good talk, by Marc Hauser. Among other reasons, it stood out as good (contrast enhancement) because (a) he stayed within his allotted time, (b) the talk had a point/hypothesis, and (c) the work actually related to the topic of the session. Berwick had an interesting idea about FoxP2, a really nice deconstruction/debunking based on a computational analysis, and Deacon presented some interesting ideas -- but too many and too scattered. But the bottom line is this: the study and discussion of the evolution of cognition and language requires extreme caution, subtlety, rigor, nuance, a high-pass filter for bullshit, and so on and so forth. And, alas, the level of speculation and pure unadulterated paleo-nonsense was off the scale. This session made me appreciate why the Société de Linguistique de Paris famously banned language evolution as a topic. The audience deserved better. My favorite line: the organizer of the workshop, Dr. Lam, in her opening remarks, said that one reason she wanted to have this workshop was because she had such a hard time getting her ideas on the evolution of cognition published .... Yikes!
Friday, March 7, 2008
Semantics and Brain -- Do modality/language-specific semantic deficits exist?
First, some background. There is plenty of evidence supporting the view that word-level comprehension deficits in aphasia are predominantly semantic in nature. For example, such patients more often make semantic than phonemic errors in auditory word-to-picture matching tasks (see Baker et al. 1981, Neuropsychologia, 19:1-15; Gainotti et al., 1982, Acta Neurol. Scandinav., 66:652-65). Such deficits are loosely associated with posterior temporal regions in that Wernicke's aphasics present with such semantic deficits (Baker et al.) and Wernicke's aphasia is associated with temporal lobe lesions.
But are these deficits modality specific? That is, are they restricted to the auditory-verbal modality, or are they more general? The evidence demonstrates that both patterns exist in aphasics with single-word comprehension deficits.
Of the papers we read this week, Chertkow et al. (1997, Brain and Language, 58:203-232) provide the strongest evidence. They tested a group of 16 aphasics of various diagnostic categories (Global, Broca, Wernicke) using a range of tasks. Visual perception tasks were used to rule out visual-perceptual deficits as a source of their problems (all were in the normal range). Half of the subjects -- all of the Wernicke and Global aphasics, and none of the Broca patients -- were impaired in auditory word-to-picture matching when the picture choice array comprised a set of semantically related items. The same pattern held in another comprehension test that required subjects to answer forced-choice conceptual knowledge questions about heard words (e.g., lemon: is it used with coffee or tea?). A forced-choice picture version of this latter task was also administered (picture of lemon: subject must indicate whether it goes with picture of coffee or tea). Five of the 8 patients with single word impairments were equally impaired on the non-verbal version of this task, but 3 patients (including both of the Wernicke aphasics) improved to normal performance levels with the non-verbal task. Conclusion: In some patients with single word comprehension deficits, the semantic impairment is language-specific; in other patients, it is more general.
The paper by Goodglass et al. (1997, Brain and Language, 56:138-58) used a very different paradigm, concept similarity judgments, to assess modality-specific effects. They found that making concept similarity judgments across modalities (hear: skirt, see: jacket, decide: same category or not) was harder for aphasics (measured in terms of RTs) than making the same judgments in an all-visual format. Crucially, the reverse held for control subjects: auditory-visual pairs were judged more quickly than visual-visual pairs. Presumably, the cross-modality difficulty in aphasia stems from trouble accessing semantic representations from auditory-verbal input.
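(For concreteness, the logic of that comparison is a 2x2 group-by-modality interaction on RTs. Here is a toy sketch; the RT values and subject counts are synthetic stand-ins, not Goodglass et al.'s data.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic per-subject mean RTs (ms), 12 subjects per group. Numbers invented.
aphasic_cross  = rng.normal(2200, 200, 12)   # hear word, see picture
aphasic_within = rng.normal(1900, 200, 12)   # picture-picture
control_cross  = rng.normal(900, 100, 12)
control_within = rng.normal(1000, 100, 12)

# Per-subject cross-modal cost; positive means cross-modal is harder.
aphasic_cost = aphasic_cross - aphasic_within
control_cost = control_cross - control_within

# The group-by-modality interaction reduces to comparing the two cost
# distributions: aphasics should show a positive cost, controls a negative one.
t_stat, p_val = stats.ttest_ind(aphasic_cost, control_cost)
print(f"aphasic cost {aphasic_cost.mean():.0f} ms, "
      f"control cost {control_cost.mean():.0f} ms, t = {t_stat:.2f}, p = {p_val:.2g}")
```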
So it seems that language-specific word-level semantic deficits do occur in aphasia, even if it is also the case that some aphasics seem to have deficits that extend beyond the linguistic system. This is what we'd expect. However, where are the lesions associated with the language-specific vs. the non-specific deficits? I'll discuss that in the next post.
Thursday, March 6, 2008
Comment on Conceptual Organization from Mike Bonner
**************************
Hi Greg,
I'm a grad student in Murray Grossman's lab. I enjoy reading your Talking Brains blog. You've brought up many of the same issues that I and others in Murray's lab have with the semantic memory literature. I just read your March 4th post. You finished with:
"What kind of evidence do we need to really settle the issue? I would say that a convincing finding would be something like lesion (or TMS) evidence that when a person damages the lip area of motor cortex, they lose the concept KISS. Is there any such evidence out there?"
I'm glad that you raised this point. It's exactly the issue that I'm hoping to address in my thesis work. I'm writing up a prelim now. I wanted to point to this Pulvermüller TMS study, which may be of interest to you:
Pulvermüller F, Hauk O, Nikulin VV, Ilmoniemi RJ.
Functional links between motor and language systems.
Eur J Neurosci. 2005 Feb;21(3):793-7.
They facilitate speed of response on lexical decision for leg words by activating what should be the leg area of motor cortex. Their results for arm words are a bit dubious, though.
There's also a study suggesting that a deficit for concepts (both verbs and nouns) involving manipulation knowledge correlates with damage to hand motor areas of cortex:
Arévalo A, Perani D, Cappa SF, Butler A, Bates E, Dronkers N.
Action and object processing in aphasia: from nouns and verbs to the effect of manipulability.
Brain Lang. 2007 Jan;100(1):79-94.
Furthermore, results showing action knowledge deficits in motor neuron disease (MND) and a reversal of the concreteness effect in semantic dementia (SD) may be relevant, corresponding with damage to motor areas (MND) or visual areas (SD). The upshot for me is that the evidence is still insufficient. I'd love to know if you come across any other relevant studies.
Wednesday, March 5, 2008
Where's Irvine?
We are located along the Southern California coast, in Orange County (The OC -- that's right, we're UCOC). We are between LA, 45 miles to the north, and San Diego, 75 miles to the south.
The UC Irvine campus (outlined in the image below) is located a couple of miles from the coast, near Newport Beach and Laguna Beach.
Here's a couple of photos of our local beach communities.
Laguna Beach:
Newport Beach:
Here is a picture of Aldrich Park, at the center of the UC Irvine campus:
And finally, a photo of our building on campus:
Ok, so now you know.
Tuesday, March 4, 2008
Semantics and Brain -- Comments on Caramazza; Martin; Hart
The readings we surveyed provided nice overviews of both the data and the various theories put forth to account for these data. Caramazza & Mahon (2003, TICS, 7:354-61), for example, discussed the relative strengths and weaknesses of the Sensory/Functional Theory (categories are organized around sensory and functional systems), the Domain-Specific Hypothesis (some categories are organized into separate modules as a function of evolutionary pressures to process information in those categories), and the Conceptual-Structure account (a semantic feature-based account). Hart et al. (2007, J. Int. Neuropsych. Soc., 13:865-80) provide an even more exhaustive review of the various theories, complete with handy crib sheet tables. And Martin (2007, Annu. Rev. Psychol., 58:25-45) gives a nice summary of the range of imaging data on the topic. The Caramazza & Mahon (neuropsych-oriented) and Martin (imaging-oriented) papers are particularly useful if you want to get a quick overview of the field.
The conclusion one gets, though, after poring over all these papers is that there are a number of different ways to account for the available data, and no obvious way to choose between them. For example, Martin summarizes an impressive range of findings from neuroimaging that seem to show that the same sensory-motor systems involved in processing a given bit of information are involved in representing that bit of information in conceptual memory. But from functional imaging alone, it is very hard to know whether these activations reflect the substrate for semantic memory, or merely the sensory-motor associates of the "real" concepts stored somewhere else. To compound matters, Caramazza & Mahon argue explicitly that the neuropsychological data do not support a sensory-functional hypothesis, and suggest instead that "the first-order constraint on the organization of conceptual knowledge is object domain [i.e., animals, fruits/veggies, conspecifics, and possibly tools]" (p. 356). But even these authors admit a possible "fine-grained" organization involving other properties, or that there may be two independent organizational levels.
If you think about this issue hard enough, it all seems to come back to the question: what is a concept? If you believe a concept is nothing more than its sensory-motor/functional properties (for concepts that fit into those dimensions), then you can point to imaging data showing activation of the lip area of motor cortex when you read the word "kiss", and many other findings of this sort. If you believe a concept is something more complicated, you can point to complex patterns of associations/dissociations in the neuropsychological realm, and you could write off the imaging data as a peripheral association of the core of the concept: lip movements are associated with the concept KISS but don't define it, or even contribute substantively to it.
What kind of evidence do we need to really settle the issue? I would say that a convincing finding would be something like lesion (or TMS) evidence that when a person damages the lip area of motor cortex, they lose the concept KISS. Is there any such evidence out there?