Dear Readers, Jim Pekar from the Kirby Center/Johns Hopkins has post-doc openings. Please respond directly to him. Sounds like a fun opportunity. Here is what he sent:
Postdocs in fMRI of Fun Activities
Two postdoctoral positions are available in the F.M. Kirby Research Center for Functional Brain Imaging in Baltimore, Maryland USA.
A partnership between Kennedy Krieger Institute and the Department of Radiology of the Johns Hopkins University, the Kirby Center houses research-dedicated Philips MRI scanners at 1.5 and 3.0 Tesla. Delivery of a 7.0 Tesla human scanner is scheduled for 2008. The Kirby Center is a nationally recognized (NIH P41 funded) research resource for MR technology development. Information on the Kirby Center is available at http://mri.kennedykrieger.org
The two postdoctoral positions will focus on acquisition and analysis of MRI data for the study of brain function using rich naturalistic behaviors (such as playing games and watching movies) to reduce demands on participant compliance.
Ph.D. in biophysics, neuroscience, or related field required. Experience with exploratory data analysis preferred.
Please send CV and names of three references to: James J. Pekar, F.M. Kirby Research Center, Kennedy Krieger Institute, 707 N. Broadway, Baltimore MD 21205 USA. Email: pekar@jhu.edu
Tuesday, July 24, 2007
Friday, July 20, 2007
Talking Brains is getting noticed
We launched Talking Brains in May, and to our great surprise, it's actually getting a few hits -- about 500 a month. Of course 480 of those hits are me and David checking out each other's posts. :-) Actually that's not true, our own visits are not included in the count, and if you check out the "activation" map below (a map of the last 100 visits), not all of them are from Irvine and College Park. In fact, we are activating the northeast and midwest United States quite well, with some additional hot spots on the US west coast, in central Europe, and even some sparse regions in Japan, South America, Africa, and Australia (but these probably wouldn't survive a cluster threshold). I see that Polynesia is not strongly represented, so I'll probably have to go down there myself and see what's up. ;-) Also of note is that the blog is getting picked up by online news organizations. Just today, I came across a news report on HearingProductsReport.com discussing David's "Syllables Paper." They cite the Talking Brains blog alongside AAAS as their sources. Geez, we're going to have to be careful about what we say!
Remember: we're more than happy to promote YOUR recent work. Just email either of us with your latest cool result or field-rocking theoretical insight and we'll post it as a "From the Lab of..." entry.
Wednesday, July 18, 2007
The problem with group studies in fMRI
You miss things. For example, in our first fMRI experiments designed to identify an auditory-motor integration circuit, we kept seeing, in individual subjects, a region at the back of the Sylvian fissure on the left that was highly activated both during listening to speech and during covert speech production. This region, now called Spt, did NOT show up in a group analysis of the same data. It appears that the variability in anatomy across subjects, coupled with the fairly focal activation, washed out the effect in the group analysis.
We've recently been looking through a new fMRI dataset and found another, related problem with group-level analyses. The study involves a short term memory task: stim set #1 --> maintenance period --> stim set #2 --> subject judges whether the lists are the same or not. Participants are bilingual in English and American Sign Language, so the primary manipulation is STM for speech vs. sign. In the group data we found a region in the posterior STS that responded to sensory stimulation in either modality (speech or sign) AND showed a maintenance response. Interesting, but that's not the point of this entry. We then re-analyzed this region in individual subjects and compared the timecourses derived from the group analysis with those derived from the individual-subject analyses. We found that the response amplitude in the group-defined ROI was about half the amplitude in the individual-subject analysis (see graph, which shows the response for the speech condition only). This makes sense, of course, because the location of the activation peak within an ROI varies somewhat from subject to subject, so when you look at the average timecourse in a group-determined ROI you end up with a diluted average. In an individual-subject analysis, however, you can identify the activation peak in each subject, giving you a less diluted average across subjects. For large activations this may not matter, but for more focal activations it can either completely wash out an effect, as we found previously, or make it look minimal, as in this case. For example, if I pointed at the green curve in the graph above and told you there is a robust maintenance response (the region between the peaks compared to the baseline at the start and end points of the curve), you might not buy it. However, the maintenance response is quite apparent in the red curve.
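To make the dilution point concrete, here is a toy simulation (a minimal sketch of our own, not the actual analysis pipeline; the subject counts, bump width, and jitter are made-up numbers): each simulated subject has a focal activation whose location is jittered to mimic anatomical variability, and the script compares the average response read out of a fixed group-defined ROI with the response read out at each subject's own peak.

```python
import numpy as np

rng = np.random.default_rng(0)

n_subjects = 20
n_voxels = 200                 # a 1-D strip of "cortex" for simplicity
x = np.arange(n_voxels)

# Each subject has a focal Gaussian activation whose centre is jittered
# across subjects, mimicking residual anatomical variability after
# spatial normalization.
true_amplitude = 1.0
bump_width = 5.0               # focal activation (in voxels)
centres = 100 + rng.normal(0, 8, n_subjects)

subject_maps = np.array([
    true_amplitude * np.exp(-(x - c) ** 2 / (2 * bump_width ** 2))
    for c in centres
])

# Group-style readout: average the maps, define an ROI around the group
# peak, then take each subject's mean signal inside that fixed ROI.
group_map = subject_maps.mean(axis=0)
group_peak = group_map.argmax()
roi = np.abs(x - group_peak) <= bump_width
group_roi_amplitude = subject_maps[:, roi].mean(axis=1).mean()

# Individual-style readout: find each subject's own peak and read the
# signal there before averaging across subjects.
individual_amplitude = subject_maps.max(axis=1).mean()

print(f"group-ROI amplitude:       {group_roi_amplitude:.2f}")
print(f"individual-peak amplitude: {individual_amplitude:.2f}")
# With a focal activation and realistic spatial jitter, the group-ROI
# estimate comes out at roughly half the individual-peak estimate; make
# the bump even narrower relative to the jitter and the group-level
# effect can vanish entirely, as with Spt in the example above.
```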
So go ahead and continue with your group studies, but be aware: if you are looking for a relatively focal activation, you just might miss it.
Sign language piece made Scientific American's "Best of the Brain..." volume
Run, don't walk... actually, you can probably walk for this one. Scientific American has just released a "Best of the Brain" volume, a compilation, edited by Floyd Bloom, of neuro articles that have appeared in the magazine over the last several years. The 2002 Hickok, Bellugi, Klima piece, Sign Language in the Brain, made the cut.
Tuesday, July 10, 2007
The auditory dorsal stream may not be auditory
No, this is not a territorial concession to that other sensory modality (although I have heard "seers," shall we call them, claim regions of the posterior planum for their beloved albeit imperialist visual system). Instead it is closer to a territorial concession to the motor system, a concession, we propose, that the see-scientists also need to make in their dorsal stream.
Some background: When we first started writing about a dorsal processing stream for speech back in 2000, we looked to the dorsal visual stream for inspiration. (Actually, the dorsal stream idea was included in our 2000 TICS paper as a response to a reviewer's comment; it wasn't in the original manuscript. But that is another blog entry.) The dorsal visual stream started out as a "where" system, but was then re-invented, on the basis of good evidence, as a "how" stream (i.e., a visual-motor integration system). In the late 1990s, a ventral/dorsal division of labor had been proposed for the auditory system as well, in which dorsal=where. Given that there were obvious needs for auditory-motor integration for speech (see any Hickok & Poeppel paper), we proposed that the dorsal auditory stream supported such auditory-motor functions analogous to the dorsal visual stream. Subsequent fMRI studies in the Talking Brains West lab identified area Spt as the auditory version of "visual" areas such as LIP, AIP, and so on.
But there's a problem with thinking about any of these areas as part of a specific sensory stream. For example, the "visual" areas are organized around motor effector systems (LIP=eyes/saccades, AIP=hands/grasping), and they can take input from multiple sensory systems. So these regions appear to be linked more tightly to specific motor modalities than to a given sensory modality: LIP is not going to get excited about grasping behaviors, but if auditory input helps guide behaviorally relevant eye movements, LIP is happy to oblige, as Richard Andersen showed a few years back. Maybe, then, it makes more sense to talk about sensory-motor integration areas in the posterior parietal lobe rather than visual-motor integration areas. And maybe area Spt, our previously hypothesized auditory-motor area, is not so much auditory as it is a sensory-motor integration area for the vocal tract effector. If this is true, we can make two predictions: (1) Spt should be multisensory, as long as the sensory input is relevant to vocal tract actions (somatosensory feedback is a likely candidate), and (2) Spt should be less excited about auditory tasks when the output behavior doesn't involve the vocal tract.
My (now former) grad student, Judy Pa, tested prediction number 2 in an fMRI experiment reported in a forthcoming paper in Neuropsychologia. She had skilled pianists listen to novel melodies and then reproduce them either by covert humming (vocal tract) or covert playing (manual articulators). The critical finding was that Spt showed an attenuated response during the motor phase of the task for the playing condition compared to the humming condition. A region in the intraparietal sulcus showed the reverse pattern.
So, as David and I hinted in our Nature Reviews Neuroscience paper, current evidence suggests that the posterior parietal lobe and posterior planum contain a network of sensory-motor integration areas. These areas are not part of one sensory system, but are tightly linked to specific motor effector systems. Spt is part of this network, and is specifically tied to the vocal tract. Conclusion: Spt and the posterior portion of the planum that it occupies are not part of the auditory system. But then neither is LIP or AIP part of the visual system. Does this mean that it doesn't make sense to talk about dorsal streams at all? Not sure.
Sunday, July 1, 2007
The funnest part of academia: job gossip
OK, here are three happy messages about the job world in the TalkingBrains domain.
(1) There is a God, at least in Miami. Here is the evidence: One of the graduates of the UMD linguistics department, Ana Gouvea (Ph.D. 2002), worked as a post-doc in San Francisco (at UCSF and SFSU), started a family, and began to retrain a bit to get more experience in acquisition and other more applied areas. Ana was principally working on sentence processing and language acquisition. A year or so ago, her husband Luca was moved by his company to Miami, and the family moved there. This is not an area known for its density of psycholinguistics ... But Ana is flexible and smart. She applied to a program at Florida International University to get an MA degree that would permit her to do clinical work. Hmmm, an M.A., eh? Well, they saw her application materials, and FIU decided that they would not admit her to the MA program -- instead they hired her as an assistant professor, on a tenure-track line, in the Department of Communication Sciences and Disorders! Congratulations, Ana! Most successful MA application I've ever heard of :-)
(2) A new faculty member in linguistics at UMD. Valentine Hacquard is joining the department. Valentine, who got her Ph.D. at MIT and has been a visiting professor at U Mass Amherst, is a semanticist and can abstract lambdas with the best of them. But I am pretty sure that we can persuade her to play with us, too! Valentine already has strong cog neuro street cred: she did a phonetics/phonology MEG study with Alec Marantz when they were both at MIT.
Hacquard, V., Walter, M. A., & Marantz, A. (2007). The effects of inventory on vowel perception in French and Spanish: An MEG study. Brain and Language, 100(3), 295-300.
(3) A new post-doc at the CNL at UMD. For a long time, I have been a fan of the work of Matti Laine. Now we have managed to recruit one of his students as a post-doc, Minna Lehtonen. Minna has done work using behavioral, imaging, and electrophysiological techniques and will help us continue research on lexical access and lexical structure.
A really nice example of her work is this recent ERP paper:
Lehtonen, M., Cunillera, T., Rodríguez-Fornells, A., Hultén, A., Tuomainen, J., & Laine, M. (2007). Recognition of morphologically complex words in Finnish: Evidence from event-related potentials. Brain Research, 1148, 123-137.
We are all pretty persuaded (here at TalkingBrains East, at least ...) that morphological decomposition is demonstrable (early in the processing stream), for example based on the work of Rob Fiorentino (e.g. Fiorentino & Poeppel, 2007, Compound words and structure in the lexicon, Language and Cognitive Processes). Minna has already been going after the next steps. Based on cross-linguistic studies that incorporate all kinds of cognitive neuroscience techniques, she and her colleagues have been going after the composition part ... It's great that we take words apart -- but how do we put them back together? A hard and good question. Good luck with that one, Minna -- and welcome to the CNL lab!