Tuesday, April 17, 2012

Post-Doctoral Position in Spoken Language Comprehension in Aphasia

Applications are invited for at least one post-doctoral position in
the Language and Cognitive Dynamics Laboratory directed by Dr. Dan
Mirman at Moss Rehabilitation Research Institute (MRRI: www.mrri.org)
to study the neural basis of spoken language processing. The
postdoctoral researcher will take a leading role in designing and
conducting computational and experimental (behavioral and
eye-tracking) work on phonological, semantic, and cognitive control
aspects of spoken language comprehension in aphasia. The project
combines cognitive neuropsychology, computational modeling, and
eye-tracking methods; background in at least one of these areas is
preferred, and training in the others will be provided.

MRRI is a vibrant collaborative research center in the Greater
Philadelphia area focusing on cognitive neuroscience and cognitive
rehabilitation with a particular focus on language and aphasia. In
addition to primary research with Dr. Mirman, the post provides
opportunities to collaborate with researchers at MRRI, University of
Pennsylvania, and Temple University, and to learn cognitive
neuropsychology, computational modeling, eye-tracking, voxel-based
lesion-symptom mapping, TMS and tDCS, and functional neuroimaging.

The successful candidate will have a doctorate in cognitive
psychology, cognitive science, cognitive neuroscience, linguistics, or
related field, a strong interest in computational modeling and/or
cognitive neuropsychology, and an interest in developing an
independent research career. Send a CV, a letter describing research
interests and goals, and at least two letters of recommendation to Dr.
Dan Mirman at dan@danmirman.org, or
Moss Rehabilitation Research Institute
50 Township Line Rd.
Elkins Park, PA 19027

Monday, April 16, 2012

Neurobiology of Language Conference (NLC) - Abstract submissions OPEN!

NLC 2012 will be held in San Sebastian, Spain, Oct 25-27.

Abstract submissions are now open. Visit the Society for the Neurobiology of Language webpage and click the 'conference' tab for information. Keynotes and debate sessions have been scheduled:

Keynote Sessions

Barbara K. Finlay

Beyond columns and areas: developmental gradients and regionalization of the neocortex and their likely consequences for functional organization.

Barb Finlay is a Professor of Psychology at Cornell University, where she holds the William R. Kenan Chair of Psychology, and is co-editor of Behavioral and Brain Sciences. Finlay is an expert on the evolution and development of sensory systems and the cerebral cortex.

Nikos K. Logothetis

In vivo Connectivity: Paramagnetic Tracers, Electrical Stimulation & Neural-Event Triggered fMRI

Nikos Logothetis is the Director of the Department of Physiology of Cognitive Processes at the Max Planck Institute for Biological Cybernetics, Tübingen, Germany. Logothetis is well known for his studies of the physiological mechanisms underlying visual perception and object recognition, as well as his more recent work on how the functional magnetic resonance imaging signal relates to neural activity.

Panel Discussions

Nina F. Dronkers vs Julius Fridriksson

What is the role of the insula in speech and language?

Nina Dronkers is the Director of the Center for Aphasia and Related Disorders and Adjunct Professor of Neurology and Language, U.C. Davis, California. Dronkers is an expert in aphasia and, more generally, the cerebral localization of language.
Julius Fridriksson is a Professor of Communication Sciences and Disorders at the University of South Carolina and Director of its Aphasia Laboratory. Fridriksson is well known for his work on the neuroimaging and treatment of aphasia.

Matthew Lambon Ralph vs Jeffrey R. Binder

Role of Angular Gyrus in Semantic Processing

Matt Lambon Ralph is a Professor of Cognitive Neuroscience and Associate Vice-President for Research at the University of Manchester, U.K. His lab uses neuropsychology, computational modeling, TMS, and functional neuroimaging to investigate semantic memory, language, recovery, rehabilitation, and neuroplasticity.
Jeffrey Binder, M.D. is a Professor of Neurology at the Medical College of Wisconsin and Director of the Language Imaging Laboratory. Professor Binder has made important contributions to understanding the neural basis of language (especially speech and word recognition) and is the incoming president of SNL.

Monday, April 2, 2012

POSTDOCTORAL POSITION - MEG/EEG - National Institutes of Health, NIDCD Division of Intramural Research

Applications are invited for a postdoctoral position in the Language Section, NIDCD, National Institutes of Health, to study paralinguistic processes, social interaction, and related issues using MEG/EEG. The research will focus on discourse-level language comprehension and production in naturalistic contexts. Investigations will be carried out in normal adults and clinical populations including stroke, traumatic brain injury, and stuttering. Major experimental methods include MEG source analysis, time-frequency analysis, and simultaneous EEG-fMRI.

Applicants should have a doctoral-level degree in neuroscience, psychology, medicine, or a related area. Prior experience in MEG/EEG experimental design, data acquisition, and analysis is necessary. Advanced skills in time series analysis and MATLAB programming are highly desirable. Experience with fMRI is preferred but not required. Salary will be commensurate with the salary scale of the National Institutes of Health, NIDCD Division of Intramural Research. The position is funded for two to five years. Applications will be considered until the position is filled.

For further information or to submit an application (including a brief CV and two references) please contact Allen Braun, M.D.  email: brauna@nidcd.nih.gov.

Sunday, April 1, 2012

Some low-level research from TB East

One of the issues I have been thinking about for some time now concerns cortical oscillations and their potential role in speech processing. This topic continues to be debated vigorously. (At the Auditory Cortex Conference this coming September in Lausanne, I will be moderating a debate between Charlie Schroeder and Shihab Shamma; that should be quite lively ...) In a new paper, Anne-Lise Giraud and I outline a hypothesis about what sorts of operations oscillations at different timescales could be useful for. This paper makes some pretty strong predictions about the perceptual analysis of continuous speech and spike trains. We are obviously interested in feedback.

Giraud AL, Poeppel D. Cortical oscillations and speech processing: emerging computational principles and operations.
Nat Neurosci. 2012;15(4):511-7. doi: 10.1038/nn.3063.

And on a similarly low-level topic: It has become commonplace to argue that as one ascends the auditory hierarchy, and especially as one goes from core to belt and parabelt areas, sensitivity to broadband (and complex) sounds increases. On this view, tones and narrow-band sounds principally excite core fields. In a pair of studies using fMRI and MEG, we crossed bandwidth and modulation frequency. We find that bandwidth (while crucial at the inferior colliculus) doesn't play as central a role as modulation rate. Both studies converge in a striking way, showing that auditory cortex is exquisitely sensitive to low modulation rates, and precisely in the range that forms the basis for spoken language processing (i.e., strong below ~16 Hz, best below 8 Hz). We are interested in figuring out how our findings reconcile with the single-unit data, which provide a very different perspective (at least with regard to spectral sensitivity).

Wang Y, Ding N, Ahmar N, Xiang J, Poeppel D, Simon JZ.
J Neurophysiol. 2011