Thursday, December 15, 2011

Doctoral position in General Linguistics at the Johannes Gutenberg University Mainz, Germany

The Emmy Noether Research Group "Neurolinguistic Foundations of Information Structure" (funded by the German Research Foundation, DFG), headed by Petra Schumacher, is seeking to fill the position of a doctoral researcher. Starting date: February 1, 2012 or soon thereafter.
The doctoral position is part of a neurolinguistic project that investigates language comprehension at the interface of syntax, semantics, and information structure. The doctoral researcher will work within our group on a project that investigates referential processing in German. The project will either extend previous research with unimpaired comprehenders or investigate referential processing in aphasia patients. The primary experimental method employed will be event-related brain potentials.
The ideal candidate should have a degree in linguistics or a related discipline. S/he should have a keen interest in experimental work, as well as in syntactic and pragmatic theory. Experience with event-related brain potentials and/or language impairment research will be a plus. Knowledge of German would be beneficial. The successful candidate will work towards a PhD in Linguistics from the University of Mainz on a topic related to the research group.
The position has an initial term of two years, with the possibility of extension contingent on future funding. The salary and social benefits are determined by the German pay scale for state employees (EG 13 TV-L 50%).
Interested candidates are invited to send their application materials electronically; PDF files are preferred. Applications may be written in German or English and should include a CV, contact information for at least two referees, and a brief statement of research interests.
Applications received by January 6, 2012 will receive full consideration, but the search will continue until the position is filled.
For more information, contact Petra B. Schumacher by email.

Monday, December 12, 2011

open-rank faculty position in Cognitive Neuroscience at Florida International University in Miami, FL

Position Title: Cognitive Neuroscience (Open-rank). We seek an outstanding researcher with expertise in fMRI, EEG, or TMS methodologies. Applicants with interests in developmental or adult cognitive neuroscience are encouraged to apply. The area of focus is open, but we especially welcome applicants with a focus on memory, language, executive function, or multisensory processing. Address inquiries to Dr. Bennett Schwartz, Chair, Neuroscience Search.

THE DEPARTMENT OF PSYCHOLOGY AT FLORIDA INTERNATIONAL UNIVERSITY seeks applicants for five faculty positions, to begin fall 2012, including open-rank positions in Behavior Analysis, Cognitive Neuroscience, Developmental Science, and Quantitative Methodology, and an instructor-level position in Research Methods. Successful candidates will join a growing department with nationally prominent programs and faculty. The Department offers Ph.D. programs in Child-Adolescent Clinical Science, Developmental Science, Industrial-Organizational Psychology, and Legal Psychology, and M.S. programs in Behavior Analysis and Counseling Psychology. A Ph.D. in Psychology or a related area is required for all positions. For open-rank positions, preference will be given to applicants with strong research credentials and demonstrated potential to obtain external funding. Interested candidates should send application materials with the position title in the subject line. Include a curriculum vitae, recent publications or reprints, contact information for three potential references, and a letter describing research interests. Review of applications will begin November 1 and continue until the positions are filled. For more information about the Department, visit our website. FIU is a member of the State University System of Florida, serving a diverse population of over 40,000 students, and is an Equal Opportunity/Equal Access/Affirmative Action Employer.

Thursday, December 8, 2011

Why autism has nothing to do with 'broken mirrors'

I've argued that the mirror neuron theory of action understanding is backwards.  Mirror neurons do not fire because they are critically involved in action understanding (the typical claim), they fire because perceiving an action is critically involved in selecting actions for movement (Hickok & Hauser, 2010).  This point has broad implications for any theory that builds on the mirror neuron theory of action understanding.  I've had plenty to say about the broader implications in the domain of speech (e.g., Hickok, 2009; Hickok, et al. 2011; Lotto, et al. 2009); now it's time to take a stab at the claims made beyond speech.

One of the most prominent and potentially important extrapolations of the mirror neuron theory of action understanding concerns autism spectrum disorder (ASD).  The "broken mirrors" hypothesis of ASD, exemplified by a Scientific American article by V.S. Ramachandran, is built on the following logic.

1. The mirror system allows us to understand the action of others.
2. The mirror system, by extrapolation, allows us to understand the emotions, intentions, and perspectives of others.
3. ASD involves a lack of sensitivity to the emotions, intentions, and perspectives of others (in particular a lack of empathy).
4. Therefore ASD results from functional disruption to the mirror system, that is, from "broken mirrors".

Since I've argued that assumption #1 is false, the logic simply falls apart.  However, let me attack another of these assumptions, namely #3, that ASD involves a lack of sensitivity to emotion, etc.  I'm going to argue instead that ASD involves, in fact, hyper-sensitivity to emotional states, both their own and those of others.  This hypothesis is not new.  In fact, Henry Markram, Tania Rinaldi, and Kamila Markram proposed exactly this in their 2007 paper titled The Intense World Syndrome -- An Alternative Hypothesis for Autism. I would just like to underline their perspective here.

An analogy is useful for seeing why hyper- rather than hypo-sensitivity makes more sense.  Imagine a person who is hypo-sensitive to sound.  Is such a person more or less likely to walk into very loud environments?  More likely!  If you are less sensitive to sound you might actually prefer loud environments because that's how you get your acoustic sensation to a normal level. Conversely, a person who is hyper-sensitive to sound is going to avoid loud environments because it hurts.

Now consider the same scenario translated to the social/emotional domain.  For starters let's agree that a large part of one's emotional stimulation comes from social situations.  Not all emotional stimulation comes from the social domain, to be sure: I can get pretty emotional when I can't figure out how to fix my leaky faucet.  But we get a lot more emotional and more often when a family member is sick or injured, or when a colleague rolls his or her eyes when we try to make a point, and so on. Now, imagine a person who is hyper-sensitive emotionally. Is such a person more or less likely to engage in social situations?  Less!  For someone who is hyper-sensitive to emotion, engaging in a normal social situation would be like walking into an excessively loud environment: it's uncomfortable and causes an avoidance response. ASD individuals may avoid social interaction not because they lack empathy but rather because social interaction is simply too stressful.

Some data:  A characteristic of autism is decreased gaze fixation, particularly on faces.  Correspondingly, the fusiform face area (FFA) appears to be less active in autistic individuals during face processing tasks.  This might be interpreted as social indifference, a lack of interest in faces.  However, another interpretation is that looking at faces, which are a major source of emotional information, is stressful for individuals who are hyper-sensitive to emotion.  An imaging study of face perception published by Dalton et al. in 2005 supports this view.  This study found that the autistic group did indeed spend less time fixating the eyes and that there was less activity overall in the FFA for the autistic group than for the control group.  However, this is not all that surprising, because if you spend less time looking at a stimulus, your brain activation in regions sensitive to that stimulus will naturally be lower.  And in fact, when Dalton et al. looked at the correlation between FFA activity and gaze duration, they found a strong positive correlation in autistic subjects.  In other words, the FFA of autistic folks is working fine; they just spend less time looking at the stimulus.  More importantly, Dalton et al. report that amygdala activity was strongly correlated with gaze duration in the autistic group but not in the control group.  Interpretation: looking at faces is more emotionally arousing for autistic than for control individuals.
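The statistical point here is simple enough to sketch in a few lines of code. The numbers below are invented for illustration (they are not Dalton et al.'s data): if FFA response scales with gaze duration in both groups, a group that fixates less will show lower mean FFA activity even though the region itself is working normally.

```python
# Toy illustration with made-up numbers (NOT Dalton et al.'s data):
# within-group correlation can be strong even when group means differ,
# because the mean difference is driven by gaze duration, not by the FFA.
def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical gaze durations (s) and FFA responses (arbitrary units),
# with FFA activity a roughly linear function of gaze in both groups.
gaze_autism  = [0.5, 0.8, 1.0, 1.2, 1.5]
ffa_autism   = [0.9, 1.5, 2.1, 2.3, 3.0]   # less gaze -> less activity
gaze_control = [2.0, 2.3, 2.5, 2.8, 3.0]
ffa_control  = [4.1, 4.5, 5.2, 5.5, 6.0]

print(pearson_r(gaze_autism, ffa_autism))         # strong positive correlation
print(sum(ffa_autism) / 5, sum(ffa_control) / 5)  # group means still differ
```

In this toy version the within-group correlation is nearly perfect, yet the autistic group's mean FFA activity is lower, exactly the pattern that a "broken FFA" reading would misinterpret.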

Hyper-sensitivity to emotion is perfectly consistent with the well-known sensory sensitivity noted in ASD.  In other words, there are other reasons to believe that hyper-sensitivity is a key feature of the syndrome across the board.

The mirror neuron folks got it perfectly backwards again.  You don't have to take my word for it though.  Here are some links to essays written by individuals with ASD or by a parent.  The links were provided by Morton Gernsbacher, an author on the study referred to above.  I found them particularly illuminating.


Dalton, K., Nacewicz, B., Johnstone, T., Schaefer, H., Gernsbacher, M., Goldsmith, H., Alexander, A., & Davidson, R. (2005). Gaze fixation and the neural circuitry of face processing in autism. Nature Neuroscience. DOI: 10.1038/nn1421

Hickok, G. (2009). Eight problems for the mirror neuron theory of action understanding in monkeys and humans. Journal of Cognitive Neuroscience, 21(7), 1229-1243.

Hickok, G., & Hauser, M. (2010). (Mis)understanding mirror neurons. Current Biology, 20(14), R593-R594.

Hickok, G., Houde, J., & Rong, F. (2011). Sensorimotor integration in speech processing: computational basis and neural organization. Neuron, 69(3), 407-422.

Lotto, A. J., Hickok, G. S., & Holt, L. L. (2009). Reflections on mirror neurons and speech perception. Trends in Cognitive Sciences, 13, 110-114.

Lab Research Assistant position at UCSF Speech Neuroscience Lab

UCSF is a world-class research institution with a wide array of scanner facilities, including MRI (both 3T and 7T systems) and a 275-channel whole-head MEG/EEG scanner. There is also an active program of research using intracranial ECoG recordings from epilepsy patients. Here at the Speech Neuroscience Lab, we use these technologies to investigate the neural basis of speech motor control. The research focus of the lab is the neural basis of feedback processing in speech production, but other ongoing projects in the lab include studies of sequential speech production, as well as studies of speech motor disorders like spasmodic dysphonia and stuttering.
We are looking for a research assistant to join our research group for a 1-2 year stint, with possibility of extension. Our ideal candidate is a recently graduated undergraduate engineering student who is interested in running human imaging and psychophysics experiments and also has reasonable programming skills in MATLAB.
The start date for the position is as soon as possible.
Those interested in applying should contact Prof. John F. Houde.

Friday, December 2, 2011

Post-doctoral position in Cognitive Neuroscience - Barcelona

Applications are invited for a full-time post-doctoral research position in the MULTISENSORY RESEARCH GROUP at the Pompeu Fabra University (Barcelona). The post is part of the BRAINGLOT project, a Research Network on Bilingualism and Cognitive Neuroscience (Consolider-Ingenio 2010 Scheme, Spanish Ministry of Science and Education).

The project brings together the efforts of several research groups spanning different scientific disciplines with the common purpose of addressing the phenomenon of bilingualism from an open and multidisciplinary perspective. The MRG aims to understand the use of multisensory (audiovisual) speech cues in the context of learning and using a second language. The project includes behavioral, neuroimaging (fMRI, ERP), and neurostimulation (TMS) approaches.
Job description
We seek a person to lead the electrophysiological aspects of the project (ERP/EEG), including the development of independent scientific studies as well as participation in (e.g., supervision of) others. Involvement in some organizational and management aspects is also expected.
Candidate requirements
- Previous experience in ERP/EEG recording and analysis is *indispensable*
- PhD
- Motivation about the question of multisensory integration and/or speech perception
- Background in cognitive neuroscience, neuroscience, and/or cognitive psychology
- Programming skills
- Applicants from outside the EU are welcome to apply but must qualify for a valid visa.
- Duration: The position will be funded and renewable for up to two years
- Starting date: As soon as possible
- Salary: 28000EUR/year
- Travel: The project may require some travel to conferences / meetings
How to apply
Applications should include:
- a C.V. including a list of publications
- the names of two referees who would be willing to write letters of recommendation
- a brief cover letter describing research interests
For informal enquiries about the position and applications, please contact Salvador Soto-Faraco. Applications will be accepted until the position is filled.
Please mention that you are applying to the POSTDOCTORAL position in the email.

Wednesday, November 30, 2011

Open rank faculty position - Johns Hopkins

The Department of Cognitive Science in the Krieger School of Arts and Sciences and the School of Education at Johns Hopkins University seek a faculty candidate, at any level, with an exceptional record of conducting and directing research in the broad area of the Science of Learning.  The appointment will be joint between these two units, with the expectation of responsibilities in both, but will have tenure-track and/or tenured status within the Cognitive Science Department. Research approaches and content areas that are of particular interest include plasticity, learning, and development in the areas of language, visual or speech perception, or spatial representation. The ideal candidate should carry out research that makes substantive contact with theory, uses experimental, developmental, neuroscience, and/or computational approaches, and has implications for application within the broad field of learning. Candidates should be strongly interdisciplinary, prepared to carry out effective teaching, student supervision, and collaboration in a formally-oriented, highly interdisciplinary Cognitive Science department, and eager to take advantage of collaborations with the School of Education and its connection to diverse student populations in area public and private schools. Candidates should be capable of making significant contributions to a new research initiative in the Science of Learning involving the Cognitive Science Department, the School of Education, and other units; further faculty growth in this area is anticipated. The Cognitive Science Department and the School of Education have a strong commitment to achieving diversity among faculty and staff. We are particularly interested in receiving applications from members of underrepresented groups and strongly encourage women and persons of color to apply for this position. Applications are due by January 15, 2012.

Johns Hopkins University actively encourages interest from minorities and women.

Please send a cover letter, CV, research statement, and three letters of recommendation. Please send electronic submissions only.

The Johns Hopkins University is an Equal Opportunity, Affirmative Action employer. Minorities, women, Vietnam-era veterans, disabled veterans, and individuals with disabilities are encouraged to apply.

Post doc - Fridriksson lab Univ of S. Carolina

Three post-doctoral positions are available in the lab of Julius Fridriksson at the University of South Carolina (Columbia, SC, USA). The primary research foci of the lab are as follows: 1) the neural basis of speech/language processing, with special emphasis on brain plasticity; 2) the neurophysiology of aphasia recovery; and 3) statistical analyses of neuroimaging data (primarily structural and functional MRI). This research relies on a range of methodologies such as functional and structural MRI, lesion-symptom mapping, transcranial magnetic stimulation (TMS), and transcranial direct current stimulation (tDCS). The University of South Carolina has a Siemens Trio MRI scanner that is primarily devoted to research, and we have access to a state-of-the-art TMS setup. Much of our research is conducted in collaboration with several other investigators (e.g., Drs. Chris Rorden, Leonardo Bonilha, and Marom Bikson). Columbia is centrally located in South Carolina, within a two-hour drive of the beach and the mountains. The weather in Columbia is marked by "Southern" summers and a mild autumn, winter, and spring. The salary for these positions is very competitive but will be commensurate with experience and previous scholarship. The ideal applicant will work as part of a research team as well as have the chance to initiate and carry out independent projects. If interested, please contact Julius Fridriksson.

Thursday, November 3, 2011

Neurobiology of Language Conference - Last Day for pre-registration is TODAY

Just a reminder that TODAY is the last day for pre-registration for the Neurobiology of Language Conference in Annapolis, Maryland, Nov. 10-11, 2011.  You can register here. See you in Annapolis!

Thursday, October 20, 2011

Assistant/Associate/Full Professor of Communication Sciences and Disorders

The University of Texas at Dallas invites applications for a tenure-stream faculty position in the School of Behavioral and Brain Sciences. We seek an outstanding scholar whose research program in the neurobiology of communication disorders will complement and enhance the behavioral, physiological, clinical, and technology-focused investigations ongoing at the UT-Dallas Callier Center for Communication Disorders. Responsibilities include research, graduate-level teaching, and mentoring doctoral students and postdoctoral fellows. Applicants should hold a PhD in a relevant field and have an established or promising research program focusing on neurobiological aspects of communication disorders.
The UT-Dallas Callier Center for Communication Disorders is one of four centers in the School of Behavioral and Brain Sciences, which offers PhD programs in Communication Sciences and Disorders, Cognition and Neuroscience, and Psychological Sciences. The School has an established tradition of interdisciplinary research as well as collaborations with investigators at the UT Southwestern Medical Center, located adjacent to the Callier Center's Dallas site. Enrollment, faculty, facilities, and research expenditures are expanding rapidly at the University of Texas at Dallas, which boasts one of the state's most academically talented student bodies.
The University of Texas at Dallas is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, age, citizenship status, Vietnam era or special disabled veteran's status, or sexual orientation. Indication of gender and ethnic origin for affirmative action purposes is requested as part of the application process but is not required for consideration. Review of applications will begin January 1, 2012, and will continue until the position is filled; the starting date is September 1, 2012. To apply for this position, applicants should submit (a) their current curriculum vitae, (b) a letter of interest (including research interests), and (c) letters of recommendation from (or the names and contact information for) at least five professional references via the ONLINE APPLICATION FORM.
Upon submitting their preferred email address, applicants will receive instructions to access a personalized application profile website. School hiring officials will receive notification when application materials are posted and are available for review.

Vicki Carlisle
Office of the Executive Vice President and Provost
800 W Campbell Road, AD23
Richardson, TX 75080-3021
Phone: 972-883-6751| Fax: 972-883-2276
The University of Texas at Dallas

Wednesday, October 19, 2011

Postdoctoral position - Center for Language Science, Penn State

The Center for Language Science (CLS) at Pennsylvania State University invites applications for an anticipated postdoctoral position. We are seeking a candidate who has extensive language neuroscience experience, particularly with fMRI methods, and who would like to develop expertise in bilingual language processing. The position will include interaction with CLS faculty and students and the larger Penn State neuroscience community, with the aims of developing fMRI expertise among students and faculty and creating potential collaborative projects. The successful candidate will benefit from a highly interactive group of faculty whose interests include bilingual language processing, second language acquisition in children and adults, and language contact. Applicants with interests in these topics and with an interest in extending their expertise within experimental psycholinguistics and cognitive neuroscience are particularly welcome to apply. There is no expectation that applicants will have had prior experience in research on bilingualism, but previous fMRI expertise is critical.

The CLS is home to a cross-disciplinary research program that includes a new NSF training program, Partnerships for International Research and Education (PIRE): Bilingualism, mind, and brain: An interdisciplinary program in cognitive psychology, linguistics, and cognitive neuroscience. The program provides training in research on bilingualism that includes an international perspective and that exploits opportunities for collaborative research conducted with one of our international partner sites in the UK (Bangor, Wales), Germany (Leipzig), Spain (Granada and Tarragona), The Netherlands (Nijmegen), Sweden (Lund) and China (Hong Kong and Beijing) and in conjunction with our two domestic partner sites at Haskins Labs and the VL2 Science of Learning Center at Gallaudet University. The successful postdoctoral candidate will have an opportunity to engage in collaborative research within the Center's international network.

Questions about faculty research interests may be directed to the relevant core training faculty: Psychology: Judith Kroll, Ping Li, Janet van Hell, and Dan Weiss; Spanish: Rena Torres Cacoullos, Giuli Dussias, Chip Gerfen, John Lipski, and Karen Miller; Linguistics: Nola Stephens; Communication Sciences and Disorders: Carol Miller; German: Carrie Jackson, Mike Putnam, and Richard Page. Administrative questions can be directed to the Director of the Center for Language Science, Judith Kroll. More information about the Center for Language Science (CLS), the PIRE program, and faculty research programs can be found on the CLS and PIRE websites.

The initial appointment will be for one year, with a strong possibility of renewal for the next year. Salary and benefits follow NSF/NIH guidelines. The search is open to all eligible candidates regardless of citizenship.
Applicants should send a CV, several reprints or preprints, and a statement of research interests. This statement should indicate two or more core faculty members as likely primary and secondary mentors and should describe the candidate's goals for research and training during a postdoctoral position, including previous fMRI experience and directions in which the candidate would like to develop his/her expertise in the language science of bilingualism. Applicants should also provide names of three recommenders and arrange for letters of recommendation to be sent separately.

Application materials should be sent electronically. For fullest consideration, all materials should be received by December 1, 2011. The appointment can begin any time between February 1, 2012 and June 1, 2012. We encourage applications from individuals of diverse backgrounds.  Penn State is committed to affirmative action, equal opportunity, and the diversity of its workforce.

Tuesday, October 11, 2011

University of California, Irvine -- Junior Faculty Position in Cognitive Neuroscience

Subject to budgetary authorization, the Department of Cognitive Sciences at the University of California, Irvine (UCI) has available a tenure-track position at the Assistant Professor level in cognitive neuroscience. Of particular interest are researchers who employ a multi-method approach to understand the computational and neural organization of speech and language processes or higher-level perception or action. The successful candidate will interact with a dynamic and growing community in cognitive, computational, and neural sciences within the Cognitive Science Department, the Center for Cognitive Neuroscience, and the newly founded Center for Language Science. Irvine is located in Orange County, on the Southern California coastline between Los Angeles and San Diego.

The online application should include: A cover letter indicating primary research interests, CV, three recent publications, and 3‐5 letters of recommendation. Candidates should apply online at:

Review of applications will commence on December 1. Inquiries about the application process or position should be sent to:

The University of California, Irvine is an equal opportunity employer committed to excellence through diversity. For you language folks, it is worth pointing out that UCI has excellent groups in audition (the Center for Hearing Research) and cognitive neuroscience, and a growing language/speech group:

Greg Hickok - Neuroscience of Language
Kent Johnson - Philosophy of Language
Lisa Pearl - Computational models of language acquisition
Kourosh Saberi - Audition, Neuroscience of auditory/speech perception
Steve Small - Neuroscience of Language
Jon Sprouse - Experimental approaches to syntactic theory, psycholinguistics
Ramesh Srinivasan - EEG, large scale networks, speech processing
Fan-Gang Zeng - Speech perception, auditory disorders, prosthetic hearing

Monday, October 10, 2011

Do we love our iPhones literally? I really don't care

There has been a huge backlash in the scientific community and blogosphere over a recent New York Times op-ed piece by Martin Lindstrom discussing a functional MRI study of the brain response to hearing and seeing an iPhone ringing.

Purported finding: insula activation.
Interpretation: we love our iPhones, literally.
Why this interpretation?: because insula activation has previously been associated with feelings of love.

Obviously a dubious interpretation and certainly a highly questionable piece of editorial decision making on the part of the Times. Not surprisingly, the response has been vigorous.

Russ Poldrack, a respected UT Austin prof, called it "complete crap" and wrote a letter to the editor of the Times to such effect. The letter, which was co-signed by 44 neuroscientists, was published recently. Poldrack correctly pointed out the flawed logic of the claim and further noted that the insula activates for all kinds of things. I would have signed it too and I'd like to extend my thanks to Russ for taking the time to write the letter.

Tal Yarkoni, a University of Colorado Boulder post doc, wrote, "the New York Times blows it big time on brain imaging."

The Neurocritic blog was all over this one too, as was science writer and blogger David Dobbs who wins the prize for the most unrestrained headline: fMRI Shows My Bullshit Detector Going Ape Shit Over iPhone Lust

It might surprise you that I'm not going to jump on the bandwagon here. Yes, I agree the claim is complete crap, yes my bullshit detector went ape shit, and yes I think the Times editorial staff clearly could use an education in functional imaging. But I'm not too worried about this op-ed piece or the blathering of a pseudoscientist like Lindstrom. Why? Because it is so clearly ridiculous that the most harm it will do is discredit the content of the NYT, stir up some debates about iPhones vs. Androids, and maybe cause the public to question functional MRI a bit more (not a bad thing). Importantly, it will have no impact on scientific progress.

What worries me more, a lot more, are claims by serious, respected scientists that sound reasonable, but are based on the same flawed or weak logic. These claims fly under the radar, go unchallenged and DO impact scientific progress.

Consider another NYT piece published in 2006 called "Cells that Read Minds" by respected science writer Sandra Blakeslee. The article is an excellent summary of the state of scientific thought regarding the function of mirror neurons (you KNEW this was going to come back to mirror neurons, didn't you?!). Let me be clear, what follows is not a critique of Blakeslee, who very accurately summarized the field, but of the logic of the claims made by her sources, respected scientists like Giacomo Rizzolatti, Vittorio Gallese, Marco Iacoboni, and others.

To illustrate my point I'll quote from the NYT piece which quotes Iacoboni:
"When you see me perform an action - such as picking up a baseball - you automatically simulate the action in your own brain," said Dr. Marco Iacoboni, a neuroscientist at the University of California, Los Angeles, who studies mirror neurons. ... "you understand my action because you have in your brain a template for that action based on your own movements.

"You automatically simulate the action" -- this claim comes from the observation that when you watch (some!) actions (not all!) you activate motor-related areas. This is an inference, not a fact. Yes, motor areas do activate during perception, in some experiments, under some conditions. But does this mean that actions are "automatically simulated"? Or are there other possibilities? Iacoboni's own highly cited paper in the journal Science, which showed activation of the motor system during observation of actions also activated just as robustly during the observation of grey rectangle with a dot in it. Does this mean that the motor system "automatically simulates" grey rectangles with dots in them? And the region that was activated was Broca's area, long known to activate during motor action, particularly speech, but also under a variety of other behaviors and tasks, just like the insula.

" understand action because you have in your brain a template for that action based on your own movements." To use Poldrack's words, "this kind of reasoning is well known to be flawed." Just because a region previously shown to be associated with a given function (action execution) also activates for another function (action perception), doesn't mean it is doing the same thing for both functions or that the activation is causing the behavior under investigation (understanding). More to the point, the activation of a brain region in such a study tells us nothing directly about what is causing the activation. Example: in the early days of fMRI we were piloting a visual perception task and found very highly correlated and wildly significant activity in the frontal pole during visual stimulation. Using standard logic, this would indicate that the frontal poles were critically involved in low-level vision, an odd finding. What we later discovered was that the mirror that allowed the subject to view the screen was titled down too much so that every time we presented a stimulus the subject had to look up, which moved their head just enough to generate a perfectly correlated change in the signal in the portion of the brain that moved the most, the frontal pole.

Notice that the original interpretation of mirror neurons based on observations in the monkey, that cells fire both during action observation and action execution, is no less flawed logically. There is a correlation, but correlation does not imply causation.

Here's another quote from Blakeslee's piece that discusses the work of Christian Keysers and that may sound suspiciously similar to the iPhone claim.
Social emotions like guilt, shame, pride, embarrassment, disgust and lust are based on a uniquely human mirror neuron system found in a part of the brain called the insula, Dr. Keysers said. In a study not yet published, he found that when people watched a hand go forward to caress someone and then saw another hand push it away rudely, the insula registered the social pain of rejection. Humiliation appears to be mapped in the brain by the same mechanisms that encode real physical pain, he said.

Despite similar logical flaws apparent in the 2006 piece, no one seemed to notice, unlike in the recent NYT op-ed case. There was no letter to the editor signed by a couple dozen neuroscientists, no uproar that I can find regarding the ridiculously flawed logic of the claims, only mindless acceptance or indifference, except for a nice piece by Alison Gopnik, who called mirror neurons a myth and suggested that they are the "'left brain/right brain' of the 21st century". This critique has been largely ignored, if we take the proliferation of mirror neuron claims as evidence.

This should trouble you. How is it that flawed logic in one domain is obvious and creates a dramatic scientific reaction, while it goes largely unnoticed or even rubber-stamped in another? It comes down to intuition and bias. We intuitively know that a single fMRI study cannot tell us whether or not we love our iPhones. Further, we are biased by the source: one man's claim based on an unpublished study is easy to dismiss. With mirror neurons the claim seems reasonable, intuitive, and (deceptively) simple. And it is grounded in hardcore neuroscience methods, recording from single cells, with findings reported in the best journals by established, respected scientists. Given such a bias, we don't think about it as hard and are more willing to overlook or just fail to notice the logical flaws.

I don't worry about the crazy claims. They will take care of themselves. It's the ones that make sense that worry me the most and that require all of us to think a little more carefully.

Saturday, October 8, 2011

UCSF Post doc


The Speech Neuroscience Research Group at the University of California, San Francisco (UCSF) is seeking two postdoctoral fellows interested in understanding the organization of human speech processing and the neural basis of speech motor control.

UCSF is a world-class research institution with a wide array of scanner facilities that includes MRI (both 3T and 7T systems) as well as a 275-channel whole-head MEG/EEG scanner. There is also a large and rapidly expanding program of research using high-density invasive electrocorticography (ECoG) recordings from neurosurgical patients.

Two postdoctoral positions are open in the labs of Professors Edward Chang and John Houde. Professor Chang's lab focuses on the basic neural representations of acoustic, phonetic, and lexical information in human cortex. Professor Houde's lab investigates the neural basis of speech motor control, with a current focus on the neural basis of feedback processing in speech production; other ongoing projects in the lab include studies of sequential speech production, spasmodic dysphonia, and stuttering. Major experimental methods include invasive electrocorticography (ECoG), MEG source analysis, time-frequency analysis, and simultaneous EEG-fMRI.

The positions are for two years and offer a competitive salary funded by the NIH and NSF. Ideal applicants will have experience with programming (especially in the Matlab environment), and have strong backgrounds in time series analysis, signal processing, control theory, phonetics, and cognitive neuroscience.

To apply, please submit a curriculum vitae, cover letter, two references, and representative publications to Professors Edward Chang ( ) and John Houde ( ).

Thursday, October 6, 2011

Programmer position: NYU Neuroscience of Language Laboratory

A full- or part-time Programmer position is available immediately at the NYU Neuroscience of Language Laboratory. Responsibilities include both the development of MEG and EEG data analysis routines and functioning as support personnel for the lab. A strong background in statistics and Matlab is essential. Prior experience with psychological experiments and electrophysiology is preferred.

We are looking for a full-time person but will also consider an excellent match on a part-time basis. Salary commensurate with experience. To apply, please email a CV and names of references to Prof. Liina Pylkkänen.

Monkeys, and their auditory cortex neurons, can categorize speech sounds

An interesting new study by Tsunada, Lee, and Cohen (2011) has found that rhesus monkeys show categorical perception of a speech sound continuum (dad to bad) and, further, that the population response of neurons in the anterior lateral belt region of auditory cortex appears to reflect the categories. However, the average activity of the auditory cortex cells did not predict response choice. A previous study from the Cohen lab (Russ et al. 2008) found that neurons in ventral prefrontal cortex did correlate with the monkey's behavioral response in a similar speech discrimination task.

So what have we learned? We have yet more evidence that you don't need a motor speech system to perform well on a subtle speech perception task involving minimal-pair place of articulation contrasts (/b/ vs. /d/). We can add monkeys to the list of critters who can do it. Second, we learned that auditory cortex seems to code the categories, at least in the population response. The decision in such tasks, however, is not read off of the auditory response directly, but is mediated by prefrontal regions. This fits well with human stroke and imaging data suggesting a similar division of labor: auditory-related areas code speech categories while frontal regions are critical for task-related decision making, at least for these sorts of tasks.

This set of papers is definitely worth a look...


Russ, B., Orr, L., & Cohen, Y. (2008). Prefrontal Neurons Predict Choices during an Auditory Same-Different Task Current Biology, 18 (19), 1483-1488 DOI: 10.1016/j.cub.2008.08.054

Tsunada, J., Lee, J., & Cohen, Y. (2011). Representation of speech categories in the primate auditory cortex Journal of Neurophysiology, 105 (6), 2634-2646 DOI: 10.1152/jn.00037.2011

Wednesday, October 5, 2011

Asst. Prof. job in Neuroling: University of Texas at Austin, Department of Linguistics
Job Location: Texas, USA
Rank or Title: Assistant Professor
Linguistic Field(s): Neurolinguistics
LL Issue: 22.3787
Date Posted: 27-Sep-2011

Job Description: The Department of Linguistics seeks applications for a tenure-track position, with a specialization in the study of language and the brain. We have a preference for individuals with several years of experience beyond the Ph.D. The successful candidate will have a strong track record of research in how language is processed, represented, learned, and/or understood. We particularly seek candidates who have investigated linguistic processing using neuroimaging methods and who will provide leadership in the use of such techniques within the department. For information on the imaging facilities available at The University of Texas at Austin, see the web site of the Imaging Research Center. In 2013, the Department of Linguistics will be moving into a newly constructed building that will provide excellent laboratory space within the department.

Duties include: (a) teaching undergraduate and graduate courses; (b) directing thesis and dissertation research; (c) conducting original research and publication; (d) obtaining external funding to support a strong research program; (e) advising undergraduate and graduate students; and (f) performing department, college and institutional service.

 The successful candidate should have the Ph.D. in hand by August 20, 2012. He or she should have documented excellence or potential for excellence as a teacher, researcher, advisor, and leader, and a strong commitment to working collaboratively with other faculty members within the department.

To apply, please send a letter of application, curriculum vitae, three letters of recommendation, evidence of your past teaching performance or teaching potential, a list of courses you are prepared to teach, and samples of published or other written work to the application address below. Electronic applications may also be submitted. Screening of all application materials will begin November 15, 2011, although applications will be accepted until the position is filled. For inquiries regarding this position, please email Richard P. Meier, Chair.

This position is pending budgetary approval. A background check will be conducted on the successful applicant. The University of Texas at Austin is an AA/EEO employer. 

Application Deadline: 15-Nov-2011 Open until filled Application Address: Search Committee Department of Linguistics University of Texas at Austin 1 University Station/B5100 Austin, TX 78712-0198 USA

 Application Email:

 Contact Information: Professor Richard P. Meier Phone: 512 471 1701 Fax: 512 471 4340

Thursday, September 29, 2011

International conference: NeuroPsychoLinguistic Perspectives on Aphasia

Call for papers: International conference NPL-Aphasia

NeuroPsychoLinguistic Perspectives on Aphasia
International conference
21-22-23 June 2012, Toulouse, France
Languages of the conference: English and French

Abstract submission:

Guest speakers:

Marie-Pierre De Partz, Université Catholique de Louvain
Marina Laganaro, University of Geneva
Jean-Luc Nespoulous, University of Toulouse 2-Le Mirail
Michel Paradis, McGill University & UQÀM

Call for abstracts:

The study of acquired language disorders, and specifically the study of aphasia in adult patients, brings together various research perspectives around language and cognitive sciences, such as:
. Linguistics (involving different representational levels and their interfaces: phonetics, phonology, prosody, morphology, semantics, lexicon, syntax, discourse, pragmatics, …);
. Psycholinguistics (regarding the different levels of decoding and encoding processes);
. Neurolinguistics (investigating the neurobiological grounding of language and cognition).

The meeting is dedicated to illustrating different approaches to aphasia research, including qualitative and quantitative studies of language disorders in patients with left- and/or right-hemisphere lesions (stroke, traumatic injury, dementia) — both case and group studies — relating to one or a combination of several of the research areas mentioned above.

The conference particularly invites papers investigating theoretical aspects of language disorders (underlying impairments, functional reorganisation, development of compensation strategies, etc.) or
exploring practical aspects (treatment outcomes, novel proposals for therapy, etc.), based on one or several of the following perspectives (but not restricted to them):
. Modeling of language and cognitive structures and functions;
. Remediation programs for therapy (development of treatment and assessment methods based on clinical research);
. Cross-thematic perspectives:
- Disorders and normality;
- Bilingualism and crosslinguistic approaches;
- Empirical and experimental research methodologies;
- Variability and stability of performance;
- Aphasia therapy and recovery, language assessment, treatment programs,
- Spontaneous and elicited strategies and their clinical implications.

We encourage papers exploring dissociations, whether relevant or not, and papers that consider language structure, processing, and use in pathological contexts and in “normality” with original empirical and experimental methods (computational, formal, corpus analysis, eye-tracking, study of intra- and inter-task, -individual, and -language variability, dissociations between modalities: production - comprehension / speech - writing - non-verbal, fMRI, PET, awake surgery, …).

The contributions will be presented and discussed during oral (20 minutes + questions) and poster sessions. Additionally, workshops intended to stimulate discussions will be organised, with special focus on:
1- Crosslinguistic and typological approaches;
2- Empirical and experimental methods;
3- Clinical applications: elaboration of treatment programs;
4- Social readaptation of aphasic persons: improving communication to
live better.

Abstracts (maximum of 600 words including references; see the submission guidelines) should indicate which type of presentation (talk, poster, or talk/poster for a workshop) is preferred.

Important dates:

Sept. 2011> Call for abstracts
31 Dec. 2011> Deadline for abstract submission
Feb. - March 2012 > Notification of acceptance and confirmation for
March - June 2012 > Registration (early bird: before 15/04, late: after
May 2012 > Program
21-22-23 June 2012 > Conference

Tuesday, September 27, 2011

POST-DOC POSITION – MEG/EEG – National Institutes of Health, NIDCD

POSTDOCTORAL POSITION – MEG/EEG – National Institutes of Health, NIDCD Division of Intramural Research
Applications are invited for a postdoctoral position in the Language Section, NIDCD, National Institutes of Health, to work on language, social communication, and relevant neurological disorders using MEG/EEG. The research will focus on discourse-level language comprehension and production, and all aspects of natural, ecologically valid language use. Investigations will be carried out in normal adults and clinical populations including stroke, traumatic brain injury and stuttering. Major experimental methods include MEG source analysis, time-frequency analysis and simultaneous EEG-fMRI.
Applicants should have a doctoral-level degree in neuroscience, psychology, medicine or a related area. Prior experience in MEG/EEG experimental design, data acquisition and analysis is necessary. Advanced skills in time series analysis and MATLAB programming are highly desirable. Experience with fMRI is preferred but not required. Salary will be commensurate with the salary scale of the National Institutes of Health, NIDCD Division of Intramural Research. The position is funded for two to five years. Applications will be considered until the position is filled.
For further information or to submit an application (including a brief CV and two references) please contact Allen Braun, M.D.  email:

Saturday, September 24, 2011

fMRI, TMS, and Coldplay

I never thought I'd see these three nouns woven into a blog entry but Brad Buchsbaum makes it work on his FlowBrain Blog. It's actually a really nice discussion of how useful TMS really is in terms of addressing the "epiphenomenon issue" with fMRI data. Check it out:

Friday, September 23, 2011

Mirror Neuron Forum - Role of mirror neurons in speech and language processing - Part II

Marco Iacoboni's response to my argument that the motor system is not necessary for speech perception was short and sweet, so let's break it down line by line.
In a ‘‘virtual lesion’’ repetitive TMS (rTMS) study on speech perception, the TMS effects over premotor cortex were, if anything, a little stronger than the TMS effects over the auditory cortex (Meister et al., 2007). However, the effects were not reliably different, suggesting that both structures participated in the functional process, in contrast to GH’s suggestion that motor processes play a small, modulatory role in speech perception.
Meister et al. found that TMS to premotor cortex resulted in a modest decline in performance in identifying synthesized CV syllables presented in noise in the context of a three-alternative forced choice paradigm. There has been no study that I'm aware of showing that such an effect is found when natural stimuli are used. The stimuli have to be degraded, i.e., partially ambiguous. Can we conclude that premotor cortex is playing an "essential role in speech perception" as the title suggests? No, we can only conclude that it is playing a modest role in the performance of an artificial task under degraded listening conditions. And we can't even tell what aspect of the task is being disrupted. It is possible that TMS is not interfering with the perception at all but rather with the sensory-motor memory of which response button corresponds to which syllable. This one piece of evidence is held up to counter the array of studies that I cited showing that damage to the motor speech system, developmental failure of the motor speech system, or complete biological lack of the capacity for a motor speech system does not prevent speech perception. Where does the weight of the evidence leave us? The motor system plays a modest modulatory role, if that. Why didn't STG stimulation cause a greater decline in performance? There is abundant evidence that speech perception is bilaterally mediated in the STG (Hickok & Poeppel, 2000, 2004, 2007).
Again, I find it counterproductive to focus on dichotomous models (‘‘it’s auditory,’’ ‘‘no, it’s motor’’). These models, although didactically useful, tend to provide a limited understanding of the functional processes at play. Indeed, consistent with the model in GH’s Figure 2D, the most successful recent computational models of action and perception disclose the intimate relationship between motor control and perception (Friston, Daunizeau, Kilner, & Kiebel, 2010; Friston, Mattout, & Kilner, 2011).
I outlined four possible models, only two of which were dichotomous. I'm not denying that action and perception are intimately related. They are! But the functional relation is precisely the reverse of what the mirror neuron claim holds.
Eventually, we will have to get rid of these labels altogether, because they seem to get in the way of a better understanding of the phenomena under investigation.
Call it what you like, it doesn't change the fact that systems in the posterior frontal lobe aren't necessary for speech perception, whereas bilateral systems in the superior temporal lobe are. As much as some folks would like the cortex to be one big happy interacting neural network with no differentiation, the fact is that damage to different parts of the system has different effects. We have to deal with these facts. Returning to the facts, here's a quote from Meister et al.
The present results demonstrate that the involvement of the premotor cortex in perception is not merely epiphenomenal and suggest that sensory regions are not sufficient alone for human perception. p. 1695
and a figure from Rogalsky et al. 2011 which shows comprehension, word discrimination, and syllable discrimination performance of two cases with lesions involving the human mirror system.
It seems pretty clear that Meister et al.'s claim is false. The recent follow up to Rogalsky et al. using a sample of 24 cases with Broca's area lesions confirms what was found in these two cases.

So, I've covered the response to my criticisms of mirror neuron theory by two of the most prominent and thoughtful defenders of the theory. Given the opportunity to present their strongest possible rebuttal to direct critiques in the Mirror Neuron Forum, both Gallese and Iacoboni failed to mount a viable defense of their model. This, of course, is my view. I'm sure they will disagree and again I invite them to post their own comments as guest entries on this blog. So far I have not heard a peep from either of them despite direct email invitations to participate.


Gallese, V., Gernsbacher, M., Heyes, C., Hickok, G., & Iacoboni, M. (2011). Mirror Neuron Forum Perspectives on Psychological Science, 6 (4), 369-407 DOI: 10.1177/1745691611413392

Hickok, G., & Poeppel, D. (2000). Towards a functional neuroanatomy of speech perception. Trends in Cognitive Sciences, 4, 131-138.

Hickok, G., & Poeppel, D. (2004). Dorsal and ventral streams: A framework for understanding aspects of the functional anatomy of language. Cognition, 92, 67-99.

Hickok, G., & Poeppel, D. (2007). The cortical organization of speech processing. Nature Reviews Neuroscience, 8(5), 393-402.

Meister, I. G., Wilson, S. M., Deblieck, C., Wu, A. D., & Iacoboni, M. (2007). The essential role of premotor cortex in speech perception. Curr Biol, 17(19), 1692-1696.

Rogalsky, C., Love, T., Driscoll, D., Anderson, S. W., & Hickok, G. (2011). Are mirror neurons the basis of speech perception? Evidence from five cases with damage to the purported human mirror system. Neurocase, 17(2), 178-187

Mirror Neuron Forum - Role of mirror neurons in speech and language processing - Part I

Now on to my favorite mirror neuron topic, Question 2 of the Mirror Neuron Forum:

Do Mirror Mechanisms Causally Contribute to Speech Perception and Language Comprehension?

There are two questions here, each logically independent of the other, but findings from one domain may provide hints regarding the other. The first is whether mirror neurons are the basis of speech sound recognition. This was the first language-related ability that mirror neuron function was generalized to in humans. The second question is whether the motor system -- often defined as the somatotopically organized fields such as M1, which is generally considered NOT to be part of the mirror system, but no one seems to worry about that for some reason -- is involved in the representation of action-related concepts. One question is a perceptual issue, the other is a semantic/conceptual issue.

I focused on the first question for two reasons. One is its primacy in the history of the development of theories of mirror neuron function in humans. The second is that there is a TON of data on the topic, allowing us to draw firm conclusions. I consider this a test case for the MN theory and suggested that if the theory fails here, we need to seriously question its role in other domains. I then presented a list of the evidence proving (I almost never use this word, but I think it is justified here) that the motor speech system is NOT necessary for speech recognition.

Gallese did not dispute this claim. Instead he questioned whether findings from the speech perception literature should lead us to question findings in other domains.

VG: According to GH, the roles of MNs in speech perception and language understanding are to be considered tightly related: If a relationship between MNs and speech perception cannot be established, so the argument goes, it would follow that the connection between MNs and language understanding would be falsified. I disagree with this logic.

Note that I didn't actually say that findings from speech perception would falsify claims regarding language semantics. I said, "If the action understanding interpretation fails for speech perception, it raises serious questions about the theory generally." Why do I say this? Because this is the domain in which we have the most evidence. It is a test case. If the theory holds up for speech perception, then it passed a rigorous test and we might be more lenient in accepting weaker data in other domains. If it fails the rigorous test, this leads us to question the weaker data. Could data from other studies lead to the firm conclusion that motor systems play a role in action knowledge representation (or empathy or whatever)? Yes. But we are not there yet. Speech perception is the ONLY domain where the results are conclusive and the theory failed. This was my point.

Now, in another recently published paper, I reviewed the evidence claimed to support the theory that the motor system is critically involved in action semantics and found the evidence weak at best (Hickok, 2010). So let's look at what Gallese takes to be some of the strongest claims.
VG: In humans, the cortical motor system is activated during the observation of a variety of motor behaviors
Activation does not imply causation.
VG: right handers preferentially activate the left premotor cortex during lexical decisions on manual-action verbs (compared with nonmanual-action verbs), whereas left handers preferentially activate right premotor areas (Willems, Hagoort, & Casasanto, 2010). Thus, right and left handers, who perform actions differently, use correspondingly different areas of the brain for representing action verb meanings
That's nice but unsurprising and easily explainable without assuming that the meaning of the verbs is coded in the motor system. If I say a word like throw, this will activate in your brain a network of systems and representations that have previously been associated with that word. Chances are, you have previously linked that word with the very action itself: "Throw me the ball!" upon which you generate the movement. So even if the movement itself is not part of the meaning of the word, motor programs for generating the movement just might activate when you hear the word. So given that lefties and righties throw with different hands, you would expect to see the observed difference. Depending on your recent life experiences, upon hearing throw you might also activate the word up and the memory of a wild party, but that doesn't mean that up and WILD PARTY are part of the meaning of to throw; it just means they are associated at some level.

How can we test this idea more directly? One prediction is that damage to the motor system should cause deficits in understanding actions. Some studies have been published which are suggestive in this direction, e.g., in Parkinson's patients, but these cases are far from complication-free, as I noted in my 2010 review. Unfortunately, there is not a lot of (convincing) experimental evidence available. However, I will again point out that we can readily understand actions that we cannot perform, such as the coiling of a snake or the flying of a bird. Further, from an evolutionary standpoint, these are actions that are critical to understand because survival can depend on it. This indicates that action understanding, at a fundamental level, cannot be dependent on motor representations. So to sum up: the MN theory of action understanding has failed its only rigorous test. The evidence supporting the role of MNs in action semantics is debatable. There is evidence that the motor system is not critical for understanding actions generally. Together, this leads me to "seriously question" the claim that action semantics depends on the motor system.

Gallese, V., Gernsbacher, M., Heyes, C., Hickok, G., & Iacoboni, M. (2011). Mirror Neuron Forum Perspectives on Psychological Science, 6 (4), 369-407 DOI: 10.1177/1745691611413392

Hickok, G. (2010). The role of mirror neurons in speech perception and action word semantics. Language and Cognitive Processes, 25, 749 - 776.

Thursday, September 22, 2011

Job posting: Assistant or Associate Professor of Communication Sciences and Disorders (CSD) - Penn State Univ.

Assistant or Associate Professor of Communication Sciences and Disorders (CSD)

Work Unit: College Of Health & Human Development
Department: Communication Sciences and Disorders
Job Number: 34660
Affirmative Action Search Number: 023-105

The Department of Communication Sciences and Disorders (CSD), College of Health and Human Development at The Pennsylvania State University seeks candidates for a full-time continuing (36-week) tenured or tenure-track position of Assistant or Associate Professor to begin Fall 2012.

The responsibilities of this position will be to establish or continue a line of research in a specialty area(s) related to language, speech or voice science, autism, and/or fluency. Specialty interests in neuroscience, neurogenics, neuromotor disorders and/or aging are considered a plus. In addition, the successful candidate will teach undergraduate and graduate courses in the area of specialty; supervise undergraduate and graduate (M.S./Ph.D.) research; be actively involved in enhancing and building the Ph.D. program; provide service to the Department, College, and University; and contribute to the clinical aspects of the program. Opportunities exist for interdisciplinary collaborations across the University Park and Hershey Medical Center campuses. These collaborations include the Penn State Social Science Research Institute, the Center for Healthy Aging, the Social, Life, and Engineering Sciences Imaging Center (which houses a human electrophysiology facility and a 3 Tesla fMRI unit), the Penn State Center for Language Science, the Huck Institutes of the Life Sciences, and numerous departments including Biobehavioral Health, Psychology, Kinesiology, Bioengineering, Human Development and Family Studies, and departments in the College of Medicine such as Neurology.

Candidates must have an earned Ph.D. and an active research and scholarship program. Previous teaching experience and/or post-doctoral experience is desired. The CCC-SLP is desirable. Review of credentials will begin immediately, and applications will be accepted until the position is filled. Interested candidates should submit a letter of application, current curriculum vitae, copies of relevant research articles or presentations, along with the names, addresses, email and telephone numbers of three professional references, to:

Krista Wilkinson, Ph.D., Chair of the Search Committee
Professor, Communication Sciences and Disorders
c/o Sharon Nyman, Administrative Assistant
Department of Communication Sciences and Disorders
The Pennsylvania State University
308 Ford Building
University Park, PA 16802

Or, send via email to:

Penn State is committed to affirmative action, equal opportunity and the diversity of its workforce.

Neurobiology of Language Conference (NLC 2011) -- Scientific Program

The scientific program for the 2011 NLC meeting in Annapolis, Maryland has just been posted online. It looks like a fantastic slate of keynotes, debates, and platform sessions. As an SNL board member, I was involved in selecting the keynote and debate speakers, but the platform session speakers are chosen based on a blind ranking of abstracts done by an army of volunteer reviewers. The result is a very nice mix of topics and speakers. I just had my first look at the program and I don't think there is a single session that I can bug out on! This may be the very first meeting of any conference where I attend all the sessions...

Don't forget to register and see you in Annapolis!

Wednesday, September 21, 2011

The planum temporale is not a functionally homogeneous region

I'm not sure that anyone really believes that the planum temporale (PT) is a functional monolith but you wouldn't know it from the literature. People talk about THE planum temporale -- e.g., Griffiths and Warren's The Planum Temporale as a Computational Hub -- as if it were one functional thing. Cytoarchitectonic data tells us that this is highly unlikely. There are multiple fields in the PT and the posterior half is not even considered part of auditory cortex, based on its laminar organization. Now we have fMRI evidence clearly showing at least two functional subdivisions in the PT.

Two functions that have repeatedly been linked to the PT are spatial hearing and auditory-motor integration. Some authors have linked these two abilities computationally into a single mechanism:

This expanded scheme ... proposes a common computational structure for space processing and speech control in the postero-dorsal auditory stream. -Rauschecker & Scott, 2009, p. 722.
This raises the question: are the same regions in the PT involved in processing spatial and auditory-motor information? A new study addresses this question directly.

In an fMRI study subjects participated in four auditory conditions: listening to stationary noise, listening to moving noise, listening to pseudowords, and shadowing pseudowords (covert repetition). As with previous studies, contrasting the shadow and listen conditions should activate regions specific to auditory-motor processes, while contrasting the stationary and moving noise conditions should activate regions involved in spatial hearing. Subjects (N = 16) showed greater activation for shadowing in left posterior PT (yellow), area Spt, when the shadow and listen conditions were contrasted. The motion vs. stationary noise contrast revealed greater activation in a more medial and anterior portion of left PT (red).

Seeds from these two contrasts were then used to guide the DTI analysis in an examination of connectivity via streamline tractography, which revealed different patterns of connectivity.

It's not straightforward to infer computational function from mapping data, but given that completely different areas are involved and given that the connectivity patterns appear to be different, perhaps the computational mechanisms for spatial hearing and auditory-motor integration are not shared after all.

In any case, it is now very clear that, as with Broca's area, the PT is not functionally homogeneous.


Isenberg, A., Vaden, K., Saberi, K., Muftuler, L., & Hickok, G. (2011). Functionally distinct regions for spatial processing and sensory motor integration in the planum temporale Human Brain Mapping DOI: 10.1002/hbm.21373

Griffiths, T. D., & Warren, J. D. (2002). The planum temporale as a computational hub. Trends in Neuroscience, 25(7), 348-353.

Rauschecker, J. P., & Scott, S. K. (2009). Maps and streams in the auditory cortex: nonhuman primates illuminate human speech processing. Nature Neuroscience, 12(6), 718-724