Monday, February 20, 2017

Pre- and post-doctoral research positions in MEG research in the NYU Neuroscience of Language Lab (PIs Pylkkänen & Marantz in New York or Abu Dhabi)

The NYU Neuroscience of Language Lab has openings for research scientists, which could be realized either as pre-doctoral RAships or as a post-doc. The RAs could be based in either our Abu Dhabi or New York labs. A post-doctoral fellow would be based in Abu Dhabi.

A BA/BS, MA/MS or PhD in a cognitive science-related discipline (psychology, linguistics, neuroscience, etc.) or computer science is required. 

The hired person would ideally have experience with psycho- and neurolinguistic experiments, a background in statistics, and some programming ability (especially Python and MATLAB). A strong computational background and knowledge of Arabic would both be big pluses.

The pre/post-doc's role will depend on the specific qualifications of the person hired, but will in all cases involve MEG research on structural and/or semantic aspects of language. 

In Abu Dhabi, salary and benefits, including travel and lodging, are quite generous. We are looking to start these positions in summer 2017. Evaluation of applications will begin immediately.

To apply, please email a cover letter, CV, and names of references to Liina Pylkkänen and Alec Marantz. For the RAships, please indicate whether you have a preference for Abu Dhabi or New York.

Friday, February 10, 2017

Postdoctoral Fellowship: The Department of Speech, Language, and Hearing Sciences at Purdue University

Postdoctoral Fellowship: The Department of Speech, Language, and Hearing Sciences at Purdue University invites applications for a postdoctoral fellowship from the National Institute on Deafness and Other Communication Disorders, National Institutes of Health, beginning July 1, 2017. Applicants must be U.S. citizens or have permanent resident status. This will be a two-year appointment. Individuals may seek training in any of the following overarching areas: (1) Foundational; (2) Developmental Disorders; (3) Neurological and Degenerative Disorders. Potential mentors include: Alexander Francis, Stacey Halum, Michael Heinz, Jessica Huber, David Kemmerer, Keith Kluender, Ananthanarayan (Ravi) Krishnan, Laurence Leonard, Amanda Seidl, Preeti Sivasankar, Elizabeth Strickland, Christine Weber, and Ronnie Wilbur. Applicants are encouraged to contact appropriate individuals on this list prior to submitting an application. A description of the research areas of these potential mentors can be found online. Application materials should include a statement of interest including desired research trajectory, three letters of recommendation, a curriculum vitae, and copies of relevant publications. These materials should be sent to Elizabeth A. Strickland, Project Director. The deadline for submission of applications is February 28, 2017. Purdue University is an equal opportunity/equal access/affirmative action employer fully committed to achieving a diverse workforce.

Tuesday, February 7, 2017

Interdisciplinary Postdoctoral Position in Basal Ganglia-Cortical Coding of Speech


Two postdoctoral positions are available in the University of Pittsburgh Departments of Neurosurgery and Psychology. The research involves the use of invasive deep brain electrical recording and stimulation in patients with Parkinson’s disease to study subcortical contributions to speech production. One of the Postdoctoral Associates will work closely with a mentorship team led by Dr. Mark Richardson and the other will work closely with a mentorship team led by Dr. Julie Fiez. Support for these positions comes from a recently awarded BRAIN Initiative grant (Research Opportunities Using Invasive Neural Recording and Stimulating Technologies in the Human Brain, U01), for which Dr. Richardson is the PI. Other co-Investigators include Tom Mitchell and Lori Holt (CMU), Diane Litman, Rob Turner, Sue Shaiman and Mike Dickey (Pitt), and Stan Anderson and Nathan Crone (JHU).

Research Description:

An abstract of the U01 grant can be found online.

A major strength of this project is the complementary nature of extensive, multi-disciplinary expertise from team members at the University of Pittsburgh, Johns Hopkins University, and Carnegie Mellon University. This combined expertise allows us to employ a novel combination of classical analytic methods and more recent machine learning methods for supervised and exploratory analyses to document the neural dynamics of basal ganglia and cortical activity during speech production.

Job Responsibilities:

Assume an integrated role in all aspects of 1) administration of behavioral protocols, 2) intraoperative speech data collection, with advisory role for pre- and post-surgical data collection, 3) data analysis performed independently, including application of speech processing and machine learning algorithms to analyze collected data, and 4) manuscript and grant writing.


Qualifications:

Ph.D. in computational neuroscience, computer science, linguistics, psychology, neuroscience, communication science, engineering, bioengineering, or an equivalent field; previous research experience in computational neuroscience, neurolinguistics, or speech-language processing is desired, along with expertise in MATLAB, acoustic signal processing, and behavioral studies of human speech.

Interested candidates should please send a cover letter and CV to Corrie Durisko.

Monday, November 28, 2016

Revisiting the relation between speech production and speech perception: Further comments on Skipper et al.

Continuing the "discussion" of Skipper, Devlin, and Lametti's (SDL) recent and in my opinion badly misguided review of the relation between speech perception and production, let's consider this quote on page 84:
Miceli, Gainotti, Caltagirone, and Masullo (1980) found a strong relationship between the ability to produce speech and discriminate syllables in 69 fluent and nonfluent aphasics. Specifically, contrasts between groups with and without a phonemic output disorder showed that patients with a disorder were worse at discriminating phonemes, particularly but not limited to those distinguished by place of articulation
This is misleading. "Ability to produce speech" in this paper is defined basically as the presence of phonemic paraphasias in the absence of articulatory difficulty, which will tend to identify fluent aphasics with posterior lesions, such as Wernicke's and conduction aphasia. This is a rather odd measure of "ability to produce speech," but nonetheless the article reports that patients with a "phonemic output disorder" so defined (POV+) were compared with those without one (POV-) on a syllable discrimination task, and the POV+ group performed worse, which is what SDL note and call a "strong relationship." However, when Miceli et al. dug deeper to ask whether there was a correlation between the severity of the POV+ deficit and the severity of the syllable discrimination deficit, no relation was observed. More importantly, Miceli et al. go on to report dissociations between POV+ and comprehension measures, which is the point I've been making for quite a while.

Thus, rather than providing evidence for a relation between speech output ability and the ability to perceive speech, the report shows (i) that the severity of the production deficit is not correlated with the severity of performance on a syllable discrimination task and (ii) the presence of production deficits (POV+) dissociates from measures of auditory comprehension.  

SDL also claim in the same section that "Both children and adults with cerebral palsy have been shown to perform worse on phoneme discrimination and this is often related to articulatory abilities," citing in support of their claims Bishop et al. 1990. This is one of my favorite studies because it clearly shows how incredibly important task selection is to understanding speech perception. It is true that people with cerebral palsy performed worse on syllable discrimination tasks, but the same participants had NO IMPAIRMENT relative to controls when the same speech sounds were comprehended (using a very cool task) rather than discriminated. See my blog post about the Bishop et al. study here.

SDL also use Parkinson's disease--"a degenerative movement disorder that results in reductions in premotor, SMA, and parietal cortex metabolism, linked to the basal ganglia"--as evidence that motor impairment affects speech perception.  I've addressed these findings previously in the Myth of Mirror Neurons, but noticed a new paper in the citation list by Vitale et al., so I looked it up and noted a fascinating conclusion from this large scale (N>100) study.  Here's an extended quote from the abstract:
Our patients with Parkinson's disease showed age-dependent peripheral, unilateral, or bilateral hearing impairment. Whether these auditory deficits are intrinsic to Parkinson's disease or secondary to a more complex impaired processing of sensorial inputs occurring over the course of illness remains to be determined. Because α-synuclein is located predominately in the efferent neuronal system within the inner ear, it could affect susceptibility to noise-induced hearing loss or presbycusis. It is feasible that the natural aging process combined with neurodegenerative changes intrinsic to Parkinson's disease might interfere with cochlear transduction mechanisms, thus anticipating presbycusis
So people with Parkinson's disease have peripheral hearing loss. It seems to me that might be a better explanation of the speech perception deficit than damage to the motor system, as SDL try to argue.

Friday, November 18, 2016

How do chinchillas, pigeons, and infants perceive speech? Another Comment on Skipper et al.

There's a nagging problem for any theory that holds that the motor system is critical for speech perception: critters without the biological capacity for speech production can be trained to perceive speech remarkably well. Here's the graph from Kuhl & Miller showing categorical perception in chinchillas:

This, I would say, was a major factor in the demise of the motor theory of speech perception, and it is why speech scientists like me had abandoned the idea in any strong form by the time mirror neurons came on the scene.
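For readers unfamiliar with what a categorical-perception identification curve looks like, here's a minimal sketch in Python. The boundary and slope values are hypothetical illustrations, not Kuhl & Miller's actual data; the point is just the characteristic shape: labeling is near-uniform within a category and flips steeply at the boundary.

```python
import math

def identification_prob(vot_ms, boundary=33.0, slope=0.3):
    """Probability of labeling a stimulus as /t/ (vs. /d/), modeled as a
    logistic function of voice onset time (VOT). Parameters are hypothetical."""
    return 1.0 / (1.0 + math.exp(-slope * (vot_ms - boundary)))

# A synthetic VOT continuum from 0 to 80 ms in 10 ms steps
continuum = list(range(0, 81, 10))
curve = [identification_prob(v) for v in continuum]

# Categorical perception: responses sit near 0 or 1 away from the
# boundary and cross 0.5 steeply at the boundary (~33 ms here).
for vot, p in zip(continuum, curve):
    print(f"VOT {vot:2d} ms -> P(/t/) = {p:.2f}")
```

The striking finding in the chinchilla work was that an animal with no capacity for speech production produces an identification function of essentially this same sigmoidal shape, with a boundary close to the human one.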

A reasonable response to data such as these is to acknowledge that speech perception can happen with the auditory system alone. With that as our limiting case, if you want to explore the role of the motor system in speech perception, it will have to be a much more nuanced contribution, e.g., that the motor system somehow contributes a little bit under some circumstances. I've acknowledged this possibility. From Hickok et al. 2009:
the claim for the ‘necessity’ of the motor system in speech perception seems to boil down to 10 percentage points worth of performance on the ability to discriminate or judge identity of acoustically degraded, out of context, meaningless syllables – tasks that are not used in typical speech processing and that double-dissociate from more ecologically valid measures of auditory comprehension even when contextual cues have been controlled. This suggests a very minor modulatory role indeed for the motor system in speech perception. 
Ok, so that's a little snarky for an acknowledgement.  Here's another that's more measured from Hickok et al. 2011:
we propose that sensorimotor integration exists to support speech production, that is, the capacity to learn how to articulate the sounds of one’s language, keep motor control processes tuned, and support online error detection and correction. This is achieved, we suggest, via a state feedback control mechanism. Once in place, the computational properties of the system afford the ability to modulate perceptual processes somewhat, and it is this aspect of the system that recent studies of motor involvement in perception have tapped into.

I'm not sure I agree with myself anymore, as the evidence for a modulatory role under ecologically valid listening conditions is extremely weak. For example, Pulvermuller and colleagues took the task-selection complaints seriously and performed a TMS study using comprehension as their measure. This study failed to replicate the effect on speech perception accuracy found with discrimination or identification tasks, but it did find an RT effect that held for some sounds and not others. See my detailed comments on this study here.

But back to SDL and chinchillas.  What is their take on these facts? Here's what they say:
Though these categorical speech perception studies are often revered because they suggest the reality of speech units like phonemes, they have been criticized. Problems include that the tasks assume the units under study and that within category differences are actually readily discernible and meaningful
I agree on both counts: these studies don't necessarily imply that the phoneme (or segment) is a unit of analysis in perception, and listeners can in fact hear within-category differences (see Massaro's critiques of categorical perception). But that doesn't make the similarity between the human and chinchilla curves evaporate. No matter what unit is being analyzed, or whether within-category differences can be detected under other task conditions, it still remains that chinchillas can hear subtle differences between speech sounds. SDL's critique is tangential.

SDL then turn to a line of argumentation that makes no sense to me. They write of the claim from animal work that
Neurobiologically, the argument is unsound because, in the early work frequently used to support this argument... the brain was not directly observed. 
The claim is not neurobiological. It is functional. Neurobiology doesn't matter for the structure of the argument: if an animal cannot produce speech yet can perceive it, it follows that you don't need to be able to produce speech to perceive it. Period. But let's read on:
Yet it has been suggested that premotor cortex is involved in processing sounds that we cannot produce in ways that make use of the underlying computational mechanisms that would also be involved in movement
So this implies that motor plans for non-speech actions are sufficient for perceiving speech. So, assuming that SDL buy into the broader claim that action understanding for, say, grasping and speech is achieved via motor simulation, what they are actually saying is that when a chinchilla perceives a speech sound, it resonates with a motor network for some non-speech action (biting?), and this somehow results in the correct perception of the speech sound (for which there is no motor plan) rather than the action that it actually resonated with. Hmm. Isn't it a bit more parsimonious to assume that the two sounds are acoustically different and that the chinchilla's auditory system can detect and represent that difference?

If we are going to accept a hypothesis that deviates substantially from parsimony, we're going to need some very strong evidence.  SDL highlight the fact that premotor areas of nonhuman primates activate during the perception of sounds they cannot produce. But again there is a more parsimonious explanation. The brain needs to map all sorts of perceptual events onto action plans for responding to those events. If you see a snake coiling and rattling its tail, you need to map that percept onto movement plans for getting out of the way. Presumably, your premotor cortex would be activated by that percept even though you have no motor programs for coiling and tail rattling. The same mechanism can explain the data SDL mention.

SDL also highlight that "Bruderer et al. (2015) showed that perturbing the articulators of 6-month old infants disrupts their ability to perceive speech sounds." But the study is confounded by differences in the amount of distraction that the methods of perturbation likely cause. Here are the teethers they used. Which one would you guess is more annoying to the infant? Once you've made your guess, go read the paper and see which one caused the speech perception decline.

In sum, SDL make a convoluted argument to salvage the idea that the motor system is responsible for the perception of speech even in animals and prelingual infants. A much simpler explanation exists: auditory speech perception is achieved by the auditory system, which is present in adult humans, prelingual infants, and chinchillas, all of which can perceive speech surprisingly well.