Wednesday, March 22, 2017

Misunderstandings of the Hickok & Poeppel Dual Stream framework: Comments on Dial & Martin 2017

A recent paper by Dial & Martin (DM) presents some interesting data on the relation between performance on a range of different speech perception tasks, including some that have been the topic of discussion on this blog and in many of my papers, among them syllable discrimination and auditory comprehension.  I have argued in several papers with David Poeppel and others that these two tasks differentially engage the dorsal (syllable discrimination) and ventral (comprehension) streams.  DM sought to test this claim by examining how well these tasks hang together or dissociate in a group of 13 aphasic patients.  Their primary claim is that performance on sublexical and comprehension tasks largely hangs together, in contrast to previous reports of double dissociations. They suggest the discrepancy is due to better-controlled stimuli in their experiment compared to past studies. DM's experiments are really nicely done and generated some fantastic data.  I don't think their conclusions about the dual stream model follow, however, because they get the dual stream model wrong.

First, a comment on their data, focusing on the syllable discrimination and word-picture matching tasks (their Experiment 2a), as these are the poster-child cases.  DM report a strong correlation between performance on these tasks.  It indeed looks quite strong.  But they also report that two patients (18%) performed significantly better on the auditory comprehension task than on the discrimination task. The control group showed the same pattern: significantly better on comprehension than discrimination.  So this is consistent with claims that these tasks tap into partially shared, partially different processes, as Hickok & Poeppel (HP) have claimed.

Do these findings lead to a rejection of part of the HP dual stream framework claims?  DM say yes. Here are a couple of quotes from their concluding remarks:
5.2. Concluding comments on implications for dual route models 
Though dual route models with a specific neuroanatomical basis like that of Hickok and Poeppel have been proposed relatively recently (Hickok and Poeppel, 2000), cognitive models of language processes with a dual route framework (though typically without a specified neural basis) are common in the neuropsychological literature, particularly for reading and repetition (e.g., Coltheart et al., 2001; Dell et al., 2007; Hanley et al., 2004; Hanley and Kay, 1997; Hillis and Caramazza, 1991; McCarthy and Warrington, 1984; Nozari et al., 2010). Critically, many of these models assume that sublexical processing is shared between the two routes and the routes do not become activated until after sublexical processing occurs. A similar approach could be applied in the speech perception domain. That is, one might assume that there are separable routes for translation to speech output and for accessing meaning, but assume that sublexical processing is shared by the two routes and must be accomplished before processing branches into the separate routes. 
... In summary, the current study provides support for models of speech perception where processing of sublexical information is a prerequisite for processing of lexical information, as is the case in TRACE (McClelland and Elman, 1986), NAM (Luce and Pisoni, 1998) and Shortlist/MERGE (Norris, 1994; Norris et al., 2000). On the other hand, we failed to find support for models that do not require passage through sublexical levels to reach lexical levels, such as the episodic theory of speech perception (e.g., Goldinger, 1998) or dual route models of speech perception (Hickok and Poeppel, 2000, 2004, 2007; Hickok, 2014; Poeppel and Hickok, 2004; Majerus, 2013; Scott and Wise, 2004; Wise et al., 2001).  [emphasis added]
The problem with these conclusions is that this characterization of the HP dual route framework is inaccurate.  We do not claim that the system does not require passage through sublexical levels.  Rather, we specifically propose a phonological (not lexical) level of processing/representation that is shared between the two routes, as is clear in our figure from HP 2007 (yellow shading).


This is not a new feature of the HP framework.  Our claim of a shared level of representation between dorsal and ventral streams goes back to our 2000 paper.  From the abstract:
In this review, we argue that cortical fields in the posterior–superior temporal lobe, bilaterally, constitute the primary substrate for constructing sound-based representations of speech, and that these sound-based representations interface with different supramodal systems in a task-dependent manner. [emphasis added]
To restate, we proposed one sound-based (not lexically based) speech network located in the STG region that interfaces with two systems in a task-dependent manner.  This clearly predicts associations between tasks if the functional damage is in the shared region, and dissociations if the functional damage is in one stream or the other.  Both patterns should be found, and DM's study confirms this.

So where did the idea come from that HP propose that speech recognition/comprehension can skip the sublexical level?  They quote one of my papers with my former student Kenny Vaden as support for this assumption:
For example, Vaden et al. (2011) state that: [sublexical] information is only represented on the motor side of speech processing and…[is] not explicitly extracted or represented as a part of spoken word recognition (p. 2672). 
But this is misleading especially when you look at the term that DM replaced with their bracketed [sublexical] term.  Here's the full quote from this paper:
Our findings are more in line with the view that segment level information is only represented explicitly on the motor side of speech processing and that segments are not explicitly extracted or represented as a part of spoken word recognition as some authors have proposed (Massaro, 1972).  -Vaden et al. 2011
Two things to note here.  One is that Vaden et al. were noting that the findings we reported were more in line with theories that did not specifically implicate segmental representations in speech recognition; we were not making a claim about the position of the HP dual stream model.  Second, and more importantly, there is a difference between sublexical and segmental.  Sublexical means anything below the level of the word, which includes segments but also syllables or pieces of syllables. In recent years I have leaned more and more toward the view that segmental units are not represented on the perceptual/recognition side of speech processing, as the Vaden et al. quote suggests.  (David's position is different, by the way, I think.)  But this view of mine does not imply that sublexical information isn't processed in the STG and shared between the two streams.  I believe it is!  And DM's findings are perfectly compatible with this view.

Moreover, the HP claim has little to do with the nature of the representation and more to do with the process.  Notice that we don't say that the dorsal stream is more involved in sublexical representations; we say that it is more involved in sublexical tasks.  It is about the task-driven cognitive/metalinguistic/ecologically invalid processes that are invoked most strongly by sublexical tasks, what we called "explicit attention" in HP 2000:
Tasks that regularly involve these extra-auditory left hemisphere structures [i.e., the dorsal stream] all seem to require explicit attention to segmental information.  Note that such tasks are fundamentally different from tasks that involve auditory comprehension: when one listens to an utterance in normal conversation, there is no conscious knowledge of the occurrence of specific phonemic segments, the content of the message is consciously retained. 
So the reason why the dorsal stream gets involved in syllable discrimination is that the task requires attentional mechanisms that aren't normally engaged in everyday speech recognition, and the network over which these attentional mechanisms can best operate is the sensorimotor dorsal stream network.

The problem with tasks like syllable discrimination is NOT that they can't assess the integrity of the perceptual analysis/representational system in the STG/STS; it is that you can't tell whether deficits on the task come from perceptual problems or from metalinguistic attentional (or working memory) problems.  It's interesting to see how various speechy tasks hang together or not--and stay tuned for my own foray into this area with evidence for both associations and dissociations consistent with HP--but honestly, if you want to unambiguously map the circuits and computations involved in speech recognition as it is used in the wild, dump syllable discrimination and stick to auditory comprehension.

Tuesday, March 14, 2017

RESEARCH FACULTY POSITIONS at the BCBL - Basque Center on Cognition, Brain and Language (San Sebastián, Basque Country, Spain)

RESEARCH FACULTY POSITIONS at the BCBL - Basque Center on Cognition, Brain and Language (San Sebastián, Basque Country, Spain), www.bcbl.eu (Severo Ochoa Center of Excellence)

The Basque Center on Cognition, Brain and Language (San Sebastián, Basque Country, Spain), together with IKERBASQUE (Basque Foundation for Science), offers 3 permanent IKERBASQUE Research Professor positions in the following areas:

- Language acquisition
- Any area of language processing and/or disorders, with advanced experience in MRI
- Any area of language processing and/or disorders, with advanced experience in MEG

The BCBL (recently awarded the Severo Ochoa label of excellence) promotes a vibrant research environment without substantial teaching obligations. It provides access to the most advanced behavioral and neuroimaging techniques, including a 3 Tesla MRI scanner, a whole-head MEG system, four ERP labs, a NIRS lab, a baby lab with eyetracker, EEG and NIRS, two eyetracking labs, and several well-equipped behavioral labs.  There are excellent technical support staff and research personnel (PhD students and postdoctoral researchers). The senior positions are permanent appointments.

We are looking for cognitive neuroscientists or experimental psychologists with a background in psycholinguistics and/or neighboring cognitive neuroscience areas, and for physicists and/or engineers with fMRI or MEG expertise. Individuals interested in undertaking research in the fields described at http://www.bcbl.eu/research/lines/ should apply through the BCBL web page (www.bcbl.eu/jobs). The successful candidate will work within the research lines of the BCBL, whose main aim is to develop high-risk/high-gain projects at the frontiers of cognitive neuroscience. We expect a high level of engagement and creativity in an interdisciplinary and international environment.

Deadline: June 30th

We encourage immediate applications as the selection process will be ongoing and the appointment may be made before the deadline.

Only senior researchers with a strong record of research experience will be considered. Women candidates are especially welcome.
To submit your application, please follow this link: http://www.bcbl.eu/jobs, apply for "Ikerbasque Research Professor 2017," and upload:
  1. Your curriculum vitae.
  2. A cover letter/statement describing your research interests (4000 characters maximum).
  3. The names of two referees who would be willing to write letters of recommendation.

Applicants should be fluent in English. Knowledge of Spanish and/or Basque will be considered useful but is not compulsory.


For more information, please contact the Director of BCBL, Manuel Carreiras (m.carreiras@bcbl.eu).

Monday, February 27, 2017

Post-doctoral position in sensorimotor learning and control of speech production


The Laboratory for Speech Physiology and Motor Control (PI Ludo Max, Ph.D.) at the University of Washington (Seattle) announces an open post-doctoral position in the areas of sensorimotor integration and sensorimotor learning for speech production. The position will involve experimental work on both typical speech and stuttering. The lab is located in the University of Washington's Department of Speech and Hearing Sciences and has additional affiliations with the Graduate Program in Neuroscience and the Department of Bioengineering. See http://faculty.washington.edu/ludomax/lab/ for more information.

The successful candidate will use speech sensorimotor adaptation paradigms (with digital signal processing perturbations applied to the real-time auditory feedback or mechanical forces applied to the jaw by a robotic device) to investigate aspects of learning and control in stuttering and nonstuttering adults and children. In addition, the candidate will use electroencephalographic (EEG) techniques to investigate auditory-motor interactions during speech movement planning in the same populations.

The position is initially for one year (a second-year extension is possible contingent upon satisfactory performance and productivity) with a preferred starting date in the spring or early summer of 2017. Applicants should have a Ph.D. by the start of the appointment. Review of applications will begin immediately. Candidates with a Ph.D. in neuroscience, cognitive/behavioral neuroscience, motor control/kinesiology, biomedical engineering, communication disorders/speech science, and related fields are encouraged to apply.

We seek a candidate with excellent verbal and written communication skills who is strongly motivated and has substantial computer programming experience (in particular MATLAB and/or R).


For more information, please contact lab director Ludo Max, Ph.D. (LudoMax@uw.edu). Applications can be submitted to the same e-mail address. Interested candidates should submit (a) a cover letter describing their research experiences, interests, and goals, (b) a curriculum vitae, (c) the names and contact information of three individuals who can serve as references, and (d) reprints of relevant journal publications.

The University of Washington is an affirmative action and equal opportunity employer. All qualified applicants will receive consideration for employment without regard to, among other things, race, religion, color, national origin, sex, age, status as protected veterans, or status as qualified individuals with disabilities.

Friday, February 24, 2017

Post-Doc Position in the Department of Speech, Language & Hearing Sciences, Sargent College, Boston University


The Guenther Lab at Boston University is seeking applications for a postdoctoral associate in computational neuroscience and neuroimaging of speech in normal and disordered populations. The Guenther Lab is one of the world's preeminent speech neuroscience laboratories, and the associate will work with a team of experts in neuroimaging, computational modeling, and neural data collection and analysis within the Guenther Lab, the Athinoula A. Martinos Center for Biomedical Imaging at Massachusetts General Hospital, and the Picower Institute for Learning and Memory at the Massachusetts Institute of Technology.

Required Skills:
PhD in cognitive or computational neuroscience, biomedical engineering, speech, language and hearing science, or a related field. Strong written and oral communication skills are required, as well as experience with neural data analysis. Preference will be given to candidates with strong computer programming (Matlab, Python, C++) and statistics backgrounds. Knowledge of speech motor control processes and/or computational modeling methods is desired. Salary will be aligned with experience and NIH guidelines. A two-year commitment is required.

The position is available immediately. Interested applicants should forward a CV, cover letter, and two letters of recommendation to Frank Guenther (guenther@bu.edu).

We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law. We are a VEVRAA Federal Contractor.

Job Location: Boston, Massachusetts, United States


Position Type: Full-Time/Regular

Monday, February 20, 2017

Pre- and post-doctoral research positions in MEG research in the NYU Neuroscience of Language Lab (PIs Pylkkänen & Marantz in New York or Abu Dhabi)


The NYU Neuroscience of Language Lab (http://www.psych.nyu.edu/nellab/) has openings for research scientists, which could be realized either as pre-doctoral RAships or as a post-doc. The RAs could be based either in our Abu Dhabi or New York labs. A post-doctoral fellow would be based in Abu Dhabi.


A BA/BS, MA/MS or PhD in a cognitive science-related discipline (psychology, linguistics, neuroscience, etc.) or computer science is required. 


The hired person would ideally have experience with psycho- and neurolinguistic experiments, a background in statistics, and some programming ability (especially Python and Matlab). A strong computational background and knowledge of Arabic would both be big pluses.


The pre/post-doc's role will depend on the specific qualifications of the person hired, but will in all cases involve MEG research on structural and/or semantic aspects of language. 


In Abu Dhabi, salary and benefits, including travel and lodging, are quite generous. We are looking to start these positions in summer 2017. Evaluation of applications will begin immediately.



To apply, please email a cover letter, CV, and names of references to Liina Pylkkänen at liina.pylkkanen@nyu.edu and Alec Marantz at marantz@nyu.edu. For the RAships, please indicate if you have a preference for either Abu Dhabi or New York.

Friday, February 10, 2017

Postdoctoral Fellowship: The Department of Speech, Language, and Hearing Sciences at Purdue University

Postdoctoral Fellowship: The Department of Speech, Language, and Hearing Sciences at Purdue University invites applications for a postdoctoral fellowship from the National Institute on Deafness and Other Communication Disorders, National Institutes of Health, beginning July 1, 2017. Applicants must be U.S. citizens or have permanent resident status. This will be a two-year appointment.

Individuals may seek training in any of the following overarching areas: (1) Foundational; (2) Developmental Disorders; (3) Neurological and Degenerative Disorders. Potential mentors include: Alexander Francis, Stacey Halum, Michael Heinz, Jessica Huber, David Kemmerer, Keith Kluender, Ananthanarayan (Ravi) Krishnan, Laurence Leonard, Amanda Seidl, Preeti Sivasankar, Elizabeth Strickland, Christine Weber, and Ronnie Wilbur. Applicants are encouraged to contact appropriate individuals on this list prior to submitting an application. A description of the research areas of these potential mentors can be found at http://www.purdue.edu/hhs/slhs/research/areas.html.

Application materials should include a statement of interest including desired research trajectory, three letters of recommendation, a curriculum vitae, and copies of relevant publications. These materials should be sent to Elizabeth A. Strickland, Project Director, at estrick@purdue.edu. Deadline for submission of applications is February 28, 2017.

Purdue University is an equal opportunity/equal access/affirmative action employer fully committed to achieving a diverse workforce. www.purdue.edu/hhs/slhs