Sunday, March 30, 2014

Lab manager position available at Duke University


We are looking for a highly motivated recent graduate (BS, BA) to help kick-start the new lab of Prof. Tobias Overath (http://people.duke.edu/~jto10) at the Duke Institute for Brain Sciences (DIBS). Work in the lab investigates how sounds, from simple sinusoids to complex speech signals, are processed in the human brain, tracking the underlying neural processes with a combination of behavioral (psychoacoustics) and neuroimaging (fMRI, EEG) methods.

An ideal candidate will have received an undergraduate degree in psychology, neuroscience, biomedical engineering, or a related field by summer 2014, and will have some familiarity with fMRI, EEG, and/or other experimental techniques. An interest in how the brain processes sound is a strong plus, as is excellent knowledge of at least one programming language (preferably Matlab). We are looking for a lab manager who is conscientious and dependable as well as highly self-motivated and proactive.

The main duties of the lab manager position will focus on (1) initially getting the new lab up and running (e.g. ordering of equipment), (2) organizational tasks (e.g. logistics, IRB, subject recruitment, teaching materials), and (3) scientific tasks (e.g. design, implementation, analysis and write-up of experiments). The balance of these tasks will shift gradually towards (3), and the lab manager will have the chance to learn many skills that will be relevant to pursuing a career in science or medicine.

The position is available for an initial one-year period starting in Fall 2014, with the potential for renewal. Salary will be $31,000 p.a. plus benefits.

To initiate an application for the position, please email the PI Tobias Overath (t.overath@duke.edu) by April 15, 2014 (later applications will also be considered if the position is not filled), including the following documents: (1) a brief statement about yourself and why you are interested in the position, (2) a resume that includes brief descriptions of past research experiences, programming knowledge, relevant courses and grades, and (3) the names and email addresses of 2 references who could be contacted (at least one reference should be able to speak to your research background).

A double dose of ECoG – two 2014 papers on speech


A recent paper in Nature and a recent paper in Science provide ECoG evidence for dorsal stream function and STG function, respectively.

The first paper, “Sensory–motor transformations for speech occur bilaterally,” is from my NYU colleague Bijan Pesaran’s lab; the first author is Greg Cogan, a post-doc with Bijan. The paper tackles the important question of how dorsal stream structures implement sensory–motor transformations, an issue that Greg Hickok and I have speculated about (and Greg H. has worked on extensively). This rich paper reports a bunch of cool findings worth reading and studying. One of the strong claims – the part of the data providing the title – concerns the bilateral nature of (those parts of) the dorsal stream underpinning sensory-motor transformations for speech. Previous work has argued that output-related dorsal-stream processing is lateralized, certainly much more strongly than ventral stream areas/functions. I still find that position on the right track (cf. Hickok & Poeppel 2007), and I derive some special frisson from the fact that Greg Cogan, the co-architect of this counter-argument, was my graduate student and is an important collaborator. The data are the data – so it’s now important to figure out the why/how/what/when of these two dorsal streams. I am no apologist for lateralization in speech, but these data certainly present a new interpretive challenge. Speculations, ideas, data welcome.    


The second paper, “Phonetic Feature Encoding in Human Superior Temporal Gyrus,” is from Eddie Chang’s lab at UCSF and is spearheaded by Nima Mesgarani (now faculty at Columbia University in the EE department). Over the years, the evidence has steadily accumulated that STG is the ‘home’ of acoustic-phonetic perceptual analysis. Previous ECoG data, including stimulation data, for example from Dana Boatman, Nathan Crone, and colleagues, provide strong evidence for STG’s role (for a review, see Boatman 2004, http://www.ncbi.nlm.nih.gov/pubmed/15037126). This new work builds on those findings and demonstrates the sensitivity and selectivity of this region. From data acquired while the patients listened to spoken sentences (from numerous speakers), Nima et al. extracted the electrodes’ activity profiles for all English phonemes. Phonetic features turn out to be an effective grouping principle (manner is especially prominent). Nima had done a similar project for his dissertation work in Shihab Shamma’s lab (I harassed him about it at his defense …) - but ferrets neither speak nor listen to all that much human speech … In this new work, the acoustic-phonetic encoding is elegantly described, providing some ways to think about the intermediate representations that could link input-related spectro-temporal processing to linguistic structures.
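For readers who want the grouping step in concrete form, here is a minimal sketch with made-up numbers (not the paper's actual analysis pipeline, which is far richer): take each phoneme's average electrode response profile and ask whether profiles are more similar within a manner class than across classes.

```python
# Illustrative sketch only: toy per-phoneme electrode profiles, grouped by
# manner of articulation, comparing within- vs. across-class similarity.
import numpy as np

rng = np.random.default_rng(0)

phonemes = ["p", "t", "k", "b", "d", "g", "m", "n", "s", "f", "z", "v"]
manner = {"p": "stop", "t": "stop", "k": "stop", "b": "stop", "d": "stop",
          "g": "stop", "m": "nasal", "n": "nasal",
          "s": "fricative", "f": "fricative", "z": "fricative", "v": "fricative"}
# Placeholder data: mean response of 64 electrodes to each phoneme.
profiles = {ph: rng.normal(size=64) for ph in phonemes}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

within, across = [], []
for i, p1 in enumerate(phonemes):
    for p2 in phonemes[i + 1:]:
        sim = cosine(profiles[p1], profiles[p2])
        (within if manner[p1] == manner[p2] else across).append(sim)

print(f"mean within-manner similarity: {np.mean(within):.3f}")
print(f"mean across-manner similarity: {np.mean(across):.3f}")
```

With real recordings, higher within-class than across-class similarity is the kind of pattern that would suggest phonetic features organize the responses.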

Friday, March 21, 2014

Graduate Student or Post-doctoral Fellow with Dr. Deryk Beal – Neurodevelopment of speech motor control

Supervisor: Dr. Deryk Beal

Dr. Deryk Beal, principal investigator and founder of the Speech Neurogenetics Laboratory at the University of Alberta, invites applications for a WCHRI (http://wchri.srv.ualberta.ca/) funded position in the areas of developmental cognitive neuroscience, speech motor control and their related underlying genetic contributions.
Dr. Beal is interested in advancing our understanding of the genetic and neural contributions to speech motor control in typically developing children and adults as well as children and adults with developmental stuttering and other motor speech disorders. His laboratory is equipped with state-of-the-art data acquisition systems and analysis software, and has full access to the Peter S. Allen MR Research Centre.
Dr. Beal was awarded an innovation grant from the Women and Children’s Health Research Institute to support a PhD student or Postdoctoral Fellow. The Speech Neurogenetics Laboratory provides a rich and multidimensional advanced graduate or post-graduate training program, as it is positioned within the Centre for Neuroscience, the Institute for Stuttering Treatment and Research, and the Faculty of Rehabilitation Medicine. Collaborators on current projects span the University of Alberta, Boston University, and Nationwide Children’s Hospital at The Ohio State University.
The candidate will be expected to oversee genetic family aggregation, neuroimaging, and behavioural motor control experiments, as well as to analyze behavioural data and functional and structural MRI and DTI data, prepare manuscripts for publication, and participate in conferences. There are many strong opportunities for merit-based authorship.
The successful applicant will have a master’s or doctoral degree in a field related to cognitive neuroscience, neuroscience, psychology, developmental psychology, medicine or speech pathology. Individuals with a background in electrical engineering, biomedical engineering or computer science will also be considered. The candidate should be able to work efficiently, independently and diligently. The candidate should also possess excellent interpersonal, oral and written communication skills and enjoy working as part of a diverse and energetic interdisciplinary team. Applicants are expected to have a strong research background in the design and statistical analysis of brain-imaging experiments and/or motor control and learning experiments. Programming skills (MATLAB, C++, Python) and experience with at least one neuroimaging analysis package (SPM, FSL, Freesurfer, ExploreDTI) are strongly desirable.
The approximate start date is Spring/Summer 2014. The successful candidate will participate fully in the activities of the laboratory, including regular supervisory meetings, laboratory meetings and journal clubs.
For consideration please send a statement of interest, a CV and a list of three potential referees via email to Deryk Beal, PhD (dbeal@ualberta.ca). The search will continue until the position is filled.
Websites: http://www.ualberta.ca/~acn/Beal.html and http://www.istar.ualberta.ca/ISTAR%20Staff/DerykBeal.aspx

Wednesday, March 5, 2014

Hierarchical and Independent Levels of Representation in Speech Production: Discussion of the HSFC Model

Guest post by Matt Goldrick and Adam Buchwald

As detailed in a 2012 Talking Brains post, Greg and colleagues have proposed a model for speech production that aims to synthesize research from motor control, psycholinguistics, and neuroscience. This year, the inaugural issue of Language, Cognition, and Neuroscience (a re-christening of Language and Cognitive Processes) was guest edited by Albert Costa and F. Xavier Alario. It featured an article by Greg outlining a descendant of this model, the Hierarchical State Feedback Control model (HSFC). This target article was accompanied by a number of commentaries, including one co-authored by the two of us and Brenda Rapp, as well as a response by Greg.

We (Matt and Adam) wanted to take advantage of the extra space afforded by Talking Brains to continue this conversation. The H in HSFC emphasizes the key role of hierarchical representations in Greg's proposal. In this post, we'd like to articulate why psycholinguists and neuroscientists have argued that in addition to such hierarchical representations, distributed/parallel encoding plays a critical role in language production.

To orient the discussion, consider two classical types of neurocognitive representational structures from vision:

1) Hierarchical representations. In representations that have this type of structure, there is a mapping (a necessary relationship) between two sets of representations. Consider classic simple vs. complex cells (Hubel & Wiesel, 1962). Under this proposal, simple cells preferentially respond to oriented bars in particular locations in the visual field. By integrating responses over many simple cells, complex cells respond to oriented bars across multiple locations. Critically, there is a precise mapping between these two levels of representation; the response properties of complex cells are defined by a function stated over the response properties of simple cells. 
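To make the notion of a hierarchical mapping concrete, here is a minimal sketch (a deliberate caricature with assumed numbers, not a model of actual V1 physiology): the "complex cell" response is defined entirely as a function of the "simple cell" responses, here a max over positions for a given orientation.

```python
# Toy illustration: a complex cell defined purely over simple-cell responses.
import numpy as np

n_positions, n_orientations = 8, 4

def simple_cells(image_orientation, image_position):
    """Each simple cell responds only to its preferred orientation at its own position."""
    responses = np.zeros((n_positions, n_orientations))
    responses[image_position, image_orientation] = 1.0
    return responses

def complex_cell(simple_responses, preferred_orientation):
    """Complex cell: pooled (max) response over all positions for one orientation."""
    return simple_responses[:, preferred_orientation].max()

s = simple_cells(image_orientation=2, image_position=5)
print(complex_cell(s, preferred_orientation=2))  # responds regardless of position -> 1.0
print(complex_cell(s, preferred_orientation=0))  # wrong orientation -> 0.0
```

The point is simply that the higher level has no content of its own beyond the function computed over the lower level.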

2) Parallel, independent representations. In representations that have this type of structure, the relationship between the two sets of representations is not defined by a direct mapping which spells out one level in terms of the other; rather, they are independent dimensions of structure. These dimensions can be linked or bound together, but they need not necessarily co-occur. Consider Treisman and colleagues' classic Feature-Integration Theory, which claims that some dimensions of visual stimuli are initially processed independently and only later bound together. This proposal provides a ready account of illusory conjunctions (Treisman & Schmidt, 1982). For example, if letter identity and color are coded independently, this can explain how a display with green Xs and brown Ts can give rise to the erroneous perception of a green T; this percept would be unlikely if letter identity and color were encoded in a single representation. Critically, the two types of information must be encoded independently (but in parallel) for these illusory conjunctions to occur during the later process of binding.  
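As a toy sketch of this second scheme (again ours for illustration, not any published implementation), independent feature codes plus a separate, fallible binding step are enough to generate illusory conjunctions:

```python
# Illustrative sketch: identity and color are registered on independent
# dimensions; a later binding step links them, and can mis-link them.
import random

display = [("X", "green"), ("T", "brown")]

# Stage 1: features registered independently.
identities = [item[0] for item in display]
colors = [item[1] for item in display]

# Stage 2: binding. Assume binding can fail (e.g., under attentional load).
def bind(identities, colors, binding_fails=False):
    if binding_fails:
        colors = random.sample(colors, k=len(colors))
    return list(zip(identities, colors))

print(bind(identities, colors))                      # veridical: green X, brown T
print(bind(identities, colors, binding_fails=True))  # may yield an illusory "green T"
```

The key property is that identity and color exist as separate codes whether or not they end up correctly bound.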

The HSFC model emphasizes the role of hierarchical representations. There is abundant evidence that these play a role in speech production. With respect to speech motor control, many accounts adopt a syllable-sized, relatively coarse-grained specification of motor movements, which directly maps onto detailed information regarding the precise temporal and kinematic coordination involved in production. There is also evidence that there are multiple levels of segment-sized representations that specify different types of information. A classic distinction is between context-independent vs. position-specific aspects of sound structure. The context-independent representations encode information about the sounds (e.g., /t/ in table and stable), and these map to position-specific representations that spell out the details (e.g., table contains aspirated [th] and stable contains unaspirated [t]). Evidence that these constitute distinct levels of representation includes data from individuals with acquired speech impairment (Buchwald & Miozzo, 2011). While this is not directly specified in the current HSFC model, it is clearly consistent with the overarching account as noted in Greg's response.  
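As a concrete (and deliberately oversimplified) sketch of such a mapping between segment-sized levels, consider a toy spell-out function that maps a context-independent /t/ onto an aspirated or unaspirated position-specific variant depending on its environment; the rule below is an illustrative simplification, not a full phonological analysis.

```python
# Toy mapping from context-independent segments to position-specific variants.
def spell_out(segments):
    surface = []
    for i, seg in enumerate(segments):
        if seg == "t" and i == 0:
            surface.append("tʰ")   # word-initial /t/: aspirated, as in "table"
        else:
            surface.append(seg)    # e.g., /t/ after /s/: unaspirated, as in "stable"
    return surface

print(spell_out(["t", "a", "b", "l"]))       # table  -> aspirated [tʰ]
print(spell_out(["s", "t", "a", "b", "l"]))  # stable -> unaspirated [t]
```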

But what we'd like to emphasize is that parallel, independent representations also play a key role in language production. In particular, there are abundant reasons to believe that at certain levels of representation, syllabic and segmental structure are not organized in a strict hierarchical fashion, but rather form parallel aspects of form representation. A number of results suggest that rather than syllables being defined as chunks of segments, syllable structure defines a frame; segments are then bound or linked to positions within this frame (see Goldrick, in press, for review and discussion of other dimensions of phonological structure). 

To make this contrast explicit, consider the syllable "cat." Under a strictly hierarchical theory, this syllable could be defined by a mapping from [kaet] to the component segments [k-Onset] [ae-Nucleus] [t-Coda]. Under a theory utilizing independent representations, there is an [Onset]-[Nucleus]-[Coda] syllable frame and, independently, three segments /k/, /ae/, /t/. The syllable is represented by the bindings /k/-[Onset], /ae/-[Nucleus], /t/-[Coda].
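In rough Python (a toy illustration of the contrast, not a claim about how any particular model is implemented), the two options look like this:

```python
# (1) Strictly hierarchical: the syllable is a chunk whose parts are
#     position-labelled segments; the segments have no status outside the chunk.
hierarchical_cat = {"syllable": [("k", "Onset"), ("ae", "Nucleus"), ("t", "Coda")]}

# (2) Independent dimensions: a structural frame and a set of segments are
#     represented separately; the syllable is the set of bindings linking
#     each segment to a frame position.
frame = ["Onset", "Nucleus", "Coda"]      # syllable-structure dimension
segments = ["k", "ae", "t"]               # segmental dimension
bindings = dict(zip(frame, segments))     # {"Onset": "k", "Nucleus": "ae", "Coda": "t"}

print(hierarchical_cat)
print(bindings)
```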

The first form of evidence in favor of the independent-representations perspective comes from illusory conjunctions in production. Speech errors can result in the mis-ordering of segments. In the majority of these errors, the segments occur in the wrong syllable but the correct syllable position (e.g., bad cat misproduced as "bad bat"). However, a substantial minority (more than 20% of errors in corpora of spontaneous speech; Vousden, Brown, & Harley, 2000) result in segments being produced in incorrect syllable positions (e.g., film misproduced as "flim"). Just as letter identity and color form independent, dissociable dimensions of visual representation, segment identity and syllable position form dissociable dimensions of phonological representation in production.
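Under the frame-plus-bindings view, such errors have a natural description: the segments are all retrieved correctly, but one of them gets bound to the wrong structural slot. A toy sketch (again illustrative only, not a published model):

```python
# Segments and frame positions represented independently; an ordering error
# binds /l/ to an Onset slot instead of its Coda slot, turning "film" into "flim".
target_segments = ["f", "i", "l", "m"]
target_frame = ["Onset", "Nucleus", "Coda", "Coda"]   # film: f-Onset, i-Nucleus, lm-Coda

correct = list(zip(target_segments, target_frame))
error = [("f", "Onset"), ("l", "Onset"), ("i", "Nucleus"), ("m", "Coda")]

print("target:", correct)   # film
print("error: ", error)     # flim: same segments, different position bindings
```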

Evidence from priming points to a similar conclusion. Colored object naming is facilitated by segmental overlap between the color and object name, even when the segments occur in different syllable positions (e.g., green flag; Damian & Dumay, 2009). In addition, production of phrases made up of two nonsense words is facilitated when the two nonsense words have syllables with the same structure compared to nonwords that do not have matching structures -- even when there are no segments shared across the two syllables (Sevald, Dell, & Cole, 1995). For example, repeating two nonwords that both start with CVC syllables (e.g., KEM TIL.FER) or CVCC syllables (KEMP TILF.NER) is faster than repeating nonwords that start with syllables with contrasting consonant-vowel patterns (e.g., KEM TILF.NER or KEMP TIL.FLER). This occurs in spite of the syllables sharing no segments (e.g., KEM and TIL).

Based on data such as these, psycholinguistic theories (e.g., Shattuck-Hufnagel, 1992) have proposed that syllables and segments are not related in a strictly hierarchical fashion, but rather form independent-yet-linked dimensions of sound structure. That's not to say that the links are purely arbitrary; only certain segments can be associated to particular syllable positions (e.g., in English, /ng/ can be associated to coda but not onset). But segments are not merely the "elaborated" form of syllabic chunks; they form independent entities.

While hierarchical representations are a critical part of speech production, it's important to acknowledge the critical role of non-hierarchical representation. Mirroring other domains of processing, both representational schemas serve critical functions in the neurocognitive mechanisms supporting speech. 

References
Buchwald, A., & Miozzo, M. (2011). Finding levels of abstraction in speech production: Evidence from sound-production impairment. Psychological Science, 22, 1113-1119.
Damian, M. F., & Dumay, N. (2009). Exploring phonological encoding through repeated segments. Language and Cognitive Processes, 24, 685-712.
Goldrick, M. (in press). Phonological processing: The retrieval and encoding of word form information in speech production. In M. Goldrick, V. Ferreira, & M. Miozzo (Eds.), The Oxford handbook of language production. Oxford: Oxford University Press.
Hickok, G. (2014a). The architecture of speech production and the role of the phoneme in speech processing. Language, Cognition and Neuroscience, 29, 2-20.
Hickok, G. (2014b). Towards an integrated psycholinguistic, neurolinguistic, sensorimotor framework for speech production. Language, Cognition and Neuroscience, 29, 52-59.
Hubel, D. H., & Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology, 160(1), 106-154.
Rapp, B., Buchwald, A., & Goldrick, M. (2014). Integrating accounts of speech production: The devil is in the representational details. Language, Cognition and Neuroscience, 29, 24-27.
Sevald, C. A., Dell, G. S., & Cole, J. S. (1995). Syllable structure in speech production: Are syllables chunks or schemas? Journal of Memory and Language, 34, 807-820.
Shattuck-Hufnagel, S. (1992). The role of word structure in segmental serial ordering. Cognition, 42, 213-259.
Treisman, A., & Schmidt, H. (1982). Illusory conjunctions in the perception of objects. Cognitive Psychology, 14, 107-141.
Vousden, J. I., Brown, G. D., & Harley, T. A. (2000). Serial control of phonology in speech production: A hierarchical model. Cognitive Psychology, 41, 101-175.

Monday, March 3, 2014

Post-Doc and Research Assistant Positions Available – CoNi Lab, Arizona State University


A post-doc position and a full-time research assistant position are available in the Communication Neuroimaging and Neuroscience Laboratory (CoNi Lab) at Arizona State University, directed by Dr. Corianne Rogalsky. Our research is devoted to the cognitive neuroscience of language and music in the healthy and damaged brain, using techniques including fMRI, DTI, neuropsychological testing, and high-resolution lesion mapping.

Post-Doc: Responsibilities will include designing and implementing fMRI and structural imaging studies aimed at understanding the neural computations contributing to speech comprehension in everyday communication, particularly focusing on the contributions of meta-linguistic processes such as working memory, cognitive control, and attention, broadly defined. All scanning is conducted at the Barrow Neurological Institute in downtown Phoenix, and there is access to stroke and aphasic populations through the ASU Speech & Hearing Clinic and numerous stroke facilities throughout the Phoenix area. Requirements include spoken and written proficiency in English and a Ph.D. in neuroscience, psychology, computer science, or a related field. Preference will be given to applicants who have evidence of successfully conducting fMRI experiments in the realm of cognition. Proficiency with the Linux computing environment, E-prime, Matlab, AFNI, and/or FSL is preferred.

Research Assistant: Responsibilities will include behavioral and fMRI data collection, programming of experiments, contacting and scheduling research participants, and data scoring and analysis. These tasks require an applicant with strong initiative to solve problems, work self-sufficiently, and multitask efficiently. Requirements include spoken and written proficiency in English, a minimum of a bachelor-level degree (e.g., BA or BS), preferably in psychology, neuroscience, computer science, or a related field, and willingness to make a 2-year commitment. Strong interpersonal skills, the ability to effectively recruit and work with participants (including special populations), and the ability to work well with other members of the lab are essential. Preference will be given to applicants who are also proficient with the Linux computing environment, are familiar with E-prime and/or Matlab, and/or have experience with neuroimaging analysis software such as AFNI or FSL.

The CoNi Lab is situated in the Department of Speech and Hearing Science at ASU.  ASU is located in Tempe, Arizona, in the metropolitan Phoenix area, which has a thriving neuroscience and neuroimaging community including the Mayo Clinic and Barrow Neurological Institute. Tempe features 330 days of sunshine a year.


Applications will be reviewed as they are received. The preferred start date for both positions is July 1st, but slightly later start dates will also be considered. Both positions are funded for two years, with possible extensions pending funding. If interested, please email a brief cover letter (including a description of research interests, qualifications, future goals, and available start date), CV, reprints or preprints (for post-doc position), and contact information for two references to corianne.rogalsky@asu.edu. Arizona State University is an equal opportunity employer. Please visit neuroimaging.lab.asu.edu for more information about the CoNi lab.