Monday, May 4, 2015

Postdoctoral position, Center for Language Science, The Pennsylvania State University


The Center for Language Science (CLS) at The Pennsylvania State University (http://cls.psu.edu/) invites applications for a postdoctoral position. The CLS is home to a cross-disciplinary research program that includes the NSF training program, Partnerships for International Research and Education (PIRE): Bilingualism, mind, and brain: An interdisciplinary program in cognitive psychology, linguistics, and cognitive neuroscience. The program provides training in research on bilingualism that includes an international perspective and that exploits opportunities for collaborative research conducted with one of our international partner sites in the UK (Bangor, Wales), Germany (Mannheim), Spain (Granada and Tarragona), The Netherlands (Nijmegen), Sweden (Lund), and China (Hong Kong and Beijing) and in conjunction with our two domestic partner sites at Haskins Labs and the VL2 Science of Learning Center at Gallaudet University. The successful postdoctoral candidate will have an opportunity to engage in collaborative research within the Center's international network.

We welcome applications from candidates with preparation in any of the disciplines that contribute to our program. The successful candidate will benefit from a highly interactive group of faculty whose interests include bilingual language processing, language acquisition in children and adults, and language contact, among other topics. Applicants with interests in these topics and with an interest in extending their expertise within experimental psycholinguistics and cognitive neuroscience are particularly welcome to apply. There is no expectation that applicants will have had prior experience in research on bilingualism, but we expect candidates to make a commitment to gain expertise in research on bilingualism and also in using neuroscience methods, including both fMRI and ERPs. There is also a possibility of teaching one course during the academic year in the Program in Linguistics.

Questions about faculty research interests may be directed to relevant core training faculty: Psychology: Michele Diaz, Judith Kroll, Ping Li, Janet van Hell, and Dan Weiss; Spanish: Rena Torres Cacoullos, Matt Carlson, Giuli Dussias, John Lipski, Marianna Nadeu, and Karen Miller; Communication Sciences and Disorders: Carol Miller and Chaleece Sandberg; German: Carrie Jackson, Mike Putnam, and Richard Page; French: Marc Authier and Lisa Reed. Administrative questions can be directed to the Director of the Center for Language Science, Judith Kroll: jfk7@psu.edu. More information about the Center for Language Science (CLS), about the PIRE program, and faculty research programs can be found at http://cls.psu.edu or http://pire.la.psu.edu.

The initial appointment will be for one year, with a possibility of renewal for a second year depending on the availability of funds. Salary follows NSF/NIH guidelines. The PIRE funding requires that we restrict the search to US citizens only. Applicants should upload a CV, several reprints or preprints, and a statement of research interests. This statement should indicate two or more core faculty members as likely primary and secondary mentors and should describe the candidate's goals for research and training during a postdoctoral position, including previous experience and directions in which the candidate would like to develop his/her expertise in the language science of bilingualism. Candidates interested in gaining teaching experience should include information on teaching experience and preparation.

Additionally, applicants should arrange for three letters of recommendation to be sent separately to Sharon Elder at sle9@psu.edu.  Review of applications will begin immediately and continue until the position is filled.  The appointment can begin as early as August 1, 2015 but no later than October 1, 2015. Candidates must have completed their Ph.D. by the time of appointment.  Apply online at https://psu.jobs/job/57207

CAMPUS SECURITY CRIME STATISTICS: For more about safety at Penn State, and to review the Annual Security Report which contains information about crime statistics and other safety and security matters, please go to  http://www.police.psu.edu/clery/, which will also provide you with detail on how to request a hard copy of the Annual Security Report.

Penn State is an equal opportunity, affirmative action employer, and is committed to providing employment opportunities to all qualified applicants without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.

Friday, May 1, 2015

Is there an evolutionary model for language? The case for "Within Species Comparative Computational Neuroscience"

Comparative Neuroscience is entrenched in our methodological psyche. We regularly use phylogenetically related animals (mice, cats, monkeys) as model systems for understanding our own brain.  Hubel and Wiesel shared a Nobel Prize "for their discoveries concerning information processing in the visual system", not "the cat visual system" (their model animal), because we believe evolution conserves neurocomputational principles, including coding strategies, architectures, and so on.  Studying mice, cats, and monkeys, we believe, teaches us about the human brain.

For decades, centuries maybe, language scientists have lamented the lack of an animal model for language.  In fact, this has been our excuse for why vision scientists seem to have made so much more progress in mapping the neural foundation of their system than ours.  But is it really the case that we don't have an animal model?  Some researchers will quickly point out that birdsong or ultrasonic vocalizations in mice can provide a useful model.

But I suggest we can do better, or at least do more, by looking for evolutionary homologies to our language system not in other species but in our own brain.

Here's the basic idea:

(1) Neural systems, like the species they inhabit, have a long evolutionary history.
(2) The evolution of neural subsystems (vision, hearing, olfaction, memory, emotion, social cognition, language ...) was not uniform but more klugey.
(3) The evolution of a given subsystem builds on its neurocomputational ancestor systems.
(4) Therefore, just like we find homologies in structural or functional design of related species that reflect their evolutionary lineage, we should find neural homologies in computations and architectures that reflect their neurocomputational lineage.

Language is an interesting case because it evolved so recently compared to other neural systems. Consider that the earliest estimates for the first stages of language evolution are in the range of 1.75 Mya, while more typical estimates are roughly equivalent to the appearance of H. sapiens about 100,000 years ago.  But even if we assume the rudiments of a neural system for language were developing 2 Mya, it is quite clear that this system evolved in the context of an already rich neurocomputational system, with highly developed sensory and motor, memory, conceptual, and social systems in place.  Specifically, our lineage split from that of our very bright primate cousin, the chimpanzee, ~5 Mya, which leaves at least an additional 3 million years of brain evolution between our common ancestor with chimps and the earliest stages of language evolution.
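Just to make the arithmetic behind that "at least 3 million years" explicit, here is a minimal back-of-the-envelope sketch using the round dates quoted above; the variable names are mine, and the dates are rough estimates rather than precise values:

# Back-of-the-envelope timeline (in millions of years ago), using the round
# figures quoted in the paragraph above; all dates are rough estimates.
CHIMP_SPLIT_MYA = 5.0        # human-chimpanzee lineage split, ~5 Mya
EARLIEST_LANGUAGE_MYA = 2.0  # generous early estimate for proto-language
TYPICAL_LANGUAGE_MYA = 0.1   # ~100,000 years ago, roughly with H. sapiens

# Brain evolution that happened after the split but before language emerged.
gap_early = CHIMP_SPLIT_MYA - EARLIEST_LANGUAGE_MYA    # 3.0 million years
gap_typical = CHIMP_SPLIT_MYA - TYPICAL_LANGUAGE_MYA   # 4.9 million years

print(f"Post-split, pre-language brain evolution (early estimate): {gap_early:.1f} My")
print(f"Post-split, pre-language brain evolution (typical estimate): {gap_typical:.1f} My")

Either way, the point is the same: most of the post-split evolution of our brain happened before language did.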

What this means is that language circuits were likely built on top of, and therefore should show homologies to, other systems in our own brain.  And this opens the door to a Comparative Computational Neuroscience program for language: looking to non-linguistic neural systems for clues to the brain organization for language.

This is precisely the approach that gave rise to the Dual Stream model for language, which argues for a homologous organization between language and non-linguistic sensory systems such as the dual stream models of vision and hearing (see here for similar arguments).  Recent work suggesting shared computational principles behind motor control and linguistic processes in speech production is another example.  The fact that we are finding what look like homologies provides some evidence that this approach might hold promise.

This does not mean that language can be reduced to sensorimotor circuits any more than the human mind can be "reduced" to the macaque's.  The approach is in fact quite agnostic to the degree of specialization of a system compared to its neurocomputational cousins, making it a potentially useful methodological framework for both the language-is-special and the language-is-not-special crowds.  All it really says is that we can learn something about language systems from studying hearing or vision or motor control, just like we can learn something about human vision from studying cats.

Tuesday, April 21, 2015

22nd Annual Joint Symposium on Neural Computation -- USC -- May 16, 2015

*** ANNOUNCEMENT ***

*22nd Annual Joint Symposium on Neural Computation*
Saturday May 16, 2015 from 8:00 am to 5:00 pm.

USC Campus, Salvatori Hall (B6 on campus map <http://web-app.usc.edu/maps/map.pdf>)

_Confirmed Speakers_:

Brenda Bloodgood, UCSD
Denise Cai, UCLA
Sreekanth Chalasani, Salk Institute
Michael Dickinson, Caltech
Andrew Hires, USC
Eugene Izhikevich, Brain Corporation
Mayank Mehta, UCLA
Terry Sanger, USC
Francisco Valero-Cuevas, USC
Greg Ver Steeg, ISI
Yi Zuo, UC Santa Cruz

Poster abstract deadline is _Monday, May 4_ (see below for details).
Perfectly OK to re-use a poster you have presented at another recent conference.

This year's JSNC is sponsored in part by the Brain Corporation.

_Registration_. Please pre-register by emailing your name and affiliation with
"JSNC 2015 Registration" in the subject header.  The pre-registration fee is $25
for students, $35 for others. Registration at the door is $45.  Registration
includes a light breakfast, coffee, snacks, and a catered lunch.  You may pay
(1) by Google Wallet (click the $ to attach money in Gmail), or (2) by personal
check mailed to Denise Steiner, BME Department, Mail Code 1111, USC, Los
Angeles, 90089.  Pre-registration will save you money and time, and allow us to
plan for food.

_Poster Abstract Submission_.  Posters are welcomed from all members of the
neural computation community of Southern California.  Research areas include all
aspects of neural computation, including cellular, network, and systems-level
modeling, as well as applications of neuromorphic algorithms and hardware to
problems in vision, speech, motor control, and cognitive function.  Reuse of
posters from recent meetings (e.g., SFN, Cosyne, NIPS) is encouraged.  The
deadline for receipt of abstracts is Monday, May 4, 2015.  Abstracts must fit on
a single page, in PDF format.  They should be sent to JointSymposium@gmail.com
with "JSNC 2015 Abstract" in the subject line.  Accepted abstracts will be
invited for poster presentation at the conference.  Notification of acceptance
will be given on May 6, 2015.

_Parking_.  Enter the USC campus at Gate 1 on Exposition Blvd. or  Gate 6 on
Vermont.  Park as directed by the gate attendant.

_About the JSNC_.  In 1994, the Institute for Neural Computation at UCSD hosted
the first Joint Symposium on Neural Computation with Caltech.  This Symposium
brought together students and faculty, and experimentalists and theorists, for
an enjoyable day of short presentations on topics relating to information
processing in the nervous system.  Since then, the JSNC has rotated among
UCSD, Caltech, USC, UCLA, UC Irvine, and UC Riverside.

Friday, April 17, 2015

Doctoral Student Position – Neurodevelopment of speech-motor control


The Neurodevelopmental Speech Disorders Laboratory (PI Deryk Beal, PhD) at the University of Alberta invites applications for a Natural Sciences and Engineering Research Council of Canada (NSERC - http://www.nserc-crsng.gc.ca) funded doctoral student position in the areas of developmental cognitive neuroscience, sensorimotor integration and speech-motor control.

The Neurodevelopmental Speech Disorders Laboratory provides a rich and multidimensional advanced doctoral training program. The lab is positioned within the Neuroscience and Mental Health Institute (www.neuroscience.ualberta.ca), MR Research Centre (www.invivonmr.ualberta.ca ), Institute for Stuttering Treatment and Research (www.istar.ualberta.ca) and Faculty of Rehabilitation Medicine.

The successful candidate will oversee neuroimaging and behavioural experiments detailing the neurodevelopment of sensorimotor control in children. Duties will include collection and analysis of behavioural, functional and structural MRI, and DTI data; preparation of manuscripts for publication; and participation in reading groups, symposia and conferences. There are many strong opportunities for merit-based authorship.

The successful applicant will have an undergraduate or master’s degree in a field related to cognitive neuroscience, neuroscience, psychology, developmental psychology, medicine or speech pathology. Individuals with a background in electrical engineering, biomedical engineering or computer science will also be considered.

The candidate should be able to work efficiently, independently and diligently. The candidate should also possess excellent interpersonal, oral and written communication skills and enjoy working as part of a diverse and energetic interdisciplinary team. Applicants are expected to have a strong academic track record and significant skill with statistical analysis. Programming skills (MATLAB, C++, Python) and experience with at least one of the major neuroimaging analysis packages (SPM, FSL, FreeSurfer, ExploreDTI) are strongly desirable.

Applications will be accepted until May 1, 2015. Successful candidates will participate fully in the activities of the laboratory including regular supervisory meetings, laboratory meetings and journal clubs. 


For consideration please send a statement of interest, a CV, unofficial transcripts and a list of three potential referees via email to Deryk Beal, PhD (dbeal@ualberta.ca).

An even cooler demo of saccadic suppression?

The previous post described a demo of saccadic suppression (also described here) in which you get up close to a mirror and move your eyes back and forth while taking a selfie video.  Your own eye motion is typically not visible in the mirror but is quite visible in the selfie.

Is this just an effect in which signals from the retina are "shut down" during a rapid saccade?  To find out, try this: do the same experiment, but instead of fixating on one eye and then the other repeatedly, move your finger slowly back and forth right in front of your eyes.  Track the movement of your finger in the reflection so that your eyes and your finger are close to the same focal plane.


You will readily notice the movement of your finger (no surprise), but while still tracking your finger, direct your attention to your eyes.  Can you see them moving as they follow your finger? Or do they appear stationary?  Do this while taking a video selfie.  When you watch the video, do you see the movement of your eyes?  Let me know what you see!  For me there's a rather dramatic difference in the perception of my own slow tracking eye movements (they appear pretty much stationary in the mirror) versus my finger, which is obviously moving back and forth.

What's interesting here, if I'm thinking about this correctly (vision peeps chime in!), is that in the live, mirror-viewing condition, the retinal image of the finger movement is relatively stable, whereas the retinal image of the eye gaze direction is changing. Yet we see the finger, not the eyes, moving.

Thursday, April 16, 2015

Why can't you see your eyes move? Saccadic suppression demo

Saccadic suppression refers to the failure to detect motion or spatial displacement of a retinal image during self-generated eye movements. If you look from one side of the room to the other, you don't perceive the room as moving despite the displacement of the image across your retina.

Here's a cool demo of the effect.  Look at your eyes in a mirror. Get right up close. Now fixate one eye then the other. Go back and forth. Most likely you will not detect your own eye movements at all.  Now grab your mobile phone and take a selfie video of yourself doing the same thing, like this:


Watch the video and you will readily see your eyes move back and forth even though you didn't see it while you took the movie.  This is saccadic suppression.

Tuesday, April 14, 2015

Human Research Technologist/Lab Manager position -- Ctr for Lang Sci, Penn State

The Center for Language Science (http://cls.psu.edu/) at The Pennsylvania State University invites applications for a Human Research Technologist/Lab Manager position. The Center for Language Science includes a highly interactive group of faculty and students whose interests include bilingualism, language processing, language acquisition in children and adults, language contact, dialectology and the linguistics of bilingualism. The job includes preparing materials for behavioral, eye tracking, and electrophysiological studies, programming experiments and testing research participants using each of these methods, and performing statistical analyses using a range of software applications. The individual will be responsible for organizing the laboratory schedule, recruiting research participants, developing appropriate databases, and managing the laboratory operation, including oversight of equipment maintenance and website updating.

The position typically requires an Associate's degree or higher plus one year of related experience, or an equivalent combination of education and experience; a Bachelor's degree is preferred. Knowledge of E-Prime, SPSS, and MATLAB is desirable, as is experience with eye tracking and event-related potential methods, but training will be provided for all technical methods and for the conduct of research with human participants. The successful candidate will be a college graduate who has had laboratory experience as an undergraduate, preferably with both behavioral and cognitive neuroscience methods.

Interested applicants should include in their submitted materials the names of three references. Questions about the position can be sent to Judy Kroll (jfk7@psu.edu). Although the start date is flexible, the appointment will begin no later than July 1. This is a fixed-term appointment funded for one year from date of hire with the possibility of re-funding. Apply online at https://psu.jobs/job/55522

CAMPUS SECURITY CRIME STATISTICS: For more about safety at Penn State, and to review the Annual Security Report which contains information about crime statistics and other safety and security matters, please go to http://www.police.psu.edu/clery, which will also provide you with detail on how to request a hard copy of the Annual Security Report. 


Penn State is an equal opportunity, affirmative action employer, and is committed to providing employment opportunities to all qualified applicants without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.