Tuesday, April 21, 2015

22nd Annual Joint Symposium on Neural Computation -- USC -- May 16, 2015


*22nd Annual Joint Symposium on Neural Computation*
Saturday May 16, 2015 from 8:00 am to 5:00 pm.

USC Campus, Salvatori Hall (B6 on campus map <http://web-app.usc.edu/maps/map.pdf>)

_Confirmed Speakers_:

Brenda Bloodgood, UCSD
Denise Cai, UCLA
Sreekanth Chalasani, Salk Institute
Michael Dickinson, Caltech
Andrew Hires, USC
Eugene Izhikevich, Brain Corporation
Mayank Mehta, UCLA
Terry Sanger, USC
Francisco Valero-Cuevas, USC
Greg Ver Steeg, ISI
Yi Zuo, UC Santa Cruz

Poster abstract deadline is _Monday, May 4_ (see below for details).
Perfectly OK to re-use a poster you have presented at another recent conference.

This year's JSNC is sponsored in part by the Brain Corporation.

_Registration_. Please pre-register by emailing your name and affiliation to
JointSymposium@gmail.com with "JSNC 2015 Registration" in the subject header.
Pre-registration fee is $25 for students, $35 for others. Registration at the
door is $45.  Registration includes light breakfast, coffee, snacks, and catered
lunch.  You may pay by (1) Google Wallet (click the $ to attach money in Gmail),
or (2) personal check mailed to Denise Steiner, BME Department, Mail Code 1111,
USC, Los Angeles, 90089.  Pre-registration will save you money and time, and
allow us to plan for food.

_Poster Abstract Submission_.  Posters are welcomed from all members of the
neural computation community of Southern California.  Research areas include all
aspects of neural computation including cellular, network, and systems-level
modeling; applications of neuromorphic algorithms and hardware to problems in
vision, speech, motor control, and cognitive function.  Reuse of posters from
recent meetings is encouraged (e.g. SFN, Cosyne, NIPS, etc.).  The deadline for
receipt of abstracts is Monday, May 4, 2015.  Abstracts must fit on a single
page, in PDF format.  They should be sent to JointSymposium@gmail.com
with "JSNC 2015 Abstract" in the subject
line.  Accepted abstracts will be invited for poster presentation at the
conference.  Notification of acceptance will be given on May 6, 2015.

_Parking_.  Enter the USC campus at Gate 1 on Exposition Blvd. or Gate 6 on
Vermont.  Park as directed by the gate attendant.

_About the JSNC_.  In 1994, the Institute for Neural Computation at UCSD hosted
the first Joint Symposium on Neural Computation with Caltech.  This Symposium
brought together students and faculty, and experimentalists and theorists, for
an enjoyable day of short presentations on topics relating to information
processing in the nervous system.  Since then, the JSNC has rotated among
UCSD, Caltech, USC, UCLA, UC Irvine, and UC Riverside.

Friday, April 17, 2015

Doctoral Student Position – Neurodevelopment of speech-motor control

The Neurodevelopmental Speech Disorders Laboratory (PI Deryk Beal, PhD) at the University of Alberta invites applications for a Natural Sciences and Engineering Research Council of Canada (NSERC - http://www.nserc-crsng.gc.ca) funded doctoral student position in the areas of developmental cognitive neuroscience, sensorimotor integration and speech-motor control.

The Neurodevelopmental Speech Disorders Laboratory provides a rich and multidimensional advanced doctoral training program. The lab is positioned within the Neuroscience and Mental Health Institute (www.neuroscience.ualberta.ca), MR Research Centre (www.invivonmr.ualberta.ca), Institute for Stuttering Treatment and Research (www.istar.ualberta.ca) and Faculty of Rehabilitation Medicine.

The successful candidate will oversee neuroimaging and behavioural experiments detailing the neurodevelopment of sensorimotor control in children. Duties will include collection and analysis of behavioural, functional and structural MRI, and DTI data; preparation of manuscripts for publication; and participation in reading groups, symposia and conferences. There are many strong opportunities for merit-based authorship.

The successful applicant will have an undergraduate or master’s degree in a field related to cognitive neuroscience, neuroscience, psychology, developmental psychology, medicine or speech pathology. Individuals with a background in electrical engineering, biomedical engineering or computer science will also be considered.

The candidate should be able to work efficiently, independently and diligently. The candidate should also possess excellent interpersonal, oral and written communication skills and enjoy working as part of a diverse and energetic interdisciplinary team. Applicants are expected to have a strong academic track record and significant skill with statistical analysis. Programming skills (MATLAB, C++, Python) and experience with at least one of the neuroimaging analysis programs (SPM, FSL, FreeSurfer, ExploreDTI) are strongly desirable.

Applications will be accepted until May 1, 2015. Successful candidates will participate fully in the activities of the laboratory including regular supervisory meetings, laboratory meetings and journal clubs. 

For consideration please send a statement of interest, a CV, unofficial transcripts and a list of three potential referees via email to Deryk Beal, PhD (dbeal@ualberta.ca).

An even cooler demo of saccadic suppression?

The previous post described a demo of saccadic suppression (also described here) in which you get up close to a mirror and move your eyes back and forth while taking a selfie video.  Your own eye motion is typically not visible in the mirror but is quite visible in the selfie.

Is this just an effect in which signals from the retina are "shut down" during a rapid saccade?  To find out, try this: do the same experiment, but instead of fixating on one eye then the other repeatedly, move your finger slowly back and forth right in front of your eyes.  Track the movement of your finger in the reflection so that your eyes and your finger are close to the same focal plane.

You will readily notice the movement of your finger (no surprise), but while still tracking your finger, direct your attention to your eyes.  Can you see them moving as they follow your finger? Or do they appear stationary?  Do this while taking a video selfie.  When you watch the video, do you see the movement of your eyes?  Let me know what you see!  For me there's a rather dramatic difference in the perception of my own slow tracking eye movements (they appear pretty much stationary in the mirror) versus my finger, which is obviously moving back and forth.

What's interesting here, if I'm thinking about this correctly (vision peeps chime in!), is that in the live, mirror-viewing condition, the retinal image of the finger movement is relatively stable, whereas the retinal image of the eye gaze direction is changing. Yet we see the finger, not the eyes, moving.

Thursday, April 16, 2015

Why can't you see your eyes move? Saccadic suppression demo

Saccadic suppression refers to the failure to detect motion or spatial displacement of a retinal image during self-generated eye movements. If you look from one side of the room to the other, you don't perceive the room as moving despite the displacement of the image across your retina.

Here's a cool demo of the effect.  Look at your eyes in a mirror. Get right up close. Now fixate one eye then the other. Go back and forth. Most likely you will not detect your own eye movements at all.  Now grab your mobile phone and take a selfie video of yourself doing the same thing, like this:

Watch the video and you will readily see your eyes move back and forth even though you didn't see it while you took the movie.  This is saccadic suppression.

Tuesday, April 14, 2015

Human Research Technologist/Lab Manager position -- Ctr for Lang Sci, Penn State

The Center for Language Science (http://cls.psu.edu/) at The Pennsylvania State University invites applications for a Human Research Technologist/Lab Manager position. The Center for Language Science includes a highly interactive group of faculty and students whose interests include bilingualism, language processing, language acquisition in children and adults, language contact, dialectology, and the linguistics of bilingualism.

The job includes preparing materials for behavioral, eye-tracking, and electrophysiological studies; programming experiments and testing research participants using each of these methods; and performing statistical analyses using a range of software applications. The individual will be responsible for organizing the laboratory schedule, recruiting research participants, developing appropriate databases, and managing the laboratory operation, including oversight of equipment maintenance and website updating.

The position typically requires an Associate's degree or higher plus one year of related experience, or an equivalent combination of education and experience. Bachelor’s degree preferred. Knowledge of E-Prime, SPSS, and MATLAB is desirable, as is experience with eye tracking and event-related potential methods, but training will be provided for all technical methods and for the conduct of research with human participants. The successful candidate will be a college graduate who has had laboratory experience as an undergraduate, preferably with both behavioral and cognitive neuroscience methods.

Interested applicants should include in their submitted materials the names of three references. Questions about the position can also be sent to Judy Kroll (jfk7@psu.edu). Although the start date is flexible, the appointment begin date will be no later than July 1. This is a fixed-term appointment funded for one year from date of hire, with the possibility of re-funding. Apply online at https://psu.jobs/job/55522

CAMPUS SECURITY CRIME STATISTICS: For more about safety at Penn State, and to review the Annual Security Report which contains information about crime statistics and other safety and security matters, please go to http://www.police.psu.edu/clery, which will also provide you with detail on how to request a hard copy of the Annual Security Report. 

Penn State is an equal opportunity, affirmative action employer, and is committed to providing employment opportunities to all qualified applicants without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. 

Sunday, March 29, 2015

Post-doc position in sensorimotor learning and control of speech production

The Laboratory for Speech Physiology and Motor Control (PI Ludo Max, Ph.D.) at the University of Washington (Seattle) announces an open postdoctoral position in the area of speech sensorimotor learning. The lab is located in the University of Washington's Department of Speech and Hearing Sciences and has additional affiliations with the Graduate Program in Neuroscience and the Department of Bioengineering. See http://faculty.washington.edu/ludomax/lab/ for more information.

The successful candidate will use primarily speech sensorimotor adaptation paradigms (with digital signal processing perturbations applied to the real-time auditory feedback or mechanical forces applied to the jaw by a robotic device) to investigate aspects of learning and control in healthy adults and children. Opportunities and facilities are available to also contribute to studies of speech and limb sensorimotor learning and control in adults and children who stutter.

The position is initially for one year (a second-year extension is possible contingent upon satisfactory performance and productivity) with a preferred starting date in the Summer of 2015. Applicants should have the Ph.D. degree by the start of the appointment. Review of applications will begin immediately. Candidates with a Ph.D. degree in neuroscience, cognitive/behavioral neuroscience, motor control/kinesiology, biomedical engineering, communication disorders/speech science, and related fields, are encouraged to apply.

We seek a candidate with excellent verbal and written communication skills who is strongly motivated and has substantial computer programming experience (the lab relies heavily on MATLAB, R, and PsyScope/Psychtoolbox).

For more information, please contact lab director Ludo Max, Ph.D. (LudoMax@uw.edu). Applications can be submitted to the same e-mail address. Interested candidates should submit (a) a cover letter describing their research experiences, interests, and goals, (b) a curriculum vitae, (c) the names and contact information of three individuals who can serve as references, and (d) reprints of relevant journal publications.

The University of Washington is an affirmative action and equal opportunity employer. All qualified applicants will receive consideration for employment without regard to, among other things, race, religion, color, national origin, sex, age, status as protected veterans, or status as qualified individuals with disabilities.

Thursday, March 5, 2015

Why computational cognitive scientists can continue their work despite rumors of their field's demise

The cognitive revolution (or better, information processing revolution) rejected the idea that behavior could be understood without reference to a contribution from the mind/brain.  Through decades of experimentation and theory development, we have come to appreciate that the mind/brain works by computing (or better, transforming) information available in the environment (or stored in the mind/brain itself) as a means to control behavior.  Call this the computational theory of mind.  Models in this framework often abstract away from particular instances (tasks, experiences, actions) and develop abstract models of how the brain computes (transforms information).  These often use mathematical symbols or other representational notation.

Radical embodied cognition rails against this view and makes arguments along these lines:

Computational/symbolic/mathematical models are descriptions of some phenomenon.  For example, a falling apple doesn't actually compute the gravitational force as understood mathematically.  The mind is the same. Just because you can describe, say, aspects of movement according to Fitts's law doesn't mean the brain actually computes the formula.  And by generalization, just because we can describe lots of mental functions using computational/symbolic/mathematical models doesn't mean the brain computes or processes symbols. Therefore, the mind doesn't compute; computational models are barking up the wrong tree; we need a new paradigm.
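(As a concrete aside: Fitts's law predicts movement time from target distance D and width W as MT = a + b·log2(2D/W). The sketch below uses made-up coefficients a and b, which in real studies are fit to a person's movement data; it illustrates only the form of the description, and implies nothing about neural implementation.)

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Fitts's law: MT = a + b * log2(2D/W).

    a and b are illustrative placeholders here; empirically they
    are fit to observed movement data.
    """
    index_of_difficulty = math.log2(2 * distance / width)  # in bits
    return a + b * index_of_difficulty

# A farther or narrower target yields a longer predicted movement time:
easy = fitts_movement_time(distance=0.30, width=0.05)
hard = fitts_movement_time(distance=0.30, width=0.01)
```

The law captures the speed-accuracy tradeoff remarkably well without implying that the motor system literally evaluates a logarithm, which is exactly the point at issue.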

Putting aside debates about what counts as computation, here's why these sorts of arguments don't change the computational cognitive scientist's research program one bit.

Falling apples don't compute, but an abstract mathematical description of the force behind the behavior led to great scientific progress.  It is the abstract mathematical descriptions that have pushed physics to such heights of understanding.  If physicists rejected their theories just because apples don't compute, we probably would be too busy tending the farm to debate this silliness.  Therefore, modeling cognition using abstract computational systems can (has!) lead (led!) to great scientific progress.  Even if the mind isn't literally crunching X's and Y's, there is great value in modeling it this way.

No computational cognitive scientist (that I know) actually believes the mind works precisely, literally as their models hold.  Chomskians don't believe neuroscientists will find linguistic tree structures lurking in dendritic branching patterns, nor do Bayesians expect to find P(A|B) spelled out by collections of neurons doing Y-M-C-A dance moves.  Rather, we understand that these ideas have to be implemented by massive, complex neural networks structured into a hierarchy of architectural arrangements, bathed in a sea of chemical neuromodulators, and modified according to principles such as spike-timing-dependent plasticity.  No one (that I know) is foolhardy enough to believe that the relation between our computational models and neural implementation is literal, transparent, or simple.  In short, computational cognitive scientists use their models in exactly the same way physicists use math. To reject this approach because mathematical symbols aren't literally lurking in the brain is foolish.
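(For concreteness, the computational-level claim behind a Bayesian model is just Bayes' rule, P(A|B) = P(B|A)P(A)/P(B). The numbers below are arbitrary illustrations; nothing here is a claim about how neurons carry out the arithmetic.)

```python
def posterior(prior_a, lik_b_given_a, lik_b_given_not_a):
    """Bayes' rule: P(A|B) = P(B|A)P(A) / [P(B|A)P(A) + P(B|~A)P(~A)]."""
    numerator = lik_b_given_a * prior_a
    evidence = numerator + lik_b_given_not_a * (1.0 - prior_a)
    return numerator / evidence

# Evidence B that is more likely under hypothesis A than under not-A
# raises the belief in A above its prior (illustrative numbers):
p = posterior(prior_a=0.2, lik_b_given_a=0.9, lik_b_given_not_a=0.3)
```

The model's commitment is to this input-output relation, not to a literal P(A|B) written somewhere in cortex.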

Cognitive neuroscientists, also disparaged by the embodieds, are working on the linking theories, asking how tree structures or prior probabilities might be implemented in neural networks.  Not surprisingly, the neural implementation models don't literally contain symbols. Instead they contain units (e.g., neurons) arranged into architectures, with particular connection patterns, nested oscillators, modulators, and so on, and often modeled after real brain circuits as best we understand them.  We are doing well enough at neurocomputational modeling to simulate all kinds of complex behaviors.

I respect that radical embodieds want to see how much constraint on cognitive systems the environment and the body can provide and that they want a more realistic idea of how the mind physically works (in which case I suggest studying neuroscience rather than polar planimeters).  We have learned some things from this embodied/ecological approach.  But given that subscribers don't reject that the mind/brain contributes something, we still need models of what that something is.  And this is what computational cognitive scientists have been working on for decades with much success.

Carry on, you computational people.  Let's check back in with the radical embodieds in 2025 to see how far they've gotten in figuring out attention, language, memory, decision making, perceptual illusions, motor control, emotion, theory of mind, and the rest. If they have made some progress, and I expect they will, we can then update our models by adding a few body parts and letting our robots roam a little bit more.