Saturday, December 6, 2014

Positions at Gallaudet in DC: educational neuroscience

Job Overview
The exciting interdisciplinary PhD in Educational Neuroscience (PEN) program at Gallaudet University (Washington, D.C.) is seeking applicants with expertise in Cognitive Neuroscience-Educational Neuroscience at the assistant or associate professor level for two (2) tenure-track positions beginning in fall 2015.

Position 1: Candidates with a vibrant Cognitive Neuroscience-Educational Neuroscience (specifically, neuroimaging) research program, with a strong focus on children, and who advance understanding of the neural basis of learning in one or more of the following scientific areas (encompassed within the discipline of Educational Neuroscience), will be seriously considered: language/bilingualism, reading/literacy, math/numeracy, science/higher cognition, or social/emotional learning.

Position 2: Candidates with a vibrant Cognitive Neuroscience-Educational Neuroscience (specifically, neuroimaging) research program, in adults and/or children, in one or more of the following areas, will be seriously considered: human motion perception and generation, human brain mechanisms for multisensory integration of motion and vision, mirror neurons, the interface of human motion perception/generation and technology (Avatar, Robotics, Motion Capture), the brain’s translation of rapidly-changing motion and visual information into meaningful action, for example, in face perception, language perception and generation (signed or spoken), biological motion or reading.

Gallaudet’s PhD in Educational Neuroscience (PEN) program (launched fall 2013) pioneers new science in Cognitive Neuroscience and Educational Neuroscience, especially involving how children learn. The successful candidate will be housed in the PhD in Educational Neuroscience program and will enjoy an affiliation with one of PEN’s five affiliated departments as per the candidate’s scholarly research and expertise (e.g., Psychology, Linguistics, Hearing, Speech and Language Sciences, Interpretation, or Education). The new faculty member will have vibrant opportunities to work collaboratively with members of the home PEN program, the five affiliated departments, the consortium of universities in the Greater Washington DC Area, and, importantly, an extensive network of scholars available via the National Science Foundation’s Science of Learning Center at Gallaudet University, Visual Language and Visual Learning (VL2), and VL2’s three Resource Research Hubs, particularly the Brain and Language Laboratory for Neuroimaging (BL2).  As a core mission outgrowth of Gallaudet’s NSF Science of Learning Center, VL2, the PEN program is thus linked with an active network of leading world scholars in Cognitive Neuroscience, neuroimaging, language and bilingualism, reading and literacy, higher cognition, and American Sign Language, in both hearing children and the young deaf visual learner.

Gallaudet’s PEN PhD program is also propelled by the goal of achieving excellence in teaching and of providing its students with the most cutting-edge knowledge, healthy and lively critical analysis and discussion, strong mentorship, and a great richness and diversity of career paths.

Position 1 & 2: Candidates must show (i) significant potential for innovation, scholarship, and commitment to excellence in research and teaching. Additionally, candidates must have (ii) a PhD or EdD in Cognitive Neuroscience or Educational Neuroscience; (iii) strong evidence of foundational research training in the Cognitive Neurosciences with specific neuroimaging expertise (e.g., fMRI, EEG, fNIRS, MEG); (iv) an innovative research program that links (or has the potential to link) Cognitive Neuroscience research outcomes with learning and education in children; (v) promising publication record and teaching experience; and (vi) proficiency in American Sign Language and knowledge of Deaf Culture, or, a demonstrable commitment to develop mastery of American Sign Language.

Position 1 Responsibilities: The successful candidate will maintain a highly effective research program in the Cognitive Neurosciences (inclusive of combined neuroimaging and behavioral experimentation), engage in teaching, graduate student mentorship, and scholarly dissemination activities that lead to publications and federal external grant funding. The new faculty position also affords exciting leadership enhancing opportunities in Gallaudet’s PhD in Educational Neuroscience program through the building and sustaining of partnerships with other universities and related student mentoring, and encourages great innovation and creativity in building diverse, meaningful, and principled two-way partnerships spanning science and society.

Position 2 Responsibilities: The successful candidate will maintain a high-profile research program in the Cognitive Neurosciences (inclusive of combined neuroimaging and behavioral experimentation), demonstrate excellence in teaching, graduate student mentorship, and scholarly dissemination activities that lead to publications and federal external grant funding. The new faculty position also affords exciting leadership enhancing opportunities in Gallaudet’s PhD in Educational Neuroscience program through collaborative building and sustaining of Gallaudet’s PEN-VL2 Motion Light Laboratory.

Gallaudet University is a bilingual university and serves deaf, hard of hearing, and hearing students from many different backgrounds and seeks to develop a workforce that reflects the diversity of its student body. Gallaudet is an equal employment opportunity/affirmative action employer and actively encourages deaf and hard of hearing individuals, members of traditionally underrepresented groups, people with disabilities, women, and veterans to apply for open positions.

Assistant Professor; position pending final approval. Salary commensurate with experience and qualifications.

Application Information
Review of applications to begin immediately.
Send a curriculum vitae, representative publications, and a detailed cover letter demonstrating each of the following five (5) points inclusively: (i) Evidence of your quality of scholarly training and activities specifically in the Cognitive Neurosciences (with clear identification of your neuroimaging expertise), (ii) research program, (iii) your unique approach to the emerging field of Educational Neuroscience, (iv) teaching experiences and teaching philosophy, and (v) how your expertise in the Cognitive Neurosciences can inform learning in young children, or, how you plan to do so in the future. Under separate cover, please have three letters of reference sent; all correspondence should be addressed to:  
PhD in Educational Neuroscience Program Search Committee (Position 1)
Attention: Provost Carol Erting
Gallaudet University
800 Florida Ave., NE
Washington, DC 20002-3695

Review of completed applications will begin January 5, 2015 and continue until the position is filled; employment to begin Fall semester 2015.

Specific questions may be addressed either to Provost Erting ( or to Professor Laura-Ann Petitto (Laura-Ann.Petitto@Gallaudet.Edu), Chair, PhD in Educational Neuroscience Steering Committee.

Monday, December 1, 2014

A different kind of job ... Unusual post-doc opportunity at NYU

Postdoctoral position in Brain Imaging of Neuroaesthetics

A two-year postdoctoral position in Cognitive Neuroscience to study the neural basis of human responses to painting, poetry, and music. The Postdoctoral Researcher will work with faculty at NYU (Denis Pelli, David Poeppel, Gabrielle Starr, and Edward Vessel) with expertise in aesthetics, fMRI, MEG, EEG, and psychophysics, and an international research team, to design and carry out experiments as part of NYU's Global Institute for Advanced Study. Great research environment with Psychology, Center for Neural Science, and Center for Brain Imaging all in one building, which also houses a 3-T MRI and MEG center devoted to research.

The successful candidate will be interested in aesthetics and have strong quantitative skills, including MATLAB, and experience designing and analyzing fMRI experiments and possibly EEG or MEG. Pay will follow the NIH scale. Candidates should have their Ph.D. in hand at time of appointment.

Candidates with an interest in this position should send their CV, contact information, statement of research interest, and the names of three references to Gabrielle Starr ( with cc to Denis Pelli ( and Ed Vessel ( Applications will be reviewed until the position is filled.

Wednesday, November 12, 2014

Computation at the neuron level -- where noncomputational embodied theories need to start

It seems that some embodied theorists see no need for computation or perhaps even information processing.  Rather than talking about, say, how interaural time difference (ITD) information can be used to compute spatial location, some embodied theorists want to say that spatial location is "perceived directly" given the physical signal as it passes through body-determined channels.  The brain is thought to bring little to the task in that the physical signal is not transformed but rather registers directly in neural systems.
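To make the computational reading concrete: under the standard far-field geometry, the ITD a sound produces is a function of source azimuth, and recovering location means inverting that function. A minimal sketch (the head-width value and the formula are textbook approximations, not anyone's specific model):

```python
import math

def itd_seconds(azimuth_deg, head_width_m=0.18, speed_of_sound=343.0):
    """Far-field approximation: ITD = (d / c) * sin(theta)."""
    return (head_width_m / speed_of_sound) * math.sin(math.radians(azimuth_deg))

def azimuth_from_itd(itd_s, head_width_m=0.18, speed_of_sound=343.0):
    """Invert the mapping -- the transformation a localizer must perform."""
    return math.degrees(math.asin(itd_s * speed_of_sound / head_width_m))
```

A source 30 degrees to the right yields an ITD of roughly 260 microseconds; getting the 30 degrees back out requires inverting that mapping, which is exactly the transformation whose necessity is at issue.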

These theorists have spent a fair amount of time talking about the body--the movement is called "embodiment" after all--but little time talking about what's going on at the neuronal level.  I say, point well taken with respect to the contribution of the body: you don't get ITDs in the first place without two ears and a head in between.  But I also say, it is time for embodied theorists to look at the next step in the "registration" of those physical signals: the function of individual neurons. (Actually this is the second step, the first being transducer organs such as the cochlea and photoreceptor cells).  Physical signals must be passed through neurons, which exhibit a complex relation between input and output.  Some would even go so far as to say neurons are transforming the signal, i.e., computing. Here's a quote that gives a sense of what's going on at the single neuron level:
Neurons take input signals at their synapses and give as output sequences of spikes. To characterize a neuron completely is to identify the mapping between neuronal input and the spike train the neuron produces in response. In the absence of any simplifying assumptions, this requires probing the system with every possible input. Most often, these inputs are spikes from other neurons; each neuron typically has of order N ~ 10^3 presynaptic connections. If the system operates at 1 msec resolution and the time window of relevant inputs is 40 msec, then we can think of a single neuron as having an input described by a ~ 4 x 10^4 bit word—the presence or absence of a spike in each 1 msec bin for each presynaptic cell—which is then mapped to a one (spike) or zero (no spike). More realistically, if average spike rates are ~10s^-1 the input words can be compressed by a factor of 10. In this picture, a neuron computes a Boolean function over roughly 4000 variables.  Aguera y Arcas et al. Neural Computation 15, 1715–1749 (2003)
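The arithmetic in that quote is easy to reproduce; a quick sketch of the quoted estimate, nothing more:

```python
n_presynaptic = 1_000   # each neuron has on the order of 10^3 inputs
resolution_ms = 1       # spike timing resolved at 1 msec
window_ms = 40          # window of relevant inputs

# One bit (spike / no spike) per 1-msec bin per presynaptic cell:
input_bits = n_presynaptic * (window_ms // resolution_ms)   # 40,000 = 4 x 10^4

# At average rates of ~10 spikes/s the words are sparse and compress ~10-fold:
effective_variables = input_bits // 10                      # ~4,000 Boolean variables
```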
If you want neuroscientists and good old-fashioned cognitive scientists (GOFCS) to take you seriously, build some "embodied" models of whatever process you are interested in and let's see how far you get without transforming the information (and therefore morphing into a GOFCS).  For now, we don't see how you can get past even a single neuron without information processing, which renders your more fundamental claims pretty much vacuous.  

Friday, November 7, 2014

Embodied robots -- Post #2 on Wilson & Golonka 2013

There's some cool stuff highlighted by W&G including robots that tidy up without being programmed to do so, robots that walk (downhill) with only the power of gravity simply because their bodies were designed in the right way, and cricket robots that find the best mate automatically due to the architecture of the sound localization system. We've discussed sound localization previously so let's focus on the two other examples.

Robots that tidy without the intention to do so or knowledge they did it.

Robots with two sensors situated at 45-degree angles on the robot's "head," and a simple program to avoid obstacles detected by the sensors, will after a while tidy a room full of randomly distributed cubes into neat piles.

W&G conclude from this that,

Importantly, then, the robots are not actually tidying – they are only trying to avoid obstacles, and their errors, in a specific extended, embodied context, leads to a certain stable outcome that looks like tidying 
The point here is that the robots did not have a representation for, or a desire to, tidy or even any knowledge that they had tidied.  A complex "cognitive" behavior can emerge from "a single rule, 'turn away from a detected obstacle'" to quote W&G.  

This is cool.  But it neither rules out computation/information processing as the basis of mental function nor tells us how and why humans tidy.  Regarding my first point, notice that even though there is no program in the bot specific to tidying, there is a program nonetheless--W&G call it a "rule," which I would have thought was a banned term in the embodied camp--that controls the robot's behavior.  Granted, the computation has nothing to do with tidying.  But it has everything to do with detecting obstacles and using that information to generate a change in a motor plan, which itself is a computational problem that the robot's programmers have solved.  W&G point to tidying behavior but completely ignore the sensorimotor behavior of the robot.  By analogy, suppose I laid out the following argument. Humans can dull the point on a pencil's lead. I've developed a robot that writes with a pencil. I've programmed nothing in the robot about pencil lead or the desire to dull it. Yet it happens as an emergent property of the system. Therefore, the system isn't computing; all we have to do is set up the right environmental conditions and it will happen dynamically.  The flaw in the logic, of course, is that you had to program the robot to write.  
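To underline the point, the "single rule" is itself a short program. A hypothetical sketch (the sensor names and threshold are mine, not taken from the robot work W&G cite):

```python
def step(left_sensor, right_sensor, threshold=0.5):
    """The 'turn away from a detected obstacle' rule, written as code.

    Inputs are proximity readings from two sensors angled at 45 degrees;
    the output is a motor command.  Nothing here mentions tidying --
    but the rule still senses, transforms, and acts on information.
    """
    if left_sensor > threshold and right_sensor > threshold:
        return "reverse"
    if left_sensor > threshold:
        return "turn_right"   # obstacle on the left: veer away
    if right_sensor > threshold:
        return "turn_left"    # obstacle on the right: veer away
    return "forward"
```

Even this minimal rule maps sensory information onto a change in the motor plan; that mapping is the information processing at issue, whatever emerges from it downstream.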

What about humans?  Could our tidying behavior be explained similarly?  Not a chance.  The bots don't know they are tidying.  We recognize it immediately.  Where is that knowledge coming from?  Now you need a theory of knowledge of tidying and things get complicated again.  Just because you can get complex-looking behaviors to emerge from simple architectures doesn't mean the simple architectures aren't computing and it doesn't mean that humans do it that way.  

Robot bodies that walk themselves

W&G ask, 
Why does walking have the form that it does? One explanation is that we have internal algorithms which control the timing and magnitude of our strides. Another explanation is that the form of walking depends on how we are built and the relationship between that design and the environments we move through.
Although it's hard to imagine that walking doesn't depend on how we are built and the environment we move through, let's allow the argument.  

Humans don’t walk like lions because our bodies aren’t designed like lions’ bodies. 
Not gonna argue with that!  

Robotics work on walking show that you can get very far in explaining why walking has a particular form just by considering the passive dynamics. For example, robots with no motors or onboard control algorithms can reproduce human gait patterns and levels of efficiency simply by being assembled correctly 
Ah, now some substance.  This is interesting work.  Engineers built a robot frame that could walk down an incline, slinky-like, with nothing but gravity pulling it along.  That's an impressive bit of engineering, but humans can walk on flat ground.  So, the bot shell was fitted with

simple control algorithms..., which allows the robots to maintain posture and control propulsion more independently. 
What's cool is that these simple control algorithms--way simpler than previously used--when fitted to different body types work for a wide range of locomotion behaviors.  W&G conclude,

These robots demonstrate how organisms might use distributed task resources to replace complex internal control structures. 
This is fantastic work.  If you look at the original Science paper, in the supplemental material you find that the authors likened their approach to that of the Wright brothers in designing their plane.  Instead of trying to engineer a craft that from the start could power itself and fly, the Wrights first designed a craft that could glide, then fit a simple motor to it and (no surprise to them or us now) it flew under its own power.  So building a robot that can glide (e.g., walk down an incline) was a great first step.  Then all you have to do is build in a simple control system.  We don't hear much about this control system in W&G's paper, only that they are "simple control algorithms." Here's the description from the Science paper:
Their only sensors detect ground contact, and their only motor commands are on/off signals issued once per step. In addition to powering the motion, hip actuation in the Delft biped also improves fore-aft robustness against large disturbances by swiftly placing the swing leg in front of the robot before it has a chance to fall forward.
With the right design, complex calculations can be replaced with simple calculations.  (But they're still calculations, which W&G don't mention.)  Now, if you want the robot to do a little learning, e.g., in order to adapt to changing walking environments, you need to add a little to the computations. The same Science paper reports how they implemented sensorimotor learning in their robot:

The robot acquires a feedback control policy that maps sensors to actions using a function approximator with 35 parameters. With every step that the robot takes, it makes small, random changes to the parameters and measures the change in walking performance. This measurement yields a noisy sample of the relation between the parameters and the performance, called the performance gradient, on each step. By means of an actor-critic reinforcement learning algorithm (18), measurements from previous steps are combined with the measurement from the current step to efficiently estimate the performance gradient on the real robot despite sensor noise, imperfect actuators, and uncertainty in the environment. The algorithm uses this estimate in a real-time gradient descent optimization to improve the stability of the step-to-step dynamics. 
The supplementary material provides more information on this algorithm:

The learning controller, represented using a linear combination of local nonlinear basis functions, takes the body angle and angular velocity as inputs and generates target angles for the ankle servo motors as outputs. The learning cost function quadratically penalizes deviation from the dead-beat controller on the return map, evaluated at the point where the robot transfers support from the left foot to the right foot. Eligibility was accumulated evenly over each step, and discounted heavily (γ = 0.2) between steps. The learning algorithm also constructs a coarse estimate of the value function, using a function approximator with only angular velocity as input and the expected reward as output. This function was evaluated and updated at each crossing of the return map.
Although the body design of these robots drastically simplifies the computational task for the robot's digital brain, there is substantially more computation involved in the simple task of walking on level ground than W&G acknowledge in their discussion of this example.
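The learning scheme in the quoted passages--make small random changes to the parameters, measure the change in performance, descend along the resulting noisy gradient sample--can be sketched generically. This is a random-perturbation gradient estimator in the spirit of the quote, not the paper's actual actor-critic implementation:

```python
import random

def perturb_and_learn(params, cost, lr=0.01, noise=0.05, steps=300):
    """Random-perturbation learning: each 'step' perturbs the parameters,
    measures the change in performance, and uses that noisy gradient
    sample for gradient descent."""
    params = list(params)
    for _ in range(steps):
        delta = [random.gauss(0.0, noise) for _ in params]
        perturbed = [p + d for p, d in zip(params, delta)]
        # Noisy sample of the performance gradient along this perturbation:
        slope = (cost(perturbed) - cost(params)) / noise
        # Move against the cost along the perturbation direction:
        params = [p - lr * slope * d / noise for p, d in zip(params, delta)]
    return params
```

On a toy quadratic cost this walks the parameters toward the minimum even though every individual gradient sample is noisy. The point stands either way: simplified, cheap, noise-tolerant computation is still computation.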

All you cognitive modelers who don't take body design into account:  You should!  The embodied theorists are absolutely correct about emphasizing this point.

All you radical embodied cognitive scientists who think you can do away with computation (i.e., information processing): You still can't!  You can simplify the computations and that's excellent progress, but yours is not a new model of the mind.  It's GOFIP--good old-fashioned information processing--hooked up to better models of the delivery system.

Wednesday, November 5, 2014

Has embodied cognition earned its name? Critique of Wilson & Golonka 2013 #1

Wilson and Golonka have provided a very nice outline of the embodied cognition enterprise.  Have a look here.  I'm sure this doesn't represent all embodied theorists but it does summarize the radical "replacement" view. So, I've decided to have a very close look at the piece over the next few days and provide my thoughts for further discussion and clarification.  I have no doubts that I will mischaracterize and misunderstand certain things so I hope Andrew and Sabrina will correct and clarify.  Of course, I would love to hear from others as well.  I'm not attempting to summarize the arguments here so please read the paper for context.  Quotes from the paper are indented and my comments follow.  This blog post concerns the second section of the paper.
Because perception is assumed to be flawed, it is not considered a central resource for solving tasks.
Who argues this?  Perceptual scientists?  By "perception" do you mean perceptual systems? Or do you mean the physical signals that perception uses?

Because we only have access to the environment via perception, the environment also is not considered a central resource. 
 Who argues this?  Of course the environment is a resource for perception.  That's where the input comes from.

This places the burden entirely on the brain to act as a storehouse for skills and information. 
Who argues this?  Do you think traditional cognitive psychologists would deny that information can be stored external to the brain in say written form?  Or that the body or environment constrains the brain's solutions to information processing problems?  Of course, you DO need a brain to read those notes.

This job description makes the content of internal cognitive representations the most important determinant of the structure of our behavior. Cognitive science is, therefore, in the business of identifying this content and how it is accessed and used  
Agreed, generally.  But, the starting point (for perceptual research anyway) is what is the nature of the input, which defines the problem.  Sound can hit the ears with time delays; how do you translate that into an orienting response?  The image hitting the two retinas is slightly different; how do you get 3D from that?  Perceptual scientists are ALWAYS mindful of what the input looks like.  To say that for non-embodied psychologists it's just a disembodied mind is building a straw man.

Advances in perception-action research, particularly Gibson’s work on direct perception (Gibson, 1966, 1979), changes the nature of the problem facing the organism. 
These "advances" are 30 years old.  Maybe it would be worth looking at more recent models of perception?

if perception-action couplings and resources distributed over brain, body, and environment are substantial participants in cognition, then the need for the specific objects and processes of standard cognitive psychology (concepts, internally represented competence, and knowledge) goes away, to be replaced by very different objects and processes (most commonly perception-action couplings forming non-linear dynamical systems 
 Your conclusion doesn't follow from your premises.  Why does the fact that there is information in the environment and that information processing is constrained by the body mean that you don't need concepts, internal representations, or knowledge?  Also, a dab of circularity here.  "If perception-action couplings..." (your assumption) then we replace standard notions with "perception-action couplings" (your conclusion).  You've at least partially assumed your conclusion.

This, in a nutshell, is the version of embodiment that Shapiro (2011) refers to as the replacement hypothesis and our argument here is that this hypothesis is inevitable once you allow the body and environment into the cognitive mix.  
See above.  It doesn't follow.   So, if I understand the claim, cognition is spread over environment, body, and brain.  Further, traditional theorists didn't put enough emphasis on environment and body and too much on brain.  Ok, that's reasonable. But unless you want to remove the brain/mind altogether, you still need a theory of the brain/mind's contribution to cognition.  Since, according to your own assumptions (i.e., that the brain/mind does something), that theory cannot be fully derived from environment or body.  This means that you will need a traditional information processing model in between.  Therefore at best "embodied cognition" is a variant of standard cognitive models.

To earn the name, embodied cognition research must, we argue, look very different from this standard approach. 
Seems like it hasn't earned its name.  

Tuesday, November 4, 2014

How the mind works: It's the information stupid!

Not that I'm calling anyone stupid.  That's a reference, of course, to Clinton campaign manager James Carville's "It's the economy, stupid." It's a call to refocus the emphasis.  Here we're talking cognitive science and the relation between computational theories and embodied theories of the mind and the need to refocus our emphasis on information processing.

I contend that embodied theories are, under the hood, computational (i.e., information processing) theories and that the embodied folks are mischaracterizing computational theories.  Or at the very least they're using one such theory (~Fodorian philosophy) as representative of the whole cognitivist/computational mindset.  In fact, it’s always been about information and how it gets processed.  It doesn’t matter how you process the information—neurons, electronic switches, gears, pumps—it just matters that information (patterns of physical stuff that correlate with the state of the world) is used in such a way as to guide behavior. To try to make this clear, here’s an excerpt from The Myth of Mirror Neurons discussing some early conceptions of cognitive psychology.  
Psychologist Ulric Neisser, who literally named the field and wrote the book on it with his 1967 text, Cognitive Psychology, defined the domain of cognition this way:
“Cognition” refers to all the processes by which the sensory input is transformed, reduced, elaborated, stored, recovered, and used.  … Such terms as sensation, perception, imagery, retention, recall, problem-solving, and thinking, among many others, refer to hypothetical stages or aspects of cognition.[1]
Neisser’s table of contents underlined his view that cognition was not limited to higher-order functions.  His volume is organized into four parts.  Part I is simply the introductory chapter.  Part II is called “Visual Cognition” and contains five chapters.  Part III is “Auditory Cognition” with four chapters. Finally, Part IV deals with “The Higher Mental Processes” and contains a single chapter, which Neisser refers to as “essentially an epilogue” with a discussion that is “quite tentative”. He continues,
Nevertheless, the reader of a book called Cognitive Psychology has a right to expect some discussion of thinking, concept-formation, remembering, problem-solving, and the like…. If they take up only a tenth of these pages, it is because I believe there is still relatively little to say about them….
Most scientists today working on perception or motor control, even at fairly low levels, would count their work as squarely within the information processing model of the mind/brain and therefore within Neisser’s definition of cognition.  Consider this paper title, which appeared recently in a top-tier neuroscience journal: Eye Smarter than Scientists Believed: Neural Computations in Circuits of the Retina.  If anything in the brain is a passive recording device (like a camera) or a simple filter (like polarized sunglasses) it’s the retina, or so we thought. Here’s how the authors put it:
Whereas the conventional wisdom treats the eye as a simple prefilter for visual images, it now appears that the retina solves a diverse set of specific tasks and provides the results explicitly to downstream brain areas.[2]
Solves a diverse set of specific tasks and provides the results… sounds like a purpose-built bit of programming—in the retina!  We observe similar complexity in the control of simple movements, such as tracking an object with the eyes, an ability that is thought to involve a cerebral cortex-cerebellar network including more than a half dozen computational nodes that generate predictions, detect errors, calculate correction signals, and learn.[3]

---end excerpt--
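The retinal computation mentioned in the excerpt can be illustrated with the classic center-surround receptive field. A toy one-dimensional version (purely illustrative, not the circuits from the cited paper):

```python
def center_surround(signal):
    """Toy 1-D center-surround filter: each 'ganglion cell' reports its
    center input minus the average of its two neighbors.  Uniform input
    is suppressed and edges are emphasized -- a computation, not a
    passive recording."""
    out = []
    for i in range(1, len(signal) - 1):
        surround = (signal[i - 1] + signal[i + 1]) / 2.0
        out.append(signal[i] - surround)
    return out
```

Feed it a flat field and the output is all zeros; feed it a step edge and the response is concentrated at the edge, which is exactly the sense in which the retina "solves a task" rather than recording pixels.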

My former post doc advisor, Steve Pinker, who is arguably today’s champion of the computational theory of mind and a staunch defender of "symbolic processing" (it's not what you think!) reinforces the broad definition of computation as just being about information processing:  
the function of the brain is information processing, or computation… Information consists of patterns in matter or energy, namely symbols, that correlate with states of the world. That’s what we mean when we say that something carries information. A second part of the solution is that beliefs and desires have their effects in computation—where computation is defined, roughly, as a process that takes place when a device is arranged so that information (namely, patterns in matter or energy inside the device) causes changes in the patterns of other bits of matter or energy, and the process mirrors the laws of logic, probability, or cause and effect in the world. [4]
Notice that symbols are defined simply as patterns in matter or energy, not x’s and y’s in lines of code.  The patterns “represent” (i.e., correlate with) states of the world.  This constitutes information that brains can make use of by changing the patterns, e.g., taking an interaural time difference and using that information to guide head movement. This is why the embodied movement is so puzzling to me.  It’s fundamentally no different from the computational theory of mind.  Does the body contribute something to information processing?  Of course!  The brain evolved with the body to solve survival problems.  The body shapes the input to the brain. But that doesn't mean that the brain isn't processing information.  
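Pinker's definition--patterns causing changes in other patterns in a way that mirrors logic--is easy to make concrete. A toy threshold unit (mine, purely illustrative) can be described equally well as a neuron firing or as a circuit element; either way, with the right weights its behavior mirrors logical AND:

```python
def threshold_unit(inputs, weights, threshold):
    """Fires (1) if the weighted sum of input 'patterns in matter or
    energy' reaches threshold.  The same description fits a neuron,
    a relay, or a transistor circuit -- the substrate is irrelevant."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With these weights and threshold, the unit computes logical AND:
AND = lambda a, b: threshold_unit([a, b], [1.0, 1.0], 2.0)
```

Whether the unit is built from neurons, switches, gears, or pumps, the input-output mapping is the same, which is the substrate-independence the excerpt is driving at.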

1 Neisser, U. (1967) Cognitive psychology. Appleton-Century-Crofts
2 Gollisch, T. and Meister, M. (2010) Eye smarter than scientists believed: neural computations in circuits of the retina. Neuron 65, 150-164
3 Wolpert, D.M., et al. (1998) Internal models in the cerebellum. Trends in Cognitive Sciences 2, 338-347

Friday, October 24, 2014


The Department of Speech and Hearing Science at Arizona State University, Tempe Campus, invites applicants with expertise in communication disorders and related disciplines to apply for two open-rank tenure-track faculty positions starting August, 2015.
For the first position, we are seeking candidates whose areas of expertise will complement and augment our current research strengths in psychoacoustics, cochlear implants, auditory neurophysiology and pediatrics. Candidates with research interests in the areas of aging, amplification, auditory disorders, electrophysiology, and/or auditory physiology are encouraged to apply. Evidence of a publication record is expected as well as current or potential for extramural funding commensurate with rank. Responsibilities include research, teaching graduate and undergraduate courses, mentoring PhD students, and participating in the service of the department, college, and University.
For the second position, we are seeking candidates whose areas of expertise lie in the domain of communication sciences, particularly as it relates to the developing and aging brain. Relevant research interests include clinical approaches to rehabilitation, auditory and cognitive neuroscience, neural speech processing, and other related areas. Evidence of extramural funding and a publication record commensurate with rank is expected. Responsibilities include research, teaching graduate and undergraduate courses, mentoring PhD students, and participating in the service of the department, college, and University.
Interested applicants should submit the following: 1) cover letter, 2) teaching statement, 3) research statement, 4) curriculum vitae, and 5) names and contact information of three individuals who would be willing to provide a reference upon request of the search committee. These materials should be sent via email; please include “Faculty Hire” and the expected rank in the subject line (e.g., Faculty hire – associate). For complete qualifications and application information, see the department website. The initial deadline for applications is January 2, 2015. Applications will be reviewed weekly thereafter until the position is closed. Arizona State University is an equal opportunity/affirmative action employer committed to excellence through diversity. Women and minorities are encouraged to apply (ASU Affirmative Action). A background check is required for employment.
The Department of Speech and Hearing Science is housed in the College of Health Solutions and offers undergraduate Major and Minor degrees in Speech and Hearing Science, a Certificate for Speech-Language Pathologist Assistants, a Master’s degree in Communication Disorders for SLPs, a clinical doctoral degree in Audiology (AuD), and a PhD degree in Speech and Hearing Science. The department also administers a large undergraduate program in American Sign Language.  The Phoenix area has numerous clinical and research facilities available for collaboration, including Barrow Neurological Institute, Mayo Clinic and other hospital systems, and ASU research institutes. For more information, please visit our website.

Questions about these positions and/or the application process may be directed to the Chair of the search committee, Dr. Andrea Pittman, at (480) 727-8728.

Embodied or Symbolic? Who cares!

I still don't understand the hype over embodied cognition. It's too abstract a concept for me, I guess.  I need more grounding in the real world. (Am I getting it?) So let's consider a real-world example of neural computation. For the record, this is partially excerpted/paraphrased from a discussion in The Myth of Mirror Neurons.

Sound localization in the barn owl is fairly well-understood in neurocomputational terms.  Inputs from the two ears converge in the brainstem's nucleus laminaris with a "delay line" architecture as in the figure:

Given this arrangement, the neurons (circles) on which the left and right ear signals will converge simultaneously will depend on the time difference between excitation of the two ears. If both ears are stimulated simultaneously (sound directly in front), convergence will happen in the middle of the delay line. If the sound stimulates the left ear first, convergence will happen farther to the right in this schematic (left ear stimulation arrives sooner, allowing its signal to get farther down the line before meeting the right ear signal). And vice versa if right ear stimulation arrives sooner. This delay line architecture basically sets up an array of coincidence detectors in which the position of the cell that detects the coincidence represents information: the difference in stimulation time at the two ears and therefore the location of the sound source. Then all you have to do is plug the output (firing pattern) of the various cells in the array into a motor circuit for controlling head movements and you have a neural network for detecting sound source location and orienting toward the source.
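To make the delay-line idea concrete, here's a minimal simulation sketch of a Jeffress-style coincidence-detector array. The cell count and unit delay step are made up for illustration, not physiological values: the left-ear signal reaches cell i after i delay steps, the right-ear signal after (N - 1 - i) steps, and the cell whose two arrival times match best is the one that "fires."

```python
import numpy as np

N_CELLS = 9       # coincidence detectors along the delay line (arbitrary)
DELAY_STEP = 1    # conduction delay per cell, in arbitrary time units

def localize(left_onset, right_onset):
    """Return the index of the cell where the two ear signals coincide.

    The left signal enters from the left end of the array, the right
    signal from the right end, so their arrival times at cell i are
    offset in opposite directions. The best-coinciding cell's position
    encodes the interaural time difference.
    """
    cells = np.arange(N_CELLS)
    left_arrival = left_onset + cells * DELAY_STEP
    right_arrival = right_onset + (N_CELLS - 1 - cells) * DELAY_STEP
    return int(np.argmin(np.abs(left_arrival - right_arrival)))

# Simultaneous stimulation (sound straight ahead): coincidence mid-array.
print(localize(0, 0))   # cell 4, the middle of the 9-cell array
# Left ear leads: its signal travels farther down the line before meeting
# the right-ear signal, so coincidence shifts toward the right of the array.
print(localize(0, 2))   # cell 5
```

Note that nothing downstream needs the raw onset times; the identity of the active cell alone carries the location information, which is exactly the sense in which the array's activity pattern is a code.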
Question: what do we call this kind of neural computation?  Is it embodied? Certainly it takes advantage of body-specific features, the distance between the two ears (couldn't work without that!), and I suppose we can talk of a certain "resonance" of the external world with neural activation.  In that sense, it's embodied.  On the other hand, the network can be said to represent information in a neural code--the pattern of activity in a network of cells--that no longer resembles the air pressure wave that gave rise to it.  In fact, we can write a symbolic code to describe the computation of the network.  Typical math models of the process use cross-correlation, but you can do it with some basic code like this:

Let x = time of sound onset detected at left ear 
Let y = time of sound onset detected at right ear 
If x = y, then write ‘straight ahead’
If x < y, then write ‘left of center’
If x > y, then write ‘right of center’ 
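That pseudocode translates directly into a runnable form. Here's a Python sketch (the function name and the exact-equality comparison are my choices for illustration; a real model would allow some tolerance on the onset times):

```python
def localize_itd(x, y):
    """Classify sound-source direction from onset times at the two ears.

    x -- time of sound onset detected at left ear
    y -- time of sound onset detected at right ear
    """
    if x == y:
        return "straight ahead"
    elif x < y:
        return "left of center"    # left ear was stimulated first
    else:
        return "right of center"   # right ear was stimulated first

print(localize_itd(3.0, 3.0))  # straight ahead
print(localize_itd(1.0, 2.5))  # left of center
```

The point is not that the owl runs this program line by line, but that the same input-output mapping can be stated either as symbol-manipulating rules or as a wiring diagram.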

Although there is no code in the barn owl’s brain, the architecture of the network indeed implements the program: x and y are the input signals (axonal connections) from the left and right ears; the relation between x and y is computed via delay line coincidence detectors; and “rules” for generating an appropriate output are realized by the connections between various cells in the array and the motor system (in our example). Brains and lines of code can indeed implement the same computational program. Lines of code do it with a particular arrangement of symbols and rules; brains do it with a particular arrangement of connections between neurons that code or represent information. Both are accurate ways of describing the computations that the system carries out to perform a task.  

Does it matter, then, whether we call this non-representational embodied cognition or classical symbolic computation?  I think not.  If we simply start trying to actually figure out the architectures and computations of the system we are studying, the question of what to call it becomes trivial.