Friday, October 24, 2014


The Department of Speech and Hearing Science at Arizona State University, Tempe Campus, invites applicants with expertise in communication disorders and related disciplines to apply for two open-rank tenure-track faculty positions starting August 2015.
For the first position, we are seeking candidates whose areas of expertise will complement and augment our current research strengths in psychoacoustics, cochlear implants, auditory neurophysiology and pediatrics. Candidates with research interests in the areas of aging, amplification, auditory disorders, electrophysiology, and/or auditory physiology are encouraged to apply. Evidence of a publication record is expected as well as current or potential for extramural funding commensurate with rank. Responsibilities include research, teaching graduate and undergraduate courses, mentoring PhD students, and participating in the service of the department, college, and University.
For the second position, we are seeking candidates whose areas of expertise lie in the domain of communication sciences, particularly as it relates to the developing and aging brain. Relevant research interests include clinical approaches to rehabilitation, auditory and cognitive neuroscience, neural speech processing, and other related areas. Evidence of extramural funding and a publication record commensurate with rank is expected. Responsibilities include research, teaching graduate and undergraduate courses, mentoring PhD students, and participating in the service of the department, college, and University.
Interested applicants should submit the following: 1) cover letter, 2) teaching statement, 3) research statement, 4) curriculum vitae, and 5) names and contact information of three individuals who would be willing to provide a reference upon request of the search committee. These materials should be sent via email to Please include “Faculty Hire” and expected rank in the subject line (e.g., Faculty hire – associate). For complete qualifications and application information, go to The initial deadline for applications is January 2, 2015. Applications will be reviewed weekly thereafter until the positions are closed. Arizona State University is an equal opportunity/affirmative action employer committed to excellence through diversity. Women and minorities are encouraged to apply (ASU Affirmative Action). A background check is required for employment.
The Department of Speech and Hearing Science is housed in the College of Health Solutions and offers undergraduate Major and Minor degrees in Speech and Hearing Science, a Certificate for Speech-Language Pathologist Assistants, a Master’s degree in Communication Disorders for SLPs, a clinical doctoral degree in Audiology (AuD), and a PhD degree in Speech and Hearing Science. The department also administers a large undergraduate program in American Sign Language. The Phoenix area has numerous clinical and research facilities available for collaboration including Barrow Neurological Institute, Mayo Clinic and other hospital systems, and ASU research institutes. For more information please visit our website at

Questions about these positions and/or the application process may be directed to the Chair of the search committee, Dr. Andrea Pittman at (480) 727-8728 or 

Embodied or Symbolic? Who cares!

I still don't understand the hype over embodied cognition. It's too abstract a concept for me, I guess.  I need more grounding in the real world. (Am I getting it?) So let's consider a real world example of neural computation. For the record, this is partially excerpted/paraphrased from a discussion in The Myth of Mirror Neurons.

Sound localization in the barn owl is fairly well-understood in neurocomputational terms.  Inputs from the two ears converge in the brainstem's nucleus laminaris with a "delay line" architecture as in the figure:

Given this arrangement, the neurons (circles) on which the left and right ear signals converge simultaneously will depend on the time difference between excitation of the two ears. If both ears are stimulated simultaneously (sound directly in front), convergence will happen in the middle of the delay line. If the sound stimulates the left ear first, convergence will happen farther to the right in this schematic (left ear stimulation arrives sooner, allowing its signal to get further down the line before meeting the right ear signal). And vice versa if right ear stimulation arrives sooner. This delay line architecture basically sets up an array of coincidence detectors in which the position of the cell that detects the coincidence represents information: the difference in stimulation time at the two ears and therefore the location of the sound source. Then all you have to do is plug the output (firing pattern) of the various cells in the array into a motor circuit for controlling head movements and you have a neural network for detecting sound source location and orienting toward the source.
Question: what do we call this kind of neural computation?  Is it embodied? Certainly it takes advantage of body-specific features, the distance between the two ears (couldn't work without that!) and I suppose we can talk of a certain "resonance" of the external world with neural activation.  In that sense, it's embodied.  On the other hand, the network can be said to represent information in a neural code--the pattern of activity in a network of cells--that no longer resembles the air pressure wave that gave rise to it.  In fact, we can write a symbolic code to describe the computation of the network.  Typical math models of the process use cross-correlation but you can do it with some basic code like this:

Let x = time of sound onset detected at left ear 
Let y = time of sound onset detected at right ear 
If x = y, then write ‘straight ahead’
If x < y, then write ‘left of center’
If x > y, then write ‘right of center’ 

Although there is no code in the barn owl’s brain, the architecture of the network indeed implements the program: x and y are the input signals (axonal connections) from the left and right ears; the relation between x and y is computed via delay line coincidence detectors; and “rules” for generating an appropriate output are realized by the connections between various cells in the array and the motor system (in our example). Brains and lines of code can indeed implement the same computational program. Lines of code do it with a particular arrangement of symbols and rules; brains do it with a particular arrangement of connections between neurons that code or represent information. Both are accurate ways of describing the computations that the system carries out to perform a task.
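The delay-line scheme above can even be written out in runnable form. This is a toy sketch, not the owl's actual circuit: the impulse signals, sample counts, and lag range are made up for illustration, and the "coincidence score" is just a dot product at each candidate delay — i.e., a discrete cross-correlation, as in the math models mentioned above.

```python
import numpy as np

def itd_to_direction(left, right, max_lag):
    """Toy delay-line coincidence detector.

    Each candidate lag plays the role of one cell in the array: the cell
    whose built-in delay cancels the interaural time difference scores the
    strongest coincidence. Positive winning lag means the left ear was
    stimulated first, i.e., the source is left of center.
    """
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        # Shift the right-ear signal by `lag` samples and score coincidence
        shifted = np.roll(right, -lag)
        score = float(np.dot(left, shifted))
        if score > best_score:
            best_lag, best_score = lag, score
    if best_lag == 0:
        return "straight ahead"
    return "left of center" if best_lag > 0 else "right of center"

# Simulated onsets: left ear stimulated 5 samples before the right ear
left = np.zeros(100);  left[30] = 1.0
right = np.zeros(100); right[35] = 1.0
print(itd_to_direction(left, right, max_lag=10))
```

In the real owl, of course, the "argmax over lags" is not computed by any single element; it falls out of which cell in the nucleus laminaris array fires, and that cell's connections to the motor system realize the if/then rules.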

Does it matter, then, whether we call this non-representational embodied cognition or classical symbolic computation?  I think not.  If we simply start trying to actually figure out the architectures and computations of the system we are studying, the question of what to call it becomes trivial.  

Tuesday, October 14, 2014

Lamar Smith's attack on NSF a thinly veiled attempt to suppress environmental education

It's no secret that Lamar Smith (R-TX), Chairman of the Science, Space and Technology committee, has been waging a war on the National Science Foundation. See here, here, here and here. In a 2013 piece in USA Today, Smith, writing with Eric Cantor, stated:
While the NSF spends most of its funds well, we have recently seen far too many questionable grants, especially in the social, behavioral and economic sciences. 
A link to a more complete list of the suspect grants on which Smith requested information can be found here. It's interesting that among the questionable grants, a sizable fraction of them concern public communication of environmental information, including climate change.  In fact, eco-related projects account for more than half ($16.9M) of the $26M in funding handed out by the NSF for "questionable" research.  Place this observation in the context of how much "waste" is actually under question--$26M is in the ballpark of 0.05% of NSF's budget for the 8-year time window over which the "questionable" grants were handed out--and it is quite clear that this is not about trimming waste.  It's about promoting a political agenda by suppressing the dissemination of information on environmental issues, particularly climate change.  
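The back-of-the-envelope numbers in that paragraph are easy to check. The ~$7 billion/year NSF budget used below is my assumption (roughly the NSF appropriation in that era), not a figure from the post:

```python
# Eco-related share of the "questionable" funding
eco_share = 16.9e6 / 26e6        # ≈ 0.65, i.e. "more than half"

# "Questionable" grants as a fraction of NSF's budget over the 8-year
# window, assuming roughly $7 billion per year (an assumed figure)
budget_8yr = 7e9 * 8
pct = 26e6 / budget_8yr * 100    # ≈ 0.046%, in the ballpark of 0.05%

print(f"eco share: {eco_share:.2f}, questioned fraction: {pct:.3f}%")
```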

Thursday, October 9, 2014

Postdoctoral Fellowship: The Department of Speech, Language, and Hearing Sciences at Purdue University

Postdoctoral Fellowship: The Department of Speech, Language, and Hearing Sciences at Purdue University invites applications for a postdoctoral fellowship from the National Institute on Deafness and Other Communication Disorders, National Institutes of Health beginning July 1, 2015. Applicants must be U.S. citizens or have permanent resident status. This will be a two-year appointment. Individuals may seek training in any of the following inter-related areas: (1) speech and voice production, development, and disorders; (2) language structure, development, and disorders; (3) auditory perception, neural plasticity, and sensory aids; (4) cognitive neuroscience approaches to hearing, language processing, and communication disorders; and (5) linguistics applied to communication sciences and disorders. Potential mentors include: Alexander Francis, Lisa Goffman, Michael Heinz, Jessica Huber, David Kemmerer, Keith Kluender, Ananthanarayan Krishnan, Laurence Leonard, Amanda Seidl, Mahalakshmi Sivasankar, Elizabeth Strickland, Christine Weber-Fox, and Ronnie Wilbur. Applicants are encouraged to contact appropriate individuals on this list prior to submitting an application. A description of the research areas of these potential mentors can be found at Application materials should include a statement of interest, three letters of recommendation, a curriculum vitae, and copies of relevant publications.  These materials should be sent to Laurence B. Leonard, Project Director, at  Deadline for submission of applications is January 16, 2015. Purdue University is an equal opportunity/equal access/affirmative action employer fully committed to achieving a diverse workforce.

Sunday, October 5, 2014

Dear crowd: please crowd-solve the question "What can be or should be the relation between linguistics and neuroscience?"

Dave Embick and I just wrote a paper (for LCN) in which we speculate further about the possible relations between linguistics and neuroscience (as in Poeppel, D. and Embick, D. (2005). The relation between linguistics and neuroscience. In A. Cutler (ed.), Twenty-First Century Psycholinguistics: Four Cornerstones. Lawrence Erlbaum; and Poeppel, D. (2012). The maps problem and the mapping problem: Two challenges for a cognitive neuroscience of speech and language. Cogn Neuropsychol, 29(1-2):34-55. PDFs available on my site.) In particular, we discuss what we might aspire to, i.e. what the endgame might look like - or should look like. We are interested in reactions and advice of any kind. In some sense, we'd like to crowd-source the issue, i.e. collect examples of true success stories, spectacular failures, and so on ...

But: the bar is *high*. For example, a success might be something akin to the explanatory, mechanistic, causal understanding we have for sound localization in the barn owl (e.g. here). A failure might be akin to the case of C. elegans, a creature for which we know the genome and every neural ganglion and the entire wiring diagram but we cannot even figure out why the damn worm turns left or right. What, then, is a useful relation between computational-representational (CR) theories (as developed in linguistics, psycholinguistics, computer science, etc.) and neurobiological (NB) infrastructure?

In the review process, we got reactions across the spectrum, as per usual. One reviewer found the speculations reasonable, and in some places even helpful. Phew. Another reviewer found us relentlessly naive and misguided. Also phew.

Attached is a précis of the paper (which is, of course, available upon request). We welcome any advice, criticism, example, counterexample - either as comments here or messages to Embick or me.

Towards a computational(ist) neurobiology of language:
Correlational, Integrated, and Explanatory neurolinguistics (***Précis***)

David Embick, University of Pennsylvania & David Poeppel, NYU and MPI

Abstract: We outline what an integrated approach to language research that connects experimental, theoretical, and neurobiological domains of inquiry would look like, and ask to what extent unification is possible across domains. At the center of the program is the idea that computational/representational (CR) theories of language must be used to investigate its neurobiological (NB) foundations. We consider different ways in which CR and NB might be connected. These are (1) a Correlational way, in which NB computation is correlated with the CR theory; (2) an Integrated way, in which NB data provide crucial evidence for choosing among CR theories; and (3) an Explanatory way, in which properties of NB explain why a CR theory is the way it is. We examine various questions concerning the prospects for Explanatory connections in particular, including to what extent it makes sense to say that NB could be specialized for particular computations.

(Q1) Basic Question: How does the brain execute the different computations that make up language?
(Q2) Advanced Question: Is the fact that human language is made up of certain computations (and not others) explained by the fact that these computations are executed in neurobiological structures that have certain properties (and not others)?

Possible Connections
Correlational Neurolinguistics: CR theories of language are used to investigate the NB foundations of language. Knowledge of how the brain computes is gained by capitalizing on CR knowledge of language.
Integrated Neurolinguistics: CR neurolinguistics plus the NB perspective provides crucial evidence that adjudicates among different CR theories. I.e., brain data enrich our understanding of language at the CR level.
Explanatory Neurolinguistics: (Correlational+Integrated Neurolinguistics) plus something about NB structure/function explains why the CR theory of language involves particular computations and representations (and not others).

Questions about specialization (crucial for Explanatory Neurolinguistics)
Specialization Question 1: Are there particular levels of NB organization that are to be privileged as candidates for CR specialization?
Specialization Question 2: Are there particular parts of the CR theory that are more likely to be candidates for Explanatory Neurolinguistic explanation than others?

Thursday, October 2, 2014

Broca’s area doesn’t care what you do (syntactically): it cares how you do it (actively)

Guest post by William Matchin:

There are a few topics on this blog on the polemical spectrum that don’t happen to involve mirror neurons; one of them is the topic of Broca’s area and its putative role in syntax (see previous posts here and here). Our recent paper published in Brain and Language (Matchin, Sprouse & Hickok, 2014) addresses this issue.

The hypotheses regarding syntax and Broca’s area were never ludicrous - the neuropsychological data suggesting a close link between Broca’s area and the grammar are quite striking, as well as compelling. Rather, there are two arguments, empirical and methodological, against these hypotheses: (1, empirical) these hypotheses ignore the fact that patients with agrammatic production and sentence comprehension issues appear to have intact syntactic competence, as shown by their ability to perform remarkably well on acceptability judgments (Linebarger et al., 1983), and (2, methodological) syntactic manipulations are often conflated with processing mechanisms – as such, increased activation in neuroimaging studies for, say, center-embedded sentences over right-branching sentences (Stromswold et al., 1996) may very well reflect computations related to how the sentences are handled (e.g., working memory), and not their syntactic properties. This makes interpretation of these kinds of neuroimaging results difficult – are the effects due to syntactic operations (e.g., Merge or Movement) or are they due to domain-general processing mechanisms like working memory?

This second concern was addressed in a previous paper by another alumna of the Hickok lab, Corianne Rogalsky (Rogalsky et al., 2008). In that paper, Corianne showed that activation related to sentence complexity in the posterior portion of Broca’s area – the pars opercularis – could be accounted for by domain-general verbal working memory. However, activation in the anterior portion of Broca’s area – the pars triangularis – could not be accounted for by verbal working memory.


The present study shows that activity in the pars triangularis during sentence processing is sensitive to how the sentence is processed (active vs. passive processing mechanism) and doesn’t particularly care what the specific syntactic operations involved are, speaking against syntactic hypotheses of Broca’s area function.


The study borrows from the psycholinguistic literature on the filled-gap effect, which in a nutshell demonstrates that sentences involving movement are processed actively (i.e., subjects predict resolutions to the open dependency) (Stowe, 1986). Contrariwise, sentences involving canonical anaphor binding (a different syntactic operation) are processed passively (i.e., subjects can’t predict resolutions to the dependency, because they don’t know there is a dependency until they get to the end of it).


Previous research suggested some syntactic-specificity to the pars triangularis, in that a distance manipulation for movement sentences resulted in activity in this region, while a distance manipulation for anaphor binding sentences did not (Santi & Grodzinsky, 2007), consistent with the syntactic movement hypothesis of Broca’s area (Grodzinsky, 2000). However, in that experiment, syntax (movement, binding) was conflated with processing (active, passive), as the psycholinguistic literature indicates.


Enter backward anaphora: unlike canonical anaphora, psycholinguistic data indicate that these sentences are processed actively, just like movement sentences (van Gompel & Liversedge, 2003). This corrects for the conflation between syntax and processing in these two constructions. Now, the question is: do backward anaphora show a distance effect in the pars triangularis? If no, then the movement hypothesis stands; if yes, then there is strong evidence that this region cares about a processing mechanism that can be employed for constructions involving different syntactic operations, with no indication of syntactic-specificity.


The answer is yes – our results demonstrated a distance effect in the pars triangularis.



There is more to the paper, but this is the key result: Broca’s area doesn’t seem to care too much about the syntactic details, but it certainly does care about the processing details. This converges with additional data showing that when you take movement constructions that aren’t processed actively (parasitic gaps), you don’t get activation in Broca’s area (Santi & Grodzinsky, 2012). So, movement, anaphora – it doesn’t matter; what matters is that there is active (predictive) processing.

Many questions remain: How does the brain do syntax? What is the exact mechanism accounting for activation in Broca’s area, if not verbal working memory? Why do people care so much about Broca’s area?* These questions are largely unanswered in this particular paper, but I promise you that we have some ideas (and data) bearing on these questions, so stay tuned.

*We don’t actually have any data or ideas bearing on this particular issue

Grodzinsky, Y. (2000). The neurology of syntax: Language use without Broca's area. Behavioral and Brain Sciences, 23(1), 1-21.

Matchin, W., Sprouse, J., & Hickok, G. (2014). A structural distance effect for backward anaphora in Broca’s area: an fMRI study. Brain and Language, 138, 1-11.

Rogalsky, C., Matchin, W., & Hickok, G. (2008). Broca's area, sentence comprehension, and working memory: an fMRI study. Frontiers in Human Neuroscience, 2, 14.

Santi, A., & Grodzinsky, Y. (2007). Working memory and syntax interact in Broca's area. Neuroimage, 37(1), 8-17.

Santi, A., & Grodzinsky, Y. (2012). Broca's area and sentence comprehension: a relationship parasitic on dependency, displacement or predictability? Neuropsychologia, 50(5), 821-832.

Stowe, L. A. (1986). Parsing WH-constructions: Evidence for on-line gap location. Language and Cognitive Processes, 1(3), 227-245.

Stromswold, K., Caplan, D., Alpert, N., & Rauch, S. (1996). Localization of syntactic comprehension by positron emission tomography. Brain and Language, 52(3), 452-473.

van Gompel, R. P., & Liversedge, S. P. (2003). The influence of morphological information on cataphoric pronoun assignment. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29(1), 128.