Wednesday, December 19, 2012

PhD scholarships and post-doctoral positions for 2013

The Centre of Research Excellence in Child Language (CRE-CL) at the Murdoch Childrens Research Institute invites applications for PhD scholarships and post-doctoral positions across a range of cutting-edge program areas. The CRE-CL links a number of longitudinal Australian and international studies with the aim of advancing the science of how language develops, what goes wrong, and when and how to intervene. It is also unique in its parallel cohorts of children with normal hearing and hearing impairment. It brings together some of the best research and leading researchers in the world, incorporating the following organisations - MCRI, Deakin University and the Parenting Research Centre (all Melbourne based) - and international collaborators at the University of Newcastle (UK) and the University of Iowa (USA).
The CRE-CL provides an internationally unprecedented capacity for language research - in both hearing and deaf children - combining the latest theoretical and measurement approaches in molecular genetics, neuro-imaging, epidemiology, biostatistics and health economics.
Potential research areas include:
Clinical research
Public health research
Molecular genetics
Health economics

For more project details visit here
Please send completed applications to
All applications should include:
Details of two academic references
Cover letter detailing which project you are interested in and why, and long term research goals
Academic transcripts

For post-doc applications please also address the selection criteria in the position description.
Closing date - 11th January 2013
Interviews - late January/ early February

Wednesday, November 28, 2012

Action Based Language - More on Glenberg and Gallese

The core of Glenberg and Gallese's proposal is that language is grounded in a hierarchical state feedback control model, made possible, of course, by mirror neurons.  I actually think they are correct to look at feedback control models as playing a role in language, given that I've previously proposed the same thing (Hickok, 2012) along with Guenther, Houde and others, albeit for speech production only, not for "grounding" anything.  Glenberg and Gallese believe, on the other hand, that the feedback control model is the basis for understanding language.

Their theoretical trick is to link up action control circuits for object-oriented actions and action control circuits for articulating words related to those actions.  Motor programs for drinking are linked to motor programs for saying "drink".  Then when you hear the word "drink" you activate the motor program for saying the word and this in turn activates the motor programs for actual drinking and this allows you to understand the word.

The overlap ... between the speech articulation and action control is meant to imply that the act of articulation primes the associated motor actions and that performing the actions primes the articulation. That is, we tend to do what we say, and we tend to say (or at least covertly verbalize) what we do. Furthermore, when listening to speech, bottom-up processing activates the speech controller (Fadiga et al., 2002; Galantucci et al., 2006; Guenther et al., 2006), which in turn activates the action controller, thereby grounding the meaning of the speech signal in action.

So as I reach for and drink from my coffee cup, what words will I covertly verbalize?  Drink, consume, enjoy, hydrate, caffeinate?  Fixate, look at, gaze towards, reach, extend, open, close, grasp, grab, envelop, grip, hold, lift, elevate, bring-towards, draw-near, transport, purse (the lips), tip, tilt, turn, rotate, supinate, sip, slurp, sniff, taste, swallow, draw-away, place, put, set, release, let go?  No wonder I can't chat with someone while drinking coffee.  My motor speech system is REALLY busy!

By the way, what might the action controller for the action drink code?  It can't be a specific movement because it has to generalize across drinking from mugs, wine glasses, lidded cups, espresso cups, straws, water bottles with and without sport lids, drinking by leaning down to the container or by lifting it up, drinking from a sink faucet, drinking from a water fountain, drinking morning dew adhering to leaves, drinking rain by opening your mouth to the sky, drinking by asking someone else to pour water into your mouth.  And if you walked outside right now, opened your mouth to a cloudless sky and then swallowed, would you be drinking?  Why not?  If the meaning of drink is grounded in actions, why should it matter whether it is raining or not?

Because it's not the movements themselves that define the meaning.

But the motor system can generate predictions about the consequences of an action and that is where the meaning comes from, you might argue, as do Glenberg and Gallese:

part of the knowledge of what “drink” means consists of expected consequences of drinking

And what are those consequences? Glenberg and Gallese get it (mostly) right:

...predictions are driven by activity in the motor system (cf. Fiebach and Schubotz, 2006), however, the predictions themselves reside in activity across the brain. For example, predictions of how the body will change on the basis of action result from activity in somatosensory cortices, predictions of changes in spatial layout result from activity in visual and parietal cortices, and predictions of what will be heard result from activity in temporal areas.

So where do we stand?  Meanings are dependent on consequences and consequences "reside in activity across the brain" (i.e., sensory areas).  Therefore, the meanings of actions are not coded in the motor system.  All the motor system does according to Glenberg and Gallese (if you read between the lines) is generate predictions.  In other words, the motor system is nothing more than a way of accessing the meanings (stored elsewhere) via associations.

So just to spell it out for the readers at home.  Here is their model of language comprehension:

hear a word --> activate motor program for saying word --> activate motor program for actions related to word --> generate predicted consequences of the action in sensory systems --> understanding.

Why not just go from the word to the sensory system directly?  Is the brain not capable of forming such associations?  In other words, if all the motor system is doing is providing an associative link, why can't you get there via non-motor associative links?
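To make the redundancy obvious, here is a deliberately toy sketch (my own caricature, not the authors' model; all dictionary names and entries are invented): both routes are just chained associations, and the direct route reaches the same endpoint with fewer links.

```python
# Glenberg & Gallese's proposed chain, caricatured as chained lookups.
# (Hypothetical names throughout; lookups stand in for learned associations.)
WORD_TO_ARTICULATION = {"drink": "articulate-drink"}
ARTICULATION_TO_ACTION = {"articulate-drink": "motor-drink"}
ACTION_TO_CONSEQUENCES = {"motor-drink": "predicted consequences (sensory areas)"}

def comprehend_via_motor(word):
    """word -> articulation program -> action program -> predicted consequences."""
    articulation = WORD_TO_ARTICULATION[word]
    action = ARTICULATION_TO_ACTION[articulation]
    return ACTION_TO_CONSEQUENCES[action]

# The direct associative route: no motor relay at all.
WORD_TO_CONSEQUENCES = {"drink": "predicted consequences (sensory areas)"}

def comprehend_directly(word):
    """word -> predicted consequences."""
    return WORD_TO_CONSEQUENCES[word]

# Both routes terminate in the same place.
print(comprehend_via_motor("drink") == comprehend_directly("drink"))  # True
```

The point of the sketch: if the endpoint (sensory-coded consequences) is where the meaning lives, the two intermediate motor lookups add nothing but extra hops.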

More to the point: if the *particular* actions don't matter, as even the mirror neuron crowd now acknowledges, and if what matters is the higher level goals or consequences, and if these goals or consequences are coded in sensory systems (which they are), then there is little role for the motor system in conceptual knowledge of actions.

Glenberg and Gallese correctly point out a strong empirical prediction of their model:

The ABL theory makes a novel and strong prediction: adapting an action controller will produce an effect on language comprehension

They cite Bak's work on ALS and some use-induced plasticity effects.  Again, let me suggest, quite unscientifically, that Stephen Hawking would have a hard time functioning if he didn't understand verbs. Further, use-induced plasticity is known to modulate response bias -- a likely source of these effects.  In short, the evidence for the strong prediction is weak at best.

But rather than adapting an action controller, let's remove it as a means to test their prediction head on.  Given their model in which perceived words activate motor programs for articulating those words, which activate motor programs for generating actions, which generate predictions etc., if you don't have the motor programs for articulating words you shouldn't be able to comprehend speech, or should at least show some impairment.  Yet there is an abundance of evidence that language comprehension is not dependent on the motor system.  I reviewed much of it in my "Mirror Neuron Forum" contribution that Glenberg edited and Gallese contributed to.  NONE OF THIS WORK IS EVEN MENTIONED in Glenberg and Gallese's piece.  This is rather unscholarly in my opinion.

Toward the end of the paper they include a section on non-motor processes.  In it they write,

We have focused on motor processes for two related reasons. First, we believe that the basic function of cognition is control of action. From an evolutionary perspective, it is hard to imagine any other story. That is, systems evolve because they contribute to the ability to survive and reproduce, and those activities demand action. As Rodolfo Llinás puts it, “The nervous system is only necessary for multicellular creatures that can orchestrate and express active movement.” Thus, although brains have impressive capacities for perception, emotion, and more, those capacities are in the service of action.

I agree. But action for action's sake is useless.  The reason WHY brains have impressive capacities for perception, emotion, and more is to give action purpose, meaning.  Without these non-motor systems, the action system is literally and figuratively blind and therefore completely useless.

Why the unhealthy obsession with the motor system and complete disregard for the mountain of evidence against their ideas?  Because the starting point for all the theoretical fumbling is a single assumption that has gained the status of an axiom in the minds of researchers like Glenberg and Gallese: that cognition revolves around embodiment with mirror neurons/the motor system at the core.  (Glenberg's lab name even assumes his hypothesis: "Laboratory for Embodied Cognition".)  Once you commit to an idea you have no choice but to build a convoluted story to uphold your assumption and ignore contradictory evidence.

I don't think there is a ghost of a chance that Glenberg and Gallese will ever change their views in light of empirical fact.  Skinner, for example, was a diehard defender of behaviorism long after people like Chomsky, Miller, Broadbent and others clearly demonstrated that the approach was theoretically bankrupt.  Today the cognitive approach to explaining behavior dominates both psychology and neuroscience, including embodied approaches like Glenberg and Gallese's.  My hope is that by pointing out the inadequacies of proposals like these, the next generation of scientists, who aren't saddled with tired assumptions, will ultimately move the field forward and consider the function of mirror neurons and the motor system in a more balanced light.

Hickok, G. (2012). Computational neuroanatomy of speech production. Nature Reviews Neuroscience, 13, 135-145.

Tuesday, November 27, 2012

Orthogonal acoustic dimensions define auditory field maps in human cortex

Wow, this is the most blogging I've done in months.  This one is way off the topic of embodied cognition and mirror neurons (some of you will be relieved to hear) and in my view more important.  An interdisciplinary group of us here at UC Irvine have successfully mapped two orthogonal dimensions in human auditory cortex: tonotopy (which we knew about) and periodotopy (which most suspected but which hadn't been measured convincingly or shown to be orthogonal to tonotopy in humans).  What's cool about this is that it allows us to clearly define boundaries between auditory fields, just as is commonly done in vision.  There are 11 field maps in the human auditory core and belt region.

Previous studies of auditory field maps disagreed about whether A1 lines up along Heschl's gyrus or runs perpendicular to it.  The disagreements stemmed from the lack of an orthogonal dimension to define boundaries.  We show that A1 lines up along Heschl's gyrus, as the textbook model holds, and show how contradictory maps can be inferred if you don't have the periodotopic data.
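For intuition about the boundary logic, here's a toy sketch (my illustration only, not the actual analysis pipeline in the paper): along a line across cortex, candidate field borders fall where the best-frequency gradient reverses sign, and the orthogonal periodotopic gradient is what lets you confirm that a reversal marks a genuine border rather than noise in a single dimension.

```python
import numpy as np

# Fake best-frequency (kHz) progression across three adjacent fields:
# low->high (field 1), high->low (field 2), low->high (field 3).
# Values are invented for illustration.
best_freq = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 2.0, 1.0, 0.5, 1.0, 2.0, 4.0])

gradient = np.diff(best_freq)                        # local direction of change
reversals = np.where(np.diff(np.sign(gradient)) != 0)[0] + 1

# Candidate field boundaries sit at the gradient reversals.
print(reversals)  # [4 7]
```

With only this one dimension, a reversal is ambiguous; measuring the second, orthogonal gradient (periodotopy) at the same locations is what disambiguates true field borders, which is the point of the paper.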

What can we do with this?  We can map auditory fields in relation to speech activations.  We can measure magnification factors.  We can measure the distribution of ~receptive field preferences for different frequencies or periodicities between auditory fields and between hemispheres (can you say, definitive test of the AST hypothesis?).  We can determine which fields are affected by motor-to-sensory feedback, cross-sensory integration, attention, and so on.  We can use them as seeds for DTI studies or functional connectivity studies.  The floodgates are open.

The report was published online today in PNAS.  You can check it out here:

Barton, Venezia, Saberi, Hickok, and Brewer. Orthogonal acoustic dimensions define auditory field maps in human cortex. PNAS, November 27, 2012, doi:10.1073/pnas.1213381109


Action-Based Language: A theory of language acquisition, comprehension, and production

This is the paper by Glenberg and Gallese.  How could I not skip ahead to this one?!  I mean, the title does seem to imply that it will provide the answer to how language works!  So let's dig in.

Here's a quote:

our understanding of linguistic expressions is not solely an epistemic attitude; it is first and foremost a pragmatic attitude directed toward action.
So all of language reduces fundamentally to the action system?

One caveat is important. Whereas we focus on the relation between language and action, we do not claim that all language phenomena can be accommodated by action systems. Even within an embodied approach to language, there is strong evidence for contributions to language comprehension by perceptual systems 

Whew!  I was going to have to quote Pillsbury again:

“A reader of some of the texts lately published would be inclined to believe that there was nothing in consciousness but movement, and that the presence of sense organs, or of sensory and associatory tracts in the cortex was at the least a mistake on the part of the Creator” (Pillsbury, 1911, p. 83)
On page 906 we get to learn about the Action-Sentence Compatibility Effect (ACE), Glenberg's baby.  This is where a sentence that implies motion in one direction (He pushed the box away) facilitates responses (button presses) that are directed away from the subject and interferes with responses that are toward the subject.

The ACE is a favorite of the embodied camp.  They want to argue that this means that the meaning of, say, push is grounded in actual pushing movements that must be reactivated to accomplish understanding.  The ACE is interesting but neither surprising nor conclusive.  Just because two things are correlated (the meaning of the word push and the motor program for pushing) doesn't mean one is dependent on the other; one could exist without the other.  Again, think "fly", "slither", "coil", etc. etc.  Or think of it this way.  If I blew a puff of air in your eye every time I said the phrase "there is not a giraffe standing next to me", before long I could elicit an eye blink simply by uttering the phrase.  Furthermore, I could probably measure a There-Is-Not-A-Giraffe-Standing-Next-To-Me-Eyeblink Compatibility Effect (the TINAGSNTMECE) by asking subjects to respond either by opening their eyes wider or by closing them to indicate their decision. This does not mean that the eye blink embodies the meaning of the phrase.  It just means that there is an association between the phrase and the action.  Glenberg's ACE simply hijacks an existing association that happens to involve action-word pairs that have not only a "pragmatic" association but also an "epistemic" relation, to use their terminology, and calls them one and the same.
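The giraffe thought experiment can even be run as a toy simulation (my own sketch, not any actual paradigm; the numbers and function names are invented): a bare-bones associative learner paired on phrase-response co-occurrence produces a "compatibility effect" in reaction times without encoding any meaning at all.

```python
# Toy associative learner: compatibility effects from mere co-occurrence.
# (Illustrative sketch; parameters are arbitrary, not empirical values.)

def train(pairings):
    """Count phrase-response co-occurrences as associative strength."""
    strength = {}
    for phrase, response in pairings:
        strength[(phrase, response)] = strength.get((phrase, response), 0) + 1
    return strength

def reaction_time(strength, phrase, response, base_rt=500, gain=10):
    """Stronger association -> faster (facilitated) response, in ms."""
    return base_rt - gain * strength.get((phrase, response), 0)

phrase = "there is not a giraffe standing next to me"
assoc = train([(phrase, "blink")] * 20)  # 20 air-puff pairings

compatible = reaction_time(assoc, phrase, "blink")         # facilitated
incompatible = reaction_time(assoc, phrase, "widen-eyes")  # baseline

print(compatible < incompatible)  # True: a "compatibility effect"
```

Nothing in this learner grounds or embodies the phrase's meaning; the RT difference falls out of co-occurrence counting alone, which is exactly the worry about the ACE.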

Another study that GandG highlight as further evidence for an ACE-like effect makes my point.   Here is the relevant paragraph:

Zwaan and Taylor (2006) obtained similar results using a radically different ACE-type of procedure. Participants in their experiments turned a dial clockwise or counterclockwise to advance through a text. If the meaning of a phrase (e.g., “he turned the volume down”) conflicted with the required hand movement, reading of that phrase was slowed.
Unlike in Glenberg's ACE procedure, Zwaan and Taylor showed that arbitrary pairings between phrases and actions show the same effect (more like the eyeblink example).  Yes, some volume controls involve knob rotation, but others involve pressing a button, increasing/decreasing air pressure passing through the larynx, covering or cupping your ears, or placing your hand over your friend's mouth.  When you read the phrase, "he turned the volume down" did you simultaneously simulate counterclockwise rotation, button pressing, relaxation of your diaphragm, covering your ears, and covering your friend's mouth in order to understand the meaning of the phrase?

GandG also selectively cite data in support of their claims while obscuring important details:

Bak and Hodges (2003) discuss how degeneration of the motor system associated with motor neuron disorder (amyotrophic lateral sclerosis -- ALS) affects comprehension of action verbs more than nouns.

This is a true statement.  What is lacking, however, is the fact that Bak and Hodges studied a particular subtype of ALS, the subtype with a dementia component.  In fact, high-level cognitive and/or psychiatric deficits appear first in this subtype, with motor neuron symptoms appearing only later.  I'll let Glenberg and Gallese tell Stephen Hawking that he doesn't understand verbs anymore.

So much for the first two sections.

Language and the Motor System - Editorial

And another quote from the editorial:

phonological features of speech sounds are reflected in motor cortex activation so that the action system likely plays a double role, both in programming articulations and in contributing to the analysis of speech sounds (Pulvermuller et al., 2006)
which explains why prelingual infants, individuals with massive strokes affecting the motor speech system, individuals undergoing Wada procedures with acute and complete deactivation of the motor speech system, individuals with cerebral palsy who never acquired the ability to control their motor speech system, and chinchillas and quail can all perceive speech quite impressively.

One of the most frequently cited brain models of language indeed still sees a role of the motor system limited to articulation, thus paralleling indeed the position held by classical aphasiologists, such as Wernicke, Lichtheim and especially Paul Marie (Poeppel and Hickok, 2004). Recently, a contribution to speech comprehension and understanding is acknowledged insofar as inferior frontal cortex may act as a phonological short-term memory resource (Rogalsky and Hickok, 2011). These traditional positions are also discussed in the present volume, along with modern action-perception models.
Good to hear we will get the "traditional" perspective.  David, did you ever think WE would be called "traditional"?  Nice to see that our previously radical views are now the standard theory.

Let's try turning the tables:

One of the most frequently cited brain models of speech perception indeed still sees the motor system as playing a critical role, thus paralleling indeed the position held by classical speech scientists of the 1950s such as Liberman and even the early 20th century behaviorists such as Watson (Pulvermuller et al. 2006).

Moreover, one of the most frequently cited brain models of conceptual representation indeed still sees sensory and motor systems as being the primary substrate thus paralleling indeed the position held by classical aphasiologists, such as Wernicke and Lichtheim (Pulvermuller et al. 2006).

Monday, November 26, 2012

Cortex special issue: Language and the motor system

Observation #1.  In the editorial Cappa and Pulvermuller write,
Whereas the dominant view in classical aphasiology had been that superior temporal cortex (“Wernicke’s area”) provides the unique engine for speech perception and comprehension (Benson, 1979), investigations with functional neuroimaging in normal subjects have shown that even during the most automatic speech perception processes inferior fronto-central areas are being sparked (Zatorre et al., 1992)
I take it that they are referring to Zatorre's task in which subjects are listening to pairs of CVC syllables, some of which are words, some of which are not, and alternating a button press between two keys.  Contrasted with noise, activation foci were reported for automatic-speech-perception-of-random-CVC-syllables-while-alternating-button-pressing in the superior temporal gyrus bilaterally, the left middle temporal gyrus, and the left IFG.  Clearly the stronger activations in the temporal lobe (nearly double the z-scores) are doing little in the way of speech perception and it's the IFG activation that refutes the classical view.

I wonder why no mention was made of a rather nifty study published around the same time by Mazoyer et al. in which a larger sample of subjects listened to sentences of various sorts and which did not result in consistent activation in the IFG. This is a finding that has persisted into more recent research: listening to normal sentences does not result in robust IFG activation.  Sometimes you see it, sometimes you don't (see Rogalsky & Hickok for a review). Superior temporal cortex, that area that people were writing about on their IBM Selectrics (Google it, youngster), is not so fickle.  Present speech and it lights up like a sparkler on Independence Day.

Hopes of a balanced (and therefore useful) volume already sinking.  And I haven't even made it past the first paragraph of the editorial.

Mazoyer, B. M., Tzourio, N., Frak, V., Syrota, A., Murayama, N., Levrier, O., Salamon, G., Dehaene, S., Cohen, L., & Mehler, J. (1993). The cortical representation of speech. Journal of Cognitive Neuroscience, 5, 467-479.

Rogalsky, C., & Hickok, G. (2011). The role of Broca's area in sentence comprehension. Journal of Cognitive Neuroscience, 23, 1664-1680.

Language and the Motor System

This is the topic of a special issue of Cortex edited by Stefano Cappa and Friedemann Pulvermuller published just this year (Cortex, Vol. 48, Issue 7).  Let's work our way through what appears to be a highly balanced selection of papers by... oh wait, it seems to be mostly authors sympathetic to the idea that the motor system is the center of the linguistic universe.  But I haven't even looked at the papers yet, so let's not pre-judge.  (Oops, I guess I already did.) Kidding aside, I'm hopeful, actually, that the discussion won't be as one-sided as it has been for the last 10 years.

My plan is to read through the papers, one by one, and post my thoughts.  Please read along and feel free to post your own in the commentary section, or you can email me and I'll post your own guest entry.  As always, input from the authors is welcome.

Now turn to page 785 for the editorial by Cappa and Pulvermuller...

Friday, November 9, 2012

What does "cognitive" mean to you?

Just curious... what counts as "cognitive" to you? I've been reading a bit of the embodied cognition literature and I find statements like this rather odd:  "the traditional conceptualization of cognition as a stage in the perception–cognition–action pipeline."  Is cognition just high-level stuff?  I don't see it that way.  Perception is cognition.  Action is cognition.  Language is cognition.  Categorization, memory, attention, are all cognition.  Is this "cognitive sandwich" notion just a straw man given modern conceptualization of cognition?

Second International Conference on Cognitive Hearing Science for Communication, June 16-19, 2013 - Linköping, Sweden

The first conference in 2011 was a real hit and has boosted research in the field. We believe that this second conference will be just as successful. Some of the themes addressed at the first conference have been retained, some will be explored further, and others are quite new. This reflects the development of the field. Conference speakers represent the international cutting edge of Cognitive Hearing Science.

We look forward to welcoming you to an exciting new conference and to Linköping University, the home of Cognitive Hearing Science. Many prominent researchers have already agreed to give talks.
Further information can be obtained from:

Friday, October 26, 2012

Research Assistant/Lab Manager position - Language Behavior and Brain Imaging Lab, Rutgers University

A research assistant/lab manager position is available in the newly formed Language Behavior and Brain Imaging Lab at Rutgers University in Newark, New Jersey. Much of the research in the lab is devoted to the cognitive neuroscience of reading, with potential application to reading disorders. Other aspects of brain and language studied in the lab include concept formation and speech production. Research is performed using a variety of techniques such as functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), behavioral responses, gene-brain correlations, and magnetoencephalography (MEG).

Responsibilities will include data collection from human research participants in both a purely behavioral and functional brain imaging setting, contacting and scheduling research participants, managing institutional review board (IRB) protocols, and data analysis.

Requirements for a successful applicant include spoken and written proficiency in English, a minimum of a bachelor-level degree (e.g., BA or BS), preferably in psychology, neuroscience, computer science, engineering, biology, or a related field, and willingness to make a 2-year commitment. Preference will be given to applicants who have experience in cognitive neuroscience research with human participants, are proficient with the linux computing environment, have used experiment delivery and data acquisition software such as E-prime, and can program in a scripting language such as Matlab or python.

Rutgers is the state university of New Jersey, and its Newark campus is in the state’s largest city. Newark is undergoing a renaissance of its own and is only minutes from Manhattan by train. Applications will be reviewed as they are received, with a deadline of December 15th. Please email a resume or CV and contact information for 3 references to

Thursday, October 25, 2012

Two Open-Rank Tenure Track Jobs - Northeastern University

Two Open-Rank Tenure Track Openings in the Department of Speech-Language Pathology & Audiology
The Department of Speech-Language Pathology and Audiology at Northeastern University announces two tenure-track open rank faculty positions. Areas of specialization are open. Particularly desirable areas include, but are not limited to, speech and language development, speech and language neuroscience, disorders of articulation/phonology/language, adult neurogenics, speech science, auditory neuroscience, hearing impairment in children, aging, communication technologies, and neuroimaging. Applicants from all subfields of Speech-Language Pathology and Audiology are strongly encouraged to apply.  Responsibilities include developing and maintaining a program of independent and collaborative scholarly research and teaching, as well as college and university service.

A doctoral degree in Speech-Language Pathology, Audiology, or a related discipline relevant to human communication and its disorders, and a record of publications in scholarly journals appropriate to the field of research and desired appointment level, are required.  Junior candidates will demonstrate promise in building an externally funded research program and a teaching background.  Candidates at the Full and Associate Professor level will have nationally and internationally recognized scholarship programs, a demonstrated track record of external funding, and a history of exceptional teaching and service.

The Department of Speech-Language Pathology and Audiology, within the Bouvé College of Health Sciences, offers degrees at the undergraduate and graduate levels, in addition to an interdisciplinary certificate in Early Intervention, and houses the on-campus, state-of-the-art Speech-Language and Hearing Center.  Our programs are described in greater detail at  The Department has a 50-year history of externally funded research through the National Institutes of Health, the National Science Foundation, and the US Department of Education.  Northeastern University provides the opportunity for collaborative academic, clinical, and research program development through established links to other units within the University (e.g., Computer Science, Electrical Engineering, and Psychology), as well as with other universities (e.g., Massachusetts Institute of Technology and Boston University), hospitals (e.g., Children’s Hospital), and schools (e.g., Boston Public Schools) in the Boston metropolitan area.

Bouvé College of Health Sciences has a faculty of 189, with approximately 2,100 undergraduate and 1,500 graduate students. It is the leading national model for education and research in the health, psychosocial and biomedical sciences and supports the University's mission of educating students for a life of fulfillment and accomplishment and creating and translating knowledge to meet global and societal needs.

Northeastern University is on an exciting trajectory of growth and innovation. The University is a leader in inter-disciplinary research, urban engagement, experiential learning, and the integration of classroom learning with real-world experience.  It is home to 20,000 students and to the nation's premier cooperative education program. The past decade has witnessed a dramatic increase in Northeastern's international reputation for research and innovative educational programs. A heightened focus on interdisciplinary research and scholarship is driving a faculty hiring initiative at Northeastern, advancing its position amongst the nation's top research universities.

Salary and rank will be commensurate with education, training, and experience and include an outstanding benefits package.  Applications should include a cover letter, curriculum vitae, research statement, and list of three references.  Applicants should apply through the NEU online employment application system. Recommendation letters should also be submitted through the application system. Position inquiries should be sent via email to Dr. Rupal Patel, Search Committee Chair. Applicant review will begin December 1, 2012, and proceed until exceptional candidates are selected.

Northeastern University Equal Employment Opportunity Policy: Northeastern University is an Equal Opportunity/Affirmative Action, Title IX, and an ADVANCE institution. Minorities, women, and persons with disabilities are strongly encouraged to apply. Northeastern University embraces the wealth of diversity represented in our community and seeks to enhance it at all levels. Northeastern University is an E-Verify employer.

Thursday, September 13, 2012

Three (count 'em) 3 tenure-track faculty positions -- Arizona State Univ., Dept of Speech & Hearing Sciences

The Department of Speech and Hearing Science at Arizona State University (SHS) announces openings for 3 tenure-track faculty positions at Full (Department Chair), Associate, and Assistant Professor rank. ASU is a Research I University with outstanding research facilities and infrastructure support and is located within the vibrant metropolitan Phoenix area with 3.5 million people. As The New American University (New American University Reader), ASU has been widely lauded for innovation, and its culture of transformation and excellence thrives in the Department of Speech and Hearing Science.

Successful candidates will be expected to develop and/or maintain internationally recognized, externally funded research programs, teach in the undergraduate and graduate curricula, and participate in service activities at the Department, College, and University levels.

A PhD in any discipline relevant to human communication and its disorders and a record of publications in scholarly journals appropriate to the desired appointment level are required. Applicants for an Assistant Professor position must show exceptional promise in research and teaching, whereas applicants for the Associate and Full Professor ranks must have demonstrated excellence in research and teaching appropriate to the rank. Candidates whose research specialization would build on the core strengths of the department will be given preference (SHS Labs). Potential for successful interaction with ASU’s existing centers of research excellence is also desirable. Candidates for Department Chair will have demonstrated leadership abilities and success in translating ideas and vision into practice.

ASU is an affirmative action/equal employment opportunity employer and is dedicated to recruiting a diverse faculty community. Women and minorities are encouraged to apply. ASU Affirmative Action 

Background check is required for employment.

Applications will be submitted electronically with all files in .pdf format. Interested candidates must submit a cover letter indicating desired rank along with a description of research and expertise. Candidates applying for the Assistant Professor rank should also include a 5-year research plan. All candidates must also submit the following for consideration: i) a current curriculum vitae; ii) names of 3 references, including address, phone, and email contact information; and iii) attachments of 3 representative publications. Applications will be reviewed beginning November 30th, 2012, and every Friday thereafter until the searches are closed. Please visit SHS JOBS for more information on the department and the advertised positions.

Job Number: Professor/Chair (10163); Associate Professor (10164); Assistant Professor (10165)

Monday, August 27, 2012

Liberman and the Perception of the Speech Code

Alvin Liberman is at the center of a scientific divide. On the one hand, his work on the motor theory of speech perception has been elevated virtually to the status of gospel among those researchers who promote the role of the motor system in perception. On the other hand is a new generation of speech scientists who believe that speech perception is the purview of the auditory system. For these researchers, Liberman has a near-villainous status, or at least represents the personification of a roadblock to progress in speech research. Full disclosure: I lean towards the latter.

So I decided to go back and read some of Liberman's original work.  I highly recommend reading the older literature in your research area -- there's a lot of useful information! -- and it is particularly important whenever decades-old results or theories are gratuitously cited in modern work.  You'll often be surprised and almost always you will learn something important.  With respect to Liberman et al.'s work, I have to admit, it is fairly impressive.  Along with a group at MIT that included Ken Stevens, Liberman and colleagues virtually defined the field of speech perception with their pioneering work.  It was technically sophisticated, theoretically rich, and generated a massive amount of data that defined the problems we are still struggling with today.

One interesting tidbit was Liberman's idea that somatosensory information was what ultimately drove the perception of phonemes; the motor system was used simply as a means to access this information. Here's a quote:

...the articulatory movements and their sensory effects mediate between the acoustic stimulus and the event we call perception. In its extreme and old-fashioned form, this view says that we overtly mimic the incoming speech sounds and then respond to the proprioceptive and tactile stimuli that are produced by our own articulatory movements. For a variety of reasons such an extreme position is wholly untenable, … we must assume that the process is somehow short-circuited – that is, that the reference to the articulatory movements and their sensory consequences must somehow occur in the brain without getting out into the periphery. (Liberman, 1957, p. 122)

In reference to probably the most important of Liberman's early papers, The Perception of the Speech Code, I noticed an interesting juxtaposition of two points. The first is the starting assumption that their analysis should be restricted to the level of the phoneme. The article starts,
Our aim is to identify some of the conditions that underlie the perception of speech. We will not consider the whole process, but only the part that lies between the acoustic stream and a level of perception corresponding roughly to the phoneme. (p. 431)
The other is the observation that the group is famous for, the parallel transmission of information about phonemes in a syllable, noted here in the context of a discussion of perceptual experiments involving synthesized versions of the phoneme /d/ in the context of a following vowel:
If we cut progressively into the syllable from the right-hand end, we hear /d/ plus a vowel, or a nonspeech sound; at no point will we hear only /d/. This is so because the formant transition is, at every instant, providing information about two phonemes, the consonant and the vowel – that is, the phonemes are being transmitted in parallel. (p. 436)
And a few pages further on, they conclude:
This parallel delivery of information produces at the acoustic level the merging of influences we have already referred to and yields irreducible acoustic segments of approximately syllabic dimensions. (p. 441)
And one more towards the end of the paper:
To find acoustic segments that are in any reasonably simple sense invariant with linguistic (and perceptual) segments ... one must go to the syllable level or higher. (p. 451)
This strikes me as the one point where Liberman's work went awry. By committing theoretically to the notion that individual segments must be extracted and represented in the speech perception process, they were in no position to recognize what their data were clearly telling them: that the acoustic signal reflects larger chunks of information -- that is, something closer to the syllable. This observation is nothing new, of course. Others have pointed out the problems with the phoneme as the unit of analysis. But it is interesting to reconsider the data in their own light, rather than in the shadow of the phoneme-as-a-perceptual-unit dogma. It was a perfectly reasonable assumption that phonemes should be relevant not only for phonological theory (i.e., production) but also for perception. Unfortunately, it unnecessarily complicated the perceptual picture. I wonder how Liberman's work might have progressed if he had made different theoretical assumptions. Maybe we'd understand how speech is perceived by now.

Liberman, A. M. (1957). Some results of research on speech perception. Journal of the Acoustical Society of America, 29, 117-123.

Liberman, A. M., Cooper, F. S., Shankweiler, D. P., & Studdert-Kennedy, M. (1967). Perception of the speech code. Psychological Review, 74, 431-461.