Wednesday, November 28, 2012

Action Based Language - More on Glenberg and Gallese

The core of Glenberg and Gallese's proposal is that language is grounded in a hierarchical state feedback control model, made possible, of course, by mirror neurons.  I actually think they are correct to look at feedback control models as playing a role in language, given that I've previously proposed the same thing (Hickok, 2012), along with Guenther, Houde and others, albeit for speech production only, not for "grounding" anything.  Glenberg and Gallese believe, on the other hand, that the feedback control model is the basis for understanding language.
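For readers who haven't worked with such models, here is a minimal toy sketch of a state feedback control loop of the kind these proposals invoke: a controller issues commands based on an internal state estimate, a forward model predicts the sensory consequences, and the estimate is corrected against actual feedback.  The dynamics, gains, and variable names below are my own illustrative assumptions, not Glenberg and Gallese's (or anyone else's) published model.

```python
import random

def simulate_feedback_control(target, steps=50, gain=0.3, noise=0.05):
    # Toy state feedback controller: just the bare logic, not any published model.
    state = 0.0       # current state of the controlled "plant" (e.g., an articulator position)
    estimate = 0.0    # internal state estimate maintained via the forward model
    trajectory = []
    for _ in range(steps):
        command = gain * (target - estimate)          # command computed from the internal estimate
        state += command + random.gauss(0, noise)     # the plant changes, with motor noise
        predicted = estimate + command                # forward model predicts the consequence
        feedback = state                              # sensory feedback (delayed and noisy in real systems)
        estimate = predicted + 0.5 * (feedback - predicted)  # correct the estimate with the prediction error
        trajectory.append(state)
    return trajectory

# Example: drive the state toward a target value of 1.0
print(simulate_feedback_control(1.0)[-3:])
```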

Their theoretical trick is to link up action control circuits for object-oriented actions and action control circuits for articulating words related to those actions.  Motor programs for drinking are linked to motor programs for saying "drink".  Then, when you hear the word "drink", you activate the motor program for saying the word, which in turn activates the motor programs for actual drinking, and this, on their account, is what allows you to understand the word.


The overlap ... between the speech articulation and action control is meant to imply that the act of articulation primes the associated motor actions and that performing the actions primes the articulation. That is, we tend to do what we say, and we tend to say (or at least covertly verbalize) what we do. Furthermore, when listening to speech, bottom-up processing activates the speech controller (Fadiga et al., 2002; Galantucci et al., 2006; Guenther et al., 2006), which in turn activates the action controller, thereby grounding the meaning of the speech signal in action.

So as I reach for and drink from my coffee cup, what words will I covertly verbalize?  Drink, consume, enjoy, hydrate, caffeinate?  Fixate, look at, gaze towards, reach, extend, open, close, grasp, grab, envelop, grip, hold, lift, elevate, bring-towards, draw-near, transport, purse (the lips), tip, tilt, turn, rotate, supinate, sip, slurp, sniff, taste, swallow, draw-away, place, put, set, release, let go?  No wonder I can't chat with someone while drinking coffee.  My motor speech system is REALLY busy!

By the way, what might the action controller for the action drink code?  It can't be a specific movement because it has to generalize across drinking from mugs, wine glasses, lidded cups, espresso cups, straws, water bottles with and without sport lids, drinking by leaning down to the container or by lifting it up, drinking from a sink faucet, drinking from a water fountain, drinking morning dew adhering to leaves, drinking rain by opening your mouth to the sky, drinking by asking someone else to pour water into your mouth.  And if you walked outside right now, opened your mouth to a cloudless sky and then swallowed, would you be drinking?  Why not?  If the meaning of drink is grounded in actions, why should it matter whether it is raining or not?

Because it's not the movements themselves that define the meaning.

But the motor system can generate predictions about the consequences of an action and that is where the meaning comes from, you might argue, as do Glenberg and Gallese:


part of the knowledge of what “drink” means consists of expected consequences of drinking

And what are those consequences? Glenberg and Gallese get it (mostly) right:

...predictions are driven by activity in the motor system (cf. Fiebach and Schubotz, 2006), however, the predictions themselves reside in activity across the brain. For example, predictions of how the body will change on the basis of action result from activity in somatosensory cortices, predictions of changes in spatial layout result from activity in visual and parietal cortices, and predictions of what will be heard result from activity in temporal areas.

So where do we stand?  Meanings are dependent on consequences and consequences "reside in activity across the brain" (i.e., sensory areas).  Therefore, the meanings of actions are not coded in the motor system.  All the motor system does according to Glenberg and Gallese (if you read between the lines) is generate predictions.  In other words, the motor system is nothing more than a way of accessing the meanings (stored elsewhere) via associations.

So just to spell it out for the readers at home.  Here is their model of language comprehension:

hear a word --> activate motor program for saying word --> activate motor program for actions related to word --> generate predicted consequences of the action in sensory systems --> understanding.

Why not just go from the word to the sensory system directly?  Is the brain not capable of forming such associations?  In other words, if all the motor system is doing is providing an associative link, why can't you get there via non-motor associative links?
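To make the redundancy concrete, here is a schematic toy illustration (not a model of any real data): if the motor route merely relays an association, then a direct word-to-consequence association does exactly the same work.  Every mapping below is an invented placeholder.

```python
# Hypothetical lookup tables standing in for learned associations.
speech_motor = {"drink": "articulate_/drink/"}            # word -> articulation program
action_motor = {"articulate_/drink/": "grasp_and_tip"}    # articulation -> action program
predicted_consequences = {"grasp_and_tip": "liquid in mouth, thirst reduced"}

# The shortcut: word -> stored sensory/conceptual consequences, no motor detour.
direct_association = {"drink": "liquid in mouth, thirst reduced"}

def understand_via_motor(word):
    """Glenberg & Gallese-style chain: word -> speech motor -> action motor -> consequences."""
    return predicted_consequences[action_motor[speech_motor[word]]]

def understand_directly(word):
    """Direct associative route: word -> consequences."""
    return direct_association[word]

assert understand_via_motor("drink") == understand_directly("drink")
```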

More to the point: if the *particular* actions don't matter, as even the mirror neuron crowd now acknowledges, and if what matters is the higher level goals or consequences, and if these goals or consequences are coded in sensory systems (which they are), then there is little role for the motor system in conceptual knowledge of actions.

Glenberg and Gallese correctly point out a strong empirical prediction of their model:


The ABL theory makes a novel and strong prediction: adapting an action controller will produce an effect on language comprehension

They cite Bak's work on ALS and some use-induced plasticity effects.  Again, let me suggest, quite unscientifically, that Stephen Hawking would have a hard time functioning if he didn't understand verbs. Further, use-induced plasticity is known to modulate response bias -- a likely source of these effects.  In short, the evidence for the strong prediction is weak at best.

But rather than adapting an action controller, let's remove it as a means to test their prediction head on.  Given their model in which perceived words activate motor programs for articulating those words, which activate motor programs for generating actions, which generate predictions, etc., if you don't have the motor programs for articulating words you shouldn't be able to comprehend speech, or should at least show some impairment.  Yet there is an abundance of evidence that language comprehension is not dependent on the motor system.  I reviewed much of it in my "Mirror Neuron Forum" contribution that Glenberg edited and Gallese contributed to.  NONE OF THIS WORK IS EVEN MENTIONED in Glenberg and Gallese's piece.  This is rather unscholarly in my opinion.

Toward the end of the paper they include a section on non-motor processes.  In it they write,

We have focused on motor processes for two related reasons. First, we believe that the basic function of cognition is control of action. From an evolutionary perspective, it is hard to imagine any other story. That is, systems evolve because they contribute to the ability to survive and reproduce, and those activities demand action. As Rodolfo Llinas puts it, “The nervous system is only necessary for multicellular creatures that can orchestrate and express active movement.”  Thus, although brains have impressive capacities for perception, emotion, and more, those capacities are in the service of action.

I agree. But action for action's sake is useless.  The reason WHY brains have impressive capacities for perception, emotion, and more is to give action purpose, meaning.  Without these non-motor systems, the action system is literally and figuratively blind and therefore completely useless.

Why the unhealthy obsession with the motor system and complete disregard for the mountain of evidence against their ideas?  Because the starting point for all the theoretical fumbling is a single assumption that has gained the status of an axiom in the minds of researchers like Glenberg and Gallese: that cognition revolves around embodiment with mirror neurons/the motor system at the core.  (Glenberg's lab name even assumes his hypothesis, "Laboratory for Embodied Cognition".)  Once you commit to an idea you have no choice but to build a convoluted story to uphold your assumption and ignore contradictory evidence.

I don't think there is a ghost of a chance that Glenberg and Gallese will ever change their views in light of empirical fact.  Skinner, for example, was a diehard defender of behaviorism long after people like Chomsky, Miller, Broadbent and others clearly demonstrated that the approach was theoretically bankrupt.  Today the cognitive approach to explaining behavior dominates both psychology and neuroscience, including embodied approaches like Glenberg and Gallese's.  My hope is that by pointing out the inadequacies of proposals like these, the next generation of scientists, who aren't saddled with tired assumptions, will ultimately move the field forward and consider the function of mirror neurons and the motor system in a more balanced light.


Hickok, G. (2012). Computational neuroanatomy of speech production. Nature Reviews Neuroscience, 13, 135-145.

Tuesday, November 27, 2012

Orthogonal acoustic dimensions define auditory field maps in human cortex

Wow, this is the most blogging I've done in months.  This one is way off the topic of embodied cognition and mirror neurons (some of you will be relieved to hear) and in my view more important.  An interdisciplinary group of us here at UC Irvine have successfully mapped two orthogonal dimensions in human auditory cortex, tonotopy (which we knew about) and periodotopy (which most suspected but which hadn't been convincingly measured in humans or shown to be orthogonal to tonotopy).  What's cool about this is that it allows us to clearly define boundaries between auditory fields, just as is commonly done in vision.  There are 11 field maps in the human auditory core and belt region.

Previous studies of auditory field maps disagreed about whether A1 lines up along Heschl's gyrus or runs perpendicular to it.  The disagreements stemmed from the lack of an orthogonal dimension to define boundaries.  We show that A1 lines up along Heschl's gyrus, as the textbook model holds, and show how contradictory maps can be inferred if you don't have the periodotopic data.

What can we do with this?  We can map auditory fields in relation to speech activations.  We can measure magnification factors.  We can measure the distribution of (approximate) receptive field preferences for different frequencies or periodicities between auditory fields and between hemispheres (can you say, definitive test of the AST hypothesis?).  We can determine which fields are affected by motor-to-sensory feedback, cross-sensory integration, attention, and so on.  We can use them as seeds for DTI studies or functional connectivity studies.  The floodgates are open.
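For the curious, here is a rough sketch of the general logic of using two roughly orthogonal maps to draw field boundaries (gradient reversals in one dimension, checked against the second gradient).  The synthetic maps and numbers below are invented for illustration; this is not the analysis pipeline from the paper.

```python
import numpy as np

# Synthetic 2-D "cortical patch": tonotopy varies along x and reverses once,
# periodotopy varies smoothly along y.  Entirely made-up data.
x, y = np.meshgrid(np.linspace(0, 2, 100), np.linspace(0, 1, 100))
tonotopy = np.abs(x - 1.0)    # best frequency: gradient reverses at x = 1
periodotopy = y               # best periodicity / temporal rate

gy_t, gx_t = np.gradient(tonotopy)      # gradients along rows (y) and columns (x)
gy_p, gx_p = np.gradient(periodotopy)

# Angle between the two gradient fields (near 90 degrees where the maps are orthogonal).
dot = gx_t * gx_p + gy_t * gy_p
norm = np.hypot(gx_t, gy_t) * np.hypot(gx_p, gy_p) + 1e-12
angle = np.degrees(np.arccos(np.clip(dot / norm, -1, 1)))
print("median gradient angle:", np.median(angle))

# Candidate field boundary: where the tonotopic gradient reverses sign along x.
boundary_cols = np.where(np.diff(np.sign(gx_t[50, :])))[0]
print("tonotopic reversal near column(s):", boundary_cols)
```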

The report was published online today in PNAS.  You can check it out here:

http://www.pnas.org/content/early/2012/11/27/1213381109

Barton, Venezia, Saberi, Hickok, and Brewer. Orthogonal acoustic dimensions define auditory field maps in human cortex. PNAS, November 27, 2012, doi:10.1073/pnas.1213381109


COMMENTS WELCOME!

Action-Based Language: A theory of language acquisition, comprehension, and production

This is the paper by Glenberg and Gallese.  How could I not skip ahead to this one?!  I mean, the title does seem to imply that it will provide the answer to how language works!  So let's dig in.

Here's a quote:


our understanding of linguistic expressions is not solely an epistemic attitude; it is first and foremost a pragmatic attitude directed toward action.

So all of language reduces fundamentally to the action system?

One caveat is important. Whereas we focus on the relation between language and action, we do not claim that all language phenomena can be accommodated by action systems. Even within an embodied approach to language, there is strong evidence for contributions to language comprehension by perceptual systems 

Whew!  I was going to have to quote Pillsbury again:

“A reader of some of the texts lately published would be inclined to believe that there was nothing in consciousness but movement, and that the presence of sense organs, or of sensory and associatory tracts in the cortex was at the least a mistake on the part of the Creator” (Pillsbury, 1911, p. 83)

On page 906 we get to learn about the Action-Sentence Compatibility Effect (ACE), Glenberg's baby.  This is where a sentence that implies motion in one direction (He pushed the box away) facilitates responses (button presses) that are directed away from the subject and interferes with responses that are toward the subject.

The ACE is a favorite of the embodied camp.  They want to argue that this means that the meaning of, say, push is grounded in actual pushing movements that must be reactivated to accomplish understanding.  The ACE is interesting but neither surprising nor conclusive.  Just because two things are correlated (the meaning of the word push and the motor program for pushing) doesn't mean one is dependent on the other; one could exist without the other.  Again, think "fly", "slither", "coil", etc. etc.  Or think of it this way.  If I blew a puff of air in your eye every time I said the phrase "there is not a giraffe standing next to me", before long I could elicit an eye blink simply by uttering the phrase.  Furthermore, I could probably measure a There-Is-Not-A-Giraffe-Standing-Next-To-Me-Eyeblink Compatibility Effect (the TINAGSNTMECE) by asking subjects to respond either by opening their eyes wider or by closing them to indicate their decision.  This does not mean that the eye blink embodies the meaning of the phrase.  It just means that there is an association between the phrase and the action.  Glenberg's ACE simply hijacks an existing association that happens to involve action-word pairs that have not only a "pragmatic" association but also an "epistemic" relation, to use their terminology, and calls them one and the same.
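For what it's worth, here is a toy simulation of how a compatibility effect of this sort is typically quantified: the mean reaction-time difference between incompatible and compatible trials.  The RT distributions and the 30 ms "effect" below are invented numbers; the point is only that measuring such a difference, real or hypothetical (TINAGSNTMECE included), says nothing about whether the association carries the meaning.

```python
import random
random.seed(0)

def simulate_rt(compatible, base=600.0, effect=30.0, sd=80.0):
    """One trial's reaction time (ms): incompatible trials are slower by `effect` on average."""
    return random.gauss(base if compatible else base + effect, sd)

compatible_rts   = [simulate_rt(True)  for _ in range(200)]
incompatible_rts = [simulate_rt(False) for _ in range(200)]

ace = (sum(incompatible_rts) / len(incompatible_rts)
       - sum(compatible_rts) / len(compatible_rts))
print(f"simulated compatibility effect: {ace:.1f} ms")
```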

Another study that GandG highlight as further evidence for an ACE-like effect makes my point.   Here is the relevant paragraph:


Zwaan and Taylor (2006) obtained similar results using a radically different ACE-type of procedure. Participants in their experiments turned a dial clockwise or counterclockwise to advance through a text. If the meaning of a phrase (e.g., “he turned the volume down”) conflicted with the required hand movement, reading of that phrase was slowed.

Unlike in Glenberg's ACE procedure, Zwaan and Taylor showed that arbitrary pairings between phrases and actions show the same effect (more like the eyeblink example).  Yes, some volume controls involve knob rotation, but others involve pressing a button, increasing/decreasing air pressure passing through the larynx, covering or cupping your ears, or placing your hand over your friend's mouth.  When you read the phrase, "he turned the volume down", did you simultaneously simulate counterclockwise rotation, button pressing, relaxation of your diaphragm, covering your ears, and covering your friend's mouth in order to understand the meaning of the phrase?

GandG also selectively cite data in support of their claims while obscuring important details:


Bak and Hodges (2003) discuss how degeneration of the motor system associated with motor neuron disorder (amyotrophic lateral sclerosis -- ALS) affects comprehension of action verbs more than nouns.


This is a true statement.  What is lacking, however, is the fact that Bak and Hodges studied a particular subtype of ALS, the subtype with a dementia component.  In fact, high-level cognitive and/or psychiatric deficits appear first in this subtype, with motor neuron symptoms appearing only later.  I'll let Glenberg and Gallese tell Stephen Hawking that he doesn't understand verbs anymore.

So much for the first two sections.

Language and the Motor System - Editorial


And another quote from the editorial:

phonological features of speech sounds are reflected in motor cortex activation so that the action system likely plays a double role, both in programming articulations and in contributing to the analysis of speech sounds (Pulvermuller et al., 2006)

which explains why prelingual infants, individuals with massive strokes affecting the motor speech system, individuals undergoing Wada procedures with acute and complete deactivation of the motor speech system, individuals with cerebral palsy who never acquired the ability to control their motor speech system, and chinchillas and quail can all perceive speech quite impressively.


One of the most frequently cited brain models of language indeed still sees a role of the motor system limited to articulation, thus paralleling indeed the position held by classical aphasiologists, such as Wernicke, Lichtheim and especially Paul Marie (Poeppel and Hickok, 2004). Recently, a contribution to speech comprehension and understanding is acknowledged insofar as inferior frontal cortex may act as a phonological short-term memory resource (Rogalsky and Hickok, 2011). These traditional positions are also discussed in the present volume, along with modern action-perception models.

Good to hear we will get the "traditional" perspective.  David, did you ever think WE would be called "traditional"?  Nice to see that our previously radical views are now the standard theory.

Let's try turning the tables:

One of the most frequently cited brain models of speech perception indeed still sees the motor system as playing a critical role, thus paralleling indeed the position held by classical speech scientists of the 1950s such as Liberman and even the early 20th century behaviorists such as Watson (Pulvermuller et al. 2006).

Moreover, one of the most frequently cited brain models of conceptual representation indeed still sees sensory and motor systems as being the primary substrate thus paralleling indeed the position held by classical aphasiologists, such as Wernicke and Lichtheim (Pulvermuller et al. 2006).

Monday, November 26, 2012

Cortex special issue: Language and the motor system


Observation #1.  In the editorial Cappa and Pulvermuller write,
Whereas the dominant view in classical aphasiology had been that superior temporal cortex (“Wernicke’s area”) provides the unique engine for speech perception and comprehension (Benson, 1979), investigations with functional neuroimaging in normal subjects have shown that even during the most automatic speech perception processes inferior fronto-central areas are being sparked (Zatorre et al., 1992)

I take it that they are referring to Zatorre's task in which subjects are listening to pairs of CVC syllables, some of which are words, some of which are not, and alternating a button press between two keys.  Contrasted with noise, activation foci were reported for automatic-speech-perception-of-random-CVC-syllables-while-alternating-button-pressing in the superior temporal gyrus bilaterally, the left middle temporal gyrus, and the left IFG.  Clearly the stronger activations in the temporal lobe (nearly double the z-scores) are doing little in the way of speech perception and it's the IFG activation that refutes the classical view.

I wonder why no mention was made of a rather nifty study published around the same time by Mazoyer et al. (1993) in which a larger sample of subjects listened to sentences of various sorts and which did not result in consistent activation in the IFG.  This is a finding that has persisted into more recent research: listening to normal sentences does not result in robust IFG activation.  Sometimes you see it, sometimes you don't (see Rogalsky & Hickok, 2011, for a review).  Superior temporal cortex, that area that people were writing about on their IBM Selectrics (Google it, youngster), is not so fickle.  Present speech and it lights up like a sparkler on Independence Day.

Hopes of a balanced (and therefore useful) volume already sinking.  And I haven't even made it past the first paragraph of the editorial.




Mazoyer, B. M., Tzourio, N., Frak, V., Syrota, A., Murayama, N., Levrier, O., Salamon, G., Dehaene, S., Cohen, L., & Mehler, J. (1993). The cortical representation of speech. Journal of Cognitive Neuroscience, 5, 467-479.

Rogalsky, C., & Hickok, G. (2011). The role of Broca's area in sentence comprehension. Journal of Cognitive Neuroscience, 23, 1664-1680.


Language and the Motor System

This is the topic of a special issue of Cortex edited by Stefano Cappa and Friedemann Pulvermuller published just this year (Cortex, Vol. 48, Issue 7).  Let's work our way through what appears to be a highly balanced selection of papers by... oh wait, it seems to be mostly authors sympathetic to the idea that the motor system is the center of the linguistic universe.  But I haven't even looked at the papers yet, so let's not pre-judge.  (Oops, I guess I already did.) Kidding aside, I'm hopeful, actually, that the discussion won't be as one-sided as it has been for the last 10 years.

My plan is to read through the papers, one by one, and post my thoughts.  Please read along and feel free to post your own in the commentary section, or you can email me and I'll post your own guest entry.  As always, input from the authors is welcome.

Now turn to page 785 for the editorial by Cappa and Pulvermuller...

Friday, November 9, 2012

What does "cognitive" mean to you?

Just curious... what counts as "cognitive" to you?  I've been reading a bit of the embodied cognition literature and I find statements like this rather odd:  "the traditional conceptualization of cognition as a stage in the perception–cognition–action pipeline."  Is cognition just high-level stuff?  I don't see it that way.  Perception is cognition.  Action is cognition.  Language is cognition.  Categorization, memory, and attention are all cognition.  Is this "cognitive sandwich" notion just a straw man given modern conceptualizations of cognition?

Second International Conference on Cognitive Hearing Science for Communication, June 16-19, 2013 - Linköping, Sweden



The first conference in 2011 was a real hit and has boosted research in the field. We believe that this second conference will be just as successful. Some of the themes addressed at the first conference have been retained, some will be explored further, and others are quite new. This reflects the development of the field. Conference speakers represent the international cutting edge of Cognitive Hearing Science.

We look forward to welcoming you to an exciting new conference and to Linköping University, the home of Cognitive Hearing Science. Many prominent researchers have already accepted to give a talk.
Further information can be obtained from: www.chscom2013.se