Thursday, May 16, 2013

Motor control for speech versus non-speech vocal tract gestures

Here is a pretty cool demo from the 1960s showing the dissociability of motor control during speech versus non-speech tongue movements.  Check it out!  Thanks to Ron Nutsell for pointing this out.

Thursday, May 9, 2013

Flying Pigs on National Public Radio: Promoting the wrong theory of language and understanding

NPR aired an interview recently with Benjamin Bergen, UCSD cognitive scientist, discussing an embodied view of word meaning.  The basic idea is nothing new by now: we understand words by "simulating" our physical experiences that have become associated with those words.  Here's a quote taken from the NPR transcript of the interview:
If someone read a sentence like, "the shortstop threw the ball to first base," parts of the brain dedicated to vision and movement would light up, Bergen says. "The question was, why?" he says. "They're just listening to language. Why would they be preparing to act? Why would they be thinking that they were seeing something?" 
The answer that emerged from this research is that when you encounter words describing a particular action, your brain simulates the experience, Bergen says. 
"The way that you understand an action is by recreating in your vision system what it would look like to perceive that event and recreating in your motor system what it would be like to be that shortstop, to have the ball in your hand and release it," Bergen says.
This is standard embodied cognition speak.  I haven't read his book, but this view seems to be the central topic of Bergen's monograph, Louder than words: the new science of how the brain makes meaning.  I'm sure his book is much more careful and articulated than the interview, but the interview is what more people will hear and so deserves a response, particularly because the interview discussion goes beyond word meanings, claiming to reset our understanding of language itself:
Just a few decades ago, many linguists thought the human brain had evolved a special module for language. It seemed plausible that our brains have some unique structure or system. After all, no animal can use language the way people can.
But in the 1990s, scientists began testing the language-module theory using "functional" MRI technology that let them watch the brain respond to words. And what they saw didn't look like a module, says Benjamin Bergen.
"They found something totally surprising," Bergen says. "It's not just certain specific little regions in the brain, regions dedicated to language, that were lighting up. It was kind of a whole-brain type of process."

He then goes on to explain how we understand language via simulation, as in the baseball example. This generalization to language is troubling, reckless even.  There are so many problems with the claim that it's hard to know where to start, but I'll try:

1. A theory of word meaning is not a theory of language; it's a theory of word meaning.  Let's translate the argument to the visual domain to reveal how ridiculous this generalization is.  "Just a few decades ago, many visual scientists thought that the human brain had evolved special modules for vision, like systems for wavelength detection, motion detection, and analysis of object form.  But in the 1990s MRI technology let them watch the brain respond to visual scenes.  And what they saw didn't look like a module, but involved activation all over the brain."  Do we conclude that decades of research on vision got it all wrong just because lots of brain tissue lights up when we look at things?  Of course not!  Bergen's comment is nothing more than a misguided interpretation of functional MRI and its relation to computational systems in the brain.

2. To push the point, it's not even clear to me that Bergen's theory has anything to do with language.  It is a theory of conceptual representation, not a theory of how the brain takes an acoustic signal and extracts and transforms the relevant bits to make contact with that conceptual system.  The latter issue is what occupies most linguists' time and theoretical focus.  Does Bergen claim that his theory explains cochlear filtering of the acoustic signal? No. Does he claim that his theory explains how that signal is elaborated in the frequency and time domains to yield a spectro-temporal representation of the signal? No. Does he claim that the theory explains how that spectro-temporal signal makes contact with representations of word forms in the listener? No. Does visual simulation of the events described in the sentence explain the word order in the sentence? Or the position and use of words like "the" and "to" in that sentence? No. Those are the kinds of things that perceptual scientists and linguists worry about: the transformation of the acoustic signal into some format that will allow contact with meaning. Bergen's simulation theory has nothing to say about any of it, which means that it has nothing to say about the "module for language" that many linguists used to believe in.  Moral: don't claim to have solved puzzle A when you are fiddling with the pieces of puzzle B.

3. Simulation theories of conceptual representation don't solve any problems.  Let's consider Bergen's theory: we understand the sentence "the shortstop threw the ball to first base" by simulating what it would be like to see the action and by simulating what it would be like to do the action.  And, he argues elsewhere, we understand things we have never seen or done by combining or generalizing from things we have seen and done.  So "flying pig" is understood by combining the experienced concept of FLY (as seen with birds) with that of PIG.  The result is the visual activation of the image of a pig with wings, which is the neural basis of our understanding.  But wait, Bergen said that the way we understand an action (flying is an action) is by simulating it in our visual system and by doing it with our motor system.  It's not clear how we can simulate FLYING PIG in our motor system, so the motor part must not be critical in this case, which makes us question whether it is critical in the shortstop-throwing-a-ball case.  (Good thing we have a reason to question the motor part, because then we have an explanation for why quadriplegic individuals can enjoy baseball as much as the rest of us.)  So, we must conclude, simulation of the perceptual bit is where our understanding of "flying pig" comes from.  But now I'm confused.  How do we know which perceptual experience to simulate?  Do I combine my experience with pigs and birds and give the hybrid creature wings? Or do I combine pig with Superman and give it a cape (a possibility noted in the interview)?  Or maybe I combine pig with my experience flying on 737s and imagine a pig sitting in coach ordering a Diet Coke.  Or should I combine pig with my baseball experiences and picture a mini pig being used as a baseball and getting smacked out to center field?  Maybe, an embodied theorist might claim, that's the cool part: depending on which experiences I combine, I get a different meaning.  Fine, but let's flip it around.
How do I know that a pig with wings, a pig in coach, and a ball-shaped pig (one flapping, one sitting and sipping, and one hurtling through space) are all examples of flying pigs?  What is telling me that each of those simulations is linked?  You might say that they are linked due to similarity of experience.  By what metric?  My perceptual experiences with each of these kinds of FLYING are wildly different.  How does the brain know to associate them?  Something must be telling the system that those instances are "similar" in the relevant respects.  Now we need a theory of that!  Here's the point: simulating a specific experience of, say, FLY can't be enough because it doesn't capture our ability to generalize the meaning to birds, planes, and baseballs.  We have to be "simulating" something more abstract such that it captures those generalizations; and if we are "simulating" an abstract something, we might as well call it an abstract representation, just like in "classical" theories. Saying that we understand by "simulation" just relabels the problem; it doesn't solve anything.

I'm sure we could go on, but I think I'll just conclude instead: Bergen's theory is not about language, so whatever claims are made on that front are just hyperbole.  And in the domain where the theory actually applies, it doesn't improve our understanding.

Thursday, May 2, 2013

Post Doctoral Position - Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig (Obleser lab)


The Max Planck Institute for Human Cognitive and Brain Sciences (MPI CBS) in Leipzig and the Max Planck Research Group “Auditory Cognition” (headed by Jonas Obleser) are offering a postdoctoral researcher position for an initial period of two years, preferably starting by October 2013.
Successful candidates will have a PhD in cognitive neuroscience, psychology, or natural sciences. Prior experience with either fMRI or EEG/MEG methods is expected, and an interest in further applying and combining both domains in their research is highly desirable. Candidates with a background and/or interest in advanced fMRI methods are particularly encouraged to apply.

The successful candidate will share our enthusiasm for problems of auditory cognition and auditory neuroscience, and will ideally have already demonstrated this by contributing to the field. However, researchers with a background in visual or other areas of neuroscience are also encouraged to apply. He or she should have a solid methods background and strong methodological interests, hands-on experience with data and statistical analysis, and an interest in co-supervising the PhD and Master's students in the group. The position does not include any teaching obligations.
Starting date is flexible. Salary is dependent on experience and based on MPI stipends or equivalent salary according to German Public service regulations.

The research will be conducted at the MPI CBS in Leipzig, Germany, an internationally leading centre for cognitive and imaging neuroscience equipped with a 7.0 T MRI scanner, three 3.0 T MRI scanners, a 306-channel MEG system, a TMS system, and several EEG suites. All facilities are supported by experienced IT and physics staff. Our institute (just 190 km, or 70 minutes by train, south of Berlin) offers a very international environment, with English being the language spoken in the laboratory, and a friendly and generous community of researchers with diverse backgrounds, supported by excellent infrastructure.

In order to increase the proportion of female staff members, applications from female scientists are particularly encouraged. In the case of equal qualifications, preference will be given to disabled applicants.

Applications should be sent to personal@cbs.mpg.de using the application code “PD 03/2013” in the subject line. Please send your application as a single PDF attachment, with the file name containing your surname. It should include a cover letter (max. 2 pages) that also specifies your future research interests; a CV; up to three representative reprints; and contact details of two personal references. This call remains open until the position is filled.

For further details please contact Dr Jonas Obleser, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany, obleser@cbs.mpg.de