Saturday, March 28, 2009

Auditory and Language Neuroscience Society

Well, we asked if you would be interested in attending a conference on auditory and language neuroscience, and the answer was a fairly resounding YES. More than half of Talking Brains survey respondents said they'd attend regularly, and another 32% said they'd attend occasionally. Apparently, there is a decent amount of interest in this kind of conference.

It turns out that Andrew Lotto (Arizona), rockin' the Kiss shirt in the photo at left, and Julie Liss (Arizona State), not rockin' a Kiss shirt in the photo on the right, have been holding small-scale meetings with a focus rather nicely in line with our interests, namely their Auditory Cognitive Neuroscience Society meeting. I attended the ACNS meeting this past January and was impressed by the unique format, which promoted lots of discussion and a great deal of interaction among people of differing views and approaches. Andrew and Julie were foolish enough to let David and me get involved in developing their society/meeting. We are still in the planning stages, but here is what the four of us have on the table so far...

First, we decided a name change was in order. The current favorite: Auditory and Language Neuroscience Society (ALNS). We want the meeting to include folks who study language (broadly construed to include everything from motor control of speech to sentence comprehension to sign language) as well as auditory perception in both humans and animals. We are considering an annual meeting that will bounce between the East and West coasts of the U.S. To date we have a governing board that includes (besides Hickok, Liss, Lotto, & Poeppel) F. Guenther, L. Holt, W. Yost, and R. Zatorre, with more invites in the works.

The current plan is to hold the inaugural meeting sometime during the 2010/2011 academic year. Stay tuned for more information. We welcome your comments, of course!

Thursday, March 26, 2009

A couple of photos from CNS 2009

Here are a couple of photos from the CNS meeting in San Francisco. David and I will post a comment or two in the near future with a bit more scientific content...

David saying something he thinks is profound. Greig de Zubicaray not so sure.


Talking Brains West post doc, Kai Okada, who is almost as big as her latte, explaining our study showing that area Spt has both sensory and motor response properties.


David and Greg working on a detailed Talking Brains entry on the neural basis of something.


Angela Friederici explaining why we've got it all wrong.

Wednesday, March 25, 2009

Talking Brains is inching towards a useful medium! -- Comment from Luciano Fadiga

When David and I started this blog, we hoped it would be a forum for discussion of issues in language science that would include PIs as well as junior scientists. Until recently, when we started to see an uptick in commentary, it had been just the David and Greg show. In this light, I can't tell you how happy I am to hear from Luciano Fadiga, whose work has been a topic of this blog on occasion. I've had very interesting, open, and informative email discussions with Luciano, and now I'm thrilled to see he is willing to discuss some of these issues publicly so that we all can benefit.

Although we've never met personally, I can tell you that I like Luciano quite a lot and learn much from his insights and ideas. So welcome, Luciano! By the way, you noted in your comment that the blog is unbalanced in that we get to post on the main page while others are relegated to the virtual basement a click, and maybe a scroll, below. When we started the blog we offered to post commentary from other investigators on the main level -- a kind of "from the lab of" feature -- but no one took us up on it. The offer still stands! So in that spirit, I'm re-posting Luciano's comment here. I will respond in the basement. :-)

****************
Dear Greg,

I see that now we agree on almost everything!

1) We think that the addition of noise to speech is important to make the task difficult enough (we now know from a new experiment that this is definitely true, and we are trying to understand whether it is a ceiling-effect problem or whether it reflects other mechanisms).

2) We both think that the motor involvement may reflect an attention-like mechanism (ever heard of the so-called "premotor theory of attention"?).

3) We now agree that 80% comprehension accuracy is a relevant deficit, and not proof that a lesion of Broca’s area has nothing to do with speech perception.

If you remember, however, the fact that Broca’s area plays a marginal role in phoneme discrimination is exactly the title of our contribution to the special issue on mirror neurons that you are editing now. I would say ‘surprisingly’ because I was completely unaware of your deeply anti-mirror position when I accepted the invitation to contribute to this issue.

However, here the weather is sunny, we are in the most beautiful country in the world, we are very happy with how things are going, and now we are even happier because you agree with us!
Unfortunately, we don't have enough time now to go into this debate in depth (we are supposed to be in the lab running experiments to address your - and our - doubts) and, as you probably know, I am quite refractory to formulating “highly theoretical theories”. I have the impression that you sometimes confuse what I write with what some much more intelligent and ‘multidisciplinary’ colleagues of mine write.

I confirm, however, that my intention over the last five years has been to investigate the CAUSAL role of the motor/premotor system in perception. Now we have interesting results, such as this one on CB and (please, prepare a lot of space on your blog!) another paper coming soon, showing that frontal aphasics do have problems in pragmatically representing others’ actions. However, even if I had found the opposite of what I am finding, it would have been precious information for me. My goal is to understand how the brain works by keeping a constant distinction between data and speculation.

What I definitely disagree with is the fact that your posts on the blog are so evident and visible, while the comments from others can be seen only by clicking on a small link. But this is a secondary issue.
Have a nice day (I hope sunny as well, in California, surrounded by palm trees)!
Friendly yours,

Luciano

Tuesday, March 24, 2009

What is speech perception?

I have to admit I'm starting to get a bit depressed about the field of speech perception.

Most experiments on "speech perception" ask participants to discriminate pairs of syllables or identify which sound they heard. The recent paper by D'Ausilio et al. used such a measure, and in their response to my commentary some questions were raised about my "task-specific effect" comment (I decided not to address them for lack of energy, but I'll tell you why I don't buy their arguments if anyone wants to know). A recent thoughtful comment here on Talking Brains by Marc Sato, however, included a quote that sparked sufficient energy to motivate a few words on my part. The quote was from a recent paper (this is not a dig against anything Marc said; it's just that this quote got me thinking):
speech perception is best conceptualized as an interactive neural process involving reciprocal connections between sensory and motor areas whose connection strengths vary as a function of the perceptual task and the external environment.

I don't know what other folks are studying when they study speech perception, but to me speech perception is best conceptualized as the process that allows a listener to access a lexical concept (~word meaning) from a speech signal. This is what "speech perception" does in the real world. It is one step in the conversion of variations in air pressure into meaning. I'm pretty sure the capacity for "speech perception" didn't evolve or develop to allow us to tell an experimenter whether we heard a /ba/ or a /pa/. In fact, the next time you have the pleasure of talking to a speech scientist who regularly employs such methods, pause after a sentence you speak and ask whether that last sentence contained the syllable /ba/ or not. S/he will have no idea; we don't perceive phonemes, we perceive word meanings. For the most part, the ability to make conscious decisions about phonemes is useless in the context of auditory speech processing, and, by the way, it is probably only available to literate individuals (I can dig up some refs if anyone is interested). If you are interested in studying the ability to make judgments about speech sounds, that is perfectly fine; after all, it appears to be highly relevant to reading -- an important issue. But don't assume that you are studying anything that is necessarily relevant to what happens in the real world of auditory speech processing.

Let me really stick my neck out and say this: if you are going to use a task that requires listeners to make judgments about speech sounds (syllable discrimination or identification), then in order to make the claim that you are studying anything relevant to how speech is actually processed in the real world, you better have some empirical data to back it up; i.e., it better hold for comprehension and not just metalinguistic judgments.

D'Ausilio et al.'s response regarding the role of motor cortex in speech perception

Let's go through D'Ausilio et al.'s response to my commentary point by point and see if there is anything valid.

Dr. Hickok thinks "we are perfectly capable of perceiving tog when it is presented in its acoustically unambiguous form." In our study we show exactly that, for this to happen, activity in the motor system needs to be consistent with the information reaching the temporal lobes. If this condition is not met, we may mistake tog for tod. As this happens, in spite of the obvious fact that the ears are indeed not attached to the motor system, we concluded that motor systems interact with superior-temporal cortex in the speech perception process.


First, no one doubts (that I know of) that the motor system can interact with superior temporal cortex. The issue is whether the motor system is required for speech perception to happen. Regarding my specific point that we hear tog as tog when it is presented in an acoustically unambiguous form, D'Ausilio et al. seem to be suggesting that the motor system is required for this to happen. Yet in their article they state, "In order to avoid ceiling effects in the phoneme identification task, we immersed vocal recordings in 500 ms of white noise." Hmmm. These ceiling effects would be what, exactly? That subjects would hear the phonemes exactly as they are presented acoustically, despite motor stimulation? Apparently, their own study demonstrates my point.

They go on to say:

One may conceptualize the underlying mechanisms as similar to attentional influences, stemming from the bidirectional feedback and feedforward connections [1] between superior-temporal and motor systems, and leading to an enhancement of superior-temporal activation as a consequence of the joint system they encompass [2].


Sounds right to me! In fact, this is pretty much what I said in my commentary: "there is strong evidence that motor-related systems are not fundamental to speech perception, but instead, simply modulate the process in some way."

Another important point we would like to stress is that, although we apply TMS on M1, we explicitly state in our paper that areas adjacent to M1 may be critically involved in speech perception.


My arguments are not aimed directly at M1 but at motor systems more generally.

The striking finding is, however, that the facilitation and disfacilitation is manifest in a somatotopic manner, yielding double dissociations on accuracies and reaction times, thus demonstrating a causal relationship between motor and acoustic mechanisms.


Yes, this is a very nice finding, and yes it does show that motor stimulation can influence speech perception. But again that is not what the argument is about.

Here's where it starts to get interesting:

..old neurological models [4, but see 5 for a critical historical commentary], and equally the proposal by Hickok [6], have denied a necessary role of the motor system in speech perception. This is in contrast with evidence from the aphasia literature, where it had been known for a long time that aphasia, even if its underlying lesion is restricted to the frontal cortex, is a general multimodal deficit affecting both the production of speech and its perception and comprehension [7]. Clinical tests for selecting aphasics from other brain-damaged individuals include, thus, speech comprehension test [8].


Reference #7 is a clinical textbook on aphasia (Rosenbek, J. C., LaPointe, L. L., & Wertz, R. (1995). Aphasia: A clinical approach (2nd ed.). Boston: College-Hill Press). I'm sure it is a wonderful book, but it is probably not the best primary source for their claim. Reference #8 is the Token Test (De Renzi, E., & Vignolo, L. (1962). The Token Test: a sensitive test to detect receptive disturbances in aphasics. Brain, 85, 665-678), which assesses comprehension of commands. The test involves a set of "tokens" of various sizes, colors, and shapes, and ranges from simple commands ("touch the yellow circle") to multi-clause, multi-step commands ("put the large black square on the small yellow circle"). This is obviously a very general measure that will pick up any number of deficits, ranging from auditory comprehension to working memory to executive function. It is not surprising that exclusively frontal lesions can lead to deficits on this task. More to the point, the issue of sentence comprehension is completely orthogonal to the role of motor involvement in speech perception.

Furthermore, aphasic patients generally exhibit abnormalities in speech perception [9], especially a deficit in phoneme identification, in tasks such as the one used in our study [10].


Now we come back to the question of task effects. There is no need to rehash the details of the arguments here -- well, maybe there is, but I won't -- other than to say (again) that performance on phoneme identification and discrimination tasks double-dissociates from performance on auditory comprehension tasks, even comprehension tasks that require fine phonemic discriminations (i.e., minimal pairs differing by a single feature). In short, it turns out that phoneme identification is a metalinguistic skill that doesn't reflect normal speech perception. The fact that aphasics may exhibit abnormalities on phoneme identification tasks carries no weight, because the task itself is invalid. If you aren't convinced of this, please read Hickok & Poeppel (2000, 2004, 2007) and this blog entry.

We should also stress that the hypothesis of perceptual relevance of motor systems requires precise experiments addressing this issue. However, Dr. Hickok refers to negative evidence that did not explicitly test perceptual relevance of motor centers, but rather are based on anecdotic reports or clinical tests at best


I referred to the observations that damage to M1, to Broca's area (bilaterally), to large sectors of fronto-parietal cortex (severe Broca's aphasia), or to the entire left hemisphere (my own Wada studies using minimal-pair stimuli) does not prevent speech perception, and that speech perception occurs in individuals who failed to develop speech (anarthria), who have not yet developed speech (infants), or who lack the capacity to develop speech (chinchillas). If these studies are "anecdotic reports or clinical tests at best," then I stand corrected.

[see ref. 11 for an interesting demonstration of why Broca's aphasics usually show intact comprehension in standard clinical tests, although they are impaired in such ability].


Reference 11 is a very interesting study of the comprehension of acoustically distorted words; the words were both low-pass filtered and compressed in time by 50% (Moineau, S., Dronkers, N. F., & Bates, E. (2005). Exploring the processing continuum of single-word comprehension in aphasia. J Speech Lang Hear Res, 48, 884-896). What they found was (i) that word comprehension was worse in the distorted compared to the non-distorted conditions, not only for Broca's aphasics but also for Wernicke's aphasics, anomic aphasics, right-hemisphere-damaged patients, and normal controls, and (ii) that word comprehension was more affected by distortion in Broca's aphasics, and even more so in Wernicke's aphasics, than in the other groups. This latter result indicates that damage to frontal or posterior left hemisphere regions impacts speech comprehension under non-optimal conditions. Given that the lesions associated with Broca's aphasia tend to be large, it is difficult to attribute this effect to damage to primary motor cortex, premotor cortex, or Broca's region, but for the sake of argument, let's suppose it is. This still does not mean that speech perception is grounded in motor systems. First, the fact remains that under optimal listening conditions, comprehension performance among Broca's aphasics did not differ statistically from normal controls (whereas Wernicke's aphasics' performance did). With the speech motor system largely out of the picture in Broca's aphasia, something is supporting auditory comprehension. Presumably it is the temporal lobe(s). If speech perception were grounded in the motor speech system, one would expect even normal speech perception to be impaired following large lesions to this system, yet this is not the case. Rather, the finding that frontal lesions can exacerbate speech recognition deficits under distorted listening conditions suggests that this tissue can modulate speech recognition processes to some degree, perhaps via motor prediction (forward models) or perhaps via attention, executive, or working memory systems.

what Dr. Hickok considers an index of preserved comprehension (80% of accuracy) is, in our view, a really relevant deficit


Eighty percent accuracy is indeed a significant deficit on a word recognition task. But as I pointed out, much of this deficit may not result from difficulty in speech sound perception but from higher-level dysfunction. Further, this performance level holds for non-fluent patients with effectively zero speech production capacity. In this context, 80% accuracy far outstrips the ~0% motor speech performance.

although we consider patients studies as strongly informative on brain function, we should keep in mind the fact that it is often extremely difficult to generalize these data to situations not specifically tested by a given study.


So the fact that patients with lesions to the motor system or with no motor speech capacity can nonetheless comprehend speech is non-generalizable because “situations” were not specifically tested? This is hand-waving. What are these “situations”? And more importantly, how does one explain the preserved comprehension in the face of motor speech system damage?

Now it gets confusing:

Dr. Hickok proposes three alternative interpretations to explain our data, that we summarize as follows: 1. motor to sensory flow (activation of forward models); 2. existence of a "third" decision area gathering information from sensory and motor cortices; 3. TMS targeted attentional processes towards phonological features. The first explanation is actually our interpretation.


If this is the authors’ interpretation then indeed we have no argument. However, in the next paragraph we see this statement:

One may still want to claim, "The temporal lobe perceives speech while the motor system only helps." However, we think that this position stems from old-fashioned philosophies about the nature of brain areas as a modular input or output processors. As we point out in our paper, advances in the brain sciences in the last twenty years have taught us that neuronal assemblies encompass motor and perceptual "modules" of the brain and build distributed functional systems to which especially the motor system makes an eminent contribution [14].


They don’t seem to believe that the motor system only helps (my position). What do they mean, then, when they say their findings are explained by motor-to-sensory flow? Maybe I’m too old-fashioned to understand (see below for old-fashioned speculation). By the way, they are incorrect about the “old-fashioned” theories, and it is not just the last 20 years that have taught us about sensory-motor relations. Wernicke noticed that posterior aphasics have speech production deficits -- that’s right, production deficits resulting from damage to sensory cortex -- and explicitly proposed that sensory systems interact with (help guide) the motor system during speech acts. Wernicke was just as modern in this respect as, say, Pulvermuller (whose model is functionally identical to Wernicke’s), except that the dynamic influence flowed most noticeably in the sensory-to-motor direction (sensory guides motor) rather than carrying the motor emphasis of the “modern” theorists.

Thus, specific motor-perceptual channels seem to exist in the brain and these channels work by associating the acoustic property of, e.g., the speech sound /b/ with the motor representation of the articulatory gesture leading to the production of the same speech sound in the listener's motor brain. We see this finding very close to the Liberman's idea of motor perception and we felt ourselves obliged to recognize the intellectual merit of his intuition.


Liberman believed that the activation of motor speech systems WAS speech perception, not merely an association with it. Again, this is an interesting and thoughtful (but incorrect) hypothesis. But how close is “very close”? We need some clarification.

Distributed systems with a strongly linked action and perception subcomponents explain patterns of deficits in aphasia, especially dissociations between motor and perceptual impairments in case of lesion of the distributed neuronal assemblies at their acoustic or motor ends [15, 16].


Ok, wait. So motor and perceptual impairments do dissociate? This is what I was arguing! No fair switching sides! (I kind of feel like Daffy Duck arguing “Duck season! Wabbit season!” with Bugs Bunny.) Why couldn’t we just start with this admission and move on from there?

Ultimately, as a distributed circuit needs to receive sensory input and control motor output, cutting of these afferent and efferent connections does explain the occasionally observed unimodal deficits mentioned in Hickok's contribution.


Oh, maybe they mean the peripheral sensory and motor systems… Like the entire left hemisphere, for example, or Broca’s area bilaterally.

By no means do these dissociations prove the modular nature of the language system. Lesion evidence argues in favour of a distributed systems account [17]. In sum, we do not think that Hickok's proposal provide reasonable arguments for rejecting functional interactions between motor and language systems, speech perception systems included.


I’m not claiming the language system is modular, nor am I rejecting the existence of functional interactions between motor and language (they probably meant sensory) systems. No fair switching arguments!
Here’s what I am guessing the authors believe (if you push hard enough to find out). Speech sounds are represented in distributed sensory-motor systems. Activation of the entire sensory-motor network = activation of a phonological representation. These distributed representations are then used for lexical look up. This is a reasonable hypothesis. However, this is not a motor theory of speech perception, nor is this a theory in which speech perception is “grounded” in motor circuits. If this is in fact what the authors believe, then it is misleading for them to place so much emphasis on the motor half of the equation. On the other hand, this work grows out of the mirror neuron literature where very explicit claims are made regarding the central role of the motor system in action understanding. So maybe they really do believe in a motor theory of speech perception.

I would love to hear from any of the authors on the paper so we can sort these issues out.

Speech Perception Does Not Rely on Motor Cortex: Response to D'Ausilio et al.

My comment on the D'Ausilio, et al. (2009) study has now been published on the Current Biology website: http://www.cell.com/current-biology/comments_Dausilio

If you read the original article in the context of the authors' (specifically Fadiga's and Pulvermuller's) work on mirror neurons and embodied semantics, it is clear that they are arguing for a motor theory of speech perception. My review specifically critiques this proposal. A reviewer of my commentary doubted that they actually believed what I was arguing against (a reasonable comment, because they do talk about the motor system in the context of a larger network), but after reading the authors' response to my commentary, it is clear that this is in fact exactly what they propose.

Have a look both at my critique and their response and let me know what you think. Essentially all I've done here is reiterate why the motor theory of speech perception was abandoned by speech scientists decades ago, with a little modern data augmentation. The motor theory was an interesting idea; it just happens to be wrong -- still.

In a subsequent post, I'll pick apart D'Ausilio et al.'s response to my commentary and show why none of their arguments hold any water.

D'Ausilio, A., Pulvermüller, F., Salmas, P., Bufalari, I., Begliomini, C., & Fadiga, L. (2009). The motor somatotopy of speech perception. Current Biology, 19(5), 381-385. DOI: 10.1016/j.cub.2009.01.017

Friday, March 20, 2009

Multiple Research Assistant/Fellowship Positions -- University of Maryland

Multiple Research Assistant/Fellowship Positions

The Department of Linguistics at the University of Maryland, College Park, is looking to fill up to three full-time positions for post-baccalaureate researchers. Starting date for all positions is summer or fall 2009. Salary is competitive, with benefits included. The positions would be ideal for individuals with a BA degree who are interested in gaining significant research experience in a very active lab as preparation for a research career. Applicants must be US or Canadian citizens or permanent residents, and should have completed a BA or BS degree by the time of appointment. Previous experience in linguistics is required, and relevant research experience is preferred.

Applicants may request to be considered for all positions. Review of applications for all positions will begin immediately, and will continue until the positions are filled. For best consideration, completed applications should be received by April 21st.

Position #1: Research Assistant in Psycholinguistics/Cognitive Neuroscience

This person will take a leading role in research projects in psycholinguistics and cognitive neuroscience of language. The person will be involved in all aspects of the design, testing and analysis of studies of language comprehension in adults, using behavioral and neuroscientific techniques, including ERP and MEG brain recordings (training provided). The person will also play a key role in the management of an active lab group and will contribute to Maryland's new IGERT training program in "Biological and Computational Foundations of Language Diversity". Previous experience in linguistics and/or psycholinguistics is preferred. The ability to interact comfortably with a wide variety of people (and machines) is a distinct advantage. The position is for a one year initial appointment, with the possibility of extension beyond that time. For more information contact Dr Colin Phillips, colin@umd.edu, (301) 405-3082. http://www.ling.umd.edu/colin

Positions #2-#3: Baggett Research Fellowships 2009-2010

One-year Baggett Fellowships are full-time positions intended for individuals with a BA or BS degree who are interested in gaining significant research experience in an active interdisciplinary environment before pursuing graduate study in some area of linguistics or cognitive science. One or two fellowship positions are available for the 2009-2010 year. Salary is competitive, with benefits included.

Applicants for all positions should submit a cover letter (outlining relevant background and interests, including potential faculty mentors), a current CV, the names and contact information for 3 potential referees (letters are not needed as part of the initial application), and a writing sample. Fuller details at http://www.ling.umd.edu/baggett. All application materials should be submitted electronically to Jeff Lidz (jlidz@umd.edu). NOTE: Put "Baggett Fellowship" in the subject line. Prospective fellows should feel free to send a preliminary letter of interest to Dr Lidz or Dr Phillips.
=======================================================================
The Cognitive Neuroscience of Language Lab is a well-integrated community of over 40 faculty, students and research staff, engaged in research on a wide variety of areas of language, ranging from acoustics to semantics, in children and adults, normal and disordered populations, and covering around 10 languages. The lab has facilities for behavioral testing of infants, children and adults, two eye-tracking labs, an ERP lab and a whole-head MEG facility. The lab is affiliated with the Department of Linguistics and with the Neuroscience and Cognitive Science Program.

http://www.ling.umd.edu/

The University of Maryland is an Affirmative Action/Equal Opportunities Title IX employer. Women and minority candidates are especially encouraged to apply.
=======================================================================

Monday, March 16, 2009

Does speech perception rely on motor cortex? Of course not!

D’Ausilio and colleagues report a very nice new study showing that stimulation of human motor cortex (via TMS) directly affects the perception of speech sounds. TMS was applied to the lip or tongue areas of M1 while participants were asked to identify speech sounds that involved either prominent lip articulation ([b] and [p]) or prominent tongue articulation ([d] and [t]). They found a double dissociation: relative to a non-stimulation baseline, participants were faster to indicate that they heard a lip-related sound when TMS was applied to motor lip areas, and faster to indicate that they heard a tongue-related sound when TMS was applied to motor tongue areas. The authors conclude that “motor structures provide a specific functional contribution to the perception of speech sounds” and go on to propose “a modified ‘motor theory of speech perception’ according to which speech comprehension is grounded in motor circuits…”.
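
For readers who want the logic of the double dissociation spelled out, here is a toy sketch in Python (all reaction times below are invented for illustration; see the paper for the real data). The facilitation shows up as a site-by-sound interaction: each stimulation site speeds responses only to its "own" class of sounds.

```python
# Toy illustration of the double-dissociation logic in D'Ausilio et al. (2009).
# All reaction times are hypothetical; only the pattern matters.
mean_rt_ms = {
    ("lip TMS", "lip sound"): 540,        # congruent: facilitated
    ("lip TMS", "tongue sound"): 585,
    ("tongue TMS", "lip sound"): 590,
    ("tongue TMS", "tongue sound"): 545,  # congruent: facilitated
}

congruent = (mean_rt_ms[("lip TMS", "lip sound")]
             + mean_rt_ms[("tongue TMS", "tongue sound")]) / 2
incongruent = (mean_rt_ms[("lip TMS", "tongue sound")]
               + mean_rt_ms[("tongue TMS", "lip sound")]) / 2

# A negative site-by-sound interaction = each site speeds its own sounds.
print(f"congruent - incongruent = {congruent - incongruent:.0f} ms")
```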

I strongly disagree with this conclusion and I have convinced the editors of Current Biology to let me tell you why. Since I don't have the cool flashy experiment to report, I only get my piece on their website, not in print, but I'll take it nonetheless. Fadiga and company are writing a response to my arguments. Have a look when it comes out and let me know what you think.

D'Ausilio, A., Pulvermüller, F., Salmas, P., Bufalari, I., Begliomini, C., & Fadiga, L. (2009). The motor somatotopy of speech perception. Current Biology, 19(5), 381-385. DOI: 10.1016/j.cub.2009.01.017

Friday, March 13, 2009

Ever wonder where the AST theory came from?

"Bilateral but asymmetric in time, speech perception is..."

Most of you are probably unaware that Talking Brains East PI, David Poeppel gave up a lucrative stage career to try to unravel the mysteries of speech perception (actually a true statement). Well, here is the proof. You can call him Luke.

Thursday, March 12, 2009

Bilateral lesions to Broca's area

A couple of weeks ago a reader raised the question of whether unilateral lesions to Broca's area constitute a strong enough test of the motor theory of speech perception. I suggested they do, because such lesions sometimes severely disrupt speech production with minimal effects on the recognition (comprehension) of speech. The question continued to nag me though, so I started looking for cases in the literature of bilateral lesions to Broca's area. It turns out there are a handful. Here is the most interesting, a case reported by Levine & Mohr in 1979.

Case 3

A 20-year-old woman suffered a large left perisylvian stroke including Broca's area and developed chronic severe Broca's aphasia. Nine years later she suffered another stroke, this one involving the right homologue to Broca's area.


Here is Levine and Mohr's description of the patient's language abilities:
The patient's speech production was absent, and she was unable even to vocalize. Her speech comprehension was very slightly impaired; she obeyed two-commission but not three-commission verbal commands, and performed approximately 1 standard deviation above the average aphasic patient in the auditory comprehension subsection of the BDAE. p. 932

Clearly, the patient is able to comprehend words quite well; her only difficulty being the comprehension of rather complicated sentences, consistent with Broca's aphasia.

So,

large unilateral lesions that destroy speech production ability do not cause substantial speech recognition deficits (Cases 7, 11, 16, 17; Naeser et al. 1989);

complete deactivation of the entire left hemisphere during Wada procedures does not cause substantial speech recognition deficits (Hickok et al. 2008);

bilateral lesions to the frontal operculum that cause Foix–Chavany–Marie syndrome (anarthria/severe dysarthria and loss of voluntary muscular functions of the face and tongue including speech) do not cause substantial speech recognition deficits (Weller, 1993);

failure to develop the ability to speak does not cause substantial speech recognition deficits (Lenneberg, 1962; Christen, et al. 2000);

and, bilateral lesions to Broca's region do not cause substantial speech recognition deficits (Levine & Mohr, 1979).

Now can we all agree that there is a small problem with the motor theory of speech perception?

References

Christen, H. J., Hanefeld, F., Kruse, E., Imhäuser, S., Ernst, J. P., & Finkenstaedt, M. (2000). Foix-Chavany-Marie (anterior operculum) syndrome in childhood: a reappraisal of Worster-Drought syndrome. Dev Med Child Neurol, 42, 122-132.

Hickok, G., Okada, K., Barr, W., Pa, J., Rogalsky, C., Donnelly, K., Barde, L., & Grant, A. (2008). Bilateral capacity for speech sound processing in auditory comprehension: Evidence from Wada procedures. Brain and Language, 107(3), 179-184. DOI: 10.1016/j.bandl.2008.09.006

Lenneberg, E. (1962). Understanding language without ability to speak: a case report. Journal of Abnormal and Social Psychology, 65, 419-425.

Levine, D. N., & Mohr, J. P. (1979). Language after bilateral cerebral infarctions: role of the minor hemisphere in speech. Neurology, 29(7), 927-938.

Naeser, M. A., Palumbo, C. L., Helm-Estabrooks, N., Stiassny-Eder, D., & Albert, M. L. (1989). Severe nonfluency in aphasia: Role of the medial subcallosal fasciculus and other white matter pathways in recovery of spontaneous speech. Brain, 112, 1-38.

Weller, M. (1993). Anterior opercular cortex lesions cause dissociated lower cranial nerve palsies and anarthria but no aphasia: Foix-Chavany-Marie syndrome and "automatic voluntary dissociation" revisited. J Neurol, 240(4), 199-208.

Sunday, March 8, 2009

The Battle for Broca's area

David commented previously on Grodzinsky and Santi's (2008) TICS paper on the function of Broca's area. Justifiably in my view, David expressed his disappointment that an obviously multi-functional region was being linked to a single function, syntactic movement. In addition, a recent reply to the paper in TICS by Roel Willems and Peter Hagoort (2009) highlighted the possible role of Broca's area in semantic processing, an issue that was not addressed at all in the Grodzinsky and Santi paper. So there is no shortage of critics of the opinion piece. I might as well join the party...

By way of reminder, Grodzinsky and Santi discuss four main hypotheses regarding the function of Broca's area: action comprehension, working memory, syntactic complexity, and syntactic movement. They argue that the last is the one best supported by the data. Having dabbled in theoretical explanations of the pattern of comprehension deficits in Broca's aphasia, I'm sympathetic to the syntactic movement account. However, there are two problems with the proposed link to Broca's area (BA 44/45). One is that the argument against the working memory theory is exceptionally weak; the other is that the comprehension deficit does not appear to be linked specifically to Broca's area.

The working memory theory. The idea behind the working memory theory is that sentences containing syntactic movement require additional working memory resources to process, and that Broca's region (or more accurately, the lesions that produce Broca's aphasia and agrammatic comprehension) is (are) critical for working memory. G&S argue against this position on the basis of "preliminary studies" (p. 477) of the comprehension of reflexive constructions, such as Mama Bear touched herself, which they suggest require working memory. Six Broca's aphasics were able to comprehend such sentences "contrary to the prediction of a WM deficit account" (p. 477). Well, what if the amount of working memory required to comprehend Mama Bear touched herself is less than the amount required to comprehend The cat that the dog chased was very big? If there are working memory differences between these constructions, which seems a priori plausible, then the argument against a working memory explanation is invalid. Further, our own work suggests that working memory may account for at least a portion of the comprehension pattern attributed to patients with left frontal convexity lesions (see this post).

Broca's area is not specifically implicated in agrammatic comprehension. It is well known (well, maybe not well known, but well established) that damage restricted to Broca's area does not cause Broca's aphasia (Mohr, 1976; Mohr et al., 1978). Agrammatic comprehension -- i.e., the pattern of comprehension deficits that Grodzinsky and Santi are trying to explain -- is associated with Broca's aphasia, not with Broca's area lesions. From this we can infer that lesions to Broca's area alone do not cause syntactic movement deficits. Add to this the observation that conduction aphasics, i.e., patients with posterior lesions, also tend to exhibit agrammatic comprehension, and you have a one-two punch: lesions restricted to Broca's area don't cause agrammatic comprehension, and lesions to other brain regions (sparing Broca's area) can cause agrammatic comprehension. From this we conclude that Broca's area plays no special role in syntactic movement (Hickok, 2000).

One of the complications here is that there are probably a number of ways to cause deficits in the comprehension of sentences with long-distance dependencies. These structures tend to be harder to comprehend even for control subjects. As such, one would expect that any disruption of processing efficiency, e.g., working memory or attentional deficits, could cause "impairments" in the comprehension of these types of sentences. Of course, a specific disruption of the syntactic operation computing long-distance dependencies could also disrupt comprehension of these sentences, but other possible sources of the deficit are rarely assessed, let alone ruled out, when testing such patients.

We still don't understand the role of Broca's area in sentence comprehension, working memory, action processing, semantic processing, or any of its other possible functions.

References

Grodzinsky, Y., & Santi, A. (2008). The battle for Broca's region. Trends in Cognitive Sciences, 12(12), 474-480. DOI: 10.1016/j.tics.2008.09.001

Willems, R. M., & Hagoort, P. (2009). Broca's region: battles are not won by ignoring half of the facts. Trends in Cognitive Sciences, 13(3), 101. DOI: 10.1016/j.tics.2008.12.001

Hickok, G. (2000). The left frontal convolution plays no special role in syntactic processing. Behavioral and Brain Sciences, 23, 35-36.

Mohr, J. P. (1976). Broca's area and Broca's aphasia. In H. Whitaker & H. A. Whitaker (Eds.), Studies in neurolinguistics, vol. 1 (pp. 201-235). New York: Academic Press.

Mohr, J. P., Pessin, M. S., Finkelstein, S., Funkenstein, H. H., Duncan, G. W., & Davis, K. R. (1978). Broca's aphasia: Pathological and clinical. Neurology, 28, 311-324.

Friday, March 6, 2009

Post-Doctoral Research Associate in the Neural Bases of Speech and Lexical Processing - Brown University

A post-doctoral position in the neural bases of speech and lexical processing is available starting the summer of 2009. The research program focuses on using event-related fMRI to investigate neural systems underlying phonetic category invariance, competition across levels of the grammar, and the interaction of phonetic/phonological properties and lexical access in speaking and understanding. Facilities include a research-dedicated 3T Siemens Trio MR system located at the Magnetic Resonance Facility on the Brown University campus (http://www.brainscience.brown.edu/MRF/). Candidates should have an interest and some background in language processing research, and should have some experience with functional neuroimaging, including fMRI design and analysis. The Ph.D. must be completed before starting. The position is for two years, with possible extension for up to five years. Send a vita, a brief statement of research interests, and 3 letters of reference to Sheila_Blumstein@brown.edu. Applications will be reviewed until the position is filled.


Brown University is an Equal Opportunity/Affirmative Action Employer

Abstraction, and a way (maybe) to image it?

An Opinion piece in the January issue of Trends in Cognitive Sciences by Jonas Obleser and Frank Eisner highlights the necessity of deriving abstract representations in spoken language comprehension. Saying this (i.e., affirming the necessity of abstraction) often is worthwhile, I think, because there continues to be controversy. Here's my $0.02: claiming that there exist abstract representations is not tantamount to denying the existence of episodic/indexical effects. It seems to me that on that topic there is controversy -- but no real issue. I think the evidence for indexical effects is strong, and I think the evidence for abstract representations is overwhelming. Both aspects of speech recognition need to be accommodated in any successful theory.

A second helpful aspect of Jonas' and Frank's paper is that they point to recent developments in the analysis of imaging data (well exemplified by Elia Formisano's recent work) that raise the bar in terms of elucidating spatial results.

Now ... a different issue is whether even such granular spatial analyses get us closer to computational theories of perception a la neural coding. But that's just my thing ...

Reminder: new outlet for Cognitive Neuroscience of Language

The journal Language and Cognitive Processes now has a section devoted to Cognitive Neuroscience of Language, edited by David Poeppel. Please consider submitting your manuscripts there. Don't be shy.

The electrophysiology of everything -- about vowels …

A recent paper in the Journal of Neuroscience by Bonte, Valente, and Formisano (2009) reports the results of an EEG/ERP study in which listeners performed a one-back task on a sequence of vowels (/a/, /i/, /u/) from three different speakers (male and female; different tokens from each speaker).

What’s impressive – and a little bit intimidating – about this paper is that it tries to tackle a whole bunch of issues in one single experiment: the perceptual analysis of vowels, the abstract representation of vowels versus speaker identity, task-driven modulation of cortical responses, the role of oscillations in neural coding and perception … Pretty heady stuff. There is much to like about this paper, particularly the thorough and creative analysis of the electrophysiological data. Anyone using ERP or MEG to study speech will benefit from looking at all the analyses they used. The paper connects well to a recent imaging paper by Formisano and colleagues in Science (Science 7 November 2008 322: 817) -- in which, by the way, the same materials were used -- regarding the anatomic representation of speech versus speaker.

My favorite part of the article is the fact that it starts out with a nice model, a neural coding hypothesis regarding how the neurophysiological response profile will change as a function of the interaction between stimulus materials and task. Basically, executing a task (e.g., judging vowel identity) realigns the phase of the response typically elicited by the stimulus, and the nitty-gritty of the realignment depends on the specifics of the task. This is the kind of model/hypothesis at the interface of systems neuroscience and cognitive neuroscience that I wish we saw more of in the literature. My second favorite part is the thoughtful discussion in which Bonte et al. link their study to the literature on speech, oscillations, top-down effects, and so on. My third favorite part is the fact that their data further support the view that analyzing response phase yields a tremendous amount of additional information, a view which I favor and which has received provocative support in the recent literature (see lots of stuff by Charlie Schroeder and colleagues, a 2007 paper by Luo & Poeppel, and a brand new paper by Kayser et al. in Neuron 2008).
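
To make the phase-realignment idea concrete, here is a minimal simulation sketch (my own toy construction, not the authors' analysis pipeline): when a task realigns the phase of an ongoing oscillation across trials, inter-trial phase coherence and the averaged evoked response both increase, even though nothing changes at the single-trial amplitude level.

```python
# Toy demo: phase realignment across trials boosts inter-trial phase
# coherence (ITC) and the evoked average, with no amplitude change.
import numpy as np

rng = np.random.default_rng(0)
fs, f = 250, 10.0                    # sampling rate (Hz), oscillation frequency (Hz)
t = np.arange(0, 1.0, 1 / fs)        # 1 s of "post-stimulus" time
n_trials = 100

def simulate(phase_jitter):
    """Each trial: a 10 Hz oscillation with a random phase offset plus noise."""
    phases = rng.uniform(-phase_jitter, phase_jitter, n_trials)
    signal = np.array([np.sin(2 * np.pi * f * t + p) for p in phases])
    return signal + rng.normal(0, 0.5, (n_trials, len(t)))

def itc(trials):
    """ITC at f: length of the mean unit phasor across trials."""
    spectra = np.fft.rfft(trials, axis=1)
    k = int(round(f * len(t) / fs))  # FFT bin corresponding to f
    phasors = spectra[:, k] / np.abs(spectra[:, k])
    return np.abs(phasors.mean())

passive = simulate(phase_jitter=np.pi)      # phases scattered across trials
task = simulate(phase_jitter=np.pi / 8)     # phases realigned by the task

for name, trials in (("passive", passive), ("task", task)):
    evoked_peak = np.abs(trials.mean(axis=0)).max()
    print(f"{name}: ITC = {itc(trials):.2f}, evoked peak = {evoked_peak:.2f}")
```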

The technical tour-de-force notwithstanding, I do have some questions that require clarification. My major question concerns the proposed time line. Two conclusions are highlighted. One is that the initial acoustic-phonetic analysis is reflected in the N1/P2 responses but that abstract representations are really only reflected late (300 ms and later). The data are the data, but I do find this conclusion surprising in light of the mismatch negativity (MMN; ERP) and mismatch field (MMF; MEG) studies that highlight access to abstract representations by the time the MMN peaks (say 150-200 ms). For example, various findings by Näätänen and colleagues (e.g., Nature, 1997) and data from Phillips and colleagues (e.g., J Cog Neurosci 2000; PNAS 2006) suggest abstraction ‘has happened’ by then. Is this different conclusion due to task differences? My second question has to do with how early top-down task effects are revealed. Again, selective attention tasks reveal response amplitude modulation at the N1/N1m and even earlier (e.g., Woldorff). In fact, people working on brainstem responses, such as Nina Kraus or Jack Gandour, see early early early effects. So do task differences make the difference there, too? I would like to understand this part better.

I do like the technical cleverness and experimental simplicity of this study, but I would like to get my head around the time line more.

Bonte, M., Valente, G., & Formisano, E. (2009). Dynamic and task-dependent encoding of speech and voice by phase reorganization of cortical oscillations. Journal of Neuroscience, 29(6), 1699-1706. DOI: 10.1523/JNEUROSCI.3694-08.2009

Thursday, March 5, 2009

Understanding language without ability to speak

In 1962 Eric Lenneberg published an interesting case report of an 8-year-old boy who had a congenital disorder that prevented him from developing the ability to speak. He could perform many oro-facial behaviors like chewing, swallowing, blowing, and licking, and he spontaneously made noises "that sound somewhat like Swiss yodeling," but he could not speak. With intensive speech therapy he eventually achieved the ability to "repeat a few words after his speech therapist or his mother but the words are still barely intelligible" (p. 420). In contrast, his speech comprehension had been judged as fully normal by the author as well as by neurologists and speech pathologists over the course of 20 or so visits. Lenneberg goes on to report a more systematic examination of the boy's comprehension, which revealed it to be well preserved.

Lenneberg couched his case report, which he indicated is typical of a larger category of patients, in the context of theories of speech development which held that babbling and speech output are critical to the normal development of language abilities, including receptive (comprehension) skills. He argued that this type of case speaks against the view that speech production is critical to the development of receptive speech.

Today, Lenneberg might have highlighted the relevance of his case for mirror neuron/motor theories of speech perception. These theories claim that speech is perceived by mapping acoustic speech sounds onto motor representations coding the production of speech. For example, Rizzolatti and Arbib (1998) stated,

mirror neurons represent the link between sender and receiver that Liberman postulated in his motor theory of speech perception as the necessary prerequisite for any type of communication (p. 189)


Such a theory would seem to predict that if an individual failed to develop motor systems underlying speech production they should not be able to perceive and comprehend speech. Lenneberg's report demonstrates that this prediction is incorrect.

Are there more recent cases of the development of normal language comprehension in the face of failures to develop speech production? Yes, here is a case I recently came across (Case 1 from Christen et al. 2000).

A three-month-old girl had an acute febrile illness (possibly meningitis) with epileptic seizures. After recovery from the acute illness, her motor development was delayed, she exhibited constant drooling, took only pureed food, and never acquired expressive language. She attended a school for disabled children, but made normal progress in writing and reading. She was examined at age 15 by the paper's authors. The patient presented with pseudobulbar palsy (difficulty with orofacial movements such as chewing, swallowing, and speech); her “mental state” was normal, but she could communicate only by non-verbal means, as she was unable to produce identifiable speech sounds. However, her language comprehension was reported as normal. An MRI showed bilateral lesions of the anterior opercular region, which the authors believed were acquired at age three months during the child's illness and which damaged speech output systems. Bilateral lesions in this region in adults produce a similar disruption of speech output without affecting comprehension abilities -- the so-called Foix-Chavany-Marie syndrome.

Again, we have a clear demonstration of preserved receptive speech abilities despite a complete lack of development of motor speech capacity. This kind of result is not straightforwardly explained by theories which hold that speech production is critical for speech perception.

References

Christen, H. J., Hanefeld, F., Kruse, E., Imhäuser, S., Ernst, J. P., & Finkenstaedt, M. (2000). Foix-Chavany-Marie (anterior operculum) syndrome in childhood: a reappraisal of Worster-Drought syndrome. Dev Med Child Neurol, 42, 122-132.

Lenneberg, E. (1962). Understanding language without ability to speak: a case report. Journal of Abnormal and Social Psychology, 65, 419-425.

Rizzolatti, G., & Arbib, M. (1998). Language within our grasp. Trends in Neurosciences, 21(5), 188-194. DOI: 10.1016/S0166-2236(98)01260-0

Tuesday, March 3, 2009

Neural mechanisms underlying auditory feedback control of speech

Auditory feedback is an important aspect of speech production. Delayed auditory feedback results in non-fluencies, and altered speech feedback, e.g., shifting fundamental frequency, results in compensatory speech adjustments opposite the direction of the alteration. What is the neural mechanism underlying this system? That was the question addressed in a recent report by Tourville, Reilly, & Guenther (2008).

The design of their fMRI experiment was straightforward. Subjects produced words under two conditions: (i) normal auditory feedback and (ii) auditory feedback in which the first formant frequency of their speech was shifted either up or down in real time. As expected, subjects compensated for this shift rapidly, within about 100 msec. To identify the brain regions involved in this process, Tourville et al. compared the shifted speech condition with the non-shifted speech condition. Here's what they found:


It is no surprise that auditory cortex is involved in registering the shift; it is perhaps interesting that a good chunk of right STG is highlighted in this formant-shift manipulation. The involvement of right pre-motor cortex is a bit of a mystery... But what I'm most excited about is the location of the major blob of activation in the left hemisphere. This seems to be centered right over area Spt, which of course is the region I believe supports sensory-motor integration for speech and related functions (i.e., it translates between sensory and motor speech representations). The observation that this area is involved in auditory feedback control fits perfectly with this view. After all, sensory-motor integration is critically involved in auditory feedback control of speech. My previous posts on the link between delayed auditory feedback and this left planum temporale region converge nicely with this new study.
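
To make the compensation logic concrete, here is a toy negative-feedback sketch (my own illustration, not Guenther's DIVA model itself; all parameter values are made up): the system hears its own first formant shifted, registers the mismatch with its auditory target, and adjusts production opposite the direction of the shift.

```python
# A toy negative-feedback loop for auditory feedback control of speech.
# NOT the DIVA model: just the bare compensation logic. All values are
# hypothetical (target F1 = 700 Hz, feedback shifted up by 100 Hz).
def simulate_compensation(target_f1=700.0, shift=100.0, gain=0.3, steps=8):
    produced = target_f1                  # start by producing the target
    for step in range(steps):
        heard = produced + shift          # altered auditory feedback
        error = target_f1 - heard         # mismatch with the auditory target
        produced += gain * error          # corrective adjustment, opposite the shift
        print(f"step {step}: produced F1 = {produced:.1f} Hz, "
              f"heard F1 = {heard:.1f} Hz")

simulate_compensation()
```

Run it and produced F1 drifts downward (opposite the upward shift) until what is heard again matches the target -- the same direction of compensation the subjects showed within about 100 msec.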

I highly recommend having a close look at this paper. There's a lot more in it than I have the energy to outline here, including a nice computational model (Guenther's DIVA model), structural equation modeling, and a very useful literature review.

References

Tourville, J., Reilly, K., & Guenther, F. (2008). Neural mechanisms underlying auditory feedback control of speech. NeuroImage, 39(3), 1429-1443. DOI: 10.1016/j.neuroimage.2007.09.054