Wednesday, May 27, 2009

Do mirror neurons exist in humans? A new study says 'no'

A new paper, to appear in PNAS, is already generating a buzz in the press.

The study is by Alfonso Caramazza and colleagues, who used an fMRI adaptation paradigm. Adaptation was assessed both for observing (O) and then executing (E) actions and for executing and then observing (as well as in O-O and E-E conditions). Assessing adaptation in both directions, E->O and O->E, is critical because (i) if mirror neurons exist, adaptation should occur in both situations, and (ii) adaptation in the case of observing and then executing could be interpreted as motor priming during the observation event. The critical result was that in the regions they examined, fMRI adaptation was found in E-E conditions, showing that there is coding of information relevant to action execution, and also in O-E conditions, suggesting prima facie that action observation and action execution activate the same set of neurons in the ROIs. However, E-O trials did not exhibit adaptation, which they should have if there is in fact a shared substrate for observation and execution (the order of presentation shouldn't matter), and neither did O-O trials, indicating that the ROIs were not coding perceptually driven information. This pattern of results can be explained if the ROIs code action execution information (E-E adaptation) and if observing an action that one might have to execute primes these action coding regions (O-E adaptation).

This is a significant advance over previous attempts to find adaptation effects in the human mirror system because clear evidence of adaptation was identified, ruling out a power issue, and because they assessed observation-execution adaptation in both directions. This allows the authors to conclude with some degree of confidence that the direct matching hypothesis is incorrect.

So could it be possible that mirror neurons don't exist in humans? I have said that such an outcome would be surprising. But this new result makes me wonder whether there might be something funky about the training situation of the macaques in which mirror neurons have been found that leads to the development of neurons with mirror properties. In other words, do mirror neurons even exist naturally in monkeys? ...

Wednesday, May 20, 2009

Do Broca's aphasics have trouble comprehending degraded speech?

Patients with Broca's aphasia are able to comprehend spoken words quite well; in fact, this preserved comprehension in the face of non-fluent speech production is a diagnostic criterion for the syndrome. This fact -- the dissociation between expressive and receptive speech -- demonstrates that the motor speech system is not critical to speech perception. Or does it?

A study by Moineau, Dronkers, & Bates (2005) suggests that Broca's aphasics have trouble comprehending single words under degraded acoustic listening conditions. This finding has been referred to as evidence supporting an important role for the motor system in speech perception. E.g., see this comment. But how solid is the finding?

Moineau et al. tested three aphasic groups (Broca's, Wernicke's, & anomic), right hemisphere non-aphasics (RHD), and control subjects on a word comprehension test under two listening conditions, clear speech and degraded speech (low-pass filtered and temporally compressed). The comprehension test was a picture-word verification test: Subjects heard a word and saw a picture that either matched or mismatched. They indicated match or mismatch by button press. Non-matching pictures were semantically and phonologically unrelated to the target (to the best of my reading).

In the clear speech condition only the Wernicke's patients showed any deficits on the comprehension task. In the degraded speech condition all subjects performed more poorly -- no surprise -- but now the Broca's patients performed statistically as poorly as the Wernicke's patients and both Broca's and Wernicke's performed worse than controls and RHD patients (Broca's did not differ from anomic aphasics, but Wernicke's did).

In other words, single word comprehension deficits in Broca's aphasia appear to be uncovered by presenting speech in an acoustically degraded form, and under these conditions they look as bad as Wernicke's aphasics. This is a pretty dramatic result! And it provides prima facie evidence in support of a role for the motor system in speech perception/comprehension.

But there's a problem. Two actually. The first is that the lesions in Broca's aphasia are not restricted to the motor system but also likely include many other frontal and parietal regions that may be important for attention, response selection, and other executive functions. Thus there is no direct evidence linking the motor speech system to the auditory comprehension deficit.

The other problem is the way Moineau et al. analyzed their data. Recall that the task is to detect matches and reject mismatches. This is a classic signal detection design. An important factor in signal detection experiments is response bias: some subjects may have a bias toward responding "yes" and others may have a bias in the reverse direction, and this affects the results. Luckily there are ways of correcting for response bias, for example the d-prime statistic, which uses the proportion of hits (correct acceptances) versus the proportion of false alarms (incorrect acceptances) to correct for bias. Unfortunately Moineau et al. did not calculate d-primes in their analysis. Instead they simply took the proportion correct on the match and mismatch trials to calculate accuracy scores, and this could lead to biased, possibly invalid results. In fact, when they looked at accuracy as a function of "congruence" (whether it was a match or mismatch trial), they reported that Broca's and Wernicke's patients have opposite biases! Wernicke's and control subjects performed better on the congruent trials (they tended toward "yes" responses), whereas Broca's and RHD subjects performed better on the incongruent trials (they tended toward "no" responses). Anomics showed no difference. These group differences in response bias suggest that the overall findings are indeed themselves biased.
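For concreteness, here's a minimal Python sketch of the d-prime correction (the hit and false-alarm rates are hypothetical, not Moineau et al.'s data). It shows why uncorrected percent correct is ambiguous: two subjects with identical raw accuracy can have quite different sensitivity once bias is factored out.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity corrected for response bias: z(hits) - z(false alarms)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Two hypothetical subjects, both 75% correct on average:
#   subject A says "match" liberally; subject B is conservative.
a = d_prime(hit_rate=0.99, fa_rate=0.49)  # avg accuracy = (0.99 + 0.51)/2 = 0.75
b = d_prime(hit_rate=0.60, fa_rate=0.10)  # avg accuracy = (0.60 + 0.90)/2 = 0.75
print(f"A: d' = {a:.2f}, B: d' = {b:.2f}")
```

Same raw accuracy, different underlying sensitivity -- exactly the ambiguity an uncorrected proportion-correct analysis cannot resolve.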

To illustrate the problem, consider the following graph. Each point along the x-axis is a different pair of hit and correct rejection scores (indicated on the y-axis) that average to the performance level (roughly 63%) for Broca's aphasics, eyeballed from Moineau et al.'s graph. These are values that reflect the reported bias, incongruent > congruent. The x-axis labels are the A-prime scores for a given pair of hit/correct rejection scores; A-prime is a bias-corrected estimate of proportion correct (it's more intuitive to think about than d-prime). Notice that for the same average accuracy, the corrected proportion correct scores (A-prime) vary from .7 to more than .8, and that all of the A-prime scores are greater than the reported accuracy of .63. Average uncorrected accuracy underestimates how well subjects are able to discriminate matches from mismatches in this range of values.
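The values in the graph can be reproduced with the standard nonparametric A-prime formula (Pollack & Norman, 1964). A quick Python check, using made-up hit/correct-rejection pairs that all average to 63% with the "no"-leaning bias reported for the Broca's group:

```python
def a_prime(h, f):
    """Nonparametric sensitivity (Pollack & Norman, 1964), for hit rate h >= false-alarm rate f."""
    return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))

# Hypothetical hit/correct-rejection pairs that all average to 63% correct,
# with correct rejections > hits (the "no" bias reported for Broca's patients):
for hit, cr in [(0.26, 1.00), (0.36, 0.90), (0.46, 0.80), (0.56, 0.70)]:
    fa = 1 - cr  # false-alarm rate is the complement of correct rejections
    print(f"hits={hit:.2f} CRs={cr:.2f} -> A' = {a_prime(hit, fa):.2f}")
```

Every A-prime in this range comes out between roughly .71 and .82, all above the reported .63 -- the point being that the raw accuracy score systematically understates discrimination ability here.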

Here is the graph for the eyeballed Wernicke's score of ~53% average accuracy. These are the pairs of scores that reflect the reported bias, congruent > incongruent. Notice that most of the distribution of scores is in the 50-60% A-prime range (unlike the Broca's distribution, which is higher), but also that there is an even wider spread of possible A-prime scores for the same average accuracy as reported by Moineau et al.

So it is really quite impossible to know how well these patients are performing on the comprehension test when response bias is not corrected. One might argue that even the most generous A-prime score for the Broca's patients is still only in the low 80% range and that this reflects comprehension deficits. True, but remember that this has to be compared against the A-primes for the control groups, and since we can't know their bias-corrected scores, we can't evaluate how poorly the Broca's patients are performing.

To be quite honest, this is a paper that never should have been published with this analysis. The concept behind the study is fantastic. It's a shame that we can't interpret the findings. So we still don't know whether Broca's aphasics have disproportionate difficulty comprehending acoustically degraded speech, and still no evidence that damage to the motor system produces significant deficits in single word comprehension.

Moineau, S., Dronkers, N. F., & Bates, E. (2005). Exploring the processing continuum of single-word comprehension in aphasia. Journal of Speech, Language, and Hearing Research, 48(4), 884-896. DOI: 10.1044/1092-4388(2005/061)

Friday, May 15, 2009

Another year of Talking Brains

May 16th marks the second anniversary of Talking Brains. We've gotten a decent amount of positive feedback, which we very much appreciate; the online comments on some of our posts have been instructive; and our hit count continues to grow, doubling our monthly average in the last year.

So in general I'm pretty pleased with this little experiment. Again I would like to emphasize that we really want this blog to be a language science community forum and resource, and NOT just a place where David and I get to speak our minds. This last year has seen a lot more interaction/commentary, and that is a very positive development. Thank you very much to all the folks who have contributed! We hope in the next year to increase contributions from the research community in a number of ways, including more comments/discussion, summaries of your recent pubs sent to us to post as guest entries (even an abstract and a figure would be great!), maybe more "interviews", and the less exciting but very useful job listings and conference announcements.

If anyone has any ideas on how to improve the blog please let us know!

Tuesday, May 12, 2009

TB Interview with Matt Davis & Gareth Gaskell: Learning and Consolidation of Novel Spoken Words

New Talking Brains feature: the TB Interview! Here's a bit about Matt Davis and Gareth Gaskell's recent paper in JoCN...

Greg Hickok (Talking Brains): Tell me about your recent paper in JoCN. How did this project come about?

Matt Davis (MRC-CBU, Cambridge): Basically we put together two things that we’d worked on separately in the years since Gareth co-supervised my PhD. I really liked some of the behavioural studies of word learning that Gareth had been doing (e.g. Gaskell & Dumay, 2003).

Gareth Gaskell (University of York): And I was interested in your fMRI priming studies with Eleni Orfanidou and William Marslen-Wilson (Orfanidou, Marslen-Wilson & Davis, 2006). The idea was to combine these two projects.

Matt: In hindsight, they fit together like pieces in a jigsaw. The fMRI study showed clear differences in the fMRI response to familiar spoken words and novel pseudowords, and that these differences didn’t change with repetition priming.

Gareth: My behavioural studies with Nicolas Dumay showed that though you could learn a made-up word (like cathedruke) really quickly, these new words don’t compete for recognition with similar existing words (like cathedral) until some time after initial learning.

Matt: To me, though, the sleep study (Dumay & Gaskell, 2007) was the really jaw-dropping result. People who learn new words at 8am (the AM group) don’t show a competition effect until 8am the following day, whereas people who learn new words at 8pm (the PM group) show lexical competition just 12 hours later.

Gareth: It’s sleep that makes the difference. Newly learned words don’t behave the same way as existing words until you’ve had a chance to sleep on them.

Matt: Which in turn explains why our repetition priming study didn’t show pseudowords turning into real words – subjects didn’t fall asleep in the scanner.

Gareth: At least not on purpose…!

Greg: So what did you do in the present study?

Matt: Since we couldn’t scan people at 8pm and again at 8am we had to teach people two different sets of words on successive days instead. Participants learned one set of new words on Day 1, another set of words on Day 2, and were tested on these two sets and a set of untrained words after training on the second day. That way we can assess effects of training with and without overnight consolidation in a single test session.

Gareth: We’d not done any behavioural experiments using this design before, but Anna Maria Di Betta showed that it worked well and produced the same lexical competition effect that we’d seen before. This is Experiment 1 in the JoCN paper. There's no lexical competition from items learned and tested on Day 2, but there is a competition effect for items learned the previous day.

Matt: Mark Macdonald and I used the same design for the fMRI study (Experiment 2). To make sure that we could separate out effects of training and lexicality we taught people real words at the same time as the pseudowords. Then on the second day of the study, we used fMRI to look at how word/pseudoword differences change due to training and overnight consolidation.

Gareth: Apart from that, though, everything else was pretty similar to the behavioural study. We kept the same training task (phoneme monitoring), and test task (pause detection) from some of the behavioural studies and combined these with the fast, event-related sparse imaging design from your fMRI work.

Greg: So how did the fMRI data come out?

Matt: We were a bit confused at first. I’d expected to see an increased response to real words compared to pseudowords, but that was non-significant. I think that’s because we didn’t give participants any meaning for the pseudowords and because the pause-detection task we used in the scanner emphasized phonological processing.

Gareth: But, we did see lots of activation for the reverse contrast.

Matt: That’s right – the superior temporal gyrus responds more to pseudowords than to real words. And that response stays the same for items trained just before people go into the scanner. In a way, this is similar to the Orfanidou result – you can’t turn a pseudoword into a real word with short term training.

Gareth: However, just like in the behavioural study, training plus overnight consolidation makes pseudowords respond more like real words. In the STG, the pseudoword response is significantly smaller for items that have been learned and consolidated. This novelty by consolidation interaction is even more significant in other areas that respond to pseudowords such as the precentral gyrus, SMA and right cerebellum.

Greg: So you need overnight consolidation to learn a new word? That seems wrong – people can learn new words much quicker than that.

Matt: I agree completely – people can learn new words quickly. But it seems that the cortex can’t learn as fast as people. In our fMRI experiment, it’s only the day after learning that you see changes to pseudoword responses in the cortex.

On the other hand, we see lots of evidence for rapid learning in the medial temporal lobe. There are three results in our study that suggest that the hippocampus is involved in initial learning of novel spoken words: (1) it responds more to untrained items that are truly novel at the time of scanning, (2) it habituates rapidly when untrained items are repeated, and (3) the strength of both these effects is correlated with how well individual participants learn new words.

Gareth: In conjunction with other results (e.g. Breitenstein et al., 2005), we suggest that there are two systems involved in learning new words.

The hippocampus learns quickly, but doesn’t represent new words in the same way as existing familiar words. The cortex learns more slowly and uses overnight consolidation to ensure that new words and existing words can be stored in a single set of distributed representations.

Greg: Two complementary learning systems, one fast and one slow.

Matt: Exactly! This is the same idea that Jay McClelland, Bruce McNaughton and Randy O’Reilly proposed for neural network models of memory (McClelland, McNaughton & O’Reilly, 1995). To ensure that the cortex can learn new words without forgetting old words you have to interleave old and new items during training. We think the brain achieves this by storing new words initially in the hippocampus, and then transferring knowledge into overlapping cortical representations overnight whilst people sleep. Lexical competition is one hallmark of overlapping cortical representations which explains why you need to sleep after learning in order to show lexical competition (Dumay & Gaskell, 2007).
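As a toy illustration of the interleaving point (this is only a sketch of the principle, not the McClelland, McNaughton & O'Reilly model itself): a single "cortical" weight vector trained with the delta rule on two overlapping patterns forgets the first item under sequential training, but retains both when old and new items are interleaved.

```python
def dot(w, x):
    return w[0] * x[0] + w[1] * x[1]

def delta_step(w, x, y, lr=0.1):
    """One delta-rule (gradient) update on a single item."""
    err = y - dot(w, x)
    return [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]

# Two "words" whose input patterns overlap (shared features):
old_word, y_old = [1.0, 0.5], 1.0
new_word, y_new = [0.5, 1.0], -1.0

# Sequential: learn the old word fully, then train only on the new word.
w = [0.0, 0.0]
for _ in range(500):
    w = delta_step(w, old_word, y_old)
for _ in range(500):
    w = delta_step(w, new_word, y_new)
seq_error = abs(y_old - dot(w, old_word))    # the old word is dragged off target

# Interleaved: old and new items mixed throughout training.
w = [0.0, 0.0]
for _ in range(500):
    w = delta_step(w, old_word, y_old)
    w = delta_step(w, new_word, y_new)
inter_error = abs(y_old - dot(w, old_word))  # both words end up retained

print(f"sequential error on old word:  {seq_error:.3f}")
print(f"interleaved error on old word: {inter_error:.3f}")
```

With overlapping representations, training on the new item alone drags the shared weights away from the old item's solution; interleaving lets the weights settle on a joint solution. On this account, hippocampal replay during sleep is what supplies the interleaving for cortex.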

Gareth: We’re currently revising a review paper that summarises this “complementary learning systems account” of word learning. In this paper, we attempt to explain precisely what people can and can’t learn quickly about new spoken words.

Matt: We’ll probably get shot down in flames on this last point. We make the very strong prediction that, as long as there isn't task-specific repetition priming, training cannot cause the cortical response to pseudowords to resemble the response to real words. In our experiment we showed that it was only after overnight consolidation that the pseudoword response was reduced in regions like the STG that show an elevated response for untrained pseudowords. We didn’t show the reverse pattern for regions that respond more to real words, but we’d predict the same thing -- changes to these responses require learning and overnight consolidation.

Greg: Sounds like there’s plenty of opportunity for you to be proved wrong. Perhaps Talking Brains readers know of some counter-evidence already.

Gareth: We’ll look forward to hearing about it!


Breitenstein, C., Jansen, A., Deppe, M., Foerster, A. F., Sommer, J., Wolbers, T., et al. (2005). Hippocampus activity differentiates good from poor learners of a novel lexicon. Neuroimage, 25, 958–968.

Dumay, N., & Gaskell, M. G. (2007). Sleep-associated changes in the mental representation of spoken words. Psychological Science, 18, 35–39.

Gaskell, M. G., & Dumay, N. (2003). Lexical competition and the acquisition of novel words. Cognition, 89, 105–132.

McClelland, J. L., McNaughton, B. L., & O’Reilly, R. C. (1995). Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory. Psychological Review, 102, 419–457.

Orfanidou, E., Marslen-Wilson, W. D., & Davis, M. H. (2006). Neural response suppression predicts repetition priming of spoken words and pseudowords. Journal of Cognitive Neuroscience, 18, 1237–1252.

Monday, May 11, 2009

Stigler's law of eponymy in neuroscience of language research

Stigler's law of eponymy states, "No scientific discovery is named after its original discoverer." A famous example of this law is the Gaussian distribution, which was introduced by Abraham de Moivre in 1733 and only later was used and defended by Carl Friedrich Gauss.

The neuroscience of language has a few examples of Stigler's law.

Broca's aphasia. Non-fluent forms of aphasia were well known long before Broca.

Broca's area. It is now known that Marc Dax discovered the link between non-fluent aphasia and the left inferior frontal gyrus. Of course, Dax never published his findings so in one sense Broca's claim to fame is justified. Nonetheless, Stigler's law still holds.

Wernicke's aphasia. A fluent form of aphasia, then often referred to as speech amnesia, was also well known before Wernicke.

Wernicke's area. Wernicke was not the first to associate "Wernicke's" aphasia with the left posterior superior temporal gyrus, the region we today call Wernicke's area. That honor appears to belong to Wernicke's mentor Theodor Meynert.

Hebbian learning. This attribution, while not obviously relevant to the neuroscience of language, is a dash of karma for Wernicke. Donald Hebb was not the first to describe "Hebbian learning" as Wernicke discussed the same principle decades earlier.

Junior scientists tend to worry a lot about getting scooped. I've received more than one panicked email from a grad student who discovers a published study almost identical to the one they were just writing up. But if Stigler is correct, precedence doesn't mean all that much in scientific attribution. You don't have to discover it, you just have to popularize it.

Thursday, May 7, 2009

Supercalifragilisticexpialidocious: Neural circuits involved in new word learning

A speech-related sensory-motor integration network is useful for lots of things: auditory feedback control of speech production, producing multisyllabic words, enabling motor-to-sensory modulation of speech perception, phonological short-term memory. In addition to these, I have suggested previously that such a circuit, which includes area Spt at the posterior parietal-temporal junction, also supports new vocabulary development:
Auditory–motor processes at the level of sequences of segments would be involved in the acquisition of new vocabulary… (Hickok & Poeppel 2007, p. 399)

I never had much evidence to support this supposition and hadn't yet done the experiments, so when I came across a 2009 paper by Paulesu and colleagues claiming to have identified a system for new word learning, I was excited. After glancing at the paradigm and scanning the figures I was positively stoked -- Spt was lit up like a Las Vegas billboard (red blob in the crosshair).
I thought for sure this was going to be one of my new favorite papers. Then I read the details and unfortunately found it was one of my new not-so-favorite papers...

Here's what they did. Two PET experiments, each involving six subjects. In the first experiment subjects listened to lists of non-words, lists of words, or rested. During the list scans they were asked to learn the non-words/words. The same set of items was presented in each of 5 learning scans, and learning was assessed via free recall after each scan. Experiment 2 was similar, but instead of learning a list, subjects learned word or non-word pairs, and learning was assessed by presenting one member of each pair and asking the subject to recall its associate. Not much came out differently in the two tasks, so that manipulation will be ignored here.

The main effect of learning relative to rest involved lots of brain areas (blue in image above), including auditory regions (they were listening to speech) and fronto-parietal networks (they were trying to remember the items). Nothing exciting here. The interesting contrast was between words and non-words. Non-words of course place a much greater burden on the "phonological" system and so should highlight regions involved in the acquisition of new phonological forms. This contrast (non-words minus words) produced activation in Spt (temporo-parietal junction), Broca's area, and some other sites.

So Spt activates for the learning of new phonological word forms. That's what I would (did) predict. Why then am I disappointed with the study? The authors pointed out that the regions that were active were basically the same as those regions previously found active in phonological working memory tasks, and that this overlap "establishes an explicit anatomical link between these two aspects of human cognition" (p. 1375). Even though I think there is a link (albeit to a sensory-motor integration circuit rather than to a "phonological short-term memory" system) their study doesn't make the connection. The reason is that their findings can be explained purely in terms of phonological working memory. During non-word as well as word learning subjects probably rehearsed the lists they were hearing thus activating their phonological working memory system. The phonological load was greater in the non-word condition so you get more activity during rehearsal of non-words than words. So phonological word form learning overlaps phonological short-term memory systems because their learning task likely induced phonological rehearsal. Again I don't doubt their conclusions, it's just that the reasoning is circular. A better test would have been to correlate brain activity with learning (recall scores).

To be fair, they did look at learning effects in terms of changes in brain activity as a function of learning scan. They didn't find a significant difference in Spt or Broca's region, however; apparently subjects were rehearsing during every scan. They did find an interesting effect in the mid-anterior STS/MTG, though, where activity decreased as a function of learning scan, mostly for non-words.

Perhaps this is a phonological representation of some sort (the phonological "store"!) that is getting more stable with learning. This might be the most interesting part of the study.

What I really didn't like was the discussion. First they claim to have localized the functional anatomy of the "phonological word-form learning device". I think rather that they've (re)localized a circuit that supports phonological short-term memory (but is not dedicated to this function). Then they suggest that the left temporo-parietal junction and Broca's region are "associated with the auditory lexicon", citing an important but now aging study by Howard et al. (1992). This position is oblivious to the fact that damage to these structures does not impair auditory comprehension (Hickok & Poeppel, 2000, 2004, 2007), which would be expected if this is where the "auditory lexicon" lives.

Finally, the paper attempts to address theories of lateralization of language function. Citing the classic early papers on "phonological processing" by Paulesu et al. (1993), Petersen et al. (1989), and Zatorre et al. (1992), and ONLY these papers, which show left lateralization of "phonological processing", it is suggested that
"Lateralization of the neural substrates for phonology and for vocabulary acquisition must be important factors to determine hemisphere superiority for language" (p. 1376).
It becomes clear in the next sentence that they are not just talking about phonological processes in production.
"Given that the right hemisphere has some lexical competence (Zaidel, 1986), it remains to be established how the relevant neural representations are formed in this side of the brain."
The only thing I can say here is that someone dropped the ball in the lit review department as there has been relevant research published on this topic since the late 80s/early 90s.


Paulesu, E., Vallar, G., Berlingeri, M., Signorini, M., Vitali, P., Burani, C., Perani, D., & Fazio, F. (2009). Supercalifragilisticexpialidocious: How the brain learns words never heard before. NeuroImage, 45(4), 1368-1377. DOI: 10.1016/j.neuroimage.2008.12.043

Hickok, G., & Poeppel, D. (2000). Towards a functional neuroanatomy of speech perception. Trends in Cognitive Sciences, 4, 131-138.

Hickok, G., & Poeppel, D. (2004). Dorsal and ventral streams: A framework for understanding aspects of the functional anatomy of language. Cognition, 92, 67-99.

Hickok, G., & Poeppel, D. (2007). The cortical organization of speech processing. Nat Rev Neurosci, 8(5), 393-402.

Howard, D., Patterson, K., Wise, R., Brown, W., Friston, K., Weiller, C., et al. (1992). The cortical localization of the lexicons: Positron emission tomography evidence. Brain, 115, 1769-1782.

Paulesu, E., Frith, C. D., & Frackowiak, R. S. J. (1993). The neural correlates of the verbal component of working memory. Nature, 362, 342-345.

Petersen, S., Fox, P., Posner, M., Mintun, M., & Raichle, M. (1989). Positron emission tomographic studies of the processing of single words. Journal of Cognitive Neuroscience, 1, 153-170.

Zaidel, E. (1986). Callosal dynamics and right hemisphere language. In F. Lepore & M. Ptito (Eds.), Two Hemispheres, One Brain: Functions of the Corpus Callosum. New York: Alan R. Liss.

Zatorre, R. J., Evans, A. C., Meyer, E., & Gjedde, A. (1992). Lateralization of phonetic and pitch discrimination in speech processing. Science, 256, 846-849.

Monday, May 4, 2009

Post Doctoral position -- Center for Aphasia Research and Rehabilitation, Department of Neurology, Georgetown University

Aphasia and Dementia Studies (Postdoctoral Position)
Center for Aphasia Research and Rehabilitation, Department of Neurology, Georgetown University

The Cognitive Neuropsychology Lab at Georgetown University focuses on language and learning/memory function and dysfunction. Ongoing research projects include investigations of experimental cognitive treatments for alexia and for anomia; studies of semantic memory in dementia populations; and possible remediation of cognitive deficits in early stage dementia. Methodology includes behavioral, eye-tracking, fMRI and ERP studies of patients and normal controls.
The post-doc will participate primarily in a study involving patients with mild cognitive impairment or early Alzheimer’s Disease, with the opportunity to be involved in work with aphasic patients as well.
Requirements include a PhD in neuroscience, psychology, cognitive science, psycholinguistics or a related field. The ideal candidate will have experience with brain-damaged populations; statistical proficiency; excellent oral and written communication skills; and excellent computer skills.
Position contingent upon funding.
Georgetown University is an Affirmative Action/Equal Opportunity employer.
Please email a cover letter and CV, and arrange for three letters of reference to be sent via email to:

Contact Information:
Rhonda Friedman

POSTDOCTORAL POSITIONS at the Basque Center on Cognition Brain and Language

POSTDOCTORAL POSITIONS at the BCBL (Postdoctoral Position)
Basque Center on Cognition Brain and Language

The Basque Center on Cognition Brain and Language (San Sebastián, Basque Country, Spain) offers 2-3 year postdoctoral positions in several areas: language acquisition, production, multilingualism, neurodegeneration of language, language and learning disorders, and advanced methods for cognitive neuroscience. The center promotes a rich research environment without teaching obligations, with access to the most advanced behavioural and neuroimaging techniques, including a 3 Tesla MRI scanner, a whole-head MEG system, four ERP labs, a TMS lab, an eyetracking lab, and several well-equipped behavioural labs, as well as technical support.

We are looking for experimental scientists with a background in psycholinguistics, cognitive neuroscience, or neighboring areas for the content areas, and physicists and/or engineers for the methodological areas, all interested in undertaking research in the fields described under (research).

Candidates should have a strong publication track record according to their research experience.

Applications should include:
(i) a curriculum vitae.
(ii) a list of publications.
(iii) two letters of recommendation.
(iv) examples of published work.
(v) a cover letter describing research interests.

For more information about the positions and how to apply please check the web page and click on JOBS.

For information about the positions, please contact Manuel Carreiras

Contact Information:
Manuel Carreiras