Thursday, August 27, 2009

Functional brain imaging: it's not always where you think it is

It is a standard practice in functional brain imaging studies to spatially "normalize" one's data to a common space such as Talairach space or MNI space. The reason for doing this is to allow data from multiple subjects to be averaged. (Here is a discussion of this process.) It is acknowledged that there is error in the process due to variation from one brain to the next. Yet at the same time, the (also standard) practice of overlaying warped and group-averaged brain activations on a single high-resolution structural MRI tempts investigators and readers alike to interpret the location of the blob of activations very literally. More than once I have heard comments about this or that activation being on the upper or lower bank of a sulcus, for example. We simply don't have the spatial resolution to tell in this sort of fMRI analysis.

The situation is even worse, though, at least for some brain areas. This issue came up in the comments to a previous post, where the discussion with Raj Raizada turned to the localization of a particular activation focus in one of his papers (Raizada & Poldrack, 2007). They correctly localized the group activation to the supramarginal gyrus, which is a tad north of the Sylvian fissure. Here is the activation in question:



Although orthogonal to the main conclusions of the study, I suggested the "real" location may be inside the Sylvian fissure, and may reflect activity of a sensory-motor integration area, Spt (Hickok et al. 2009). The reason for this speculation is that we have found that simply normalizing a single subject's functional imaging data can cause the activation focus to jump from within the Sylvian fissure to above it, falling on the SMG. The figure below shows activations in native space (left side sagittal and coronal images), that is, non-normalized and on the subject's own MRI (also non-normalized) and after warping to a standard space (right side sagittal and coronal images). The activation of interest is in the crosshairs.



Notice that in native space the focus of activation is within the Sylvian fissure, and after standardization it moves up out of the fissure and onto the SMG. This happened consistently enough across subjects in two independent datasets that the group focus ended up in the SMG despite individual subjects showing activation in the Sylvian fissure. This bit of spatial error may not seem like much of a problem, but it is. This jump crosses cytoarchitectonic boundaries and could lead to dramatically different functional interpretations based on what is known about these areas.
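To make the arithmetic of this kind of jump concrete, here is a minimal sketch of what the affine part of spatial normalization does to a coordinate. The matrix and the native-space coordinate below are invented for illustration (real transforms are estimated by the normalization software, and nonlinear warps add further local displacement); the point is simply that a few millimeters of displacement in z is all it takes to move a focus from inside the Sylvian fissure onto the SMG.

```python
# Hypothetical 3x4 affine (mm) taking native-space coordinates to a
# standard space; the values are invented for illustration only. Real
# transforms are estimated during registration, and nonlinear warping
# adds further local displacement on top of this.
AFFINE = [
    [1.05, 0.00, 0.02, -2.0],
    [0.00, 1.08, 0.05,  3.5],
    [0.00, 0.06, 1.10,  4.0],
]

def to_standard(x, y, z):
    """Map a native-space coordinate (in mm) through the affine."""
    v = (x, y, z, 1.0)
    return tuple(sum(row[i] * v[i] for i in range(4)) for row in AFFINE)

# A focus just inside the Sylvian fissure (made-up coordinate):
print(to_standard(-52.0, -38.0, 20.0))
```

With these (invented) parameters the z coordinate moves from 20 mm to about 23.7 mm: under 4 mm of displacement, yet enough to carry a focus across the fissure boundary and onto the adjacent gyrus.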

So what do we do? If location matters, you can confirm your activations in native space in individual subjects, or you can try normalizing with sulcal-based co-registration schemes rather than the standard methods.


References

Raizada, R., & Poldrack, R. (2007). Selective Amplification of Stimulus Differences during Categorical Processing of Speech Neuron, 56 (4), 726-740 DOI: 10.1016/j.neuron.2007.11.001

Hickok, G., Okada, K., & Serences, J. (2009). Area Spt in the Human Planum Temporale Supports Sensory-Motor Integration for Speech Processing Journal of Neurophysiology, 101 (5), 2725-2732 DOI: 10.1152/jn.91099.2008

Tuesday, August 25, 2009

"Categorical perception" in neuroscience studies of speech

Old speech phenomena don't die; they just become morphed into neuroscience studies.
-Andrew Lotto

The phenomenon of categorical perception appears to be riding the coattails of the resurgence of interest in motor theories of speech perception. Back in the motor theory heyday, categorical perception was all the rage. Listeners appeared to perceive speech sounds differently from non-speech sounds, i.e., categorically, and this was taken as evidence for the motoric nature of the speech perception process. The argument was something like this... Acoustic signals vary continuously. Articulatory patterns are categorical (/b/ is always produced bilabially). Perception mirrors the categorical nature of articulation. Therefore we perceive speech via our motor system.

Problems with this view quickly arose. Non-human, and therefore non-speaking, animals such as chinchillas and quail, were found to exhibit categorical perception for speech sounds. Babies too, who hadn't yet acquired the ability to articulate speech, also exhibited categorical perception. Categorical perception of non-speech sounds was also demonstrated. Further, perception of speech sounds was found to be continuous if listeners were asked to rate how well a stimulus represented a given category rather than asking them to make a binary decision.

Interest in categorical perception (CP) faded -- except in neuroscience, where the pace of CP studies seems to be accelerating. Here are just a few from this year:

Möttönen R, Watkins KE. Motor representations of articulators contribute to categorical perception of speech sounds. J Neurosci. 2009 Aug 5;29(31):9819-25.

Salminen NH, Tiitinen H, May PJ. Modeling the categorical perception of speech sounds: A step toward biological plausibility. Cogn Affect Behav Neurosci. 2009 Sep;9(3):304-13.

Clifford A, Franklin A, Davies IR, Holmes A. Electrophysiological markers of categorical perception of color in 7-month old infants. Brain Cogn. 2009

Prather JF, Nowicki S, Anderson RC, Peters S, Mooney R. Neural correlates of categorical perception in learned vocal communication. Nat Neurosci. 2009 Feb;12(2):221-8.

I hinted previously that the failure to use signal detection analysis methods in the context of categorical perception studies may have contaminated the whole field of CP research. Lori Holt recently pointed me to a paper by Schouten et al. 2003, provocatively titled "The End of Categorical Perception as We Know It". The point of the paper is exactly what I was hinting at: perception only looks categorical because of inherent bias in the tasks used to measure it.

The traditional categorical-perception experiment measures the bias inherent in the discrimination task
(Schouten et al. 2003, p. 71)


Here's another interesting quote from this paper:

Despite an auspicious beginning with a clear experimental definition ... categorical perception has in practice remained an ill-defined or even undefined concept, which could be used to underpin a variety of sometimes mutually exclusive claims, for example for or against the motor theory (p. 72)


This is an interesting paper that is worth a close look. But back to bias...

Let me illustrate very simply using some categorical perception data that I pulled from the literature. The graph below shows real data from a CP experiment using a GA-DA continuum. The task is explicitly categorical: subjects are asked to decide whether a stimulus is an example of GA or DA. This is not a good task for determining whether subjects perceive speech sounds categorically, because it forces them to categorize. As Schouten et al. put it, "... if the nature of the task compels subjects to use a labelling strategy, categorical perception will be pretty much a foregone conclusion" (p. 77). Nonetheless, d-prime measures paint a rather different picture than the standard measures do. In the graph below, the vertical axis is the proportion of GA responses and the horizontal axis is the position of each stimulus along the continuum. Perception looks nicely categorical.



Now plot the same data in d-prime units. To do this you can calculate d' for each pair of adjacent stimuli (how well are subjects discriminating Stim1 from Stim2, Stim2 from Stim3, etc.). Plotted here is cumulative d'. If perception were truly categorical, we should see discontinuities in the cumulative d' function, with steep jumps at the category boundaries. Instead we see a more continuous function.
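The computation is simple enough to sketch in a few lines. This is a minimal illustration with made-up identification proportions (not the data in the graphs): for each pair of adjacent stimuli the two response proportions are treated as hit and false-alarm rates, d' = z(p1) - z(p2) is computed for the pair (following the standard signal detection formula), and the pairwise values are summed along the continuum.

```python
from statistics import NormalDist

Z = NormalDist().inv_cdf  # inverse of the standard normal CDF, i.e. z()

def pairwise_dprime(p_ga, eps=0.01):
    """d' for each pair of adjacent stimuli on the continuum.

    p_ga: proportion of "GA" responses at each stimulus step.
    Treating step i as signal and step i+1 as noise gives
    d' = z(p_i) - z(p_i+1); proportions are clipped away from
    0 and 1 so that z() stays finite.
    """
    clip = lambda p: min(max(p, eps), 1 - eps)
    return [Z(clip(a)) - Z(clip(b)) for a, b in zip(p_ga, p_ga[1:])]

def cumulative_dprime(p_ga):
    """Running sum of the pairwise d' values along the continuum."""
    total, out = 0.0, [0.0]
    for d in pairwise_dprime(p_ga):
        total += d
        out.append(total)
    return out

# Made-up identification data with a sharp-looking category boundary:
p_ga = [0.98, 0.97, 0.95, 0.75, 0.25, 0.05, 0.03, 0.02]
print([round(d, 2) for d in pairwise_dprime(p_ga)])
print([round(c, 2) for c in cumulative_dprime(p_ga)])
```

Plotting the cumulative values against stimulus number is what produces the second graph: a truly categorical listener would show a step function, a continuous one something closer to a straight line.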



Have a look at the papers by Lori Holt and Andrew Lotto that I highlighted in a previous post as well as the Schouten et al. paper for more critical views on the nature of categorical perception. Then there's always long-time CP skeptic Dominic Massaro. His work on the topic is also worth a look.

What are the implications for neuroscience studies of speech perception? Well, if CP is nothing more than task effects and/or subject bias, then by using CP paradigms to map speech perception systems, all that is being mapped is task strategies and/or subject bias. No wonder all these studies find effects in the frontal lobe!

Schouten, B. (2003). The end of categorical perception as we know it Speech Communication, 41 (1), 71-80 DOI: 10.1016/S0167-6393(02)00094-8

Thursday, August 20, 2009

Speech perception happens in the auditory system

I know this sounds like an outlandish claim given the current motor-oriented neuroscience culture, but I really do think that speech perception is something that is achieved in the auditory system. I try not to say this too loudly around auditory people because they just laugh. "Really? You're claiming that perceiving speech sounds is a function of the auditory system? Brilliant. I never would have thought of that."

I've spent enough time publicizing and critiquing papers that argue for motor involvement in speech perception. Now I'd like to point out some of the papers put out there by the folks who argue from an auditory perspective. These don't usually get published in high-profile neuroscience journals, so they can fly under the radar of the neuroscience crowd. So here are two good papers, one of which provides a very brief overview and the other a more in-depth review. Definitely required reading for anyone interested in studying speech perception:

Diehl, R., Lotto, A., & Holt, L. (2004). Speech Perception Annual Review of Psychology, 55 (1), 149-179 DOI: 10.1146/annurev.psych.55.090902.142028

Holt, L., & Lotto, A. (2008). Speech Perception Within an Auditory Cognitive Science Framework Current Directions in Psychological Science, 17 (1), 42-46 DOI: 10.1111/j.1467-8721.2008.00545.x

Friday, August 14, 2009

Connectivity patterns between Broca's area and the temporal lobe

There have now been a sufficient number of in vivo tractography studies of language-related areas to discern the general patterns of connectivity between Broca's area and the temporal lobe. In vivo tractography is achieved using diffusion tensor imaging (DTI), which is sensitive to the direction (orientation) of water molecule diffusion. Because water diffuses more readily along myelinated fiber tracts, DTI can be used to map white matter fiber projections. As with fMRI, DTI findings are highly dependent on the details of how the data are analyzed, e.g., how the seeds are selected, whether you average across subjects or not, etc. So, as with fMRI, I think any individual study needs to be interpreted cautiously until similar patterns start showing up across studies/labs/methods.

I recently looked at four published studies to see if there were any patterns that emerged. The figure below is my summary of what these studies showed. All of them distinguished between anterior and posterior sectors of Broca's area, the pars triangularis (PTr, ~BA45) and pars opercularis (PO, ~BA44), and two studies distinguished a third region referred to as the deep frontal operculum (DFO). It is not yet clear to me where this DFO is exactly; more on that later. Two of the studies differentiated temporal lobe regions into a dorsal site, the superior temporal gyrus (STG) and a more ventral site, the middle temporal gyrus (MTG).

There is clear evidence for two pathways, a dorsal pathway corresponding to the classic arcuate fasciculus link between Broca's and Wernicke's areas, and a ventral pathway that projects through the anterior temporal lobe and into the inferior frontal gyrus via the uncinate fasciculus and/or the extreme capsule.

The figure below shows the dominant connections reported in each study. Lines are color coded by study so that where there are more lines between two regions we can have more confidence that the connection exists (i.e., it replicated across more studies). For studies that didn't report a specific temporal lobe target, the lines to the temporal regions are left undetermined in the figure.

There is clear evidence for a link between the STG and PO, the posterior portion of Broca's area. This makes some sense as both the STG and PO have been implicated in phonological-level processing. This link appears to be via the dorsal route. One study found that this relation may be mediated by a region in the inferior parietal lobe (IPL). It is tempting to think this may be Spt, but it is only one study, and this study did not specify the specific temporal lobe target.

The pars triangularis has a different connectivity pattern. It has projections via both the dorsal and ventral routes and, where specified, it connects to the MTG. This also makes some sense as both the PTr and the MTG have been implicated in aspects of higher-level (e.g., "semantic") processing.

The deep frontal operculum appears to connect with the temporal lobe primarily via the ventral route. Some have suggested that the DFO plays a role in syntactic processes. I'm not so sure about this yet, but it is interesting that it projects through the anterior temporal region, which also seems to support some aspect of sentence-level processing.

So what we seem to have is a hierarchical connection pattern between the temporal lobe and inferior frontal lobe. Temporal lobe regions that are closer to the auditory periphery connect with IFG regions that (may be) closer to the motor periphery (the STG-PO circuit), whereas temporal lobe regions that are doing some higher-order operations connect with higher-order IFG regions (the MTG-PTr circuit).

I think connectivity studies are crucially important in helping to constrain theories of language organization. While tractography research is still in its infancy, this set of studies strikes me as a really great start.



Red: Catani, et al. 2005
Blue: Glasser & Rilling, 2008
Green: Anwander, et al., 2007
Orange: Saur, et al., 2008

References

Anwander, A., Tittgemeyer, M., von Cramon, D., Friederici, A., & Knosche, T. (2007). Connectivity-Based Parcellation of Broca's Area Cerebral Cortex, 17 (4), 816-825 DOI: 10.1093/cercor/bhk034

Catani, M., Jones, D., & ffytche, D. (2005). Perisylvian language networks of the human brain Annals of Neurology, 57 (1), 8-16 DOI: 10.1002/ana.20319

Glasser MF, & Rilling JK (2008). DTI tractography of the human brain's language pathways. Cerebral cortex (New York, N.Y. : 1991), 18 (11), 2471-82 PMID: 18281301

Saur, D., Kreher, B., Schnell, S., Kummerer, D., Kellmeyer, P., Vry, M., Umarova, R., Musso, M., Glauche, V., Abel, S., Huber, W., Rijntjes, M., Hennig, J., & Weiller, C. (2008). Ventral and dorsal pathways for language Proceedings of the National Academy of Sciences, 105 (46), 18035-18040 DOI: 10.1073/pnas.0805234105

Thursday, August 13, 2009

A new study claims to have identified mirror neurons in the human brain

In case you haven't seen it yet, there is a new paper in J. Neuroscience that reports the existence of mirror neurons in human inferior frontal gyrus (~Broca's area). It used a repetition suppression fMRI paradigm and found a suppression effect (different > same) both when subjects executed and then observed the same action and when subjects observed and then executed the same action. This appears to be the best evidence yet for the existence of mirror neurons in humans: an effect was found in both directions, execute-->observe & observe-->execute, and it is showing up in the right place, Broca's area, the presumed human homologue of monkey area F5. Another thing I like about this study is that it used object-directed actions rather than pantomime, which makes it more comparable to the monkey studies.

I haven't yet read the paper closely. I'd be very interested to hear what folks think, so please post a comment. My only general concern is with the repetition suppression effect itself. We've used it previously and it strikes me as a bit on the sketchy side. The effects are very subtle and susceptible to the boredom critique: the suppression effect is not neural adaptation but instead reflects the fact that the subject is bored with the repetition of stimuli and therefore allocates less attention to same trials than to different trials. I'm not sure this critique applies here, though, given that the same trials cross modality. I'll have to think about that one.

Reference

Kilner, J., Neal, A., Weiskopf, N., Friston, K., & Frith, C. (2009). Evidence of Mirror Neurons in Human Inferior Frontal Gyrus Journal of Neuroscience, 29 (32), 10153-10159 DOI: 10.1523/JNEUROSCI.2668-09.2009

Friday, August 7, 2009

Does stimulation of motor lip areas affect categorical perception of lip related speech sounds?

Short answer: We don't know. Despite the title of a recent paper by Riikka Mottonen and Kate Watkins in The Journal of Neuroscience, Motor Representations of Articulators Contribute to Categorical Perception of Speech Sounds, the data reported are, unfortunately, uninterpretable.

Here's the long answer: Mottonen and Watkins asked subjects to perform syllable identification (which of two syllables did you hear?) and syllable discrimination (were the two syllables you just heard the same or different?). The stimuli were place or voice-onset-time continua, and the design followed a standard categorical perception (CP) experimental design. These tasks were performed either before or after rTMS was applied to the lip area of left motor cortex. I will focus here on the discrimination task. The critical condition was when subjects discriminated a lip-related sound (ba or pa) from a non-lip-related sound (da or ga). They report that discrimination across a category boundary was less accurate after TMS to the motor lip area than before. Specifically, for the ba-da stimuli, the mean proportion of "different" responses to cross-category (i.e., physically different) syllables was .73 pre-TMS and .58 post-TMS; similar findings are reported for the pa-ta stimuli. Discriminations that did not involve lip-related sounds (ka-ga or da-ga) were not affected by stimulation, nor was lip-related speech sound discrimination affected by motor hand area stimulation.

So how is this uninterpretable? At the risk of becoming a methods curmudgeon (it's probably too late), this paper, like many I've discussed here, used the wrong analysis. Instead of using an unbiased measure, d-prime, they used a biased measure, the proportion of "different" responses. This renders the data uninterpretable because we can't actually tell whether TMS affected the perception of the speech sounds (an interesting possibility) or the subjects' response biases (a less interesting possibility). It's not the authors' fault, though. Speech studies of this sort have used the wrong measure for decades. Does this make the entire field uninterpretable? Well... I'll let you draw your own conclusions.

To illustrate the problem, consider the graph below, which plots d-prime values as a function of a range of possible hit rates (hits in this study would be correctly identifying same trials as same) for two constant false alarm (FA) rates corresponding to the pre- and post-TMS values reported by Mottonen and Watkins. FA was calculated simply by subtracting the proportion of "different" responses from 1: the proportion of "different" responses (to different trials) is the proportion of correct rejections (the two stimuli are not the same), and the FA rate is just the complement of the correct rejection rate. d-prime is a bias-corrected measure of discrimination where 0 = chance and anything over 3 is real good.

Notice from the graph that, depending on the proportion of hits, the d-prime value can range anywhere from 0 to almost 5! This is true for any constant FA rate, of course, as discrimination ability is only meaningful in the relation between hits and FAs. Think of it this way: if you have an FA rate of 0 (perfect performance on different trials), that seems fantastic, but if your hit rate is also 0, well, then you clearly just like to say "different" all the time no matter what you hear. An FA rate of 0 is only meaningful if you have a higher-than-0 hit rate, and the higher the hit rate, the higher your d-prime score.


Notice too that because a given FA value can result in any number of d-prime values depending on the hit rate, there is virtually complete overlap in d-prime values for the two curves (except at the upper end). This means that TMS could have had no effect whatsoever on the ability of subjects to discriminate lip-related speech sounds. For example, if the hit rate pre-TMS was .7 and the hit rate post-TMS was .8, then the d-prime in both cases is approximately 2.5 -- no difference. It is also possible that discrimination was worse prior to TMS than after (e.g., if the hit rates were .6 and .8 respectively)!
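The overlap argument can be sketched numerically. This is a minimal illustration using the simple yes-no formula d' = z(H) - z(F); the ~2.5 figures above come from same-different look-up tables, which give larger values, so the absolute numbers below differ, but the point is the same: one fixed FA rate is compatible with a huge range of d' values, and the pre- and post-TMS ranges overlap almost completely.

```python
from statistics import NormalDist

Z = NormalDist().inv_cdf  # inverse of the standard normal CDF

def dprime(hit, fa, eps=0.01):
    """Yes-no d' = z(H) - z(F); proportions clipped to avoid infinities."""
    clip = lambda p: min(max(p, eps), 1 - eps)
    return Z(clip(hit)) - Z(clip(fa))

# FA rates implied by the reported proportions of "different" responses
# (pre-TMS .73, post-TMS .58): FA = 1 - P("different" | different trials).
fa_pre, fa_post = 1 - 0.73, 1 - 0.58

# Sweep possible hit rates: each fixed FA spans a wide band of d' values.
for hit in (0.3, 0.5, 0.7, 0.9, 0.99):
    print(f"H={hit:.2f}  pre: {dprime(hit, fa_pre):+.2f}  "
          f"post: {dprime(hit, fa_post):+.2f}")

# E.g., hit rates of .65 pre and .79 post yield essentially the same d'.
print(round(dprime(0.65, fa_pre), 2), round(dprime(0.79, fa_post), 2))
```

Without the hit rates, in other words, the reported drop in "different" responses is compatible with worse, identical, or even better discrimination after TMS.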

I'm not saying that the study is necessarily wrong, or that TMS didn't affect the perception of lip-related speech sounds. What I'm saying is we can't tell from these data. Maybe once they calculate d-prime the data will show an even more impressive effect. But since we only have half of the information, there is no way to know whether perception was affected: the findings are uninterpretable.

This study is potentially really important and very interesting. As such, I would like to urge the authors to redo the analysis and publish an addendum in J. Neuroscience. I certainly would like to know if it actually worked!

Reference

Mottonen, R., & Watkins, K. (2009). Motor Representations of Articulators Contribute to Categorical Perception of Speech Sounds Journal of Neuroscience, 29 (31), 9819-9825 DOI: 10.1523/JNEUROSCI.6018-08.2009

Tuesday, August 4, 2009

Speech perception when the motor system is compromised

Stephen Wilson's recent comment in TICS provides a nice jumping-off point for considering the strongest evidence favoring a role for the motor system in speech perception. By the way, Stephen is a friend and former postdoc of mine and someone whose work I respect a great deal. Given how short the published letters are in TICS, we had talked about continuing the discussion here. So here we go...

Let's start with this statement in Stephen's letter held up as one piece of evidence for the role of the motor system in speech perception. He cites the study by Baker et al. 1981 and concludes:

Broca's aphasics performed worse at discriminating place of articulation than voicing, and made more errors when forced to rely on just one phonetic feature than on two.


Stephen is correct and here's the graph to prove it (W=Wernicke's, B=Broca's, P=place contrast, V=voice contrast, S=same trials, D=different trials):



Note worse performance by Broca's patients for the middle point (place contrasts) in the different trials compared to the left or right points. Note also two other things:

(i) The Wernicke patients are doing worse than the Broca's. This suggests that damage to auditory-related areas interferes with speech perception more than damage to motor-related areas. (Stephen isn't arguing that motor cortex is more important than auditory, but I want to point this fact out nonetheless.)

(ii) Take a close look at the y-axis scale. It goes from 1.5 to 12.0 (yes, there's a decimal in there!). This is mean error rate on a test that involves 40 trials in each condition. So considering only the place condition, where Broca's patients perform most poorly, we are looking at an estimated overall percent correct on the discrimination task of 86% (Wernicke's = 71%). 86% isn't horrible, but it is not at ceiling either. HOWEVER, percent correct is the WRONG measure because it includes both the ability of the subject to discriminate the two speech sounds AND any response bias the subject may have (e.g., a tendency to say "same"). In this context, it is not inconsequential that frontal regions such as Broca's area have been implicated in response selection! Luckily there are well-developed signal detection methods for removing response bias, for example a-prime and d-prime. Because Baker et al. graphed their data separately for same and different trials, we can calculate false alarm and hit rates (eyeballed from the graph and converted appropriately) and from these calculate a-prime and d-prime (I used a look-up table in Macmillan & Creelman, 1991, Detection theory: A user's guide, Cambridge, for the latter). A-prime, which is an estimate of the area (proportion) under the ROC curve, is .965 for Broca's and .919 for Wernicke's. This is good performance! D-prime is even more impressive, with Broca's pulling down a healthy 4.8 and Wernicke's a reduced but still impressive 3.81. How do you make sense of these numbers? Think of them as z-scores, where 0 = chance performance, 1.0 is threshold discrimination, and integer units are roughly equal to standard deviations. Considered in this way, we can say that Broca's patients are discriminating minimal-pair place features at 4.8 standard deviations above chance and 3.8 standard deviations above threshold. That is killer good discrimination! Are Broca's worse than controls? It wasn't tested, so we don't know.
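For concreteness, here is how a-prime and a yes-no d' are computed from hit and false-alarm rates. The hit/FA values below are hypothetical stand-ins (the actual values above were eyeballed from the Baker et al. graph), and the d' of 4.8 above came from Macmillan & Creelman's same-different tables, which give larger values than the simple yes-no formula sketched here.

```python
from statistics import NormalDist

Z = NormalDist().inv_cdf  # inverse of the standard normal CDF

def a_prime(hit, fa):
    """A' (Pollack & Norman): nonparametric estimate of the area under
    the ROC curve. Chance = .5, perfect = 1. Valid for hit >= fa."""
    assert hit >= fa
    if hit == fa:
        return 0.5
    return 0.5 + ((hit - fa) * (1 + hit - fa)) / (4 * hit * (1 - fa))

def d_prime(hit, fa, eps=0.01):
    """Yes-no d' = z(H) - z(F), with proportions clipped away from
    0 and 1 to avoid infinite z-scores. (Same-different designs need
    the dedicated tables, which yield larger values.)"""
    clip = lambda p: min(max(p, eps), 1 - eps)
    return Z(clip(hit)) - Z(clip(fa))

# Hypothetical hit/false-alarm rates, for illustration only:
print(round(a_prime(0.95, 0.10), 3))  # high sensitivity despite bias
print(round(d_prime(0.95, 0.10), 2))
```

The key property is that both measures separate sensitivity from bias: a patient who says "same" on nearly every trial can score 86% correct while having either excellent or terrible discrimination, and only the hit/FA decomposition tells you which.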

Overall, it is very hard to conclude that Broca's patients have any deficit at all in discriminating speech sounds based on the Baker et al. study. If anything, the study demonstrates the superb ability of such patients to perceive and discriminate even fine phonetic contrasts.

References

Baker, E. (1981). Interaction between phonological and semantic factors in auditory comprehension Neuropsychologia, 19 (1), 1-15 DOI: 10.1016/0028-3932(81)90039-7

Wilson, S. (2009). Speech perception when the motor system is compromised Trends in Cognitive Sciences DOI: 10.1016/j.tics.2009.06.001

Changing meaning causes coupling changes within higher levels of the cortical hierarchy

An interesting new paper in PNAS examined neural coupling between early (~Heschl's gyrus) and later (~STG) stages of auditory processing of speech sounds using a mismatch paradigm. The standard was a word and the deviant differed acoustically within or across a vowel category. Stimuli activated networks in both the left and right hemisphere, but when the deviant crossed a phoneme category boundary an increase in coupling between early and later auditory areas was found in the left hemisphere. A reverse hemisphere effect was found for control tone stimuli.

Here is a more detailed summary of the article provided by lead author, Tom Schofield:

We were interested in discovering how the abstraction of meaningful, invariant representations of speech sounds (i.e. phonemes) occurs in the brain. To do this we used MEG, DCM and a mismatch paradigm employing speech and non-speech stimuli. In the speech condition, we played a repeating ‘standard’ stimulus (the word 'Bart') and periodically interleaved three infrequent ‘deviant’ stimuli created by manipulating the spectro-temporal profile (i.e. formant frequencies) of the vowel sound within the word. The first deviant was behaviourally distinguishable from the standard, but still sounded like 'Bart'. The second deviant was only slightly further away than the first in terms of formant frequency but far enough away that it contained a different phoneme and therefore sounded like a different word - ‘Burt’. The third deviant was much further away acoustically from the standard and sounded like the word ‘Beet’. We thus had three speech deviants that were all acoustically different from the standard, two of which also differed phonemically. We presented the same subjects with an additional set of non-speech sine wave stimuli; a standard tone of the same length and frequency as the 2nd formant of the vowel sound of the speech standard, and three tone deviants of increasing frequency that were matched on the basis of behavioural discrimination to the speech deviants. We modelled the event-related field associated with each deviant with DCM. Our main finding was that although processing of both speech and non-speech sounds engage the same bilateral network within auditory cortex (HG and STG), there is a difference in the way that the brain processes a stimulus change that has a functional meaning (i.e. a phoneme change) versus those that do not. Essentially, phonological processing causes a relative increase in postsynaptic sensitivity within higher levels of auditory cortex in the left hemisphere (left STG) and a concomitant decoupling of the hemispheres at this level. 
In contrast, the effects of equivalent non-speech stimuli change are seen in lower-levels of auditory cortex in the right hemisphere. We do not argue that speech perception at the phonological level is purely left-lateralised, but rather that it engages a bilateral network that displays asymmetric organisation at higher levels of the cortical hierarchy. My guess is that this asymmetry exists because of the likely subsequent interaction between phonemic and lexicosemantic representations; I would argue that the abstraction of higher level, post-phonemic representations is quite strongly left lateralised (e.g. see our Journal of Neuroscience fMRI DCM paper of last year).


This seems to be a nice demonstration of how left and right hemisphere systems process word-level information differently. Tom, I think your conclusions are perfectly reasonable. What do you think would have happened if you used non-word stimuli? Would the strength of the left hemisphere coupling have diminished?

Schofield, T., Iverson, P., Kiebel, S., Stephan, K., Kilner, J., Friston, K., Crinion, J., Price, C., & Leff, A. (2009). Changing meaning causes coupling changes within higher levels of the cortical hierarchy Proceedings of the National Academy of Sciences, 106 (28), 11765-11770 DOI: 10.1073/pnas.0811402106

Monday, August 3, 2009

What is the role of motor cortex in speech perception?

This seems to be a rather contentious issue lately. One recent thread involves a critical paper by Lotto, Holt, & Hickok in TICS, a commentary on that paper by Stephen Wilson, and a response to Wilson. What I like about this thread is that it basically leads to an agreement of viewpoints. Lotto et al. argue that mirror neurons are not a viable instantiation of the motor theory of speech perception. Wilson points out that the motor system plays a top-down role in speech perception (which is different than saying that it is where speech perception happens). We agree completely with Stephen but emphasize that the motor system's role appears to be fairly small. Stephen discusses some interesting data in his commentary and makes a coherent argument. It is definitely worth a look. The other nice thing about this thread: it's nice and short.


Hickok, G., Holt, L., & Lotto, A. (2009). Response to Wilson: What does motor cortex contribute to speech perception? Trends in Cognitive Sciences DOI: 10.1016/j.tics.2009.05.002

Lotto, A., Hickok, G., & Holt, L. (2009). Reflections on mirror neurons and speech perception Trends in Cognitive Sciences, 13 (3), 110-114 DOI: 10.1016/j.tics.2008.11.008

Wilson, S. (2009). Speech perception when the motor system is compromised Trends in Cognitive Sciences DOI: 10.1016/j.tics.2009.06.001

Maps and streams in auditory cortex -- continued, take 4

The final statement I'd like to comment on from Rauschecker and Scott's recent paper is this:

4. "The postero-medial planum temporale area ... is an auditory area important in the act of articulation" p. 721.

The planum temporale clearly contains auditory cortex. However, not all of the PT is auditory. The posterior 1/3 or so corresponds to cytoarchitectonic area Tpt (yellow shade in figure below).



Tpt is not auditory cortex. But don't take my word for it. Here's a quote from Galaburda & Sanides:

Area Tpt represents a transitional type of cortex between the specialized isocortices of the auditory region and the more generalized isocortex (integration cortex) of the inferior parietal lobule. -Galaburda & Sanides 1980, p. 609


Given that many of Rauschecker and Scott's arguments are based on macaque data, here is a quote re: monkey area Tpt:

Tpt is an auditory-related multisensory area, but not part of auditory cortex. -Smiley, Hackett, et al. 2007


Why should we care that area Tpt is not part of auditory cortex? Because (i) it shows clearly that thinking about the planum temporale as a single functional area performing a single computational operation is misguided, and (ii) it supports the view that the posterior sector of the PT is involved in something else besides auditory computations, for example, sensory-motor (not just auditory-motor!) integration.


References

Rauschecker, J., & Scott, S. (2009). Maps and streams in the auditory cortex: nonhuman primates illuminate human speech processing Nature Neuroscience, 12 (6), 718-724 DOI: 10.1038/nn.2331

Galaburda, A., & Sanides, F. (1980). Cytoarchitectonic organization of the human auditory cortex. Journal of Comparative Neurology, 190, 597-610.

Smiley, J. F., Hackett, T. A., Ulbert, I., Karmas, G., Lakatos, P., Javitt, D. C., & Schroeder, C. E. (2007). Multisensory convergence in auditory cortex, I. Cortical connections of the caudal superior temporal plane in macaque monkeys. J Comp Neurol, 502(6), 894-923.