Sunday, December 26, 2010

7th INTERNATIONAL MORPHOLOGICAL PROCESSING CONFERENCE

Call for Papers and Posters

7th INTERNATIONAL MORPHOLOGICAL PROCESSING CONFERENCE

Basque Center on Cognition, Brain and Language.
Donostia-San Sebastián
Spain

June 22nd – 25th 2011
http://www.bcbl.eu/events/morphological/


Keynote Speaker:
William Marslen-Wilson: "Morphology and the Brain".

Podium Discussion
Alec Marantz vs. Dave Plaut: The Morphological Mind

Symposia
Second Language Morphology (Organizer: Harald Clahsen)
Morphology and Cognitive Neuroscience (Organizer: Rob Fiorentino)
Expanding the scope of theories of morphology in reading (Organizers: Ram Frost and Jay Rueckl)

Submissions:

We welcome submissions of abstracts for oral or poster presentations on topics related to Morphological Processing. 


Abstracts can now be submitted electronically and must be submitted by the deadline of February 1st, 2011. They will be reviewed anonymously by expert reviewers, and authors will be notified of decisions by February 15th, 2011.


***IMPORTANT DATES***
Abstract submission deadline:  February 1st, 2011
Notification of abstract acceptance: February 15th, 2011
Early registration deadline: March 1st, 2011
Online registration deadline: May 15th, 2011
Conference dates: June 22nd - 25th, 2011

Tuesday, December 21, 2010

POSTDOCTORAL POSITION -- CENTER FOR LANGUAGE SCIENCE, PENN STATE UNIVERSITY

The Center for Language Science at the Pennsylvania State University invites applications for an anticipated postdoctoral position associated with a new NSF training program, Partnerships for International Research and Education (PIRE): Bilingualism, mind, and brain: An interdisciplinary program in cognitive psychology, linguistics, and cognitive neuroscience. The program seeks to provide training in research on bilingualism that will include an international perspective and that will take advantage of opportunities for collaborative research conducted with one of our international partner sites in the UK (Bangor, Wales), Germany (Leipzig), Spain (Granada and Barcelona), The Netherlands (Nijmegen), and China (Hong Kong and Beijing) and in conjunction with our two domestic partner sites at Haskins Labs (Yale) and the VL2 Science of Learning Center at Gallaudet University.

The successful candidate will benefit from a highly interactive group of faculty whose interests include language processing, language acquisition in children and adults, and language contact. Applicants with an interest in extending their expertise within experimental psycholinguistics, cognitive neuroscience, or linguistic field research are particularly welcome to apply. There is no expectation that applicants will have had prior experience in research on bilingualism. The time that a candidate will spend abroad will be determined by the nature of their research project and by ongoing collaborative arrangements between Penn State and the partner sites.
Questions about faculty research interests may be directed to relevant core training faculty: Psychology: Judith Kroll, Ping Li, Janet van Hell, and Dan Weiss; Spanish: Giuli Dussias, Chip Gerfen, and John Lipski; German: Richard Page and Carrie Jackson. Administrative questions can be directed to the Director of the Center for Language Science, Judith Kroll: jfk7@psu.edu or to the Chair of the search committee, Janet van Hell: jgv3@psu.edu. More information about the Center for Language Science (CLS) and faculty research programs can be found at http://www.cls.psu.edu.

The initial appointment will be for one year, with the possibility of renewal for the next year. Salary and benefits are set by NSF guidelines. Provisions of the NSF training program limit funding to US citizens and permanent residents.

Applicants should send a CV, several reprints or preprints, and a statement of research interests. This statement should indicate two or more core PIRE faculty members as likely primary and secondary mentors and should describe the candidate's goals for research and training during a postdoctoral position, including directions in which the candidate would like to expand his/her theoretical and methodological expertise in the language sciences and ways in which the opportunity to conduct research abroad with different bilingual populations would enhance those goals. Applicants should also provide names of three recommenders and arrange for letters of recommendation to be sent separately.

Application materials should be sent electronically to pirepostdoc@gmail.com. For fullest consideration, all materials should be received by February 1, 2011. Decisions will be made by March 2011. The appointment can begin any time between May 15 and August 15, 2011. We encourage applications from individuals of diverse backgrounds. Penn State is committed to affirmative action, equal opportunity and the diversity of its workforce.

Thursday, December 16, 2010

Graduate student openings: Nicole Wicha's lab, UT San Antonio

Openings for graduate students are available for Fall 2011 in the laboratory of Nicole Wicha, PhD, Assistant Professor of Neurobiology at the University of Texas San Antonio (UTSA). Students interested in the neurobiology of language and bilingual language processing are encouraged to apply. Students will receive training in event-related potentials (ERPs), eye tracking, and behavioral measures, and a PhD in Neurobiology through the Department of Biology at UTSA. Applications will be accepted until 2/1/2011. For more information please visit http://wichalab.utsa.edu/index.html and http://bio.utsa.edu/neurobiology/, or contact Dr. Wicha at Nicole.Wicha@UTSA.edu or 1.210.458.7013.

Tuesday, December 14, 2010

More on intelligibility: Guest post from Jonathan Peelle

Guest post from Jonathan Peelle:

There were certainly a lot of interesting topics that came up at the SfN nanosymposium, which goes to show that we should do this sort of thing more often.

The study of intelligible speech has a long history in neuroimaging. On the one hand, as Greg and others have emphasized, it is a tricky thing to study, because a number of linguistic (and often acoustic) factors are confounded when looking at intelligible > unintelligible contrasts. So once we identify intelligibility-responsive areas, we still have a lot of work to do in order to relate anatomy to cognitive operations involved in speech comprehension. That being said, it does seem like a good place to start, and a reasonable way to try to dissociate language-related processing from auditory/acoustic processing. Depending on the approach used, intelligibility studies can also tell us a great deal about speech comprehension under challenging conditions (e.g. background noise, cochlear implants, hearing loss) that have both theoretical and practical relevance.

One thing I suspect everyone agrees on is that, at the end of the day, we should be able to account for multiple sources of evidence: lesion, PET, fMRI, EEG/MEG, as well as various effects of stimuli and analysis approach. With that in mind, there are a few comments to add to this discussion.

Regarding Okada et al. (2010), I won’t repeat all the points we have made previously (Peelle et al., 2010a), but the influence of background noise (continuous scanning) shouldn’t be underestimated. If background noise simply increased global brain signal (i.e., an increase in gain), it shouldn’t have affected the results. But background noise can interact with behavioral factors, and can result in spatially constrained patterns of univariate signal increase (including left temporal cortex, e.g. Peelle et al. 2010b):

[Figure from Peelle et al. (2010b): spatially constrained univariate signal increases under background noise, including left temporal cortex]

So, in the absence of data I am reluctant to assume that background noise and listening effort wouldn’t affect multivariate results. This goes along with the point that even if two types of stimuli are intelligible, they can differ in listening effort, which is going to impact the neural systems engaged in comprehension. In Okada et al. (2010), this means that a region that distinguishes between the clear and vocoded conditions might be showing acoustic sensitivity (the argument made by Okada et al.), or it may instead be indexing listening effort.
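
To make the earlier gain point concrete, here is a minimal numpy sketch (all values invented; this is not an analysis from any of the papers discussed). A uniform gain change leaves correlation-based pattern similarity untouched, whereas a signal increase confined to a subset of voxels changes the pattern itself:

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 100

# Simulated voxel response pattern for one condition (values invented).
pattern = rng.normal(0, 1, n_voxels)

def pattern_similarity(a, b):
    # Pearson correlation, a common multivariate similarity measure.
    return np.corrcoef(a, b)[0, 1]

# A pure gain change (every voxel scaled by the same factor) leaves
# correlation-based pattern structure untouched.
print(pattern_similarity(pattern, pattern * 1.5))  # exactly 1.0

# A spatially constrained increase (extra signal added to only a subset
# of voxels, e.g. a left temporal region under noise) alters the pattern.
effortful = pattern.copy()
effortful[:30] += rng.normal(1.0, 0.5, 30)
print(pattern_similarity(pattern, effortful))      # < 1.0
```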

Another point worth emphasizing is that although the materials introduced by Scott et al. (2000) have many advantages and have been used in many papers, there are a number of ways to investigate intelligibility responses, and we should be careful not to conclude too much from a single approach. As we have pointed out, Davis and Johnsrude (2003) parametrically varied intelligibility within three types of acoustic degradation, and found regions insensitive to acoustic form both posterior and anterior to primary auditory areas in the left hemisphere, and anterior to primary auditory cortex in the right hemisphere.

[Figure from Davis and Johnsrude (2003): intelligibility-responsive regions posterior and anterior to primary auditory cortex]

One advantage to this approach is that parametrically varying speech clarity may give a more sensitive way to assess intelligibility responses than a dichotomous “intelligible > unintelligible” contrast. The larger point is that multivariate analyses, although extremely useful, are not a magic bullet; we also need to carefully consider the particular stimuli and task used (which I would argue also includes background noise).
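
As an illustration of why a graded manipulation can be more sensitive, here is a minimal toy GLM sketch contrasting a dichotomous regressor with a parametric one (the clarity values and the simulated voxel response are invented, not the actual design of Davis and Johnsrude, 2003):

```python
import numpy as np

# Hypothetical per-trial speech clarity (0 = unintelligible, 1 = clear).
clarity = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 0.3, 0.9])
n_trials = len(clarity)

# Dichotomous coding: a binary intelligible (> 0.5) vs unintelligible split.
dichotomous = (clarity > 0.5).astype(float)

# Parametric coding: the mean-centered clarity level itself, which can
# capture graded responses that a binary split averages away.
parametric = clarity - clarity.mean()

# Simulated voxel that responds in a graded way to clarity, plus noise.
rng = np.random.default_rng(0)
y = 2.0 * clarity + rng.normal(0, 0.5, n_trials)

# Ordinary least squares fit of each model (intercept + regressor).
for name, reg in [("dichotomous", dichotomous), ("parametric", parametric)]:
    X = np.column_stack([np.ones(n_trials), reg])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(name, "effect estimate:", round(beta[1], 2))
```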

Incidentally, in Davis and Johnsrude (2003), responses that are increased when speech is distorted (aka listening effort) look like this (i.e. including regions of temporal cortex):

[Figure from Davis and Johnsrude (2003): regions responding more to distorted than to clear speech, including temporal cortex]

The role of inferotemporal cortex in speech comprehension

One side point which came up in discussion at the symposium was the role of posterior inferior temporal gyrus / fusiform, which appears in the Hickok & Poeppel model; I think the initial point was that this is not consistently seen in functional imaging studies, to which Greg replied that the primary support for that region was lesion data. It’s true that this region of inferotemporal cortex isn’t always discussed in functional imaging studies, but it actually occurs quite often—often enough that I would say the functional imaging evidence for its importance is rather strong. We review some of this evidence briefly in Peelle et al. (2010b; p. 1416, bottom), but it includes the following studies:

[Study list: Awad et al. (2007); Orfanidou et al. (2006); Rodd et al. (2005); Rodd et al. (2010); see the reference list below.]

Speaking of inferotemporal cortex, there is a nice peak here in the Okada et al. results (Figure 2, Table 1):

[Figure: Okada et al. (2010), Figure 2 and Table 1, showing a peak in inferotemporal cortex]

Once you start looking for it, it crops up rather often. (Although it’s also worth noting that a lack of results in this region in some fMRI studies may reflect susceptibility artifacts rather than a lack of neural engagement.)

Anterior vs. Posterior: Words vs. Sentences?

With respect to the discussion about posterior vs. anterior temporal regions being critical for speech comprehension, it strikes me that we all need to be careful about terminology. That is, does “speech” refer to connected speech (sentences) or single words? One explanation of the lesion data referred to, in which a patient with severe left anterior temporal damage performed well on “speech perception”, is that the task was auditory word comprehension. How did this patient do on sentence comprehension measures? I think a compelling case could be made that auditory word comprehension is largely bilateral and more posterior, but that in connected speech more anterior (and perhaps left-lateralized) regions become more critical (e.g., Humphries et al., 2006):

[Figure from Humphries et al. (2006): anterior temporal responses during auditory sentence comprehension]

As far as I know, no one has done functional imaging of intelligibility of single words in the way that many have done with sentences; nor have there been sentence comprehension measures on patients with left anterior temporal lobe damage. So, at this point I think more work needs to be done before we can directly compare these sources of evidence.

Broadly though, I don’t know how productive it will be to specify which area responds “most” to intelligible speech. Given the variety of challenges which our auditory and language systems need to deal with, surely it comes down to a network of regions that are dynamically called into action depending on (acoustic and cognitive) task demands. This is why I think that we need to include regions of prefrontal, premotor, and inferotemporal cortex in these discussions, even if they don’t appear in every imaging contrast.

References:

Awad M, Warren JE, Scott SK, Turkheimer FE, Wise RJS (2007) A common system for the comprehension and production of narrative speech. Journal of Neuroscience 27:11455-11464. http://dx.doi.org/10.1523/JNEUROSCI.5257-06.2007

Davis MH, Johnsrude IS (2003) Hierarchical processing in spoken language comprehension. Journal of Neuroscience 23: 3423-3431. http://www.jneurosci.org/cgi/content/abstract/23/8/3423

Humphries C, Binder JR, Medler DA, Liebenthal E (2006) Syntactic and semantic modulation of neural activity during auditory sentence comprehension. Journal of Cognitive Neuroscience 18:665-679. http://dx.doi.org/10.1162/jocn.2006.18.4.665

Okada K, Rong F, Venezia J, Matchin W, Hsieh I-H, Saberi K, Serences JT, Hickok G (2010) Hierarchical organization of human auditory cortex: Evidence from acoustic invariance in the response to intelligible speech. Cerebral Cortex 20:2486-2495. http://dx.doi.org/10.1093/cercor/bhp318

Orfanidou E, Marslen-Wilson WD, Davis MH (2006) Neural response suppression predicts repetition priming of spoken words and pseudowords. Journal of Cognitive Neuroscience 18:1237-1252. http://dx.doi.org/10.1162/jocn.2006.18.8.1237

Peelle JE, Johnsrude IS, Davis MH (2010a) Hierarchical processing for speech in human auditory cortex and beyond [Commentary on Okada et al. (2010)]. Frontiers in Human Neuroscience 4: 51. http://frontiersin.org/Human_Neuroscience/10.3389/fnhum.2010.00051/full

Peelle JE, Eason RJ, Schmitter S, Schwarzbauer C, Davis MH (2010b) Evaluating an acoustically quiet EPI sequence for use in fMRI studies of speech and auditory processing. NeuroImage 52: 1410–1419. http://dx.doi.org/10.1016/j.neuroimage.2010.05.015

Rodd JM, Davis MH, Johnsrude IS (2005) The neural mechanisms of speech comprehension: fMRI studies of semantic ambiguity. Cerebral Cortex 15:1261-1269. http://dx.doi.org/10.1093/cercor/bhi009

Rodd JM, Longe OA, Randall B, Tyler LK (2010) The functional organisation of the fronto-temporal language system: Evidence from syntactic and semantic ambiguity. Neuropsychologia 48:1324-1335. http://dx.doi.org/10.1016/j.neuropsychologia.2009.12.035

Scott SK, Blank CC, Rosen S, Wise RJS (2000) Identification of a pathway for intelligible speech in the left temporal lobe. Brain 123:2400-2406. http://dx.doi.org/10.1093/brain/123.12.2400

Monday, December 6, 2010

To remember Tom Schofield

I am writing to inform the community of very sad news. Tom Schofield, a terrific young scholar who trained with Alex Leff and Cathy Price in London and moved to New York recently as a post-doc in my lab, was killed in a bus accident in Colombia, South America, last week. He was traveling over the Thanksgiving break.

Obviously, his family, friends, and colleagues are in shock and completely distraught over this tragedy. We share in our profound grief with Tom's parents and sisters, his girlfriend Rashida, and all his friends and colleagues.

We have all been cheated out of a friend and a young scientist with tremendous promise. Tom quickly became a treasured colleague and companion to people around him. His mixture of low-key but incisive intelligence, personal warmth, sense of humor and perspective made him a focal point of a lab group.  

There is little to say in the wake of such a disaster. I would like this community to celebrate Tom by remembering his work and thinking about his contributions and the direction his research was taking. Tom had just defended his dissertation in London (his viva) and was already deeply into new projects in New York. Here are papers that Tom played a critical role in. Tom was a regular reader and (one of the few) contributors/commenters on this blog.

The left superior temporal gyrus is a shared substrate for auditory short-term memory and speech comprehension: evidence from 210 patients with stroke. Leff AP, Schofield TM, Crinion JT, Seghier ML, Grogan A, Green DW, Price CJ. Brain. 2009 Dec;132(Pt 12):3401-10.

Changing meaning causes coupling changes within higher levels of the cortical hierarchy. Schofield TM, Iverson P, Kiebel SJ, Stephan KE, Kilner JM, Friston KJ, Crinion JT, Price CJ, Leff AP. Proc Natl Acad Sci U S A. 2009 Jul 14;106(28):11765-70.

Vowel-specific mismatch responses in the anterior superior temporal gyrus: an fMRI study. Leff AP, Iverson P, Schofield TM, Kilner JM, Crinion JT, Friston KJ, Price CJ. Cortex. 2009 Apr;45(4):517-26.

The cortical dynamics of intelligible speech. Leff AP, Schofield TM, Stephan KE, Crinion JT, Friston KJ, Price CJ. J Neurosci. 2008 Dec 3;28(49):13209-15.

Inter-subject variability in the use of two different neuronal networks for reading aloud familiar words. Seghier ML, Lee HL, Schofield T, Ellis CL, Price CJ. Neuroimage. 2008 Sep 1;42(3):1226-36.


For those of you who would like to contribute a comment, story, memory, or any other piece of information about Tom, I have set up a blog in his memory: tomschofield.blogspot.com.

David

Friday, December 3, 2010

Did Wernicke really postulate only two language centers?

Every once in a while I look back at Wernicke's original 1874 monograph, and every time I do, I learn something new. It is not much of a stretch (and might even be true) to say that our modern accounts of the functional anatomy of language are relatively minor tweaks to Wernicke's model -- despite what Friedemann Pulvermuller claims to the contrary ;-)

So today I looked again and noticed that, in contrast to current belief, including my own, Wernicke did not just postulate two language centers. In fact, he postulated a continuous network comprising the "first convolution" together with the insular cortex as "a speech center". By "first convolution" Wernicke means the gyrus that encircles the Sylvian fossa, i.e., the superior temporal, supramarginal, and inferior frontal gyri (it does make a nice continuous arc).

But this was a network organized into a functional continuum, with the superior temporal region serving sensory (auditory) functions, and the inferior frontal region serving motor functions. Now we all think that Wernicke considered these two zones to be connected via a white matter fiber bundle, the arcuate fasciculus, but this is not true (the AF was postulated later). My earlier readings of Wernicke suggested to me that he thought the connection was via a white matter tract that coursed behind the insula. But it seems that this is wrong too. Rather, Wernicke proposes that the entire first convolution zone is interconnected via the insular cortex. Here are the relevant quotes:

The existence of fibrae propriae [a term from Meynert referring, I believe, to connection fibers generally]..., between the insular cortex and the convolutions of the convexity has also been demonstrated. Since to my knowledge these have not previously been described and since they constitute a major proof of the unitary character of the entire first primitive convolution and the insular cortex, the reader will permit me to speak further of them. p. 46


He goes on for several paragraphs describing fibers that seem to connect the first convolution with the insula. At one point he even gives advice on how to see them for yourself...

...it is best first to apply the scalpel about halfway up the inner surface of the operculum... p. 47


I suppose that is kind of like us now saying that it is best first to apply spatial smoothing with a gaussian kernel... Anyway, here he states his conclusions on the matter quite clearly:

The consideration of the anatomical circumstances just described, of the numerous supporting post-mortem studies, and finally of the variety in the clinical picture of aphasia thus leads compellingly to the following interpretation of the situation. The entire region of the first convolution, which circles around the fossa Sylvii serves in conjunction with the insular cortex as a speech center. The first frontal convolution, which is a motor area, is the center of representations of movement; the first temporal convolution, a sensory area, is the center for sound images. The fibrae propriae which come together in the insular cortex form the mediating psychic reflex arcs. p. 47


So it isn't just Broca's area, Wernicke's area, and a white matter bundle. Rather it is a continuous but functionally graded region inter-connected by a -- dare I say -- computational hub, the insula. He may not have been entirely correct about the insula as a whole, but what seems clear is that the 19th century neurologists, including the so-called "classical" ones exemplified by Wernicke, had a much more dynamic and complex view of brain systems than we give them credit for.

Reference

Wernicke C (1874/1969) The symptom complex of aphasia: A psychological study on an anatomical basis. In: Boston studies in the philosophy of science (Cohen RS, Wartofsky MW, eds), pp 34-97. Dordrecht: D. Reidel Publishing Company.

Wednesday, December 1, 2010

Why the obsession with intelligibility in speech processing studies?

There was a very interesting speech/language session at SfN this year organized by Jonathan Peelle. Talks included presentations by Sophie Scott, Jonas Obleser, Sonia Kotz, Matt Davis, and others, spanning an impressive range of methods and perspectives on auditory language processing. Good stuff and a fun group of people. It felt kind of like a joint lab meeting with lots of discussion.

I want to emphasize one of the issues that came up, namely, the brain's response to intelligible speech and what we can learn from it. Here's a brief history.

2000 - Sophie Scott, Richard Wise and colleagues published a very influential paper which identified a left anterior temporal lobe region that responded more to intelligible speech (clear and noise vocoded sentences) than unintelligible speech (spectrally rotated versions of the intelligible speech stimuli). It was argued that this is the "pathway for intelligible speech".

2000 - Hickok & Poeppel published a critical review of the speech perception literature arguing, on the basis of primarily lesion data, that speech perception is bilaterally organized and implicates posterior superior temporal regions in speech sound perception.

2000-2006 - Several more papers from Scott/Wise's group replicated this basic finding, but additional areas started creeping into the picture, including left posterior regions and right hemisphere regions. The example figure below is from Spitsyna et al. 2006:

[Figure from Spitsyna et al. (2006)]

2007 - Hickok & Poeppel again reviewed the broader literature on speech perception, including lesion work as well as studies that attempted to isolate phonological-level processes more specifically. It is concluded, yes you guessed it, that Hickok & Poeppel 2000 were pretty much correct in their claim of a bilaterally organized posterior temporal speech perception system.

2009 - Rauschecker and Scott publish their "Maps and Streams" review paper arguing just as strongly that speech perception is left lateralized and is dependent on an anterior pathway. As far as I can tell, this claim is based on (i) analogy to the ventral stream projection in the monkey auditory system (note: we may not yet fully understand the primate auditory system, and given that monkeys don't have speech, the homologies may be less than perfect), and (ii) the fact that the peak activation in intelligible minus unintelligible contrasts tends to fall in the left anterior temporal lobe.

2010 - Okada et al. publish a replication of Scott et al. 2000 using a much larger sample than any previous study (n=20, compared to n=8 in Scott et al. 2000) and find robust bilateral anterior and posterior activations in the superior temporal lobe for intelligible compared to unintelligible speech. See the figure below, which shows the group activation (top) and peak activations in individual subjects (bottom). Note that even though it doesn't show up in the group analysis, activation extends to right posterior STG/STS in most subjects.

[Figure: Okada et al. (2010) group activation (top) and individual-subject activation peaks (bottom)]

So that's the history. As was revealed at the SfN session, controversy still remains, despite the existence of what I thought was fairly compelling evidence against an exclusively anterior-going projection pathway.

Here's what came out at the conference.

I presented lesion evidence collected with my collaborators Corianne Rogalsky, Hanna Damasio, and Steven Anderson, which showed that destruction of the left anterior temporal lobe "intelligibility area" has zero effect on speech perception (see figure below). This example patient performed with 100% accuracy on a test of auditory word comprehension (4AFC, word to picture matching with all phonemic foils, including minimal pairs), and 98% accuracy on a minimal pair syllable discrimination test. Combine this with the fact that auditory comprehension deficits are most strongly associated with lesions in the posterior MTG (Bates et al. 2003) and this adds up to a major problem for the Scott et al. theory.

[Figure: lesion involving the left anterior temporal lobe "intelligibility area" in a patient with intact speech perception]

The counter-argument from the Scott camp was directed exclusively at the imaging data. I'll try to summarize their main points as accurately as possible. Someone correct me if I've got them wrong.

1. Left ATL is the peak activation in intelligible vs. unintelligible contrasts
2. Okada et al. did not use sparse sampling acquisition (true), which increased the intelligibility processing load (possible), thus recruiting posterior and right hemisphere involvement
3. Okada et al. used an "active task" which affected the activation pattern (we asked subjects to press a button indicating whether the sentence was intelligible or not).

First and most importantly, none of these counter-arguments provides an account of the lesion data. We have to look at all sources of data in building our theories.

Regarding point #2: I will admit that it is possible that the extra noise taxed the system more than normal and this could have increased the signal throughout the network. However, these same regions are showing up in the reports of Scott and colleagues, even in the PET scans, and the regions that are showing up (bilateral pSTG/STS) are the same as those implicated in lesion work and in imaging studies that target phonological level processes.

Regarding point #3: I'm all for paying close attention to the task in explaining (or explaining away) activation patterns. However, if the task directly assesses the behavior of interest (which is not the case in many studies), this argument doesn't hold. The goal of all this work is to map the network for processing intelligible speech. If we are asking subjects to tell us whether the sentence is intelligible, this should drive the network of interest. Unless, I suppose, you think that the pSTG is involved in decision processes, which is highly dubious.

This brings us to point #1: Yes, it does appear that the peak activation in the intell vs. unintell contrast is in the left anterior temporal lobe. This tendency is what drives the Scott et al. theory. But why the obsession with this contrast? There are two primary reasons why we shouldn't be obsessed with it. In fact, these points question whether there is any usefulness to the contrast at all.

1. It's confounded. Intelligible speech differs from unintelligible speech on a host of dimensions: phonemic, lexical, semantic, syntactic, prosodic, and compositional semantic content. Further, the various intelligibility conditions are acoustically different; just listen to them, or note that A1 can reliably classify each condition from the others (Okada et al. 2010; see the sketch after this list). It is therefore extremely unclear what the contrast is isolating.

2. By performing this contrast, one is assuming that any region that fails to show a difference between the conditions is not part of the pathway for intelligible speech. This is clearly an incorrect assumption: in the extreme case, peripheral hearing loss impairs the ability to understand speech even though the peripheral auditory system does not respond exclusively to intelligible speech. Closer to the point, even if it were the case that the left pSTG/STS did not show an activation difference between intelligible and unintelligible speech, it could still be THE region responsible for speech perception. In fact, if the job of a speech perception network is to take spectrotemporal patterns as input and map these onto stored representations of speech sound categories, one would expect activation of this network across a range of spectrotemporal patterns, not only those that are "intelligible".
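
For what it's worth, here is a minimal sketch of the kind of pattern classification that can separate acoustically different conditions even from an early auditory region. The data are simulated, and this is not the actual analysis pipeline of Okada et al. (2010); the point is only that stable acoustic differences alone are enough to support classification:

```python
import numpy as np

rng = np.random.default_rng(2)
n_voxels, n_runs, n_conds = 50, 10, 4

# Simulated "A1" patterns for four conditions (e.g. clear, vocoded,
# rotated, rotated-vocoded): a stable pattern per condition plus
# run-to-run noise. All values are invented.
base = [rng.normal(0, 1, n_voxels) for _ in range(n_conds)]
data = [b + rng.normal(0, 0.8, (n_runs, n_voxels)) for b in base]

# Leave-one-run-out, correlation-based nearest-centroid classification.
correct, total = 0, 0
for test_run in range(n_runs):
    # Condition centroids estimated from the training runs only.
    centroids = [np.delete(data[c], test_run, axis=0).mean(axis=0)
                 for c in range(n_conds)]
    for c in range(n_conds):
        test = data[c][test_run]
        sims = [np.corrcoef(test, cen)[0, 1] for cen in centroids]
        correct += int(np.argmax(sims) == c)
        total += 1

print(f"accuracy: {correct / total:.2f}  (chance = {1 / n_conds:.2f})")
```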

I don't expect this debate to end soon. In fact, one suggestion for the next "debate" at the NLC conference is Scott vs. Poeppel. That would be fun.

References

Bates, E., Wilson, S.M., Saygin, A.P., Dick, F., Sereno, M.I., Knight, R.T., and Dronkers, N.F. (2003). Voxel-based lesion-symptom mapping. Nat Neurosci 6, 448-450.

Hickok, G., and Poeppel, D. (2000). Towards a functional neuroanatomy of speech perception. Trends in Cognitive Sciences 4, 131-138.

Hickok, G., and Poeppel, D. (2007). The cortical organization of speech processing. Nat Rev Neurosci 8, 393-402.

Okada, K., Rong, F., Venezia, J., Matchin, W., Hsieh, I.H., Saberi, K., Serences, J.T., and Hickok, G. (2010). Hierarchical organization of human auditory cortex: evidence from acoustic invariance in the response to intelligible speech. Cereb Cortex 20, 2486-2495.

Narain, C., Scott, S.K., Wise, R.J., Rosen, S., Leff, A., Iversen, S.D., and Matthews, P.M. (2003). Defining a left-lateralized response specific to intelligible speech using fMRI. Cereb Cortex 13, 1362-1368.

Rauschecker, J.P., and Scott, S.K. (2009). Maps and streams in the auditory cortex: nonhuman primates illuminate human speech processing. Nat Neurosci 12, 718-724.

Scott, S.K., Blank, C.C., Rosen, S., and Wise, R.J.S. (2000). Identification of a pathway for intelligible speech in the left temporal lobe. Brain 123, 2400-2406.

Spitsyna, G., Warren, J.E., Scott, S.K., Turkheimer, F.E., and Wise, R.J. (2006). Converging language streams in the human temporal lobe. J Neurosci 26, 7328-7336.