Sunday, December 26, 2010

7th INTERNATIONAL MORPHOLOGICAL PROCESSING CONFERENCE

Call for Papers and Posters

7th INTERNATIONAL MORPHOLOGICAL PROCESSING CONFERENCE

Basque Center on Cognition, Brain and Language.
Donostia-San Sebastián
Spain

June 22nd – 25th 2011
http://www.bcbl.eu/events/morphological/


Keynote Speaker:
William Marslen-Wilson: "Morphology and the Brain".

Podium Discussion
Alec Marantz vs. Dave Plaut: The Morphological Mind

Symposia
Second Language Morphology (Organizer: Harald Clahsen)
Morphology and Cognitive Neuroscience (Organizer: Rob Fiorentino)
Expanding the scope of theories of morphology in reading (Organizers: Ram Frost and Jay Rueckl)

Submissions:

We welcome submissions of abstracts for oral or poster presentations on topics related to Morphological Processing. 


Abstracts can now be submitted electronically and must be submitted by the deadline of February 1st, 2011. They will be reviewed anonymously by expert reviewers, and authors will be notified of decisions by February 15th, 2011.


***IMPORTANT DATES***
Abstract submission deadline:  February 1st, 2011
Notification of abstract acceptance: February 15th, 2011
Early registration deadline: March 1st, 2011
Online registration deadline: May 15th, 2011
Conference dates: June 22nd - 25th, 2011

Tuesday, December 21, 2010

POSTDOCTORAL POSITION -- CENTER FOR LANGUAGE SCIENCE, PENN STATE UNIVERSITY

The Center for Language Science at the Pennsylvania State University invites applications for an anticipated postdoctoral position associated with a new NSF training program, Partnerships for International Research and Education (PIRE): Bilingualism, mind, and brain: An interdisciplinary program in cognitive psychology, linguistics, and cognitive neuroscience. The program seeks to provide training in research on bilingualism that will include an international perspective and that will take advantage of opportunities for collaborative research conducted with one of our international partner sites in the UK (Bangor, Wales), Germany (Leipzig), Spain (Granada and Barcelona), The Netherlands (Nijmegen), and China (Hong Kong and Beijing) and in conjunction with our two domestic partner sites at Haskins Labs (Yale) and the VL2 Science of Learning Center at Gallaudet University.

The successful candidate will benefit from a highly interactive group of faculty whose interests include language processing, language acquisition in children and adults, and language contact. Applicants with an interest in extending their expertise within experimental psycholinguistics, cognitive neuroscience, or linguistic field research are particularly welcome to apply. There is no expectation that applicants will have had prior experience in research on bilingualism. The time that a candidate will spend abroad will be determined by the nature of their research project and by ongoing collaborative arrangements between Penn State and the partner sites.
Questions about faculty research interests may be directed to relevant core training faculty: Psychology: Judith Kroll, Ping Li, Janet van Hell, and Dan Weiss; Spanish: Giuli Dussias, Chip Gerfen, and John Lipski; German: Richard Page and Carrie Jackson. Administrative questions can be directed to the Director of the Center for Language Science, Judith Kroll: jfk7@psu.edu or to the Chair of the search committee, Janet van Hell: jgv3@psu.edu. More information about the Center for Language Science (CLS) and faculty research programs can be found at http://www.cls.psu.edu.

The initial appointment will be for one year, with the possibility of renewal for the next year. Salary and benefits are set by NSF guidelines. Provisions of the NSF training program limit funding to US citizens and permanent residents.

Applicants should send a CV, several reprints or preprints, and a statement of research interests. This statement should indicate two or more core PIRE faculty members as likely primary and secondary mentors and should describe the candidate's goals for research and training during a postdoctoral position, including directions in which the candidate would like to expand his/her theoretical and methodological expertise in the language sciences and ways in which the opportunity to conduct research abroad with different bilingual populations would enhance those goals. Applicants should also provide names of three recommenders and arrange for letters of recommendation to be sent separately.

Application materials should be sent electronically to pirepostdoc@gmail.com. For fullest consideration, all materials should be received by February 1, 2011. Decisions will be made by March 2011. The appointment can begin any time between May 15 and August 15, 2011. We encourage applications from individuals of diverse backgrounds. Penn State is committed to affirmative action, equal opportunity and the diversity of its workforce.

Thursday, December 16, 2010

Graduate student openings: Nicole Wicha's lab, UT San Antonio

Openings for graduate students are available for Fall 2011 in the laboratory of Nicole Wicha, PhD, Assistant Professor of Neurobiology at the University of Texas at San Antonio (UTSA). Students interested in the neurobiology of language and bilingual language processing are encouraged to apply. Students will receive training in event-related potentials (ERPs), eye tracking, and behavioral measures, and a PhD in Neurobiology through the Department of Biology at UTSA. Applications will be accepted until 2/1/2011. For more information, please visit http://wichalab.utsa.edu/index.html and http://bio.utsa.edu/neurobiology/, or contact Dr. Wicha at Nicole.Wicha@UTSA.edu or 1.210.458.7013.

Tuesday, December 14, 2010

More on intelligibility: Guest post from Jonathan Peelle

Guest post from Jonathan Peelle:

There were certainly a lot of interesting topics that came up at the SfN nanosymposium, which goes to show, I think, that we should do this sort of thing more often.

The study of intelligible speech has a long history in neuroimaging. On the one hand, as Greg and others have emphasized, it is a tricky thing to study, because a number of linguistic (and often acoustic) factors are confounded when looking at intelligible > unintelligible contrasts. So once we identify intelligibility-responsive areas, we still have a lot of work to do in order to relate anatomy to cognitive operations involved in speech comprehension. That being said, it does seem like a good place to start, and a reasonable way to try to dissociate language-related processing from auditory/acoustic processing. Depending on the approach used, intelligibility studies can also tell us a great deal about speech comprehension under challenging conditions (e.g. background noise, cochlear implants, hearing loss) that have both theoretical and practical relevance.

One thing I suspect everyone agrees on is that, at the end of the day, we should be able to account for multiple sources of evidence: lesion, PET, fMRI, EEG/MEG, as well as various effects of stimuli and analysis approach. With that in mind, there are a few comments to add to this discussion.

Regarding Okada et al. (2010), I won’t repeat all the points we have made previously (Peelle et al., 2010a), but the influence of background noise (continuous scanning) shouldn’t be underestimated. If background noise simply increased global brain signal (i.e., an increase in gain), it shouldn’t have impacted the results. But background noise can interact with behavioral factors, resulting in spatially constrained patterns of univariate signal increase (including left temporal cortex; e.g., Peelle et al., 2010b):

So, in the absence of data I am reluctant to assume that background noise and listening effort wouldn’t affect multivariate results. This goes along with the point that even if two types of stimuli are intelligible, they can differ in listening effort, which is going to impact the neural systems engaged in comprehension. In Okada et al. (2010), this means that a region that distinguishes between the clear and vocoded conditions might be showing acoustic sensitivity (the argument made by Okada et al.), or it may instead be indexing listening effort.

Another point worth emphasizing is that although the materials introduced by Scott et al. (2000) have many advantages and have been used in a number of papers, there are a number of ways to investigate intelligibility responses, and we should be careful not to conclude too much from a single approach. As we have pointed out, Davis and Johnsrude (2003) parametrically varied intelligibility within three types of acoustic degradation, and found regions of acoustic insensitivity both posterior and anterior to primary auditory areas in the left hemisphere, and anterior to primary auditory cortex in the right hemisphere.

One advantage to this approach is that parametrically varying speech clarity may give a more sensitive way to assess intelligibility responses than a dichotomous “intelligible > unintelligible” contrast. The larger point is that multivariate analyses, although extremely useful, are not a magic bullet; we also need to carefully consider the particular stimuli and task used (which I would argue also includes background noise).
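The multivariate point can be made concrete with a toy sketch (numpy only; the data sizes, noise levels, and the nearest-centroid classifier are all illustrative assumptions, not any study's actual pipeline). Leave-one-out classification asks whether a region's distributed voxel pattern distinguishes two conditions, which is a different question from whether its mean signal differs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical stimulus conditions evoke subtly different multi-voxel
# patterns in a region of interest; single trials are noisy samples of
# each condition's pattern.
n_trials, n_voxels = 40, 50
pattern_a = rng.normal(0, 1, n_voxels)
pattern_b = pattern_a + rng.normal(0, 0.5, n_voxels)  # small distributed difference

X = np.vstack([pattern_a + rng.normal(0, 1, (n_trials, n_voxels)),
               pattern_b + rng.normal(0, 1, (n_trials, n_voxels))])
y = np.array([0] * n_trials + [1] * n_trials)

def loo_nearest_centroid(X, y):
    """Leave-one-out cross-validated nearest-centroid accuracy."""
    correct = 0
    for i in range(len(y)):
        train = np.arange(len(y)) != i  # hold out trial i
        c0 = X[train & (y == 0)].mean(axis=0)
        c1 = X[train & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        correct += int(pred == y[i])
    return correct / len(y)

acc = loo_nearest_centroid(X, y)

# The mean signal barely differs between conditions even though the
# distributed patterns are discriminable.
mean_diff = abs(X[y == 0].mean() - X[y == 1].mean())
```

Here `acc` can sit well above chance while `mean_diff` stays near zero, which is why a multivariate effect need not imply a univariate one; it is also why anything that reshapes the pattern (noise, listening effort, task) can change what such a classifier picks up.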

Incidentally, in Davis and Johnsrude (2003), responses that are increased when speech is distorted (aka listening effort) look like this (i.e. including regions of temporal cortex):

The role of inferotemporal cortex in speech comprehension

One side point which came up in discussion at the symposium was the role of posterior inferior temporal gyrus / fusiform, which appears in the Hickok & Poeppel model; I think the initial point was that this is not consistently seen in functional imaging studies, to which Greg replied that the primary support for that region was lesion data. It’s true that this region of inferotemporal cortex isn’t always discussed in functional imaging studies, but it actually occurs quite often—often enough that I would say the functional imaging evidence for its importance is rather strong. We review some of this evidence briefly in Peelle et al. (2010b; p. 1416, bottom), but it includes the following studies:

Speaking of inferotemporal cortex, there is a nice peak here in the Okada et al. results (Figure 2, Table 1):

Once you start looking for it, it crops up rather often. (Although it’s also worth noting that the lack of results in this region in fMRI studies may be due to susceptibility artifacts there, rather than a lack of neural engagement.)

Anterior vs. Posterior: Words vs. Sentences?

With respect to the discussion about posterior vs. anterior temporal regions being critical for speech comprehension, it strikes me that we all need to be careful about terminology. That is, does “speech” refer to connected speech (sentences) or single words? One explanation of the lesion data referred to, in which a patient with severe left anterior temporal damage performed well on “speech perception,” is that the task was auditory word comprehension. How did this patient do on sentence comprehension measures? I think a compelling case could be made that auditory word comprehension is largely bilateral and more posterior, but that in connected speech more anterior (and perhaps left-lateralized) regions become more critical (e.g., Humphries et al., 2006):

As far as I know, no one has done functional imaging of intelligibility of single words in the way that many have done with sentences; nor have there been sentence comprehension measures on patients with left anterior temporal lobe damage. So, at this point I think more work needs to be done before we can directly compare these sources of evidence.

Broadly though, I don’t know how productive it will be to specify which area responds “most” to intelligible speech. Given the variety of challenges which our auditory and language systems need to deal with, surely it comes down to a network of regions that are dynamically called into action depending on (acoustic and cognitive) task demands. This is why I think that we need to include regions of prefrontal, premotor, and inferotemporal cortex in these discussions, even if they don’t appear in every imaging contrast.

References:

Awad M, Warren JE, Scott SK, Turkheimer FE, Wise RJS (2007) A common system for the comprehension and production of narrative speech. Journal of Neuroscience 27:11455-11464. http://dx.doi.org/10.1523/JNEUROSCI.5257-06.2007

Davis MH, Johnsrude IS (2003) Hierarchical processing in spoken language comprehension. Journal of Neuroscience 23: 3423-3431. http://www.jneurosci.org/cgi/content/abstract/23/8/3423

Humphries C, Binder JR, Medler DA, Liebenthal E (2006) Syntactic and semantic modulation of neural activity during auditory sentence comprehension. Journal of Cognitive Neuroscience 18:665-679. http://dx.doi.org/10.1162/jocn.2006.18.4.665

Okada K, Rong F, Venezia J, Matchin W, Hsieh I-H, Saberi K, Serences JT, Hickok G (2010) Hierarchical organization of human auditory cortex: Evidence from acoustic invariance in the response to intelligible speech. Cerebral Cortex 20:2486-2495. http://dx.doi.org/10.1093/cercor/bhp318

Orfanidou E, Marslen-Wilson WD, Davis MH (2006) Neural response suppression predicts repetition priming of spoken words and pseudowords. Journal of Cognitive Neuroscience 18:1237-1252. http://dx.doi.org/10.1162/jocn.2006.18.8.1237

Peelle JE, Johnsrude IS, Davis MH (2010a) Hierarchical processing for speech in human auditory cortex and beyond [Commentary on Okada et al. (2010)]. Frontiers in Human Neuroscience 4: 51. http://frontiersin.org/Human_Neuroscience/10.3389/fnhum.2010.00051/full

Peelle JE, Eason RJ, Schmitter S, Schwarzbauer C, Davis MH (2010b) Evaluating an acoustically quiet EPI sequence for use in fMRI studies of speech and auditory processing. NeuroImage 52: 1410–1419. http://dx.doi.org/10.1016/j.neuroimage.2010.05.015

Rodd JM, Davis MH, Johnsrude IS (2005) The neural mechanisms of speech comprehension: fMRI studies of semantic ambiguity. Cerebral Cortex 15:1261-1269. http://dx.doi.org/10.1093/cercor/bhi009

Rodd JM, Longe OA, Randall B, Tyler LK (2010) The functional organisation of the fronto-temporal language system: Evidence from syntactic and semantic ambiguity. Neuropsychologia 48:1324-1335. http://dx.doi.org/10.1016/j.neuropsychologia.2009.12.035

Scott SK, Blank CC, Rosen S, Wise RJS (2000) Identification of a pathway for intelligible speech in the left temporal lobe. Brain 123:2400-2406. http://dx.doi.org/10.1093/brain/123.12.2400

Monday, December 6, 2010

To remember Tom Schofield

I am writing to inform the community of very sad news. Tom Schofield, a terrific young scholar who trained with Alex Leff and Cathy Price in London and moved to New York recently as a post-doc in my lab, was killed in a bus accident in Colombia, South America, last week. He was traveling over the Thanksgiving break.

Obviously, his family, friends, and colleagues are in shock and completely distraught over this tragedy. We share in our profound grief with Tom's parents and sisters, his girlfriend Rashida, and all his friends and colleagues.

We have all been cheated out of a friend and a young scientist with tremendous promise. Tom quickly became a treasured colleague and companion to people around him. His mixture of low-key but incisive intelligence, personal warmth, sense of humor and perspective made him a focal point of a lab group.  

There is little to say in the wake of such a disaster. I would like this community to celebrate Tom by remembering his work and thinking about his contributions and the direction his research was taking. Tom had just defended his dissertation in London (his viva) and was already deeply into new projects in New York. Here are papers that Tom played a critical role in. Tom was a regular reader and (one of the few) contributors/commenters on this blog.

The left superior temporal gyrus is a shared substrate for auditory short-term memory and speech comprehension: evidence from 210 patients with stroke. Leff AP, Schofield TM, Crinion JT, Seghier ML, Grogan A, Green DW, Price CJ. Brain. 2009 Dec;132(Pt 12):3401-10.

Changing meaning causes coupling changes within higher levels of the cortical hierarchy. Schofield TM, Iverson P, Kiebel SJ, Stephan KE, Kilner JM, Friston KJ, Crinion JT, Price CJ, Leff AP. Proc Natl Acad Sci U S A. 2009 Jul 14;106(28):11765-70.

Vowel-specific mismatch responses in the anterior superior temporal gyrus: an fMRI study. Leff AP, Iverson P, Schofield TM, Kilner JM, Crinion JT, Friston KJ, Price CJ. Cortex. 2009 Apr;45(4):517-26.

The cortical dynamics of intelligible speech. Leff AP, Schofield TM, Stephan KE, Crinion JT, Friston KJ, Price CJ. J Neurosci. 2008 Dec 3;28(49):13209-15.

Inter-subject variability in the use of two different neuronal networks for reading aloud familiar words. Seghier ML, Lee HL, Schofield T, Ellis CL, Price CJ. Neuroimage. 2008 Sep 1;42(3):1226-36.


For those of you who would like to contribute a comment, story, memory, or any other piece of information about Tom, I have set up a blog in his memory: tomschofield.blogspot.com.

David

Friday, December 3, 2010

Did Wernicke really postulate only two language centers?

Every once in a while I look back at Wernicke's original 1874 monograph, and every time I do, I learn something new. It is not much of a stretch (and might even be true) to say that our modern accounts of the functional anatomy of language are relatively minor tweaks to Wernicke's model -- despite what Friedemann Pulvermuller claims to the contrary ;-)

So today I looked again and noticed that in contrast to current belief, including my own, Wernicke did not just postulate two language centers. In fact he postulated a continuous network that comprised the "first convolution" together with the insular cortex as "a speech center". By "first convolution" Wernicke means the gyrus that encircles the Sylvian fossa, i.e., the superior temporal, supramarginal, and inferior frontal gyrus (it does make a nice continuous arc).

But this was a network organized into a functional continuum, with the superior temporal region serving sensory (auditory) functions, and the inferior frontal region serving motor functions. Now we all think that Wernicke considered these two zones to be connected via a white matter fiber bundle, the arcuate fasciculus, but this is not true (the AF was postulated later). My earlier readings of Wernicke suggested to me that he thought the connection was via a white matter tract that coursed behind the insula. But it seems that this is wrong too. Rather, Wernicke proposes that the entire first convolution zone is interconnected via the insular cortex. Here are the relevant quotes:

The existence of fibrae propriae [a term from Meynert referring, I believe, to connection fibers generally]..., between the insular cortex and the convolutions of the convexity has also been demonstrated. Since to my knowledge these have not previously been described and since they constitute a major proof of the unitary character of the entire first primitive convolution and the insular cortex, the reader will permit me to speak further of them. p. 46


He goes on for several paragraphs describing fibers that seem to connect the first convolution with the insula. At one point he even gives advice on how to see them for yourself...

...it is best first to apply the scalpel about halfway up the inner surface of the operculum... p. 47


I suppose that is kind of like us now saying that it is best first to apply spatial smoothing with a Gaussian kernel... Anyway, here he states his conclusions on the matter quite clearly:

The consideration of the anatomical circumstances just described, of the numerous supporting post-mortem studies, and finally of the variety in the clinical picture of aphasia thus leads compellingly to the following interpretation of the situation. The entire region of the first convolution, which circles around the fossa Sylvii serves in conjunction with the insular cortex as a speech center. The first frontal convolution, which is a motor area, is the center of representations of movement; the first temporal convolution, a sensory area, is the center for sound images. The fibrae propriae which come together in the insular cortex form the mediating psychic reflex arcs. p. 47


So it isn't just Broca's area, Wernicke's area, and a white matter bundle. Rather, it is a continuous but functionally graded region interconnected by a -- dare I say -- computational hub, the insula. He may not have been entirely correct about the insula as a whole, but what seems clear is that the 19th-century neurologists, including the so-called "classical" ones exemplified by Wernicke, had a much more dynamic and complex view of brain systems than we give them credit for.

Reference

Wernicke C (1874/1969) The symptom complex of aphasia: A psychological study on an anatomical basis. In: Boston studies in the philosophy of science (Cohen RS, Wartofsky MW, eds), pp 34-97. Dordrecht: D. Reidel Publishing Company.

Wednesday, December 1, 2010

Why the obsession with intelligibility in speech processing studies?

There was a very interesting speech/language session at SfN this year organized by Jonathan Peelle. Talks included presentations by Sophie Scott, Jonas Obleser, Sonia Kotz, Matt Davis, and others, spanning an impressive range of methods and perspectives on auditory language processing. Good stuff and a fun group of people. It felt kind of like a joint lab meeting with lots of discussion.

I want to emphasize one of the issues that came up, namely, the brain's response to intelligible speech and what we can learn from it. Here's a brief history.

2000 - Sophie Scott, Richard Wise and colleagues published a very influential paper which identified a left anterior temporal lobe region that responded more to intelligible speech (clear and noise vocoded sentences) than unintelligible speech (spectrally rotated versions of the intelligible speech stimuli). It was argued that this is the "pathway for intelligible speech".

2000 - Hickok & Poeppel published a critical review of the speech perception literature arguing, on the basis of primarily lesion data, that speech perception is bilaterally organized and implicates posterior superior temporal regions in speech sound perception.

2000-2006 - Several more papers from the Scott/Wise group replicated this basic finding, but additional areas started creeping into the picture, including left posterior regions and right hemisphere regions. The example figure below is from Spitsyna et al. 2006.

2007 - Hickok & Poeppel again reviewed the broader literature on speech perception, including lesion work as well as studies that attempted to isolate phonological-level processes more specifically. It is concluded, yes you guessed it, that Hickok & Poeppel 2000 were pretty much correct in their claim of a bilaterally organized posterior temporal speech perception system.

2009 - Rauschecker and Scott publish their "Maps and Streams" review paper arguing just as strongly that speech perception is left lateralized and is dependent on an anterior pathway. As far as I can tell, this claim is based on (i) analogy to the ventral stream pathway projection in monkeys (note: we might not yet fully understand the primate auditory system and given that monkeys don't have speech, the homologies may be less than perfect), and (ii) the fact that the peak activation in intelligible minus unintelligible sentences tends to be greatest in the left anterior temporal lobe.

2010 - Okada et al. publish a replication of Scott et al. 2000 using a much larger sample than any previous study (n=20, compared to n=8 in Scott et al. 2000) and find robust bilateral anterior and posterior activations in the superior temporal lobe for intelligible compared to unintelligible speech. See the figure below, which shows the group activation (top) and peak activations in individual subjects (bottom). Note that even though it doesn't show up in the group analysis, activation extends to right posterior STG/STS in most subjects.


So that's the history. As was revealed at the SfN session, controversy still remains, despite the existence of what I thought was fairly compelling evidence against an exclusively anterior-going projection pathway.
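Since "noise vocoded" and "spectrally rotated" do a lot of work in this history: rotation mirrors the speech spectrum about a pivot frequency, yielding an unintelligible signal whose temporal envelope and overall acoustic complexity roughly match the original. Here is a minimal FFT-based numpy sketch of the idea (an illustration only; the published stimuli were not built this way, and the pivot value is an arbitrary assumption):

```python
import numpy as np

def spectrally_rotate(x, fs, pivot_hz=2000.0):
    """Mirror the spectrum of x about pivot_hz: energy at frequency f
    moves to (2 * pivot_hz - f). Speech treated this way keeps its
    envelope and spectral complexity but becomes unintelligible."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    band = freqs <= 2 * pivot_hz          # only the 0 .. 2*pivot band survives
    rotated = np.zeros_like(spec)
    rotated[band] = spec[band][::-1]      # flip the band end-for-end
    return np.fft.irfft(rotated, n=len(x))

# Sanity check: a 500 Hz tone rotated about 2 kHz comes out near 3500 Hz.
fs = 16000
t = np.arange(fs) / fs
rotated = spectrally_rotate(np.sin(2 * np.pi * 500 * t), fs)
```

The intelligible > rotated contrast thus holds low-level acoustics roughly constant while destroying phonemic content, which is why it became the standard manipulation, and also why it confounds every linguistic level above the acoustic one.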

Here's what came out at the conference.

I presented lesion evidence collected with my collaborators Corianne Rogalsky, Hanna Damasio, and Steven Anderson, which showed that destruction of the left anterior temporal lobe "intelligibility area" has zero effect on speech perception (see figure below). This example patient performed with 100% accuracy on a test of auditory word comprehension (4AFC, word to picture matching with all phonemic foils, including minimal pairs), and 98% accuracy on a minimal pair syllable discrimination test. Combine this with the fact that auditory comprehension deficits are most strongly associated with lesions in the posterior MTG (Bates et al. 2003) and this adds up to a major problem for the Scott et al. theory.

The counter-argument from the Scott camp was addressed exclusively at the imaging data. I'll try to summarize their main points as accurately as possible. Someone correct me if I've got them wrong.

1. Left ATL is the peak activation in intelligible vs. unintelligible contrasts
2. Okada et al. did not use sparse sampling acquisition (true), which increased the intelligibility processing load (possible), thus recruiting posterior and right hemisphere involvement.
3. Okada et al. used an "active task" which affected the activation pattern (we asked subjects to press a button indicating whether the sentence was intelligible or not).

First and most importantly, none of these counter-arguments provides an account of the lesion data. We have to look at all sources of data in building our theories.

Regarding point #2: I will admit that it is possible that the extra noise taxed the system more than normal and this could have increased the signal throughout the network. However, these same regions are showing up in the reports of Scott and colleagues, even in the PET scans, and the regions that are showing up (bilateral pSTG/STS) are the same as those implicated in lesion work and in imaging studies that target phonological level processes.

Regarding point #3: I'm all for paying close attention to the task in explaining (or explaining away) activation patterns. However, if the task directly assesses the behavior of interest (which is not the case in many studies), this argument doesn't hold. The goal of all this work is to map the network for processing intelligible speech. If we are asking subjects to tell us if the sentence is intelligible, this should drive the network of interest. Unless, I suppose, you think that the pSTG is involved in decision processes, which is highly dubious.

This brings us to point #1: Yes, it does appear that the peak activation in the intell vs. unintell contrast is in the left anterior temporal lobe. This tendency is what drives the Scott et al. theory. But why the obsession with this contrast? There are two primary reasons why we shouldn't be obsessed with it. In fact, these points question whether there is any usefulness to the contrast at all.

1. It's confounded. Intelligible speech differs from unintelligible speech on a host of dimensions: phonemic, lexical, semantic, syntactic, prosodic, and compositional semantic content. Further, the various intelligibility conditions are acoustically different, just listen to them, or note that A1 can reliably classify each condition from the other (Okada et al. 2010). It is therefore extremely unclear what the contrast is isolating.

2. By performing this contrast, one is assuming that any region that fails to show a difference between the conditions is not part of the pathway for intelligible speech. This is clearly an incorrect assumption: in the extreme case, peripheral hearing loss impairs the ability to understand speech even though the peripheral auditory system does not respond exclusively to intelligible speech. Closer to the point, even if it were the case that the left pSTG/STS did not show an activation difference between intelligible and unintelligible speech, it could still be THE region responsible for speech perception. In fact, if the job of a speech perception network is to take spectrotemporal patterns as input and map these onto stored representations of speech sound categories, one would expect activation of this network across a range of spectrotemporal patterns, not only those that are "intelligible".

I don't expect this debate to end soon. In fact, one suggestion for the next "debate" at the NLC conference is Scott vs. Poeppel. That would be fun.

References

Bates, E., Wilson, S.M., Saygin, A.P., Dick, F., Sereno, M.I., Knight, R.T., and Dronkers, N.F. (2003). Voxel-based lesion-symptom mapping. Nat Neurosci 6, 448-450.

Hickok, G., and Poeppel, D. (2000). Towards a functional neuroanatomy of speech perception. Trends in Cognitive Sciences 4, 131-138.

Hickok, G., and Poeppel, D. (2007). The cortical organization of speech processing. Nat Rev Neurosci 8, 393-402.

Okada K, Rong F, Venezia J, Matchin W, Hsieh IH, Saberi K, Serences JT, & Hickok G (2010). Hierarchical organization of human auditory cortex: evidence from acoustic invariance in the response to intelligible speech. Cerebral cortex (New York, N.Y. : 1991), 20 (10), 2486-95 PMID: 20100898

Narain, C., Scott, S.K., Wise, R.J., Rosen, S., Leff, A., Iversen, S.D., and Matthews, P.M. (2003). Defining a left-lateralized response specific to intelligible speech using fMRI. Cereb Cortex 13, 1362-1368.

Rauschecker, J.P., and Scott, S.K. (2009). Maps and streams in the auditory cortex: nonhuman primates illuminate human speech processing. Nat Neurosci 12, 718-724.

Scott, S.K., Blank, C.C., Rosen, S., and Wise, R.J.S. (2000). Identification of a pathway for intelligible speech in the left temporal lobe. Brain 123, 2400-2406.

Spitsyna, G., Warren, J.E., Scott, S.K., Turkheimer, F.E., and Wise, R.J. (2006). Converging language streams in the human temporal lobe. J Neurosci 26, 7328-7336.

Tuesday, November 23, 2010

Guest post by Pamelia Brown

Neurobiologist finds link between music education and improved speech recognition

Dr. Nina Kraus, a professor of neurobiology at Northwestern University, announced on Feb. 20 to the American Association for the Advancement of Science her recent findings linking musical ability and speech pattern recognition. During the press conference, Kraus and her associates advised that music programs in K-12 schools be further developed, despite the fact that many schools are cutting music education entirely during the economic recession.

According to a Science Daily article, Kraus’ and other neuroscientists’ research discovered that playing a musical instrument significantly enhances the brain stem’s sensitivity to speech sounds. The research is the first of its kind to concretely establish a link between musical ability and speech recognition.

"People's hearing systems are fine-tuned by the experiences they've had with sound throughout their lives," Kraus explained. "Music training is not only beneficial for processing music stimuli. We've found that years of music training may also improve how sounds are processed for language and emotion."

Kraus also suggested that playing musical instruments may be helpful for children with learning disabilities, like developmental dyslexia or autism. Her findings have aligned closely with earlier research that indicated auditory training can help children with brainstem sound encoding anomalies.

Conducted at Northwestern University's Auditory and Neuroscience Laboratory using state-of-the-art technology, Kraus' research was carried out by comparing the brain responses of musically trained and untrained people. Kraus studied how the brain responded to variable sounds (like the sounds of a noisy classroom) and to predictable sounds (like a teacher's voice). She found that those who were musically trained had a much more sensitive sensory system, meaning that they could more easily take advantage of stimulus regularities and distinguish between speech and background noise.

Previously, Kraus and her colleagues found that the ability to distinguish acoustic patterns was linked to reading ability and the ability to distinguish speech patterns immersed in noise. Kraus is also known for developing the clinical technology BioMARK, which objectively assesses the neural processing of sound and helps diagnose auditory processing disorders in children.

To view Kraus’ recently published research, visit the Auditory and Neuroscience Laboratory’s publications page.

By-line:
This guest post is contributed by Pamelia Brown, who writes on the topic of associate's degrees. She welcomes your comments at pamelia.brown@gmail.com.

Monday, November 15, 2010

Society for the Neurobiology of Language (yes, SNL)

The new Society for the Neurobiology of Language has been officially formed and voted into existence by the attendees of the Neurobiology of Language Conference (or shall we now call it the SNL meeting?). We had over 400 registrants for the 2nd annual meeting, which was a huge success. I overheard more than one person say that this is now THE conference in the field. I would agree.

There was lots of interesting stuff presented and some informative discussions. A few topics that stuck out for me were...

Keynote lecture on optogenetics -- insanely cool method, albeit a poorly targeted lecture. Nonetheless I think it was a worthwhile lecture.

Aphasic mice? Yes, it would seem so. Erich Jarvis presented some really interesting work on the ultrasonic "song" of mice. This is a potentially important model for language.

Keynote lecture on birdsong -- Dan Margoliash presented a somewhat controversial lecture on birdsong as a model of aspects of language. No one argued its relevance for vocal learning, but a few feathers were ruffled, including those of one D. Poeppel, when it was suggested that it may be a model of hierarchical processing.

Debates -- the debates were again a big hit. First bout: Patterson vs. Martin. Second bout: Dehaene vs. Price. Both were fun and highly informative. Arch rivals Stan and Cathy surprisingly had a handshake that led to a hug on stage. Thankfully it didn't go any further than that.

Tons of great posters. New work on intelligibility from the Scott lab; a new auditory feedback study from the Guenther lab; McGurk effects under STS TMS from the Beauchamp lab; and lots more.

I'll try to fill in some bits and pieces on some of these presentations as time allows.

In the meantime, if you have any comments or suggestions for the next meeting, please let me know.

X Symposium of PSYCHOLINGUISTICS

Call for Papers and Posters

X Symposium of PSYCHOLINGUISTICS

Basque Center on Cognition, Brain and Language.
Donostia-San Sebastián
Spain

April 13th – 16th 2011
http://www.bcbl.eu/events/psycholinguistics/


Keynote Speakers:
Riitta Salmelin. Low Temperature Laboratory. Helsinki, Finland.
David Poeppel. New York University. New York, USA.
Jamie I. D. Campbell. University of Saskatchewan. Saskatoon, Canada.
Sharon Thompson-Schill. University of Pennsylvania, USA.


Submissions:

We welcome submissions of abstracts for oral or poster presentations on topics related to Psycholinguistics.

We accept contributions from all over the world. Priority for oral presentations will be given to contributions describing research on Romance languages (Spanish, Catalan, Galician, Portuguese, French, Italian, etc.) as first or second languages, and to research on the Basque language.

There is a clear and enduring bias toward building models of language processing based on data collected in English alone. The Symposium aims to contribute to the growing body of psycholinguistic data collected in other languages, with the larger goal of moving toward a comprehensive theory of language processing that is built on data from as many languages as possible.


Abstracts can now be submitted electronically, and must be submitted by the
deadline of December 15th, 2010. They will be reviewed anonymously by expert reviewers, and authors will be notified with decisions by January 15th, 2011.


***IMPORTANT DATES***
Abstract submission deadline: December 15th, 2010
Notification of abstract acceptance: January 15th, 2011
Early registration deadline: February 1st, 2011
Online registration deadline: March 15th, 2011
Conference dates: April 13th - 16th, 2011

I look forward to seeing your scientific contributions at the “X symposium of Psycholinguistics.”

The organizing committee

Thursday, November 11, 2010

NLC 2010 -- Good turn out!

Conference just started. Nearly 400 registrants so far...

Wednesday, November 10, 2010

Comments on NLC 2010 #1

NLC starts tomorrow and, after letting the 14 boxes of scientific programs age in my garage for a few weeks, I finally pulled one out and had a glance. One of the first abstracts I came across was one by Willems et al. (abs #8) titled, A functional role for the motor system in language understanding: Evidence from rTMS. Since the title includes the term "language understanding" I was hopeful that they assessed language understanding. My hopes were dashed when I read that their task was lexical decision. They argue that lexical decision is a "classical indicator of lexico-semantic processing" (note the terminology change: they did not claim it was a classical indicator of "language understanding"). I suspect that, like syllable discrimination in the phonemic perception domain, lexical decision is a highly misleading indicator of what normally happens in "language understanding" because (i) we don't normally go around making lexical decisions (the task may involve additional cognitive processes not normally engaged in comprehension), (ii) you don't need to understand a word to make a lexical decision (think "familiarity without knowing"), and (iii) lexical decision data usually come in the form of RTs even though the task is a classic signal detection paradigm, and the data are therefore subject to response bias.
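To make that last point concrete, here is a toy sketch of standard equal-variance signal detection indices. The numbers are hypothetical illustrations of my own, not data from the abstract: two "subjects" with essentially the same sensitivity to word-ness can differ substantially in how willing they are to say "word", and that criterion difference will show up in hit rates and, typically, in RTs.

```python
from statistics import NormalDist

def sdt_indices(hit_rate, fa_rate):
    """Equal-variance signal detection indices: sensitivity (d') and
    response criterion (c), computed from hit and false-alarm rates."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Two hypothetical subjects with near-identical sensitivity but different
# response biases (liberal vs. conservative criterion for saying "word"):
liberal = sdt_indices(hit_rate=0.90, fa_rate=0.30)
conservative = sdt_indices(hit_rate=0.70, fa_rate=0.11)
```

Both hypothetical subjects come out with d' near 1.8, yet their hit rates differ by 20 points; a between-condition RT difference can therefore reflect a shift in criterion rather than any change in the underlying lexico-semantic processing.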

Skepticism aside, what did they do and what did they find? They stimulated left or right premotor cortex while subjects made lexical decisions to manual (e.g., throw) or nonmanual (e.g., earn) verbs. Left but not right PM stimulation led to faster RTs for manual verbs compared to nonmanual verbs.

They conclude,

This effect challenges the skeptical view that premotor activation during linguistic comprehension is epiphenomenal... These data demonstrate a functional role for premotor cortex in language understanding.


I think they've shown clearly that premotor cortex plays a role in lexical decision. The source of this effect remains unclear (does it affect the semantic representation, or merely bias, e.g., prime, the response?), and more importantly, the relation between lexical decision (the measured behavior) and language understanding (the target behavior) is far from clear.

In short, they have done nothing to curb the skepticism regarding the role of premotor cortex in language understanding.

Tuesday, November 2, 2010

Internal forward models. Neuronal oscillations. Update from TB-East.

Recently, Greg commented on forward models (hope or hype). He raised a few critical points and speculated about the utility of this concept given how widespread it is. And – importantly – he has a cool paper coming out soonish that puts the cards on the table in terms of his position. Very cool stuff, developed with my grad school office mate John Houde.


It seems like every now and then, this concept comes up from different angles, for many of us. For me, the ‘analysis-by-synthesis’ perspective on internal forward models has come up in various experimental contexts, initially in work with Virginie van Wassenhove on audio-visual speech. There, based on ERP data recorded during perception of multi-sensory syllables, we argued for an internal forward model in which visual speech elicits a cascade of operations comprising, among other things, hypothesis generation and evaluation against the input. The idea (at least in the guise of analysis-by-synthesis) has also been recently reviewed (Poeppel & Monahan, 2010, in LCP; Bever & Poeppel, 2010, in Biolinguistics, provide a historical view dealing with sentence processing a la Bever).


It is worth remembering that work on visual perception has been exploring a similar position (Yuille & Kersten on vision; reverse hierarchy theory of Hochstein & Ahissar; the seemingly endless stream of Bayesian positions).


Now, in new work from my lab, Xing Tian comes at the issue from a new and totally unconventional angle: mental imagery. In a new paper, Mental imagery of speech and movement implicates the dynamics of internal forward models, Xing discusses a series of MEG experiments in which he recorded from participants doing finger-tapping tasks and speech tasks, overtly and covertly. For example, after training, you can do a pretty good job imagining that you are saying (covertly) the syllable da, or hearing the syllable ba.


This paper is long and has lots of intricate detail (for example, we conclude that mental imagery of perceptual processes clearly draws on the areas implicated in perception, but imagery of movement is not like a ‘weaker’ form of movement; rather it resembles movement planning). Anyway, the key finding from Xing’s work is this. We support the idea of an efference copy, but there is arguably a cascade of predictive steps (a dynamic) that is schematized in the figure from the paper. The critical data point: a fixed interval after a subject imagines articulating a syllable (nothing is said, nothing is heard!), we observe activity in auditory cortex that is indistinguishable from that elicited by hearing the token. So, as you prepare/plan to say something, an efference copy is sent not just to parietal cortex but also to auditory cortex, possibly in series. Cool, no?


And on a totally different note … An important paper from Anne-Lise Giraud’s group just appeared in PNAS, Neurophysiological origin of human brain asymmetry for speech and language, by Benjamin Morillon et al. This paper is based on the concurrent recording of EEG and fMRI. It builds on the 2007 Neuron paper and incorporates an interesting task contrast and a sophisticated analysis allowing us to (begin to) visualize the network at rest and during language comprehension. The abstract is below:


The physiological basis of human cerebral asymmetry for language remains mysterious. We have used simultaneous physiological and anatomical measurements to investigate the issue. Concentrating on neural oscillatory activity in speech-specific frequency bands and exploring interactions between gestural (motor) and auditory-evoked activity, we find, in the absence of language-related processing, that left auditory, somatosensory, articulatory motor, and inferior parietal cortices show specific, lateralized, speech-related physiological properties. With the addition of ecologically valid audiovisual stimulation, activity in auditory cortex synchronizes with left-dominant input from the motor cortex at frequencies corresponding to syllabic, but not phonemic, speech rhythms. Our results support theories of language lateralization that posit a major role for intrinsic, hardwired perceptuomotor processing in syllabic parsing and are compatible both with the evolutionary view that speech arose from a combination of syllable-sized vocalizations and meaningful hand gestures and with developmental observations suggesting phonemic analysis is a developmentally acquired process.


Morillon B, Lehongre K, Frackowiak RS, Ducorps A, Kleinschmidt A, Poeppel D, & Giraud AL (2010). Neurophysiological origin of human brain asymmetry for speech and language. Proceedings of the National Academy of Sciences of the United States of America, 107 (43), 18688-93 PMID: 20956297

Thursday, October 28, 2010

Postdoctoral position - Rotman Research Institute, Toronto

A postdoctoral fellowship in neurobiology of language is available in the laboratory of Dr. Jed Meltzer, at the Rotman Research Institute, affiliated with the University of Toronto. The fellow will engage in research related to both basic language processes and applications to diagnosis and treatment of post-stroke aphasia, progressive aphasia, traumatic brain injury, and other neurological disorders. Candidates should have expertise and/or interest in some of the following topics:



- sentence and discourse level comprehension and production
- neurorehabilitation in stroke and dementia
- frequency domain analysis of EEG/MEG data
- multivariate pattern recognition analyses
- applications of computational linguistics to neuroscience
- quantitative analysis of naturalistic language samples
- functional connectivity in fMRI and MEG



The Rotman Institute is fully equipped for cognitive neuroscience research, with a 3T MRI, 151-channel CTF MEG, several EEG systems, and an excellent infrastructure for patient recruitment and testing. We seek a candidate with excellent computational skills, academic knowledge of psycholinguistics, and a personal manner suitable for comfortable interactions with elderly patients with limited communication abilities. Prior experience with neuroimaging is helpful but not an absolute must.



Toronto is consistently ranked as one of the most livable cities in the world, as well as the most multicultural. It is an excellent place to work for those interested in cross-linguistic research, as native speaker populations can be found for dozens of world languages.



Applicants should have a recent Ph.D. or M.D. degree, and the potential for successfully obtaining external funding. The postdoctoral position carries a term of 2 years and is potentially renewable. Bursaries are in line with the fellowship scales of the Canadian Institutes of Health Research (CIHR) and include an allowance for travel and research expenses. A minimum of 80% of each fellow’s time will be devoted to research and related activities.

Start date is negotiable, but ideally in the spring of 2011.



To apply, please send a current CV and letter of interest to:

Jed Meltzer, Ph.D.

jmeltzer@rotman-baycrest.on.ca



Up to three letters of reference may be forwarded to the same address. Meetings and interviews may be arranged at the upcoming Neurobiology of Language and Society for Neuroscience conferences in San Diego, although this is certainly not required.



For more information on the institute, see



http://www.rotman-baycrest.on.ca/



and for our lab specifically,



http://www.rotman-baycrest.on.ca/index.php?section=1093

Wednesday, October 27, 2010

How can we measure "integration"?

"Integration" is a major operation in language processing (and other domains). We have to integrate bits of sounds to extract words, integrate morphemic bits to derive word meanings, integrate lexical-semantic information with syntactic information, sensory with motor information, audio with visual information, and all of this with the contextual background.

Some theorists talk specifically about regions of the brain that perform such integrations. I've got my favorite sensory-motor integration site, Hagoort has a theory about phonological, semantic, and syntactic integration in (different portions of) Broca's area, more broadly, Damasio has been talking about "convergence zones" (aka, integration sites) for years.

Two thoughts come to mind. One, is there any part of the brain that isn't doing integration, i.e., how useful is the concept? And two, if the concept does have some value, how do we identify integration areas?

I don't know the answer to the first question, and I have some concerns about how some in the field approach the second. W.r.t. the latter, a typical approach is to look for regions that increase in activity as a function of "integration load". The idea is that by making integration harder, we will drive integration areas more strongly and this will cause them to pop out in our brain scans. This seems logical enough. But is it true?

Suppose Broca's area -- the region that always seems to get involved when the going gets tough -- activates more in an audiovisual speech condition in which the audio and visual signals mismatch compared to when they match (an actual result). Let's consider the possible interpretations.

1. Broca's area does AV integration. It is less active when integration is easy, i.e., when A and V match than when integration is hard, i.e., when they mismatch because it has to work harder to integrate mismatched signals.

2. Broca's area doesn't do AV integration. It is less active when integration is actually happening, i.e., when A and V match, reflecting its non-involvement, than when integration isn't working, i.e., when there is an AV mismatch. Of course, this explanation requires an alternative account of why Broca's area activates more in mismatch situations. There are plenty of possibilities: ambiguity resolution, response selection, error detection, or just a WTF response (given the response properties of Broca's area I sometimes wonder if we should re-label it as area WTF).

Both possibilities seem perfectly consistent with the facts. Similar possibilities exist for other forms of integration making me question whether the "load" logic is really telling us what we think it is telling us.

There is another approach to identifying integration zones, namely to look for areas that respond to both types of information independently but respond better when they appear together. In our example, AV integration zones would be those areas that respond to auditory speech or visual speech, but respond best to AV speech. I tend to like this approach a bit better.
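One minimal way to operationalize this conjunction criterion is sketched below. This is my own toy illustration with hypothetical region-mean responses, not an analysis from any study discussed here; real implementations would of course work with voxel- or subject-level estimates and proper statistics.

```python
def av_integration_candidate(resp_a, resp_v, resp_av, baseline=0.0):
    """Flag a region as a candidate AV integration zone: it responds to
    auditory-only speech AND to visual-only speech, and it responds more
    strongly to audiovisual speech than to either unimodal condition."""
    responds_to_both = resp_a > baseline and resp_v > baseline
    enhanced = resp_av > max(resp_a, resp_v)
    return responds_to_both and enhanced

# Hypothetical region-mean responses (arbitrary units):
candidate = av_integration_candidate(resp_a=1.2, resp_v=0.8, resp_av=1.9)
```

Note what this criterion excludes: a region that is silent to visual-only speech, or whose AV response merely matches its best unimodal response, does not qualify. Unlike the "load" logic, it does not reward regions that simply dislike mismatches.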

What are your thoughts?

Sunday, October 24, 2010

Faculty position at NYU-AD: cognition/perception/cogneuro

Dear colleagues,


New York University is in the process of hiring tenure-track faculty for the Psychology program at its new campus in Abu Dhabi. The current search is for candidates with a strong program of research in the areas of cognition and/or perception, including cognitive neuroscience approaches.


NYUAD is committed to building top-tier research-focused programs in psychology and neuroscience. The present campus includes state-of-the-art facilities for behavioral and neuroimaging research, and these facilities will continue to expand. In addition to being part of the growing academic community in Abu Dhabi, faculty will maintain close connections with colleagues in NYC, with opportunities to spend significant portions of time at the New York City campus – in all, a unique opportunity.


Please see the attached job ad for more details. You can also forward any inquiries to nyuad.science@nyu.edu or to the search committee chair, David Amodio, at david.amodio@nyu.edu.


Of interest to cognitive neuroscience of language types:


One of the research directions at NYU AD will be language-related research. A start-up grant was given to build a research center housing MEG, EEG, and eye tracking.


The Neuroscience of Language Laboratory will explore how the ability to use natural language is implemented in the brain. While most of the existing research in this area is based on the study of English, the laboratory’s location in Abu Dhabi will provide researchers with access to speakers of Arabic and many other languages, including Hindi, Bengali, and Tagalog. Professor Ali Idrissi, chair of the linguistics department at United Arab Emirates University, will serve as the lab’s senior research associate.


Principal Investigators: Alec Marantz, Professor of Linguistics and Psychology, Faculty of Arts and Science, NYU; Liina Pylkkänen, Assistant Professor of Linguistics and Psychology, Faculty of Arts and Science, NYU; and David Poeppel, Professor of Psychology and Neural Science, Faculty of Arts and Science, NYU.


***************************************

FACULTY POSITIONS Psychology Cognitive Neuroscience, Cognition, and Perception

NYU Abu Dhabi


New York University has established a campus in Abu Dhabi, United Arab Emirates and invites applications for faculty positions at any level (assistant, associate or full professor). We are seeking candidates with a strong program of research in cognition and/or perception, including cognitive neuroscience approaches, who are also committed to excellence in teaching and mentoring.
The terms of employment are competitive compared to U.S. benchmarks and include housing and educational subsidies for children. Faculty may spend time at NYU in New York and at its other global campuses. The appointment may start as soon as September 1, 2011, or could be delayed until as late as September 1, 2012.


NYU Abu Dhabi is in the process of recruiting faculty of international distinction committed to active research and the finest teaching in order to build a pioneering global institution of the highest quality and forge an international community of scholars and students.


Alongside its highly-selective liberal arts college, NYU Abu Dhabi will create distinctive graduate programs and a world-class institute for advanced research that fosters creative work across the Arts, Humanities, Social Sciences, Sciences, and Engineering. Situated at a new global crossroads, NYU Abu Dhabi has the resources and resolve to become a preeminent center of collaborative intellectual pursuit and impact.


NYU New York and NYU Abu Dhabi are integrally connected. The faculties work together, and the campuses form the foundation of a unique global network university, linked to NYU’s other study and research sites on five continents.


Major research projects and public programs are underway. We have recruited our first cohort of faculty across many disciplines and the first class of students of remarkable potential from across the world arrived in fall 2010. The international character of NYUAD is reflected in the global composition of the faculty and the student body as well as the research agenda and curriculum, which have been designed to promote inventiveness, intellectual curiosity, multidisciplinary interest, and intercultural understanding.


The review of applications will begin on December 1, 2010. Applicants must submit a curriculum vitae, statement of research and teaching interests, representative publications, and three letters of reference in PDF form to be considered. Please visit our website at http://nyuad.nyu.edu/human.resources/open.positions.html for instructions and other information on how to apply. If you have any questions, please e-mail nyuad.science@nyu.edu. NYU Abu Dhabi is an Equal Opportunity/Affirmative Action Employer.


*****************************



Friday, October 15, 2010

International Seminar on Speech Production

ISSP’11: Speech production: from brain to behavior

The ninth International Seminar on Speech Production (ISSP'11) will be held in Montreal, Canada from June 20th to 23rd, 2011. ISSP’11 is the continuation of a series of seminars dating back to Grenoble (1988), Leeds (1990), Old Saybrook (1993), Autrans (1996), Kloster Seeon (2000), Sydney (2003), Ubatuba (2006), and Strasbourg (2008). Several aspects of speech production will be covered, such as phonology, phonetics, linguistics, mechanics, acoustics, physiology, motor control, the neurosciences and computer science.

Montreal’s Vieux-Port (old port), business district, and the nearby Laurentian mountains all contribute to Montreal’s international reputation. Montreal is one of the most important French-English bilingual cities in the world. A vibrant expression of French heritage in North America!

Wednesday, October 13, 2010

Asst/Assoc Research Prof positions - Center for Mind/Brain Sciences (CIMeC) at the University of Trento

The Center for Mind/Brain Sciences (CIMeC) at the University of Trento is seeking to fill a number of research positions in cognitive neuroscience at the Assistant or Associate Research Professor level. The Center offers an international and vibrant research setting in which to investigate the functioning of the brain through the analysis of its functional, structural and physiological characteristics, in both normal and pathological states. Researchers at the Center make use of state-of-the-art neuroimaging methodologies, including a research-only MRI scanner, MEG, EEG and TMS, as well as behavioral, eye tracking and motion tracking laboratories. The Center also includes a neuropsychology and neuro-rehabilitation clinic (CERiN). The Center strongly encourages collaborative and innovative research, and provides the opportunity for all researchers to access laboratory resources and to be part of the Doctoral School in Cognitive and Brain Sciences. CIMeC also has close collaborations with local research centers, including FBK (Fondazione Bruno Kessler) and IIT (Italian Institute of Technology), through joint projects and through the doctoral school. Further information about the Center can be found at: http://www.cimec.unitn.it.

The ideal researchers (from all areas of cognitive neuroscience, including computational neuroscience and neuroimaging methods) must hold the Ph.D. or M.D. degree, and should have a record documenting research creativity, independence, and productivity. We are looking for researchers able to build and maintain a high quality research program and to contribute to the maintenance of a collegial and collaborative academic environment.

The Center offers excellent experimental facilities and a competitive European-level salary in the context of a rapidly growing and dynamic environment. Funding is available for 6 years. The initial contract would be for 3 years. There is no associated university teaching load, although researchers will be expected to participate in the research culture of the Center through seminars, supervision of students and other activities.

The University of Trento is ranked first among research universities in Italy, and the Trentino region is consistently at the top for quality of life and for the most efficient services in Italy. English is the official language of the CIMeC, where a large proportion of the faculty, post-docs and students come from a wide range of countries outside of Italy. CIMeC’s labs and the PhD School are in Rovereto (about thirty kilometres south of Trento) and Mattarello (eight kilometres south of Trento).

If you wish to receive further information please contact the Director of the CIMeC, Prof. Alfonso Caramazza (alfonso.caramazza@unitn.it) or Vice-Director Prof. Giorgio Vallortigara (giorgio.vallortigara@unitn.it) by November 15, 2010.

Tuesday, October 12, 2010

Postdoctoral Position - Mount Sinai School of Medicine, New York

A postdoctoral position is available immediately in the laboratory of Dr. Kristina Simonyan in the Department of Neurology at the Mount Sinai School of Medicine, New York. The research emphases of the laboratory are on the studies of brain mechanisms of voice and speech production and the neurological correlates of primary focal dystonias (e.g., spasmodic dysphonia) using a multi-modal neuroimaging approach (fMRI, DTI, high-resolution MRI, PET).

The ideal candidate will have an M.D. and/or Ph.D. in neuroscience or a relevant field and knowledge of computational (especially Linux, MATLAB) and statistical (AFNI, FSL) methods. Familiarity with connectivity analysis and neuroreceptor mapping is preferred.

Inquiries should be sent to kristina.simonyan@mssm.edu, and interviews can be arranged at the Neurobiology of Language Conference in San Diego.

Alternatively, interested candidates should send CV, brief description of research experience and three references to:

Kristina Simonyan, M.D., Ph.D.
Department of Neurology
Mount Sinai School of Medicine
One Gustave L. Levy Place, Box 1137
New York, NY 10029
Tel: (212) 241-0656
Email: kristina.simonyan@mssm.edu

Friday, October 8, 2010

Steve Small joins UC Irvine faculty

Steve Small has been enticed to leave the Windy City where he was professor of Neurology and Psychology at the University of Chicago, and move to The OC in Southern California where he joins the faculty at UC Irvine as Chair of the Neurology Department. He will also have close ties to the Center for Cognitive Neuroscience and Department of Cognitive Sciences. Steve, of course, is a long-time, significant player in the world of Language Neuroscience. Besides his many publications, he is Editor-in-Chief of Brain and Language and led the effort to found the Neurobiology of Language Conference, which is gearing up for its second meeting.

Steve and I come from very different schools of thought when it comes to language generally: I was trained at MIT, he at CMU; he has been sympathetic to mirror neuron related approaches, me not so much. But for those of you expecting a bloody, gloves-off battle, sorry... it turns out we agree on more things than either of us thought we would -- once you actually sit down and start talking, that is. I am looking forward to working with him. For sure, the addition of Steve Small to our UC Irvine language science community will add a new dimension. I'm sure Steve and I will find a few things to debate, so it could be an interesting place to do doctoral or post-doc work. Stay tuned for future advertisements.

So what convinced Steve to come to Irvine? (Beside it being Talking Brains West, of course!)

Chicago winter:



Irvine winter:



What would you choose?

Thursday, October 7, 2010

Internal forward models -- New insight or just hype?

In case you haven't noticed, the concept of the internal forward model -- an internal prediction about a future event or state -- is all the rage. The concept comes out of the motor control literature, where one can find pretty solid evidence that motor control makes use of forward predictions of the sensory consequences of motor commands (e.g., check out the seminal paper by Wolpert, Ghahramani, & Jordan, 1995). These concepts have been extended to speech (e.g., Tourville et al. 2008; van Wassenhove et al., 2005), and there has been a ton of work trying to establish the neural correlates of these networks (e.g., see Golfinopoulos et al. 2009; Shadmehr & Krakauer, 2008), recent work suggesting an association with clinical conditions such as aspects of schizophrenia (Heinks-Maldonado, et al. 2007) and stuttering (Max et al. 2004), and even applications of the concept to high-level cognition such as "thought" (Ito, 2008), as well as to social cognition (Wolpert et al. 2003) with links to the mirror system (Miall, 2003).

I'm a big fan of control theory in general and I think there is a lot to be gained by thinking about speech processes in these terms. At the same time, I'm a little uncomfortable with the widespread application of these models. It kind of reminds me of the mirror neuron situation, in which a framework for thinking about one problem is generalized to all kinds of situations. I'm also a bit uncomfortable with the assumed tethering between forward models and the motor system. A forward model is just a prediction. In the context of motor control, it makes sense to make predictions (e.g., sensory predictions) based on the likely outcomes of motor commands. But more generally, predictions can come from lots of sources. Perceptual fill-in processes are a kind of forward model: the visual system, for example, makes predictions about the color and texture of a given portion of the visual scene based on the color and texture around that region. One can predict the consequences of an ocean wave hitting a rock based on past perceptual experiences. So forward models don't have to come from the motor system, and there are probably lots of systems and mechanisms that generate predictions (forward models). It is worth having a look at Karniel's (2002) short comment, "Three creatures named 'forward model'", for some cautionary discussion.
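The core idea is simple enough to state in a few lines. Below is a toy sketch of my own, not any published model; the linear "gain" mapping is a hypothetical stand-in for a real learned articulatory-to-acoustic mapping. The forward model predicts the sensory consequence of a command, and the mismatch with actual feedback is the prediction error that drives correction.

```python
def forward_model(command, gain=1.0):
    # Internal prediction of the sensory consequence of a motor command;
    # 'gain' stands in for the system's learned body/world mapping.
    return gain * command

def prediction_error(command, feedback, gain=1.0):
    # Mismatch between actual sensory feedback and the internal prediction;
    # in control-theoretic accounts this error drives online correction.
    return feedback - forward_model(command, gain)

# Well-calibrated model, unperturbed feedback: zero error, so the sensory
# consequence of one's own action is "explained away".
err_clean = prediction_error(command=2.0, feedback=2.0)

# Perturbed feedback (cf. altered auditory feedback experiments): the
# nonzero error signals that something needs correcting.
err_perturbed = prediction_error(command=2.0, feedback=2.6)
```

Note that nothing in this sketch requires the prediction to originate in the motor system; the same compare-prediction-to-input loop works for any predictive source, which is exactly the point about fill-in and other non-motor forward models.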

So is the internal forward model concept just hype? No, I don't think so. It has already demonstrated its utility in the motor control literature and there are systems in the brain that appear to support motor-related forward models (cerebellum is one, posterior parietal cortex is another). There are some real insights to be gained from this framework in the speech domain as well, but I think there is the danger of over-application of the concept and we need to proceed cautiously.

References

Golfinopoulos, E., Tourville, J.A., and Guenther, F.H. (2009). The integration of large-scale neural network modeling and functional brain imaging in speech motor control. Neuroimage 52, 862-874.

Heinks-Maldonado, T.H., Mathalon, D.H., Houde, J.F., Gray, M., Faustman, W.O., and Ford, J.M. (2007). Relationship of imprecise corollary discharge in schizophrenia to auditory hallucinations. Arch Gen Psychiatry 64, 286-296.

Ito, M. (2008). Control of mental activities by internal models in the cerebellum. Nat Rev Neurosci 9, 304-313.

Karniel, A. (2002). Three creatures named 'forward model'. Neural Networks 15, 305-307.

Max, L., Guenther, F.H., Gracco, V.L., Ghosh, S.S., and Wallace, M.E. (2004). Unstable or insufficiently activated internal models and feedback-biased motor control as sources of dysfluency: A theoretical model of stuttering. Contemporary Issues in Communication Science and Disorders 31, 105-122.

Miall, R.C. (2003). Connecting mirror neurons and forward models. Neuroreport 14, 2135-2137.

Shadmehr, R., and Krakauer, J.W. (2008). A computational neuroanatomy for motor control. Exp Brain Res 185, 359-381.

Tourville, J.A., Reilly, K.J., and Guenther, F.H. (2008). Neural mechanisms underlying auditory feedback control of speech. Neuroimage 39, 1429-1443.

van Wassenhove, V., Grant, K.W., and Poeppel, D. (2005). Visual speech speeds up the neural processing of auditory speech. Proc Natl Acad Sci U S A 102, 1181-1186.

Wolpert, D.M., Ghahramani, Z., and Jordan, M.I. (1995). An internal model for sensorimotor integration. Science 269, 1880-1882.

Wolpert, D.M., Doya, K., and Kawato, M. (2003). A unifying computational framework for motor control and social interaction. Philos Trans R Soc Lond B Biol Sci 358, 593-602.

Tuesday, October 5, 2010

Cognitive Neuropsychology -- New Editor and free access to TB highlighted article

A few days ago I highlighted an article that appeared in Cognitive Neuropsychology. As a result, the journal has made the article available free of charge: click here to access it.

It is worth noting that Cognitive Neuropsychology is now under new editorial guidance, that of Brenda Rapp. The journal was founded by Max Coltheart in 1984 and was (and perhaps remains) arguably the main outlet for traditional cognitive neuropsychological research. Beginning under the guidance of Alfonso Caramazza (the journal's second editor), and now with more force under Brenda Rapp, the journal has expanded its mission beyond traditional patient-based work to include brain imaging and other methods. Brenda summed up the journal's mission succinctly: "the goals: cognitive, the methods: neural." Her entire editorial can be found here. Below is an excerpt:

... the particular insight that was critical in creating the journal's unique identity amongst other cognitively oriented journals was the understanding that when “wishing to test theories concerning how some general mental activity is normally carried out, (researchers) need not confine themselves to investigations of those whose competence in this activity is normal” (Coltheart, 1984). This notion formed the basis of the journal's focus on research involving neuropsychological cases to develop and test theories of normal cognition. However, in recent years, the increasing sophistication of methods for the collection and analysis of neural data has allowed a broader range of neural evidence to be brought to bear on cognitive questions, making this an appropriate moment to expand upon the insight that neuropsychological data can be brought to bear on questions of cognition. Thus, consistent with the neuropsychological character of the journal and changes in direction already initiated by Alfonso Caramazza (the journal's second editor), the journal's Scope and Aims have now been expanded to promote research based on a broader understanding of the neuropsychological approach. This broader understanding includes not only methods based on brain pathology, but also on neural recording, neural stimulation or brain imaging. In other words, the journal will publish research that is not limited to the study of brain-lesioned individuals but also includes neurologically-intact adults, children or even non-human animals, as long as the methods involve some type of neural manipulation or measurement and the findings make an explicit and theoretically sophisticated contribution to our understanding of normal human cognition.

Thursday, September 30, 2010

Postdoctoral Fellow / Research Scientist Position -- Cognitive Neuroscientist in Adolescent Reasoning and Brain Development

Postdoctoral Fellow / Research Scientist Position
Cognitive Neuroscientist in Adolescent Reasoning and Brain Development

The Center for BrainHealth at The University of Texas at Dallas, in collaboration with The University of Texas Southwestern Medical Center, seeks to fill a Postdoctoral Research position in Cognitive Neuroscience with a productive and innovative investigator whose research interests address brain plasticity, cognitive training, and reasoning. Desired research experience includes an understanding of the hierarchical cognitive strategies that support higher-order reasoning and strengthen overall brain function during adolescence in daily life. Additional experience with multi-modality neuroimaging platforms (electrophysiology, MR technology, PET, etc.) and with genetic factors related to frontal lobe and higher-order cognitive development in adolescence would be useful but is not required. The research may be applied to elucidate developmental emergence and treatment effects during normal development and in brain injuries or psychiatric diseases such as Traumatic Brain Injury, Attention Deficit/Hyperactivity Disorder (ADHD), Addictions, Obsessive-Compulsive Disorders, Mood Disorders, and Schizophrenia.

The Center for BrainHealth
School of Behavioral & Brain Sciences

Qualifications for the position include:
PhD, preferably completed in neuroscience, neuropsychology, neurocognition, or related field;
familiarity with fMRI, EEG, and physiological measures;
an ability to work well in a multidisciplinary, highly collaborative research team;
an interest in translational research between neuroscience and clinical populations;
and a strong record or potential for scholarly productivity.

The Center for BrainHealth is located in downtown Dallas adjacent to The University of Texas Southwestern Medical Center. The Center’s research is dedicated to applying cutting-edge brain research to clinical populations to study brain plasticity. These projects cover a wide range of cognitive functions across the life-span, across a multitude of disorders, and across the most current functional brain imaging technologies. Established access is available to special subject populations including: Alzheimer's Disease (AD), Frontotemporal Lobar Degeneration (FTLD), Traumatic Brain Injury (TBI), ADHD, Autism, Military and Former Military, as well as healthy Aging, Stroke, Adolescent, and Pediatric groups. The Center also offers access to state-of-the-art facilities, including a Philips 3T research-dedicated MRI scanner and four Neuroscan SynAmps2 systems equipped for both 64- and 128-channel recordings.

Benefits of the job include:
*Ability to be involved with established, innovative, multidisciplinary collaborations.
*Ability to work on research projects highly relevant to health outcomes.
*Potential for high publication rate
*High potential for innovation in research design
*Competitive salary and benefits
*One year position, renewable for 2nd year based upon available funding, performance, and productivity

Submit application materials at http://provost.utdallas.edu/facultyjobs/welcome/jobdetail/pbv100810

Review of applicants will begin immediately and will continue until the position is filled. The starting date for this position is September 1, 2010. Indication of gender and ethnicity for affirmative action statistical purposes is requested as part of the application.

The University of Texas at Dallas is an Equal Opportunity / Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, age, citizenship status, Vietnam era or special disabled veteran’s status, or sexual orientation. UT Dallas strongly encourages applications from candidates who would enhance the diversity of the University’s faculty and administration.
