Tuesday, November 23, 2010

Guest post by Pamelia Brown

Neurobiologist finds link between music education and improved speech recognition

Dr. Nina Kraus, a professor of neurobiology at Northwestern University, announced her recent findings linking musical ability and speech pattern recognition on Feb. 20 at a meeting of the American Association for the Advancement of Science. During the press conference, Kraus and her associates urged that music programs in K-12 schools be further developed, even as many schools cut music education entirely during the economic recession.

According to a Science Daily article, the research by Kraus and other neuroscientists found that playing a musical instrument significantly enhances the brainstem’s sensitivity to speech sounds. The research is the first of its kind to concretely establish a link between musical ability and speech recognition.

"People's hearing systems are fine-tuned by the experiences they've had with sound throughout their lives," Kraus explained. "Music training is not only beneficial for processing music stimuli. We've found that years of music training may also improve how sounds are processed for language and emotion."

Kraus also suggested that playing a musical instrument may be helpful for children with learning disabilities such as developmental dyslexia or autism. Her findings align closely with earlier research indicating that auditory training can help children with anomalies in brainstem sound encoding.

Conducted at Northwestern University’s Auditory and Neuroscience Laboratory using state-of-the-art technology, Kraus’ research compared the brain responses of musically trained and untrained people. Kraus studied how the brain responded to variable sounds (like the sounds of a noisy classroom) and to predictable sounds (like a teacher’s voice). She found that those who were musically trained had a much more sensitive sensory system, meaning that they could more readily take advantage of stimulus regularities and distinguish speech from background noise.

Previously, Kraus and her colleagues found that the ability to distinguish acoustic patterns was linked to reading ability and to the ability to pick out speech patterns embedded in noise. Kraus is also known for developing the clinical technology BioMARK, which objectively assesses the neural processing of sound and helps diagnose auditory processing disorders in children.

To view Kraus’ recently published research, visit the Auditory and Neuroscience Laboratory’s publications page.

By-line:
This guest post was contributed by Pamelia Brown, who writes on the topic of associate’s degrees. She welcomes your comments at pamelia.brown@gmail.com.

Monday, November 15, 2010

Society for the Neurobiology of Language (yes, SNL)

The new Society for the Neurobiology of Language has been officially formed and voted into existence by the attendees of the Neurobiology of Language Conference, or shall we now call it the SNL meeting? We had over 400 registrants for the 2nd annual meeting, which was a huge success. I overheard more than one person say that this is now THE conference in the field. I would agree.

There was lots of interesting stuff presented and some informative discussions. A few topics that stuck out for me were...

Keynote lecture on optogenetics -- an insanely cool method, albeit a poorly targeted lecture for this audience. Nonetheless, I think it was worthwhile.

Aphasic mice? Yes, it would seem so. Erich Jarvis presented some really interesting work on the ultrasonic "song" of mice. This is a potentially important model for language.

Keynote lecture on birdsong -- Dan Margoliash presented a somewhat controversial lecture on birdsong as a model of aspects of language. No one disputed its relevance for vocal learning, but a few feathers were ruffled, including those of one D. Poeppel, when it was suggested that it might be a model of hierarchical processing.

Debates -- the debates were again a big hit. First bout: Patterson vs. Martin. Second bout: Dehaene vs. Price. Both were fun and highly informative. Arch rivals Stan and Cathy surprisingly had a handshake that led to a hug on stage. Thankfully it didn't go any further than that.

Tons of great posters. New work on intelligibility from the Scott lab; a new auditory feedback study by the Guenther lab; McGurk effects under STS TMS by the Beauchamp lab; and lots more.

I'll try to fill in some bits and pieces on some of these presentations as time allows.

In the meantime, if you have any comments or suggestions for the next meeting, please let me know.

X Symposium of PSYCHOLINGUISTICS

Call for Papers and Posters

X Symposium of PSYCHOLINGUISTICS

Basque Center on Cognition, Brain and Language.
Donostia-San Sebastián
Spain

April 13th – 16th 2011
http://www.bcbl.eu/events/psycholinguistics/


Keynote Speakers:
Riitta Salmelin. Low Temperature Laboratory, Helsinki, Finland.
David Poeppel. New York University, New York, USA.
Jamie I. D. Campbell. University of Saskatchewan, Saskatoon, Canada.
Sharon Thompson-Schill. University of Pennsylvania, USA.


Submissions:

We welcome submissions of abstracts for oral or poster presentations on topics related to Psycholinguistics.

We accept contributions from all over the world. Priority for oral presentations will be given to contributions describing research on Romance languages (Spanish, Catalan, Galician, Portuguese, French, Italian, etc.) as first or second languages, as well as research on the Basque language.

There is a clear and enduring bias toward building models of language processing on data collected in English alone. The Symposium aims to contribute to the growing body of psycholinguistic data collected in other languages, with the larger goal of moving toward a comprehensive theory of language processing built on data from as many languages as possible.


Abstracts can now be submitted electronically and must be received by the
deadline of December 15th, 2010. They will be reviewed anonymously by expert reviewers, and authors will be notified of decisions by January 15th, 2011.


***IMPORTANT DATES***
Abstract submission deadline: December 15th, 2010
Notification of abstract acceptance: January 15th, 2011
Early registration deadline: February 1st, 2011
Online registration deadline: March 15th, 2011
Conference dates: April 13th - 16th, 2011

We look forward to seeing your scientific contributions at the “X Symposium of Psycholinguistics.”

The organizing committee

Thursday, November 11, 2010

NLC 2010 -- Good turn out!

Conference just started. Nearly 400 registrants so far...

Wednesday, November 10, 2010

Comments on NLC 2010 #1

NLC starts tomorrow, and after letting the 14 boxes of scientific programs age in my garage for a few weeks, I finally pulled one out and had a glance. One of the first abstracts I came across was one by Willems et al. (abs #8), titled "A functional role for the motor system in language understanding: Evidence from rTMS." Since the title includes the term "language understanding," I was hopeful that they assessed language understanding. My hopes were dashed when I read that their task was lexical decision. They argue that lexical decision is a "classical indicator of lexico-semantic processing" (note the terminology change: they did not claim it was a classical indicator of "language understanding"). I suspect that, like syllable discrimination in the domain of phoneme perception, lexical decision is a highly misleading indicator of what normally happens in "language understanding" because (i) we don't normally go around making lexical decisions (the task may recruit cognitive processes not normally involved in comprehension), (ii) you don't need to understand a word to make a lexical decision (think "familiarity without knowing"), and (iii) lexical decision data usually come as RTs even though the task is a classic signal detection paradigm, and therefore the data are subject to response bias.
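To make point (iii) concrete, here is a minimal sketch of a standard signal detection analysis of lexical decision accuracy (the counts are made up and have nothing to do with the Willems et al. study): hit and false-alarm rates let you separate sensitivity (d') from response bias (the criterion), a distinction that RTs alone cannot make.

```python
# A minimal sketch of signal detection measures for lexical decision.
# Counts are hypothetical; this is not the Willems et al. analysis.
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute d' (sensitivity) and c (response bias) from trial counts."""
    # Log-linear correction to avoid infinite z-scores at rates of 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa             # how well words are discriminated from nonwords
    criterion = -0.5 * (z_hit + z_fa)  # negative = liberal bias toward responding "word"
    return d_prime, criterion

# Hypothetical counts for one subject: 80 word trials, 80 nonword trials.
d, c = sdt_measures(hits=72, misses=8, false_alarms=20, correct_rejections=60)
print(f"d' = {d:.2f}, criterion c = {c:.2f}")
```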

Skepticism aside, what did they do and what did they find? They stimulated left or right premotor cortex while subjects made lexical decisions about manual (e.g., throw) or nonmanual (e.g., earn) verbs. Left but not right PM stimulation led to faster RTs to manual verbs compared to nonmanual verbs.
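For what it's worth, the critical test behind a claim like this is the stimulation-site by verb-type interaction: the manual-minus-nonmanual RT difference under left PM stimulation versus under right PM stimulation. Here is a minimal sketch of that logic with simulated per-subject RTs (hypothetical numbers, not the authors' data).

```python
# Toy illustration of the 2x2 interaction logic with simulated RTs.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)
n_subjects = 20

# Simulated per-subject mean RTs (ms) for the four conditions.
left_manual = rng.normal(580, 30, n_subjects)
left_nonmanual = rng.normal(600, 30, n_subjects)
right_manual = rng.normal(600, 30, n_subjects)
right_nonmanual = rng.normal(600, 30, n_subjects)

# Per-subject interaction score: (manual - nonmanual) under left TMS minus under right TMS.
interaction = (left_manual - left_nonmanual) - (right_manual - right_nonmanual)
t, p = ttest_1samp(interaction, 0.0)
print(f"interaction: mean = {interaction.mean():.1f} ms, t = {t:.2f}, p = {p:.3f}")
```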

They conclude,

This effect challenges the skeptical view that premotor activation during linguistic comprehension is epiphenomenal... These data demonstrate a functional role for premotor cortex in language understanding.


I think they've shown clearly that premotor cortex plays a role in lexical decision. The source of this effect remains unclear (does it affect the semantic representation, or does it just bias, e.g., prime, the response?), and more importantly, the relation between lexical decision (the measured behavior) and language understanding (the target behavior) is far from clear.

In short, they have done nothing to curb the skepticism regarding the role of premotor cortex in language understanding.

Tuesday, November 2, 2010

Internal forward models. Neuronal oscillations. Update from TB-East.

Recently, Greg commented on forward models (hope or hype). He raised a few critical points and speculated about the utility of this concept given how widespread it is. And – importantly – he has a cool paper coming out soonish that puts the cards on the table in terms of his position. Very cool stuff, developed with my grad school office mate John Houde.


It seems like every now and then this concept comes up from different angles, for many of us. For me, the ‘analysis-by-synthesis’ perspective on internal forward models has come up in various experimental contexts, initially in work with Virginie van Wassenhove on audio-visual speech. There, based on ERP data recorded during perception of multisensory syllables, we argued for an internal forward model in which visual speech elicits a cascade of operations that comprises, among others, hypothesis generation and evaluation against the input. The idea (at least in the guise of analysis-by-synthesis) has been reviewed recently as well (Poeppel & Monahan, 2010, in LCP; Bever & Poeppel, 2010, in Biolinguistics, provides a historical view dealing with sentence processing à la Bever).
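For readers who have not run into analysis-by-synthesis, here is a toy sketch of the basic loop (my own gloss on the general idea, not the specific model in the van Wassenhove work): generate candidate hypotheses, synthesize the sensory pattern each one predicts, and evaluate the predictions against the input, with visual speech shifting the priors before the acoustics arrive.

```python
# A toy analysis-by-synthesis loop. Templates, priors, and the input vector
# are all made up for illustration; nothing here comes from the actual studies.
import numpy as np

# Hypothetical "templates": the acoustic pattern each candidate syllable predicts.
templates = {
    "ba": np.array([1.0, 0.2, 0.1]),
    "da": np.array([0.2, 1.0, 0.1]),
    "ga": np.array([0.1, 0.2, 1.0]),
}

def analysis_by_synthesis(observed, prior):
    """Score each hypothesis: prior belief weighted by how well its prediction matches the input."""
    scores = {}
    for syllable, predicted in templates.items():
        error = np.sum((observed - predicted) ** 2)   # prediction error
        scores[syllable] = prior[syllable] * np.exp(-error)
    total = sum(scores.values())
    return {s: v / total for s, v in scores.items()}  # normalized, posterior-like scores

# Visual speech (e.g., lip closure) biases the prior toward "ba" before the sound arrives.
prior = {"ba": 0.6, "da": 0.2, "ga": 0.2}
observed = np.array([0.9, 0.3, 0.1])                  # noisy acoustic input
print(analysis_by_synthesis(observed, prior))
```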


It is worth remembering that work on visual perception has been exploring a similar position (Yuille & Kersten on vision; reverse hierarchy theory of Hochstein & Ahissar; the seemingly endless stream of Bayesian positions).


Now, in new work from my lab, Xing Tian comes at the issue from a new and totally unconventional angle: mental imagery. In a new paper, "Mental imagery of speech and movement implicates the dynamics of internal forward models," Xing discusses a series of MEG experiments in which he recorded from participants doing finger-tapping tasks and speech tasks, overtly and covertly. For example, after training, you can do a pretty good job imagining that you are saying (covertly) the syllable da, or hearing the syllable ba.


This paper is long and has lots of intricate detail (for example, we conclude that mental imagery of perceptual processes clearly draws on the areas implicated in perception, whereas imagery of movement is not a ‘weaker’ form of movement but rather resembles movement planning). Anyway, the key finding from Xing’s work is this: we support the idea of an efference copy, but there is arguably a cascade of predictive steps (a dynamic) that is schematized in the figure from the paper. The critical data point: at a fixed interval after a subject imagines articulating a syllable (nothing is said, nothing is heard!), we observe activity in auditory cortex that is indistinguishable from hearing the token. So, as you prepare/plan to say something, an efference copy is sent not just to parietal cortex but also to auditory cortex, possibly in series. Cool, no?


And on a totally different note … An important paper from Anne-Lise Giraud’s group just appeared in PNAS: "Neurophysiological origin of human brain asymmetry for speech and language," by Benjamin Morillon et al. This paper is based on concurrent recording of EEG and fMRI. It builds on the 2007 Neuron paper and incorporates an interesting task contrast and a sophisticated analysis allowing us to (begin to) visualize the network at rest and during language comprehension. The abstract is below:


The physiological basis of human cerebral asymmetry for language remains mysterious. We have used simultaneous physiological and anatomical measurements to investigate the issue. Concentrating on neural oscillatory activity in speech-specific frequency bands and exploring interactions between gestural (motor) and auditory-evoked activity, we find, in the absence of language-related processing, that left auditory, somatosensory, articulatory motor, and inferior parietal cortices show specific, lateralized, speech-related physiological properties. With the addition of ecologically valid audiovisual stimulation, activity in auditory cortex synchronizes with left-dominant input from the motor cortex at frequencies corresponding to syllabic, but not phonemic, speech rhythms. Our results support theories of language lateralization that posit a major role for intrinsic, hardwired perceptuomotor processing in syllabic parsing and are compatible both with the evolutionary view that speech arose from a combination of syllable-sized vocalizations and meaningful hand gestures and with developmental observations suggesting phonemic analysis is a developmentally acquired process.


Morillon B, Lehongre K, Frackowiak RS, Ducorps A, Kleinschmidt A, Poeppel D, & Giraud AL (2010). Neurophysiological origin of human brain asymmetry for speech and language. Proceedings of the National Academy of Sciences of the United States of America, 107 (43), 18688-93 PMID: 20956297