Friday, December 18, 2009

During the holidays, don't forget to submit your papers -- to LCP Cognitive Neuroscience of Language

¿Tired of the same old journals? Nature ... Science ... Neuron ... PLoS ...
¿Tired of Reviewer #3 ruining your holiday vibe (see recent video of reviewer #3's impact ...)?
¿Ready for a new journal to consider your new stuff?

I've announced this before, but I'm not sure people are sufficiently aware: the journal Language and Cognitive Processes now has regular issues devoted to the Cognitive Neuroscience of Language. LCP-CNL is edited by Lolly Tyler and David Poeppel. The first issue was recently published, and more papers are in the pipeline.

http://www.psypress.com/language-and-cognitive-processes-0169-0965

Please consider sending your work to the journal. We promise to soften reviewer #3's blow. (And analyses appreciating the utility of d-prime will get an extra-fast turnaround. That, after all, was one of the major outcomes of the Neurobiology of Language conference in Chicago.) And at the very least, there now exists another good publication outlet for papers that are theoretically well-motivated, computationally explicit, and neurobiologically sensible.

Don't be shy! Submit early. Submit often.

Tuesday, December 15, 2009

The parallel universes of Broca's area

Update on the function of Broca's area: we still don't know much.

It seems like everyone is studying Broca's area and attributing any number of functions to it. The most recent example is the Science paper that David commented on a while back. That paper argued that Broca's area supports sequential processing of lexical, grammatical, and phonological information. Like other claims about Broca's area, this paper has sparked a debate. For example, see the e-comment by Matt Goldrick et al.

But Broca's area is also a prime suspect in the hunt for mirror neurons, speech perception, syntactic movement, hierarchical structure processing, semantic integration, working memory, and cognitive control. What's interesting (or unfortunate) is that some of these debates go on without reference to others, as if they were living in parallel universes. E.g., you never hear the mirror neuron folks talking about cognitive control and vice versa.

New rule: when speculating about the function of Broca's area you have to at least mention the range of other ideas/data. Maybe this will promote cross-universe interaction.

New rule #2: don't use the term "Broca's area" without further specification. We all know that Broca's area is composed of at least two subregions that seem to do different things. Please specify.

Thursday, December 10, 2009

Postdoc in Barcelona: “Bilingualism and Cognitive Neuroscience” – (BRAINGLOT)

CONSOLIDER-INGENIO 2010 PROJECT
“Bilingualism and Cognitive Neuroscience” – (BRAINGLOT)

1. Position
Post-doctoral position in cognitive neuroscience / multisensory integration
Applications are invited for a full-time post-doctoral research position in the MULTISENSORY
RESEARCH GROUP at the Pompeu Fabra University (Barcelona). The post is part of the BRAINGLOT
project, a Spanish Research Network on Bilingualism and Cognitive Neuroscience (Consolider-Ingenio
2010 Scheme, Spanish Ministry of Science and Education).

2. Project

The project brings together the efforts of several research groups spanning different scientific disciplines
with the common purpose of addressing the phenomenon of bilingualism. The project is conceived with
an open and multidisciplinary vocation, as one of its major anchor points places the stress on the mutual
influence (both in terms of cognitive and neural processes) between bilingualism and other functions such
as auditory perception, multisensory integration, and the executive control of attention. This is an excellent
opportunity for professional growth for those interested in the fields of psychology, neurobiology,
cognitive neuroscience, or related disciplines including computer science. The position is mainly intended
for leading fMRI studies of multisensory integration (possibly complemented with other
methodologies such as ERP and behavioral measures).

3. Candidate Profile

Candidates must have a PhD and a background in cognitive neuroscience, neuroscience, and/or
cognitive psychology. Previous experience in speech perception and/or multisensory processing will be
strongly valued. Experience with functional MRI data analysis and basic programming skills (e.g.,
Presentation, E-prime, and Matlab) is *necessary*. Applicants from outside the EU are welcome to apply
but must qualify for a valid visa.

4. Conditions

• Position: The position will be funded and renewable for up to three years
• Starting date: As soon as possible
• Salary: Commensurate with experience.
• Travel: The project will require short trips within Spain

5. How to apply

Applications should include:
• a C.V. including a list of publications
• the names of two referees who would be willing to write letters of recommendation
• a cover letter describing research interests
For informal enquiries about the position and applications, please contact Salvador Soto-Faraco.
salvador.soto@icrea.es (http://www.mrg.upf.edu/mrg-home.htm). Applications will be accepted until the
position is filled.
Please mention in the email subject line that you are applying for the POSTDOCTORAL position.

Wednesday, December 2, 2009

How to make your brain shrink: age

A new study in the Journal of Neuroscience reports that significant reductions in cortical volume occur during normal aging over the span of only one year. The researchers collected MRI data from 142 healthy elderly people aged 60-91 (60 is elderly? Really?). Cortical volume reduction was detectable in several regions, but most prominently in temporal and prefrontal cortices, which of course include regions involved in language function. No wonder I can't remember names anymore...

Fjell, A., Walhovd, K., Fennema-Notestine, C., McEvoy, L., Hagler, D., Holland, D., Brewer, J., & Dale, A. (2009). One-year brain atrophy evident in healthy aging. Journal of Neuroscience, 29(48), 15223-15231. DOI: 10.1523/JNEUROSCI.3252-09.2009

Live videofeed of the sectioning of H.M.'s brain today

Click here to view:
http://thebrainobservatory.ucsd.edu/

Tuesday, December 1, 2009

Role of the anterior temporal lobe in semantic word retrieval

Lesion studies are making a comeback, and this is a good thing. fMRI is a valuable technique, but it absolutely needs to be balanced with other methods, and lesion studies remain important in this respect.

A new lesion study from Myrna Schwartz's group has recently appeared in the advance access section of the journal Brain. The study examines semantic word retrieval in aphasia using a picture naming task. For years this group has been doing fantastic, psycholinguistically informed modeling work (with Gary Dell) on naming errors in aphasia, and it now adds lesion correlation to its arsenal. Using voxel-based lesion-symptom mapping (VLSM) in a sample of 64 aphasics, the authors correlated semantic error rate in naming (e.g., misnaming an elephant as a zebra) with the presence or absence of a lesion on a voxel-by-voxel basis. They also administered control tasks: one set that sought to identify non-verbal semantic comprehension deficits (the Pyramids and Palm Trees and Camel and Cactus Tests) and another that sought to identify verbal comprehension deficits (a word-to-picture matching test and a synonym judgment test). The non-verbal control is the most important because it rules out errors arising from deficits in visual analysis of pictured stimuli or in general conceptual semantic processing.
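For readers unfamiliar with VLSM, here is a minimal sketch of the core voxel-by-voxel logic in Python. This is my own toy illustration under assumed array shapes, not the authors' actual pipeline (real analyses also handle covariates such as lesion volume and correct for multiple comparisons):

import numpy as np
from scipy import stats

def vlsm_t_map(lesion_masks, scores, min_group=5):
    """Toy VLSM: lesion_masks is (n_patients, n_voxels), binary (1 = lesioned);
    scores is (n_patients,), a behavioral measure such as semantic error rate.
    For each voxel, compare the scores of patients with vs. without a lesion there."""
    n_patients, n_voxels = lesion_masks.shape
    t_map = np.full(n_voxels, np.nan)
    for v in range(n_voxels):
        lesioned = scores[lesion_masks[:, v] == 1]
        spared = scores[lesion_masks[:, v] == 0]
        if len(lesioned) >= min_group and len(spared) >= min_group:
            t_map[v] = stats.ttest_ind(lesioned, spared, equal_var=False).statistic
    return t_map

# Toy data: 64 patients, 1000 voxels, random lesions and error rates
rng = np.random.default_rng(0)
masks = rng.integers(0, 2, size=(64, 1000))
error_rates = rng.random(64)
print(vlsm_t_map(masks, error_rates)[:5])

Voxels where lesioned patients make reliably more semantic errors than spared patients get large t-values; thresholding that map gives the lesion-symptom foci reported in studies like this one.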

Correlation between semantic errors in naming and lesion data identified three main regions, anterior/mid middle temporal gyrus, posterior middle temporal gyrus, and inferior frontal gyrus (BA 45/46) (see figure below).



Factoring out the verbal comprehension measures didn't change the pattern; however, factoring out the non-verbal semantic tests eliminated the frontal and posterior temporal foci, leaving only the anterior temporal regions as significantly correlated with deficits in accessing word- (lemma) level information (see figure below).



A couple of surprising things came out of this study, for me anyway. One is that the anterior temporal focus remained significant even after factoring out performance on non-verbal semantic tests like the Pyramids and Palm Trees Test. Patients with semantic dementia have ATL involvement, do poorly on the PPT, and have been argued to have amodal semantic deficits. I would have predicted that factoring out the PPT would leave only a posterior temporal focus surviving, but the reverse held. This is interesting and useful information.

Another interesting result is that the neural systems involved in word-level access in naming (ATL) are not dramatically involved in word-level access in comprehension; otherwise one would have expected the ATL focus to diminish substantially when verbal comprehension is factored out. The non-involvement of ATL regions in comprehension is exactly what I would have predicted based on our claim that posterior regions are critical for this, but I had also assumed there was a good deal of overlap between comprehension and production in terms of word-level access. One concern I have is the use of a composite verbal comprehension score that combines the word-to-picture matching and synonym judgment tasks. If these tasks tap different neural systems to some extent (e.g., temporal vs. frontal, respectively), then the composite score may be diluted. I would have liked to see the word-to-picture matching score factored out on its own.

All in all, this is an important study that will have to be taken seriously by anyone developing models of the functional anatomy of language.

Schwartz, M., Kimberg, D., Walker, G., Faseyitan, O., Brecher, A., Dell, G., & Coslett, H. (2009). Anterior temporal involvement in semantic word retrieval: Voxel-based lesion-symptom mapping evidence from aphasia. Brain. DOI: 10.1093/brain/awp284

Wednesday, November 18, 2009

What counts as auditory cortex?

I'm working on revisions of an fMRI paper that investigates the hierarchical organization of auditory cortex. Not surprisingly, the STS has figured into our findings. One reviewer raised the question of whether the STS is auditory cortex. This is a legit question and one that is not easy to answer.

The reviewer adopted a conservative definition of auditory cortex (conservative by his/her own admission): auditory cortex is cortex that receives projections from the medial geniculate complex. I kind of like this definition because if we apply it to vision (using LGN projections of course), visual cortex = V1 (and maybe MT). THAT should slow down those greedy cortex grabbers! :-)

But surely this is too conservative. No one would claim that V2 isn't part of visual cortex just because it doesn't receive thalamic projections. So if vision people can claim territory beyond LGN projection fields, we should too.

But then where do we draw the line between auditory cortex and the rest of cortex? Do we base it on cytoarchitectonics? Do we include any auditory-responsive field as auditory cortex (which would include frontal areas!)? Is the notion of unimodal sensory cortex even meaningful?

What do you think?

Friday, November 13, 2009

Postdoctoral positions - Northwestern, Communication Sciences

The Department of Communication Sciences and Disorders at Northwestern University is pleased to announce the availability of PhD and postdoctoral fellow positions, funded by an NIH translational research training grant. The goal of the grant is to train young scientists in translational research in communication sciences and disorders, bridging basic and clinical research. Special emphasis is placed on translational projects related to sensory reception, motor control, and language processing. Postdoctoral candidates must hold a PhD in Communication Sciences and Disorders, Cognitive Science, Linguistics, Neuroscience, or a related field.

Trainees will receive funding for two years on this project. Additional funding is available beyond the two years of this project. Interested candidates should send a cover letter stating their research interests and career goals, a CV, and two letters of recommendation to Chuck Larson, Chair, Communication Sciences and Disorders, Northwestern University, 2240 Campus Dr., Evanston, IL 60208.

Applications will be reviewed quarterly, but it is anticipated that most positions will be filled at the beginning of the academic year in September.

Wednesday, November 11, 2009

Tenure-track or tenured professor Department of Communication Sciences and Disorders Emerson College, Boston, Massachusetts

The Department of Communication Sciences and Disorders at Emerson College
seeks to hire a tenure-track or tenured faculty member with primary expertise in the area of Speech-Language Pathology or a related field. Required qualifications are 1) a completed doctorate in Speech and Hearing Science, Communication Disorders, or a related discipline, 2) a record of excellence in teaching, and 3) an established record of research in Communication Sciences and Disorders. The successful applicant will be expected to publish research in her/his area of expertise, teach undergraduate and/or graduate courses, advise students, and participate in related academic service and scholarly activities. Appointment begins September 1, 2010.

Review of applications will begin January 11, 2010 and continue until the position is filled. Applicants should submit a letter of application (including a description of research focus and teaching experience), a curriculum vitae, and three letters of support to: CD Search Committee, Communication Sciences and Disorders, 120 Boylston Ave, Boston, MA 02116. Inquiries should be directed to Daniel Kempler, Department Chair, Daniel_Kempler@emerson.edu, 617-824-8302.

Emerson College values campus multiculturalism as demonstrated by the
diversity of its faculty, staff, student body, and constantly evolving curriculum. The successful candidate must have the ability to work effectively with faculty, students, and staff from diverse backgrounds. Members of historically underrepresented groups are encouraged to apply. Emerson College is an Equal Opportunity Employer that encourages diversity in its workplace.

Emerson College is the nation’s only four-year institution dedicated exclusively to majors in communication and the arts. The program in Communication Sciences & Disorders is one of the oldest and most respected in the country, and is highly ranked among the most competitive graduate programs in communication disorders in the US. The department offers state-of-the-art, handicap accessible, on-campus clinical facilities easily reached by public transportation. Emerson College is located in the center of Boston, surrounded by major healthcare and research centers, which provide a wide range of clinical and research opportunities for faculty and students. The College enrolls approximately 3,000 full-time undergraduates and nearly 1,000 full and part-time graduate students in its School of the Arts and School of Communication.

Tuesday, November 10, 2009

Cognitive Neuroscience Assistant or Associate Professor (tenure-track) - University of Washington Institute for Learning & Brain Sciences

The University of Washington’s Institute for Learning & Brain Sciences (I-LABS), an interdisciplinary brain research center, has a tenure-track faculty opening for an Assistant/Associate Professor in Cognitive Neuroscience with a focus on Language. Departmental affiliation can be in Psychology, Speech & Hearing Sciences, Linguistics, or Biology, depending on the applicant’s background and training. Ph.D. required. Appointment at the Associate Professor level will be considered for candidates who have an outstanding research record. I-LABS’ faculty study life-long learning and specialize in human cognitive development and learning. We have a growing developmental group at the Institute. The Institute will open its own MEG-Brain Imaging Center in April 2010. The successful candidate will be one who brings expertise in human cognitive neuroscience with a focus in the domain of language, development, and/or research using MEG/ERP. Faculty responsibilities will begin September 16, 2010.

Applicants should send a statement of teaching and research interests, curriculum vita, up to 5 publication reprints, and three letters of recommendation to:

Patricia Kuhl, Co-Director, Institute for Learning & Brain Sciences, Mailstop 357988, University of Washington, Seattle, WA 98195; e-mail: pkkuhl@u.washington.edu. Review of applications will begin January 15, 2010, and will continue until the position is filled.

The University of Washington is an affirmative action, equal opportunity employer, and is building a culturally diverse faculty and staff and strongly encourages applications from women, minorities, individuals with disabilities and covered veterans. UW faculty engages in teaching, research and service. The University of Washington, a recipient of the 2006 Alfred P Sloan award for Faculty Career Flexibility, is committed to supporting the work-life balance of its faculty.

Friday, November 6, 2009

The Pinker Panther Strikes Again - Recording from Broca’s Area

In a recent issue of Science (Vol 326, 445-449), Sahin, Pinker, Cash, Schomer, and Halgren summarize their findings from direct recordings from Broca’s region in patients undergoing presurgical epilepsy evaluation.

In an earlier (2006) paper in Cortex, Sahin, Pinker, and Halgren reported fMRI data from participants doing one of the garden-variety past-tense tasks often used by Steve Pinker and his students and colleagues (the style of the experiment is something like this: “visual cue: Yesterday I was in the park and ___. Target: to walk” -> participant produces “walked”). This 2009 paper is the fancier, intracranial recording companion piece (ICE, in their terminology, intracranial electrophysiology :-).

The piece represents something like the ‘harmonic convergence’ between the current enthusiasm for intracranial electrophysiological data (is anyone not doing this?), the long (historical) reach of Pinker’s past-tense-as-psycholinguistic-drosophila philosophy (yes, I remember having to read Pinker and Prince in grad school; and Greg even worked on some of this stuff!), and the growing interest in better cognitive neuroscience of language models.

Peter Hagoort and Pim Levelt provide a perspective in the same issue of Science (Vol 326, 372-373), largely because these data are directly linked to the Levelt production model. The numbers reported by Sahin et al. match nicely with Levelt’s production model (see, e.g. Indefrey & Levelt, 2004, Cognition) -- so the Max-Planck guys are certainly happy.

The centerpiece of the study -- recordings from three patients who also underwent fMRI scanning prior to electrode implantation -- concerns data from electrodes in Broca’s region, perhaps Brodmann’s area 45 (that point is not made with sufficient clarity). They identified in the electrode response three peaks, or rather a tri-phasic response. Across all three patients, there were peaks/valleys at ~200 ms, ~320 ms, and ~450 ms post-target onset. The first, 200 ms, peak was modulated by lexical manipulations (frequency), the second, 320 ms, peak by inflectional demands (grammatical manipulations), the third, 450 ms, peak by articulatory requirements. Based on these observations, they conclude (and this is the title of the article): “Sequential processing of lexical, grammatical, and phonological information within Broca’s area.”

The results are not particularly surprising. When presented with a word, it stands to reason that it has to be accessed/identified before it can be repeated … (Planning a word’s articulation before even seeing it would indeed be pretty novel.) Moreover, if any operation on the input representation is required prior to articulation, it is also not super-surprising that it would be temporally interposed between lexical access and articulatory planning/output generation. What would be the alternative? What is interesting in these data is that there is evidence for these stages in one very small region. Of course, many other regions will also play a role – here, by clinical necessity, only a small region can be investigated. What is not clear is whether the activation observed is functionally critical, i.e. whether the reported triphasic Broca’s region activity is necessary for the execution of these language tasks. If we want to conclude that Broca’s region provides the neuronal substrate for multiple different operations that participate causally in the execution of multiple language tasks – again, is there a credible alternative? – it would help to get a better sense of the role such localized frontal activation plays. In any case, the paper reflects the growing use of intra-cranial data in the study of language (see, e.g., the studies by Boatman, Crone, Knight, etc.)

Thursday, November 5, 2009

Why "where" cannot be a sensory processing stream

There is debate about the nature of the dorsal auditory processing stream. Some folks, Josef Rauschecker in particular, argue for a dorsal "where" stream, whereas others, Hickok & Poeppel and Warren et al., argue for a sensory-motor integration (sometimes called "how") stream. Here's why the "where" hypothesis can't, in principle, be right.

Spatial information associated with an auditory signal is a stimulus feature much like pitch. We don't talk about a "pitch stream" however. Why not? Because pitch (frequency) is just a cue for any number of processing goals. Pitch information can cue phonemic identity, speaker voice, auditory stream segregation ... even sensory-motor goals (humming back a tone). Spatial cues are no different in that they can cue explicit location judgments, auditory stream segregation, and any number of sensory-motor processes (head movements, saccades, locomotion toward or away from a source).

Processing streams, I'm suggesting, are defined by goals or tasks -- what the information is used for -- not by stimulus features. Sensory-motor integration for vocal tract actions defines a goal -- control of the vocal tract -- and therefore is a viable candidate for a processing stream. Identifying the meaning of an auditory object is also a goal and a good candidate for a processing stream. Stimulus features, like pitch or location are not goals, they are just cues that can be used within various task-driven processing streams.

Of course, this doesn't imply that there isn't a specialized location processing system in the brain that uses interaural time and level differences to compute spatial information. Almost for sure there is (my guess is that it's subcortical), just like there is a system that processes pitch using frequency information. But we shouldn't confuse a specialized feature processing system (area) with a cortical processing stream as the notion "stream" is typically used.

Which reminds me. It is probably time to redefine the notion of a processing "stream". In particular, I think the dorsal-ventral distinction is getting tired and has now outlived its usefulness. I'll expand in a later post...

References

Hickok, G., & Poeppel, D. (2000). Towards a functional neuroanatomy of speech perception. Trends in Cognitive Sciences, 4(4), 131-138. DOI: 10.1016/S1364-6613(00)01463-7

Hickok, G., & Poeppel, D. (2007). The cortical organization of speech processing. Nature Reviews Neuroscience, 8(5), 393-402. DOI: 10.1038/nrn2113

Rauschecker, J.P., & Scott, S.K. (2009). Maps and streams in the auditory cortex: Nonhuman primates illuminate human speech processing. Nature Neuroscience, 12(6), 718-724. PMID: 19471271

Warren, J.E., Wise, R.J., & Warren, J.D. (2005). Sounds do-able: Auditory-motor transformations and the posterior temporal plane. Trends in Neurosciences, 28(12), 636-643. PMID: 16216346

Monday, November 2, 2009

TENURED SENIOR FACULTY POSITION AT BOSTON UNIVERSITY

The Department of Speech, Language and Hearing Sciences at Boston University invites applications for the position of Senior Faculty at the level of Associate/Full Professor. Candidates should have demonstrated a strong research background and a successful record of obtaining external support for research and training/mentoring activities. Qualifications include: (a) an earned doctorate with a specialty in one of the communication sciences or disorders areas, (b) experience in teaching, research and mentoring, and (c) clinical certification (preferred, but not required). Areas of research expertise are open, but we encourage candidates with a clear interest in interdisciplinary research collaboration. The Department is moving to a rotating Chair model with a limited appointment as Chair, and all senior faculty will be eligible to serve in this administrative capacity as appropriate to their overall career development. This is a full-time, tenure-track position, with a 9-month appointment. Rank and salary will be commensurate with qualifications and level of experience.
For questions regarding this position, please email Swathi Kiran, Ph.D., CCC-SLP, Chair of the search committee, at kirans@bu.edu or call 617-358-5478.

Application packets should be directed to:

Speech, Language and Hearing Sciences Search Committee
Boston University
College of Health and Rehabilitation Sciences: Sargent College
635 Commonwealth Avenue
Boston, MA 02215

Website: http://www.bu.edu/sargent/

Faculty Position in Developmental Disorders of Speech, Language and Learning, Northwestern University

The Roxelyn and Richard Pepper Department of Communication Sciences
and Disorders at Northwestern University is searching for a
tenure-track assistant professor to begin September 2010 who will lead
a translational research program in the developmental disorders of
speech and learning (e.g., apraxia, phonological disorders). An
exceptional candidate may be considered for an endowed junior chair
position. Clinical certification is not required. In addition to an
earned doctorate from relevant fields (e.g., psychology, learning
science, speech science, neurobiology, genetics, molecular and cell
biology, biomedical engineering), applicants must have demonstrated
potential to lead a high-impact, externally-funded research program.
Northwestern University is a founder and leader of the discipline of
Communication Sciences and Disorders with undergraduate, professional
(Audiology, Speech-Language Pathology, and Learning Disabilities), and
PhD training programs; innovative and interdisciplinary approaches to
teaching and research dominate the direction of the University.

Duties: Develop a fundable program of research, teach courses in
developmental speech sciences and disorders and related topics, direct
student research, and engage in service to the department, school, and
university.

Qualifications: An earned PhD, a record of peer-reviewed publications,
potential for obtaining external grant funding, and potential for
being an effective teacher. Clinical qualification is not required.

Salary: Internationally competitive, depending on qualifications and experience.

Application procedures: Candidates should send a CV, research and
teaching statements, reprints of published articles, and four letters
of reference to: Charles Larson, PhD, CSD Faculty Search Committee
Chair (2240 Campus Dr., Evanston, IL 60208).

The University: Northwestern University is one of the nation’s largest
private research universities. The main campus is located in Evanston
and the medical campus is located 12 miles south in Chicago. Both
campuses are located on the shore of Lake Michigan. There is
continuing expansion of University facilities and programs,
particularly in the sciences and medicine. Cultural, social, and
recreational activities abound on and near each campus. For more
information, please visit:
http://www.communication.northwestern.edu/departments/csd/.

Closing Date: Ongoing until position is filled. Review of
applications will begin December 15, 2009.

Northwestern University is an Affirmative Action, Equal Opportunity
Employer. Women and minorities are encouraged to apply. Hiring is
contingent on eligibility to work in the United States.

Search #15116
Contact: Charles Larson

Faculty Positions: University of Maryland

The Linguistics Department at the University of Maryland seeks outstanding applicants for two tenure-track positions in the cognitive science of language. The first position is an Assistant or Associate Professor position in computational psycholinguistics, with a focus on models of human language processing and/or language learning. The second position is a tenure-track Assistant Professor position in the cognitive neuroscience of language. Candidates for both positions should contribute to a vibrant interdisciplinary language and cognitive science community that spans Linguistics and many other departments, programs, and institutes (e.g., Neuroscience and Cognitive Science, Electrical Engineering, Computer Science, Psychology, Hearing and Speech, Second Language Acquisition, Human Development, Center for Advanced Study of Language), and should contribute to the university's NSF-IGERT graduate training program in Biological and Computational Foundations of Language Diversity. The cognitive neuroscientist should ideally be able to take advantage of the university's multi-modal neuroimaging resources, which include ERP and MEG facilities and a new NSF-sponsored 3T fMRI facility (opening in 2010). Candidates for both positions should demonstrate the ability to lead an innovative and collaborative program of research and teaching that integrates computational and experimental approaches with linguistics and should have a Ph.D. in Linguistics or another Cognitive Science-related field.

For best consideration, applicants should submit a letter of application, including a research statement, a CV, and representative samples of scholarship by December 1, 2009 (by email if possible). Three letters of recommendation should be submitted separately (also by email if possible). Please indicate whether you plan to attend the LSA meeting in Baltimore. The position is open until filled.

Applications should be submitted to: linguistics_search@umd.edu or to: Search Committee, Linguistics Department, 1401 Marie Mount Hall, University of Maryland, College Park, MD 20742, USA.

The University of Maryland is an Equal Opportunity, Affirmative Action employer. Applications from women and minority candidates are especially encouraged.

Contact: William Idsardi, idsardi@umd.edu, 301-405-8376
Applications: linguistics_search@umd.edu

Tuesday, October 27, 2009

Spatial Organization of Multisensory Responses in Temporal Association Cortex

An important unit physiology paper by Dahl, Logothetis, & Kayser appeared in J. Neuroscience a couple of weeks ago. These authors explored the spatial organization of cells in multisensory areas of the superior temporal sulcus in macaque, in particular the distribution of visual- versus auditory-preferring cells. What they found is that like-preferring cells cluster together in patches: auditory cells tend to cluster with other auditory cells, visual cells tend to cluster with other visual cells.


This is only mildly interesting in its own right because it just shows that functional clustering, long-known to be a feature of unimodal sensory cortex, also holds in multisensory cortex. What makes this important is the implications this finding has for fMRI. If "cells of a feather" cluster together and if these clusters are not uniformly distributed across voxels in an ROI then different voxels will be differentially sensitive to one cell type versus another. And this is exactly the kind of underlying organization that multivariate pattern analysis (MVPA) can detect. So, this new finding justifies the use of fMRI data analysis approaches such as MVPA.
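To make that logic concrete, here is a toy simulation (my own illustration with made-up numbers, not anything from the paper): two conditions with roughly matched mean ROI responses, but with voxel-level biases of the kind uneven patch clustering would produce, which a linear classifier can nonetheless decode.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_voxels = 100, 50
voxel_bias = rng.normal(0, 1, n_voxels)        # uneven sampling of cell patches per voxel
labels = np.repeat([0, 1], n_trials // 2)      # 0 = auditory trial, 1 = visual trial
patterns = rng.normal(0, 1, (n_trials, n_voxels))
patterns[labels == 1] += 0.5 * voxel_bias      # visual trials tilt along the bias pattern
patterns[labels == 0] -= 0.5 * voxel_bias      # auditory trials tilt the opposite way

# The mean ROI response is roughly matched across conditions, but the
# multi-voxel pattern differs, so a linear classifier decodes modality
# well above chance.
acc = cross_val_score(LinearSVC(dual=False), patterns, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")

The point of the simulation is simply that patchy, non-uniform clustering at the sub-voxel scale is exactly what gives MVPA something to work with.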

Dahl, C.D., Logothetis, N.K., & Kayser, C. (2009). Spatial organization of multisensory responses in temporal association cortex. Journal of Neuroscience, 29(38), 11924-11932. PMID: 19776278

Monday, October 26, 2009

Exciting new job in David Gow's group at MGH!

FULL-TIME RESEARCH ASSISTANT, Massachusetts General Hospital

RA position available in the Cognitive/Behavioral Research Group, Department of Neurology, at the Massachusetts General Hospital. Research focuses on the perception of spoken language and related speech processing in both unimpaired adults and recovering stroke patients. All of our work involves integrated multimodal imaging with MRI, EEG and MEG using the state-of-the-art resources at the Athinoula A. Martinos Imaging Center (http://www.nmr.mgh.harvard.edu/martinos/aboutUs/facilities.php). Our lab is a leader in the application of Granger causality analysis to high spatiotemporal resolution brain activation data. The RA will work closely with the PI on a regular basis, but will also be required to work independently. Responsibilities will include subject recruitment, stimulus development, experiment implementation and execution, subject testing including MRI and MEG/EEG scanning, data analysis, database management, and minor administrative work. The position will provide some patient interaction with recent stroke victims. It will also involve training in multimodal imaging techniques, spectral and timeseries statistical analyses, and speech recording, synthesis, analysis and digital editing techniques. This job is ideal for someone planning on applying to graduate school in cognitive neuroscience, imaging biotechnology, psychology, medicine, or the speech and hearing sciences.

Minimum Requirements:
Bachelor's degree in psychology, cognitive neuroscience, computer science or a related field is required.

The ideal candidate will be proficient in Mac and Windows based applications (experience with Linux is a plus). S/he will have programming experience and will be comfortable learning to use and modify new research-related applications (e.g. MatLab code). Prior experience with neuroimaging techniques such as MRI, EEG or MEG is preferred, but not required. Good
written and oral communication skills are important assets. A sense of humor and adventure is a must. A minimum two year commitment is required. Funding is currently secure through 2014.

The candidate must be hardworking, mature, and responsible with excellent organizational and interpersonal skills. S/he must be able to work independently in a fast-paced environment, juggle and prioritize multiple tasks, and seek assistance when appropriate.

Position is available immediately. Secure funding is available through 2014. If interested, please send a CV and a short statement of your interest, as well as the names and addresses of three references, to Dr. David Gow at gow@helix.mgh.harvard.edu.

--
David W. Gow Jr., Ph.D.
Cognitive/Behavioral Neurology Group
Massachusetts General Hospital
175 Cambridge Street, CPZ-S340
Suite 340
Boston, MA 02114

ph: 617-726-6143
fax: 617-724-7836

New blurbs from a new contributor

A reader reminded me recently that not enough people comment on Talking Brains. I encouraged her to contribute. Since we were just at the Neurobiology of Language conference as well as the Society for Neuroscience, I suggested she write up a few blurbs on posters/presentations that made an impression on her. Thank you, Laura Menenti (from the Donders Center), for sending this. I hope it stimulates other readers to contribute more comments on their impressions of these two meetings (or anything else, as usual).

David

(By the way, I saw these three presentations as well. All three were very provocative and interesting - nice selection, Laura.)

An idiosyncratic sample of NLC/SfN studies

Here is an idiosyncratic sample of studies I noticed at the Neurobiology of Language Conference (Oct 15th-16th) and Neuroscience 2009 (Oct 17th-21st) - idiosyncratic because the population from which to draw was huge, because the sample size needs to be small, and because the sample is biased by my own interest - naturalistic language use.

Neuroscience 2009: Characteristics of language and reading in a child with a missing arcuate fasciculus on diffusion tensor imaging. J. Yeatman, L. H. F. Barde, H.M. Feldman

Considering the importance of the arcuate fasciculus in connecting classic language areas, the question of what language is like when you don't have one is an exciting one. The authors tested a 12-year-old girl who lacks an arcuate fasciculus (due to premature birth) on a standardized neuropsychological test battery, and scanned her using diffusion tensor imaging (DTI). The DTI showed that the patient did indeed completely lack the arcuate fasciculus bilaterally. Surprisingly, her performance on the language tests fell within the normal range. The authors conclude that normal language performance without an arcuate fasciculus is possible, and that the brain therefore shows remarkable plasticity in dealing with the lack of such an essential pathway.

There is a catch, however: in a footnote the authors mention that the subject has very 'inefficient communication' and poor academic performance. As it turns out, the girl may be able to achieve a normal score on the tests, but not in a normal way: for example, answering the question 'What is a bird?' from the Verbal Intelligence Scale takes her three minutes, according to the experimenter. It is also essentially impossible to have a conversation with her.

To me, these results show two things:

- Normal language performance is not possible without an arcuate fasciculus, assuming that being able to hold a conversation is part of normal language performance.

- The neuropsychological tests used do not properly reflect language performance if they fail to capture such gross variations in how a patient arrives at the correct answer.

It would be extremely interesting in the light of recent discussions (Hickok and Poeppel, 2004; Saur et al., 2008) to test whether this patient's impairments are restricted to specific aspects of language processing.

Neuroscience 2009: Do we click? Brain alignment as the neuronal basis of interpersonal communication. L. J. Silbert, G. Stephens, U. Hasson

In an attempt to look at normal language use, these authors target the question of how participants in a conversation achieve mutual understanding. Possibly, they do so through shared neural patterns and this study is a first step in testing that hypothesis. The authors let a speaker tell a story in the scanner and then let eleven other subjects listen to that same story. They measured correlations between the BOLD-timeseries in the speaker and the listeners. Intersubject correlations between speaker and the average listener were highest in left inferior frontal gyrus, anterior temporal lobe and precuneus/PCC. The correlations were highest when the speaker time series was shifted to precede the listeners' by 1-3 seconds, implying that the correlations are not simply due to the fact that the speaker also hears herself speak.
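As a concrete sketch of the lagged speaker-listener correlation idea (my own toy version under assumed variable names, not the authors' code): for a given voxel, correlate the speaker's time course with a listener's at several temporal shifts and keep the lag that maximizes the correlation.

import numpy as np

def best_lag_corr(speaker_ts, listener_ts, max_lag=5):
    """Correlate a speaker voxel time course with a listener's at several
    temporal shifts; a positive best lag means the speaker's signal leads."""
    best_lag, best_r = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            s, l = speaker_ts[:-lag], listener_ts[lag:]
        elif lag < 0:
            s, l = speaker_ts[-lag:], listener_ts[:lag]
        else:
            s, l = speaker_ts, listener_ts
        r = np.corrcoef(s, l)[0, 1]
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag, best_r

# Toy example: the listener's signal tracks the speaker's with a 2-sample delay
rng = np.random.default_rng(2)
speaker = rng.normal(size=200)
listener = np.roll(speaker, 2) + 0.5 * rng.normal(size=200)
print(best_lag_corr(speaker, listener))  # best lag should come out around +2

A reliably positive best lag across listeners is what licenses the authors' claim that the coupling is not just the speaker hearing her own voice.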

To corroborate the idea that these correlations underlie communication, the authors did two further tests. First, they also recorded a Russian speaker telling a story, which they then presented to non-Russian listeners. They found fewer areas showing an inter-subject correlation (there were some, for instance in the STS). Second, they correlated the listeners' level of understanding of the story with the strength of the inter-subject correlation, and found a correlation between understanding and inter-subject correlation in the basal ganglia, left temporo-parietal junction, and anterior cingulate cortex. The interpretation of this finding is that the more the listeners correlate with the speaker, the more they understand.

I find the purpose of studying naturalistic communication laudable, and the results are intriguing. More detailed studies of communication are necessary however: one could interpret these results as showing that language areas are involved in speaking and listening. That, in itself, is not a shocking finding. The approach, nevertheless, holds promise for more research into naturalistic communication.

NLC: The neurobiology of communication in natural settings. J. I. Skipper and J. D. Zevin

This study attempts to avoid the concern raised above by specifying which correlations are due to what. The authors showed subjects a movie of a TV quiz and used Independent Components Analysis (ICA) to identify independent brain networks underlying processing of the movie. Having identified the networks, they correlated them with different aspects of the movie, identified through extensive annotation of that movie. For example, they find a component that involves bilateral auditory cortices. To find out what it does, they correlate it with diverse stimulus properties such as the presence/absence of speech/gesture/speech without oral movement/topic shifts/movement without speech/... (This is done through a peak and valley analysis, in which they determine the likelihood of a specific property occurring when the signal in the component is rising or falling.) For this component, the conclusion is that it is involved when speech is present. That, of course, is not a terribly shocking finding either. But has anyone ever investigated networks sensitive to speech without visible mouth movement, speech with mouth movement but without gesture, speech with mouth movement and gesture, and movement during speech that is not gesture? Only by putting all these stimulus properties in one experiment can one look both at sensitivity to these aspects of communication separately and at the overlap between them. Importantly, the co-occurrence of all these things is what makes naturalistic communication naturalistic communication. I think this study is a great advertisement for studying language in its natural habitat.
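As a rough sketch of how I imagine the peak-and-valley logic working (the function, event indices, and threshold below are my own guesses from the poster description, not the authors' code): for each annotated event, check whether the ICA component's time course was rising or falling at that moment.

import numpy as np

def rising_fraction(component_ts, event_indices):
    """Fraction of annotated events (e.g., speech onsets) that fall on a
    rising stretch of an ICA component's time course."""
    slope = np.gradient(component_ts)
    return float((slope[event_indices] > 0).mean())

# Toy example: a slowly oscillating component and hypothetical event times
t = np.arange(200)
component = np.sin(t / 10.0)
events = np.array([3, 5, 65, 68, 128, 130, 190])
print(rising_fraction(component, events))  # near 1.0 here: events coincide with rises

A fraction well above 0.5 for a given stimulus property would suggest that the component responds to that property.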

P.S. On a more general and totally unrelated note, the Presidential Lecture at Neuroscience 2009 by Richard Morris was an absolutely impressive example of how to conduct, and to present, science.

References

Saur D, Kreher BW, Schnell S, Kümmerer D, Kellmeyer P, Vry M-S, Umarova R, Musso M, Glauche V, Abel S, Huber W, Rijntjes M, Hennig J, Weiller C (2008) Ventral and dorsal pathways for language. Proceedings of the National Academy of Sciences, 105: 18035-18040.

Hickok G, Poeppel D (2004) Dorsal and ventral streams: a framework for understanding aspects of the functional anatomy of language. Cognition 92: 67-99.

Laura Menenti

Thursday, October 22, 2009

How do we perceive speech after 150 kisses?

I wouldn't know, but Marc Sato does. Marc tells me this was a tentative title for a poster he presented at SfN (or was it at NLC?). In any case the official title was Use-induced motor plasticity affects speech perception.

Rather than using that clumsy TMS technique, Marc and his colleagues decided to target speech-related motor systems using the CNS equivalent of a smart bomb: simply ask participants to make 150 lip or tongue movements over a 10-minute span. The idea is that this will fatigue the system and produce a kind of motor after-effect in lip or tongue areas. Perception of lip- or tongue-related speech sounds (/pa/ and /ta/) can then be assessed behaviorally. They used syllable discrimination (same-different) both with and without noise.

They calculated d' and beta scores. (WOOHOO!) d' of course is a measure of discrimination performance, corrected for response bias. Beta is a measure of the bias -- specifically, the threshold that a subject is using for deciding that the stimuli are, in this case, same or different.
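For reference, here are the standard signal-detection formulas for d' and beta, computed from hit and false-alarm rates (textbook definitions and hypothetical rates, not necessarily the authors' exact computation):

from scipy.stats import norm

def dprime_beta(hit_rate, fa_rate):
    """d' = z(hit) - z(FA); beta = likelihood ratio at the decision criterion."""
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa                     # sensitivity, independent of bias
    beta = norm.pdf(z_hit) / norm.pdf(z_fa)    # response bias
    return d_prime, beta

print(dprime_beta(0.85, 0.20))  # hypothetical rates: d' ≈ 1.88, beta ≈ 0.83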

So what did they find? Fatiguing lip or tongue motor systems had no effect on discrimination (d', top graph) but did have significant effector-specific effects on bias (beta scores, bottom graph).


Sato et al. conclude:

Compared to the control task, both the tongue and lip motor training biased participants’ response, but in opposite directions. These results are consistent with those observed by Meister et al. (2007, Current Biology) and d’Ausilio et al. (2009, Current Biology), obtained by temporarily disrupting the activity of the motor system through transcranial magnetic stimulation.


So what this shows is that motor stimulation/fatigue has no effect on acoustic perception per se, but rather on a higher-level decision/categorization process. Now, Marc has indicated that he views this decision/categorization process as "part of speech perception," which is perfectly legit. It certainly is part of speech perception as defined by this task. My own interests in speech perception, however, don't include all the processes involved in performing this task, so I take this as evidence that motor stimulation doesn't affect "speech perception".

Use-induced motor plasticity affects speech perception

Marc Sato1, CA, Krystyna Grabski1, Amélie Brisebois2, Arthur M. Glenberg3, Anahita Basirat1, Lucie Ménard2, Luigi Cattaneo4

1 GIPSA-LAB, CNRS & Grenoble Universités, France - 2 Département de Linguistique, Université du Québec à Montréal, Canada
3 Department of Psychology, Arizona State University, USA - 4 Centro Interdipartimentale Mente e Cervello, Università di Trento, Italy

Faculty Positions: PHONETICS/PHONOLOGY, BROWN UNIVERSITY

PHONETICS/PHONOLOGY, BROWN UNIVERSITY: The Department of Cognitive and Linguistic Sciences and the Department of Psychology announce that we will seek to fill four positions in language and linguistics over the next three years. Here we invite applications for an open-rank position in Phonetics/Phonology beginning July 1, 2010. Research focus is open, but we especially value programs of research that cross traditional boundaries of topics and methodology, including theoretical approaches. Interests in cross-linguistic and/or developmental research are highly desirable. The individual filling this position must be able to teach an introductory phonology course as well as a course in experimental phonetics. Additional positions that we will be hiring include a current search in (a) syntactic/semantic/pragmatic language processing, and two others tentatively in the areas of (b) lexical representation and processing, morphology, and/or word formation; and (c) computational modeling, cognitive neuroscience, and/or biology of language. Successful candidates are expected to have (1) a track record of excellence in research, (2) a well-specified research plan, and (3) a readiness to contribute to undergraduate and graduate teaching and mentoring. Brown has a highly interdisciplinary research environment in the study of mind, brain, behavior, and language and is establishing an integrated Department of Cognitive, Linguistic, and Psychological Sciences, effective July 2010. Plans to house the department in a newly renovated state-of-the-art building in the heart of campus are well under way. Curriculum vitae, reprints and preprints of publications, statements of research and teaching interests (one page each), and three letters of reference (for junior applicants) or names of five referees (for senior applicants) should be submitted on-line as PDFs to PhoneticsPhonologySearch@brown.edu, or else by mail to Phonetics/Phonology Search Committee, Department of Cognitive & Linguistic Sciences, Box 1978, Brown University, Providence, RI 02912 USA. Applications received by January 5, 2010 are assured of full review. All Ph.D. requirements must be completed before July 1, 2010. Women and minorities are especially encouraged to apply. Brown University is an Equal Opportunity/Affirmative Action Employer.

Wednesday, October 21, 2009

THE NEW PHONEBOOK IS HERE!


Cognitive neuroscience's equivalent, that is... Gazzaniga's The Cognitive Neurosciences IV arrived in my mailbox a couple of weeks ago. I own Vol. I & II, never bothered to get III and now have IV because I've got a chapter in it for the first time. I guess that means I'm officially a cognitive neuroscientist.

My chapter notwithstanding, the volume is pretty impressive and contains a lot of useful papers. It is again divided into the usual sections, Development and Evolution, Plasticity, Attention, Sensation and Perception (with a whopping three auditory chapters, including one by new UCI Cog Sci faculty member, Virginia Richards), Motor Systems, Memory, Language (the section I'm least likely to read), Emotional and Social Brain, Higher Cognitive Functions (i.e., everything else, including two chapters on neuroeconomics), and Consciousness (i.e., stuff we REALLY don't understand). An 11th section is titled "Perspectives" and features, well, perspectives by senior investigators. I'm looking forward to reading Sheila Blumstein's chapter in this section.

The only chapter I've already read is Tim Griffiths and Co.'s piece on auditory objects. There is some useful information there, including what appears to be a new study on auditory sequence processing using fractal pitch stimuli (sounds cool anyway). Way too much discussion of dynamic causal modeling, though, including two large figures and three tables -- TMI Tim :-)

There are a number of papers that I will eventually read. Some at the top of my list include motor-related papers by Reza Shadmehr and John Krakauer (Computational Neuroanatomy of Voluntary Motor Control) and by Grant Mulliken and Richard Andersen (Forward Models and State Estimation in Posterior Parietal Cortex). There's sure to be some information there that is relevant to speech.

The volume is definitely worth a look.

Tuesday, October 20, 2009

Neurobiology of Language Conference (NLC) 2009

I think the conference was a huge success. The meeting had over 300 registrants -- so many that an overflow room with an AV feed was required during the sessions. The meeting attracted a diverse group of scientists, ranging from those with linguistically oriented approaches to those in traditional neuropsychology, functional imaging, animal neurophysiology, and genetics. The speakers were a diverse group and included both senior scientists and postdocs. I have to say this now appears to be THE conference for us neuroscience-of-language folks. Congratulations and thank-yous to Steve Small and his group (particularly Pascale Tremblay) for organizing the meeting!

At a business meeting of interested scientists (a nice range of personalities: Tom Bever, Luciano Fadiga, Yosef Grodzinsky, Richard Wise, Alec Marantz, Gabriele Miceli, David Poeppel, Greg Hickok, Steve Small and more) it was decided that the conference should become an annual event, for now tied to the SfN meeting as a satellite, which means it will be in San Diego next year. There was discussion of possibly alternating N. America-Europe (and perhaps Asia & S. America) meeting sites in the future.

So mark your calendars for next year in San Diego. Any ideas for debate topics?

Monday, October 19, 2009

NLC debate Powerpoint Slides -- Hickok

I've had a few requests for the slides from my talk at NLC09. I've posted them here. Comments/questions welcome, of course...

What's fundamental about the motor system's role in speech perception? Two surprises from the NLC debate

Far from being extremists in radically supporting the original formulation of the motor theory of speech perception, our hypothesis is that the motor system provides fundamental information to perceptual processing of speech sounds and that this contribution becomes fundamental to focus attention on others’ speech, particularly under adverse listening conditions or when coping with degraded stimuli. -Fadiga, NLC abstracts, 2009


Two surprises from the NLC debate between myself and Luciano Fadiga.

1. After reading his talk abstract and talking to him before the session, I thought he was going to agree that motor information at best can modulate auditory speech processing. Instead, he strongly defended a "fundamental" role for the motor system in the processing of speech sounds.

2. A majority of his arguments were not based on speech perception but came from data regarding the role of frontal systems in word-level processing ("in Broca's area there are only words"), comprehension of action semantics, syntactic processing ("Broca's region is an 'action syntax' area"), and action sequence processing.

I was expecting a more coherent argument.

The very first question during the discussion period was from someone (don't know who) who defended Luciano saying something to the effect that of course the auditory system is involved but it doesn't mean that the motor system is not fundamental. I again pointed to the large literature indicating that you don't need a motor system to perceive speech and this argues against some fundamental process. This in turn prompted the questioner to raise the dreaded Mork and Mindy argument -- something about how Mork sits by putting his head on the couch and that we understand this to be sitting but know it is not correct... I, of course, was completely defenseless and conceded my case immediately.

But seriously, when confronted with evidence that damage to the motor system doesn't produce the expected perceptual deficits, or that we can understand actions that we cannot produce with our own motor system, it is a common strategy among mirror neuron theorists to retreat to the claim that of course many areas are involved (can you see the hands waving?). You see this all over the place in Rizzolatti's writings, for example. But somehow only the motor system's involvement is "fundamental" or provides "grounding" to these perceptual processes:

“speech comprehension is grounded in motor circuits…”
-D’Ausilio, … Fadiga et al. 2009


So here is a question I would like to pose to Fadiga (or any of his co-authors):
Is speech perception grounded in auditory circuits?

Friday, October 16, 2009

The other side of the mirror

The discussion session after the talks was so friendly ... what's wrong with us?? There is consensus on the data, by and large, but there seemed to be growing discomfort, particularly with the mispredictions of the mirror neuron view for the lesion data. It's clear that the mirror crowd owes a better explanation.

Karthik: Greg won
Al: "I think Greg won"
Bill: David won.
David: it's time to get mechanistic, assuming the mirror neurons exist in humans. Computationally motivated cognitive science models will help.

Many topics, much data -- any consensus?

Fadiga, 10:55am
"in Broca's area there are only words" - huh?? I didn't get that claim.

But two minutes later:
"Broca's region is an 'action syntax' area" -- this seems like a pre-theoretical intuition, at best. Needs to be spelled out.

Unfortunately, no analysis was provided at the end. We saw a series of amusing studies, but no coherent argument. The conclusion was "generating and extracting action meanings" lies at the basis of Broca's area.

Now Greg: first point, he separates action semantics and speech perception. Evidently, he is taking the non-humor route ... He is, however, arguing for the specific claim that mirror neuron arguments, as a special case of motor theories, are problematic at best.

Greg's next move ('The Irvine opening') is to examine the tasks used in the studies. The tasks are very complex and DO NOT capture naturalistic speech perception. For example, response selection in syllable discrimination tasks might be compromised while perception per se remains intact.

His next - and presumably final - move ('The Hawaii gambit') is to show what data *should* look like to make the case. And he ends on a model. More or less ours. (Every word is true.)

At the risk of being overly glib, Luciano won the best-dressed award, and he won the Stephen Colbert special mention for nice science-humor. He had the better jokes. Greg, alas, won the argument. Because of ... I think ... a cognitive science motivated, analytic perspective. To make claims about the CAUSAL role of motor areas in speech, the burden is high to link to speech science, which is not well done in the mirror literature.

Still in Chicago ... the Fadiga-Hickok mirror extravaganza

We're sitting in the Marriott. Luciano and Greg are beginning their debate. The debate-whisperers on my left and right: Al Braun from the NIH, Karthik Durvasula from Delaware, Bill Idsardi from Maryland.
Strong rhetorical point 1: Fadiga and his buddies used to eat lunch and smoke in the lab -- sounds fun. Fadiga is a funny speaker, and a charming participant. But 5 mins in, still no argument... Nice deployment of humor, though.
LF is showing TMS data, fMRI data, and intracranial stim data to marshal arguments for the existence of mirror neurons in humans. He is, I think rightly, focusing on the motor activation during speech perception and production. But no surprises there.

Thursday, October 15, 2009

Yosef Grodzinsky wins!

Well, he won the debate, but not because he is right. Yosef is probably wrong about what Broca's area is doing. The reason he won is that his approach to the problem and his specific proposals are likely to generate much more research than Peter's ideas, which, as one audience member noted, are very close to impossible to test or refute.

Here's a quote from David P. who is sitting next to me: "One banality after another... ugh. I learned NOTHING!"

Live from Chicago! The Battle for Broca's area


As we write, Yosef is making his case for Broca's area supporting syntactic movement. Peter has already made his. I have to say, during the first part of his talk, Yosef was clearly ahead on points. But when he started talking about the details of syntactic movement versus reflexive binding, we could feel the audience tuning out. We'll see if this two-hour session format actually results in scientific progress or just causes headaches.

Wednesday, October 14, 2009

Neurobiology of Language Conference 2009 -- A.K.A., Throwdown in Chicago

The organizers of the first Neurobiology of Language Conference (NLC) have included two "panel discussions" that focus on current debates in the neuroscience of language. I was in Steve Small's lab in Chicago when these sessions were being planned, and I can tell you that "throwdown sessions" was closer to the intent than panel discussions :-). Anyway, each session pits two vocal scientists on opposite sides of a debate in the field against each other; each gets a few minutes to make their case, and then the floor is open for "discussion".

Throwdown #1: The Battle for Broca’s Area (see I told you throwdown is a better word!)
In one corner Yosef Grodzinsky, in the other corner Peter Hagoort

Throwdown #2: Motor Contribution to Speech Perception
In one corner Luciano Fadiga, in the other corner Greg Hickok

The contestants all have a history of public debate on these topics in the form of published commentaries and responses on each other's work. Should be interesting.

Friday, October 9, 2009

Is Broca's area the site of the core computational faculty of human language?

There have been a number of interesting claims made recently by Angela Friederici and colleagues about an association between Broca's area, the pars opercularis in particular, and what they call "the core computational faculty of human language": hierarchical processing. Two previous studies used artificial grammar stimuli/tasks (Bahlmann, Schubotz, & Friederici, 2008; Friederici, Bahlmann, Heim, Schubotz, & Anwander, 2006). Syllable sequences were presented according to one of two rules (presented to different groups of subjects). One rule involved only adjacent (linear) dependencies (e.g., [AB][AB]) and one involved hierarchical dependencies (e.g., [A[AB]B]). Once learned, subjects were presented with "grammatical" and "ungrammatical" strings during fMRI scanning. Violations of either grammar type showed activation in the frontal operculum, whereas only violations of the hierarchical grammar showed activity in the pars opercularis region. This latter finding is held up as evidence for the claim that the pars opercularis supports hierarchical structure building. I'm not sure what to conclude from studies involving artificial grammars, so I'd like to focus on a more recent study that aimed at the same issue using natural language stimuli.
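(As an aside, for readers who haven't seen this kind of design, here is a minimal sketch in Python of how strings under the two rules might be generated. The syllable inventories and the violation procedure are my own illustrative choices, not the authors' actual materials.)

import random

A = ["de", "gi", "le", "ri"]   # hypothetical class-A syllables
B = ["bo", "fu", "ku", "to"]   # hypothetical class-B syllables

def adjacent(n):
    """Linear rule: n adjacent [A B] pairs, e.g. A1 B1 A2 B2."""
    seq = []
    for _ in range(n):
        seq += [random.choice(A), random.choice(B)]
    return seq

def hierarchical(n):
    """Center-embedded rule: n A's followed by n B's (nested A...B dependencies)."""
    return [random.choice(A) for _ in range(n)] + [random.choice(B) for _ in range(n)]

def violate(seq):
    """Make a string 'ungrammatical' by switching the category of one syllable."""
    seq = list(seq)
    i = random.randrange(len(seq))
    seq[i] = random.choice(B if seq[i] in A else A)
    return seq

print("adjacent    :", adjacent(2))       # e.g. ['de', 'bo', 'gi', 'fu']
print("hierarchical:", hierarchical(2))   # e.g. ['le', 'ri', 'to', 'bo']
print("violation   :", violate(hierarchical(2)))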

Makuuchi et al. (2009) used a 2x2 design with STRUCTURE (hierarchical vs. linear) and DISTANCE (long vs. short) as the factors. Hierarchical stimuli involved written German sentences that were center-embedded (e.g., Maria who loved Hans who was good looking kissed Johann), whereas the linear stimuli were non-center embedded (e.g., Achim saw the tall man yesterday late at night). Already I have a problem: the notion that non-center embedded sentences are linear is puzzling (see below). The number of words between the main subject (noun) of the sentence and the main verb served as the distance manipulation. Restricting their analysis only to the left inferior frontal gyrus, Makuuchi et al. report a main effect of STRUCTURE in the pars opercularis, a main effect of DISTANCE in the inferior frontal sulcus, and no significant interactions. This finding was interpreted as evidence for distinct localizations: hierarchical structure building in the pars opercularis on the one hand, and non-syntactic verbal working memory (operationalized by the distance manipulation) in the inferior frontal sulcus on the other.

(Legend for the bar graphs referenced below: LPO = left pars opercularis, LIFS = left inferior frontal sulcus; A = hierarchical & long-distance, B = hierarchical & short-distance, C = linear & long-distance, D = linear & short-distance.)

I find this study conceptually problematic and experimentally confounded. Conceptually, the notion "hierarchical" is defined very idiosyncratically to refer to center-embedding. This contradicts mainstream linguistic analyses of even simple sentences, which are assumed to be quite hierarchical. For example, the English translation of a "linear" sentence in Makuuchi et al. would have, minimally, a structure something like [Achim [saw [the tall man] [yesterday late at night]]], where, for example, the noun phrase the tall man is embedded in a verb phrase which itself is in the main sentence clause. Most theories would also assume further structure within the noun phrase and so on. Thus, in order to maintain the view that the pars opercularis supports hierarchical structure building, one must assume (i) that center-embedded structures involve more hierarchical structure building than non-center embedded structures, and (ii) that hierarchical structure building is the only difference between center-embedded and non-center embedded structures (otherwise the contrast is confounded). Makuuchi et al. make no independent argument for assumption (i), and assumption (ii) is false in that center-embedded sentences are well known to be more difficult to process than non-center embedded sentences (Gibson, 1998); thus there is a difficulty confound. One might argue that center-embedded structures are more difficult because they are hierarchical in a special way. But the confound persists in other ways. If one simply counts the number of subject-verb dependencies (the number of nouns that serve as the subject of a verb), the "hierarchical" sentences have more than the "linear" sentences.

It is perhaps revealing that the activation response amplitude in the pars opercularis qualitatively follows the number of subject-verb dependencies in the different conditions: "hierarchical long" (HL) = 3 dependencies, "hierarchical short" (HS) = 2 dependencies, "linear long" (LL) = 1 dependency, "linear short" (LS) = 1 dependency, which matches the pars opercularis response pattern HL > HS > LL ≈ LS (see the left bar graph in the figure).
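(To make the ordering explicit, here is a toy Python check using only the dependency counts tallied above; no amplitude values are assumed.)

# Subject-verb dependency counts per condition, as tallied in the text above.
dependencies = {"HL": 3, "HS": 2, "LL": 1, "LS": 1}

# Sort conditions by dependency count (descending); Python's stable sort keeps
# the tied conditions LL and LS adjacent.
order = sorted(dependencies, key=dependencies.get, reverse=True)
print(" > ".join(order))  # prints: HL > HS > LL > LS
# i.e., the same qualitative ordering (HL > HS > LL ≈ LS) reported for the
# pars opercularis response amplitudes.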

I also have a problem with the working memory claim, namely that inferior frontal sulcus = non-syntactic working memory. This claim is based on the assumptions (i) that the distance manipulation taps non-syntactic working memory, (ii) that the hierarchical manipulation does not tap non-syntactic working memory, and (iii) that the inferior frontal sulcus showed only a distance effect and no interaction. You might convince me that (i) holds to some extent, but both (ii) and (iii) are dubious. Center-embedded sentences are notoriously difficult to process, and even if such structures invoke some special hierarchical process, isn't it also likely that subjects will use whatever additional resources they have at their disposal, like non-syntactic working memory? Regarding (iii), a quick look at the amplitude graphs for the inferior frontal sulcus activation (right graph in the figure) indicates that most of the "distance" effect is driven by the most difficult sentence type, the long-distance center-embedded condition. Thus, the lack of an interaction probably reflects insufficient power.

In short, I think all of the effects Makuuchi et al. see are driven by general processing load. Increased load likely leads to greater use of less exciting processes such as articulatory rehearsal, which drives up activation levels in posterior sectors of Broca's area. In fact, a recent study that directly examined the relation between articulatory rehearsal and sentence comprehension found exactly this: rehearsing a set of nonsense syllables produced just as much activation in the pars opercularis as comprehending sentences with long-distance dependencies (Rogalsky et al., 2008).

References

Bahlmann, J., Schubotz, R., & Friederici, A. (2008). Hierarchical artificial grammar processing engages Broca's area. NeuroImage, 42(2), 525-534. DOI: 10.1016/j.neuroimage.2008.04.249

Friederici, A., Bahlmann, J., Heim, S., Schubotz, R., & Anwander, A. (2006). The brain differentiates human and non-human grammars: Functional localization and structural connectivity. Proceedings of the National Academy of Sciences, 103(7), 2458-2463. DOI: 10.1073/pnas.0509389103

Gibson, E. (1998). Linguistic complexity: Locality of syntactic dependencies. Cognition, 68(1), 1-76. PMID: 9775516

Makuuchi, M., Bahlmann, J., Anwander, A., & Friederici, A. (2009). Segregating the core computational faculty of human language from working memory. Proceedings of the National Academy of Sciences, 106(20), 8362-8367. DOI: 10.1073/pnas.0810928106

Rogalsky, C., Matchin, W., & Hickok, G. (2008). Broca's area, sentence comprehension, and working memory: An fMRI study. Frontiers in Human Neuroscience, 2. PMID: 18958214

Thursday, October 8, 2009

The motor theory of speech perception makes no sense

The motor theory was born because it was found that the acoustic speech signal is ambiguous. The same sound, e.g., /d/, can be cued by different acoustic features. On the other hand, the speech gesture that produces the sound /d/, it was suggested, is not ambiguous: it always involves placing the tip of the tongue against the roof of the mouth. Therefore, the argument goes, we must perceive speech by accessing the invariant motor gestures that produce it.

There is one thing that never made sense to me, though. If the acoustic signal is ambiguous, how does the motor system know which motor gesture to access? As far as I know, no motor theorist has proposed a solution to this problem. Am I missing something?

Saturday, October 3, 2009

Research Technician in Functional Magnetic Resonance Imaging

The Rochester Center for Brain Imaging (University of Rochester, Rochester, NY) is seeking a research technician to perform data collection, preprocessing, and analysis of functional, structural, and diffusion tensor MRI data, and development of software tools for same.
Applicants must have previous experience with MR data analysis or a strong background in digital image processing (e.g., a BS/MS in Electrical Engineering, Biomedical Engineering, or a related area). Responsibilities include implementing custom software solutions for neuroscience research studies and conducting individual projects aimed at the development of improved computational methods for functional and morphological neuroimaging. The successful candidate will be well versed in scientific computation on Unix or Mac workstations and possess good skills in Matlab and C programming. Additional knowledge of physics, statistics, or psychology, and/or experience in the processing and analysis of MR images (including packages such as AFNI, FSL, FreeSurfer, or SPM) would be an asset.
The research focus of the Center is human brain function; however, the Center also coordinates basic and clinical research on other topics, including pulse sequence programming and MRI coil development. The successful candidate will be based in the Rochester Center for Brain Imaging (http://www.rcbi.rochester.edu), a state-of-the-art facility equipped with a Siemens Trio 3T MR system and high-performance computing resources, with a full-time staff of cognitive neuroscientists, computer scientists, engineers, and physicists. Opportunities exist to collaborate with faculty in the departments of Brain & Cognitive Science, Center for Visual Science, Imaging Sciences/Radiology, Biomedical Engineering, and Computer Science, among others.
Salary commensurate with experience. Start date is flexible, but a minimum two-year commitment is required. If interested, please send a CV and a short statement of interest, as well as the names and addresses of three references, to Dr. D. Bavelier, daphne@bcs.rochester.edu

Thursday, September 24, 2009

WORKSHOP ANNOUNCEMENT: Psycholinguistic Approaches to Speech Recognition in Adverse Conditions

WORKSHOP ANNOUNCEMENT
CALL FOR POSTERS

Psycholinguistic Approaches to Speech Recognition in Adverse Conditions
University of Bristol
8-10 March 2010

The workshop aims to gather academics from various fields in order to discuss the benefits, prospects, and limitations of considering adverse conditions in models of speech recognition. The adverse conditions we will consider include extrinsic signal distortions (e.g., speech in noise, vocoded speech), intrinsic distortions (e.g., accented speech, conversational speech, dysarthric speech, Lombard speech), listener-specific limitations (e.g., non-native listeners, older individuals), and cognitive load (e.g., speech recognition under an attentional or memory load, multi-tasking).


Registration now open:
http://language.psy.bris.ac.uk/workshop/index.html


Speakers

* Jennifer Aydelott, Birkbeck College, University of London, UK
* Ann Bradlow, Northwestern University, USA
* Martin Cooke, University of the Basque Country, Spain
* Anne Cutler, Max Planck Institute for Psycholinguistics, NL
* Matt Davis, MRC CBU Cambridge, UK
* John Field, University of Reading, UK
* Valerie Hazan, UCL, UK
* M. Luisa García Lecumberri, University of the Basque Country, Spain
* Sven Mattys, University of Bristol, UK
* Holger Mitterer, Max Planck Institute for Psycholinguistics, NL
* Dennis Norris, MRC CBU Cambridge, UK
* Kathy Pichora-Fuller, University of Toronto, Canada
* Sophie Scott, UCL, UK
* Laurence White, University of Bristol, UK

Important Dates

* Poster abstract deadline: 30 November 2009
* Notification of acceptance: 11 December 2009
* Preregistration deadline: 31 January 2010
* Conference dates: 8-10 March 2010

Organiser

* Sven Mattys

Local Organising Committee

* Sven Mattys
* Laurence White
* Lukas Wiget

Contact: sven.mattys@bris.ac.uk

Wednesday, September 23, 2009

Final post on mirror neurons

I'm kicking the mirror neuron habit after just one more puff (don't worry, I never inhale). I got to page 100 in Rizzolatti & Sinigaglia's Mirrors in the Brain (2008, Oxford University Press) and had to stop. The logic just became so incoherent and one-sided that I decided it was a waste of time even to consider the arguments seriously.

Here's what they wrote (italics theirs):

...these [mirror] neurons are primarily involved in the understanding of the meaning of 'motor events', i.e., of actions performed by others. (p. 97)

...this is why, when it [the monkey] sees the experimenter shaping his hand into a precision grip and moving it towards the food, it immediately perceives the meaning of the 'motor events' and interprets them in terms of an intentional act. (p. 98)

This is fairly standard mirror neuron speak. It was the next section that made me decide to stop reading.

There is, however an obvious objection to this: as discussed above, neurons which respond selectively to the observation of the body movements of others, and in certain cases to hand-object interactions, have been found in the anterior region of the superior temporal sulcus (STS). We have mentioned that the STS areas are connected with the visual, occipital, and temporal cortical areas, so forming a circuit which is in many ways parallel to that of the ventral stream. What point would there be, therefore, in proposing a mirror neuron system that would code in the observer's brain the actions of others in terms of his own motor act? Would it not be much easier to assume that understanding the actions of others rests on purely visual mechanisms of analysis and synthesis of the various elements that constitute the observed action, without any kind of motor involvement on the part of the observer? (p. 98-99)


A very good question. They go on to note,

Perrett and colleagues demonstrated that the visual codification of actions reaches levels of surprising complexity in the anterior region of the STS. Just as an example, there are neurons which are able to combine information relative to the observation of the direction of the gaze with that of the movements an individual is performing. Such neurons become active only when the monkey sees the experimenter pick up an object on which his gaze is directed. If the experimenter shifts the direction of his gaze, the observation of his action does not trigger any neuron activity worthy of notice. (p. 99)


So why is the STS with its much more selective response properties to action perception not a candidate neural basis for action understanding? The answer is...

However, we must ask whether this selectivity -- or, in more general terms, the capacity to connect different visual aspects of the observed action -- is sufficient to justify using the term 'understanding'. The motor activation characteristic of F5 and PF-PFG adds an element that hardly could be derived from the purely visual properties of STS -- and without which the association of visual features of the action would at best remain casual, without any unitary meaning for the observer. (p. 99, end of paragraph)


Not only is this pure speculation, but this question is NEVER asked of mirror neurons:

However, we must ask whether this selectivity -- or, in more general terms, the capacity to connect motor aspects of the observed action -- is sufficient to justify using the term 'understanding'. The sensory activation characteristic of STS adds an element that hardly could be derived from the far less specified properties of F5 -- and without which the association of sensory-motor features of the action would at best remain casual, without any unitary meaning for the observer.


A typical response to this kind of critique is that, "it's the activity of the WHOLE circuit that is important, not just mirror neurons in F5". But this is vacuous hand-waving. If this is really the claim, then why is the visual percept "casual" and without "unitary meaning" and the motor component the one that adds meaning? Why isn't it the reverse? Why isn't the reverse ever considered?

The other glaring logical party-foul with R&S's claim is that if they are correct, monkeys should only be able to understand actions that mirror neurons code: grasping, tearing, holding, etc. All the others would be casual and without unitary meaning. Does it make sense, from an evolutionary standpoint, to have a system that can only understand visual actions or events that also have a motor representation? Or would it be useful for the animal to understand that a hawk circling above is a bad thing? And if you want to claim that the animal doesn't really 'understand' what a circling hawk 'means', that it only reacts to it reflexively, then you are obliged to prove to me that the monkey does 'understand' grasping actions and is not just reacting reflexively.

Here's my guess as to what mirror neurons are doing.

1. Action understanding is primarily coded in the much more sophisticated STS neurons.

2. The F5-parietal lobe circuit performs sensory-motor transformations for the purpose of guiding action.

3. Populations of F5 neurons code specific complex actions such as grasping with the hand using a particular grip, or perhaps these populations are part of the transformation (started in parietal regions) between a sensory event and a specific action.

4. F5-driven actions (or sensory-motor transformations) can be activated by objects (canonical neurons), or by the observation of actions (mirror neurons).

5. [prediction:] Mirror neurons are only one class of action-responsive cells in F5. Others code non-mirror observation-execution pairings, such as when a conspecific presents its back and a grooming action is elicited.

6. [prediction:] F5 neuron populations are plastic. If the animal is trained to reach for a raisin upon seeing a human waving gesture, a dog's tail wag, or a picture of the Empire State Building, F5 cell populations will code this association such that F5 cells may end up responding to tail wagging. (For example, see Catmur et al., 2007, although admittedly this is a human study and may not apply to macaques.)

7. The reason why mirror neurons mirror is because there is an association between seeing a reaching/grasping gesture and executing the same gesture. This could arise either because of natural competitive behavior (seeing another monkey reach may cue the presence of something tasty and generate a competitive reach) or because of the specific experimental training situation.

As far as I know, there is no way empirically to differentiate these ideas from the action understanding theory. However, the present suggestion can explain why STS neurons code actions so much more specifically than mirror neurons (because STS is critically involved in action understanding) and it does not limit 'understanding' to motor behaviors, which seems desirable. I look forward to seeing a flood of studies in Nature and Science testing alternative theories of mirror neuron function. (Yeah, right.)

So what in the world will I talk about if not mirror neurons? Well, the motor theory of speech perception is still on the table. Unlike mirror neurons, that is squarely in my research program. It is also an interesting topic because it provides an excellent test case for mirror neuron theory as it is applied to humans, just like speech was the critical test case for phrenology. (Yes, I am comparing mirror neurons to phrenology -- both very interesting ideas that were unsubstantiated when first proposed and that captured the scientific and public imagination.)


Catmur, C., Walsh, V., & Heyes, C. (2007). Sensorimotor learning configures the human mirror system. Current Biology, 17(17), 1527-1531. PMID: 17716898

Tuesday, September 22, 2009

Mirrors in the Brain -- Comments on Rizzolatti & Sinigaglia, 2008

Apparently I'm obsessed with mirror neurons because I can't seem to stop reading what people say about them. Now I'm reading Rizzolatti & Sinigaglia's 2008 book, Mirrors in the Brain, translated from the original Italian by Frances Anderson and published by Oxford.

I'm only about halfway through so far but already I find the book both useful in terms of its summary of the functional anatomy of the macaque motor system and frustratingly sloppy in terms of its theoretical logic.

Let me provide one example of the latter. At the outset of the book the authors describe the functional properties of motor neurons (not necessarily mirror neurons) in macaque area F5. They argue that F5 motor cells
code motor acts (i.e., goal-directed movement) and not individual movements (p. 23).

As evidence they note that
...many F5 neurons discharge when the monkey performs a motor act, for example when it grasps a piece of food, irrespective of whether it uses its right or left hand or even its mouth ... [and] a particular movement that activates a neuron during a specific motor act does not do so during other seemingly related acts; for example, bending the index finger triggers a neuron when grasping, but not when scratching. (p. 23)

They conclude,
Therefore the activity of these neurons cannot be adequately described in terms of pure movement, but taking the efficacy of the motor act as the fundamental criterion of classification they can be subdivided into specific categories, of which the most common are 'grasping-with-the-hand-and-the-mouth', 'grasping-with-the-hand', 'holding', 'tearing', 'manipulating', and so on. (p. 23)

So the claim is that F5 cells are coding something higher-level that is defined by the goal, the "efficacy", of movement.

Clearly, F5 cells are coding something that is at least one step removed from specific movements (e.g., finger flexion), but the leap from this observation to the idea that they are coding categories or goals such as 'tearing' is suspect. Perhaps these complex movements are being coded -- separately for the mouth and hand, for tearing in one manner versus another, etc. -- by the population of cells in F5 rather than by individual cells. In other words, the fact that a single cell responds to grasping with the hand and grasping with the mouth doesn't necessarily mean that it is coding an abstract concept of grasping.

But we don't need to argue with Rizzolatti and Sinigaglia on this theoretical point because they argue against their own view rather convincingly (although unwittingly) on empirical grounds. Specifically, in contrast to the claim that F5 cells code goal-directed actions, they give examples of how these cells code specific, albeit complex, movements.

Most F5 neurons ... also code the shape the hand has to adopt to execute the act in question... (p. 25)


This strikes me as a rather specific individual movement that, for example, would not apply to the same "act" executed by the mouth. More pointedly, though, in their discussion of mirror neurons in F5, Rizzolatti and Sinigaglia make a big deal of cells that show a strict relation between the observed and executed act. They provide a striking example:

...the monkey observes the experimenter twisting a raisin in his hands, anti-clockwise and clockwise, as if to break it in two: the neuron discharges for one direction only. (p. 82)


So here is a case where two movements have the same goal (breaking the raisin in two), but the F5 cell only fires in response to one of the movements. Apparently this cell is coding movements not goals.

Has anyone else read Rizzolatti and Sinigaglia's book? Any thoughts?