Tuesday, October 27, 2009

Spatial Organization of Multisensory Responses in Temporal Association Cortex

An important unit physiology paper by Dahl, Logothetis, & Kayser appeared in J. Neuroscience a couple of weeks ago. These authors explored the spatial organization of cells in multisensory areas of the superior temporal sulcus in macaque, in particular the distribution of visual- versus auditory-preferring cells. What they found is that like-preferring cells cluster together in patches: auditory cells tend to cluster with other auditory cells, visual cells tend to cluster with other visual cells.


This is only mildly interesting in its own right, because it just shows that functional clustering, long known to be a feature of unimodal sensory cortex, also holds in multisensory cortex. What makes it important is its implications for fMRI. If "cells of a feather" cluster together, and if these clusters are not uniformly distributed across voxels in an ROI, then different voxels will be differentially sensitive to one cell type versus another. And this is exactly the kind of underlying organization that multivariate pattern analysis (MVPA) can detect. So this new finding justifies the use of fMRI analysis approaches such as MVPA.
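To make the logic concrete, here is a minimal toy simulation (mine, not the authors' analysis; the voxel and trial counts, noise levels, and use of numpy/scikit-learn are all assumptions for illustration). If voxels sample auditory- and visual-preferring patches unevenly, each voxel inherits a small modality bias, and a linear classifier trained on the multivoxel pattern can decode stimulus modality even though no single voxel is strongly selective:

```python
# Toy illustration (not the authors' analysis): patchy clustering gives each
# voxel a weak modality bias that a linear classifier can exploit across voxels.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_voxels, n_trials = 50, 200            # hypothetical ROI size and trial count
bias = rng.normal(0, 1, n_voxels)       # per-voxel modality preference from patchiness

labels = rng.integers(0, 2, n_trials)            # 0 = auditory trial, 1 = visual trial
signal = np.outer(2 * labels - 1, bias) * 0.3    # response flips sign with modality
data = signal + rng.normal(0, 1, (n_trials, n_voxels))  # add measurement noise

acc = cross_val_score(LinearSVC(), data, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")   # reliably above chance (0.5)
```

The point is simply that sub-voxel clustering of like-preferring cells is exactly the kind of structure that gives MVPA something to work with.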

Dahl CD, Logothetis NK, & Kayser C (2009). Spatial organization of multisensory responses in temporal association cortex. The Journal of Neuroscience, 29 (38), 11924-11932. PMID: 19776278

Monday, October 26, 2009

Exciting new job in David Gow's group at MGH!

FULL-TIME RESEARCH ASSISTANT / Massachusetts General Hospital

RA position available in the Cognitive/Behavioral Research Group, Department of Neurology, at the Massachusetts General Hospital. Research focuses on the perception of spoken language and related speech processing in both unimpaired adults and recovering stroke patients. All of our work involves integrated multimodal imaging with MRI, EEG and MEG using the state-of-the-art resources at the Athinoula A. Martinos Imaging Center (http://www.nmr.mgh.harvard.edu/martinos/aboutUs/facilities.php). Our lab is a leader in the application of Granger causality analysis to high spatiotemporal resolution brain activation data. The RA will work closely with the PI on a regular basis, but will also be required to work independently. Responsibilities will include subject recruitment, stimulus development, experiment implementation and execution, subject testing including MRI and MEG/EEG scanning, data analysis, database management, and minor administrative work. The position will provide some patient interaction with recent stroke victims. It will also involve training in multimodal imaging techniques, spectral and time-series statistical analyses, speech recording, synthesis, analysis, and digital editing techniques. This job is ideal for someone planning to apply to graduate school in cognitive neuroscience, imaging biotechnology, psychology, medicine, or the speech and hearing sciences.

Minimum Requirements:
Bachelor's degree in psychology, cognitive neuroscience, computer science or a related field is required.

The ideal candidate will be proficient in Mac- and Windows-based applications (experience with Linux is a plus). S/he will have programming experience and will be comfortable learning to use and modify new research-related applications (e.g., MATLAB code). Prior experience with neuroimaging techniques such as MRI, EEG or MEG is preferred, but not required. Good written and oral communication skills are important assets. A sense of humor and adventure is a must. A minimum two-year commitment is required. Funding is currently secure through 2014.

The candidate must be hardworking, mature, and responsible with excellent organizational and interpersonal skills. S/he must be able to work independently in a fast-paced environment, juggle and prioritize multiple tasks, and seek assistance when appropriate.

Position is available immediately. Secure funding is available through 2014. If interested, please send a CV and a short statement of your interest, as well as the names and addresses of three references, to Dr. David Gow at gow@helix.mgh.harvard.edu.

--
David W. Gow Jr., Ph.D.
Cognitive/Behavioral Neurology Group
Massachusetts General Hospital
175 Cambridge Street, CPZ-S340
Suite 340
Boston, MA 02114

ph: 617-726-6143
fax: 617-724-7836

New blurbs from a new contributor

A reader reminded me recently that not enough people comment on Talking Brains. I encouraged her to contribute. Since we were just at the Neurobiology of Language conference as well as the Society for Neuroscience, I suggested she write up a few blurbs on posters/presentations that made an impression on her. Thank you, Laura Menenti (from the Donders Center), for sending this. I hope it stimulates other readers to contribute more comments on their impressions of these two meetings (or anything else, as usual).

David

(By the way, I saw these three presentations as well. All three were very provocative and interesting - nice selection, Laura.)

An idiosyncratic sample of NLC/SfN studies

Here is an idiosyncratic sample of studies I noticed at the Neurobiology of Language Conference (Oct 15th-16th) and Neuroscience 2009 (Oct 17th-21st) - idiosyncratic because the population from which to draw was huge, because the sample size needs to be small, and because the sample is biased by my own interest - naturalistic language use.

Neuroscience 2009: Characteristics of language and reading in a child with a missing arcuate fasciculus on diffusion tensor imaging. J. Yeatman, L. H. F. Barde, H.M. Feldman

Considering the importance of the arcuate fasciculus in connecting classic language areas, the question of what language is like when you don't have one is an exciting one. The authors tested a 12-year-old girl who lacks an arcuate fasciculus (due to premature birth) on a standardized neuropsychological test battery, and scanned her using diffusion tensor imaging (DTI). The DTI showed that the patient indeed completely lacked the arcuate fasciculus bilaterally. Surprisingly, her performance on the language tests fell within the normal range. The authors conclude that normal language performance without an arcuate fasciculus is possible, and that the brain therefore shows remarkable plasticity in dealing with the lack of such an essential pathway.

There is a catch, however: in a footnote the authors mention that the subject has very 'inefficient communication' and poor academic performance. As it turns out, the girl may be able to achieve a normal score on the tests, but not in a normal way: for example, answering the question 'What is a bird?' from the Verbal Intelligence Scale takes her three minutes, according to the experimenter. It is also essentially impossible to have a conversation with her.

To me, these results show two things:

- Normal language performance is not possible without an arcuate fasciculus, assuming that being able to hold a conversation is part of normal language performance.

- The neuropsychological tests used do not properly reflect language performance if they fail to capture such gross variations in how a patient arrives at the correct answer.

It would be extremely interesting in the light of recent discussions (Hickok and Poeppel, 2004; Saur et al., 2008) to test whether this patient's impairments are restricted to specific aspects of language processing.

Neuroscience 2009: Do we click? Brain alignment as the neuronal basis of interpersonal communication. L. J. Silbert, G. Stephens, U. Hasson

In an attempt to look at normal language use, these authors target the question of how participants in a conversation achieve mutual understanding. Possibly they do so through shared neural patterns, and this study is a first step in testing that hypothesis. The authors had a speaker tell a story in the scanner and then had eleven other subjects listen to that same story. They measured correlations between the BOLD time series of the speaker and the listeners. Intersubject correlations between the speaker and the average listener were highest in left inferior frontal gyrus, anterior temporal lobe and precuneus/PCC. The correlations were highest when the speaker's time series was shifted to precede the listeners' by 1-3 seconds, implying that the correlations are not simply due to the fact that the speaker also hears herself speak.
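For readers unfamiliar with the method, here is a rough sketch of the lagged speaker-listener correlation idea, using made-up time courses (this illustrates the logic, not the authors' pipeline):

```python
# Sketch: correlate a speaker's BOLD time course with a listener's at several
# lags; a peak at a positive lag means the speaker's signal precedes the listener's.
import numpy as np

def lagged_correlation(speaker, listener, lags):
    """Pearson r between speaker and listener, with the speaker shifted earlier by `lag` samples."""
    out = {}
    for lag in lags:
        s = speaker[:len(speaker) - lag] if lag > 0 else speaker
        lis = listener[lag:] if lag > 0 else listener
        out[lag] = np.corrcoef(s, lis)[0, 1]
    return out

# toy data: the listener "echoes" the speaker two TRs later, plus noise
rng = np.random.default_rng(1)
speaker = rng.normal(size=300)
listener = np.roll(speaker, 2) + rng.normal(scale=0.5, size=300)
print(lagged_correlation(speaker, listener, lags=[0, 1, 2, 3]))  # r peaks at lag 2
```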

To corroborate the idea that these correlations underlie communication, the authors did two further tests. First, they also recorded a Russian speaker telling a story, which they then presented to non-Russian-speaking listeners. They found fewer areas showing an inter-subject correlation (there were some, for instance in the STS). Second, they correlated the listeners' level of understanding of the story with the strength of the inter-subject correlation, and found a relation between understanding and inter-subject correlation in the basal ganglia, left temporo-parietal junction and anterior cingulate cortex. The interpretation of this finding is that the more the listeners correlate with the speaker, the more they understand.

I find the goal of studying naturalistic communication laudable, and the results are intriguing. More detailed studies of communication are necessary, however: one could interpret these results as showing simply that language areas are involved in speaking and listening. That, in itself, is not a shocking finding. The approach, nevertheless, holds promise for more research into naturalistic communication.

NLC: The neurobiology of communication in natural settings. J. I. Skipper and J. D. Zevin

This study attempts to avoid the concern raised above by specifying which correlations are due to what. The authors showed subjects a movie of a TV quiz show and used Independent Component Analysis (ICA) to identify independent brain networks underlying movie processing. Having identified the networks, they then correlated them with different aspects of the movie, identified through extensive annotation of that movie. For example, they find a component that involves bilateral auditory cortices. To find out what it does, they correlate it with such diverse stimulus properties as the presence/absence of speech, gesture, speech without oral movement, topic shifts, movement without speech, and so on. (This is done through a peak and valley analysis, in which they determine the likelihood of a specific property occurring when the signal in the component is rising or falling.) For this component, the conclusion is that it is involved when speech is present. That, of course, is not a terribly shocking finding either. But has anyone ever investigated networks sensitive to speech without visible mouth movement, speech with mouth movement but without gesture, speech with mouth movement and gesture, or movement during speech that is not gesture? Only by putting all these stimulus properties in one experiment can one look both at sensitivity to these aspects of communication separately and at the overlap between them. Importantly, the co-occurrence of all these things is what makes naturalistic communication naturalistic communication. I think this study is a great advertisement for studying language in its natural habitat.
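My reading of the peak-and-valley logic, sketched as toy code (the function and data are made up; this is not Skipper and Zevin's implementation): for each component, ask how often an annotated property occurs while the component's time course is rising versus falling.

```python
# Rough sketch of a peak-and-valley style analysis (assumed details):
# does an annotated property (e.g., speech present) tend to occur while an
# ICA component's time course is rising rather than falling?
import numpy as np

def rising_vs_falling_rate(component, events):
    """component: 1D time course; events: boolean array, True when the property occurs."""
    slope = np.diff(component)             # change between successive time points
    events = np.asarray(events[1:], bool)  # align annotations with the slope samples
    p_rising = events[slope > 0].mean()    # property rate while the signal is rising
    p_falling = events[slope < 0].mean()   # property rate while the signal is falling
    return p_rising, p_falling

# toy example: events cluster on the rising phase of a slow oscillating component
t = np.linspace(0, 20 * np.pi, 2000)
component = np.sin(t)
events = np.cos(t) > 0.5                   # cos > 0.5 falls within sin's rising phase
print(rising_vs_falling_rate(component, events))  # first rate clearly exceeds the second
```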

P.S. On a more general and totally unrelated note, the Presidential Lecture at Neuroscience 2009 by Richard Morris was an absolutely impressive example of how to conduct, and to present, science.

References

Saur D, Kreher BW, Schnell S, Kümmerer D, Kellmeyer P, Vry M-S, Umarova R, Musso M, Glauche V, Abel S, Huber W, Rijntjes M, Hennig J, Weiller C (2008) Ventral and dorsal pathways for language. Proceedings of the National Academy of Sciences, 105: 18035-18040.

Hickok G, Poeppel D (2004) Dorsal and ventral streams: a framework for understanding aspects of the functional anatomy of language. Cognition 92: 67-99.

Laura Menenti

Thursday, October 22, 2009

How do we perceive speech after 150 kisses?

I wouldn't know, but Marc Sato does. Marc tells me this was a tentative title for a poster he presented at SfN (or was it at NLC?). In any case the official title was Use-induced motor plasticity affects speech perception.

Rather than using that clumsy TMS technique, Marc and his colleagues decided to target speech-related motor systems using the CNS equivalent of a smart bomb: simply ask participants to make 150 lip or tongue movements over a 10-minute span. The idea is that this will fatigue the system and produce a kind of motor aftereffect in lip or tongue areas. Perception of lip- or tongue-related speech sounds (/pa/ and /ta/) can then be assessed behaviorally. They used syllable discrimination (same-different) both with and without noise.

They calculated d' and beta scores. (WOOHOO!) d' of course is a measure of discrimination performance, corrected for response bias. Beta is a measure of the bias -- specifically, the threshold that a subject is using for deciding that the stimuli are, in this case, same or different.
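For those who want the formulas, these are the standard signal detection definitions computed from hit and false-alarm rates (I'm not claiming this is exactly how Sato et al. computed theirs):

```python
# Standard signal detection measures: d' indexes discriminability, beta indexes bias.
from scipy.stats import norm

def dprime_beta(hit_rate, false_alarm_rate):
    # rates of exactly 0 or 1 need a correction in practice (e.g., a 1/(2N) adjustment)
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(false_alarm_rate)
    d_prime = z_h - z_f                    # sensitivity, independent of criterion
    beta = norm.pdf(z_h) / norm.pdf(z_f)   # likelihood-ratio criterion (1.0 = unbiased)
    return d_prime, beta

# e.g., 80% hits and 20% false alarms on "different" trials
print(dprime_beta(0.80, 0.20))   # d' ≈ 1.68, beta = 1.0 (no bias)
```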

So what did they find? Fatiguing lip or tongue motor systems had no effect on discrimination (d', top graph) but did have significant effector-specific effects on bias (beta scores, bottom graph).


Sato et al. conclude:

Compared to the control task, both the tongue and lip motor training biased participants’ response, but in opposite directions. These results are consistent with those observed by Meister et al. (2007, Current Biology) and d’Ausilio et al. (2009, Current Biology), obtained by temporarily disrupting the activity of the motor system through transcranial magnetic stimulation.


So what this shows is that motor stimulation/fatigue has no effect on acoustic perception, but does affect a higher-level decision/categorization process. Now, Marc has indicated that he views this decision/categorization process as "part of speech perception," which is perfectly legit. It certainly is part of speech perception as defined by this task. My own interests in speech perception, however, don't include all the processes involved in performing this task, so I take this as evidence that motor stimulation doesn't affect "speech perception".

Use-induced motor plasticity affects speech perception

Marc Sato1, CA, Krystyna Grabski1, Amélie Brisebois2, Arthur M. Glenberg3, Anahita Basirat1, Lucie Ménard2, Luigi Cattaneo4

1 GIPSA-LAB, CNRS & Grenoble Universités, France - 2 Département de Linguistique, Université du Québec à Montréal, Canada
3 Department of Psychology, Arizona State University, USA - 4 Centro Interdipartimentale Mente e Cervello, Università di Trento, Italy

Faculty Positions: PHONETICS/PHONOLOGY, BROWN UNIVERSITY

PHONETICS/PHONOLOGY, BROWN UNIVERSITY: The Department of Cognitive and Linguistic Sciences and the Department of Psychology announce that we will seek to fill four positions in language and linguistics over the next three years. Here we invite applications for an open-rank position in Phonetics/Phonology beginning July 1, 2010. Research focus is open, but we especially value programs of research that cross traditional boundaries of topics and methodology, including theoretical approaches. Interests in cross-linguistic and/or developmental research are highly desirable. The individual filling this position must be able to teach an introductory phonology course as well as a course in experimental phonetics. Additional positions that we will be hiring include a current search in (a) syntactic/semantic/pragmatic language processing, and two others tentatively in the areas of (b) lexical representation and processing, morphology, and/or word formation; and (c) computational modeling, cognitive neuroscience, and/or biology of language. Successful candidates are expected to have (1) a track record of excellence in research, (2) a well-specified research plan, and (3) a readiness to contribute to undergraduate and graduate teaching and mentoring. Brown has a highly interdisciplinary research environment in the study of mind, brain, behavior, and language and is establishing an integrated Department of Cognitive, Linguistic, and Psychological Sciences, effective July 2010. Plans to house the department in a newly renovated state-of-the-art building in the heart of campus are well under way. Curriculum vitae, reprints and preprints of publications, statements of research and teaching interests (one page each), and three letters of reference (for junior applicants) or names of five referees (for senior applicants) should be submitted on-line as PDFs to PhoneticsPhonologySearch@brown.edu, or else by mail to Phonetics/Phonology Search Committee, Department of Cognitive & Linguistic Sciences, Box 1978, Brown University, Providence, RI 02912 USA. Applications received by January 5, 2010 are assured of full review. All Ph.D. requirements must be completed before July 1, 2010. Women and minorities are especially encouraged to apply. Brown University is an Equal Opportunity/Affirmative Action Employer.

Wednesday, October 21, 2009

THE NEW PHONEBOOK IS HERE!


Cognitive neuroscience's equivalent, that is... Gazzaniga's The Cognitive Neurosciences IV arrived in my mailbox a couple of weeks ago. I own Vol. I & II, never bothered to get III and now have IV because I've got a chapter in it for the first time. I guess that means I'm officially a cognitive neuroscientist.

My chapter notwithstanding, the volume is pretty impressive and contains a lot of useful papers. It is again divided into the usual sections, Development and Evolution, Plasticity, Attention, Sensation and Perception (with a whopping three auditory chapters, including one by new UCI Cog Sci faculty member, Virginia Richards), Motor Systems, Memory, Language (the section I'm least likely to read), Emotional and Social Brain, Higher Cognitive Functions (i.e., everything else, including two chapters on neuroeconomics), and Consciousness (i.e., stuff we REALLY don't understand). An 11th section is titled "Perspectives" and features, well, perspectives by senior investigators. I'm looking forward to reading Sheila Blumstein's chapter in this section.

The only chapter I've already read is Tim Griffiths and Co.'s piece on auditory objects. There is some useful information there, including what appears to be a new study on auditory sequence processing using fractal pitch stimuli (sounds cool anyway). Way too much discussion of dynamic causal modeling, though, including two large figures and three tables -- TMI Tim :-)

There are a number of papers that I will eventually read. Some at the top of my list include motor-related papers by Reza Shadmehr and John Krakauer (Computational Neuroanatomy of Voluntary Motor Control) and by Grant Mulliken and Richard Andersen (Forward Models and State Estimation in Posterior Parietal Cortex). There's sure to be some information there that is relevant to speech.

The volume is definitely worth a look.

Tuesday, October 20, 2009

Neurobiology of Language Conference (NLC) 2009

I think the conference was a huge success. The meeting had over 300 registrants -- so many that an overflow room with an AV feed was required during the sessions. The meeting attracted a diverse group of scientists, ranging from those with linguistically oriented approaches to traditional neuropsychology, functional imaging, animal neurophysiology, and genetics. The speakers were a diverse group and included both senior scientists and post docs. I have to say this now appears to be THE conference for us neuroscience-of-language folks. Congratulations and thank yous to Steve Small and his group (particularly Pascale Tremblay) for organizing the meeting!

At a business meeting of interested scientists (a nice range of personalities: Tom Bever, Luciano Fadiga, Yosef Grodzinsky, Richard Wise, Alec Marantz, Gabriele Miceli, David Poeppel, Greg Hickok, Steve Small and more) it was decided that the conference should become an annual event, for now tied to the SfN meeting as a satellite, which means it will be in San Diego next year. There was discussion of possibly alternating N. America-Europe (and perhaps Asia & S. America) meeting sites in the future.

So mark your calendars for next year in San Diego. Any ideas for debate topics?

Monday, October 19, 2009

NLC debate Powerpoint Slides -- Hickok

I've had a few requests for the slides from my talk at NLC09. I've posted them here. Comments/questions welcome, of course...

What's fundamental about the motor system's role in speech perception? Two surprises from the NLC debate

Far from being extremists in radically supporting the original formulation of the motor theory of speech perception, our hypothesis is that the motor system provides fundamental information to perceptual processing of speech sounds and that this contribution becomes fundamental to focus attention on others’ speech, particularly under adverse listening conditions or when coping with degraded stimuli. -Fadiga, NLC abstracts, 2009


Two surprises from the NLC debate between myself and Luciano Fadiga.

1. After reading his talk abstract and talking to him before the session, I thought he was going to agree that motor information at best can modulate auditory speech processing. Instead, he strongly defended a "fundamental" role for the motor system in the processing of speech sounds.

2. A majority of his arguments were not based on speech perception but came from data regarding the role of frontal systems in word-level processing ("in Broca's area there are only words"), comprehension of action semantics, syntactic processing ("Broca's region is an 'action syntax' area"), and action sequence processing.

I was expecting a more coherent argument.

The very first question during the discussion period was from someone (I don't know who) who defended Luciano, saying something to the effect that of course the auditory system is involved, but that doesn't mean the motor system is not fundamental. I again pointed to the large literature indicating that you don't need a motor system to perceive speech, which argues against any fundamental motor process. This in turn prompted the questioner to raise the dreaded Mork and Mindy argument -- something about how Mork sits by putting his head on the couch, and that we understand this to be sitting even though we know it is not the correct way to do it... I, of course, was completely defenseless and conceded my case immediately.

But seriously, when confronted with evidence that damage to the motor system doesn't produce the expected perceptual deficits, or that we can understand actions that we cannot produce with our own motor system, it is a common strategy among mirror neuron theorists to retreat to the claim that of course many areas are involved (can you see the hands waving?). You see this all over the place in Rizzolatti's writings, for example. But somehow only the motor system's involvement is "fundamental" or provides "grounding" for these perceptual processes:

“speech comprehension is grounded in motor circuits…”
-D’Ausilio, … Fadiga et al. 2009


So here is a question I would like to pose to Fadiga (or any of his co-authors):
Is speech perception grounded in auditory circuits?

Friday, October 16, 2009

The other side of the mirror

The discussion session after the talks was so friendly ... what's wrong with us?? There is consensus on the data, by and large, but there seemed to be growing discomfort, particularly with the mispredictions the mirror neuron view makes for the lesion data. It's clear that the mirror crowd owes us a better explanation.

Karthik: Greg won
Al: "I think Greg won"
Bill: David won.
David: it's time to get mechanistic, assuming the mirror neurons exist in humans. Computationally motivated cognitive science models will help.

Many topics, much data -- any consensus?

Fadiga, 10:55am
"in Broca's area there are only words" - huh?? I didn't get that claim.

But two minutes later:
"Broca's region is an 'action syntax' area" -- this seems like a pre-theoretical intuition, at best. Needs to be spelled out.

Unfortunately, no analysis was provided at the end. We saw a series of amusing studies, but no coherent argument. The conclusion was "generating and extracting action meanings" lies at the basis of Broca's area.

Now Greg: first point, he separates action semantics and speech perception. Evidently, he is taking the non-humor route ... He is, however, arguing that the specific claims of mirror neuron theorists, as a special case of motor theories, are problematic at best.

Greg's next move ('The Irvine opening') is to examine the tasks used in the studies. The tasks are very complex and DO NOT capture naturalistic speech perception. For example, response selection in syllable discrimination tasks might be compromised while perception per se remains intact.

His next - and presumably final - move ('The Hawaii gambit') is to show what data *should* look like to make the case. And he ends on a model. More or less ours. (Every word is true.)

At the risk of being overly glib, Luciano won the best-dressed award, and he won the Stephen Colbert special mention for nice science-humor. He had the better jokes. Greg, alas, won the argument. Because of ... I think ... a cognitive science motivated, analytic perspective. To make claims about the CAUSAL role of motor areas in speech, the burden is high to link to speech science, which is not well done in the mirror literature.

Still in Chicago ... the Fadiga-Hickok mirror extravaganza

We're sitting in the Marriott. Luciano and Greg are beginning their debate. The debate-whisperers on my left and right: Al Braun from the NIH, Karthik Durvasula from Delaware, Bill Idsardi from Maryland.
Strong rhetorical point 1: Fadiga and his buddies used to eat lunch and smoke in the lab -- sounds fun. Fadiga is a funny speaker, and a charming participant. But 5 mins in, still no argument... Nice deployment of humor, though.
LF is showing TMS data, fMRI data, and intracranial stim data to marshal arguments for the existence of mirror neurons in humans. He is, I think rightly, focusing on motor activation during speech perception and production. But no surprises there.

Thursday, October 15, 2009

Yosef Grodzinsky wins!

Well, he won the debate, but not because he is right. Yosef is probably wrong about what Broca's area is doing. The reason he won is because his approach to the problem and his specific proposals are likely to generate much more research than Peter's ideas which, as one audience member noted, are very close to impossible to test or refute.

Here's a quote from David P. who is sitting next to me: "One banality after another... ugh. I learned NOTHING!"

Live from Chicago! The Battle for Broca's area


As we write, Yosef is making his case for Broca's area supporting syntactic movement. Peter has already made his. I have to say, during the first part of his talk, Yosef was clearly ahead in points. But when he started getting into the details of syntactic movement versus reflexive binding, we could feel the audience tuning out. We'll see if this two-hour session format actually results in scientific progress or just causes headaches.

Wednesday, October 14, 2009

Neurobiology of Language Conference 2009 -- A.K.A., Throwdown in Chicago

The organizers of the first Neurobiology of Language Conference (NLC) have included two "panel discussions" that focus on current debates in the neuroscience of language. I was in Steve Small's lab in Chicago when these sessions were being planned and I can tell you that "throwdown sessions" was closer to the intent than panel discussions :-). Anyway, the sessions pit two vocal scientists on either side of a debate in the field; each gets a few minutes to make their case and then the floor is open for "discussion".

Throwdown #1: The Battle for Broca’s Area (see I told you throwdown is a better word!)
In one corner Yosef Grodzinsky, in the other corner Peter Hagoort

Throwdown #2: Motor Contribution to Speech Perception
In one corner Luciano Fadiga, in the other corner Greg Hickok

The contestants all have a history of public debate on these topics in the form of published commentaries and responses on each other's work. Should be interesting.

Friday, October 9, 2009

Is Broca's area the site of the core computational faculty of human language?

There have been a lot of interesting claims made recently by Angela Friederici and colleagues about an association between Broca’s area, the pars opercularis in particular, and what they call "the core computational faculty of human language": hierarchical processing. Two previous studies used artificial grammar stimuli/tasks (Bahlmann, Schubotz, & Friederici, 2008; Friederici, Bahlmann, Heim, Schubotz, & Anwander, 2006). Syllable sequences were presented according to one of two rules (presented to different groups of subjects). One rule involved only adjacent (linear) dependencies (e.g., [AB][AB]) and one involved hierarchical dependencies (e.g., [A[AB]B]). Once learned, subjects were presented with “grammatical” and “ungrammatical” strings during fMRI scanning. Violations of either grammar type produced activation in the frontal operculum, whereas only violations of the hierarchical grammar produced activity in the pars opercularis region. This latter finding is held up as evidence for the claim that the pars opercularis supports hierarchical structure building. I’m not sure what to conclude from studies involving artificial grammars, so I’d like to focus on a more recent study that aimed at the same issue using natural language stimuli.
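To make the two rule types concrete, here is a schematic string generator (illustrative only; these are not the actual syllable stimuli used in the studies):

```python
# Adjacent (linear) dependencies pair each A with the B that immediately follows it;
# hierarchical (nested) dependencies pair the i-th A with the i-th-from-last B.
def adjacent(pairs):      # [AB][AB]...  ->  A1 B1 A2 B2 ...
    return " ".join(f"A{i} B{i}" for i in range(1, pairs + 1))

def nested(pairs):        # [A[AB]B]...  ->  A1 A2 ... B2 B1 (center-embedded)
    return " ".join([f"A{i}" for i in range(1, pairs + 1)] +
                    [f"B{i}" for i in range(pairs, 0, -1)])

print(adjacent(3))   # A1 B1 A2 B2 A3 B3  -- only local dependencies
print(nested(3))     # A1 A2 A3 B3 B2 B1  -- long-distance, nested dependencies
```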

Makuuchi, et al. (2009) used a 2x2 design with STRUCTURE (hierarchical vs. linear) and DISTANCE (long vs. short) as the factors. Hierarchical stimuli involved written German sentences that were center-embedded (e.g., Maria who loved Hans who was good looking kissed Johann) whereas the linear stimuli were non-center embedded (e.g., Achim saw the tall man yesterday late at night). Already I have a problem: the notion that non-center embedded sentences are linear is puzzling (see below). The number of words between the main subject (noun) of the sentence and the main verb served as the distance manipulation. Restricting their analysis only to the left inferior frontal gyrus, Makuuchi et al. report a main effect of STRUCTURE in the pars opercularis, a main effect of DISTANCE in the inferior frontal sulcus, and no significant interactions. This finding was interpreted as evidence for distinct localizations supporting hierarchical structure building on the one hand (in the pars opercularis) and non-syntactic verbal working memory related processes on the other (inferior frontal sulcus) which was operationalized by the distance manipulation.

(LPO=left pars opercularis, LIFS=left inferior frontal sulcus, A=Hierarchical & long-distance, B=hierarchical & short-distance, C=linear & long-distance, D=linear & short-distance.)

I find this study conceptually problematic and experimentally confounded. Conceptually, the notion “hierarchical” is defined very idiosyncratically to refer to center-embedding. This contradicts mainstream linguistic analyses of even simple sentences, which are assumed to be quite hierarchical. For example, the English translation of a “linear” sentence in Makuuchi et al. would have, minimally, a structure something like [Achim [saw [the tall man] [yesterday late at night]]], where, for example, the noun phrase the tall man is embedded in a verb phrase which itself is in the main sentence clause. Most theories would also assume further structure within the noun phrase and so on. Thus, in order to maintain the view that the pars opercularis supports hierarchical structure building, one must assume (i) that center-embedded structures involve more hierarchical structure building than non-center embedded structures, and (ii) that hierarchical structure building is the only difference between center-embedded and non-center embedded structures (otherwise the contrast is confounded). Makuuchi et al. make no independent argument for assumption (i), and assumption (ii) is false in that center-embedded sentences are well known to be more difficult to process than non-center embedded sentences (Gibson, 1998); thus there is a difficulty confound. One might argue that center-embedded structures are more difficult because they are hierarchical in a special way. But the confound persists in other ways. If one simply counts the number of subject-verb dependencies (the number of nouns that serve as the subject of a verb), the “hierarchical” sentences have more than the “linear” sentences.

It is perhaps revealing that the response amplitude in the pars opercularis qualitatively follows the number of subject-verb dependencies in the different conditions: “hierarchical long” (HL) = 3 dependencies, “hierarchical short” (HS) = 2, “linear long” (LL) = 1, and “linear short” (LS) = 1, which matches the pattern of activation across conditions: HL > HS > LL ≈ LS (see left bar graph above).

I also have a problem with the working memory claim, namely that inferior frontal sulcus = non-syntactic working memory. This claim is based on the assumptions that (i) the distance manipulation taps non-syntactic working memory, (ii) that the hierarchical manipulation does not tap non-syntactic working memory, and (iii) that inferior frontal sulcus showed only a distance effect and no interaction. You might convince me that (i) holds to some extent but both (ii) and (iii) are dubious. Center-embedded sentences are notoriously difficult to process and even if such structures invoke some special hierarchical process, isn’t it also likely that subjects will use whatever additional resources they have at their disposal, like non-syntactic working memory? Regarding (iii) a quick look at the amplitude graphs for the inferior frontal sulcus activation (right graph above) indicates that most of the “distance” effect is driven by the most difficult sentence type, the long-distance center-embedded condition. Thus, the lack of an interaction probably reflects insufficient power.

In short, I think all of the effects Makuuchi et al. see are driven by general processing load. Increased load likely leads to an increase in the use of less exciting processes such as articulatory rehearsal, which drives up activation levels in posterior sectors of Broca’s area. In fact, a recent study that directly examined the relation between articulatory rehearsal and sentence comprehension found exactly this: rehearsing a set of nonsense syllables produced just as much activation in the pars opercularis as did comprehending sentences with long-distance dependencies (Rogalsky et al., 2008).

References

Bahlmann, J., Schubotz, R., & Friederici, A. (2008). Hierarchical artificial grammar processing engages Broca's area. NeuroImage, 42 (2), 525-534. DOI: 10.1016/j.neuroimage.2008.04.249

Friederici, A., Bahlmann, J., Heim, S., Schubotz, R., & Anwander, A. (2006). The brain differentiates human and non-human grammars: Functional localization and structural connectivity. Proceedings of the National Academy of Sciences, 103 (7), 2458-2463. DOI: 10.1073/pnas.0509389103

Gibson E (1998). Linguistic complexity: locality of syntactic dependencies. Cognition, 68 (1), 1-76 PMID: 9775516

Makuuchi, M., Bahlmann, J., Anwander, A., & Friederici, A. (2009). Segregating the core computational faculty of human language from working memory Proceedings of the National Academy of Sciences, 106 (20), 8362-8367 DOI: 10.1073/pnas.0810928106

Rogalsky C, Matchin W, & Hickok G (2008). Broca's Area, Sentence Comprehension, and Working Memory: An fMRI Study. Frontiers in human neuroscience, 2 PMID: 18958214

Thursday, October 8, 2009

The motor theory of speech perception makes no sense

The motor theory was born because it was found that the acoustic speech signal is ambiguous. The same sound, e.g., /d/, can be cued by different acoustic features. On the other hand, the speech gesture that produces the sound /d/, it was suggested, is not ambiguous: it always involves placement of the tip of the tongue on the roof of the mouth. Therefore, the argument goes, we must perceive speech by accessing the invariant motor gestures that produce speech.

There is one thing that never made sense to me though. If the acoustic signal is ambiguous how does the motor system know which motor gesture to access? As far as I know, no motor theorist has proposed a solution to this problem. Am I missing something?

Saturday, October 3, 2009

Research Technician in Functional Magnetic Resonance Imaging

The Rochester Center for Brain Imaging (University of Rochester, Rochester, NY) is seeking a research technician to perform data collection, preprocessing, and analysis of functional, structural, and diffusion tensor MRI data, and development of software tools for same.
Applicants must have previous experience with MR data analyses or a strong background in Digital Image Processing (hold a BS/MS in Electrical Engineering, Biomedical Engineering or related areas). Responsibilities include: implementing custom software solutions for neuroscience research studies and conducting individual projects aimed at the development of improved computational methods for functional and morphological neuroimaging. The successful candidate will be well versed in scientific computation on Unix or Mac workstations and possess good skills in Matlab and C programming. Additional knowledge in physics, statistics or psychology, and/or experience in the processing and analysis of MR images (including packages such as AFNI, FSL, FreeSurfer or SPM) would be an asset.
The research focus of the Center is human brain function; however, the Center also coordinates basic and clinical research on other topics, including pulse sequence programming and MRI coil development. The successful candidate will be based in the Rochester Center for Brain Imaging (http://www.rcbi.rochester.edu), a state-of-the-art facility equipped with a Siemens Trio 3T MR system and high-performance computing resources, with a full-time staff of cognitive neuroscientists, computer scientists, engineers, and physicists. Opportunities exist to collaborate with faculty in the departments of Brain & Cognitive Science, Center for Visual Science, Imaging Sciences/Radiology, Biomedical Engineering and Computer Science, among others.
Salary is commensurate with experience. The start date is flexible, but a minimum two-year commitment is required. If interested, please send a CV and a short statement of your interest, as well as the names and addresses of three references, to Dr. D. Bavelier, daphne@bcs.rochester.edu