Friday, January 20, 2012

What can we learn from the processing of syntactic violations?

There is an interesting discussion brewing in the comments of one of my previous posts that I think is worthy of the front page.  It was sparked by a question from Jeroen van Baar:
Greg, I think one of your last comments here captures the issue: "We are dealing with the fact that Broca's area doesn't get involved with sentence processing unless the stims contain violations or things get really difficult (working memory? cognitive control?)." (March 14, 2011).
If we want to learn about the processing of syntax, we need to make sure the brain is engaged in syntax analysis, and I guess there are two obvious cases in which this happens: 1. when syntax is violated and 2. when syntax analysis is needed in order to extract meaning from a sentence. So if a sentence is not attended and meaning does not need to be extracted, the auditory stimulus will flow into your brain but higher-order processing will not take place, which leaves an absence of syntax-related activation. Likewise, if a sentence has no meaning at all, as in your jabberwocky stimuli, syntax plays no role in understanding the sentence anymore, so your brain will just not bother. What I'm trying to say, I guess, is that syntax serves meaning (in good Chomskyan tradition), and meaningful stimuli may just engage that syntax-processing unit we are thought to have.
As for music: same story? What do you think?
My response was this:
Jeroen, You seem to be suggesting that syntactic analysis may not be utilized for simple, grammatical sentences. I think there is a paradox hidden in there though. You suggest that one condition in which the brain is engaged in syntactic analysis is when syntax is violated. But presumably there has to be some mechanism in the brain telling you when a violation has occurred. This implies that syntactic analysis is being carried out even when no violation occurs. So even though "The dog chased the cat" is simple and contains no violation, the fact that we readily detect a difference between that sentence and "The dog chase the cat" means that syntactic analysis is being carried out, no?
Jeroen countered:

Greg, You're right. A syntactic monitoring system must be active at all times. But don't you think that its activity will spike when a violation is detected? In neurolinguistics this seems to be a common paradigm: in both EEG and fMRI studies, conditions of no violation are often subtracted from conditions of violation to leave the activity that is specifically concerned with the analysis (or repair) of a syntax error. I think that with such an approach, we may be more successful in identifying a musical syntactic processing system, if there is one. 

Your findings, of course, suggest that if any overlap is found in activity elicited by musical and linguistic syntax violations, this overlap is not a sign of a shared syntactic integration system, but rather of a general violation-activated system of some kind. This is contradicted by a couple of studies that found interactions specifically between syntax-violating conditions in music and language, and not between other violation conditions, such as timbre-related surprises and semantic oddballs (cf. Koelsch et al. (2005), Interaction between Syntax Processing in Language and Music: An ERP Study, J. Cogn. Neurosci. 17(10): 1565-77; and Slevc et al. (2009), Making psycholinguistics musical: Self-paced reading time evidence for shared processing of linguistic and musical syntax, Psychonomic Bulletin & Review 16(2): 374-81).

All in all, do you agree that a study comparing violation vs. control within-mode (language or music) and within-subjects with fMRI could provide useful insights? 

Here's the issue I'd like to bring up for discussion here: "don't you think that [syntactic] activity will spike when a violation is detected?"

This is the underlying assumption of all the studies that use the violation paradigm, but it is an empirical question and I'm not convinced we have a clear answer on this.  Does syntactic theory predict a spike in syntactic computation when a violation occurs?  Not really.  In fact, you might argue that the syntactic mechanism shuts down and something else takes over!

The problem I have with violation studies is that the occurrence of a violation is confounded with other processes, some of which may not be syntactic at all: conflict resolution, various forms of working memory, a possibly increased probability of your subject talking to him/herself ("Was that a violation?  Should I push the button?"), etc.  In short, I don't know what is being measured in the response to a violation, so I don't know how to interpret a neurophysiological response triggered by such a violation.  If you want to map the neural system involved in violation processing, then fine, study violations.  But is that really what we're after here?  Or are we trying to understand the circuits involved in syntactic computations as they are normally carried out in grammatical sentences?  I'm interested in the latter, and so I'm not interested at all in the response to violations.  I think it is misleading.

This is a theoretical concern, but it is backed up by empirical results.  I think we all agree that listening to structured sentences involves syntactic computation.  Yet if we look at the activation pattern associated with listening to sentences contrasted with rest, with scrambled sentences, or with spectrally rotated versions of those sentences, we do not typically find activation differences in Broca's area.  However, the most robust site of activation in violation studies is Broca's area.  Given these two facts, how can we say that the violation response reflects syntactic computation?

Thursday, January 5, 2012


First call for communications
In the course of a conversational interaction, the behavior of each talker often tends to become more similar to that of the conversational partner. Such convergence effects have been shown to manifest themselves in many different forms, which include posture, body movements, facial expressions, and speech. Imitative speech behavior is a phenomenon that may be actively exploited by talkers to facilitate their conversational exchange. It occurs, by definition, within a social interaction, but has consequences for language that extend well beyond the temporal limits of that interaction. It has been suggested that imitation plays an important role in speech development and may also form one of the key mechanisms that underlie the emergence and evolution of human languages. The behavioral tendency shown by humans to imitate others may be connected at the brain level with the presence of mirror neurons, whose discovery has raised important issues about the role that these neurons may fulfill in many different domains, from sensorimotor integration to the understanding of others' behaviour.
The focus of this international symposium will be the fast-growing body of research on convergence phenomena between speakers in speech. The symposium will also aim to assess current research on the brain and cognitive underpinnings of imitative behavior. Our main goal will be to bring together researchers with a large variety of scientific backgrounds (linguistics, speech sciences, psycholinguistics, experimental sociolinguistics, neurosciences, cognitive sciences) with a view to improving our understanding of the role of imitation in the production, comprehension and acquisition of spoken language.
The symposium is organized by the laboratoire Parole et Langage, CNRS and Aix-Marseille Université, Aix-en-Provence, France. It will be chaired by Noël Nguyen (LPL) and Marc Sato (GIPSA-Lab, Grenoble), and will be held in the Maison Méditerranéenne des Sciences Humaines.
Invited speakers:
Luciano Fadiga, University of Ferrara, Italy
Maëva Garnier, GIPSA-Lab, Grenoble, France
Simon Garrod, University of Glasgow, United Kingdom
Beatrice Szczepek Reed, University of York, United Kingdom
Papers are invited on the topics covered by the symposium. Abstracts not exceeding 2 pages must be submitted electronically and in pdf format by 15 April 2012. They will be selected by the Scientific Committee on the basis of their scientific merit and relevance to the symposium. Notifications of acceptance/rejection will be sent to the authors by 31 May 2012.
Important dates:
- 15 April 2012: Abstract submission deadline
- 31 May 2012: Notification of acceptance / rejection
- 30 June 2012: Early registration deadline
Scientific committee:
. Patti Adank, University of Manchester, UK
. Martine Adda-Decker, laboratoire de Phonétique et Phonologie, Paris, France
. Gérard Bailly, GIPSA-Lab, Grenoble, France
. Roxane Bertrand, laboratoire Parole et Langage, Aix-en-Provence, France
. Ann Bradlow, Northwestern University, Evanston, USA
. Jennifer Cole, Department of Linguistics, Urbana-Champaign, USA
. Mariapaola D’Imperio, laboratoire Parole et Langage, Aix-en-Provence, France
. Laura Dilley, Department of Psychology and Linguistics, Michigan State University, USA
. Sophie Dufour, laboratoire Parole et Langage, Aix-en-Provence, France
. Carol Fowler, Haskins Laboratories, New Haven, USA
. Jonathan Harrington, University of Munich, Germany
. Jennifer Hay, University of Canterbury, Christchurch, New Zealand
. Julia Hirschberg, Columbia University, New York, USA
. Holger Mitterer, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
. Lorenza Mondada, laboratoire ICAR, Lyon, France
. Kuniko Nielsen, Oakland University, Rochester, USA
. Noël Nguyen, laboratoire Parole et Langage, Aix-en-Provence, France
. Martin Pickering, University of Edinburgh, UK
. Marc Sato, GIPSA-Lab, Grenoble, France
. Jean-Luc Schwartz, GIPSA-Lab, Grenoble, France
. Véronique Traverso, laboratoire ICAR, Lyon, France
. Sophie Wauquier, Université Paris 8, Saint-Denis, France

Computational neuroanatomy of speech production

For your Happy New Year reading enjoyment, let me point you to my just (online) published synthesis of computational, psycholinguistic, and neuroanatomic research on speech production: Hickok, G. (2012). Computational Neuroanatomy of Speech Production, Nature Reviews Neuroscience.  The aim was to shatter barriers between the motor control folks, the psycholinguists, and the neuroscience-oriented researchers studying speech production.  This integration has some interesting consequences (in my view).  Here are a few:

1. Speech motor control is hierarchically organized (no big surprise) with an auditory-(pre)motor circuit representing a relatively higher level and a somatosensory-motor circuit a relatively lower level.

2. The auditory-grounded circuit primarily deals in units the size of syllables, whereas what we normally think of as segmental units (~phonemes) are processed primarily in the lower-level, somatosensory-based circuit.  Yes, I'm arguing that "phonological representation" is distributed over auditory and somatosensory cortex.

3. Phonological encoding, in the sense of typical two-stage models of speech production, is achieved in the context of a state feedback control circuit (a notion from the motor control tradition).

4. Efference copy signals as they are currently conceptualized in the motor control literature do not exist (let's see what kind of push back I get on this one!).  That is, the motor controller does not issue a copy of the command that it has executed.  Rather, motor-to-sensory feedback is part of the motor planning process from the start.  In other words, in my view, the "efference copy" is an iterative feedback loop that enables sensory systems to be a part of the programming of the movement rather than just evaluating the outcome of movement commands.  This conceptualization integrates the notions of motor planning, efference copies, forward prediction, and error correction into one mechanism.  In addition, this computational architecture solves the problem of how both internal and external feedback monitoring can be achieved by the same network even though the timing of the two feedback sources differs.  I present a simple simulation to demonstrate the feasibility of these assumptions.

5. Forward predictions are instantiated computationally as inhibitory inputs to sensory systems.

6. Conduction aphasia and apraxia of speech involve disruption to two different components of the same hierarchical level of state feedback control: the relatively higher-level auditory-pre-motor circuit.

7. Sensory representations are central to the motor planning process, which explains the tight interaction between sensory and motor speech systems.  In a sense, it is a sensory theory of speech production, as opposed to a motor-oriented theory of speech perception.
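The feedback architecture in points 4 and 5 is easier to see in a toy simulation. The sketch below is my own minimal illustration, not the simulation from the paper: a one-dimensional "articulator" is driven toward a sensory target, the forward prediction is subtracted from (i.e., inhibits) the incoming feedback, and only the residual prediction error updates the internal state estimate that drives the next motor command. All names and parameter values here are invented for illustration.

```python
# Toy sketch of a state feedback control loop in which the forward
# prediction acts as an inhibitory input to the sensory system.

def simulate(target=1.0, gain=0.5, steps=30, perturb_at=10, perturb=0.3):
    """Drive a 1-D 'articulator' toward a sensory target."""
    estimate = 0.0   # internal state estimate used for motor planning
    state = 0.0      # true articulator state
    trajectory = []
    for t in range(steps):
        command = gain * (target - estimate)  # plan from the internal estimate
        prediction = estimate + command       # forward prediction of the sensory outcome
        state += command                      # execute the command...
        if t == perturb_at:
            state += perturb                  # ...with one unexpected perturbation
        error = state - prediction            # prediction inhibits (is subtracted from) feedback
        estimate = prediction + error         # only the residual corrects the estimate
        trajectory.append(state)
    return trajectory

traj = simulate()
```

Because the prediction is subtracted from the feedback, a perfectly predicted movement generates no sensory error at all; only the unexpected perturbation at step 10 produces a residual, which the loop then corrects over the following steps. That is the sense in which planning, prediction, and error correction fall out of one mechanism.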

I would love to get your thoughts on this paper.  There's lots in here to discuss/argue about and it will be fun to debate some of the data and/or theoretical claims.