Results are inconsistent with motor and acoustic only models of speech perception and classical and contemporary dual-stream models of the organization of language and the brain. Instead, results are more consistent with complex network models in which multiple speech production related networks and subnetworks dynamically self-organize to constrain interpretation of indeterminant acoustic patterns as listening context requires. [from abstract]

I disagree. What I'd like to do here is spark a discussion of this paper, hopefully involving input from the authors, that highlights the points of disagreement. This will probably involve several posts. I'm going to start here with SDL's section on "Rethinking the question."
SDL wish to deconstruct the question of the motor system's role in speech perception, which is a laudable goal. In doing so they argue,
...that the question and indeed the entire debate is misleading due to the complexity of the neurobiology of speech production and the dynamic nature of speech perception.

As the statement indicates, their argument comes in two parts:
- The network involved in speech production is complex and not, for example, restricted to Broca's area. In other words, identifying what counts as "the motor system" is hard.
- Speech perception is context dependent, and we don't even know what the unit of analysis is; i.e., it is dynamic. In other words, identifying what counts as "speech perception" is hard (and shifting).
Regarding the second argument, SDL write,
what is meant by ‘‘speech perception” is typically ill defined. It is often discussed in the neurobiological literature as if it is a static operation, the result of which are minimal categorical units of speech analysis, phonemes or syllables, from which we can then build words and put those words into sentences. This assumption is reflected in the way speech perception and the brain is studied using primarily isolated speech sounds like ‘‘da” and ‘‘ba”

I agree that many researchers study "speech perception" using primarily isolated speech sounds like "da" and "ba," and I agree that this is an impediment to the field. In fact, SDL's complaint sounds extremely familiar to me. Here is a quote from Hickok & Poeppel 2000:
Part of this confusion stems from differences in what one means by ‘speech perception’ and how one tests it behaviorally. Psychological research on speech perception typically utilizes tasks that involve the identification and/or discrimination of ‘sub-lexical’ segments of speech, such as meaningless syllables, and many neuropsychological and functional imaging studies have borrowed from this rich literature

Another from Hickok & Poeppel 2004:
The upshot is that the particular task which is employed to investigate the neural organization of language (that is, the mapping operation the subject is asked to compute) determines which neural circuit is predominantly activated. [emphasis original to the published paper, cuz it's THAT important and people tend to miss the point]

And again from Hickok & Poeppel 2007:
Many studies using the term ‘speech perception’ to describe the process of interest employ sublexical speech tasks, such as syllable discrimination, to probe that process. In fact, speech perception is sometimes interpreted as referring to the perception of speech at the sublexical level. However, the ultimate goal of these studies is presumably to understand the neural processes supporting the ability to process speech sounds under ecologically valid conditions, that is, situations in which successful speech sound processing ultimately leads to contact with the mental lexicon and auditory comprehension

We have been harping on this point for 16 years and repeatedly argued that the functional anatomy of "speech perception" varies by task: if you look at ecologically valid tasks (speech perception in the wild) you see a ventral temporal basis; if you look at typical laboratory "sub-lexical" tasks, you see a dorsal, frontoparietal basis. This is why in Hickok & Poeppel 2007 we stated clear definitions of the speech terms we used:
In this article we use the term ‘speech processing’ to refer to any task involving aurally presented speech. We will use speech perception to refer to sublexical tasks (such as syllable discrimination), and speech recognition to refer to the set of computations that transform acoustic signals into a representation that makes contact with the mental lexicon.

It is odd that SDL complain about a lack of terminological clarity in the field (#BeenThereSaidThat), call for the abandonment of the model that was developed precisely to remedy the problem they complain about, and then fail to adhere to their own advice to worry about what counts as "speech perception" (they go on to cite a wide range of tasks, mostly "sublexical," to support their claims). In fact, according to SDL's imprecise definition of speech perception (any task counts), Hickok & Poeppel have already made their argument for the role of the motor system in speech perception. E.g., from the abstract of Hickok & Poeppel 2000: "Tasks that require explicit access to speech segments rely on auditory–motor interface systems in the left frontal and parietal lobes." So, yes, if you allow syllable discrimination to count as "speech perception," the motor system is definitely involved. Here is a series of posts I wrote in 2007 on this topic here, here, here, and here.
SDL have not redefined the problem; they have rediscovered a known problem, one that has been at least partially solved, and then ignored the solution.