WORKSHOP ANNOUNCEMENT
CALL FOR POSTERS
Psycholinguistic Approaches to Speech Recognition in Adverse Conditions
University of Bristol
8-10 March 2010
The workshop aims to gather academics from various fields in order to discuss the benefits, prospects, and limitations of considering adverse conditions in models of speech recognition. The adverse conditions we will consider include extrinsic signal distortions (e.g., speech in noise, vocoded speech), intrinsic distortions (e.g., accented speech, conversational speech, dysarthric speech, Lombard speech), listener-specific limitations (e.g., non-native listeners, older individuals), and cognitive load (e.g., speech recognition under an attentional or memory load, multi-tasking).
Registration now open:
http://language.psy.bris.ac.uk/workshop/index.html
Speakers
* Jennifer Aydelott, Birkbeck College, University of London, UK
* Ann Bradlow, Northwestern University, USA
* Martin Cooke, University of the Basque Country, Spain
* Anne Cutler, Max Planck Institute for Psycholinguistics, NL
* Matt Davis, MRC CBU Cambridge, UK
* John Field, University of Reading, UK
* Valerie Hazan, UCL, UK
* M. Luisa García Lecumberri, University of the Basque Country, Spain
* Sven Mattys, University of Bristol, UK
* Holger Mitterer, Max Planck Institute for Psycholinguistics, NL
* Dennis Norris, MRC CBU Cambridge, UK
* Kathy Pichora-Fuller, University of Toronto, Canada
* Sophie Scott, UCL, UK
* Laurence White, University of Bristol, UK
Important Dates
* Poster abstract deadline: 30 November 2009
* Notification of acceptance: 11 December 2009
* Preregistration deadline: 31 January 2010
* Conference dates: 8-10 March 2010
Organiser
* Sven Mattys
Local Organising Committee
* Sven Mattys
* Laurence White
* Lukas Wiget
Contact: sven.mattys@bris.ac.uk
News and views on the neural organization of language moderated by Greg Hickok and David Poeppel
Wednesday, September 23, 2009
Final post on mirror neurons
I'm kicking the mirror neuron habit after just one more puff (don't worry, I never inhale). I got to page 100 in Rizzolatti & Sinigaglia's Mirrors in the Brain (2008, Oxford University Press) and had to stop. The logic just became so incoherent and one-sided that I decided it was a waste of time even to consider the arguments seriously.
Here's what they wrote (italics theirs):
...these [mirror] neurons are primarily involved in the understanding of the meaning of 'motor events', i.e., of actions performed by others. (p. 97)
...this is why, when it [the monkey] sees the experimenter shaping his hand into a precision grip and moving it towards the food, it immediately perceives the meaning of the 'motor events' and interprets them in terms of an intentional act. (p. 98)
This is fairly standard mirror neuron speak. It was the next section that made me decide to stop reading.
There is, however an obvious objection to this: as discussed above, neurons which respond selectively to the observation of the body movements of others, and in certain cases to hand-object interactions, have been found in the anterior region of the superior temporal sulcus (STS). We have mentioned that the STS areas are connected with the visual, occipital, and temporal cortical areas, so forming a circuit which is in many ways parallel to that of the ventral stream. What point would there be, therefore, in proposing a mirror neuron system that would code in the observer's brain the actions of others in terms of his own motor act? Would it not be much easier to assume that understanding the actions of others rests on purely visual mechanisms of analysis and synthesis of the various elements that constitute the observed action, without any kind of motor involvement on the part of the observer? (p. 98-99)
A very good question. They go on to note,
Perrett and colleagues demonstrated that the visual codification of actions reaches levels of surprising complexity in the anterior region of the STS. Just as an example, there are neurons which are able to combine information relative to the observation of the direction of the gaze with that of the movements an individual is performing. Such neurons become active only when the monkey sees the experimenter pick up an object on which his gaze is directed. If the experimenter shifts the direction of his gaze, the observation of his action does not trigger any neuron activity worthy of notice. (p. 99)
So why is the STS with its much more selective response properties to action perception not a candidate neural basis for action understanding? The answer is...
However, we must ask whether this selectivity -- or, in more general terms, the capacity to connect different visual aspects of the observed action -- is sufficient to justify using the term 'understanding'. The motor activation characteristic of F5 and PF-PFG adds an element that hardly could be derived from the purely visual properties of STS -- and without which the association of visual features of the action would at best remain casual, without any unitary meaning for the observer. (p. 99, end of paragraph)
Not only is this pure speculation, but this question is NEVER asked of mirror neurons:
However, we must ask whether this selectivity -- or, in more general terms, the capacity to connect motor aspects of the observed action -- is sufficient to justify using the term 'understanding'. The sensory activation characteristic of STS adds an element that hardly could be derived from the far less specified properties of F5 -- and without which the association of sensory-motor features of the action would at best remain casual, without any unitary meaning for the observer.
A typical response to this kind of critique is that "it's the activity of the WHOLE circuit that is important, not just mirror neurons in F5". But this is vacuous hand-waving. If this is really the claim, then why is the visual percept "casual" and without "unitary meaning" and the motor component the one that adds meaning? Why isn't it the reverse? Why isn't the reverse ever considered?
The other glaring logical party-foul with R&S's claim is that if they are correct, monkeys should only be able to understand actions that mirror neurons code: grasping, tearing, holding, etc. All other actions would be casual and without unitary meaning. Does it make sense, from an evolutionary standpoint, to have a system that can understand only those visually presented actions or events that also have a motor representation? Or would it be useful for the animal to understand that a hawk circling above is a bad thing? And if you want to claim that the animal doesn't really 'understand' what a circling hawk 'means', that it only reacts to it reflexively, then you are obliged to prove to me that the monkey does 'understand' grasping actions and is not just reacting reflexively.
Here's my guess as to what mirror neurons are doing.
1. Action understanding is primarily coded in the much more sophisticated STS neurons.
2. The F5-parietal lobe circuit performs sensory-motor transformations for the purpose of guiding action.
3. Populations of F5 neurons code specific complex actions such as grasping with the hand using a particular grip, or perhaps these populations are part of the transformation (started in parietal regions) between a sensory event and a specific action.
4. F5-driven actions (or sensory-motor transformations) can be activated by objects (canonical neurons), or by the observation of actions (mirror neurons).
5. [prediction:] Mirror neurons are only one class of action-responsive cells in F5. Others code non-mirror action observation-action execution responses such as when a conspecific presents its back and a grooming action may be elicited.
6. [prediction:] F5 neuron populations are plastic. If the animal is trained to reach for a raisin upon seeing a human waving gesture, a dog's tail wag, or a picture of the Empire State Building, F5 cell populations will come to code this association such that F5 cells may end up responding to tail wagging (see, for example, Catmur et al., 2007, although admittedly this is a human study and may not apply to macaques). A toy sketch of this kind of associative learning follows the list.
7. The reason why mirror neurons mirror is because there is an association between seeing a reaching/grasping gesture and executing the same gesture. This could arise either because of natural competitive behavior (seeing another monkey reach may cue the presence of something tasty and generate a competitive reach) or because of the specific experimental training situation.
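To make predictions 6 and 7 concrete, here is a minimal simulation sketch of the associative account. This is my own toy model, not anything from Rizzolatti & Sinigaglia or the mirror neuron literature: an F5-like unit is driven by the monkey's own reach, and whatever visual channel happens to be active at the same time is strengthened by a simple Hebbian rule, so that the visual cue alone eventually drives the unit.

```python
import numpy as np

# Toy sketch of predictions 6-7 above (illustrative assumption, not an
# implementation from the literature). An F5-like unit is driven by the
# monkey's own reach/grasp; visual channels that are co-active with that
# motor drive are strengthened by a simple Hebbian rule.

cue_names = ["observed grasp", "tail wag", "Empire State photo"]  # hypothetical visual channels
w = np.zeros(len(cue_names))   # visual -> F5 weights, initially silent
lr = 0.1                       # learning rate

def f5_response(visual, motor_drive=0.0):
    """F5 unit activity: its own motor command plus learned visual input."""
    return motor_drive + w @ visual

# Training: pair two cues with an executed reach/grasp (motor drive = 1.0) --
# 'observed grasp' via natural competitive reaching (prediction 7) and
# 'tail wag' via explicit lab training (prediction 6). The third cue is never paired.
for _ in range(20):
    for cue in (0, 1):
        visual = np.eye(len(cue_names))[cue]
        w += lr * 1.0 * visual        # Hebbian update: post (motor drive) x pre (visual)

# Test: each visual cue presented alone, with no executed movement.
for i, name in enumerate(cue_names):
    probe = np.eye(len(cue_names))[i]
    print(f"{name:18s} -> F5 response {f5_response(probe):.2f}")
# The unit now 'mirrors' observed grasps and also responds to tail wags, while
# ignoring the unpaired cue -- a mirror-like visual response produced purely by
# sensory-motor pairing, with no action-understanding computation anywhere.
```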
As far as I know, there is no way empirically to differentiate these ideas from the action understanding theory. However, the present suggestion can explain why STS neurons code actions so much more specifically than mirror neurons (because STS is critically involved in action understanding) and it does not limit 'understanding' to motor behaviors, which seems desirable. I look forward to seeing a flood of studies in Nature and Science testing alternative theories of mirror neuron function. (Yeah, right.)
So what in the world will I talk about if not mirror neurons? Well, the motor theory of speech perception is still on the table. Unlike mirror neurons, it is squarely in my research program. It is also an interesting topic because it provides an excellent test case for mirror neuron theory as it is applied to humans, just as speech was the critical test case for phrenology. (Yes, I am comparing mirror neurons to phrenology -- both very interesting ideas that were unsubstantiated when first proposed and that captured the scientific and public imagination.)
Catmur, C., Walsh, V., & Heyes, C. (2007). Sensorimotor learning configures the human mirror system. Current Biology, 17(17), 1527-1531. PMID: 17716898
Tuesday, September 22, 2009
Mirrors in the Brain -- Comments on Rizzolatti & Sinigaglia, 2008
Apparently I'm obsessed with mirror neurons because I can't seem to stop reading what people say about them. Now I'm reading Rizzolatti & Sinigaglia's 2008 book, Mirrors in the Brain, translated from the original Italian by Frances Anderson and published by Oxford.
I'm only about halfway through so far but already I find the book both useful in terms of its summary of the functional anatomy of the macaque motor system and frustratingly sloppy in terms of its theoretical logic.
Let me provide one example of the latter. At the outset of the book the authors describe the functional properties of motor neurons (not necessarily mirror neurons) in macaque area F5. They argue that F5 motor cells
code motor acts (i.e., goal-directed movement) and not individual movements (p. 23).
As evidence they note that
...many F5 neurons discharge when the monkey performs a motor act, for example when it grasps a piece of food, irrespective of whether it uses its right or left hand or even its mouth ... [and] a particular movement that activates a neuron during a specific motor act does not do so during other seemingly related acts; for example, bending the index finger triggers a neuron when grasping, but not when scratching. (p. 23)
They conclude,
Therefore the activity of these neurons cannot be adequately described in terms of pure movement, but taking the efficacy of the motor act as the fundamental criterion of classification they can be subdivided into specific categories, of which the most common are 'grasping-with-the-hand-and-the-mouth', 'grasping-with-the-hand', 'holding', 'tearing', 'manipulating', and so on. (p. 23)
So the claim is that F5 cells are coding something higher-level that is defined by the goal, the "efficacy", of movement.
Clearly, F5 cells are coding something that is at least one step removed from specific movements (e.g., finger flexion), but the leap from this observation to the idea that they are coding categories or goals such as 'tearing' is suspect. Perhaps these complex movements are being coded -- separately for the mouth and hand, for tearing in one manner versus another, etc. -- by the population of cells in F5 rather than by individual cells. In other words, the fact that a single cell responds to grasping with the hand and grasping with the mouth doesn't necessarily mean that it is coding an abstract concept of grasping.
But we don't need to argue with Rizzolatti and Sinigaglia on this theoretical point because they argue against their own view rather convincingly (although unwittingly) on empirical grounds. Specifically, in contrast to the claim that F5 cells code goal-directed actions, they give examples of how these cells code specific, albeit complex, movements.
Most F5 neurons ... also code the shape the hand has to adopt to execute the act in question... (p. 25)
This strikes me as a rather specific individual movement that, for example, would not apply to the same "act" executed by the mouth. More pointedly though, in their discussion of mirror neurons in F5, Rizzolatti and Sinigaglia make a big deal of cells that show a strict relation between the observed and executed act. They provide a striking example:
...the monkey observes the experimenter twisting a raisin in his hands, anti-clockwise and clockwise, as if to break it in two: the neuron discharges for one direction only. (p. 82)
So here is a case where two movements have the same goal (breaking the raisin in two), but the F5 cell only fires in response to one of the movements. Apparently this cell is coding movements, not goals.
Has anyone else read Rizzolatti and Sinigaglia's book? Any thoughts?
Thursday, September 17, 2009
What mirror neurons are REALLY doing
Mirror neurons are cells in monkey frontal area F5 that respond both during the execution of action and during the perception of action. Explaining why these cells respond during action execution is easy and uncontroversial: they are motor cells in a motor area -- they respond during action execution because they are involved in the coding of actions. The perceptual response is more difficult to explain. Think first about "canonical neurons", neighbors of mirror neurons in F5. Like mirror neurons, these cells respond during action execution, e.g., grasping, and they also have sensory properties, e.g., responding to the presentation of graspable objects. The sensory responses of canonical neurons have a fairly intuitive and standard explanation: the grasping of objects needs to be informed by the shape of the object (you grasp a paperclip differently than a grapefruit), and so the sensory input is used to drive appropriate grasping gestures. Importantly, canonical neurons are not assumed to be responsible for visual recognition; they just receive relevant input from areas involved in the processing of visual features.
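As a toy illustration of that "sensory input for a motor purpose" idea, here is a minimal sketch of the kind of transformation a canonical-neuron circuit is thought to support: a mapping from a visual object property to grip settings. The thresholds, scaling, and object sizes are made-up illustrative numbers, not anything from the F5 literature.

```python
def plan_grasp(object_width_cm: float) -> dict:
    """Map a visual estimate of object width onto grip parameters.
    Toy numbers: the threshold and scaling are illustrative, not physiological."""
    grip_type = "precision" if object_width_cm < 3.0 else "whole-hand"  # paperclip vs grapefruit
    aperture_cm = 1.2 * object_width_cm + 0.5                           # open a bit wider than the object
    return {"grip_type": grip_type, "aperture_cm": round(aperture_cm, 2)}

print(plan_grasp(0.8))    # paperclip-sized  -> precision grip, small aperture
print(plan_grasp(10.0))   # grapefruit-sized -> whole-hand grip, large aperture
```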
But what about mirror neurons? Why would the percept of someone else performing an action such as grasping a piece of food help guide the monkey's own food-grasping action? One thought is that mirror neurons support imitation, but apparently macaque monkeys don't imitate, so that can't be right. So the theory that was proposed early on and completely dominates (suffocates even) thought on mirror neuron function is that these cells support action understanding. According to this view, the sensory response of mirror neurons is not relevant to the monkey's own actions, unlike canonical neurons. It is rather a mechanism for understanding what other animals are doing via motor simulation. The logic is, if I understand what I'm doing when I reach for a peanut, then if I can simulate another's peanut-reaching action in my motor system, I can understand what s/he's doing.
I have argued that the action understanding theory of mirror neurons has never actually been tested in monkeys and where it has been tested in the "human mirror system" it has been proven wrong: damage to the "mirror system" does not necessarily cause a deficit in action understanding (Hickok, 2009). I have yet to see a strong empirical refutation of the evidence I discussed, but a common response that I do hear is, "you propose no alternative theory of mirror neurons."
Although I've never been fond of it's-the-only-game-in-town arguments (a theory can be demonstrably wrong even if we don't yet have a better theory) I think the point is worth taking seriously even if it is only partially true. I did propose that mirror neurons reflected a form of motor priming, but didn't develop the idea in any detail.
In response to the only-game-in-town argument, here is what I'd like neuroscientists to do, just for fun. Rather than obsessing on the idea that the sensory response of mirror neurons has no relevance to action execution, I'd like folks to at least consider the possibility that mirror neurons, like their canonical neighbors, take sensory input for a motor purpose.
I'll start. This is speculative and unsubstantiated, but so is the action understanding theory and you have to start somewhere. Consider this a jumping off point for discussion...
Can we learn something from the behavior of dogs? If you've played fetch with a dog you may have noticed that it quickly learns to anticipate the consequences of throwing actions. For example, it is not hard to fool a naive dog that plays a lot of fetch with a fake throw. Even though the ball isn't flying through the air, the dog may nonetheless take off in chase. Presumably, the animal has learned to recognize throwing actions. This is interesting because dogs can't throw and so can't have throwing mirror neurons. It is also interesting because somehow the action observation, throwing, is triggering an action execution, chasing, in the dog. This tells us that an action observation-execution sensory-motor circuit exists in the animal. There may even be "chase" cells in the dog's motor cortex that fire both during action observation and action execution. This is the same sort of circuit by which the pairing of a tone (cf. the throwing action) with an airpuff to the eye (cf. the ball flying) can eventually lead to an eyeblink response (cf. chasing) to the presence of the tone alone (cf. the throwing action).
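Since the conditioning analogy is doing real work here, a minimal Rescorla-Wagner sketch makes it explicit. The choice of learning rule and the parameter values are my own assumptions; the post only draws the analogy informally.

```python
# Hedged sketch of the conditioning analogy above, using the Rescorla-Wagner rule.
# CS = observed throwing gesture, US = ball flying, CR = taking off in chase.

alpha_beta = 0.3   # combined salience / learning-rate parameter (assumed value)
lam = 1.0          # asymptotic associative strength supported by the US
V = 0.0            # current associative strength: throw gesture -> chase

# Real throws: the gesture is always followed by the ball flying.
for trial in range(15):
    V += alpha_beta * (lam - V)   # delta rule: move prediction toward the outcome

print(f"Learned chase tendency after real throws: {V:.2f}")
# A fake throw presents the same gesture, so it carries the full learned
# prediction (V is near 1.0) and the naive dog takes off even though no ball flies.
```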
But dogs are smart. Try fake throwing without the ball, e.g., while the ball is still in the dog's mouth. You don't get much of a response (if your dog responds anyway, retry the exercise with a larger ball such that the dog can easily see whether or not you have something in your hand). This is interesting because it is kind of like pantomime and mirror neurons don't respond to pantomime. You can imagine how our non-mirror action observation-execution circuit might start to behave like a mirror neuron.
The point here is that it is not hard to imagine sensory-motor circuits that take observed actions as input and use these actions as triggers for any number of executed actions via regular old sensory-motor association. The cells underlying these circuits would probably behave like canonical neurons responding both to the execution and observation of the (non-mirror) actions.
That's all fine, but how could the observation of an animal reaching for a piece of food trigger a similar action in the observer? In other words, why would some actions trigger mirror actions? Here's where I need the help of any readers who know primate behavior (I certainly don't). But again reasoning from dog behavior, I've noticed that if you place a ball or toy in front of a moderately well-trained dog, it may watch you and wait (an untrained dog might just grab it). If you start to reach for the object, the dog may suddenly lunge for it, trying to beat you to it. The dog isn't imitating you (in fact it can't); it has learned that your reaching results in ball possession, and this triggers an action with a competitive goal. I can imagine this happening naturally with food, for example, where one animal's movement toward a piece of meat triggers a competitive counteraction on the part of the dog. The trigger action could be the reach of a human, a forward movement of another dog, or even the looming flight of a bird or the mechanical action of a tool. Importantly, recognition of these actions is not being carried out in the motor system of the dog via motor simulation (at least not for the human, bird, and tool actions).
Presumably monkeys can also learn to recognize actions and respond with appropriate actions themselves. Observing an aggressive posture might trigger a flee or hit action. Observing a grasp toward a piece of fruit might trigger a competitive "mirror grasp" for the same piece of fruit. Maybe watching an experimenter reaching for a raisin that the monkey really wants triggers exactly this kind of competitive motor response and maybe this is what mirror neurons really reflect. Or maybe it is just another wrong theory about mirror neuron function.
Hickok, G. (2009). Eight Problems for the Mirror Neuron Theory of Action Understanding in Monkeys and Humans. Journal of Cognitive Neuroscience, 21(7), 1229-1243. DOI: 10.1162/jocn.2009.21189
Tuesday, September 15, 2009
A multisensory cortical network for understanding speech in noise
It’s kind of ironic that we spend so much time and effort trying to eliminate the noise problem in fMRI research on auditory speech perception when we do most of our everyday speech comprehension in noisy environments. In fact, one could argue that we are getting a more veridical picture of the neural networks supporting speech perception when we use standard pulse sequences than when we use sparse or clustered acquisition. (I am TOTALLY going to use this argument the next time a reviewer says my study is flawed because I didn’t use sparse sampling!) This is why I think it is a brilliant strategy to develop a research program to study speech processing in noise using fMRI, as Lee Miller at UC Davis has done. Not only is speech typically processed in noisy environments (ecological validity), and not only is processing speech in noise often disproportionately affected by damage to the auditory/speech system (clinical applicability), but fMRI is really noisy. Brilliant!
A recent fMRI study by Bishop and Miller tackled this issue by adding even more noise to speech stimuli. They asked subjects to judge the intelligibility of meaningless syllables (the signal) presented in a constant babble (the noise) created by mixing speech from 16 talkers. They manipulated the intensity (loudness) of the signal to create a range of signal to noise ratios. In addition, they presented visual speech in the form of a moving human mouth that was either synchronous with the speech signals or temporally offset. The visual speech information was blurred to preserve the gross temporal envelope information but obscure the fine details so that syllable identification could not be achieved using visual information alone. They also had an auditory-only condition.
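For readers who want the arithmetic behind the SNR manipulation, here is a hedged sketch of how a target syllable can be mixed with multi-talker babble at a chosen SNR. The signals, sample rate, and the summing of 16 noise sources are placeholders of my own; Bishop & Miller's exact stimulus construction may have differed.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
syllable = np.sin(2 * np.pi * 220 * t)                   # placeholder for a recorded syllable
babble = rng.standard_normal((16, len(t))).sum(axis=0)   # placeholder for 16 summed talkers

def mix_at_snr(signal, noise, snr_db):
    """Scale the signal so that 10*log10(P_signal / P_noise) = snr_db, then add the noise."""
    gain = np.sqrt(np.mean(noise**2) / np.mean(signal**2) * 10 ** (snr_db / 10))
    scaled = gain * signal
    achieved = 10 * np.log10(np.mean(scaled**2) / np.mean(noise**2))
    return scaled + noise, achieved

for snr in (-12, -6, 0, 6):
    mix, achieved = mix_at_snr(syllable, babble, snr)
    print(f"target {snr:+3d} dB SNR -> achieved {achieved:+.1f} dB")
```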
They did a couple of analyses. The one that would seem most interesting, and the one that was emphasized in the paper, was the contrast between stimuli that were judged intelligible versus those that were judged unintelligible, collapsed across the visual speech conditions. Oddly, no region in the superior temporal lobes in either hemisphere showed an effect of intelligibility. This is completely unexpected given the very high-profile finding by Scott and her colleagues, who claim to have identified a pathway for intelligible speech in the left anterior temporal lobe (Scott et al. 2000). Instead, Bishop & Miller found an intelligibility effect bilaterally at the temporal-occipital boundary (in the vicinity of the angular gyrus), in the left medial temporal lobe, right hippocampus, left superior parietal lobule, left posterior intraparietal sulcus, left superior frontal sulcus, bilateral postcentral gyrus, and bilateral putamen -- not your typical speech network!
They then assessed audiovisual contributions to intelligibility by looking for regions that show both an intelligibility effect (intelligible > unintelligible) and an audiovisual effect (synchronous AV > temporally offset AV). This conjunction yielded activation in a subset of the intelligibility network, including the left medial temporal lobe, bilateral temporal-occipital boundary, left posterior inferior parietal lobule, left precentral sulcus, bilateral putamen, and right postcentral gyrus.
The authors discuss the possible role of medial temporal lobe structures in speech “understanding” (e.g., “evaluating cross-modal congruence at an abstract representational level”) as well as the role of the temporal-occipital boundary (e.g., “object recognition … based on features from both auditory and visual modalities”).
But what interests me most about this study, and what I think is the most important contribution, is not the intelligibility contrast but their acoustic control. Recall that they parametrically manipulated the signal to noise ratio (SNR). They ran an analysis to see what correlated with this SNR variable. The goal was to see if SNR could explain their intelligibility effect. The answer was no, “the BOLD time course for our understanding network was not adequately explained by SNR variance.” But the regions that did correlate with SNR turned out to be a familiar set of speech-related regions: bilateral STG and STS, and left MTG!
What I think this study has actually shown is that phonological perceptibility is strongly correlated with activity in a bilateral superior temporal lobe network (the SNR variable) and that the “understanding network” reflects those top-down (or higher-level) factors that influence how the phonological information is used (e.g., to make an intelligibility decision). Of interest in this respect is the high degree of overlap between the distributions of SNR values on trials judged intelligible versus unintelligible.
Because there was so much overlap in these distributions, the contrast between intelligible and unintelligible yielded no effect in regions that were responding to the phonemic information in the signal.
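To make that dilution argument concrete, here is a hedged numerical sketch with toy numbers of my own, not the study's data: a simulated region whose response tracks SNR shows a clear parametric SNR effect, yet the intelligible-versus-unintelligible contrast shrinks because the two judgment categories span heavily overlapping SNR ranges.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
snr = rng.uniform(-12, 6, n)                 # parametric SNR manipulation (dB), assumed range
bold = 0.1 * snr + rng.normal(0, 1.0, n)     # a region whose response tracks SNR (toy effect size)

# Probabilistic intelligibility judgments via a shallow psychometric function,
# so 'intelligible' and 'unintelligible' trials overlap heavily in SNR.
p_intelligible = 1 / (1 + np.exp(-(snr + 3) / 5))
judged = rng.random(n) < p_intelligible

r = np.corrcoef(snr, bold)[0, 1]
snr_contrast = bold[snr > 0].mean() - bold[snr < -6].mean()
judged_contrast = bold[judged].mean() - bold[~judged].mean()

print(f"correlation of response with SNR:         r = {r:.2f}")
print(f"high-SNR minus low-SNR response:          {snr_contrast:.2f}")
print(f"judged intelligible minus unintelligible: {judged_contrast:.2f}")
# The parametric SNR effect is robust, but the categorical contrast is diluted
# because the two judgment categories' SNR distributions overlap so much.
```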
In sum, I think this study nicely supports the view that phonemic aspects of speech perception are bilaterally organized in the superior temporal lobe, but goes further to outline a network of regions that provide top-down/higher-level constraints on how this information is used in performing, in this case, a cross-modal integration task.
References
Bishop, C., & Miller, L. (2009). A Multisensory Cortical Network for Understanding Speech in Noise. Journal of Cognitive Neuroscience, 21(9), 1790-1804. DOI: 10.1162/jocn.2009.21118
Scott, S. K., Blank, C. C., Rosen, S., & Wise, R. J. S. (2000). Identification of a pathway for intelligible speech in the left temporal lobe. Brain, 123(12), 2400-2406. DOI: 10.1093/brain/123.12.2400
Thursday, September 10, 2009
Physics of Life Reviews
I have a new review paper on the functional anatomy of language coming out in a journal called Physics of Life Reviews, published by Elsevier. Before I was invited to contribute an article I had never heard of this journal, but after looking through a few issues I found that it is pretty interesting. The articles are intended for a broad audience and cover everything from neural coding to computational modeling of pandemic influenza outbreaks. In the last couple of years, language issues have been appearing fairly regularly (see sample references below). What I particularly like about the format is that authors are not restricted to a fixed number of journal pages and are allowed to develop their arguments fully, so that, hopefully, the reader can get an accessible and thorough overview of the issues. It is definitely worth checking out...
Edelman, S., & Waterfall, H. (2007). Behavioral and computational aspects of language and its acquisition. Physics of Life Reviews, 4(4), 253-277. DOI: 10.1016/j.plrev.2007.10.001
Frye, R., Rezaie, R., & Papanicolaou, A. (2009). Functional neuroimaging of language using magnetoencephalography. Physics of Life Reviews, 6(1), 1-10. DOI: 10.1016/j.plrev.2008.08.001
Hickok, G. (2009). The functional neuroanatomy of language. Physics of Life Reviews. DOI: 10.1016/j.plrev.2009.06.001
Masataka, N. (2009). The origins of language and the evolution of music: A comparative perspective. Physics of Life Reviews, 6(1), 11-22. DOI: 10.1016/j.plrev.2008.08.003
Piattelli-Palmarini, M., & Uriagereka, J. (2008). Still a bridge too far? Biolinguistic questions for grounding language on brains. Physics of Life Reviews, 5(4), 207-224. DOI: 10.1016/j.plrev.2008.07.002
Friday, September 4, 2009
Auditory Cognitive Neuroscience Society Meeting announcement
Mark your calendars!
The Auditory Cognitive Neuroscience Society (ACNS) 2010 conference is scheduled to take place on Thursday January 7th through Friday January 8th, 2010. The conference will be held in the Speech & Hearing Sciences building (room 205) on the University of Arizona campus (1131 E. 2nd Street, Tucson AZ, 85721).
Posters: We will be accepting a limited number of posters to be displayed during this year’s conference. Please view abstract submission guidelines below.
Visit the ACNS Website, 2010 Meeting for more information.
Hope to see you in January! -Andrew J. Lotto & Julie M. Liss
ACNS 2010 Abstract Submission Guidelines
We will be accepting a limited number of posters to be displayed during the ACNS Conference. Acceptance criteria include the following:
The research to be presented is pertinent to the domain of Auditory Cognitive Neuroscience.
The methodology of the work is sound.
Work in progress will be in presentable form by the time of the conference.
Abstracts should be no longer than 350 words (not inclusive of graphs, figures, or references) and include the following components:
Statement of the Problem
Study Design and Method
Results and Interpretation
Please email abstracts, including author names and affiliations, to julie.liss@asu.edu no later than October 31st, 2009. Details regarding poster size and format will be provided at the time of acceptance notification.
Tuesday, September 1, 2009
Parallels between conduction aphasia and optic ataxia
Conduction aphasia and optic ataxia are both "dorsal stream" sensory-motor integration syndromes. The only difference is they affect distinct motor effector systems. At least that is the view I'd like to promote.
For those who aren't familiar with these syndromes, conduction aphasia is a language disorder characterized by phonemic paraphasias (speech production errors) and difficulty with verbatim repetition of speech, but with preserved auditory comprehension. Optic ataxia is a "motor" disorder that affects the patient's ability to perform visually guided reaching/grasping actions. For example, "such patients demonstrate an exaggerated and poorly scaled grip aperture" (p. 172) (Rossetti et al. 2003). Visual recognition is unimpaired.
Let's consider the parallels between these syndromes. First, in both cases, "ventral stream functions" are preserved. Conduction aphasics can comprehend speech (even speech they can't repeat) and optic ataxics can recognize visually presented objects. Second, sensory-guided action is disrupted. Conduction aphasics have difficulty transforming auditory speech input into motor gestures that will reproduce what was heard. Optic ataxics have difficulty using visual information to guide reaching and grasping actions. Third, both syndromes exhibit familiarity effects. Conduction aphasics have more trouble repeating longer, lower-frequency phrases than shorter familiar phrases. Optic ataxics "exhibit far less visuomotor deficits when they reach and grasp familiar objects" (p. 173) (Rossetti et al. 2003). Fourth, the lesion locations for the two syndromes are in the same relative vicinity. Conduction aphasics have lesions that involve the inferior parietal lobe/superior temporal lobe (area Spt, I would argue). Optic ataxics have lesions that involve the superior posterior parietal lobe.
Optic ataxia is the poster child of "dorsal stream deficits". I have been arguing for some time that conduction aphasia is also best conceptualized as a dorsal stream deficit, but affecting a different output modality, the vocal tract (e.g., Hickok et al. 2003). The functional parallels between conduction aphasia and optic ataxia provide further evidence for this proposal.
One interesting consequence of this situation is that it opens up the possibility of cross-fertilization between these rather disparate domains of research. What can we learn about conduction aphasia from research on optic ataxia, and vice versa?
References
Hickok, G., Buchsbaum, B., Humphries, C., & Muftuler, T. (2003). Auditory–Motor Interaction Revealed by fMRI: Speech, Music, and Working Memory in Area Spt. Journal of Cognitive Neuroscience, 15(5), 673-682. DOI: 10.1162/089892903322307393
Rossetti, Y., Pisella, L., & Vighetto, A. (2003). Optic ataxia revisited: Visually guided action versus immediate visuomotor control. Experimental Brain Research, 153(2), 171-179. DOI: 10.1007/s00221-003-1590-6