Friday, October 30, 2015
“The similarity between the motor representation generated in observation and that generated during motor behavior allows the observer to understand others’ actions, without the necessity for inferential processing.”
“neurons in F5 code the goal of the motor act [grasping, holding, tearing], regardless of how it is achieved.”
“The defining characteristic of F5 mirror neurons is that they fire in response to the presentation of a motor act, which is congruent with the one coded motorically by the same neuron.”
“the vast majority of F5 mirror neurons, termed broadly congruent respond to different motor acts, provided that they serve the same goal (Gallese et al. 1996).”
“Thus, like the visual system, where, as postulated by Shepard (1984), resonating elements (neurons or neuronal assemblies) respond maximally to a set of stimuli, but are also able to respond to similar stimuli when they are incomplete or corrupt, a set of mirror neurons (broadly congruent) appears to resonate to all visual stimuli that have sufficient critical features to describe the goal of a given motor act.”
- Type 1 (12.5%): execution response = “highly specific” (e.g., grasping with a precision grip); observation response = more general (precision or whole hand)
- Type 2 (82%): execution response = one goal (e.g., grasping); observation response = more than one goal (e.g., grasping or manipulating)
- Type 3 (5%): execution response = grasping; observation response = grasping with the hand or grasping with the mouth
There are further problems that may apply even to the 3/92 cells that do have the right response properties, making their suitability for action understanding questionable. Mirror neurons are sensitive to all sorts of features that have nothing to do with action understanding. Here's a list:
Thursday, October 29, 2015
First, note that this touch-based "mirror mechanism" is quite different from so-called motor mirroring. The motor claim is non-trivial: perceptual understanding is not achieved by perceptual systems alone, but requires (or at least benefits from) the involvement of the motor system.
What about perceptual mirroring? At the most abstract level, the claim is this: perceptual understanding is based on perceptual processes. Not so insightful, is it? Perhaps it's even vacuous. But maybe this is too harsh an analysis. One could presumably understand the concept of someone being touched on the arm without involving an actual somatosensory representation. So maybe it is non-trivial, insightful even, that we do activate our touch cortex when observing touch. In fact, for the sake of argument, let's grant that the empirical observation is true and that it does contribute to our understanding.
What might it add to understanding? Or put differently, how much does that somatosensory "simulation" add to our understanding of an observed touch? Consider the following narrative scenarios.
Scenario #1: After he expressed his affection during the romantic dinner, the man reached out and touched the girl gently on the arm.
Scenario #2: After subduing his victim during the home invasion, the man reached out and touched the girl gently on the arm.
How much of our understanding of the meaning of that touch action is encoded in the somatosensory experience? Almost none of it. The "meaning" of the action is determined for the most part by the context as it interacts with the observed action. The touch wouldn't even have to actually happen, or it could occur on a different body part (all very different experiences from a somatosensory standpoint!), and it wouldn't alter our understanding of the event. Yes, it's true that simulating the actual touch might add something, i.e., having a sense of what the actual gentle touch felt like on the arm, but what drives real understanding is the interpretation of that touch in its context, not the somatotopically specific touch sensation itself.
Conceptualized in these terms, to say that somatosensory simulation contributes to understanding of others' touch experiences is like saying that "acoustic simulation" of the voiceless labiodental fricative in the experience of hearing "fuck you" contributes to the understanding of that phrase. Yes, I suppose the /f/ plays a role, but how it combines with "uck you" and more importantly who said it to whom and under what circumstances is where the meat of the understanding will be found.
It's interesting and worthwhile to understand all the cognitive and neural bits and pieces that contribute to understanding. Lowish-level embodied "simulation," whether motor or sensory, may have a role to play. But it is important to understand these effects in the broader context. Don't for a second think that we've cracked the cognitive code for understanding just because M1 or S1 activates when we see someone do something.
Tuesday, October 13, 2015
I've pointed out previously that embodied effects are small at best. Here's an example--a statistically significant crossover interaction--from a rather high-profile TMS study that investigated the role of motor cortex in the recognition of lip- versus hand-related movements during stimulation of lip versus hand motor areas:
Effect size = ~1-2%. This is typical of these sorts of studies and begs for a theory of the remaining 98-99% of the variance.
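To make that concrete, here's a toy calculation (the cell means are made up, not the study's actual numbers) showing what a 1-2% crossover interaction amounts to in a 2x2 TMS design:

```python
# Hypothetical percent-correct cell means for a 2x2 design
# (observed movement type x stimulated motor area); illustrative only.
acc = {
    ("lip_movement",  "lip_M1_TMS"):  85.0,
    ("lip_movement",  "hand_M1_TMS"): 86.0,
    ("hand_movement", "lip_M1_TMS"):  86.0,
    ("hand_movement", "hand_M1_TMS"): 85.5,
}

# Crossover interaction contrast: how much the TMS-site effect differs
# between lip-related and hand-related stimuli.
lip_effect  = acc[("lip_movement",  "lip_M1_TMS")] - acc[("lip_movement",  "hand_M1_TMS")]
hand_effect = acc[("hand_movement", "lip_M1_TMS")] - acc[("hand_movement", "hand_M1_TMS")]
interaction = lip_effect - hand_effect

grand_mean = sum(acc.values()) / len(acc)
print(f"interaction contrast: {interaction:+.1f} percentage points")
print(f"relative to a grand mean of {grand_mean:.1f}% correct, "
      f"that's a ~{abs(interaction) / grand_mean * 100:.1f}% modulation of performance")
```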
So, let me throw out a challenge to the embodied cognition crowd in the context of well-worked-out non-embodied models of speech production. Let's take a common set of data, build our embodied and non-embodied computational models, and see how much of the data is accounted for by the standard model versus the embodied model (or, more likely, the embodied component of a more standard model).
Here is a database that contains naming data from a large sample of aphasic individuals. The aim is to build a model that accounts for the distribution of naming errors.
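For concreteness, fitting a model to this kind of data usually comes down to comparing its predicted response proportions against a patient's observed proportions, category by category. Here's a minimal sketch; the category labels are the usual picture-naming ones and the numbers are hypothetical, not taken from the database:

```python
import math

# Standard naming-response categories; the proportions below are hypothetical,
# not drawn from the database or from any particular patient or model.
categories = ["correct", "semantic", "formal", "mixed", "unrelated", "nonword"]
observed  = [0.80, 0.05, 0.06, 0.02, 0.02, 0.05]   # one patient's response profile
predicted = [0.78, 0.06, 0.07, 0.02, 0.03, 0.04]   # one model's best-fitting prediction

# Root-mean-square deviation between observed and predicted proportions:
# the smaller the RMSD, the better the model accounts for the error distribution.
rmsd = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / len(categories))
print(f"RMSD = {rmsd:.4f}")
```

Score an embodied model and a non-embodied model against the same patients with the same statistic, and the comparison becomes straightforward.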
Here is a standard, non-embodied model that we have called SLAM for Semantic-Lexical-Auditory-Motor. (No, the "auditory-motor" part isn't embodied in the sense implied by embodied theorists, i.e., the level of representation in this part of the network is phonological and abstract.) Here's a picture of the structure of the model:
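Beyond the picture, a minimal code sketch can convey the general shape of this kind of architecture. To be clear, this is not the published SLAM implementation; the layer sizes, connectivity, weights, noise level, and update rule below are all illustrative assumptions, just to show what spreading activation across semantic, lexical, auditory, and motor layers looks like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes: semantic features, lexical (word) nodes,
# auditory-phonological nodes, motor-phonological nodes.
n_sem, n_lex, n_aud, n_mot = 10, 5, 8, 8

# Illustrative connection weights (used in both directions, as in
# interactive-activation models).
W_sem_lex = rng.uniform(0.0, 0.1, (n_sem, n_lex))
W_lex_aud = rng.uniform(0.0, 0.1, (n_lex, n_aud))
W_aud_mot = rng.uniform(0.0, 0.1, (n_aud, n_mot))

decay, noise_sd, n_steps = 0.5, 0.01, 8

# Activation vectors for each layer.
sem = np.zeros(n_sem)
lex = np.zeros(n_lex)
aud = np.zeros(n_aud)
mot = np.zeros(n_mot)

# Picture naming starts with a jolt to the semantic features of the target word.
sem[:3] = 1.0

for _ in range(n_steps):
    # Spreading activation with decay, bidirectional flow, and Gaussian noise;
    # activations are clipped to stay between 0 and 1.
    new_sem = (1 - decay) * sem + W_sem_lex @ lex + rng.normal(0, noise_sd, n_sem)
    new_lex = (1 - decay) * lex + W_sem_lex.T @ sem + W_lex_aud @ aud + rng.normal(0, noise_sd, n_lex)
    new_aud = (1 - decay) * aud + W_lex_aud.T @ lex + W_aud_mot @ mot + rng.normal(0, noise_sd, n_aud)
    new_mot = (1 - decay) * mot + W_aud_mot.T @ aud + rng.normal(0, noise_sd, n_mot)
    sem, lex, aud, mot = (np.clip(v, 0, 1) for v in (new_sem, new_lex, new_aud, new_mot))

# The most active lexical node is "selected"; with enough noise or weakened
# weights, the wrong node wins and a naming error is produced.
print("selected lexical node:", int(np.argmax(lex)))
```

In a sketch like this, "lesioning" the model means lowering particular weights or raising the noise and watching how the distribution of selection errors shifts.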
Incidentally, Matt Goldrick argued in a forthcoming reply to the SLAM model paper that this fit represents a complete model failure because the patient made zero semantic errors whereas the model predicted some. This is an interesting claim that we had to take seriously and evaluate quantitatively, which we did. But I digress.
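For what a quantitative evaluation of that sort of claim can look like (hypothetical figures, not our actual analysis): if a model predicts a small per-trial semantic-error rate, you can ask how likely a run of zero observed semantic errors is under that prediction.

```python
# Hypothetical figures, purely to illustrate the kind of check involved.
predicted_rate = 0.01   # model-predicted probability of a semantic error on any one trial
n_trials = 175          # number of picture-naming trials for the patient

# Binomial probability of observing zero semantic errors if the model's rate is right.
p_zero = (1 - predicted_rate) ** n_trials
print(f"P(0 semantic errors in {n_trials} trials | rate = {predicted_rate}) = {p_zero:.2f}")
```

Whether a zero counts as a model failure or as an unremarkable sampling outcome depends on the predicted rate and the number of trials, which is why the claim needed a quantitative evaluation rather than an eyeball test.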
The point is that if you believe that embodied cognition is the new paradigm, you need to start comparing embodied models to non-embodied models to test your claim. Here we have an ideal testing ground: established models that use abstract linguistic representations to account for a large dataset.
My challenge: build an embodied model that beats SLAM. You've got about 2% room for improvement.