Thursday, January 7, 2016
The positions will be based in Shanghai and are expected to start March 1, 2016, each for a two-year term with the possibility of extension. The focus of the cognitive neuroscience position includes, but is not limited to, the neural bases of speech and language, decision making, and memory. Qualified applicants are expected to hold a Ph.D. in Psychology, Neurolinguistics, Neuroscience, or another relevant quantitative discipline.
Successful candidates will receive globally competitive compensation and will have the opportunity to spend time at other NYU portal campuses (Abu Dhabi and New York). Applications will be reviewed until the positions are filled. To be considered, applicants must submit a CV and the names and contact information of three references. Please follow the link to apply: http://www.nyuopsearch.com/applicants/Central?quickFind=52765
If you have any questions, please e-mail firstname.lastname@example.org.
Friday, October 30, 2015
“The similarity between the motor representation generated in observation and that generated during motor behavior allows the observer to understand others’ actions, without the necessity for inferential processing.”
“neurons in F5 code the goal of the motor act [grasping, holding, tearing], regardless of how it is achieved.”
“The defining characteristic of F5 mirror neurons is that they fire in response to the presentation of a motor act, which is congruent with the one coded motorically by the same neuron.”
“the vast majority of F5 mirror neurons, termed broadly congruent, respond to different motor acts, provided that they serve the same goal (Gallese et al. 1996).”
“Thus, like the visual system, where, as postulated by Shepard (1984), resonating elements (neurons or neuronal assemblies) respond maximally to a set of stimuli, but are also able to respond to similar stimuli when they are incomplete or corrupt, a set of mirror neurons (broadly congruent) appears to resonate to all visual stimuli that have sufficient critical features to describe the goal of a given motor act.”
- Type 1 (12.5%): execution response=“highly specific” (e.g., grasping w/precision grip); observation response more general (precision or whole hand)
- Type 2 (82%): execution response=one goal (e.g., grasping); observation response > 1 goal (e.g., grasping or manipulating)
- Type 3 (5%): execution response=grasping; observation response=grasping with hand, grasping with mouth
There are more problems, which may apply even to the 3/92 cells with the right response properties for understanding, making their suitability for understanding questionable as well. Mirror neurons are sensitive to all sorts of features that have nothing to do with action understanding. Here's a list:
Thursday, October 29, 2015
First note that this touch-based "mirror mechanism" is quite different from so-called motor mirroring. The motor claim is non-trivial: perceptual understanding is not achieved by perceptual systems alone, but requires (or at least benefits from) the involvement of the motor system.
What about perceptual mirroring? At the most abstract level, the claim is this: perceptual understanding is based on perceptual processes. Not so insightful, is it? Perhaps it's even vacuous. But maybe this is too harsh an analysis. One could presumably understand the concept of someone being touched on the arm without involving an actual somatosensory representation. So maybe it is non-trivial, insightful even, that we do activate our touch cortex when observing touch. In fact, for the sake of argument, let's grant that the empirical observation is true and that it does contribute to our understanding.
What might it add to understanding? Or put differently, how much does that somatosensory "simulation" add to our understanding of an observed touch? Consider the following narrative scenarios.
Scenario #1: After he expressed his affection during the romantic dinner, the man reached out and touched the girl gently on the arm.
Scenario #2: After subduing his victim during the home invasion, the man reached out and touched the girl gently on the arm.
How much of our understanding of the meaning of that touch action is encoded in the somatosensory experience? Almost none of it. The "meaning" of the action is determined for the most part by the context as it interacts with the observed action. The touch wouldn't even have to actually happen, or it could occur on a different body part (all very different experiences from a somato standpoint!), and it wouldn't alter our understanding of the event. Yes, it's true that simulating the actual touch might add something, i.e., having a sense of what the actual gentle touch felt like on the arm, but what drives real understanding is the interpretation of that touch in its context, not the somatotopically specific touch sensation itself.
Conceptualized in these terms, to say that somatosensory simulation contributes to understanding of others' touch experiences is like saying that "acoustic simulation" of the voiceless labiodental fricative in the experience of hearing "fuck you" contributes to the understanding of that phrase. Yes, I suppose the /f/ plays a role, but how it combines with "uck you" and more importantly who said it to whom and under what circumstances is where the meat of the understanding will be found.
It's interesting and worthwhile to understand all the cognitive and neural bits and pieces that contribute to understanding. Lowish-level embodied "simulation," whether motor or sensory, may have a role to play. But it is important to understand these effects in the broader context. Don't for a second think that we've cracked the cognitive code for understanding just because M1 or S1 activates when we see someone do something.