Monday, September 8, 2014

Why "embodied simulation" is a vacuous concept - from Myth of Mirror Neurons

From "Chapter 6: The Embodied Brain" in The Myth of Mirror Neurons:

To say that a cognitive operation is accomplished via simulation doesn't simplify the problem; it just hands it off to another domain of inquiry, in this case sensory and motor information processing. It's akin to a hypothetical rogue head-of-state who calls in his top physicists and demands that they work out how to build a nuclear weapon. The physicists come back a week later and proclaim that they've got it all figured out:

PHYSICISTS: We have determined that Oppenheimer and his team have succeeded in building a nuclear weapon. All we need to do is simulate what they did.
HEAD-OF-STATE: Great! So how did they do it?
PHYSICISTS: We don’t know. But simulating their methods will definitely work.
MIRROR NEURON THEORISTS: We have determined that when we carry out an action of our own we understand the meaning of that action. When we observe someone else’s action, all we need to do is simulate that action in our own motor system and we will achieve understanding.
SKEPTIC: Yes, but how do we understand the meaning of our own actions in the first place?
MIRROR NEURON THEORISTS: We don’t know, but we know that simulating that process will work.


Jonathan Drucker said...

A respectful dissent, from someone who greatly appreciates your work:

Grounding cognitive processing in modality-specific systems doesn't explain cognition, but it does accomplish two very important things: 1) It defines the language in which cognitive operations take place. Symbolic formalisms are attractive, but although they've proved useful in computer science, amodal symbol systems don't go very far in explaining human cognition. By grounding cognition in sensorimotor systems, we've established a cooperative venture between cognitive and sensory neuroscience. That's not to say we've figured much out yet, but it points us in the right direction.
2) It solves the philosophical grounding problem: the meaning of thought is derived from its isomorphisms with the real world, and with our conscious experience of it. What more meaning can be derived?

Some sources:

On simulation:
Barsalou, L. W., Simmons, W. K., Barbey, A. K., & Wilson, C. D. (2003). Grounding conceptual knowledge in modality-specific systems. Trends in Cognitive Sciences, 7(2), 84–91.

On deriving meaning by grounding cognitive symbols:
Harnad, S. (1990). The symbol grounding problem. Physica D, 42, 335–346.

And of course:
Hofstadter, D. R. (2000). Gödel, Escher, Bach (p. 168). Penguin.

Greg Hickok said...

Thanks for your comment, Jonathan. As a vocal dissenter myself, I always welcome opposing views and lively discussion! So let me first agree, then disagree with you. :-)

As I pointed out in my book, it could be *very* helpful to know that we should be looking in sensory and motor systems for "high-level cognitive" operations. In that sense, the embodied view may point us in the right direction. What I'm really railing against is the idea that if it can be demonstrated that, say, concepts are embodied in sensory-motor systems, our job is done. Most, if not all, embodied cognition research goes only this far and so isn't very helpful at this stage. I'm waiting for deeper progress.

Where I think you've gone a bit astray, along with many embodied theorists, is in the idea that by moving complicated things down the cognitive hierarchy, things get simpler, less abstract, more resonant with the "real world." The computational problem is no less complicated (how and why do we categorize, for example?); it has simply moved to a different brain location. If anything, embodied cognition complicates sensorimotor systems.

I also think the whole field is hung up on a 30-year-old Fodorian version of mental/neural computation: modular, fast-acting, impenetrable input systems feeding into an abstract symbolic "cognitive" component. This was neither the original conception of the "cognitive revolution" (originally and more accurately referred to as the *information processing* approach to the mind), nor is it what many of us practicing (non-embodied) students of the mind/brain believe. All of this is detailed in the book.

Finally, that there might/should be an "isomorphism with the real world" is probably an illusion generated by our computational minds.

Jonathan Drucker said...

I think we may be on the same page now.

"What I'm really railing against is the idea that if it can be demonstrated that, say, concepts are embodied in sensory-motor systems, our job is done."

I'd agree. Some outstanding issues:
1) How categorization works
2) What the rest of the cortex (i.e., association areas that are indifferent to input modality) is doing
3) How embodied concepts are manipulated (representation isn't enough - how are they retrieved? How are useful ones selected for further processing? How are they combined to form new concepts?)

I also agree that things aren't simplified when we move "down the cognitive hierarchy" (side note: treating cognition and the brain as hierarchies is still controversial but, in my opinion, paramount; see the strides our AI friends are making with Deep Learning). To me, the most important consequence of embodiment is that the complexity is of a similar variety at each level. That is, just as regularities in sensory input are abstracted to compute object categories, so are regularities at an object-oriented level abstracted to form more abstract categories.
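That repeating abstraction step can be sketched in code. This is a toy illustration only: the "detectors" below are invented for the example, not taken from any model discussed here. The point is just that each level detects regularities in its input and passes a more abstract description upward.

```python
# Toy illustration of hierarchical abstraction: each level performs the
# same kind of operation -- find regularities in its input and emit a
# more abstract description. The detectors are made up for illustration.

def detect_edges(pixels):
    # Level 1: regularities over raw input -> local features.
    return ["edge" if a != b else "flat" for a, b in zip(pixels, pixels[1:])]

def detect_shapes(edges):
    # Level 2: regularities over features -> parts.
    return ["corner" if e == "edge" else "surface" for e in edges]

def categorize(shapes):
    # Level 3: regularities over parts -> an object-level category.
    if shapes.count("corner") > len(shapes) // 2:
        return "angular object"
    return "smooth object"

# The same abstraction step repeats at every level of the hierarchy:
representation = [0, 1, 0, 1, 1, 0]
for level in (detect_edges, detect_shapes):
    representation = level(representation)
print(categorize(representation))
```

The design point is the uniformity: nothing changes qualitatively as you ascend the hierarchy, which is the claim the paragraph above is making.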

Along the same lines, I (and, to be honest, most people nowadays) reject Fodor's idea of modularity. Most of his nine features (particularly those concerning information flow and access) are inconsistent with the massive parallelism of the brain's architecture. I wouldn't say that this information isn't abstracted to the point that it no longer resembles the real world, and I wouldn't say that sensorimotor systems operate independently of one another (think of synesthesia). What I would say is that it's not turtles all the way down: at some point, all of cognition can be traced back to the periphery. Maybe the big question we should be asking is how the abstraction process works, and how homogeneous it is across systems and levels of the cognitive hierarchy.

I should read your book.

Greg Hickok said...

I propose a hybrid model in my book. Information can be represented at many levels of detail/abstraction. What level you access depends on the task: trying to figure out whether to run or just stand and gawk at the sight of a tiger walking down 5th Avenue tethered to a leash? You need to decide whether to "call up" associations with wild animals or pets. Or trying to remember whether the tiger you saw today on 5th Ave was striped and, if so, whether the stripes ran parallel or perpendicular to the axis of the tail? Then you might have to consult lower-level visual representations. The neural evidence--detailed in the book--supports a hybrid model.

We need to think of cognition (er, information processing) not in terms of peripheral or central, embodied or symbolic, or even my favorite, dorsal or ventral, but rather in terms of the computational tasks that the brain needs to perform. Approached in this way, all the rest will sort itself out.

William Matchin said...

@ Jonathan,

"it solves the philosophical grounding problem: the meaning of thought is derived from its isomorphisms with the real world, and with our conscious experience of it. What more meaning can be derived?"

Define "real world" for me. Physicists are nowhere close to working this out yet: whether the "real world" is composed of things like bosons or fermions, that sort of thing. Certainly your experience of the world isn't "isomorphic" to bosons or fermions, is it? Our experiences happen to involve things like a sense of three-dimensional space, rigid objects, color, and motion, but these are already highly abstract and hopelessly "cognitive" constructs that are likely to be innate and highly species-specific.

If you want to say that some concepts are derived from other, sensory-motor concepts, then you should recognize that this simply says that some innate concepts not derived from the environment are the basis of other, non-innate concepts. This is an old empiricist philosophical tradition, probably the dominant latent assumption of almost all cognitive scientists for quite some time. Note that (1) embodied cognition is merely a new player in this philosophical tradition, and (2) there are arguments against this position that have largely gone unanswered, namely the work of Fodor, who pointed out that "definitions" don't work, or at least that nobody has shown how they work.

Fodor, J. A., Garrett, M. F., Walker, E. C., & Parkes, C. H. (1980). Against definitions. Cognition, 8(3), 263–367.

ikbol said...

Hi Greg,

"What I'm really railing against is the idea that if it can be demonstrated that, say, concepts are embodied in sensory-motor systems, our job is done"

I doubt anyone actually says that. Lakoff doesn't, and Bergen, whom I've just read, definitely doesn't; he explicitly says something like "it's just the beginning."

So let me carry on the incomplete job (off the top of my head) - embodied, image-schematic simulation is most probably the basis of MOST reasoning in humans and animals. (Lakoff only begins to speculate that it plays a significant part - I'm going way further).

The overwhelmingly central way to reason about whether the objects of the world fit together - whether "the hand fits the glove" - is via "embodied simulation". I'd prefer to say by "figurative reasoning". If you want to work out whether the hand fits the glove, or the shoe fits the footprint, or any object/action fits any other object/action, you have to do this (inside your head) by first mapping the figures of those objects onto each other.
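For concreteness, here is a deliberately crude sketch of what a mechanical version of such a figure-fitting check might look like. The point sets and the bounding-box test are illustrative assumptions only; real figurative reasoning would involve far richer representations.

```python
# Toy "figure matching": does one outline fit inside another?
# Shapes are represented as hypothetical 2-D point sets (their "figures").

def fits_inside(inner, outer):
    """True if every point of `inner` lies within the bounding box of `outer`."""
    xs = [x for x, _ in outer]
    ys = [y for _, y in outer]
    return all(min(xs) <= x <= max(xs) and min(ys) <= y <= max(ys)
               for x, y in inner)

hand = [(1, 1), (2, 3), (3, 1)]            # outline of the "hand"
glove = [(0, 0), (0, 4), (4, 4), (4, 0)]   # outline of the "glove"
print(fits_inside(hand, glove))            # the hand's figure fits the glove's
```

Note that even this minimal version operates on the figures (coordinates), not on the names H-A-N-D and G-L-O-V-E.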

Then you go in for "fully embodied real simulation" (!) - i.e. you conduct a physical experiment, and physically check the actual objects against each other.

The idea that physical reasoning about the world (and in the end there is nothing but physical reasoning) can be done by manipulating symbols is a crazy illusion of a dying textual civilisation, and no longer appropriate in an image-rich (nay, wealthy) multimedia civilisation. Symbols are just names of objects: you can only check whether objects fit together by manipulating first the figures (outlines) of those objects and then the actual objects (and not their names). H-A-N-D and G-L-O-V-E (or any other associated names) will not help you.

And that is how the whole of science proceeds. Perhaps you'd care to name a single scientific experiment that could have been reasoned about and conducted without image-schematic/figurative reasoning about the relevant objects, followed by physical experiment on the objects.

As Lakoff is fairly aware in Women, Fire, and Dangerous Things, the idea that figurative/schematic reasoning is central fundamentally challenges our present ideas of computation, because algorithms can't do figurative reasoning or "figure things/objects out." So there is HUGE vested resistance to these ideas. Cog sci is still hugely committed to symbolic/logo/math-centric reasoning.

Tough. We are going to have to become "literate" in imagistic/schematic/figurative reasoning (from being almost totally illiterate now). And then we are going to have to invent new kinds of computers that can deal directly in images (or "maps" as Ramachandran might approach it) and not just symbols.

Exciting new world, huh?

Mike Tintner

Greg Hickok said...

Thanks for your thoughts, Mike. Your point that embodied theorists admit they are just at the beginning of the endeavor is fair. Maybe my critique is overly harsh in that sense. But by the same token, you are overstating the computationalist view: no one thinks that literal symbolic variables are floating around in the brain. Re-read Newell et al.'s uber-symbolic classic, "The Logic Theorist," and you will find the following statement:

"A program is no more, and no less, an analogy to the behavior of an organism than is a differential equation to the behavior of the electrical circuit it describes. Digital computers come into the picture only because they can, by appropriate programming, be induced to execute the same sequences of information processes that humans execute when they are solving problems. Hence, as we shall see, these programs describe both human and machine problem solving at the level of information processes."

It really is about information processing, not literal symbol manipulation.

By analogy, do you believe in the mathematically described laws of physics? Do you protest that E = mc^2 cannot possibly be a valid way to think about the relation between energy and mass because surely the physical world doesn't *really* compute the relation using such mathematical symbols? Of course not. We are all comfortable with the idea that mathematical formalisms can be used to *describe* the physical world. Why, then, when neuro- and cognitive scientists use the same formalisms to describe the mental world, is this "a crazy illusion of a dying textual civilisation"?

You say that "symbols are just names of objects - you can only check whether objects fit together by manipulating first the figures (outlines) of those objects and then the actual objects (and not their names). H-A-N-D and G-L-O-V-E (or any other associated names) will not help you."

So let's think a little harder about what manipulating the figures and the objects involves neurally. The first thing you have to do is code the visual image of a glove and the somatosensory/visual image of your hand in terms of patterns of spiking neurons. Oops! You're already computing and representing the images in a form that is different from, but "stands in" for, the physical object! (Even the simplest integrate-and-fire spiking neuron is described by an abstract mathematical formula.)
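To make that concrete, here is a minimal sketch of a leaky integrate-and-fire neuron. The parameter values are illustrative assumptions, not measurements; the point is that even encoding a constant input current as spike times is already a computation over a stand-in representation.

```python
# Leaky integrate-and-fire (LIF) neuron. The membrane potential V obeys
#   tau * dV/dt = -(V - V_rest) + R * I
# and the neuron emits a "spike" (then resets) whenever V crosses a
# threshold. All parameter values below are illustrative, not fitted.

def simulate_lif(current, t_max=100.0, dt=0.1,
                 tau=10.0, v_rest=-65.0, v_reset=-70.0,
                 v_thresh=-50.0, resistance=10.0):
    """Euler-integrate the LIF equation; return spike times in ms."""
    v = v_rest
    spikes = []
    for step in range(int(t_max / dt)):
        dv = (-(v - v_rest) + resistance * current) / tau
        v += dv * dt
        if v >= v_thresh:           # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset             # reset after the spike
    return spikes

# A constant input is re-represented as a train of spike times:
print(simulate_lif(current=2.0))
```

The spike train, not the stimulus itself, is what downstream areas receive; comparing "hand" and "glove" then means comparing neural codes, which is computation.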

So even to get to the point where you can do "figurative reasoning" you have to perform computations, "represent" information in the language of neurons (as opposed to the physical objects themselves), and further process that information to assess the relation between your neural codes for hand and glove. The brain is not capable of dealing "directly" with images.