Thursday, March 5, 2015

Why computational cognitive scientists can continue their work despite rumors of their field's demise

The cognitive revolution (or better, information processing revolution) rejected the idea that behavior could be understood without reference to a contribution from the mind/brain.  Through decades of experimentation and theory development, we have come to appreciate that the mind/brain works by computing (or better, transforming) information available in the environment (or stored in the mind/brain itself) as a means to control behavior.  Call this the computational theory of mind.  Models in this framework often abstract away from particular instances (tasks, experiences, actions) to develop general accounts of how the brain computes (transforms information), typically expressed in mathematical symbols or other representational notation.

Radical embodied cognition rails against this view and makes arguments along these lines:

Computational/symbolic/mathematical models are descriptions of some phenomenon.  For example, a falling apple doesn't actually compute the gravitational force as understood mathematically.  The mind is the same. Just because you can describe, say, aspects of movement according to Fitts's law doesn't mean the brain actually computes the formula.  And by generalization, just because we can describe lots of mental functions using computational/symbolic/mathematical models doesn't mean the brain computes or processes symbols. Therefore, the mind doesn't compute; computational models are barking up the wrong tree; we need a new paradigm.
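
(For readers who don't know it, Fitts's law is just such a descriptive formula: movement time grows with the log of target distance over target width.  Here is a minimal sketch in Python; the coefficients a and b are illustrative placeholders, not fitted values, since in practice they would be estimated from a person's movement data.)

    # Fitts's law: movement time increases with the "index of difficulty" log2(2D/W).
    import math

    def fitts_movement_time(distance, width, a=0.1, b=0.15):
        """Predicted movement time in seconds; a and b are illustrative, not fitted."""
        return a + b * math.log2(2 * distance / width)

    print(fitts_movement_time(distance=0.30, width=0.02))  # ~0.84 s for a small, far target

The formula predicts the behavior rather well without any claim that the nervous system evaluates it.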

Putting aside debates about what counts as computation, here's why these sorts of arguments don't change the computational cognitive scientist's research program one bit.

Falling apples don't compute, but an abstract mathematical description of the force behind the behavior led to great scientific progress.  It is these abstract mathematical descriptions that have pushed physics to such heights of understanding.  If physicists rejected their theories just because apples don't compute, we probably would be too busy tending the farm to debate this silliness.  Therefore, modeling cognition using abstract computational systems can (has!) lead (led!) to great scientific progress.  Even if the mind isn't literally crunching X's and Y's, there is great value in modeling it this way.
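
(To make the analogy concrete, here is the sort of description at issue, sketched with the standard constants; the apple, needless to say, evaluates none of it.)

    # Newton's law of universal gravitation: F = G * m1 * m2 / r^2.
    # The description is evaluated here by us, not by the apple.
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    m_apple = 0.1        # kg
    m_earth = 5.972e24   # kg
    r = 6.371e6          # distance to Earth's center, m

    print(G * m_apple * m_earth / r**2)   # ~0.98 N, roughly the apple's weight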

No computational cognitive scientist (that I know) actually believes the mind works precisely, literally as their models hold.  Chomskians don't believe neuroscientists will find linguistic tree structures lurking in dendritic branching patterns, nor do Bayesians expect to find P(A|B) spelled out by collections of neurons doing Y-M-C-A dance moves.  Rather, we understand that these ideas have to be implemented by massive, complex neural networks structured into a hierarchy of architectural arrangements bathed in a sea of chemical neuromodulators and modified according to principles such as spike-timing-dependent plasticity.  No one (that I know) is foolhardy enough to believe that the relation between our computational models and neural implementation is literal, transparent, or simple.  In short, computational cognitive scientists use their models in exactly the same way physicists use math. To reject this approach because mathematical symbols aren't literally lurking in the brain is foolish.

Cognitive neuroscientists, also disparaged by the embodieds, are working on the linking theories, asking how tree structures or prior probabilities might be implemented in neural networks.  Not surprisingly, the neural implementation models don't literally contain symbols. Instead they contain units (e.g., neurons) arranged into architectures, with particular connection patterns, nested oscillators, modulators, and so on, and often modeled after real brain circuits as best we understand them.  We are doing well enough at neurocomputational modeling to simulate all kinds of complex behaviors.
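
To give a feel for the distinction, here is a toy sketch of my own (not anyone's published model): the same cue-combination estimate written first as explicit Bayesian arithmetic and then as nothing but a unit with connection weights.  The numbers are invented.

    # Two noisy cues to the same quantity (say, seen and felt hand position).
    # Computational level: the Bayes-optimal estimate is the precision-weighted average.
    x_vision, var_vision = 10.0, 1.0
    x_touch, var_touch = 14.0, 4.0

    posterior_mean = ((x_vision / var_vision + x_touch / var_touch)
                      / (1 / var_vision + 1 / var_touch))

    # "Implementation" level (toy): no P(A|B) anywhere, just a linear unit whose
    # connection weights happen to equal the normalized precisions.
    w_vision = (1 / var_vision) / (1 / var_vision + 1 / var_touch)
    w_touch = (1 / var_touch) / (1 / var_vision + 1 / var_touch)
    unit_output = w_vision * x_vision + w_touch * x_touch

    print(posterior_mean, unit_output)   # both 10.8: same computation, no probability symbols in the second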

I respect that radical embodieds want to see how much constraint on cognitive systems the environment and the body can provide and that they want a more realistic idea of how the mind physically works (in which case I suggest studying neuroscience rather than polar planimeters).  We have learned some things from this embodied/ecological approach.  But given that even its subscribers don't deny that the mind/brain contributes something, we still need models of what that something is.  And this is what computational cognitive scientists have been working on for decades with much success.
Carry on, you computational people.  Let's check back in with the radical embodieds in 2025 to see how far they've gotten in figuring out attention, language, memory, decision making, perceptual illusions, motor control, emotion, theory of mind, and the rest. If they have made some progress, and I expect they will, we can then update our models by adding a few body parts and letting our robots roam a little bit more.


9 comments:

William Matchin said...

I feel that Fodor & Pylyshyn 1988 'Connectionism and cognitive architecture: a critical analysis' is worth a read here - I keep feeling that these debates were already had generations ago.

Yohan said...

Good post!

What I find most baffling about the radical embodied cognition / ecological psychology folks is their antipathy to 'representations'.

http://mindhacks.com/2015/03/05/radical-embodied-cognition-an-interview-with-andrew-wilson/

I don't see how they can argue that retinotopic or tonotopic maps are anything other than representations. Perhaps the issue is purely semantic?

Unknown said...

@Han

Well, is the retinotopic map the representation, or is the pattern of neural activity upon that map the representation? I assume you mean the latter, but the existence of these maps does not explain how we see, at least in the sense of the transformation of representations from retina to V1 and onwards to the dorsal and ventral streams, which is the standard model taught in perceptual psychology. We know this cannot be the case because eye tracking experiments indicate we only ever look at a small fraction of a scene, yet we perceive a whole scene. The standard representationalist response is that top-down processes mediate the retinotopic representation or that the representation is neurally distributed. Nobody is denying the existence of retinotopic maps on the visual cortex; the question is what role they play in the act of seeing. As was pointed out in your link, there are credible alternatives to the representationalist accounts, such as O’Regan & Noe’s sensorimotor account of vision and visual consciousness: http://postcog.ucd.ie/files/oregan.noe_.pdf

Despite what Greg seems to claim, the stance you take on embodiment and representations will have an impact on the type of research done. If you think visual consciousness is about the transformation of representations in the brain and body, then you’ll look for neural activity that encodes those processes. If you think the world is the representation the brain uses, then you might look for neural activity that supports the functional relations between brain, body and world. That’s a pretty big difference in how you interpret functional brain imaging (for example) and how you design experiments in the first place.

I'm still not sure what I think about representational accounts of cognition myself, but there's certainly a lot to learn from embodied critiques.

Yohan said...

Thanks Michael! I think that's a very fair point.

I think there is some confusion about the meaning of the word "representation" in psychology and neuroscience. In psychology and philosophy it seems to have something to do with subjective perception. But neuroscientists often use it in a purely instrumental/functional sense, as is the case with a motor map (which almost by definition has nothing to do with perception). For many neuroscientists, a representation is a concordance between neural activity and some external or bodily phenomenon.

I've always liked the embodied cognition idea as a philosophy, but I've never actually seen a computational model built on non-representational principles.

Even the simplest neural models I have built or read about start with some kind of mapping: a tonotopic mapping, a head direction mapping, a behavioral plan map etc.
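
To make concrete the kind of mapping I mean, here's a toy sketch (purely illustrative, nothing fitted to data) of head-direction cells: each unit's firing is a simple function of heading, and the "representation" is just that concordance between activity and a bodily variable.

    # Toy head-direction "map": each unit fires most at its own preferred heading.
    import math

    n_units = 8
    preferred = [i * 2 * math.pi / n_units for i in range(n_units)]

    def rates(heading):
        """Firing rates: rectified cosine tuning around each unit's preferred heading."""
        return [max(0.0, math.cos(heading - p)) for p in preferred]

    def decode(r):
        """Population-vector readout of the heading carried by the activity pattern."""
        x = sum(ri * math.cos(p) for ri, p in zip(r, preferred))
        y = sum(ri * math.sin(p) for ri, p in zip(r, preferred))
        return math.atan2(y, x)

    print(decode(rates(1.0)))   # ~1.0 radian: the heading can be read back off the activity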

What would it mean to avoid these representations? Are you aware of any computational models that employ a non-representational framework?

Greg Hickok said...

What makes you excited about the embodied cognition philosophy?

Unknown said...

@ Greg

I think the falling apple analogy is a poor one because nobody is arguing that apples have agency, so of course nobody is arguing that an apple literally manipulates symbols and does computations as it falls.

However most cognitive neuroscience does rest upon an assumption that that is what is going on in the human brain – literally. Not that there’s a little homunculus scratching out numbers on a blackboard in our heads, but that the neural activity plays essentially the same functional role. That’s the beauty and power of the ‘multiply realisable’ aspect of this type of functionalism. A homunculus might write 1 and 0, or a computer might calculate the same figures, whereas in the brain, a neuron’s action potential triggers or doesn’t trigger. They’re all essentially the same activity, or so the argument goes.
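
A toy illustration of that point (my own, not from the literature): the same logical function realised once as explicit symbol manipulation and once as a threshold "neuron". Functionally they are interchangeable.

    # The same function, multiply realised: as symbols and as a threshold unit.
    def and_symbolic(a, b):
        return a and b                       # explicit logical symbols

    def and_neuron(a, b):
        # A McCulloch-Pitts-style unit: weighted inputs, fire if the sum clears threshold.
        return int(1.0 * a + 1.0 * b >= 1.5)

    for a in (0, 1):
        for b in (0, 1):
            assert bool(and_symbolic(a, b)) == bool(and_neuron(a, b))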

To get back to the apple analogy, computation is a powerful way of describing how an apple falls. However, there’s a danger of actually computing very little about the process you’re trying to study. For example, a computational account of the patterns of red and green of the apple will tell us little about how it falls. If the representational account of vision and visual consciousness isn’t ‘correct’ (or usefully described in a representational way) as O’Regan & Noe argue, then a lot of the computational work is wasted in the same way.

So I don’t think many embodied cognitive scientists argue that a computational approach isn’t useful per se, just that if it’s using a representational model, it can be a waste of time.

@Han

I’m not sure what you mean by computational and neural models. If you mean cognitive models that use mathematical formulae and data manipulation to calculate some sort of output and thus model behaviour, then it’s difficult to imagine those without representations. Likewise with connectionist models: a two-layer network is arguably not necessarily representationalist, but that’s pretty difficult to argue with 3+ layers. But those fields are designed with representations in mind. Likewise, if you mean models of neural activity, then as most cognitive neuroscience is representational it’s going to be difficult to imagine neural models without them! I think one of Andrew Wilson’s points is that most researchers don’t even try.
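
Here is a rough sketch of the structural difference I have in mind, with arbitrary made-up weights: a two-layer net maps input straight to output, whereas adding a hidden layer introduces an intermediate pattern of activity that representationalists naturally read as an internal representation.

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def two_layer(x1, x2):
        # Input units connect directly to the output unit: nothing "in between".
        return sigmoid(0.8 * x1 - 0.5 * x2)

    def three_layer(x1, x2):
        # Hidden units form an intermediate activity pattern between input and output,
        # the thing usually glossed as a (distributed) internal representation.
        h1 = sigmoid(1.2 * x1 + 0.7 * x2)
        h2 = sigmoid(-0.9 * x1 + 0.4 * x2)
        return sigmoid(1.5 * h1 - 1.1 * h2)

    print(two_layer(1.0, 0.0), three_layer(1.0, 0.0))   # weights are illustrative only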

Perhaps a deeper issue might be your treatment of sensory and motor activity as being essentially separate and mediated by neural activity – the cognitive sandwich. Now it’s perfectly possible to be critical of the cognitive sandwich model and still use representations, but I think the cognitive sandwich promotes the use of representations in a way that other models, such as ecological or enactive models, don’t. If you describe a system as input-processing-output, representations seem to fit very nicely into that, especially if the processing part is as multi-layered and complex as the human brain.

However, if you look on perception/action as being a continuous process that is distributed beyond the brain and the organism, then the obviousness and unassailability of representations seems to wither away somewhat.

ikbol said...

Greg,

If you're doing only mathematical analyses of the world, then you're restricted. You're literally putting the world into neat, regular boxes, and they're artificial - the world isn't made of neat mathematical boxes.

Let's say you wish to compute the paths of these bodies - natural human bodies -

http://www.bbc.co.uk/staticarchive/075dd11c11bfb9ecc7aa8aeac47749e343ad5e68.jpg

How do you do that? Compute the future motion of these bodies from their present still positions on the canvas? Because you can do that - you can get up and dance like every one of those bodies. A pretty remarkable feat given that you have only the figures/outlines of those bodies - extremely limited info.

The odds are, I suggest, we have a "body computer" - a "morphological computer" - that computes the world by mapping other bodies onto its own. You configure your body here according to the figures painted (map your body onto theirs) - and by positioning your body that way you are able to tell where they're going to move, how fast they're likely to move, and the manner of their movement - based on stored configurational memories of your own body.

If you're going to suggest that you could do this mathematically - or that there would be any point to doing it mathematically - I would say you're nuts.

And the issue is of enormous technological importance, because AGI/strong AI has been stillborn for 50 years: maths and logic can't perceive any kind of natural bodies that aren't extremely homeomorphic.

And isn't the idea of a mind computer really a fiction? Mind began in living creatures as a neural network - isn't the brain merely the centre of the human body's neural network - and quite inseparable from, and nonfunctional without, the whole network? And isn't even a standalone computer attached to a body of sorts with which it does things - like a screen on which it forms images?

Doesn't thinking that computation can only mean digital, logicomathematical computation show a desperate lack of technological imagination?

My guess is we're body/morphological computers first and last - and digital computation is just an extremely useful extension - of our digits/fingers.

ikbol said...

P.S. The body computer doesn't work just by comparison with previous body configurations.

By configuring your body into new (as well as old) positions, you can work out the properties of other bodies. So, for example, if you've never seen Charlie Chaplin before, a few seconds' exposure to film of him can give you his walk - and enable you to predict how he might walk in situations different from those you've seen. It also enables you to understand and empathise with the emotions attached to his walk.

The idea that a digital computer could ever do this is absurd - and basically a rationale of Luddism.

Yohan said...

@ikbol

There seems to be some confusion about the role of mathematics in science. Not all scientists confuse the map with the territory. We use computation to understand what real neural networks do, and then make precise analogies between computations on our computers and processes involving neural activity.

All these examples — Charlie Chaplin etc — constitute elaborate 'arguments from incredulity'. Imagine someone in the 19th century saying 'math can't possibly have anything to do with heredity'. That has been proven wrong by population genetics.

Just because our methods haven't yielded success yet doesn't mean they won't in the future. The whole foundation of science is that the universe is intelligible in terms of mathematics. Why should we give up on this idea, especially at a time when artificial neural networks and robotics are getting really interesting?