Wednesday, November 12, 2014

Computation at the neuron level -- where noncomputational embodied theories need to start

It seems that some embodied theorists see no need for computation or perhaps even information processing.  Rather than talking about, say, how interaural time difference (ITD) information can be used to compute spatial location, some embodied theorists want to say that spatial location is "perceived directly" given the physical signal as it passes through body-determined channels.  The brain is thought to bring little to the task in that the physical signal is not transformed but rather registers directly in neural systems.
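To make the contrast concrete, here's the kind of (massively simplified) computation a traditional account has in mind for the ITD case. This is just an illustrative sketch: the head width, the speed of sound, and the simple sine relation are textbook approximations of my own choosing, not anyone's actual model of the auditory system.

# Minimal sketch: recovering azimuth from an interaural time difference (ITD).
# Illustrative only -- HEAD_WIDTH and SPEED_OF_SOUND are assumed round numbers,
# and the sine model ignores diffraction around the head.
import math

SPEED_OF_SOUND = 343.0   # m/s, approximate
HEAD_WIDTH = 0.22        # m, assumed effective distance between the ears

def azimuth_from_itd(itd_seconds):
    """Map an ITD (positive = sound arrives at the right ear first) to azimuth in degrees."""
    ratio = itd_seconds * SPEED_OF_SOUND / HEAD_WIDTH
    ratio = max(-1.0, min(1.0, ratio))   # clamp to the physically possible range
    return math.degrees(math.asin(ratio))

# A 300-microsecond lead at the right ear puts the source roughly 28 degrees to the right.
print(azimuth_from_itd(300e-6))

The point of writing it out is that the ITD by itself is not a location; something has to map the one onto the other.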

These theorists have spent a fair amount of time talking about the body--the movement is called "embodiment" after all--but little time talking about what's going on at the neuronal level.  I say, point well taken with respect to the contribution of the body: you don't get ITDs in the first place without two ears and a head in between.  But I also say, it is time for embodied theorists to look at the next step in the "registration" of those physical signals: the function of individual neurons. (Actually this is the second step, the first being transducer organs such as the cochlea and photoreceptor cells).  Physical signals must be passed through neurons, which exhibit a complex relation between input and output.  Some would even go so far as to say neurons are transforming the signal, i.e., computing. Here's a quote that gives a sense of what's going on at the single neuron level:
Neurons take input signals at their synapses and give as output sequences of spikes. To characterize a neuron completely is to identify the mapping between neuronal input and the spike train the neuron produces in response. In the absence of any simplifying assumptions, this requires probing the system with every possible input. Most often, these inputs are spikes from other neurons; each neuron typically has of order N ~ 10^3 presynaptic connections. If the system operates at 1 msec resolution and the time window of relevant inputs is 40 msec, then we can think of a single neuron as having an input described by a ~ 4 x 10^4 bit word—the presence or absence of a spike in each 1 msec bin for each presynaptic cell—which is then mapped to a one (spike) or zero (no spike). More realistically, if average spike rates are ~10s^-1 the input words can be compressed by a factor of 10. In this picture, a neuron computes a Boolean function over roughly 4000 variables.  Aguera y Arcas et al. Neural Computation 15, 1715–1749 (2003)
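To get a feel for what "a Boolean function over roughly 4000 variables" means, here's the crudest possible caricature in code: a thresholded weighted sum over a 1000-cell-by-40-bin binary input word. Real neurons are vastly richer than this (that's the authors' point), and the random weights and the threshold below are arbitrary stand-ins, not biophysics.

# A crude caricature of the mapping in the quote: a binary "input word" (spike or
# no spike for each presynaptic cell in each 1-ms bin) mapped to one output bit
# (spike or no spike). The weights and threshold are arbitrary placeholders.
import numpy as np

N_PRESYNAPTIC = 1000   # ~10^3 presynaptic cells, per the quote
N_BINS = 40            # 40 ms of relevant input history at 1-ms resolution

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 1.0, size=(N_PRESYNAPTIC, N_BINS))  # assumed fixed synaptic weights
THRESHOLD = 50.0                                               # assumed firing threshold

def neuron_output(input_word):
    """input_word: 0/1 array of shape (N_PRESYNAPTIC, N_BINS); returns 1 (spike) or 0 (no spike)."""
    drive = float(np.sum(weights * input_word))
    return 1 if drive > THRESHOLD else 0

# ~10 spikes/s per input cell is about a 0.01 chance of a spike in each 1-ms bin.
example_input = (rng.random((N_PRESYNAPTIC, N_BINS)) < 0.01).astype(int)
print(neuron_output(example_input))

Whatever you call that mapping, it takes a 40,000-bit description of the input and turns it into something else, which is exactly the kind of transformation at issue.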
If you want neuroscientists and good old-fashioned cognitive scientists (GOFCS) to take you seriously, build some "embodied" models of whatever process you are interested in and let's see how far you get without transforming the information (and therefore morphing into a GOFCS).  For now, we don't see how you can get past even a single neuron without information processing, which renders your more fundamental claims pretty much vacuous.

Friday, November 7, 2014

Embodied robots -- Post #2 on Wilson & Golonka 2013

There's some cool stuff highlighted by W&G, including robots that tidy up without being programmed to do so, robots that walk (downhill) with only the power of gravity simply because their bodies were designed in the right way, and cricket robots that find the best mate automatically due to the architecture of the sound localization system. We've discussed sound localization previously, so let's focus on the other two examples.

Robots that tidy without the intention to do so or knowledge they did it.

Robots with two sensors situated at 45-degree angles on the robot's "head" and a simple program to avoid obstacles detected by the sensors will, after a while, tidy a room full of randomly distributed cubes into neat piles.

W&G conclude from this that,

Importantly, then, the robots are not actually tidying – they are only trying to avoid obstacles, and their errors, in a specific extended, embodied context, leads to a certain stable outcome that looks like tidying 
The point here is that the robots did not have a representation of tidying, a desire to tidy, or even any knowledge that they had tidied.  A complex "cognitive" behavior can emerge from "a single rule, 'turn away from a detected obstacle,'" to quote W&G.  

This is cool.  But it neither rules out computation/information processing as the basis of mental function nor tells us how and why humans tidy.  Regarding my first point, notice that even though there is no program in the bot specific to tidying, there is a program nonetheless--W&G call it a "rule," which I thought would be a banned term in the embodied camp--that controls the robot's behavior.  Granted, the computation has nothing to do with tidying.  But it has everything to do with detecting obstacles and using that information to generate a change in a motor plan, which itself is a computational problem that the robot's programmers have solved.  W&G point to the tidying behavior but ignore completely the sensorimotor behavior of the robot.  By analogy, suppose I laid out the following argument. Humans can dull the point on a pencil's lead. I've developed a robot that writes with a pencil. I've programmed nothing in the robot about pencil lead or the desire to dull it. Yet it happens as an emergent property of the system. Therefore, the system isn't computing; all we have to do is set up the right environmental conditions and it will happen dynamically.  The flaw in the logic, of course, is that you had to program the robot to write.  
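To see why, it helps to write out roughly what that "single rule" looks like when someone actually has to implement it. The 45-degree sensor placement comes from the description above; the turn angle and speed are my own arbitrary choices.

# A minimal sketch of the "single rule" controller: read two obstacle sensors and,
# if either one fires, turn away from it; otherwise keep driving straight.
# Turn angle and speed are illustrative values, not the original robot's settings.
def avoid_obstacles(left_sensor_hit, right_sensor_hit, turn_angle=30.0, speed=1.0):
    """Return a (forward_speed, heading_change_in_degrees) motor command."""
    if left_sensor_hit and not right_sensor_hit:
        return speed, -turn_angle    # obstacle on the left: turn right
    if right_sensor_hit and not left_sensor_hit:
        return speed, +turn_angle    # obstacle on the right: turn left
    if left_sensor_hit and right_sensor_hit:
        return 0.0, turn_angle       # blocked ahead: stop and rotate
    return speed, 0.0                # nothing detected: carry on straight

print(avoid_obstacles(left_sensor_hit=True, right_sensor_hit=False))

Even this "non-representational" rule takes sensory information in and puts a motor command out, which is precisely the input-output transformation at issue.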

What about humans?  Could our tidying behavior be explained similarly?  Not a chance.  The bots don't know they are tidying.  We recognize it immediately.  Where is that knowledge coming from?  Now you need a theory of knowledge of tidying and things get complicated again.  Just because you can get complex-looking behaviors to emerge from simple architectures doesn't mean the simple architectures aren't computing and it doesn't mean that humans do it that way.  

Robot bodies that walk themselves

W&G ask, 
Why does walking have the form that it does? One explanation is that we have internal algorithms which control the timing and magnitude of our strides. Another explanation is that the form of walking depends on how we are built and the relationship between that design and the environments we move through.
Although it's hard to imagine that walking doesn't depend on how we are built and the environment we move through, let's allow the argument.  

Humans don’t walk like lions because our bodies aren’t designed like lions’ bodies. 
Not gonna argue with that!  

Robotics work on walking show that you can get very far in explaining why walking has a particular form just by considering the passive dynamics. For example, robots with no motors or onboard control algorithms can reproduce human gait patterns and levels of efficiency simply by being assembled correctly 
Ah, now some substance.  This is interesting work.  Engineers built a robot frame that could walk down an incline, slinky-like, with nothing but gravity pulling it along.  That's an impressive bit of engineering, but humans can walk on flat ground.  So, the bot shell was fitted with

simple control algorithms..., which allows the robots to maintain posture and control propulsion more independently. 
What's cool is that these simple control algorithms--way simpler than those previously used--work for a wide range of locomotion behaviors when fitted to different body types.  W&G conclude,

These robots demonstrate how organisms might use distributed task resources to replace complex internal control structures. 
This is fantastic work.  If you look at the original Science paper, in the supplemental material you find that the authors likened their approach to that of the Wright brothers in designing their plane.  Instead of trying to engineer a craft that from the start could power itself and fly, the Wrights first designed a craft that could glide, then they fitted a simple motor to it and (no surprise to them or us now) it flew on its own power.  So building a robot that can glide (e.g., walk down an incline) was a great first step.  Then all you have to do is build in a simple control system.  We don't hear much about this control system in W&G's paper, only that they are "simple control algorithms." Here's the description from the Science paper:
Their only sensors detect ground contact, and their only motor commands are on/off signals issued once per step. In addition to powering the motion, hip actuation in the Delft biped also improves fore-aft robustness against large disturbances by swiftly placing the swing leg in front of the robot before it has a chance to fall forward.
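Note that even this stripped-down controller is an input-output mapping: ground-contact sensors in, an on/off actuation command out, once per step. Here's a hedged sketch of the flavor of that logic; the variable names and structure are my guesses, not the Delft implementation.

# A minimal sketch of a once-per-step controller: the only sensory input is ground
# contact, the only motor output is an on/off command when a new heel strike is detected.
# Names and logic are illustrative, not taken from the published robots.
def step_controller(left_foot_contact, right_foot_contact, stance_leg):
    """Return (new_stance_leg, fire_actuator) given the current ground-contact readings."""
    if stance_leg == "left" and right_foot_contact:
        return "right", True          # right heel strike: switch stance, power the new swing leg
    if stance_leg == "right" and left_foot_contact:
        return "left", True
    return stance_leg, False          # mid-step: do nothing and let the passive dynamics work

stance = "left"
for contacts in [(False, False), (False, True), (False, False), (True, False)]:
    stance, fire = step_controller(*contacts, stance)
    print(stance, fire)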
With the right design, complex calculations can be replaced with simple calculations.  (But they're still calculations, which W&G don't mention.)  Now, if you want the robot to do a little learning, e.g., in order to adapt to changing walking environments, you need to add a little to the computations. The same Science paper reports how they implemented sensorimotor learning in their robot:

The robot acquires a feedback control policy that maps sensors to actions using a function approximator with 35 parameters. With every step that the robot takes, it makes small, random changes to the parameters and measures the change in walking performance. This measurement yields a noisy sample of the relation between the parameters and the performance, called the performance gradient, on each step. By means of an actor-critic reinforcement learning algorithm (18), measurements from previous steps are combined with the measurement from the current step to efficiently estimate the performance gradient on the real robot despite sensor noise, imperfect actuators, and uncertainty in the environment. The algorithm uses this estimate in a real-time gradient descent optimization to improve the stability of the step-to-step dynamics. 
The supplementary material provides more information on this algorithm:

The learning controller, represented using a linear combination of local nonlinear basis functions, takes the body angle and angular velocity as inputs and generates target angles for the ankle servo motors as outputs. The learning cost function quadratically penalizes deviation from the dead-beat controller on the return map, evaluated at the point where the robot transfers support from the left foot to the right foot. Eligibility was accumulated evenly over each step, and discounted heavily (γ = 0.2) between steps. The learning algorithm also constructs a coarse estimate of the value function, using a function approximator with only angular velocity as input and the expected reward as output. This function was evaluated and updated at each crossing of the return map.
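For readers who don't speak reinforcement learning, here's a generic actor-critic sketch that captures the flavor of what's being described: perturb the policy parameters a little on each step, compare the resulting performance against what a learned baseline expected, and nudge the parameters in whichever direction did better. This is emphatically not the authors' implementation; the toy performance function, learning rates, and noise level are my own.

# A drastically simplified actor-critic sketch, in the spirit of the passage above.
# The "actor" is a 35-parameter policy updated by noisy gradient ascent on performance;
# the "critic" is a running estimate of expected performance used as a baseline.
# Everything here (rates, noise, the stand-in reward function) is illustrative.
import numpy as np

rng = np.random.default_rng(0)
N_PARAMS = 35                          # the robot's policy also had 35 parameters
theta = np.zeros(N_PARAMS)             # actor: policy parameters
ACTOR_LR, CRITIC_LR, NOISE = 0.01, 0.05, 0.05

def walking_performance(params):
    """Stand-in for taking one real step on the robot: a noisy scalar reward."""
    return -np.sum((params - 0.5) ** 2) + rng.normal(0.0, 0.1)

baseline = walking_performance(theta)   # critic: initial estimate of expected reward

for step in range(5000):
    perturbation = rng.normal(0.0, NOISE, size=N_PARAMS)     # small random parameter change
    reward = walking_performance(theta + perturbation)
    advantage = reward - baseline                             # better or worse than expected?
    theta += ACTOR_LR * advantage * perturbation / NOISE**2   # noisy gradient ascent on performance
    baseline += CRITIC_LR * (reward - baseline)               # move the critic toward observed reward

print(baseline)   # should end up much less negative than the first estimate

Call it simple if you like, but it is estimating gradients from noisy samples and running an optimization, which is computation on any reasonable definition.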
Although the body design of these robots drastically simplifies the computational task for the robot's digital brain, there is substantially more computation involved in the simple task of walking on level ground than W&G let on; they gloss over it in their discussion of this example.

All you cognitive modelers who don't take body design into account:  You should!  The embodied theorists are absolutely correct about emphasizing this point.

All you radical embodied cognitive scientists who think you can do away with computation (i.e., information processing): You still can't!  You can simplify the computations and that's excellent progress, but yours is not a new model of the mind.  It's GOFIP--good old-fashioned information processing--hooked up to better models of the delivery system.
 

Wednesday, November 5, 2014

Has embodied cognition earned its name? Critique of Wilson & Golonka 2013 #1

Wilson and Golonka have provided a very nice outline of the embodied cognition enterprise.  Have a look here.  I'm sure this doesn't represent all embodied theorists but it does summarize the radical "replacement" view. So, I've decided to have a very close look at the piece over the next few days and provide my thoughts for further discussion and clarification.  I have no doubts that I will mischaracterize and misunderstand certain things so I hope Andrew and Sabrina will correct and clarify.  Of course, I would love to hear from others as well.  I'm not attempting to summarize the arguments here so please read the paper for context.  Quotes from the paper are indented and my comments follow.  This blog post concerns the second section of the paper.
Because perception is assumed to be flawed, it is not considered a central resource for solving tasks.
Who argues this?  Perceptual scientists?  By "perception" do you mean perceptual systems? Or do you mean the physical signals that perception uses?

Because we only have access to the environment via perception, the environment also is not considered a central resource. 
 Who argues this?  Of course the environment is a resource for perception.  That's where the input comes from.

This places the burden entirely on the brain to act as a storehouse for skills and information. 
Who argues this?  Do you think traditional cognitive psychologists would deny that information can be stored external to the brain in, say, written form?  Or that the body or environment constrains the brain's solutions to information processing problems?  Of course, you DO need a brain to read those notes.

This job description makes the content of internal cognitive representations the most important determinant of the structure of our behavior. Cognitive science is, therefore, in the business of identifying this content and how it is accessed and used  
Agreed, generally.  But the starting point (for perceptual research anyway) is the nature of the input, which defines the problem.  Sound can hit the ears with time delays; how do you translate that into an orienting response?  The image hitting the two retinas is slightly different; how do you get 3D from that?  Perceptual scientists are ALWAYS mindful of what the input looks like.  To say that for non-embodied psychologists it's just a disembodied mind is building a straw man.
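The stereo case, for instance, has a textbook computational core: depth falls out of the disparity between the two retinal images by simple triangulation. A toy sketch, with rough illustrative values for the eye geometry rather than measured ones:

# Depth from binocular disparity via the standard triangulation relation
# Z ~ focal_length * baseline / disparity. Numbers are rough illustrative values.
def depth_from_disparity(disparity_m, baseline_m=0.065, focal_length_m=0.017):
    """Approximate distance (m) to a point from the positional difference of its two retinal images."""
    if disparity_m <= 0:
        raise ValueError("disparity must be positive for a finite depth estimate")
    return focal_length_m * baseline_m / disparity_m

# A retinal disparity of about 1 mm corresponds to a point roughly 1.1 m away.
print(depth_from_disparity(0.001))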

Advances in perception-action research, particularly Gibson’s work on direct perception (Gibson, 1966, 1979), changes the nature of the problem facing the organism. 
These "advances" are 30 years old.  Maybe it would be worth looking at more recent models of perception?

if perception-action couplings and resources distributed over brain, body, and environment are substantial participants in cognition, then the need for the specific objects and processes of standard cognitive psychology (concepts, internally represented competence, and knowledge) goes away, to be replaced by very different objects and processes (most commonly perception-action couplings forming non-linear dynamical systems 
 Your conclusion doesn't follow from your premises.  Why does the fact that there is information in the environment and that information processing is constrained by the body mean that you don't need concepts, internal representations, or knowledge?  Also, a dab of circularity here.  "If perception-action couplings..." (your assumption) then we replace standard notions with "perception-action couplings" (your conclusion).  You've at least partially assumed your conclusion.

This, in a nutshell, is the version of embodiment that Shapiro (2011) refers to as the replacement hypothesis and our argument here is that this hypothesis is inevitable once you allow the body and environment into the cognitive mix.  
See above.  It doesn't follow.  So, if I understand the claim, cognition is spread over environment, body, and brain.  Further, traditional theorists put too little emphasis on the environment and body and too much on the brain.  Ok, that's reasonable. But unless you want to remove the brain/mind altogether, you still need a theory of the brain/mind's contribution to cognition.  And according to your own assumptions (i.e., that the brain/mind does something), that theory cannot be fully derived from the environment or body.  This means that you will need a traditional information-processing model in between.  Therefore, at best, "embodied cognition" is a variant of standard cognitive models.

To earn the name, embodied cognition research must, we argue, look very different from this standard approach. 
Seems like it hasn't earned its name.  

Tuesday, November 4, 2014

How the mind works: It's the information stupid!

Not that I'm calling anyone stupid.  That's a reference, of course, to Clinton campaign strategist James Carville's "It's the economy, stupid." It's a call to refocus the emphasis.  Here we're talking about cognitive science, the relation between computational theories and embodied theories of the mind, and the need to refocus our emphasis on information processing.

I contend that embodied theories are, under the hood, computational (i.e., information processing) theories and that the embodied folks are mischaracterizing computational theories.  Or, at the very least, they are using one such theory (~Fodorian philosophy) as representative of the whole cognitivist/computational mindset.  In fact, it’s always been about information and how it gets processed.  It doesn’t matter how you process the information—neurons, electronic switches, gears, pumps—it just matters that information (patterns of physical stuff that correlate with the state of the world) is used in such a way as to guide behavior. To try to make this clear, here’s an excerpt from The Myth of Mirror Neurons discussing some early conceptions of cognitive psychology.  
Psychologist Ulric Neisser, who literally named the field and wrote the book on it with his 1967 text, Cognitive Psychology, defined the domain of cognition this way:
“Cognition” refers to all the processes by which the sensory input is transformed, reduced, elaborated, stored, recovered, and used.  … Such terms as sensation, perception, imagery, retention, recall, problem-solving, and thinking, among many others, refer to hypothetical stages or aspects of cognition.[1]
Neisser’s table of contents underlined his view that cognition was not limited to higher-order functions.  His volume is organized into four parts.  Part I is simply the introductory chapter.  Part II is called “Visual Cognition” and contains five chapters.  Part III is “Auditory Cognition” with four chapters. Finally, Part IV deals with “The Higher Mental Processes” and contains a single chapter, which Neisser refers to as “essentially an epilogue” with a discussion that is “quite tentative”. He continues,
Nevertheless, the reader of a book called Cognitive Psychology has a right to expect some discussion of thinking, concept-formation, remembering, problem-solving, and the like…. If they take up only a tenth of these pages, it is because I believe there is still relatively little to say about them….
Most scientists today working on perception or motor control, even at fairly low levels, would count their work as squarely within the information processing model of the mind/brain and therefore within Neisser’s definition of cognition.  Consider this paper title, which appeared recently in a top-tier neuroscience journal: Eye Smarter than Scientists Believed: Neural Computations in Circuits of the Retina.  If anything in the brain is a passive recording device (like a camera) or a simple filter (like polarized sunglasses) it’s the retina, or so we thought. Here’s how the authors put it:
Whereas the conventional wisdom treats the eye as a simple prefilter for visual images, it now appears that the retina solves a diverse set of specific tasks and provides the results explicitly to downstream brain areas.[2]
Solves a diverse set of specific tasks and provides the results… sounds like a purpose-built bit of programming—in the retina!  We observe similar complexity in the control of simple movements, such as tracking an object with the eyes, an ability that is thought to involve a cerebral cortex-cerebellar network including more than a half dozen computational nodes that generate predictions, detect errors, calculate correction signals, and learn.[3]

---end excerpt---
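That last example -- generating predictions, detecting errors, computing corrections, learning -- has a simple computational skeleton. Stripped to a caricature, one such "node" for smooth pursuit might look like the sketch below; the gains and the constant-velocity prediction are my own illustrative assumptions, not a model of the cortex-cerebellar network.

# A bare-bones predict / detect-error / correct / learn loop for tracking a moving target.
# One toy node standing in for a network of more than half a dozen; the gains are made up.
def pursue(target_positions, prediction_gain=0.9, correction_gain=0.5, learning_rate=0.2):
    eye_position = 0.0
    estimated_velocity = 0.0
    for target in target_positions:
        eye_position += prediction_gain * estimated_velocity   # predict: move with the estimated target motion
        error = target - eye_position                          # detect error: the retinal slip
        eye_position += correction_gain * error                # correct: reduce the slip
        estimated_velocity += learning_rate * error            # learn: update the velocity estimate from the error
        print(f"target={target:5.2f}  eye={eye_position:5.2f}  error={error:5.2f}")

# Target moving at a constant 1 unit per time step; the error shrinks as the estimate improves.
pursue([float(t) for t in range(1, 11)])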

My former postdoc advisor, Steve Pinker, who is arguably today’s champion of the computational theory of mind and a staunch defender of "symbolic processing" (it's not what you think!), reinforces the broad definition of computation as just being about information processing:  
the function of the brain is information processing, or computation… Information consists of patterns in matter or energy, namely symbols, that correlate with states of the world. That’s what we mean when we say that something carries information. A second part of the solution is that beliefs and desires have their effects in computation—where computation is defined, roughly, as a process that takes place when a device is arranged so that information (namely, patterns in matter or energy inside the device) causes changes in the patterns of other bits of matter or energy, and the process mirrors the laws of logic, probability, or cause and effect in the world. [4]
Notice that symbols are defined simply as patterns in matter or energy, not x’s and y’s in lines of code.  The patterns “represent” (i.e., correlate with) states of the world.  This constitutes information that brains can make use of by changing the patterns, e.g., taking interaural time difference and using that information to guide head movement. This is why the embodied movement is so puzzling to me.  It’s fundamentally no different from the computational theory of mind.  Does the body contribute something to information processing?  Of course!  The brain evolved with the body to solve survival problems.  The body shapes the input to the brain. But that doesn't mean that the brain isn't processing information.  


1 Neisser, U. (1967) Cognitive psychology. Appleton-Century-Crofts
2 Gollisch, T. and Meister, M. (2010) Eye smarter than scientists believed: neural computations in circuits of the retina. Neuron 65, 150-164
3 Wolpert, D.M., et al. (1998) Internal models in the cerebellum. Trends in Cognitive Sciences 2, 338-347