Friday, November 7, 2014

Embodied robots -- Post #2 on Wilson & Golonka 2013

There's some cool stuff highlighted by W&G, including robots that tidy up without being programmed to do so, robots that walk (downhill) with only the power of gravity simply because their bodies were designed in the right way, and cricket robots that find the best mate automatically thanks to the architecture of their sound localization system. We've discussed sound localization previously, so let's focus on the other two examples.

Robots that tidy without the intention to do so or any knowledge that they did it.

Robots with two sensors situated at 45-degree angles on the robot's "head"


and a simple program to avoid obstacles detected by the sensors will, after a while, tidy a room full of randomly distributed cubes into neat piles:


W&G conclude from this that,

Importantly, then, the robots are not actually tidying – they are only trying to avoid obstacles, and their errors, in a specific extended, embodied context, leads to a certain stable outcome that looks like tidying 
The point here is that the robots had no representation of tidying, no desire to tidy, and no knowledge that they had tidied.  A complex "cognitive" behavior can emerge, to quote W&G, from "a single rule, 'turn away from a detected obstacle'".
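To make that concrete, here is a toy sketch of the single rule in action.  This is my own simplified reconstruction, not the original Swiss-robot code: the grid world, the diagonal "sensors", and the clustering measure are all illustrative stand-ins.

```python
import random

# Toy reconstruction (mine, not the original Swiss-robot code) of the single
# rule "turn away from a detected obstacle".  A robot drives forward on a
# grid, blindly shoving any cube that sits directly in front of it (the front
# is a sensor blind spot).  When one of two "45-degree" side sensors detects
# a cube, the robot turns away, abandoning whatever it was pushing.

SIZE = 30
HEADINGS = [(1, 0), (0, 1), (-1, 0), (0, -1)]        # E, N, W, S

random.seed(1)
cubes = {(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(60)}

def wrap(p):
    return (p[0] % SIZE, p[1] % SIZE)

def step(pos, heading):
    left = (-heading[1], heading[0])                  # 90 degrees to the left
    right = (heading[1], -heading[0])                 # 90 degrees to the right
    ahead = wrap((pos[0] + heading[0], pos[1] + heading[1]))
    # Stand-ins for the two 45-degree sensors: the cells diagonally ahead.
    sees_left = wrap((ahead[0] + left[0], ahead[1] + left[1])) in cubes
    sees_right = wrap((ahead[0] + right[0], ahead[1] + right[1])) in cubes
    if sees_left or sees_right:
        return pos, (right if sees_left else left)    # the single rule: turn away
    if ahead in cubes:                                # a cube in the blind spot...
        dest = wrap((ahead[0] + heading[0], ahead[1] + heading[1]))
        if dest in cubes:                             # ...pushed against another cube:
            return pos, random.choice(HEADINGS)       # it stays there; robot veers off
        cubes.discard(ahead)                          # ...otherwise it gets shoved along
        cubes.add(dest)
    return ahead, heading

pos, heading = (0, 0), random.choice(HEADINGS)
for _ in range(20000):
    pos, heading = step(pos, heading)

# Rough clustering measure: how many cubes now touch at least one other cube?
touching = sum(any(wrap((x + dx, y + dy)) in cubes for dx, dy in HEADINGS)
               for x, y in cubes)
print(f"{touching} of {len(cubes)} cubes sit next to another cube.")
```

Nothing in step() mentions tidying; any heaps that form are a side effect of turning away from obstacles while blindly shoving whatever sits dead ahead.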

This is cool.  But it neither rules out computation/information processing as the basis of mental function nor tells us how and why humans tidy.  Regarding my first point, notice that even though there is no program in the bot specific to tidying, there is a program nonetheless--W&G call it a "rule," which I would have thought was a banned term in the embodied camp--that controls the robot's behavior.  Granted, the computation has nothing to do with tidying.  But it has everything to do with detecting obstacles and using that information to generate a change in a motor plan, which itself is a computational problem that the robot's programmers have solved.  W&G point to the tidying behavior but completely ignore the sensorimotor behavior of the robot.

By analogy, suppose I laid out the following argument.  Humans can dull the point on a pencil's lead.  I've developed a robot that writes with a pencil.  I've programmed nothing into the robot about pencil lead or the desire to dull it.  Yet dulling happens as an emergent property of the system.  Therefore, the system isn't computing; all we have to do is set up the right environmental conditions and it will happen dynamically.  The flaw in the logic, of course, is that you had to program the robot to write.

What about humans?  Could our tidying behavior be explained similarly?  Not a chance.  The bots don't know they are tidying.  We recognize it immediately.  Where is that knowledge coming from?  Now you need a theory of knowledge of tidying, and things get complicated again.  Just because you can get complex-looking behaviors to emerge from simple architectures doesn't mean the simple architectures aren't computing, and it doesn't mean that humans do it that way.

Robot bodies that walk themselves

W&G ask, 
Why does walking have the form that it does? One explanation is that we have internal algorithms which control the timing and magnitude of our strides. Another explanation is that the form of walking depends on how we are built and the relationship between that design and the environments we move through.
Although it's hard to imagine that walking doesn't depend on how we are built and the environment we move through, let's allow the argument.  

Humans don’t walk like lions because our bodies aren’t designed like lions’ bodies. 
Not gonna argue with that!  

Robotics work on walking show that you can get very far in explaining why walking has a particular form just by considering the passive dynamics. For example, robots with no motors or onboard control algorithms can reproduce human gait patterns and levels of efficiency simply by being assembled correctly 
Ah, now some substance.  This is interesting work.  Engineers built a robot frame that could walk down an incline, slinky-like, with nothing but gravity pulling it along.  That's an impressive bit of engineering, but humans can walk on flat ground.  So, the bot shell was fitted with

simple control algorithms..., which allows the robots to maintain posture and control propulsion more independently. 
What's cool is that these simple control algorithms--way simpler than those previously used--work for a wide range of locomotion behaviors when fitted to different body types.  W&G conclude,

These robots demonstrate how organisms might use distributed task resources to replace complex internal control structures. 
This is fantastic work.  If you look at the original Science paper, in the supplemental material you find that the authors likened their approach to that of the Wright brothers in designing their plane.  Instead of trying to engineer a craft that from the start could power itself and fly, the Wrights first designed a craft that could glide; then they fit a simple motor to it and (no surprise to them or to us now) it flew under its own power.  So building a robot that can glide (e.g., walk down an incline) was a great first step.  Then all you have to do is build in a simple control system.  We don't hear much about this control system in W&G's paper, only that they are "simple control algorithms."  Here's the description from the Science paper:
Their only sensors detect ground contact, and their only motor commands are on/off signals issued once per step. In addition to powering the motion, hip actuation in the Delft biped also improves fore-aft robustness against large disturbances by swiftly placing the swing leg in front of the robot before it has a chance to fall forward.
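Before going on, it's worth seeing just how little "control" that description amounts to.  Here is a toy sketch of the division of labor it implies; this is my own illustration with made-up numbers, not the authors' code.  Gravity does the work of vaulting the body over the stance foot, and the controller's entire job is to issue one on/off hip command when the foot switch fires.

```python
import numpy as np

# Toy sketch (not the Delft robot's dynamics or code): the stance leg is an
# unactuated inverted pendulum pivoting about the foot.  theta is the leg
# angle measured so that positive means leaning downhill; theta + slope is
# the lean from true vertical.  All numbers are illustrative.

g, L = 9.81, 1.0            # gravity (m/s^2), leg length (m)
slope = 0.03                # downhill slope (rad); on level ground the hip motor
dt = 1e-3                   # would have to supply the energy instead

def stance_phase(theta, omega, step_angle=0.15):
    """Passive stance: gravity alone carries the body over the foot."""
    while theta < step_angle:
        omega += (g / L) * np.sin(theta + slope) * dt    # no motor torque anywhere
        theta += omega * dt
    return omega                                         # angular velocity at touchdown

omega = 0.6                 # forward speed carried into the first step (rad/s)
for step in range(5):
    omega = stance_phase(-0.15, omega)
    # Ground contact detected: the controller's entire output for this step is
    # a single on/off command that swings the trailing leg forward.
    print(f"step {step + 1}: touchdown at {omega:.2f} rad/s, hip motor: on")
    omega *= 0.9            # crude stand-in for energy lost at heel strike
```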
With the right design, complex calculations can be replaced with simple calculations.  (But they're still calculations, which W&G don't mention.)  Now, if you want the robot to do a little learning, e.g., in order to adapt to changing walking environments, you need to add a little to the computations.  The same Science paper reports how they implemented sensorimotor learning in their robot:

The robot acquires a feedback control policy that maps sensors to actions using a function approximator with 35 parameters. With every step that the robot takes, it makes small, random changes to the parameters and measures the change in walking performance. This measurement yields a noisy sample of the relation between the parameters and the performance, called the performance gradient, on each step. By means of an actor-critic reinforcement learning algorithm (18), measurements from previous steps are combined with the measurement from the current step to efficiently estimate the performance gradient on the real robot despite sensor noise, imperfect actuators, and uncertainty in the environment. The algorithm uses this estimate in a real-time gradient descent optimization to improve the stability of the step-to-step dynamics. 
The supplementary material provides more information on this algorithm:

The learning controller, represented using a linear combination of local nonlinear basis functions, takes the body angle and angular velocity as inputs and generates target angles for the ankle servo motors as outputs. The learning cost function quadratically penalizes deviation from the dead-beat controller on the return map, evaluated at the point where the robot transfers support from the left foot to the right foot. Eligibility was accumulated evenly over each step, and discounted heavily (γ = 0.2) between steps. The learning algorithm also constructs a coarse estimate of the value function, using a function approximator with only angular velocity as input and the expected reward as output. This function was evaluated and updated at each crossing of the return map.
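For readers who want to see what that description might cash out as, here is a heavily simplified sketch.  It is not the authors' implementation: the actor-critic algorithm they cite is replaced with a plain perturb-and-measure gradient estimate, walking_performance is a placeholder for the measurement made on the real robot, and every constant is made up.  What it preserves is the structure they describe: a policy that is a linear combination of 35 local basis functions over body angle and angular velocity, nudged on every step by a noisy estimate of the performance gradient.

```python
import numpy as np

# Heavily simplified sketch of the learning setup described above (mine, not
# the authors' code).  The policy is a linear combination of 35 local Gaussian
# basis functions over (body angle, angular velocity); its parameters are
# perturbed on every step, and the resulting change in a measured performance
# signal gives a noisy per-step estimate of the performance gradient.  The
# real robot used a specific actor-critic algorithm and a cost defined on the
# step-to-step return map; walking_performance below is just a placeholder.

rng = np.random.default_rng(0)

centers = np.array([(a, v) for a in np.linspace(-0.3, 0.3, 7)
                            for v in np.linspace(-1.0, 1.0, 5)])   # 35 local bases
width = 0.2
w = np.zeros(len(centers))                  # the 35 learned parameters

def features(angle, velocity):
    """Local Gaussian bumps over the two sensed quantities."""
    d2 = ((centers - np.array([angle, velocity])) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * width ** 2))

def ankle_target(angle, velocity, params):
    """Policy: map (body angle, angular velocity) to an ankle-servo target angle."""
    return params @ features(angle, velocity)

def walking_performance(params):
    """Placeholder for the per-step performance measured on the robot."""
    return -np.sum((params - 0.1) ** 2) + 0.01 * rng.standard_normal()

learning_rate, noise_scale, smoothing = 0.05, 0.02, 0.8
grad_estimate = np.zeros_like(w)
baseline = walking_performance(w)

for step in range(500):                     # one update per footstep
    perturbation = noise_scale * rng.standard_normal(w.shape)
    performance = walking_performance(w + perturbation)
    # One noisy sample of the performance gradient from this single step...
    sample = (performance - baseline) / noise_scale ** 2 * perturbation
    # ...blended with samples from earlier steps to beat down the noise.
    grad_estimate = smoothing * grad_estimate + (1 - smoothing) * sample
    w += learning_rate * grad_estimate      # gradient ascent on performance
    baseline = 0.9 * baseline + 0.1 * performance

print("ankle target for a small forward lean:", ankle_target(0.05, 0.2, w))
```

However the details are implemented, the point stands: the learner is estimating gradients, updating parameters, and evaluating a value function, which is computation by any reasonable definition.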
Although the body design of these robots drastically simplifies the computational task for the robot's digital brain, there is substantially more computation involved in the simple task of walking on level ground than W&G let on in their discussion of this example.

All you cognitive modelers who don't take body design into account:  You should!  The embodied theorists are absolutely correct about emphasizing this point.

All you radical embodied cognitive scientists who think you can do away with computation (i.e., information processing): You still can't!  You can simplify the computations and that's excellent progress, but yours is not a new model of the mind.  It's GOFIP--good old-fashioned information processing--hooked up to better models of the delivery system.
 

2 comments:

Andrew said...

Many things:

First, the 'Swiss' robots are not supposed to be an example that explains human tidying. That would be silly. Instead, it's a lesson that a) complex behaviour can emerge from simple systems and b) that behaviour might not be built into the system explicitly anywhere in a way you'd call a representation. So any theory that figured out 'the necessary computational steps required to tidy a room' would be a tragically flawed misunderstanding of what was going on.

But it has everything to do with detecting obstacles and using that information to generate a change in a motor plan, which itself is a computational problem that the robot's programmers have solved
This is a loaded way to describe a system that simply changes the relative activity of its wheels as a function of a simple light input. Where is the plan in these robots?

Control algorithms. We don't dwell on these much because they are, unfortunately, literally computational solutions. But this is only because the engineers are literally having to implement perceptual control using actual computers! Machine vision really does begin with digital images that have to be extensively computed on before they can be useful. This says nothing about how animal vision works, of course.

The interesting part of this robotics work is how much they can achieve before being forced into computations by the hardware, and then how much those achievements change what the computational solutions need to be anyway. The problem is Boston Dynamics is just trying to make things that work given their hardware, not trying to make a point about embodiment. Pfeifer and Bongard are doing the latter and so their work pushes robot morphology much harder.

So
Although the body design of these robots drastically simplifies the computational task for the robot's digital brain, there is substantially more computation involved in the simple task of walking on level ground
is only true for robots. Take the body design and connect that to different perceptual systems and what happens? That's where you have to move to biological systems and things get harder. That's why we don't stop at robots! We've replaced lots of computations successfully then run into hardware limits; now let's find out what wetware can do.

Maxim Baru said...

wrt the tidying robots case and the morals it purportedly implies, it seems to me to be another flavor of the conspiracy theory of human behavior. Here, I'm thinking of the kinds of evolutionary psychological theories that try to explain a person's behavior X by their unobservable desire for Y. Notoriously, these types of arguments are very hard to justify. Fodor puts the point this way:

"this kind of inference needs to be handled with great care. For, often enough, where an interest in X would rationalise Y, so too would an interest in P, Q or R. It’s reasonable of Jones to carry an umbrella if it’s raining and he wants to keep dry. But, likewise, it’s reasonable for Jones to carry an umbrella if he has in mind to return it to its owner. Since either motivation would rationalise the way that Jones behaved, his having behaved that way is compatible with either imputation. This is, in fact, overwhelmingly the general case: there are, most often, all sorts of interests which would rationalise the kinds of behaviour that a creature is observed to produce. What’s needed to make it decisive that the creature is interested in Y is that it should produce a kind of behaviour that would be reasonable only given an interest in Y. But such cases are vanishingly rare since, if an interest in Y would rationalise doing X, so too would an interest in doing X. A concern to propagate one’s genes would rationalise one’s acting to promote one’s children’s welfare; but so too would an interest in one’s childrens’ welfare. Not all of one’s motives could be instrumental, after all; there must be some things that one cares for just for their own sakes. Why, indeed, mightn’t there be quite a few such things? Why shouldn’t one’s children be among them?"

Source: LRB, "The Trouble with Psychological Darwinism" (1998)