Sound localization in the barn owl is fairly well understood in neurocomputational terms. Inputs from the two ears converge in the brainstem's nucleus laminaris with a "delay line" architecture, as in the figure.
Given this arrangement, the neurons (circles) on which the left- and right-ear signals converge simultaneously will depend on the time difference between excitation of the two ears. If both ears are stimulated simultaneously (sound directly in front), convergence will happen in the middle of the delay line. If the sound stimulates the left ear first, convergence will happen farther to the right in this schematic (left ear stimulation arrives sooner, allowing its signal to get farther down the line before meeting the right ear signal). And vice versa if right ear stimulation arrives sooner. This delay line architecture basically sets up an array of coincidence detectors in which the position of the cell that detects the coincidence represents information: the difference in stimulation time at the two ears, and therefore the location of the sound source. Then all you have to do is plug the output (firing pattern) of the various cells in the array into a motor circuit for controlling head movements, and you have a neural network for detecting sound source location and orienting toward the source.
Question: what do we call this kind of neural computation? Is it embodied? Certainly it takes advantage of body-specific features, the distance between the two ears (couldn't work without that!), and I suppose we can talk of a certain "resonance" of the external world with neural activation. In that sense, it's embodied. On the other hand, the network can be said to represent information in a neural code--the pattern of activity in a network of cells--that no longer resembles the air pressure wave that gave rise to it. In fact, we can write symbolic code to describe the computation of the network. Typical math models of the process use cross-correlation, but you can do it with some basic code like this:
Let x = time of sound onset detected at left ear
Let y = time of sound onset detected at right ear
If x = y, then write ‘straight ahead’
If x < y, then write ‘left of center’
If x > y, then write ‘right of center’
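If you want something you can actually run, here is a minimal sketch of the cross-correlation version in Python with NumPy; the signals, sampling rate, and function name are illustrative choices of mine, not a model from the owl literature.

import numpy as np

def estimate_itd(left, right, fs):
    # Cross-correlate the two ear signals and take the lag of the peak.
    # With NumPy's convention, a negative best lag means the sound reached
    # the left ear first, so the result plays the role of x - y above.
    corr = np.correlate(left, right, mode="full")
    lags = np.arange(-(len(right) - 1), len(left))
    return lags[np.argmax(corr)] / fs

# Toy usage: a click that arrives 5 samples earlier at the left ear.
fs = 44100
click = np.concatenate([np.zeros(100), [1.0], np.zeros(100)])
left, right = np.roll(click, -5), click
delta = estimate_itd(left, right, fs)
if delta == 0:
    print("straight ahead")
elif delta < 0:
    print("left of center")
else:
    print("right of center")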
Although there is no code in the barn owl’s brain, the architecture of the network indeed implements the program above: x and y are the input signals (axonal connections) from the left and right ears; the relation between x and y is computed via delay line coincidence detectors; and the “rules” for generating an appropriate output are realized by the connections between various cells in the array and the motor system (in our example). Brains and lines of code can indeed implement the same computational program. Lines of code do it with a particular arrangement of symbols and rules; brains do it with a particular arrangement of connections between neurons that code or represent information. Both are accurate ways of describing the computations that the system carries out to perform a task.
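To make the delay-line mapping concrete, here is a toy sketch of a Jeffress-style coincidence-detector array in Python; every number here (array size, arrival times, one delay step per cell) is invented for illustration, not owl physiology.

import numpy as np

# Toy Jeffress-style array: the left ear's signal reaches cell i after i
# delay steps, the right ear's after (n - 1 - i) steps. The cell where the
# two arrivals coincide shifts with the interaural time difference, so the
# position of the winning cell is the "code" for source location.

def winning_cell(t_left, t_right, n=11):
    cells = np.arange(n)
    left_arrival = t_left + cells              # delay grows left-to-right
    right_arrival = t_right + (n - 1 - cells)  # delay grows right-to-left
    return int(np.argmin(np.abs(left_arrival - right_arrival)))

middle = 5  # center of the 11-cell array
for t_left, t_right in [(0, 0), (0, 2), (2, 0)]:
    cell = winning_cell(t_left, t_right)
    print(t_left, t_right, "->", "cell", cell,
          "(middle)" if cell == middle else
          "(right of middle: left ear led)" if cell > middle else
          "(left of middle: right ear led)")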
Does it matter, then, whether we call this non-representational embodied cognition or classical symbolic computation? I think not. If we simply start trying to actually figure out the architectures and computations of the system we are studying, the question of what to call it becomes trivial.
28 comments:
The embodied vs symbolic issue is actually this: what is the system actually doing in order to solve the problem? As you note, the owl is not computing that algorithm. Instead, it has neurons wired so as to respond in a certain way. The system is also not representing that information; instead, it's connected to other systems with architectures that make its responses get used in particular ways.
Part of the problem with killing off computational, representational stories is that you can add them as a gloss to any activity. This is why the embodied reanalysis always just sounds like a redescription (but what it actually means is that representations aren't doing much real work). The hard work of course comes when the two accounts make different predictions and one works better than the other. Re-read Louise Barrett's chapters on the Portia spider and Webb's work on sound localisation in crickets; these are good examples of how the computational, symbolic accounts just don't lead to the right questions, while the embodied account does.
Of course the nervous system contributes significant work to behaviour. But its job is not to abstract away from the input, which is what representational accounts all assume. This barn owl neural architecture is intimately tied to the nature of the information it interacts with and the role it needs to play in shaping behaviour. The solution is specific to these, and is implemented as a neural process with very specific dynamic characteristics that are tuned to the task. That process can be described computationally but is not in and of itself a computation. The embodied analysis is interested in what kind of neural systems can be created and maintained by the organism's interactions with its environments, not what kind of general-purpose descriptions can be applied, because from the point of view of the organism, the former is the only thing that might lead to that neural architecture.
Thanks for this clarification. I don't think I disagree with you except in terminology. So, can I push you to clarify a couple of additional points?
1. Do neurons compute?
2. Do neurons process information?
3. Whose symbolic position are you arguing against?
I ask the 3rd question because even the classical symbol/computational theorists, such as Newell & Simon, who wrote lines of symbolic computer code as a model of human problem solving, did not seem to subscribe to the position you are arguing against.
Newell et al. write, "Digital computers come into the picture only because they can, by appropriate programming, be induced to execute the same sequences of information processes that humans execute when they are solving problems. Hence, as we shall see, these programs describe both human and machine problem solving at the level of information processes."
Let me also push you on another point. You write that the nervous system's "job is not to abstract away from the input...." Would you stick to your guns on this point, even if abstraction augments the "role it needs to play in shaping behavior"? Or is abstraction OK in those cases?
Hi all,
Great post. I am wondering though if we should take seriously Lawrence Shapiro's distinctions among embodied hypotheses when attempting to understand how to interpret the owl's sound localization (and all of human behavior). In particular, he identifies a 'replacement hypothesis', which posits that discussions of computation and representation are not necessary to describe behavior such as sound localization. I think there are strong arguments in favor of this approach, especially as conceptualized by people like Anthony Chemero. I think these arguments are largely epistemological in nature, however, not ontological. Following Andrew's posts, I think he is sympathetic to the replacement idea?
Shapiro also distinguishes a 'conceptualization hypothesis', in which computations and representations are important for cognition, but things are represented via simulations in sensorimotor systems. This is more in line with Barsalou's ideas. The description of the owl's sound localization circuitry can easily exist in this type of embodied system. The prediction is that when the owl thinks of previously localized sounds, or how it might localize some future sounds, a simulation in this circuitry underlies her ability to do so.
I feel like until we start making these types of distinctions (whether we all adopt Shapiro's or not), progress will be difficult. The general idea of embodiment (i.e. the ways in which the body and environment shape and constrain behavior) is a powerful tool for investigating human psychology, and it would be a shame to over-interpret it or dismiss it too abruptly.
What do you think?
But what progress are we trying to make? We already know how barn owls localize sound in terms of neural computation. Why should we care whether we call it embodied or symbolic?
We certainly may like to ask additional questions, like (as you note) whether the same brainstem circuit is re-activated when the owl "thinks" about location. Notice, though, that hypothesizing that thinking about location involves "simulating" the original sensory experience doesn't *by itself* explain what's happening. For that we need to understand the original sensory process (delay lines, "convergence neurons" serving as codes for locations, etc.). When we have the low-level stuff worked out, then we can say, with substance, that thinking about location involves re-activating neurons that code location in the brainstem delay-line neural networks. Otherwise, it's just a vague hypothesis, a heuristic for finding an answer, not an explanation.
Now, assuming sound localization and thinking about localization all works like this, is that embodied or symbolic? I don't care. I *would* argue, however, that the system is processing information. It is taking information about differences in arrival time (or level) of sound at the two ears and converting it into a code for location. This is standard information processing psychology as it has been implemented in models since the cognitive (i.e., information processing) revolution. What's new with embodied cognition, as far as I can tell, is the claim that low-level information processing is important for, indeed part of, "high-level" information processing (e.g., 'thinking about' or conceptualizing). It's an interesting new way of modeling high-level processes, one that needs empirical verification, but not a fundamental shift in paradigm.
All of this discussion leaves me a bit puzzled as to what happened to the actual fundamental embodiment of human semantics. All of our meaning making is based on a human-scale experience. For example, if I say "I bumped my head on the ceiling and when I reached my hand up, I could not reach it," I immediately perceive the paradox. I talk about things with respect to my field of vision (get out of my sight), have events back to back, etc. I can tell the difference between The Normans conquering Britain and The Beatles conquering America. I instinctively know when it makes sense to talk about me standing in front of a Church and the Church being behind me.
It is this sort of rich bodily knowledge of the world that makes cognition possible. And as a linguist, when I speak of embodiment, that's what I'd have in mind. And none of this is solvable as a series of algorithmic computations. But in all of this discussion, I don't see any suggestion that this is being discussed. I first came across embodied cognition in the 80s through cognitive semantics (Lakoff, etc.) and have yet to find any psychological or even neural account of this. Nothing offered here even hints at dealing with the massive complexity that is embodied language.
However, I do wonder if it does make sense to talk about embodiment at the level of neural processing. But that processing does need to account for our ability to instantly reflect the world's human-level complexity in the way we deal with language.
Hi Greg,
Thanks for the response! I just want to make a couple of points.
###
...Notice, though, that hypothesizing that thinking about location involves "simulating" the original sensory experience doesn't *by itself* explain what's happening. For that we need to understand the original sensory process (delay lines, "convergence neurons" serving as codes for locations, etc.). When we have the low-level stuff worked out, then we can say, with substance, that thinking about location involves re-activating neurons that code location in the brainstem delay-line neural networks. Otherwise, it's just a vague hypothesis, a heuristic for finding an answer, not an explanation.
###
I agree with you here. At least from the conceptualization perspective this is exactly where we are: we have a hypothesis about how a cognitive task may be performed (i.e. it is supported by simulations in sensorimotor circuits). I don't think anyone adopting the conceptualization approach would have a problem with the idea that we need to identify and clearly specify what the low-level neural apparatus is doing. And right now, this idea of simulation is merely a hypothesis, one that can easily be falsified. I see this as a great opportunity to generate hypotheses about this circuit.
###
Now, assuming sound localization and thinking about localization all works like this, is that embodied or symbolic? I don't care. I *would* argue, however, that the system is processing information...This is standard information processing psychology as it has been implemented in models since the cognitive (i.e., information processing) revolution. What's new with embodied cognition, as far as I can tell, is the claim that low-level information processing is important for, indeed part of, "high-level" information processing (e.g., 'thinking about' or conceptualizing). It's an interesting new way of modeling high-level processes, one that needs empirical verification, but not a fundamental shift in paradigm.
###
Many people say that the idea that modality specific information is used in high level cognitive tasks is not new (e.g. 'dual coding' hypotheses are kinda 'embodied', and imagery too, etc). What embodiment does, at least from the conceptualization perspective, is extend this to all cognitive tasks-- problem solving, emotional processing, memory, etc. This is a less drastic shift in cognitive psychology than the claims of, for instance, proponents of the replacement hypothesis who suggest representations are not needed to explain behavior. Whether these represent completely different paradigms, or exist on a spectrum of 'embodied' paradigms is another issue...
You ask why we should care how we interpret the owl's sound localization (as symbolic or embodied). I think it is important to interpret this in a framework that a) generates specific hypotheses about how what we observe in the sound localization circuit relates to the rest of the owl's behaviour (e.g. simulations in this circuit are needed for the owl to form memories of previously encountered sounds), b) allows us to identify the features of behaviour or the environment that are most relevant for understanding this cognitive skill (e.g. what about the environment makes this specific circuit necessary for keeping the owl alive and doing its thing?), and c) gives us ideas about the causal role a certain computational circuit plays in different cognitive tasks (i.e. do you NEED the modality-specific sound localization circuit to think about and form memories of previously experienced, localized sounds?). I think that certain subtypes of the embodiment thesis (the conceptualization hypothesis in particular) do these things quite nicely by directing our attention to skills and tasks that might not be acknowledged from a purely symbolic, computational, amodal, mental-activity-is-all-in-the-head kind of approach.
1. Do neurons compute?
I think they can be described as computing, but actually I think they are messy biological systems that are not just trading activity that can be coded as 0 and 1. From my limited knowledge of the biology, we now know neurons are in all kinds of continuous contact with each other via electrical and chemical signalling, modulated by changing gene expression which reflects current environmental demands etc etc etc. Computation doesn't even seem to come close to describing this nonlinear dynamical process.
2. Do neurons process information?
Yes, but the nature of that information is up for grabs. It's not going to be Shannon information - as powerful as that framework is, it's too abstracted from the real world.
3. Whose symbolic position are you arguing against?
I ask the 3rd question because even the classical symbol/computational theorists, such as Newell & Simon, who wrote lines of symbolic computer code as a model of human problem solving, did not seem to subscribe to the position you are arguing against.
Newell et al. write, "Digital computers come into the picture only because they can, by appropriate programming, be induced to execute the same sequences of information processes that humans execute when they are solving problems. Hence, as we shall see, these programs describe both human and machine problem solving at the level of information processes."
This quote is excellent because it's not just a neutral description of an actual state of affairs; it's dripping with epistemology! Just because you can break down a process into algorithmic steps and implement those steps in code doesn't necessarily tell you anything about how the biological system actually solves the task. The Portia spider stuff in Louise's book is a great example: it looks like the spider simply 'must' be planning because it is able to track prey it's not currently able to see. It even sits there for a while bobbing around looking like it's thinking about things. But it's not running simulations, it's not planning, it's moving to sample optic flow that will enable it to locomote to the prey even though it will briefly lose visual contact, and this was only revealed when people stopped assuming it was doing the task computationally and started asking if it was doing it perceptually.
The embodied hypothesis is that our behaviour is caused by our perceptual contact with the world, not an internal representation of the world, and there is an ever-increasing number of examples that this is the case. This is not a redescription of the same explanation, it's a different explanation, and it matters (to answer your main concern) if a) the embodied explanation is better and b) the computational explanation actively interferes with you asking the right questions to find the better, embodied account.
So whatever it is our wonderfully messy neurons are up to, it's not computing and representing. It is processing information, but we need to be more ecological about what we mean by information in order to have the right job description for the brain.
Would you stick to your guns on this point, even if abstraction augments the "role it needs to play in shaping behavior"? Or is abstraction OK in those cases?
One of the lessons of the 'replacement' style embodied cognition work we typically point to (e.g. the A-not-B error, etc.) is that what looks like abstraction might be no such thing. So I'd need specific examples to even try to answer, and even then I'd need to know enough about the example to be able to propose the kind of embodied analysis that you'd need to do first in order to rule abstraction in or out.
The psychologist's fallacy is to mistake their description of the behaviour for the mechanism of the behaviour. Good embodied research tries to avoid this problem by not assuming things like the need for abstraction or 'hey, the spider simply MUST be planning'.
So neurons compute and process information. Whew! The cognitive revolution lives! ;-) (But then I notice you contradict yourself and say that neurons aren't computing. I'll go with your first answer.) Given this, the embodied movement does not represent a fundamental change in conceptualizing how the mind works. It is still an information processing device that transforms information from the senses in various ways to accomplish various tasks. What I will credit embodiment approaches with, though, is that they promote the search for lower-level explanations for high-level abilities. Some of my own work would certainly qualify in this respect.
Re: question 3, you write, "The embodied hypothesis is that our behaviour is caused by our perceptual contact with the world, not an internal representation of the world." This is an odd claim. Perceptual scientists agree, perception is an active process created by the brain; it is not a movie screen or tape recorder. This means that perception involves an internal representation of the world. Firing of the "coincidence neuron" in the simple sound localization model is an example of how spatial location is transformed into a neural code or representation. "Perceptual contact with the world" is not a theory but a vacuous statement, unless you are suggesting a complete retreat to behaviorism such that the physical world directly causes behavior. Is that what you are claiming? I don't think so.
I have a feeling you are completely hung up on the idea that for "computation" to happen it has to be clean 0s and 1s. That's just how digital computers do it. Wetware does it differently, messily, but it's still processing information, i.e., computing some form of transformation.
An example of abstraction for you: it might be useful to recognize different instances of lions, under various lighting conditions, from various angles, and with various bits occluded. It might even be useful to recognize the same animal by sound. There is evolutionary advantage to abstracting across different sensory events that cue the same object and appropriate responses. Having learned that lion A is dangerous, we can then generalize to lion B. Can the mind/brain abstract in such a case? Or are we yoked to the physical environment thus treating each lion encounter as unconnected events?
I disagree with your characterization that psychologists believe their description of behavior is the mechanism (by this I suspect you mean implementation) of the behavior. We are fully aware that brains don't literally have lines of code or math symbols. (We are also aware that digital computers don't either!)
Finally, the spider example is a red herring. It shows that a high-level theory of how that spider solves that task is wrong and a lower-level theory works better. This just means that information is processed and transformed at a lower level. It does not show that the spider is not representing and processing information or that high-level theories aren't correct in some domains.
There's a lot here but going point by point risks this turning into an argument on the internet :) Let me refocus a little:
The question here is 'is the embodied description just another way of saying the same thing as the computational description?' and I claim the answer is 'no'. Let's think about the sound localisation example from your post (it seems like a similar setup to the cricket system Louise Barrett reviews).
Is the neural architecture that allows the owl to localise a sound computational? It's not clear that it's computing anything. The neural system behaves in a certain way depending on how information about sound passes through it. Where is the computation happening? Calling this computation is like saying a hammer is computing how to fall when I drop it; sure, I can compute what will happen to it given the relevant equations, but this is not what the hammer is doing. Physicists describe the motion of the hammer in dynamical terms, as a process that unfolds over time under task-specific constraints. (Replacement-style) embodied cognition makes the same move for the same reason. This means we need to describe the relevant constraints and how those are assembled into the local dynamic from which behaviour emerges. We talk about this in our paper in terms of our 4 key questions.
Does the behaviour of the network represent location? Actually no; what it does is shape behaviour (head orienting) in a functional manner depending on the parameters of the sound input. Calling its behaviour a representation of location adds exactly nothing to our understanding of the mechanism, and worse makes it sound like it has added something. It's a mere gloss on an otherwise fairly useful story that just makes a mess.
People forget that representations are solutions to a particular problem, namely poverty of stimulus. No poverty of stimulus, no need to represent anything - just detect information using appropriately calibrated systems. This sound localisation system is exactly this: a calibrated device that, when coupled to sound information from the environment, responds in a task-specific manner that reflects this calibration. This is why embodied cognition people talk about Watt's steam governors and polar planimeters as technological analogies for how things work, rather than computers. The proposed underlying mechanism is not even slightly the same.
This is most useful when the mechanisms are predicted to respond differently, so you can empirically tell them apart. Understanding fly ball catching as a calibrated interaction with information rather than a computation explains why outfielders run in curved and not straight paths; this goes for any kind of prospective (online) vs predictive (computational) control setup. The owl is controlling head movements prospectively and not predictively; you might want to talk about this as involving a representation, but I have no idea why you would bother given that representations are never required when prospective control is an option.
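To make the contrast concrete, here is a toy sketch of predictive versus prospective head orienting in Python; the gain, timestep and cartoon ITD function are all invented for illustration, none of it is owl physiology.

import numpy as np

MAX_ITD = 2.5e-4  # seconds; a cartoon value for a small head

def itd(source_deg, head_deg):
    # Cartoon cue: the ITD shrinks to zero as the head points at the source.
    return MAX_ITD * np.sin(np.radians(source_deg - head_deg))

# Predictive: sample the cue once, compute a target angle, execute the turn.
head = 0.0
target = head + np.degrees(np.arcsin(itd(30.0, head) / MAX_ITD))
print(f"predictive: turn straight to {target:.1f} deg (stale if the source moves)")

# Prospective: no stored target; just keep turning against the current ITD.
head, gain, dt = 0.0, 4e6, 0.01
for step in range(300):
    source = 30.0 + 10.0 * np.sin(0.05 * step)  # a source that drifts
    head += gain * itd(source, head) * dt       # continuous online correction
print(f"prospective: head at {head:.1f} deg, source now at {source:.1f} deg")

The prospective loop never computes where the source is going to be; it just keeps cancelling the currently detected error, which is why it tracks the drifting source while the one-shot predictive turn goes stale.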
Notice that in none of this am I denying an interesting role for the brain. It is clearly up to some critical work connecting information to behaviour. That work is not best described as representational or computational, though, and that's the more radical embodied argument. If anything I think this work on the sound localisation systems is very embodied and actively points to a non-representational, non-computational story being the best way to go.
Thanks for this, Andrew. It is a good idea to get back to basic principles and you’ve clearly laid out your position.
It seems to me you are fighting a Straw Fodor, well, maybe a real Fodor or a version of a real Fodor. Your fly ball example says we don’t use sensory information about velocity and angle to calculate a trajectory (in a Fodorian central processor), instead we move our bodies such that the perceived trajectory of the ball is straight, which happens to guarantee we end up in the right place at the right time. Fine. I won’t dispute it. But you still have to do an awful lot of information processing—you seem to be OK with information processing so I’m using that term instead of computation—just to see the ball against a background scene, perceive its motion, perceive the straightness of its motion, and generate motor commands to keep it straight, etc. You are arguing against a particular theory of information processing, not proposing a whole new way of thinking about how the brain works. Ditto Barrett’s book.
So we agree on how baseball players run down fly balls and we agree on how crickets and owls localize sound. Our debate, then, is not about content. It’s about what you call it: embodied or something else. Not a very interesting debate.
Now, if you want to deny information processing and retreat to full blown behaviorism, then we have something fundamental to argue about. As far as I can tell from what you are saying, you are squarely in the non-Fodorian cognitivist (i.e., information processing tradition) where we can talk about the merits of different information processing models.
One correction: the computer itself is not the analogy of the mind, it’s the computer program (i.e., the flow of information processing).
you seem to be OK with information processing so I’m using that term instead of computation.... You are arguing against a particular theory of information processing, not proposing a whole new way of thinking about how the brain works
OK - so tell me what you mean by information processing. In cognitive science the phrase typically means the kind of processing required to support the inferential mechanisms implemented as representations required to overcome poverty of stimulus. This is certainly not what I think is going on. So what do you think is going on when you use the term?
If you just mean 'input is transformed in some way that shapes output' well, hmm. You might want to think of this as information processing but I wouldn't necessarily want to. The good old polar planimeter is a device that transforms an input (rotations on a wheel) into an output (a measurement of area) but it's not really doing anything I'd obviously want to call information processing. This is the point of all the dynamical systems examples and is the essence of the different perspective on what's going on.
In addition, the form of the required processing depends on the form of the information you start with. The planimeter does not take measurements of lengths and therefore it doesn't implement a multiplication process to get to area. Same with the barn owl; that neural system doesn't seem to be implementing any trigonometry, for example. So at the very least, and this is one of our main arguments, if you want to characterise what the brain is doing you'd better start with information and not neurons.
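To see that in miniature: here is a toy sketch of the mathematical content a planimeter mechanically embodies (Green's theorem). The real device senses none of these coordinates; its wheel's roll just accumulates the integral as you trace the boundary. The circle is an invented example.

import numpy as np

# Area from motion along the boundary alone: A = 0.5 * integral(x dy - y dx).
# No 'length times width' step appears anywhere; the traced path itself
# carries the information the device needs.
t = np.linspace(0, 2 * np.pi, 1000)
x, y = 3.0 * np.cos(t), 3.0 * np.sin(t)  # boundary of a radius-3 circle
area = 0.5 * np.sum(x * np.gradient(y) - y * np.gradient(x))
print(f"{area:.2f} vs true {np.pi * 9:.2f}")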
One important thing about behaviourism (besides how often it succeeds - way more than the average cognitive experiment :) is that it placed an emphasis on the organism's environment as the place to go to define 'functional' behaviour. You're right that Skinner's main weakness was that he had no theory of information (i.e. how we perceive that environment), but the primary insight, that we should look at what the world offers for behaviour before assuming it all comes from the brain, is still true today.
One little glitch in that whole "look at what the world offers" idea. Some perceptual scientists (see Don Hoffman's work) have come to the conclusion that the world is nothing like what we perceive. The idea is that perception is a user interface, much like your computer's desktop environment, that is designed to hide the truth. Evolutionary simulations back up the claim. If true, the environment itself is a creation of the computational mind. But that's an entirely different debate.
Some perceptual scientists (see Don Hoffman's work) have come to the conclusion that the world is nothing like what we perceive.
At one level, this is automatically a bad idea. If perceptual information isn't really about the way the world is, then why don't we die horribly more often?
At another level, however, this is just the perceptual bottleneck. The world is dynamical (i.e. best described in terms of both motions and forces). Perceptual information is kinematic (motions only). So perceptual information cannot be identical to the world it is information about. However, it can be specific to that world (see Gibson, Turvey, Shaw, Reed and Mace plus Runeson); kinematics can specify dynamics.
Of course, this means that we don't organise our behaviour with respect to the world per se, but with respect to the information about the world we are detecting. (This is the important bit behaviourism was missing). This idea has a lot of empirical support; I've done a lot of work on coordination dynamics, for example and helped show that coordinated rhythmic movement is organised with respect to the information for relative phase.
Long story short, this point is basically true but while interesting and important there are solutions.
I guess I always assumed that embodied cognition, most crucially, makes claims about the nature of *concepts*, and by extension the nature of thought, language, etc. Concepts - or so I thought - consist, on such a view, of (lists of) sensorimotor features, because of the (in my view flawed and even false) assumption that sensorimotor processing is in some sense 'closer' to the world. Sensorimotor processing is perhaps closer to the body/world in some completely pre-theoretical and (misleadingly) intuitive sense, but when you look under the hood, the infrastructure of sensorimotor processing is highly complex, highly abstract, distal to the perception/action surfaces, relentlessly inferential, and so on.
In any case, if concept=bundle of sensorimotor features, and NOTHING ELSE (if there is something else, like a prototype, then you have given up on the basic assumptions and introduced an abstraction with a different format ... cause you have to specify the axes over which you average), then these features have to have (a) causal force in inference and (b) some way of combining, since a/the key ingredient of cognition and language is compositionality. This latter criterion is, to my knowledge, very very tricky on the most straight-up embodied view.
This is all very close to the *ooold* "picture theory of meaning" which has some pretty serious shortcomings. Notably: compositionality.
Caramazza and Mahon have written lots of important stuff on this, and Randy Gallistel's Memory and the Computational Brain is also helpful. Jackendoff, in various places, also gets very explicit about the nitty-gritty. One really has to spell out in detail the nature of a representation, in a way that (conceptual or lexical) representations can do the work they demonstrably do, in terms of storage, inference, sensorimotor interfaciness, etc. How this is done in neural tissue, i.e. at the implementational level of analysis, is completely unclear.
There is no debate that sensorimotor features are associated with concepts and words, and in that sense there is 'weak embodiment.'
But whether the list of sensorimotor features suffices as a theory of concepts and words is very doubtful.
David- I too thought embodiment was just a new way of thinking about how concepts are represented. That there may be more involvement of lower-level sensory and motor systems than previously thought. That's an interesting idea worth investigation. Like you, I think it has serious limitations. Then I started reading claims that embodiment was a "post-cognitive" revolution, a new model of the mind, and an alternative to computational approaches. That's what disturbed me because you can't get past the retina (or a single neuron for that matter) without some form of computation.
Andrew- Don't be like me. You should actually read about a theory before critiquing it offhandedly. :-) "Perceptual interface theory" as Hoffman calls it says that we don't see the world veridically. That doesn't mean that we don't see it usefully for survival. In fact, it argues that we don't see it veridically precisely because it is MORE USEFUL for survival to see it non-veridically. He uses a computer desktop GUI as an example. That file on your desktop isn't *really* rectangular, blue, and in the top left of your screen. It's just a handy interface that makes computer interaction feasible for everyday rapid use. If we had to deal directly with the machine circuits, voltages, diodes and so on computers would be unusable by most people and for most tasks. Ditto, Hoffman argues (and shows with evolutionary simulations), for perception.
"Well," you object, as many have to Hoffman's arguments, "if that car coming down the street isn't really there, why don't you step out in front of it?" Just because perception isn't veridical doesn't mean it isn't useful. You don't step out in front a car for the same reason you don't drag your file icon into the trash icon. You're not *really* putting the file inside a trash can, but the consequences of the action are real enough: the data will disappear.
Hi all,
I think there is a very important distinction to be made in this discussion, one that Shapiro pointed out in his textbook. David's ideas about the role of sensorimotor processes in conceptual processing are distinctly of the 'conceptualization' variety of embodiment; Andrew's ideas seem quite aligned with the 'replacement' variety. These two hypotheses, while both suggesting that the body and environment take precedence in explaining behavior (and therefore both enjoy the descriptive term 'embodiment'), do so for different reasons. Importantly, computation, perceptual inference, and representation are NOT at odds with the conceptualization idea, while they ARE at odds with the replacement idea. This very well may make the conceptualization and replacement hypotheses incompatible. Importantly, while conceptualization ideas can be explicated firmly in neuro-computational language, the replacement ideas cannot. This makes replacement embodiment more revolutionary (or radical), while the conceptualization theory is not so revolutionary; it is just a way of grounding abstract 'thought' (or more objectively, behaviors that we think reflect abstract thought, such as language) in the sensorimotor experiences of animals (i.e. the interaction between brain, behavior, and environment).
Importantly, whether we take a conceptualization or replacement approach (or neither) has major consequences for how we describe the owl's sound localization capacity, and for what predictions we make about it. And I think it is at this point that the distinction becomes important.
Thanks, Heath. I stick to my original point, though: if we all agree that the delay lines in the owl (or any neural architecture, or any neuron) are processing information, then the replacement variant of embodiment is just traditional cognitive psychology in new clothes.
I think there are certainly strong arguments for that perspective and I could certainly agree with you; in general, I find explanations with computations and representations quite useful, too.
I do wonder if traditional cognitive psychology and radical embodiment are so different in their epistemology that, though they attempt to explain the same thing, they are NOT mutually exclusive. Light behaves as particles or waves, depending on how it is measured. Do we have a case where a behavioral phenomenon (e.g. sound localization) behaves as a computational process or as a direct perception process, depending on how you measure it (i.e. from the traditional perspective or the radical one)? If this is so, then traditional cognitive psychology could be internally coherent and predictive, and so could radical embodiment, but neither one provides the BEST answer (i.e. light is not particles OR waves). I think this line of thought takes us afield of the phenomenon in question (i.e. sound localization), and it does get pretty thick with a philosophy that I am not an expert in, but I do wonder if there is value to this notion for practicing experimental psychologists. (Am I saying 'can't we all just be friends?!' Maybe a little, but I am thinking about the practical consequences for doing science!)
Interesting thoughts. First, yes, let's all be friends! I like the debate, the ideas are interesting, and no matter who is right or wrong or in between we'll learn something. That's the game we play and it is never personal--for me anyway. Interesting story: I visited ASU and Art Glenberg's "Laboratory for Embodied Cognition." While chatting with him and his lab I was telling them all the reasons they were wrong in my view and Art was rebutting, etc. The students were getting flustered, angry, frustrated. Art and I had a great time. After my department talk, Art invited me to his house, even offered to have me stay the night. He's a great guy and even though I think he's dead wrong, I'd happily hang out with him.
Anyway, I find it interesting and telling that people who are actually trying to understand how owls localize sound or humans detect visual motion don't care whether the system is embodied or symbolic or whatever you want to call it. They simply try to figure out how it works. The debate I'm having with Andrew seems to be more about what to call delay lines than about whether there are delay lines. If that's where we end up, the philosophical distinction is vacuous.
Couple of things: Heath is exactly right about the conceptualisation vs replacement hypotheses and the fact that they are distinct. The former is about grounding representations and the latter is not. I am in the replacement camp because, as Sabrina and I argue in our paper, we think allowing any embodiment removes any need for representation by definition.
Heath, I don't think representational and non-representational accounts are like a particle/wave thing - I think that they are entirely different and incompatible (and that the former is wrong :)
Anyway, I find it interesting and telling that people who are actually trying to understand how owls localize sound or humans detect visual motion don't care whether the system is embodied or symbolic or whatever you want to call it.
I think they should care, for the reason that even if they don't explicitly talk about it, their work is still being guided by a theoretical framework. If that framework is wrong then they will end up in dead ends; and if they don't acknowledge the role of theory in guiding their empirical work then they won't know how to get out of the dead end. There is no such thing as effective theory free science.
I don't currently agree that the owl system is processing information, at least not in a way that makes 'processing information' an explanation of what's going on.
And for the record I also don't take these arguments personally; if my work can't stand up to criticism then I'm doing it wrong. (And I've also recently interacted with Glenberg in the literature; I think he's completely wrong too but we had a good debate and he was extremely nice throughout the whole thing. I've also freaked out students by having an excellent blazing row with a colleague! :)
Re Hoffman: the problem with this approach is that you need to constrain your fake impression of the world according to some rule set that actually enhances your survival. Where does this rule set come from? How stable is it over short and long term changes to your environment?
The GUI example actually reveals all these problems. Software interface design is actually an amazingly hard problem because the environment you are controlling has few if any regularities, let alone any laws. There are lots of ways to make the GUI, some work for a while or for a limited use, and they are all very fragile and can get screwed over easily.
What you need to know to enhance survivability is what's happening in the world right now. You need information that is as closely 'about' that world as you can get. One large class of information we use is that which can specify action-relevant dynamical properties of the world. It exists and we use it because it is very stable and extremely functional - it is entirely about what you need to know, because of the lawful process that created it (event dynamics interacting with energy arrays such as light).
Any theory that states we systematically futz with the input in order to make it more useful needs a clear story about how we 'know' what to do to make the input more useful and where that 'knowledge' came from (scare quotes just indicating you can read 'knowledge' any way you like, not just some explicit rule set). Our Frontiers paper actually argues that as soon as you have a workable way to answer those questions, you actually end up with such good access to the environment that this kind of representational, inferential futzing is no longer required. This is why we think radical embodiment is actually kind of inevitable.
I disagree that I'm dismissing this offhandedly; I'm dismissing it on theoretical grounds, which is something psychologists are not usually comfortable doing. I think theory is a strength, not a weakness, but it always sounds wrong to psychologists who want data instead. Data are fine, but sometimes it's not the point.
Regarding Hoffman--
Andrew Wilson says, "Where does this rule set come from? "
Charles Darwin says: "natural selection."
Don Hoffman agrees with Darwin.
Regarding information processing--
Andrew says, "I don't currently agree that the owl system is processing information, at least not in a way that makes 'processing information' an explanation of what's going on."
Ok. You don't like "computation." You don't like "information processing." Tell me what term you prefer and provide a definition.
You say computational neuroscientists need a theory: "There is no such thing as effective theory free science." I agree completely. But they have a theory. Delay lines function as coincidence detectors such that the location of a stimulus can be represented by the activity of the network. You can build a functioning model that does this. In fact, in your smart phone is a chip developed by Lloyd Watts that uses this neurocomputational mechanism to improve the sound processing capabilities of your device. It is a strong theory with proven real world applicability. You just don't like the terms they use to describe what is going on.
Ok. You don't like "computation." You don't like "information processing." Tell me what term you prefer and provide a definition.
If I had to channel the embodied cognition perspective, I think they would say that their term is physics. I think they assume everything in psychology reduces to physics, so they think that labels like "information processing" or "computation" or "cognition" or even "perception" are just convenient ways of organizing phenomena, and don't really exist.
In fact, in your smart phone is a chip developed by Lloyd Watts that uses this neurocomputational mechanism to improve the sound processing capabilities of your device.
I should hope the computational solution works in my phone - it's a computer! That success has no bearing on whether the owl brain is computing or not.
I actually don't know what term to use any more. I'm not averse to information processing at one level, although thinking about the owl system in particular, I don't see a computation being implemented and it's not clear that there is processing going on. I see a device with a dynamic that interacts with information in a way that produces a functional behaviour for the owl. This is sort of processing, but it's not the way the term is usually applied. No one ever says a hammer is processing information as it implements 'falling in a gravitational field', for example.
William is right to an extent; the (replacement style) embodied term is much more grounded in physics. It's not reduction to physics (that is a phrase loaded with history I don't think applies here) but it is about treating nervous systems as physical devices rather than computational ones.
The nervous system does connect information to behaviour; I'm just becoming more convinced as this thread goes on that 'information processing' is not the right way to frame it. I'm still thinking, though.
The point about the smart phone chip is that the information processing principle, abstracted away from its implementation, has explanatory value and real world applicability. This shows that the information processing level of description is doing explanatory work.
The point that underlies traditional cognitive science/information processing approaches is that with the right arrangements of matter (e.g., neurons) a system can be induced to *transform* energy (i.e., information, such as interaural time differences, ITDs) in such a way as to accomplish a task (an orienting head turn). Honestly, I think it is fine if you prefer to describe what is going on at a physical level: cochlear waves mechanically stimulate hair cells, which induces currents that integrate, which causes action potentials. Probably you'll need to reduce it down to molecular processes to get closer to the causal world you envision. That's called molecular biology and physiology and it is extremely valuable, but it is a terribly clunky level at which to model cognitive processes.
What cognitive psychologists have noticed is that you can capture a lot of generalizations if you work at the level of information processing (aka computation) and make a lot of progress understanding visual object recognition, hearing, language, motor control, etc. If it really is more a level-of-description thing, then replacement embodiment isn't an alternative to computational cognitive models, it's a hunt for lower-level implementations. In which case you need to switch fields and study physiology, anatomy, molecular neurobiology or maybe physics. ;-)
Incidentally, those of us working on cognitive neuroscience take our job to be the effort to uncover the linking between biology and computational models. We value both and want to understand their relation. To my mind this is much more productive than being exclusively computational or exclusively physiological. You can learn from both levels of description.
@Greg, I think there's a problem conflating success in modeling with explanatory power. An example from an unrelated field is 'rational choice theory' in economics. It's actually fairly good at modeling the behavior of populations (up to a point). But it in fact does not even approximate how real individual people make decisions.
There are many instances when a success in modeling does not necessarily mean that you've provided an explanation. It's useful to keep in mind the 'all models are wrong but many of them are useful' dictum. So, I'd argue that there's a difference between uncovering "the linking between biology and computational models" and computationally modeling observations of biological phenomena. I think the latter is incredibly useful but the former is fraught with too much potential for conflating model with reality. Linguistics is full of exactly these sorts of problems as I tried to show in this post.
@Andrew Wilson I tend to agree with you that information processing is too tied to the symbolic computation model. But if you go back to the Shannonian basics, it can be a very powerful way of thinking about signal transmission. It allows for some mathematical modeling of simple interactions (maybe even a hammer hitting a nail). However, most people think it means processing a series of if-then transformations between strings of bi-polar symbols. And it's very unlikely that this is a very good picture of what the brain is doing (as far as I can tell from afar) and definitely not what goes on in language (as I can tell from very up close).