The cognitive revolution (or better, information processing revolution) rejected the idea that behavior could be understood without reference to a contribution from the mind/brain. Through decades of experimentation and theory development, we have come to appreciate that the mind/brain works by computing (or better, transforming) information available in the environment (or stored in the mind/brain itself) as a means to control behavior. Call this the computational theory of mind. Models in this framework often abstract away from particular instances (tasks, experiences, actions) and develop abstract models of how the brain computes (transforms information). These often use mathematical symbols or other representational notation.
Radical embodied cognition rails against this view and makes arguments along these lines:
Computational/symbolic/mathematical models are descriptions of some phenomenon. For example, a falling apple doesn't actually compute the gravitational force as understood mathematically. The mind is the same. Just because you can describe, say, aspects of movement according to Fitts's law doesn't mean the brain actually computes the formula. And, generalizing, just because we can describe lots of mental functions using computational/symbolic/mathematical models doesn't mean the brain computes or processes symbols. Therefore, the mind doesn't compute; computational models are barking up the wrong tree; we need a new paradigm.
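To make the Fitts's law example concrete, here is a minimal sketch of the law in its common Shannon formulation, MT = a + b·log2(D/W + 1). The coefficients `a` and `b` are illustrative placeholders (in real studies they're fit to a particular person and task), and nothing about writing it down commits anyone to the claim that the motor system literally evaluates this expression:

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time in seconds under Fitts's law
    (Shannon formulation): MT = a + b * log2(D/W + 1).
    a and b are placeholder coefficients for illustration only."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# A farther or smaller target is predicted to take longer to reach,
# whether or not the brain "computes" anything like this formula.
near_large = fitts_movement_time(distance=0.10, width=0.05)
far_small = fitts_movement_time(distance=0.40, width=0.01)
```

The point of the example is exactly the one in the argument above: the formula is a compact description of regularities in behavior, and it earns its keep by predicting movement times, not by residing in the brain.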
Putting aside debates about what counts as computation, here's why these sorts of arguments don't change the computational cognitive scientist's research program one bit.
Falling apples don't compute, but an abstract mathematical description of the force behind the behavior led to great scientific progress. It is abstract mathematical descriptions that have pushed physics to such heights of understanding. If physicists rejected their theories just because apples don't compute, we would probably be too busy tending the farm to debate this silliness. Therefore, modeling cognition using abstract computational systems can (has!) lead (led!) to great scientific progress. Even if the mind isn't literally crunching X's and Y's, there is great value in modeling it this way.
No computational cognitive scientist (that I know) actually believes the mind works precisely, literally as their models hold. Chomskians don't believe neuroscientists will find linguistic tree structures lurking in dendritic branching patterns, nor do Bayesians expect to find P(A|B) spelled out by collections of neurons doing Y-M-C-A dance moves. Rather, we understand that these ideas have to be implemented by massive, complex neural networks structured into a hierarchy of architectural arrangements, bathed in a sea of chemical neuromodulators, and modified according to principles such as spike-timing-dependent plasticity. No one (that I know) is foolhardy enough to believe that the relation between our computational models and neural implementation is literal, transparent, or simple. In short, computational cognitive scientists use their models in exactly the same way physicists use math. To reject this approach because mathematical symbols aren't literally lurking in the brain is foolish.
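For readers unfamiliar with the P(A|B) notation, here is the abstract description Bayesians work with, written as a few lines of code. The numbers are illustrative placeholders, not data, and the claim is the same as above: this is a compact model of belief updating, not a hypothesis about what individual neurons store:

```python
def posterior(prior, likelihood, marginal):
    """Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B).
    An abstract description of belief updating; nothing here
    claims neurons literally represent these three quantities."""
    return likelihood * prior / marginal

# Placeholder numbers for illustration:
# P(hypothesis) = 0.01, P(evidence|hypothesis) = 0.9, P(evidence) = 0.05
p = posterior(prior=0.01, likelihood=0.9, marginal=0.05)
# p is 0.18: weak evidence raises a low prior, but not to certainty
```

The question of how anything like this computation could be carried out by neural circuitry is precisely the linking problem the next paragraph describes.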
Cognitive neuroscientists, also disparaged by the embodieds, are working on the linking theories, asking how tree structures or prior probabilities might be implemented in neural networks. Not surprisingly, the neural implementation models don't literally contain symbols. Instead they contain units (e.g., neurons) arranged into architectures, with particular connection patterns, nested oscillators, modulators, and so on, and often modeled after real brain circuits as best we understand them. We are doing well enough at neurocomputational modeling to simulate all kinds of complex behaviors.
I respect that radical embodieds want to see how much constraint on cognitive systems the environment and the body can provide, and that they want a more realistic idea of how the mind physically works (in which case I suggest studying neuroscience rather than polar planimeters). We have learned some things from this embodied/ecological approach. But given that even its subscribers don't deny that the mind/brain contributes something, we still need models of what that something is. And this is what computational cognitive scientists have been working on for decades with much success.
Carry on, you computational people. Let's check back in with the radical embodieds in 2025 to see how far they've gotten in figuring out attention, language, memory, decision making, perceptual illusions, motor control, emotion, theory of mind, and the rest. If they have made some progress, and I expect they will, we can then update our models by adding a few body parts and letting our robots roam a little bit more.