Friday, May 1, 2015

Is there an evolutionary model for language? The case for "Within Species Comparative Computational Neuroscience"

Comparative Neuroscience is entrenched in our methodological psyche. We regularly use phylogenetically related animals (mice, cats, monkeys) as model systems for understanding our own brain. Hubel and Wiesel shared a Nobel Prize "for their discoveries concerning information processing in the visual system," not "the cat visual system" (their model animal), because we believe evolution conserves neurocomputational principles, including coding strategies, architectures, and so on. Studying mice, cats, and monkeys, we believe, teaches us about the human brain.

For decades, maybe centuries, language scientists have lamented the lack of an animal model for language. In fact, this has been our excuse for why vision scientists seem to have made so much more progress in mapping the neural foundations of their system than we have in mapping ours. But is it really the case that we don't have an animal model? Some researchers will quickly point out that birdsong or ultrasonic vocalizations in mice can provide a useful model.

But I suggest we can do better, or at least do more, by looking for evolutionary homologies to our language system not in other species but in our own brain.

Here's the basic idea:

(1) Neural systems, like the species they inhabit, have a long evolutionary history.
(2) The evolution of neural subsystems (vision, hearing, olfaction, memory, emotion, social cognition, language ...) was not uniform but rather klugey.
(3) The evolution of a given subsystem builds on its neurocomputational ancestor systems.
(4) Therefore, just as we find homologies in the structural or functional design of related species that reflect their evolutionary lineage, we should find neural homologies in computations and architectures that reflect their neurocomputational lineage.

Language is an interesting case because it evolved so recently compared to other neural systems. Consider that the earliest estimates for the first stages of language evolution are in the range of 1.75 Mya, and more typical estimates place it roughly at the appearance of H. sapiens, about 100,000 years ago. But even if we assume the rudiments of a neural system for language were developing 2 Mya, it is quite clear that this system evolved in the context of an already rich neurocomputational system, with highly developed sensory and motor, memory, conceptual, and social systems in place. Specifically, our lineage split with our very bright primate cousin, the chimpanzee, ~5 Mya, which leaves at least an additional 3 million years of brain evolution between our common ancestor with chimps and the (earliest stages of) language evolution.

What this means is that language circuits likely built on top of, and therefore should show homologies to, other systems in our own brain.  And this opens the door to a Comparative Computational Neuroscience program for language: looking to non-linguistic neural systems for clues to the brain organization for language.

This is precisely the approach that gave rise to the Dual Stream model for language, which argues for a homologous organization between language and non-linguistic sensory systems such as the dual stream models of vision and hearing (see here for similar arguments). Recent work suggesting shared computational principles behind motor control and linguistic processes in speech production is another example. The fact that we are finding what look like homologies provides some evidence that this approach might hold promise.

This does not mean that language can be reduced to sensorimotor circuits any more than the human mind can be "reduced" to the macaque's.  The approach is in fact quite agnostic to the degree of specialization of a system compared to its neurocomputational cousins, making it a potentially useful methodological framework for both the language-is-special and the language-is-not-special crowds.  All it really says is that we can learn something about language systems from studying hearing or vision or motor control, just like we can learn something about human vision from studying cats.

1 comment:

William Matchin said...

I do find this approach compelling and well-grounded. I think it's quite plausible to look across these domains for homologous properties that are informative. Not least because I was your student, but not most either!

However, this depends on what we are looking at, right? I have no problem with the sensory-motor properties of language behavior being shared with vision. But what about sentence processing? I think there are potentially informative homologies, but the steps involved in processing a sentence, particularly in building its hierarchical structure and interpreting it, are not as easily compared with non-language systems as the basics of motor control are. In fact, syntax looks distinctly unique in its ability to embed things of the same type inside itself.