If you happen to be in Durham this weekend, come and hang out at this party:
From Neuron to Language
States of the art in the cognitive neuroscience of language
A couple of departments at Durham are hosting a little symposium on this stuff, with the hope of stimulating philosophical discussion. Speakers include Richard Wise, Angela Friederici, Gary Marcus, Lolly Tyler, and some others.
Here is the idea: "All invited speakers are asked to take a step back from their day-to-day research and assess what has and has not been achieved in terms of an understanding of the brain basis of the human linguistic mind. The overall aim of the symposium is largely philosophical, insofar as we wish to understand the mind in terms of the brain, yet the approach is bottom-up in that we approach the issue from the experimentalist’s point of view."
If all of this happens with appropriate levels of lubrication (is gin a lubricant?), it could be quite entertaining. As you can predict, I will argue that it is not at all clear whether there is a game in town at all. Hope to see you there.
Sounds like fun. My invitation must have been spam-filtered. :-) I agree, though ... I don't think we understand anything substantial yet. All we seem to be able to do is document the "neural correlates" of something. I can tell you where a word is probably recognized, but I still don't know how a word is recognized. We're still just doing geography and haven't yet gotten to the underlying geology.
Yup, we are still doing the "homework problem" -- i.e. figuring out where to even look.
A key problem is that, too often, we tend to mistake functional localization for explanation.
I call this the 'cartographic imperative.' I just wrote a brief paper about this for the art history journal edited by a friend of mine. I can send it to you, if you like. It's called "The cartographic imperative: confusing localization and explanation in cognitive neuroscience"
It's light and easy reading, promise.
That was one cosy meeting, and a nice way of spending a weekend indeed. Thanks for advertising it! Please keep these kinds of updates coming on the blog.
If I'm not mistaken, David, you were advocating an approach in which we establish relations between the elementary linguistic operations, without which linguists "cannot live", and the elementary computations provided by the basic modules of brain neural circuitry, in the sense of, for example, Douglas and Martin (2007). Though I'm sympathetic to the idea in general, I don't think it is likely to be very productive (and the small impact of Gary's book, which you have discussed, shows just this). You have partially discussed some of the reasons, but I think it's worth repeating them here.
- it is difficult (almost impossible) to find the relevant level of description between linguistics and the brain if we want to be "neuroscientific";
- due to the emergent properties of a system consisting of a large number of elements, it is useless to model the behaviour of the system based on the behaviour of its elements;
- at the same time, some basic cognitive functions might be much more important for language processing than neurophysiological ones. For example, Gary Marcus relies much more on general memory constraints than on LTP/LTD when inferring the treelets.
The scale problem is daunting enough in itself, but there is more. The simple circuits you were referring to underlie far too much. If we accept the philosophical standpoint that the brain is the substrate for all our mental activities, then we have to accept that these circuits can underlie almost anything. Even more so considering the number of non-innate operations that become extremely automatized when humans learn things like quantum physics, drawing, ping-pong, or refine their music-composing skill. Thus the question "what can the elementary circuits do for language?" becomes uninteresting, since the answer is likely to be "just about anything".
I would argue that a much more relevant question for language is "what kinds of operations can be learned in an unsupervised manner?". If language is in any way shaped by the computational limitations of the brain, the bottleneck is most likely to manifest itself in language transmission between generations.
An interesting research agenda would then be to take the learning mechanisms we know the brain to be capable of, combine them with linguistic data, and try to understand what kinds of basic units and operations we have to add in order to derive the linguistic primitives, including both entities and the operations on them. This is not to say that we can derive everything from statistics. Statistics is useful when learning new classifications of data, and much less so when learning new operations that could be applied to the data. In programming terms: statistics provides new classes of objects, while we are longing for new methods.
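To make the class/method analogy concrete, here is a minimal toy sketch (all names here are purely illustrative, not anyone's actual model): a statistical step can carve data into new categories, but the operations applied to those categories have to come from somewhere else.

```python
# Toy illustration: unsupervised statistics yields new categories
# ("classes"), but not new operations on them ("methods").

def cluster_1d(values, threshold):
    """A naive statistical step: split values into two categories
    around a threshold -- this is 'discovering new classes'."""
    low = [v for v in values if v < threshold]
    high = [v for v in values if v >= threshold]
    return low, high

def combine(category):
    """The operation applied to a category must be supplied
    separately -- statistics gave us classes, not this method."""
    return sum(category)

low, high = cluster_1d([1, 2, 9, 10, 3], threshold=5)
print(combine(low), combine(high))  # the operation comes from outside the statistics
```

The point of the sketch is only that `cluster_1d` (the statistics) and `combine` (the operation) are logically independent: no amount of clustering, by itself, produces the `combine` step.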
So my dream agenda would look the following way:
- accumulate those amazing things that brain circuitry can do with data, like categorization, pattern extraction, memory, etc.;
- try to formulate learning mechanisms that allow new operations on data to be learned;
- couple these abilities with the linguistic data available to the child and try to understand what kinds of linguistic primitives can be derived from this set. When a certain primitive proves underivable, either assume that it is innate or, better, try to come up with a learning mechanism that would allow it to be learned;
- repeat if necessary.
Where does the brain science come in?
Cellular neuroscience keeps suggesting things like LTP/LTD, dendritic computations, etc. These, however, due to the emergent properties of huge sets of neurons, can be far removed from what is needed. (Why not look at the level of gene expression in the individual neuron as the basis of language learning? I bet one could find beautiful correlations with the U-shaped learning curve! It might be meaningful, but certainly not at the contemporary level of understanding.)
Much more meaningful are the attempts to link the computations that run in specific sensory systems with the relevant language functions. And from this point of view, the "homework problem" of finding the locations of specific language processes is truly important.
So not "what can the neural circuits do?" but "what can the brain circuits learn?". I would be really curious to hear your reply.
And thanks again for the wonderful term “interdisciplinary cross-sterilization”.
PS It would be extremely nice if you could put your presentation on the web, or maybe send it to me. Some things that seemed self-evident during the talk are starting to fade away.
Hi Daniel, thanks for your stimulating comment. And with respect to the slides, send me an e-mail and I'll be happy to send you a PowerPoint or PDF version.
I agree with some of your points and disagree with others. The challenge I was trying to characterize concerns the *really* long-term goal of deriving a "unified" research program that links linguistics and neurobiology. I completely agree with your point that, given what we know at the moment, these links are not particularly relevant to what we would like to understand. For example, the notion of a canonical cortical microcircuit, even if it scales up and is demonstrated throughout the brain, will be very difficult to link in precise ways to linguistic computation.
I think the ideas you outline with respect to learning are certainly a possible approach. That being said, for the moment I would advocate a different approach: if it is a fact of the matter that brain computation is pretty generic, then the 'specialized' nature of what is going on (for example, visual object recognition is not like compositional semantics) must come from the data structures. Therefore, what I would try to go after is the nature of the data structures that enter into computation. On this view, a much richer and more sophisticated understanding of lexical representation will be absolutely essential to get a grip on how the brain deals with language.
My concern with pursuing the research program strictly from a learning point of view is that we just don't have the neurobiological vocabulary to tell any interesting story beyond what we know about LTP/LTD. In fact, Randy Gallistel makes this point in a number of his recent papers -- that is, we have no idea how information is stored and carried forward in a way that permits computation over representations. But maybe I'm misunderstanding the specifics of your proposal... perhaps you're arguing that we should try to discover biological primitives by focusing on what learning tells us. Is that a better characterization of your idea?