When fMRI studies are designed to uncover discrete representations of features in language cortex, or MEG data are interpreted by appeal to underspecification notions of featural representation, we fall into the isomorphism fallacy: “to take the products of description and assign them explanatory, causal status” (Bellugi & Studdert-Kennedy, 1980, p. 92). Real explanations must come from “principles that are independent of the domain of the observations themselves” (Lindblom, 1980, p. 18).
The linguistically driven search for memory structures in the human brain that map featural entities could be compared to neuroethologists investigating echolocation in the bat and claiming to find areas showing [+fast/-fast] or [+far/-far] ‘features’ in auditory regions of the bat’s brain. Such descriptive labels are no substitute for real-time explanations, such as auditory neurons sensitive to the Doppler shifts in the returning second-harmonic (60 kHz) echo, or to the time delay of the echo relative to the emitted pulse. Acoustic signals shaped by the laws of physics are what the brain listens to and what underlie the perception and ultimate representation of sounds, whether they are contrastive segments of a human language or species-specific sounds heard by bats. Admittedly, such sounds are arranged along a continuum and do not lend themselves to binary classification, but the analogy should hit home. Ultimately, the physical signals that shape the information-bearing parameters of speech segments are all we need to concern ourselves with. Combination-sensitive auditory neurons do not care about featural labels. Taxonomic classifications of sound systems are meant for textbooks and box-and-arrow functional models of language structure, not biological tissue.
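For concreteness, the two physical quantities invoked above, the Doppler shift of the returning echo and the pulse-to-echo delay, are simple to compute. Here is a minimal sketch; the 60 kHz second-harmonic value comes from the text, while the function names, the flight speed, and the echo delay are my own illustrative assumptions.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def doppler_echo_frequency(f_emit_hz: float, v_bat: float,
                           c: float = SPEED_OF_SOUND) -> float:
    """Frequency of the echo heard by a bat flying at v_bat (m/s)
    toward a stationary reflector: a two-way Doppler shift."""
    return f_emit_hz * (c + v_bat) / (c - v_bat)

def target_distance(echo_delay_s: float, c: float = SPEED_OF_SOUND) -> float:
    """Distance to the reflector from the pulse-to-echo delay;
    the sound travels out and back, hence the factor of 2."""
    return c * echo_delay_s / 2.0

# Hypothetical example: a 60 kHz second harmonic, a bat flying at 5 m/s,
# and an echo arriving 10 ms after the emitted pulse.
f_echo = doppler_echo_frequency(60_000.0, v_bat=5.0)
print(f"echo frequency: {f_echo:.0f} Hz")            # upward-shifted echo
print(f"distance: {target_distance(0.010):.3f} m")   # 1.715 m
```

These are exactly the continuous, physically lawful parameters (an upward frequency shift of roughly 1.8 kHz here, and a range of under two meters) that combination-sensitive neurons are described as tracking, with no binary feature labels anywhere in the computation.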
Bellugi, U., & Studdert-Kennedy, M. (Eds.) (1980). Signed and spoken language: Biological constraints on linguistic form. Life Sciences Research Report 19, Report of the Dahlem Workshop, Berlin, 24-28 March. Verlag Chemie.
Kenstowicz, M. (1994). Phonology in Generative Grammar, Oxford: Blackwell.
Lindblom, B. (1980). The goal of phonetics, its unification and application.
Phonetica, 37, 7-26.
Mielke, J. (2008). The Emergence of Distinctive Features. Oxford: Oxford University Press.
Would Prof. Sussman make an analogous claim about valence in chemistry before the “reduction” to quantum mechanics? Was valence not explanatory then? Or take Mendelian genes before Watson and Crick. In retrospect, I think we would say that valences and genes paved the way for fruitful unifications as the “reducing” sciences came into their own by radically changing. Chemistry and genetics had it basically right; physics and biochemistry caught up, and a fruitful unification ensued. Why shouldn't the same be true for phonology and neuroscience? Of course it may not play out the same way, but there is nothing odd in thinking that it well might. There are certainly no a priori arguments against this possibility, at least none that Dr. Sussman provides.
So when one bat wants to warn another bat that it is about to fly into a wall, does it show it the spike train of an auditory neuron sensitive to the Doppler shifts in the returning second-harmonic (60 kHz) echo?
By the same logic, a scientist should only worry about the physics of electronics and the circuit diagram of my computer in trying to explain what is happening to it when I type this. And yet, most people would consider explanations of the computer's behavior at that level of analysis to be completely uninformative.
Alternatively, a physicist might even say that all this talk about “brains,” “neurons,” and whatnot is just a descriptive vocabulary for different configurations of atoms and the rules that constrain them. By the same logic, “biology” is meant for textbooks describing the taxonomy and functional models of assemblies of carbon-based elements, but it is not a real explanatory model of actual physical things.
Arguing from a reductionist perspective misses the point I was making.
Of course that approach was beneficial for real scientific domains such as chemistry, genetics, and physics. Phonology is a different story; it is not quite a hard science. The claim that segments can be broken down into “bundles of features” seems to suggest a reductionist step, but on careful examination one can argue that feature systems grossly oversimplify the facts, and hence “expand” rather than reduce. Let me explain. Take an articulatory feature along the place dimension, such as ‘velar’. It specifies tongue occlusion for stops (/g/ and /k/) against the posterior area of the hard palate. So this one descriptive label stands for a myriad of tongue and jaw muscle activations. In addition, the exact place along the palate is highly variable and strongly influenced by the following vowel (compare ‘geese’ and ‘goose’). So [+velar] reduces nothing. It provides a superficial label that glosses over a host of motoric and acoustic-based realities. I find it inconceivable that such a subjective and oversimplified descriptive label could have actual instantiation in brain tissue. Besides, linguists do not even agree on what the underlying ‘phonological primitive’ organizing the representational system of language structure is: is it auditory or articulatory? That debate is in its sixth decade. Until the concept of features gains the scientific integrity of valence electrons or genetic material, it is best kept to formal gamesmanship, and away from the wet stuff.
To Anonymous #1:
It's very simple: the observing bat yells out to the bat about to crash into the wall, “turn, turn, you are [+NEAR]!” Another use for features!
I think the points made reduce to two things:
1. The categorical things that we ascribe to brain function may be incorrect in terms of their isomorphic mapping.
2. That the categorical things we ascribe may in fact be continuous.
The first is trivially true. We create hypotheses and support them through the collection of evidence or falsify them by the same method. The brain is indeed a complicated place, but that shouldn't hinder us from examining it.
The second is true in some specific cases, wrong in others, and empirically testable in most. In this particular case, and in reference to the most recent comment, I don't think the argument is that distinctive features come pre-packaged, but rather that this is the (or a) level of abstraction pulled away from a messy, continuous signal.
And while I agree that the production/perception of these features "...[provide] a superficial label that glosses over a host of motoric and acoustic-based realities", these motoric and acoustic aspects have to make contact with an abstract system that is then used for things like phonological processes, syntactic operations, etc.
In short, I think the claim is that distinctive features offer a linking hypothesis between the continuous world of acoustic and motor variability and the more discrete world of linguistic operations. And while this may in fact be completely wrong, experimental evidence will determine its fate one way or the other.
A specific reference for “the linguistically-driven search for memory structures in the human brain mapping featural-based entities” could make the discussion more focused and less sarcastic. Let's consider Poeppel et al. (2008) and Poeppel & Monahan (2011). The “box and arrow functional models of language structure” then refers to Fig. 1 in the latter paper (a reproduction from a 1962 paper by Halle and Stevens), while the bat dichotomy mirrors Fig. 1d in the former. I don't believe in amodal storage of words in memory in terms of distinctive features. Yet I find Poeppel and coworkers' working hypothesis, as shown, for example, in the “box and arrow” model in Fig. 4 of Poeppel et al. (2008), very interesting and useful. If I may be a bit sarcastic, the moral for the neuroscience of language is this: it is better to start with a wrong but linguistically based hypothesis than to start somewhere in the air.
Poeppel, D., et al. (2008). Phil. Trans. R. Soc. B, 363, 1071.
Poeppel, D., & Monahan, P. J. (2011). Language and Cognitive Processes, 26, 935.