Friday, October 16, 2009

Many topics, much data -- any consensus?

Fadiga, 10:55am
"in Broca's area there are only words" - huh?? I didn't get that claim.

But two minutes later:
"Broca's region is an 'action syntax' area" -- this seems like a pre-theoretical intuition, at best. Needs to be spelled out.

Unfortunately, no analysis was provided at the end. We saw a series of amusing studies, but no coherent argument. The conclusion was that "generating and extracting action meanings" lies at the basis of Broca's area.

Now Greg: first point, he separates action semantics and speech perception. Evidently, he is taking the non-humor route ... He is, however, arguing for the specific claim that mirror-neuron arguments, as a special case of motor theories, are problematic at best.

Greg's next move ('The Irvine opening') is to examine the tasks used in the studies. The tasks are very complex and DO NOT capture naturalistic speech perception. For example, response selection in syllable discrimination tasks might be compromised while perception per se remains intact.

His next - and presumably final - move ('The Hawaii gambit') is to show what data *should* look like to make the case. And he ends on a model. More or less ours. (Every word is true.)

At the risk of being overly glib, Luciano won the best-dressed award, and he won the Stephen Colbert special mention for nice science humor. He had the better jokes. Greg, alas, won the argument. Because of ... I think ... a cognitive-science-motivated, analytic perspective. To make claims about the CAUSAL role of motor areas in speech, the burden of linking to speech science is high, and that link is not well made in the mirror-neuron literature.


Anonymous said...

Why don't you folks make your own experiments?

Marc Ettlinger said...

To play devil's advocate, Fadiga should have said something along the lines of Grodzinsky's general point: at least they have a theory of how to solve the central problem in speech perception, the inverse problem, even if it's wrong. There is, as yet, no way to map from acoustics to phonemes or features, which was alluded to early on.
I believe H&P has this as-yet-unsolved function as a central part of the theory. Exemplar theory (my personal fave) has a solution, too: the goal is to map to words, which have full acoustic representations in the brain, rather than abstract featural representations.
I don't see that H&P have a fleshed-out hypothesis here, as far as I understand it.
Please set me straight if I'm being ignorant.
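To make the exemplar idea concrete, here is a toy sketch of what "mapping straight to words" could look like: each word is stored as a set of full acoustic exemplars (here, invented fixed-length feature vectors), and a new token is recognized by nearest-neighbor comparison against the stored exemplars, with no intermediate phoneme or feature level. This is only an illustration of the architecture Marc describes, not any published model; all names and data are made up.

```python
# Toy exemplar-based word recognition: lexical entries are clouds of
# stored acoustic exemplars; recognition is nearest-neighbor matching.
import math

def distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize(token, lexicon):
    """Return the word whose nearest stored exemplar is closest to the token."""
    best_word, best_d = None, float("inf")
    for word, exemplars in lexicon.items():
        for ex in exemplars:
            d = distance(token, ex)
            if d < best_d:
                best_word, best_d = word, d
    return best_word

# Hypothetical lexicon: two words, a few acoustic exemplars each.
lexicon = {
    "ba": [[1.0, 0.2, 0.1], [0.9, 0.3, 0.2]],
    "pa": [[0.1, 0.9, 0.8], [0.2, 1.0, 0.7]],
}

print(recognize([0.95, 0.25, 0.15], lexicon))  # prints "ba"
```

Note there is no abstract featural representation anywhere: the "solution" to the inverse problem is simply that the stored targets are themselves acoustic.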

Greg Hickok said...

I'd be happy to set you straight Marc. :-) So let's consider Fadiga's theory in detail. Here it is: "Speech comprehension is grounded in motor systems". He has no clear concept of what speech comprehension is, variously referring to the perception of phonetic features, to words, to syntax; he provides no definition of "grounded"; and he has a definition of the motor system that appears to include everything from Broca's area, premotor cortex, M1, and the posterior parietal lobe to the STS. I.e., Fadiga doesn't have a theory of how to map acoustics to phonemes.

As for H&P, we make no claim to having a theory of the mechanism of speech perception, just a theory of the broad neural architecture.

Greg Hickok said...

Geez, re-reading my last comment, I see that it sounds pretty harsh. It wasn't intended that way so let me qualify it before I make anyone mad. It is not an attack on Luciano or his theory, but on the idea that his theory is an explanation that solves the central problem in speech perception.

Fadiga's claim is that the motor system is fundamental for, or grounds, speech comprehension broadly construed. If he wants to lump together all processes ranging from phonetics to syntax and make claims about the motor system for these, that is fine. Further, as with H&P, I don't believe Fadiga is arguing that his proposal is a fleshed-out theory of speech perception.

Marc Ettlinger said...

I can't defend MT in good conscience since I think it's problematic for some of the reasons you mentioned in the talk. I also don't know that Fadiga was the ideal person to defend MT in the explicit ways we're talking about here, since I think he's less concerned with speech perception than with mirror neurons. So we probably need to move away from some of those comments.

Ultimately, MT (real MT) has a way of connecting two boxes of your model, from acoustic input to words, that is pretty well fleshed out with respect to why it is hard. I don't think a reasonable alternative was offered, since the acoustics-to-phonemes diagram I think you showed, which is the most sensible solution in a way, is actually computationally impossible without some other critical component: MT, exemplar theory, or some as-yet-unspecified approach. The broad model is extraordinarily useful, but I think Josef's point was that if I fill in one of the arrows, you can't tell me I'm wrong until you have your own explanation.

Fred said...

@Marc: I'm right on board with an acoustic (possibly-bigger-than-) word-based exemplar approach as being the sensible way to do this. See Greg, I told you such ppl existed! (OK, I may have put "bigger than words" in Marc's mouth).

@Greg: I replied to a comment of yours on an older post with questions about whether syllables can do the work you want them to...

Greg Hickok said...

Marc: let's be clear. MT does not solve any problems. All it does is say that motor gestures might solve the lack of invariance problem in speech perception. It doesn't specify *how* this problem is solved, i.e., how you get from acoustic input to motor representations.

So I completely disagree that MT has a way of connecting two boxes of our model.

If the MT actually provided an explicit computational account of speech perception and there was no alternative account that explained the data better, then Yosef's argument would hold. But the MT doesn't and so Yosef's argument doesn't.

Can you guys provide a reference or two for the exemplar approach you refer to?

Greg Hickok said...

Anonymous: Do you have a substantive point?

Fred said...

@Greg: a few good starting places are

Coleman, John. (1998). Cognitive reality and the phonological lexicon: a review. J.Neurolinguistics, 11. pp.295-320. (more about acoustic lexical representations than about exemplar stuff)

Coleman, John. (2002). Phonetic representations in the mental lexicon. In Durand & Laks (eds), Phonetics, Phonology and Cognition. pp. 96-130.

Johnson, Keith. (2007). Decisions and Mechanisms in Exemplar-based Phonology. In Sole, Beddor & Ohala (eds), Experimental Approaches to Phonology. pp. 25-40.

Also, J. Phonetics 31 (2003) is all devoted to exemplar-based approaches to language...

Marc Ettlinger said...

MT constrains the inverse problem by limiting the number of solutions to something tractable. If you have an articulatory system involved in perception, you can generate the acoustic signal that a potential articulatory representation would produce. I'm reminded of the way Dexter does his blood-spatter analyses on the eponymous show, if you watch it: he has his own bucket of fake blood that he splashes around to test hypotheses about how a murder was committed, based on the real blood pattern.
Without an articulatory system, yes, IDing a steady-state /i/ or discriminating /ba/ vs. /pa/ is possible, but anything real and/or difficult should be impossible, i.e., speech in noise or very natural, coarticulated speech. That may suggest it's more facilitatory than essential, but given the difficulty automatic speech recognition has had over the past 50 years, I'd say these 'hard' cases are the norm rather than the exception.
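The constraint Marc describes is essentially analysis by synthesis, which can be sketched in a few lines: search only over a small set of candidate articulatory configurations, run each through a forward model to synthesize the acoustics it would produce, and keep the candidate whose prediction best matches the input (Dexter's bucket of fake blood). The forward model and candidates below are invented stand-ins, not a real vocal-tract model.

```python
# Toy analysis-by-synthesis: invert acoustics to articulation by
# forward-simulating a small candidate set and picking the best match.

def forward_model(articulation):
    """Hypothetical articulation -> acoustics mapping (a stand-in)."""
    lip, voicing = articulation
    return [lip * 0.8, voicing * 1.1]

def mismatch(predicted, observed):
    """Squared error between predicted and observed acoustics."""
    return sum((p - o) ** 2 for p, o in zip(predicted, observed))

def infer_gesture(observed, candidates):
    """Pick the candidate articulation whose synthesized acoustics
    best match the observed signal (analysis by synthesis)."""
    return min(candidates, key=lambda c: mismatch(forward_model(c), observed))

# Two hypothetical gesture candidates, standing in for /ba/ vs /pa/.
candidates = {(1.0, 1.0): "ba", (1.0, 0.0): "pa"}
observed = [0.82, 1.05]  # noisy acoustics of a voiced token
print(candidates[infer_gesture(observed, list(candidates))])  # prints "ba"
```

The point of the sketch is only that the candidate set does the work: the inverse problem becomes tractable because you never invert the acoustics directly, you only compare forward simulations.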

Re: exemplar models:
Hawkins, S. 2003. Roles and representations of systematic fine phonetic detail in speech understanding.
is probably the best place to start and has myriad references therein.

Greg Hickok said...

I understand the logic of the motor theory, but an explicit account of how it works was never proposed, as we discussed previously. All it amounts to is the suggestion that motor information can constrain the solution space. So I could simply counter with my own suggestion, that the acoustic context constrains the solution space.

Or I could counter that motor information can constrain the solution space by modulating the auditory response, rather than requiring that acoustic information be mapped onto motor gestures.

MT simply cannot claim to be the only game in town.

Marc Ettlinger said...