Run, don't walk, to your nearest library or computer terminal!
A new paper by Huan Luo and me just appeared in Neuron. Huan was a graduate student in the University of Maryland Neuroscience and Cognitive Science program and worked principally with me and Jonathan Simon. She finished her Ph.D. in 2006, and is now living in Beijing with her husband and two children. Yes, she was ridiculously productive in graduate school ...
This paper shows (IMHO) compelling evidence (based on single-trial MEG data) that speech is analyzed using a ~200 ms window.
Luo, H. & Poeppel, D. (2007). Phase Patterns of Neuronal Responses Reliably Discriminate Speech in Human Auditory Cortex. Neuron 54, 1001-1010.
How natural speech is represented in the auditory cortex constitutes a major challenge for cognitive neuroscience. Although many single-unit and neuroimaging studies have yielded valuable insights about the processing of speech and matched complex sounds, the mechanisms underlying the analysis of speech dynamics in human auditory cortex remain largely unknown. Here, we show that the phase pattern of theta band (4–8 Hz) responses recorded from human auditory cortex with magnetoencephalography (MEG) reliably tracks and discriminates spoken sentences and that this discrimination ability is correlated with speech intelligibility. The findings suggest that an ∼200 ms temporal window (period of theta oscillation) segments the incoming speech signal, resetting and sliding to track speech dynamics. This hypothesized mechanism for cortical speech analysis is based on the stimulus-induced modulation of inherent cortical rhythms and provides further evidence implicating the syllable as a computational primitive for the representation of spoken language.
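For the signal-processing-inclined, here is the rough logic of a phase-based discrimination analysis in a few lines of Python. This is a minimal sketch with simulated data, not the paper's actual pipeline: the filter settings, the nearest-template classifier, and all of the numbers are placeholders.

```python
# Minimal sketch of theta-phase discrimination (NOT the Luo & Poeppel pipeline;
# the simulated data, dimensions, and nearest-template classifier are illustrative only).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 200                                  # sampling rate (Hz), assumed
n_sent, n_trials, n_time = 3, 20, 600     # 3 "sentences", 20 trials each, 3 s

rng = np.random.default_rng(0)
# Fake MEG data: each sentence gets its own theta-band phase trajectory plus noise.
data = np.empty((n_sent, n_trials, n_time))
for s in range(n_sent):
    phase_course = np.cumsum(rng.uniform(4, 8, n_time)) * 2 * np.pi / fs
    data[s] = np.cos(phase_course) + 0.8 * rng.standard_normal((n_trials, n_time))

# 1) Band-pass to the theta band (4-8 Hz) and extract instantaneous phase.
b, a = butter(3, [4, 8], btype="bandpass", fs=fs)
phase = np.angle(hilbert(filtfilt(b, a, data, axis=-1), axis=-1))

# 2) Classify each single trial by circular similarity to each sentence's mean
#    phase pattern (leave-one-out would be cleaner; this keeps the sketch short).
templates = np.angle(np.exp(1j * phase).mean(axis=1))   # shape: (n_sent, n_time)
correct = 0
for s in range(n_sent):
    for tr in range(n_trials):
        sims = [np.cos(phase[s, tr] - templates[k]).mean() for k in range(n_sent)]
        correct += int(np.argmax(sims) == s)
print("classification accuracy:", correct / (n_sent * n_trials))
```

The point of the exercise is only to show what "phase patterns discriminate sentences" means operationally: if theta phase tracks the stimulus, trials of the same sentence share a phase trajectory and can be sorted by it.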
Thursday, June 28, 2007
I can be cranky - but am I crazy? Biolinguistics and Wikipedia ...
Wikipedia is great. I always hoped to appear there. But now I have ... and my feelings are mixed. My friend Dave Embick from Penn (maybe even more cranky than me, by the way) noticed this entry, and was (un)kind enough to let me know.
There is an entry in Wikipedia on "biolinguistics," a term that has been used in recent years in a variety of confusing (and confused) ways. As someone whose faculty appointments are in a biology department and a linguistics department, I feel that *something* about our work has to do with biology and linguistics (after all, we work with the brain part of biology and the language part of linguistics). But biolinguistics is for the most part concerned with more abstract problems, say stuff like "do the laws of growth and form as discussed by D'Arcy Thompson show specific effects on the architecture of the language system?" I have friends and colleagues who work on these issues in a serious way, and at some point some comments on 'good' biolinguistics (a research program with 'legs') versus bad biolinguistics (dead ends) are in order. There is, to be sure, good stuff to be done in that context.
In this entry, in any case, reference is made to the Fibonacci Series and the Golden Ratio, and it is intimated that these parts of mathematics have particular applicability to syntax. Maybe, maybe not. I think that this is an empirical question. But, as Dave points out, on one reading of the Wikipedia entry I am not just cranky, but maybe crazy, because I may not 'believe' in Fibonacci :-). Here is what is said:
"This approach is not without its critics. David Poeppel, the neuroscientist and linguist, has characterized the Biolinguistics program as "inter-disciplinary cross-sterilization", arguing that vague metaphors that seek to relate linguistic phenomena to biological phenomena explains nothing about language or biology. However, it was recently shown that syntactic structures possess the properties of other biological systems. The Law of Nature (Golden Ratio) accounts for the number of nodes in syntactic trees, binarity of branching, and syntactic phase formation."
Now, it is true that I think there are many serious conceptual problems with the way some questions are asked in the context of biolinguistics. However, if there are *really* results that show that detailed properties of syntactic structure follow from the Golden Ratio, I would like to know the linking hypotheses from Golden Ratio to neuronal circuitry to syntactic phase formation. The reason I am cranky is that I can't just buy into this way of talking about stuff. The reason that I am not crazy is that I simply want to see plausible accounts of how these different levels of description interact. So ... show me the money.
Wednesday, June 27, 2007
Flowbrain blog - interesting essay
Brad Buchsbaum has launched a new blog of his own with a pretty interesting essay, "The Four Ages of Functional Neuroimaging." Check it out! http://flowbrain.blogspot.com/
Tuesday, June 19, 2007
Response selectivity to speech in the left hemisphere
There's a figure in Hickok & Poeppel 2007 showing activations to speech sounds (CVs, etc.) across several recent functional imaging studies, most of which contrast these speech sounds with various acoustic control stimuli. The figure shows bilateral activation consistent with our claim that both hemispheres are capable of processing speech sounds, and somewhat at odds with the common view that "phonemic" processing mechanisms are left dominant. What is not evident from that figure (but is noted in the text of the paper) is that many of the studies show more extensive left hemisphere activation to speech sounds when contrasted with non-speech controls, and a couple show only left hemisphere activation. This asymmetry seems to be the driving force behind the view that phonemic processing is left dominant. But here is something to keep in mind: speech activates left and right superior temporal regions rather symmetrically. What is asymmetric (at least in some studies) is the response to the non-speech control stimuli. So the question is, how do we interpret, computationally, the difference between a region that responds to speech as well as to similar non-speech sounds vs. another region that also responds to speech but not as well to non-speech sounds?
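One way to see why the asymmetry is ambiguous: a subtraction contrast confounds how a region treats speech with how it treats the control stimuli. A toy calculation makes the point (the numbers and the normalized index below are invented for illustration, not taken from any of the studies in the figure):

```python
# Toy illustration: two regions with IDENTICAL responses to speech can look
# very different on a speech-vs-nonspeech contrast if their responses to the
# non-speech controls differ. All numbers are invented for illustration.
regions = {
    "left STG":  {"speech": 1.0, "nonspeech": 0.4},
    "right STG": {"speech": 1.0, "nonspeech": 0.8},
}

for name, r in regions.items():
    difference = r["speech"] - r["nonspeech"]                  # subtraction contrast
    selectivity = difference / (r["speech"] + r["nonspeech"])  # normalized index
    print(f"{name}: speech={r['speech']:.1f}  contrast={difference:.1f}  "
          f"selectivity={selectivity:.2f}")

# Both regions respond equally well to speech; only the non-speech response
# differs, yet the contrast (and any 'speech-selective' label based on it)
# differs by a factor of three.
```

In other words, a larger speech-minus-control difference on the left need not mean that the left region carries more information about speech; it may just mean the left region cares less about the controls.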
Tuesday, June 12, 2007
Jealous of the fusiform? I am.
Why do people go on and on so about the fusiform gyrus and face recognition? What's the deal? I think they carry on because ... well, because they can! As it turns out, the neuronal activity in the fusiform face area (FFA; see e.g. Kanwisher N, McDermott J, Chun MM. 1997. The fusiform face area: a module in human extrastriate cortex specialized for face perception. J Neurosci. 17(11):4302-11) really does have a tight link to face recognition processes. Evidently, the holistic aspects of face processing are 'elementary' or 'primitive' (in a computational-representational sense) such that a circumscribed cortical area forms the basis for the functional process. That does not mean that other areas are not involved, but apparently the FFA plays a privileged role. Who would have thunk it - face recognition has a place.
Anyway, I am jealous. I would like for us speech-types to have an area like that, too. The fusiform face area and the parahippocampal place area are practically household names (well ... in pretty nerdy households). What do we have? Why don't we have something like the 'superior speech field' SSF, or the 'middle speech gyrus' MSG? Can we have that please?
Well, it's not so clear that that would make sense. What we have is an increasingly articulated functional and anatomic boxology -- for example in the way Greg and I presented it in our 2007 Nature Reviews Neuroscience paper (see earlier blog entry). I think this does make sense, because 'speech perception' or 'spoken word recognition' are in my view not monolithic; rather, a number of subroutines are necessary parts of successful speech perception. No single area is responsible for a sufficient number of processes to by itself deserve the name 'speech area.' So, although we often continue to search for the speech area, I think this is wrong-headed. We should be radical decompositionalists instead, and identify (cognitively, computationally) all the subroutines implicated in speech processing. And find a neurobiologically plausible implementation (a la Marr) for the computational subroutines.
Of course, I could be wrong about that ... Reasonable people can disagree, so maybe there is an area 'specialized for speech'. I'd certainly like to hear the arguments.
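For what it's worth, here is what the radical decompositionalist picture looks like when forced into a few lines of code: a cartoon loosely based on the dual-stream boxology in the NRN paper, with the subroutine names and region labels simplified well past what the paper actually claims.

```python
# Cartoon of a decomposed speech-perception pipeline, loosely following the
# dual-stream sketch in Hickok & Poeppel (2007). The subroutine names and the
# region labels are simplifications for illustration, not claims from the paper.
speech_perception = [
    {"subroutine": "spectrotemporal analysis",                 "candidate substrate": "dorsal STG (bilateral)"},
    {"subroutine": "phonological-level processing",            "candidate substrate": "mid-posterior STS (bilateral)"},
    {"subroutine": "lexical/semantic interface (ventral stream)", "candidate substrate": "posterior MTG/ITS"},
    {"subroutine": "sensorimotor interface (dorsal stream)",   "candidate substrate": "area Spt"},
    {"subroutine": "articulatory network",                     "candidate substrate": "pIFG / premotor cortex (left-dominant)"},
]

# The point: no single entry earns the name 'speech area'; the label only
# makes sense for the ensemble.
for step in speech_perception:
    print(f"{step['subroutine']:45s} -> {step['candidate substrate']}")
```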
Friday, June 8, 2007
Lisa Pearl and Jon Sprouse to join UCI faculty
Linguistics at UC Irvine -- A.K.A. UC OC -- just got a huge boost with two new faculty, Lisa Pearl and Jon Sprouse, both products of Maryland Linguistics. Lisa and Jon will join the Department of Cognitive Sciences effective July 1, and will form part of the core of a new proposed Center for Language Science. We will be teaching them both how to surf shortly after their arrival.
Wednesday, June 6, 2007
Send your comments/announcements
If anyone has comments on a recent paper they read, or has announcements such as faculty/postdoc positions, upcoming conference info, etc., please feel free to email either me or David with the information and we will post it on the blog.
Tuesday, June 5, 2007
Postdoc at U Texas, San Antonio
COGNITIVE NEUROSCIENCE OF BILINGUAL LANGUAGE COMPREHENSION (Postdoctoral Position)
Brain Cognition and Language Lab, Department of Biology, University of Texas at San Antonio
The Brain Cognition and Language lab at the University of Texas at San Antonio is seeking a postdoctoral fellow in the area of the cognitive neuroscience of language. The research emphasis will be on understanding bilingual language comprehension under normal and abnormal circumstances (i.e., aphasia). The primary techniques used are behavioral response time measures and event-related potential (ERP) recordings, with the possibility of using a variety of imaging techniques, including PET, fMRI, and TMS. Dr. Nicole Wicha is head of the lab, as well as Chief of the ERP lab at the Research Imaging Center at the University of Texas Health Science Center at San Antonio. Applicants must have a PhD and a strong background in the cognitive psychology or cognitive neuroscience of language and/or language disorders, as well as statistics. Experience with ERP or other neuroimaging methodologies and proficiency in Spanish and English are preferred.
Position available immediately. Salary is commensurate with NIH guidelines. UTSA is an equal opportunity employer committed to creating a diverse, cooperative work environment. Women, members of under-represented minority groups and individuals with physical disabilities are encouraged to apply.
For further information contact Dr. Wicha. To apply, please send a CV, a statement of research interests, and three letters of reference to:
Nicole Y. Y. Wicha, Ph.D.
Department of Biology
University of Texas at San Antonio
One UTSA Circle
San Antonio, Texas 78249
Nicole.Wicha@UTSA.edu
(210) 458-7013
http://www.bio.utsa.edu/faculty/wicha.html
Friday, June 1, 2007
Phonological access in naming
Graves et al. have reported a pretty cool study in JoCN examining the cortical regions involved in phonological access in picture naming (Graves, Grabowski, Mehta, & Gordon, 2007, JoCN, 19, 617-631). The goal was to use the word frequency effect (WFE) as a way of identifying phonological word form access. The assumption is that the WFE -- naming times are longer for lower frequency words -- reflects phonological word form access during production, and therefore regions that show greater activity for lower frequency words are those involved in phonological access. Because frequency is highly correlated with other variables such as word length and concept familiarity, the authors also quantified these variables and attempted to factor them out of the WFE analysis. Although I'm not completely convinced that these other variables are completely controlled, the findings are nonetheless interesting and worth paying attention to. Basically, they found that the WFE was associated with three main regions: one in the inferior frontal gyrus, one in the ventral occipital-temporal cortex, and one in the posterior superior temporal gyrus (pSTG). Only the pSTG location, however, was specific in its sensitivity to frequency -- the other two regions were, in addition, sensitive to concept familiarity.
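As an aside on what "factoring out" correlated variables amounts to here, a minimal sketch with simulated naming latencies is below. The data, effect sizes, and variable names are invented; this is the generic multiple-regression logic, not Graves et al.'s actual analysis.

```python
# Sketch of partialling correlated covariates out of a word-frequency effect.
# All data are simulated; this is the generic logic, not Graves et al.'s analysis.
import numpy as np

rng = np.random.default_rng(1)
n = 200
log_freq = rng.normal(3.0, 1.0, n)                      # log word frequency
length = 8 - 0.8 * log_freq + rng.normal(0, 1.0, n)     # correlated with frequency
familiarity = 2 + 0.5 * log_freq + rng.normal(0, 1.0, n)

# Simulated naming latencies: a true frequency effect plus length/familiarity effects.
rt = 900 - 40 * log_freq + 15 * length - 10 * familiarity + rng.normal(0, 30, n)

# Naive estimate: regress RT on frequency alone (absorbs the covariates' effects).
X_naive = np.column_stack([np.ones(n), log_freq])
beta_naive, *_ = np.linalg.lstsq(X_naive, rt, rcond=None)

# Adjusted estimate: include length and familiarity as nuisance regressors.
X_full = np.column_stack([np.ones(n), log_freq, length, familiarity])
beta_full, *_ = np.linalg.lstsq(X_full, rt, rcond=None)

print("frequency slope, frequency only:", round(beta_naive[1], 1))   # inflated by covariates
print("frequency slope, covariates in :", round(beta_full[1], 1))    # close to the true -40
```

My worry in the post is simply that this kind of adjustment is only as good as the covariate measures themselves; if length or familiarity is measured noisily, some of their influence still hides inside the "frequency" effect.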
The authors conclude that "the left pSTG is specifically involved in accessing lexical phonology" [in picture naming] (p. 629), and this seems to be a reasonable conclusion that is consistent with, for example, Indefrey and Levelt's (2004) meta-analysis conclusions.
One question I have, though, is whether this is THE region involved in accessing lexical phonology in production, or whether it is part of a circuit that participates in this process under "high-load" conditions. The region they identified (-51, -37, 20) corresponds precisely to area Spt (see Brad Buchsbaum's comment on the "Where is area Spt" blog entry). We proposed in our NRN paper that the dorsal stream network including area Spt is involved in the production of low-frequency words, because sensory guidance is required during motor sequence programming, whereas higher frequency words may be stored as motor chunks that can simply be activated without Spt mediation. The WFE found in area Spt is consistent with this claim. This view does leave open the question of what circuits are involved in the naming of high-frequency items. Perhaps STS regions? Any ideas?