Here is a more detailed summary of the article, provided by the lead author, Tom Schofield:
We were interested in discovering how the abstraction of meaningful, invariant representations of speech sounds (i.e. phonemes) occurs in the brain. To do this we used MEG, dynamic causal modelling (DCM) and a mismatch paradigm employing speech and non-speech stimuli. In the speech condition, we played a repeating 'standard' stimulus (the word 'Bart') and periodically interleaved three infrequent 'deviant' stimuli created by manipulating the spectro-temporal profile (i.e. formant frequencies) of the vowel sound within the word. The first deviant was behaviourally distinguishable from the standard but still sounded like 'Bart'. The second deviant was only slightly further from the standard in formant frequency, but far enough that it contained a different phoneme and therefore sounded like a different word: 'Burt'. The third deviant was much further away acoustically from the standard and sounded like the word 'Beet'. We thus had three speech deviants that were all acoustically different from the standard, two of which also differed phonemically.

We presented the same subjects with an additional set of non-speech sine-wave stimuli: a standard tone of the same length and frequency as the 2nd formant of the vowel sound of the speech standard, and three tone deviants of increasing frequency, matched on the basis of behavioural discrimination to the speech deviants. We modelled the event-related field associated with each deviant with DCM.

Our main finding was that although processing of both speech and non-speech sounds engages the same bilateral network within auditory cortex (HG and STG), there is a difference in the way the brain processes a stimulus change that has a functional meaning (i.e. a phoneme change) versus one that does not. Essentially, phonological processing causes a relative increase in postsynaptic sensitivity within higher levels of auditory cortex in the left hemisphere (left STG) and a concomitant decoupling of the hemispheres at this level.
In contrast, the effects of equivalent non-speech stimulus changes are seen at lower levels of auditory cortex in the right hemisphere. We do not argue that speech perception at the phonological level is purely left-lateralised, but rather that it engages a bilateral network that displays asymmetric organisation at higher levels of the cortical hierarchy. My guess is that this asymmetry exists because of the likely subsequent interaction between phonemic and lexicosemantic representations; I would argue that the abstraction of higher-level, post-phonemic representations is quite strongly left-lateralised (e.g. see our Journal of Neuroscience fMRI DCM paper from last year).
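As a rough illustration of the non-speech oddball design described above, here is a minimal Python sketch that builds sine-wave standards and deviants and a pseudo-random oddball sequence. All specific values (sample rate, tone duration, the standard frequency standing in for the vowel's 2nd formant, the deviant step sizes, and the deviant probability) are hypothetical placeholders, not the actual stimulus parameters from the study:

```python
import numpy as np

def sine_tone(freq_hz, dur_s, sr=44100, ramp_s=0.01):
    """Generate a pure tone with raised-cosine onset/offset ramps (avoids clicks)."""
    t = np.arange(int(dur_s * sr)) / sr
    tone = np.sin(2 * np.pi * freq_hz * t)
    n_ramp = int(ramp_s * sr)
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    tone[:n_ramp] *= ramp
    tone[-n_ramp:] *= ramp[::-1]
    return tone

# Hypothetical frequencies: a standard tone standing in for the vowel's
# 2nd formant, and three deviants at increasing frequency distances
# (illustrative values only, not those used in the experiment).
STANDARD_HZ = 1000.0
DEVIANTS_HZ = [1050.0, 1100.0, 1400.0]
DUR_S = 0.5  # assumed tone duration

standard = sine_tone(STANDARD_HZ, DUR_S)
deviants = [sine_tone(f, DUR_S) for f in DEVIANTS_HZ]

def oddball_sequence(n_trials=100, p_deviant=0.15, seed=0):
    """Mostly standards (coded 0) with occasional deviants (coded 1-3)."""
    rng = np.random.default_rng(seed)
    seq = []
    for _ in range(n_trials):
        if rng.random() < p_deviant:
            seq.append(int(rng.integers(len(deviants))) + 1)
        else:
            seq.append(0)
    return seq
```

In a real mismatch paradigm the deviant steps would be calibrated behaviourally (as the authors did, matching tone deviants to the speech deviants on discriminability) rather than fixed a priori as here.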
This seems to be a nice demonstration of how left- and right-hemisphere systems process word-level information differently. Tom, I think your conclusions are perfectly reasonable. What do you think would have happened if you had used non-word stimuli? Would the strength of the left-hemisphere coupling have diminished?
Schofield, T., Iverson, P., Kiebel, S., Stephan, K., Kilner, J., Friston, K., Crinion, J., Price, C., & Leff, A. (2009). Changing meaning causes coupling changes within higher levels of the cortical hierarchy. Proceedings of the National Academy of Sciences, 106(28), 11765–11770. DOI: 10.1073/pnas.0811402106
Interesting question. I would argue that it is phonological processing, rather than word-level processing, that causes the increase in connectivity within left STG. First, other mismatch experiments using isolated vowel sounds, syllables, etc. typically show slightly stronger responses over the left hemisphere than over the right, and this occurs at roughly the same latency as our response. Second, on the basis of the few lexical mismatch studies out there, I would expect any word-level effects to become apparent a little later than the time window we modelled here.