A number of functional imaging studies have found that contrasting speech with various non-speech control stimuli eliminates vast areas of speech-responsive cortical activation; that is, many areas are activated equally by speech and non-speech sounds. Many investigators discount these jointly activated regions as somehow less critical for speech, the primary quest being to identify The Speech Area. We have previously disagreed with this view, and with the general approach, suggesting that regions that respond to non-speech sounds could still be carrying out critical speech-related computations. We have further suggested that these "non-speech-specific" regions might in fact be speech specific, if only we had the resolution to image their neural substructure.
Three years later, I discover a paper by Michael Beauchamp, Alex Martin and colleagues showing just this (Beauchamp et al. 2004, Nat Neurosci, 7:1190-2). They imaged a multisensory region of the STS at both typical fMRI resolution and at higher resolution, using a multisensory paradigm that presented auditory and/or visual stimuli. At the typical, lower resolution, the STS region showed equivalent responses to stimuli from either modality. Higher-resolution imaging, however, revealed a patchy organization within this broader region, containing zones specifically responsive to one or the other sensory modality, as well as some zones responsive to both.
No difference doesn't always mean no difference. It sometimes means we just don't have the resolution to see the difference.