Tuesday, February 22, 2011

Reflections on the syllable as the perceptual unit in speech perception by Dom Massaro

Given that there have been some interesting debates here on Talking Brains regarding the basic unit of speech perception, I asked Dom Massaro, a prominent and long-time player in this debate, to put together a comment on the topic for publication here. He graciously agreed to do this for us and here it is. Thanks Dom!
-greg

*************

Here are some reminiscences on how I was led to propose the syllable as the perceptual unit in speech perception. I relied mostly on my writings in the literature rather than on undocumented memory.
Dom Massaro

During my graduate studies in mathematical and experimental psychology, and also during my postdoctoral position, I developed an information-processing approach to the study of behavior (see Massaro & Cowan, 1993, for this brand of information processing). Two important implications arose from this approach: 1) the proximal influences on behavior and 2) the time course of processing are central to a complete description of behavior (as opposed to simple environment-behavior relationships). My early studies involved a delineation of perception and memory processes in the processing of speech and music. The research led to a theory of perception and memory processes that revealed the properties of pre-perceptual and perceptual memory stores, rules for interference of information in these stores, and theories of forgetting (Massaro, 1970).

Initiating my career as a faculty member, I looked to apply this information-processing approach to a more substantive domain of behavior. I held a graduate seminar for three years with the purpose of applying the approach to language processing. We learned that previous work in this area had failed to address the issues described above, and our theoretical framework and empirical reviews anticipated much of the research in psycholinguistics since that time, in which the focus is on real-time, on-line processing (see our book entitled Understanding Language: An Information Processing Analysis of Speech Perception, Reading and Psycholinguistics, 1975).

My own research interests also expanded to include the study of reading and speech perception. Previous research had manipulated only a single variable in these fields, and our empirical work manipulated multiple sources of both bottom-up and top-down information. Gregg Oden and I collaborated to formulate a fuzzy logical model of perception (Oden & Massaro, 1978; Movellan & McClelland, 2001), which has served as a framework for my research to this day. Inherent to the model were prototypes in memory and, therefore, it was important to take a stance on perceptual units in speech and print. By this time, my research and research by others indicated the syllable and the letter as units in speech and print, respectively. Here is the logic I used.

Speech perception can be described as a pattern-recognition problem. Given some speech input, the perceiver must determine which message best describes the input. An auditory stimulus is transformed by the auditory receptor system and sets up a neurological code in a pre-perceptual auditory storage. Based on my backward masking experiments and other experimental paradigms, this storage holds the information in a pre-perceptual form for roughly 250 ms, during which time the recognition process must take place. The recognition process transforms the pre-perceptual image into a synthesized percept. One issue given this framework is, what are the patterns that are functional in the recognition of speech? These sound patterns are referred to as perceptual units.

One reasonable assumption is that every perceptual unit in speech has a representation in long-term memory, which is called a prototype. The prototype contains a list of acoustic features that define the properties of the sound pattern as they would be represented in pre-perceptual auditory storage. As each sound pattern is presented, its corresponding acoustic features are held in pre-perceptual auditory storage. The recognition process operates to find the prototype in long-term memory which best describes the acoustic features in pre-perceptual auditory storage. The outcome of the recognition process is the transformation of the pre-perceptual auditory image of the sound stimulus into a synthesized percept held in synthesized auditory memory.
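The recognition process described above can be sketched in a few lines of code: the pre-perceptual auditory image is compared against stored prototypes, and the best-fitting prototype becomes the synthesized percept. The feature vectors below are invented for illustration; they are not Massaro's actual acoustic features.

```python
# Minimal sketch of prototype matching in recognition. Each prototype is a
# hypothetical acoustic-feature vector; the pre-perceptual image is matched
# against all prototypes and the closest one wins.

PROTOTYPES = {
    "ba": (0.9, 0.1, 0.2),
    "da": (0.2, 0.8, 0.3),
    "ga": (0.3, 0.2, 0.9),
}

def recognize(preperceptual_image):
    """Return the label of the prototype whose features best match the image."""
    def mismatch(label):
        proto = PROTOTYPES[label]
        return sum((p - q) ** 2 for p, q in zip(proto, preperceptual_image))
    return min(PROTOTYPES, key=mismatch)
```

On this sketch, a noisy image near the /ba/ prototype, e.g. `(0.85, 0.15, 0.25)`, is still recognized as "ba"; recognition fails only when the image is degraded (or masked) before the match completes.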

According to this model, pre-perceptual auditory storage can hold only one sound pattern at a time for a short temporal period. Backward recognition masking studies have shown that a second sound pattern can interfere with the recognition of an earlier pattern if the second is presented before the first is recognized. Each perceptual unit in speech must occur within the temporal span of pre-perceptual auditory storage and must be recognized before the following one occurs for accurate speech processing to take place. Therefore, the sequence of perceptual units in speech must be recognized one after the other in a successive and linear fashion. Finally, each perceptual unit must have a relatively invariant acoustic signal so that it can be recognized reliably. If the sound pattern corresponding to a perceptual unit changes significantly within different speech contexts, recognition could not be reliable, since one set of acoustic features would not be sufficient to characterize that perceptual unit. Perceptual units in speech as small as the phoneme or as large as the phrase have been proposed.

The phoneme was certainly a favorite to win the pageant for speech’s perceptual unit. Linguists had devoted their lives to phonemes, and phonemes gained particular prominence when they could be distinguished from one another by distinctive features. Trubetzkoy, Jakobson, and other members of the "Prague school" proposed that phonemes in a language could be distinguished by distinctive features. For example, Jakobson, Fant, and Halle (1961) proposed that a small set of orthogonal, binary properties or features were sufficient to distinguish among the larger set of phonemes of a language. Jakobson et al. were able to classify 28 English phonemes on the basis of only nine distinctive features. While originally intended only to capture linguistic generalities, distinctive feature analysis had been widely adopted as a framework for human speech perception. The attraction of this framework is that since these features are sufficient to distinguish among the different phonemes, it is possible that phoneme identification could be reduced to the problem of determining which features are present in any given phoneme. This approach gained credibility with the finding, originally by Miller and Nicely (1955) and since by many others, that the more distinctive features two sounds share, the more likely they are to be perceptually confused for one another. Thus, the first candidate we considered for the perceptual unit was the phoneme.
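The Miller and Nicely logic can be made concrete with a toy feature table: count the distinctive features two phonemes share, and predict more confusions for pairs that share more. The binary feature assignments below are invented for illustration and are not Jakobson, Fant, and Halle's actual system.

```python
# Hypothetical distinctive-feature vectors: [voiced, nasal, continuant].
# Purely illustrative feature assignments, not a real phonological analysis.
FEATURES = {
    "p": (0, 0, 0),
    "b": (1, 0, 0),
    "m": (1, 1, 0),
    "f": (0, 0, 1),
}

def shared_features(x, y):
    """Number of distinctive-feature values two phonemes have in common."""
    return sum(a == b for a, b in zip(FEATURES[x], FEATURES[y]))
```

Here /p/ and /b/ share two of three features while /p/ and /m/ share only one, so the prediction is that /p/-/b/ confusions should outnumber /p/-/m/ confusions, which is the pattern Miller and Nicely reported.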

Consider the acoustic properties of vowel phonemes. Unlike some consonant phonemes, whose acoustic properties change over time, the wave shape of the vowel is considered to be steady-state or tone-like. The wave shape of the vowel repeats itself anywhere from 75 to 200 times per second. In normal speech, vowels last between 100 and 300 ms, and during this time the vowels maintain a fairly regular and unique pattern. It follows that, by our criteria, vowels could function as perceptual units in speech.

Next let us consider consonant phonemes. Consonant sounds are more complicated than vowels and some of them do not seem to qualify as perceptual units. We have noted that a perceptual unit must have a relatively invariant sound pattern in different contexts. However, some consonant phonemes appear to have different sound patterns in different speech contexts. For example, the stop consonant phoneme /d/ has different acoustic representations in different vowel contexts. Since the steady-state portion corresponds to the vowel sounds, the first part, called the transition, must be responsible for the perception of the consonant /d/. The acoustic pattern corresponding to the /d/ sound differs significantly in the syllables /di/ and /du/. Hence, one set of acoustic features would not be sufficient to recognize the consonant /d/ in the different vowel contexts. Therefore, we must either modify our definition of a perceptual unit or eliminate the stop consonant phoneme as a candidate.

There is another reason why the consonant phoneme /d/ cannot qualify as a perceptual unit. In the model perceptual units are recognized in a successive and linear fashion. Research has shown, however, that the consonant /d/ cannot be recognized before the vowel is also recognized. If the consonant were recognized before the vowel, then we should be able to decrease the duration of the vowel portion of the syllable so that only the consonant would be recognized. Experimentally, the duration of the vowel in the consonant-vowel syllable (CV) is gradually decreased and the subject is asked when she hears the stop consonant sound alone. The CV syllable is perceived as a complete syllable until the vowel is eliminated almost entirely (Liberman, Cooper, Shankweiler, & Studdert-Kennedy, 1967). At that point, however, instead of the perception changing to the consonant /d/, a nonspeech whistle is heard. Liberman et al. show that the stop consonant /d/ cannot be perceived independently of perceiving a CV syllable. Therefore, it seems unlikely that the /d/ sound would be perceived before the vowel sound; it appears, rather, that the CV syllable is perceived as an indivisible whole or gestalt.

These arguments led to the idea that these syllables function as perceptual units, rather than containing two perceptual units each. One way to test this hypothesis is to employ CV syllables in a recognition-masking task. Liberman et al. found that subjects could identify shortened versions of the CV syllables when most of the vowel portion was eliminated. Analogous to our interpretation of vowel perception, recognition of these shortened CV syllables also should take time. Therefore, a second syllable, if it follows the first soon enough, should interfere with perception of the first. Consider the three CV syllables /ba/, /da/, and /ga/ (/a/ pronounced as in father), which differ from each other only with respect to the consonant phoneme. Backward recognition masking, if found with these sounds, would demonstrate that the consonant sound is not recognized before the vowel occurs and also that the CV syllable requires time to be perceived.

There have been several experiments on the backward recognition masking of CV syllables (Massaro, 1974, 1975; Pisoni, 1972). Newman and Spitzer (1987) employed the three CV syllables /ba/, /da/, and /ga/ as test items in the backward recognition masking task. These items were synthetic speech stimuli that lasted 40 ms; the first 20 ms of the item consisted of the CV transition and the last 20 ms corresponded to the steady-state vowel. The masking stimulus was the steady-state vowel /a/ presented for 40 ms. In one condition, the test and masking stimuli were presented to opposite ears, that is, dichotically. All other procedural details followed the prototypical recognition-masking experiment.

The percentage of correct recognitions for 8 observers improved dramatically with increases in the silent interval between the test and masking stimuli. These results show that recognition of the consonant is not complete at the end of the CV transition, nor even at the end of the short vowel presentation. Rather, correct identification of the CV syllable requires perceptual processing after the stimulus presentation. These results support our hypothesis that the CV syllable must have functioned as a perceptual unit, because the syllable must have been stored in pre-perceptual auditory storage, and recognition involved a transformation of this pre-perceptual storage into a synthesized percept of a CV unit. The acoustic features necessary for recognition must, therefore, define the complete CV unit. An analogous argument can be made for VC syllables also functioning as perceptual units (Massaro, 1974).

We must also ask whether perceptual units could be larger than vowels, CV, or VC syllables. Miller (1962) argued that the phrase of two or three words might function as a perceptual unit. According to our criteria for a perceptual unit, it must correspond to a prototype in long-term memory which has a list of features describing the acoustic features in the pre-perceptual auditory image of that perceptual unit. Accordingly, pre-perceptual auditory storage must last on the order of one or two seconds to hold perceptual units of the size of a phrase. But the recognition-masking studies usually estimate the effective duration of pre-perceptual storage to be about 250 ms. Therefore, perceptual units must occur within this period, eliminating the phrase as the perceptual unit.

The recognition-masking paradigm developed to study the recognition of auditory sounds has provided a useful tool for determining the perceptual units in speech. If preperceptual auditory storage is limited to 250 ms, the perceptual units must occur within this short period. This time period agrees nicely with the durations of syllables in normal speech.


The results of the present experiments demonstrate backward masking in a two-interval forced-choice task, a same-different task, and an absolute identification task. The backward masking of one sound by a second sound is interpreted in terms of auditory perception continuing after a short sound is complete. A representation of the short sound is held in a preperceptual auditory storage so that resolution of the sound can continue after the stimulus is complete. A second sound interferes with the storage of the earlier sound, preventing its further resolution. The current research contributes to the development of a general information-processing model (Massaro, 1972, 1975).

To solve the invariance problem between acoustic signal and phoneme, while simultaneously adhering to a pre-perceptual auditory memory constraint of roughly 250 ms, Massaro (1972) proposed the syllables V, CV, or VC as the perceptual unit, where V is a vowel and C is a consonant or consonant cluster. This assumption was built into the foundation of the FLMP (Oden & Massaro, 1978). It should be noted that CVC syllables would actually be two perceptual units, the CV and VC portions, rather than just one. Assuming that this larger segment is the perceptual unit reinstates a significant amount of invariance between signal and percept. Massaro and Oden (1980, pp. 133–135) reviewed evidence that the major coarticulatory influences on perception occur within these syllables, rather than between syllables. Any remaining lack of invariance across these syllables could conceivably be disambiguated by additional sources of information in the speech stream.
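The FLMP's integration rule can be illustrated with a minimal two-alternative sketch. For two alternatives (say /da/ vs. /ba/) and two independent sources of information (say auditory and visual), the model multiplies the fuzzy truth values supporting each alternative and normalizes across alternatives (the relative goodness rule). The function below is a toy version, not Oden and Massaro's full implementation.

```python
def flmp_two_alternative(a, v):
    """FLMP integration for two alternatives and two sources.

    a: fuzzy truth value of the auditory support for /da/ (so 1-a supports /ba/)
    v: fuzzy truth value of the visual support for /da/ (so 1-v supports /ba/)
    Returns the predicted probability of a /da/ response.
    """
    support_da = a * v                  # multiplicative integration
    support_ba = (1 - a) * (1 - v)
    return support_da / (support_da + support_ba)   # relative goodness rule
```

A useful property of the rule is that a neutral source (0.5) leaves the decision to the other source, while two moderately supportive sources combine super-additively: for example, auditory and visual supports of 0.9 each yield a /da/ probability near 0.99.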

References
Massaro, D.W. (1970). Perceptual Processes and Forgetting in Memory Tasks. Psychological Review, 77(6), 557-567.

Massaro, D.W. (1972). Preperceptual Images, Processing Time, and Perceptual Units in Auditory Perception. Psychological Review, 79(2), 124-145.

Massaro, D. W. (1974). Perceptual Units in Speech Recognition. Journal of Experimental Psychology, 102(2), 349-353.

Massaro, D.W. (1975). Understanding Language: An Information Processing Analysis of Speech Perception, Reading and Psycholinguistics. New York: Academic Press.

Massaro, D.W. and Cowan, N. (1993). Information Processing Models: Microscopes of the Mind. Annual Review of Psychology, 44, 383-425.
http://mambo.ucsc.edu/papers/1993.html

Massaro, D. W. & Oden, G. C. (1980). Speech Perception: A Framework for Research and Theory. In N.J. Lass (Ed.), Speech and Language: Advances in Basic Research and Practice. Vol. 3, New York: Academic Press, 129-165.

Movellan, J., and McClelland, J. L. (2001). The Morton-Massaro Law of Information Integration: Implications for Models of Perception. Psychological Review, 108, 113-148.

Saturday, February 19, 2011

Research Assistantship, Philadelphia/Elkins Park, PA. -- Moss Rehab

The Language and Aphasia Laboratory of Moss Rehabilitation Research Institute (MRRI) has openings for a BA/BS-level research assistant, beginning Spring or Summer of 2011. Under the direction of Dr. Myrna Schwartz, the laboratory conducts NIH-funded research on normal and aphasic language processes, with emphasis on word and sentence production. Our RAs gain valuable experience with language-impaired patients and are trained to administer clinical measures of aphasia and to conduct and analyze experiments with patients. Learning opportunities also include state-of-the-art lesion analysis and applications of computational modeling.



Applicants should have strong academic backgrounds in psychology, neuroscience or linguistics, with coursework in statistics and research methods. Preference will be given to applicants with prior research experience, particularly in cognitive psychology, speech and hearing sciences, or linguistics. MRRI and MossRehab are part of the Albert Einstein Healthcare Network. The position offers competitive salary and benefits (medical, dental, vision, tuition reimbursement). Send cover letter, C.V. and contact information for three references to Dr. Erica Middleton: email: middleer@einstein.edu; fax: 215-663-6783; mail: Moss Rehabilitation Research Institute, 60 Township Line Rd., Elkins Park, PA, 19027.

Tuesday, February 15, 2011

Survey on the 2010 Neurobiology of Language Conference in San Diego

On behalf of the NLC 2011 Organizing Committee, I would like to invite you to fill out a short survey on your experience at NLC 2010 in San Diego, California. Tell us your opinion about the location, structure, and scientific content of the meeting and help us plan the next Neurobiology of Language meetings! If you did not attend NLC 2010: we would like to know your opinion about the structure and location of future NLC meetings.

Please take a few minutes to share your opinion with us here: http://www.surveygizmo.com/s/453078/1kv1i

We thank you very much in advance!

Monday, February 14, 2011

Postdoctoral positions: CEA Paris - Wassenhove Lab

Post-doctoral positions have opened up in the Cognitive NeuroImaging Unit / NeuroSpin MEG group near Paris (France) to work with Dr Virginie van Wassenhove.

Applications are invited from talented, dynamic, committed, and enthusiastic researchers. 

The scientific project will explore the interface between time perception, temporal processes, and cortical dynamics (assessed by MEG/EEG). Specific components of this project include (but are not restricted to): novel psychophysical designs addressing time estimation and the dynamics of multisensory processing, together with novel analytical methods for MEG/EEG signal analysis and tracking. Postdoctoral fellows will lead MEG studies as part of funded projects from ERC and ANR grants. They will be expected to work closely with and supervise master's and PhD students involved in the projects. Some involvement in organizational and managerial aspects specific to these projects may be expected. Fellows' independent and collaborative research will be encouraged as part of their career development.

The ideal applicant will have sound experience with MEG and/or EEG techniques, a strong record in cognitive neuroscience, and a good set of skills in applied signal processing. A degree in neuroscience, cognitive neuroscience, psychology, or another related field is required. A strong record of published work, prior experience with MEG/EEG techniques, a good understanding of signal processing, and solid programming skills are essential. Applicants from outside the European Union are very welcome but must qualify for a valid visa. French is not a requirement as long as English is mastered; opportunities for French language classes are available.

Applicants will be expected to conduct MEG experiments, from acquisition to data analysis - other techniques are not excluded (EEG, fMRI for instance). Their intellectual contribution to the project is strongly encouraged. Our group is located in a state-of-the-art neuroimaging facility (NeuroSpin: http://www-dsv.cea.fr/en/instituts/institut-d-imagerie-biomedicale-i2bm/services/neurospin-d.-le-bihan) and is part of a large scientific community in the Parisian area.

Duration: the position is initially funded for one year (renewable up to four or five years).

Starting date: as soon as possible

Salary will be commensurate with experience.

The application package should contain a letter of intent with a statement of research interests, a detailed résumé with a list of publications, and a minimum of two letters of recommendation (or contacts from whom these could be obtained). Inquiries and applications should be sent to Virginie.van-Wassenhove@cea.fr or Virginie.van.Wassenhove@gmail.com

Please, put POSTDOC in the email subject. Applications will be considered until the positions are filled.

Relevant reference: Minding time in an amodal representational space. Philos Trans R Soc Lond B Biol Sci. 2009 Jul 12;364(1525):1815-30.

Tenure Track Position at UT Austin

Job Description:
The Department of Communication Science and Disorders at The University of Texas at Austin has a 9-month, tenure-track assistant professor position available beginning August 2011. Preferred area of expertise is adult communication disorders with special focus on acquired language disorders and adult language literacy. Qualified applicants will have a PhD in Communication Science & Disorders or a related field. A Certificate of Clinical Competence in Speech-Language Pathology is preferred. The successful candidate will conduct research in adult acquired language disorders, seek funding in support of their research program, teach academic courses related to adult language disorders and language literacy for majors and nonmajors in communication on campus and via distance education, direct student research, engage in professional service and serve on department, college and university committees. The Department of Communication Science and Disorders has 16 full-time faculty, 250 undergraduate students and more than 100 graduate students.
Applicant Instructions:
Qualified applicants should submit a letter of application (including research and teaching interests), curriculum vitae, and the names of three references including contact information to: Thomas P. Marquardt, Ph.D., Search Committee Chair, Department of Communication Science and Disorders, 1 University Station, Austin, Texas 78712-1089. For more information about the College and Department, visit our website at: http://csd.utexas.edu/. Review of applicants will continue until the position is filled. The College of Communication is committed to achieving diversity in its faculty, students, and curriculum, and it welcomes applicants who can help achieve these objectives.

Friday, February 11, 2011

First International Conference on Cognitive Hearing Science for Communication

First International Conference on
Cognitive Hearing Science for Communication
June 19-22, 2011
Linköping, Sweden


Researchers in all fields, basic and applied, who are interested in the interplay between cognitive and hearing factors in communication are welcome. The conference will include invited speakers and open poster sessions.

For further information, including the list of confirmed speakers, see:
http://eventus.trippus.se/head2011attendees

The abstract submission deadline is 28th of February, 2011.

The conference is organized through the Linnaeus Centre for Hearing and Deafness (Linnaeus HEAD), funded by the Swedish Research Council (Vetenskapsrådet).

Looking forward to seeing you in Linköping!

On behalf of the Organizing Committee
Ingrid Johnsrude, Jerker Rönnberg


Ingrid Johnsrude, PhD
Department of Psychology, Queen's University, Canada
&
Linnaeus Centre for Hearing and Deafness (HEAD)
Linköping University, Sweden

ingrid.johnsrude@queensu.ca

Thursday, February 10, 2011

On the nature of sensorimotor integration for speech processes

For the last few years I have been thinking a lot about a few different things: What specifically is our proposed dorsal stream doing? How does the motor system contribute to speech perception? What is the relation between sensorimotor processes used during speech production (e.g., feedback-based motor control models) and purported sensorimotor processes in speech perception? How do computational models of speech production (e.g., feedback control models, psycholinguistic models, neurolinguistic models) relate to neural models of speech processing? A new "Perspective" article, which just appeared today in Neuron and is currently free to download, summarizes the outcome of my thoughts on these questions, mixed with a heavy dose of input from my co-authors John Houde (UCSF) and Feng Rong (Talking Brains West postdoc). Yes, I'm very proud that the piece has been labeled a "perspective" rather than a "review" -- that means it is theoretically novel rather than a summary statement ;-)

The starting point for the article is the observation that there are two main lines of research on sensorimotor integration which, paradoxically, do not interact: the idea that the auditory system is critically involved in speech production (exemplified by motor control folks like Frank Guenther and John Houde) and the idea that the motor system is critically involved in speech perception (exemplified by folks like Stephen Wilson, Pulvermuller, and many others). We wondered whether these two lines of work could be integrated into one model. The answer, we propose, is yes.

The basic idea is that the dorsal stream sensorimotor integration circuit is built to support speech production via a state feedback control architecture of the sort that is common in the visual-manual motor control literature. But the computational properties of the system, particularly the generation of forward sensory predictions of motor consequences, provide a ready-made mechanism for the motor system to modulate (not drive) the perception of others' speech under some circumstances (e.g., when the acoustic signal is weak or ambiguous).

In addition, we attempted to show how psycholinguistic models of speech production (e.g., Levelt, Dell) as well as neurolinguistic models (e.g., the concept of input and output phonological lexicons) relate to the proposed state feedback control model. I never liked the idea of there being two phonological "lexicons" but it actually makes a lot of sense in the framework of state feedback control architectures.

The model also does a decent job of explaining some of the key symptoms of conduction aphasia and stuttering which are explained as different types of disruption of the same feedback control mechanism.

The graphic depiction of the model is below. I'm looking forward to your feedback on this!



Hickok, G., Houde, J., & Rong, F. (2011). Sensorimotor Integration in Speech Processing: Computational Basis and Neural Organization. Neuron, 69(3), 407-422. DOI: 10.1016/j.neuron.2011.01.019

Wednesday, February 9, 2011

Why is Broca's aphasia/area the focus of research on "syntactic comprehension"? Was it a historical accident?

Arguably it was the classic paper by Caramazza and Zurif, published in 1976, that kicked off what turned into decades of research on the role of Broca's area in syntactic computation. We all know from our grade school lessons that Caramazza and Zurif found that Broca's aphasics exhibit not only agrammatic production, but also a profound deficit in using syntactic knowledge in sentence comprehension. The critical bit of evidence was that Broca's aphasics seemed perfectly fine in using semantic information (lexical and real-world knowledge) to derive the meaning of an utterance: they could correctly understand a so-called semantically non-reversible sentence like, The apple that the boy is eating is red. But they failed when interpretation required the use of syntactic information, i.e., when the sentence was semantically reversible like, The boy that the girl is chasing is tall.

This finding suggested that Broca's aphasics had a deficit in syntax, one that affected both production AND comprehension. Broca's area, via its association with Broca's aphasia (a dubious association, but a topic for another post) then became the major anatomical focus for the localization of syntax, including its role in comprehension of sentences. This obsession with Broca's area and syntax (comprehension in particular) persists today.

But is it all a historical accident? I happened to re-read Caramazza and Zurif today (I'm working on a chapter on Broca's aphasia and it is always a good idea to go back to original sources). C&Z tested not only Broca's aphasics, but also conduction aphasics, a little-remembered fact. Conduction aphasics have posterior lesions and don't have agrammatic speech output. But guess what? C&Z report that the conduction aphasics performed exactly like the Broca's aphasics. Check out the graph below, which I recreated by eyeballing the relevant values from their Figure 3; it shows percent correct on the sentence-to-picture matching task for object-gap semantically reversible vs. nonreversible sentences like the examples above.


The failure of conduction aphasics to use syntactic knowledge was, of course, noted by the authors.

...the conclusion is inescapable- Broca’s and Conduction aphasics do not seem at all capable of using algorithmic [syntactic] processes. Thus, for those sentences that were semantically constrained, performance was approximately at the 90% level, but it dropped to chance level when these semantic constraints were not available. p. 580


Why then did all subsequent work focus on Broca's aphasics and Broca's area? Why were conduction aphasia and more posterior lesion sites not considered as a possible test case/source for the neural substrate of syntax?

The answer derives from the common interpretation of conduction aphasia at the time, which is that of a disconnection syndrome. Conduction aphasia was caused, the story went, not by damage to any computational system, but by a disconnection of computational systems, namely Wernicke's and Broca's areas. C&Z argued that the conduction aphasics' comprehension problems derived from a disconnection from syntactic systems, which lived in Broca's area.

Conduction aphasics also were incapable of using syntactic algorithmic processes [see also Saffran & Matin (in press) and Scholes (in press)]. The question arises, therefore, as to whether syntactic operations also rely on cortical regions posterior to Broca’s area or whether the conduction deficit should be considered within a disconnection framework, that is, as the severing of a connection to Broca’s area (Geschwind, 1970). Given the impressive arguments offered by Geshwind, we are presently satisfied in treating it as a problem of disconnection, but a disconnection from an area that subserves sytactic processes. p. 581


But the interpretation of conduction aphasia has evolved since the 1970s. It is no longer considered a disconnection syndrome but rather a deficit caused by cortical dysfunction. We can, and should, argue about what conduction aphasia is, functionally. Maybe our final conclusion will be the same as C&Z's (I don't believe it will), but the point is that, based on an assumption about the nature of conduction aphasia, research emphasis shifted entirely to Broca's aphasia and Broca's area, ignoring conduction aphasia and more posterior cortices. I believe this was an unfortunate and ultimately misleading turn.

Maybe historical accident isn't the right term for what happened in the 1976 publication. It wasn't an accident that C&Z adopted the popular account, articulated so eloquently by Geschwind; it was a reasonable conclusion. But it dramatically shaped the focus of subsequent research, and we are still living with the consequences of this theoretical argument. There are still heated debates, both in print and in conference forums, regarding the role of Broca's area in syntactic comprehension (Grodzinsky & Santi, 2008; Rogalsky & Hickok, 2011; Willems & Hagoort, 2009). By contrast, there is no concerted effort aimed at determining the role of the temporal-parietal junction (the location of lesions associated with conduction aphasia) in syntactic comprehension. This is a shame because I believe we are missing a big part of the puzzle.

This is a good lesson, though. Sometimes ideas become entrenched in the scientific literature by "accidents" of the current theoretical milieu, and sometimes the resulting scientific path a field takes is the wrong one. It is important to occasionally revisit the reasons why choices were made and to evaluate whether a new direction is worth exploring.

References

Caramazza A, & Zurif EB (1976). Dissociation of algorithmic and heuristic processes in language comprehension: evidence from aphasia. Brain and Language, 3 (4), 572-582. PMID: 974731

Grodzinsky Y, & Santi A (2008). The battle for Broca's region. Trends in Cognitive Sciences, 12 (12), 474-480. PMID: 18930695

Rogalsky C, & Hickok G (2011). The role of Broca's area in sentence comprehension. Journal of Cognitive Neuroscience. PMID: 20617890

Willems RM, & Hagoort P (2009). Broca's region: battles are not won by ignoring half of the facts. Trends in Cognitive Sciences, 13 (3). PMID: 19223227

Tuesday, February 8, 2011

Post-doctoral position -- Hickok Lab, UC Irvine

The Department of Cognitive Sciences and the Center for Cognitive Neuroscience announce a Postdoctoral Scholar position in the Auditory and Language Neuroscience Laboratory (under the direction of PIs Gregory Hickok & Kourosh Saberi).

The postdoctoral fellow will collaborate in NIH-funded research investigating the functional anatomy of auditory and language processing. Ongoing research projects in the lab employ a variety of methods, including behavioral and neuropsychological lesion-symptom mapping studies, as well as techniques such as fMRI, EEG/MEG, and TMS. Opportunities also exist for collaboration with other cognitive science faculty and with faculty in the Center for Cognitive Neuroscience.

Requirements – Candidates should have a Ph.D. in a relevant discipline and experience with functional MRI, preferably in the area of speech and language. Familiarity with computational and statistical methods for neuroimaging (e.g., MATLAB, SPM, AFNI) is advantageous.

The appointment could begin as early as March 2011 for a period of one year. Renewal is based on performance and availability of grant support. Salary will be commensurate with experience, minimum salary: $37,740.

Application Procedures – Please send a cover letter (including a description of research skills), a CV, and the names and contact information for three (3) references to:
Lisette Isenberg
Department of Cognitive Sciences and Center for Cognitive Neuroscience
University of California, Irvine
Irvine, CA 92697-5100
aisenber@uci.edu
The University of California, Irvine is an equal opportunity employer committed to excellence through diversity.

Thursday, February 3, 2011

Department Chair Search, Dept. of Communicative Sciences and Disorders, Michigan State University

The College of Communication Arts and Sciences at Michigan State University is seeking outstanding candidates for the position of Chair of the Department of Communicative Sciences and Disorders. MSU is looking to fill this position with an energetic individual who is excited at the opportunity to shape an innovative CSD program that seeks to become a leader in the field.

Qualifications of preferred candidates include distinguished scholarship, prior administrative experience, success with external funding and multidisciplinary collaboration, and a vision for the future of a department that is undergoing significant growth. A Ph.D. in Communicative Sciences and Disorders or a related discipline is required. Salary is competitive and based on experience and academic credentials. The preferred starting date for the position is August 1, 2011.

Letters of application or nominations should be sent to Brad Rakerd, Professor and Search Committee Chair, Department of Communicative Sciences and Disorders, Michigan State University, East Lansing, MI 48824-1212, USA; telephone 517-432-8195; e-mail rakerd@msu.edu. Candidates should submit a statement highlighting their experience and qualifications, a curriculum vitae, and the names of three references. The search committee will begin its evaluation of applicants early in 2011 and will continue until an exceptional candidate is selected.

MSU is an affirmative action, equal opportunity employer. MSU is committed to achieving excellence through cultural diversity. The university actively encourages applications and/or nominations of women, persons of color, veterans and persons with disabilities.

Wednesday, February 2, 2011

McGill University: Director – School of Communication Sciences and Disorders

McGill University is seeking a scientist to serve as Director of its School of Communication Sciences and Disorders. The School is dedicated to excellence in research and clinical teaching, and is home to the oldest doctoral research program in communication sciences and disorders in Canada. Ten full-time faculty, 13 part-time faculty, and numerous clinical instructors provide clinical training in speech-language pathology at the master's level (a two-year program accommodating approximately 30 students per year) and research training at the doctoral and post-doctoral levels (approximately 23 students). Faculty in the School have collaborative ties with a wide variety of departments and institutes at McGill (Psychology, Linguistics, Neuroscience, Otolaryngology, Biomedical Engineering, and the Montreal Neurological Institute), as well as with other Montreal universities, and maintain national and international research collaborations. In recent years, the School has played a central role in establishing and directing a world-class interdisciplinary research centre, the McGill Centre for Research on Language, Mind and Brain (www.crlmb.ca). The School is centrally located on the McGill University campus in downtown Montreal, a vibrant multilingual city.

The Director will be a tenured Associate or Full Professor. He or she will be responsible for leading the educational mission and for setting the scientific priorities and goals of the School. A proven record of success and excellence in research, together with team leadership experience, is required. Applicants should hold a PhD or the equivalent.

For consideration by the selection committee, please send a signed letter of interest by March 15, 2011, together with a copy of your curriculum vitae and three letters of reference, to the attention of: Dr. Melvin Schloss at Acadpersonnel.med@mcgill.ca

Candidates would benefit from a working knowledge of both official languages. McGill University is committed to diversity and equity in employment. It welcomes applications from indigenous peoples, visible minorities, ethnic minorities, persons with disabilities, women, persons of minority sexual orientations and gender identities, and others who may contribute to further diversification. All qualified applicants are encouraged to apply; however, Canadians and permanent residents will be given priority.