Wednesday, July 30, 2014

Everything I ever needed to know I learned from Wernicke

Well, not quite, but here's an interesting quote from Wernicke 1874, as translated by Eggert 1977, that foreshadows much current work on sensorimotor control for speech production:
Observations of daily speech usage and the process of speech development indicates [sic] the presence of an unconscious, repeated activation and simultaneous mental reverberation of the acoustic image which exercises a continuous monitoring of the motor images. [...This sensory-motor pathway] whose thousandfold use [during development] maintains a continuing significant control over the choice of the correct motor image. […] Apart from impairment in comprehension [in sensory aphasia], the patient also presents aphasic symptoms in speech produced by absence of the unconscious monitoring of the imagery of the spoken word. 

Tuesday, July 29, 2014

Sensorimotor area Spt under attack (but they're shooting blanks): Reply to Parker Jones et al. 2014

A recent report in Frontiers (link to it here) by my good friends Oiwi Parker Jones (first author) and Cathy Price (senior author) challenges the claim that area Spt is a sensorimotor integration area for vocal tract actions.  Their attack comes from multiple fronts, both fMRI and lesion data.  On the fMRI side they sought to determine whether Spt was more active during repetition tasks, particularly for pseudowords, which demand sensory-to-motor translation, compared to two auditory naming tasks.  One involved listening to animal sounds and naming the animal; the other involved listening to someone humming and naming the gender of the hummer.  These tasks did not involve direct translation between a heard auditory code and a motor code and so shouldn't activate Spt as vigorously as the repetition tasks, they argued.  On the lesion side, 8 patients with auditory repetition deficits were studied and their lesions mapped to identify the anatomical source of the problem.  We have shown previously (Buchsbaum et al. 2011) that area Spt, as mapped using fMRI in 100+ participants, overlaps the lesion distribution of conduction aphasia (N=14).

What Parker Jones et al.  found was that "No brain areas, including Spt, were more activated by auditory repetition of pseudowords compared to sound naming."  They further found that the lesions associated with repetition deficits involved the arcuate fasciculus, not necessarily Spt.

They conclude, "the results were most consistent with Spt responding to bottom up or top down auditory processing, independent of the demands on auditory-to-motor integration... [and] most consistent with the neurological tradition that emphasizes the importance of the arcuate fasciculus in the non-semantic integration of auditory and motor speech processing."  Back to the 1960s, er, the 1870s, we go.

There are two main problems with this study that undermine their conclusions regarding Spt.

1. They used overt speech production.  We have always used covert production to study Spt.  Why? Because Spt responds to acoustic stimulation, including the sound of one's own voice.  Using overt production, with its attendant auditory feedback, makes it very hard to separate sensory from motor-related activity in Spt.  Case in point: primary auditory cortex showed exactly the same response profile as did Spt in the Parker Jones study: more activity during animal sound naming than pseudoword repetition (the animal sounds were longer).  Therefore, this study measured primarily the acoustic response properties of Spt, not its sensorimotor properties.

2. The idea that Spt isn't particularly involved in naming (whether it's pictures, words, animal sounds, or genders) is incorrect.  To be fair, before I fully came to grips with what Spt is doing computationally, I mostly thought of it as performing an auditory-to-motor coordinate transform.  Pseudoword repetition would seem to be the most direct task to tap into this process.  But I have always believed that Spt plays a role in producing words under any speech production condition. And more recently, I have argued (see here and here) that Spt is part of the speech motor planning process, essentially running not only in the auditory-to-motor direction, but also doing a motor-to-sensory forward prediction to detect motor planning errors, i.e., motor plan-to-auditory target mismatches, prior to speech generation.  This, I argue, occurs for every speech act, including animal sound naming.  So the manipulation that Parker Jones et al. used does not pit auditory-motor integration against no auditory-motor integration.  Instead it pits two auditory-motor integration tasks against one another.  At best, we might be able to learn something about Spt if the two tasks differ in their integration load.

With this in mind let's look a little closer at the pseudoword versus animal sound naming results.  I think it is a reasonable assumption that pseudoword repetition places greater loads on the Spt circuit than animal naming.  First, there is no other route through which the task can be achieved. Second, pseudowords are by definition lower frequency than real words and therefore more likely to induce motor planning errors and more likely to require extra error detection and correction work.  Third, conduction aphasics, who according to my model have Spt damage, generally have more difficulty with non-word repetition.  So why in the Parker Jones study does animal sound naming activate Spt more than pseudoword repetition? It doesn't, contrary to what they claim, if we factor out the acoustic response (see problem #1). Here are some numbers, generated by eyeball from their graph:

Auditory cortex activity to animal sound naming: 6.9
Auditory cortex activity to pseudoword repetition: 4.0
*pseudoword repetition to animal sound naming ratio: .58

Spt activity to animal sound naming: 3.7
Spt activity to pseudoword repetition: 2.9
*pseudoword repetition to animal sound naming ratio: .78

The relative activation in the pseudoword compared to sound naming condition increases by 20 percentage points (from .58 to .78) in Spt versus auditory cortex. In relative terms then (i.e., with the baseline auditory activity factored out), Spt activates more to pseudowords than animal sound naming.  This fact is obscured in absolute terms because the acoustically-driven activation dominates the response profile.
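The back-of-the-envelope comparison can be sketched in a few lines.  Keep in mind that the activation values are my eyeball estimates from their figure, so the numbers are illustrative only:

```python
# Eyeball estimates of activation (arbitrary units) from the
# Parker Jones et al. figure -- illustrative, not exact values.
aud_animal = 6.9   # auditory cortex, animal sound naming
aud_pseudo = 4.0   # auditory cortex, pseudoword repetition
spt_animal = 3.7   # Spt, animal sound naming
spt_pseudo = 2.9   # Spt, pseudoword repetition

# Pseudoword-to-animal-naming ratio in each region:
# a higher ratio means relatively more pseudoword-driven activity.
aud_ratio = aud_pseudo / aud_animal   # ~0.58
spt_ratio = spt_pseudo / spt_animal   # ~0.78

print(f"auditory cortex ratio: {aud_ratio:.2f}")
print(f"Spt ratio:             {spt_ratio:.2f}")
print(f"difference:            {spt_ratio - aud_ratio:.2f}")  # ~0.20
```

The point of normalizing this way is that purely acoustic drive should scale both conditions similarly, so the residual difference in ratios is the part attributable to something beyond the acoustic response.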

What about the lesion data?  I have no reason to question their findings, other than on the basis of the small sample size.  I can report informally that we have also been collecting data on repetition ability in a much larger lesion sample, and Spt (and surrounds) is robustly implicated even when white matter involvement is factored out.  We are working on the manuscript now.

In short, Spt's demise as a sensorimotor interface is greatly exaggerated.  Which is not to say that I believe we completely understand what Spt is doing.  In fact I think it is a bit more nuanced and interesting.  I would show you the data if only I could convince an NIH panel to fund the research.

1 PostDoc & 2 PhD Positions, Language & Predictive Coding, University of Frankfurt / Germany

The Cognitive Neuroscience Lab (Prof. Christian Fiebach) at the Department of Psychology of Goethe University Frankfurt offers three research positions as part of an ERC consolidator project that investigates neurophysiological mechanisms of language processing from a predictive coding perspective:

Postdoctoral Researcher (German Salary Level E13, 100%) in Cognitive and Computational Neuroscience of Language

We seek a colleague with a strong background in EEG/MEG, fMRI, and/or neuro-computational modeling, and an interest in brain mechanisms underlying language processing. You should have skills in signal processing, data analysis, and/or computational modeling, programming skills (e.g., Matlab, Python), and willingness to acquire expertise in all three methods. The successful candidate will be involved in all aspects of the project and should be motivated to further develop this topic. The position is offered initially for two years. However, an extension for up to five years is possible.

Two PhD positions (German Salary Level E13, 65%) in Cognitive Neuroscience of Language

The PhD projects involve fMRI and MEG/EEG experiments in the field of language processing. We encourage applications from excellent and enthusiastic candidates with MSc or equivalent degrees from Psychology, Neuroscience, Computational Neuroscience, Biology, Physics, or related areas, who share our interest in investigating the neural bases of language processing. Programming skills (e.g., Matlab, Python) are appreciated. Tasks involve the design, acquisition, and analysis of fMRI and MEG/EEG experiments, as well as the publication of research findings. The PhD positions involve funding for three years.

Our lab is at the Department of Psychology and is part of Frankfurt’s vibrant neuroscience community (Interdisciplinary Center for Neurosciences Frankfurt) and the larger Rhein-Main area (Rhein Main Neuroscience Network Frankfurt/Mainz). We have access to state-of-the-art facilities, including the Frankfurt Brain Imaging Center with two 3T MR scanners and a 275-channel MEG, as well as EEG, fNIRS, and eye tracking.

The positions are available from September 1, 2014, and remain open until filled. Further information can be obtained directly from Christian Fiebach.

Please send your complete application (including CV, certificates, as well as names of two referees) electronically to Prof. Christian Fiebach, Department of Psychology, Goethe University Frankfurt, Grüneburgplatz 1, D-60323 Frankfurt am Main ( 

Friday, July 25, 2014

Why I can't talk: Mechanism underlying speech fluency circa 1849

From the Scientific American archives, 1849, Vol. 4, p. 174:

The common fluency of speech, in many men and women, is owing to a scarcity of words, for whoever is master of language, and hath a mind full of ideas, will be apt in speaking, to hesitate upon the choice of both; whereas, common speakers have only one set of ideas, and one set of words to clothe them in, and these are always ready; so people come faster out of church when it is nearly empty, than when a crowd is at the door.  
Yeah, that's why I can't talk.  My mind is too full of ideas.  

Thursday, July 24, 2014

Where does the 10% myth come from?

No one knows exactly.  A nice summary of what we do know is provided in a recent WIRED piece here. William James was thought to play a role, based on a quote from Dale Carnegie's book, How to Win Friends and Influence People, but this may have been a misquote.  Kolb and Whishaw's classic text, Fundamentals of Human Neuropsychology, suggests Flourens's work in the early 1800s as a likely empirical foundation for the myth.  Flourens, of course, is famous for his empirical attack on phrenology.  His method involved ablation studies in a variety of animals--chickens, pigeons, frogs, dogs, rabbits--in which he successively removed larger and larger chunks of the cerebrum.  He reported in 1824 that “One can remove, from the front, or the back, or the top or the side, a certain portion of the cerebral lobes, without destroying their function. A small part of the lobe seems sufficient to exercise these functions.”  This led to his theory of the equipotentiality of the cerebrum, including the cortex, in contrast to the phrenological view, and provides a rational jumping-off point for the 10% myth.  Kolb and Whishaw write, 
Perhaps the most commonly encountered Flourensian idea is in pedagogy, where it is expressed as the assertion that most people never use more than 10% of the brain. p. 9

In a series of lectures in London, Charles-Édouard Brown-Séquard argued vehemently against the then-recent swell of empirical support for a localizationist view of cortex promoted by the work of Broca, Meynert, and Wernicke in the language domain and Ferrier, Bonchefontaine, and Fritsch & Hitzig in the motor domain. Citing the work of Flourens and reporting on new work of his own using a similar lesion-based approach, Brown-Séquard argued that
Each half of the brain is a complete brain originally, and possesses the aptitude to be developed as a center for the two sides of the body in volitional movement as well as in all the other cerebral functions. Still very few people develop very much, and perhaps nobody quite fully, the powers of the two brains. [emphasis mine]
as regards localization of function, ... nerve cells endowed with the same function, instead of forming a cluster so as to be in the neighborhood of each other, are scattered in the brain, so that any part of that organ can be destroyed without cessation of their function.... all the symptoms of brain disease--such as paralysis, anesthesia, amaurosis, aphasia, insanity, convulsions, and the rest--are produced by the same mechanism, whether they arise from an irritation in any part of the trunk or limbs, or from an irritation in any part of the meninges or of the brain itself.
It's circumstantial, but the link between Flourens and the 10% myth seems plausible.

The lectures were published in The Lancet in 1876. Quotes are from the introductory lecture published in the July 15 issue. Scientific American published a commentary on the lectures providing the same quotes, thus possibly perpetuating the idea that few people fully develop the powers of the brain.

Tuesday, July 22, 2014

Open call for abstracts for Special Issue of Cognitive Neuropsychology: Conceptual Knowledge Representation

This is an open call for original research, reviews, and commentaries associated with representations of
conceptual knowledge in the mind and brain.

For the purposes of this Special Issue, conceptual knowledge refers to the knowledge by which we
understand, make inferences, and produce statements about objects, actions, events, settings, human
social roles and interactions, and their states or properties.

Target topics include:

1. Neural theories of conceptual knowledge representation, e.g. how are individual brain systems
and networks of systems involved in the representation of specific types, aspects, or features of
conceptual knowledge?

2. Cognitive theories of conceptual knowledge representation, e.g. what representational
schemes are supported by the data?

3. Individual differences in conceptual knowledge representation, e.g. which aspects of the
cognitive or neural representational schemes are common or variable across individuals, and
which aspects are stable or variable over time?

4. Context and compositionality in conceptual knowledge representation, e.g. how does the
cognitive or neural representation of a concept vary as a function of its semantic context or
combination with other concepts?

5. Resolving converging and/or diverging evidence from different domains, e.g. how can evidence
from different domains – various neural recording methods, cognitive psychology,
neuropsychological case study, computational modeling, and formal semantics – be integrated
with one another?

Abstracts of one page or less describing your proposed manuscript should be submitted by October 15,
2014. Acceptances to submit full manuscripts will be sent by November 1, 2014, and the submission
deadline for the full manuscript will be May 1, 2015. Publication of this special issue is planned for Fall
2015, and articles will appear online as they become available.

Initial abstracts should be sent by email to specialissue at jhuapl dot edu

Guest Editors:
Timothy Rogers, University of Wisconsin, Madison

Michael Wolmetz, Johns Hopkins University Applied Physics Laboratory

Research Assistant - Royal Holloway, University of London - Department of Psychology

Research Assistant
Royal Holloway, University of London - Department of Psychology
£32,862 to £34,724 includes London Allowance
Full Time
Contract / Temporary

Placed on: 16th July 2014
Closes: 14th August 2014
Job Ref: 0714-123
Full Time, Fixed term for 3 years from January 2015
Salary is in the range £32,862 to £34,724 per annum inclusive of London Allowance
Applications are invited for the post of Research Assistant to work with Dr Carolyn McGettigan on the project “Vocal Learning in Adulthood: Investigating the mechanisms of vocal imitation and the effects of training and expertise”, which is funded by the Economic and Social Research Council. The project will investigate the behavioural and neural correlates of the acquisition of novel vocal sounds, using magnetic resonance imaging (MRI) of the brain and the vocal tract.
Applicants should hold a PhD in Psychology, Neuroscience or a related discipline (e.g. Experimental Phonetics, Speech Science, Medical Physics). You must have previous research experience in auditory processing or speech/vocal behaviour, be able to demonstrate strong abilities in acoustic analysis (e.g. using Praat, Matlab) and show a capacity to use computational methods for cognitive neuroscience research. Expertise in MRI research is highly desirable.
This is a full time post, available from January 2015 or as soon as possible thereafter for a fixed term period of 36 months. This post is based in Egham, Surrey where the College is situated in a beautiful, leafy campus near to Windsor Great Park and within commuting distance from London.
For an informal discussion about the post, please contact Dr Carolyn McGettigan (  or +44 (0)1784 443529). For more information about the activities of the Royal Holloway Vocal Communication Laboratory, visit the lab website:
Interested applicants should complete the online application form and submit (i) a full curriculum vitae with a list of publications and (ii) a 1-page statement of past and current research activities and areas of interest.
To view further details of this post and to apply, please complete the online application. The RHUL Recruitment Team can be contacted with queries by email at: or via telephone on: +44 (0)1784 41 4241.
Please quote the reference: 0714-123
Closing Date:  Midnight, 14th August 2014
Interview Date: To be confirmed

The College is committed to equality and diversity, and encourages applications from all sections of the community.  

Sunday, July 6, 2014

Post-doctoral positions at the new Department of the new Max-Planck Institute

The Max-Planck-Institute for Empirical Aesthetics in Frankfurt, Germany, investigates the cognitive, affective, neuronal, and sociocultural foundations of aesthetic experience.

For the newly founded Department of Neuroscience (David Poeppel, director) we are seeking

two post-doctoral research scientists

who will participate in the development and execution of neuroscientifically founded projects on aesthetics that link the Department of Neuroscience with the Department of Music and the Department of Language and Literature. 

We are looking for neuroscientists (cognitive/systems neuroscience) or psychologists with a completed Ph.D. Applicants should provide evidence of excellent training and experience in their home discipline and discuss their interest in empirical aesthetics in general as well as outline potential projects.

The Max-Planck-Institute expects strong academic credentials, the ability to perform independent creative work, joy in taking on new challenges, the ability to work in a team, high social competence, and above average ability to handle pressure. Applicants must have excellent English skills and ideally some knowledge of German.

We offer an attractive urban environment in the vicinity of the Unicampus Westend, a very good work environment, and an interesting and varied area of research. The positions are for a 5-year term. Salary is pursuant to German Entgeltgruppe 14 TVöD Bund, with commensurate benefits.

The Max Planck Society seeks to increase the number of women in those areas where they are underrepresented and therefore explicitly encourages women to apply. The Max-Planck society is committed to increasing the number of individuals with disabilities in its workforce and therefore encourages applications from such qualified individuals.

Informal inquiries can be addressed to David Poeppel ( Applications should include a cover letter (that outlines the interest and projects in empirical aesthetics), CV, academic credentials, and names of three references. The deadline for applications is August 15, 2014. Materials should be sent to:

Max-Planck-Institut für empirische Ästhetik
Personalstelle, Grüneburgweg 14, 60322 Frankfurt, Germany

or online to:

Friday, July 4, 2014

Human connectome project: lessons from genetics

The number of genes separating humans and other primates is fewer than previously thought and in fact very small. See this piece.  What this means is that phenotypical differences are not primarily a function of protein coding genes themselves:
"The physiological and developmental differences between primates are likely to be caused by gene regulation rather than by differences in the basic functions of the proteins in question."
I think this is an important lesson for the massive effort(s) to map the structure of the human brain (e.g., connectome project, BRAIN Initiative, etc.).  I support this effort, of course.  It will provide invaluable data and is absolutely necessary.  But I think once it is complete, we will be in a place quite similar to where we are in genetics: Human Genome Sequence--check.  Understanding how to build a human--not even close.

MIT's Sebastian Seung likes to say, "I am my connectome."  I think the connectome will turn out to be something like the genome: a fairly generic foundation on which all of the really interesting stuff is built.  In short, mapping the connectome isn't going to tell us how to build a brain, unfortunately.  

Thursday, July 3, 2014

Broca's area: a dessert topping or a floor wax? It may help you decide.

Broca's area seems to be involved in everything.

  • Speech articulation
  • Sentence comprehension
  • Working memory
  • Cognitive control
  • Sequencing
  • Hierarchical processing
  • Manual gesture execution
  • Speech perception
  • Gestural action understanding
to name a few off the top of my head.  One thing many tasks have in common, including those that activate Broca's area, is that the subjects in these studies must make a decision.  A recent study by Greg Reckless et al. has added to the list of functions ascribed to Broca's area by showing that activation in this hyperactive region is modulated by changes in decision bias in a picture-based perceptual decision-making task (abstract below).  This is quite consistent with findings from Jon Venezia's study in my lab here at UC Irvine showing that motor speech-related areas are modulated by response bias. See here for a discussion of that work.  

These studies seriously complicate claims that Broca's area does [pick your favorite from the list above] because it could simply be a function of the decision process.  

1000 points to whoever can figure out what Broca's area is REALLY doing.

The left inferior frontal gyrus is involved in adjusting response bias during a perceptual decision-making task



Changing the way we make decisions from one environment to another allows us to maintain optimal decision-making. One way decision-making may change is how biased one is toward one option or another. Identifying the regions of the brain that underlie the change in bias will allow for a better understanding of flexible decision-making.


An event-related, perceptual decision-making task where participants had to detect a picture of an animal amongst distractors was used during functional magnetic resonance imaging. Positive and negative financial motivation were used to affect a change in response bias, and changes in decision-making behavior were quantified using signal detection theory.


Response bias became relatively more liberal during both positive and negative motivated trials compared to neutral trials. For both motivational conditions, the larger the liberal shift in bias, the greater the left inferior frontal gyrus (IFG) activity. There was no relationship between individuals' belief that they used a different strategy and their actual change in response bias.


The present findings suggest that the left IFG plays a role in adjusting response bias across different decision environments. This suggests a potential role for the left IFG in flexible decision-making.
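For readers unfamiliar with how signal detection theory quantifies response bias, here is a minimal sketch of the standard textbook measures (d' for sensitivity, criterion c for bias).  The hit and false-alarm rates below are made-up numbers for illustration, not the authors' data or analysis code:

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Return sensitivity (d') and criterion (c) from hit and false-alarm rates."""
    z = NormalDist().inv_cdf          # inverse standard normal CDF (z-transform)
    z_hit, z_fa = z(hit_rate), z(fa_rate)
    d_prime = z_hit - z_fa            # sensitivity: target/distractor separability
    c = -0.5 * (z_hit + z_fa)         # criterion: negative c = liberal ("yes"-prone)
    return d_prime, c

# Hypothetical rates: motivation raises both hits and false alarms,
# leaving sensitivity roughly unchanged but shifting the criterion liberal.
d_neutral, c_neutral = sdt_measures(0.80, 0.20)   # c = 0 (unbiased)
d_motiv, c_motiv = sdt_measures(0.90, 0.35)       # c ~ -0.45 (liberal)

print(f"neutral:   d' = {d_neutral:.2f}, c = {c_neutral:+.2f}")
print(f"motivated: d' = {d_motiv:.2f}, c = {c_motiv:+.2f}")
```

It is this kind of liberal shift in c between conditions, independent of d', that the study relates to left IFG activity.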

Wednesday, July 2, 2014

Computational exhaust fumes

I suspect that the many studies now published that show motor effects on perception or motor involvement in conceptual representation amount to the computational equivalent of exhaust fumes.

If you direct a fan at the stream of exhaust coming out of a car's tailpipe you can reliably manipulate (p<.000001!) the flow of gases.  But this reveals nothing about how the machine that generates the exhaust works.

I believe the effects.  There is no need to do any more experiments until we figure out whether they have any relevance to the computational speech machine.  Given that the task that is typically used in these experiments is some variant of syllable discrimination, which has long been known to double dissociate from word recognition, I strongly suspect that the answer is, no relevance.  It's just computational exhaust fumes.