Wednesday, July 23, 2008

Georgetown Research Assistant Position

RESEARCH ASSISTANT POSITION AVAILABLE IN THE BRAIN AND LANGUAGE LAB

THE BRAIN AND LANGUAGE LAB
The Brain and Language Lab at Georgetown University investigates the biological and psychological bases of first and second language, and the relations between language and other cognitive domains, including memory, music and motor function. The lab's members test their hypotheses using a set of complementary behavioral, neurological, neuroimaging (ERP, MEG, fMRI) and other biological (genetic, endocrine, pharmacological) approaches. They are interested in the normal acquisition and processing of language and non-language functions, and their neurocognitive variability as a function of factors such as genetics, hormones, sex, handedness, age and learning environment; and in the breakdown, recovery and rehabilitation of language and non-language functions in a variety of disorders, including Specific Language Impairment, ADHD, dyslexia, autism, Tourette syndrome, Alzheimer's disease, Parkinson's disease, Huntington's disease, and aphasia. For a fuller description of the Brain and Language Lab, please go to brainlang.georgetown.edu.


RESEARCH ASSISTANT POSITION
We are seeking a full-time Research Assistant/Lab Manager. The successful candidate, who will work with the other RA/Lab Managers currently in the lab, will have the opportunity to be involved in a variety of projects, using a range of methodological approaches (see above and brainlang.georgetown.edu). S/he will have responsibility for various aspects of research and laboratory management and organization, including creating experimental stimuli; setting up and running experiments on a variety of subject groups; performing statistical analyses; helping manage the lab's computers; managing undergraduate assistants; and working with the laboratory director and other lab members in preparing and managing grants and IRB protocols.

Minimum requirements for the position include a Bachelor's or Master's degree, with a significant amount of course-work or research experience in at least two and ideally three of the following: linguistics, cognitive psychology, neuroscience, computer science, and statistics. Familiarity with Windows (and ideally Linux) is highly desirable, as is experience in programming or statistics and/or a strong math aptitude. A car is preferable because subject testing is conducted at multiple sites. The candidate must be extremely responsible, reliable, energetic, hard-working, organized, and efficient, and be able to work with a diverse group of people.

To allow for sufficient time to learn new skills and to be productive, candidates should be available to work for at least two years, and ideally for three. Preference will be given to candidates who can begin immediately. Interested candidates should email Matt Gelfand (mpg37@georgetown.edu) their CV and one or two publications or other writing samples, and have 3 recommenders email him their recommendations directly. Salary will be commensurate with experience and qualifications. The position, which includes health benefits, is NIH-funded. Georgetown University is an Affirmative Action/Equal Opportunity employer.

Sunday, July 20, 2008

Driving while using hands-free cell phones


A California law recently went into effect (July 1, 2008), banning the use of handheld cell phones while driving; hands-free devices, however, are allowed. While this is not a typical topic for Talking Brains, it affects TB West folks directly, and talking on a cell phone involves speech processing, so I figure it is fair game.

It is clear that talking on a cell phone while driving significantly impairs one's ability to drive; in fact, the impairment is on par with having a blood alcohol level of 0.08%, the legal limit. If driving under the influence is illegal, it seems reasonable to regulate other influences on driving ability as well.

Here's the problem, though: most of the impairment comes from talking, not holding. In several studies by David Strayer and colleagues at the University of Utah, talking on a handheld OR hands-free cell phone impairs driving ability equally. Therefore, bans on handheld phones don't make any sense (unless you own stock in companies that manufacture hands-free devices). To reduce accidents, cell phone use needs to be banned completely during driving.

Critics might counter that we talk in the car all the time to our passengers. Does this research mean we should ban any form of conversation while driving? No. It turns out that Strayer's research has found that conversing with passengers does NOT produce the same kind of impairment on driving; passengers adjust their conversation as a function of driving difficulty, and many actually help out in some situations. After all, passengers have a stake in the performance of the driver as well. But the person on the other end of a cell phone conversation has no idea when brake lights suddenly flash, or a ball rolls into the road, and therefore merrily chatters on.

A recent paper by Marcel Just et al. hints that the parietal lobe may be involved in the decrease in performance: in subjects performing a simulated driving task, parietal activation decreased by 37% when they concurrently listened to sentences, compared to a no-interference condition. Not sure what to do with this information, but since the blog is called Talking BRAINS, I thought I should mention it...

Notice to lawmakers: Banning handheld phones is a waste of time and (our) money. Well, maybe it's not a complete waste. Now I have an extra hand to hold my coffee.

References:

Just, M. A., Keller, T. A., & Cynkar, J. (2008). A decrease in brain activation associated with driving when listening to someone speak. Brain Research, 1205, 70-80.

Strayer, D. L., & Johnston, W. A. (2001). Driven to distraction: Dual-task studies of simulated driving and conversing on a cellular phone. Psychological Science, 12, 462-466.

Strayer, D. L., Drews, F. A., & Crouch, D. J. (2006). A comparison of the cell phone driver and the drunk driver. Human Factors, 48, 381-391.

Strayer, D. L., & Drews, F. A. (2007). Cell-phone-induced driver distraction. Current Directions in Psychological Science, 16, 128-131.

Friday, July 18, 2008

3rd Annual Eleanor M. Saffran Cognitive Neuroscience Conference

3rd Annual Eleanor M. Saffran
Cognitive Neuroscience Conference


“Language Processing in the Multilingual Brain: Implications for
Treatment of Developmental and Adult Language Disorders”

Sponsored by the Eleanor M. Saffran Center for Cognitive Neuroscience
of the
Department of Communication Sciences and Disorders
College of Health Professions
Temple University
and
Philadelphia Neuropsychology Society

Date: Friday, September 12th, 2008
Time: Registration begins at 8:15am
Conference from 8:30am to 5:00pm
Reception from 5:15 to 6:30pm
Location: Conference - Howard Gittis Student Center- South
Room 200
13th Street between Cecil B. Moore & Montgomery Ave.

For additional information please feel free to contact Dr. Nadine Martin
(nmartin@temple.edu) or Melissa Correa (mcorrea@temple.edu), or click here for a flyer.

Tuesday, July 8, 2008

Where's the "how"? Top 10 functional imaging contributions to understanding language processing

In a comment to my last post on the CNS Summer Institute, TB Down Under rep, Greig de Zubicaray, mentioned Indefrey and Levelt's 2004 paper as an example of a solid attempt to merge neuroscience and psycholinguistics. In my response comment, I agreed, but cautioned that we don't actually learn much by localizing neural correlates of the various boxes in an assumed psycholinguistic model, as Indefrey & Levelt did. Such localization exercises don't tell us how language is processed in the brain, only where it is processed, and "where" by itself isn't that interesting. Seriously, who really cares if phonological encoding in speech production involves the dorsal or ventral bank of the STS (for example)? But, as Greig points out, "where" has the potential to reveal "how," and I agree.

But what have we learned? We are more than a decade removed from the first PET and fMRI studies of language processing. Presumably this is enough time to assess progress, so this seems like a good time for an exercise:

What are the Top 10 contributions of functional imaging (PET & fMRI, specifically) to understanding how language is processed in the brain? [crickets]

We better get some feedback on this question, because if the neuroscience of language community can't come up with anything on this one, it probably means we are wasting our time generating pretty pictures. Go ahead and draw on your own work.

Greig suggested that some of his own imaging studies indicate that the architecture of the language processing system is more closely related to connectionist architectures than to serial feedforward architectures.

Can we come up with a Top 10 list?

Monday, July 7, 2008

Summer Institute for Cognitive Neuroscience: Trade-off between neuroscience and linguistics


I just returned from giving a talk in the language session at the Summer Institute for Cog Neuro, which was held in Lake Tahoe. Beautiful venue, great weather (until the wind shifted and smoke from a fire in the area blew into the valley), and an interesting session.

The program was chaired by Alfonso Caramazza and included talks by Peter Hagoort, Kevin Shapiro (Caramazza co-author), Franck Ramus, David Caplan, Laurent Cohen, and me. I'm not going to go into detail about any of the talks - you can read all about them when Gazzaniga's The Cognitive Neurosciences Volume IV hits the bookshelves. Instead, I want to comment on the one thing that made a serious impression on me as I listened to the talks and fielded questions on my own talk, namely, that there remains a serious gap between psycholinguistics and biology.

I have to confess up front that I didn't hear all of the talks: I had to miss Caplan's and Cohen's talks, which were given on the day after mine. Based on the talks I did hear, the contrast was striking between those of us who approach the cognitive neuroscience of language with an emphasis on neuroscience and those who approach it with an emphasis on (psycho)linguistics. For example, Hagoort talked about semantic unification, spending a good deal of time dissecting the ERP patterns associated with a range of linguistic constructions; he had little to say about the functional anatomy underlying these unification processes beyond a reference to some brain regions that might be generating these effects. Shapiro and Caramazza's talk was mostly concerned with the patterns of association and dissociation in morphological processes that one finds in brain damage; again, not much in the way of detailed functional anatomic networks. Franck Ramus talked about the genetics of language, providing details on the foci of genetic abnormalities associated with various language-related conditions, but offered little psycholinguistic detail, using terms like "morphosyntactic deficits" to describe the language phenomena. Likewise, my own talk highlighted the role of specific brain structures, such as the posterior half of the STS (bilaterally) and the posterior planum temporale (Spt), in various language processes, but I had little to say about the exact psycholinguistic processes involved and, like Franck, resorted to terms like "some aspect of phonological processing" and "sensory-motor integration."

Neither approach is right or wrong, or more or less important. They just reflect a different emphasis.

In some sense, listening to a talk from someone with a different emphasis can be unsatisfying. For example, Peter Hagoort asked me how Levelt's notion of syllabification fits into our Dual Stream model. My response: I don't know. Caramazza and Caplan both pressed me on what kind of representations live in the STS. My response: Maybe a phonological lexicon of some sort. Good questions that I hope to be able to answer some day, but to date, not very satisfying responses. Although I didn't, I just as easily could have asked Peter, for example, how the notion of semantic unification fits with the connectivity pattern between the anterior temporal lobe and BA45 (or whatever).

In the end, it is often hard to find points of connection between these different approaches to brain and language research, and I think the tendency is either to ignore findings/hypotheses from the research program at the other end of the spectrum, or to reject them. I certainly feel this tendency in myself (semantic unification or dissociations in morphological abilities don't help me understand the response properties of Spt, so why pay attention), and I can see how people from a psycholinguistic modeling approach might tend to ignore or reject what we're doing (it doesn't clarify the nature of syllabification or morphological affixation, so why pay attention).

But this is the wrong mindset, of course. We are all interested in understanding the relation between neural systems and language models. We need to pay attention to what's going on at both ends of the spectrum and actively seek connections. We also have to understand that neuroanatomical models are not intended (at this stage) to be psycholinguistic models, and are necessarily (at this stage) more coarse in terms of processing stages/representations. Despite the fact that both the Dual Stream model and Levelt's model of speech production have boxes and arrows, they are not aiming to characterize the same set of facts. Hopefully the two types of models will converge and constrain each other in the end, but there need not be a one-to-one relation between boxes and Brodmann areas. At this stage in our field, it helps to approach a given paper, talk, or grant proposal with an "emphasis adjusted" mindset.

Friday, July 4, 2008

An embarrassing moment -- and a lesson in ethics

A few days ago, I recommended a review article on fMRI by Nikos Logothetis that just appeared in Nature. Now, Logothetis appears there again, but in a more dodgy context.

I recommend as reading a piece on the sociology of science and scientific conduct that just came out in Nature. The article tells the embarrassing saga of a dataset acquired by two former Logothetis trainees and published in the journal Human Brain Mapping this past May.

I will spare Talking Brains readers the details -- they are painfully obvious from the report by Alison Abbott. But I will say that, as far as I am concerned, everybody looks really LAME in this story! The two authors should not have proceeded to the publication stage without involving the PI much more closely, given the origin of the data; the PI should have shown more perspective and coolitude and chillaxity (both closely related to Stephen Colbert's truthiness) -- and written a calm and carefully argued response, if the stakes are really that high; the journal Human Brain Mapping should have allowed time for a published response, given that that is not uncommon for journals (even I have been involved in a controversy in which there were responses in the same issue); Nature should not write about this -- unless this is NPG's new Nature People (nobody wins with this type of hype, I bet).

I do think this is a good place for a brief discussion -- and poll -- on how to deal with papers well in advance. Do we all always agree how a given project will be divided up and written up, etc.? Should a PI, say in a lab meeting, discuss who/what/how/when of a paper before it happens? A lot of ethical issues come up that merit some thought. I, for one, am always a little confused on what is exactly the right thing to do. Things are complicated -- but are there sensible guidelines, especially in big labs with lots of people working on lots of projects?

A (supposed) neural theory of language? Where's the beef??

I think it's safe to say that I am interested in brain and language, and so when I just came across a book that promises a "neural theory of language" my interest and curiosity were piqued -- and I bought the damn thing ... I have been struggling for years with the issue of what a neural theory of language would even look like (see, for example, Dave Embick's and my paper on this issue, "Defining the relations between linguistics and neuroscience"; a new paper on this topic by us is forthcoming), so I was pretty jazzed to spend some quality time (waiting for a flight at Logan Airport) reading this book.

Jerome A. Feldman's From Molecule to Metaphor. A Neural Theory of Language (MIT Press, 2006), alas, is little more than a mediocre intro to cognitive science with a little neuroscience, uncritically adopted. A real disappointment. You can learn *a little* -- and really just a little -- about certain aspects of computational modeling in language processing that are endorsed by Feldman, mostly in the last third of the book. But the promise of unification, or even coherent discussion, of the challenging links "from molecule to metaphor" is never fulfilled.

Greg, this book would make you crazy. I kid you not. The book makes a pitch for embodied cognition, but in a relentlessly naive form. The first seven chapters are a kind of 'light-reading' introduction to some neurobiological concepts (neural connections are tuned; cells are complicated; plasticity matters; mirror neurons are awesome -- that sort of thing). Why any of these specific concepts matter for a neural theory of language is never explained, even though the subheading for this section of the book is How the Brain Computes.

The critical passages, linking brain and language, contain stuff like this: "There is every reason to believe that ideas, concepts, and the like are represented by neural activity. [Eh .. OK .., yes .. DP] The exact circuitry involved is uncertain, but it suffices for us to assume that some stable connection pattern is associated with each word, concept, schema, and so on" (page 91). [Oh well - cop out. DP] "While the details remain unclear, the general idea that mental connections are active neural connections is universally accepted ..." (page 94). No kidding! In the crucial sections on the "Computational Bridge" -- a section I was especially looking forward to, because I think some form of computational theory will be crucial -- the message is that all knowledge is embodied, and that mirror neurons provide the key mechanism. OK, whatever.

The book is (a) superficial [basic cog sci stuff, mediocre intro level] and (b) gullible [apparently Feldman has never come across a mirror neuron claim he didn't believe and adopt]. The basic ingredients for a neural theory of language are: mirror neurons; spreading activation as an account of priming; synaptic plasticity. Sure, all great ideas (even the mirror neurons are a great idea, even if they just don't work) -- BUT: nothing, and I mean nothing is said at any level of detail about how this is supposed to account for even the most banal aspects of language. Understanding as simulation? Maybe.

There is some rhetoric about language wars -- Chomsky is bad/badder/baddest, Pinker is not very good, Jackendoff also leaves something to be desired -- but there is never anything to sink your teeth into regarding the neurobiological mechanisms that form the basis for any aspect of language. As I said, a disappointment. I'm sure that Feldman has made many important contributions to computational cognitive science and computational linguistics. This book is not one of them.