Thursday, March 5, 2015

Why computational cognitive scientists can continue their work despite rumors of their field's demise

The cognitive revolution (or better, information processing revolution) rejected the idea that behavior could be understood without reference to a contribution from the mind/brain.  Through decades of experimentation and theory development, we have come to appreciate that the mind/brain works by computing (or better, transforming) information available in the environment (or stored in the mind/brain itself) as a means to control behavior.  Call this the computational theory of mind.  Models in this framework often abstract away from particular instances (tasks, experiences, actions) and develop abstract models of how the brain computes (transforms information).  These often use mathematical symbols or other representational notation.

Radical embodied cognition rails against this view and makes arguments along these lines:

Computational/symbolic/mathematical models are descriptions of some phenomenon.  For example, a falling apple doesn't actually compute the gravitational force as understood mathematically.  The mind is the same. Just because you can describe, say, aspects of movement according to Fitts's law doesn't mean the brain actually computes the formula.  And by extension, just because we can describe lots of mental functions using computational/symbolic/mathematical models doesn't mean the brain computes or processes symbols. Therefore, the mind doesn't compute; computational models are barking up the wrong tree; we need a new paradigm.
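To put the formula in question on the table: Fitts's law is just a two-parameter regression equation relating movement time to target distance and width. Here is a minimal Python sketch; the coefficients a and b are invented for illustration (in practice they are fitted per task and per person). The point of the embodied argument, of course, is that the motor system need not run anything like this code for the equation to describe its behavior.

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Fitts's law: predicted movement time (seconds) to a target.

    MT = a + b * log2(2D / W).  The coefficients a and b here are
    made up for illustration, not fitted to any real data set.
    """
    index_of_difficulty = math.log2(2 * distance / width)  # in bits
    return a + b * index_of_difficulty

# A narrower target (same distance) predicts a longer movement time:
print(fitts_movement_time(distance=200, width=20))  # ID ~ 4.32 bits
print(fitts_movement_time(distance=200, width=5))   # ID ~ 6.32 bits
```

Describing reaching with this equation is useful regardless of whether anything in the arm or brain literally evaluates a logarithm.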

Putting aside debates about what counts as computation, here's why these sorts of arguments don't change the computational cognitive scientist's research program one bit.

Falling apples don't compute, but an abstract mathematical description of the force behind the behavior led to great scientific progress.  It is these abstract mathematical descriptions that have pushed physics to such heights of understanding.  If physicists had rejected their theories just because apples don't compute, we would probably be too busy tending the farm to debate this silliness.  Therefore, modeling cognition using abstract computational systems can lead (and has led!) to great scientific progress.  Even if the mind isn't literally crunching X's and Y's, there is great value in modeling it this way.

No computational cognitive scientist (that I know) actually believes the mind works precisely, literally as their models hold.  Chomskians don't believe neuroscientists will find linguistic tree structures lurking in dendritic branching patterns, nor do Bayesians expect to find P(A|B) spelled out by collections of neurons doing Y-M-C-A dance moves.  Rather, we understand that these ideas have to be implemented by massive, complex neural networks structured into a hierarchy of architectural arrangements, bathed in a sea of chemical neuromodulators, and modified according to principles such as spike-timing-dependent plasticity.  No one (that I know) is foolhardy enough to believe that the relation between our computational models and neural implementation is literal, transparent, or simple.  In short, computational cognitive scientists use their models in exactly the same way physicists use math. To reject this approach because mathematical symbols aren't literally lurking in the brain is foolish.
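To make the Bayesian case concrete: the computational-level claim amounts to nothing more than Bayes' rule, P(A|B) = P(B|A)P(A)/P(B). Here is a minimal sketch with invented numbers; the claim specifies what gets computed, not that any neuron literally stores a conditional probability.

```python
def posterior(prior, likelihood, marginal):
    """Bayes' rule at the computational level: P(A|B) = P(B|A) * P(A) / P(B).

    The numbers passed in below are invented purely for illustration.
    """
    return likelihood * prior / marginal

# E.g., a listener updating belief in word A after hearing acoustic cue B:
p_a = 0.2                                    # prior P(A)
p_b_given_a = 0.9                            # likelihood P(B|A)
p_b = p_b_given_a * p_a + 0.1 * (1 - p_a)    # marginal P(B)
print(posterior(p_a, p_b_given_a, p_b))      # posterior P(A|B)
```

Whether a population of neurons approximates this update via firing rates, sampling, or something else entirely is exactly the linking question left open by the model.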

Cognitive neuroscientists, also disparaged by the embodieds, are working on the linking theories, asking how tree structures or prior probabilities might be implemented in neural networks.  Not surprisingly, the neural implementation models don't literally contain symbols. Instead they contain units (e.g., neurons) arranged into architectures, with particular connection patterns, nested oscillators, modulators, and so on, and often modeled after real brain circuits as best we understand them.  We are doing well enough at neurocomputational modeling to simulate all kinds of complex behaviors.
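A toy illustration of this implementation point, assuming nothing about any particular model: even a "symbolic" function like exclusive-or can be carried by a handful of threshold units and hand-set connection weights, with no symbol stored anywhere in the network.

```python
def step(x):
    """A threshold unit: fires (1) if net input exceeds zero."""
    return 1 if x > 0 else 0

def xor_network(x1, x2):
    """A toy two-layer network of threshold units computing XOR.

    Weights are hand-set for illustration; nothing in the network
    contains the symbol 'XOR', yet the function is implemented.
    """
    h_or = step(x1 + x2 - 0.5)    # hidden unit: fires if either input is on
    h_and = step(x1 + x2 - 1.5)   # hidden unit: fires only if both are on
    return step(h_or - h_and - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_network(a, b))
```

Real neurocomputational models are vastly more elaborate, but the moral is the same: the symbolic description and the mechanistic implementation live at different levels.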

I respect that radical embodieds want to see how much constraint on cognitive systems the environment and the body can provide and that they want a more realistic idea of how the mind physically works (in which case I suggest studying neuroscience rather than polar planimeters).  We have learned some things from this embodied/ecological approach.  But given that subscribers don't reject that the mind/brain contributes something, we still need models of what that something is.  And this is what computational cognitive scientists have been working on for decades with much success.
Carry on, you computational people.  Let's check back in with the radical embodieds in 2025 to see how far they've gotten in figuring out attention, language, memory, decision making, perceptual illusions, motor control, emotion, theory of mind, and the rest. If they have made some progress, and I expect they will, we can then update our models by adding a few body parts and letting our robots roam a little bit more.

 

Monday, February 2, 2015

International Conference on Interdisciplinary Advances in Statistical Learning

We are pleased to announce the *International Conference on Interdisciplinary Advances in Statistical Learning*. The conference will take place in San Sebastian, Spain, June 25-27, 2015.

The conference website is now live:
http://www.bcbl.eu/events/statistical-learning/en/


IMPORTANT DATES

Abstract deadline: March 1st, 2015.
Notification of abstract acceptance: March 15th, 2015.
Early registration deadline: April 15th, 2015.
Online registration deadline: May 15th, 2015.
Conference dates: June 25-27, 2015.

The conference will discuss statistical learning and its underlying mechanisms from behaviour to neuroscience, in various domains such as language, music, vision, and audition, with data from adult participants, development, individual differences, computational modeling, and non-human species.

Presentation formats include keynote addresses, symposia, peer-reviewed talks, panel discussions, and poster sessions.

Keynote Speakers:
Richard Aslin, University of Rochester
Morten Christiansen, Cornell University
Elissa Newport, Georgetown University
Linda Smith, Indiana University
Nicholas Turk-Browne, Princeton University

Theme Speakers:
Computational modeling:
Erik Thiessen, Carnegie Mellon University

Consciousness & implicit learning:
Axel Cleeremans, Université Libre de Bruxelles

Development:
Rebecca Gomez, University of Arizona, USA

Evolution & cross-species comparisons:
Kenny Smith, University of Edinburgh

Neurobiology and cognitive neuroscience:
Barbara Knowlton, UCLA


We look forward to seeing you soon in San Sebastian!


The Organizing Committee

Friday, January 30, 2015

Against the mind/brain by Fred Cummins

Guest post from Fred Cummins in response to a tweet exchange.
----------------------------------------

This is a letter to Greg Hickok in response to some recent tweet exchanges that immediately seemed to raise issues that are not resolvable in 140 character snippets. I'm including Andrew Wilson and Sabrina Golonka from Leeds, and Marek McGann from Limerick in the distribution, as I believe we might all have interesting perspectives on the issues at stake.
Let's start with this heartfelt tweet from Greg, which I hope represents common ground among all of us:

I wonder how much the pace of science would quicken if we could just understand what the fuck each other are trying to say. (@GregoryHickok)
I might have some reservations about the use of "pace", which suggests a simple linear progressive course from ignorance into certainty, to be delivered by science, but the sentiment that we are talking past each other, and that this is not in any of our interests, motivates this reply.
Then, to start the discussion, here is a sequence of three further tweets (concatenated) by Greg which triggered this response:

Embody: 'To give a concrete form to what is abstract' (OED). In the brain, concrete form = neural codes (firing patterns, etc). An embodied mental concept then just means a concept defined by neural codes. Embodied cognition therefore targets the same question as standard cognitive neuroscience: how does the brain code information?
And one further attempt by Greg to nail down two opposing (or complementary?) positions:

Fair to lump embodiment in two broad camps? 'Psychological': cog grounded in sens-motor. 'Physical': cog+sens-mot grounded in body/envirnmnt
I bristled. I objected. I complained that Greg was misappropriating the word "embodied":

OED a terrible source from which to work here. Your nonce definition demands faith in a computational orthodoxy many reject. (@fcummins)

Where should we start then? (@GregoryHickok)

By not misappropriating the term "embodied"? CogSci has not been helped by Camp A's dismissal of Camp B (for many As and Bs) (@fcummins)
To which Greg replies:

Ok, I'm listening. What is your version of embodied? (@GregoryHickok)
This is where the tweet-format breaks down, because that is rather a big question. It is also a question I have no intention of answering, because it presupposes further common ground that is not, in fact, present. This yawning abyss needs to be acknowledged before we could usefully approach an answer. Otherwise, we are indeed incapable of understanding what the fuck we are hearing from each other.
Important qualification: There are not two camps here, with Greg in one and Andrew-Sabrina-Marek-Fred in the other. I do not speak for Andrew-Sabrina-Marek, and my views here are my own. I am not confident they would garner assent from anyone else. That (and the good natured tone of twitter exchanges among us) will allow me to perhaps overstate my case in the interests of drawing attention to the missing common ground that must be acknowledged if things are to improve. And I will undoubtedly make unwarranted assumptions about Greg's position and commitments too, for which I will happily accept vilification and rebuke.  And if my tone degenerates into incoherent ranting towards the end, I ask for good-natured indulgence.  I love the challenge set by the first tweet, and I think we all need to learn how to argue, with due acknowledgement of the chasms that can separate us.

At issue here is not a disagreement over how we understand the mechanisms of the cognitive system that give rise to human behaviour. It is not a matter of negotiating the degree to which such explanations need to appeal to properties of the non-brain body in addition to the computational properties of the brain. The disagreement is much more serious and fundamental than that.
There is not, nor is there likely to be, agreement that the brain is best understood as a computational machine. Likewise there is no agreement that there *is* a cognitive system. There is no agreement that the physiological activity of the brain is properly understood in representational terms. There is no agreement that the brain is the locus, or origin, of phenomenal experience, or consciousness. This is a lot of disagreement. So much, that a premature jump to accounts of what "embodiment" might mean will get nowhere.

I don't want to convince Greg, or the whole of cognitive neuroscience that their enterprise is fundamentally unsound. I do believe that the orthodox interpretation of the brain, and the causal accounts of human behaviour that result, are tragically wrong, and that alternatives are necessary. But here is perhaps the biggest bone of contention of all: I do not believe that there exists, could exist, or should exist, a final fixed account of what a person is, how experience unfolds for them, or what it is like to be a person. These, I firmly believe, should be negotiated. Putative answers are not of the nature of "facts" in inorganic sciences, and will never admit of subsumption under laws comparable in rigour and predictive power to the laws of mechanics.

And so the representational, computational, information processing, cognitivist approach to brains and to people has its place. It is hopelessly wrong, but that is not the problem. The problem is the authority granted to models, findings, pronouncements arising from such views. Science does not give us certainty here. It gives us the means to negotiate and approach local consensus, for some issues, some of the time.

What I do find lamentable is the failure of folk in the representational camp (role played by Greg here) to even acknowledge that their position is open to question, and their assumption that it is OK to appeal to representations, information processing, perceptual inputs, and the whole grab bag of psychological concepts that, from where I and many others are standing, appear to be worse than untrustworthy.

That sentience might be a property of living forms, rather than brains, is a well articulated position of some venerability. Philosophical arguments extend back at least to Kant, and recently Thompson's Mind in Life, building as it does on people like Jonas, Husserl, Varela, Maturana, and many, many others, provides a well fleshed out account. This is not a single voice. Relating mind to life is going on all over the place. The most significant developments in biological philosophy that address the question of how "life" is to be understood all feed into the discussion here. Terrence Deacon's "Incomplete Nature" and Stuart Kauffman's "Investigations" are some relevant works. I could list dozens of others, but all I want to establish is that there is more than one game in town, and the wilful neglect of the grounded, reasoned position of others that the rationalist, cognitivist camp regularly engages in speaks only of their lack of knowledge of the territory. We in the mind and life camp are well aware of the arguments, successes, failures, weaknesses and strengths of the cognitivist position. Why do we repeatedly run into a failure of that camp to acknowledge other positions? Some of the founders are well known for their appalling arrogance here. Fodor, in 2001, says:

[The Computational Theory of Mind] is, in my view, by far the best theory of cognition that we’ve got; indeed, the only one we’ve got that’s worth the bother of a serious discussion.... its central idea---that intentional processes are syntactic operations defined on mental representations---is strikingly elegant.
That phrase "the only one we've got that's worth the bother of a serious discussion" rankles me. It is dismissive and ignorant. Chomsky argues regularly in similar vein, and has recently been called publicly to task for it by Christina Behme. This is rank, ignorant, fundamentalism.
The 1991 book The Embodied Mind (Varela, Thompson, Rosch) is a landmark work that introduced some of these concerns into cognitive science. It also represents the first principled introduction of the term Embodiment into such fundamental discussions. The term has been dragged through many ditches since, and used and misused in so many ways that any suggestion that there is an agreed interpretation of the term is ridiculous. To illustrate my point, I need only point out how Greg felt that embodiment could be subsumed within his view of how brains relate to experience and behaviour (I hate the word "mind").
Is it necessary to argue here the shortcomings of the contemporary cognitivist view? Is it necessary to decry it as a crypto-religious solipsistic Cartesian approach, bound inseparably to a cultural-historical view of the human that valorises individual agency and autonomy to the point of shutting the person off from their world? To my jaundiced eye, all computational accounts we currently have fall woefully short of providing plausible accounts of either experience or behaviour. They lean on a magical notion of representation that is a theological construct, yet they are not aware of their theological commitments.
The issues at stake are not small. They extend to more than a single word. But I hope this response can serve to lay down a marker, so that Greg, and anyone else belonging to the Church of Cognitivism, does not feel free to misappropriate terms from reasoned approaches to cognition they do not even acknowledge. I am happy to elaborate on any of the points above, but I feel this tweet has now reached its character limit.

-- 
Cognitive Science Programme
UCD School of Computer Science and Informatics


Monday, January 12, 2015

Technical Assistant in the Fedorenko Lab (EvLab), MGH/MIT

POSITION OPENING: Technical Assistant in the Fedorenko Lab (EvLab), MGH/MIT, to assist with all aspects of research on the cognitive and neural architecture of the language system.  Target start date is June 1 but earlier would be preferable.

RESPONSIBILITIES: Designing, programming, and conducting behavioral (including web-based) and fMRI experiments; analyzing behavioral and fMRI data; creating and updating the lab website; implementing and maintaining analysis software; technical support for lab personnel; and some basic administrative duties.

REQUIREMENTS: Candidates must have ALL of the following: i) strong math, statistics, and computer skills (e.g., MATLAB, Python, shell scripting), ii) substantial programming experience, iii) Mac and Windows troubleshooting skills and good knowledge of the Linux/Unix environment (our data server is Linux), iv) a Bachelor's degree in cognitive science, neuroscience, computer science, engineering, psychology, physics, or math, and v) evidence of serious interest in a career in cognitive neuroscience. In addition, prior research experience in cognitive neuroscience and experience conducting and/or analyzing fMRI experiments and/or anatomical MRI studies is desired. The position is ideal for individuals considering future graduate study in cognitive science or neuroscience. We seek an organized, self-motivated, independent, and reliable individual who is able to multitask efficiently in a fast-paced environment, as well as work as part of a team.

If interested please contact Ev Fedorenko at evelina9@mit.edu.

<> <> <> <> <> <> <> <> <> <> <> <> <> <> <>
Evelina (Ev) Fedorenko
MGH, Psychiatry Department
Building 149, East 13th Street
Room 2624
Charlestown, MA 02129

evelina9@mit.edu
evelina.fedorenko@mgh.harvard.edu

http://web.mit.edu/evelina9/www/
http://web.mit.edu/evelina9/www/funcloc.html

RA and Post-doctoral positions in the NYU/NYU Abu Dhabi Neuroscience of Language Lab

1) Four lab manager/RA positions in the Neuroscience of Language Laboratory (http://www.psych.nyu.edu/nellab) at NYU and NYU Abu Dhabi (PIs: Alec Marantz & Liina Pylkkänen). Two of these positions will be based in New York and two in Abu Dhabi. The New York positions will be regular lab manager positions for the Pylkkänen and Marantz groups; the Abu Dhabi positions will be more flexible RA-positions most likely involving working with both PIs. All RAs should expect their responsibilities to span the two sites of the laboratory. Initial appointments are for one year, with possibility of renewal.  BA/BS or MA/MS in a cognitive science-related discipline (psychology, linguistics, etc.) or computer science is required. The lab managers/RAs will be involved in all stages of execution and analysis of MEG experiments on language processing. Previous experience with psycho- or neurolinguistic experiments is highly preferred. A background in statistics and some programming ability (especially Matlab) would give an applicant a strong edge. In Abu Dhabi, salary and benefits, including travel and lodging, are quite generous. We are looking to start these positions as soon as possible; most likely start dates are in mid-spring/early summer, although a late summer start might be possible. Evaluation of applications will begin immediately, and decisions will be made on a rolling basis.

To apply, please email CV and names of references to Phoebe Gaston at pg72@nyu.edu. In your email, please indicate whether you are applying for a position in New York or Abu Dhabi and whether your research interests align better with the Pylkkänen or Marantz group. 

2) Postdoctoral position: Cognitive Neuroscientist.  2-year, potentially renewable post-doctoral position in the cognitive neuroscience of language for the NYU Abu Dhabi Neuroscience of Language Laboratory (http://www.psych.nyu.edu/nellab). The researcher will have had experience with evoked response experiments using either MEG or EEG. The main responsibility of the researcher will be to cooperate with the PIs on the design and completion of MEG experiments with participant populations of varied linguistic and educational backgrounds to address questions related to the research projects of the NeLLab PIs, Alec Marantz and Liina Pylkkänen. A researcher with cross-linguistic experimental experience would be ideal for the job. Salary and rank will be commensurate with experience; benefits, including travel and lodging in Abu Dhabi, are quite generous. Applications will be accepted through February for a start date in early to late summer. Review of applicants will begin immediately. To apply, please email CV and names of references to Phoebe Gaston at pg72@nyu.edu.

What is the extent of motor influence on speech perception?

Although I am thoroughly convinced that an extreme version of the motor/mirror neuron theory of speech perception is untenable, I am quite open to the possibility that the motor system may influence speech perception.  That is, I view speech perception as fundamentally an auditory process (wild claim, I know!) but one that can be modulated by contextual sources including acoustic, visual, semantic, sentential, and, yes, motor information.  To illustrate, here is a quote from my 2011 paper with John Houde and Feng Rong:


… we suggest … under some circumstances forward predictions from the motor speech system can modulate the perception of others’ speech. … forward predictions generated via motor commands can function as a top-down attentional modulation of sensory systems. Such attentional modulation may be important for sensory feedback control because it sharpens the perceptual acuity of the sensory system to the relevant range of expected inputs (see below). This ‘‘attentional’’ mechanism might then be easily co-opted for motor-directed modulation of the perception of others’ speech, which would be especially useful under noisy listening conditions, thus explaining the motor speech induced effects of perception (Hickok et al., 2011, Neuron, 69(3), 407-422).

Nonetheless, I still hold the view that current evidence for this claim--including the many high-impact TMS reports--is exceptionally weak.  The problems with these reports are primarily methodological.  They typically use an unnatural task such as syllable discrimination, which is not something we do in normal speech processing settings, and response bias is not well controlled.  See Hickok 2014 for extensive discussion.  

A recent paper from Pulvermuller's group (Schomers et al.) reports a nice study that addresses my previous concerns.  The study follows up an influential report from the same group showing that stimulation of motor lip versus tongue areas differentially affects perception of syllables with lip- versus tongue-related sound onsets (D'Ausilio et al., 2009).  In the new study they again stimulated either lip or tongue motor areas and found a crossover interaction in the speed at which words starting with lip- or tongue-related sounds were recognized (via button press decisions to a matching picture). 

It's a nice result, but not without complication.  A closer look reveals that:

(1) It is not clear whether the effect can be attributed to motor cortex stimulation or to somatosensory cortex stimulation.  It would be quite interesting in its own right if somatosensory stimulation modulated speech perception, making the paper important in that sense, but since we are debating the role of the motor system, ambiguity about whether motor cortex was really the source of the effect weakens the claim quite a bit.

(2) The effect held only in the reaction time data.  No effect was observed for accuracy, which seems rather more important for speech processing.  

(3) The RT effect held only for tongue-related sounds but not lip-related sounds.  

All of these complications just reinforce my skepticism that the motor system is doing anything for speech perception.  But maybe I'm just nitpicking.  These scientists made a serious effort to address my previous concerns and did a nice job.  True, the effects aren't whopping but at least some effects were noted.  Combined with the several previous reports perhaps this finally confirms that the motor system has a role to play in speech perception.

OK, so let me bite my tongue (thus temporarily modulating my perception of tongue-related sounds!) and admit that, consistent with my own claim (see above quote), the motor system indeed plays a role in perception.

The next question we must ask, then, is how much of a role it plays.  And the answer from the Schomers study is not very much:

--Since these TMS-induced effects only emerge when the speech stimuli are barely detectable anyway (69% accuracy in the Schomers et al. experiment), the motor system contributes only when we are failing to hear a good chunk of the speech signal.

--Under those circumstances, it doesn't even improve perceptual accuracy.  All it does is speed up responses.

--It only does this for some speech sounds.

So, fine, I admit that the motor system plays a "causal role" in speech perception. But that role is nearly inconsequential to understanding the neural and computational basis of speech perception.


 


Sunday, January 11, 2015

Post-doc opening at Simons Center for the Social Brain at MIT

POSITION OPENING: The Simons Center for the Social Brain at MIT is looking for a postdoctoral researcher for a 2-year position (possibly extendable up to 3 years) to work on an interdisciplinary project investigating pragmatic processing and the nature of pragmatic impairments in autism spectrum disorders. The project is a collaborative effort across five labs and includes three components that use i) behavioral (Gibson), ii) neural (Fedorenko & Saxe), and iii) computational/developmental (Schulz & Tenenbaum) approaches, respectively.  Target start date is June 1 but earlier would be preferable.

RESPONSIBILITIES: Leading one or more components of the project, including designing experiments, collecting and analyzing data (plus overseeing data collection / analyses carried out by graduate students / lab tech working on this project), presenting the results at conferences, and writing up the results for publication.  Because of the interdisciplinary nature of the project, we expect the postdoc to acquire new skills in order to be able to truly integrate the three components (for example, if the postdoc has a cognitive neuroscience background, s/he will be expected to acquire behavioral experimental skills, developmental research skills, and/or computational modeling skills).

REQUIREMENTS: We are looking for individuals with a Ph.D. in Neuroscience, Psychology, Cognitive Science, Computer Science or related fields.  Strong quantitative skills are a must.  Expertise in pragmatics and/or autism spectrum disorders is a big plus.  So is experience working with ASD populations.

SEND APPLICATIONS TO: Ev Fedorenko (evelina9@mit.edu).  Applications will be reviewed until the position is filled.