Thursday, February 26, 2009

The Great Voodoo Hunt: A reminder that it's OK to plot your fMRI data

I can see the courtroom-style drama being played out in every colloquium, poster session, job interview, and platform presentation involving fMRI research (not to mention reviews!)...

Accuser: [Sternly] Have you ever run a statistical test on data from an ROI that was defined by the same data?
fMRI Researcher: [Eyes twitch nervously] No sir.
Accuser: Have you ever run a correlation between an ROI and behavioral data using the same data that defined the ROI?
fMRI Researcher: [Wipes sweat from brow, cowers] No sir. Never.
Accuser: [In a rising crescendo] HAVE YOU EVER PLOTTED YOUR DATA FROM AN ROI?
fMRI Researcher: [Breaks down in tears] YES! YES I HAVE BUT I SWEAR I DIDN'T MEAN TO HURT ANYBODY!
Audience: [snickers, jeers, and dismisses researcher's entire body of work as -- cue dramatic music -- VOODOO!]


No doubt you are aware of Ed Vul's voodoo correlations paper by now. It is a useful paper in that it reminds us to keep our analyses independent. Nothing new statistically, of course, but it never hurts to be reminded. I decided I really didn't want to spend much time on the topic here on Talking Brains because it is getting plenty of attention without our help. But then someone pointed this out to me from Ed's chapter with Nancy Kanwisher:

The most common, most simple, and most innocuous instance of non-independence occurs when researchers simply plot (rather than test) the signal change in a set of voxels that were selected based on that same signal change. ... In general, plotting non-independent data is misleading, because the selection criteria conflate any effects that may be present in the data from those effects that could be produced by selecting noise with particular characteristics.


One might take from this that we're not supposed to (allowed to) plot data from an ROI, else the VOODOO police will hunt us down. But it would be a huge mistake not to look at, and not to publish, these graphs. By looking at the graphs you can distinguish between differences that result from different levels of positive activation in the various conditions, different degrees of negative activation (signal decrease relative to baseline), or one condition going positive and another going negative. These different patterns may suggest different theoretical interpretations. The graphs can also reveal other differences, such as the latency of the response, that would go undetected in activation maps alone. If all we looked at were contrast maps, without examining amplitude or timecourse plots, we could miss out on important information. Just be aware that when you look at the magnitude of the amplitude differences, the estimates could be slightly biased.
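To get a feel for where that bias comes from, here is a minimal sketch (my own toy simulation, not anything from Vul's paper): simulate voxels that are pure noise, define a "ROI" from the voxels with the largest mean signal in that same data, and compare the ROI's mean against the same ROI applied to an independent data set.

```python
import numpy as np

rng = np.random.default_rng(0)

n_voxels, n_subjects = 10_000, 16
# Pure noise: no voxel has any true signal change.
data = rng.normal(0.0, 1.0, size=(n_voxels, n_subjects))

# "Circular" ROI: pick the top 1% of voxels by mean signal change,
# computed from the very data we are about to summarize.
voxel_means = data.mean(axis=1)
roi = np.argsort(voxel_means)[-100:]

circular_estimate = data[roi].mean()

# Independent check: apply the same ROI to a fresh noise data set
# (standing in for a held-out session or split-half).
fresh = rng.normal(0.0, 1.0, size=(n_voxels, n_subjects))
independent_estimate = fresh[roi].mean()

print(f"circular ROI mean:    {circular_estimate:.2f}")
print(f"independent ROI mean: {independent_estimate:.2f}")
```

The circular estimate comes out well above zero even though every voxel is noise, while the independent estimate hovers near zero. That gap is the selection bias; as the post notes, the shape of a timecourse or the sign of a condition's response can still be informative even when the amplitude carries this inflation.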

To be clear, Vul and Kanwisher will not prick their voodoo dolls for any researcher who wants to publish graphs from an ROI. In fact, they are quick to note the value of examining and publishing such graphs, as long as they are used to explore activation patterns that are orthogonal to the selection criteria:

On the other hand, plots of non-independent data sometimes contain useful information orthogonal to the selection criteria. When data are selected for an interaction, non-independent plots of the data reveal which of many possible forms the interaction takes. In the case of selected main effects, readers may be able to compare the activations to baseline and assess selectivity. In either case, there may be valuable, independent and orthogonal information that could be gleaned from the time-courses. In short, there is often information lurking in graphs of non-independent data; however, it is usually not the information that the authors of such graphs draw readers’ attention to. Thus, we are not arguing against displaying graphs that contain redundant (and perhaps biased) information, we are arguing against the implicit use of these graphs to convince readers by use of non-independent data.


Given the extensive buzz about all this voodoo, I sincerely hope that things don't degenerate into a witch hunt. Ed and colleagues have rightfully reminded us to be careful in our statistical treatment of fMRI data. But let's be equally careful not to turn into mindless persecutors.

P.S., in case your appetite for voodoo commentary is yet to be satiated, check out Brad Buchsbaum's recent critiques.

7 comments:

Jonas said...

(Greg, this Brad Buchsbaum link in the end seems broken. Best, J.)

Greg Hickok said...

Thanks. Fixed it.

yisroel said...

Someone out there want to rewrite the lyrics to Jimi Hendrix's 'Voodoo Chile'?

Niko Kriegeskorte said...

i agree that we should keep cool as we assess what results hold up.

but "witch hunt" would suggest that the problem is in the persecutor's imagination.

ed et al. may have overstated the size of the biases for the particular studies they addressed. but biases do arise from nonindependent selective analysis. and distorted analyses are frequently interpreted as evidence. moreover, multiple testing correction in nonselective mapping is frequently insufficient.

the problem is also not restricted to social neuroscience or brain imaging. circularity occurs in electrophysiological and behavioral studies as well.

we do need to be concerned whether a given analysis is self-fulfilling or not -- especially because results of circular analyses will look stronger.

avoiding circularity is a continual challenge to our field. discussing it can only help.

our simulations and tests suggest:
(a) that the biases will be moderate in many cases. (however, of course, they can push a test over the threshold of significance. so nonindependent ROI analysis should never serve as evidence for the effect selected for.)
(b) that the biases can be very large under certain circumstances (particularly for discontiguous selection or sorting of channels).

--niko

Greg Hickok said...

Thanks for your comment, Niko. I agree 100%. I'm not at all concerned that the "accusers" will lead the witch hunt; people like Ed and yourself are the folks who have spent a fair amount of time thinking carefully about these issues and as a result have a very rational approach to the problem. I'm more worried about the casual consumer of the voodoo cautions. These are the folks who may not think as carefully about these issues and may over-apply their criticisms in grant and paper reviews.

Can't wait to read more about your simulations. Can we expect a public report soon?

Matthew Lieberman said...

For anyone interested, there was a public debate on Voodoo Correlations last fall at the Society of Experimental Social Psychologists between Piotr Winkielman (one of the authors on the Voodoo paper) and myself (Matt Lieberman). The debate has been posted online.

http://www.scn.ucla.edu/Voodoo&TypeII.html
