Last week the Neurostats 2014 workshop took place at the University of Warwick (co-organised by Adam Johansen, Nicolas Chopin, and myself). The goal was to put some neuroscientists and statisticians together to talk about neural data and what to do with it. General impressions:

  • The type of Bayesian hierarchical modelling that Andrew Gelman has been advocating for years is starting to see some use in neuroimaging. On the one hand it makes plenty of sense, since the data at the level of individual subjects can be cr*p, so one could really use a bit of clever pooling (a toy sketch of the idea follows this list). On the other hand, imaging data is very high-dimensional, running a Gibbs sampler can take days, and it’s not easy to make the data comparable across subjects.
  • You have to know your signals. Neural data can be unbelievably complicated, and the details matter a lot, as Jonathan Victor showed in his talk. One consequence is that if you as a neuroscientist have a data analysis problem, it’s not enough to go see a statistician and ask for advice. If you have EEG data you need to find someone who knows *specifically* about all the traps and pitfalls of EEG, or else someone who’s willing to learn about these things. Another consequence is that we should think about training neurostatisticians, the way we already have biostatisticians, econometricians and psychometricians.
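
To make the pooling idea concrete, here is a minimal sketch of my own (nothing presented at the workshop): a two-step Gibbs sampler for a normal hierarchical model in which each subject's noisy effect estimate is shrunk towards a group mean. The variances are treated as known to keep things short; a real analysis would put priors on them and sample them too.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one noisy effect estimate per subject.
y = np.array([1.2, 0.3, 2.1, -0.4, 1.5, 0.9])  # subject-level estimates
sigma = 0.8   # within-subject noise sd (assumed known)
tau = 1.0     # between-subject sd (assumed known)
n = len(y)

n_iter = 5000
mu = y.mean()                        # group mean, initialised at the sample mean
theta_draws = np.empty((n_iter, n))

for it in range(n_iter):
    # 1. Sample each subject effect given the group mean:
    #    posterior precision = data precision + prior precision.
    prec = 1 / sigma**2 + 1 / tau**2
    mean = (y / sigma**2 + mu / tau**2) / prec
    theta = rng.normal(mean, np.sqrt(1 / prec))
    # 2. Sample the group mean given the subject effects (flat prior on mu).
    mu = rng.normal(theta.mean(), tau / np.sqrt(n))
    theta_draws[it] = theta

# Partial pooling: the posterior means sit between the raw
# per-subject estimates and the overall group mean.
print("raw estimates:   ", np.round(y, 2))
print("pooled estimates:", np.round(theta_draws[1000:].mean(axis=0), 2))
```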

There were plenty of interesting talks, but below are some of my personal highlights.

The imager’s fallacy

We had a great talk by EJ Wagenmakers on the perennial topic of the evilness of p-values. Turns out there are still some new things to be said on the topic, especially as it applies to neuroimaging. In fMRI people threshold maps of p-values to highlight significant blobs, as in:

Blobs!!! (from Wikipedia)

The usual interpretation is that activity in the blobs is significantly affected by some experimental manipulation, and therefore these brain areas have something to do with the manipulation (for example, they respond to sounds).

The imager’s fallacy is to look at such an image and think that because area A is significant and area B isn’t, areas A and B must be different. That’s an error, and an instance of thinking that “the difference between significant and insignificant is significant”. In fact there are probably a whole lot of brain areas for which the fMRI data simply aren’t good enough to say anything reliable, areas that Wagenmakers dubbed “in limbo”.

There’s a simple fix for the imager’s fallacy (submitted manuscript by de Hollander, Wagenmakers, Waldorp & Forstmann). You find all areas “in limbo”, and highlight them in an ugly colour of your choice.

limbo

Areas in yellow/red are significantly modulated, areas in gray aren’t, and green means you can’t tell either way from the data. The technique is explained in an HBM poster by de Hollander, which is where I found the image.
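
For intuition, here is a toy three-way classification of voxels, my own sketch rather than the authors’ actual method: compute a t-statistic per voxel, approximate the Bayes factor with the BIC approximation from Wagenmakers (2007), and label a voxel active when the evidence favours an effect, null when it favours no effect, and “in limbo” otherwise.

```python
import numpy as np

def bf01_bic(t, n):
    """BIC approximation to the Bayes factor in favour of the null
    hypothesis for a one-sample t-test (Wagenmakers, 2007)."""
    return np.sqrt(n) * (1 + t**2 / (n - 1)) ** (-n / 2)

def classify_voxels(data, threshold=3.0):
    """data: (n_subjects, n_voxels) array of per-subject effect estimates.
    Returns one label per voxel: 'active', 'null' or 'in limbo'."""
    n = data.shape[0]
    t = data.mean(axis=0) / (data.std(axis=0, ddof=1) / np.sqrt(n))
    bf01 = bf01_bic(t, n)
    labels = np.full(t.shape, "in limbo", dtype=object)
    labels[1 / bf01 > threshold] = "active"  # evidence for an effect
    labels[bf01 > threshold] = "null"        # evidence for no effect
    return labels

# Toy example: 20 subjects, 3 voxels (clear effect, no effect, too noisy).
rng = np.random.default_rng(1)
data = np.column_stack([
    rng.normal(1.0, 1.0, 20),  # real effect
    rng.normal(0.0, 1.0, 20),  # no effect
    rng.normal(0.5, 2.0, 20),  # ambiguous, likely "in limbo"
])
print(classify_voxels(data))
```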

Human brain warping

Everybody’s brain is shaped differently, sometimes in fairly extreme ways. Unfortunately, this means that if one wants to compare or average brain images across subjects, they first have to be mapped onto one another, or onto a common standard. Stanley Durrleman described a framework for the problem based on representing shapes as currents (average flows across the shape; a bit more detail below). The Deformetrica software implements these ideas and looks really promising.
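
For the mathematically curious, here is my own gloss on currents, following the computational-anatomy literature (Vaillant & Glaunès) rather than Durrleman’s slides. A surface $S$ with unit normal $n$ is identified with the flux functional it induces on smooth test vector fields $w$:

$$C_S(w) = \int_S \langle w(x), n(x) \rangle \, d\sigma(x).$$

Two shapes can then be compared through a norm on these functionals, $\|C_S - C_{S'}\|$, typically taken in the dual of a reproducing-kernel Hilbert space of vector fields, which sidesteps the need for point-to-point correspondences between the surfaces.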

Michael Hanke gave a great talk on his group’s Open Data experiment (website at studyforrest.org). They have released a very large dataset of subjects lying in a scanner listening to a Forrest Gump audiobook. Their approach to variability across subjects is to map voxels based on functional correspondence rather than purely anatomical considerations, an idea that seems to work very well (a toy illustration follows).
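
To illustrate what functional alignment can look like in its simplest form, here is a sketch of the generic orthogonal-Procrustes idea (in the spirit of hyperalignment; I’m not claiming this is the studyforrest pipeline): find the rotation that best maps one subject’s voxel time series onto another’s while both experience the same stimulus.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(2)

# Toy data: two subjects' responses to the same stimulus,
# shape (n_timepoints, n_voxels). Subject B is a rotated, noisy
# version of subject A, standing in for functional mismatch.
n_time, n_vox = 200, 10
A = rng.normal(size=(n_time, n_vox))
true_R = np.linalg.qr(rng.normal(size=(n_vox, n_vox)))[0]  # random rotation
B = A @ true_R + 0.1 * rng.normal(size=(n_time, n_vox))

# Orthogonal Procrustes: find the orthogonal R minimising ||A @ R - B||.
R, _ = orthogonal_procrustes(A, B)

print("misalignment before:", round(np.linalg.norm(A - B), 1))
print("misalignment after: ", round(np.linalg.norm(A @ R - B), 1))
```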

Given a chance, EEG data will bite you in the bum 

Jonathan Victor gave another very interesting talk, on EEG data in coma/vegetative patients. The upshot is that EEG data contains the kinds of artifacts and dependencies that can seriously invalidate an analysis, as shown in his group’s re-analysis of the much-hyped Lancet article “Bedside detection of awareness in the vegetative state”. The technique may well be useful, but some caution is required.

Overall plenty of food for thought. We will post slides on the workshop’s webpage for those interested. Thanks to CRISM and the University of Warwick for their support!