Last week the Neurostats 2014 workshop took place at the University of Warwick (co-organised by Adam Johansen, Nicolas Chopin, and myself). The goal was to put some neuroscientists and statisticians together to talk about neural data and what to do with it. General impressions:

- The type of Bayesian hierarchical modelling that Andrew Gelman has been advocating for years is starting to see some use in neuroimaging. On the one hand this makes plenty of sense, since the data at the level of individual subjects can be cr*p, so one could really use a bit of clever pooling. On the other hand, imaging data is very high-dimensional, running a Gibbs sampler can take days, and it's not easy to make the data comparable across subjects.
- You have to know your signals. Neural data can be unbelievably complicated and details matter a lot, as Jonathan Victor showed in his talk. A consequence is that if you as a neuroscientist have a data analysis problem, it's not enough to go see a statistician and ask for advice. If you have EEG data you need to find someone who knows *specifically* about all the traps and pitfalls of EEG, or else someone who's willing to learn about these things. Another consequence is that we should think about training neurostatisticians, the way we already have biostatisticians, econometricians and psychometricians.

There were plenty of interesting talks, but below are some of my personal highlights.