Abstract:
|
Neuroscience is entering an exciting new age. Modern recording technologies enable simultaneous measurements of thousands of neurons in organisms performing complex behaviors. Such recordings offer an unprecedented opportunity to glean insight into the mechanistic underpinnings of intelligence, but they also present an extraordinary statistical and computational challenge: how do we make sense of these large-scale recordings? We adopt a Bayesian approach, translating hypotheses about patterns of network connectivity and low-dimensional dynamics into prior distributions in probabilistic models of neural data. We develop a corresponding set of Bayesian inference algorithms that leverage model structure to efficiently compute the posterior distribution of model parameters and latent variables. In a variety of real-world examples, we show how these inferences have helped advance our understanding of neural computation.
|