Methods for Neural Data Analysis

A major theme in my research is neural data analysis. How can we extract hidden structure from electrophysiological recordings of a population of neurons? How can we interpret this structure? I’m developing probabilistic unsupervised machine learning approaches to address such questions.

Nonlinear dynamical systems in continuous time (GP-SDE)

We developed a nonparametric approach to modelling a latent nonlinear stochastic differential equation in an interpretable way. The key idea is to include interpretable features of a dynamical system, like fixed points and the local linearised dynamics around them, as model parameters that can be estimated directly as part of model fitting. This avoids having to take a two-step approach and therefore allows us to propagate uncertainty about the dynamics themselves to estimates of the features we care about. The method works in continuous time, which makes it easy to apply to spike-time data. Applied to neural data, this method offers a way to describe high-dimensional population data in terms of a low-dimensional nonlinear stochastic process, whose dynamics describe the temporal evolution of latent computational variables that are distributed across the neural population. We have a paper on the method here and also Python code available here.
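To make the idea concrete, here is a minimal sketch (not the linked code, and with made-up parameter values) of simulating a latent SDE whose drift is linearised around a fixed point x*; in the GP-SDE model, the fixed point and the local Jacobian A are exactly the kind of interpretable quantities treated as parameters to be inferred:

```python
import numpy as np

# Minimal sketch: simulate a 2D latent SDE
#   dx = f(x) dt + sigma dW
# with the drift linearised around a hypothetical fixed point x_star,
# i.e. f(x) ~= A (x - x_star), where A is the local Jacobian.

rng = np.random.default_rng(0)

x_star = np.array([0.5, -0.5])            # hypothetical fixed point
A = np.array([[-1.0, -2.0],
              [ 2.0, -1.0]])              # local linearised dynamics (a stable spiral)
sigma = 0.1                               # noise scale
dt, n_steps = 1e-3, 5000

x = np.zeros((n_steps, 2))
x[0] = np.array([1.5, 1.0])               # initial condition
for t in range(n_steps - 1):
    drift = A @ (x[t] - x_star)
    x[t + 1] = x[t] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)

# The trajectory spirals in towards x_star, as dictated by the
# eigenvalues of A (-1 +/- 2i here).
print(x[-1])  # close to x_star
```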

Sparse variational Gaussian Process Factor Analysis (svGPFA)

We developed a sparse variational extension of GPFA using inducing points. Our method not only circumvents the cubic complexity of standard GPFA but also makes it easy to work with non-conjugate likelihoods, such as a point process for spike-time data. Our point-process extension of GPFA operates directly on spike times and in continuous time, removing the need to bin spikes into counts within time bins of a fixed width. This method allows us to perform dimensionality reduction on the high-dimensional spike trains of the population and extract low-dimensional neural trajectories. Some details of this method are covered in this paper.
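As a rough illustration (hypothetical values, not the actual svGPFA implementation), the sketch below shows the two ingredients involved: a sparse GP latent evaluated at arbitrary continuous times via inducing points, and a log-link mapping that latent to a point-process intensity that can be evaluated directly at observed spike times:

```python
import numpy as np

# Minimal sketch of the sparse-GP + point-process idea.

def rbf(t1, t2, lengthscale=0.2):
    """Squared-exponential kernel between two sets of time points."""
    return np.exp(-0.5 * (t1[:, None] - t2[None, :]) ** 2 / lengthscale ** 2)

z = np.linspace(0, 1, 10)                 # inducing point locations
m_u = np.sin(2 * np.pi * z)               # assumed variational mean of inducing values
Kzz = rbf(z, z) + 1e-6 * np.eye(len(z))   # jitter for numerical stability

def latent_mean(t):
    """Sparse posterior mean of the latent at arbitrary times: K_tz Kzz^{-1} m_u."""
    return rbf(t, z) @ np.linalg.solve(Kzz, m_u)

def intensity(t, c=1.5, d=0.5):
    """Point-process intensity lambda(t) = exp(c x(t) + d) for one neuron
    (c, d are illustrative loading and offset values)."""
    return np.exp(c * latent_mean(t) + d)

# Everything is defined in continuous time, so the likelihood can be
# evaluated at the observed spike times themselves -- no binning needed.
spike_times = np.array([0.12, 0.47, 0.81])
print(intensity(spike_times))
```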

Scalable inference for STRFs with low-rank structure

I’ve also worked on developing a scalable method for Bayesian inference of spatio-temporal receptive fields (STRFs). Our method improves on the cubic scaling of standard maximum a posteriori inference and allows for improved receptive field estimation in settings with very large, correlated, or naturalistic stimuli and with a limited number of samples.
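The sketch below (illustrative only, with invented dimensions) shows what low-rank structure means here: writing the STRF as a sum of outer products of temporal and spatial filters, which shrinks the parameter count from D_t × D_s to R × (D_t + D_s) and is what makes inference at this scale feasible:

```python
import numpy as np

# Minimal sketch: a rank-R spatio-temporal receptive field
#   K = sum_r u_r v_r^T
# built from temporal components U and spatial components V.

rng = np.random.default_rng(1)
D_t, D_s, R = 25, 100, 2                  # time lags, spatial pixels, rank

U = rng.standard_normal((D_t, R))         # temporal filters
V = rng.standard_normal((D_s, R))         # spatial filters
K = U @ V.T                               # full (D_t, D_s) receptive field

# Linear response to a stimulus clip: at each time step, correlate the
# most recent D_t stimulus frames with the STRF.
stimulus = rng.standard_normal((1000, D_s))
T = stimulus.shape[0]
response = np.array([
    np.sum(K * stimulus[t - D_t:t]) for t in range(D_t, T)
])
print(response.shape)  # (975,)
```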

Analysing Neural Population Data

I’m also working on using data-driven approaches to analyse population recordings with a specific question in mind. This might also involve developing a method tailored to a particular question, but the focus here is much more on interpreting the structure we extract or estimate from data than on the method used to do so.

Interpreting optogenetic perturbations

Optogenetic perturbations are a key approach to probing neural circuits in order to gain insight into causal relationships between neural activity and behaviour. I am collaborating with the Shenoy Laboratory at Stanford to interpret neural population responses during optogenetic stimulation of motor cortex. We study perturbations within the framework of dynamical systems, which allows us to analyse how particular perturbation patterns interact with the local circuit dynamics.
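The dynamical-systems framing can be illustrated with a toy model (my own simplification, not the lab's analysis code): treat the local circuit as a linear dynamical system and compare how different stimulation directions interact with its dynamics:

```python
import numpy as np

# Minimal sketch: model the circuit as dx/dt = A x + b u(t) and compare
# responses to a brief optogenetic-like pulse along two directions b.

A = np.array([[-0.1, 1.0],
              [-1.0, -0.1]])              # weakly damped rotational dynamics
dt, n_steps = 1e-2, 500

def simulate(b, pulse_steps=100):
    """Euler-integrate the response to a pulse of input along direction b."""
    x = np.zeros((n_steps, 2))
    for t in range(n_steps - 1):
        u = 1.0 if t < pulse_steps else 0.0  # stimulation on, then off
        x[t + 1] = x[t] + (A @ x[t] + b * u) * dt
    return x

# Two hypothetical perturbation patterns pushing along different axes.
resp_1 = simulate(np.array([1.0, 0.0]))
resp_2 = simulate(np.array([0.0, 1.0]))

# The post-stimulation transient is shaped by A: both responses relax
# along the same rotational modes, but from different starting points.
print(np.linalg.norm(resp_1[-1]), np.linalg.norm(resp_2[-1]))
```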

Decomposition of inter-trial variability

Using methods for single-trial analysis, we can start asking how variations in the way population activity unfolds over time relate to differences in the observed behaviour. Inter-trial variability can be separated into variability in timing and variability in the underlying computation itself. I’ve worked on developing an extension of the svGPFA model to decompose single-trial activity into these different components. Doing so allows us to ask how much of the total variability is of a purely temporal nature, which subspaces of the state space this variability lives in, and so on.
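A toy decomposition along these lines (purely illustrative, not the actual model) treats each trial as a shared latent trajectory traversed at a trial-specific speed, plus a residual capturing non-temporal variability:

```python
import numpy as np

# Minimal sketch: single-trial trajectories = shared path under a
# trial-specific time warp (timing variability) + residual
# (variability in the computation itself).

def shared_path(tau):
    """Reference latent trajectory evaluated at warped time tau."""
    return np.sin(2 * np.pi * tau)

t = np.linspace(0, 1, 200)

# Trial-specific affine time warps tau_k(t) = speed * t + offset capture
# purely temporal variability: same trajectory, different pacing.
warps = [(1.0, 0.00), (1.2, -0.05), (0.8, 0.05)]   # (speed, offset) per trial

rng = np.random.default_rng(2)
trials = []
for speed, offset in warps:
    tau = np.clip(speed * t + offset, 0, 1)
    structural = 0.1 * rng.standard_normal(t.shape)  # non-temporal component
    trials.append(shared_path(tau) + structural)

# Subtracting the warped shared path from each trial isolates the
# residual; its size relative to the total inter-trial spread indicates
# how much of the variability is purely temporal.
aligned = np.stack([shared_path(np.clip(s * t + o, 0, 1)) for s, o in warps])
residual = np.stack(trials) - aligned
print(residual.std(), np.stack(trials).std(axis=0).mean())
```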