The big (brain) data cometh: Low-dimensional models for understanding neural systems

The recent investment in neurotechnology development has spurred tremendous excitement about the potential to uncover the operating principles of biological neural circuits. However, a storm is brewing. If the neuroengineering community achieves its goal of developing technologies that increase the number of interfaced neurons by orders of magnitude, what comes next? How do we acquire, transmit, and store these data in an extremely constrained hardware environment? What theoretical models of neural coding should be tested and refined? What experimental paradigms are most valuable for increasing our understanding of neural circuits?

Modern data science has shown that low-dimensional models (e.g., sparsity, manifolds, attractors) are a powerful way to approximately capture the information in high-dimensional data. Given the power of these approaches, it is likely that they can contribute both to the design of efficient engineering tools for the electrophysiology data pipeline and to the modeling of sensory neural systems that process information about environmental stimuli. In this talk I will discuss our recent progress on these problems, including new algorithms and analysis for dimensionality reduction and inference in sparsity, manifold, and dynamical system models. We will show that these results can provide 1) powerful algorithms to aid large-scale electrophysiology data acquisition, 2) models of neural coding and perception in the visual pathway, and 3) novel experimental paradigms that leverage new neurotechnologies to uncover the fundamental operating principles of neural systems.
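To make the low-dimensional modeling idea concrete, here is a minimal sketch (not the speaker's actual algorithms) of linear dimensionality reduction via PCA on simulated neural activity: population recordings driven by a few latent factors can be summarized by a handful of principal components. All names and parameters below are illustrative assumptions.

```python
# Illustrative sketch only: PCA on simulated neural data, not the
# methods described in the talk.
import numpy as np

rng = np.random.default_rng(0)

# Simulate 100 neurons whose activity is driven by a 3-dimensional
# latent signal (500 time bins), plus a small amount of noise.
latents = rng.standard_normal((500, 3))      # latent trajectory
loading = rng.standard_normal((3, 100))      # mixing into neurons
rates = latents @ loading + 0.1 * rng.standard_normal((500, 100))

# PCA via SVD of the mean-centered data matrix.
centered = rates - rates.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
var_explained = (s ** 2) / (s ** 2).sum()

# The top 3 components capture nearly all the variance, recovering
# the true latent dimensionality.
print(round(var_explained[:3].sum(), 3))
```

Even this toy example shows why low-dimensional structure matters for the data pipeline: the 100-neuron recording is well summarized by 3 components, a substantial compression.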

November 5 @ 12:30 pm (1h)

Discovery Building, Orchard View Room

Christopher Rozell