Towards a unified theory of efficient, predictive, and sparse coding
A central goal in theoretical neuroscience is to predict the response properties of sensory neurons from first principles. To this end, "efficient coding" posits that sensory neurons encode maximal information about their inputs, subject to internal constraints. There exist, however, many variants of efficient coding (e.g., redundancy reduction, different formulations of predictive coding, robust coding, sparse coding), which differ in their regimes of applicability, in which features of the signal are deemed relevant, and in the choice of constraints. Here we present a unified framework based on the "information bottleneck" that encompasses previously proposed efficient coding models and extends to new regimes. We focus specifically on codes that efficiently represent the information in the input signal that is useful for predicting that signal's future. Using naturalistic movies as an example, we demonstrate that neural codes optimized for future prediction are qualitatively different from previously studied codes optimized for efficiently representing the stimulus past. Beyond neuroscience, our approach yields tractable solutions for optimal prediction in the temporal domain under various encoding/decoding constraints.
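As a minimal illustration of the kind of objective described above, the following sketch applies the Gaussian information bottleneck to a toy temporal-prediction problem: compress a short window of a signal's past while retaining information about its future. The AR(1) stimulus, the variable names, and the two-sample past window are illustrative assumptions, not the model presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "stimulus": an AR(1) process, x_t = a * x_{t-1} + noise.
a, T = 0.9, 200_000
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.standard_normal()

# Past window X = (x_t, x_{t-1}); future target Y = x_{t+1}.
X = np.stack([x[1:-1], x[:-2]], axis=1)
Y = x[2:][:, None]

# For jointly Gaussian (X, Y), the information-bottleneck-optimal
# projections of the past are eigenvectors of Sigma_{x|y} Sigma_x^{-1};
# each eigenvalue equals 1 - rho_i^2, where rho_i is a canonical
# correlation between past and future.
Sxx = X.T @ X / len(X)
Sxy = X.T @ Y / len(X)
Syy = Y.T @ Y / len(Y)
Sx_given_y = Sxx - Sxy @ np.linalg.solve(Syy, Sxy.T)
evals = np.linalg.eigvals(np.linalg.solve(Sxx, Sx_given_y)).real

# For an AR(1) process the future depends on the past only through x_t,
# so exactly one eigenvalue is predictive (approximately 1 - a^2 = 0.19)
# while the other is approximately 1 (no extra predictive information).
print(np.sort(evals))
```

The eigenvalue spectrum directly exposes the past-future trade-off: directions with eigenvalues near 1 carry no predictive information and are discarded first as the bottleneck tightens.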
A pizza lunch will be served.