Predictability, complexity, and learning

Citation
W. Bialek et al., Predictability, complexity, and learning, NEURAL COMP, 13(11), 2001, pp. 2409-2463
Citation count
107
Subject Categories
Neurosciences & Behavior; AI, Robotics and Automatic Control
Journal title
NEURAL COMPUTATION
Journal ISSN
0899-7667
Volume
13
Issue
11
Year of publication
2001
Pages
2409 - 2463
Database
ISI
SICI code
0899-7667(200111)13:11<2409:PCAL>2.0.ZU;2-1
Abstract
We define predictive information I_pred(T) as the mutual information between the past and the future of a time series. Three qualitatively different behaviors are found in the limit of large observation times T: I_pred(T) can remain finite, grow logarithmically, or grow as a fractional power law. If the time series allows us to learn a model with a finite number of parameters, then I_pred(T) grows logarithmically with a coefficient that counts the dimensionality of the model space. In contrast, power-law growth is associated, for example, with the learning of infinite parameter (or nonparametric) models such as continuous functions with smoothness constraints. There are connections between the predictive information and measures of complexity that have been defined both in learning theory and the analysis of physical systems through statistical mechanics and dynamical systems theory. Furthermore, in the same way that entropy provides the unique measure of available information consistent with some simple and plausible conditions, we argue that the divergent part of I_pred(T) provides the unique measure for the complexity of dynamics underlying a time series. Finally, we discuss how these ideas may be useful in problems in physics, statistics, and biology.
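
For reference, a minimal sketch of the quantities the abstract names, assuming a stationary process and writing S(T) for the entropy of a window of duration T (this limit construction is an assumption consistent with the mutual-information definition, not quoted from the record):

    I(T; T') = S(T) + S(T') - S(T + T'), \qquad I_{\mathrm{pred}}(T) = \lim_{T' \to \infty} I(T; T')

and, in the logarithmic regime the abstract describes, a model with K parameters gives

    I_{\mathrm{pred}}(T) \simeq \frac{K}{2} \ln T.

A small numerical illustration of that regime, under assumptions not in the record: a binary process whose unknown bias is drawn from a uniform prior, so that exchangeability yields the block entropies in closed form; the function names are illustrative, not from the paper.

    import math

    def block_entropy(n):
        """Entropy (nats) of a length-n block of an exchangeable binary process
        whose bias theta is uniform on [0, 1]. Any particular sequence with k
        ones has marginal probability k!(n-k)!/(n+1)!, and each count k carries
        total weight 1/(n+1), so S(n) = -(1/(n+1)) * sum_k log p(k, n)."""
        total = 0.0
        for k in range(n + 1):
            logp = (math.lgamma(k + 1) + math.lgamma(n - k + 1)
                    - math.lgamma(n + 2))
            total += logp
        return -total / (n + 1)

    def predictive_information(n_past, n_future=4096):
        """I(past; future) = S(T) + S(T') - S(T + T'); a large finite future
        window stands in for the T' -> infinity limit."""
        return (block_entropy(n_past) + block_entropy(n_future)
                - block_entropy(n_past + n_future))

    for n in [1, 4, 16, 64, 256]:
        # One unknown parameter (K = 1), so growth should track (1/2) ln n.
        print(n, round(predictive_information(n), 3),
              round(0.5 * math.log(n), 3))

With K = 1 unknown parameter, each quadrupling of the past window should add roughly (1/2) ln 4 ≈ 0.69 nats of predictive information, which the printed column tracks up to small-n corrections.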