In a variety of PAC learning models, a trade-off between time and information seems to exist: with unlimited time, a small amount of information suffices, but with time restrictions, more information sometimes seems to be required. In addition, it has long been known that there are concept classes that can be learned in the absence of computational restrictions, but (under standard cryptographic assumptions) cannot be learned in polynomial time (regardless of sample size). Yet, these results do not answer the question of whether there are classes for which learning from a small set of examples is computationally infeasible, but becomes feasible when the learner has access to (polynomially) more examples.
To address this question, we introduce a new measure of learning complexity called computational sample complexity, which represents the number of examples sufficient for polynomial-time learning with respect to a fixed distribution. We then show concept classes that (under similar cryptographic assumptions) possess arbitrarily sized gaps between their standard (information-theoretic) sample complexity and their computational sample complexity. We also demonstrate such gaps for learning from membership queries and learning from noisy examples.
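
For concreteness, one way the new measure could be formalized, following the description above (the notation $\mathrm{CSC}$, $\varepsilon$, and $\delta$ below is illustrative and not taken from the abstract), is as the smallest sample size at which some polynomial-time learner succeeds for every target concept under the fixed distribution $D$:
\[
\mathrm{CSC}_{\mathcal{C},D}(\varepsilon,\delta) \;=\; \min\Bigl\{\, m \;:\; \exists\ \text{poly-time } A\ \ \forall c \in \mathcal{C}:\ \Pr_{x_1,\dots,x_m \sim D}\!\Bigl[\operatorname{err}_D\bigl(A\bigl((x_i, c(x_i))_{i=1}^{m}\bigr),\, c\bigr) \le \varepsilon\Bigr] \ge 1-\delta \,\Bigr\},
\]
where $\operatorname{err}_D(h,c) = \Pr_{x \sim D}[h(x) \ne c(x)]$. The standard (information-theoretic) sample complexity is the analogous minimum taken over all learners, efficient or not; the results described above exhibit classes for which the two quantities differ by arbitrarily sized gaps.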