In this paper, we consider some problems in learning with respect to a fixed distribution. We introduce two new notions of learnability: probably uniformly approximately correct (PUAC) learnability, which is a stronger requirement than the widely studied PAC learnability, and minimal empirical risk (MER) learnability, which is a stronger requirement than the previously defined notions of "solid" or "potential" learnability.
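As a rough point of reference, the contrast between the two notions can be sketched as follows; the notation is introduced here only for illustration and is not taken from the paper ($\mathcal{C}$ is the concept class, $h_m$ the hypothesis produced from $m$ i.i.d. samples drawn according to the fixed distribution $P$ and labeled by the target $T$, and $d_P$ the induced pseudometric). PAC learnability asks that, for every $\varepsilon > 0$,
\[
\sup_{T \in \mathcal{C}} P^m \{\, d_P(T, h_m) > \varepsilon \,\} \to 0 \quad \text{as } m \to \infty ,
\]
whereas PUAC learnability strengthens this by requiring the approximation to be uniform over targets, i.e., the supremum is taken inside the probability:
\[
P^m \{\, \sup_{T \in \mathcal{C}} d_P(T, h_m) > \varepsilon \,\} \to 0 \quad \text{as } m \to \infty .
\]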
It is shown that, although the motivations for defining these two notions of learnability are entirely different, these two notions are in fact equivalent to each other and, in turn, equivalent to a property introduced here, referred to as the shrinking width property. It is further shown that if the function class to be learned has the property that empirical means converge uniformly to their true values, then all of these learnability properties hold.
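For completeness, the property of uniform convergence of empirical means invoked here is the standard one; in notation introduced only for illustration, a family $\mathcal{F}$ of measurable functions into $[0,1]$ has this property with respect to $P$ if
\[
\sup_{f \in \mathcal{F}} \left| \frac{1}{m} \sum_{i=1}^{m} f(x_i) - E_P[f] \right| \to 0 \quad \text{in probability as } m \to \infty ,
\]
where $x_1, \ldots, x_m$ are i.i.d. samples drawn according to $P$.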
In the course of proving conditions for these forms of learnability, we also obtain a new estimate for the VC-dimension of a collection of sets obtained by performing Boolean operations on a given collection; this result is of independent interest. We consider both the case in which there is an underlying target function and the case of "model-free" (or agnostic) learning. Finally, we consider the issue of representing a collection of sets by a subcollection consisting of representatives of its equivalence classes. It is shown by example that, by suitably choosing the representative of each equivalence class, it is possible to affect whether the property of uniform convergence of empirical probabilities holds.