We consider the problem of learning DNF formulae in the mistake-bound
and the PAC models. We develop a new approach, called polynomial
explainability, that is shown to be useful for learning some new
subclasses of DNF (and CNF) formulae that were not known to be
learnable before. Unlike previous learnability results for DNF (and
CNF) formulae, these subclasses are not limited in the number of terms
or in the number of variables per term; yet, they contain the
subclasses of k-DNF and k-term-DNF (and the corresponding classes of
CNF) as special cases. We apply our DNF results to the problem of
learning visual concepts and obtain learning algorithms for several
natural subclasses of visual concepts that appear to have no natural
boolean counterpart. On the other hand, we show that learning some
other natural subclasses of visual concepts is as hard as learning the
class of all DNF formulae. We also consider the robustness of these
results under various types of noise.