A central problem in inductive logic programming is theory evaluation.
Without some sort of preference criterion, any two theories that explain a set of examples are equally acceptable. This paper presents a scheme for evaluating alternative inductive theories based on an objective preference criterion. It strives to extract maximal redundancy from examples, transforming structure into randomness. A major strength of the method is its application to learning problems where negative examples of concepts are scarce or unavailable. A new measure called model complexity is introduced, and its use is illustrated and compared with a proof complexity measure on relational learning tasks. The complementarity of model and proof complexity parallels that of model-theoretic and proof-theoretic semantics. Model complexity, where applicable, seems to be an appropriate measure for evaluating inductive logic theories.