This article presents an ILP system, called ILP-R, whose properties address the demands of knowledge discovery in databases (KDD) quite well. The system uses Relief for literal quality estimation, which can be as efficient as information gain yet more effective at detecting dependencies between literals. We introduce a weak language bias and exploit its properties to store partial proofs in a mesh-like structure; we show that this encoding scheme has space bounds linear in the clause length. Finally, we present a first-order Bayesian classification framework, which can sometimes yield significantly better classification accuracy and noise resistance. It is also flexible enough to serve as an experimentation tool for revealing underlying properties of the domain. We empirically tested our system on a set of artificial domains and one real-world domain, both propositional and relational. We discuss the advantages and deficiencies of our approach.
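To illustrate why a Relief-style measure can detect dependencies that information gain misses, the following is a minimal sketch of propositional Relief (in the style of Kira and Rendell), not the exact literal-quality estimator used by ILP-R. In an XOR domain, each relevant attribute alone has zero information gain, yet Relief scores the interacting attributes above the irrelevant one, because its nearest-hit/nearest-miss differences are sensitive to attribute interactions.

```python
import numpy as np

def relief_weights(X, y):
    """Basic propositional Relief (exhaustive pass over all instances).

    For each instance, every attribute's weight is increased by its
    difference to the nearest instance of another class (the "miss")
    and decreased by its difference to the nearest instance of the
    same class (the "hit").
    """
    n, d = X.shape
    w = np.zeros(d)
    for i in range(n):
        dist = np.abs(X - X[i]).sum(axis=1).astype(float)
        dist[i] = np.inf                    # exclude the instance itself
        same = (y == y[i]) & (dist < np.inf)
        hit = int(np.argmin(np.where(same, dist, np.inf)))
        miss = int(np.argmin(np.where(~same, dist, np.inf)))
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / n
    return w

# XOR domain: the class depends on the *interaction* of the first two
# attributes; the third attribute is irrelevant.
X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)])
y = X[:, 0] ^ X[:, 1]
print(relief_weights(X, y))  # interacting attributes positive, irrelevant one negative
```

The same intuition carries over to the relational setting: a Relief-style score evaluates a candidate literal in the context of its nearest hits and misses, rather than in isolation as a myopic measure such as information gain does.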