The aim of relational learning is to develop methods for the induction of hypotheses in representation formalisms that are more expressive than the attribute-value representation. Most work on relational learning has focused on induction in subsets of first-order logic, such as Horn clauses. In this paper we introduce a representation formalism based on feature terms, together with the corresponding notions of subsumption and anti-unification. We then present INDIE, a heuristic bottom-up learning method that induces class hypotheses, in the form of feature terms, from positive and negative examples. The biases INDIE uses while searching the hypothesis space are explained alongside its algorithms. INDIE's representational bias can be summarised as follows: it makes intensive use of sorts and the sort hierarchy, and it does not use negation but instead focuses on detecting path equalities. We show the results of INDIE on several classical relational datasets, showing that it is able to find hypotheses at a level comparable to the original ones. The differences between INDIE's hypotheses and those of other systems are explained by the bias in searching the hypothesis space and by the representational bias of the hypothesis language of each system.