Although a landmark work, version spaces have proven fundamentally limited by being constrained to consider only candidate classifiers that are strictly consistent with the data. This work generalizes version spaces to partially overcome this limitation. The main insight underlying this work is to base learning on version-space intersection rather than the traditional candidate-elimination algorithm. The resulting learning algorithm, incremental version-space merging (IVSM), allows version spaces to contain arbitrary sets of classifiers, however generated, as long as they can be represented by boundary sets. This extends version spaces by increasing the range of information that can be used in learning; in particular, this paper describes how three very different types of information (ambiguous data, inconsistent data, and background domain theories as traditionally used by explanation-based learning) can each be used by the new version-space approach.
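The core idea of learning by version-space intersection can be sketched concretely. The toy fragment below is illustrative only: it enumerates a tiny hypothesis space explicitly rather than using the boundary-set representation the paper relies on, and the attribute names and values are invented for the example. Each piece of information induces a version space (the set of classifiers consistent with it), and IVSM-style learning intersects these version spaces incrementally.

```python
from itertools import product

# Hypothesis language (illustrative, not from the paper): conjunctions
# over two attributes, where each slot is a required value or the
# wildcard '?' meaning "don't care".
VALUES = [['red', 'blue', '?'], ['small', 'large', '?']]
HYPOTHESES = [tuple(h) for h in product(*VALUES)]

def matches(h, x):
    """A hypothesis matches an instance if every non-'?' slot agrees."""
    return all(s == '?' or s == v for s, v in zip(h, x))

def version_space(example, label):
    """Version space induced by one labeled example: all hypotheses
    that classify it correctly (positive -> match, negative -> no match)."""
    return {h for h in HYPOTHESES if matches(h, example) == label}

def merge(observations):
    """Incremental version-space merging, sketched by explicit set
    intersection: fold each new version space into the running result."""
    vs = set(HYPOTHESES)
    for example, label in observations:
        vs &= version_space(example, label)
    return vs

remaining = merge([(('red', 'small'), True),
                   (('blue', 'small'), False)])
# remaining holds the classifiers consistent with both examples:
# {('red', 'small'), ('red', '?')}
```

In the full algorithm the sets are never enumerated; each version space is represented by its specific and general boundary sets, and the intersection is computed on those boundaries, which is what lets IVSM absorb information other than strictly consistent training examples.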