The article deals with the problem of learning incrementally ('on-line') in domains where the target concepts are context-dependent, so that changes in context can produce more or less radical changes in the associated concepts. In particular, we concentrate on a class of learning tasks where the domain provides explicit clues as to the current context (e.g., attributes with characteristic values). A general two-level learning model is presented that effectively adjusts to changing contexts by trying to detect (via 'meta-learning') contextual clues and using this information to focus the learning process. Context learning and detection occur during regular on-line learning, without separate training phases for context recognition. Two operational systems based on this model are presented that differ in the underlying learning algorithm and in the way they use contextual information: METAL(B) combines meta-learning with a Bayesian classifier, while METAL(IB) is based on an instance-based learning algorithm. Experiments with synthetic domains as well as a number of 'real-world' problems show that the algorithms are robust in a variety of dimensions, and that meta-learning can produce substantial increases in accuracy over simple object-level learning in situations with changing contexts.
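To make the two-level idea concrete, the following is a minimal sketch of an on-line learner with an object level (an incrementally updated naive Bayes classifier) and a meta level that watches a sliding window of recent examples for attributes whose values are currently near-constant, treating them as candidate contextual clues and up-weighting them. This is only an illustration of the general model under assumed parameters (window size, stability threshold, boost factor); it is not the METAL(B) algorithm itself.

```python
import math
from collections import defaultdict, deque, Counter

class ContextualNaiveBayes:
    """Illustrative two-level on-line learner (sketch, not METAL(B))."""

    def __init__(self, window_size=100, stability=0.9, boost=2.0):
        self.class_counts = defaultdict(int)     # counts for P(c)
        self.attr_counts = defaultdict(int)      # counts for P(a_i = v | c)
        self.window = deque(maxlen=window_size)  # recent examples (meta level)
        self.stability = stability               # assumed stability threshold
        self.boost = boost                       # assumed weight for clue attributes
        self.clues = set()                       # indices of suspected context attributes

    def predict(self, x):
        total = sum(self.class_counts.values())
        if total == 0:
            return None
        best, best_score = None, float("-inf")
        for c, cc in self.class_counts.items():
            score = math.log(cc / total)
            for i, v in enumerate(x):
                w = self.boost if i in self.clues else 1.0
                # smoothed conditional probability, weighted more for clue attributes
                score += w * math.log((self.attr_counts[(i, v, c)] + 1) / (cc + 2))
            if score > best_score:
                best, best_score = c, score
        return best

    def update(self, x, y):
        # Object level: incremental update of the Bayesian statistics.
        self.class_counts[y] += 1
        for i, v in enumerate(x):
            self.attr_counts[(i, v, y)] += 1
        self.window.append(x)
        self._detect_clues()

    def _detect_clues(self):
        # Meta level: an attribute whose dominant value covers most of the
        # recent window is flagged as a clue to the current context
        # (a crude stand-in for the paper's contextual-clue detection).
        self.clues.clear()
        if len(self.window) < self.window.maxlen:
            return
        for i in range(len(self.window[0])):
            counts = Counter(example[i] for example in self.window)
            if counts.most_common(1)[0][1] / len(self.window) >= self.stability:
                self.clues.add(i)
```

In use, such a learner would be evaluated prequentially on a data stream: predict each incoming example first, then update on it, so that context detection happens during regular on-line learning rather than in a separate training phase.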