On-line learning in domains where the target concept depends on some hidden context poses serious problems. A changing context can induce changes in the target concepts, producing what is known as concept drift. We describe a family of learning algorithms that flexibly react to concept drift and can take advantage of situations where contexts reappear. The general approach underlying all these algorithms consists of (1) keeping only a window of currently trusted examples and hypotheses; (2) storing concept descriptions and reusing them when a previous context reappears; and (3) controlling both of these functions by a heuristic that constantly monitors the system's behavior. The paper reports on experiments that test the systems' performance under various conditions such as different levels of noise and different extent and rate of concept drift.
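
As a rough illustration of the three-part scheme above, the following Python sketch combines a fixed-size window of trusted examples, a store of archived concept descriptions, and a simple accuracy-based monitoring heuristic. All names, thresholds, and the majority-class "concept description" are hypothetical simplifications for exposition, not the paper's actual algorithms, which induce richer hypotheses and use more refined window-adjustment heuristics.

    from collections import Counter, deque


    class WindowLearner:
        """Minimal sketch of a window-based learner with concept re-use.

        The 'concept description' here is just the majority class of the
        window; this stands in for the richer hypotheses a real learner
        would induce."""

        def __init__(self, window_size=50, acc_threshold=0.7, monitor_size=20):
            self.window = deque(maxlen=window_size)   # (1) trusted examples
            self.stored = []                          # (2) concept store
            self.hits = deque(maxlen=monitor_size)    # (3) monitoring statistics
            self.acc_threshold = acc_threshold

        def _describe(self):
            # Summarize the current window as a (trivial) concept description.
            counts = Counter(y for _, y in self.window)
            return counts.most_common(1)[0][0] if counts else None

        def predict(self, x):
            return self._describe()

        def learn(self, x, y):
            # Monitor: record whether the current hypothesis would have
            # classified the new example correctly.
            self.hits.append(self.predict(x) == y)
            full = len(self.hits) == self.hits.maxlen
            acc = sum(self.hits) / len(self.hits)

            if full and acc < self.acc_threshold:
                # Suspected drift: archive the current description ...
                desc = self._describe()
                if desc is not None and desc not in self.stored:
                    self.stored.append(desc)
                # ... drop trust in old examples, and re-use a stored
                # concept if the newest examples suggest a previous
                # context has returned.
                recent = list(self.window)[-5:]
                self.window.clear()
                self.hits.clear()
                for old in self.stored:
                    agree = sum(1 for _, yy in recent if yy == old)
                    if recent and agree / len(recent) >= self.acc_threshold:
                        self.window.extend(recent)  # warm-start from old context
                        break

            self.window.append((x, y))

The key design choice this sketch mirrors is that both the window contents and the concept store are governed by the same monitoring heuristic: evidence of degraded performance triggers both forgetting and an attempt at re-use, rather than leaving either to a fixed schedule.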