Incremental concept learning for bounded data mining

Citation
J. Case et al., Incremental concept learning for bounded data mining, INF COMPUT, 152(1), 1999, pp. 74-110
Number of citations
51
Subject Categories
Information Technology & Communication Systems
Journal title
INFORMATION AND COMPUTATION
Journal ISSN
0890-5401
Volume
152
Issue
1
Year of publication
1999
Pages
74 - 110
Database
ISI
SICI code
0890-5401(19990710)152:1<74:ICLFBD>2.0.ZU;2-6
Abstract
Important refinements of concept learning in the limit from positive data, considerably restricting the accessibility of input data, are studied. Let c be any concept; every infinite sequence of elements exhausting c is called a positive presentation of c. In all learning models considered, the learning machine computes a sequence of hypotheses about the target concept from a positive presentation of it. With iterative learning, the learning machine, in making a conjecture, has access to its previous conjecture and the latest data items coming in. In k-bounded example-memory inference (k is a priori fixed), the learner is allowed to access, in making a conjecture, its previous hypothesis, its memory of up to k data items it has already seen, and the next element coming in. In the case of k-feedback identification, the learning machine, in making a conjecture, has access to its previous conjecture and the latest data item coming in, and, on the basis of this information, it can compute k items and query the database of previous data to find out, for each of the k items, whether or not it is in the database (k is again a priori fixed). In all cases, the sequence of conjectures has to converge to a hypothesis correctly describing the target concept. Our results are manifold. An infinite hierarchy of more and more powerful feedback learners, in dependence on the number k of queries allowed to be asked, is established. However, the hierarchy collapses to 1-feedback inference if only indexed families of infinite concepts are considered, and moreover, its learning power is then equal to learning in the limit. But it remains infinite for concept classes of only infinite r.e. concepts. Both k-feedback inference and k-bounded example-memory identification are more powerful than iterative learning, but they are incomparable to one another. Furthermore, there are cases where redundancy in the hypothesis space is shown to be a resource increasing the learning power of iterative learners. Finally, the union of at most k pattern languages is shown to be iteratively inferable. (C) 1999 Academic Press.
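
The three access-restricted models described in the abstract can be made concrete with a small illustrative sketch. Nothing below is taken from the paper: the toy concept class C = { {0, 1, ..., n} : n in N } (where a conjecture is simply the number n), the function names iterative_learner, bounded_memory_learner, and feedback_learner, and the update rules are all assumptions chosen only to exhibit the information flow each model permits.

# Illustrative sketch only -- not the paper's formalism. Target class:
# C = { {0, 1, ..., n} : n in N }; a conjecture is the number n, and a
# learner identifies the target once its conjectures converge to n.
from typing import Iterator

def iterative_learner(presentation: Iterator[int]) -> Iterator[int]:
    # Iterative learning: each conjecture depends only on the previous
    # conjecture and the latest data item; no past data is stored.
    hypothesis = 0
    for item in presentation:
        hypothesis = max(hypothesis, item)
        yield hypothesis

def bounded_memory_learner(presentation: Iterator[int], k: int) -> Iterator[int]:
    # k-bounded example-memory: additionally remembers up to k seen items.
    hypothesis = 0
    memory: list[int] = []  # at most k stored examples
    for item in presentation:
        hypothesis = max([hypothesis, item] + memory)
        memory = sorted(set(memory) | {item}, reverse=True)[:k]
        yield hypothesis

def feedback_learner(presentation: Iterator[int], k: int) -> Iterator[int]:
    # k-feedback identification: the learner stores nothing itself, but may
    # compute k items per step and ask the environment whether each one
    # occurred earlier in the presentation (yes/no answers only).
    database: set[int] = set()  # held by the environment, not the learner
    hypothesis = 0
    for item in presentation:
        queries = [item - i - 1 for i in range(k)]       # learner-chosen probes
        answers = {q: (q in database) for q in queries}  # yes/no feedback
        database.add(item)
        # This toy class is learnable without using the answers; they are
        # computed here only to exhibit the query/answer information flow.
        hypothesis = max(hypothesis, item)
        yield hypothesis

data = [2, 0, 5, 3, 5, 7, 1]  # prefix of a positive presentation of {0, ..., 7}
print(list(iterative_learner(iter(data))))          # [2, 2, 5, 5, 5, 7, 7]
print(list(bounded_memory_learner(iter(data), 2)))  # same conjectures here
print(list(feedback_learner(iter(data), 1)))        # same conjectures here

On this toy class all three learners happen to behave identically; the paper's results concern concept classes where the extra example memory or feedback queries provably change what is learnable.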