The present paper investigates the identification of indexed families L of recursively enumerable languages from good examples. We distinguish class-preserving learning from good examples (the good examples have to be generated with respect to a hypothesis space having the same range as L) and class-comprising learning from good examples (the good examples have to be selected with respect to a hypothesis space comprising the range of L). A learner is required to learn a target language on every finite superset of the good examples for it. If the learner's first and only conjecture is correct, the underlying learning model is referred to as finite identification from good examples; if the learner makes a finite number of incorrect conjectures before always outputting a correct one, the model is referred to as limit identification from good examples. In the context of class-preserving learning, it is shown that the learning power of finite and limit identification from good text examples coincides. Where class-comprising learning from good text examples is concerned, limit identification is strictly more powerful than finite learning. Furthermore, if learning from good informant examples is considered, limit identification is superior to finite identification in the class-preserving as well as in the class-comprising case. Finally, we relate the models of learning from good examples to one another as well as to the standard learning models in the context of Gold-style language learning. (C) 2001 Elsevier Science B.V. All rights reserved.