Intuitively, a class of objects is robustly learnable if not only the class itself is learnable but all of its computable transformations remain learnable as well. In that sense, being robustly learnable seems to be a desirable property in all fields of learning.
We will study this phenomenon within the paradigm of inductive inference. Here a class of recursive functions is called robustly learnable under the criterion I iff all of its images under general recursive operators are learnable under the criterion I.
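In symbols (a sketch in our own notation rather than the paper's verbatim definitions), writing $\Theta(\mathcal{C}) = \{\Theta(f) : f \in \mathcal{C}\}$ for the image of a class $\mathcal{C}$ of recursive functions under an operator $\Theta$:

$$\mathcal{C} \text{ is robustly } I\text{-learnable} \iff \Theta(\mathcal{C}) \in I \text{ for every general recursive operator } \Theta.$$

Since the identity operator is general recursive, robust learnability under I in particular implies plain learnability under I.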
M. Fulk (1990, in "31st Annual IEEE Symposium on Foundations of Computer Science," pp. 405-410, IEEE Comput. Soc. Press, Los Alamitos, CA) showed the existence of a nontrivial class which is robustly learnable under the criterion Ex. However, several of the hierarchies (such as the anomaly hierarchies for Ex and Bc) do not stand robustly. Hence, up to now it was not clear whether robust learning is really rich. The main intention of this paper is to give strong evidence that robust learning is rich.
Our main result, proved by a priority construction, is that the mind change hierarchy for Ex stands robustly.
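To recall the hierarchy in question (standard notation, sketched here rather than quoted from the paper): a machine Ex-learns a recursive function $f$ if, fed the graph of $f$, its sequence of hypotheses converges to a program for $f$, and $\mathrm{Ex}_m$ additionally allows at most $m$ mind changes, i.e. revisions of the current hypothesis. The classical mind change hierarchy

$$\mathrm{Ex}_0 \subset \mathrm{Ex}_1 \subset \mathrm{Ex}_2 \subset \cdots \subset \mathrm{Ex}$$

is strict, and the result above asserts that its strictness is preserved when every class involved is additionally required to be robustly learnable.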
Moreover, the hierarchies of team learning for both Ex and Bc stand robustly as well. In several contexts, we observed the surprising fact that a more complex topological structure of the classes to be learned leads to positive robustness results, whereas an easy topological structure yields negative results. We also show the counterintuitive fact that even some self-referential classes can be learned robustly. Some of our results point out the difficulty of robust learning when only a bounded number of mind changes is allowed. Further results concerning uniformly robust learning are derived. © 2001 Academic Press.