In the k-Restricted-Focus-of-Attention (k-RFA) model, only k of the n attributes of each example are revealed to the learner, although the set of visible attributes in each example is determined by the learner. While the k-RFA model is a natural extension of the PAC model, there are also significant differences. For example, it was previously known that learnability in this model is not characterized by the VC-dimension and that many PAC learning algorithms are not applicable in the k-RFA setting. In this paper we further explore the relationship between the PAC and k-RFA models, with several interesting results. First, we develop an information-theoretic characterization of k-RFA learnability upon which we build a general tool for proving hardness results. We then apply this and other new techniques for studying RFA learning to two particularly expressive function classes: k-decision-lists (k-DL) and k-TOP, the class of thresholds of parity functions in which each parity function takes at most k inputs. Among other results, we prove a hardness result for k-RFA learnability of k-DL for all k ≤ n - 2. In sharp contrast, we present an (n - 1)-RFA algorithm for learning (n - 1)-DL. Similarly, we prove that 1-DL is learnable if and only if at least half of the inputs are visible in each instance. In addition, we show that there is a uniform-distribution k-RFA learning algorithm for the class of k-DL. For k-TOP we show weak learnability by a k-RFA algorithm (with efficient time and sample complexity for constant k) and strong uniform-distribution k-RFA learnability of k-TOP with efficient sample complexity for constant k. Finally, by combining some of our k-DL and k-TOP results, we show that, unlike in the PAC model, weak learning does not imply strong learning in the k-RFA model.