This paper studies the robustness of PAC learning algorithms when the instance space is {0,1}^n and the examples are corrupted by purely random noise affecting only the attributes (and not the labels). For uniform attribute noise, in which each attribute is flipped independently at random with the same probability, we present an algorithm that PAC learns monomials for any (unknown) noise rate less than 1/2. Contrasting this positive result, we show that product random attribute noise, where each attribute i is flipped randomly and independently with its own probability p_i, is nearly as harmful as malicious noise: no algorithm can tolerate more than a very small amount of such noise.
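To make the two noise models concrete, the following sketch (illustrative only; the function names are our own, not from the paper) shows how an example x in {0,1}^n would be corrupted under each process:

```python
import random

def uniform_attribute_noise(x, p):
    # Uniform attribute noise: flip each attribute independently
    # with the same probability p (the label is left untouched).
    return [b ^ (random.random() < p) for b in x]

def product_attribute_noise(x, ps):
    # Product random attribute noise: flip attribute i independently
    # with its own probability ps[i].
    return [b ^ (random.random() < p_i) for b, p_i in zip(x, ps)]

if __name__ == "__main__":
    random.seed(0)
    x = [1, 0, 1, 1, 0]
    print(uniform_attribute_noise(x, 0.25))
    print(product_attribute_noise(x, [0.0, 0.5, 0.1, 0.3, 0.45]))
```

The contrast in the results above is between a single unknown flip rate p < 1/2, which monomial learning can tolerate, and per-attribute rates p_i, which cannot be tolerated beyond a very small amount.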