Currently popular reinforcement learning methods are based on estimating value functions that indicate the long-term value of each problem state. In many domains, such as those traditionally studied in AI planning research, the size of the state space precludes storing an individual value estimate for each state. Consequently, most practical implementations of reinforcement learning methods have stored value functions using generalizing function approximators, with mixed results. We analyze the effects of approximation error on the performance of goal-based tasks, revealing potentially severe scaling difficulties. Empirical evidence is presented that suggests when difficulties are likely to occur and explains some of the widely differing results reported in the literature.