Combining different machine learning algorithms in the same system can produce benefits above and beyond what either method could achieve alone. This paper demonstrates that genetic algorithms can be used in conjunction with lazy learning to solve examples of a difficult class of delayed reinforcement learning problems better than either method alone. This class, the class of differential games, includes numerous important control problems that arise in robotics, planning, game playing, and other areas, and solutions for differential games suggest solution strategies for the general class of planning and control problems. We conducted a series of experiments applying three learning approaches - lazy Q-learning, k-nearest neighbor (k-NN), and a genetic algorithm - to a particular differential game called a pursuit game. Our experiments demonstrate that k-NN had great difficulty solving the problem, while a lazy version of Q-learning performed moderately well and the genetic algorithm performed even better. These results motivated the next step in the experiments, where we hypothesized that k-NN was having difficulty because it did not have good examples - a common source of difficulty for lazy learning. Therefore, we used the genetic algorithm as a bootstrapping method for k-NN to create a system that provides these examples. Our experiments demonstrate that the resulting joint system learned to solve the pursuit games with a high degree of accuracy, outperforming either method alone, and with relatively small memory requirements.
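The bootstrapping idea can be illustrated with a minimal sketch: a genetic algorithm evolves a policy for a toy pursuit game, and the best evolved policy's state-action table is then stored as the example memory that k-NN queries. The 1-D pursuit game, the table-based policy encoding, and the GA operators below are all illustrative assumptions for the sketch, not the paper's actual experimental setup.

```python
import random

random.seed(0)

# Toy 1-D pursuit game: pursuer at p chases evader at e; actions are steps.
# A policy maps the discretized relative position (e - p) to an action.
OFFSETS = range(-5, 6)    # discretized relative positions
ACTIONS = (-1, 0, 1)

def fitness(policy, trials=20, horizon=15):
    """Fraction of trials in which the pursuer catches the evader."""
    captures = 0
    for _ in range(trials):
        p, e = 0, random.randint(-5, 5)
        for _ in range(horizon):
            if p == e:
                captures += 1
                break
            off = max(-5, min(5, e - p))
            p += policy[off]
            e = max(-10, min(10, e + random.choice((-1, 1))))  # random evader
    return captures / trials

def evolve(pop_size=30, generations=40):
    """Simple generational GA over table-based pursuit policies."""
    pop = [{o: random.choice(ACTIONS) for o in OFFSETS} for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = random.sample(elite, 2)
            child = {o: random.choice((a[o], b[o])) for o in OFFSETS}  # crossover
            if random.random() < 0.1:                                  # mutation
                child[random.choice(list(OFFSETS))] = random.choice(ACTIONS)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Bootstrapping: the best evolved policy supplies the (state, action)
# examples that become the k-NN memory; k-NN answers queries by distance.
best = evolve()
memory = [(o, best[o]) for o in OFFSETS]

def knn_action(query_offset, k=1):
    nearest = sorted(memory, key=lambda sa: abs(sa[0] - query_offset))[:k]
    votes = {}
    for _, action in nearest:
        votes[action] = votes.get(action, 0) + 1
    return max(votes, key=votes.get)
```

The point of the sketch is the division of labor: the GA explores the policy space globally, while the lazy learner only needs to store and interpolate the examples the GA found, which keeps the memory small.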