Intelligence pertains to the ability to make appropriate decisions in light
of specific goals and to adapt behavior to meet those goals in a range of
environments. Mathematical games provide a framework for studying intelligent behavior in models of real-world settings or restricted domains. The behavior of alternative strategies in these games is defined by each individual's stimulus-response mapping. Limiting these behaviors to linear functions of the environmental conditions renders the results little more than a facade: effective decision making in any complex environment almost always requires nonlinear stimulus-response mappings. The obstacle then comes in choosing the appropriate representation and learning algorithm. Neural networks and evolutionary algorithms provide useful means for addressing these issues. This paper describes efforts to hybridize neural and evolutionary computation to learn appropriate strategies in zero- and nonzero-sum games, including the iterated prisoner's dilemma, tic-tac-toe, and checkers. With respect to checkers, the evolutionary algorithm was able to discover a neural network that can be used to play at a near-expert level without injecting expert knowledge about how to play the game. The implications of evolutionary learning with respect to machine intelligence are also discussed. It is argued that evolution provides the framework for explaining naturally occurring intelligent entities and can be used to design machines that are also capable of intelligent behavior.
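
As a rough illustration of the hybrid neural-evolutionary approach referred to above, the following is a minimal sketch, not the authors' implementation: a small feedforward network acts as a nonlinear stimulus-response mapping, and a (mu + lambda) evolutionary algorithm mutates its weights and keeps the best performers. The network size, the mutation scheme, every parameter value, and the placeholder fitness function are assumptions for illustration only; in the work described here, fitness would come from actual game play (e.g., wins and losses in checkers), not from the stand-in heuristic used below.

```python
# Illustrative sketch only: evolving the weights of a small feedforward network
# with a (mu + lambda) evolutionary algorithm. All names and parameters are
# hypothetical and not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

N_INPUTS, N_HIDDEN = 9, 5                      # e.g., a 3x3 board fed to one hidden layer
N_WEIGHTS = N_INPUTS * N_HIDDEN + N_HIDDEN     # hidden-layer weights + output weights

def evaluate(weights, board):
    """Nonlinear stimulus-response mapping: board features -> scalar evaluation."""
    w_h = weights[:N_INPUTS * N_HIDDEN].reshape(N_INPUTS, N_HIDDEN)
    w_o = weights[N_INPUTS * N_HIDDEN:]
    hidden = np.tanh(board @ w_h)
    return np.tanh(hidden @ w_o)

def fitness(weights, n_positions=20):
    """Placeholder fitness. In practice this would be the outcome of games
    played by the network; here we score random positions against a fixed
    stand-in heuristic so the sketch runs on its own."""
    boards = rng.integers(-1, 2, size=(n_positions, N_INPUTS)).astype(float)
    target = np.tanh(boards.sum(axis=1) / N_INPUTS)    # hypothetical 'opponent' signal
    preds = np.array([evaluate(weights, b) for b in boards])
    return -np.mean((preds - target) ** 2)

mu, lam, sigma, generations = 10, 10, 0.1, 50
population = [rng.normal(0.0, 0.5, N_WEIGHTS) for _ in range(mu)]

for gen in range(generations):
    # Each parent produces one offspring by Gaussian mutation of its weights.
    offspring = [p + rng.normal(0.0, sigma, N_WEIGHTS) for p in population]
    candidates = population + offspring
    scores = [fitness(w) for w in candidates]
    # Keep the mu best networks as the next generation's parents.
    ranked = sorted(zip(scores, candidates), key=lambda sc: sc[0], reverse=True)
    population = [w for _, w in ranked[:mu]]

print("best placeholder fitness:", max(fitness(w) for w in population))
```

The point of the sketch is the division of labor: the neural network supplies the nonlinear mapping from environmental conditions to decisions, while the evolutionary algorithm, guided only by relative performance, searches the space of weights without any injected expert knowledge.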