Although developments in software agents have led to useful applications in automating routine tasks such as electronic mail filtering, there is a scarcity of research that empirically evaluates the performance of a software agent against that of the human reasoner whose problem-solving capabilities the agent embodies. In the context of a game of chance, namely Yahtzee©, we identified strategies deployed by expert human reasoners and developed a decision tree for agent development. This paper describes the computer implementation of the Yahtzee game as well as the software agent, and presents a comparison of the performance of humans versus the automated agent. Results indicate that, in this context, the software agent embodies human expertise at a high level of fidelity.
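To make the idea of encoding expert strategies as a decision tree concrete, the sketch below shows a minimal, hypothetical rule set for choosing a scoring category from a final roll. The category names, rules, and function signature are illustrative assumptions for this sketch; they are not the decision tree developed in the paper.

```python
from collections import Counter


def choose_category(dice, open_categories):
    """Pick a scoring category for a final roll using simple,
    hand-coded decision rules (illustrative sketch only)."""
    counts = Counter(dice)
    largest_group = counts.most_common(1)[0][1]  # size of the largest matching set

    # Rule 1: five of a kind scores Yahtzee if that category is still open.
    if largest_group == 5 and "yahtzee" in open_categories:
        return "yahtzee"
    # Rule 2: five consecutive faces score a large straight if open.
    if sorted(dice) in ([1, 2, 3, 4, 5], [2, 3, 4, 5, 6]) and \
            "large_straight" in open_categories:
        return "large_straight"
    # Rule 3: four of a kind is preferred over chance when available.
    if largest_group >= 4 and "four_of_a_kind" in open_categories:
        return "four_of_a_kind"
    # Fallback: score the roll as chance, or the first remaining open category.
    return "chance" if "chance" in open_categories else sorted(open_categories)[0]


if __name__ == "__main__":
    print(choose_category([3, 3, 3, 3, 3], {"yahtzee", "chance"}))          # yahtzee
    print(choose_category([2, 3, 4, 5, 6], {"large_straight", "chance"}))   # large_straight
```

A full agent would extend such rules to cover re-roll decisions and all thirteen categories; this fragment only illustrates the rule-based form a decision tree can take.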