AI planning agents are goal-directed: success is measured in terms of whether an input goal is satisfied. The goal gives structure to the planning problem, and planning representations and algorithms have been designed to exploit that structure. Strict goal satisfaction may be an unacceptably restrictive measure of good behavior, however. A general decision-theoretic agent, on the other hand, has no explicit goals: success is measured in terms of an arbitrary preference model or utility function defined over plan outcomes. Although it is a very general and powerful model of problem solving, decision-theoretic choice lacks structure, which can make it difficult to develop effective plan-generation algorithms. This paper establishes a middle ground between the two models. We extend the traditional AI goal model in several directions: allowing goals with temporal extent, expressing preferences over partial satisfaction of goals, and balancing goal satisfaction against the cost of the resources consumed in service of the goals. In doing so we provide a utility model for a goal-directed agent. An important quality of the proposed model is its tractability. We claim that our model, like classical goal models, makes problem structure explicit. This structure can then be exploited by a problem-solving algorithm. We support this claim by reporting on two implemented planning systems that adopt and exploit our model.
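
To make the shape of such a utility model concrete, one minimal sketch (an illustration only, not the formulation developed in this paper) combines the two extended ingredients additively: a weighted degree of satisfaction for each goal, minus a cost term for the resources consumed. The weights, the satisfaction measure, and the additive combination are all assumptions of this sketch:

    U(\pi) \;=\; \sum_{g \in G} w_g \,\mathrm{sat}_g(\pi) \;-\; \sum_{r \in R} c_r \,\mathrm{cons}_r(\pi)

Here \(\mathrm{sat}_g(\pi) \in [0,1]\) is the degree to which the outcome of plan \(\pi\) satisfies goal \(g\) (with 1 recovering strict satisfaction), \(w_g\) is the weight of goal \(g\), and \(\mathrm{cons}_r(\pi)\) is the amount of resource \(r\) consumed at unit cost \(c_r\). Because the goal terms remain explicit rather than being folded into an opaque utility function over outcomes, a planner can still exploit goal structure while trading partial satisfaction against resource cost.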