Planning under uncertainty is a central problem in the study of automated sequential decision making, and has been addressed by researchers in many different fields, including AI planning, decision analysis, operations research, control theory and economics. While the assumptions and perspectives adopted in these areas often differ in substantial ways, many planning problems of interest to researchers in these fields can be modeled as Markov decision processes (MDPs) and analyzed using the techniques of decision theory.
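For reference, the sketch below gives one standard formulation of a fully observable, discrete-time MDP and its Bellman optimality equation under expected total discounted reward; the notation is the conventional one and is not necessarily that used elsewhere in the paper, and the discounted criterion is only one of several performance criteria of interest.

```latex
% A fully observable MDP is a tuple <S, A, T, R>:
%   S : finite set of states
%   A : finite set of actions
%   T(s, a, s') = Pr(s' | s, a) : transition function
%   R(s, a) : reward function
% With discount factor 0 <= gamma < 1, the optimal value function V*
% satisfies the Bellman optimality equation:
\[
  V^*(s) \;=\; \max_{a \in A} \Big[ R(s,a) \;+\; \gamma \sum_{s' \in S} \Pr(s' \mid s, a)\, V^*(s') \Big],
\]
% and an optimal policy acts greedily with respect to V*:
\[
  \pi^*(s) \;\in\; \operatorname*{argmax}_{a \in A} \Big[ R(s,a) \;+\; \gamma \sum_{s' \in S} \Pr(s' \mid s, a)\, V^*(s') \Big].
\]
```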
This paper presents an overview and synthesis of MDP-related methods, showing how they provide a unifying framework for modeling many classes of planning problems studied in AI. It also describes structural properties of MDPs that, when exhibited by particular classes of problems, can be exploited in the construction of optimal or approximately optimal policies or plans. Planning problems commonly possess structure in the reward and value functions used to describe performance criteria, in the functions used to describe state transitions and observations, and in the relationships among features used to describe states, actions, rewards, and observations.
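One common instance of such structure, sketched here under the assumption that states are described by features (state variables) $X_1, \ldots, X_n$, is a dynamic Bayesian network (DBN) style transition model in which each post-action variable depends only on a small set of pre-action variables, together with an additively decomposed reward; the particular decomposition shown is illustrative rather than a fixed convention of the paper.

```latex
% Factored (DBN-style) transition model over state variables X_1, ..., X_n:
% each post-action variable X_i' depends only on a small parent set
% pa(X_i') of pre-action variables, so the transition function factors as
\[
  \Pr(s' \mid s, a) \;=\; \prod_{i=1}^{n} \Pr\!\big(x_i' \,\big|\, \mathrm{pa}(X_i'),\, a\big).
\]
% Reward functions often decompose additively over small subsets C_j of the
% state variables as well:
\[
  R(s, a) \;=\; \sum_{j} R_j\big(\mathbf{x}[C_j],\, a\big).
\]
```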
Specialized representations, and algorithms employing these representations, can achieve computational leverage by exploiting these various forms of structure. Certain AI techniques, in particular those based on the use of structured, intensional representations, can be viewed in this way. This paper surveys several types of representations for both classical and decision-theoretic planning problems, and planning algorithms that exploit these representations in a number of different ways to ease the computational burden of constructing policies or plans. It focuses primarily on abstraction, aggregation and decomposition techniques based on AI-style representations.
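As a hypothetical illustration of the aggregation idea (not a method defined in the paper), the following Python sketch builds an abstract MDP whose states are the blocks of a partition of the original state space and runs value iteration over it. It assumes the partition is uniform, i.e., every state in a block has the same reward and the same block-to-block transition probabilities, so one representative state per block suffices; with an approximate partition the same code yields an approximately optimal policy.

```python
# Illustrative sketch: value iteration over an aggregated (abstract) MDP.
# P[s][a] is a dict {s_next: prob}, R[s][a] is a reward,
# block_of[s] names the block (abstract state) containing concrete state s.

GAMMA = 0.9  # discount factor (assumed)

def aggregate(states, actions, P, R, block_of):
    """Build an abstract MDP from one representative state per block."""
    blocks = sorted(set(block_of.values()))
    rep = {b: next(s for s in states if block_of[s] == b) for b in blocks}
    P_abs, R_abs = {}, {}
    for b in blocks:
        s = rep[b]
        P_abs[b], R_abs[b] = {}, {}
        for a in actions:
            R_abs[b][a] = R[s][a]
            P_abs[b][a] = {}
            for s_next, p in P[s][a].items():
                b_next = block_of[s_next]
                P_abs[b][a][b_next] = P_abs[b][a].get(b_next, 0.0) + p
    return blocks, P_abs, R_abs

def value_iteration(blocks, actions, P_abs, R_abs, eps=1e-6):
    """Standard value iteration on the abstract MDP; returns V and a policy."""
    V = {b: 0.0 for b in blocks}
    while True:
        delta = 0.0
        for b in blocks:
            best = max(R_abs[b][a] +
                       GAMMA * sum(p * V[b2] for b2, p in P_abs[b][a].items())
                       for a in actions)
            delta = max(delta, abs(best - V[b]))
            V[b] = best
        if delta < eps:
            break
    policy = {b: max(actions,
                     key=lambda a: R_abs[b][a] +
                     GAMMA * sum(p * V[b2] for b2, p in P_abs[b][a].items()))
              for b in blocks}
    return V, policy
```

The abstract policy is lifted back to the original problem by acting in each concrete state s according to policy[block_of[s]]; the computational saving comes from iterating over blocks rather than individual states.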