Belief change is a fundamental problem in AI: agents constantly have to update their beliefs to accommodate new observations. In recent years, there has been much work on axiomatic characterizations of belief change. We claim that a better understanding of belief change can be gained from examining appropriate semantic models. In this paper we propose a general framework in which to model belief change. We begin by defining belief in terms of knowledge and plausibility: an agent believes phi if he knows that phi is more plausible than ¬phi. We then consider some properties defining the interaction between knowledge and plausibility, and show how these properties affect the properties of belief. In particular, we show that by assuming two of the most natural properties, belief becomes a KD45 operator.
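As a rough formal sketch (the notation is ours, not necessarily the paper's: B_i for agent i's belief, K_i for knowledge, and \succ_i for i's comparative plausibility), the definition and the resulting KD45 axioms can be written as

\[ B_i\varphi \;\equiv\; K_i(\varphi \succ_i \neg\varphi) \]

\begin{align*}
\textbf{K:}\quad & B_i(\varphi \rightarrow \psi) \rightarrow (B_i\varphi \rightarrow B_i\psi)\\
\textbf{D:}\quad & \neg B_i\,\mathit{false}\\
\textbf{4:}\quad & B_i\varphi \rightarrow B_i B_i\varphi\\
\textbf{5:}\quad & \neg B_i\varphi \rightarrow B_i \neg B_i\varphi
\end{align*}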
Finally, we add time to the picture. This gives us a framework in which we can talk about knowledge, plausibility (and hence belief), and time, which extends the framework of Halpern and Fagin for modeling knowledge in multi-agent systems. We then examine the problem of "minimal change". This notion can be captured by using prior plausibilities, an analogue of prior probabilities, which can be updated by "conditioning". We show by example that conditioning on a plausibility measure can capture many scenarios of interest.
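As a hedged illustration (the notation Pl for a plausibility measure and \llbracket\varphi\rrbracket for the set of worlds satisfying phi is ours), conditioning plays the role here that Bayesian conditioning \Pr(V \mid U) = \Pr(V \cap U)/\Pr(U) plays for probability: after observing U, the agent believes phi exactly when

\[ \mathrm{Pl}(\llbracket\varphi\rrbracket \mid U) \;>\; \mathrm{Pl}(\llbracket\neg\varphi\rrbracket \mid U), \]

assuming the conditional plausibility Pl(· | U) is obtained from the prior plausibility much as a posterior is obtained from a prior probability.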
In a companion paper, we show how the two best-studied scenarios of belief change, belief revision and belief update, fit into our framework. © 1997 Elsevier Science B.V.