Models of rationality typically rely on underlying logics that allow simulated agents to entertain beliefs about one another to any depth of nesting. Such models seem to be overly complex when used for belief modelling in environments in which cooperation between agents can be assumed, i.e., most HCI contexts. We examine some existing dialogue systems and find that deeply-nested beliefs are seldom supported, and that where present they appear to be unnecessary except in some situations involving deception. Use of nested beliefs is associated with nested reasoning (i.e., reasoning about other agents' reasoning). We argue that for cooperative dialogues, representations of individual nested beliefs of the third level (i.e., what A thinks B thinks A thinks B thinks) and beyond are in principle unnecessary unless directly available from the environment, because the corresponding nested reasoning is redundant. Since cooperation sometimes requires that agents reason about what is mutually believed, we propose a representation in which the second and all subsequent nesting levels are merged into a single category. In situations affording individual deeply-nested beliefs, such a representation restricts agents to human-like referring and repair strategies, where an unrestricted agent might make an unrealistic and perplexing utterance.
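The following is a minimal sketch, not the paper's implementation, of the kind of capped representation described above: an agent keeps its own beliefs and its first-level beliefs about the other agent individually, while the second and all deeper nesting levels collapse into a single mutual-belief category. All names here (BeliefStore, ascribe, SHARED, etc.) are illustrative assumptions.

```python
# Illustrative belief store with capped nesting (assumed names throughout).
SELF, OTHER, SHARED = "self", "other", "shared"


class BeliefStore:
    """Beliefs held by one agent about propositions, indexed by nesting category."""

    def __init__(self):
        # category -> set of propositions believed under that category
        self.beliefs = {SELF: set(), OTHER: set(), SHARED: set()}

    def _category(self, nesting):
        # nesting is the chain of agents after "I believe ...", e.g.
        # []              -> I believe p            (own belief)
        # ["B"]           -> I believe B believes p (first level, kept individually)
        # ["B", "A", ...] -> second level and beyond, merged into SHARED
        if len(nesting) == 0:
            return SELF
        if len(nesting) == 1:
            return OTHER
        return SHARED

    def ascribe(self, nesting, proposition):
        """Record a belief under the collapsed nesting scheme."""
        self.beliefs[self._category(nesting)].add(proposition)

    def holds(self, nesting, proposition):
        """Query a belief; all deep nestings resolve to the same merged category."""
        return proposition in self.beliefs[self._category(nesting)]


if __name__ == "__main__":
    store = BeliefStore()
    store.ascribe(["B", "A", "B"], "the meeting is at 3pm")
    # Any nesting at the second level or deeper maps to the same category:
    print(store.holds(["B", "A"], "the meeting is at 3pm"))            # True
    print(store.holds(["B", "A", "B", "A"], "the meeting is at 3pm"))  # True
    # First-level and own beliefs remain distinct:
    print(store.holds(["B"], "the meeting is at 3pm"))                 # False
```

Under this scheme an agent cannot represent a distinct individual belief at, say, the third level that differs from what it takes to be mutually believed, which is what restricts it to the human-like referring and repair strategies mentioned above.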