Trust is one of the most important concepts guiding decision-making and contracting in human societies. In artificial societies, this concept has been neglected until recently. The benevolence assumption inherent in many multiagent systems can have hazardous consequences when dealing with deceit in open systems. The aim of this paper is to establish a mechanism that helps agents cope with environments inhabited by both selfish and cooperative entities. This is achieved by enabling agents to evaluate trust in others. A formalization and an algorithm for trust are presented so that agents can autonomously deal with deception and identify trustworthy parties in open systems. The approach is twofold: agents can observe the behavior of others and thus collect information for establishing an initial trust model. In order to adapt quickly to a new or rapidly changing environment, agents can also make use of observations reported by other agents. The practical relevance of these ideas is demonstrated by means of a direct mapping from the scenario to electronic commerce.
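The twofold approach described above can be illustrated with a minimal sketch: an agent keeps its own interaction outcomes and mixes in witness reports at a reduced weight to bootstrap trust faster. The class, the weighting scheme, and the neutral prior of 0.5 are assumptions for illustration only, not the paper's formalization.

```python
# Minimal illustrative trust model (an assumption, not the paper's algorithm):
# combine an agent's own observations with witness reports from other agents.

class TrustModel:
    def __init__(self):
        self.own = []        # outcomes of own interactions (True = honest)
        self.witnessed = []  # outcomes reported by witness agents

    def observe(self, honest: bool) -> None:
        """Record the outcome of a direct interaction."""
        self.own.append(honest)

    def hear(self, honest: bool) -> None:
        """Record an outcome reported by another agent."""
        self.witnessed.append(honest)

    def trust(self, witness_weight: float = 0.5) -> float:
        """Weighted fraction of honest outcomes; witness reports count
        at reduced weight. With no data, fall back to a neutral 0.5."""
        pos = sum(self.own) + witness_weight * sum(self.witnessed)
        total = len(self.own) + witness_weight * len(self.witnessed)
        return pos / total if total else 0.5
```

For example, after one honest and one dishonest direct interaction plus one honest witness report, the estimate is (1 + 0.5) / (2 + 0.5) = 0.6; witness information thus shifts the estimate without dominating direct experience.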