Using trust for detecting deceitful agents in artificial societies

Citation
M. Schillo et al., Using trust for detecting deceitful agents in artificial societies, APPL ARTIF, 14(8), 2000, pp. 825-848
Number of citations
22
Subject Categories
AI Robotics and Automatic Control
Journal title
APPLIED ARTIFICIAL INTELLIGENCE
Journal ISSN
0883-9514
Volume
14
Issue
8
Year of publication
2000
Pages
825 - 848
Database
ISI
SICI code
0883-9514(200009)14:8<825:UTFDDA>2.0.ZU;2-4
Abstract
Trust is one of the most important concepts guiding decision-making and contracting in human societies. In artificial societies, this concept has been neglected until recently. The inherent benevolence assumption implemented in many multiagent systems can have hazardous consequences when dealing with deceit in open systems. The aim of this paper is to establish a mechanism that helps agents to cope with environments inhabited by both selfish and cooperative entities. This is achieved by enabling agents to evaluate trust in others. A formalization and an algorithm for trust are presented so that agents can autonomously deal with deception and identify trustworthy parties in open systems. The approach is twofold: agents can observe the behavior of others and thus collect information for establishing an initial trust model. In order to adapt quickly to a new or rapidly changing environment, one enables agents to also make use of observations from other agents. The practical relevance of these ideas is demonstrated by means of a direct mapping from a scenario to electronic commerce.
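To illustrate the twofold approach described in the abstract, the following is a minimal sketch, not the paper's own formalization: it assumes trust is estimated as the fraction of honest interactions a partner has shown, counted both from direct observation and from witness reports contributed by other agents. The class and method names (TrustModel, record, merge_witness, trust) are hypothetical and introduced only for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class TrustModel:
    """Per-partner counts of honest vs. total observed interactions."""
    honest: dict = field(default_factory=dict)
    total: dict = field(default_factory=dict)

    def record(self, partner: str, was_honest: bool) -> None:
        """Update counts after directly observing one interaction."""
        self.total[partner] = self.total.get(partner, 0) + 1
        if was_honest:
            self.honest[partner] = self.honest.get(partner, 0) + 1

    def merge_witness(self, partner: str, honest: int, total: int) -> None:
        """Fold in counts reported by another agent, to adapt faster
        in new or rapidly changing environments."""
        self.total[partner] = self.total.get(partner, 0) + total
        self.honest[partner] = self.honest.get(partner, 0) + honest

    def trust(self, partner: str, prior: float = 0.5) -> float:
        """Estimated probability that the partner behaves honestly."""
        n = self.total.get(partner, 0)
        if n == 0:
            return prior  # no evidence yet: fall back to a neutral prior
        return self.honest.get(partner, 0) / n


# Usage: trust rises with observed honesty and falls with observed deceit.
model = TrustModel()
model.record("agent_B", was_honest=True)
model.record("agent_B", was_honest=False)
model.merge_witness("agent_B", honest=3, total=4)
print(round(model.trust("agent_B"), 2))  # 0.67
```

In this reading, combining witness counts with direct counts is what lets an agent form a usable trust estimate before it has gathered many first-hand observations of its own.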