DOI: http://dx.doi.org/10.7551/978-0-262-31709-2-ch099
Pages 692-697
First published 2 September 2013

Evolution of Mutual Trust Protocol in Human-based Multi-Agent Simulation

Hirotaka Osawa, Michita Imai

Abstract

Acquiring a model of one's opponent and achieving mutual trust with others are notable traits of human intelligence. Achieving mutual trust is a major challenge for artificial intelligences, and it is a key factor in trading. However, how players observe each other's behavior and how they achieve mutual trust are not fully understood. In this study, we investigated the growth of a mutual trust protocol in a trading game through a human-based simulation. We designed and implemented a web-based multi-player trading game based on the refusable iterative Anti-Max Prisoner's Dilemma game (rAMPD). In the game, each agent's strategy is described by an automaton and periodically modified by human players. We conducted a long-term human-based evolution of mutual trust using this trading game for approximately one month and observed how the agents' automata changed. Analyses of the high-ranking agents' automata and introspective reports by the human players revealed that the mutual trust protocol is achieved by using the initial trade as a signal for mutual recognition.
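As a rough illustration of the kind of automaton-encoded strategy the abstract describes, the following Python sketch shows a finite-state strategy for an iterated Prisoner's Dilemma variant with a "refuse" option. This is not the authors' implementation: the action labels, state names, and transition table are placeholder assumptions, and the actual rAMPD action set and payoff structure are defined in the paper itself.

```python
# Hypothetical sketch of an automaton-based strategy for an iterated
# Prisoner's Dilemma variant with a refuse action (labels are placeholders,
# not the paper's rAMPD specification).
from dataclasses import dataclass, field

ACTIONS = ("C", "D", "R")  # cooperate, defect, refuse the trade


@dataclass
class AutomatonStrategy:
    # transitions[state][opponent_action] -> next state
    transitions: dict
    # output[state] -> action played while in that state
    output: dict
    start: str = "s0"
    state: str = field(init=False)

    def __post_init__(self):
        self.state = self.start

    def act(self) -> str:
        """Return the action prescribed by the current state."""
        return self.output[self.state]

    def observe(self, opponent_action: str) -> None:
        """Advance the automaton on the opponent's last action."""
        self.state = self.transitions[self.state].get(opponent_action, self.state)


# Example machine: open with a cooperative trade as a recognition signal,
# keep cooperating while the opponent cooperates, and refuse further trades
# after a defection. The concrete machine is an illustrative assumption.
strategy = AutomatonStrategy(
    transitions={
        "s0": {"C": "coop", "D": "refuse", "R": "refuse"},
        "coop": {"C": "coop", "D": "refuse", "R": "refuse"},
        "refuse": {},  # absorbing state: keep refusing
    },
    output={"s0": "C", "coop": "C", "refuse": "R"},
)

for opponent_move in ["C", "C", "D", "C"]:
    print(strategy.act())
    strategy.observe(opponent_move)
# Prints: C, C, C, R
```

In the study itself, such automata were periodically edited by human players rather than generated programmatically; the sketch only conveys how a strategy can be expressed as states, outputs, and transitions over observed opponent actions.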