Abstract:
We introduce a two-player model of reinforcement learning with memory. Past actions in an iterated game are stored in a memory and used to determine the players' next actions. To examine the behaviour of the model, approximate methods are used and compared with numerical simulations and the exact master equation. When the players' memory length increases to infinity, the model undergoes an absorbing-state phase transition. The performance of the examined strategies is checked in the prisoner's dilemma game. It turns out that it is advantageous to have a large memory in symmetric games, but it is better to have a short memory in asymmetric ones. © 2009 Elsevier B.V. All rights reserved.
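
The sketch below illustrates the general idea described in the abstract, not the authors' exact model: two players repeatedly play the prisoner's dilemma, each conditioning its move on a finite memory of past rounds and reinforcing moves in proportion to the payoff received. The memory length, payoff values, learning rate, and update rule are illustrative assumptions.

    # Minimal sketch (illustrative assumptions, not the paper's exact model):
    # iterated prisoner's dilemma with memory-based reinforcement learning.
    import random

    # Standard prisoner's dilemma payoffs for the row player; 'C' = cooperate, 'D' = defect.
    PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

    class MemoryPlayer:
        def __init__(self, memory_length, learning_rate=0.1):
            self.m = memory_length
            self.eta = learning_rate
            self.history = []      # past rounds as (my_move, opponent_move)
            self.prob_c = {}       # memory state -> probability of cooperating

        def state(self):
            # The player's state is the content of its finite memory.
            return tuple(self.history[-self.m:])

        def act(self):
            p = self.prob_c.setdefault(self.state(), 0.5)
            return 'C' if random.random() < p else 'D'

        def update(self, my_move, opp_move, payoff):
            # Reinforcement: shift the cooperation probability toward the action
            # just taken, in proportion to the payoff received (assumed rule).
            s = self.state()
            target = 1.0 if my_move == 'C' else 0.0
            self.prob_c[s] += self.eta * payoff * (target - self.prob_c[s])
            self.prob_c[s] = min(1.0, max(0.0, self.prob_c[s]))
            self.history.append((my_move, opp_move))

    def play(rounds=1000, memory_length=2):
        a, b = MemoryPlayer(memory_length), MemoryPlayer(memory_length)
        total_a = total_b = 0
        for _ in range(rounds):
            move_a, move_b = a.act(), b.act()
            pay_a, pay_b = PAYOFF[(move_a, move_b)], PAYOFF[(move_b, move_a)]
            a.update(move_a, move_b, pay_a)
            b.update(move_b, move_a, pay_b)
            total_a, total_b = total_a + pay_a, total_b + pay_b
        return total_a, total_b

    if __name__ == '__main__':
        print(play())

Varying memory_length in this kind of simulation is the analogue of the comparison discussed in the abstract between long and short memories.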