On-Policy Concurrent Reinforcement Learning
Document Type
Article
Publication Date
10-1-2004
Abstract
When an agent learns in a multi-agent environment, the payoff it receives depends on the behaviour of the other agents. If the other agents are also learning, the agent's reward distribution becomes non-stationary. This makes learning in multi-agent systems more difficult than single-agent learning. Prior attempts at value-function based learning in such domains have used off-policy Q-learning variants, which do not scale well, as their cornerstone, with limited success. This paper studies on-policy modifications of such algorithms, with the promise of scalability and efficiency. In particular, it is proven that these hybrid techniques are guaranteed to converge to their desired fixed points under some restrictions. It is also shown, experimentally, that the new techniques can learn (from self-play) better policies than the previous algorithms (also in self-play) during some phases of exploration.
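The on-policy modification the abstract alludes to replaces the off-policy max backup of Q-learning with a backup based on the action the learning policy actually selects (a SARSA-style update). The minimal Python sketch below contrasts the two update rules for a single agent; it is an illustration of the general on-policy/off-policy distinction only, not the paper's hybrid multi-agent algorithms, and all names and hyperparameter values are hypothetical.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # illustrative hyperparameters

Q = defaultdict(float)                    # Q[(state, action)] -> value estimate

def epsilon_greedy(state, actions):
    """Behaviour policy: mostly greedy, occasionally exploratory."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_learning_update(s, a, r, s2, actions):
    """Off-policy backup: bootstraps from the best next action,
    regardless of what the behaviour policy will actually do."""
    target = r + GAMMA * max(Q[(s2, a2)] for a2 in actions)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])

def sarsa_update(s, a, r, s2, a2):
    """On-policy backup: bootstraps from the action a2 the exploring
    policy actually chose in s2, so the target reflects the policy
    being executed rather than a greedy idealisation of it."""
    target = r + GAMMA * Q[(s2, a2)]
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])
```

In self-play the on-policy target tracks the value of the policies actually being executed, exploration included, which is the intuition behind why the on-policy hybrids studied in the paper can behave differently from their off-policy counterparts during some phases of exploration.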
DOI
10.1080/09528130412331297956
Montclair State University Digital Commons Citation
Banerjee, Bikramjit; Sen, Sandip; and Peng, Jing, "On-Policy Concurrent Reinforcement Learning" (2004). Department of Computer Science Faculty Scholarship and Creative Works. 457.
https://digitalcommons.montclair.edu/compusci-facpubs/457