Unifying Convergence and No-Regret in Multiagent Learning
Document Type
Conference Proceeding
Publication Date
7-10-2006
Abstract
We present a new multiagent learning algorithm, RVσ(t), that builds on an earlier version, ReDVaLeR. ReDVaLeR could guarantee (a) convergence to best response against stationary opponents and either (b) constant bounded regret against arbitrary opponents, or (c) convergence to Nash equilibrium policies in self-play. But it makes two strong assumptions: (1) that it can distinguish between self-play and otherwise non-stationary agents, and (2) that all agents know their portions of the same equilibrium in self-play. We show that the adaptive learning rate of RVσ(t), which is explicitly dependent on time, can overcome both of these assumptions. Consequently, RVσ(t) theoretically achieves (a') convergence to near-best response against eventually stationary opponents, (b') no-regret payoff against arbitrary opponents, and (c') convergence to some Nash equilibrium policy in some classes of games, in self-play. Each agent now needs to know only its portion of any equilibrium, and does not need to distinguish among non-stationary opponent types. This is also the first successful attempt (to our knowledge) at convergence of a no-regret algorithm in the Shapley game.
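To illustrate the role of a time-dependent learning rate in this kind of policy update, the following minimal Python sketch uses a generic decaying schedule sigma(t) to step a mixed strategy toward the current best response. The schedule form, the parameter names (sigma0, decay), and the update rule are illustrative assumptions for exposition; they are not the paper's actual RVσ(t) update.

import numpy as np

def sigma(t, sigma0=0.1, decay=1e-3):
    # Hypothetical time-dependent learning-rate schedule; the actual
    # RVsigma(t) schedule in the paper may differ.
    return sigma0 / (1.0 + decay * t)

def update_policy(policy, payoffs, t):
    # Illustrative step: move the mixed strategy toward the best response
    # to the current expected payoffs, scaled by sigma(t), then renormalize.
    best = np.zeros_like(policy)
    best[np.argmax(payoffs)] = 1.0            # best response to current payoffs
    new_policy = policy + sigma(t) * (best - policy)
    new_policy = np.clip(new_policy, 0.0, None)
    return new_policy / new_policy.sum()      # keep it a valid distribution

# Example: two-action game with placeholder payoff estimates
pi = np.array([0.5, 0.5])
for t in range(1, 1001):
    expected_payoffs = np.array([1.0, 0.3])   # placeholder estimates
    pi = update_policy(pi, expected_payoffs, t)

Because sigma(t) shrinks over time, the updates become increasingly conservative, which is the intuition behind using an explicitly time-dependent rate to balance responsiveness to non-stationary opponents against convergence in self-play.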
DOI
10.1007/11691839_5
Montclair State University Digital Commons Citation
Banerjee, Bikramjit and Peng, Jing, "Unifying Convergence and No-Regret in Multiagent Learning" (2006). Department of Computer Science Faculty Scholarship and Creative Works. 609.
https://digitalcommons.montclair.edu/compusci-facpubs/609