Convergent Gradient Ascent in General-Sum Games
Document Type: Conference Proceeding
Publication Date: 1-1-2002
Abstract
In this work we examine recent results in policy gradient learning for general-sum games, embodied in two algorithms: IGA and WoLF-IGA. We address drawbacks in the convergence properties of these algorithms and propose a more accurate version of WoLF-IGA that is guaranteed to converge to Nash equilibrium policies in self-play (or against an IGA learner). We also present a control-theoretic interpretation of the variable learning rate, which not only justifies WoLF-IGA but also shows that it achieves the fastest convergence under certain constraints. Finally, we derive optimal learning rates for the fastest convergence in practical simulations.
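The variable learning rate the abstract refers to is the WoLF ("Win or Learn Fast") principle: a gradient-ascent update on each player's mixed strategy whose step size is small while the player is winning and larger while it is losing. The sketch below illustrates one such update for the row player in a two-action general-sum game, assuming the row player's payoff matrix and equilibrium strategy alpha_e are known; the function name, step sizes, and concrete values here are illustrative choices, not details taken from the paper.

```python
import numpy as np

def wolf_iga_step(alpha, beta, R, alpha_e, eta=0.01, l_win=1.0, l_lose=2.0):
    """One WoLF-IGA gradient step for the row player (illustrative sketch).

    alpha, beta : probabilities of the first action for the row/column player.
    R           : 2x2 payoff matrix for the row player (numpy array).
    alpha_e     : row player's equilibrium probability (assumed known here).
    eta         : base step size; l_win/l_lose scale it per the WoLF rule.
    """
    p = np.array([alpha, 1.0 - alpha])
    q = np.array([beta, 1.0 - beta])
    p_e = np.array([alpha_e, 1.0 - alpha_e])
    # Gradient of the expected payoff V = p^T R q with respect to alpha.
    grad = (R[0] - R[1]) @ q
    # "Win or Learn Fast": step cautiously when the current strategy already
    # earns more against this opponent than the equilibrium strategy would,
    # and learn fast otherwise.
    l = l_win if p @ R @ q > p_e @ R @ q else l_lose
    # Gradient ascent, projected back onto the valid probability range.
    return float(np.clip(alpha + eta * l * grad, 0.0, 1.0))

# Illustrative use: a zero-sum-style 2x2 game whose equilibrium is at 0.5.
R = np.array([[1.0, -1.0], [-1.0, 1.0]])
alpha = 0.7
for _ in range(100):
    alpha = wolf_iga_step(alpha, beta=0.4, R=R, alpha_e=0.5)
```

The two fixed rates l_win < l_lose realize the variable learning rate whose dynamics the paper analyzes; the "winning" test used above, comparing the current expected payoff with what the equilibrium strategy would earn against the same opponent, follows the standard WoLF-IGA criterion.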
Montclair State University Digital Commons Citation
Banerjee, Bikramjit and Peng, Jing, "Convergent Gradient Ascent in General-Sum Games" (2002). Department of Computer Science Faculty Scholarship and Creative Works. 193.
https://digitalcommons.montclair.edu/compusci-facpubs/193