Incremental Multi-Step Q-Learning
Document Type
Article
Publication Date
1-1-1996
Abstract
This paper presents a novel incremental algorithm that combines Q-learning, a well-known dynamic-programming-based reinforcement learning method, with the TD(λ) return estimation process typically used in actor-critic learning, another well-known dynamic-programming-based reinforcement learning method. The parameter λ is used to distribute credit throughout sequences of actions, leading to faster learning and also helping to alleviate the non-Markovian effect of coarse state-space quantization. The resulting algorithm, Q(λ)-learning, thus combines some of the best features of the Q-learning and actor-critic learning paradigms. The behavior of this algorithm has been demonstrated through computer simulations.
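The idea sketched in the abstract — propagating the one-step TD error back along the action sequence with λ-decayed eligibility traces — can be illustrated in a tabular setting. The sketch below follows the Watkins-style Q(λ) variant (traces cut after exploratory actions) rather than Peng and Williams' exact update, and the chain environment, parameters, and helper names are illustrative assumptions, not from the paper.

```python
import random


def argmax_rand(qvals):
    """Greedy index with random tie-breaking (illustrative helper)."""
    m = max(qvals)
    return random.choice([i for i, q in enumerate(qvals) if q == m])


def q_lambda(n_states=5, n_actions=2, episodes=500, max_steps=500,
             alpha=0.1, gamma=0.9, lam=0.8, epsilon=0.1):
    """Tabular Q(lambda) with eligibility traces on a toy chain MDP.

    States 0..n_states-1; action 1 moves right, action 0 moves left.
    Entering the last state yields reward 1 and ends the episode.
    (The environment is an assumption for illustration only.)
    """
    Q = [[0.0] * n_actions for _ in range(n_states)]

    def step(s, a):
        s2 = min(n_states - 1, s + 1) if a == 1 else max(0, s - 1)
        done = s2 == n_states - 1
        return s2, (1.0 if done else 0.0), done

    for _ in range(episodes):
        e = [[0.0] * n_actions for _ in range(n_states)]  # eligibility traces
        s, done = 0, False
        for _ in range(max_steps):
            # epsilon-greedy action selection
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = argmax_rand(Q[s])
            greedy = Q[s][a] == max(Q[s])  # was the chosen action greedy?
            s2, r, done = step(s, a)
            # one-step TD error toward the greedy successor value
            delta = r + (0.0 if done else gamma * max(Q[s2])) - Q[s][a]
            e[s][a] += 1.0  # accumulating trace
            for si in range(n_states):
                for ai in range(n_actions):
                    Q[si][ai] += alpha * delta * e[si][ai]
                    # lambda-decay traces; Watkins' variant cuts them
                    # entirely after an exploratory (non-greedy) action
                    e[si][ai] *= gamma * lam if greedy else 0.0
            s = s2
            if done:
                break
    return Q
```

After training, the learned table should prefer the rightward action in states near the goal, since the λ-decayed traces let a single goal-reaching episode update every state-action pair along the path at once rather than one step per episode.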
DOI
10.1007/BF00114731
Montclair State University Digital Commons Citation
Peng, Jing and Williams, Ronald J., "Incremental Multi-Step Q-Learning" (1996). Department of Computer Science Faculty Scholarship and Creative Works. 340.
https://digitalcommons.montclair.edu/compusci-facpubs/340