Abstract
Eligibility traces have been shown to substantially improve the convergence speed of temporal difference learning algorithms by maintaining a record of recently experienced states. This paper presents compiled traces, an extension of conventional eligibility traces that retains additional information about the agent's experience within the environment. Empirical results show that compiled traces outperform conventional traces on policy evaluation tasks using a tabular representation of state values.
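For context, the following is a minimal sketch of the conventional baseline that compiled traces extend: tabular TD(lambda) policy evaluation with accumulating eligibility traces (Sutton, 1988). The function and parameter names are illustrative assumptions, not taken from the paper, and the sketch shows only the standard algorithm, not the compiled-trace extension.

    # Sketch of conventional tabular TD(lambda) policy evaluation with
    # accumulating eligibility traces. All names and hyperparameter values
    # here are illustrative, not from the paper.
    import numpy as np

    def td_lambda_evaluate(episodes, n_states, alpha=0.1, gamma=0.99, lam=0.9):
        """Estimate state values V for a fixed policy from sampled episodes.

        episodes: iterable of trajectories, each a list of
                  (state, reward, next_state, done) tuples produced by the policy.
        """
        V = np.zeros(n_states)
        for episode in episodes:
            e = np.zeros(n_states)  # eligibility trace for each state
            for s, r, s_next, done in episode:
                # One-step TD error for this transition
                target = r + (0.0 if done else gamma * V[s_next])
                delta = target - V[s]
                # Accumulating trace: bump the visited state's eligibility
                e[s] += 1.0
                # Update every state in proportion to its eligibility,
                # then decay all traces toward zero
                V += alpha * delta * e
                e *= gamma * lam
        return V

Replacing the accumulation step with e[s] = 1.0 gives the replacing-trace variant (Singh and Sutton, 1996); either form is an instance of the conventional traces the abstract refers to.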
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
Cite this paper
Vamplew, P., Ollington, R., Hepburn, M. (2006). Enhanced Temporal Difference Learning Using Compiled Eligibility Traces. In: Sattar, A., Kang, B.H. (eds.) AI 2006: Advances in Artificial Intelligence. Lecture Notes in Computer Science, vol. 4304. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11941439_18
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-49787-5
Online ISBN: 978-3-540-49788-2