Fairbank, M. & Alonso, E. (2012). A Comparison of Learning Speed and Ability to Cope Without Exploration between DHP and TD(0). Paper presented at the IEEE International Joint Conference on Neural Networks (IEEE IJCNN 2012), pp. 1783-1789, 10-15 June 2012, Brisbane, Australia.
- Accepted Version
This paper demonstrates the principal motivations for using Dual Heuristic Dynamic Programming (DHP) learning methods in Adaptive Dynamic Programming and Reinforcement Learning with continuous state spaces: automatic local exploration, improved learning speed, and the ability to work without stochastic exploration in deterministic environments. In a simple experiment, DHP is shown to learn around 1700 times faster than TD(0), and to solve the problem without any exploration, whereas TD(0) cannot solve it without explicit exploration. DHP requires knowledge of, and differentiability of, the environment's model functions; this paper aims to illustrate the advantages of DHP when these two requirements are satisfied.
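The contrast in the abstract is between a critic that learns the value function V(s) from scalar rewards (TD(0)) and one that learns its derivative dV/ds directly through a known, differentiable model (DHP). A minimal sketch of both update rules on a toy 1-D deterministic system follows; the dynamics, reward, policy, and learning rates here are illustrative assumptions, not the paper's benchmark:

```python
# Toy deterministic 1-D model, assumed known and differentiable
# (an illustrative stand-in, not the paper's experimental problem).
def f(s, u):      return 0.9 * s + u      # dynamics: s' = f(s, u)
def df_ds(s, u):  return 0.9              # partial f / partial s
def df_du(s, u):  return 1.0              # partial f / partial u
def r(s, u):      return -s * s           # reward (independent of u here)
def dr_ds(s, u):  return -2.0 * s         # partial r / partial s

def pi(s):     return -0.5 * s            # fixed linear policy (assumption)
def dpi_ds(s): return -0.5

gamma, alpha = 0.95, 0.05

w_v = 0.0   # TD(0) critic:  V(s)      approximated as w_v * s**2
w_l = 0.0   # DHP critic:    lambda(s) approximated as w_l * s  (i.e. dV/ds)

s = 1.0
for _ in range(2000):
    u = pi(s)
    s_next = f(s, u)

    # TD(0): scalar temporal-difference error on V itself.
    delta = r(s, u) + gamma * w_v * s_next**2 - w_v * s**2
    w_v += alpha * delta * s**2           # gradient of V w.r.t. w_v is s**2

    # DHP: error on the *derivative* of V, propagated through the known
    # model and policy:  target = d/ds [ r(s, pi(s)) + gamma * V(f(s, pi(s))) ].
    total_df = df_ds(s, u) + df_du(s, u) * dpi_ds(s)   # chain rule via policy
    lam_target = dr_ds(s, u) + gamma * total_df * (w_l * s_next)
    w_l += alpha * (lam_target - w_l * s) * s

    s = s_next if abs(s_next) > 1e-3 else 1.0   # restart finished rollouts

print(w_v, w_l)   # at convergence w_l = 2 * w_v, consistent with lambda = dV/ds
```

The key design difference this sketch shows is that the DHP target uses the model derivatives (`df_ds`, `df_du`, `dr_ds`) explicitly, which is why DHP requires a known, differentiable model, while the TD(0) update touches only scalar rewards and values.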
Item Type: Conference or Workshop Item (Paper)
Additional Information: © 2012 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Uncontrolled Keywords: Dual Heuristic Dynamic Programming; DHP; Adaptive Dynamic Programming; Reinforcement Learning
Subjects: L Education > L Education (General); Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Divisions: School of Informatics > Department of Computing