City Research Online

Internally Driven Q-learning - Convergence and Generalization Results

Alonso, E. ORCID: 0000-0002-3306-695X, Mondragon, E. ORCID: 0000-0003-4180-1261 & Kjaell-Ohlsson, N. (2012). Internally Driven Q-learning - Convergence and Generalization Results. In: Filipe, J. & Fred, A. (Eds.), Proceedings of the 4th International Conference on Agents and Artificial Intelligence. (pp. 491-494). Setubal, Portugal: SCITEPRESS. ISBN 978-989-8425-95-9 doi: 10.5220/0003736404910494


We present an approach to solving the reinforcement learning problem in which agents are provided with internal drives against which they evaluate the value of states according to a similarity function. We extend Q-learning by substituting internally driven values for ad hoc rewards. The resulting algorithm, Internally Driven Q-learning (IDQ-learning), is experimentally shown to converge to optimality and to generalize well. These results are preliminary yet encouraging: IDQ-learning is more psychologically plausible than Q-learning, and it devolves control, and thus autonomy, to agents that are otherwise at the mercy of the environment (i.e., of the designer).
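The abstract describes replacing Q-learning's externally supplied reward with an internally computed value: the similarity between a state and an internal drive. The sketch below illustrates that idea on a toy 1-D chain. The environment, the Gaussian similarity function, and all parameter values are illustrative assumptions for this sketch, not the paper's actual experimental setup.

```python
import numpy as np

# Hedged sketch of the IDQ-learning idea: standard tabular Q-learning,
# but the reward signal is derived internally from a similarity between
# the successor state and an internal drive, rather than supplied by the
# environment designer. Environment and similarity are assumptions.

np.random.seed(0)
n_states, n_actions = 6, 2            # toy chain; actions: 0 = left, 1 = right
drive = float(n_states - 1)           # internal drive: "be at the far end"

def similarity(state, drive):
    """Internally driven value: closeness of a state to the drive (assumed form)."""
    return np.exp(-abs(state - drive))

def step(state, action):
    """Deterministic chain dynamics, clipped at the ends."""
    return min(max(state + (1 if action == 1 else -1), 0), n_states - 1)

Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.2     # learning rate, discount, exploration

for episode in range(500):
    s = 0
    for _ in range(20):
        # epsilon-greedy action selection
        a = np.random.randint(n_actions) if np.random.rand() < eps else int(Q[s].argmax())
        s2 = step(s, a)
        r = similarity(s2, drive)     # drive-based value replaces the ad hoc reward
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

greedy = Q.argmax(axis=1)             # learned greedy policy, one action per state
```

Under this setup the greedy policy should move toward the drive state from every position, since the internally generated value increases monotonically with proximity to the drive.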

Publication Type: Conference or Workshop Item (Paper)
Publisher Keywords: Q-learning, IDQ-learning, Internal Drives, Convergence, Generalization
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Departments: School of Science & Technology > Computer Science
Text - Published Version


