City Research Online

Continual State Representation Learning for Reinforcement Learning using Generative Replay

Caselles-Dupré, H., Garcia Ortiz, M. (ORCID: 0000-0003-4729-7457) & Filliat, D. (2018). Continual State Representation Learning for Reinforcement Learning using Generative Replay. Paper presented at the Workshop on Continual Learning, NeurIPS 2018 (32nd Conference on Neural Information Processing Systems), 7 December 2018, Montreal, Canada.

Abstract

We consider the problem of building a state representation model in a continual fashion. As the environment changes, the aim is to efficiently compress the information in the sensory state without losing past knowledge. The learned features are then fed to a Reinforcement Learning algorithm to learn a policy. We propose to use Variational Auto-Encoders for state representation, and Generative Replay, i.e., training on generated samples rather than stored past data, to maintain past knowledge. We also provide a general and statistically sound method for automatic environment change detection. Our method provides an efficient state representation as well as forward transfer, and avoids catastrophic forgetting. The resulting model is capable of incrementally learning new information without using past data and with a bounded system size.
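As a concrete illustration of Generative Replay, the sketch below trains a new VAE on current observations mixed with samples decoded from the previously trained VAE, so that past knowledge is rehearsed without storing past data. It is a minimal PyTorch sketch; the network sizes, optimiser settings, and replay ratio are illustrative assumptions, not the authors' implementation.

```python
# Sketch of Generative Replay with a VAE (illustrative, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, obs_dim=64, latent_dim=8):
        super().__init__()
        self.latent_dim = latent_dim
        self.enc = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, obs_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation trick
        return self.dec(z), mu, logvar

    @torch.no_grad()
    def sample(self, n):
        # Decode latent draws from the prior: these generated observations
        # stand in for data from environments seen earlier.
        return self.dec(torch.randn(n, self.latent_dim))

def vae_loss(recon, x, mu, logvar):
    rec = F.mse_loss(recon, x, reduction="sum") / x.size(0)
    kld = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
    return rec + kld

def train_with_replay(new_vae, old_vae, loader, epochs=10, replay_ratio=1.0):
    # Mix each batch of current observations with samples generated by the
    # VAE kept from previous environments, so the system size stays bounded
    # and no past data is stored.
    opt = torch.optim.Adam(new_vae.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x in loader:
            if old_vae is not None:
                x = torch.cat([x, old_vae.sample(int(replay_ratio * x.size(0)))], dim=0)
            recon, mu, logvar = new_vae(x)
            loss = vae_loss(recon, x, mu, logvar)
            opt.zero_grad()
            loss.backward()
            opt.step()
```

For the automatic environment change detection, the abstract states only that the method is general and statistically sound. One plausible instantiation, assumed here rather than taken from the paper, monitors per-sample VAE reconstruction errors and flags a change when a window of recent errors differs significantly from a reference window under Welch's t-test.

```python
# Assumed change-detection sketch: Welch's t-test on VAE reconstruction errors.
from collections import deque
from scipy import stats
import torch
import torch.nn.functional as F

def reconstruction_errors(vae, batch):
    # Per-sample reconstruction error of the current VAE on a batch.
    with torch.no_grad():
        recon, _, _ = vae(batch)
    return F.mse_loss(recon, batch, reduction="none").mean(dim=1)

class ChangeDetector:
    def __init__(self, window=256, alpha=1e-3):
        self.reference = deque(maxlen=window)  # errors on the current environment
        self.recent = deque(maxlen=window)     # sliding window of latest errors
        self.alpha = alpha                     # significance level of the test

    def update(self, errors):
        self.recent.extend(errors.tolist())
        if len(self.reference) < self.reference.maxlen:
            self.reference.extend(errors.tolist())  # still calibrating
            return False
        # Flag an environment change when recent errors differ significantly
        # from the reference distribution (two-sided Welch's t-test).
        _, p = stats.ttest_ind(list(self.reference), list(self.recent),
                               equal_var=False)
        return p < self.alpha
```

When a change is flagged, the current VAE would become `old_vae`, a fresh VAE would be trained via `train_with_replay`, and the detector would be re-initialised on the new environment.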

Publication Type: Conference or Workshop Item (Paper)
Additional Information: Accepted contribution to the Workshop on Continual Learning, NeurIPS 2018 (Neural Information Processing Systems)
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Departments: School of Science & Technology > Computer Science