Learning to Act with RVRL Agents
Child, C. H. T., Stathis, K. & Garcez, A. (2007). Learning to Act with RVRL Agents. Paper presented at the 14th RCRA Workshop, Experimental Evaluation of Algorithms for Solving Problems with Combinatorial Explosion, Jul 2007, Rome, Italy.
Abstract
The use of reinforcement learning to guide action selection in cognitive agents has been shown to be a powerful technique for stochastic environments. Standard reinforcement learning techniques used to produce decision-theoretic policies rely, however, on explicit state-based computations of value for each state-action pair. This requires the computation of a number of values exponential in the number of state variables and actions in the system. This research extends existing work on acquiring a probabilistic rule representation of an agent's environment by developing an algorithm that applies reinforcement learning to values attached to the rules themselves. The structure captured by the rules is then used to learn a policy directly. The resulting value attached to each rule represents the utility of taking an action when the rule's conditions are present in the agent's current set of percepts. This has several advantages for planning purposes: the rules generalize over many states, including unseen states; effective decisions can therefore be made with less training data than state-based modelling systems (e.g. Dyna Q-Learning) require; and the problem of computation in an exponential state-action space is alleviated. The results of applying this algorithm to rules in a specific environment are presented, with comparison to standard reinforcement learning policies developed in related work.
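To make the rule-value idea concrete, the following is a minimal sketch of how values might be attached to condition-action rules rather than to individual state-action pairs. The `Rule` structure, the subset-matching semantics, and the Q-learning-style update are illustrative assumptions, not the paper's exact RVRL algorithm.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    conditions: frozenset   # percepts that must hold for the rule to fire
    action: str             # action the rule recommends
    value: float = 0.0      # learned utility of acting when conditions match

def matching(rules, percepts):
    """Rules whose conditions are all present in the current percepts."""
    return [r for r in rules if r.conditions <= percepts]

def best_action(rules, percepts):
    """Pick the action recommended by the highest-valued matching rule."""
    matched = matching(rules, percepts)
    return max(matched, key=lambda r: r.value).action if matched else None

def update(rules, percepts, action, reward, next_percepts,
           alpha=0.1, gamma=0.9):
    """Q-learning-style backup applied to rule values instead of states.

    Every rule that matched the current percepts and recommended the
    executed action shares the update, so a single experience adjusts
    the policy for all states those rules cover.
    """
    nxt = matching(rules, next_percepts)
    target = reward + gamma * (max(r.value for r in nxt) if nxt else 0.0)
    for r in matching(rules, percepts):
        if r.action == action:
            r.value += alpha * (target - r.value)
```

Because a rule matches every state containing its conditions, one update generalizes across many states at once, which is the source of the data-efficiency advantage over tabular methods claimed in the abstract.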
| Publication Type | Conference or Workshop Item (Paper) |
|---|---|
| Subjects | Q Science > QA Mathematics > QA75 Electronic computers. Computer science |
| Departments | School of Science & Technology > Computer Science |