Inductive learning in Shared Neural Multi-Spaces

Howe, J. M., Mota, E.D. & Garcez, A. (2017). Inductive learning in Shared Neural Multi-Spaces. Paper presented at the Twelfth International Workshop on Neural-Symbolic Learning and Reasoning 2017, July 17-18, 2017, London, UK.


Abstract

The learning of rules from examples is of continuing interest to machine learning since it allows generalization from fewer training examples. Inductive Logic Programming (ILP) generates hypothetical rules (clauses) from a knowledge base augmented with (positive and negative) examples. A successful hypothesis entails all positive examples and does not entail any negative example. The Shared Neural Multi-Space (Shared NeMuS) structure encodes first order expressions in a graph suitable for ILP-style learning. This paper explores the NeMuS structure and its relationship with the Herbrand Base of a knowledge base to generate hypotheses inductively. It is demonstrated that inductive learning driven by the knowledge base structure can be implemented successfully in the Amao cognitive agent framework, including the learning of recursive hypotheses.
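The acceptance criterion stated above (a hypothesis entails all positive examples and no negative example) can be sketched in a few lines. This is a minimal toy illustration of the ILP test, not the Amao / Shared NeMuS implementation; the knowledge base, clause encoding, and function names are all illustrative assumptions.

```python
# Illustrative sketch of the ILP hypothesis-acceptance test:
# keep a candidate clause only if it entails every positive example
# and no negative example. Names and encodings here are assumptions,
# not taken from the Amao framework.

def entails(kb, clause, example):
    """Naive entailment check for a single-literal-body clause.
    A clause (head, body) stands for: head(X) :- body(X).
    An example follows if it is a KB fact, or the clause derives
    it from a KB fact."""
    if example in kb:
        return True
    head, body = clause
    pred, arg = example
    return pred == head and (body, arg) in kb

def acceptable(kb, clause, positives, negatives):
    """A successful hypothesis entails all positives, no negatives."""
    return (all(entails(kb, clause, e) for e in positives)
            and not any(entails(kb, clause, e) for e in negatives))

# Toy knowledge base of facts, and the candidate rule
# ancestor(X) :- parent(X).
kb = {("parent", "alice"), ("parent", "bob")}
clause = ("ancestor", "parent")
positives = [("ancestor", "alice"), ("ancestor", "bob")]
negatives = [("ancestor", "carol")]

print(acceptable(kb, clause, positives, negatives))  # True for this toy KB
```

In a full ILP system the entailment check would be logical inference over arbitrary clauses rather than this one-step lookup, but the accept/reject condition on positive and negative examples is the same.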

Item Type: Conference or Workshop Item (Paper)
Additional Information: Copyright © 2017 for the individual papers by the papers' authors. Copying permitted for private and academic purposes. This volume is published and copyrighted by its editors.
Divisions: School of Informatics > Department of Computing
URI: http://openaccess.city.ac.uk/id/eprint/18545
