Towards Robust Neurosymbolic Relational Learning
Luca, T., Paes, A., Zaverucha, G. & Garcez, A. D. ORCID: 0000-0001-7375-9518 (2025).
Towards Robust Neurosymbolic Relational Learning.
In: 2025 International Joint Conference on Neural Networks (IJCNN), 30 Jun - 5 Jul 2025, Rome, Italy.
doi: 10.1109/ijcnn64981.2025.11229419
Abstract
Traditional neural networks (NNs) learn primarily from data, which limits their capacity to represent relational knowledge or to handle symbolic relational data effectively. Although graph neural networks (GNNs) address this limitation at the level of relational data, they still struggle to learn relational knowledge. Neural-symbolic learning offers a solution by combining machine learning with knowledge representation, enabling interpretable logic-based models to be learned with neural networks. Bottom clause propositionalization (BCP) is a prominent approach that transforms relational knowledge into attribute-value examples. A bottom clause is a logical representation built from each example that serves as the starting point for the hypothesis search. BCP can be used with symbolic learners or neural networks to tackle relational domains. However, BCP often runs into significant memory problems on larger datasets because of the volume of logical literals it generates. Semi-propositionalization can alleviate these memory problems by grouping logical literals, but it does not eliminate the substantial time required to create a bottom clause for each example. This paper investigates sampling the examples used for bottom clause generation. The hypothesis is that the number of examples needed to generate bottom clauses can be reduced significantly: finding representative bottom clauses in the data should enable relational learning to take place at an adequate level of abstract relational knowledge rather than merely at the level of relations between any two data points. We evaluate this hypothesis by training classifiers on samples of different sizes drawn from five relational datasets, and we experimentally validate the sample size for each dataset. Results show that training classifiers with fewer relational examples produces results competitive with using the entire dataset, with the best results obtained at reductions of up to 50% in the set of examples.
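To make the sampled-BCP pipeline described in the abstract concrete, below is a minimal Python sketch. It assumes uniform random sampling and treats bottom clause construction as a black box; the helper names (`sample_examples`, `bottom_clause`, `propositionalize`) and the toy literal generator are illustrative stand-ins, not the paper's implementation. In practice each sampled example would be saturated against the background knowledge by an ILP engine (e.g. Progol or Aleph), and the body literals of the resulting bottom clauses would become binary features for a standard classifier.

```python
import random

def sample_examples(examples, fraction, seed=42):
    """Draw a random subset of the examples so that bottom clauses
    are generated only for the sample (hypothetical helper)."""
    rng = random.Random(seed)
    k = max(1, round(len(examples) * fraction))
    return rng.sample(examples, k)

def bottom_clause(example):
    """Placeholder for saturation: a real implementation would call an
    ILP engine to build the most specific clause covering the example.
    Here we simply fabricate a small set of body literals."""
    return {f"lit({example},{i})" for i in range(3)}

def propositionalize(examples, fraction):
    """BCP with sampling: the union of body literals across the sampled
    bottom clauses defines the attribute-value feature space, and each
    sampled example becomes a 0/1 vector over those literals."""
    sampled = sample_examples(examples, fraction)
    clauses = {e: bottom_clause(e) for e in sampled}
    features = sorted(set().union(*clauses.values()))
    vectors = {e: [int(f in clauses[e]) for f in features] for e in sampled}
    return features, vectors

if __name__ == "__main__":
    examples = [f"active(d{i})" for i in range(10)]  # toy target atoms
    feats, vecs = propositionalize(examples, fraction=0.5)
    print(f"{len(feats)} features for {len(vecs)} sampled examples")
```

Under these assumptions, the feature space is fixed by the sampled bottom clauses alone, so both the saturation time and the size of the propositional table shrink with the sampling fraction.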
| Publication Type: | Conference or Workshop Item (Paper) |
|---|---|
| Additional Information: | © 2025 IEEE. This accepted manuscript is made available under the terms of the Creative Commons Attribution License (CC-BY), which permits unrestricted use, distribution and reproduction in any medium, provided the original work is properly cited. |
| Publisher Keywords: | Training, Training data, Machine learning, Transforms, Knowledge representation, Sampling methods, Graph neural networks, Data models, Faces, Sports |
| Subjects: | Q Science > QA Mathematics > QA75 Electronic computers. Computer science |
| Departments: | School of Science & Technology; School of Science & Technology > Department of Computer Science |