City Research Online

Reasoning about what has been learned: Neural-Symbolic integration for explainable artificial intelligence

Wagner, B. (2022). Reasoning about what has been learned: Neural-Symbolic integration for explainable artificial intelligence. (Unpublished Doctoral thesis, City, University of London)


We investigate the potential of Neural-Symbolic integration to reason about what a neural network has learned. By undertaking a systematic study of the literature on explainable Artificial Intelligence, we propose a new taxonomy that presents the methods in a structured manner and enables the integration of earlier work. Initially, two promising symbolically inspired explainability methods for deep neural networks are evaluated. We examine the limitations of a concept-based XAI approach and expand the applicability of the method to a new data domain. The approach is extended by integrating ontology querying to provide more comprehensive explanations. Having examined the internal representation of the network, we investigate the suitability of a decision tree extraction method for explaining the network's inner operations. Specifically, we apply the original proposal to increasingly complex visual tasks while extending the method to provide a deeper understanding of the extracted trees. To overcome the limitations identified in the examined methods, we propose a novel Neural-Symbolic approach to explainability by exploring the connection between conceptual representations and expressive first-order logic operators, yielding intuitive and powerful explanations that correspond to human reasoning. The relevant framework is adapted from Logic Tensor Networks and modified to add a continual interactive explanation mechanism for model-agnostic querying and constraining. We examine the reasoning capabilities of neural networks trained using the framework to identify the requirements a model must meet to be integrated effectively. This allows us to establish what constitutes valuable background information for improving deduction compared with previously published benchmarks.
Using the novel interactive framework, we present a method for acting on information extracted by any XAI approach to prevent the model from learning unwanted behaviour and biases, and show how the method can be applied to quantitative fairness. Compared with another constraint-based neural network approach, we demonstrate improved accuracy while maintaining fairness under two common fairness metrics. Furthermore, we incorporate the initial work on concept groundings into the new framework to facilitate comprehensive conceptual and logic-based explanations of black-box models (i.e., a CNN and a transformer). This demonstrates that model explanations can retain truthful representations of operations and internal representations, passing a benchmark proposed for truly explainable AI.
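As a rough illustration of the Logic Tensor Networks-style constraining the abstract refers to (a minimal sketch, not the thesis's implementation), a logical rule can be turned into a differentiable loss: predicates become neural truth functions with outputs in [0, 1], a fuzzy implication scores the rule per sample, and a quantifier aggregates over the batch. All names and the toy linear "predicates" below are hypothetical stand-ins.

```python
import numpy as np

# Hypothetical LTN-style sketch: predicates are differentiable truth
# functions in [0, 1]; a rule "forall x: P(x) -> Q(x)" becomes a loss term.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def implies(p, q):
    # Reichenbach fuzzy implication: I(p, q) = 1 - p + p*q
    return 1.0 - p + p * q

def forall(truths):
    # Universal quantifier approximated by the mean truth value
    return float(np.mean(truths))

# Toy "predicates": linear scorers standing in for neural networks.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))           # batch of 8 samples, 3 features
w_p = rng.normal(size=3)
w_q = rng.normal(size=3)

P = sigmoid(x @ w_p)                  # truth of P(x) for each sample
Q = sigmoid(x @ w_q)                  # truth of Q(x) for each sample

# Satisfaction of the rule over the batch, and the constraint loss
# one would add to the training objective to enforce it.
sat = forall(implies(P, Q))
loss = 1.0 - sat
```

Because every step is differentiable, gradient descent on `loss` pushes the predicates toward satisfying the rule, which is the mechanism that lets extracted knowledge act back on training (e.g., as a fairness constraint).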

Publication Type: Thesis (Doctoral)
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Departments: School of Science & Technology > Computer Science
Text - Accepted Version


