City Research Online

Explainable AI for object detection from autonomous vehicles

Hogan, M. J. (2025). Explainable AI for object detection from autonomous vehicles. (Unpublished Doctoral thesis, City, University of London)

Abstract

The complex perception tasks required for autonomous vehicle operation typically rely on multiple deep neural networks (DNNs). However, the opacity of DNN detection algorithms undermines transparency and can lead to unpredictable behaviour in safety-critical applications such as scene and object recognition for uncrewed aerial vehicles (UAVs) and self-driving cars. This research aims to bridge that transparency gap by developing explainability methods that enhance trust in the networks deployed on autonomous vehicles.

This study addresses three major limitations in the current literature. First, state-of-the-art object detection networks exhibit reduced performance on aerial imagery from UAVs, highlighting the need for more robust solutions. Second, most existing explainability techniques target image classification rather than object detection, leaving an important gap. Third, the lack of standardised validation methods for explanations remains an open challenge.

To address these issues, this thesis introduces three novel explainability frameworks for deep object detection. The first framework adapts Grad-CAM to the YOLOv5 detector, generating explanations for class scores, objectness scores, and bounding box coordinates while evaluating real-time performance. Building on these insights, the second framework takes a KernelSHAP-based, model-agnostic approach that explains object detections across architectures. Finally, the DetDSHAP framework offers a propagation-based method that calculates not only the contribution of individual pixels to a predicted bounding box but also the role played by discrete units of the DNN in the prediction. DetDSHAP is further employed to optimise model performance through pruning.
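To make the first framework concrete, the sketch below shows how Grad-CAM can be adapted to explain a single scalar output of a detector, such as a class, objectness, or box-coordinate score. This is a minimal PyTorch illustration, not the thesis implementation: the model, the choice of target layer, and the `score_fn` selector are assumptions supplied by the caller.

```python
import torch
import torch.nn.functional as F

def grad_cam_for_score(model, image, target_layer, score_fn):
    """Grad-CAM heat map for one scalar detection score.

    model        -- a PyTorch detector in eval mode
    image        -- input tensor of shape (1, 3, H, W)
    target_layer -- the convolutional module whose activations are explained
    score_fn     -- maps the raw model output to ONE scalar, e.g. the
                    objectness or class score of a chosen detection
                    (must select from pre-NMS outputs so gradients flow)
    """
    activations, gradients = [], []

    # Capture the target layer's feature maps and their gradients.
    h1 = target_layer.register_forward_hook(
        lambda _m, _i, out: activations.append(out))
    h2 = target_layer.register_full_backward_hook(
        lambda _m, _gi, go: gradients.append(go[0]))
    try:
        output = model(image)
        score = score_fn(output)          # the scalar being explained
        model.zero_grad()
        score.backward()
    finally:
        h1.remove()
        h2.remove()

    acts, grads = activations[0], gradients[0]       # each (1, C, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)   # GAP over space
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    cam = cam / (cam.max() + 1e-8)                   # normalise to [0, 1]
    return cam.squeeze()                             # (H, W) heat map
```

Passing a different `score_fn` (one per class score, objectness score, or box coordinate) yields the per-output explanations the abstract describes.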

Additionally, a novel "Wrapping Game" approach is proposed to validate the reliability of explainers in high-stakes edge cases, providing a measure of the discriminative power of explanations. This work is further supported by the development of the XI (eXplainable Intelligence) Autonomous Driving dataset, tailored to autonomous vehicle challenges, which enables rigorous testing of explainability techniques in real-world scenarios. Together, these contributions form a comprehensive framework for enhancing the interpretability of deep object detection models, helping ensure that autonomous vehicle systems are both effective and trustworthy.
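The abstract does not detail the Wrapping Game procedure, so the sketch below instead illustrates a standard, related way to validate an explainer: a deletion-style faithfulness check that removes the most-attributed pixels and measures the resulting drop in the detection score. This is a generic substitute for illustration only, not the thesis method; the `fraction` parameter and masking scheme are assumptions.

```python
import torch

def deletion_score_drop(model, image, heat_map, score_fn, fraction=0.1):
    """Generic faithfulness check for an attribution map.

    Removes the top `fraction` most-attributed pixels and measures how much
    the detection score falls. A faithful explanation should produce a
    larger drop than deleting randomly chosen pixels.

    image    -- input tensor of shape (1, 3, H, W)
    heat_map -- (H, W) attribution map, e.g. from grad_cam_for_score above
    score_fn -- maps model output to the scalar score being explained
    """
    with torch.no_grad():
        base = score_fn(model(image)).item()

        k = int(fraction * heat_map.numel())
        flat = heat_map.flatten()
        top = flat.topk(k).indices                 # most important pixels

        mask = torch.ones_like(flat)
        mask[top] = 0.0                            # zero out those pixels
        mask = mask.view(1, 1, *heat_map.shape)    # broadcast over channels

        drop = base - score_fn(model(image * mask)).item()
    return drop
```

Comparing the drop for an explainer's map against a random baseline gives a simple, quantitative signal of discriminative power, the property the Wrapping Game is designed to measure.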

Publication Type: Thesis (Doctoral)
Subjects: T Technology > T Technology (General)
T Technology > TA Engineering (General). Civil engineering (General)
Departments: School of Science & Technology > Engineering
School of Science & Technology > School of Science & Technology Doctoral Theses
Doctoral Theses
Full text: Hogan thesis 2025 PDF-A.pdf (Text, Accepted Version, 61MB)
