Talk / Overview

The research domain of explainable Artificial Intelligence (XAI) has recently emerged to investigate deep learning models, which are often constructed as black boxes. In the field of deep learning on graphs, AI users want a better understanding of the models and their outcomes; this is particularly true for financial decision-making. However, there is no clear classification of the existing explainability methods for graph neural networks (GNNs), and building one is hampered by the lack of consensus on the definition of core concepts such as “explanation”, “interpretation”, “transparency”, and “trust”. Here, we propose a novel approach to properly compare explainability techniques for GNNs. We start by re-defining the fundamental concepts. On this common ground, we select diverse datasets for node and graph classification tasks and compare the most representative explainability techniques for GNNs. By highlighting the complexity of the term “explanation”, we show that explainability techniques play different roles: they vary in the type of explanations they provide, the type of data they are suited for, and the task of the model (node or graph classification). Rather than having the techniques compete on a single scale, we provide a comparative framework based on multiple criteria to guide researchers in selecting not the best but the most appropriate method for their problem.
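
As a concrete illustration of one representative technique in this field, below is a minimal sketch of generating an explanation with GNNExplainer, which learns edge and feature masks scoring how important each edge and node feature is for a prediction. The sketch assumes a recent version of PyTorch Geometric (2.3 or later, which ships the torch_geometric.explain module); the toy graph, the small GCN, and the untrained weights are illustrative placeholders, not the talk's actual experimental setup.

import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from torch_geometric.explain import Explainer, GNNExplainer

class GCN(torch.nn.Module):
    """A small two-layer GCN for node classification (placeholder model)."""
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)  # raw logits

# Toy graph: 10 nodes, 16 features each, 40 random edges (illustrative only).
x = torch.randn(10, 16)
edge_index = torch.randint(0, 10, (2, 40))

model = GCN(16, 32, 3)  # in practice, a model already trained on the task

explainer = Explainer(
    model=model,
    algorithm=GNNExplainer(epochs=100),
    explanation_type='model',     # explain the model's own prediction
    node_mask_type='attributes',  # learn a mask over node features
    edge_mask_type='object',      # learn a mask over individual edges
    model_config=dict(
        mode='multiclass_classification',
        task_level='node',        # node classification here
        return_type='raw',        # the model returns unnormalized logits
    ),
)

# Explain the prediction at node 0: higher edge-mask values mark edges
# that contribute more to the model's output for that node.
explanation = explainer(x, edge_index, index=0)
print(explanation.edge_mask)

Setting task_level to 'graph' (and calling the explainer without a node index) yields graph-level explanations instead, mirroring the node vs. graph classification distinction drawn in the abstract.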

Talk / Speakers

Kenza Amara

Doctoral student, ETHZ

Talk / Slides

Download the slides for this talk (PDF, 499.91 MB).

Talk / Highlights

Lightning Talk: Explaining Graph Neural Networks

With Kenza Amara
Published April 27, 2022
