Workshop / Overview

The development of deep learning models has led to important breakthroughs in a variety of tasks, at times surpassing human performance. For example, combining expert opinions with deep learning has been shown to achieve the best sensitivity and specificity in medical tasks.
Deep learning algorithms, however, carry inherent risks such as the codification of biases, weak accountability and a lack of transparency in their decision-making process. With little insight into how the final output is produced, they tend to be perceived as black boxes that offer little control over the decision.
This workshop aims to offer a well-rounded discussion of the importance of building trust in, and interpretability of, Artificial Intelligence. The growing adoption of deep learning solutions raises demands for accountability and transparency of the models. This is particularly true for healthcare and medical applications. In this workshop we will focus on understanding the decision process of medical imaging algorithms used in digital pathology.
We will demonstrate how to implement different types of explainability techniques. Heatmaps of salient input pixels will be generated to visualize where Convolutional Neural Networks focus their attention on input medical images. We will also show how arbitrary concepts, such as clinically relevant parameters for pathology, can be used to obtain explanations at a higher level of abstraction. Finally, graph-based representations of pathology images will be introduced as an alternative interpretability solution that shifts the analysis from pixels to biologically comprehensible entities.
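As a preview of the heatmap-based explanations, the snippet below is a minimal sketch of Class Activation Mapping in PyTorch, assuming a CNN that ends with global average pooling followed by a single fully connected layer; an off-the-shelf torchvision ResNet-18 and a random tensor stand in here for the workshop's actual model and histology patches.

```python
# Minimal sketch of Class Activation Mapping (CAM) for a CNN that ends with
# global average pooling + one fully connected layer (e.g. ResNet-18).
# The model and input image below are placeholders, not the workshop code.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18().eval()  # in practice: a model fine-tuned on pathology patches

features = {}
def hook(module, inputs, output):
    # store the activations of the last convolutional block
    features["maps"] = output.detach()

model.layer4.register_forward_hook(hook)

image = torch.randn(1, 3, 224, 224)  # placeholder for a preprocessed patch
with torch.no_grad():
    logits = model(image)
class_idx = logits.argmax(dim=1).item()

# CAM_c(x, y) = sum_k w_{c,k} * f_k(x, y): weight each feature map of the last
# conv layer by the classifier weight of the predicted class, then sum over channels
weights = model.fc.weight[class_idx].detach()   # (K,)
maps = features["maps"][0]                      # (K, H, W)
cam = torch.einsum("k,khw->hw", weights, maps)
cam = F.relu(cam)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# upsample to the input resolution so the heatmap can be overlaid on the image
cam = F.interpolate(cam[None, None], size=image.shape[-2:],
                    mode="bilinear", align_corners=False)[0, 0]
```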
Hands-on tutorials will focus on the application of interpretability techniques to digital pathology images. Class Activation Maps and the concept-based explanations given by Regression Concept Vectors will be applied to explain the decisions of Convolutional Neural Networks. Further, cell graphs will be introduced as an interpretable encoding of cellular relationships, followed by GraphGrad to generate post-hoc explanations.
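To make the cell-graph idea concrete, here is a minimal sketch of building a cell graph from a histology patch, assuming nuclei centroids and per-nucleus features have already been extracted (e.g. by a segmentation model); the array names, neighbour count and distance threshold below are illustrative.

```python
# Minimal sketch of cell-graph construction: nuclei become nodes, spatially
# close nuclei are connected by edges. All inputs below are placeholders.
import numpy as np
import networkx as nx
from scipy.spatial import cKDTree

centroids = np.random.rand(100, 2) * 1000      # placeholder nucleus centroids (pixels)
node_features = np.random.rand(100, 16)        # placeholder per-nucleus features

k = 5              # connect each nucleus to at most its 5 nearest neighbours
max_dist = 50.0    # ...but only if they lie within 50 pixels

tree = cKDTree(centroids)
dists, neighbours = tree.query(centroids, k=k + 1)  # +1 accounts for the point itself

graph = nx.Graph()
for i, (feat, (x, y)) in enumerate(zip(node_features, centroids)):
    graph.add_node(i, features=feat, pos=(x, y))

for i in range(len(centroids)):
    for dist, j in zip(dists[i, 1:], neighbours[i, 1:]):  # skip the self-match
        if dist <= max_dist:
            graph.add_edge(i, int(j), weight=float(dist))

# The graph replaces raw pixels with biologically meaningful entities (nuclei)
# and their spatial relationships, ready to be classified by a GNN.
print(graph.number_of_nodes(), "nuclei,", graph.number_of_edges(), "edges")
```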

Workshop / Outcome

Participants will be given an overview of interpretability in deep learning, with an emphasis on the need to build trust between doctors, clinicians and AI agents.

In the first part of the workshop, interpretability in healthcare will be motivated and contextualised, and common techniques will be introduced. Then, the hands-on session will give participants ready-to-use tools that they can apply in their own projects.

Upon completion of the workshop, participants will have experience with image-based and graph-based classification of tumor tissue in histology images, and with interpreting the decisions of these models. Participants will work on the implementation of Class Activation Mapping as an example of heatmaps of salient input pixels. Regression Concept Vectors will be applied to generate complementary explanations in terms of clinically relevant measures such as nuclei area and appearance. Finally, participants will be shown how Graph Neural Networks operating on cell graphs directly incorporate a higher level of transparency in terms of entity importance, which can be interpreted by graph pruning.
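As an indication of what the Regression Concept Vectors exercise involves, the sketch below assumes intermediate CNN activations and a continuous concept measurement (e.g. mean nuclei area) are already available for a set of patches; all arrays are placeholders, not the workshop's data.

```python
# Minimal sketch of Regression Concept Vectors (RCV): regress a clinically
# relevant concept on intermediate activations, then measure how sensitive the
# class score is to movement along that concept direction.
import numpy as np
from sklearn.linear_model import LinearRegression

acts = np.random.rand(200, 512)            # placeholder layer activations (N patches x D units)
nuclei_area = np.random.rand(200)          # placeholder concept measurements per patch

# 1. Regress the concept on the activations; the unit-normalised coefficient
#    vector points in the direction of increasing concept value.
reg = LinearRegression().fit(acts, nuclei_area)
rcv = reg.coef_ / np.linalg.norm(reg.coef_)

# 2. Sensitivity scores: directional derivative of the class logit with respect
#    to the activations, projected onto the RCV. The gradients would normally
#    come from backpropagation through the CNN; here they are placeholders.
grad_logit_wrt_acts = np.random.rand(200, 512)
sensitivities = grad_logit_wrt_acts @ rcv

# A predominantly positive (or negative) sign indicates that the concept is
# relevant to the model's decision for this class.
print("fraction of positive sensitivities:", (sensitivities > 0).mean())
```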

Workshop / Difficulty

Intermediate level

Workshop / Prerequisites

  • Good command of Python
  • Familiarity with deep learning frameworks such as Keras or PyTorch
  • Knowledge of common deep learning architectures: Convolutional Neural Networks, fully connected networks
  • Own laptop. The tutorials will be conducted on Google Colab.

Track / Co-organizers

Guillaume Jaume

PhD student, EPFL/IBM Research

Pushpak Pati

Pre-doc student, IBM Research, ETH Zurich

Mara Graziani

PhD student, HES-SO Valais and UniGe
