What made the model decide that this image contained a damaged product? Why is this particular customer likely to churn? And can we trust the model’s decision that this ICU patient should be discharged into regular care?
With the rise of data-driven decision making by black-box machine learning models, the reasoning behind those decisions can become nearly incomprehensible. This is a problem. Imagine you had no machine learning background whatsoever: would you truly feel comfortable putting your health in the (metallic) hands of automated decision making, or placing your trust in a self-driving car?
This is where Explainable AI comes in. In this workshop session, we dive into how you can make your ML models interpretable and explainable.
We will cover model-agnostic techniques that can be applied to any ML method. These will enable you to analyse the importance of features and how sensitive predictions are to changes in the input, both for individual predictions and for your model as a whole.
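As a taste of what such a technique looks like in practice, here is a minimal sketch of permutation feature importance, one model-agnostic method in this family, using scikit-learn's built-in permutation_importance. The dataset and the random forest are illustrative placeholders; any fitted estimator works the same way.

# Sketch: permutation feature importance with scikit-learn.
# The breast-cancer dataset and random forest are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops;
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
ranking = sorted(zip(X.columns, result.importances_mean,
                     result.importances_std),
                 key=lambda t: t[1], reverse=True)
for name, mean, std in ranking[:5]:
    print(f"{name}: {mean:.3f} +/- {std:.3f}")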
You will be able to identify the most appropriate model-agnostic interpretability techniques for explaining your models and their predictions.
Furthermore, you will be able to apply these techniques in Python. Because you will gain a fundamental understanding of how they work, you will also be able to implement them in other programming languages.
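To make that last point concrete, the core of permutation importance can be written from scratch in a few lines: it needs only the model's predictions and a scoring function, never the model's internals, which is why the idea transfers to any language. The function name below is illustrative, and X and y are assumed to be NumPy arrays.

# Illustrative from-scratch version of permutation importance: it only
# calls model.predict and a metric, never touching model internals.
import numpy as np
from sklearn.metrics import accuracy_score

def permutation_importance_manual(model, X, y, n_repeats=10, seed=0):
    """Mean drop in accuracy when each feature's values are shuffled."""
    rng = np.random.default_rng(seed)
    baseline = accuracy_score(y, model.predict(X))
    drops = np.zeros((X.shape[1], n_repeats))
    for j in range(X.shape[1]):              # one feature at a time
        for r in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])        # break the feature-target link
            drops[j, r] = baseline - accuracy_score(y, model.predict(X_perm))
    return drops.mean(axis=1)                # average importance per feature

Applied to the model from the earlier sketch, for example as permutation_importance_manual(model, X_test.to_numpy(), y_test.to_numpy()), this should broadly agree with scikit-learn's built-in ranking.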
Beginner level
You are expected to be familiar with Python and Jupyter Notebooks, and to have experience using Scikit-Learn for machine learning.
Familiarity with the intuitive theory behind machine learning models such as decision trees, random forests, linear regression and boosting algorithms is encouraged, but not required.