Workshop / Overview

Researchers and policymakers have raised concerns that data-based prediction systems might produce unintended discrimination and social injustice, a phenomenon commonly referred to as “algorithmic bias”; the research field addressing it is known as “algorithmic fairness”. Systems are expected to be fair, unbiased, and non-discriminatory. The topic is set to gain further relevance through legislative acts such as the Regulation on Artificial Intelligence (AI) proposed by the EU.

However, in practice, it is not clear how to create fair algorithms and how to ensure that data-based prediction and decision models fulfill clearly defined fairness requirements. In this hands-on workshop, you will learn how to combine data-based prediction models with fairness requirements. In the context of a practice-oriented InnoSuisse project, we have developed a tool called “Fairness Lab” that helps illustrate the fairness implications of prediction-based models.

We will provide an introduction to algorithmic fairness and discuss how to deal with fairness problems in prediction-based modeling. We will present a recently developed general approach for analyzing fairness issues in data-based decision making and show how to construct fair decision algorithms that take the underlying business goal into account. Using the “Fairness Lab”, participants will have the opportunity to apply and test this approach on concrete use cases and examples.

The workshop consists of three parts: Introduction to algorithmic fairness, assessment procedure for ethics in machine learning (ML), and hands-on activities – with a particular focus on the latter.

Part 1: Introduction to Algorithmic Fairness [45 minutes]

In the first part of the workshop, we give an introduction to the relatively young research field of algorithmic fairness, touching on questions such as:

  • What different types of fairness exist?
  • How do we measure discrimination or fairness in practice? (see the short sketch after this list)
  • How can we ensure that a data-based decision algorithm achieves maximum performance while satisfying a fairness constraint?
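To make this concrete, here is a minimal sketch (in Python, with made-up numbers; not taken from the Fairness Lab) of one widely used group fairness measure, the demographic parity difference, i.e. the gap in positive-decision rates between two groups:

```python
# Minimal sketch with made-up data (not part of the Fairness Lab):
# the demographic parity difference, i.e. the gap in positive-decision
# rates between two groups defined by a protected attribute.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # binary model decisions
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])   # protected attribute (0/1)

def demographic_parity_difference(y_pred, group):
    """Rate of positive decisions in group 0 minus that in group 1."""
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

print(demographic_parity_difference(y_pred, group))  # 0.0 would indicate parity
```

In practice, the difficult question is usually not computing such a number but deciding which of the many candidate measures reflects the fairness requirement of the application at hand, which is the topic of Part 2.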

Part 2: Assessment Procedure for Machine Learning and Ethics [45 minutes]

In the second part, we focus on the ethical assessment of ML algorithms. In particular, we provide the participants with a detailed procedure that can be applied to assess and improve the fairness of algorithms. This procedure draws on interdisciplinary research (from machine learning, ethics, and moral philosophy) and bridges the gap between theoretical concepts and their application in practice. Among other things, we answer the following questions:

  • Do computer scientists and data scientists who build predictive models have ethical responsibilities, and if so, of what sort?
  • How can the fairness requirements of a specific application be analyzed?
  • How can such requirements be translated into a fairness metric? (a brief sketch follows this list)
  • Why are there so many different fairness metrics, what is their moral basis, and is there a best one?
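To illustrate how a requirement maps to a metric, the following sketch (synthetic labels and predictions, illustrative function names, not the Fairness Lab implementation) contrasts two standard group fairness metrics that formalize two different requirements:

```python
# Hedged sketch with synthetic data: two fairness metrics for two requirements.
import numpy as np

y_true = np.array([1, 1, 0, 0, 1, 1, 1, 0, 0, 0])  # actual outcomes
y_pred = np.array([1, 0, 0, 1, 1, 1, 0, 0, 1, 0])  # model decisions
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])   # protected attribute

def positive_rate(mask):
    return y_pred[mask].mean()

def true_positive_rate(mask):
    return y_pred[mask & (y_true == 1)].mean()

# Requirement A: "both groups should receive positive decisions at the same
# rate" -> demographic parity
dp_gap = positive_rate(group == 0) - positive_rate(group == 1)

# Requirement B: "qualified individuals should have the same chance of a
# positive decision in both groups" -> equal opportunity
eo_gap = true_positive_rate(group == 0) - true_positive_rate(group == 1)

print(f"demographic parity gap: {dp_gap:.2f}")
print(f"equal opportunity gap:  {eo_gap:.2f}")
```

The two gaps generally differ, which is one reason why there is no single “best” metric: the appropriate one follows from the ethical requirements of the concrete application.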

Part 3: Hands-on Activities [2 hours]

In the third part of the workshop, participants try out our methodology on practical use cases using our tool “Fairness Lab”. To this end, we provide a tutorial for several use cases, covering the analysis of the underlying dataset and the training of ML models, before assessing and improving the fairness of these models. In this way, participants learn what can be done to ensure that systems are fair in a well-specified way and how to handle the trade-off between prediction accuracy and fairness. We will also focus on questions such as:

  • Which part of the solution has to be provided by the computer scientists?
  • How should ethical considerations be integrated into algorithmic solutions?
  • What kind of trade-offs exist, and how can they be balanced?
  • How are optimal post-processing techniques implemented to ensure the fairness of ML algorithms? (a sketch follows this list)
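To give an impression of what such a post-processing step can look like, here is a simplified sketch (synthetic data and a plain grid search; a stand-in for the optimal post-processing techniques discussed in the workshop, not the Fairness Lab code). It chooses group-specific decision thresholds that maximize accuracy subject to a demographic parity constraint:

```python
# Simplified post-processing sketch on synthetic data (not the Fairness Lab code):
# choose one decision threshold per group so that accuracy is maximized while a
# demographic parity constraint (gap below 2 percentage points) is satisfied.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)                                  # protected attribute
score = np.clip(rng.normal(0.5 + 0.1 * group, 0.2, n), 0, 1)   # scores of a trained model
y_true = (rng.random(n) < score).astype(int)                   # outcomes correlated with scores

def decisions(thr_a, thr_b):
    thresholds = np.where(group == 0, thr_a, thr_b)
    return (score >= thresholds).astype(int)

def accuracy(thr_a, thr_b):
    return (decisions(thr_a, thr_b) == y_true).mean()

def parity_gap(thr_a, thr_b):
    y_pred = decisions(thr_a, thr_b)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

grid = np.linspace(0.05, 0.95, 19)
best = max(
    ((a, b) for a in grid for b in grid if parity_gap(a, b) < 0.02),
    key=lambda ab: accuracy(*ab),
    default=(0.5, 0.5),  # fall back to a single threshold if the grid is too coarse
)

print("single threshold 0.5:", accuracy(0.5, 0.5), parity_gap(0.5, 0.5))
print("fair thresholds", best, ":", accuracy(*best), parity_gap(*best))
```

Comparing the constrained solution with a single threshold of 0.5 makes the trade-off between prediction accuracy and fairness tangible on this toy example.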

Breaks and wrap-up [30 minutes]

Workshop / Outcome

  • A clear and concise overview of the ethics and ML debate.
  • A usable procedure for assessing the fairness of ML systems.
  • Hands-on experience with the “Fairness Lab” tool.

Workshop / Difficulty

Intermediate level

Workshop / Prerequisites

Our workshop is mainly intended for data scientists, but deep technical knowledge is not required. We are therefore happy to welcome a broad audience with different backgrounds from both industry and academia.

Track / Co-organizers

Johan Rochel

Co-director, ethix - Lab for innovation ethics

Joachim Baumann

PhD Student, University of Zurich & Zurich University of Applied Sciences

Corinna Hertweck

PhD Student, University of Zurich & Zurich University of Applied Sciences
