Track / Overview

Digitalization and new technologies play an increasingly important role in today’s humanitarian activities. Conflicts are increasingly fragmented and complex, making it difficult for humanitarian organizations to access conflict areas and the vulnerable people affected. It is against this background that humanitarian organizations look with interest at the possibilities offered by AI and machine learning.

Meanwhile, AI and machine learning are set to change the way in which wars are fought. Parties to armed conflict are looking to these technologies to enable novel weapons and methods of warfare, such as: increasingly autonomous weapons; new forms of cyber and information warfare; and ‘decision-support’ systems for targeting. It is critical to understand the foreseeable consequences of these developments and to tackle the accompanying legal and ethical questions.

To address these distinct dimensions, and given the ICRC’s mandate both to protect and assist victims of armed conflict and to promote and strengthen international humanitarian law (the law of war), this full-day track is divided into two parts:

In the morning session, we will explore a few common challenges through the lens of humanitarian action. The first panel will discuss how privacy challenges differ (or not) in war-torn contexts. We will then address the difficulties a humanitarian organization faces in deciding whether a commercially available solution is suitable for use in such unique settings. Finally, the third panel will explore how fairness can be addressed when AI-generated predictions are made about the lives of the most vulnerable people.

In the afternoon session, we will turn our attention to the use of machine learning in the conduct of warfare itself. The first panel will explore emerging military applications and consider potential implications for civilian protection, compliance with international humanitarian law, and ethical acceptability. Building on these discussions, the second panel will explore what human-centred AI means in practice for armed conflict. How can we preserve meaningful human control and judgement over tasks and decisions that have serious consequences for people’s lives and are governed by specific rules of international humanitarian law?

Track / Schedule

A privacy health check of Machine Learning

With Carmela Troncoso, Alessandro Mantelero, Marc Brockschmidt & Massimo Marelli

Break

How to deploy Machine Learning to support Humanitarian Action in war zones?

With Amina Chebira, Volkan Cevher, Anika Schumann, Francois Fleuret, Michela D'Onofrio & Anja Kaspersen

Weaponised AI: what are the implications?

With Neil Davison, Nadia Marsan, Helen Toner & Dustin Lewis

Break

Human-centred AI in practice

With Max Tegmark, Subhashis Banerjee, Netta Goussac & John C. Havens

Track / Speakers

Neil Davison

Scientific & Policy Adviser, ICRC

Max Tegmark

Professor, MIT

Carmela Troncoso

Professor, EPFL

Alessandro Mantelero

Private Law and Data Ethics & Protection, Polytechnic University of Turin

Michela D'Onofrio

Data Scientist, ICRC

Francois Fleuret

Senior Researcher, Idiap

Amina Chebira

Senior Manager, ELCA Informatique

John C. Havens

Executive Director, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems 

Marc Brockschmidt

Senior Principal Research Manager, Microsoft Research

Helen Toner

Director of Strategy, Center for Security and Emerging Technology, Georgetown University

Dustin Lewis

Research Director, Harvard Law School Program on International Law and Armed Conflict

Subhashis Banerjee

Professor, Indian Institute of Technology Delhi

Netta Goussac

Legal Adviser, International Committee of the Red Cross

Volkan Cevher

Associate Professor, EPFL

Anika Schumann

Research Manager, IBM Research

Nadia Marsan

Senior Assistant Legal Adviser, NATO Office of Legal Affairs

Massimo Marelli

Head of Data Protection Office, ICRC

Anja Kaspersen

Director, United Nations Office for Disarmament Affairs, Geneva

Track / Co-organizers

Vincent Graf Narbel

Strategic Technology Adviser, ICRC

Neil Davison

Scientific & Policy Adviser, ICRC
