Track / Overview

Trust is indispensable to the prosperity and well-being of societies. For millennia, we have developed trust-building mechanisms to facilitate our interactions. But as those interactions become increasingly digital, many traditional mechanisms no longer function well, and trust breaks down. The resulting low levels of trust discourage us from engaging in new forms of interaction and constrain business opportunities.

We must therefore invent trust mechanisms that will contribute to prosperous and peaceful societies in the digital age, and Artificial Intelligence has a clear role to play in building trust in the digital world.

AI & Privacy

AI often relies on Machine Learning, which requires massive training datasets. The status quo in AI is cloud-based services that process the raw data of all users and learn from all or a subset of it. With the rise of IoT and the deployment of sensors (cameras, microphones, etc.) in our homes, this raw data is becoming ever more sensitive. The development of privacy legislation, together with progress in private machine learning and edge computing, opens the way to interesting alternatives. Key academic and industrial efforts are now being made to reconcile AI and privacy.
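As a concrete illustration of one such alternative, below is a minimal sketch of federated averaging, one approach to private machine learning: each client computes a model update on its own data, and only the updates, never the raw data, are sent to a server for aggregation. The function names, toy data, and hyperparameters are hypothetical and purely illustrative.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Run a few epochs of logistic-regression SGD on one client's private data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # gradient of the log loss
        w -= lr * grad
    return w

def federated_average(global_weights, clients):
    """One round of federated averaging: clients train locally, the server averages the results."""
    updates = [local_update(global_weights, X, y) for X, y in clients]
    return np.mean(updates, axis=0)            # only model updates are shared, never raw data

# Toy example: two clients, each holding its own (hypothetical) dataset.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(100, 3)), rng.integers(0, 2, 100).astype(float)) for _ in range(2)]
weights = np.zeros(3)
for _ in range(10):
    weights = federated_average(weights, clients)
print(weights)
```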

AI for Good

In the midst of increasing concern over the widespread misuse of data and the implications of AI, it is important to highlight those applying AI for social good. Whether it's helping African farmers detect diseased crops, advancing sustainability, or assisting the visually impaired, there are great stories out there that should be shared. These stories include impressive technical achievements and outcomes that are positive for humanity, and a few of them will be highlighted during the "AI for Good" sub-track.

AI & Security

More and more AI and Machine Learning solutions are deployed across industries, affecting society at scale. The well-defined models of attack and defence in classical computer security do not transfer perfectly to ML systems and do not cope well with the variety of possible ML attacks. The attack surfaces are not yet clearly delineated: minor alterations to the input data can be enough to manipulate or poison a system. Moreover, as the AI industry is still emerging, there are no established standards or formal definitions for the testing and security of ML systems. This session brings together experts to highlight advances in the field of secure ML.
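To make the "minor alterations to the input data" concrete, below is a minimal sketch of the fast gradient sign method (FGSM) applied to a simple linear classifier: the input is nudged slightly in the direction that increases the loss, which can be enough to flip the prediction. The weights, input, and epsilon are hypothetical and chosen only for illustration.

```python
import numpy as np

def fgsm_perturb(x, w, b, y_true, eps=0.1):
    """Fast gradient sign method for a linear (logistic) classifier.

    Moves the input by eps in the direction that increases the loss,
    which can be enough to flip the predicted label.
    """
    logit = x @ w + b
    p = 1.0 / (1.0 + np.exp(-logit))     # predicted probability of class 1
    grad_x = (p - y_true) * w            # gradient of the log loss w.r.t. the input
    return x + eps * np.sign(grad_x)     # small, bounded perturbation

# Hypothetical trained weights and a correctly classified input.
w, b = np.array([3.0, -4.0, 1.4]), 0.1
x, y_true = np.array([0.1, -0.05, 0.1]), 1.0

x_adv = fgsm_perturb(x, w, b, y_true)
print("clean prediction:       class", int(x @ w + b > 0))
print("adversarial prediction: class", int(x_adv @ w + b > 0))
```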

Trusting AI

Trust is the social glue that enables humankind to progress through interaction with each other, the environment, and technology. In the AI context, there are various reasons why trust has become such a popular topic in research and practice. There is a lack of clear definition of the processes, performance, and especially the purpose of an AI system with respect to the intentions of its provider. Furthermore, open questions regarding ethical standards, dual-use research, the lack of regulation, questionable data privacy, and the uncontested supremacy of the tech giants leave a feeling of uncertainty behind. The sub-track "Trusting AI" encompasses questions and issues from research and practice that look at the heterogeneous concept and associations of trust from an individual, relationship, situation, and/or process perspective: from initial trust formation and its antecedents to repairing a trust relationship once it is broken. Trust is one of the most critical influencing factors in the AI context.

Track / Schedule

Introduction

With David Leroy

Technologies for Privacy-Preserving Machine Learning

With Morten Dahl

Federated Learning in Practice at Google

With Peter Kairouz

When foes are friends: a privacy perspective on adversarial examples

With Carmela Troncoso

Spoken Language Understanding on the Edge

With Alice Coucke

Coffee Break

Introduction

AI for Earth: Using machine learning to monitor, model, and manage natural resources

With Jennifer Marsman

Solve for H: Leveraging AI to solve problems for humankind

With Anna Bethke

Artificial intelligence for making earth accessible, searchable and insightful

With Frank de Morsier

AI for good in Industry

With Chris Benson

Introduction

With Tereza Iofciu

A Marauder's Map of Security and Privacy in Machine Learning

With Nicolas Papernot

Byzantine Machine Learning: Safeguarding AI from Data Poisoning and Hacked Machines

With El Mahdi El Mhamdi

The past, present and future of generative models

With Mihaela Rosca

Building a security ML-based startup from scratch

With Raul Popa

How do attacks look in a world dominated by AI?

With Sharada Mohanty

Coffee Break

Introduction

With Marisa Tschopp

Trusting AI and the Future of War

With Anja Kaspersen

Trusting real-world AI applications

With Marc Schöni

How people perceive AI – Trust & Explanation

With Pearl Pu

Trusting AI – IBM's strategy, research, and practical approaches

With Anika Schumann

Track / Speakers

Alice Coucke

Head of Machine Learning Research, Sonos

Marisa Tschopp

Researcher, scip

Tereza Iofciu

Lead Data Scientist, Free Now

Sharada Mohanty

CEO & Founder, AIcrowd

Carmela Troncoso

Professor, EPFL

Anika Schumann

Research Manager, IBM Research

Frank de Morsier

Chief Technology Officer, Picterra

Anja Kaspersen

Director, UN Disarmament

David Leroy

Senior Machine Learning Engineer

Morten Dahl

Research Scientist, OpenMined & Dropout Labs

Peter Kairouz

Researcher, Google

Jennifer Marsman

Principal Software Development Engineer, Microsoft

Chris Benson

Chief Strategist, Artificial Intelligence Programs, Lockheed Martin

Nicolas Papernot

Research Scientist, Google Brain

El Mahdi El Mhamdi

PhD Student, EPFL

Mihaela Rosca

Research Engineer, DeepMind

Raul Popa

CEO & Data Scientist, TypingDNA

Marc Schöni

Advanced Analytics & AI, Microsoft

Pearl Pu

Professor, EPFL

Anna Bethke

Head of AI for Social Good, Intel

Track / Co-organizers

Daniel Whitenack

Data Scientist, SIL International

Olivier Crochat

Executive Director, C4DT / EPFL

David Leroy

Senior Machine Learning Engineer

Tereza Iofciu

Lead Data Scientist, Free Now

Marisa Tschopp

Researcher, scip

Center for Digital Trust
