Track / Overview

By 2025, as many as 75 billion devices may be connected to the Internet globally. In this new era of hyper-connectivity, these devices will not only collect data but also produce and process information directly on the products closest to their users: on the edge. The increased functionality and computing power available on the edge are already changing the way products are designed and built. Edge computing refers to applications, services, and processing performed outside of a central data center and closer to end users. What counts as “closer” falls along a spectrum and depends heavily on the networking technologies used, the application characteristics, and the desired end-user experience.

Edge applications do not need to communicate with the cloud, but they may still interact with servers and internet-based applications. Many of the most common edge devices feature physical sensors and actuators (such as temperature sensors, lights, and speakers), and moving computing power closer to these devices in the physical world makes sense. Do you really need to rely on an off-site cloud server when asking your lamp to dim the lights? With collection and processing power now available on the edge, the volume of data that must be moved to and stored in the cloud can be significantly reduced. On the other hand, the new hybrid infrastructures of edge computing yield interaction networks and distributed architectures that are more complex than their cloud counterparts. New software for resource allocation, application deployment, and scheduling needs to be developed to manage this unprecedented execution flow.

Edge computing directly impacts three distinct dimensions: reliability, privacy, and latency, each with profound implications. A primary motivator driving edge computing’s adoption is the need for robust and reliable technology in “hard to reach” environments. Edge computing also helps alleviate some privacy concerns by keeping collection and processing within the environments where the data is produced. Finally, when the computing is on the edge, latency is much less of an issue: since no data roundtrip to a remote machine is required, no communication overhead is introduced, and users won’t have to wait while data is sent to and from a cloud server. In some applications latency is key, and the system simply cannot afford the delay of sending information to off-site cloud servers.

In this track, we’ll explore three aspects of edge computing with a focus on applied machine learning. First, we’ll discuss neuromorphic architectures and hardware-accelerated deep learning: recent progress in this field is starting to enable larger and more efficient neural networks to run on ever smaller processing units. Then, we’ll turn to machine learning on the edge and explore several aspects of this field, from ML on smartphones to embedded speech recognition for IoT. Finally, we’ll focus on edge intelligence and tackle questions like resource optimization, federated learning, and ambient processing.
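Among the topics above, federated learning lends itself to a compact illustration. The sketch below is a hypothetical, minimal version of federated averaging on synthetic linear-regression data: each simulated device takes a gradient step on its own private data, and only model updates, never the raw data, are averaged by the server. All names and parameters are illustrative, not any speaker’s actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # ground-truth model the devices jointly learn

def local_step(w, n=100):
    """One gradient step on a device's private linear-regression data."""
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    grad = 2 * X.T @ (X @ w - y) / n  # mean-squared-error gradient
    return w - 0.1 * grad

w_global = np.zeros(2)
for _ in range(50):  # communication rounds
    # each of 3 devices starts from the global model and trains locally
    updates = [local_step(w_global) for _ in range(3)]
    # the server averages the updates; raw data never leaves the devices
    w_global = np.mean(updates, axis=0)
# w_global now approximates true_w
```

The privacy benefit is structural: the server only ever sees averaged model parameters, which is exactly the kind of trade-off several talks in this track examine in depth.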

Track / Schedule

Welcome & introduction

With Alice Coucke

State of the art in hardware-accelerated neural networks

With Frédéric Pétrot

Model Quantization and Hardware Acceleration: How Fast Can We Get?

With Gaurav Arora & Abdel Younes

Deep Learning on Smartphones: A Detailed Overview

With Andrey Ignatov

Spoken Language Understanding on the Edge

With Francesco Caltagirone

Machine Learning at Facebook: Understanding Inference at the Edge

With Brandon Reagen

Choco-SGD: Communication Efficient Decentralized Learning

With Anastasia Koloskova

Fully Decentralized Joint Learning of Personalized Models and Collaboration Graphs

With Aurélien Bellet

Distributed learning on sensitive health data

With Camille Marini
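Model quantization, the subject of several talks above, can be illustrated with a toy sketch: mapping float32 weights to int8 with a single symmetric scale, so that inference can use cheap integer arithmetic. This is a minimal, hypothetical example of the general idea, not the specific schemes the speakers present.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric post-training quantization of a float array to int8.

    Assumes w is not all zeros. The largest magnitude maps to 127.
    """
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.01, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# rounding error per weight is bounded by scale / 2
```

Storing `q` instead of `w` cuts memory by 4x, and the "how fast can we get?" question comes down to how well hardware exploits such low-precision representations without losing accuracy.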

Track / Speakers

Alice Coucke

Head of Machine Learning Research, Sonos

Anastasia Koloskova

PhD Student, EPFL

Aurélien Bellet

Research Scientist, INRIA

Gaurav Arora

Vice President - System Architecture and AI/ML Technologies, IoT Division, Synaptics

Abdel Younes

Technical Director – Software Architecture, AI/ML Technologies, IoT Division, Synaptics

Francesco Caltagirone

Senior Manager, Sonos

Frédéric Pétrot

Professor, Grenoble INP/Ensimag

Camille Marini

VP of Engineering, Owkin

Andrey Ignatov

PhD Student, Computer Vision Lab, ETH Zurich

Brandon Reagen

Research Scientist, Facebook

Track / Co-organizers

Alice Coucke

Head of Machine Learning Research, Sonos
