Track / Overview

By 2025, there may be as many as 75 billion devices connected to the Internet worldwide. In this new era of hyper-connectivity, these devices will not only collect data, but also produce and process information directly on the products closest to their users: on the edge. The increased functionality and computing power available on the edge is already changing the way products are designed and built. Edge computing refers to applications, services, and processing performed outside of a central data center and closer to end users. The definition of “closer” falls along a spectrum and depends heavily on the networking technologies used, the characteristics of the application, and the desired end-user experience.

Edge applications do not need to communicate with the cloud, but they may still interact with servers and internet-based applications. Many of the most common edge devices feature physical sensors and actuators (such as temperature sensors, lights, and speakers), and moving computing power closer to these devices in the physical world makes sense. Do you really need to rely on an off-site cloud server when asking your lamp to dim the lights? With collection and processing power now available on the edge, the volume of data that must be moved to and stored in the cloud can be significantly reduced. On the other hand, the new hybrid infrastructures of edge computing yield interaction networks and distributed architectures that are more complex than their cloud counterparts. New software for resource allocation, application deployment, and scheduling needs to be developed to manage this unprecedented execution flow.

Edge computing impacts three distinct dimensions, each with profound implications: reliability, privacy, and latency. A primary motivator driving edge computing’s adoption is the need for robust and reliable technology in “hard to reach” environments. Edge computing also helps to alleviate some privacy concerns by keeping processing and collection within the environment(s) where the data is produced. Finally, when computing happens on the edge, latency is much less of an issue: users won’t have to wait while data is sent to and from a cloud server, since no roundtrip to a remote machine is required and no communication overhead is introduced. This matters because some systems simply cannot afford the delay of sending information to off-site cloud servers.

In this track, we’ll explore three aspects of edge computing with a focus on applied machine learning. First, we’ll discuss neuromorphic architectures and hardware-accelerated deep learning. Recent progress in this field is starting to enable running larger and more efficient neural networks on ever-smaller processing units. Then, we’ll turn to machine learning on the edge and explore several aspects of this field, from ML on smartphones to embedded speech recognition for IoT. Finally, we’ll focus on edge intelligence and tackle questions like resource optimization, federated learning, and ambient processing.
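To make the federated learning idea concrete: instead of shipping raw data to a central server, each device trains locally and only model updates are aggregated. Below is a minimal sketch of federated averaging in pure Python; the one-parameter linear model, the helper names, and the learning rate are illustrative assumptions, not drawn from any of the talks in this track.

```python
# Minimal federated averaging sketch: each client takes a local
# gradient step on its own private data, then the server averages
# the resulting weights instead of collecting the raw samples.

def local_step(w, data, lr=0.1):
    """One gradient step minimizing mean squared error of y ~ w * x."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(w, client_datasets, lr=0.1):
    """Broadcast w, let each client update locally, average the results."""
    local_weights = [local_step(w, d, lr) for d in client_datasets]
    return sum(local_weights) / len(local_weights)

# Two clients, each holding private samples of the relation y = 2 * x.
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0), (4.0, 8.0)],
]

w = 0.0
for _ in range(200):
    w = federated_average(w, clients)
# w converges toward 2.0 without any client ever sharing its data
```

Real systems (e.g. the decentralized and communication-efficient variants discussed later in the schedule) replace the central averaging step with gossip between neighbors and compress the exchanged updates, but the privacy intuition is the same: data stays on the device.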

Track / Schedule

Welcome & introduction

With Alice Coucke

State of the art in hardware-accelerated neural networks

With Frédéric Pétrot

Model quantization and hardware acceleration: how fast can we get?

With Abdel Younes & Gaurav Arora

Deep Learning on Smartphones: A Detailed Overview

With Andrey Ignatov

Spoken Language Understanding on the Edge

With Francesco Caltagirone

Break

Machine Learning at Facebook: Understanding Inference at the Edge

With Brandon Reagen

Choco-SGD: Communication Efficient Decentralized Learning

With Anastasia Koloskova

Fully Decentralized Joint Learning of Personalized Models and Collaboration Graphs

With Aurélien Bellet

Distributed learning on sensitive health data

With Camille Marini

Track / Speakers

Alice Coucke

Head of Machine Learning Research, Sonos

Anastasia Koloskova

PhD Student, EPFL

Aurélien Bellet

Research Scientist, INRIA

Gaurav Arora

Vice President - System Architecture and AI/ML Technologies, IoT Division, Synaptics

Abdel Younes

Technical Director – Software Architecture, AI/ML Technologies, IoT Division, Synaptics

Francesco Caltagirone

Senior Manager, Sonos

Frédéric Pétrot

Professor, Grenoble INP/Ensimag

Camille Marini

Chief Technology Officer, Owkin

Andrey Ignatov

PhD Student, Computer Vision Lab, ETH Zurich

Brandon Reagen

Research Scientist, Facebook

Track / Co-organizers

Alice Coucke

Head of Machine Learning Research, Sonos

AMLD EPFL 2020 / Tracks & talks

AI & Nutrition

Marinka Zitnik, Marcel Salathé, Fabio Mainardi, Tome Eftimov, Barbara Koroušić Seljak, Nives Ogrinc, Aleksandra Kovachev

13:30-17:00, January 28, 2A

AI & Policy

Joanna Bryson, Sofia Olhede, Emanuele Baldacci, Sabrina Kirrane, Bruno Lepri, Dennis Diefenbach, Ioannis Kaloskampis, Benoît Otjacques, Steve MacFeely, Christina Corbane

13:30-17:30, January 27, 2A

Challenge Track

Danny Lange, Sunil Mallya, Marcel Salathé, Florian Laurent, Erik Nygren, Sharada Mohanty, Parth Kothari, Navid Rekabsaz, Wilhelmina Welsch, Ewan Oglethorpe, Nicholas Jones, Gokula Krishnan, Jeremy Watson, Andrew Melnik

13:30-17:00, January 28, 4A
